PMXBOT Log file Viewer

#mongodb logs for Monday the 28th of January, 2013

[00:08:00] <SQLDarkly> Regarding this paste (http://pastebin.com/CrrDsxNv): I'm getting the following error: >ArgumentError: wrong number of arguments (0 for 2)< when attempting to >Node.create<. Any help or advice would be appreciated. This is using MongoMapper.
[00:34:33] <Cygnus_X> can anyone help me with a query problem?
[01:19:12] <progolferyo> has anyone here had experience with aggregation and $group in a sharded cluster? i'm running a fairly simple $group command and the results never find any total value greater than 1. here is my code: https://gist.github.com/69de51047a7307b0efad
[01:20:10] <progolferyo> the only thing i can think of is that the group is only grouping per shard and not across the whole cluster
[01:20:17] <progolferyo> any ideas?
[02:11:57] <progolferyo> does anyone know if aggregation supports running things on secondaries? i'm having trouble getting anything to run on a secondary, even when i do setSlaveOk
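For reference, a minimal shell sketch of the kind of pipeline being discussed; the collection and field names are hypothetical. Run it through mongos so the $group is merged across shards, and mark the connection if you want reads from a secondary:
    // count documents per userId across the whole sharded collection
    db.events.aggregate([
        { $group: { _id: "$userId", total: { $sum: 1 } } }
    ])
    // allow this shell connection to read from secondaries
    rs.slaveOk()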
[03:12:56] <timah> in what ways am i able to interact with multi-dimensional arrays when using the aggregation framework?
[03:16:48] <timah> i have a field with a multi-dimensional array as its value, similar to this: [ [ 0, 0, 0 ], [ 0, 0, 0 ] ].
[03:19:56] <timah> i'm looking to aggregate these values as such… $sum [0][0] across all documents… $sum [0][1] across all documents… etc...
[03:20:14] <timah> but also $sum [0] across all documents.
[03:24:25] <IAD> timah: try to $unwind : http://www.mongodb.org/display/DOCS/Aggregation+Framework+-+$unwind
[03:25:15] <timah> iad: thank you… i've looked at $unwind as well… but i am unable to determine which element i'm dealing with, except for the first or last.
[03:32:32] <timah> is there a way to either a) somehow determine the iteration/element of the $unwind operation, or b) access array/sub-array elements using dot notation?
[03:32:44] <timah> i know i can do this very easily with map-reduce.
[03:34:53] <timah> it just feels like i'm missing something in direct regards to the aggregation framework.
[03:48:42] <IAD> timah: can you add an index into nested arrays like { 1 : { 1:0, 2:0, 3:0 }, 2 : {1:0, 2:0, 3:0 } }?
[03:49:52] <timah> well i could at the expense of storing unnecessary keys.
[03:49:59] <timah> you know?
[03:55:08] <timah> i mean, as of this very moment the fastest, most efficient method for aggregating this data is to retrieve the date specific range of data and roll it up in the application context.
[04:00:39] <timah> could i do something like combine $addToSet with $each and $inc to +1 for each $unwind and provide that positional info i'm looking for?
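For reference, a hedged sketch of the $unwind approach against a hypothetical field named values holding [ [ 0, 0, 0 ], [ 0, 0, 0 ] ]; note that unwinding flattens the nesting and, on the server versions discussed here, drops the positional information timah is after, which is the crux of the problem:
    // one $unwind per level of nesting, then sum every innermost element
    db.samples.aggregate([
        { $unwind: "$values" },
        { $unwind: "$values" },
        { $group: { _id: null, total: { $sum: "$values" } } }
    ])
    // much later servers (3.2+) can keep the position via
    // { $unwind: { path: "$values", includeArrayIndex: "idx" } }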
[05:13:43] <alex__> hello, how do I disable the journal under ubuntu (mongodb installed from the repository)?
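This didn't get an answer in the channel. For reference, a sketch of the usual approach on the 2.x Ubuntu packages, assuming the config file is /etc/mongodb.conf:
    # /etc/mongodb.conf, then restart the mongod service
    nojournal = true
    # equivalently, start mongod with the --nojournal flag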
[06:08:15] <Vishu> hi
[06:08:33] <Vishu> How good is mongodb for video hosting sites?
[06:08:43] <Vishu> can anybody answer please?
[06:09:19] <algernon> depends on what you want to store in it, and how.
[06:09:53] <Vishu> Now i am using mysql.... planning to move to mongodb...
[06:10:05] <Vishu> can you suggest... is it the right approach?
[06:10:05] <penzur> why?
[06:10:28] <penzur> you're having problems with mysql?
[06:11:04] <Vishu> only performance issue
[06:11:06] <Vishu> with mysql
[06:12:43] <penzur> true that
[06:13:12] <Vishu> we are serving ... thousands of videos ... it's growing day by day... so planning to move to the best arch...
[07:49:06] <vr__> Is it recommended to run multiple mongo instances on one machine if I am planning on putting them into a sharded cluster?
[07:49:19] <vr__> to get around the db locking issue
[07:50:22] <oskie> db locking issue?
[07:51:10] <oskie> it is more or less recommended to run mongod (data), mongod (cfg) and mongos, and perhaps an arbiter for a different replica set on the same machine in sharded configs,
[07:51:31] <oskie> but not two database mongods on the same server in production environments
[07:54:45] <vr__> I'm currently seeing that my locked db is often reaching 50-60%
[07:54:52] <vr__> while iostat doesn't indicate there is a big problem w/ the disk
[07:55:11] <vr__> that means to me that the database level lock is potentially the problem
[07:55:18] <vr__> (my insert rate is also <10K)
[08:35:31] <vr__> any idea what's the minimum chunk size that mongos will automatically scale to?
[08:42:58] <[AD]Turbo> hola
[08:56:07] <oskie> vr__: I don't know about the locking issue... 50-60% is a lot. maybe it would be worthwhile to look into what is locking it
[09:00:40] <TeTeT> hi, I'm still struggling with deleting a dbref from an array - http://pastebin.ubuntu.com/1572169/
[09:01:24] <TeTeT> meantime I tried to just remove an objectid and that worked with: db.arr.update( {'_id': 4}, { $pull: { users: new ObjectId("510634355fc223c57bd919b3") } } )
[09:02:28] <TeTeT> it seems I cannot get the syntax for the dbref right - I tried, among others, $pull: { "users.$id": new ObjectId("510299093004e51abdd65c9d") } OR $pull: { users: DBRef("User", new ObjectId("510299093004e51abdd65c9d")) }
[09:02:44] <TeTeT> any advice?
[09:03:20] <vr__> vr_: I have a feeling it's actually an incorrectly set chunkSize
[09:53:26] <TeTeT> finally got it: db.Org.update( {'users.$id' : ObjectId("510299093004e51abdd65c9d") }, { $pull: { "users": DBRef("User", ObjectId("510299093004e51abdd65c9d")) } } )
[10:13:37] <aroj> Hi, a question on arbiter in replica set
[10:15:01] <aroj> what's the trade-off between the following configurations: (1 primary + 4 secondaries) vs (1 primary + 3 secondaries + 1 arbiter)?
[10:15:05] <aroj> given that in both cases
[10:15:11] <aroj> we have an odd number of servers
[10:15:17] <aroj> for the election to be a majority
[10:19:21] <aroj> DC1 -> 1P + 1S, DC2-> 2S, DC-3 -> 1 arbiter
[12:03:07] <NodeX> \\0_0//
[12:05:33] <Killerguy> hi
[12:06:00] <Killerguy> I did a wrong command on my mongos shell
[12:06:13] <Killerguy> I did ctrl-C but it's still running on shard
[12:06:18] <Killerguy> how can I cancel a command?
[12:07:57] <kali> db.currentOp() to see the various ops
[12:08:06] <kali> and db.killOp(id) to kill it
[12:11:20] <Killerguy> ok thx :)
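A small sketch of that workflow in the shell (the opid shown is made up):
    // list running operations with the fields needed to pick the right one
    db.currentOp().inprog.forEach(function (op) {
        printjson({ opid: op.opid, op: op.op, ns: op.ns, secs: op.secs_running })
    })
    // then kill it by opid
    db.killOp(12345)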
[12:24:45] <NodeX> I don't suppose there is any way to split mongo data dirs up on a system, is there ... for example I want my most frequently accessed DBs on SSD and some less important ones on another drive/partition
[12:27:52] <kali> NodeX: http://docs.mongodb.org/manual/reference/mongod/#cmdoption-mongod--directoryperdb
[12:27:55] <kali> + symlinks ?
[12:28:54] <NodeX> ah, good idea
[12:29:52] <NodeX> symlinks is an awesome idea
[12:30:29] <NodeX> better to symlink to the SSD or make the SSD default and symlink to the SATA?
[12:31:09] <kali> i don't think that will make any measurable difference :)
[12:32:56] <NodeX> in db.stats() is there a line that tells me total usage (disk) for everything ?
[12:33:03] <NodeX> I normally add them up myself
[12:42:41] <Derick> NodeX: depends what you want the default to be
[12:43:02] <Derick> (for new dbs)
[12:46:49] <NodeX> is that regarding my total disk usage?
[12:47:03] <NodeX> I wanna find out what my total usage is on disk
[12:47:10] <Derick> well, it's about you picking SSD over SATA
[12:47:12] <NodeX> du -h would probably do it
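For the record, the shell can report this without adding things up by hand; a quick sketch:
    db.stats()                              // per-database; fileSize is the on-disk total for that db
    db.adminCommand({ listDatabases: 1 })   // every database, plus a server-wide totalSize in bytes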
[12:47:34] <NodeX> ok, I have 3 DBs that are more important than the others, so I would like them on SSD
[12:47:45] <Derick> right
[12:47:47] <NodeX> or rather the MMAP files on SSD
[12:47:53] <Derick> right
[12:48:19] <Derick> so put those on the SSD through a symlink - and set the default data dir to the SATA. If you then create a new DB, it shows up on the SATA automatically.
[12:48:43] <NodeX> that's what I thought, else it will appear on the SSD first
[12:49:06] <Derick> yup
[12:49:13] <NodeX> I'm condensing 2 servers into 1
[12:50:30] <NodeX> thanks Derick, Kali ;)
[13:09:38] <jtomasrl> is it possible to use aggregate $match for nested keys?
[13:15:32] <skot> sure, same with a query.
[13:15:48] <kali> jtomasrl: depends what you mean... it will behave as in find() and match full documents. if you need to extract sub docs, you need to $unwind them first
[13:16:18] <jtomasrl> i just did it :) thanks
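A tiny sketch of what kali describes, with hypothetical collection and field names:
    // dot notation matches nested keys in $match just as it does in find()
    db.orders.aggregate([
        { $match: { "customer.city": "Berlin" } },
        { $group: { _id: "$customer.city", count: { $sum: 1 } } }
    ])
    // to match individual elements of an embedded array, $unwind the array first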
[13:30:38] <jtomasrl> does update $pushAll work with { upsert: true } ?
[13:32:11] <NodeX> yup
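A minimal sketch with hypothetical names; if no document matches, the upsert creates one containing the pushed values ($pushAll was later deprecated in favour of $push with $each):
    db.lists.update(
        { _id: "mylist" },
        { $pushAll: { items: [1, 2, 3] } },
        { upsert: true }
    )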
[13:58:39] <listerine> Can someone help me with this issue? https://jira.mongodb.org/browse/DOCS-577?focusedCommentId=251700#comment-251700
[13:59:38] <listerine> I'm trying to update my mongo through brew since I use OS X. Running mongo --version returns the correct version, but "db.runCommand( {buildInfo: 1} )" returns the old version.
[14:00:10] <listerine> I'm guiding myself on these two questions: http://stackoverflow.com/questions/13695851/mongodb-java-driver-no-such-cmd-aggregate and http://stackoverflow.com/questions/8495293/whats-a-clean-way-to-stop-mongod-on-mac-os-x
[14:04:46] <vr__> can we use mongos (mongo sharded) in a replica set?
[14:06:19] <jtomasrl> i'm not sure how to perform an upsert for a nested object. I have a checkins document and i want to update orders of different items: if the item exists, just add an order; if it doesn't, add the item and nest the order. this is what i've got so far, but i don't see it working. https://gist.github.com/4655708
[14:28:42] <JoeyJoeJo> How can I see what a collection's shard key is?
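This one went unanswered; for reference, two ways to look it up from a mongos (the namespace is hypothetical):
    sh.status()   // prints every sharded collection together with its shard key
    db.getSiblingDB("config").collections.find({ _id: "mydb.mycoll" })   // the "key" field is the shard key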
[14:31:52] <joe_p> vr__: if you're running a simple replica set then you do not use mongos. mongos is used only if you have a sharded mongo environment
[14:32:48] <vr__> joe_p: yes. but can I use mongos in a highly-available fashion?
[14:32:56] <bean> I'd say yes
[14:33:18] <bean> RS w/ 3 nodes, mongos in front of it. Main mongod goes down, new master is elected, right?
[14:33:37] <joe_p> you should have many mongos for HA
[14:33:44] <vr__> I mean in case mongos goes down
[14:33:52] <vr__> Yes, but does my app know which one to connect to?
[14:34:15] <joe_p> vr__: if you tell it what mongos are in the environment
[14:34:40] <vr__> I see … so nothing as nice as a replica set which automatically does that.
[14:34:45] <vr__> I have to give it a set of machines manually.
[14:34:55] <joe_p> vr__: correct
[14:36:03] <vr__> i see. That seems ghetto given how nice mongo is in general about failover
[14:37:22] <joe_p> if you only list a single host to connect to and the host is down, is it supposed to magically figure out where to connect to?
[14:38:04] <vr__> yes. In the same way that if I am using a replicaSet it 'magically' figures out where to connect to.
[14:40:33] <joe_p> vr__: explain how that works? if you connect directly to a single member of a replica set and that member is down, you're no longer connected to mongodb. you have to do the same thing there and have multiple hosts listed in your connection string. I fail to see how mongos and replica set connections are different in that regard
[14:41:08] <vr__> if I connect to one member, it discovers the other members
[14:41:32] <vr__> and connects to those as well. if that mongo dies it will still connect to the other ones
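For completeness, the usual pattern is to run a mongos per app server and list several of them in the driver's seed list so it can fail over between them; a hypothetical connection string:
    mongodb://mongos1.example.com:27017,mongos2.example.com:27017/mydb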
[14:43:44] <Killerguy> hi
[14:44:05] <Killerguy> I'm trying to do an aggregation to sum values of a field
[14:44:15] <Killerguy> but on some documents
[14:44:51] <Killerguy> like db.foo.find({test:"bar"}), and sum results on a special field
[14:45:20] <Killerguy> I need to first do $match then $group and $sum?
[14:50:34] <NodeX> match to get the smallest amount of docs, then group and sum
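i.e. something along these lines, assuming the field being summed is called amount:
    db.foo.aggregate([
        { $match: { test: "bar" } },                            // narrow the docs first
        { $group: { _id: null, total: { $sum: "$amount" } } }   // _id: null sums across everything that matched
    ])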
[14:53:19] <DinMamma> Hiya
[14:54:36] <DinMamma> I understand it might be difficult to say, but would a compact of a collection that is 490Gb be a bad idea when I have 28G free disk space?
[14:57:17] <bean> DinMamma: It won't work
[14:57:28] <bean> DinMamma: you need to have, for some reason, double the space of the DB
[14:57:46] <DinMamma> Are you sure you're not thinking about repair?
[14:58:39] <DinMamma> I've seen that double space mentioned in relation to repair but not compact..
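For reference (hedged, from the 2.x-era docs): compact is run per collection, needs only around 2GB of working headroom, but blocks the database while it runs; repairDatabase is the one that needs free space roughly equal to the data set plus 2GB:
    db.runCommand({ compact: "mycollection" })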
[15:03:12] <rquilliet> hi all
[15:03:17] <DinMamma> Howdie
[15:03:41] <rquilliet> i am using mongo + php and am wondering if I can use mongoimport in my php script ?
[15:03:48] <rquilliet> i can't find it in any documentation
[15:05:01] <rquilliet> the real issue being: i have a 10M-line file (very small lines) that i'd like to load into a MongoDB collection
[15:05:28] <rquilliet> i'm wondering which way is the most efficient in PHP (for now i'm using batchInsert, and it's pretty slow)
[15:06:10] <DinMamma> Get some SSDs and smash those in the server :)
[15:06:37] <rquilliet> hehe
[15:06:52] <DinMamma> Or more RAM, I'm being serious I'm afraid.
[15:07:17] <rquilliet> what about that mongoimport thing
[15:07:21] <rquilliet> it seems pretty efficient
[15:07:37] <rquilliet> but i can't find anything about it in php
[15:08:07] <DinMamma> Mongoimport is for when you have created a dump with mongoexport.
[15:08:22] <rquilliet> is it not working with a csv file as an input ?
[15:08:34] <NodeX> what are you trying to achieve?
[15:09:02] <rquilliet> i have a file with 10M lines (not heavy one, 2 int, 1 string and 2 pipes)
[15:09:15] <rquilliet> i'd like to import it in a mongodb collection
[15:09:22] <rquilliet> using php
[15:09:25] <NodeX> the whole file?
[15:09:31] <rquilliet> yep
[15:09:46] <NodeX> as one file?
[15:09:58] <DinMamma> Pardon me, it seems like mongoimport actually deals with CSV.
[15:10:14] <DinMamma> You probably want to run it from the command line rather than programmatically.
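A hypothetical invocation for a comma-separated file; note mongoimport only understands csv/tsv/json, so a pipe-delimited file like the one described here would need converting first:
    mongoimport -d mydb -c mycoll --type csv --fields int1,int2,str --file data.csv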
[15:10:25] <NodeX> you want to avoid that
[15:10:35] <NodeX> do you want to store the file or its contents?
[15:10:40] <rquilliet> its content
[15:10:48] <DinMamma> Also, if at all possible delete indexes before importing, that should make the import quicker.
[15:10:49] <rquilliet> my file is a txt one
[15:11:14] <rquilliet> each line is "int1|int2|string(2 characters)"
[15:11:18] <DinMamma> rquilliet: If I understand you correctly you want to store each line as its own document?
[15:11:23] <rquilliet> yep
[15:11:24] <NodeX> well you can do it one of a few ways... 1. split the file into smaller chunks
[15:11:27] <rquilliet> 1 line = 1 document
[15:11:39] <NodeX> parse the chunks - this will unload memory after each insert
[15:11:52] <NodeX> 2. do the same but fork PHP to do it in X processes.
[15:11:59] <NodeX> 3. Do the whole thing in one large loop
[15:12:26] <rquilliet> about the indexes, if i create them after importing, won't it take hours if the collection contains 10M docs?
[15:12:37] <NodeX> background them
[15:12:53] <NodeX> and depending on the index you don't need to worry too much about that if you use unsafe writes
[15:12:53] <rquilliet> what does that mean?
[15:13:03] <NodeX> it means run them in the background
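i.e. something like this (ensureIndex is the 2.x shell helper; newer shells call it createIndex):
    db.mycoll.ensureIndex({ myfield: 1 }, { background: true })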
[15:13:15] <DinMamma> Unsafe is unsafe..
[15:13:37] <DinMamma> It's like sprinkling data over the database and hoping that it sticks, I wouldn't recommend it.
[15:14:03] <NodeX> err the only reason unsafe writes would not write would be due to a reboot mid operation
[15:14:31] <Zelest> what am I missing? what is the problem by splitting on \n and inserting it?
[15:14:49] <rquilliet> i can't do it line by line, it's too long
[15:14:51] <NodeX> Zelest : sort of, the quickest way
[15:14:59] <NodeX> split the file into smaller files
[15:15:07] <Zelest> why split at all?
[15:15:08] <NodeX> end of story, you won't get a faster import with php than that
[15:15:33] <rquilliet> the use of batchInsert or mongoimport won't help me, you think?
[15:15:44] <NodeX> you said "using php"
[15:16:14] <rquilliet> yep, so you confirm mongoimport can't be used from php?
[15:16:15] <Zelest> what do you mean by too long?
[15:16:58] <rquilliet> i mean 15 min for 10 M lines
[15:16:59] <DinMamma> rquilliet: A file needs to be seriously long to not be iterable, at least on my machine.
[15:17:14] <DinMamma> Which is not the beefiest of machines.
[15:19:03] <NodeX> 15 mins, wth are you doing in the loop lol?
[15:21:21] <JoeyJoeJo> I have a few databases on 4 shards and I want to wipe one completely out. Is it safe to go to my dbdir and just delete the files (MyDB.1, MyDB.2, etc) manually?
[15:25:12] <NodeX> yeh
[15:25:21] <NodeX> but make sure you remove the shard from the config
[15:25:39] <kali> what's wrong with drop() ?
[15:26:08] <NodeX> not as clean
[15:26:23] <jtomasrl> i'm not sure how to perform an upsert for a nested object. I have a checkins document and i want to update orders of different items: if the item exists, just add an order; if it doesn't, add the item and nest the order. this is what i've got so far, but i don't see it working. https://gist.github.com/4655708
[15:29:16] <Zelest> rquilliet, http://pastie.org/private/3ai9zgv6bnbj31ncalblg
[15:29:32] <Zelest> rquilliet, might be the ugliest code I've ever written.. but that lets you read huge files without loading the entire file into RAM.
[15:29:42] <JoeyJoeJo> kali: I have 220+ million documents and .drop() is taking forever
[15:29:47] <Zelest> rquilliet, instead of echo, simply insert it into mongo.
[15:33:04] <JoeyJoeJo> NodeX: I think I phrased my question poorly. I want to completely remove a database from 4 shards, not a shard. Is that possible?
[15:33:15] <rquilliet> Zelest : thanks for the doc
[15:33:53] <Zelest> :-)
[15:34:49] <rquilliet> NodeX : i confirm 10s for 100K lines, 100s for 1M lines
[15:35:03] <rquilliet> and i just fgets() each line and $db->insert the doc
[15:35:20] <rquilliet> no index, nothing
[15:35:40] <NodeX> JoeyJoeJo : drop() as kali says then if you need to clean things do an rm -rf
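For the database-wide case, a sketch: connect to a mongos and drop the database there, which removes it from all of the shards it lives on:
    use MyDB
    db.dropDatabase()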
[15:35:59] <NodeX> rquilliet : can you pastebin your php?
[15:36:27] <rquilliet> $conn = new MongoClient();
[15:36:27] <rquilliet> $db = $conn->deezer_db;
[15:36:27] <rquilliet> $data = $db->data_test;
[15:36:27] <rquilliet> $data->drop();
[15:36:30] <rquilliet> damn
[15:38:17] <kali> i thought db.drop() was more or less O(1)...
[15:39:04] <rquilliet> Nodex, do you want me to copy paste here ?
[15:40:01] <kali> rquilliet: use pastebin!
[15:40:04] <rquilliet> http://pastebin.com/Z8kmRkYG
[15:40:16] <kali> thanks
[15:40:28] <rquilliet> you're welcome
[15:43:09] <NodeX> why the preg_match ?
[15:43:56] <NodeX> personally I would split the files and fork it with non-safe writes and remove the preg_match - this will give you the biggest performance gain
[15:46:15] <rquilliet> so you would 1/ split the files in chunks
[15:46:40] <NodeX> yes
[15:46:53] <NodeX> and fork the process
[15:47:03] <rquilliet> 2/ use pcntl_fork
[15:47:08] <NodeX> spawn 20 instances to insert
[15:47:25] <NodeX> you can use that if it's available, else just fork it normally
[15:47:52] <rquilliet> mmh i'm not sure i understand
[15:48:01] <rquilliet> (not too expert in forking ...)
[15:48:41] <NodeX> tell php to spawn itself
[15:51:17] <rquilliet> and about the "fork it with the non safe writes"?
[15:51:37] <rquilliet> how can i do that without checking the safety of the line ?
[15:51:58] <NodeX> if you want safe writes then have safe writes
[15:53:30] <rquilliet> right
[15:58:05] <rquilliet> about forking NodeX
[15:58:23] <rquilliet> there is a balance to strike between the number of child processes and the size of the chunks, isn't there?
[15:58:31] <rquilliet> what would you advise ?
[16:01:06] <NodeX> I normally just create X nodes and loop through them with X children
[16:01:45] <rquilliet> and how do you calibrate the X ?
[16:02:15] <NodeX> X is down to you
[16:02:31] <NodeX> 10M / 100 = 100k per child
[16:03:34] <timah> yeah… i just recently finished a single-threaded process that does 30M in 15 minutes.
[16:03:56] <rquilliet> how did you do that ?
[16:04:04] <timah> i'll paste bin it.
[16:04:22] <timah> have to get ready for work so won't be here to hand out tips.
[16:04:28] <timah> but maybe something will help.
[16:04:32] <rquilliet> alright
[16:04:34] <rquilliet> thanks a lot
[16:09:30] <timah> here you go: http://pastebin.com/8k3nyi9T
[16:10:02] <rquilliet> cheers
[16:10:08] <JoeyJoeJo> I'm inserting a few million documents using a python script I wrote. It usually inserts at about 5000 docs/sec but for some reason it'll just hang for 200-300 seconds at a time, then continue inserting. How can I track down what is causing inserts to hang?
[16:10:28] <JoeyJoeJo> I'm only running one instance of my script and nothing else is using that DB or collection, so I don't think it's a write lock issue
[16:13:10] <timah> rquilliet: http://pastebin.com/dvJnZhbE
[16:13:20] <timah> use that one…
[16:14:02] <rquilliet> so you're inserting 30M docs in 15min
[16:15:23] <timah> yes.
[16:15:33] <timah> but even better than that.
[16:16:46] <JoeyJoeJo> timah: Holy crap, I wish I could insert half as fast as that. What's your setup like?
[16:17:57] <timah> it's taking a single collection containing 30m+ documents, iterating the cursor (find all), transforming from the old document schema to the new one (including hash to mysql id lookups via apc), and inserting batches of 5000, which equates to anywhere between 25k-50k/sec.
[16:18:24] <timah> this is just my macbook pro.
[16:18:52] <timah> doing the transform on my local machine since it's a one-time-only kinda thing.
[16:18:56] <JoeyJoeJo> WTF? And I'm only getting 5000/sec max on 4 shards
[16:19:07] <timah> :S
[16:19:49] <timah> yeah… it's a 2.8GHz quad-core with 16GB ram and a 128GB SSD.
[16:20:05] <timah> and i'm sure the SSD & RAM make the biggest difference.
[16:20:44] <JoeyJoeJo> That's nothing. Each of my shards is 32 cores, 256GB and raided SSDs
[16:20:56] <JoeyJoeJo> So I feel like I should be inserting at like 1 million/sec
[16:21:14] <NodeX> not a chance of 1M inserts/sec
[16:21:50] <JoeyJoeJo> I was exaggerating, but still. 5000/sec on that setup sucks and I can't for the life of me figure out why it's that slow
[16:22:04] <timah> are you batching?
[16:22:24] <JoeyJoeJo> No, that's a new concept for me
[16:22:28] <timah> dude.
[16:24:02] <timah> create a plain ol' array before your loop. push your document onto that array when you would usually insert your document into the collection. then check the length of your array and if it's >= your desired batch insert amount then call your .insert(batchArray).
[16:24:09] <timah> check out my pastebin.
[16:24:16] <timah> http://pastebin.com/dvJnZhbE
[16:25:30] <timah> line 134.
[16:25:42] <JoeyJoeJo> That makes sense. I'm going from csv->mongo and I just iterate over each line of the csv. The current csv files I'm inserting are about 4.8M lines
[16:27:15] <JoeyJoeJo> I'm also using python. I wonder if there is any speed difference between the python driver and the php driver?
[16:28:18] <timah> $ffCollection->batchInsert($batch, array('w' => 0));
[16:28:37] <timah> that's another thing… i've disabled write concerns.
[16:30:11] <JoeyJoeJo> timah: Are there any down sides to disabling write concerns?
[16:30:28] <timah> in a production environment, yes.
[16:31:05] <timah> however… i'm running a follow-up script to validate at a very high-level whether the data made it.
[16:34:53] <NodeX> write concern is down to you: if you can afford to miss data then don't enable it, else do
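The batching idea in shell terms, as a rough sketch with a made-up collection name; drivers expose the same thing as batchInsert / bulk inserts, and w:0 (unacknowledged writes) trades error reporting for speed:
    var batch = [];
    for (var i = 0; i < 100000; i++) {
        batch.push({ n: i });           // build documents up in memory
        if (batch.length >= 5000) {     // flush every 5000 docs as one insert
            db.target.insert(batch);
            batch = [];
        }
    }
    if (batch.length > 0) db.target.insert(batch);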
[16:39:03] <supernayan> Meta-1
[16:45:13] <bean> does a db.copyDatabase() read lock?
[17:25:49] <JoeyJoeJo> Is this line from my log file an error? CMD fsync: sync:1 lock:0
[17:26:04] <JoeyJoeJo> Or maybe a better question is, why does it show up so much?
[18:02:55] <theotherguy> recommendation on a dbaas for mongodb?
[18:14:37] <JoeyJoeJo> Is there a quick way to stop a collection? By stop I mean tell mongo to not process any queries and stop whatever ops it's currently doing? I have one collection taking up a huge amount of lock % and screwing over my other collections. I'm trying to do .drop() but it's taking forever.
[18:20:27] <moskiteau> hello
[20:34:03] <rideh> struggling to import a csv, renders fine in excel, but when i open it with sublime it carries past a ton of lines. tried fixing encoding and line endings, cannot get it into mongo properly. advice?
[20:57:54] <JoeyJoeJo> How can I remove a document if the value of a field is not an int?
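One way, sketched with a hypothetical field name; BSON type 16 is a 32-bit int and 18 is a 64-bit long:
    // remove documents where myfield exists but is not a 32-bit int
    db.mycoll.remove({ myfield: { $exists: true, $not: { $type: 16 } } })
    // add a second clause for $type 18 if 64-bit longs should be kept as well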
[21:36:37] <owen1> one of my hosts has an old version of the replica set configuration. how do I force it to reconfigure?
[21:37:04] <owen1> when i try to reconfigure it, it says: exception: member localhost:27017 has a config version >= to the new cfg version; cannot change config
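The usual answer to that error is a forced reconfig from the member you want to win, treated here as a hedged sketch since it can be destructive:
    cfg = rs.conf()
    // ...edit cfg.members as needed...
    rs.reconfig(cfg, { force: true })   // force lets the node accept a config despite the version check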
[22:10:28] <jtomasrl> how can i store an array of ObjectIds?
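Nothing special is needed; ObjectIds can sit in an array field like any other value (collection name hypothetical, the ids reused from earlier in the log):
    db.things.insert({ refs: [ ObjectId("510634355fc223c57bd919b3"), ObjectId("510299093004e51abdd65c9d") ] })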
[22:32:07] <timah> so i know how to successfully generate an objectid based on a timestamp, but it is missing everything in its definition except for that timestamp.
[22:32:30] <timah> is there a way to generate an objectid and have it include the machine id, process id, inc, etc?
[23:03:32] <weq> What's the difference between MongoClient.connect and MongoClient.open?
[23:28:47] <kenneth> hey there; does anybody know how to do multiple sets in an array with the following syntax:
[23:28:48] <kenneth> schema: {_id: …, my_arr: [{id: 1, name: "one"}, {id: 2, name: "two"}]}
[23:28:49] <kenneth> update({_id: whatever, "my_arr.id": 1}, {"my_arr.$.name": "ONE"})
[23:29:09] <kenneth> what if i want to update my_arr[0] and my_arr[1] in the same update?
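The positional $ operator only updates the first matching element, so for a known, small set of positions the usual workaround is explicit indexes under a single $set; a sketch against the schema above:
    db.coll.update(
        { _id: whatever },
        { $set: { "my_arr.0.name": "ONE", "my_arr.1.name": "TWO" } }
    )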