[00:41:04] <fommil> if I do a sorted search and go through the cursor, and this takes say 5 minutes or so – what will happen if the collection is edited in that time?
[00:45:41] <fommil> also, does the timeout work for the whole search, or just for each next() call of the cursor?
[00:45:59] <fommil> I'm really struggling to find this info from the docs
[02:02:49] <Dr{Who}> anyone know what a bson type of 120 is? I am storing what I think is a copy of a bson oid type in a BSONElement, but it seems to change from 7 to 120. I'm thinking it's not a copy but a pointer into the real record? So what is 120, and what is a safe way to copy a BSONElement?
[02:04:58] <Dr{Who}> meh. ya, just saying BSONElement A = B does not actually copy the data, looks to be by ref: as soon as I do a c.next() it changes to 120
[02:12:22] <Dr{Who}> ok I see part of the issue. I do copy the BSONObj using .copy(), but then I extract the _id into an element; once I change my copy it breaks. So I need a real copy of my field (in my case _id), something I can keep around a bit.
[02:13:04] <Dr{Who}> or do I have to just settle with a full copy of the document?
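The behaviour Dr{Who} describes matches BSONElement being a view into its BSONObj's buffer rather than an owned value; in the C++ driver the usual fix is to keep an owned BSONObj alive (e.g. via copy()/getOwned()) for as long as any element extracted from it is used. The same shallow-view-vs-owned-copy distinction, sketched in Python purely for illustration:

```python
import copy

# Simulated "document buffer": the outer dict plays the role of a BSONObj,
# and holding a reference to one of its values is like holding a BSONElement.
doc = {"_id": [1, 2, 3], "name": "example"}

view = doc["_id"]                   # a view/reference, not an independent copy
owned = copy.deepcopy(doc["_id"])   # an owned copy that survives mutation

doc["_id"].append(4)                # mutate the underlying "buffer"

print(view)    # reflects the mutation: [1, 2, 3, 4]
print(owned)   # unchanged: [1, 2, 3]
```

The owned copy is what lets you keep the field around after the original buffer changes (or, in the cursor case, after c.next() reuses it).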
[02:41:34] <Guest_1448> is it possible to group documents by a field and return a result with the documents keyed by field?
[02:43:50] <Guest_1448> e.g. if I have [{a: 'b', foo: 'bar'}, {a: 'b', y: 'z'}, {a: 'c', blah: 'blah'}] I want mongo to group by 'a' field and return {'b': [{a: 'b', foo: 'bar'}, {a: 'b', y: 'z'}], 'c': [{a: 'c', blah: 'blah'}]}
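What Guest_1448 asks for is a re-keyed result; MongoDB's grouping (e.g. $group in the aggregation framework, with $push to collect the documents) returns an array of groups rather than an object keyed by the field, so the re-keying is normally done client-side. A minimal sketch of that reshape step in Python, using the sample documents from the question:

```python
from collections import defaultdict

docs = [
    {"a": "b", "foo": "bar"},
    {"a": "b", "y": "z"},
    {"a": "c", "blah": "blah"},
]

# Client-side reshape: key the documents by their 'a' field.
grouped = defaultdict(list)
for d in docs:
    grouped[d["a"]].append(d)

print(dict(grouped))
# {'b': [{'a': 'b', 'foo': 'bar'}, {'a': 'b', 'y': 'z'}],
#  'c': [{'a': 'c', 'blah': 'blah'}]}
```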
[11:55:55] <Kakera> is it possible to remove multiple documents by id?
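On Kakera's question: yes, a single remove with an $in filter deletes several documents by id. A sketch (the actual server calls are shown in comments since they need a live mongod; the ids here are hypothetical):

```python
# With a live server this would be, in the legacy shell/driver:
#   db.coll.remove({"_id": {"$in": ids}})
# or in modern PyMongo:
#   coll.delete_many({"_id": {"$in": ids}})
ids = [1, 3]                         # hypothetical ids to delete
filter_doc = {"_id": {"$in": ids}}

# Tiny client-side model of what the $in filter matches:
docs = [{"_id": 1}, {"_id": 2}, {"_id": 3}]
deleted = [d for d in docs if d["_id"] in filter_doc["_id"]["$in"]]
survivors = [d for d in docs if d["_id"] not in filter_doc["_id"]["$in"]]
print(len(deleted), survivors)  # 2 [{'_id': 2}]
```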
[11:57:46] <hever> Hello, since MongoDB provides Array.unique in Map/Reduce, I'm wondering what JavaScript version or framework runs server-side, because Array.unique is not a "standard" JS function.
[13:09:37] <hever_> Can I somehow arrange that keys with just one value are ignored and not made part of the result?
[13:11:35] <hever_> I'm using map and reduce, and if a key has just one value there's nothing for reduce to do (it isn't passed to reduce at all), so I have to adjust the data in finalize, but I'd rather drop such keys from the result there, or mark them with some flag perhaps...
[13:12:43] <kali> hever_: there is something fishy here. reduce should not change the format of the data, just aggregate it
[13:12:58] <kali> hever_: precisely because some data can go through the reduce 0 to N times
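kali's point, that reduce may run zero times (a key with a single emit) or several times (re-reduce), is exactly why reduce must accept and return the same shape the map function emits. A toy Python model of that contract:

```python
from collections import defaultdict

def map_fn(doc):
    yield doc["k"], {"count": 1}

def reduce_fn(key, values):
    # Must return the SAME shape it receives, because the engine may call it
    # zero times (single emit) or several times (re-reduce of partial results).
    return {"count": sum(v["count"] for v in values)}

def map_reduce(docs):
    emitted = defaultdict(list)
    for doc in docs:
        for k, v in map_fn(doc):
            emitted[k].append(v)
    out = {}
    for k, vals in emitted.items():
        # a key with a single emitted value skips reduce entirely,
        # mirroring MongoDB's behaviour hever_ describes
        out[k] = vals[0] if len(vals) == 1 else reduce_fn(k, vals)
    return out

docs = [{"k": "a"}, {"k": "a"}, {"k": "b"}]
print(map_reduce(docs))  # {'a': {'count': 2}, 'b': {'count': 1}}
```

Because the single-value key 'b' never goes through reduce, its value only looks right if map already emits the final shape; filtering such keys out belongs in finalize or in a follow-up query, not in reduce.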
[15:29:10] <bizzle> Does anyone know how, in Java, I can keep an insert attempt from throwing an exception on a duplicate key error? I would like to just handle it in the application by checking commandResult.getLastError().ok()
[15:29:25] <bizzle> and avoid using try { } catch { }
[15:38:36] <Guest_1448> if I do db.coll.find({...}).limit(100)
[15:38:59] <Guest_1448> is it possible to find out the total amount of documents that the query would return IF I hadn't applied the limit?
[15:39:06] <Guest_1448> I just want to get a 'max' value for paging
[15:39:44] <ron> nope. you'd have to do a count() separately.
[15:40:27] <Guest_1448> can I do foo=db.coll.find({...}); foo.count() then continue calling .limit() etc on the 'foo' cursor?
[15:56:11] <Guest_1448> is there a way to 'apply' the options to a cursor later then? like .find(..).options({sort:'x', limit: 200}) instead of .find(..., options)
[16:23:04] <skot> anyway, the cursor is just a holder for fields which will be sent to the server before you start getting results.
[16:27:45] <Guest_1448> not the query, the options object that holds the sort/limit/skip options
[16:28:06] <Guest_1448> I want to do a .find(query), get the count then apply the rest of the options/paging stuff and carry on
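On the paging subthread: per ron's advice, the count is issued separately from the limited query, and the page total is derived from it. A client-side sketch of that count-then-page flow (pure Python standing in for count() plus skip()/limit(); names are illustrative):

```python
import math

def page(docs, query, page_no, per_page):
    """Count-then-page: run the count separately from the limited query."""
    matching = [d for d in docs if all(d.get(k) == v for k, v in query.items())]
    total = len(matching)                 # what a separate count() would return
    pages = math.ceil(total / per_page)   # the 'max' value for paging
    start = (page_no - 1) * per_page      # skip()
    return matching[start:start + per_page], total, pages   # limit()

docs = [{"x": 1, "i": i} for i in range(250)]
items, total, pages = page(docs, {"x": 1}, page_no=3, per_page=100)
print(total, pages, len(items))  # 250 3 50
```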
[16:42:22] <igotux> hey guys.. I'm getting this on a slave node: "[replslave] all sources dead: data too stale halted replication, sleeping for 5 seconds"
[16:42:40] <igotux> can someone help me bring this slave node up to date?
[16:53:23] <kali> igotux: replslave? you're still using the old master/slave replication?
[17:01:16] <igotux> kali: this is for migrating to a new RS...
[17:01:50] <igotux> currently our master is small.. so we thought of porting the data to a bigger node first, and then we'll make that slave node the master and start a replica set
[17:02:50] <kali> i assume this is a live system?
[17:02:58] <kali> you can't afford much downtime?
[17:04:47] <igotux> it synced almost all of the data to the new system, and right at the end it started showing this error and stopped replicating...
[17:05:26] <kali> so you had one master and one slave, but you outgrew them, so you're trying to replace the slave with a bigger one, and plan to use this slave as a seed for a replica set?
[17:06:50] <igotux> our DB is growing pretty fast and now we need to convert it to an RS... so we took a big node and made it a slave of the current master, while the current slave is also syncing from the current master...
[17:06:58] <igotux> but on the bigger node I'm seeing this issue
[17:07:16] <igotux> this bigger node I'll use as an RS node
[17:07:33] <igotux> and add more fat nodes alongside it.. sounds good?
[17:08:46] <igotux> the idea is to get rid of the current master -> slave setup
[17:10:39] <kali> i think i would try to degrade the current master/slave to standalone, and seed the replica set from there
[17:11:14] <igotux> you mean, start the current master with a RS name ?
[17:12:19] <kali> mmm yeah: 1/ make the current master a standalone node 2/ check everything is fine 3/ restart the standalone node with an rs name, add a node, and hope the sync will be fast enough
[17:13:17] <kali> igotux: but i've never done that: i don't know how "broken" a master/slave setup is if you try to "downgrade" it to standalone
[17:14:16] <igotux> there is no change on the master side if you make it standalone, no? as per my understanding, a master node is a standalone node as well.. correct?
[17:14:42] <kali> i don't know, i have no experience on the old master/slave system
[17:15:29] <kali> but i have some on RS, so at least, i might help you once we're on this terrain :)
[17:15:30] <igotux> master and slave are running on 2.2.1
[17:16:24] <igotux> kali: in the pasteit, "Sat Jan 5 08:32:12 [replslave] repl: nextOpTime Jan 5 06:26:37 50e7c79d:4 > syncedTo Jan 5 05:26:56 50e7b9a0:1c"
[17:17:11] <kali> it looks like your oplog is not deep enough
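For the migration subthread: kali's three-step plan ends with initiating a replica set on the restarted node. A sketch of building the replSetInitiate config document (set name and hostname here are hypothetical; with PyMongo the document would be passed as client.admin.command("replSetInitiate", config) after restarting mongod with --replSet):

```python
def rs_initiate_config(name, hosts):
    """Build a replSetInitiate config document (step 3 of the plan).
    Further members can then be added with rs.add(...) from the shell."""
    return {
        "_id": name,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }

print(rs_initiate_config("rs0", ["oldmaster:27017"]))
# {'_id': 'rs0', 'members': [{'_id': 0, 'host': 'oldmaster:27017'}]}
```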
[22:24:55] <timah> that's the new design… the previous design (created by someone other than myself) :) looked something like this: { _id: ObjectId(), properties: { some_key: 'some_static_option_value', some_other_long_key: 'some_other_long_static_option_value', etc: 'etc_etc' } }
[22:27:49] <timah> my new design includes the application layer handling the on-the-fly transformation (compacting, really) of both keys and values for all of this static data.
[22:31:56] <timah> by using multiple collections (g.1.e - events, g.1.c - counts/aggregates), as well as including the second-level grouping id in the document._id itself, i've drastically reduced the overall size of the data (dataSize?), as well as improved performance.
[22:33:22] <timah> however… it seems as though either the preallocation or nssize is forcing my uber-tiny documents to still pad much more than necessary.
[22:34:23] <timah> in size, that is… if i'm not being clear i apologize… it's not really all that complicated… i'm really just trying to figure out why the size on disk doesn't seem to change much, even though the size of my documents has changed a lot.
[22:35:24] <Dr{Who}> timah: did you rebuild the entire collection or just modify it on the fly?
[22:36:19] <timah> this is without any modifications post-transformation (old>new).
[22:37:29] <Dr{Who}> timah: you may try exporting all the bson data and then re-importing, or just doing a repair on the collection, to see if it changes.
[22:43:50] <Dr{Who}> timah: be warned, btree indexes in general on any database tend to fall apart for insert speed when you get into the millions or billions of records.
[22:44:51] <timah> yeah… speaking of… so what i've done is partitioned using collections.
[22:45:11] <Dr{Who}> ya that works from my experience.
[22:46:35] <Dr{Who}> a recent downside I am finding is that collections don't seem to have any unique id, so it can be hard to manage the shards.
[22:47:43] <cad> Are pymongo lists atomic? For example, if I append something to a list in an existing document and then save() it, will that be atomic?
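On cad's question: a client-side read-modify-save is not atomic, since another writer can interleave between the read and the save; a server-side $push update is atomic per document. A sketch (the real driver call is shown in a comment, as it needs a live server):

```python
# Read-modify-save (load the doc, list.append(...), save()) is NOT atomic:
# another writer can interleave between the read and the write.
# A server-side $push update is applied atomically per document; with a
# live server this would be:
#   coll.update({"_id": doc_id}, {"$push": {"tags": "new-tag"}})

def apply_push(doc, update):
    """Tiny client-side model of a $push update, for illustration only."""
    for field, value in update["$push"].items():
        doc.setdefault(field, []).append(value)
    return doc

doc = {"_id": 1, "tags": ["a"]}
print(apply_push(doc, {"$push": {"tags": "b"}}))  # {'_id': 1, 'tags': ['a', 'b']}
```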