[00:29:14] <beilabs> Hi all, what's the best approach to copy and replace all the data from one mongo database to another? both are on the same cluster. About 4GB.
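A minimal sketch of the usual options at the time, assuming placeholder database names ("sourcedb"/"targetdb") and the shell's copyDatabase helper:

    // in the mongo shell: drop the target first ("replace"), then copy server-side
    db.getSiblingDB("targetdb").dropDatabase()
    db.copyDatabase("sourcedb", "targetdb")
    // or from the OS shell, dump and restore:
    //   mongodump --db sourcedb --out /tmp/dump
    //   mongorestore --db targetdb --drop /tmp/dump/sourcedb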
[05:31:04] <hekman> Yeah, I’ll just wait I guess! :)
[05:31:11] <hekman> sounds like it should be any day now
[05:35:52] <voidhouse> How exactly can I select the 1/Nth part of a collection?
[05:51:04] <jeff1220> Hey, I have a couple quick questions that I can't seem to find a definite answer to in the docs. I want to know whether, if I create an index on a capped collection, the index entries are cleaned up as old documents are aged out of the capped collection. From everything I can gather, this seems to be the case, but oddly enough I can't find anywhere that this is stated outright.
[05:52:34] <jeff1220> Basically, I have a sha256 field that I want to index on, and I want to make sure that my indexes won't continue to grow forever in a capped collection
[05:52:52] <bcows> any quick/hack to make $in do a case-insensitive comparison ?
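One known trick, sketched with placeholder names: $in accepts regular expressions, so anchored case-insensitive patterns work, though they can't use an index as efficiently as exact matches:

    db.users.find({ name: { $in: [/^alice$/i, /^bob$/i] } })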
[07:37:46] <voidhouse> Hey, why can't I do something like `db.collection.db.ensureIndex({ 'prop': 1 }, { unique: true, dropDups: true })` to drop duplicates?
[07:43:51] <joannac> voidhouse: you can, the syntax is here http://docs.mongodb.org/manual/tutorial/create-a-unique-index/#drop-duplicates, looks like you have an extra "db"
[07:44:40] <voidhouse> joannac: Thanks a lot, that was a silly mistake. worked like a charm.
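For reference, the corrected call from that page (dropDups existed through the 2.x series, and note it silently deletes all but one of each duplicate):

    db.collection.ensureIndex({ prop: 1 }, { unique: true, dropDups: true })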
[08:43:57] <svm_invictvs> Can that method be used to implement an atomic remove operation?
[08:44:16] <svm_invictvs> For example, can I do a find, get the whole document, and then pass the whole document?
[08:44:30] <svm_invictvs> Does the passed object have to match what's stored in the database exactly?
[08:45:38] <voidhouse> Whoa, now it is so cumbersome: db.words.findAndModify({query: {name:'abc'}, update: {$set : {somenewfield: 'data' }}})
[09:56:43] <voidhouse> I am using node-mongodb-native; if I run `db.collection.findAndModify({ _id: ObjectId("527fa9676ae09a7d370017da") }, [['__id','asc']], {'$set': { score: '7' }}, {{new:true}})` it removes all other keys in my document!
[09:57:12] <voidhouse> I want to just upsert the 'score' key.
[10:13:09] <x0f> voidhouse, change new:true to upsert:true
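A sketch of the corrected call, assuming the legacy node-mongodb-native signature (query, sort, update, options, callback) and dropping the doubled braces and stray underscore from the original paste:

    var ObjectID = require('mongodb').ObjectID;
    collection.findAndModify(
      { _id: new ObjectID('527fa9676ae09a7d370017da') },
      [['_id', 'asc']],
      { $set: { score: '7' } },      // $set touches only the score key
      { new: true, upsert: true },   // return the updated doc; insert if absent
      function (err, doc) { /* doc holds the updated document */ }
    );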
[10:32:51] <johnny_bravo> and there are other apps running too
[10:32:52] <Derick> that makes network issues even more likely
[10:33:12] <johnny_bravo> and i have monitoring running on all boxes
[10:34:06] <johnny_bravo> hmm i'm not convinced it's network related
[10:34:19] <voidhouse> x0f: Thanks a lot, that did the trick. Although the documentation says that 'upsert' would create a new document if it's not there.
[10:34:23] <johnny_bravo> it's been going on for a while and i have been keeping a close eye on it
[10:34:30] <johnny_bravo> never seen a lost packet
[10:35:11] <johnny_bravo> i even ran long pings and the reply time didn't get really long, but the secondaries still lost sight of the primary
[10:38:43] <johnny_bravo> on a different note, what happens if i have an even number of nodes in a replica set but they have different priorities set, to ensure there is no issue electing a primary?
[10:39:57] <johnny_bravo> would even number of nodes be ok in that case?
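Priorities only bias which member wins an election; they don't change the vote count, so an even number of voting members can still split with no side holding a majority. The usual fix is an arbiter, sketched here with a placeholder hostname:

    // run on the primary: adds a vote-only member, keeping the count odd
    rs.addArb("arbiter.example.net:27017")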
[10:45:57] <tiller> Is there a way to do an upsert looking like that: http://pastebin.com/jjJq1p0q ?
[10:46:28] <tiller> with the "_id" on the $setOnInsert mongo gives me : "Mod on _id not allowed"
[10:46:56] <tiller> and without, it gives me : "E11000 duplicate key error index" when it tries to insert (no problem on update)
[10:56:07] <tiller> Oh, I found a way. Instead of: {"_id": "ID1", ...}, if I use : {"_id": {$in: ["ID1"]}, ...} the upsert works fine ;o
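A reconstruction of that workaround (hedged, since the pastebin has expired and the update body here is hypothetical): with the $in wrapper the server no longer treats the query's _id as an implicit mod, so $setOnInsert can supply it on the insert path:

    db.items.update(
      { _id: { $in: ["ID1"] } },
      { $setOnInsert: { _id: "ID1", createdAt: new Date() },
        $set: { updatedAt: new Date() } },
      { upsert: true }
    )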
[11:08:25] <padan> I'm using a map/reduce like this: http://pastebin.com/39NKk6Y3 .. on docs like this: http://pastebin.com/vLsftU2j ... and i'd like results like this: http://pastebin.com/f6Dyz3RD ... it is returning results like this: http://pastebin.com/R191EA77
[12:14:16] <padan> how can I troubleshoot slow mapreduce queries? Is there something like a query plan that will show me what objects it is trying to use?
[12:15:23] <kali> padan: one way to speed up your query considerably is to move the LevelId condition to the "query" argument of m/r
[12:15:38] <kali> padan: if there is no "query" argument, m/r scans the whole collection
[12:15:54] <kali> padan: if you move it to the query, it will make use of indexes when possible
[12:15:59] <padan> Ah. I did create an index on levelid and was curious on why it didn't use it
[12:16:07] <padan> didn't know I needed some other bit in there.
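The shape kali is describing, as a sketch (the collection name and LevelId value are placeholders; mapFn/reduceFn stand for the functions in padan's paste):

    db.scores.mapReduce(mapFn, reduceFn, {
      query: { LevelId: 3 },   // filtered via the { LevelId: 1 } index before map() runs
      out: { inline: 1 }
    })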
[12:24:32] <padan> How bad of an idea is it to sort my values in the reduce function? I'd like to return some percentiles as well as the averages and sums
[12:29:20] <kali> you can't do this in reduce. maybe in finalize, but i would not advise it
[12:29:36] <kali> anything that you can do out of the DB, do it outside
[12:30:37] <padan> unfortunately to get something like percentiles you need to use a sorted result set -- i suppose i can just return the raw values back to the client and do it there, but it would be better for the db to do the work
[12:33:09] <fredfred> anyone involved in the official C# driver.. especially the connection pooling?
[12:33:46] <padan> in a case like this, is it typically recommended to return the raw values to the client or to use some other structure/function in the database that i dont know about?
[12:34:12] <Derick> fredfred: I don't think they're on here
[12:36:11] <fredfred> Derick: Perhaps the issue I'm facing is also in other drivers? When using authentication I get "auth failed" when the conn pool starts a second connection to the server. Seems like the new thread is not authenticating properly
[12:36:47] <Derick> fredfred: all drivers do this differently, and I am not sure how C# does this. You'd probably be best off writing to the mongo-dev mailinglist.
[12:38:41] <fredfred> Derick: Ok thank you, will do that.
[12:40:44] <Derick> it could also be called mongodb-dev...
[13:00:36] <__mek__> Is there a way to mongorestore a dump just by specifying a directory containing the dump of all collections in that db? I've been trying: <sudo> mongorestore --host 127.0.0.1:27017 -d mydb --drop --dbpath /var/lib/mongodb dump/mydb. I get errors like: "Wed Nov 13 06:49:25 [conn2] build index done 0 records 0 secs", "853 objects found", "don't know what to do with file".
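The likely culprit (an assumption, since the log is partial) is mixing --host with --dbpath: the former restores over the wire to a running mongod, while the latter writes database files directly and expects the server to be stopped. Dropping --dbpath gives the usual form:

    mongorestore --host 127.0.0.1:27017 -d mydb --drop dump/mydb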
[13:07:36] <kali> padan: somebody/something will have to perform the sort (if the dataset is big, consider doing 10 quickselect instead of sorting btw). scaling a stateless application layer being so much easier than scaling a database, i strongly suggest you do that app-side
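A sketch of the app-side selection kali suggests, in plain JavaScript (the values array is hypothetical); quickselect finds the k-th smallest element in average O(n), so a handful of them beats a full sort:

    // Wirth-style quickselect: k-th smallest (0-based); mutates its input
    function quickselect(a, k) {
      var lo = 0, hi = a.length - 1;
      while (lo < hi) {
        var pivot = a[k], i = lo, j = hi;
        do {
          while (a[i] < pivot) i++;
          while (a[j] > pivot) j--;
          if (i <= j) { var t = a[i]; a[i] = a[j]; a[j] = t; i++; j--; }
        } while (i <= j);
        if (j < k) lo = i;
        if (k < i) hi = j;
      }
      return a[k];
    }
    // e.g. the 90th percentile of the raw values returned to the client
    var p90 = quickselect(values.slice(), Math.floor(0.9 * (values.length - 1)));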
[14:40:37] <HashMap> i meant anybody else.. cause I have only 9 questions there.. and don't know if it's the same..
[14:47:46] <ashak> can anyone point me in the direction of documentation that might allow me to work out how much RAM a machine running mongodb might require to do a full recovery? I have a mongo setup that's been working fine for ages, I recently rebuilt one of the nodes in the replicaset but eventually the mongodb process gets killed by the OOM killer.
[15:40:51] <kfb4> hey, i'm looking at the opcounters measure in MMS and I see a lot of update and "command" operations. The updates are expected, but what does "command" mean in this context? It seems to always match the update number.
[15:40:54] <liquid-silence> that does not seem to work either
[15:41:46] <liquid-silence> this returns the correct review though: find({ "reviews.users.user_id" : "528070a20000005c6c000002", "reviews._id": "5283527df9047540206bc531"}, { "reviews.$":1})
[15:44:15] <liquid-silence> when I do .update( {$and: [{ "reviews.users.user_id" : "528070a20000005c6c000002", "reviews._id": "5283527df9047540206bc531"}]}, {"reviews.users.$.approved":true}), I get uncaught exception: can't have . in field names [reviews.users.$.approved]
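The immediate error is because the update document contains no operator, so the dotted path is read as a literal field name in a replacement document. Wrapping it in $set clears that error, though (as Nodex explains below) "$" can only stand in for the one array matched in the query, so the inner users index here is a hardcoded guess (collection name is also a placeholder):

    db.docs.update(
      { "reviews._id": "5283527df9047540206bc531",
        "reviews.users.user_id": "528070a20000005c6c000002" },
      { $set: { "reviews.$.users.0.approved": true } }
    )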
[15:46:35] <liquid-silence> Derick there has to be an easier way of doing this
[15:56:12] <liquid-silence> ok let me rather ask this, is there a way I can get this to work
[16:04:22] <liquid-silence> and is the idea of mongodb not to have nested documents?
[16:05:11] <liquid-silence> but without an adequate query language it's quite disappointing that I have to build this like a relational database in a document store.
[16:05:58] <Derick> liquid-silence: yes, but you still need to think about your schema. And design it according to how you add, update and query it.
[16:06:22] <liquid-silence> Well, if an item only belongs to one parent item
[16:15:30] <Nodex> else you might want to change the schema slightly to something like this.... reviews : { "5283a197f9047540206bc534" : {users:[{......}]}}
[16:15:55] <Nodex> you will have to try it and see; I don't see why it won't work
[16:16:06] <liquid-silence> same error as before: can't append to array using string field name: users
[16:16:21] <Nodex> then you'll need to adapt your schema to the one I showed you
[16:16:34] <Nodex> you will save a few bytes too ;)
[16:29:15] <Nodex> with reviews being an array you need to use the positional operator to get inside it, which stops you from using it again later when you need it for the users array
[16:29:41] <Nodex> if you change the review to an id for a key you can break into it without giving up your positional match for later
[16:30:12] <Nodex> I can't think of any reason not to use an id for a key (as long as it's not a numerical one) aside from the obvious space saver
[16:30:51] <Nodex> please tell me if I'm overlooking one though
[16:41:03] <Derick> liquid-silence: you're too afraid to normalize I think. Instead of arrays and arrays, split it out into multiple documents and duplicate some data.
[16:41:20] <liquid-silence> yeah I need to learn that
[16:43:33] <ashak> can anyone point me in the direction of documentation that might allow me to work out how much RAM a machine running mongodb might require to do a full recovery? I have a mongo setup that's been working fine for ages, I recently rebuilt one of the nodes in the replicaset but eventually the mongodb process gets killed by the OOM killer.
[16:43:34] <Derick> yes, that first comment is good
[16:44:00] <Nodex> But the comment doesn't say why
[16:45:11] <liquid-silence> So using nested documents is bad?
[16:49:53] <Nodex> ok so my answer stands: change to an object structure (unless you feel you don't want to), or adapt the schema, or update it in your app
[16:50:36] <Nodex> I'm quite willing to be told that objects are a bad idea, I just never have been
[16:50:58] <Nodex> I've done quite a lot of research on it over the years and people just say "Don't use them" and give no reason
[16:51:34] <Nodex> {} & [] are both primitives; they probably share the same type
[16:53:13] <liquid-silence> can you do a performance test?
[16:53:35] <Nodex> I use the object method myself in places. Granted, it's not in venues where performance is an issue
[16:54:33] <Nodex> you can time it easily enough. Do an $elemMatch on foo : [{_id:'abc'}] and a straight match on "foo.abc" -> {"foo": {"abc": {....}}}
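The two shapes side by side, as a runnable sketch with placeholder names:

    // array-of-subdocuments schema
    db.things.find({ foo: { $elemMatch: { _id: "abc" } } })
    // object-keyed schema: the id is the key, matched by path
    db.things.find({ "foo.abc": { $exists: true } })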
[16:59:33] <kfb4> hey just asking this again, but it should be a simple question. i'm looking at the opcounters measure in MMS and I see a lot of update and "command" operations. The updates are expected, but what does "command" mean in this context? It seems to always match the update number, so is it really separate from the updates?
[18:18:22] <Tomasso> how about text indexes, are they good enough for a production environment? should I consider other text search techniques, such as implementing my own Levenshtein distance algorithm?
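For context, 2.4-era text search does stemmed keyword matching, not fuzzy matching, so it won't give Levenshtein-style typo tolerance. A sketch with placeholder names (2.4 required starting mongod with --setParameter textSearchEnabled=true):

    db.articles.ensureIndex({ body: "text" })
    db.articles.runCommand("text", { search: "mongodb" })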
[18:19:28] <Nodex> liquid-silence : I block all pm's by default sorry
[18:19:46] <Nodex> + not sure how to turn it off tbh
[18:55:34] <Tomasso> kali: I did it, and found it out.. maybe i will need to refactor my documents with a script.. :S
[19:17:54] <postitnote> Is there a rough formula for calculating a json documents size in string length represented as a document in mongodb? I'm storing 15m documents and the json is on average about 3000 bytes.
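There is no exact string-length formula: BSON stores each element as a type byte plus the full field name plus the encoded value, with length prefixes, so the overhead depends on field-name lengths and value types. The shell can measure a stored document directly:

    Object.bsonsize(db.mycollection.findOne())   // collection name is a placeholder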
[19:30:43] <Tomasso> luckily i did a dump today, i was playing with text indexes, and figured out i lost all my information
[19:36:25] <Tomasso> how could that happen, is that possible?
[21:26:45] <postitnote> I get "error: Couldn't fork %pre: Cannot allocate memory" when installing mongo server on CentOS 5.6. Any suggestions?
[21:26:55] <postitnote> This is an ec2 micro instance
[21:47:45] <maxamillion> has anyone ever seen the index count on the local db be different between secondaries of a replica set but the rs.status() still shows everything in good standing?
[22:05:25] <joannac> Different numbers of indexes?
[22:15:59] <joannac> That's not that weird. My guess is the other secondary was primary at one stage
[22:16:27] <joannac> That's why it has that index, because when it was primary it needed to know who its secondaries were, so it has that extra "local.slaves" collection
[22:17:26] <maxamillion> joannac: alright, is there a way to make the two secondaries re-sync?
[23:12:49] <iamacomp> (when compiling after apt-get install mongodb)
[23:12:59] <iamacomp> /usr/include/boost/thread/tss.hpp: In instantiation of ‘void boost::thread_specific_ptr<T>::reset(T*) [with T = int]’:
[23:12:59] <iamacomp> /usr/include/mongo/client/../util/goodies.h:184:13: required from ‘void mongo::ThreadLocalValue<T>::set(const T&) [with T = int]’
[23:12:59] <iamacomp> /usr/include/mongo/client/../util/net/../../db/../util/../db/../util/concurrency/rwlock.h:357:34: required from here
[23:13:01] <iamacomp> /usr/include/boost/thread/tss.hpp:105:17: error: use of deleted function ‘boost::shared_ptr<boost::detail::tss_cleanup_function>::shared_ptr(const boost::shared_ptr<boost::detail::tss_cleanup_function>&)’
[23:13:03] <iamacomp> In file included from /usr/include/mongo/client/../pch.h:92:0,
[23:13:08] <iamacomp> (oops that's not so nice to look at)
[23:13:17] <iamacomp> and I also compiled from the github
[23:13:34] <iamacomp> using scons --full --use-system-all install-mongoclient
[23:13:48] <iamacomp> and then i get the same error, but in a smaller form :-)
[23:14:27] <iamacomp> /usr/include/boost/thread/pthread/thread_data.hpp: In constructor ‘boost::detail::tss_data_node::tss_data_node(boost::shared_ptr<boost::detail::tss_cleanup_function>, void*)’:
[23:14:27] <iamacomp> /usr/include/boost/thread/pthread/thread_data.hpp:36:41: error: use of deleted function ‘boost::shared_ptr<boost::detail::tss_cleanup_function>::shared_ptr(const boost::shared_ptr<boost::detail::tss_cleanup_function>&)’
[23:14:27] <iamacomp> In file included from /usr/include/boost/shared_ptr.hpp:17:0,
[23:14:29] <iamacomp> from /usr/local/include/mongo/pch.h:49,
[23:14:31] <iamacomp> from /usr/local/include/mongo/client/dbclient.h:30,