PMXBOT Log file Viewer

#mongodb logs for Wednesday the 13th of November, 2013

[00:29:14] <beilabs> Hi all, what's the best approach to copy and replace all the data from one mongo database to another? both are on the same cluster. About 4GB.
[04:27:13] <silne30> Hello all.
[04:27:41] <silne30> I am working on creating a data structure for my database. I am used to relational DBMS (big surprise).
[04:28:24] <silne30> I know MongoDB operates in a schemaless way since it is NoSQL.
[04:28:57] <silne30> But I want to relate data in a document to other documents.
[04:43:00] <k_sze[work]> How does MongoDB deal with timezones of Date values?
[04:43:11] <k_sze[work]> It's always UTC, right?
[04:44:58] <barisc> k_sze[work], would be happy if you share the answer if you get one somewhere else as well
[04:46:11] <k_sze[work]> yeah, it appears that MongoDB dates are always assumed to be UTC.
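
A minimal shell sketch of what k_sze[work] is confirming: a BSON Date carries no timezone, only milliseconds since the Unix epoch, and the shell displays it as a UTC ISODate. (The 'events' collection is made up.)

    db.events.insert({ when: new Date() })   // stored as a BSON Date (UTC milliseconds)
    db.events.findOne().when                 // e.g. ISODate("2013-11-13T04:46:11.123Z")
    // any local-time conversion has to happen in the driver or the application
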
[05:17:59] <hekman> Does anybody in here know when pymongo 2.7 is being released?
[05:26:42] <dangayle> hekman: MongoDB jira says 26 of 29 issues resolved for pymongo 2.7
[05:27:33] <dangayle> Actually, it says that it was due Last Thursday
[05:27:42] <hekman> Yeah that’s what I was noticing...
[05:28:30] <hekman> I really need PYTHON-549… :)
[05:29:40] <dangayle> Why not grab the latest dev version off of github?
[05:30:09] <hekman> I was thinking of forking it and running off my fork, but this is for a production issue :(
[05:30:57] <dangayle> mmm. Sorry.
[05:31:04] <hekman> Yeah, I’ll just wait I guess! :)
[05:31:11] <hekman> sounds like it should be any day now
[05:35:52] <voidhouse> How can I exactly select the 1/Nth part of a collection?
[05:51:04] <jeff1220> Hey, I have a couple quick questions that I can't seem to find a definite answer to in the docs. I want to know whether, if I create an index on a capped collection, the index entries are cleaned up as old documents are aged out of the capped collection. From everything I can gather, this seems to be the case, but oddly enough I can't find anywhere that this is stated outright.
[05:52:34] <jeff1220> Basically, I have a sha256 field that I want to index on, and I want to make sure that my indexes won't continue to grow forever in a capped collection
[05:52:52] <bcows> any quick/hack to make $in do a case-insensitive comparison ?
[07:37:46] <voidhouse> Hey, why can't I do something like `db.colleciton.db.ensureIndex({ 'prop': 1 }, { unique: true, dropDups: true })` to drop duplicates?
[07:43:51] <joannac> voidhouse: you can, the syntax is here http://docs.mongodb.org/manual/tutorial/create-a-unique-index/#drop-duplicates, looks like you have an extra "db"
[07:44:40] <voidhouse> joannac: Thanks a lot, that was a silly mistake. worked like a charm.
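
For reference, the call joannac is describing, without the stray ".db" (2.4-era syntax; note that dropDups silently deletes all but the first document for each duplicate key, so use it with care):

    db.collection.ensureIndex({ prop: 1 }, { unique: true, dropDups: true })
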
[08:29:42] <[AD]Turbo> hola
[08:36:44] <voidhouse> How can I do this: `db.words.findAndModify({'name':'abc'}, {$set : {'someNewFied': 'yes' }})` ?
[08:36:59] <voidhouse> I get '"errmsg" : "need remove or update"' :(
[08:37:16] <voidhouse> s\Fied\Property
[08:42:29] <svm_invictvs> Heya
[08:43:45] <svm_invictvs> http://api.mongodb.org/java/current/com/mongodb/DBCollection.html#remove(com.mongodb.DBObject)
[08:43:57] <svm_invictvs> Using that method, can that be used to implement an atomic remove operation?
[08:44:16] <svm_invictvs> For example, can I do a find, get the whole document, and then pass the whole document?
[08:44:30] <svm_invictvs> Does the pass object have to match what's stored in the database exactly?
[08:45:38] <voidhouse> Woh, now it is so cumbersome: db.words.findAndModify({query: {name:'abc'}, update: {$set : {somenewfield: 'data' }}})
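
The "need remove or update" error above comes from passing two positional arguments; in the shell, findAndModify takes a single document with named fields. A sketch using the names from the conversation:

    db.words.findAndModify({
        query:  { name: 'abc' },
        update: { $set: { someNewField: 'yes' } },
        new:    true    // return the modified document rather than the original
    })
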
[09:56:43] <voidhouse> I am using node-mongodb-native. If I run `db.collection.findAndModify({ _id: ObjectId("527fa9676ae09a7d370017da") }, [['__id','asc']], {'$set': { score: '7' }}, {{new:true}})` it removes all other keys in my document!
[09:57:12] <voidhouse> I want to just upsert the 'score' key.
[10:13:09] <x0f> voidhouse, change new:true to upsert:true
[10:29:08] <johnny_bravo> hi
[10:29:44] <johnny_bravo> i have a replica set in ec2 v2.4.3
[10:30:06] <johnny_bravo> i can see the master keeps going down
[10:30:29] <johnny_bravo> from the secondaries' point of view
[10:30:42] <johnny_bravo> nothing in the primary's logs
[10:31:10] <johnny_bravo> anyone knows why this happens
[10:31:11] <johnny_bravo> ?
[10:31:22] <johnny_bravo> i set the ulimits to the recommended ones
[10:31:47] <johnny_bravo> and it seems to happen 3 minutes past the hour
[10:31:52] <johnny_bravo> which is strange
[10:32:05] <johnny_bravo> i dont have any cron jobs running there
[10:32:20] <Derick> network issues?
[10:32:36] <johnny_bravo> it's in a VPC in AWS
[10:32:51] <johnny_bravo> and there are other apps running too
[10:32:52] <Derick> that makes network issues even more likely
[10:33:12] <johnny_bravo> and i have monitoring running on all boxes
[10:34:06] <johnny_bravo> hmm i'm not convinced it's network related
[10:34:19] <voidhouse> x0f: Thanks a lot, that did the trick. Although the documentation says that 'upsert' would create a new document if it's not there.
[10:34:23] <johnny_bravo> it's been going on for a while and i have been keeping a close eye on it
[10:34:30] <johnny_bravo> never seen a packet lost
[10:35:11] <johnny_bravo> i even ran long pings and the reply time didn't get really long but the secondaries still lost sight of the primary
[10:38:43] <johnny_bravo> on a different note what happens if i have an even number of nodes in a replica set but they have different priorities set to ensure there is no issue electing a primary?
[10:39:57] <johnny_bravo> would even number of nodes be ok in that case?
[10:40:06] <tiller> Hi!
[10:45:57] <tiller> Is there a way to do an upsert looking like that: http://pastebin.com/jjJq1p0q ?
[10:46:28] <tiller> with the "_id" on the $setOnInsert mongo gives me : "Mod on _id not allowed"
[10:46:56] <tiller> and without, it gives me : "E11000 duplicate key error index" when it tries to insert (no problem on update)
[10:56:07] <tiller> Oh, I found a way. Instead of: {"_id": "ID1", ...}, if I use : {"_id": {$in: ["ID1"]}, ...} the upsert works fine ;o
[11:08:25] <padan> I'm using a map/reduce like this: http://pastebin.com/39NKk6Y3 .. on docs like this: http://pastebin.com/vLsftU2j ... and i'd like results like this: http://pastebin.com/f6Dyz3RD ... it is returning results like this: http://pastebin.com/R191EA77
[11:08:34] <padan> any help appreciated...
[11:39:04] <padan> nvm, i figured it out
[12:14:16] <padan> how can I troubleshoot slow mapreduce queries? Is there something like a query plan that will show me what objects it is trying to use?
[12:15:23] <kali> padan: one way to speed up your query considerably is to move the LevelId condition to the "query" argument of m/r
[12:15:38] <kali> padan: if there is no "query" argument, m/r scans the whole collection
[12:15:54] <kali> padan: if you move it to the query, it will make use of indexes when possible
[12:15:59] <padan> Ah. I did create an index on levelid and was curious on why it didn't use it
[12:16:07] <padan> didn't know i need some other bit in there.
[12:18:42] <padan> perfect.
[12:18:47] <kali> padan: mongodb will not analyse your javascript functions. it can not guess that it can use the index to filter the input...
[12:19:03] <padan> so if I use the query bit i can ignore checking for that in the js map part?
[12:19:06] <padan> AH.
[12:19:07] <kali> yes
[12:19:09] <padan> that explains it
[12:19:11] <padan> thanks!
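
A hedged sketch of what kali is describing; padan's pastebins are gone, so the map and reduce bodies here are stand-ins, and the 'scores' collection and LevelId value are invented. The point is the "query" option, which filters with an index before map() ever runs:

    var map    = function () { emit(this.LevelId, 1); };
    var reduce = function (key, values) { return Array.sum(values); };
    db.scores.mapReduce(map, reduce, {
        query: { LevelId: 42 },    // can use an index on LevelId
        out:   { inline: 1 }
    });
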
[12:24:32] <padan> How bad of an idea is it to sort my values in the reduce function? I'd like to return some percentiles as well as the averages and sums
[12:29:20] <kali> you can't do this in reduce. maybe in finalize, but i would not advise it
[12:29:36] <kali> anything that you can do out of the DB, do it outside
[12:30:37] <padan> unfortunately to get something like percentiles you need to use a sorted result set -- i suppose i can just return the raw values back to the client and do it there, but it would be better for the db to do the work
[12:33:09] <fredfred> anyone involved in the official C# driver.. especially the connection pooling?
[12:33:46] <padan> in a case like this, is it typically recommended to return the raw values to the client or to use some other structure/function in the database that i dont know about?
[12:33:56] <Derick> do it in the app
[12:34:12] <Derick> fredfred: I don't think they're on here
[12:36:11] <fredfred> Derick: Perhaps the issue i'm facing is also in other drivers? When using authentication i get "auth failed" when the conn pool starts a second connection to the server. Seems like the new thread is not authenticating properly
[12:36:47] <Derick> fredfred: all drivers do this differently, and I am not sure how C# does this. You'd probably be best off writing to the mongo-dev mailinglist.
[12:38:41] <fredfred> Derick: Ok thank you, will do that.
[12:40:44] <Derick> it could also be called mongodb-dev...
[13:00:36] <__mek__> Is there a way to mongorestore a dump from a directory just by specifying a directory containing the dump of all collections in that db? I've been trying: <sudo> mongorestore --host 127.0.0.1:27017 -d mydb --drop --dbpath /var/lib/mongodb dump/mydb. I get errors like: "Wed Nov 13 06:49:25 [conn2] build index done 0 records 0 secs", "853 objects found", "don't know what to do with file"
[13:01:04] <__mek__> Sorry, funky buffer.
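
A hedged guess at what __mek__ is after: 2.4-era mongorestore either talks to a running mongod (--host) or writes data files directly (--dbpath), so mixing the two as above is likely the problem. The two usual forms:

    # restore through a running mongod
    mongorestore --host 127.0.0.1:27017 -d mydb --drop dump/mydb
    # or write straight into the data files (with mongod stopped)
    mongorestore --dbpath /var/lib/mongodb -d mydb --drop dump/mydb
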
[13:07:36] <kali> padan: somebody/something will have to perform the sort (if the dataset is big, consider doing 10 quickselects instead of sorting btw). scaling a stateless application layer being so much easier than scaling a database, i strongly suggest you do that app-side
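
A plain-JavaScript sketch of the app-side approach kali is recommending (this one does a full sort; for big sets one quickselect per percentile avoids sorting everything):

    function percentile(values, p) {
        var sorted = values.slice().sort(function (a, b) { return a - b; });
        var idx = Math.min(sorted.length - 1, Math.floor(p / 100 * sorted.length));
        return sorted[idx];
    }
    // percentile([3, 1, 4, 1, 5, 9, 2, 6], 90)  ->  9
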
[14:20:02] <ehershey> good morning
[14:38:58] <HashMap> Hi everyone. Is anybody doing finals for MongoDB for DBAs this week?
[14:39:37] <BurtyB2> hmm you?
[14:39:49] <HashMap> well.. yeah.. :D
[14:40:14] <BurtyB> ;)
[14:40:37] <HashMap> i meant anybody else.. cause I have only 9 questions there.. and don't know if its the same..
[14:47:46] <ashak> can anyone point me in the direction of documentation that might allow me to work out how much RAM a machine running mongodb might require to do a full recovery? I have a mongo setup that's been working fine for ages, I recently rebuilt one of the nodes in the replicaset but eventually the mongodb process gets killed by the OOM killer.
[15:34:08] <liquid-silence> hi all
[15:34:14] <liquid-silence> is there a reason why this will not work
[15:34:15] <liquid-silence> update({ $and:[{ "reviews._id": id, "reviews.$.users.user_id": user_id }]}, { $set: { "reviews.users.$.approved": true,
[15:34:57] <liquid-silence> its not updating the user's document that is in the users array, nested underneath the reviews
[15:35:05] <liquid-silence> s/reviews/review
[15:38:23] <liquid-silence> I need to get the current user node for the particular review
[15:38:36] <liquid-silence> review has an array of users
[15:39:05] <Derick> I don't think you can use $ in queries
[15:39:22] <Derick> you shouldn't have to either though
[15:40:36] <liquid-silence> @Derick what do you suggest then
[15:40:49] <liquid-silence> .update({ "reviews.users.user_id" : "528070a20000005c6c000002", "reviews._id": "5283527df9047540206bc531"}, { $set: { "reviews.users.$.approved":true}})
[15:40:51] <kfb4> hey, i'm looking at the opcounters measure in MMS and I see a lot of update and "command" operations. The updates are expected, but what does "command" mean in this context? It seems to always match the update number.
[15:40:54] <liquid-silence> that does not seem to work either
[15:41:46] <liquid-silence> this returns the correct review though find({ "reviews.users.user_id" : "528070a20000005c6c000002", "reviews._id": "5283527df9047540206bc531"}, { "reviews.$":1})
[15:44:00] <liquid-silence> @Derick
[15:44:15] <liquid-silence> when I do .update( {$and: [{ "reviews.users.user_id" : "528070a20000005c6c000002", "reviews._id": "5283527df9047540206bc531"}]}, {"reviews.users.$.approved":true}), I get uncaught exception: can't have . in field names [reviews.users.$.approved]
[15:46:35] <liquid-silence> Derick there has to be easier way of doing this
[15:56:12] <liquid-silence> ok let me rather ask this, is there a way I can get this to work
[15:56:13] <liquid-silence> update({ "reviews._id": "5283527df9047540206bc531", "reviews.users.user_id": "528070a20000005c6c000002"}, { $set: { "reviews.$.users.$.approved":true}})
[15:56:30] <liquid-silence> query portion is fine, the set is not working
[15:56:35] <Derick> i think you have too many nested levels
[15:56:51] <Derick> you can't use two $'s yet in an update - there is a SERVER ticket for it though I think
[15:57:20] <liquid-silence> ok so how do I get past it?
[15:57:32] <Nodex> can't you $elemMatch it then update on the elemMatch ?
[15:57:35] <Derick> redesign your schema
[15:58:47] <liquid-silence> Nodex?
[15:59:44] <Nodex> I don't think it's possible tbh, you would have to pull the document down and adjust it in your app
[15:59:54] <Nodex> as Derick says, re-think your schema
[16:00:47] <liquid-silence> I added a object id to the reviews.users
[16:02:28] <liquid-silence> even if I do this, .update({"reviews.users._id": "5283a197f9047540206bc534"}, { $set: { "reviews.users.$.approved":true}})
[16:02:31] <liquid-silence> it will not work
[16:02:32] <liquid-silence> wtf?
[16:04:22] <liquid-silence> and is the idea of mongodb not to have nested documents?
[16:05:11] <liquid-silence> but without an adequate query language it's quite disappointing that I have to build this as a relational database in a document store.
[16:05:58] <Derick> liquid-silence: yes, but you need to still think about your schema. And design it according to how you add, update and query it.
[16:06:22] <liquid-silence> Well if an item only belongs to one parent item
[16:06:24] <liquid-silence> what is the problem
[16:06:55] <Nodex> how you've described it is not how you described it the first time
[16:07:03] <Nodex> $set: { "reviews.$.users.$.approved":true}}) <----- not possible
[16:07:28] <liquid-silence> I know that's not possible, so I suggested that I add an object id to the "users" array items and update from there
[16:07:40] <liquid-silence> but that is seemingly not possible either
[16:07:53] <Nodex> without seeing your data it's really difficult to know what you mean
[16:08:06] <Nodex> pastebin a typical document
[16:09:57] <liquid-silence> http://pastebin.com/idRggui8
[16:09:59] <liquid-silence> there you go
[16:10:23] <liquid-silence> all I need to do is update "1" user item that is nested underneath the reviews
[16:11:39] <Nodex> yes you can match that
[16:11:45] <Nodex> reviews.users.$.approved
[16:12:05] <Nodex> by depth of nesting I think Derick meant the number of "$" that you had
[16:12:11] <Derick> yes
[16:12:39] <liquid-silence> if I do this
[16:12:40] <liquid-silence> s.update({"reviews.users._id": "5283a197f9047540206bc534"}, { $set: { "reviews.users.$.approved":true}})
[16:12:51] <Nodex> you will have to add a secondary match into the find() part if you want to target a single document
[16:13:05] <liquid-silence> assets
[16:13:07] <liquid-silence> err
[16:13:12] <liquid-silence> can't append to array using string field name: users
[16:14:00] <liquid-silence> so Nodex I am not understanding why this will not work
[16:14:20] <liquid-silence> I think its not finding the correct user's element
[16:14:25] <Nodex> you need to match the upper _id on it
[16:14:40] <Nodex> i/e reviews._id also
[16:15:26] <liquid-silence> .update({"reviews.users._id": "5283a197f9047540206bc534", "reviews._id": "5283527df9047540206bc531"}, { $set: { "reviews.users.$.approved":true}})
[16:15:30] <Nodex> else you might want to change the schema slightly to something like this.... reviews : { "5283a197f9047540206bc534" : {users:[{......}]}}
[16:15:55] <Nodex> you will have to try it and see, I don't see why it won't work
[16:16:06] <liquid-silence> same error as before can't append to array using string field name: users
[16:16:21] <Nodex> then you'll need to adapt your schema to the one I showed you
[16:16:34] <Nodex> you will save a few bytes too ;)
[16:16:37] <liquid-silence> that is stupid
[16:16:52] <Nodex> it's deep nesting
[16:17:06] <Nodex> if you don't like it then change the schema or fix it in your app
[16:17:08] <liquid-silence> yeah which should work in a document orientated database
[16:17:36] <liquid-silence> at this point in time NoSQL can kiss my ass
[16:17:51] <Nodex> haha
[16:18:07] <liquid-silence> There is a reason why people still use relational databases, and that is to not have issues like this
[16:18:25] <liquid-silence> I do not see why this query will not work
[16:19:19] <Nodex> mongodb is not a silver bullet, it is not an RDBMS and it does not claim to be a drop in replacement for an RDBMS
[16:19:38] <Nodex> choose the right toll for the job, perhaps mongodb is not suited to your app
[16:19:44] <Nodex> tool*
[16:19:46] <liquid-silence> I know this, but if its trying to be a document orientated database, then be good at it
[16:19:55] <liquid-silence> don't have limitations like this
[16:20:26] <Nodex> it's not a limitation, change the schema = fixed
[16:20:40] <liquid-silence> Apparently PostgreSQL's hstore is faster than mongodb
[16:20:47] <Nodex> go use it then ;)
[16:20:55] <liquid-silence> Nodex I am looking at changing the schema bro
[16:21:14] <Nodex> if every review has an _id it makes sense to use that as the key
[16:21:22] <Nodex> it will give you one more depth of access
[16:22:20] <liquid-silence> so reviews : { "sdasdsadA", created_by: "testt", {users: {
[16:22:38] <Nodex> no
[16:23:00] <Nodex> reviews : {"123456" : {created_by : ..., users: [{},{},{}]}}
[16:23:19] <Nodex> then it would be $set : {reviews.123456 ......
[16:23:26] <Derick> Nodex: is "123456" a key or a value?
[16:23:32] <Nodex> key
[16:23:38] <Derick> don't use values as keys!
[16:23:39] <Nodex> obviously don't use a number
[16:23:44] <Nodex> it's not a value
[16:23:48] <Nodex> it's an id
[16:23:49] <Derick> it looks like one
[16:23:58] <Derick> an ID is a value :P
[16:24:15] <Nodex> in this case it's needed
[16:25:20] <Nodex> http://lists.w3.org/Archives/Public/ietf-http-wg/2013OctDec/0625.html
[16:25:30] <Nodex> looks like http 2.0 wants to be all https
[16:25:47] <Derick> yes
[16:25:55] <liquid-silence> this is bull
[16:26:13] <liquid-silence> reviews" : { "5283527df9047540206bc531" : {
[16:26:44] <liquid-silence> does not seem to work
[16:26:47] <liquid-silence> gah
[16:27:55] <Nodex> $set : {"reviews.5283527df9047540206bc531.users.$.rejected" : true} .... with a query of {"reviews.5283527df9047540206bc531.users._id":"SOME_ID"}
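
Spelled out, the id-as-key layout Nodex is suggesting would make the update look roughly like this (collection name assumed, ids taken from the conversation; Derick pushes back on the idea just below):

    db.assets.update(
        { "reviews.5283527df9047540206bc531.users._id": "5283a197f9047540206bc534" },
        { $set: { "reviews.5283527df9047540206bc531.users.$.approved": true } }
    )
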
[16:28:01] <Derick> Nodex: using an ID as a key as liquid-silence is doing now is *not* a good idea
[16:28:08] <Derick> you can not query on that specific element now
[16:28:23] <Nodex> you can with that query ^^
[16:29:15] <Nodex> with reviews being an array you need to use the positional operator to get inside it which stops you getting into it later when you need it for the users array
[16:29:41] <Nodex> if you change the review to an id for a key you can break into it without giving up your postional match for later
[16:30:12] <Nodex> I can't think of any reason not to use an id for a key (as long as it's not a numerical one) aside from the obvious space saving
[16:30:51] <Nodex> please tell me if there I'm overlooking one though
[16:30:55] <Nodex> -there
[16:31:34] <liquid-silence> I'm missing and overlooking a lot here
[16:32:39] <Nodex> are you trying to keep everything in highly nested documents for a reason or because that's what you read somewhere?
[16:32:56] <Nodex> because it doesn't -always- make sense to nest X levels deep
[16:34:02] <liquid-silence> Well to be honest I dont want 30 collections
[16:34:19] <Nodex> brb
[16:34:45] <liquid-silence> The reviews.users mean nothing without the top level
[16:34:47] <Derick> liquid-silence: but you can flatten it out and denormalize it a little perhaps
[16:34:51] <liquid-silence> so it makes sense to nest it
[16:38:20] <liquid-silence> Derick I don't see why it should not be nested
[16:39:14] <liquid-silence> Actually I think I might move back to SQL
[16:39:19] <liquid-silence> 2 weeks work, gone
[16:39:20] <liquid-silence> :D
[16:39:36] <Nodex> lol
[16:39:58] <liquid-silence> Nodex its not really funny
[16:40:01] <liquid-silence> :P
[16:41:03] <Derick> liquid-silence: you're too afraid to normalize I think. Instead of arrays and arrays, split it out into multiple documents and duplicate some data.
[16:41:20] <liquid-silence> yeah I need to learn that
[16:41:24] <liquid-silence> let me try
[16:41:50] <liquid-silence> I presume you are talking named documents?
[16:42:14] <Derick> what's a named document?
[16:42:32] <liquid-silence> reviews : { "something": { }, "anotherthing": {} }
[16:42:40] <liquid-silence> http://stackoverflow.com/questions/13638122/mongodb-embedded-vs-array-sub-document-performance
[16:42:49] <liquid-silence> read the first comment to this person's question
[16:43:12] <Nodex> [16:41:16] <liquid-silence> reviews : { "something": { }, "anotherthing": {} } <--- exactly what I said
[16:43:18] <liquid-silence> I know
[16:43:33] <ashak> can anyone point me in the direction of documentation that might allow me to work out how much RAM a machine running mongodb might require to do a full recovery? I have a mongo setup that's been working fine for ages, I recently rebuilt one of the nodes in the replicaset but eventually the mongodb process gets killed by the OOM killer.
[16:43:34] <Derick> yes, that first comment is good
[16:44:00] <Nodex> But the comment doesn't say why
[16:45:11] <liquid-silence> So using documents is bad
[16:45:24] <liquid-silence> Arrays are better
[16:45:43] <Nodex> If someone can answer why objects (documents) are bad then maybe
[16:45:59] <Nodex> there is no definitive answer on it and sometimes they're unavoidable - same as joins
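
For comparison, a hedged sketch of the flattening Derick suggested earlier: give each review its own document (names invented), so only one array level is left and a single positional $ is enough.

    db.reviews.insert({
        _id: "5283527df9047540206bc531",
        asset_id: "ASSET1",    // back-reference to the parent document
        users: [ { _id: "5283a197f9047540206bc534", user_id: "528070a20000005c6c000002", approved: false } ]
    })
    db.reviews.update(
        { _id: "5283527df9047540206bc531", "users.user_id": "528070a20000005c6c000002" },
        { $set: { "users.$.approved": true } }
    )
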
[16:46:27] <liquid-silence> yes but, changing my schema now will be a pain in the ass
[16:46:41] <Nodex> does the update only ever affect one document?
[16:47:01] <liquid-silence> the update will only ever affect that user object
[16:47:07] <Nodex> in that one document?
[16:47:19] <liquid-silence> its a documentent in the array yes
[16:47:29] <liquid-silence> s/documentent/document
[16:47:31] <Nodex> the parent document I am talking about
[16:48:18] <Nodex> if you truly believe that objects are worse and cba to change your schema then pull the doc in your app and modify it there
[16:48:46] <liquid-silence> the parent is a document yes
[16:49:07] <Nodex> ok so your update ONLY affects something in that ONE document, not multiple documents
[16:49:13] <liquid-silence> yes
[16:49:53] <Nodex> ok so my answer stands, change to an object structure (unless you feel you don't want to), or adapt the schema, or update it in your app
[16:50:00] <liquid-silence> ok
[16:50:00] <Nodex> how often will this update take place?
[16:50:15] <liquid-silence> quite often
[16:50:36] <Nodex> I'm quite willing to be told that objects are a bad idea, I just never have been
[16:50:58] <Nodex> I've done quite a lot of research on it over the years and people just say "Don't use them" and give no reason
[16:51:34] <Nodex> {} & [] are both primitives, they probably share the same type
[16:53:13] <liquid-silence> can you do a performance test?
[16:53:35] <Nodex> I use the object method myself in places. Granted, it's not in venues where performance is an issue
[16:54:33] <Nodex> you can time it easily enough. Do an $elemMatch on foo : [{_id:'abc'}] and a straight match on "foo.abc" -> {"foo":{"abc":{....}}}
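
The two shapes Nodex is proposing to time against each other, written out (hypothetical 'test' collection and data):

    // array of subdocuments:  { foo: [ { _id: 'abc', v: 1 } ] }
    db.test.find({ foo: { $elemMatch: { _id: 'abc' } } })
    // object keyed by id:     { foo: { abc: { v: 1 } } }
    db.test.find({ "foo.abc": { $exists: true } })
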
[16:59:33] <kfb4> hey just asking this again, but it should be a simple question. i'm looking at the opcounters measure in MMS and I see a lot of update and "command" operations. The updates are expected, but what does "command" mean in this context? It seems to always match the update number, so is it really separate from the updates?
[17:09:31] <Nodex> http://frontenddevreactions.tumblr.com/
[17:09:32] <Nodex> lmao
[17:15:33] <liquid-silence> Nodex :) had a good chuckle
[17:20:16] <kali> me too, but i'm kinda drunk
[17:21:08] <Nodex> haha
[17:21:35] <Nodex> First Vodka of the say for me
[17:21:40] <Nodex> err day * lol
[17:22:01] <kali> :)
[17:22:14] <Nodex> that makes me sound like an alcoholic, I mean the evening haha
[17:22:16] <kali> just champagne here
[17:22:25] <Nodex> posh :P
[17:22:39] <Nodex> celebrating?
[17:23:12] <kali> yeah, cleaning lady leaving, another one arriving :)
[17:23:39] <Nodex> that calls for Champagne ?
[17:23:44] <kali> why not ?
[17:23:57] <kali> who needs a reason anyway
[17:24:17] <Nodex> haha ++
[17:24:20] <kali> let's have it while it's still affordable, it may not last
[17:24:57] <Nodex> i'm coining the phrase "LOL++" - remember you heard it here first
[17:25:18] <Nodex> Champagne is expensive in the UK, in fact all alcohol is
[17:25:46] <kali> except beer
[17:26:05] <Nodex> pints of beer are expensive
[17:26:28] <Nodex> average about 5-6 euro per pint I would say acorss the whole of the UK
[17:26:32] <Nodex> across *
[17:26:45] <kali> it's more like 7 or 8 in France
[17:26:51] <kali> or at least in Paris
[17:27:06] <Nodex> yer, parts of london it's 10-12 euro
[17:27:08] <kali> it's one of the rare things that are less expensive
[17:27:38] <Nodex> I'm very confused by Grey Goose Vodka. it's very expensive but tastes the same as normal vodka
[17:27:42] <kali> time to go home anyway, i certainly don't want to commit code tonight.
[17:27:48] <Nodex> and I don't class France as a vodka producing country
[17:28:54] <Nodex> Drunk code committing haha
[17:28:57] <Nodex> sounds like fun :P
[17:30:29] <Derick> Nodex: pints are £3-£5
[17:32:39] <Nodex> not last time I went to soho
[17:33:04] <Nodex> even in Swindon they're £5
[17:33:25] <Nodex> £3.50 outside manchester, in the city they're ~£5
[17:33:48] <Nodex> I don't drink beer so it's all good for me
[17:34:04] <Nodex> and occasional Stella every now and again maybe
[17:34:08] <Nodex> an*}
[17:34:10] <Nodex> an*
[17:34:19] <Derick> Nodex: only tourists go to soho
[17:36:19] <Nodex> I understand if you live in the city you know where to drink for reasonable money
[17:36:33] <Nodex> it's been 20 years since I lived in London
[17:37:24] <Derick> and you don't only want cheap... you want good too
[17:37:51] <Nodex> well yeh of course
[17:38:24] <Nodex> when I was of a legal drinking age, beer was about £1.50 per pint
[17:38:37] <Nodex> a pack of 20 smokes was ~£1.20
[17:38:51] <Derick> yhehe
[17:38:52] <Nodex> now the same pack of smokes is £8.45
[17:39:08] <cheeser> smoking--
[17:39:11] <Nodex> 800% raise in 18 years
[17:39:27] <Nodex> I gave up 3 years ago thank f***
[17:39:49] <Nodex> this country is mad for prices on "Non essentials" haha
[17:50:15] <iamacomp> greetings, i have a little question about replication bandwidth
[17:51:26] <Derick> shoot
[17:51:27] <iamacomp> is the replication bandwidth the sum of the bandwidth to master, or are changes grouped?
[17:51:40] <Derick> it is likely *more* bandwidth
[17:51:53] <iamacomp> and, is it possible to comopress from master to slave in certain circumstances
[17:52:07] <Derick> no compression is built into the protocol
[17:52:09] <iamacomp> como-press ;-)
[17:52:15] <iamacomp> ok
[17:52:19] <iamacomp> thanks a lot
[17:55:29] <liquid-silence> Nodex I settled to fetch the document, manipulate the review and save
[17:56:47] <Nodex> liquid-silence : if scale becomes a problem I advise to re-look at the problem another time
[17:56:56] <liquid-silence> yeah
[17:56:59] <liquid-silence> not phased now
[17:57:06] <liquid-silence> deadline looming
[17:57:15] <Nodex> deadlines :(
[18:07:52] <liquid-silence> Nodex, and they decided to use mongodb
[18:07:55] <liquid-silence> not my choice
[18:11:42] <liquid-silence> Nodex mind a pm not related to mongodb
[18:11:45] <liquid-silence> or code in general
[18:18:22] <Tomasso> how about text indexes, are they good enough for a production environment? should I consider other text search techniques, such as implementing my own Levenshtein distance algorithm?
[18:19:28] <Nodex> liquid-silence : I block all pm's by default sorry
[18:19:46] <Nodex> + not sure how to turn it off tbh
[18:19:51] <liquid-silence> lol
[18:49:48] <Tomasso> how can you specify a field name whose parent varies? for example 'person.*variablefield*.model'
[18:50:14] <Tomasso> for a text search index definition.. for example
[18:51:57] <Tomasso> the variable field can be a category, like electronics.. vehicles.. and model is constant for all of them
[18:54:35] <kali> Tomasso: don't do that
[18:54:46] <kali> never use a variable as a key name in a mongodb document
[18:55:01] <kali> it makes everything difficult
[18:55:34] <Tomasso> kali: I did it, and found it out.. maybe i will need to refactor my documents by a script.. :S
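
A hedged sketch of the restructuring kali is recommending, using Tomasso's example: make the category a value instead of a key, so the text index path is fixed. (The 'items' collection and "X100" value are invented.)

    // before:  { person: { electronics: { model: "X100" } } }
    // after:   { person: { category: "electronics", model: "X100" } }
    db.items.ensureIndex({ "person.model": "text" })   // 2.4 text index on a fixed path
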
[19:17:54] <postitnote> Is there a rough formula for estimating, from a JSON document's length as a string, its size when stored as a document in mongodb? I'm storing 15m documents and the json is on average about 3000 bytes.
[19:30:43] <Tomasso> luckily i did a dump today, i was playing with text indexes, and figured out i lost all my information
[19:36:25] <Tomasso> how could that happen, is that possible?
[21:26:45] <postitnote> I get "error: Couldn't fork %pre: Cannot allocate memory" when installing mongo server on CentOS 5.6. Any suggestions?
[21:26:55] <postitnote> This is an ec2 micro instance
[21:47:45] <maxamillion> has anyone ever seen the index count on the local db be different between secondaries of a replica set but the rs.status() still shows everything in good standing?
[22:05:25] <joannac> Different numbers of indexes?
[22:05:56] <maxamillion> joannac: yes
[22:06:01] <joannac> Sure, if you're in the middle of a rolling index build?
[22:06:23] <maxamillion> joannac: it's been like this for a few days
[22:06:35] <joannac> which indexes are you missing?
[22:06:50] <maxamillion> joannac: primary and one of the secondaries both show 3 indexes for the local db, the other secondary only shows 2 indexes
[22:09:16] <maxamillion> joannac: I don't know how I'd check that
[22:09:31] <maxamillion> joannac: I'm just showing a different count in db.stats()
[22:10:24] <joannac> use local; db.system.indexes.find()
[22:11:19] <maxamillion> joannac: error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
[22:11:47] <joannac> rs.slaveOk(); then try again
[22:12:34] <maxamillion> joannac: something seems goobered ... http://www.fpaste.org/53885/84380663/
[22:13:18] <joannac> You're not in local on your secondary
[22:13:39] <maxamillion> joannac: ah, right ... sorry
[22:14:07] <maxamillion> joannac: http://www.fpaste.org/53886/13843807/
[22:15:59] <joannac> That's not that weird. My guess is the other secondary was primary at one stage
[22:16:27] <joannac> That's why it has that index, because when it was primary it needed to know who its secondaries were, so it has that extra "local.slaves" collection
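
Put together, the check joannac walked through, run against each member; the extra index a former primary carries belongs to its local.slaves collection:

    use local
    rs.slaveOk()               // allow reads on a secondary
    db.system.indexes.find()   // compare the index lists between members
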
[22:17:26] <maxamillion> joannac: alright, is there a way to make the two secondaries re-sync?
[22:17:46] <joannac> What do you mean?
[22:18:47] <maxamillion> joannac: actually, nvm ... I'm just losing my mind
[22:18:53] <maxamillion> joannac: awesome, thanks for the info!
[22:23:17] <maxamillion> joannac: how do I turn off rs.slaveOK()?
[22:37:57] <ujan> Anyone using MMS? It appears to be having problems at the moment.
[22:45:13] <joannac> Working on it
[23:11:07] <iamacomp> greetings
[23:11:16] <iamacomp> i'm having a problem which I solved a month ago
[23:11:24] <iamacomp> but somehow it is cropping up yet again
[23:11:37] <iamacomp> maybe there is something obvious which i'm missing
[23:11:57] <iamacomp> this problem pertains to compiling with mongoclient on an ubuntu 12.04 x64 machine
[23:12:09] <iamacomp> the problem is this:
[23:12:49] <iamacomp> (when compiling after apt-get install mongodb)
[23:12:59] <iamacomp> /usr/include/boost/thread/tss.hpp: In instantiation of ‘void boost::thread_specific_ptr<T>::reset(T*) [with T = int]’:
[23:12:59] <iamacomp> /usr/include/mongo/client/../util/goodies.h:184:13: required from ‘void mongo::ThreadLocalValue<T>::set(const T&) [with T = int]’
[23:12:59] <iamacomp> /usr/include/mongo/client/../util/net/../../db/../util/../db/../util/concurrency/rwlock.h:357:34: required from here
[23:13:01] <iamacomp> /usr/include/boost/thread/tss.hpp:105:17: error: use of deleted function ‘boost::shared_ptr<boost::detail::tss_cleanup_function>::shared_ptr(const boost::shared_ptr<boost::detail::tss_cleanup_function>&)’
[23:13:03] <iamacomp> In file included from /usr/include/mongo/client/../pch.h:92:0,
[23:13:08] <iamacomp> (oops that's not so nice to look at)
[23:13:17] <iamacomp> and I also compiled from the github
[23:13:34] <iamacomp> using scons --full --use-system-all install-mongoclient
[23:13:48] <iamacomp> and then i get the same error, but in a smaller form :-)
[23:14:27] <iamacomp> /usr/include/boost/thread/pthread/thread_data.hpp: In constructor ‘boost::detail::tss_data_node::tss_data_node(boost::shared_ptr<boost::detail::tss_cleanup_function>, void*)’:
[23:14:27] <iamacomp> /usr/include/boost/thread/pthread/thread_data.hpp:36:41: error: use of deleted function ‘boost::shared_ptr<boost::detail::tss_cleanup_function>::shared_ptr(const boost::shared_ptr<boost::detail::tss_cleanup_function>&)’
[23:14:27] <iamacomp> In file included from /usr/include/boost/shared_ptr.hpp:17:0,
[23:14:29] <iamacomp> from /usr/local/include/mongo/pch.h:49,
[23:14:31] <iamacomp> from /usr/local/include/mongo/client/dbclient.h:30,
[23:14:33] <iamacomp> ..
[23:14:35] <iamacomp> the mystery to me is
[23:14:42] <iamacomp> that I have another ubuntu 12.04 x64 box
[23:14:47] <iamacomp> which i set up last month
[23:14:54] <iamacomp> using the apt-get install mong
[23:15:10] <iamacomp> and things compile just fine there
[23:15:24] <iamacomp> .....
[23:16:31] <iamacomp> has anyone seen these boost::shared_ptr errors
[23:17:03] <iamacomp> i've googled but I end up at dead ends
[23:17:40] <iamacomp> when I had this problem a month ago it was on osx, and on osx i did eventually compile with scons
[23:17:45] <iamacomp> to get the mongo client