PMXBOT Log file Viewer

#mongodb logs for Friday the 20th of June, 2014

[00:07:58] <nadav_> hi folks! Was hoping someone could help me with a silly configuration issue... I have a Mongo 2.4 instance configured with user/pass that has the following roles: "readWriteAnyDatabase", "userAdminAnyDatabase", "dbAdminAnyDatabase", "readWrite" . I logged in with that user successfully, but db.currentOp() always returns "unauthorized". Any idea?
[00:22:18] <joannac> you need clusterAdmin for that
[00:22:24] <joannac> http://docs.mongodb.org/v2.4/reference/user-privileges/#clusterAdmin
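For reference, a rough 2.4-style shell sketch of what joannac suggests (user name and password are placeholders; the *AnyDatabase and clusterAdmin roles live on the admin database):

    // MongoDB 2.4 syntax; run against the admin database
    use admin
    db.addUser({
        user: "nadav",
        pwd: "secret",
        roles: ["clusterAdmin", "readWriteAnyDatabase",
                "userAdminAnyDatabase", "dbAdminAnyDatabase"]
    })
    // with clusterAdmin in place, db.currentOp() no longer returns "unauthorized"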
[00:22:48] <joannac> ...damn users who don't stay long enough to get their questions answered
[00:33:26] <jimpop> heh. ;-) To be fair, I asked a question weeks ago and it went unanswered. :-)
[00:38:44] <joannac> I have join/part turned off so I'm mostly annoyed i went to the effort of looking it up and it was wasted :(
[00:39:45] <joannac> jimpop: what was the question?
[00:40:41] <jimpop> it was a question about offline replication
[00:40:56] <jimpop> but i've determined that it's not feasible
[00:41:33] <joannac> ah ok
[00:41:40] <jimpop> i do a huge update each week, so rather than a bunch of db updates I create a new db and then rename it
[00:42:01] <jimpop> that would cause all the secondaries to be so far out of sync they would do copies
[00:42:13] <jimpop> so replication would be unnecessary
[00:43:16] <jimpop> what I do is have all the "secondaries" (actual independent masters) perform a collection clone each week
[00:43:50] <jimpop> it works, but it's not as nice as replication. :-)
[06:56:17] <JAVA> by default in mongodb is there any unique id to find duplicate store data ?
[07:06:08] <joannac> JAVA: ?
[07:08:44] <joannac> what does "duplicate store data" mean?
[07:10:25] <JAVA> i'm comparing couchDB and mongoDB and i read that mongo doesn't have any internal unique id reference : http://stackoverflow.com/questions/12437790 second answer by Mark
[07:12:27] <joannac> um, that's about sparse unique references
[07:12:47] <joannac> s/references/indexes/
[07:13:52] <JAVA> so maybe this problem could make the DB volume bigger
[07:14:25] <joannac> if you require that specific use case
[07:25:55] <JAVA> An empty database takes up 192MB in mongoDB !! is this true ?
[07:25:59] <noose> anyone using mongo river with ES?
[07:26:48] <noose> https://github.com/richardwilly98/elasticsearch-river-mongodb/wiki
[07:27:47] <kees_> JAVA, with a bit of preallocating it doesn't sound too off
[07:32:39] <rspijker> JAVA: if you have a single empty DB, it will be a lot more than that, even, if you have journaling turned on
[07:33:56] <JAVA> sounds bad !
[07:34:14] <JAVA> but why doesn't this happen in couchDB ?!
[07:36:30] <rspijker> it’s a design choice
[07:36:39] <rspijker> the overhead is relatively high on empty DBs
[07:36:49] <rspijker> but who cares, who has a DB system to have empty dbs?
[07:37:10] <rspijker> it’s designed for large DBs, for large DBs the overhead is relatively small
[07:37:23] <JAVA> so if i put some records in, this size can grow
[07:38:31] <rspijker> if you put records in, at some point it will grow, yes...
[07:38:50] <rspijker> you can’t store unlimited data in 192MB
[07:40:17] <JAVA> OK, i understand this is needed, so what is the minimum RAM requirement for working with mongo ?!
[07:42:01] <rspijker> well… the minimum is very low…
[07:42:10] <rspijker> it all depends on your data and how you use it, what is actually wise
[07:42:59] <rspijker> if you are storing billions and continuously using millions of documents, your RAM usage will be different than when you are still storing billions, but only using 1000 at any given time
[07:43:58] <JAVA> i mean just starting the mongo server, when it's ready to use, without any huge records
[07:44:36] <JAVA> i have to calculate which server is enough to work with
[07:46:32] <rspijker> I just spun up an empty mongod and it uses < 30MB
[07:53:32] <JAVA> so good
[10:16:16] <rdsoze> How do I add a key value pair to the last dictionary in an array of dicts ?
[13:05:34] <abhiram> anyone there?
[13:05:53] <JAVA> all are here :D
[13:09:21] <Nodex> even some who are not here are here in spirit
[13:10:24] <abhiram> https://www.irccloud.com/pastebin/oNAcgshA
[13:11:10] <abhiram> children is an array of objects which also contain another array. How can i access the subjects array using the model?
[13:11:32] <hocza> how may I add a new attribute to all documents? My ORM cannot change this attribute for old documents, since the new attribute does not exist in previous documents.
[13:14:06] <Nodex> hocza: db.foo.update({},{$set:{foo:1}},{multi:true});
[13:15:09] <Nodex> abhiram : out of interest why are you bloating your document with countryName and country_id... countries are unique in name anyway
[13:15:50] <Nodex> and you can access what you need with dot notation ... "children.subjects"
[13:16:43] <abhiram> this was just a sample i am working on. will be removing unnecessary records however. my bad. sorry, children is an array, i pasted the wrong schema i guess
[13:17:33] <hocza> Nodex: ohh that easy?:) sorry for being that newbie :)
[13:17:46] <Nodex> :D
[13:18:02] <JAVA> does mongo lose data by sloppy storing ?! also mongo never calls fsync() ? are these true ?!
[13:18:07] <abhiram> https://www.irccloud.com/pastebin/hWEMizHm
[13:18:17] <cheeser> JAVA: um... no.
[13:18:27] <abhiram> please check the latest one.
[13:19:02] <JAVA> cheeser, was this problem there in mongo in the past ?
[13:19:15] <abhiram> children is an array of objects which contain subjects as array .
[13:19:16] <hocza> Nodex: One more: and If I want to delete an attribute completely?
[13:19:41] <Nodex> JAVA : I don't understand what that means
[13:19:48] <Nodex> hocza : $unset iirc
[13:19:58] <hocza> ooookay:)
[13:20:00] <cheeser> JAVA: in the past? i dunno. today? no.
[13:20:05] <Nodex> http://docs.mongodb.org/manual/reference/operator/update/unset/
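Putting both answers together, a minimal shell sketch (the "foo" collection and field names are placeholders):

    // add a field to every document: empty query + multi
    db.foo.update({}, {$set: {newAttr: 1}}, {multi: true})

    // remove a field from every document later on
    db.foo.update({}, {$unset: {newAttr: ""}}, {multi: true})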
[13:20:26] <abhiram> nodex can you please look on the schema i pasted?
[13:21:13] <Nodex> abhiram : you will need to match the document you wish or use number based selection
[13:21:17] <hocza> Nodex: WriteResult({ "nMatched" : 314497, "nUpserted" : 0, "nModified" : 314496 }) , thanks! :)
[13:21:27] <Nodex> children.0.subject <--- first
[13:21:43] <Nodex> else use the projection operator to pluck from the array the child you wish
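As a sketch of what Nodex means (collection and field names are illustrative, assuming documents shaped like {children: [{subjects: [...]}, ...]}):

    // query into the array with dot notation
    db.people.find({"children.subjects": "math"})

    // or address one element by index
    db.people.find({"children.0.subjects": "math"})

    // the positional projection operator plucks only the matching child
    db.people.find({"children.subjects": "math"}, {"children.$": 1})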
[13:32:21] <JAVA> i'm confused !! comparisons don't tell me which is better !! couchDB or mongoDB !? can you help me with which is better on what points please ?
[13:39:30] <kali> you're kidding, right ?
[13:59:17] <Nodex> haha
[13:59:24] <Nodex> 42 is the best period
[14:05:31] <JAVA> kali, why did you think i was kidding you ?! i'm really confused !
[14:07:58] <Nodex> they're two different tools
[14:11:11] <JAVA> different tools !?! both are nosql, document oriented databases !
[14:12:12] <Nodex> oki doki
[14:14:17] <JAVA> ! just this ?
[14:14:36] <joshua> You sure like to use ! and ?
[14:14:54] <Nodex> !? ++
[14:18:42] <JAVA> if you can help me , please give me your help! if not please dont spam !
[14:19:20] <Nodex> if you ask a relevant question then perhaps you will get help. However, if you have come to troll then you will get trolled - simples ;)
[14:20:03] <cheeser> https://www.google.com/search?q=couchdb+vs+mongodb
[14:20:17] <cheeser> 104k results to answer your question
[14:22:10] <JAVA> i'm not a newbie on IRC ! i searched this for more than 4 hours and then i thought i could find someone to help me here !
[14:23:49] <JAVA> Nodex, how do you know that couchDB and mongoDB are different tools !!?
[14:25:25] <JAVA> if it is hard for you to type characters and talk about it, it's better to take a coffee and relax before answering anyone ;)
[14:26:50] <JAVA> anyway thanks for your advice
[14:26:52] <joshua> This channel is created for mongodb discussion, so you might not get a completely unbiased answer
[14:27:54] <JAVA> yeah maybe i should only ask about mongoDB here
[14:28:10] <JAVA> anyway thanks guys
[14:29:45] <Nodex> the answer is still 42. You have not asked any question that has an easy answer. We DO NOT KNOW what your project is or what you intend to do. They both have pros and cons - it's that simple
[15:22:01] <sachiman> hello
[15:23:02] <sachiman> anyone aware of apache httpd module for gridFS?
[15:34:11] <sachiman> gridfs anyone
[15:34:12] <sachiman> ?
[15:46:07] <kali> sachiman: there is a more or less experimental nginx module
[15:46:38] <sachiman> it is an old one.. not supported anymore
[15:46:55] <sachiman> + it uses old mongodb c++ driver
[15:50:04] <kali> what about the node.js one ? for once that's something node.js should be good at...
[15:50:44] <sachiman> kali.. cant add nodejs as of now..
[15:51:23] <sachiman> have developed a module in Apache httpd using mongo c client.. but not sure if it is efficient enough..
[15:51:33] <sachiman> so wanted to check if there is one available already
[15:52:27] <kali> https://bitbucket.org/onyxmaster/mod_gridfs
[15:54:08] <sachiman> kali.. again this one is old. does not work with latest mongodb
[15:54:26] <cheeser> patch it!
[15:54:31] <sachiman> :)
[15:55:18] <sachiman> okay..
[15:56:21] <sachiman> thanks
[16:21:04] <acalbaza> hey
[16:21:43] <acalbaza> can there be an issue if all the other replica-set members go offline BUT the primary is still available?
[16:21:55] <kali> acalbaza: yes. it will step down
[16:22:15] <acalbaza> kali: interesting... why would it cause a stepdown even if the primary is still active?
[16:23:50] <kali> because, not seeing its siblings, the primary cannot distinguish between a failure of them and a netsplit isolating it. so in order to avoid a split brain, it must step down
[16:24:17] <kali> acalbaza: a primary must see a majority of the cluster to stay primary. as long as it sees less than that, it steps down
[16:24:32] <kali> s/as long as/as soon as/
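To illustrate the majority rule (host names are placeholders): in a three-member set the majority is two, so a primary left on its own steps down.

    rs.initiate({
      _id: "rs0",
      members: [
        {_id: 0, host: "a.example.net:27017"},
        {_id: 1, host: "b.example.net:27017"},
        {_id: 2, host: "c.example.net:27017"}
      ]
    })
    // majority = 2 of 3; a member that can only see itself
    // cannot become (or stay) primary and will step down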
[16:25:08] <Nodex> kali : how do you deal with database locking at fotalia ... do you find it's a huge problem? and if so do you implement queues to get round it
[16:26:21] <kali> Nodex: i'm fotopedia, not fotolia :)
[16:26:29] <Nodex> sorry, my mistake
[16:26:33] <kali> Nodex: they use postgres
[16:26:39] <kali> as far as i know
[16:26:48] <Nodex> I meant in your company
[16:26:56] <acalbaza> thanks kali
[16:27:23] <kali> we've actually had exactly one issue of a "lock" problem, just a few weeks ago
[16:27:59] <kali> and we have async queues, but they're more for sending emails and generating thumbs than dealing with mongodb
[16:30:14] <Nodex> I have a very annoying problem... I have one process that writes/updates about 90k docs perhaps 10 times a day... when they're being written they're causing the database to lock for whatever reason, I have another process which is essentially a cron every second checking for notifications to send out - sms, email etc, obviously we can't have more than one "notifier" running at once else
[16:30:14] <Nodex> someone will get two notifications for the same notification so we "lock" (set the notifications cron to active=0)... the trouble comes that when the DB is locked it cannot then update the cron to set active =1 and it triggers madness round the system
[16:30:39] <Nodex> part of me wants to just move the notifications cron to another database and avoid this entirely but I really want it in the same namespace
[16:32:11] <kali> Nodex: we have a separate mongodb system for "volatile" stuff
[16:32:24] <kali> Nodex: queues, locks, notification management, etc
[16:32:26] <Nodex> I think it's best to just move it to another db tbh
[16:32:36] <Nodex> eradicate the problem all together
[16:33:15] <Nodex> I think I'm going to migrate to tokumx to allow document level locking, it's more in line with a new product we're working on
[16:34:04] <kali> it's also good when you need a resync... because the volatile system is typically a small database that performs lots of writes, while the main system is bigger but sees fewer operations
[16:34:37] <Nodex> for most apps (website loading for example) the locks are not seen as the pages take ages to render anyway, but I stupidly went and built very fast loading/rendering pages and you can physically notice when the page load goes from 300ms to 1.5s
[16:35:39] <Nodex> I have even been weighing up postgres to see if it's a better fit but I can't give up the document flexibility of mongodb
[16:36:56] <kali> Nodex: well, splitting the system was a huge improvement for us, for what it's worth
[16:37:22] <kali> Nodex: we actually have 5 mongodb logically independent systems
[16:38:03] <Nodex> I have been thinking along similar lines... I just didn't want to fragment too much
[16:40:16] <Nodex> kali: did you watch "Under the dome" ?
[16:42:05] <kali> Nodex: that said, the lock issue i was having was relatively interesting. i was updating a read_count property on one specific document instance (and doing so quite often). the document had an unusually huge subarray of docs (10MB) with an index on some subfield. there was also an index on the read_count
[16:42:25] <kali> Nodex: when mongo detects it has to update an index, it actually updates all of them
[16:42:54] <kali> Nodex: and updating the subfield one was actually taking about 1sec... during which the lock was held.
[16:43:03] <kali> Nodex: we went down on that one :)
[16:43:24] <kali> Nodex: i've seen "under the dome" and found it... average
[16:44:03] <Nodex> @mongo problem - :(
[16:44:10] <Nodex> do you like "Falling skies" ?
[16:45:03] <Kaiju> Falling skies is pretty bad. I'm tired of all the sci fi reality series. I much prefer the exploration and political style ones.
[16:45:19] <Nodex> have you been watching the new "24" ?
[16:45:33] <Nodex> I think I saw Derick in it
[16:45:35] <Nodex> :P
[16:45:45] <kali> i've two or three 24
[16:45:55] <kali> i keep them for a binge watching session
[16:46:03] <kali> +seen
[16:46:06] <Nodex> I found a great little app for TV series
[16:46:12] <shoshy> hey, i'm trying to update a document, i have success doing so, i use db.col.update({_id: ObjectId('...')},{$set: {blurb: { a: 1, b:2}} }) . The document pre-update looks like: {_id: .., blurb:{c: 3, d:4}} and post-update: {_id:..., blurb:{a:1,b:2}} instead of {_id:..., blurb:{a:1,b:2,c:3,d:4}} what am i missing?
[16:46:15] <kali> i've not seen falling skies
[16:46:45] <Nodex> shoshy : on the terminal?
[16:46:58] <kali> shoshy: $set: { "blurb.a" :1, "blurb.b": 2}
[16:47:10] <kali> shoshy: you're updating the full blurb with your syntax
[16:47:21] <shoshy> on robomongo and also via the node.js native driver
[16:47:35] <Nodex> yes, this (in my opinion) needs fixing... it is not concurrent with other operations in mongodb
[16:47:52] <shoshy> kali, Nodex: i see.. because i use code, i actuall pass an object.... so the object adds fields dynamically to be updated
[16:48:03] <acalbaza> related replicat-set question... if i have multiple data centers on different networks should i set up even #s replicas in each to protect against a netsplit?
[16:48:04] <shoshy> *actually
[16:48:22] <kali> acalbaza: no !
[16:48:33] <kali> acalbaza: never use an even number of replica
[16:48:42] <Nodex> shoshy : you might need to join your code into a string
[16:48:46] <Nodex> implode*
[16:48:50] <kali> acalbaza: you'll get two data centers full of secondaries
[16:50:07] <acalbaza> so, an arrangement of something like 3 in dc1 and 5 in dc2...
[16:50:17] <shoshy> thank you very much Nodex, kali
[16:50:20] <shoshy> solved it!
[16:50:31] <acalbaza> but not like 3 in dc1 and 3 in dc2
[16:50:32] <tscanausa> you could put an arbiter in a 3rd
[16:50:35] <shoshy> obj["blurb.c"] ... , obj["blurb.d"] = ...
[16:50:40] <shoshy> that was that :)
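A rough Node-style sketch of that fix (field names are illustrative): build the $set document with dotted keys, as kali suggested, so the existing sub-fields survive.

    // assuming the 2014-era native driver and a doc like {_id: ..., blurb: {c: 3, d: 4}}
    var changes = {a: 1, b: 2};            // fields to merge into blurb
    var update = {$set: {}};
    Object.keys(changes).forEach(function (k) {
      update.$set["blurb." + k] = changes[k];   // e.g. "blurb.a": 1
    });
    collection.update({_id: id}, update, function (err, result) {
      // blurb now holds a, b, c and d instead of being replaced wholesale
    });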
[16:50:47] <acalbaza> tscanausa: true
[16:50:52] <kali> acalbaza: at least with 3 and 5, in case of a split the 5 one will survive
[16:51:07] <Nodex> shoshy :)
[16:53:14] <kali> Nodex: fargo is great these days
[16:53:31] <Nodex> kali: I hae not seen ti
[16:53:33] <Nodex> it*
[16:53:43] <kali> Nodex: newsroom will not be back til september (i think that's good news for sorkin, less for us)
[16:53:45] <Nodex> +have
[16:53:50] <Nodex> :(
[16:53:59] <Nodex> 9 was a very weird number
[16:55:59] <Kaiju> Quick question about mapreduce. Since the code is shipped off to the cluster for processing, can you declare functions outside the map reduce and use them inside, or do they have to be declared inside the mapreduce?
[16:56:09] <Kaiju> node
[16:58:03] <kali> Kaiju: you can define functions in the "scope" parameter, but that's about it. they will be transferred as text code, nothing fancy, no closures or magic tricks.
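A hedged shell sketch of that scope mechanism (collection and field names are placeholders); the helper is shipped as plain code, as kali says, so no closures:

    var helpers = {
      bucket: function (n) { return Math.floor(n / 10) * 10; }   // plain function, no closure
    };
    db.events.mapReduce(
      function () { emit(bucket(this.value), 1); },          // map can call the scope helper
      function (key, values) { return Array.sum(values); },  // reduce
      { out: {inline: 1}, scope: helpers }
    )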
[16:58:46] <Kaiju> kali: Ginat inline script it is then, thanks!
[16:58:49] <Kaiju> giant
[17:16:11] <ejb> How can I get distances from a $near query? Something like annotation of the distance onto each doc
[17:42:10] <Skunkwaffle_> I'm going to be building aggregation queries dynamically, and I'm wondering if there's any difference in performance between {foo: 'bar'} vs {foo: {$in: ['bar']}} in a $match stage
[17:43:24] <Skunkwaffle_> in other words: is the $in operation any slower than just straight matching if there's only one value in the array
[17:46:51] <ep1032> hey all. I have a variable list of mongo object ids, and i need to update all of the items in that list
[17:46:56] <ep1032> is there a way to do it in a single bulk update?
[17:47:30] <djMax> is there a conventional way to do pub/sub with idempotent "consumption" in mongo?
[17:47:47] <LouisT> ep1032: http://docs.mongodb.org/manual/reference/method/db.collection.update/#multi-parameter
[17:49:03] <imaginationcoder> @ep1032, one way is to write a for loop in JS and execute each update separately
[17:50:34] <ep1032> LouisT: I'm not trying to write a query to do an update, I'm trying to do, essentially the mongo equivalent of sql's "update where in(list)", and need to know if that's possible, and if there's any limit to the number of items that can be in that list
[17:51:00] <ep1032> imaginationcoder: yeah, that's what I'm doing now, but I'd really rather not run thousands of individual updates if I could potentially just do it in one call
[17:51:03] <ep1032> just for the sake of the db
[17:52:07] <LouisT> ep1032: yea, it'd be something like: uptime({'$where': {'$in':<list>},'$set':...}})
[17:52:19] <LouisT> if i remember correctly
[17:52:40] <LouisT> er, update
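For what it's worth, the usual shell shape of "update where _id in (list)" is roughly this (collection and field names are placeholders):

    // ids is an array of ObjectIds gathered earlier
    db.logs.update(
      {_id: {$in: ids}},           // match every id in the list
      {$set: {exported: true}},    // whatever the update needs to do
      {multi: true}                // apply to all matches, not just the first
    )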
[17:52:43] <imaginationcoder> for the sake of db, I guess you can do bulk update. but, there could be a single statement way
[17:52:44] <imaginationcoder> http://docs.mongodb.org/manual/reference/method/db.collection.initializeOrderedBulkOp/#db.collection.initializeOrderedBulkOp
[17:53:31] <imaginationcoder> the problem with a single statement (IMO) is that the query size is proportional to the list size. if your list is small, you are fine
[17:54:13] <ep1032> I'm pulling logging records out of a collection, and moving them into the datacenter's consolidated logging area. So it could be 1 record, it could be hundreds of thousands. Though at larger sizes I could obviously chunk them
[17:54:34] <ep1032> I assume the multi option in update is a syntax for doing this bulk thing you listed?
[17:54:44] <cheeser> bulk is different
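A sketch of the 2.6 bulk API from the link above (collection and field names are placeholders); it sends the updates in batches rather than one call per document:

    var bulk = db.logs.initializeOrderedBulkOp();
    ids.forEach(function (id) {
      bulk.find({_id: id}).updateOne({$set: {exported: true}});
    });
    bulk.execute();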
[17:54:47] <djMax> if I have a sharded, replicated Mongo cluster, will findAndModify still be "atomic" across the whole cluster?
[17:55:14] <cheeser> only single document updates are atomic
[17:57:00] <djMax> yeah, that's fine. I want to do something like "find a document where this field is 0, and add 1" as a way to make sure only one person can "claim" a job
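A sketch of that claim pattern (collection and field names are placeholders); the per-document atomicity is what keeps two workers from claiming the same job:

    db.jobs.findAndModify({
      query: {claimed: 0},            // only an unclaimed job matches
      update: {$inc: {claimed: 1}},   // atomically mark it claimed
      new: true                       // return the document as claimed
    })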
[18:44:24] <ejb> Can anyone help with that distances/$near/annotation question?
[18:58:04] <Kaiju> ejb: http://www.mongovue.com/2010/11/03/yet-another-mongodb-map-reduce-tutorial/
[19:01:08] <prateekp> is there a way to rollback the changes in a mongodb database
[19:01:09] <prateekp> ?
[19:13:20] <ejb> Kaiju: what about runCommand geoNear
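The geoNear command does return a per-document distance; a hedged shell sketch (collection name, coordinates and options are placeholders, assuming a 2dsphere index):

    db.runCommand({
      geoNear: "places",
      near: {type: "Point", coordinates: [-73.97, 40.77]},
      spherical: true,
      num: 10
    })
    // each entry of results[] looks like {dis: <distance>, obj: <document>}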
[19:14:47] <Kaiju> ejb: Where are you running the command from, shell or external driver?
[19:15:02] <ejb> Kaiju: external (meteor)
[19:17:27] <Kaiju> ejb: I use node so I'm guessing its close to meteor. Are you doing something like this? db.collection.runCommand( "text", { $near:{geo:0,0} })
[19:17:49] <ejb> Kaiju: I'm about to... will report back
[19:18:32] <Kaiju> ejb: k, heading to lunch. Good luck. Shell differs from external drivers. I would research your driver's method.
[19:18:48] <ejb> Kaiju: ok, thanks for the tips