[00:07:58] <nadav_> hi folks! Was hoping someone could help me with a silly configuration issue... I have a Mongo 2.4 instance configured with user/pass that has the following roles: "readWriteAnyDatabase", "userAdminAnyDatabase", "dbAdminAnyDatabase", "readWrite" . I logged in with that user successfully, but db.currentOp() always returns "unauthorized". Any idea?
[00:22:18] <joannac> you need clusterAdmin for that
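joannac's answer in shell form: in 2.4 the user needs the clusterAdmin role added for db.currentOp(). A minimal sketch, assuming 2.4's document form of addUser; the user name and password are hypothetical:

```javascript
// run against the admin database (MongoDB 2.4 syntax)
var admin = db.getSiblingDB("admin");
admin.addUser({
  user: "siteAdmin",       // hypothetical
  pwd: "...",
  roles: [ "readWriteAnyDatabase", "userAdminAnyDatabase",
           "dbAdminAnyDatabase", "clusterAdmin" ]
});
admin.auth("siteAdmin", "...");
db.currentOp();   // should no longer return "unauthorized"
```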
[07:08:44] <joannac> what does "duplicate store data" mean?
[07:10:25] <JAVA> i'm comparing couchDB and mongoDB and I read that mongo doesn't have any internal unique id reference: http://stackoverflow.com/questions/12437790 (second answer, by Mark)
[07:12:27] <joannac> um, that's about sparse unique references
[07:36:39] <rspijker> the overhead is relatively high on empty DBs
[07:36:49] <rspijker> but who cares? who has a DB system just to keep empty dbs?
[07:37:10] <rspijker> it’s designed for large DBs, for large DBs the overhead is relatively small
[07:37:23] <JAVA> so if I put some records in, this size can grow?
[07:38:31] <rspijker> if you put records in, at some point it will grow, yes...
[07:38:50] <rspijker> you can’t store unlimited data in 192MB
[07:40:17] <JAVA> OK, I know it needs that. So what is the minimum RAM requirement for working with mongo?!
[07:42:01] <rspijker> well… the minimum is very low…
[07:42:10] <rspijker> what's actually wise depends entirely on your data and how you use it
[07:42:59] <rspijker> if you are storing billions and continuously using millions of documents, your RAM usage will be different than when you are still storing billions, but only using 1000 at any given time
[07:43:58] <JAVA> i mean just starting the mongo server, ready to use, without any huge records
[07:44:36] <JAVA> i need to calculate which server is enough to work with
[07:46:32] <rspijker> I just spun up an empty mongod and it uses < 30MB
[13:11:10] <abhiram> children is an array of objects which also contain another array. How can I access the subjects array using the model?
[13:11:32] <hocza> how may I add a new attribute to all documents? My ORM cannot change this attribute for old documents, since the new attribute does not exist in previous documents.
[13:15:09] <Nodex> abhiram : out of interest why are you bloating your document with countryName and country_id... countries are unique in name anyway
[13:15:50] <Nodex> and you can access what you need with dot notation ... "children.subjects"
[13:16:43] <abhiram> this was just a sample I am working on. Will be removing unnecessary fields, however. My bad, sorry; children is an array, I pasted the wrong schema I guess
[13:17:33] <hocza> Nodex: ohh that easy?:) sorry for being that newbie :)
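For hocza's question, a server-side multi-update can backfill the attribute on old documents, bypassing the ORM. A sketch with hypothetical collection, field, and default value:

```javascript
// add newAttr with a default to every document that lacks it
db.things.update(
  { newAttr: { $exists: false } },
  { $set: { newAttr: "default" } },
  { multi: true }     // 2.4 shell also accepts ...update(q, u, false, true)
);
```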
[13:21:43] <Nodex> else use the projection operator to pluck from the array the child you wish
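Nodex's two suggestions side by side, assuming a hypothetical collection of parent docs where each element of children has a subjects array:

```javascript
// dot notation: project the nested subjects arrays of every child
db.people.find({}, { "children.subjects": 1 });

// positional projection: pluck only the first child matching the query
db.people.find({ "children.name": "ann" }, { "children.$": 1 });
```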
[13:32:21] <JAVA> i'm confused!! the comparisons don't tell me which is better!! couchDB or mongoDB!? can you help me with which is better on which points, please?
[14:18:42] <JAVA> if you can help me, please give me your help! if not, please don't spam!
[14:19:20] <Nodex> if you ask a relevant question then perhaps you will get help. However, if you have come to troll then you will get trolled - simples ;)
[14:29:45] <Nodex> the answer is still 42. You have not asked any question that has an easy answer. We DO NOT KNOW what your project is or what you intend to do. They both have pros and cons - it's that simple
[16:21:43] <acalbaza> can there be an issue if all the other replica-set members go offline BUT the primary is still available?
[16:21:55] <kali> acalbaza: yes. it will step down
[16:22:15] <acalbaza> kali: interesting... why would it cause a stepdown even if the primary is still active?
[16:23:50] <kali> because when it can't see its siblings, the primary cannot distinguish between a failure of them and a netsplit isolating it. so in order to avoid a split brain, it must step down
[16:24:17] <kali> acalbaza: a primary must see a majority of the cluster to stay primary. as long as it sees less than that, it steps down
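kali's rule can be sketched as a one-line predicate (plain JavaScript, purely illustrative; canStayPrimary is a made-up name, not a driver API):

```javascript
// A primary steps down unless it can see a strict majority of the
// replica set's members (counting itself).
function canStayPrimary(visibleMembers, totalMembers) {
  return visibleMembers > totalMembers / 2;
}

console.log(canStayPrimary(1, 3)); // isolated primary in a 3-member set -> false
console.log(canStayPrimary(2, 3)); // sees one sibling -> true
console.log(canStayPrimary(2, 4)); // 2 of 4 is not a strict majority -> false
```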
[16:25:08] <Nodex> kali : how do you deal with database locking at fotalia ... do you find it's a huge problem? and if so do you implement queues to get round it
[16:26:21] <kali> Nodex: i'm fotopedia, not fotolia :)
[16:27:23] <kali> we've actually had exactly one issue of a "lock" problem, just a few weeks ago
[16:27:59] <kali> and we have async queues, but they're more for sending emails and generating thumbs than dealing with mongodb
[16:30:14] <Nodex> I have a very annoying problem... I have one process that writes/updates about 90k docs perhaps 10 times a day... when they're being written they're causing the database to lock for whatever reason. I have another process which is essentially a cron, every second checking for notifications to send out - sms, email etc. Obviously we can't have more than one "notifier" running at once else
[16:30:14] <Nodex> someone will get two notifications for the same notification, so we "lock" (set the notifications cron to active=0)... the trouble comes that when the DB is locked it cannot then update the cron to set active=1, and it triggers madness round the system
[16:30:39] <Nodex> part of me wants to just move the notifications cron to another database and avoid this entirely but I really want it in the same namespace
[16:32:11] <kali> Nodex: we have a separate mongodb system for "volatile" stuff
[16:32:26] <Nodex> I think it's best to just move it to another db tbh
[16:32:36] <Nodex> eradicate the problem altogether
[16:33:15] <Nodex> I think I'm going to migrate to tokumx to allow document level locking, it's more in line with a new product we're working on
[16:34:04] <kali> it's also good when you need a resync... because the volatile system is typically a small database that performs lots of writes, while the main system is bigger but sees fewer operations
[16:34:37] <Nodex> for most apps (website loading for example) the locks are not seen as the pages take ages to render anyway, but I stupidly went and built very fast loading/rendering pages and you can physically notice when the page load goes from 300ms to 1.5s
[16:35:39] <Nodex> I have even been weighing up postgres to see if it's a better fit but I can't give up the document flexibility of mongodb
[16:36:56] <kali> Nodex: well, splitting systems was a huge improvement for us, for what it's worth
[16:37:22] <kali> Nodex: we actually have 5 mongodb logically independent systems
[16:38:03] <Nodex> I have been thinking along similar lines... I just didn't want to fragment too much
[16:40:16] <Nodex> kali: did you watch "Under the dome" ?
[16:42:05] <kali> Nodex: that said, the lock issue i was having was relatively interesting. i was updating a read_count property on one specific document instance (and doing so quite often). the document had an unusually huge subarray of docs (10MB) with an index on some subfield. there was also an index on the read_count
[16:42:25] <kali> Nodex: when mongo detects it has to update an index, it actually updates all of them
[16:42:54] <kali> Nodex: and updating the subfield one was actually taking about 1sec... during which the lock was held.
[16:43:03] <kali> Nodex: we went down on that one :)
[16:43:24] <kali> Nodex: i've seen "under the dome" and found it... average
[16:46:06] <Nodex> I found a great little app for TV series
[16:46:12] <shoshy> hey, i'm trying to update a document, i have success doing so, i use db.col.update({_id: ObjectId('...')},{$set: {blurb: { a: 1, b:2}} }) . The document pre-update looks like: {_id: .., blurb:{c: 3, d:4}} and post-update: {_id:..., blurb:{a:1,b:2}} instead of {_id:..., blurb:{a:1,b:2,c:3,d:4}} what am i missing?
[16:47:10] <kali> shoshy: you're replacing the full blurb with that syntax
[16:47:21] <shoshy> on robomongo and also via node node.js native driver
[16:47:35] <Nodex> yes, this (in my opinion) needs fixing... it is not consistent with other operations in mongodb
[16:47:52] <shoshy> kali, Nodex: i see.. because I use code, I actually pass an object.... so the object adds fields dynamically to be updated
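As kali says, $set with a whole subdocument replaces blurb rather than merging into it; the fix is dot-notation paths like {"blurb.a": 1}. Since shoshy builds the update object dynamically, a tiny helper (hypothetical, plain JavaScript) can do the flattening before the object is handed to the driver:

```javascript
// Flatten one level of nesting into dot paths for use inside $set,
// so existing sibling fields of the subdocument survive the update.
function toDotPaths(obj) {
  var out = {};
  Object.keys(obj).forEach(function (key) {
    var val = obj[key];
    if (val !== null && typeof val === "object" && !Array.isArray(val)) {
      Object.keys(val).forEach(function (sub) {
        out[key + "." + sub] = val[sub];
      });
    } else {
      out[key] = val;
    }
  });
  return out;
}

// db.col.update({_id: ...}, {$set: toDotPaths({blurb: {a: 1, b: 2}})})
// sends {$set: {"blurb.a": 1, "blurb.b": 2}} and keeps blurb.c / blurb.d
console.log(toDotPaths({ blurb: { a: 1, b: 2 } }));
```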
[16:48:03] <acalbaza> related replica-set question... if i have multiple data centers on different networks, should i set up even numbers of replicas in each to protect against a netsplit?
[16:55:59] <Kaiju> Quick question about mapreduce. Since the code is shipped off to the cluster for processing. Can you declare functions outside the map reduce and use them inside or do they have to be declared inside the mapreduce?
[16:58:03] <kali> Kaiju: you can define functions in the "scope" parameter, but that's about it. they will be transferred as text, nothing fancy, no closures or magic tricks.
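A sketch of the scope parameter kali mentions (collection, field, and helper names are hypothetical; the helper is shipped as plain text to the servers, so it must not close over outer variables):

```javascript
db.events.mapReduce(
  function () { emit(normalize(this.tag), 1); },          // map
  function (key, values) { return Array.sum(values); },   // reduce
  {
    out: { inline: 1 },
    scope: { normalize: function (s) { return s.toLowerCase(); } }
  }
);
```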
[16:58:46] <Kaiju> kali: Giant inline script it is then, thanks!
[17:16:11] <ejb> How can I get distances from a $near query? Something like annotation of the distance onto each doc
[17:42:10] <Skunkwaffle_> I'm going to be building aggregation queries dynamically, and I'm wondering if there's any difference in performance between {foo: 'bar'} vs {foo: {$in: ['bar']}} in a $match stage
[17:43:24] <Skunkwaffle_> in other words: is the $in operation any slower than just straight matching if there's only one value in the array
[17:46:51] <ep1032> hey all. I have a variable list of mongo object ids, and i need to update all of the items in that list
[17:46:56] <ep1032> is there a way to do it in a single bulk update?
[17:47:30] <djMax> is there a conventional way to do pub/sub with idempotent "consumption" in mongo?
[17:49:03] <imaginationcoder> @ep1032, one way is to write a for loop in JS and execute each update separately
[17:50:34] <ep1032> LouisT: I'm not trying to write a query to do an update, I'm trying to do, essentially the mongo equivalent of sql's "update where in(list)", and need to know if that's possible, and if there's any limit to the number of items that can be in that list
[17:51:00] <ep1032> imaginationcoder: yeah, that's what I'm doing now, but I'd really rather not run thousands of individual updates if I could potentially just do it in one call
[17:53:31] <imaginationcoder> problem with a single statement (IMO): query size is proportional to list size. if your list is small, you are fine
[17:54:13] <ep1032> I'm pulling logging records out of a collection, and moving them into the datacenter's consolidated logging area. So it could be 1 record, it could be hundreds of thousands. Though at larger sizes I could obviously chunk them
[17:54:34] <ep1032> I assume the multi option in update is a syntax for doing this bulk thing you listed?
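Yes: the mongo equivalent of SQL's UPDATE ... WHERE id IN (list) is a single multi-update with $in. Names below are hypothetical; the practical ceiling is the BSON size cap on the query document (16MB), so very large id lists may still need the chunking ep1032 mentions:

```javascript
// one round trip instead of thousands of single-document updates
var ids = [ /* ObjectIds gathered earlier */ ];
db.logs.update(
  { _id: { $in: ids } },
  { $set: { exported: true } },
  { multi: true }      // without multi, only the first match is updated
);
```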
[17:54:47] <djMax> if I have a sharded, replicated Mongo cluster, will findAndModify still be "atomic" across the whole cluster?
[17:55:14] <cheeser> only single document updates are atomic
[17:57:00] <djMax> yeah, that's fine. I want to do something like "find a document where this field is 0, and add 1" as a way to make sure only one person can "claim" a job
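A sketch of the claim pattern djMax describes; since findAndModify is atomic on the single document it touches (cheeser's point), only one worker can flip the flag even in a sharded cluster. Collection and field names are hypothetical:

```javascript
var job = db.jobs.findAndModify({
  query:  { claimed: 0 },            // only unclaimed jobs
  update: { $inc: { claimed: 1 } },  // flip 0 -> 1 atomically
  new:    true                       // return the claimed document
});
// job is null when nothing was left to claim
```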
[18:44:24] <ejb> Can anyone help with that distances/$near/annotation question?
[19:17:27] <Kaiju> ejb: I use node so I'm guessing it's close to meteor. Are you doing something like this? db.collection.runCommand("text", { $near: { geo: [0, 0] } })
[19:17:49] <ejb> Kaiju: I'm about to... will report back
[19:18:32] <Kaiju> ejb: k, heading to lunch. Good luck. The shell differs from external drivers. I would research your driver's method.
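One way to get ejb's per-document distances: instead of a plain $near find, the geoNear database command returns each match together with its computed distance. A sketch with a hypothetical collection and coordinates:

```javascript
db.runCommand({
  geoNear: "places",
  near: [ -73.97, 40.77 ],
  spherical: true,
  num: 10
});
// each entry in results carries the distance alongside the document:
// { "dis": 0.0012, "obj": { ...original document... } }
```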