[01:11:06] <SirFunk> it's supposed to include userAdminAnyDatabase
[01:18:54] <proteneer_osx> root does not include the ability to insert data directly into the system.users and system.roles collections in the admin database
[05:35:40] <narutimateum> what are alternatives to join in mongo?
[05:36:14] <lqez> narutimateum: alternatives... just saving duplicated data.
[05:36:24] <narutimateum> what are alternatives to join in mongo? i need the stores in here http://i.imgur.com/rFiqKbU.png to have info of these http://i.imgur.com/1ZlfLmS.png
[05:37:02] <narutimateum> lqez: refer to the screenshots.. any idea how to handle that?
[05:45:35] <lqez> and 'no join' is not only a problem of mongodb
[05:46:09] <lqez> Many key-value or document-based databases have no support for joining.
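To make the alternatives concrete: you either embed the related data in each document, or do the join in the application. A minimal sketch of an application-side join in Python (the two lists are made-up stand-ins for the documents in the screenshots):

```python
# Application-side "join": fetch both collections, then merge in memory
# using a lookup table (one query per collection, not one per store).
stores = [
    {"_id": 1, "name": "Store A", "info_id": 10},
    {"_id": 2, "name": "Store B", "info_id": 20},
]
infos = [
    {"_id": 10, "city": "Tokyo"},
    {"_id": 20, "city": "Seoul"},
]

info_by_id = {doc["_id"]: doc for doc in infos}
joined = [
    {**store, "info": info_by_id.get(store["info_id"])}
    for store in stores
]

print(joined[0]["info"]["city"])  # → Tokyo
```

Embedding avoids the second query entirely, at the cost of duplicating data when it changes.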
[07:30:45] <Viesti> Hi, I was reading about bulk updates: http://blog.mongohq.com/bulk-updates-for-all/
[07:32:09] <Viesti> we're using mongoimport at the moment, after a quick glance it seemed to me that it doesn't use the bulk import API, could someone confirm if I'm right or totally wrong :)
[07:56:41] <Ponyo> In order to get real high availability out of a 3 node replica set, do I need to run the mongos router on each node as well?
[07:57:35] <rspijker> Ponyo: you don't need mongos for a replica set…
[07:59:19] <Ponyo> So when I connect to the secondaries with the mongo client and try to write, I get an error that they aren't the primary. How does one use a replica set appropriately if only one node can take writes at a time and the others don't forward writes to the primary?
[08:00:11] <Ponyo> that is, if I can't use round robin to achieve failover, then how does the application client know which is the master accepting writes?
[08:02:47] <rspijker> Ponyo: connection strings; I think the driver just figures it out
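For reference, the driver discovers the primary from a seed list in the connection string; a hedged sketch (host names and set name are invented, and the connection line is commented out since no server is assumed):

```python
# Replica-set URI: list several members as seeds; the driver contacts them,
# discovers the current primary, and routes writes there automatically.
uri = "mongodb://db1.example.com:27017,db2.example.com:27017,db3.example.com:27017/?replicaSet=rs0"

# With PyMongo this would be:
# from pymongo import MongoClient
# client = MongoClient(uri)

# Crude parse just to show the seed list structure:
seeds = uri[len("mongodb://"):].split("/")[0].split(",")
print(len(seeds))  # → 3
```

If the primary fails and another member is elected, the driver re-discovers the new primary from the same seed list.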
[08:16:45] <Ponyo> Does mongos work more like a proxy or a lookup server?
[08:43:32] <rspijker> Ponyo: mongos knows how data is distributed across shards. It’s a router that does aggregation of results etc.
[08:44:04] <rspijker> in a replica set, data is not distributed. The same data is on all of the nodes (normally), so when one fails another can take its place without problems.
[08:44:36] <rspijker> sharding is partitioning, so each shard (which is usually a replica set) contains a part of the data
[08:44:55] <kali> it also collates partial results from two shards when needed
[08:45:31] <rspijker> you query against the mongos and it knows where to go and get the different data. It also handles the underlying querying for you. So it will ask the relevant shards for the data, then stitch it together and return it to you as if you had just queried a single DB
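A toy illustration of the scatter-gather behaviour described above (shard contents are invented; a real mongos does all of this internally):

```python
# Each "shard" holds part of the data; the router scatters the query to
# every relevant shard, gathers the partial results, and stitches them
# together so the caller sees a single result set.
shard_a = [{"_id": 1, "score": 5}, {"_id": 2, "score": 9}]
shard_b = [{"_id": 3, "score": 7}]

def router_find(shards, predicate):
    """Scatter a query to all shards, then merge the partial results."""
    results = []
    for shard in shards:
        results.extend(doc for doc in shard if predicate(doc))
    return sorted(results, key=lambda d: d["_id"])

high_scores = router_find([shard_a, shard_b], lambda d: d["score"] > 6)
print([d["_id"] for d in high_scores])  # → [2, 3]
```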
[09:21:12] <AlexejK> With a single mongod, is it possible that when one of the DBs is getting heavy writes (or doing MapReduce), the reads from others are affected? I know that it's Single Write-Multiple Read locks but I'm observing this problem and trying to see if this is somewhere on Mongo side or my webapp (performance is affected even when i do an operation through console)
[09:21:45] <Ponyo> I read that a replica set can't have more than 12 members, does this include config servers? I also read that I need 3 config servers per shard, so does that mean the total max size of a replica set is actually 15 servers including its config servers?
[09:22:03] <kali> Ponyo: ho, my, this is wrong on so many levels...
[09:23:15] <Ponyo> someone might wanna work on the wording of that then, I find it confusing as all getout
[09:23:21] <rspijker> AlexejK: some locks are still global, unfortunately
[09:24:42] <kali> AlexejK: heavy load will affect performance, regardless of the locks. for instance, if you're mapreducing a whole collection, you're impacting the cache distribution and that will degrade the read performance
[09:27:15] <kali> AlexejK: then, fish for low hanging fruits: queries with missing indexes (the log will show you slow queries) or map/reduce that can be replaced by aggregation framework
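To illustrate kali's last suggestion: a group-and-sum that might otherwise be written as map/reduce can be expressed in the shell as `db.coll.aggregate([{ $group: { _id: "$k", total: { $sum: "$v" } } }])`. A client-side Python equivalent of what that pipeline computes (sample documents invented):

```python
from collections import defaultdict

# Sample documents; in MongoDB these would live in a collection.
docs = [
    {"k": "a", "v": 1},
    {"k": "b", "v": 2},
    {"k": "a", "v": 3},
]

# What { $group: { _id: "$k", total: { $sum: "$v" } } } computes:
totals = defaultdict(int)
for doc in docs:
    totals[doc["k"]] += doc["v"]

print(dict(totals))  # → {'a': 4, 'b': 2}
```

The aggregation framework runs this natively in the server and is generally much faster than JavaScript-based map/reduce for this kind of grouping.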
[09:28:43] <kali> Ponyo: you will find a form in the lower left corner to report a documentation page that you find confusing
[09:39:49] <AlexejK> kali: Thanks, we looked into that a while back but it's deffo worth another look
[11:41:49] <gancl> http://stackoverflow.com/questions/23993558/node-js-mongodb-cursor-each-get-empty-cant-run-in-sequence Hi! How to let mongodb cursor.each(function (err, docs) run in sequence?
[12:29:58] <kali> gancl: not sure i understand the question, and the code formatting is off in your question
[12:45:27] <gancl> kali: Thanks! I solved by using "cursor.toArray"
[12:51:49] <Repox> Hi guys. I'm trying to add a replica with rs.add(). Isn't it possible to add the username and password for authentication?
[12:52:20] <Repox> I'm trying to do rs.add('10.0.3.50:27017')
[13:07:51] <jackw411> anybody got a guide for mongo on ec2 and being able to connect remotely? having difficulties!
[13:18:05] <rspijker> Repox: you don’t add creds there, you use keyfiles
[13:27:10] <Ponyo> Is there a limit to how many times you can shard?
[13:30:44] <rspijker> check the “Authentication Between MongoDB Instances” section
[13:31:35] <rspijker> Ponyo: not that you will ever hit
[13:37:50] <hactor> Folks if you have an AGPL source, and it's interfaced to an Apache License source, and the latter is interfaced to your code, aren't you still bound by the AGPL?
[13:41:37] <rspijker> hactor: that's fairly off-topic… not that I mind particularly, you're just not very likely to get a proper answer
[13:54:40] <Ephexeve> Hey guys, I wonder what's the best way for me to check if the user exists in mongoengine before logging in? Is it like this? http://docs.mongoengine.org/tutorial.html#accessing-our-data
[13:54:44] <Ephexeve> Say I have millions of users, wouldn't that be tiring?
[14:00:04] <hactor> rspijker, it's the scheme used by MongoDB, I don't think it's been proven in court.
[14:07:34] <rspijker> hactor: ah, didn't know that :)
[14:09:12] <jackw411> is there anything i need to do to make mongo accessible externally on a port on a server?
[14:09:13] <hactor> Basically MongoDB itself is AGPL.
[14:09:23] <hactor> And the drivers for it (from same team) are Apache License.
[14:09:39] <hactor> However from the AGPL it follows the drivers are bound under the terms of the AGPL.
[14:09:55] <hactor> So technically MongoDB can sue themselves. And win and lose at the same time.
[14:10:07] <kali> jackw411: check out 1/ the firewalling rules in your security groups, 2/ the presence of a "listen" directive in your mongo configuration
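For the record, the mongod setting being referred to as a "listen" directive is `bind_ip`; a hedged sketch of the relevant config lines (file path and addresses are assumptions, and binding to all interfaces should always be paired with EC2 security group rules):

```
# /etc/mongod.conf (key = value style config)
# Listen on all interfaces instead of localhost only; restrict who can
# actually reach port 27017 via the security group / firewall.
bind_ip = 0.0.0.0
port = 27017
```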
[14:11:32] <kali> hactor: I don't think AGPL is "catching" through the network :)
[14:12:16] <hactor> It was normally defined to do so I think.
[14:15:13] <kali> mpfff. sorry, please disregard my comment.
[17:59:20] <modcure> I setup a test sharded environment... there is data on shard-a but not shard-b.. not sure why shard-b has no data in it. thoughts?
[18:00:21] <kali> have you sharded a database and a collection ?
[18:17:23] <kali> modcure: your collection is too small. you'll start to see things happening when it reaches the chunk size (it should be 64MB)
[18:18:01] <modcure> kali, thanks. new to sharding.. pretty cool stuff
[18:20:30] <modcure> kali, about the sharding... mongodb uses the shard key to perform range partitioning... about the chunks.. each chunk is considered a new partition?
[18:27:48] <modcure> kali, im not sure what i mean :)
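To sketch what this exchange is circling around: with range-based sharding, the shard key space is cut into chunks, and the chunk (not the document) is the unit that gets assigned to and migrated between shards. A toy model in Python (boundaries invented; real chunks split by data size, ~64MB, not by a fixed key range):

```python
# Toy range partitioning: each chunk covers a half-open shard-key range
# and lives on one shard; routing a key means finding its chunk.
chunks = [
    {"range": (0, 100),   "shard": "shard-a"},
    {"range": (100, 200), "shard": "shard-b"},
    {"range": (200, 300), "shard": "shard-a"},
]

def shard_for_key(key):
    """Return the shard holding the chunk whose range contains key."""
    for chunk in chunks:
        lo, hi = chunk["range"]
        if lo <= key < hi:
            return chunk["shard"]
    raise KeyError(key)

print(shard_for_key(150))  # → shard-b
```

The balancer keeps the cluster even by moving whole chunks between shards, which is why a collection smaller than one chunk never leaves the first shard.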
[18:39:00] <Repox> When I try to do rs.addArb() I get the message "replSetReconfig command must be sent to the current replica set primary." What does this mean and how do I fix it?
[18:47:09] <synth_> one of our servers crashed that had a mongodb set up on it, is there an easy way to import all data from the old database which we have on another drive to a new installation of mongo?
[18:52:40] <joshua> synth_: the mongorestore command has a --dbpath option to import from a data directory
[19:03:38] <synth_> joshua thanks, i'll give that a shot!
[19:12:54] <MacWinner> hi, had a quick question regarding the steps outlined here: http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/
[19:13:30] <MacWinner> in the section, "Expand the replica set", step 1, it says start 2 standalone instances.. do we need to start these standalone instances with the --replSet flag?
[19:13:51] <MacWinner> or do you just set --replSet on the first node of the replication set
[19:18:36] <joshua> you start all the servers in your set with the same replSet value
[19:21:31] <MacWinner> if the initial server has 500M of data in its collections, and you add the second item in the replica set, will the primary node be stalled until the replication is complete?
[19:23:29] <synth_> joshua: it looks like that did the trick! thank you!
[19:23:52] <joshua> You should be able to continue using it while it syncs the data, but if you want to speed it up you can possibly copy the data directory before you start.
[19:24:08] <joshua> Eh, but maybe that is more trouble than it's worth
[20:00:21] <cpennington> what's the best way to use a c-backed version of the SON class as the document_type argument when connecting using PyMongo?
[20:00:43] <cpennington> the SON class you import from bson/son.py is implemented in python, and has a number of things that slow it down significantly
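One hedged possibility for the SON question: PyMongo's MongoClient takes a `document_class` keyword, and any dict subclass should be usable there; plain `dict` is the fastest choice when key order doesn't matter, and `collections.OrderedDict` preserves order like SON does (and is C-backed in CPython 3.5+). A sketch, with the connection line commented out since it assumes a running server:

```python
from collections import OrderedDict

# PyMongo decodes documents into whatever class you pass, e.g.:
# from pymongo import MongoClient
# client = MongoClient("mongodb://localhost:27017/",
#                      document_class=OrderedDict)

# OrderedDict preserves insertion order, which is what SON provides:
doc = OrderedDict([("b", 2), ("a", 1)])
print(list(doc))  # → ['b', 'a']
```

Whether this is actually faster than bson's SON depends on the PyMongo version and whether its C extensions are installed, so it's worth benchmarking before committing to it.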
[20:17:55] <smatt706> Hello, I'm trying to adjust a centos/rhel 6 box to increase the ulimit for open files and processes. I edited /etc/security/limits.d/90-nproc.conf per http://docs.mongodb.org/manual/reference/ulimit/#unix-ulimit-settings however this doesn't seem to get read unless there is also an adjustment in /etc/security/limits.conf
[20:25:51] <smatt706> perhaps the question should be directed towards the centos crowd instead. :)
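For completeness, the lines being edited look something like this (the `mongod` account name is an assumption; 64000 matches the MongoDB ulimit recommendations, and on rhel/centos 6 the nproc override lives in limits.d/90-nproc.conf as noted above):

```
# /etc/security/limits.d/90-nproc.conf (nproc) and /etc/security/limits.conf
# Raise open-file and process limits for the account running mongod.
mongod  soft  nofile  64000
mongod  hard  nofile  64000
mongod  soft  nproc   64000
mongod  hard  nproc   64000
```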