[04:25:54] <lovelydovely> I have a few questions for how to optimize the size and arrangement of data within documents.
[04:27:52] <lovelydovely> Is it inherently better or worse to have many small documents than it is to attempt to pack the data into fewer large ones? My assumption is that this might impact locality of reference.
[04:29:32] <lovelydovely> Also, how does the performance of looking up documents by their index change as the number of entities in the collection grows or shrinks? Say, if you have 10 elements in the collection vs 10,000,000, how much longer will index lookup times be on average for the latter?
[04:31:35] <lovelydovely> I assume it uses a hash algorithm (not a red-black tree, at least by default) and I assume that it has some way of (linearly?) searching bins when hashes collide...?
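[For reference on the question above: MongoDB's default indexes are B-trees, not hash tables, so point lookups grow roughly O(log n) with collection size — going from 10 to 10,000,000 documents adds a few extra tree-node traversals, not a proportionally longer search. One way to see this is `explain()` in the shell; collection and field names below are hypothetical:]

```javascript
// An IXSCAN stage with small keysExamined/docsExamined values in the
// executionStats output confirms an index lookup rather than a full scan.
db.mycoll.createIndex({ x: 1 })
db.mycoll.find({ x: 42 }).explain("executionStats")
```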
[04:35:37] <lovelydovely> Is there any way to delete an entity at a particular index in an array? $pull does a value based compare, and $position seems to only be good for insertions?
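[A common workaround for the delete-by-array-index question above, since `$pull` matches by value, is a two-step update: `$unset` the element at that position (which leaves a `null` placeholder rather than shrinking the array), then `$pull` the `null`. A sketch in the shell, with hypothetical names; note the `$pull` removes *all* nulls in the array:]

```javascript
// delete arr[2]: first null it out in place, then remove the null placeholder
db.mycoll.update({ _id: someId }, { $unset: { "arr.2": 1 } })
db.mycoll.update({ _id: someId }, { $pull: { arr: null } })
```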
[12:16:04] <zer0def> hey guys, got a quick question on role management - say i need a user that's able to readWrite and enableSharding to all databases, except for admin ones; is my reasoning correct, when i do allowance on {db: '', collection: ''} and strip those actions from {db: 'config', collection: ''}?
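[On the question above: MongoDB privileges are additive — a role can only grant actions, not strip them from a broader grant, so the "allow on everything, then subtract from config/admin" approach doesn't work as described. The usual shape is a single custom role granting the needed actions on the all-databases resource `{db: "", collection: ""}` (which already excludes system collections). A sketch, with a made-up role name:]

```javascript
// run against the admin database; grants readWrite-style actions plus
// enableSharding on all non-system collections in all databases
db.getSiblingDB("admin").createRole({
  role: "rwShardAllUserDBs",
  privileges: [{
    resource: { db: "", collection: "" },
    actions: ["find", "insert", "update", "remove", "enableSharding"]
  }],
  roles: []
})
```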
[14:31:46] <freeone3000> I have a replication set of 8 nodes whose replication time normally stays below 5s. Currently, two nodes are over 600s. How can I solve this issue?
[14:34:22] <freeone3000> None of the normal things seem to apply - their effective lock rate is *lower* than it is usually, same with page faults. It's possible they're just not getting traffic routed...
[14:41:24] <msn> when i add a host for replication in mongodb-3.0.7 i get this error Our config version of 2 is no larger than the version on slave1:27017, which is 6
[14:44:26] <freeone3000> msn: Config is versioned. You're adding a new host with an older version of config than master. You should update the replication set from master.
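[Concretely, the error msn quotes means the node running the reconfiguration holds config version 2 while slave1:27017 already has version 6, so the change is rejected as stale. The usual fix is to drive membership changes from the primary, which holds the authoritative (highest-version) config. A sketch in the shell, hostname hypothetical:]

```javascript
// on the current PRIMARY of the replica set:
rs.add("slave1:27017")   // membership changes bump the config version
rs.conf().version        // inspect the current config version
rs.status()              // confirm the new member is syncing
```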
[16:10:09] <Lujeni> Hello - There is a better way than a mongodump/restore to copy a shard database to another server ? Thx
[16:48:59] <deepdeep> can I do a 2-d sort? i.e., sort by one column, then within that sort do a subsort?
[16:49:11] <deepdeep> i want to sort by n_ratings, then sort by n_views within the n_ratings sort
[17:26:52] <cheeser> deepdeep: in 3.2 you'll be able to via aggregation
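[If `n_ratings` and `n_views` are ordinary document fields, a compound sort specifier already does this in `find()`, no aggregation needed — the 3.2 aggregation route matters when the sort keys are computed. A sketch with a hypothetical collection name, using the field names from the question:]

```javascript
// primary sort on n_ratings, ties broken by n_views, both descending
db.items.find().sort({ n_ratings: -1, n_views: -1 })
```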
[17:33:11] <Sendoushi> Hey guys. Trying mongodb and mongoose for the first time. Using it with node.js. I was in postman trying to list / create / whatever... but it isn't working... I know it is hard to see like this but... http://pastebin.com/QjxTnawu on that list it gets to "requesting"... but then it just keeps loading and loading and loading... nothing happens. the second log doesn't fire either
[17:37:11] <StephenLynx> I advise you to not use mongoose.
[17:40:51] <deepdeep> i'm on ubuntu, i can't get this process to stop starting: 1293 ? Ssl 0:00 mongod -f /mongodb.conf
[20:06:53] <Sendoushi> getting constructor of null when doing findOne in mongoose. ideas why?
[20:36:01] <CaptTofu> cheeser: unable to do that at this time. Need to migrate data.
[20:36:18] <CaptTofu> cheeser: is there a way to make a 3.0 server handle 2.0 clients?
[20:42:02] <d-snp> is the directory structure of the mongo source tree documented somewhere?
[20:42:11] <d-snp> I'm wondering what mongo/src/mongo/db/s would be
[21:43:12] <d-snp> can I just say that it's really weird that you have DatabaseCatalogEntry, which is an adapter pattern, and mmapv1 and kv implement it
[21:43:18] <d-snp> and then kv is another adapter pattern
[21:43:24] <d-snp> which is implemented by wiredtiger
[23:30:05] <Ramone> hey all... I'm trying to upsert based on a unique key, and I get `E11000 duplicate key error` for precisely that key... can anyone tell me what might be going on?
[23:31:51] <joannac> Ramone: what's the upsert command?
[23:33:32] <Ramone> well this is through the node.js client, but possibly it's similar enough to the shell... I'm doing a findAndModify( {k : token}, {[some object here without an _id]}, {upsert:true, multi:false, new:true})
[23:34:12] <Ramone> it's a production issue that occurs intermittently that I can't repro locally
[23:34:46] <Ramone> `E11000 duplicate key error index: dojo.sessiontokens.$k_1 dup key: { : "E-2oHRzjyUZoO46vlBIFASZ8kZbbi8DJ" }` is the full error message
[23:35:07] <Ramone> there's a unique index on that "k" field
[23:37:34] <joannac> and if you search for a document with that token, what do you get?
[23:43:04] <Ramone> alright... unfortunately I don't have access to those... we use mongo as a service
[23:44:33] <Ramone> seems a lot like http://stackoverflow.com/questions/29305405/mongodb-impossible-e11000-duplicate-key-error-dup-key-when-upserting , but no one has a sol'n there either
[23:56:22] <Ramone> my query only has the field k, which is uniquely indexed
[23:56:38] <joannac> Ramone: did you read the document I linked?
[23:56:55] <Ramone> yeah just the first paragraph so far... now I see what you wanted me to read :)
[23:57:13] <Ramone> `If all the commands finish the query phase before any command starts the modify phase, and there is no unique index on the name field, the commands may each perform an upsert, creating multiple duplicate documents.`
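[The paragraph Ramone quotes describes a race: concurrent upserts can all miss in the query phase and then collide in the modify phase. With a unique index on `k` (as here), one writer wins and the rest get E11000, so the standard client-side remedy is to retry the upsert. A generic sketch of that retry wrapper — helper name hypothetical, `op` is any async function performing the upsert via the node.js driver:]

```javascript
// Retry an async upsert operation when it fails with a duplicate-key
// race (MongoDB error code 11000); rethrow anything else immediately.
async function retryOnDupKey(op, retries = 3) {
  let lastErr;
  for (let i = 0; i < retries; i++) {
    try {
      return await op();
    } catch (err) {
      if (err.code !== 11000) throw err; // not a dup-key race: don't retry
      lastErr = err;
    }
  }
  throw lastErr; // still colliding after all attempts
}
```

[On the retry, the document created by the winning writer now exists, so the query phase matches it and the operation proceeds as a plain update.]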