PMXBOT Log file Viewer


#mongodb logs for Thursday the 29th of October, 2015

[04:25:54] <lovelydovely> I have a few questions for how to optimize the size and arrangement of data within documents.
[04:27:52] <lovelydovely> Is it inherently better or worse to have many small documents than it is to attempt to back the data into fewer large ones? My assumption is that this might impact locality of reference.
[04:28:06] <lovelydovely> *pack the data*
[04:29:32] <lovelydovely> Also, how does the performance of looking up documents by their index change as the number of entities in the collection grows or shrinks? Say, if you have 10 elements in the collection vs 10,000,000, how much longer will index lookup times be on average for the latter?
[04:31:35] <lovelydovely> I assume it uses a hash algorithm (not a red-black tree, at least by default), and I assume that it has some way of (linearly?) searching bins when hashes collide...?
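(For reference: MongoDB's default index type is a B-tree, not a hash table, so point lookups grow logarithmically with collection size — going from 10 to 10,000,000 documents adds only a handful of extra node traversals, not a linear probe of hash bins. Hashed indexes are opt-in. A minimal sketch, assuming a hypothetical `things` collection:)

```javascript
db.things.createIndex({ name: 1 })          // default B-tree index
db.things.createIndex({ name: "hashed" })   // opt-in hashed index
// explain() shows an IXSCAN stage when an index serves the query:
db.things.find({ name: "foo" }).explain("executionStats")
```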
[04:35:37] <lovelydovely> Is there any way to delete an entity at a particular index in an array? $pull does a value based compare, and $position seems to only be good for insertions?
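(A common workaround for removing an array element by position — there is no direct operator for it in this era — is to `$unset` the slot, which leaves a `null` placeholder, and then `$pull` the `null`. A sketch, assuming a hypothetical collection `things` with an array field `items`:)

```javascript
// Step 1: unset the element at index 2 — leaves a null hole in the array
db.things.update({ _id: id }, { $unset: { "items.2": 1 } })
// Step 2: pull the null to close the hole
db.things.update({ _id: id }, { $pull: { items: null } })
```

(Caveats: the two updates are not atomic as a pair, and the `$pull` removes *every* `null` in the array, so this only works cleanly when `null` is not a legitimate element value.)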
[12:16:04] <zer0def> hey guys, got a quick question on role management - say i need a user that's able to readWrite and enableSharding to all databases, except for admin ones; is my reasoning correct, when i do allowance on {db: '', collection: ''} and strip those actions from {db: 'config', collection: ''}?
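(Per the privilege resource-document rules, `{ db: "", collection: "" }` already scopes to all non-system collections in all databases *excluding* `local` and `config`, so the config database should not need to be stripped explicitly. A hedged sketch of such a custom role — the role name is hypothetical:)

```javascript
use admin
db.createRole({
  role: "rwShardAllDbs",                       // hypothetical name
  privileges: [
    { resource: { db: "", collection: "" },    // all DBs except local/config
      actions: [ "find", "insert", "update", "remove", "enableSharding" ] }
  ],
  roles: []
})
```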
[13:52:03] <glundgren> hey folks!
[13:52:22] <glundgren> i have this mongodb in production, but im using cloud to backup it
[13:52:41] <glundgren> it says i have to convert my standalone to replica set
[13:52:50] <glundgren> someone can help me?
[13:53:26] <glundgren> mongod --port 27017 --dbpath /srv/mongodb/db0 --replSet rs0
[13:54:08] <glundgren> ok i was able to start it changing the /srv/mongodb/db0 to /var/lib/mongodb that is where my database is
[13:54:36] <glundgren> if i try to connect using mongohub, just to check out, it shows an empty database
[13:55:19] <glundgren> 1. Connect to the mongod instance. 2. Use rs.initiate() to initiate the new replica set:
[13:55:30] <glundgren> how can i do that?
[13:55:42] <glundgren> it will erase my standalone database?
[13:55:47] <glundgren> please help :OOO
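(The documented standalone-to-replica-set conversion keeps the existing data files: `rs.initiate()` only writes a replica set configuration into the `local` database, it does not erase anything. A sketch of the steps, using the paths from above:)

```javascript
// 1) restart mongod on the SAME dbpath, adding a replica set name:
//      mongod --port 27017 --dbpath /var/lib/mongodb --replSet rs0
// 2) then connect with the mongo shell and run:
rs.initiate()   // existing databases are preserved
rs.conf()       // verify the one-member configuration
rs.status()     // the member should step up to PRIMARY shortly
```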
[14:31:46] <freeone3000> I have a replication set of 8 nodes whose replication time normally stays below 5s. Currently, two nodes are over 600s. How can I solve this issue?
[14:34:22] <freeone3000> None of the normal things seem to apply - their effective lock rate is *lower* than it is usually, same with page faults. It's possible they're just not getting traffic routed...
[14:41:24] <msn> when i add a host for replication in mongodb-3.0.7 i get this error Our config version of 2 is no larger than the version on slave1:27017, which is 6
[14:44:26] <freeone3000> msn: Config is versioned. You're adding a new host with an older version of config than master. You should update the replication set from master.
[14:48:18] <msn> thanks got it freeone3000
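(A sketch of how that fix usually looks — the error means the member being added carries a replica set config with a higher version than the one the primary is distributing, so the addition has to be driven from the primary's side:)

```javascript
// On the PRIMARY, which holds the authoritative config:
rs.conf()                  // note the "version" field
rs.add("slave1:27017")     // add the member from the primary
// If slave1 was initiated separately and still carries its own stale
// config, stop it and clear the local.* files in its dbpath first.
```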
[16:10:09] <Lujeni> Hello - Is there a better way than a mongodump/restore to copy a sharded database to another server? Thx
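(For unsharded data in this era, `copyDatabase` can avoid the dump/restore round-trip; for sharded collections, dump and restore through mongos remains the usual route. A sketch — hostnames and database name are hypothetical:)

```javascript
// run on the destination server; pulls "mydb" from the source host
db.copyDatabase("mydb", "mydb", "source-host:27017")
```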
[16:48:59] <deepdeep> can I do a 2-d sort? i.e., sort by one column, then within that sort do a subsort?
[16:49:11] <deepdeep> i want to sort by n_ratings, then sort by n_views within the n_ratings sort
[17:26:52] <cheeser> deepdeep: in 3.2 you'll be able to via aggregation
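(For a plain query, a compound sort document already does exactly this — sort keys apply in order, so the second key breaks ties within the first; the 3.2 remark presumably concerns doing it inside an aggregation pipeline. A sketch, with an assumed `items` collection:)

```javascript
// sort by n_ratings descending, then by n_views descending within ties
db.items.find().sort({ n_ratings: -1, n_views: -1 })
```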
[17:33:11] <Sendoushi> Hey guys. Trying for the first time mongodb and mongoose. Using it with node.js. I was on postman trying to list / create / whatever... but it isn't working... I know it is hard to see like this but... http://pastebin.com/QjxTnawu on that list it gets to requesting... but the keeps loading and loading and loading... nothing happens. the second log doesn't also
[17:37:11] <StephenLynx> I advise you to not use mongoose.
[17:40:51] <deepdeep> i'm on ubuntu, i can't get this process to stop starting: 1293 ? Ssl 0:00 mongod -f /mongodb.conf
[17:40:53] <deepdeep> i also can't kill it
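(On Ubuntu of this vintage, mongod is typically managed by upstart, which respawns a killed process; the fix is to stop it through the service manager rather than with kill. A sketch, assuming the job is named `mongod`:)

```shell
sudo service mongod stop     # via the service wrapper
# or, talking to upstart directly:
sudo stop mongod
```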
[17:47:41] <Sendoushi> StephenLynx, why not?
[17:48:13] <StephenLynx> its slow, bad documentation, weird behavior
[17:48:56] <Sendoushi> hm... you would use mongodb directly?
[17:49:39] <StephenLynx> not really, I would use the native driver.
[17:49:48] <StephenLynx> "mongodb" on npm.
[17:50:09] <StephenLynx> https://www.npmjs.com/package/mongodb
[17:50:55] <Sendoushi> ok i'll check...
[17:51:41] <Sendoushi> i'll try it out
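(The 2.x-era native driver API looks like this — a minimal sketch, assuming a local mongod and a hypothetical `users` collection:)

```javascript
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
  if (err) throw err;
  // the db handle exposes collections directly; no schemas involved
  db.collection('users').find({}).toArray(function (err, docs) {
    if (err) throw err;
    console.log(docs);
    db.close();
  });
});
```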
[19:16:34] <CaptTofu> howdy!
[19:16:48] <CaptTofu> quick question: what is the trick to allow 2.0 clients to talk to 3.0?
[19:16:53] <CaptTofu> namely, shell
[20:02:09] <cheeser> why not upgrade your client?
[20:06:53] <Sendoushi> getting constructor of null when doing findOne in mongoose. ideas why?
[20:36:01] <CaptTofu> cheeser: unable to do that at this time. Need to migrate data.
[20:36:18] <CaptTofu> cheeser: is there a way to make a 3.0 server handle 2.0 clients?
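(The usual sticking point is authentication: a fresh 3.0 deployment defaults to SCRAM-SHA-1 credentials, which 2.x shells cannot speak, while a server upgraded from 2.x keeps MONGODB-CR until the auth schema is explicitly upgraded. A sketch of how to check which one is in effect:)

```javascript
use admin
db.system.version.findOne({ _id: "authSchema" })
// currentVersion 3 => MONGODB-CR (2.x clients can authenticate)
// currentVersion 5 => SCRAM-SHA-1 only
```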
[20:42:02] <d-snp> is the directory structure of the mongo source tree documented somewhere?
[20:42:11] <d-snp> I'm wondering what mongo/src/mongo/db/s would be
[21:43:12] <d-snp> can I just say that it's really weird that you have DatabaseCatalogEntry, which is an adapter pattern, and mmapv1 and kv implements it
[21:43:18] <d-snp> and then kv is another adapter pattern
[21:43:24] <d-snp> which is implemented by wiredtiger
[21:43:25] <d-snp> ..why
[21:52:12] <cheeser> d-snp: if you have questions about the code itself, you're better off asking on mongodb-dev
[21:52:31] <cheeser> https://groups.google.com/forum/#!forum/mongodb-dev
[22:00:29] <d-snp> oh thanks cheeser
[22:00:51] <d-snp> is there no irc channel? I'm just browsing the code wondering about random things
[22:00:59] <d-snp> not really a forum thread worthy
[22:01:03] <cheeser> not for *those* kinds of questions.
[22:01:19] <cheeser> this channel is usually more end user/developer related
[22:01:38] <cheeser> the kernel devs aren't terribly active here. a few wander in and out not very often.
[22:01:43] <d-snp> ok
[23:30:05] <Ramone> hey all... I'm trying to upsert based on a unique key, and I get `E11000 duplicate key error` for precisely that key... can anyone tell me what might be going on?
[23:31:51] <joannac> Ramone: what's the upsert command?
[23:32:47] <joannac> and full error message?
[23:33:32] <Ramone> well this is through the node.js client, but possibly it's similar enough to the shell... I'm doing a findAndModify( {k : token}, {[some object here without an _id]}, {upsert:true, multi:false, new:true})
[23:34:12] <Ramone> it's a production issue that occurs intermittently that I can't repro locally
[23:34:46] <Ramone> `E11000 duplicate key error index: dojo.sessiontokens.$k_1 dup key: { : "E-2oHRzjyUZoO46vlBIFASZ8kZbbi8DJ" }` is the full error message
[23:35:07] <Ramone> there's a unique index on that "k" field
[23:37:34] <joannac> and if you search for a document with that token, what do you get?
[23:37:40] <Ramone> it exists
[23:38:08] <joannac> with the same fields as in your update?
[23:38:25] <Ramone> the update doesn't succeed, no... it's unchanged
[23:38:45] <Ramone> I do log the before and after, and it's unchanged
[23:39:14] <joannac> where's the line in the mongod log?
[23:39:42] <Ramone> this is from app logs... I don't have access to mongod logs
[23:39:51] <Ramone> and can't repro locally
[23:40:18] <joannac> any chance 2 threads are sending the same update?
[23:40:54] <Ramone> yeah for sure... two servers could be sending the same update
[23:41:14] <Ramone> isn't the op atomic?
[23:42:21] <joannac> right, but it sure sounds like the standard issue you get running an upsert
[23:42:32] <joannac> look in the mongod logs
[23:43:04] <Ramone> alright... unfortunately I don't have access to those... we use mongo as a service
[23:44:33] <Ramone> seems a lot like http://stackoverflow.com/questions/29305405/mongodb-impossible-e11000-duplicate-key-error-dup-key-when-upserting , but no one has a sol'n there either
[23:53:20] <joannac> Ramone: https://docs.mongodb.org/manual/reference/command/findAndModify/#upsert-and-unique-index
[23:56:22] <Ramone> my query only has the field k, which is uniquely indexed
[23:56:38] <joannac> Ramone: did you read the document I linked?
[23:56:55] <Ramone> yeah just the first paragraph so far... now I see what you wanted me to read :)
[23:57:13] <Ramone> `If all the commands finish the query phase before any command starts the modify phase, and there is no unique index on the name field, the commands may each perform an upsert, creating multiple duplicate documents.`
[23:57:18] <Ramone> ie, it's not atomic
[23:59:21] <Ramone> seems like two findAndModify()s running at the same time might have one create and the other try to create
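(The standard remedy is to treat the E11000 as "a concurrent writer won the race and the document now exists," and retry the operation, which then takes the update path. A sketch with the 2.x node driver — the function and field names are hypothetical:)

```javascript
// retry a findAndModify upsert once when a concurrent upsert wins the race
function upsertToken(coll, token, fields, done) {
  var opts = { upsert: true, new: true };
  coll.findAndModify({ k: token }, [], { $set: fields }, opts,
      function (err, result) {
    if (err && err.code === 11000) {
      // duplicate-key race: the doc exists now, so the retry is an update
      return coll.findAndModify({ k: token }, [], { $set: fields }, opts, done);
    }
    done(err, result);
  });
}
```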