PMXBOT Log file Viewer


#mongodb logs for Wednesday the 9th of September, 2015

[00:01:15] <curlhelp> anyone?
[00:01:32] <curlhelp> In MongoDB is it better to have a parent record with an array of child IDs? Or for every child to have a "parent" column with the parent ID? Which is more efficient/cost effective memory wise?
[00:02:15] <cheeser> done
[00:02:22] <StephenLynx> depends on the case, curlhelp
[00:02:33] <StephenLynx> either approach is valid.
[00:02:58] <curlhelp> say i have a "company" collection StephenLynx
[00:03:11] <curlhelp> and I'm expecting thousands of people to be under that collection
[00:03:29] <curlhelp> should i have the "company" collection have "children" : [ID, ID, ID, ID]
[00:03:50] <curlhelp> or just have a "worker" have "company_id":ID
[00:03:55] <cheeser> it does depend on the case, but i generally prefer children pointing to parents.
[00:04:10] <curlhelp> is there any concern via memory that way @cheeser?
[00:04:34] <cheeser> read my SO answer first. ;)
[00:04:52] <cheeser> (this is why crossposting is bad. it fragments the conversation.)
[00:05:03] <curlhelp> ahh ok @cheeser thanks (and sorry will remember next time)
[00:05:22] <curlhelp> to expand on your answer, however (for records); "The children should have parent IDs so that the number of children can grow without bound. Putting children IDs in the parent would cause the parent document to grow over time leading to document moves as the doc needs more space on disk and ultimately the 16MB limit on doc sizes. Though if you get to that 16MB limit you probably have other issues..."
[00:05:30] <curlhelp> is that a limit for an item in a collection?
[00:05:33] <curlhelp> or the whole collection
[00:06:55] <cheeser> a single document can only be 16mb large. you can have a trillion of those documents.
[00:06:58] <StephenLynx> if you expect a large number of children, I suggest you put the children in a separate collection.
[00:07:17] <cheeser> what would that achieve?
[00:07:21] <curlhelp> interesting... yeah so i think i'll definitely go with the children with parent id approach
[00:07:55] <curlhelp> if i want to track something like "facebook" likes (dw i'm not building another social media thing lol), would a "post" have "liked_by":[ID, ID, ID]
[00:08:03] <curlhelp> or same case really
[00:08:12] <curlhelp> or for that should i create another collection "upvotes"
[00:08:14] <StephenLynx> embedded children are good if you expect the children to have multiple parents, to be complex cases or to have a limited number of children.
[00:08:30] <curlhelp> yeah expecting a few thousand
[00:08:33] <cheeser> curlhelp: in that case, yes. separate collection because those are different entities.
[00:08:39] <curlhelp> @cheeser, great thanks
[00:09:11] <curlhelp> thanks for your help everyone, sorry for xposting and hope i was able to mend that by posting the question here
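The trade-off cheeser and StephenLynx describe can be sketched with hypothetical company/worker documents (names and _ids invented here): each worker points at its parent, so the company document stays a fixed size no matter how many workers exist.

```javascript
// Hypothetical documents for the "children point to parent" layout.
// The company doc never grows, so it avoids document moves and the 16MB cap.
const company = { _id: "acme", name: "Acme Corp" };
const workers = [
  { _id: "w1", name: "Ann", company_id: "acme" },
  { _id: "w2", name: "Bob", company_id: "acme" },
];

// What the shell query db.workers.find({company_id: "acme"}) would return
// (an index on company_id keeps this lookup cheap):
const acmeWorkers = workers.filter(w => w.company_id === company._id);
console.log(acmeWorkers.length); // 2
```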
[00:09:19] <curlhelp> interesting stuff to think about
[03:49:21] <energizer> How do I do a find({}, {}) that shows all of the data for every matching document?
[03:52:35] <energizer> in pymongo
[05:42:51] <Jonno_FTW> can anyone help with the nodejs driver?
[05:43:00] <Jonno_FTW> every time I do a query I get 0 results
[05:47:22] <Boomtime> @Jonno_FTW: there are complete self-contained examples here: https://mongodb.github.io/node-mongodb-native/api-articles/nodekoarticle1.html#time-to-query
[05:48:08] <Boomtime> you should also test every query in the mongo shell
[05:52:17] <Jonno_FTW> Boomtime: is there a decent node shell?
[05:57:45] <Boomtime> Jonno_FTW: what's wrong with the regular mongo shell? you need to test the query first
[05:58:23] <Boomtime> "every time I do a query I get 0 results" -> and is that not correct?
[05:58:33] <Jonno_FTW> no
[05:59:02] <Boomtime> what does the exact same query do in the mongo shell?
[05:59:53] <Boomtime> if you provide the output from each of these things, and pastebin it - the call you make in node, and the query line from the mongo shell, someone here might be able to determine what is wrong
[06:00:42] <Jonno_FTW> Boomtime: here's my script: http://pastebin.ws/f5l639
[06:00:48] <synthmeat> anyone managed to build mongodb 3.x on freebsd?
[06:01:41] <Jonno_FTW> woops nvm, turns out I suck at nodejs
[06:01:50] <Jonno_FTW> was exiting/closing before the result came back
[06:02:33] <Boomtime> good that you found it
[06:02:41] <Boomtime> .. and yes, node is like that
[06:04:11] <synthmeat> this is what i get https://gist.github.com/synthmeat/23396e0e15e3b757a0cc
[06:04:59] <Jonno_FTW> synthmeat: and you ran with -v?
[06:05:43] <synthmeat> Jonno_FTW: no, i don't think that works. "scons all --ssl -v"?
[06:06:04] <synthmeat> nope, that just shows version of scons ^
[06:07:32] <Jonno_FTW> I got no idea then
[06:15:17] <Jonno_FTW> can I pass a unix timestamp to a find query in nodejs driver??
[06:15:47] <Jonno_FTW> for a datetime type
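For the datetime question: BSON dates map to JS Date objects in the node driver, so a raw unix timestamp (in seconds) has to be wrapped in a Date first. A sketch, with a made-up timestamp and field name:

```javascript
// Unix timestamps are in seconds; JS Dates are built from milliseconds.
const unixSeconds = 1441756800; // hypothetical value (2015-09-09 00:00 UTC)
const asDate = new Date(unixSeconds * 1000);

// Sketch of the driver call (collection and field names invented):
// collection.find({ created: { $gte: asDate } }).toArray(callback)
console.log(asDate.toISOString()); // 2015-09-09T00:00:00.000Z
```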
[09:08:19] <mapc> Hi. Is there some sort of mongo db scheduler?
[09:08:28] <mapc> I mean query scheduler.
[09:09:10] <mapc> Specifically we're having the problem that we have some very expensive, long-running queries and we'd like to give them a lower priority/nice value
[09:09:39] <mapc> so those queries only use leftover resources and don't claim any resources for their own.
[09:12:16] <Derick> there is no such thing
[09:14:44] <mapc> that's a shame :)
[09:17:03] <vagelis> Is it possible to find the document with the most fields? :S
[09:18:18] <gcfhvjbkn> i get "could not verify that config servers are in sync" error on my mongos instances; after googling a little bit i see that i should find the problem on my own and fix it manually; my problem is that i don't really know what to look for, there seems to be no documentation about this
[09:26:11] <gcfhvjbkn> btw is it ok if some of the config servers only have two collections in the "config" db? ("lock" and "lockpings") out of all these http://docs.mongodb.org/manual/reference/config-database/
[10:09:42] <gcfhvjbkn> what happens if i wipe out configsvr data folder? do i lose my data?
[10:52:13] <Fernandos> hi
[12:50:47] <taspat> hi, i have this: this.find(something, cc).sort({ lastU: -1})
[12:51:18] <taspat> i want to say: take from the returned collection items from index X to index Y
[12:51:43] <taspat> with underscore I can do that, but maybe there is some built-in function in mongo?
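mongo has this built in: on a cursor, `skip()` and `limit()` do what underscore's slice would. A sketch of taspat's query plus a plain-array model of what the two calls select (X, Y, and the data are invented):

```javascript
// Cursor form (sketch): this.find(something, cc)
//   .sort({ lastU: -1 }).skip(X).limit(Y - X + 1)
// takes items X..Y (0-based, inclusive) of the sorted result.

// Plain-array model of the same selection:
const sorted = [9, 8, 7, 6, 5, 4]; // pretend: lastU values, sorted descending
const X = 2, Y = 4;
const page = sorted.slice(X, Y + 1); // skip(X) + limit(Y - X + 1)
console.log(page); // [ 7, 6, 5 ]
```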
[12:59:28] <nekyian> I am building a website in PHP & MongoDB. I have this weird error while running an import script. Can you help?
[12:59:37] <nekyian> [MongoCursorTimeoutException]
[12:59:37] <nekyian> localhost:27017: Read timed out after reading 0 bytes, waited for 30.000000 seconds
[13:18:28] <Kobbb> Hey guys
[13:30:30] <christo_m> is there any way to clone from local to remote directly?
[13:30:31] <christo_m> i dont want to log into the remote and copy from there and have to open ports etc to access my machine here
[13:41:15] <christo_m> anyone?
[13:52:43] <darius93> christo_m, clone what?
[14:03:03] <EricL> I have a sharded Mongo 2.4 cluster with 3 shards, 3 replicas per shard. I am trying to reduce it down to 2 shards. I followed these instructions: http://docs.mongodb.org/v2.4/tutorial/remove-shards-from-cluster/ and have made it to the point where I am just checking the status. However the remaining chunks hasn't changed since I started it an hour ago and the status is "draining ongoing". How do I know if something is happening?
[14:03:08] <EricL> I don't see anything in the logs.
[14:03:55] <CustosL1men> hi
[14:04:10] <CustosL1men> is there something like the db-engines ranking that was maintained before 2011?
[14:37:45] <preyalone> how do i select certain fields, without matching on any particular values for those fields?
[14:40:31] <EricL> preyalone: db.collection_name.find({}, {field1: 1, field2:1, field3: 1});
[14:42:15] <preyalone> odd, db.config.find({}, {submission:1}) returns documents with _id's, but no submissions
[14:43:36] <cheeser> what do your documents look like?
[14:44:05] <preyalone> my documents don't appear to have a submission field, but that query is returning results anyway, without submission fields.
[14:44:36] <cheeser> why wouldn't it?
[14:45:53] <preyalone> > db.config.find({}, {submission:1}).limit(1)
[14:45:53] <preyalone> { "_id" : ObjectId("51ee88ac76a2951c7a000000") }
[14:46:12] <saml> that's not query, it's projection
[14:46:21] <saml> db.config.find({submission:1})
[14:46:29] <MatheusOl> preyalone: perhaps you wanted to use $exists
[14:46:52] <MatheusOl> preyalone: db.config.find({submission: {$exists: true}})
[14:47:10] <preyalone> !
[14:48:06] <preyalone> MatheusOl: thanks!
[14:48:39] <MatheusOl> yw
[14:49:22] <EricL> I have a sharded Mongo 2.4 cluster with 3 shards, 3 replicas per shard. I am trying to reduce it down to 2 shards. I followed these instructions: http://docs.mongodb.org/v2.4/tutorial/remove-shards-from-cluster/ and have made it to the point where I am just checking the status. However the remaining chunks hasn't changed since I started it two hours ago and the status is "draining ongoing". How do I know if something is happening? I
[14:49:22] <EricL> don't see anything in the logs
[16:29:48] <kashike> does mongodb 3.0.6 not work on ubuntu 15.04?
[16:31:27] <StephenLynx> not officially supported.
[16:32:00] <kashike> looks like I'm downgrading to 2.6.3 :(
[16:32:07] <StephenLynx> if you are going to use ubuntu (which a considerable number of people don't recommend for a server) you would be better off using 14.04.
[16:32:12] <StephenLynx> or use a good server distro.
[16:32:54] <kashike> it's a bit too late to be switching OS's
[16:33:02] <StephenLynx> I personally can recommend centOS for use with mongo.
[16:36:35] <kashike> is there really no fix for this? running the mongod process manually works, just the init script doesn't work
[16:37:36] <BadCodSmell> kashike: mongo provides custom repos
[16:38:14] <BadCodSmell> I only see for older versions though :(
[16:38:34] <kashike> I'm using 'deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse' as the repo currently
[16:38:49] <BadCodSmell> I have similar problems with this for debs :(
[16:39:01] <BadCodSmell> I suggest using a proper server distro like centos
[16:39:08] <StephenLynx> I told you the fix.
[16:39:11] <StephenLynx> change your system.
[16:39:13] <BadCodSmell> Debian is no good for servers and I doubt ubuntu is much better.
[16:39:14] <StephenLynx> or build from source.
[16:39:22] <BadCodSmell> They do things like patch in loads of random patches
[16:39:26] <StephenLynx> yeah, debian is bad for servers.
[16:39:32] <BadCodSmell> so even sourcing their packages and manually building an update is a nightmare
[16:39:37] <StephenLynx> anything derived from debian is, IMO.
[16:39:53] <BadCodSmell> at least with things like centos you know all those patches are almost always going to be to make it build properly, or backports, which you can ignore anyway
[16:39:56] <StephenLynx> for a server you either want a rolling distro or something derived from RHEL
[16:40:15] <StephenLynx> a rolling distro will just have the issue with possible instability.
[16:40:26] <StephenLynx> but you will never be left out.
[16:40:39] <BadCodSmell> one package in debian I tried to manually update, but gave up after it had several patches adding new features, including a 100k patch to add ipv6 and a massive patch to add authentication
[16:41:02] <BadCodSmell> And where they get those patches from *shrugs* unofficial pull requests in git?
[16:41:11] <BadCodSmell> it's a hobbyist distro
[16:42:19] <BadCodSmell> and if you look at the complexity of their packaging and how much work they give themselves compared to a simple spec file, OMG, and the awful quality of the huge helper bash scripts (vomit)
[16:42:33] <StephenLynx> /\
[16:44:33] <kashike> doesn't help that error messages aren't helpful at all
[16:44:37] <kashike> 'Failed at step USER spawning /usr/bin/mongod: No such process'
[16:45:50] <StephenLynx> sounds very clear to me.
[16:46:46] <kashike> oh? please share.
[16:47:29] <StephenLynx> there was no such process called mongod.
[16:48:11] <StephenLynx> at least at /usr/bin/mongod
[16:48:17] <bendem> it's very clear it didn't spawn, doesn't make it helpful in any way tho
[16:48:44] <StephenLynx> have you tried running which mongod?
[19:04:33] <mprelude> http://stackoverflow.com/questions/32486579/implementing-mongodb-sasl-authentication-in-cl-mongo In case anyone knows how this works? Have got SCRAM implemented, but having issues making initial request.
[19:11:10] <StephenLynx> can lisp interface with C binaries?
[19:13:22] <mprelude> StephenLynx: Yes, but that removes the portability
[19:54:45] <saml> how can I make a hot standby node?
[19:55:37] <saml> i don't want clients to be connected to the hot standby member with mongodb://mongo1,mongo2,mongo3/test where mongo4 is hot standby
[19:56:13] <saml> but mongo4 gets all content of mongo1,mongo2,mongo3 (rs0)
[19:57:05] <saml> http://docs.mongodb.org/master/core/replica-set-hidden-member/
[20:01:54] <deathanchor> saml: why doesn't hidden member work for you?
[20:02:02] <saml> works for me
[20:02:11] <saml> i didn't even try
[20:02:23] <deathanchor> oh that's how I do backups
[20:02:31] <saml> nice
[20:03:06] <saml> delayed member seems better for backup
[20:03:32] <deathanchor> oh I just stop mongod on the hidden, start a backup of the disk, and then start it back up.
[20:03:52] <deathanchor> it's automatically delayed
[20:04:15] <saml> that's true
[20:05:15] <deathanchor> delayed members are nice to run queries on "historic" data
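deathanchor's setup can be sketched as a replica set reconfig in the mongo shell (member index and delay value invented here; `slaveDelay` is the 3.0-era field name):

```javascript
// Run against the primary (sketch, assumes member 3 is the backup node).
cfg = rs.conf()
cfg.members[3].priority = 0      // never eligible to become primary
cfg.members[3].hidden = true     // invisible to clients using the seed list
cfg.members[3].slaveDelay = 3600 // optional: lag 1h behind for "historic" queries
rs.reconfig(cfg)
```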
[20:06:15] <cheeser> "delayed backups" seems to be a fundamentally flawed concept
[20:06:35] <deathanchor> cheeser: aren't all backups delayed?
[20:06:51] <cheeser> not at all
[20:07:43] <deathanchor> other than replica members how can backups not be delayed?
[20:09:04] <cheeser> like so: https://www.mongodb.com/cloud/backup
[20:10:28] <saml> so it's about definition of delay
[20:11:05] <saml> backup history
[20:21:29] <Doyle> Does anyone know if the response to this ticket is accurate? https://jira.mongodb.org/browse/SERVER-19261 "local.oplog.rs Assertion failure a() != -1 src/mongo/db/storage/record.cpp 538"
[20:21:53] <Doyle> Data corruption via network or disk?
[20:22:27] <cheeser> did you do as requested and ask on mongodb-user?
[20:22:56] <cheeser> and, yes, since it came from a mongodb emp, i'd consider it fairly accurate.
[20:25:21] <Doyle> Ahhh, that little leaf?
[20:25:26] <Doyle> = mongodb emp?
[20:25:47] <cheeser> yep
[20:25:49] <Doyle> And I had no idea there was a mongodb-user channel
[20:26:00] <cheeser> not channel. mailing list.
[20:26:10] <Doyle> oh... :/
[20:26:17] <cheeser> https://groups.google.com/forum/#!forum/mongodb-user
[20:26:22] <cheeser> he mentioned in the response
[20:26:34] <Doyle> I like IRC. Not much history with mailing lists.
[20:29:05] <saml> Doyle, https://jira.mongodb.org/browse/SERVER-7890
[20:35:05] <Doyle> Interesting. I hadn't noticed before, but the oplogSizeMB reports the default 51200, while the local db is 120GB, as set when I adjusted the oplog size initially.
[20:37:20] <Doyle> db.local.stats() gives ... { "ok" : 0, "errmsg" : "Collection [test.local] not found." }
[20:39:11] <Doyle> Immediately before the assertion failure, "replSet initial sync cloning db: test"
[20:39:30] <Doyle> I'm not a super mongodb guy yet, working on it, but that seems fishy to me.
[22:15:06] <EricL> I have a sharded Mongo 2.4 cluster with 3 shards, 3 replicas per shard. I am trying to reduce it down to 2 shards. I followed these instructions: http://docs.mongodb.org/v2.4/tutorial/remove-shards-from-cluster/ and have made it to the point where I am just checking the status. However the remaining chunks hasn't changed since I started it two hours ago and the status is "draining ongoing". How do I know if something is happening?
[22:15:32] <EricL> Sorry, started it ten hours ago.
[22:28:24] <Fernandos> I added a new array field to my schema. How can I extend existing objects in a collection that don't have this field? Creating new objects is no problem, but editing objects created without that field doesn't work.
[22:31:55] <tejasmanohar> what mongodb query can i use to add a certain field to all documents?
[22:32:01] <tejasmanohar> add if it doesn't already exist is preferable
[22:32:06] <tejasmanohar> with a default val
[22:33:12] <EricL> db.collection.update({}, {new_field:"val"}, false, true)
[22:33:40] <EricL> That might overwrite.
[22:34:25] <tejasmanohar> overwriting is ok
[22:34:27] <tejasmanohar> in this case
[22:34:32] <tejasmanohar> hmm
[22:34:41] <EricL> Sorry, I messed that up.
[22:34:51] <tejasmanohar> wait
[22:34:56] <tejasmanohar> that will override whole object, no?
[22:35:03] <tejasmanohar> i'm talking about overriding a field
[22:35:17] <EricL> db.collection.update({}, {$set:{new_field:"val"}}, false, true)
[22:35:29] <tejasmanohar> oh
[22:35:30] <tejasmanohar> $set
[22:35:33] <tejasmanohar> let me look at that
[22:35:34] <EricL> It will, that's why I said I messed it up.
[22:35:39] <tejasmanohar> gotcha
[22:37:41] <tejasmanohar> db.getCollection('pres').update({}, {$set:{new_field:'source'}}, false, true)
[22:37:42] <tejasmanohar> :)
[22:38:41] <EricL> Yep, that.
[22:39:23] <cheeser> in your query, use $exists to check that it doesn't exist first.
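cheeser's $exists guard, as a shell sketch plus a plain-JS model of its effect (field name and default value taken from the conversation; the documents are invented):

```javascript
// Shell form (sketch):
// db.getCollection('pres').update(
//   { new_field: { $exists: false } },  // only docs missing the field
//   { $set: { new_field: 'source' } },
//   { multi: true })                    // ...and update all of them

// Plain-JS model: the guard leaves existing values untouched.
const docs = [{ _id: 1 }, { _id: 2, new_field: 'custom' }];
for (const d of docs) {
  if (!('new_field' in d)) d.new_field = 'source';
}
console.log(docs.map(d => d.new_field)); // [ 'source', 'custom' ]
```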
[22:42:37] <tejasmanohar> yeah
[22:42:39] <tejasmanohar> did :)
[23:16:19] <kashike> StephenLynx: just thought I'd let you know I have mongodb 3.0.6 running on ubuntu 15.04 successfully
[23:28:13] <StephenLynx> ok