PMXBOT Log file Viewer


#mongodb logs for Monday the 20th of July, 2015

[01:17:13] <sovern> What happens if a replica member is way behind the master and its fellow slaves and it's started? does it accept connections and return out of date data?
[01:42:47] <nofxx> sovern, iirc no, it'll be in STARTUP or something state
[01:43:05] <sovern> nofxx: it came up in STARTUP2, moved to SECONDARY, and accepted connections.
[01:43:09] <nofxx> until it's up to date, it's usually very fast
[01:43:25] <nofxx> sovern, when it moved to secondary wasn't it already up to date?
[01:43:33] <sovern> nope. it was 90 hours out.
[01:45:16] <sovern> I can't find anything saying it SHOULDN'T have moved to secondary. But I'm curious what happens when it does -- and why my app showed a 3x increase in mongo query response.
[01:46:26] <nofxx> which stat are you using to get this 90 hours?
[01:46:31] <sovern> mms
[01:46:51] <sovern> The "Replicaiton Lag" chart counter
[01:46:55] <nofxx> hm, mms isn't much realtime, I think it's already up to date
[01:47:06] <nofxx> test with rs.status()
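Editor's note: `rs.status()` reports each member's last applied op time, and lag is just the difference against the primary. A minimal sketch of that arithmetic follows; the `members` shape imitates `rs.status()` output, and all names and timestamps here are invented for illustration.

```javascript
// Sketch: computing replication lag from rs.status()-style output.
// The document below imitates the fields rs.status() returns; the
// hostnames and optimeDate values are made up.
const status = {
  members: [
    { name: "db1:27017", stateStr: "PRIMARY",   optimeDate: new Date("2015-07-20T01:40:00Z") },
    { name: "db2:27017", stateStr: "SECONDARY", optimeDate: new Date("2015-07-20T01:39:58Z") },
    { name: "db3:27017", stateStr: "SECONDARY", optimeDate: new Date("2015-07-16T07:40:00Z") }
  ]
};

const primary = status.members.find(m => m.stateStr === "PRIMARY");

// Lag in seconds for each secondary, relative to the primary's last applied op.
const lag = status.members
  .filter(m => m.stateStr === "SECONDARY")
  .map(m => ({
    name: m.name,
    lagSeconds: (primary.optimeDate - m.optimeDate) / 1000
  }));
```

A 90-hour-stale member like sovern's would show up here as roughly 324000 seconds of lag, regardless of what a monitoring dashboard's delayed chart claims.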
[01:47:10] <sovern> it's up to date now
[01:47:13] <nofxx> or the oplog itself
[01:47:31] <nofxx> so I think it's what I said: it's pretty quick to sync
[01:47:47] <nofxx> it was up to date, moved to secondary and mms was still showing old data
[01:47:58] <nofxx> but I might be wrong, let's see what other folks think
[01:48:13] <sovern> it took 2 hours to sync itself
[01:48:18] <sovern> i started it back up hidden
[01:49:41] <nofxx> damn, that's a huge data app. My biggest ~100M rows table take ~15min to fully replicate.
[01:49:50] <sovern> 1.2TB of data
[01:50:22] <nofxx> big table! heh
[01:50:46] <sovern> one of 7 clusters with 1-2tb/ea
[01:51:00] <nofxx> nice, is it some public site?
[01:51:05] <sovern> yup
[01:54:34] <nofxx> sovern, now I'm curious too, and sorry I couldn't help more. Maybe it's up to the driver then? move older nodes to the back of the line?
[02:07:04] <sovern> nofxx: and hiding nodes causes a reelection.
[02:58:50] <bros> http://cryto.net/~joepie91/blog/2015/07/19/why-you-should-never-ever-ever-use-mongodb/ Should I switch back to SQL?
[03:04:38] <preaction> do what you want
[05:40:06] <_blizzy_> I'm being called ignorant for using mongodb
[05:40:17] <_blizzy_> :/
[06:06:17] <_blizzy_> dead.
[06:06:38] <_blizzy_> lol.
[06:06:51] <_blizzy_> there's an argument in programming about why mongodb is bad
[06:06:58] <_blizzy_> I was being called ignorant for using it
[09:43:32] <Zelest> lazy question, can I have 4 replicaset nodes or is the rule still to use an odd number of nodes in a replicaset?
[09:46:29] <nixstt> Zelest: you should have an odd number, if it’s even add an arbiter -> http://docs.mongodb.org/manual/core/replica-set-architectures/
[09:47:53] <Zelest> ah, same as before then :D thanks :)
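Editor's note: the arithmetic behind the odd-number rule nixstt links to is simple: elections need a strict majority of voting members, so a 4th data-bearing node raises the majority threshold without raising fault tolerance. A small sketch:

```javascript
// Why an odd member count is recommended: elections require a strict
// majority of voting members, so going from 3 to 4 nodes does not let
// you survive any additional failures.
const majority = n => Math.floor(n / 2) + 1;
const faultTolerance = n => n - majority(n); // members you can lose and still elect a primary

const results = [3, 4, 5].map(n => ({
  n,
  majority: majority(n),
  tolerates: faultTolerance(n)
}));
// 3 nodes and 4 nodes both tolerate exactly one failure; 5 tolerate two.
```

This is why the usual advice for a 4-node deployment is "add an arbiter" (a voting, non-data member), bringing the voting count back to an odd number.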
[09:55:03] <_blizzy_> I hear a lot of criticism about mongo. what are some things to say to counter that?
[10:32:06] <coalado> Hey.
[10:32:20] <coalado> A Mongo Query can only use one index, correct?
[11:02:11] <Soapie> Hi. Can anyone point me to a good example of Unwind using C# .Net driver 2.0?
[13:52:22] <schlitzer> hey, i have a mongo 2.4 installation with auth, and i like to upgrade to 2.6, therefore i need to have a user with the "userAdminAnyDatabase" role
[13:52:32] <schlitzer> how can i add this role to my admin user?
[13:56:25] <schlitzer> i cannot use "db.grantRolesToUser" for this, because that would require schema version 3 or 2, but my current version is 1
[14:45:41] <johnflux> I'm copy and pasting code, and I have some comments in my code like /* blah */
[14:46:40] <johnflux> if my comment contains a '
[14:46:44] <johnflux> then the shell complains about it!
[14:48:00] <johnflux> e.g.
[14:48:02] <johnflux> db.actionlaunch./* hello ' */
[14:48:04] <johnflux> find()
[14:48:11] <johnflux> doesn't work. remove the ' and it works
[15:12:12] <boucher> i have what’s probably a stupid question: i’m trying to set up a username/password on my database — i created a single user, ‘admin’ with the “root” built in role. i’m able to connect with that user (i’ve started with --auth), but that user doesn’t seem to have permissions on my actual app database — i just keep getting unauthorized errors when my app tries to do anything
[15:12:26] <boucher> does the root user really not have permissions on anything other than the admin database?
[15:12:33] <boucher> root role^
[15:12:48] <boucher> the docs seem to say it should encompass readwriteanydatabase and adminanydatabase roles
[15:17:48] <boucher> is the issue that I need to run a db.grantPrivilegesToRole() first?
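Editor's note: the `root` role does include `readWriteAnyDatabase`, so a common cause of boucher's symptom is authenticating against the wrong database: the user lives in `admin`, so the driver must be told to authenticate there even while working against the app database. A sketch, with placeholder host/database/credential names:

```javascript
// A user created in the "admin" database must authenticate against "admin"
// (the authSource), even when the app then uses another database.
// All names below are placeholders, not boucher's actual values.
const user = "admin", pass = "secret";
const host = "localhost:27017", appDb = "myapp";

// Driver connection string — note authSource=admin at the end:
const uri = `mongodb://${user}:${pass}@${host}/${appDb}?authSource=admin`;

// Rough mongo shell equivalent (needs a live server):
//   mongo myapp -u admin -p secret --authenticationDatabase admin
```

Without `authSource=admin` (or `--authenticationDatabase admin`), the driver tries to find the user in the app database, fails, and every operation comes back unauthorized.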
[15:29:32] <saml> {a:[1,2,3], b:[1,2,3]} i have docs like that, in aggregation, how can I $unwind a and b and zip them somehow so that I get {a:1,b:1}, {a:2,b:2}, {a:3, b:3} ?
[15:29:53] <saml> when i $unwind twice, once for a and another for b, too many rows
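Editor's note: at the time of this log there was no server-side operator for zipping two parallel arrays (`$zip` arrived in a later release), so double-`$unwind` produces the cross product saml is seeing. One option is to zip client-side; a minimal sketch:

```javascript
// Pair a[i] with b[i] client-side, producing one object per index —
// the shape saml was after: {a:1,b:1}, {a:2,b:2}, {a:3,b:3}.
const doc = { a: [1, 2, 3], b: [1, 2, 3] };

const zipped = doc.a.map((value, i) => ({ a: value, b: doc.b[i] }));
```

The same loop works inside a mongo shell `forEach` over a cursor; only the pairing logic matters here.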
[15:44:08] <[diecast]> is it safer to remove a collection, then insert it or just update in place?
[15:44:20] <Derick> a collection or a document?
[15:44:30] <Derick> and what do you mean by "safe"?
[15:45:23] <[diecast]> db.collection_name.remove()
[15:45:40] <Derick> that removes every document from a collection, but not the collection
[15:45:49] <Derick> I would recommend db.collection_name.drop()
[15:46:03] <Derick> but be aware, that if you have set options on a collection, you need to set them again
[15:46:22] <[diecast]> ok, getting my terminology mixed up. i'm removing documents first, then inserting them.
[15:46:50] <[diecast]> i wanted to be sure there were no lingering items in the document
[15:47:12] <[diecast]> not sure if an update accounts for every possible change
[15:47:30] <Derick> ok
[15:48:22] <[diecast]> is that an irrational thought?
[15:48:42] <Derick> no...
[15:48:50] <[diecast]> i didn't want to have to verify after the update that everything was correct so i thought it would be cleaner to remove/insert.
[15:48:51] <[diecast]> ok
[15:49:12] <johnflux> how can I kill everything running?
[15:49:28] <johnflux> i have some javascript that is running, inserting lots of fields
[15:49:32] <johnflux> I can't work out how to kill it
[15:49:42] <johnflux> (it's running in a mongo shell)
[15:49:57] <[diecast]> ok, so i will keep doing remove() and insert() - i don't want to drop the collection
[15:50:10] <[diecast]> unless it has some added benefit over remove()
[15:50:16] <johnflux> oh, I can kill the mongo instance :)
[15:50:28] <Derick> johnflux: just quitting the shell should do
[15:50:30] <[diecast]> sounds like it is more work in case i have set options
[15:50:50] <johnflux> Derick: if it's running, I can't quit the mongo shell
[15:50:58] <johnflux> Derick: unless you mean killing the konsole
[15:51:09] <Derick> johnflux: press Ctrl-C
[15:51:37] <johnflux> Derick: yeah tried that
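Editor's note: when Ctrl-C doesn't help, the server-side way to stop johnflux's script is to find its operation with `db.currentOp()` from a second shell and pass the opid to `db.killOp()`. A sketch of picking out the long-running op; the `inprog` document below imitates `db.currentOp()` output with invented values:

```javascript
// Sketch: selecting a long-running operation from db.currentOp()-style
// output so its opid can be passed to db.killOp(). Values are invented.
const currentOp = {
  inprog: [
    { opid: 101, op: "insert", ns: "test.actionlaunch", secs_running: 642 },
    { opid: 102, op: "query",  ns: "test.users",        secs_running: 0 }
  ]
};

// Operations running longer than 60s — candidates to kill.
const longRunning = currentOp.inprog.filter(op => op.secs_running > 60);

// In a second mongo shell you would then run:
//   db.killOp(101)
```

Killing the mongod (or the terminal) works too, as johnflux did, but `killOp` stops just the one operation.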
[16:00:19] <saml> [diecast], update won't change _id
[16:00:39] <[diecast]> saml: thanks, good to know
[16:02:26] <[diecast]> is there any benefit to dropping the collection?
[16:02:47] <saml> yah if you don't want collection, you can drop it
[16:03:42] <[diecast]> ok, good point
[16:30:10] <leandroa> hi, what happens if a key from a unique compound index is missing? it's still indexed? and how this missing key is considered? null?
[16:30:25] <Derick> null
[16:31:10] <leandroa> so, if I want to insert a new doc with this missing key it will be considered as a duplicate right?
[16:31:27] <Derick> if the other part of the compound key is the same, then yes
[16:32:01] <leandroa> thanks Derick
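Editor's note: a small client-side simulation of the behavior Derick describes — this is not how the server stores index entries, just the observable rule: a missing field is indexed as `null`, so two docs missing the field with the same value for the other key collide under a unique compound index.

```javascript
// Simulate unique-compound-index key extraction: a missing field
// contributes null to the index key.
const indexKey = (doc, fields) =>
  JSON.stringify(fields.map(f => (f in doc ? doc[f] : null)));

const fields = ["a", "b"];
const seen = new Set();
const wouldViolateUnique = doc => {
  const key = indexKey(doc, fields);
  if (seen.has(key)) return true;
  seen.add(key);
  return false;
};

wouldViolateUnique({ a: 1, b: 2 });       // indexed as [1, 2]
wouldViolateUnique({ a: 1 });             // indexed as [1, null]
const dup = wouldViolateUnique({ a: 1 }); // [1, null] again → duplicate
```

This is exactly leandroa's case: inserting a second doc that omits the key, with the same value for the rest of the compound, is rejected as a duplicate.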
[17:16:38] <kaimast> hi. is there a way to use geojson with a flat surface?
[17:17:01] <kaimast> or is there a way to have more complex shapes in the legacy format? it does only seem to support points...
[17:20:13] <kaimast> \
[17:58:40] <kaimast> it seems very limited that 2dsphere can only handle one specific datum. am I missing something? what if i want to index coordinates that aren't on earth?
[18:14:09] <blizzow> Does the mongorouter process use a lot more CPU/RAM between 2.6 and 3.0?
[18:29:37] <kaseano> Hi, I'm new to mongo, I'm struggling with this data model: students write an essay, other students vote on it. Normally in SQL you'd have a students table, an essay table with a FK to the author, and a many-many third table for voting ex "essays-students"
[18:29:38] <kexmex> blizzow: what's mongorouter
[18:30:01] <kaseano> in Mongo is it best to have a student table, with an array property of essays, then each essay has an array of students that voted?
[18:30:08] <kaseano> like nested, nested documents?
[18:30:28] <kexmex> i'd have a collection with just essays
[18:30:31] <kexmex> and users
[18:30:50] <kaseano> oh ok kexmex that felt a lot more comfortable
[18:30:58] <kaseano> it seemed like you weren't supposed to use FKs in mongo tho
[18:31:02] <kexmex> why not
[18:31:14] <kexmex> mysql didn't have PK/FK for a while :)
[18:31:19] <kaseano> haha because some website somewhere said that lol I'm glad you disagree though
[18:31:34] <kexmex> kaseano: there are document size limits
[18:31:53] <kaseano> oh so an even better reason to split documents across multiple collections
[18:31:58] <kexmex> imo, in mongo you just think of your usecases
[18:32:10] <kexmex> if you need to pull a single essay with all the votes in one shot
[18:32:13] <kexmex> then you have a document for that
[18:32:36] <kaseano> oh ok I get what you're saying
[18:32:56] <kexmex> you can do whatever you want basically but you need to think about concurrency
[18:32:59] <kexmex> atomicity
[18:33:05] <blizzow> kexmex: sorry, mongos
[18:33:06] <kaseano> well so in that case, a user collection, and then a votes collection would be alright right? Granted it'd be two queries (get the user, get the votes by useriD)
[18:33:09] <kexmex> idempotency :)
[18:33:23] <kexmex> kaseano: wouldn't the user be in memory already?
[18:33:23] <johnflux> is there any way to know the progress of a job?
[18:33:40] <kaseano> yeah probably
[18:33:43] <kaseano> ok I understand
[18:33:50] <kexmex> what's the usecase?
[18:33:51] <kaseano> thanks a lot kexmex
[18:34:07] <kexmex> oh, votes collection, i dunno, up to you
[18:34:19] <kexmex> maybe keep votes in essay doc, in an array
[18:34:34] <kaseano> a really simple app at the moment, but the landing shows a bunch of essays written (and user who wrote it), if you click on it, it gives more info ex who voted
[18:34:45] <boutell> Hi. How do I time a single query in MongoDB 3.0.x? In 2.6.x, the explain() method included the time for the query. In 3.0.x it does not. Thanks.
[18:34:49] <kexmex> ahh
[18:35:07] <kaseano> so mongo's a lot more contextual about how you store data
[18:35:14] <kexmex> yea i guess you'd need to have a cache of user info in memory
[18:35:15] <kaseano> normally in SQL, it's very straight forward
[18:35:25] <kexmex> yea sql is easy... dont repeat values, use ids :)
[18:35:31] <kaseano> yeah
[18:35:39] <kexmex> what i like about mongo is that you can write a big chunk in one shot
[18:35:46] <kexmex> with sql, you need to do many many inserts
[18:35:54] <kexmex> into diff tables... for one concept thingie
[18:35:55] <kaseano> right and a bunch of JOINs
[18:36:24] <kexmex> like if you have an essay and it has paragraphs or something... you'd need to insert an essay, get the ID, and then insert each paragraph individually in a diff table
[18:36:27] <kexmex> bla bla
[18:36:31] <kexmex> here you just jam it in
[18:36:35] <kaseano> ah got it
[18:36:55] <kaseano> but I have to think more haha, plus if your use case changes, don't you have to change the data model?
[18:37:16] <kaseano> although i can't think of an example
[18:37:19] <kexmex> kaseano: you can have a job that takes data from Essays and inserts into a better suited collection
[18:37:32] <boutell> heh, no one knows how to time a query?
[18:37:35] <kaseano> oh so you run jobs when you have to change the data model
[18:37:37] <kexmex> like if you want all votes for one user... you can have a job
[18:37:38] <boutell> I mean I could write some node just to do this but...
[18:37:46] <boutell> I was also hoping to avoid messing around with the profiling features just to time one query
[18:37:52] <kexmex> you can have a collection that stores all essay ids that user voted on for example
[18:38:09] <kexmex> and then the job would populate that collection by scanning essays collection for example
[18:38:21] <kexmex> and it would eventually catch up... usually relatively quickly for anyone to notice
[18:38:23] <kaseano> wouldn't you run into redundant data
[18:38:34] <kexmex> not really
[18:38:35] <kaseano> which would lead to out of sync data
[18:38:37] <kaseano> oh
[18:38:49] <kexmex> yea it's out of sync momentarily... but for that usecase it's not critical
[18:39:04] <kaseano> I feel like I can't wrap my brain around it lol I'm going to keep trying though
[18:39:19] <symbol> Is it convention to camelCase database names?
[18:39:19] <kexmex> like it depends what you'll need
[18:40:21] <kexmex> symbol: i use all lowercase with _
[18:40:24] <kexmex> user_id
[18:40:57] <symbol> kexmex: I assume you're talkings about fields?
[18:41:07] <symbol> *talking
[18:41:09] <kexmex> symbol: database lowercase too for me
[18:41:21] <kexmex> kaseano: in mongo there are no transactions so you can't roll back
[18:41:24] <symbol> Ah, so you'd do online_store vs onlineStore
[18:42:00] <kexmex> so if you insert many pieces of one thing, when it breaks in the middle, you'll have a buncha orphan stuff-- so i prefer big chunks where it is logical
[18:42:01] <kaseano> well they're all one shot queries if you correctly set up the data model right?
[18:42:05] <kaseano> one insert*
[18:42:06] <kexmex> yea
[18:42:25] <kexmex> gtg
[18:42:28] <kaseano> haha wow I love it
[18:42:30] <kaseano> aw
[18:43:48] <kaseano> I have one other quick question, can you store objects with methods into mongo? Or do I have to strip off the methods into a new object? Also can I bind the JSON (BSON) directory onto an object with methods, or do I have to manually do that?
[18:43:54] <kaseano> kinda like how an ORM would work
[18:43:59] <kaseano> I'm on Node
[18:44:14] <kaseano> directly* not directory
[18:45:12] <kaseano> it looks like people use "mongoose" for that
[18:45:27] <kaseano> maybe.
[20:12:07] <domo> hello. when running a query like 'values.foo.bar': { $in: [ .., .. ] }, can I index on "values.foo.bar" even if data sometimes won't exist?
[20:12:18] <domo> instead of indexing on just "values"
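Editor's note: domo can index the dotted path directly; docs missing the path are still handled (indexed as `null` by default, or skipped entirely with `sparse: true`). A sketch of the index spec plus a pure-JS predicate for whether a doc would get an entry under sparse semantics:

```javascript
// Index spec as you would pass to createIndex in the mongo shell:
//   db.coll.createIndex({ "values.foo.bar": 1 }, { sparse: true })
const spec = { "values.foo.bar": 1 };
const options = { sparse: true };

// Whether a doc would receive an entry in a sparse index on that path
// (i.e. whether the dotted path exists in the doc):
const hasPath = (doc, path) =>
  path.split(".").every(part => {
    if (doc != null && typeof doc === "object" && part in doc) {
      doc = doc[part];
      return true;
    }
    return false;
  });

const indexed = hasPath({ values: { foo: { bar: 7 } } }, "values.foo.bar");
const skipped = !hasPath({ values: {} }, "values.foo.bar");
```

One caveat worth knowing: with a sparse index, queries that could need the missing docs (e.g. `$in` including `null`) may decline to use the index.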
[20:30:09] <StephenLynx> GothAlice what was the name of that kind of hosting where you provide the hardware?
[20:30:15] <StephenLynx> co something
[20:30:18] <GothAlice> "Colocation"
[20:30:34] <StephenLynx> that
[20:30:36] <StephenLynx> thanks
[21:00:49] <amitprakash> What would help most to speed up indexing? CPU? Ram? IO?
[21:07:47] <Wulf> Good Morning!
[21:09:38] <Wulf> my mongodb runs on an ec2 instance and ebs drive. I would like to take consistent ebs snapshots without stopping mongodb. Is there some way to ask mongodb to write dirty buffers or similar to disk?
[21:14:26] <leandroa> it's not possible to have two mixed sparse+unique indexes? one that says a field is part of the unique compound and another it's not.. example: https://gist.github.com/lardissone/af7d77d2c02118cf3e33
[21:16:59] <Wulf> never mind, I found http://docs.mongodb.org/ecosystem/tutorial/backup-and-restore-mongodb-on-amazon-ec2/
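Editor's note: the sequence in the tutorial Wulf found boils down to locking writes, snapshotting, then unlocking. Sketched below with the shell helpers as comments (they need a live mongod) and the underlying command for the lock step:

```javascript
// Consistent EBS snapshot without stopping mongod:
//   db.fsyncLock();    // flush dirty buffers to disk and block writes
//   ...take the EBS snapshot while the lock is held...
//   db.fsyncUnlock();  // resume writes
//
// db.fsyncLock() sends this command under the hood:
const lockCmd = { fsync: 1, lock: true };
```

If the journal lives on the same volume as the data files, the snapshot is recoverable even without the lock, but locking keeps the snapshot fully consistent.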
[21:40:07] <leporello> Hi. How can I set a field in a query to value of another field?
[21:43:47] <johnflux> in aggregate, how can I do something like: { $group : { _id : NumberInt("$numWeeks"), number : { $sum : 1 } } }
[21:44:06] <johnflux> basically numWeeks is a float. I want to convert it to an int, and then group by that int
[21:44:15] <johnflux> so that 0.4 and 0.3 get grouped together
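Editor's note: `NumberInt("$numWeeks")` doesn't work because shell type constructors aren't aggregation operators. At the time of this log there was no `$floor` operator, but subtracting `$mod` by 1 truncates a non-negative float the same way. A sketch of the pipeline, plus a plain-JS reference of the grouping it performs:

```javascript
// Aggregation pipeline: group numWeeks by its integer part.
// ($floor arrived in a later release; $subtract/$mod works on older servers.)
const pipeline = [
  { $group: {
      _id: { $subtract: ["$numWeeks", { $mod: ["$numWeeks", 1] }] },
      number: { $sum: 1 }
  } }
];

// Plain-JS reference of what that $group computes, on sample values:
const numWeeks = [0.4, 0.3, 1.7, 1.1, 2.0];
const groups = {};
for (const w of numWeeks) {
  const bucket = w - (w % 1); // same arithmetic as the pipeline's _id
  groups[bucket] = (groups[bucket] || 0) + 1;
}
// 0.4 and 0.3 land in bucket 0 together, as johnflux wanted.
```

Note the truncation trick assumes non-negative values; negative floats would need an extra conditional.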
[22:26:42] <svm_invictvs> Two Morphia questions...
[22:27:08] <cheeser> 1.5 answers
[22:27:24] <svm_invictvs> One, when I set an @Reference, I am getting an exception that the object cannot be found. However, when I look at the database I see that the data is correct and I can fetch the object with the ID that's stored in the document.
[22:27:44] <svm_invictvs> And #2, I have the same problem with the query. In Morphia the query fails to return anything with the object reference.
[22:29:11] <svm_invictvs> However, if I (again) set a breakpoint and copy of the query right out of what Morphia is passing to Mongo, I get a result.
[22:29:24] <cheeser> are you saving the referenced object first?
[22:29:31] <svm_invictvs> cheeser: Yeah.
[22:29:41] <cheeser> pastebin your code
[22:29:49] <svm_invictvs> cheeser: In fact, I set up my REST API so it specifically does that.
[22:30:01] <cheeser> i'm on vacation btw so you're getting help now because i love you man.
[22:30:03] <svm_invictvs> cheeser: You have to create the "parent" object first via the API, then create its subordinate object.
[22:30:16] <svm_invictvs> Let me paste the interesting bits.
[22:32:19] <svm_invictvs> cheeser: http://pastebin.ca/3069523
[22:32:30] <svm_invictvs> cheeser: On line 28, I get the exception.
[22:33:04] <stuntmachine> i'm running mongo 3.0.4 on centos 6.5, and at startup i got those errors about /sys/kernel/mm/transparent_hugepage/defrag and /sys/kernel/mm/transparent_hugepage/enabled. i was able to fix the /sys/kernel/mm/transparent_hugepage/enabled one by adding a /etc/security/limits.d/99-mongodb-nproc.conf file with limits in it, but i'm still getting the warning about ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
[22:33:06] <stuntmachine> any ideas?
[22:33:32] <stuntmachine> this is after rebooting btw
[22:34:21] <cheeser> svm_invictvs: what do your models look like?
[22:36:50] <svm_invictvs> http://pastebin.ca/3069526
[22:38:18] <svm_invictvs> cheeser: I'm wondering if splitting the class into its "Abstract" type is the problem. Somehow confusing Morphia
[22:38:43] <svm_invictvs> cheeser: 'cause the more I think about it, the more that may not be necessary in this case.
[22:40:23] <cheeser> those @Property annotations are redundant fwiw
[22:41:09] <cheeser> svm_invictvs: run just your query without the update and see what you get
[22:41:58] <svm_invictvs> cheeser: in some cases it returns nothing.
[22:42:08] <svm_invictvs> cheeser: Let me double check, sec
[22:46:14] <svm_invictvs> cheeser: { "$and" : [ { "name" : "test_np_id"} , { "active" : false}]}
[22:46:44] <svm_invictvs> cheeser: Finds nothing just before the insert
[22:47:41] <svm_invictvs> cheeser: I'm confused, though, it doesn't use that query to find does it? Or does it find it by what document it actually inserted?
[22:48:00] <cheeser> hrm?
[22:48:27] <cheeser> there's no insert going yet until the end of the findAndModify()
[22:48:44] <svm_invictvs> fwiw, I dropped everything in the table
[22:48:47] <svm_invictvs> afterwards I get this
[22:48:48] <svm_invictvs> > db.application_profile.find()
[22:48:49] <svm_invictvs> { "_id" : ObjectId("55ad79cafad878f4fe5186f3"), "active" : true, "name" : "test_np_id", "platform" : "PSN_PS4", "client_secret" : "asdfasdf", "parent" : DBRef("application", "55ab3683c9a4225364d6342d") }
[22:49:04] <cheeser> active isn't false there...
[22:49:31] <svm_invictvs> cheeser: I want to either 1) insert a new record, or 2) flip the active flag on an existing one
[22:50:05] <cheeser> then you probably don't want "active" in the query
[22:50:29] <cheeser> why are you setting the name again?
[22:50:48] <svm_invictvs> cheeser: I want to make sure that I only reactivate an inactive record.
[22:50:54] <svm_invictvs> cheeser: "active" is basically a soft delete
[22:51:06] <svm_invictvs> cheeser: If it's an old record that's been deleted, I just want to restore it
[22:53:18] <svm_invictvs> cheeser: Correct me if I'm wrong, but findAndModify uses the query to find the object (and then modify it) not to find it again right?
[22:53:35] <svm_invictvs> cheeser: Even so, other code that works similarly has no issue.
[22:56:28] <cheeser> if the query doesn't match a document, that query is used to create a new one upon upsert
[22:56:34] <svm_invictvs> I see
[22:56:55] <svm_invictvs> So...
[22:56:59] <svm_invictvs> basically I'm doing it wrong
[22:57:06] <cheeser> wrong-ish
[22:57:09] <cheeser> it's close
[22:57:34] <svm_invictvs> Well, I commented out the "false" flag
[22:57:38] <svm_invictvs> And I still have the same issue
[22:58:49] <svm_invictvs> I mean, I also always thought that FindAndModify was kinda screwy
[23:04:41] <kexmex> johnflux: mapreduce needed?
[23:06:10] <svm_invictvs> cheeser: So I tried to do just a find by ID directly in Mongo. Same error. Took all of findAndModify out
[23:06:38] <svm_invictvs> cheeser: This is the strange bit
[23:06:39] <svm_invictvs> cheeser: Could not map com.namazustudios.socialengine.dao.mongo.model.MongoPSNApplicationProfile with ID: 55ad7e32e584ba15d0493580
[23:07:39] <svm_invictvs> cheeser: The ID is not for that type. It looks like Morphia is getting confused over what type it's looking for.
[23:07:51] <svm_invictvs> That or I'm confusing it somehow.
[23:09:34] <svm_invictvs> This is my tinker project, I think I'll just let it be for now.
[23:09:47] <svm_invictvs> I'm sure there's something I'm missing