#mongodb logs for Wednesday the 13th of January, 2016

[00:22:03] <bryon> hello mongo mavens! quick question: when doing an update operation with upsert:true, does it matter whether the word upsert is in quotes?
[00:23:28] <bryon> or, is { upsert : true } the same as { 'upsert' : true } ?
[00:24:57] <Boomtime> @bryon: it's the same thing - with quotes is technically correct JSON; the first form (without quotes) is supported by many parsers but isn't technically correct
[00:25:24] <Boomtime> -> http://json.org/
[00:26:39] <bryon> thanks Boomtime
[00:27:12] <Boomtime> and i just noticed that in fact, technically neither form is correct - the BNF on the right says double quotes only
[00:27:22] <bryon> i'm working with someone else's code and have very little mongo experience
[00:27:49] <bryon> i guess the quotes on upsert are NOT the cause of my problems. =]
[00:27:52] <Boomtime> what you are asking about isn't mongodb, it's just JSON syntax
[00:28:02] <bryon> yeah
[00:28:18] <bryon> duly noted
[00:28:26] <bryon> thanks again. =]
[00:28:28] <bryon> Boomtime++
[00:28:28] <Boomtime> :D
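
A minimal shell sketch of the operation under discussion (collection and field names are hypothetical). The shell evaluates JavaScript rather than strict JSON, which is why both spellings work:

    // Update the matching document, inserting it if it doesn't exist.
    db.users.update(
        { name: "bryon" },
        { $set: { greeted: true } },
        { upsert: true }    // identical to { 'upsert': true } in the shell
    )
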
[08:55:33] <Derick> lipiec: looks like there are plans to make viewing tools available for this diagnostic.data thing - in the future
[09:15:13] <lipiec> Derick: great to hear that
[09:15:41] <lipiec> Derick: anyway, when will the 3.2.1 version come out for Debian in the mongodb repository?
[09:18:11] <Derick> lipiec: it's in the repo: http://repo.mongodb.org/apt/debian/dists/wheezy/mongodb-org/3.2/main/binary-amd64/
[09:19:11] <Derick> just not in the Packages file
[10:15:47] <lipiec> Derick: thanks
[10:22:06] <Derick> i pinged people, but they're not likely to respond til morning US time
[10:58:54] <fish_> are the stats that serverStatus returns per server or per cluster?
[10:59:14] <fish_> or does that depend on where I connect to?
[10:59:38] <fish_> the name sounds like it's per server
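
For the record, serverStatus reports on the single process it is run against, so it does depend on where you connect:

    // Against a mongod: stats for that one server.
    // Against a mongos: stats for the mongos process itself,
    // not an aggregate of the whole cluster.
    db.serverStatus()
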
[11:29:21] <khrm> Is there any way to do a batch update in such a way that we assign a different value to a field for each of several different ids?
[11:30:06] <khrm> Let's say I have Order in a collection called Person.
[11:30:35] <khrm> And now I have Address.
[11:31:20] <khrm> Now some 100 people's addresses change.
[11:31:38] <khrm> We send an id and new address. Can this be done in a single query?
[11:32:17] <khrm> Here I used address as an example, but we could have, let's say, a hierarchy. In that case I want either all to pass or all to fail.
[11:32:26] <jost> Hi! I'm using spring data, and want to copy a collection. How can I do that? I can execute arbitrary BSON commands, but cannot find any resources on how to construct a BSON document to copy a collection
[11:32:30] <khrm> Is this kind of transaction possible in mongodb?
[11:32:49] <StephenLynx> bulkWrite, khrm
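
A shell sketch of the bulkWrite suggestion (collection and field names are hypothetical); note, as discussed below, that this is not a transaction:

    // One batch, a different new value per _id.
    // With ordered: true, execution stops at the first error, but
    // operations that already succeeded are not rolled back.
    db.people.bulkWrite([
        { updateOne: { filter: { _id: 1 }, update: { $set: { address: "12 Foo St" } } } },
        { updateOne: { filter: { _id: 2 }, update: { $set: { address: "34 Bar Ave" } } } }
    ], { ordered: true })
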
[11:33:18] <jost> For example I've found db.source.copyTo(target) (I know it's deprecated), how would the BSON command to execute it look?
[11:37:34] <Derick> jost: there is none
[11:38:16] <Derick> jost: the shell command just does:
[11:38:23] <Derick> while ( cursor.hasNext() ){
[11:38:23] <Derick> var o = cursor.next();
[11:38:23] <Derick> count++;
[11:38:23] <Derick> to.save( o );
[11:38:24] <Derick> }
[11:38:38] <khrm> Thanks, StephenLynx
[11:38:42] <Derick> you can see it by typing "db.source.copyTo" (without the parentheses and other things)
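
Since copyTo is deprecated and there is no dedicated server command, one alternative sketch is the aggregation $out stage (available since 2.6), which writes a pipeline's results to another collection in the same database:

    // Copy every document from "source" into "target".
    // Caution: $out replaces "target" if it already exists.
    db.source.aggregate([
        { $out: "target" }
    ])
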
[11:39:11] <khrm> But couldn't find bulk write update in mgo. Anyone using mgo (golang driver)?
[11:39:28] <Derick> it might use it under the hood...
[11:39:43] <Derick> and a bulk write is *not* a transaction
[11:39:51] <Derick> each element in a bulk write succeeds or fails
[11:40:03] <Derick> but you can't guarantee for *all* items to either fail, or succeed
[11:40:11] <Derick> mongodb does not have transactions
[11:44:05] <khrm> OK.
[11:45:07] <StephenLynx> oh, I didn't read that much from his question :v
[11:45:23] <StephenLynx> I just read some and thought "oh, multiple documents with different values for each"
[11:47:21] <ams_> I'd quite like to have the ability to "mark" a collection for backup and then have a script run mongodump and only back up those collections. Is that feasible?
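
There is no built-in "mark for backup" flag, but one hedged approach: record the marked collections in a small metadata collection (the name "backupTargets" is hypothetical) and have a script read it and run one mongodump per entry:

    // Mark a collection for backup:
    db.backupTargets.insert({ ns: "mydb.orders" })

    // From a script (e.g. mongo --quiet --eval), print a mongodump
    // command per marked collection for the surrounding shell to run:
    db.backupTargets.find().forEach(function (t) {
        var parts = t.ns.split(".");
        print("mongodump --db " + parts[0] +
              " --collection " + parts.slice(1).join("."));
    });
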
[11:51:45] <m3t4lukas> cheeser, u there?
[11:55:41] <acedude> heyo
[11:56:11] <acedude> I'm a newbie, no denying that, having trouble translating a PostgreSQL query to mongo
[11:56:15] <acedude> SELECT COUNT(*) as count, is_valid, date_trunc('day', created_at) as date
[11:56:15] <acedude> FROM table_name
[11:56:15] <acedude> WHERE created_at > '2015-12-01' AND another_bool = false
[11:56:15] <acedude> GROUP BY is_valid, date;
[11:56:26] <Derick> acedude: please use a pastebin and don't paste things into the channel
[11:56:39] <acedude> whoops, sorry
[11:57:10] <Derick> what have you tried?
[11:57:25] <acedude> part that's giving me most trouble is "date_trunc('day', created_at) as date", I've been googling for quite some time without any good results
[11:57:27] <acedude> one sec
[11:58:00] <Derick> is created_at an ISODate value?
[11:59:56] <acedude> here's the query: http://pastebin.com/KqwWTCt3
[11:59:58] <acedude> yeah it is
[12:00:20] <acedude> I just don't know how to aggregate by date, ignoring time
[12:02:01] <Derick> acedude: instead of using group command, have a look at the aggregation framework's $group stage
[12:02:26] <Derick> in the A/F there is also an operator to get only the Year/Month/Day arguments from an ISODate value
[12:02:39] <m3t4lukas> Derick: is there anyone working on mongod here?
[12:02:45] <acedude> thanks! :) that's the answer I was looking for
[12:03:00] <Derick> acedude:
[12:03:01] <Derick> https://docs.mongodb.org/manual/reference/operator/aggregation/group/
[12:03:36] <Derick> https://docs.mongodb.org/v3.0/reference/operator/aggregation/year/#exp._S_year
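
Putting those pieces together, a sketch of the PostgreSQL query above as an aggregation pipeline (column names taken from the paste; untested):

    db.table_name.aggregate([
        // WHERE created_at > '2015-12-01' AND another_bool = false
        { $match: { created_at: { $gt: ISODate("2015-12-01") }, another_bool: false } },
        // GROUP BY is_valid, date_trunc('day', created_at)
        { $group: {
            _id: {
                is_valid: "$is_valid",
                year:  { $year: "$created_at" },
                month: { $month: "$created_at" },
                day:   { $dayOfMonth: "$created_at" }
            },
            count: { $sum: 1 }    // COUNT(*)
        } }
    ])
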
[12:03:53] <Derick> m3t4lukas: most mongodb-ers here work on drivers
[12:04:11] <acedude> kick ass, I'll have a read, thanks again Derick!
[12:04:21] <StephenLynx> I work on chinese cartoon websites :v
[12:05:27] <m3t4lukas> Derick: what about GothAlice, joannac and ranman? I never see them write anything :P I ran into a bit of a problem extending mongod with a new accumulator.
[12:06:27] <cheeser> only one of those works for mongodb
[12:06:45] <StephenLynx> from what I've seen GothAlice doesn't work on mongo or on drivers, but is an extreme power user and IMO knows more about it than many driver developers.
[12:07:00] <m3t4lukas> cheeser: ah, okay
[12:07:14] <StephenLynx> I think joannac is a 10gen employee
[12:07:19] <StephenLynx> no idea about ranman
[12:07:22] <cheeser> StephenLynx: i would agree with that assessment. she pushes things into odd, awesome places.
[12:07:47] <cheeser> joannac is one of our great support peeps. ranman works for amazon last I heard.
[12:09:03] <Derick> he did work with us though
[12:10:49] <cheeser> he did
[12:11:35] <m3t4lukas> the family sticks together in IRC ;)
[12:56:44] <jost> Where is the documentation for server-side JavaScript? Especially which objects exist, and what their properties are? E.g. what the db object does?
[12:57:53] <jost> ok, got it
[13:14:48] <warriorkitty> Hi all. With Mongoose inside NodeJS, I have something like this:
[13:14:50] <warriorkitty> db.collection(mRecord.database).find().skip(30 * (page - 1)).limit(30);
[13:14:57] <warriorkitty> Where to put the callback?
[13:15:00] <GothAlice> m3t4lukas: I tended to do things a few years before 10gen would figure out it was a good idea to put in core. Things like compression, full text indexing with stemming and Okapi BM-25 ranking, etc.
[13:16:19] <GothAlice> m3t4lukas: At this point I'm working on storing database schemas in the database to self-describe in a front-end editable way. Databaseception.
[13:17:09] <GothAlice> warriorkitty: AFAIK there's a "go" method (probably not actually called that) to trigger the query to actually run and pass in a callback. Skip, limit, etc. just change values stored in the query before it's run.
[13:18:13] <warriorkitty> GothAlice: Thank you. I understand there must be some "go" function but I don't know how it's called. I've tried with "exec" already and that is not it.
[13:18:51] <GothAlice> Alas, I can do no more than you in terms of scouring documentation. I've never used Mongoose, and from all indications it's… painful to use.
[13:20:26] <warriorkitty> GothAlice: OK, thank you very much.
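
For what it's worth, the snippet above uses the native Node driver's collection API rather than a Mongoose model, and there the cursor only runs once it is consumed; a sketch under that assumption:

    // skip/limit merely configure the query; toArray() triggers it
    // and delivers the results to the callback.
    db.collection(mRecord.database)
        .find()
        .skip(30 * (page - 1))
        .limit(30)
        .toArray(function (err, docs) {
            if (err) throw err;
            console.log(docs.length + " documents on this page");
        });
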
[13:34:29] <m3t4lukas> GothAlice: sounds like a lot of fun :)
[13:48:19] <acedude> Derick - I finally figured my query out, I'm still familiarizing myself with the basic concepts, but I managed to achieve what I needed: http://pastebin.com/6dqPDeSE
[13:53:43] <Derick> cool, you're missing your WHERE clause though... make sure you put these in a $match before the $group
[13:55:34] <acedude> yup, that's the next step :)
[14:02:25] <Derick> cheeser: randall works for spacex now
[14:02:39] <Derick> (just saw him checkin on foursquare)
[14:04:42] <cheeser> oh, very nice!
[14:05:00] <m3t4lukas> whoa, lucky him
[14:05:10] <cheeser> that'd be an awesome place to work, i think. at least, the tech/vision. the people there might be utter assholes. who knows.
[14:05:43] <m3t4lukas> maybe they have so much fun at work that there is no need to be an asshole
[14:14:39] <cheeser> m3t4lukas: that's what I want to believe, too.
[14:14:44] <cheeser> i mean, rockets!
[14:16:16] <m3t4lukas> yeah, I wish I could go to mars one day and die there
[14:16:49] <Derick> ... i'd rather stay alive, come back, and tell everybody about it!
[14:17:11] <kablaaam> I have a mongodb instance running in a docker container and have the credentials ready, but I want to translate them to URL form (I have an environment variable file with URL authentications). I ran it without a password using the command from here: https://github.com/tutumcloud/mongodb#run-mongodb-without-password
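
For reference, the standard MongoDB connection string format for credentials looks like this (user, password, host, and database names are hypothetical):

    mongodb://myuser:mypassword@mongo-host:27017/mydb
    mongodb://myuser:mypassword@mongo-host:27017/mydb?authSource=admin

The optional authSource parameter names the database the user was created in, when it differs from the one being opened.
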
[14:23:13] <m3t4lukas> Derick: If you give me an electric motorcycle and some solar panels I'd willingly trade some extra time on mars for coming back and telling anyone about it :D
[17:42:48] <traph> hi all
[17:44:26] <traph> if the mongodb keyfile is multi-line, instead of a one-liner, does it concatenate every individual line to compose the key?
[17:46:03] <traph> so if we have 3 lines, each on a new line, the string should be "line1line2line3", right?
[21:22:31] <msx> m3t4lukas: cheers. FYI: do you remember the strange issue I was having with Mongo hanging while importing some collections (from 2.6 mmap to 3.0.8 wiredTiger)? It turned out that the size (1GB) I assigned to the wiredTiger cache (/etc/mongod.conf) was too small so after increasing it to 2GB everything worked as expected.
[21:30:02] <ranman> hi I'm ranman I work at spacex
[21:31:14] <cheeser> <everyone>hi ranman!</everyone>
[21:31:23] <ranman> I'm a mongoholic.
[21:33:21] <GothAlice> Don't worry, ranman, it's not a fatal condition. ;P
[21:35:16] <metame> hi @ranman what sorts of things are you using mongo for over at spacex?
[21:38:34] <StephenLynx> ranman just be careful to not become a fanboy
[21:47:50] <m3t4lukas> cheers msx :)
[21:48:13] <m3t4lukas> hi ranman :D
[21:48:47] <m3t4lukas> yeah ranman, does spacex use mongo, too?
[21:49:05] <m3t4lukas> if yes then mongo literally becomes rocket science :D
[21:51:56] <MacWinner> I'm having an interesting issue with the mongo node driver (i'm using the mongoose orm, however I think the problem is with connection pooling). Basically it looks like my node processes keep tearing down the connection pool and reconnecting in the background.. throughout the day, I occasionally get a "No Primary Server available" even though the cluster is fine.. anyone seen this or have any tips? There is a ticket on github, and Christian from
[21:51:57] <MacWinner> mongodb says it can happen if all the connection pool threads are not utilized within the initial timeout, then all the pool threads will be torn down
[21:52:38] <MacWinner> has anyone experienced this connection pooling issue?
[21:57:58] <cheeser> ugh. node. :)
[22:05:05] <MacWinner> doh. I think this issue has been happening for a while, we just didn't notice it
[22:05:30] <MacWinner> now i'm pretty concerned since my mongodb log file is getting filled up like crazy with connect/disconnect/reconnect messages
[22:09:35] <evgeniuz> hi. can someone suggest what is considered best practice: storing multiple small documents related via properties, or storing one big nested document and using projections for requests?
[22:10:04] <GothAlice> cheeser: I think you meant "ugh, mongoose". :P
[22:10:11] <evgeniuz> logically a big document is appealing, but I wonder what the performance impact is
[22:11:43] <GothAlice> evgeniuz: Neither. Model your data the way you intend to consume it, with a mind that MongoDB can't do JOINs. Optimize query performance, and optimize the cases that count. (Since any form of optimization without measurement is by definition premature.) As one example, I have forums using MongoDB. The individual forums are in one collection, then there's a collection for threads.
[22:12:34] <GothAlice> In a relational model, I might have another table for replies to threads (storing only metadata about threads in the threads table), but in MongoDB, since a) comments need to be deleted when the thread is deleted, and b) when viewing a reply to a thread one needs the metadata anyway, I instead store all replies embedded in the thread.
[22:13:23] <GothAlice> MongoDB lets me range query the list of embedded documents ($slice) and thus paginate easily, or omit the replies if all I want is the metadata (i.e. in the thread listing for a forum).
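
A sketch of that pagination with a $slice projection (field names from the example; the page size is hypothetical):

    // Fetch replies 21-30 of one thread, along with its metadata.
    db.threads.find(
        { _id: threadId },
        { replies: { $slice: [20, 10] } }    // [skip, limit]
    )
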
[22:13:24] <evgeniuz> but consider the case where you want to get the title and description for a thread, but not the huge replies section
[22:13:41] <GothAlice> db.threads.find({_id: …}, {replies: 0})
[22:13:47] <GothAlice> ^ MongoDB covers that.
[22:14:15] <GothAlice> (That'd be "all data except the replies".) Alternatively: db.threads.find({_id: …}, {_id: 0, title: 1, description: 1})
[22:14:24] <GothAlice> Only title and description, excluding even the _id from the result.
[22:14:30] <evgeniuz> so it will not spend time searching that data and sending it over the network?
[22:14:34] <GothAlice> Correct.
[22:14:50] <evgeniuz> great, that's what I needed, thanks
[22:15:04] <GothAlice> But you'll note I don't go crazy and embed replies inside threads which are embedded in forums… that way lies madness. The general rule is to only go one level deep.
[22:16:17] <GothAlice> http://www.javaworld.com/article/2088406/enterprise-java/how-to-screw-up-your-mongodb-schema-design.html goes into some of the design considerations.
[22:17:31] <evgeniuz> there are basically user objects that can have multiple "nodes" and each "node" can have multiple properties (different ones)
[22:17:41] <evgeniuz> most of the time all nodes for users will be selected
[22:18:03] <evgeniuz> but sometimes (relatively rare) I need to select single node
[22:18:41] <GothAlice> Hmm; careful on that. You might think of a model like {name: "foo", properties: {age: 27, gender: "robot"}}, but you can't easily index and search truly "arbitrary user-supplied properties" like that.
[22:18:46] <evgeniuz> so I was wondering if storing a user as a single document (instead of multiple collections for user/node) is ok performance-wise
[22:19:16] <evgeniuz> I don't need to search properties, each node will have an id (index probably)
[22:19:24] <evgeniuz> so that should be ok
[22:19:38] <GothAlice> https://gist.github.com/amcgregor/aaf7e0f3266c68af20cd is a concrete example of bad and good approaches to arbitrary properties.
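
In the same spirit, a common pattern for arbitrary attributes that still need indexing is an array of key/value subdocuments instead of a dynamic object (collection and field names are hypothetical):

    // Hard to index: { properties: { age: 27, gender: "robot" } }
    // would need one index per property key. Instead:
    db.users.insert({
        name: "foo",
        properties: [
            { k: "age",    v: 27 },
            { k: "gender", v: "robot" }
        ]
    })
    db.users.createIndex({ "properties.k": 1, "properties.v": 1 })
    db.users.find({ properties: { $elemMatch: { k: "age", v: 27 } } })
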
[22:20:52] <GothAlice> Yeah, if you avoid the need to query the arbitrary data, you save a lot of headaches. ;)
[22:28:06] <ranman> m3t4lukas we use mongo yes
[22:29:01] <ranman> metame: we have a system for sharing plots of various combinations of sensors/telemetry etc. -- mongodb is one component of that
[22:48:57] <cheeser> ranman: so i guess you could say that mongodb is helping you to ... SCALE THE UNIVERSE!
[22:50:51] <ranman> cheeser mostly just the solar system
[22:51:02] <ranman> first mongodb deployment on mars
[22:53:09] <Derick> hah
[22:53:15] <cheeser> that would be awesome
[22:53:29] <cheeser> can you imagine the replication lag on that?
[22:53:38] <Derick> about 15 minutes