PMXBOT Log file Viewer


#mongodb logs for Friday the 22nd of May, 2015

[00:10:17] <LyndsySimon> I finally got in my nib smoothing supplies - and just finished grinding my first nib.
[00:10:36] <LyndsySimon> I made a ~0.6mm stub from my fine Hero 338.
[00:15:35] <rOOb> Hello all. I'm banging my head against my desk on this. How can I aggregate the sum of a field in my documents? I have this so far http://pastebin.com/SnCWEf5G
[00:15:45] <rOOb> That also includes what my documents look like
[00:16:34] <rOOb> I want to find all deposits by "user -> id" then sum all of the documents "amount" field
[00:16:51] <rOOb> I tried: db.deposits.aggregate([{$match: {'user.id': "555ba8a5dc5b678f46adcf70"}},{$group: {_id: "$account", total: {$sum: "$amount"}}}])
[00:16:54] <rOOb> Without luck
[00:21:45] <joannac> rOOb: what does "without luck" mean? what do you get?
[00:23:45] <rOOb> joannac I get nothing back
[00:24:06] <joannac> rOOb: remove the group part, does it match anything?
[00:24:41] <StephenLynx> does it return an empty array?
[00:25:12] <rOOb> joannac It doesnt. But it should
[00:25:38] <joannac> rOOb: well, it doesn't. so fix your $match
[00:25:49] <rOOb> StephenLynx When I use the same code in mongoose it does return an empty array
[00:26:28] <StephenLynx> do a find to get all objects from it and show me.
[00:26:35] <StephenLynx> and I strongly suggest not using mongoose.
[00:26:39] <rOOb> joannac Ok. In the match I had to wrap user.id into "user.id" since user.id is a reference
[00:27:09] <joannac> rOOb: db.deposits.find({'user.id': "555ba8a5dc5b678f46adcf70"}})
[00:27:16] <joannac> I predict that will return nothing
[00:27:27] <joannac> keep editing that until it returns something
[00:27:30] <rOOb> StephenLynx http://pastebin.com/msbBu8EX
[00:28:13] <rOOb> joannac Correct, that errors; then when I remove the extra } it returns nothing
[00:28:35] <joannac> rOOb: right. so fix it
[00:32:25] <rOOb> My brain is fried lol. I cant get it to work. Google suggests that you cannot do what I want.
[00:33:24] <rOOb> Wait
[00:33:34] <rOOb> This worked: db.deposits.find({'user.id': ObjectId("555ba6698274f800287a64c2")})
[00:34:20] <joannac> right. so now feed that back to your aggregate
[00:34:30] <rOOb> And now this works:
[00:34:31] <rOOb> db.deposits.aggregate([{$match: {'user.id': ObjectId("555ba6698274f800287a64c2")}},{$group: {_id: "$account", total: {$sum: "$amount"}}}])
[00:35:08] <rOOb> I didnt realize that the objectid is a special kinda field that is not searchable via just strings
[00:35:17] <joannac> well, now you know :)
[00:35:32] <rOOb> And knowing is half the battle!
[00:35:36] <rOOb> \o/
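rOOb's fix turns on the detail that bit him: BSON equality is type-sensitive across comparison classes, so a plain string never matches a stored ObjectId, even with identical hex. A minimal plain-JS sketch of that behavior (ObjectIdSim and bsonEquals are illustrative stand-ins, not the real driver types):

```javascript
// ObjectIdSim stands in for the real ObjectId type, for illustration only.
class ObjectIdSim {
  constructor(hex) { this.hex = hex; }
}

// Type-sensitive equality, as a $match stage applies it to string vs ObjectId.
function bsonEquals(a, b) {
  if (a instanceof ObjectIdSim && b instanceof ObjectIdSim) return a.hex === b.hex;
  if (a instanceof ObjectIdSim || b instanceof ObjectIdSim) return false; // type mismatch
  return a === b;
}

const stored = new ObjectIdSim("555ba6698274f800287a64c2");
console.log(bsonEquals(stored, "555ba6698274f800287a64c2"));                  // false: string vs ObjectId
console.log(bsonEquals(stored, new ObjectIdSim("555ba6698274f800287a64c2"))); // true
```

This is why `find({'user.id': "555ba..."})` returned nothing while wrapping the hex in `ObjectId(...)` matched.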
[05:19:03] <Streemo> im doing an upsert where I increment X, and Y. I have another field Z = f(X,Y). Is there a way to update Z upon upsertion, or do i need to run another query?
[05:20:07] <joannac> Streemo: why can't you do it in your app?
[05:20:18] <Streemo> I can.
[05:20:23] <Streemo> But I'd rather not
[05:20:31] <Streemo> because it involves converting the cursor to an array
[05:20:41] <Streemo> every time a template is rendered
[05:21:31] <Streemo> so i'd rather cache the data inside the document itself
[05:22:13] <joannac> I mean, when you need to update X or Y, why not also update Z in the same update?
[05:22:39] <Streemo> because Z depends on the accumulated X, Y values. The upsertion arguments only know the values to increment by
[05:23:06] <Streemo> so I need Z(X+dX, Y+dY), and the only thing i know (without running a query) is dX and dY
[05:23:25] <joannac> ah. then another query, yeah
[05:23:32] <Streemo> yeah :/
[05:25:24] <Streemo> well if Z was linear this wouldn't be a problem lol
[05:25:37] <Streemo> i guess im gonna have to run another query
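The two-step pattern Streemo settles on can be sketched in plain JS: the upsert only knows the deltas dX and dY, so Z has to be recomputed from the accumulated totals in a second step. The document shape, f, and incThenRecompute are all illustrative assumptions:

```javascript
// Hypothetical non-linear Z = f(X, Y); a linear f could be folded into the $inc itself.
const f = (x, y) => x * x + y;

const doc = { x: 0, y: 0, z: f(0, 0) };

function incThenRecompute(doc, dx, dy) {
  // step 1: the $inc upsert only knows the increments
  doc.x += dx;
  doc.y += dy;
  // step 2: a second update recomputes Z from the accumulated values
  doc.z = f(doc.x, doc.y);
  return doc;
}

incThenRecompute(doc, 2, 3);
console.log(doc.z); // f(2, 3) = 7
```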
[05:29:21] <lucasgonze> My data disk for a Mongo primary with two replicas is at 95%. I did db.collection.remove({}) on a 12GB collection and verified via dataSize that it was empty. But df -h shows that the disk utilization is still 95%.
[05:29:54] <lucasgonze> Could there be a delay in executing the deletion? Maybe for propagating it to the secondaries?
[05:31:44] <joannac> removing documents does not free disk space
[05:31:46] <preaction> no. mongo never gives disk space back. it uses it
[05:32:13] <preaction> *never subject to conditions and limitations. not valid in KY or LA
[05:32:39] <joannac> lucasgonze: resync your nodes to reclaim the disk space. or if you can, dropping a database will release disk space
[05:33:56] <lucasgonze> thanks for the insight
[05:34:05] <lucasgonze> does resync make sense in the context of a master?
[05:35:16] <joannac> lucasgonze: yes, step it down, and resync it from one of the others
[05:36:06] <lucasgonze> if mongo doesn't give disk space back, so it can reuse it, does that mean it ordinarily claims a fixed size data file?
[05:36:51] <lucasgonze> the issue here is the chance of filling up the disk over night, given that we're at 95% right now. but if the space used is fixed, that's not an issue.
[05:37:16] <joannac> also, how large is your oplog?
[05:37:29] <joannac> and how much data are you needing to resync?
[05:40:38] <Auger> I'm building a commenting system for single pages
[05:40:49] <Auger> the comment schema has the pages id stored as the post_id
[05:40:57] <Auger> if it's a top level comment, it has a parent of 0
[05:41:12] <Auger> if it has a parent, it has that parents id stored in parent_id
[05:41:28] <Auger> if it has a child comment, it has the id stored in an array called 'children'
[05:41:56] <Auger> i am trying to display all the comments in as few queries as possible
[05:42:01] <Auger> can anyone point me in the right direction
[05:42:28] <Auger> currently i do a .find() which returns all comments with a parent of 0
[05:42:42] <Auger> then i do an ajax get on each of those comments to get its children
[05:42:50] <Auger> and then i do an ajax get for each of those to get its children
[05:42:55] <Auger> and so on, it's so ugly
[05:42:59] <Auger> please, help me :X
[05:44:58] <Streemo> Auger: you should only do the ajax when the user requests to see the next nest
[05:45:06] <Streemo> and not all at the beginning
[05:45:31] <Auger> oi, i wanted to at least show 2 or 3 levels
[05:46:04] <preaction> cache the entire lineage of the comment. if you have A -> B -> C, then C's lineage is "A:B". then you can query for all comments that have a lineage that begins with "A:"
[05:46:27] <preaction> or "A:B:"
[05:49:21] <Auger> preaction: ok i will look into that
[05:49:37] <Streemo> preaction: are you telling him to denormalize his document to an arbitrary number of levels? or do this in-app?
[05:50:09] <Streemo> or in a new collection?
[05:50:17] <Streemo> of Lineages
[05:52:47] <preaction> i don't care where? i've done it in a separate table. it's a cache of the real tree structure that exists, the parent_id
[05:56:09] <Auger> Streemo, preaction: I appreciate your input, thanks!
[05:56:52] <Streemo> preaction: yeah i would probably do in a separate one also
[05:58:53] <Streemo> but honestly i think it might be better to hold off and run separate calls when the user actually wants to see the comments children to this comment
[05:59:41] <preaction> it depends on the content and users
[06:01:04] <Streemo> yeah that's true
[09:13:49] <sabrehagen> heads up: http://plugins.mongoosejs.com/ is down
[10:01:19] <techie28> hello
[10:07:39] <techie28> Im trying to import a database using mongorestore --db name /here/path/directory
[10:07:50] <techie28> but it is not reading all the files of that directory
[10:09:17] <techie28> any ideas why?
[11:28:05] <techie28> hy
[11:31:46] <techie28> Im trying to import a database using mongorestore --db name /here/path/directory.. but it did not import all the files
[11:45:20] <joannac> techie28: how so?
[11:56:06] <techie28> joannac: Im sorry it is error 16633 err: "text search not enabled" which halts the importing process.
[11:56:39] <techie28> thats why it is not reading the records/files after that.. am I correct?
[11:56:59] <joannac> you're using a very old version
[11:57:13] <joannac> also, yes.
[11:57:41] <techie28> 2.4.10.. it is what they have in Debian Jessie repo's
[11:58:56] <techie28> joannac: they dont have latest version there?
[12:01:32] <techie28> any ideas please?
[12:01:43] <joannac> any ideas about what?
[12:02:04] <techie28> can I get a latest version for Debian Jessie?
[12:02:47] <joannac> I didn't think there was a build for debian 8 yet?
[12:03:33] <joannac> in any case, you can always download the tarball
[12:07:01] <techie28> jessie has 2.4.10 only
[12:07:19] <techie28> joannac: any other workaround to that error
[12:07:21] <techie28> ?
[12:07:35] <joannac> enable text search
[12:09:43] <Petazz> Is there a way to make a query that matches everything that is not {}?
[12:12:14] <techie28> I have got this db dump from someone else
[12:13:37] <joannac> Petazz: why?
[12:13:53] <joannac> Petazz: also, i don't think you can
[12:18:55] <Petazz> joannac: Because I need to, and apparently $or: [{},...] works
[12:25:24] <joannac> how is that different to just db.coll.find({}) ?
[12:25:42] <joannac> I'm still not clear what this use case is
[12:44:22] <techie28> exit;
[13:59:03] <pjammer> the fileSize in db.stats() on the primary and secondary should be roughly the same size always, right?
[14:00:47] <pjammer> rs.status() has both secondaries up and healthy; optime and optimeDate are the same basically and current.
[14:19:39] <pjammer> anyone?
[14:43:39] <mattmaybeno> I would assume they'd be the same or similar
[14:45:29] <mattmaybeno> as long as they were correctly syncing with the primary.
[14:48:12] <_ari> hio
[14:48:38] <_ari> how can i not include negative numbers in my sum query
[14:49:01] <cheeser> filter with $gte : 0
[14:49:06] <_ari> oh right
[14:49:07] <_ari> hehe
[14:51:55] <_ari> hey cheese, sorry where do i put the filter in this query
[14:51:57] <_ari> db.billing.aggregate({$group: {_id: "$name", total: {$sum: "$cost"}}})
[14:52:05] <_ari> cheeser*
[14:54:53] <cheeser> hrm. i'd have to think about that... my agg skills are weak...
[14:55:39] <pjammer> mattmaybeno thanks. i did a db.getReplicationInfo() and the tLast is seconds ago. which is close enough.
[14:56:11] <_ari> cheeser: it's tough stuff
[14:56:22] <mattmaybeno> that makes sense.. sounds good
[14:56:31] <pjammer> i read something about journal and writes; maybe it explains the GB difference?
[14:56:37] <_ari> i'm learning it right now and i have to perform some complex queries
[14:57:13] <pjammer> i don't honestly care though tbh, if i can prove a secondary is up to date, can't i just stand down the primary, make a new one elect, remove the old primary and add it back to re replicate?
[14:57:20] <cheeser> _ari: before the $group, add a $match stage to your pipeline
[14:57:43] <pjammer> would that reallocate the files, i wonder? or is repairDatabase really the only way to get the disk usage down?
[14:58:50] <bsdhat> Hi, I'm having trouble compiling the tutorial code for the simple cpp client. Can anyone assist me?
[14:59:03] <_ari> cheeser, ah putting it before worked!
[14:59:10] <_ari> cheeser, why before?
[14:59:21] <_ari> and after $group doesn't work
[14:59:30] <_ari> cause it's already performed the $group?
[15:02:28] <cheeser> _ari: because the $sum is already done.
[15:03:01] <_ari> cheeser: right
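cheeser's point about pipeline order can be sketched stage by stage in plain JS: the $match must run before the $group, because after grouping the per-document negative costs are already folded into the totals. The billing data is illustrative:

```javascript
const billing = [
  { name: "a", cost: 10 },
  { name: "a", cost: -4 },
  { name: "b", cost: 5 },
];

// $match: {cost: {$gte: 0}} — drop negative costs first
const matched = billing.filter(d => d.cost >= 0);

// $group: {_id: "$name", total: {$sum: "$cost"}}
const totals = {};
for (const d of matched) totals[d.name] = (totals[d.name] || 0) + d.cost;

console.log(totals); // { a: 10, b: 5 }
```

Run the same filter after the grouping loop and "a" would already be 6; the -4 is unrecoverable once summed.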
[15:09:54] <bsdhat> Greetings all... looking for some mongo assistance. :~)
[15:18:24] <bsdhat> gotta go - cheers.
[15:21:51] <saml> http://data.jobsintech.io/companies/mongodb-inc/2015 why do you pay little salary to engineers?
[15:21:53] <saml> in nyc
[15:23:10] <cheeser> what?
[15:34:42] <Lonesoldier728> hey someone here use mongoose trying to figure out how to do this comparison
[15:34:44] <Lonesoldier728> http://pastebin.com/0NJ3QMde
[15:34:56] <Lonesoldier728> this seems incorrect because I have to search an object i think in the array
[15:43:46] <FIFOd> I'm using MongoDB 2.4 and have a field that is either 0 or a double. It looks like a query that has $type: 1 (which is the double type), does not find 0 values correctly. Is there a way to check if an item is zero or double?
[15:44:26] <cheeser> using which driver?
[15:44:36] <cheeser> e.g., in java you'd use 0.0
[15:44:49] <cheeser> 0 == int, 0.0 == double
[15:45:15] <cheeser> then the driver would use the double value in the bson and the types would match
[15:45:19] <FIFOd> @cheeser I tried setting it via the command line to 0.0 and it didn't work
[15:45:52] <FIFOd> @cheeser .update({_id: ObjectId("555f4cba8abd8ba58d103fbf")}, {$set: {score: 0.0}}) set the score to 0
[15:45:56] <cheeser> hrm. i need to go meet some folks for lunch, but i'll be back in a bit and can take a closer look if you're still having problems.
[15:48:40] <Lonesoldier728> does this look efficient or is there something better I can do
[15:48:41] <Lonesoldier728> http://pastebin.com/U5f3Md8G
[15:49:31] <FIFOd> @cheeser actually it looks like an issue with the node driver?
[15:56:56] <FIFOd> 0 !== 0.0 in mongodb seems really odd choice
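FIFOd's 0-vs-0.0 problem starts in JavaScript itself: the language has a single number type, so the shell and the node driver cannot tell an int 0 from a double 0.0 by the literal alone; forcing a BSON type takes an explicit wrapper (the real driver ships a BSON Double class for this; DoubleSim below is only an illustrative stand-in):

```javascript
// The literal "0.0" carries no type information in JavaScript.
console.log(0 === 0.0); // true

// DoubleSim stands in for a driver-side wrapper type; it is not the driver's class.
class DoubleSim {
  constructor(v) { this.value = v; }
  bsonType() { return "double"; } // BSON type 1, the type $type: 1 selects
}

const wrapped = new DoubleSim(0);
console.log(wrapped.bsonType()); // "double"
```

Note that on the server, numeric *equality* does bridge types (`{score: 0}` matches a stored double 0.0), so it's the `$type` check, not an equality match, that exposes how the value was stored.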
[16:00:19] <Lonesoldier728> sorry did it wrong, does this seem efficient or is there a better way to tackle this http://pastebin.com/XMiV8DRS
[16:00:29] <Lonesoldier728> I need only to know if the user shows in the following or followers array
[16:03:08] <Lonesoldier728> http://stackoverflow.com/questions/30401204/how-to-check-a-mongoose-object-ref-if-it-contains-a-field-that-matches
[16:27:57] <csd_> If I'm using range balancing, for example, and I have shards that need to be rebalanced after adding a new shard, will the keys within each shard still be linearly distributed afterwards e.g. all keys in shard number A < B will be smaller than the keys in shard B?
[18:30:30] <zhenq> after enabling auth, from one replica member (secondary, as m-002), I am still able to get all results if running a remote call of "db.isMaster()" towards the primary member (as m-001). is this expected? command: /bin/mongo --host m-001:27017 --eval 'printjson(db.isMaster())'
[21:25:47] <Lonesoldier728> http://mongoosejs.com/docs/schematypes.html I am looking at mongoose and "long" is not a data type does that make sense or can I use a long
[23:29:30] <aniasis> Hello
[23:36:35] <Boomtime> Hi aniasis
[23:37:54] <aniasis> XPath selectors for MongoDB?
[23:38:45] <aniasis> Is that possible?
[23:38:54] <Boomtime> this is something you want? XPath is an XML language
[23:39:26] <Boomtime> given that XPath has both elements and attributes, the mapping is automatically impossible, you'll need a conversion layer
[23:39:59] <Boomtime> it also has joins, loops, and lots of other funky programmatic stuff, it can also traverse the tree from any point
[23:40:10] <Boomtime> XPath is designed to operate on a single document though, not a collection
[23:50:47] <aniasis> MySQL offers this, I have seen some posting about the feature being in MongoDB, just inquiring.
[23:54:43] <Boomtime> aniasis: the XPath support in mysql refers to handling XML stored inside a column, yes?
[23:56:13] <Boomtime> to be clear: mysql only did it this way because tables lack the structure that mongodb has built-in
[23:56:38] <Boomtime> mongodb has this power natively, not bolted on as a kludge to a column selector