PMXBOT Log file Viewer


#mongodb logs for Monday the 11th of May, 2015

[00:47:42] <tejasmanohar> hey
[00:47:50] <tejasmanohar> anyone here familiar with mongoose?
[00:48:00] <tejasmanohar> CastError: Cast to undefined failed for value "[object Object]" at path "referrals"
[00:48:04] <tejasmanohar> https://gist.github.com/tejasmanohar/085b373d4f8da9ea8d98#file-controller-js-L80
[00:48:20] <tejasmanohar> not sure how to track down this issue... i console logged both values and they both exist as shown in the error log i also attached to that gist
[00:48:26] <tejasmanohar> so im not sure whats there to cast
[00:48:33] <tejasmanohar> referrals[] is an array as defined in my schema
[00:49:16] <StephenLynx> familiar enough to recommend people to avoid it.
[00:49:48] <StephenLynx> GothAlice would recommend alternatives (MongoEngine in particular), I would recommend just using the driver.
[00:50:19] <StephenLynx> and since I just found out the connection you get from connect is a connection pool
[00:50:30] <StephenLynx> reasons to use something like that really decreased for me in io.js.
[00:50:37] <tejasmanohar> oh
[00:50:45] <StephenLynx> less reasons*
[00:50:45] <tejasmanohar> StephenLynx: i dont really wanna switch my project now :\
[00:50:51] <tejasmanohar> but maybe later when its not rushed time
[00:50:59] <tejasmanohar> do you know what the issue is here tho? this error is from Mongo directly
[00:51:30] <StephenLynx> no idea.
[00:51:35] <tejasmanohar> StephenLynx: i console.logged the values of those variables so im certain enough that they are correct vals
[00:51:37] <StephenLynx> I just use the driver.
[00:51:38] <tejasmanohar> aw :\
[00:51:50] <tejasmanohar> oh ok, but i mean this part is the same?
[00:52:06] <StephenLynx> no because the driver just returns the stuff it finds.
[00:52:17] <StephenLynx> it doesn't try to apply schemas or anything
[00:52:37] <StephenLynx> if you have bananas in a doc and camels in another doc in the same collection you get your bananas and camels and thats i.
[00:52:45] <StephenLynx> thats it*
[00:53:12] <tejasmanohar> o i see
[00:53:22] <tejasmanohar> StephenLynx: would the result of the raw query from mongo driver help here?
[00:53:26] <tejasmanohar> because i know how to get that from mongoose
[00:55:16] <StephenLynx> probably.
[00:55:16] <StephenLynx> because there would be no cast at all.
[00:55:20] <StephenLynx> just query it and print after a stringify
[00:56:34] <tejasmanohar> ok one sec
[01:36:58] <GothAlice> Fun aside: since MongoEngine is Python, not JavaScript, I would only recommend using the base JS driver or an async variant; async is nice if you can keep your code clean.
[01:37:02] <GothAlice> tejasmanohar: ^
[01:37:15] <tejasmanohar> Ooh ok
[01:37:22] <tejasmanohar> i should use promises
[01:37:33] <StephenLynx> yeah, I thought mongoengine was available for node.js too
[01:37:34] <StephenLynx> :v
[01:37:40] <tejasmanohar> just got lazy for this project since its a waiting list app and once the waiit is done in 2 weeks from tmrw
[01:37:43] <tejasmanohar> its useless
[01:37:44] <GothAlice> The async drivers are very nice. :)
[01:37:54] <StephenLynx> and I heard the performance of promises is very lacking.
[01:38:05] <tejasmanohar> StephenLynx: but the readability isnt
[01:38:18] <tejasmanohar> and performance doesnt matter too much for me rn
[01:38:19] <StephenLynx> is as readable as anything.
[01:38:37] <tejasmanohar> with promises i wouldnt need this if(err) stuff all over
[01:38:39] <tejasmanohar> no, not really.
[01:38:57] <tejasmanohar> with effort X you can get more readable code in promises than callbacks and `async` i ensure u
[01:39:25] <StephenLynx> you can't because you are adding more abstraction and moving parts to the code.
[01:39:40] <tejasmanohar> "moving parts to the code"?
[01:42:07] <tejasmanohar> StephenLynx: ^
[01:42:34] <StephenLynx> what about that.
[01:43:25] <tejasmanohar> explain some more, i dont understand
[01:43:27] <tejasmanohar> rephrase
[01:43:33] <tejasmanohar> please :)
[01:44:10] <StephenLynx> I am on a 1mbps and loading a video, if you wait a little, I will do.
[01:44:21] <tejasmanohar> sure :)
[01:44:39] <tejasmanohar> but oh god im hating mongoose rn, "MongoError: n/a" n/a is that really what mongo told you?
[01:44:58] <StephenLynx> lol
[01:45:06] <StephenLynx> mongoose is pretty bad.
[01:45:07] <GothAlice> Mongoose is a steaming pile.
[01:45:14] <GothAlice> #1 cause of JS-related support incidents I see.
[01:45:23] <GothAlice> (Directly… or indirectly.)
[01:45:45] <GothAlice> (Having "[object Object]" strings is a general JS silly thing, but Mongoose tends to draw it out. ;)
[01:46:09] <StephenLynx> yeah, every time it isn't just an issue of failing to RTFM, it's mongoose.
[01:47:20] <StephenLynx> ok, so tejasmanohar according to this
[01:47:21] <StephenLynx> http://www.html5rocks.com/en/tutorials/es6/promises/
[01:48:12] <StephenLynx> you would still have to check if the query returned an error because the driver isn't designed around promises
[01:48:15] <StephenLynx> am I right?
[01:48:40] <StephenLynx> but now you have to implement a function elsewhere that will receive the two separate callbacks
[01:48:41] <tejasmanohar> yes, you are right- unless there's an extension
[01:48:49] <tejasmanohar> like express promise router
[01:48:59] <StephenLynx> and now you have not one, but two callbacks
[01:49:01] <tejasmanohar> use bluebird if you try promises first btw, at least rn
[01:49:24] <tejasmanohar> mongoose uses promises
[01:49:26] <tejasmanohar> but
[01:49:29] <StephenLynx> its shit
[01:49:31] <tejasmanohar> im not gonna go there after whats happening rn
[01:49:32] <tejasmanohar> yeah
[01:49:37] <GothAlice> Language, StephenLynx.
[01:49:51] <StephenLynx> javascript.
[01:50:11] <StephenLynx> anyway, the point is that now you have a code more complex to work with promises
[01:50:31] <StephenLynx> and double the amount of callbacks
[01:50:35] <StephenLynx> or add dependencies.
[01:51:05] <StephenLynx> instead of just having this one single callback.
[01:51:45] <StephenLynx> that also enables you to further expand your logic by passing the callback around.
[01:52:27] <StephenLynx> what if you have a condition that you will perform a second asynchronous calls?
[01:52:59] <StephenLynx> asynchronous call*
[01:53:57] <tejasmanohar> yeah i see what you mean now
[01:55:33] <tejasmanohar> btu for your own logic i love promises
[01:55:41] <StephenLynx> and besides, callbacks are the most elegant way to deal with abstraction, if you ask me.
[01:55:45] <StephenLynx> is what C did.
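The callbacks-versus-promises argument above can be made concrete with a minimal sketch. `findUser` and its fake data are invented stand-ins for a real driver call such as `collection.findOne(query, callback)`:

```javascript
// Hypothetical callback-style API standing in for a driver call.
// The data is fake; the (err, result) contract is the real convention.
function findUser(id, callback) {
  const fakeDb = { 42: { _id: 42, name: 'tejas' } };
  setImmediate(() => {                    // simulate async I/O
    if (id in fakeDb) callback(null, fakeDb[id]);
    else callback(new Error('not found'));
  });
}

// Wrapping the callback contract in a Promise: the if (err) check moves
// into reject(), so callers chain .then/.catch instead of repeating it.
function findUserAsync(id) {
  return new Promise((resolve, reject) => {
    findUser(id, (err, doc) => (err ? reject(err) : resolve(doc)));
  });
}

findUserAsync(42)
  .then(doc => console.log(doc.name))     // prints "tejas"
  .catch(err => console.error(err));
```

This is the trade-off being argued: the wrapper is extra machinery (StephenLynx's "moving parts"), but downstream code loses the per-call error check (tejasmanohar's readability point).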
[04:56:37] <swhitt> does this look right for a nodejs implementation of a tailable cursor? https://gist.github.com/downslope7/9ec51905e4e8f2677224
[04:56:47] <swhitt> i'm trying that and it's not getting anything
[04:58:18] <swhitt> updated to have what i did on the mongo cli
[04:58:54] <swhitt> i inserted multiple records into the capped collection, nothing happens in the node console
[04:59:19] <swhitt> and db.log.count() is 5 on the mongo cli
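For reference, a sketch of what a working tailable-cursor setup looks like with the Node.js driver of that era; the `db` handle and callback are assumed, and the collection name "log" comes from the shell session above. The two pitfalls that match the symptoms described: the collection must actually be capped, and a capped collection that is empty when the cursor opens can yield a cursor that is immediately exhausted:

```javascript
// Sketch: tail a capped collection with the Node.js driver (2.x-era
// options). `db` is an already-connected handle.
function tailLog(db, onDoc) {
  const cursor = db.collection('log').find(
    {},
    { tailable: true, awaitData: true, numberOfRetries: -1 }
  );
  const stream = cursor.stream();
  stream.on('data', onDoc);                               // fires per insert
  stream.on('error', err => console.error('tail:', err));
  return stream;
}
```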
[09:42:17] <abishek> could somebody advise on what could be causing this query to be less performant? http://pastebin.com/Jk5jMNNR . I have all the columns on the condition field indexed. what could be causing it to perform slowly?
[09:48:57] <abishek> could anybody help on the above?
[09:52:59] <abishek> is using the group less performant than aggregate?
[09:54:56] <nfroidure> abishek, since JS is used, yes
[09:55:20] <nfroidure> i would have used an aggregate for this
[09:56:24] <abishek> could you suggest how I convert that query to aggregate. I am new to this and finding it difficult to figure out how to create the aggregate query
[09:56:48] <boboc> Guys i have a question, if i have a post with comments and choose to embed them, if i want to uniquely identify a comment and update it how can i do that? Would be better to have a separate document for comments?
[10:00:34] <lemonxah> hello
[10:07:20] <abishek> nfroidure, when using aggregate how do I specify conditions in the reduce in the aggregate?
[10:07:59] <nfroidure> abishek, look at the $cond operator
[10:08:34] <nfroidure> http://docs.mongodb.org/manual/reference/operator/aggregation/cond/
[10:08:39] <abishek> nfroidure, thats the global condition for the whole query. how about the if conditions on the reduce for the paste that I provided?
[10:10:10] <nfroidure> don't understand a thing in your pastebin, looks so weird :/
[10:10:23] <nfroidure> if(true != null) ?
[10:11:21] <abishek> `if (obj.order_id != null && obj.order_id > 0 && obj.status == "APPROVED")`
[10:13:50] <abishek> that is an if condition within the data that is retrieved by the condition
[10:18:15] <nfroidure> best i can do, a real world usage of $cond: http://pastebin.com/66XxKrkX
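For the `if` condition quoted above, a `$match` stage before the `$group` is usually the cleaner translation than `$cond`. A sketch, with field names taken from abishek's snippet and the grouping key (`campaign_owner`) guessed from the later pastebin:

```javascript
// Sketch: translating the reduce-side condition
//   obj.order_id != null && obj.order_id > 0 && obj.status == "APPROVED"
// into an aggregation pipeline. Collection and grouping key are guesses.
const pipeline = [
  { $match: { order_id: { $ne: null, $gt: 0 }, status: 'APPROVED' } },
  { $group: { _id: '$campaign_owner', approvedOrders: { $sum: 1 } } }
];
// Run with db.orders.aggregate(pipeline) in the shell, or
// collection.aggregate(pipeline).toArray(callback) from a driver.
```

Filtering early with `$match` also lets the first stage use indexes, which a JS reduce never can.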
[10:26:31] <abishek> nfroidure, what does $project do?
[10:30:45] <nfroidure> abishek, it reshapes items of the pipeline
[10:30:58] <abishek> ok
[10:31:49] <nfroidure> abishek, you can do a lot of things with aggregates once you master them. In fact, i never had to use reduces
[10:32:29] <abishek> yes, am right now figuring out how to master them. I have a lot of aggregates that I need to implement, so figuring out where to start
[10:34:33] <nfroidure> Practice ;)
[10:45:17] <Cust0sL1men> hi
[10:45:36] <Cust0sL1men> I can migrate to wiredtiger without downtime if I just upgrade one replica set member at a time right?
[10:46:08] <Cust0sL1men> essentially remove it, and then add it back with wiredtiger enabled ?
[11:02:13] <Cust0sL1men> anybody use wiredtiger on solaris ?
[11:02:36] <Cust0sL1men> and what happened to all the users in here ;(
[11:02:39] <Cust0sL1men> is it dying
[11:07:46] <bogn1> Hi all, is there anybody here that knows where mms-automation-agent stores its process configuration?
[11:07:46] <Zelest> we're here, just idling :)
[11:08:10] <bogn1> I have a pesky mongos process that was removed from MMS
[11:08:26] <bogn1> but the automation agent always starts that process
[11:09:23] <bogn1> removing mms-cluster-config-backup.json, the automation pid file and the /data dir doesn't help
[11:09:25] <Cust0sL1men> Zelest, there used to be 600 people in here I think
[11:10:08] <Cust0sL1men> someone should make a question rotation bot for irc
[11:26:38] <bogn1> Please help, I'm going nuts. MMS is continuously restarting a process that I have removed with their web interface. Removing mms-cluster-config-backup.json, the automation pid file and the /data dir doesn't help either. Where can I find the automation agent's config?
[11:38:35] <Cust0sL1men> so queries can still only use one index ?
[11:44:22] <ignasr> hi, what does category mean in this index?
[11:44:23] <ignasr> { "v" : 1, "key" : { "keyword" : 1 }, "name" : "keyword_1", "ns" : "skelbiu.keywordsLogTmp", "category" : 1 }
[11:44:29] <Cust0sL1men> nvm
[11:44:39] <Cust0sL1men> http://docs.mongodb.org/manual/core/index-intersection/
[11:49:00] <Cust0sL1men> if I have x = 1 & y = 2 || x = 1 & z = 2 - does it help to make x, y, and z a compound ?
[11:49:22] <Cust0sL1men> or should I make two compounds, ( x, y ) + ( x, z )
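A sketch of the trade-off, assuming the query is literally `(x=1 AND y=2) OR (x=1 AND z=2)`: each `$or` branch is planned independently, so one compound index per branch serves both branches, while a single `(x, y, z)` index only fully covers the first:

```javascript
// Two compound indexes, one per $or branch, is the usual answer here.
const indexForBranch1 = { x: 1, y: 1 };
const indexForBranch2 = { x: 1, z: 1 };
// db.coll.createIndex(indexForBranch1); db.coll.createIndex(indexForBranch2);

const query = { $or: [{ x: 1, y: 2 }, { x: 1, z: 2 }] };
// db.coll.find(query) — verify with .explain() that both branches hit an index.
```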
[13:23:25] <lemonxah> how would you do a if record exists ?
[13:25:09] <StephenLynx> hm?
[13:25:15] <StephenLynx> ah
[13:25:29] <StephenLynx> findOne and check if it returns a document.
[13:25:32] <StephenLynx> or count
[13:25:41] <StephenLynx> and see if it returns a value greater than zero.
[13:25:55] <StephenLynx> if you just need to know if it exists.
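Both suggestions, sketched against the driver's callback API (collection and field names are invented for illustration):

```javascript
// Sketch of the two existence checks suggested above.
function existsByFindOne(collection, name, done) {
  // findOne yields null when nothing matches.
  collection.findOne({ name: name }, (err, doc) => done(err, !!doc));
}

function existsByCount(collection, name, done) {
  // a count greater than zero means at least one matching document.
  collection.count({ name: name }, (err, n) => done(err, n > 0));
}
```

`count` avoids transferring the document when only existence matters; `findOne` is the one to use when you also need the document's contents.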
[13:26:14] <swhitt> can someone help me with my tailable cursor / nodejs setup? the node process isn't picking up anything added to the capped collection. https://gist.github.com/downslope7/9ec51905e4e8f2677224
[13:29:28] <lemonxah> anyone using scala and the casbah driver?
[13:29:36] <lemonxah> how do i do that find?
[13:29:42] <lemonxah> what do i specify in the find?
[13:29:47] <lemonxah> sorry new to mongodb
[13:30:52] <StephenLynx> dunno, never used that platform, you should consult the driver's documentation.
[13:36:47] <cheeser> lemonxah: you'd use $exists http://docs.mongodb.org/manual/reference/operator/query/exists/
[13:39:58] <cheeser> lemonxah: my scala is weak to say the least, but i think it'd look something like this: coll.findOne(MongoDBObject(“someField” -> MongoDBObject(“$exists” -> true))
[13:50:15] <AlienDrizzt> Hello. I have a question about the C API. Is it possible to pass a filter to find on a collection? Ie I'm looking for C code that does something like this > db.historic20150511.find({}).count()
[13:50:22] <AlienDrizzt> (passing a filter to find)
[13:55:11] <abishek> what is the issue here http://pastebin.com/vAzeP3yU ? I get this error message "exception: A pipeline stage specification object must contain exactly one field."
[13:58:37] <StephenLynx> I think is at cond
[13:58:44] <StephenLynx> " "campaign_owner": 343"
[13:59:20] <StephenLynx> hm, nvm
[13:59:23] <StephenLynx> that seems to be right
[14:02:37] <abishek> anything else you think could be wrong?
[14:03:53] <StephenLynx> I am not sure a $cond goes as a primary aggregation operator. can't you use a $match for that?
[14:04:07] <StephenLynx> also, try it without one of the operators
[14:04:14] <StephenLynx> so you can discard it as being the wrong one
[14:05:22] <abishek> ok
[14:06:00] <ignasr> hello, anyone here done a database compaction? How much it took with how much of data?
[14:12:36] <abishek> could I ping someone to discuss on some performance issue that I am having on Mongo?
[14:57:48] <freeone3000> I'm getting the error https://gist.github.com/anonymous/68552dcc2879eab4f0e9 when restoring from a disk-based backup. How can I tell mongo to just resync the bad records?
[14:58:34] <cheeser> you can't selectively resync docs. you'd have to do a full resync, i think.
[14:59:00] <freeone3000> It's 2TB of data.
[14:59:08] <freeone3000> By the time I do a full resync, I'll be behind my oplog time.
[14:59:43] <freeone3000> How can I get a reliable backup without taking an entire node offline, since apparently taking the disk image isn't going to give me consistent records?
[15:00:10] <cheeser> use mms backup?
[15:00:35] <cheeser> it's the only PIT backup solution i know of atm for mongodb
[15:00:55] <freeone3000> Can I get it in a way that's not a hosted service?
[15:02:20] <cheeser> yes, i believe so
[15:05:19] <freeone3000> Ah. Okay, found it. Thanks.
[15:07:10] <erenburakalic> Hi everyone, is there anyone who DOES NOT ADVISE using mongoexport as a backup solution? (DB IS ABOUT ~ 1-1.5GB). if yes, why?
[15:07:42] <StephenLynx> I have used neither, but isn't a replica viable for it?
[15:09:08] <erenburakalic> we may need to restore a document
[15:09:27] <erenburakalic> after some possible errors occurs by our client
[15:09:33] <GothAlice> erenburakalic: https://gist.github.com/amcgregor/4fb7052ce3166e2612ab#backups
[15:10:30] <GothAlice> erenburakalic: I run a replication secondary in the office that is delayed by 24 hours, allowing us to recover not just from catastrophic failure of the main cluster "in the cloud" (we also have a non-delayed in-house secondary) but also recover from user error such as accidentally dropping a collection, or garbling a record.
[15:10:56] <GothAlice> We had one dev accidentally multi-update the same value across an entire collection once. That was fun.
[15:11:41] <StephenLynx> hue
[15:12:29] <erenburakalic> GothAlice: LOL
[15:12:35] <erenburakalic> anyways thanks
[15:12:41] <GothAlice> It never hurts to help. :)
[15:17:32] <SixEcho> so i have a unique+sparse index on .things.name… mongo will fail correctly when dup keys on different documents, but on the same document it appends dups to the array… { _id: x, things: [ { name: 'hi' }, { name: 'hi' } ] } any way to make it fail in this case? bug? wad?
[15:17:55] <GothAlice> erenburakalic: Gist updated to include a delayed replicas and a lot more links into the documentation on the backup topics.
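The delayed in-office secondary GothAlice describes corresponds to a replica-set member configured roughly like this (host name and member `_id` are invented; `slaveDelay` is the option name of that era):

```javascript
// Sketch of a hidden, 24-hour-delayed replica set member. A delayed,
// hidden, priority-0 secondary never serves clients or becomes primary,
// but gives a rolling window to recover from operator error such as an
// accidental collection drop or a bad multi-update.
const delayedMember = {
  _id: 3,                            // next free member id, assumed
  host: 'office.example.com:27017',  // invented host
  priority: 0,                       // can never be elected primary
  hidden: true,                      // invisible to driver reads
  slaveDelay: 24 * 60 * 60           // stay 24 hours behind the primary
};
// In the mongo shell: rs.add(delayedMember)
```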
[15:36:03] <christo_m> can i ask about mongoose here?
[15:36:50] <GothAlice> christo_m: You can, though there are few expert users (and many who strongly dislike mongoose) hanging around. I believe it has its own channel, though.
[15:37:06] <christo_m> GothAlice: i dont think its #mongoose though
[15:37:44] <GothAlice> #mongoosejs
[15:38:10] <GothAlice> Ask, though, don't ask to ask. IRC is asynchronous, and you never know if someone can answer. :)
[15:49:09] <Cygn> Hello everyone. I want to use this query to fetch the id of one, the count of all entries and in addition i want to get the sum of the price of each sale. Fetching the id and the amount of sales works smooth and fine, but i can't get a value out of my sum, any hints? http://pastebin.com/rzEcTvnc
[15:49:30] <Cygn> Line 19 is where i try to calculate. Returns 0 for each entry.
[15:50:08] <StephenLynx> $sales.10 means "tenth value in sales array"
[15:50:19] <StephenLynx> i find it weird to have the index hard coded.
[15:50:20] <StephenLynx> is that right?
[15:52:11] <Cygn> StephenLynx: Sounds weird but it is, actually this comes from the business logic, where we have id's (out of the factory) to each item attribute. f.e. 10 stands for the price, 1 for the article id, 6 is the time of sale etc.
[15:52:31] <StephenLynx> :D
[15:52:33] <StephenLynx> FUG
[15:52:39] <Cygn> StephenLynx: Anyway, sales.10 always contains the price
[15:52:39] <StephenLynx> why not put in actual fields?
[15:52:58] <StephenLynx> yeah, then I don't know.
[15:53:07] <Cygn> StephenLynx: i could, but that would mean i would have to rename it back to the numbers everytime i give an export for the client.
[15:53:29] <GothAlice> Cygn: Using real fields would let you use indexes.
[15:53:38] <Cygn> StephenLynx: okay, but in general this should work? so the field of each sale should be summed with that code right?
[15:54:49] <Cygn> GothAlice: It's not about my use, more about the namings they need, they really use "ten" as the naming when they talk about the price for example. Anyway, sure i could change it in my internal logic and rename it when i hand over data to the client, but right now this is actually okay for me.
[15:55:38] <StephenLynx> you could project before handling it back.
[15:55:44] <StephenLynx> no application code needed for that.
[15:55:46] <GothAlice> Cygn: I'm saying the way you have structured this is bypassing any assistance MongoDB can offer in terms of efficiently querying or manipulating that data. How the data is presented to the user is pretty much irrelevant to how you store it, and the current method is actively defeating MongoDB's capabilities.
[15:56:03] <StephenLynx> also that
[15:56:20] <GothAlice> Let alone having queries that are 100% opaque. ;)
[15:56:33] <GothAlice> (Readability counts.)
[15:56:34] <Cygn> GothAlice: Yeah that IS an argument :D
[15:56:37] <StephenLynx> you are losing much more by not doing those two operations than you would lose by doing them.
[15:56:39] <Cygn> Sure it does!
[15:57:19] <Cygn> Okay, i am with you, i will most likely change this afterwards… anyway for now… i am really confused by my problem. $min, $max, $last, all working fine. As soon as i want to use $avg or $sum i get 0 …
[15:57:49] <StephenLynx> do a regular query and print the document.
[15:57:55] <StephenLynx> see if its how you expect it to be.
[15:58:26] <Cygn> StephenLynx: By regular you mean just fetch some entries?
[15:58:27] <GothAlice> Cygn: It's like this other trap people often fall into: {foo: 1, bar: 2, baz: 3, …} with an ever growing list of names that are completely dynamic. That won't fly. {fields: [{name: 'foo', value: 1}, …]} will preserve the ability to update individual values, while also letting you make use of indexes. (Index on fields.name == really fast lookup. No index on "foo", "bar", "baz", and friends means terrifyingly slow collection scans.)
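The `{fields: [{name, value}]}` shape GothAlice describes can be sketched as a small transform (document contents invented for illustration):

```javascript
// Sketch of the attribute pattern: dynamic keys become {name, value}
// pairs, so a single index on "fields.name" covers every attribute
// instead of needing one index per dynamic key.
function toAttributePattern(obj) {
  return {
    fields: Object.keys(obj).map(name => ({ name, value: obj[name] }))
  };
}

const doc = toAttributePattern({ foo: 1, bar: 2, baz: 3 });
// Index once:  db.coll.createIndex({ 'fields.name': 1, 'fields.value': 1 })
// Query any "key" fast: { fields: { $elemMatch: { name: 'foo', value: 1 } } }
```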
[15:58:38] <StephenLynx> yeah
[15:59:39] <Cygn> StephenLynx: Seems to be fine, even $min and $max present me the correct and expected values (as they are the highest/lowest price in my collection)
[16:00:01] <StephenLynx> print a doc.
[16:02:16] <Cygn> StephenLynx: http://pastebin.com/HNrWKPGu
[16:03:15] <StephenLynx> ah
[16:03:17] <StephenLynx> here is the problem
[16:03:24] <StephenLynx> sales is an array of objects
[16:03:37] <StephenLynx> this is a mess
[16:06:13] <Cygn> StephenLynx: What would you expect it to be?
[16:06:21] <StephenLynx> [value,value,value]
[16:06:39] <StephenLynx> that way sales.10 would work.
[16:06:53] <StephenLynx> lets see what you can do
[16:06:53] <GothAlice> T_T
[16:07:25] <StephenLynx> dunno, man.
[16:07:33] <StephenLynx> refactor it and try again.
[16:07:47] <GothAlice> Refactoring seems like a good idea.
[16:07:54] <StephenLynx> yeah, I need to get some stuff done too here.
[16:07:57] <GothAlice> The "1", "2", etc. needs to go.
[16:08:09] <StephenLynx> 13:00 and did nothing :v
[16:09:17] <Cygn> StephenLynx: Where are you sitting? 18:00 around here.
[16:09:25] <StephenLynx> br
[16:09:30] <StephenLynx> gib moni :v
[16:09:37] <StephenLynx> huehue
[16:11:20] <Cygn> StephenLynx: okay… seems like i will have to refactor :(
[16:11:44] <StephenLynx> yeah, because you are not even actually using indexes.
[16:11:54] <StephenLynx> you are just using a field with a completely ass backwards name.
[16:12:06] <StephenLynx> that is very easily fixed on the projection block when outputting.
[16:12:16] <StephenLynx> and it won't change a thing while inputting.
[16:12:51] <StephenLynx> if at least they were arrays where you consulted the indexes then it would make SOME sense.
[16:16:22] <Cygn> StephenLynx: Haha ;) okay, i see your point.
[17:10:30] <saml> why would you use hash index instead of default ?
[17:16:56] <mrmccrac> a db.repairDatabase() is supposed to reclaim disk space no longer needed i thought?
[17:18:17] <mrmccrac> http://pastebin.com/raw.php?i=XBuQwin9
[17:18:28] <mrmccrac> my storage size is still larger than my database size
[17:31:05] <christo_m> im not sure how i can make the id field here and the user field be unique pairs in the database: http://pastie.org/10182760
[17:39:44] <Razerglass> hey is anyone availible to answer a quick question about saving a 2d array into my db?
[17:41:55] <greyTEO> Razerglass, it's never good to ask to ask. You will get much better results if you post your question.
[17:42:14] <christo_m> according to the mongo docs that should be enough http://docs.mongodb.org/manual/core/index-compound/
[17:42:26] <christo_m> and also removing the "unique" part in the schema definition
[17:42:37] <christo_m> however im getting duplication errors on the "id"
[17:46:03] <Razerglass> when i declare my new schema, does this look correct for saving a 2d array? (when i create my mongoose schema: about: [{ about1: Array, about2: Array }])
[17:48:08] <Razerglass> then when i am setting the values up before saving im putting the 2 arrays in this format: newDBentry({ about: { about1: array1, about2: array2 } })
[17:48:46] <Razerglass> for some reason when it saves in my DB its an empty array, i have something wrong with the formatting
[17:50:01] <christo_m> I basically want these fields to be unique pairings.. ie, i dont care if there are duplicate id or user entries, i only care that together when i look at id,user .. its a unique pair
[18:09:44] <christo_m> anyone?
[18:14:39] <christo_m> GothAlice: ??
[18:14:44] <christo_m> :3
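christo_m's duplicate errors on "id" are consistent with per-field `unique: true` in the schema, which creates single-field unique indexes. A sketch of the compound alternative, with field names taken from the question (schema construction elided):

```javascript
// Sketch (Mongoose-style): drop unique from the individual fields and
// declare one compound unique index, so only the (id, user) *pair*
// must be unique — duplicate ids or users on their own stay allowed.
// itemSchema.index({ id: 1, user: 1 }, { unique: true });

// The equivalent raw index, as the shell would create it:
const indexSpec = { id: 1, user: 1 };
const indexOpts = { unique: true };
// db.items.createIndex(indexSpec, indexOpts)
```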
[19:23:23] <xaxxon> how do I do a query to see if a particular value exists in a subarray in a document? Kind of like $in, but switched around
[19:24:45] <xaxxon> like does "a" exist in {foo:["a", "b", "c"]}
[19:25:02] <GothAlice> db.collection.find({foo: "a"})
[19:25:04] <cheeser> { foo : a }, iirc
[19:25:07] <GothAlice> It really is that simple. :)
[19:25:15] <cheeser> mongodb++
[19:25:25] <xaxxon> oh
[19:25:43] <Zelest> :o
[19:25:56] <GothAlice> There are caveats if the nested values are complex objects (nested documents) instead of bare simple values (strings, integers, etc). Notably that when comparing documents the order of the keys is vitally important.
[19:26:04] <GothAlice> {foo: 1, bar: 2} !== {bar: 2, foo: 1}
[19:26:10] <Zelest> isn't it db.coll.find({foo: {$in: 'a'}}) ?
[19:26:17] <GothAlice> That's the reverse.
[19:26:20] <GothAlice> And wrong. ;)
[19:26:21] <Zelest> oh
[19:26:25] <Zelest> ignore me then :D
[19:26:31] <GothAlice> db.collection.find({foo: {$in: ["a", "b"]}})
[19:26:38] <GothAlice> $in expects a list of candidate values.
[19:26:52] <Zelest> Aaah
[19:27:11] <Zelest> Ugh, a week or so left..
[19:27:17] <Zelest> then the deadline and project is done at work..
[19:27:23] <Zelest> and I finally get some spare time to play with mongodb again :D
[19:27:43] <GothAlice> ^_^ 9 days on my own at-work project before our 1.2 feature release… it's a doozie.
[19:28:02] <Zelest> Ah, we aim for 2.0.. a complete revamp of the entire website :S
[19:28:13] <GothAlice> After that, my "ruling the world" project is scheduled for completion in September. ;)
[19:28:23] <Zelest> Hehe
[19:28:33] <Zelest> I plan on building a pretty sophisticated crawler
[19:28:42] <Zelest> and build my own mini-google
[19:28:54] <Zelest> (a project I've been working on back and forth since 2006 really)
[19:28:55] <GothAlice> Hehe. That was part of 0.9 at work. ^_^
[19:29:01] <Zelest> Hehe
[19:29:02] <xaxxon> so what if it is {foo: [ {key: a, otherthing:asdf}, {key: b, otherthing: fdsa} ] }? and i want to know if key: a exists, but I don't know otherthing
[19:29:17] <Zelest> GothAlice, where do you work? except 10gen?
[19:29:32] <GothAlice> Zelest: https://gist.github.com/amcgregor/07ded9226bfce1e7d5d4 < we're using distributed processing pipelines defined using a custom declarative schema. :3
[19:29:37] <GothAlice> Zelest: I do not work for 10gen.
[19:29:43] <Zelest> Oh
[19:29:50] <Zelest> Ah :)
[19:30:14] <GothAlice> And that'd be warning one. :P
[19:30:34] <GothAlice> Please keep language in-channel clean, thanks. (It's permanently logged… so it's not just for everyone else's sake!)
[19:31:17] <christo_m> rekt
[19:31:19] <cheeser> completely unacceptable.
[19:31:42] <GothAlice> xaxxon: db.collection.find({"foo.key": "a"})
[19:31:51] <xaxxon> ok
[19:31:58] <GothAlice> You can likewise create an index on "foo.key" to optimize those queries.
[19:32:33] <xaxxon> very cool, thank you
[19:32:39] <xaxxon> good to know
[19:33:27] <GothAlice> And, the coolest thing: $elemMatch and $ in the query projection let you get back not just the record with a "key" of "a", but lets you get back just the one "foo" element that matches.
[19:33:55] <GothAlice> See: http://docs.mongodb.org/manual/reference/operator/query/elemMatch/ and http://docs.mongodb.org/manual/reference/operator/projection/positional/#proj._S_
[19:34:42] <GothAlice> E.g. db.collection.find({"foo": {$elemMatch: {"key": "a"}}}, {"foo.$": 1}) — you'll only get back {foo: [{key: "a", otherthing: "asdf"}]}, not the full array.
[19:35:08] <GothAlice> (Likewise you can use $elemMatch in a query to update specific nested array elements.)
[19:52:06] <SixEcho> so i have a unique+sparse index on .things.name… mongo will fail correctly when dup keys on different documents, but on the same document it appends dups to the array…    { _id: x,  things: [ { name: 'hi' }, { name: 'hi' } ] }        any way to make it fail in this case?  bug? wad?
[20:32:41] <abishek> could someone suggest a good tutorial to understand and master the aggregation framework in mongo, am actually new and I am right now using the group method to retrieve data which is not very performant and would like to change it to an aggregate. I am quite new to the aggregation framework
[20:33:33] <abishek> i find it a bit confusing at the first look, but find it to be very powerful
[20:40:38] <StephenLynx> just keep in mind that each stage will process the collection and create a new output to be processed by the next stage abishek
[20:41:10] <abishek> OK
[20:41:22] <StephenLynx> http://docs.mongodb.org/manual/meta/aggregation-quick-reference/
[20:42:35] <StephenLynx> and particularly, aggregation is what I use in almost any case that I query for documents.
[20:42:45] <StephenLynx> there are few cases where I use something else.
[20:54:27] <GothAlice> I go the opposite route; standard queries are pretty much guaranteed to be faster and less server-intensive than aggregates. Thus things like http://s.webcore.io/image/142o1W3U2y0x tend to be a hybrid of both approaches.
[20:55:08] <GothAlice> (Aggregates are only used on the analytics sections of each detail view, and as the fundamental approach to reporting. Absolutely everything else uses standard queries.)
[20:55:36] <StephenLynx> what about sorting and other stuff?
[20:55:50] <StephenLynx> doing multiple operations for these is faster than an aggregation?
[20:57:06] <StephenLynx> gotta go, ill ask again when I'm at home
[20:57:17] <StephenLynx> since I didn't know aggregations took a heavier load
[21:06:26] <bros> I have a collection called orders. Each document has a subdocument called shipments. I have an existing aggregation going over orders, so I don't want to unwind shipments and spoil this aggregation, but shipments has a boolean field called voided.
[21:06:46] <bros> How do I include the size of shipments in the aggregation, but only the non-voided shipments? Do I have to unwind?
[21:13:35] <joannac> bros: yes
[21:13:54] <bros> I know you can have multiple unwinds. Can you have multiple groups?
[21:13:58] <bros> Do I need a $project?
[21:14:10] <joannac> yes, you can have multiple groups
[21:41:25] <bros> I stored a float as a string accidentally. How can I cast it back to a float in an aggregation?
[22:52:53] <godzirra> So this is not a geojson point, but i'm not sure what's wrong with it: > db.incidents.find( { loc : { $near : { $geometry: { type: 'Point', coordinates: [ 36 , -115 ]}, $maxDistance: 10 } }})
[22:58:23] <StephenLynx> GothAlice so, which one generally is faster? a find, sort,skip,limit and toArray on a cursor or an aggregate that does the sort,skip and limit?
[22:58:55] <GothAlice> StephenLynx: Try it and find out. :)
[22:59:39] <StephenLynx> :v
[23:00:24] <DrTachyon> Hi
[23:00:39] <DrTachyon> So, is anyone up at this channel?
[23:00:50] <DrTachyon> I have a terrible problem with an app I'm making
[23:03:17] <GothAlice> Ask, don't ask to ask. IRC is asynchronous, DrTachyon.
[23:03:42] <DrTachyon> Funny, asynchronous code is exactly what I'm having problems with
[23:03:58] <StephenLynx> dibs on js.
[23:04:03] <DrTachyon> Transaction.aggregate()
[23:04:03] <DrTachyon> //.match( {"$or": [{"debitAccount": user}, {"creditAccount" : user}]})
[23:04:03] <DrTachyon> .match( {"debitAccount": user]})
[23:04:03] <DrTachyon> .project({ "balance": { "$cond": [
[23:04:04] <DrTachyon> {"$eq": [ "$debitAccount", user ]},
[23:04:04] <DrTachyon> {"$multiply": [ -1, "$amount" ]},
[23:04:04] <DrTachyon> "$amount"
[23:04:05] <DrTachyon> ]},
[23:04:05] <DrTachyon> "account": user})
[23:04:06] <DrTachyon> .group({"_id": "$creditAccount","total": {"$sum": "$balance"}})
[23:04:06] <DrTachyon> .exec(function(err,object) {console.log(object);});
[23:04:07] <DrTachyon> Shit
[23:04:08] <DrTachyon> Sorry
[23:04:11] <StephenLynx> :v
[23:04:23] <DrTachyon> Clearly I'm an IRC noob
[23:05:02] <DrTachyon> Okay, so I have this query that runs perfectly well on a random db with dummy data
[23:05:21] <DrTachyon> but when I moved it to live, it didn't work.. the array it returns is empty
[23:05:42] <DrTachyon> The only difference is that there was tons of dummy data, while there's only two live entries.
[23:06:08] <StephenLynx> is that the actual code? where is that running?
[23:06:51] <DrTachyon> http://pastebin.com/Fdk3Qqm2
[23:07:08] <DrTachyon> Yes it is..
[23:07:27] <StephenLynx> node/io?
[23:07:30] <DrTachyon> node
[23:07:34] <DrTachyon> sorry..
[23:07:36] <DrTachyon> Mongoose
[23:07:39] <StephenLynx> ah
[23:07:41] <StephenLynx> mongoose
[23:08:12] <StephenLynx> I don't know about it, just that is the main cause of people coming here with stuff not working. I work with io.js and the node.js driver.
[23:08:44] <DrTachyon> It's not Mongoose, since the query runs fine with dummy data.. I know, but you're like my last hope.. humor me..
[23:09:12] <DrTachyon> I have a collection, like a ledger, where the documents are transaction records
[23:10:03] <StephenLynx> no idea, I have never used mongoose.
[23:11:12] <StephenLynx> but generally I can't see anything wrong regarding js. you pass a callback in exec and it will give you the error and the data.
[23:11:17] <StephenLynx> are you checking the error object?
[23:12:15] <DrTachyon> No, there's no error, the query runs fine
[23:13:08] <DrTachyon> given "user" the query is supposed to match documents where "user" is either in the "debitAccount" or "creditAccount" key. It projects the "amount" value with the condition that if the "user" appears in "debitAccount", it multiplies the "amount" by -1. finally it groups the amount into a "total", and then executes a callback.
[23:13:56] <StephenLynx> are you sure you got stuff on that collection?
[23:14:05] <StephenLynx> and are using the correct database?
[23:14:15] <StephenLynx> try just running a find and spitting everything that comes out.
[23:15:03] <DrTachyon> Okay, brb
[23:15:43] <DrTachyon> Also, i'm going to connect through the console, to eliminate mongoose problems! :D
[23:16:16] <StephenLynx> if you are getting stuff from one deploy but not from the other, mongoose seems to not be the culprit in this case
[23:30:38] <godzirra> So this is not a geojson point, but i'm not sure what's wrong with it. Can anyone give some feedback? db.incidents.find( { loc : { $near : { $geometry: { type: 'Point', coordinates: [ 36 , -115 ]}, $maxDistance: 10 } }})
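A likely answer to godzirra's question, for the record: GeoJSON points are ordered `[longitude, latitude]`, and -115 is out of range as a latitude (valid range is -90 to 90), so the coordinates above are almost certainly swapped. A sketch of the corrected query:

```javascript
// Sketch of the fixed $near query: coordinate order is [lng, lat].
const query = {
  loc: {
    $near: {
      $geometry: { type: 'Point', coordinates: [-115, 36] }, // [lng, lat]
      $maxDistance: 10 // metres when $geometry (GeoJSON) is used
    }
  }
};
// db.incidents.find(query) — also requires a 2dsphere index on "loc".
```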
[23:35:40] <DrTachyon> StephenLynx: I did it! I got an error! Oh, happy day!
[23:36:19] <DrTachyon> exception: FieldPath '5529a1b4d61235156a10ca08' doesn't start with $
[23:51:50] <DrTachyon> Nevermind, I figured it out
[23:51:52] <DrTachyon> I'm an idiot
[23:51:57] <DrTachyon> thanks guys..
[23:52:01] <DrTachyon> whoever's listening :P
[23:53:41] <StephenLynx> :v