PMXBOT Log file Viewer


#mongodb logs for Wednesday the 22nd of August, 2012

[00:02:55] <nooga> um
[00:04:36] <nooga> i have a collection of documents that have arrays of object inside them
[00:05:20] <nooga> i'd like to find certain documents and retrieve only a subset of fields that are in the objects in the array
[00:05:26] <nooga> http://pastie.org/4565106 <- like so
[00:05:32] <nooga> is it even possible?
[00:06:00] <nooga> or do i have to retrieve the whole array
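nooga's pastie link is dead, but for the archive's sake, a minimal sketch of what the shell offers here, against a hypothetical 'docs' collection whose documents hold an 'items' array: projecting a dotted path keeps only that field in every array element.

    // documents shaped like { _id: ..., items: [ { name: "a", price: 1 }, ... ] }
    // the projection returns the items array with only 'name' kept per element
    db.docs.find({ "items.name": "a" }, { "items.name": 1 })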
[00:16:08] <geoffeg> how do i get a secondary to reconnect to a different secondary for syncing? i just took one secondary into recovery for compaction and there are two other secondaries syncing off it. which are now falling behind
[01:51:36] <doxavore> To be sure I understand correctly, using the 10gen apt repo, you can only install the latest version?
[01:52:15] <doxavore> So do people actually use that, or does everyone just install the same version from the binary zip to keep their servers running the same version?
[02:17:19] <svm_invictvs> Can Mongo run in memory only?
[02:48:56] <vsmatck> svm_invictvs: wassssssup! .. Mongo uses a memory mapped file. If you keep your database smaller than your physical memory it's basically in-memory.
[02:49:11] <vsmatck> But there's no option to explicitly do it.
[04:01:40] <svm_invictvs> Hm, I'm having trouble doing an upsert
[04:02:25] <svm_invictvs> If I have a document like {$id:"some_id", foo:"bar"} and I want to ensure that for all documents foo is unique, how would I go about doing that?
[04:02:41] <svm_invictvs> I'm looking at examples for findAndModify, and I'm striking out.
[04:09:12] <svm_invictvs> http://mysticpaste.com/private/brc6uUS6yX/
[04:09:15] <svm_invictvs> Code, fwiw
[07:38:09] <[AD]Turbo> hello
[07:44:57] <wereHamster> svm_invictvs: add a unique index on 'foo'
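A one-line illustration of wereHamster's suggestion, using a hypothetical 'things' collection and the 2012-era shell helper:

    // the server then rejects any second document with the same foo
    db.things.ensureIndex({ foo: 1 }, { unique: true })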
[07:54:17] <cubud> MongoD.exe crashed on me last night after about 250 million object inserts
[07:57:05] <cubud> 269 million before it crashed
[07:58:01] <NodeX> :/
[07:58:28] <cubud> I am running it again on a new DB
[07:58:33] <cubud> see if it is repeatable
[07:58:40] <cubud> I hope it is :)
[07:59:20] <ShishKabab> A unit test I wrote just broke in a way that puzzles me. I think it is best explained by some (Python) code. Could anyone tell me why this fails? http://pastebin.com/iJNcKrP9
[08:00:10] <NodeX> are you likely to be doing 250 million inserts on a 4gb windows 7 box constantly?
[08:13:04] <cubud> I'd be likely to scale it up to about 32GB, but I would expect Mongo to degrade performance rather than to crash
[08:21:20] <cubud> Using YouTube video comments as an example. If a web page were to show a list of 10 videos including "Total number of comments", "Total thumbs up", and also "Total thumbs down" would I run a map-reduce on the action data (liked, commented, etc) or would it be better to periodically count them up and store them in the Videos collection object?
[08:21:37] <NodeX> depends on your app
[08:21:42] <cubud> YouTube :)
[08:21:50] <NodeX> depends how you query your app
[08:22:09] <cubud> I don't know yet, just trying to understand options so as to make an informed decision :)
[08:22:11] <NodeX> how many comments does youtube display by default?
[08:22:24] <cubud> about 20
[08:22:48] <NodeX> I would store that hot data inside an embedded object in the document
[08:22:58] <NodeX> then page the rest out to a comments collection
[08:23:31] <cubud> So a new comment would go into a comments collection + store the most recent 20 in the video? Or do you mean store the comment count in the video?
[08:23:35] <NodeX> I would strongly suggest you store the count of the comments in the document too, as running count() with a query is costly
[08:23:42] <NodeX> both
[08:24:08] <NodeX> pop the embedded comments, $push the comment, also add the comment to a comments collection
[08:24:58] <cubud> So update the Video's CommentCount in real-time as the comment is added?
[08:26:09] <NodeX> yes
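A minimal mongo-shell sketch of the pattern NodeX is describing, with hypothetical names throughout (videos, comments, recentComments, commentCount, recentCommentCount): every comment gets a permanent home in its own collection, while the hot copy and the running counters live in the video document. commentCount is the all-time total used for paging; recentCommentCount tracks the size of the embedded cache.

    // permanent home for every comment
    db.comments.insert({ videoId: videoId, userId: userId, body: body })
    // cache the comment in the video document and bump both counters atomically
    db.videos.update(
        { _id: videoId },
        { $push: { recentComments: { userId: userId, body: body } },
          $inc:  { commentCount: 1, recentCommentCount: 1 } }
    )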
[08:27:37] <cubud> I am just reading a page on conflict resolution, it seems that there might be a programmatic way of dealing with 2 comments added to the same video on different DB servers
[08:27:46] <NodeX> it does mean you'll have 20 comments of duplicate data for every video but it stops the need for a second query and HDD's are cheaper than RAM so it will scale more efficiently
[08:28:10] <NodeX> different DB servers ?
[08:28:18] <cubud> Shards
[08:28:30] <NodeX> shards go thru the mongos
[08:28:44] <NodeX> you don't normally connect to them individually
[08:28:54] <cubud> So there is always a single connection point that acts as a load balancer?
[08:29:02] <NodeX> as the distributor, yes
[08:29:10] <NodeX> else how do things get distributed?
[08:29:37] <cubud> Cassandra lets you connect to any and then they distribute the data amongst themselves
[08:29:49] <NodeX> Mongo != cassandra
[08:29:50] <cubud> That is for slaves actually, not shards
[08:30:13] <cubud> So you could have servers in different parts of the world and they would be eventually consistent
[08:30:32] <cubud> Does Mongo not do that with nodes then?
[08:30:48] <NodeX> not that I know of
[08:31:01] <NodeX> why use all that bandwidth for no reason?
[08:31:37] <NodeX> IIRC you -can- connect to a specific shard and read/write but I don't know what knock-on effect it would have on your data
[08:31:41] <zykes-> how is MongoDB contra CouchBase?
[08:31:48] <NodeX> I imagine it will eventually persist
[08:31:58] <NodeX> contra ?
[08:32:11] <zykes-> differences
[08:32:40] <NodeX> google it and find out
[08:32:59] <zykes-> ;)
[08:33:15] <cubud> The purpose is for load balancing. You could have servers on opposite sides of the world serving IP ranges geographically close to the client and then replicating between themselves. It also means there is no single point of failure because the client will try another master node if it is unable to reach the one it used last
[08:33:39] <cubud> It also receives load statistics from the master and so knows which server will give the fastest response for the next request
[08:33:41] <NodeX> cubud : then use cassandra if it's more suited to your needs
[08:34:38] <cubud> I don't know yet
[08:34:47] <cubud> I was wondering how Mongo worked
[08:35:22] <NodeX> http://www.mongodb.org/display/DOCS/Data+Center+Awareness
[08:35:29] <NodeX> perhaps reading the docs might help
[08:36:05] <cubud> Yes it will when I get that far, I am just participating in idle conversation atm :)
[08:36:47] <cubud> reading distributed consistency atm :)
[08:37:09] <NodeX> always remember that all these things bleed thru app facing caches
[08:37:26] <NodeX> they then trickle to the databases and shards over time via queues
[08:37:50] <NodeX> so as long as the caches are close to the client then the app never slows
[09:01:24] <mbuf> is there a recommended way to run tests for a Rails project that uses MongoDB? any specific object mapper that you suggest that can also be used with the testing?
[09:02:25] <NodeX> what sort of tests?
[09:02:45] <mbuf> NodeX: unit, functional tests with Rails
[09:03:48] <mbuf> NodeX: right now I am using Mongoid, but, I couldn't find documentation on running tests
[09:04:06] <NodeX> I don't understand what they are, sorry
[09:04:12] <mbuf> NodeX: someone mentioned that it is sort of difficult to mock the calls to mongo database, IIRC
[09:08:12] <algernon> it is, yes.
[09:08:35] <algernon> you can still run tests though, just have a test db available
[09:08:51] <mbuf> algernon: is there an example that I can see for a reference?
[09:08:52] <NodeX> is there a purpose for such tests?
[09:09:10] <mbuf> NodeX: just to keep a check on my code, have Continuous Integration going, for example
[09:09:35] <NodeX> don't you test as you write?
[09:09:52] <mbuf> NodeX: sure
[09:09:57] <algernon> mbuf: dunno. I don't follow any rails stuff. I can show you C & python examples only, but it really is stupidly straightforward.
[09:10:11] <mbuf> algernon: good enough
[09:12:09] <algernon> mbuf: https://github.com/algernon/mojology/blob/master/mojology/tests/browser.py & stuff under https://github.com/algernon/libmongo-client/tree/master/tests
[09:12:47] <algernon> output of the latter can be seen at http://travis-ci.org/#!/algernon/libmongo-client/jobs/2141943 for example.
[09:14:14] <mbuf> algernon: nice!
[09:20:46] <mbuf> algernon: found this https://github.com/mongoid/echo, but, it uses ruby1.9
[09:43:48] <cubud> NodeX: $push was a nice tip, thanks
[09:44:54] <cubud> Is there a way to say $pop but only if RecentComments.Count == 10?
[09:45:13] <NodeX> no
[09:45:21] <NodeX> you must keep the count
[09:45:30] <cubud> ok
[09:45:42] <NodeX> but you should keep the total comment count anyway for paging
[09:46:02] <cubud> Yes that wouldn't be too difficult
[09:46:15] <NodeX> so you'll always know... you can then do it in one query: $pop where your count is $gt 20
[09:47:14] <cubud> yes I suppose I could run two queries couldn't I? Push the comment, inc the CommentCount, then another query to pop a comment but with a filter "CommentCount >= 10"
[09:47:45] <NodeX> It's probably more efficient to always push then every so often just pop the last few
[09:48:14] <NodeX> you're only ever going to read the last 20 anyway so what does it matter if an extra 5 creep through if it maintains a faster app
[09:48:24] <NodeX> you're not going to echo them out to the client
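A sketch of that conditional trim, under the same hypothetical field names as above: the second update matches nothing until the embedded cache has grown past 20, so most writes pay only for the $push. A negative $pop removes the first (oldest) element, since $push appends to the end.

    // trim the oldest cached comment only once the cache is oversized
    db.videos.update(
        { _id: videoId, recentCommentCount: { $gt: 20 } },
        { $pop: { recentComments: -1 }, $inc: { recentCommentCount: -1 } }
    )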
[09:49:19] <cubud> What I might do is add them anyway, and have a thread which selects the videos with the highest RecentCommentCount then pops and resets the RecentCommentCount to 0
[09:50:01] <cubud> Comments, CommentCount, and RecentCommentCount - I'll think it through
[09:50:18] <cubud> I am certainly going to have to change my app from a Domain based one :)
[09:50:34] <cubud> Oh, actually
[09:51:09] <NodeX> Domain based one?
[09:51:12] <cubud> If I have a Video collection with the info in, then a VideoComments collection I could simply push the latest comment to the Comments list and inc CommentCount, and then when paging select a slice
[09:51:29] <cubud> That sounds promising doesn't it?
[09:52:27] <NodeX> your documents are capped to a size limit, at present it's 16mb
[09:52:36] <cubud> ah crap
[09:52:57] <NodeX> and transporting that size into memory to slice 20 comments is a bad idea
[09:53:02] <NodeX> secondly it's a second query
[09:53:07] <cubud> How would you store a large binary object then?
[09:53:19] <NodeX> define large?
[09:53:22] <cubud> I currently don't need to, but other apps might
[09:53:29] <cubud> Say, a 17mb zip file :)
[09:53:34] <NodeX> in gridFS
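For reference, the stock tools can do this without any application code; mongofiles ships alongside the server binaries (the database and file names here are made up):

    mongofiles --db myapp put archive.zip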
[09:54:06] <NodeX> take my advice and store the first page of comments in both the Video document and the comments collection
[09:54:46] <cubud> Yes, considering what you have just told me about a 16mb limit and having to read all comments into memory even when splicing I think that would be best
[09:55:18] <NodeX> even leaving that aside, 1 query is better than 2
[09:55:37] <cubud> No, "that" = what you said :)
[09:56:06] <cubud> only last 10 in video rather than all comments in a single document in another collection
[09:56:19] <cubud> then 1 comment per doc in VideoComments
[09:56:58] <NodeX> 1 comment?
[09:57:16] <cubud> VideoID, UserID, Body
[09:57:18] <NodeX> yes sorry
[09:57:35] <NodeX> I thought you meant the video
[09:57:44] <cubud> :)
[09:57:56] <NodeX> I can tell you're a windows based programmer from the CamelCasing
[09:58:14] <cubud> Initially Pascal :)
[09:58:18] <cubud> Now C#
[09:58:26] <NodeX> are you intending to run this app on winblows?
[09:58:51] <cubud> The web app yes. The DB initially, but might move it to Linux if the load is high
[09:59:03] <cubud> I will use ASP MVC for the web front end
[10:04:19] <cubud> By "Domain" I mean that I currently load a C# object in its entirety, update stuff, save it all. For Users this is okay, but for other stuff (such as Video) I will need my app to update individual column values only
[10:04:45] <cubud> I like this! I hope MongoD doesn't crash again in a couple of hours :)
[10:07:20] <_johnny> [11:57] < NodeX> I can tell you're a windows based programmer from the CamelCasing <- lol, NodeX :)
[10:11:47] <NodeX> lol
[10:15:01] <_johnny> btw, i converted all my xml data (3gb) to json (800mb). roughly 2.4 million docs, imported in half an hour. i have restored faith in mongoimport :)
[10:16:00] <_johnny> the biggest time delay was my ignorance (big surprise, huh) though. i was querying the upper case names with a compiled regex, rather than just the regex itself
[10:16:30] <_johnny> compiled added 0.5 secs of latency! removed it, got 0.008 on average on queries. d'oh
[10:19:06] <NodeX> nice
[10:19:15] <NodeX> half a second of latency :/
[10:20:34] <_johnny> heh, yeah, on the app side
[10:36:09] <Littlex> hey, i am looking for a recommended mongodb configuration for a test system
[10:36:30] <Littlex> aka i would like to lower the load on our test system
[10:36:59] <Littlex> can anyone point me to an appropriate guide?
[10:38:58] <NodeX> is your app read or write heavy?
[10:40:53] <Littlex> hmm, i would say its balanced
[10:44:31] <NodeX> what is spiking the load ?
[10:51:29] <justin_j> hi there
[10:51:40] <justin_j> does anyone know how to store large bitfields in mongo?
[11:15:40] <kali> justin_j: do you expect to perform operation on them inside mongodb ? or just store and retrieve ?
[11:15:48] <kali> justin_j: also, define "large" :)
[11:17:24] <justin_j> just store and retrieve at the moment
[11:17:49] <justin_j> and large, I'm talking up to 50,000 bits which is just over 6kb - so not that large :-)
[11:18:46] <kali> justin_j: 6kb, you can put them in binary inside the document
[11:19:05] <justin_j> yes, that's what I'm just looking into now
[11:19:11] <justin_j> bin data right?
[11:19:18] <kali> justin_j: the document size limit is 16MB, so if they grow much bigger, you can use gridfs
[11:19:52] <justin_j> 16mb should be more than enough
[11:21:38] <kali> I'm not sure what the exact terminology would be, I guess it depends on the driver... It's called Binary in java for instance
[11:32:36] <justin_j> I'll be reading it with Java too (well, Scala)
[11:34:13] <kali> justin_j: nothing fancy in casbah for Binary, IIRC
[11:34:52] <justin_j> is there anything?
[11:36:01] <kali> justin_j: well casbah is a thin layer over the java driver anyway. for Binary, just use the java class
[11:36:09] <justin_j> gotcha
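For the archive: in the shell the same bytes would be written with BinData, which takes a subtype and a base64 payload; the drivers expose an equivalent wrapper (Binary in Java, as kali says). The collection name and payload below are invented.

    // subtype 0 is generic binary data
    db.bitfields.insert({ _id: "user42", bits: BinData(0, "AAAA/w==") })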
[11:54:03] <Littlex> NodeX: Sorry for my late reply, well the test system is just a small machine, without any action mongo generates a load of 0.7
[11:54:09] <Littlex> i would like to reduce that
[11:54:28] <NodeX> define small
[11:55:40] <Littlex> its a virtualized system with 10gig of space, 1gig of ram, 2 x 3.3ghz cores
[11:58:00] <NodeX> how much data do you have in the databases?
[11:58:10] <NodeX> and indexes?
[12:06:17] <Littlex> NodeX: empty database ;)
[12:06:58] <NodeX> pastebin your config
[12:07:08] <Littlex> NodeX: in the current state its empty, with data it would contain about 2.2 gig
[12:07:18] <Littlex> with one indexed field afaik
[12:07:36] <NodeX> are you positive that mongod is causing the load?
[12:08:26] <Littlex> yes
[12:08:27] <Littlex> [root@... logs]# ps -eo pcpu,pid,user,args | sort -k 1 -r
                     %CPU   PID USER   COMMAND
                      1.0 20480 root   ps -eo pcpu,pid,user,args
                      0.7  1870 mongod /usr/bin/mongod -f /etc/mongod.conf
[12:09:57] <NodeX> restarted it?
[12:10:27] <Littlex> yes, same result
[12:11:05] <Littlex> here is the configuration https://gist.github.com/3424923
[12:12:50] <NodeX> you can't really tweak it more than that
[12:13:02] <Littlex> hum :)
[12:14:10] <NodeX> I'm just guessing because I would never run mongo on a 1gb RAM box, but I would hazard a guess that some of the internals that normally go on in RAM are happening on disk
[12:15:15] <Littlex> okay, so what would be the minimum of ram you would assign?
[12:15:18] <NodeX> 16.3 2337 mongodb /usr/bin/mongod --dbpath /home/mongodb --logpath /var/log/mongodb/mongodb.log run --config /etc/mongodb.conf
[12:15:30] <NodeX> there is my output for the same command
[12:16:05] <NodeX> 0.4 on another box
[12:16:48] <NodeX> 24.4 on a live box with 100k uniques a day
[12:17:19] <NodeX> I think your 0.7% CPU out of 200% is acceptable?
[12:45:43] <[AD]Turbo> is it possible to make mongodb log using syslogd ?
[13:07:31] <jY> [AD]Turbo: it's in 2.1
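For reference, the option jY means is a startup flag: from the 2.1 development series onward, mongod accepts --syslog in place of a logpath (the dbpath shown is just an example).

    mongod --syslog --dbpath /var/lib/mongodb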
[13:25:57] <Littlex> NodeX: well yeah
[14:39:54] <circlicious> anyone out there who can help me
[14:39:57] <circlicious> pathetic
[14:39:58] <circlicious> :(
[14:40:50] <ShishKabab> Is there any way to query on a subobject without key order mattering? Because I use Python with Pymongo, the key order of my objects is not guaranteed. So if I do a query like db.coll.find({x: {y: 1, z: 2}}) it randomly fails. See http://pastebin.com/Mwpqx72g .
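The behaviour ShishKabab is hitting is by design: a whole-subdocument query is an exact, order-sensitive equality match. Querying the fields through dot notation compares them individually, so key order stops mattering; a sketch against his example shape:

    // same intent as find({x: {y: 1, z: 2}}) but immune to key order
    db.coll.find({ "x.y": 1, "x.z": 2 })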
[14:41:56] <circlicious> 343 people and no one helps :(
[14:42:21] <circlicious> can someone help me with this - https://gist.github.com/53039e8b3f209759d091 ?
[14:44:57] <algernon> circlicious: you'll have to do that client side (or use m/r), I think. Unless the new(ish) aggregation framework can do that.
[14:46:28] <algernon> looks like the aggregation framework can help, at first glance: http://docs.mongodb.org/manual/applications/aggregation/
[14:47:47] <algernon> it's probably more straightforward to do it on client side, though.
[14:47:49] <circlicious> oh NodeX was talking about this ? ok will have to read. but i am not using 2.1, i am using 2.0.6 i think, algernon
[14:48:03] <circlicious> can you help me with map/reduce to achieve this?
[14:48:04] <algernon> well, client side it is then :)
[14:48:09] <circlicious> what is client-side?
[14:48:15] <circlicious> in browser JS?
[14:48:19] <algernon> client side. in your application.
[14:48:25] <circlicious> oh
[14:48:39] <circlicious> is it somehow possible with m/r?
[14:48:43] <algernon> (server side is mongodb, client side is whatever is talking to it)
[14:48:46] <circlicious> i tried, but couldn't achieve what i wanted to
[14:48:53] <algernon> it should be, yes.
[14:49:04] <circlicious> for example, i wanted to filter data from the m/r results and get all my document fields
[14:49:15] <circlicious> ok, would you like to take a look at my code and help?
[14:49:45] <circlicious> algernon: https://gist.github.com/0acc2dfcea26b4d32779
[14:50:14] <circlicious> first of all, i want all fields of document, does that make sense ? secondly i wanna filter the result set, where count > 1
[14:51:05] <algernon> I'm afraid you'll have to figure out the rest yourself.
[14:51:40] <circlicious> what do you mean?
[14:53:27] <algernon> that I can't help more than pointing you to the right direction ($work > irc)
[14:53:49] <circlicious> oh thats sad :(
[14:53:53] <circlicious> very sad
[14:53:58] <circlicious> i have been stuck on this for days now
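circlicious's gist is gone, but "filter where count > 1" is the classic duplicate-finding pipeline in 2.2's aggregation framework. A sketch with a hypothetical grouping field; since the pipeline only returns what you $push into the group, a second query fetches the full documents:

    db.items.aggregate(
        { $group: { _id: "$someField",
                    count: { $sum: 1 },
                    ids:   { $push: "$_id" } } },
        { $match: { count: { $gt: 1 } } }
    )
    // then fetch the complete documents:
    // db.items.find({ _id: { $in: <ids from above> } })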
[15:05:13] <Flo_> Hi
[15:05:18] <Flo_> How do I enable TTL via the c++ driver?
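Flo_'s question went unanswered in the log; for reference, TTL collections (new in 2.2) are enabled the same way from every driver, by creating an index with the expireAfterSeconds option. The shell equivalent, with invented names:

    // documents are reaped roughly an hour after their createdAt timestamp
    db.sessions.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })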
[15:23:21] <oreiator> win 5
[15:28:56] <scott1> I tried posting a question to the google group yesterday, but it appears not to have shown up. Is there some kind of moderation that I'm not getting to, or am I just delusional about having posted?
[15:32:50] <Derick> scott1: first post by a new person is indeed moderated - sadly, I don't have access to the moderation queue
[15:33:28] <scott1> thanks Derick. I suppose I will just have to be patient then.
[15:38:01] <scott1> ah! just popped up.
[15:55:36] <jQuy> Hi all! I would like to find a good GUI for MongoDB. Can you help me? I have Windows 7.
[15:56:47] <circlicious> i am using rockmongo
[15:59:13] <jQuy> circlicious: I don't have PHP installed. I'm running Node.js server.
[16:00:17] <nateabele> Derick: Good morrow, sir.
[16:00:40] <nateabele> Am I correct in understanding you are the keeper of pecl/mongo?
[16:01:38] <Vile> Good morning, good people!
[16:03:45] <Vile> I need an idea. I have a hierarchically arranged collection (using materialized paths). I need to do a m/r on it, but… for proper processing of each document i need all of its parents
[16:07:27] <Vile> basically, i need to emit parent node for each of its direct and indirect child nodes (and for itself of course)
[16:08:28] <Vile> question is - how can i do that?
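One possible shape for Vile's map function, assuming the materialized path is stored in a 'path' field formatted like ",grandparent,parent," (both the field name and the format are assumptions): the map step splits the path and emits once per ancestor, plus once for the document itself.

    var map = function () {
        // credit this document to every ancestor on its path, and to itself
        var parts = this.path.split(",");
        for (var i = 0; i < parts.length; i++) {
            if (parts[i].length) emit(parts[i], 1);
        }
        emit(this._id, 1);
    };
    var reduce = function (key, values) {
        var total = 0;
        for (var i = 0; i < values.length; i++) total += values[i];
        return total;
    };
    db.tree.mapReduce(map, reduce, { out: { inline: 1 } })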
[17:51:41] <thedahv1> Anybody had to implement a change/audit-log on documents?
[17:51:47] <thedahv1> I'm thinking about how I want to design it
[17:51:56] <thedahv1> Might be nice to throw some ideas against the proverbial wall and see what sticks
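To throw one idea at that wall: a common shape for what thedahv1 is sketching is an append-only audit collection written alongside each update. All names below are hypothetical.

    // one audit document per change, written right after the update itself
    db.audit.insert({
        coll:   "videos",
        docId:  videoId,
        who:    userId,
        when:   new Date(),
        change: { field: "title", from: oldTitle, to: newTitle }
    })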
[18:22:46] <jjbohn> back
[19:25:46] <jallan> if I have a field that holds a string, is there any easy way for me to go in and edit just part of the string?
[19:26:25] <jallan> I want to replace a word with another.
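There is no server-side string replace in this era of MongoDB, so jallan's edit has to round-trip through the client; a shell sketch, with the 'text' field name and the words being hypothetical:

    db.coll.find({ text: /oldword/ }).forEach(function (doc) {
        doc.text = doc.text.replace(/oldword/g, "newword");
        db.coll.save(doc);
    })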
[19:36:56] <DrShoggoth> can mongo map/reduce jobs be run outside of the console?
[19:37:25] <DrShoggoth> ie generated procedurally and run from a ruby script
[19:39:07] <BurtyB> DrShoggoth, it works for me from php
[19:39:16] <DrShoggoth> ok, i'm new to it
[19:39:19] <DrShoggoth> just learning some things
[19:42:07] <NodeX> DrShoggoth : yes
[19:42:17] <NodeX> most drivers have helpers for this
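Those helpers all wrap the same database command, so any driver that can issue a command can run a map/reduce; the shell equivalent (collection name and functions invented) looks like this:

    db.runCommand({
        mapreduce: "events",
        map:       function () { emit(this.type, 1); },
        reduce:    function (key, vals) {
                       var t = 0;
                       for (var i = 0; i < vals.length; i++) t += vals[i];
                       return t;
                   },
        out:       { inline: 1 }
    })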
[20:25:48] <icedstitch> :q
[21:36:08] <svm_invictvs> Heya
[21:42:04] <dufflebunk> Is there a developer for the mongo Java API around? I think there's a bug in the RC2 version.
[21:43:08] <crudson1> dufflebunk: create a ticket at jira.mongodb.org detailing the issue
[21:43:28] <dufflebunk> crudson1, Ok, thanks.
[22:15:32] <Almindor> is there a way to import "None" value via mongoimport?
[22:16:51] <jarrod> null
[22:16:52] <jarrod> ?
[22:18:05] <Almindor> jarrod: JSON allows null without quotes?
[22:18:11] <joshontheweb> how do you create a rollup field that is built on query?
[22:18:54] <Almindor> oh it does
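A minimal illustration: with one JSON document per line, an unquoted null imports as a BSON null (the file and field names are invented).

    $ cat things.json
    { "name": "widget", "expires": null }
    $ mongoimport --db test --collection things --file things.json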
[22:19:13] <joshontheweb> basically I'm trying to build a list of an objects children when you query the object
[22:28:46] <owen1> i have a collection called questions with array of answers. how can i query for all answers of a specific user?
[22:44:27] <crudson1> owen1: a paste of your document structure would be helpful
[23:04:00] <owen1> crudson1: http://pastebin.com/bMAA3YnB
[23:04:39] <owen1> i want to query all answers of all questions.
[23:07:36] <svm_invictvs> If I have an object and I want to construct a query which will return that exact object, how would I do that?
[23:18:50] <svm_invictvs> The query would be the object itself, right?
[23:20:04] <vsmatck> It would be a object that contains one or more fields of the object you're looking for.
[23:20:48] <vsmatck> Like if you had a user object your query object might be a object with a UID, or username.
[23:22:17] <svm_invictvs> vsmatck: Well, what I'm trying to do is rig up the atomic compare-and-swap.
[23:22:41] <svm_invictvs> http://www.mongodb.org/display/DOCS/Atomic+Operations
[23:22:43] <svm_invictvs> Outlined there.
[23:26:46] <vsmatck> Ah, I see what is going on there. In that first example it's fetching the document and decrementing that "qty" field.
[23:27:09] <svm_invictvs> So I'm assuming that I'd basically do something like, original = find("_id":"myId"); changes = original.copy(); /* make changes */ insert(original, change, false, false);
[23:27:10] <vsmatck> The update command is trying to find a document with a specific _id and a specific qty. If it can't find that document it won't update.
[23:27:18] <svm_invictvs> yeah
[23:30:25] <vsmatck> Hm. "s/insert/update" ?
[23:31:25] <svm_invictvs> yeah, update
[23:31:26] <svm_invictvs> sorry
[23:31:39] <svm_invictvs> Basically, I get the object, make changes and then insert.
[23:31:50] <svm_invictvs> Only if the old record matches the new record will it insert
[23:31:54] <svm_invictvs> matches identically
[23:32:19] <vsmatck> yeah, it seems like this should work. With update.
[23:32:20] <svm_invictvs> I just tried using my local shell, seems to work alright.
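The shell session svm_invictvs describes, spelled out as a sketch (collection and field names invented); in the 2012-era shell, getLastError's n count tells you whether the swap won the race:

    var original = db.items.findOne({ _id: "myId" });
    var changed = {};
    for (var k in original) changed[k] = original[k];  // shallow copy
    changed.foo = "new value";
    // the update applies only if the stored document still equals 'original'
    db.items.update(original, changed);
    db.getLastErrorObj().n  // 1 if the swap applied, 0 if the document changed underneath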
[23:40:22] <crudson1> owen1: so you want not the full documents that a certain user appears in, but just their bits of the answers array, across all documents
[23:45:01] <rboyer> if w=majority is specified for a write, is that measured off of the same N as the majority vote calculation for Election? or something else?
[23:45:21] <fbjork> is it possible to return only matching sub array elements?
[23:45:25] <rboyer> my gut would say it's the majority of N, where N is the total number of electable members (master, and non-hidden secondaries)
[23:45:34] <rboyer> but i can't find reference documentation to corroborate that
[23:47:24] <crudson1> owen1: if so you could do something like: db.u.aggregate({$match:{'answers.user':'josh'}}, {$unwind:'$answers'}, {$match:{'answers.user':'josh'}})
[23:48:57] <jarrod> is aggregate used in favor of groups now?