[00:09:54] <stevenxl> regreddit, Yea. I see that it's an array, but I'm not sure how to modify my query document so that it does work.
[00:12:05] <regreddit> so, if funding_round is an array, i assume you will have multiple round_code objects?
[00:12:53] <regreddit> also, is the data model your design? it's a bit 'odd'
[00:13:18] <regreddit> will the funding_round object have more than one property?
[00:13:59] <stevenxl> regreddit, It's not my design. If you follow the link above, I pasted in a "funding_round" so you can have a better sense of the schema.
[00:14:17] <regreddit> like [{round_code:'c',date:1234,round_amt:1234.56}, {round_code:'b',date:1235,round_amt:14.56}] ?
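For a schema shaped like the sample regreddit posted, the usual query forms are dot notation (the document matches if any array element matches) and `$elemMatch` (a single element must satisfy all conditions at once). A minimal sketch, with a tiny pure-Python stand-in for the matching since no server is at hand; the collection contents are invented from the example above:

```python
# Sample document shaped like regreddit's example (invented contents).
company = {
    "name": "Acme",
    "funding_rounds": [
        {"round_code": "c", "date": 1234, "round_amt": 1234.56},
        {"round_code": "b", "date": 1235, "round_amt": 14.56},
    ],
}

# Dot notation: matches if ANY element of the array has round_code == "c".
dot_query = {"funding_rounds.round_code": "c"}

# $elemMatch: one single element must satisfy ALL conditions together.
elem_query = {"funding_rounds": {"$elemMatch": {"round_code": "b",
                                                "round_amt": {"$gt": 10}}}}

def matches_dot(doc, path, value):
    """Tiny simulation of MongoDB's dot-notation matching over an array."""
    field, sub = path.split(".", 1)
    return any(el.get(sub) == value for el in doc.get(field, []))

print(matches_dot(company, "funding_rounds.round_code", "c"))  # True
```

The distinction matters once two conditions are combined: dot notation lets each condition be satisfied by a *different* array element, while `$elemMatch` pins them to the same element.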
[02:31:04] <FelixFire619> so like on https://docs.mongodb.org/getting-started/shell/introduction/ "address" : { } Is the collection, "street" : "2 Avenue", street is a field, 2 Avenue is a value of that field?
[02:34:27] <FelixFire619> I'm just trying to figure this all out, working on a web registration with 5 things, id username email password & access, but I'm not quite getting how this works ;|
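A note on the terminology in the tutorial page linked above: `"address"` is not a collection but an embedded subdocument field inside a document; the collection is the container that holds documents. A sketch of the registration document FelixFire619 describes under that reading (all values invented, and the collection name `users` is an assumption):

```python
# One document in a hypothetical "users" collection.
# The collection is the container; each document is a set of
# field/value pairs, and a field's value may itself be a subdocument.
user_doc = {
    "_id": 1,                      # MongoDB adds an ObjectId if omitted
    "username": "felix",
    "email": "felix@example.com",
    "password": "<store a hash, never plaintext>",
    "access": "member",
    "address": {                   # embedded subdocument, like "address"
        "street": "2 Avenue",      # in the tutorial: "street" is a field,
        "zipcode": "10075",        # "2 Avenue" is that field's value
    },
}

print(user_doc["address"]["street"])  # 2 Avenue
```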
[04:19:06] <Waheedi> i got that read preference nearest working finally :)
[04:19:29] <Waheedi> it's been almost three days and I'm nagging about it
[10:07:38] <Keksike> when doing an aggregation pipeline, is there any way to easily make a $group $sum conditional? Let's say I have a boolean field 'isTrue' in the documents I am handling, and if isTrue = true then it would be summed into the $sum
[10:07:53] <Keksike> whats the approach I should take?
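What Keksike describes is usually done by wrapping the `$sum` argument in `$cond`, so documents where the flag is false contribute 0. A sketch of the stage (the `amount` field name is an assumption), followed by a plain-Python rendition of what it computes:

```python
# Aggregation stage sketch: sum "amount" only where isTrue is true.
group_stage = {
    "$group": {
        "_id": None,
        "total": {"$sum": {"$cond": ["$isTrue", "$amount", 0]}},
    }
}

# Pure-Python equivalent of what the stage computes, for illustration:
docs = [
    {"isTrue": True, "amount": 5},
    {"isTrue": False, "amount": 7},
    {"isTrue": True, "amount": 3},
]
total = sum(d["amount"] for d in docs if d["isTrue"])
print(total)  # 8
```

If the false-branch documents are not needed at all downstream, a `$match: {isTrue: true}` stage before the `$group` achieves the same sum and lets the pipeline discard documents early.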
[10:19:11] <Tachaikowsky> I figured it, thanks Derick
[11:06:31] <pchoo> Hi all, I'm looking into optimizing a collection (in Mongodb 3.0) for searching/sorting. Currently I have simple indexes on a few fields. Are there benefits to using compound indexes, and will no indexes be used if a field that is not in any of the compound indexes searched on?
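On pchoo's question: a compound index can serve any query on a *left prefix* of its fields, so one compound index can replace several single-field indexes on its leading fields; a predicate on a field covered by no index (and no usable prefix) falls back to a collection scan. A pure-Python sketch of the prefix rule (field names invented, and deliberately simplified — real index selection also weighs sort order and selectivity):

```python
def usable_prefix(index_fields, query_fields):
    """True if a query on query_fields can use the compound index:
    MongoDB can use an index when the query covers a left prefix
    of the indexed fields (simplified model)."""
    qf = set(query_fields)
    n = 0
    for f in index_fields:       # walk the index left to right
        if f in qf:
            n += 1               # prefix keeps growing
        else:
            break                # gap: prefix ends here
    return n > 0

index = ["status", "created_at", "user_id"]
print(usable_prefix(index, ["status", "created_at"]))  # True
print(usable_prefix(index, ["created_at"]))            # False: no leading field
```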
[13:53:24] <Bookwormser> Hello, when trying to import a dump containing 10,000 records, the import only inserts a single document. Is there a way to have the import do all 10,000 records?
[14:06:15] <Bookwormser> mongodump dumps in a json format, but mongorestore throws an error complaining that the import isn't in a bson format. Is there a way around that?
[14:06:34] <StephenLynx> what arguments are you using for each?
[14:07:19] <Bookwormser> Should I paste them here?
[14:43:39] <pchoo> Do you remember when Microsoft put out that article, claiming "w00t" was some kind of internet bullying, meaning "We 0wn the 0ther Team" or some shit?
[14:44:04] <StephenLynx> no, mongorestore uses mongodump
[14:44:30] <Derick> did I get it the wrong way around?
[14:44:44] <StephenLynx> The mongorestore program writes data from a binary database dump created by mongodump to a MongoDB instance. mongorestore can create a new database or add data to an existing database.
[14:46:18] <Derick> or at least, that's what I wanted to say :P
[14:47:24] <StephenLynx> <Derick> mongoexport works with mongorestore, mongodump works with mongoimport
[14:52:28] <scruz> hello. the $setIsSubset op checks if arg1 is subset of arg2, but suppose i want to check if a certain value is an element of a set, how do i do this?
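One workaround consistent with `$setIsSubset`'s array/array signature is to wrap the scalar in a one-element array: a single value is an element of a set exactly when the singleton set containing it is a subset. A sketch (the `tags` field name is invented):

```python
# Expression sketch: is the scalar "c" an element of the set field "tags"?
# $setIsSubset compares two arrays, so wrap the scalar in a one-element array.
expr = {"$setIsSubset": [["c"], "$tags"]}

# Pure-Python illustration of the same check:
def is_element(value, set_field):
    return {value}.issubset(set(set_field))

print(is_element("c", ["a", "b", "c"]))  # True
```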
[14:56:45] <Ben_1> hi, I'm using the async mongodb driver and trying to instantiate a SingleResultCallback, but every time I try a compiler error occurs: The type com.mongodb.async.SingleResultCallback cannot be resolved. It is indirectly referenced from required .class files
[14:56:46] <Ben_1> seems like the async driver needs another dependency but I can't find which one because I do not use maven
[15:29:06] <MANCHUCK> I'm having an issue with MongoDB on AWS. When we start doing a lot of aggregation, the replication lag starts building and the secondaries go into recovery
[15:29:23] <MANCHUCK> has anyone had a similar issue?
[15:50:02] <tantamount> cheeser: I finally managed to finish my 11-stage aggregation pipeline with triple-unwind stages, and amazingly, the results actually came out correct! But I still maintain that the whole thing would have been a lot easier if the set operators allowed scalar/array comparison instead of just array/array!
[17:10:14] <Doyle> Hey. Thought. The use of multiple mongos (routers) in the connection URI is a bad idea. Yes? Cursor not found issues likely?
[17:10:45] <Doyle> I know the docs mention using it when connecting to a replica set, but I don't think they warn against putting mongos hosts in here.
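For reference on the URI question: MongoDB's connection-string documentation does show seed lists naming several mongos routers, which drivers treat as candidates for failover/load balancing. A minimal sketch of the form such a URI takes (hostnames are made up):

```python
# Hypothetical mongos routers; the URI simply comma-separates the hosts.
hosts = ["mongos1.example.com:27017", "mongos2.example.com:27017"]
uri = "mongodb://" + ",".join(hosts) + "/mydb"
print(uri)  # mongodb://mongos1.example.com:27017,mongos2.example.com:27017/mydb
```

Whether that interacts badly with cursors in a given deployment, as Doyle suspects, is not settled in this log.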
[17:12:12] <tinylobsta> is it bad form to have a document that contains an array of subdocuments, which in turn contains another array of subdocuments, which in turn contains yet *another* array of subdocuments? this seems awful, because if i want to find a specific element in the deepest array i'm going to have to have three nested loops...
[17:19:21] <StephenLynx> nesting itself isn't, but complex nesting in a dynamic document is.
[17:19:46] <StephenLynx> consider multiple collections and managing relations between them.
[17:19:52] <StephenLynx> it's not ideal, though.
[17:22:55] <tinylobsta> stephenlynx: thanks, i thought doing that might be going against convention, since it seems a lot of the rhetoric out there talks about how great it is to contain as much as you can within a single document + subdocs
[17:23:01] <tinylobsta> but yeah, i'll definitely do that then
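StephenLynx's suggestion above — multiple collections with application-managed relations instead of three levels of nested arrays — can be sketched like this; the collection shapes and field names are invented for illustration:

```python
# Instead of doc -> array -> array -> array, hold each level in its own
# collection and link levels with manual references (hypothetical names).
forum  = {"_id": "f1", "title": "General"}
thread = {"_id": "t1", "forum_id": "f1", "title": "Hello"}
post   = {"_id": "p1", "thread_id": "t1", "body": "First!"}

# Finding the "deepest" element is then one (indexable) lookup on
# thread_id rather than three nested loops through subdocuments:
posts = [post, {"_id": "p2", "thread_id": "t2", "body": "Other"}]
in_thread = [p for p in posts if p["thread_id"] == "t1"]
print(len(in_thread))  # 1
```

The trade-off is the usual one: embedding gives single-document reads and atomic updates, while referencing keeps documents small and makes deep elements directly queryable.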
[17:52:56] <magicantler> via node.js is there anyway to get information from mongos or the config files about the route shard mappings for the key?
[17:56:55] <StephenLynx> unless mongo can hand you that, you will have to read from the FS.
[17:56:59] <StephenLynx> which requires permissions.
[18:25:53] <magicantler> StephenLynx: If it's in the mongo shell, but not in the native node.js mongo driver, then wouldn't i need to call the shell through a forked process?
[21:00:37] <MacWinner> so i have a simple 3-node replicaset on my own dedicated servers.. about 200GB of data in them. I'm planning for the future of growth and I will need to shard at some point. now I'm trying to figure out whether I want to do this myself, or if there is a good hosted alternative. I feel like a lot of the hosted alternatives are pricey based on a per GB model. Any pointers or recommendations here?
[21:01:46] <cheeser> i'd recommend using ec2 and CloudManager
[21:01:57] <cheeser> pardon me. i need to go cash my commission check.
[21:11:15] <MacWinner> tangential question.. when you do mongodump, is it only for data on that specific node? or does mongodump attempt to back up all data in your cluster?
[21:14:38] <cheeser> MacWinner: depends on what you connect to: https://docs.mongodb.org/manual/reference/program/mongodump/#behavior
[22:03:14] <MacWinner> 'cause theoretically the data is the same size.. it's just stored smaller.. will be interesting
[23:00:17] <phretor> why in the world PyMongo would complain about this http://api.mongodb.org/python/current/faq.html#using-pymongo-with-multiprocessing if I only use MongoClient(connect=False) in my code?
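Why the warning still fires in phretor's specific case isn't answered in the log, but the `connect=False` semantics the FAQ relies on can be modeled simply: the constructor only records settings, and the real connection is deferred until first use, so no sockets exist to be inherited across a fork. A toy model of that deferral (not PyMongo's actual implementation):

```python
class LazyClient:
    """Tiny stand-in for MongoClient(connect=False): the constructor
    records settings but defers any real connection until first use."""
    def __init__(self, host="localhost", connect=True):
        self.host = host
        self.connected = False
        if connect:
            self._connect()

    def _connect(self):
        self.connected = True   # the real driver would open sockets here

    def ping(self):
        if not self.connected:
            self._connect()     # connect lazily, on first operation
        return "pong"

c = LazyClient(connect=False)
print(c.connected)  # False: nothing to leak if the process forks now
print(c.ping())     # pong
print(c.connected)  # True: connection established on first operation
```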