PMXBOT Log file Viewer


#mongodb logs for Friday the 22nd of January, 2016

[00:03:01] <stevenxl> Hi everyone. I'm new to mongo and trying to build a query object.
[00:03:10] <stevenxl> I have a list of companies with the following schema:
[00:03:11] <stevenxl> {"funding_round": [{"round_code": "c"}]}
[00:03:31] <stevenxl> The round_code can be any of a number of things, such as 'seed', 'a', 'b', etc.
[00:03:38] <stevenxl> I'm trying to filter for 'c'.
[00:03:49] <stevenxl> I am using the following query object, but I am not getting any results back
[00:04:34] <stevenxl> db.companies.find({"funding_round": {"round_code": "c"}})
[00:04:41] <stevenxl> companies, of course, being the collection.
[00:06:14] <stevenxl> You can find the actual json here if you want to take a look: http://paste.ubuntu.com/14593647/
[00:09:15] <regreddit> stevenxl, round_code is in an array
[00:09:22] <regreddit> so that query won't work
[00:09:54] <stevenxl> regreddit, Yeah. I see that it's an array, but I'm not sure how to modify my query document so that it works.
[00:12:05] <regreddit> so, if funding_round is an array, i assume you will have multiple round_code objects?
[00:12:53] <regreddit> also, is the data model your design? it's a bit 'odd'
[00:13:18] <regreddit> will the funding_round object have more than one property?
[00:13:59] <stevenxl> regreddit, It's not my design. If you follow the link above, I pasted in a "funding_round" so you can get a better sense of the schema.
[00:14:17] <regreddit> like [{round_code:'c',date:1234,round_amt:1234.56}, {round_code:'b',date:1235,round_amt:14.56}] ?
[00:14:25] <regreddit> ok, one sec
[00:14:49] <stevenxl> I see.
[00:14:56] <stevenxl> I found the solution
[00:14:58] <stevenxl> had to use dot notation
[00:15:12] <regreddit> ok
[00:15:26] <stevenxl> {"funding_rounds.round_code": "c"} did it
[00:15:28] <stevenxl> thank you
[00:15:44] <regreddit> if that's all you needed, then yeah. I thought you had a more complex query, using the $in operator or something
[00:15:55] <regreddit> that's the reason for my questions
[00:16:30] <stevenxl> I hear you - yea it was much simpler than that
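For the record, the two queries from this exchange side by side, as mongo shell statements (note that stevenxl's working query uses the plural field name funding_rounds, per his paste):

```javascript
// Equality on an embedded document matches only if an element
// equals {"round_code": "c"} exactly — so this returns nothing here
db.companies.find({"funding_round": {"round_code": "c"}})

// Dot notation matches any array element whose round_code is "c"
db.companies.find({"funding_rounds.round_code": "c"})
```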
[01:57:00] <FelixFire619> Can someone explain to me how MongoDB works? If it's not similar to MySQL, is there one that is like it?
[01:59:09] <cheeser> https://docs.mongodb.org/getting-started/shell/introduction/
[02:28:31] <FelixFire619> Ok so with MySQL you have DB > tables > fields > values, but with mongo it's just a db > field & value?
[02:30:04] <cheeser> db/collection/fields
[02:31:04] <FelixFire619> so like on https://docs.mongodb.org/getting-started/shell/introduction/ "address" : { } Is the collection, "street" : "2 Avenue", street is a field, 2 Avenue is a value of that field?
[02:32:40] <cheeser> that's a field in a document
[02:32:53] <cheeser> https://docs.mongodb.org/getting-started/shell/introduction/#collections
[02:34:27] <FelixFire619> I'm just trying to figure this all out, working on a web registration with 5 things: id, username, email, password & access, but I'm not quite getting how this works ;|
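A sketch of the mapping cheeser describes, using FelixFire619's registration fields (the database and collection names here are made up for illustration):

```javascript
// MySQL:   database > table      > row      > column / value
// MongoDB: database > collection > document > field  / value
use webapp                          // the database
db.users.insertOne({                // "users" is the collection; this object is one document
    username: "felix",              // field / value pairs
    email: "felix@example.com",
    password: "<store a hash, never plaintext>",
    access: 1
})                                  // a unique _id field is added automatically
```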
[04:19:06] <Waheedi> i got that read preference nearest working finally :)
[04:19:29] <Waheedi> it's been almost three days and I'm still nagging about it
[09:18:28] <Tachaikowsky> Hello
[09:18:55] <Tachaikowsky> I am getting this error while trying to push information into mongolab aws
[09:18:56] <Tachaikowsky> "errmsg": "E11000 duplicate key error index
[09:19:07] <Tachaikowsky> Anyone { : null }
[09:19:18] <Tachaikowsky> Can someone help me figure this out?
[09:39:13] <Derick> Tachaikowsky: please post the full error in a pastebin
[09:39:22] <Derick> and also, which query (insert?) you're running
[09:39:27] <Derick> and your indexes
[10:07:38] <Keksike> when doing an aggregation pipeline, is there any way to easily make a $group $sum conditional? Let's say I have a boolean field 'isTrue' in the documents I am handling, and if isTrue = 'true' then it would be summed into the $sum
[10:07:53] <Keksike> what's the approach I should take?
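Keksike's question goes unanswered in the log; the usual approach (a sketch, not from the conversation — the collection and group key are hypothetical, only isTrue comes from the question) is to wrap the summed value in $cond:

```javascript
db.docs.aggregate([
    {$group: {
        _id: "$someKey",
        // contribute 1 to the sum when isTrue is true, 0 otherwise
        trueCount: {$sum: {$cond: [{$eq: ["$isTrue", true]}, 1, 0]}}
    }}
])
```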
[10:19:11] <Tachaikowsky> I figured it, thanks Derick
[11:06:31] <pchoo> Hi all, I'm looking into optimizing a collection (in Mongodb 3.0) for searching/sorting. Currently I have simple indexes on a few fields. Are there benefits to using compound indexes, and will no indexes be used if a field that is not in any of the compound indexes searched on?
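pchoo's question also gets no answer in the log. For reference, a compound index serves queries on any prefix of its keys, and a query touching none of the indexed fields falls back to a collection scan; a minimal sketch (field names hypothetical):

```javascript
// Can serve queries/sorts on {a}, {a, b}, and {a, b, c} — but not on {b} or {c} alone
db.coll.createIndex({a: 1, b: 1, c: -1})

// Check which index (if any) a given query actually uses
db.coll.find({a: 5}).sort({b: 1}).explain("executionStats")
```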
[13:53:24] <Bookwormser> Hello, when trying to import a dump containing 10,000 records, the import only inserts a single document. Is there a way to have the import do all 10,000 records?
[13:54:53] <cheeser> using what import tool?
[13:54:59] <Bookwormser> mongoimport
[13:55:17] <cheeser> that should import the entire file...
[13:55:42] <Bookwormser> It does but instead of 10,000 documents it inserts 1 single document containing 10,000 records
[13:55:43] <cheeser> can you pastebin a sample of your import file and how you ran the import?
[13:56:06] <StephenLynx> you probably screwed up on the export or import command.
[13:56:17] <StephenLynx> oh, wait
[13:56:20] <Bookwormser> The export showed 10,000 records dumped
[13:56:23] <StephenLynx> I was thinking of dump/restore
[13:56:34] <StephenLynx> not export/import
[13:56:47] <StephenLynx> dump/restore works better, afaik
[13:57:18] <cheeser> dump/restore maintains types. export/import is lossy because it passes through a json phase.
[13:57:48] <Bookwormser> ah ok great, thank you
[13:57:54] <Bookwormser> trying it now
[14:06:15] <Bookwormser> mongodump dumps in a json format, but mongorestore throws an error that the input isn't in a bson format. Is there a way around that?
[14:06:34] <StephenLynx> what arguments are you using for each?
[14:07:19] <Bookwormser> Should I paste them here?
[14:08:09] <Bookwormser> mongodump -u user -d 'stage' -c 'one' -q '{_id: {$gte: ObjectId("56a1fda8593f302916eaa012")}}'
[14:08:13] <Bookwormser> thats the dump
[14:08:14] <Derick> mongoexport works with mongorestore, mongodump works with mongoimport
[14:08:25] <Bookwormser> ah ok I will try mongoexport
[14:08:42] <Derick> first two use bson, second two use json (or csv)
[14:35:46] <Bookwormser> perfect, all set
[14:35:48] <Bookwormser> Thank you
[14:40:01] <cheeser> w00t
[14:43:39] <pchoo> Do you remember when Microsoft put out that article, claiming "w00t" was some kind of internet bullying, meaning "We 0wn the 0ther Team" or some shit?
[14:44:04] <StephenLynx> no, mongorestore uses mongodump
[14:44:30] <Derick> did I get it the wrong way around?
[14:44:36] <StephenLynx> yes
[14:44:44] <StephenLynx> The mongorestore program writes data from a binary database dump created by mongodump to a MongoDB instance. mongorestore can create a new database or add data to an existing database.
[14:44:47] <cheeser> mongoexport/import, mongodump/restore
[14:46:00] <Derick> that's what I said!
[14:46:18] <Derick> or at least, that's what I wanted to say :P
[14:47:24] <StephenLynx> <Derick> mongoexport works with mongorestore, mongodump works with mongoimport
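The pairing the channel finally settles on, as cheeser states it, sketched as shell commands (database and collection names borrowed from Bookwormser's dump command):

```shell
# BSON pair: binary dump and restore, preserves BSON types
mongodump    -d stage -c one -o dump/
mongorestore dump/

# JSON/CSV pair: human-readable but lossy, since everything passes through a json phase
mongoexport  -d stage -c one -o one.json
mongoimport  -d stage -c one --file one.json
```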
[14:52:28] <scruz> hello. the $setIsSubset op checks if arg1 is subset of arg2, but suppose i want to check if a certain value is an element of a set, how do i do this?
[14:53:38] <cheeser> $in?
[14:56:45] <Ben_1> hi, I'm using the async mongodb driver and trying to instantiate a SingleResultCallback, but every time I try a compiler error occurs: The type com.mongodb.async.SingleResultCallback cannot be resolved. It is indirectly referenced from required .class files
[14:56:46] <Ben_1> seems like the async driver needs another dependency but I can't find which one because I do not use maven
[14:57:22] <scruz> cheeser: it’s for aggregation
[14:57:31] <Ben_1> aah found a line on the manual
[14:57:37] <Ben_1> bson and driver core
[14:57:40] <Ben_1> I will try it thx
[14:58:06] <cheeser> scruz: and you can't use a $match stage?
[15:02:31] <scruz> cheeser: trying to do something like: {status: {$if: {$in: array, value}, "yes", "no"}}
[15:02:37] <scruz> please excuse the bad syntax
[15:03:26] <scruz> basically, assign a status based on if a value is in a set or not.
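A common workaround for scruz's membership test (a sketch; at the time of this log the aggregation framework had no scalar $in operator, so the scalar gets wrapped in a one-element array for $setIsSubset — collection and field names are hypothetical):

```javascript
db.coll.aggregate([
    {$project: {
        status: {$cond: [
            {$setIsSubset: [["wanted"], "$tags"]},  // true iff "wanted" is an element of tags
            "yes",
            "no"
        ]}
    }}
])
```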
[15:16:08] <mkjgore> hey folks, I'm in Google keyword hell here. Is there a way I could query (from within mongo) the location of the db (db path)?
[15:16:43] <cheeser> db.serverCmdLineOpts()
[15:26:38] <mkjgore> cheeser: thanks a bunch! :-D
[15:27:26] <cheeser> np
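For reference, the dbPath shows up under the parsed section of that command's output (a sketch; the path shown is only an example, and the field is present when it was set via the config file or command line):

```javascript
var opts = db.serverCmdLineOpts()
opts.parsed.storage.dbPath   // e.g. "/var/lib/mongodb" — mirrors the config file / command line
```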
[15:29:06] <MANCHUCK> I'm having an issue with MongoDB on AWS. When we start doing a lot of aggregation, the replication lag starts building and the secondaries go into recovery
[15:29:23] <MANCHUCK> has anyone had a similar issue?
[15:49:28] <kernasi> hi
[15:50:02] <tantamount> cheeser: I finally managed to finish my 11-stage aggregation pipeline with triple-unwind stages, and amazingly, the results actually came out correct! But I still maintain that the whole thing would have been a lot easier if the set operators allowed scalar/array comparison instead of just array/array!
[15:51:07] <cheeser> heh
[15:51:36] <cheeser> that last half makes no sense, though, because as i understand your problem, that feature exists.
[15:55:50] <tantamount> That don't think it be like it is but it do
[15:55:55] <tantamount> They*
[15:56:47] <cheeser> could not parse
[15:56:48] <cheeser> :D
[17:10:14] <Doyle> Hey. Thought. The use of multiple mongos (routers) in the connection URI is a bad idea. Yes? Cursor not found issues likely?
[17:10:45] <Doyle> I know the docs mention using it when connecting to a replica set, but I don't think they warn against putting mongos hosts in here.
[17:10:55] <Doyle> is that a valid concern?
[17:12:12] <tinylobsta> is it bad form to have a document that contains an array of subdocuments, which in turn contains another array of subdocuments, which in turn contains yet *another* array of subdocuments? this seems awful, because if i want to find a specific element in the deepest array i'm going to have to have three nested loops...
[17:19:04] <StephenLynx> yup
[17:19:06] <StephenLynx> pretty bad.
[17:19:21] <StephenLynx> nesting itself isn't, but complex nesting in a dynamic document is.
[17:19:46] <StephenLynx> consider multiple collections and managing relations between them.
[17:19:52] <StephenLynx> it's not ideal either, though.
[17:22:55] <tinylobsta> stephenlynx: thanks, i thought doing that might be going against convention, since it seems a lot of the rhetoric out there talks about how great it is to contain as much as you can within a single document + subdocs
[17:23:01] <tinylobsta> but yeah, i'll definitely do that then
[17:52:56] <magicantler> via node.js is there anyway to get information from mongos or the config files about the route shard mappings for the key?
[17:56:55] <StephenLynx> unless mongo can hand you that, you will have to read from the FS.
[17:56:59] <StephenLynx> which requires permissions.
[18:08:40] <magicantler> StephenLynx do you mean by forking a bash command from node?
[18:09:03] <StephenLynx> no
[18:09:06] <StephenLynx> you got the fs module.
[18:14:02] <magicantler> yes, i have it
[18:14:36] <StephenLynx> so
[18:15:14] <magicantler> so you're saying read in /data/config?
[18:15:16] <magicantler> or something like that?
[18:15:24] <StephenLynx> yeah
[18:15:40] <StephenLynx> if mongo doesn't hand you that info, you will have to read it directly.
[18:15:56] <magicantler> i see. anywhere i can see what /data/config typically has? I don't have a shard setup
[18:16:09] <magicantler> i'd really just want to know where a user (sharded by name) is mapped (hash shard)?
[18:16:15] <StephenLynx> I would see if mongo can tell you the info you are looking for first.
[18:16:36] <StephenLynx> did you look into that?
[18:16:41] <magicantler> I can't find anything on it..
[18:16:54] <magicantler> even with the native mongo driver
[18:17:08] <StephenLynx> what about regular mongo documentation?
[18:17:52] <magicantler> i'll check. if it's there, I'd need to have node fork a bash call right?
[18:18:02] <magicantler> since then it wouldn't be supported by native driver
[18:19:44] <StephenLynx> I told you twice about using fs module.
[18:20:11] <StephenLynx> why would you run a bash command if you have a module that does that in the standard runtime environment?
[18:20:41] <Doyle> Hey. Thought. The use of multiple mongos (routers) in the connection URI is a bad idea. Yes? Cursor not found issues likely?
[18:25:28] <magicantler> StephenLynx: because I need to call it from within node.js
[18:25:53] <StephenLynx> and where
[18:25:53] <magicantler> StephenLynx: If it's in the mongoshell, but not in the native node.js mongo driver, then wouldn't i need to call the shell through a forked process?
[18:25:54] <StephenLynx> do you think fs is?
[18:26:15] <magicantler> fs is a node module..
[18:26:18] <StephenLynx> exactly
[18:26:36] <magicantler> right, but you said look for commands
[18:26:44] <StephenLynx> fs is for reading the actual file
[18:26:48] <magicantler> so i'm looking at alternatives to fs on the config file
[18:27:06] <magicantler> Doyle: I thought you could have multiple mongos servers
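For what it's worth, a mongos does expose chunk-to-shard mappings through the config database, so reading files off disk shouldn't be necessary; a hypothetical sketch with the native node driver (the connection string, the mydb.users namespace, and the driver API style are all assumptions):

```javascript
const {MongoClient} = require('mongodb');

// Connect to a mongos; chunk metadata lives in the "config" database
MongoClient.connect('mongodb://mongos-host:27017', function (err, client) {
    if (err) throw err;
    client.db('config').collection('chunks')
        .find({ns: 'mydb.users'})            // one document per chunk: {min, max, shard, ...}
        .toArray(function (err, chunks) {
            if (err) throw err;
            chunks.forEach(function (c) {
                console.log(c.min, '->', c.max, 'lives on', c.shard);
            });
            client.close();
        });
});
```

From the shell, sh.status() prints the same chunk-to-shard mapping.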
[19:06:20] <SaintMoriarty> hi
[20:05:08] <Waheedi> sharding works similarly to RAID 5 then
[20:05:28] <Waheedi> logically speaking
[20:08:31] <morf> O_O
[20:10:14] <Waheedi> alright :)
[20:10:43] <Waheedi> maybe raid 0 then :P
[21:00:37] <MacWinner> so i have a simple 3-node replicaset on my own dedicated servers.. about 200GB of data in them. I'm planning for the future of growth and I will need to shard at some point. now I'm trying to figure out whether I want to do this myself, or if there is a good hosted alternative. I feel like a lot of the hosted alternatives are pricey based on a per GB model. Any pointers or recommendations here?
[21:01:46] <cheeser> i'd recommend using ec2 and CloudManager
[21:01:57] <cheeser> pardon me. i need to go cash my commission check.
[21:08:39] <MacWinner> got it
[21:11:15] <MacWinner> tangential question.. when you do mongodump, is it only for data on that specific node? or does mongodump attempt to back up all data in your cluster?
[21:14:38] <cheeser> MacWinner: depends on what you connect to: https://docs.mongodb.org/manual/reference/program/mongodump/#behavior
[21:17:08] <MacWinner> ahh, thanks
[21:18:25] <MacWinner> wtf.. i thought MMS was free for my number of nodes.. i just see now that my free trial is expiring in 113 days
[21:18:38] <MacWinner> must have missed the fine print
[21:19:11] <cheeser> that changed recently, i think.
[21:19:17] <cheeser> fsvo, recent
[22:02:18] <MacWinner> cheeser, after we upgrade to wiredtiger, what's the best way to check the compression ratios? should I do show dbs before?
[22:02:20] <MacWinner> and after
[22:02:33] <cheeser> that i don't know.
[22:03:14] <MacWinner> 'cause theoretically the data is the same size.. it's just stored smaller.. will be interesting
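One way to eyeball the ratio MacWinner is after (a sketch: under WiredTiger, db.stats() reports dataSize as the logical, uncompressed size and storageSize as the size on disk):

```javascript
var s = db.stats()
// logical bytes vs. bytes on disk — the quotient approximates the compression ratio
print(s.dataSize / s.storageSize)
```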
[23:00:17] <phretor> why in the world would PyMongo complain about this http://api.mongodb.org/python/current/faq.html#using-pymongo-with-multiprocessing if I only use MongoClient(connect=False) in my code?