#mongodb logs for Monday the 13th of January, 2014

[02:22:49] <tubuliferous> Hey folks, I'm new to databases, and I'm sure of the best database software for my needs. Would you mind offering some advice?
[02:22:55] <tubuliferous> *I'm not sure
[10:17:18] <Veejay> Hi, I'm trying to restore data using mongorestore and I am getting the following message: assertion: 13053 help failed: { errmsg: "no such cmd: isdbgrid", bad cmd: { isdbgrid: "1", help: 1 }, ok: 0.0 }
[10:18:11] <Veejay> Apparently this error message is returned by mongod because it is only available to mongos
[10:19:03] <Veejay> Since I am not issuing this isdbgrid command myself, I was wondering if it is part of the mongorestore dance and if so, is there anything I can do to restore the data?
[10:22:44] <Veejay> Ah got it, I was specifying the whole replica set in the --host arg
[10:22:48] <Veejay> Which is not the way to go
[11:20:13] <traplin> how would I use a json data variable as the query in findAndModify?
[11:23:59] <traplin> or any variable as the query?
[11:37:46] <traplin> could anyone help me please?
[12:01:10] <Nodex> traplin : can you elaborate?
[12:04:47] <traplin> Nodex: i will paste bin my code
[12:05:38] <traplin> Nodex: http://pastebin.com/Ux59wLMb
[12:06:08] <traplin> so basically i have data being sent from the client in JSON format. now i want to use one of the variables of that data as the query for the findAndModify
[12:06:26] <traplin> if i use say "61743322", it works fine, and updates that record. but with the data.user_id, it won't update at all
[12:20:58] <Nodex> does data.user_id exist?
[13:38:04] <traplin> nodex: yes
[13:38:49] <traplin> i can log it and it prints out the right value
[13:57:21] <Nodex> is it an ObjectId ?
[13:57:34] <Nodex> or a string and expecting an int?
[13:57:43] <Nodex> or vv
[13:58:34] <Nodex> http://dev.mensfeld.pl/2014/01/using-multiple-mongodb-databases-instead-of-one-performance-check/
[13:58:41] <traplin> is a string of numbers
[13:58:43] <Nodex> oh dear.....
[13:58:45] <traplin> it's*
[13:59:00] <Nodex> what error is raised?
[13:59:14] <traplin> well thats the thing, i can't see any, the console shows nothing
[13:59:58] <Nodex> "data.user_id"
[14:00:11] <Nodex> that's looking for the string "data.user_id"
[14:00:11] <traplin> that logs the string of numbers
[14:00:25] <Nodex> query: {userid: data.user_id},
[14:00:54] <traplin> yeah i had that before
[14:00:58] <traplin> same thing happens
[14:01:43] <Nodex> then there is a problem with your data, if you're expecting an integer and it's casting as a string or vice versa
[14:02:45] <traplin> should i maybe cast "data.user_id" to an int?
[14:03:48] <Nodex> if it's an int in your database / collection
[14:04:26] <traplin> well its a MongoDB, so they are all data objects
[14:05:21] <traplin> weird thing is, it saves "data.song_url" perfectly
[14:05:24] <Nodex> you're missing the point. Data is CAST
[14:05:33] <Nodex> because song_url is probably a STRING
[14:05:48] <traplin> okay, i see
[14:05:54] <Nodex> "1234" is not the same as 1234
[14:06:17] <traplin> yeah i see what you mean now, so i should cast user_id
[14:06:19] <Nodex> mongo will treat them different when querying them
[14:06:43] <Nodex> you should cast user_id as whatever it is set to in your collection
[14:07:07] <Nodex> db.foo.findOne() ... if user_id is wrapped in quotes then cast as a string else cast as an integer
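
The point about "1234" versus 1234 in practice: MongoDB compares by type as well as by value, so a string in the query will not match a number stored in the document. An illustrative shell session (collection and values are made up):

```js
// Illustrative only -- collection and values are assumptions.
db.foo.insert({ userid: 1234 });           // stored as a number

db.foo.find({ userid: "1234" }).count();   // 0 -- the string does not match the number
db.foo.find({ userid: 1234 }).count();     // 1 -- the types agree

// Checking how the field is actually stored, as Nodex suggests:
db.foo.findOne();                          // { ..., "userid" : 1234 }  -- no quotes, so a number
```
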
[14:07:51] <traplin> yeah user_id is wrapped in quotes
[14:20:04] <Nodex> then you need to cast it in your javascript
[14:20:37] <Nodex> perhaps toString() or something?
[14:20:54] <traplin> tried toString() now, still the same problem
[14:21:31] <Nodex> console.log(typeof data.user_id;
[14:21:33] <Nodex> console.log(typeof data.user_id);
[14:22:20] <traplin> doing that ow
[14:22:24] <traplin> now*
[14:22:40] <traplin> ok its a string
[14:23:33] <traplin> song_url is also string
[14:24:15] <Nodex> can you pastebin a typical document (minus sensitive information)
[14:24:28] <traplin> sure, one second
[14:25:26] <traplin> http://pastebin.com/EXLhjt1L
[14:41:04] <Nodex> userid: "617301662"
[14:41:09] <Nodex> can you see what's wrong?
[14:41:39] <Nodex> "user_id" != "userid"
[14:41:52] <cheeser> seems a bit nit picky
[14:42:36] <Nodex> lol, when you're trying to update a field called "userid" and you send "user_id" in the query ....
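
So the fix is simply to make the query key match the field name stored in the document. A minimal sketch in shell syntax (the collection name and the update body are illustrative, not taken from the pastebin):

```js
// Illustrative sketch -- collection name and update body are assumptions.
var query = { userid: data.user_id };      // field name must match the stored document exactly
console.log(query, typeof data.user_id);   // sanity-check the value and its type first

db.users.findAndModify({
  query:  query,
  update: { $set: { song_url: data.song_url } },
  new:    true
});
```
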
[14:45:23] <leifw> Computers are nothing if not nitpicky
[14:45:45] <Nodex> and so they should be
[14:46:05] <Nodex> not sure I would want a computer that presumed to know what I wanted or meant
[14:54:01] <traplin> Nodex: i dont understand
[14:54:09] <traplin> i am querying userid with the value of data.user_id
[14:59:03] <Nodex> you're going to have to pastebin the actual query that's called
[14:59:17] <tiller> Hi there
[14:59:36] <Nodex> var q={userid: data.user_id}; console.log(q);
[15:00:17] <traplin> http://pastebin.com/JrBAK1Kb
[15:00:23] <traplin> thats the query and when it is called
[15:00:39] <hipsterslapfight> so i can use $pull to remove a given field from an array of an object ... how would i go about removing a field of an object if it matches something
[15:00:42] <Nodex> the actual data I need
[15:00:49] <hipsterslapfight> basically like $pull for fields not arrays?
[15:01:20] <Nodex> hipsterslapfight : make the query look for what you want
[15:01:25] <tiller> I have what is, I think, a tricky question: Is there a way, during an update/upsert to merge some "subdocuments"? For example, I have: db.users.insert({name: "Tiller", projects: [{id: proj1, roles: [role1]}]});
[15:01:46] <Nodex> for example. db.foo.update({foo:'bar'},{$unset:{foo:'bar'}});
[15:01:48] <hipsterslapfight> oh bloody hell Nodex .. i was just typing up that i don't know what i want but of course i do i have the field right here :/
[15:01:52] <hipsterslapfight> thanks :v
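
For reference, the value handed to $unset is ignored; what matters is the query choosing which documents lose the field. A small sketch along the lines of Nodex's example (field and collection names are made up):

```js
// Remove the field only from documents where it currently holds a particular value.
db.items.update(
  { legacy_flag: "obsolete" },        // match condition
  { $unset: { legacy_flag: "" } },    // drop the field; the "" value is ignored
  { multi: true }                     // apply to every matching document, not just the first
);
```
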
[15:02:14] <Nodex> tiller : no
[15:02:17] <tiller> and I want to update/upsert doing: {$set: {OAuthId: <id>}, $addToSet: {projects: {id: proj1, roles: [role2]}}
[15:02:36] <Nodex> there is a $setOnInsert flag you can use perhaps
[15:02:53] <tiller> Nah, I can't use $setOnInsert
[15:03:10] <tiller> ok, then I think I'll just split my query. Thanks :)
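
A sketch of the split-query approach tiller settled on, using the names from his example (oauthId is a placeholder, and in older shells the matched count comes from db.getLastErrorObj().n rather than the WriteResult):

```js
// 1. If the project subdocument already exists, add the role to it in place:
var res = db.users.update(
  { name: "Tiller", "projects.id": "proj1" },
  { $set: { OAuthId: oauthId }, $addToSet: { "projects.$.roles": "role2" } }
);

// 2. Otherwise add the whole project, upserting the user if it does not exist yet:
if (res.nMatched === 0) {
  db.users.update(
    { name: "Tiller" },
    { $set: { OAuthId: oauthId }, $addToSet: { projects: { id: "proj1", roles: ["role2"] } } },
    { upsert: true }
  );
}
```
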
[15:12:12] <theblang> Anyone use mongoDB with Spring?
[15:30:49] <ram_> hi all :D how i can create new object in flask/mongoDB (line 7 : https://gist.github.com/anonymous/8402206) i get this err : AttributeError: 'NoneType' object has no attribute 'amount'
[15:38:39] <ekristen> as I understand it I can use mongos without setting up sharding initially so that I can do that in the future
[15:40:46] <Nodex> is that a question or a statement
[15:40:49] <Nodex> +?
[15:41:10] <ekristen> Nodex: yes question, I left off the ?
[15:42:04] <ekristen> so I’m doing testing right now, I have a single mongodb instance with replSet turned on, plus a config server on the same box, on a secondary box, I’ve created a mongos instance and connected to it via mongo
[15:42:21] <ekristen> connections seemed to work and show dbs shows admin, config
[15:42:28] <ekristen> but when I try to create a new collection it fails
[15:43:20] <Nodex> with what error does it fail?
[15:44:36] <ekristen> “error creating initial database config information :: caused by :: can't find a shard to put new db on”
[15:55:14] <ekristen> Nodex: ?
[15:56:21] <Nodex> where are you executing that command?
[15:56:34] <Nodex> and you will have to setup at least one shard to take the initial data iirc
[15:56:39] <ekristen> mongo connected to mongos
[16:15:18] <ekristen> Nodex: that did it for me, I just added a testing database and added a shard there
[16:15:20] <ekristen> thanks
[16:15:35] <Nodex> ;)
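
For anyone hitting the same error: it means the cluster metadata has no shards registered yet, so there is nowhere to place a new database. A sketch of the step that fixed it, run in a mongo shell connected to the mongos (host and replica set names are hypothetical):

```js
sh.addShard("rs0/db1.example.com:27017");   // register the replica set as the first shard
sh.status();                                // confirm it is listed

// New databases can be created now; until collections are explicitly sharded,
// each database simply lives on its primary shard.
db.getSiblingDB("testdb").things.insert({ hello: "world" });
```
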
[16:52:31] <stymo> Hi, we're considering Mongo but the lack of decimal support seems problematic. I thought I'd ask here if anyone can talk about how they've handled that.
[16:55:27] <Nodex> lack of decimal support?
[16:58:04] <stymo> Yes, meaning decimal instead of floating point
[16:58:35] <stymo> We work with sensor data that sometimes needs a very high degree of precision
[17:00:37] <luca3m> stymo: you can use integers
[17:01:14] <stymo> So we've considered this, and then using some multiplier in the application to convert back to decimal...
[17:01:17] <luca3m> and *10^precision or /10^precision on application code
[17:01:25] <stymo> Exactly
[17:01:31] <luca3m> I used this way to store money data
[17:02:25] <stymo> The integers would have to be very large, and I'm wondering if it's possible we may end up losing precision because we might get to a value too large to represent
[17:02:41] <stymo> Say if we were calculating an average over a year
[17:04:34] <stymo> Has anything like this been an issue for you?
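
The scaled-integer scheme luca3m describes looks roughly like this (a sketch; the collection, field names, and scale factor are all assumptions):

```js
var SCALE = 1e6;                                           // 6 decimal places of precision

db.readings.insert({
  sensor: "s-42",
  ts: new Date(),
  value_micro: NumberLong(Math.round(3.141592 * SCALE))    // stored as the integer 3141592
});

// Application code divides by SCALE when presenting the value. For aggregates such as a
// yearly average, accumulate the sum in a 64-bit (or arbitrary-precision) type on the
// application side; a signed 64-bit sum only overflows past roughly 9.2e18 scaled units.
```
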
[18:16:44] <slammmm> var databaseUrl = "mydb"; // "username:password@example.com/mydb"
[18:16:44] <slammmm> var followers = ["id", "following"];
[18:16:45] <slammmm> var server =["user_id"];
[18:16:45] <slammmm> var db = require("mongojs").connect(databaseUrl, server,followers);
[18:16:51] <slammmm> console.log("jayesh"+db.id.find(1)+"\n");
[18:17:01] <slammmm> whats the error in last statement
[18:17:04] <slammmm> ?
[18:17:38] <slammmm> followers and server are two collections in database here
[18:20:36] <cheeser> um. what?
[18:20:45] <cheeser> also, please use a pastebin
[18:26:57] <slammmm> cheeser: http://pastebin.com/aDykeqey
[18:32:31] <slammmm> anyone der ?
[18:36:46] <cheeser> it would help to know what the actual problem is.
[18:38:20] <slammmm> the thing is I just want to check if myid is present in server collection or not
[18:38:59] <slammmm> and use if else statement accordingly but not been able to figure out correct syntax
[18:39:45] <cheeser> ah. i'm not really a js guy so i'm not sure.
[18:40:05] <cheeser> you might start by putting the results of that findOne in a var and printing that out.
[18:40:33] <slammmm> tried that but returning something undefined
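
The undefined result is expected: mongojs calls are asynchronous, so the document only arrives inside a callback rather than as a return value. A sketch of the membership check (collection and field names are guesses based on the description, not the pastebin; myid is assumed to be defined elsewhere):

```js
// Illustrative sketch -- collection and field names are assumptions.
var databaseUrl = "mydb";   // "username:password@example.com/mydb"
var db = require("mongojs").connect(databaseUrl, ["server", "followers"]);

db.server.findOne({ user_id: myid }, function (err, doc) {
  if (err) return console.error(err);
  if (doc) {
    console.log("myid is present in the server collection");
  } else {
    console.log("myid is not present");
  }
});
```
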
[19:13:01] <Pinkamena_D> In python, is something like this the best way to get the most recent document?
[19:13:03] <Pinkamena_D> self.printers.find_one({'printer':printer}).sort([('date','-1')])
[19:14:31] <cheeser> yeah, provided you have an index on date
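
As written, that call would actually fail in PyMongo: find_one() returns a plain dict (or None), so there is no .sort() to chain, and the sort direction should be the integer -1 rather than the string '-1'. A sketch of two working variants (connection details and names are illustrative; an index on date helps either way):

```python
# Illustrative sketch; collection and field names follow the question above.
import pymongo

client = pymongo.MongoClient()
printers = client["mydb"]["printers"]
printer = "lab-1"   # example value

# Option 1: pass the sort straight to find_one()
doc = printers.find_one({"printer": printer}, sort=[("date", pymongo.DESCENDING)])

# Option 2: sort a cursor and take the first result
cursor = printers.find({"printer": printer}).sort("date", pymongo.DESCENDING).limit(1)
doc = next(cursor, None)
```
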
[19:29:14] <Pinkamena_D> any way to match only one of each element with value?
[19:30:09] <Pinkamena_D> for example if I have {type:'x', date:something}, {type:'x', date:something}, {type:'y', date:something}
[19:30:25] <Pinkamena_D> I only want one of each type, one x, one y, etc
[19:31:33] <Joeskyyy> Errr… In a query? Like a find?
[19:31:51] <cheeser> http://docs.mongodb.org/manual/reference/method/db.collection.distinct/
[19:31:55] <Nodex> Pinkamena_D : they're sub / nested docs so you need a projection
[19:32:06] <Nodex> unless they're not then you need a distinct ^^
[19:32:29] <Pinkamena_D> ok, I will check out distinct...was that even in M101? maybe I missed it.
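
A sketch of both options mentioned above (collection and field names are illustrative): distinct() returns just the unique values, while an aggregation $group can return one representative per type.

```js
// Unique values of "type":
db.readings.distinct("type");                       // e.g. [ "x", "y" ]

// One representative per type -- sort first so $first picks the newest of each:
db.readings.aggregate([
  { $sort: { date: -1 } },
  { $group: { _id: "$type", date: { $first: "$date" } } }
]);
```
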
[19:44:02] <slammmm> anyone from node.js basic knowledge ?
[20:12:43] <jergason> slammmm: what do you need?
[20:14:11] <slammmm> jergason:http://pastebin.com/nXqfTv02
[20:14:43] <slammmm> getting error saying "TypeError: Object has no method 'count"
[20:15:24] <slammmm> but I guess count method should work on it
[20:19:30] <slammmm> anyone der ?
[20:20:31] <jergason> you are getting an array back i assume
[20:20:37] <jergason> arrays don't have a .count() method
[20:20:42] <jergason> they have a .length property though
[20:20:45] <jergason> check .length
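
In other words, with a callback the result comes back as a plain JavaScript array, which has .length but no .count(). A sketch (collection and query are illustrative, following the earlier mongojs setup):

```js
db.server.find({ user_id: myid }, function (err, docs) {
  if (err) return console.error(err);
  // docs is a plain array here:
  // docs.count()  would throw "TypeError: Object ... has no method 'count'"
  console.log(docs.length);          // number of matching documents
  if (docs.length > 0) {
    console.log("id exists");
  }
});
```
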
[21:20:28] <ghost1> Are any of the core team members here?
[21:25:14] <Derick> ghost1: I write code for MongoDB's PHP driver... all I can offer :-)
[21:25:44] <ghost1> Derick: Hi, thanks, it's a bit off topic but I was hoping I could get help. I was curious as to how the team manages system and user documentation? What tool do you guys use? I am working on a project that is turning into a huge project, and I was hoping to get pointers from a group that is already managing a large and complex project.
[21:27:20] <Derick> ghost1: hmm, for the PHP driver we use PHP's system. docbook mostly. I think at MongoDB we use something in github.
[21:27:23] <Derick> let me think
[21:27:44] <Derick> Looks like we use Sphinx: https://github.com/mongodb/docs
[21:29:15] <ghost1> Perfect, I'm going to take a look at that... is that using something like markdown?
[21:29:40] <leifw> I think it's asciidoc or restructured text
[21:30:13] <leifw> So yes "something like markdown"
[21:31:14] <ghost1> Cool!
[21:32:44] <ghost1> Again thanks to everyone! This will help!
[21:32:59] <ghost1> by the way do you guys write up specs or requirements internally?
[21:36:37] <Derick> ghost1: sure we do
[21:37:04] <Derick> we use restructured text for that
[21:37:11] <Derick> and python's docutils
[21:38:08] <ghost1> never heard of that, will have to look into that.
[21:43:16] <Derick> ghost1: it's what I used for my blog as well
[21:43:42] <ghost1> Derick: do you have any examples that I could use?
[21:44:02] <Derick> ghost1: *use* - see https://github.com/derickr/derickrethans-articles
[21:44:12] <Derick> its all hacky though... that code is not open :-)
[21:45:14] <ghost1> Does github support restructured text? it seems to render the file instead of showing the code
[21:46:07] <ghost1> I know that github renders markdown
[21:47:26] <Derick> yes, it does both
[21:47:59] <ghost1> thats very cool
[22:07:56] <cortexman> what kind of index should I create for arbitrary queries against {a:,b:,c:}
[22:09:21] <retran> you shouldn't
[22:09:49] <retran> you should create indexes for known queries
[22:10:12] <cortexman> find all b where a is blah
[22:10:17] <retran> if you try to plan indexes for ad-hoc's you're probably being wasteful
[22:10:17] <cortexman> find all c where b is blerg
[22:10:30] <retran> oh
[22:10:32] <cortexman> find all a where c is blerg and b is blah
[22:10:45] <retran> index(b,c)
[22:11:05] <retran> multi-column index
[22:11:15] <cortexman> i ran all possible combinations of: db.lexicon.ensureIndex({"lang":1, "word":1, "pron":1})
[22:11:37] <cortexman> was that correct?
[22:11:50] <retran> put an example data dump and your example find in a pastie
[22:12:19] <cortexman> https://gist.github.com/brianmingus/572374048dd46d2aa99a/raw/278670f87ae79ae9637b8e2531a05094b572b6fa/gistfile1.txt
[22:12:45] <retran> for that all you need is db.lexicon.ensureIndex({"lang":1});
[22:12:57] <retran> if that's your find
[22:12:59] <cortexman> i ran that
[22:13:05] <cortexman> i ran all possible combinations..
[22:13:14] <retran> all possible combinations of what
[22:13:23] <retran> you have to create your indexes for a particular find
[22:13:28] <yho> hiya. i'm using the node.js api. if i have two GridStore objects with the same filename (but different id's) stored at a different time, how would i get the newest one using the constructor? it seems to return the oldest one by default.
[22:13:44] <retran> tell me what finds you want, to see how to index
[22:13:52] <cortexman> db.lexicon.ensureIndex({"lang":1}), db.lexicon.ensureIndex({"word":1}), db.lexicon.ensureIndex({"pron":1}), db.lexicon.ensureIndex({"lang":1, "word":1}), etc...
[22:14:01] <retran> why are you doing those indexes
[22:14:25] <retran> that's not needed if you're only doing find(lang:"blabla")
[22:14:50] <cortexman> as i said, i want to support arbitrary queries.
[22:14:58] <retran> that's stupid
[22:15:16] <cortexman> ...
[22:15:21] <cortexman> how would you know
[22:15:25] <retran> because i know it's stupid
[22:15:28] <cortexman> what are you, a mind reading super genius?
[22:15:33] <cortexman> "your users will never want to do that"
[22:15:34] <cortexman> i am my user.
[22:15:36] <retran> i'm a person who can tell you that it's stupid
[22:15:45] <retran> i'm telling you that it's a dumb idea
[22:15:47] <cortexman> i think that's retarded
[22:15:51] <retran> me too
[22:15:53] <cortexman> i'm telling you that your idea is a dumb idea
[22:15:56] <cortexman> QED
[22:15:59] <retran> i dont have an idea
[22:16:02] <retran> i'm challenging yours
[22:16:12] <cortexman> no, you're being a child. http://lesswrong.com/lw/85h/better_disagreement/
[22:16:25] <kkspy_> Hello. Do you have any idea how to get a unique int64 from mongoid?
[22:16:29] <kkspy_> I need it for Sphinx.
[22:16:39] <retran> you're just saying your users "need it"
[22:17:09] <retran> in a db, you dont make indexes on every possible combination, generally
[22:17:17] <retran> not to say you couldn't
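
The prefix rule for compound indexes is the trade-off behind this exchange: one compound index can serve any query on a leading prefix of its keys, so supporting ad-hoc queries does not require an index for every permutation. A sketch using the fields from the gist:

```js
// One compound index serves queries on its leading prefixes:
db.lexicon.ensureIndex({ lang: 1, word: 1, pron: 1 });

db.lexicon.find({ lang: "en" });                  // can use the index (prefix: lang)
db.lexicon.find({ lang: "en", word: "blah" });    // can use the index (prefix: lang, word)
db.lexicon.find({ word: "blah" });                // not a prefix -- would need another index

// explain() shows which index, if any, a query actually used:
db.lexicon.find({ word: "blah" }).explain();
```
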
[22:40:58] <Guest-Pirc> hi team
[22:41:39] <Cort> question about recursive queries
[22:41:57] <Cort> i see the docs on the net about best practices today
[22:42:02] <Cort> none of which are awesome
[22:42:23] <Cort> anybody have any visibility into mongodb inc. plans to improve i.e. offer a recursive option?
[22:42:26] <Cort> OR
[22:42:57] <Cort> does anybody have any idea how to map/reduce for recursion? i.e. extract trees from a collection of docs that can each point to a parent doc?
[22:44:21] <cheeser> "recursive query" implies joins which mongo doesn't do
[22:44:43] <Cort> i am saying it wrong. i mean extract trees of documents that refer to each other
[22:44:55] <Cort> which is discussed in the mongo docs
[22:45:10] <Cort> in fact i don't need to extract the tree, only count the number of docs in it
[22:45:59] <Cort> in no way am i confused that mongo is an RDBMS w/ joins
[22:46:03] <Cort> i am referring to: http://docs.mongodb.org/manual/tutorial/model-tree-structures/
[22:46:13] <Cort> and finding it insufficient
[22:46:17] <Cort> for a sufficiently large tree
[22:46:28] <Cort> w/ sufficiently high rate of change
[22:48:13] <Joeskyyy> I think that relies a bit on exactly how your tree structure is set up.
[22:48:28] <Cort> yep
[22:48:41] <Cort> can implement in any way that will work for counting the number of docs in the tree
[22:48:49] <Cort> currently have "point to the parent" implemented
[22:49:10] <Cort> a la http://docs.mongodb.org/manual/tutorial/model-tree-structures-with-parent-references/
[22:49:46] <Joeskyyy> If you're doing a lot of updates an array of ancestors may make sense, depending on what you're doing. Since you can rewrite an entire path in one atomic movement.
[22:50:08] <Joeskyyy> But again, that all depends on what you're doing exactly.
[22:50:29] <Cort> yes array of ancestors and nested sets both help for extraction
[22:50:40] <Cort> but the impact on moving a node is not good
[22:50:51] <Cort> i have multiple actors moving docs in a tree
[22:51:17] <Cort> so assume path x>y>z and x>a>b>c
[22:51:26] <Joeskyyy> True. Although if you kept track of where you're moving it to, the array of ancestors would still provide the best movements in regards to atomicity
[22:52:04] <Cort> well i have to update the ancestors of all the children of the doc being moved
[22:52:11] <Cort> say i have thousands of children
[22:52:21] <Cort> it's not great
[22:52:39] <Cort> especially when i have two actors moving two nodes in the same lineage
[22:52:47] <Cort> at the same time
[22:54:13] <Cort> similarly with nested sets, large tree + many concurrent updates is a problem
[22:54:14] <Joeskyyy> Yeah not really sure how to make it faster for you though. With that many operations, you're going to run into something like that at some point.
[22:54:39] <Cort> yes which is why it's an operation that belongs in mongo
[22:54:41] <Cort> not the client
[22:54:52] <Cort> which is why i am asking for insight into future mongo roadmap
[22:55:06] <Joeskyyy> *shrugs* Don't work for mongo sorry :\
[22:55:11] <Cort> np
[22:55:15] <Cort> just hunting around
[22:55:23] <Cort> will ask them directly
[22:55:31] <Cort> thanks for attacking the problem w/ me
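
To make the trade-off concrete: with an ancestors array, counting a subtree is a single query, while parent references force a level-by-level walk; the cost of the ancestors model is the multi-document rewrite whenever a node moves, which is exactly Cort's concern. A sketch of the counting side (collection and field names are illustrative):

```js
// Ancestors-array model: one query counts everything under a node.
db.nodes.count({ ancestors: rootId });

// Parent-reference model: walk the tree breadth-first.
function countDescendants(rootId) {
  var total = 0;
  var frontier = [rootId];
  while (frontier.length > 0) {
    var children = db.nodes.find({ parent: { $in: frontier } }, { _id: 1 }).toArray();
    total += children.length;
    frontier = children.map(function (c) { return c._id; });
  }
  return total;
}
```
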
[22:59:39] <unholycrab> is it okay to take and use a live snapshot of a drive containing the mongodb data? like an AWS EBS volume
[22:59:59] <unholycrab> ive been doing that to clone mongodb instances, but it goes into journal recovery when i start the new instance using the snapshot
[23:01:06] <Joeskyyy> I'd do a mongodump, if you're aiming to get a good chunk of data over
[23:02:30] <unholycrab> im aiming to get enough so that i can add it to the replica set and sync up
[23:03:49] <Joeskyyy> Right, a mongodump will achieve that as well
[23:03:50] <Joeskyyy> http://docs.mongodb.org/manual/reference/program/mongodump/#bin.mongodump
[23:05:21] <Joeskyyy> That way you can get a big chunk of the data over, then all you need to do is sync the diff from when you took the mongodump
[23:06:45] <unholycrab> a low level drive snapshot is way easier, as long as im not risking corrupting data
[23:07:08] <unholycrab> so my question is whether or not that is safe
[23:07:34] <Joeskyyy> I'm not familiar with how EBS takes a snapshot at the infrastructure level. So I don't know tbh.
[23:07:49] <Joeskyyy> Mongodump takes a clean backup of your data and puts it into a file.
[23:22:28] <unholycrab> its a block-level snapshot of the filesystem
[23:23:47] <unholycrab> er, the drive containing the filesystem
[23:25:12] <Joeskyyy> What I mean is, what if you take the snapshot while a write is happening, and hasn't been written to the replset, or something of that nature.
[23:25:20] <Joeskyyy> a mongodump will ensure it's a clean backup.
[23:25:33] <leifw> Cort: maybe multi-document transactional semantics in tokumx could help?
[23:25:34] <Joeskyyy> A snapshot all depends on chance when you pull the trigger.
[23:25:42] <unholycrab> got it, thanks Joeskyyy http://docs.mongodb.org/manual/tutorial/backup-sharded-cluster-with-filesystem-snapshots/
[23:26:26] <Joeskyyy> That's for shards, just an fyi.
[23:26:33] <unholycrab> yeah i am safe in that case because the member i am using for backups does not perform writes
[23:26:33] <Joeskyyy> Not sure if you're sharding as well.
[23:26:47] <Joeskyyy> Ah, then yeah. 100% fine.
[23:27:13] <Joeskyyy> Always ensure it's not the primary though, what with elections and what not ;)
[23:27:32] <Joeskyyy> Might even wanna make it so it can't be elected a primary if that's your use case for that mongod.
[23:27:33] <unholycrab> http://docs.mongodb.org/manual/tutorial/back-up-databases-with-filesystem-snapshots/
[23:28:51] <Joeskyyy> I'd still personally go with mongodump, but that's just me being paranoid :D
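
One way to make a block-level snapshot of the backup member safer, along the lines of the filesystem-snapshot tutorial linked above, is to flush and lock writes for the duration of the snapshot (a sketch; run on the backup secondary, never the primary):

```js
db.fsyncLock();     // flush pending writes to disk and block new writes

// ... take the EBS snapshot of the data volume here ...

db.fsyncUnlock();   // release the lock and resume normal operation
```
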