#mongodb logs for Wednesday the 23rd of October, 2013

[00:09:03] <LoneSoldier728> is there a way to say you want no duplicates for a combination of fields
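A compound unique index is the usual answer to the question above; a minimal mongo-shell sketch, with the userId/email field names purely hypothetical:

    // enforce uniqueness over the combination of fields, not each field on its own
    db.users.ensureIndex({ userId: 1, email: 1 }, { unique: true });
    // inserting a second document with the same (userId, email) pair now fails with a duplicate-key error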
[00:19:03] <Guno> how can i see if the find() found no objects?
[00:24:14] <vl4kn0> Hi, is it possible to match all the elements that match an array with $all operator, but where order of the elements also matters
[00:24:42] <vl4kn0> that means, [1, 2, 3] will match [0, 1, 2, 3, 4], but won't match [0, 2, 1, 3, 4]
[00:33:22] <Guno> how to detect if nothing is found with the .find() method?
[00:36:43] <Guno> this is useless
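For Guno's question: the shell's usual checks are cursor.count() or a null findOne(); a small sketch, assuming a hypothetical "items" collection:

    if (db.items.find({ status: "open" }).count() === 0) {
        print("nothing found");
    }
    // or, when only one document is needed:
    // db.items.findOne({ status: "open" }) === null   // true when there is no match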
[03:08:32] <das3in> hi! i'm just getting started with mongodb and am having trouble deciding when to use methods vs statics (http://mongoosejs.com/docs/2.7.x/docs/methods-statics.html) and what the 'cb' argument is
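Roughly: methods hang off documents, statics off the model, and cb is just the standard Node error-first callback; a sketch under those assumptions, following the Mongoose docs pattern with a hypothetical Animal schema:

    var mongoose = require('mongoose');
    var animalSchema = new mongoose.Schema({ name: String, type: String });

    // instance method: called on a document, e.g. dog.findSimilar(cb)
    animalSchema.methods.findSimilar = function (cb) {
        this.model('Animal').find({ type: this.type }, cb);   // cb(err, animals)
    };

    // static: called on the model itself, e.g. Animal.findByName('fido', cb)
    animalSchema.statics.findByName = function (name, cb) {
        this.find({ name: name }, cb);
    };

    var Animal = mongoose.model('Animal', animalSchema);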
[07:20:25] <liquid-silence> is there a way to update and select the entity?
[07:20:33] <liquid-silence> db.collection("assets").update({ _id: id }, { name: name }, function(err, result) {
[07:20:43] <liquid-silence> result will equal the number of rows affected
[07:21:02] <liquid-silence> so I dont really want to do two db calls
[09:09:30] <acstll> hi there, I got a question regarding handling connections in node.js using the mongojs module:
[09:09:33] <acstll> do I use a singleton instance of a connection for all my application, or should I use a new connection for every other module of my app that needs to perform queries?
[09:13:13] <Nodex> ideally you connect once when your app starts
[09:14:25] <acstll> Nodex: thank you
[09:14:43] <Nodex> no probs
[09:15:05] <acstll> Nodex: so there's no performance advantage in having 6 or 7 connection (one for each other module) than just 1 for all?
[09:15:33] <acstll> Nodex: I just found this: http://stackoverflow.com/questions/10656574/how-to-manage-mongodb-connections-in-a-nodejs-webapp
[09:16:03] <Nodex> acstll, I am sure that a single connection across 7 apps is not as good as 1 per app
[09:16:23] <Nodex> as far as "module" goes - you will have to explain your definition of a module
[09:17:55] <acstll> Nodex: by modules I mean just "parts" of my app that handle different routes, there's just one app (node instance), sorry about that
[09:19:08] <Nodex> I recently re-wrote parts of our infrastructure to node + mongo + solr + redis (a common api for dealing with areas / geo / postcodes / reverse geocoding etc) and I connect once at the start - performance is fine
[09:19:21] <Nodex> 3000 - 4000 req/s with no problems
[09:21:29] <acstll> Nodex: perfect, thanks a lot for your help
[09:21:36] <Nodex> no problemo
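The "connect once" advice usually takes the form of a small module that every route file requires; Node's module cache means the connection is only opened once. A sketch assuming the mongojs module and a hypothetical "mydb" database:

    // db.js -- required by every part of the app; evaluated (and connected) only once
    var mongojs = require('mongojs');
    module.exports = mongojs('mydb', ['assets', 'users']);

    // routes/assets.js
    var db = require('../db');
    db.assets.find({}, function (err, docs) {
        // handle err / docs here
    });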
[09:29:05] <liquid-silence> db.collection("assets").update({ _id: id }, { name: name }, function(err, result) {, can I return the actual updated object instead of the number of affected results
[09:31:43] <Nodex> findAndModify
[09:35:15] <liquid-silence> db.collection("assets").findAndModify({ query: { _id: id }, update: { name: name }, upsert: true}, function(err, result) {
[09:37:19] <liquid-silence> seems to not work
[09:37:52] <Nodex> http://mongodb.github.io/node-mongodb-native/markdown-docs/insert.html#find-and-modify
[09:41:07] <liquid-silence> db.collection("assets").findAndModify({ _id: id }, { $set: {name: name} }, function(err, result) {
[09:41:09] <liquid-silence> that?
[09:41:13] <liquid-silence> also seems to fail
[09:43:37] <liquid-silence> Nodex am I being stupid?
[09:44:07] <Nodex> perhaps it cant find the doc?
[09:44:18] <Nodex> what error does it give?
[09:44:27] <liquid-silence> Uncaught TypeError: object is not a function
[09:44:46] <Nodex> is "id" an ObjectId ?
[09:45:26] <liquid-silence> no
[09:46:07] <Nodex> without seeing the full code in context I can't help sorry
[09:46:40] <liquid-silence> fuck testing tools suck
[09:49:55] <liquid-silence> ok findOne with that _id works
[09:50:02] <liquid-silence> if I do find and modify
[09:50:33] <liquid-silence> it does not find the object
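The likely snag in the exchange above: in the 1.x node driver, findAndModify takes the sort and options as separate positional arguments rather than a single shell-style document, so passing the update second (or omitting the sort) breaks. A sketch of the shape that driver expects, offered as an assumption rather than a verified fix:

    // collection.findAndModify(query, sort, update, options, callback)
    db.collection('assets').findAndModify(
        { _id: id },                 // query
        [['_id', 1]],                // sort, in the driver's array form
        { $set: { name: name } },    // update
        { new: true },               // return the modified document rather than the original
        function (err, doc) {
            // doc is the updated document, or null if nothing matched the query
        }
    );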
[10:43:06] <cym3try> does anyone have any experience with putting haproxy in front of mongodb?
[10:45:10] <bmcgee> Hey guys, I'm experiencing some problems with findAndModify. Description is here: https://gist.github.com/brianmcgee/7116279
[10:55:43] <dnear> Hello guys
[10:56:42] <dnear> I have some issues
[10:57:34] <dnear> I want to authenticate my mongodb, but i can't do it.. I get an error in mongod when i try to run it. The error says "dbpath /data/db/ does not exist" How can i fix this?
[11:00:37] <Nodex> create the /data/db path or change the path in your config?
[11:01:07] <bmcgee> Nodex: any ideas about this: https://gist.github.com/brianmcgee/7116279
[11:03:54] <Nomikos> is there a way to findAndModify while also sorting? I want to find the document that was updated least recently and return it after setting status =
[11:03:59] <Nomikos> *status=
[11:04:12] <Nomikos> ... status='processing'
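The shell's findAndModify takes a sort option, so "grab the least recently updated document, mark it processing, and return it" can be one call; a sketch assuming a hypothetical jobs collection with an updatedAt field:

    db.jobs.findAndModify({
        query:  { status: { $ne: "processing" } },
        sort:   { updatedAt: 1 },                   // least recently updated first
        update: { $set: { status: "processing" } },
        new:    true                                // return the document after the update
    });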
[11:04:18] <Nodex> bmcgee : I think that findAndModify is atomic
[11:05:19] <bmcgee> Nodex: that's what I assumed. Since I'm not deleting the old document, but simply updating it with a separate process potentially at the same time it doesn't account for findAndModify returning null.
[11:07:20] <Nodex> if the document is in a locked state then that might account for it
[11:07:38] <cym3try> When i put mongo behind haproxy, i get timeouts - was wondering if haproxy is officially supported by 10gen
[11:07:41] <Nodex> I am not familiar with the ins and outs of findAndModify
[11:07:59] <bmcgee> Nodex: that would suck if it is the case.
[11:08:41] <Nodex> ideally it would return a read only copy of the document but that's probably not efficient for people who expect the -latest- copy
[11:09:04] <Nodex> in that case it should retry (imo) the document until it's unlocked and return it - this is speculation though
[11:11:17] <bmcgee> yeah I would agree
[11:14:44] <dnear> okay i now have the path selected
[11:14:58] <dnear> But how can i enable authentication
[11:15:08] <dnear> So i have to login first before i can see my data
[11:38:15] <Nodex> dnear : I'm pretty sure it's all in the docs
[11:38:24] <Nodex> iirc there is a part explaining with examples
[11:39:52] <dnear> Nodex
[11:39:57] <dnear> I tried to follow the documents
[11:40:05] <dnear> But it didnt work out for me
[11:40:31] <Nodex> you should raise a jira issue that they're wrong then, seems that everyone else would be in the same boat as you
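For reference, the 2.4-era sequence for enabling auth is roughly: create an admin user, then restart mongod with --auth. A rough sketch, not a substitute for the security tutorial in the docs:

    // in the mongo shell, while auth is still disabled
    use admin
    db.addUser({ user: "admin", pwd: "secret", roles: ["userAdminAnyDatabase"] });

    // then restart the server requiring authentication, e.g.
    //   mongod --auth --dbpath /data/db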
[11:54:36] <scanf> what's the best way to check the localhost machine's mongodb priority? doing like rs.conf or local.system.replset.members[n].priority forces me to choose an _id
[11:56:51] <scanf> or is there a way to reliably get my localhost machine's _id, as a potential workaround :)
[11:59:25] <joannac> rs.status() has self:true for the current node?
[12:09:06] <scanf> joannac: not sure how i can get at that directly from the console without specifying an _id
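One way around hard-coding an _id: rs.status() flags the local node with self: true, and its entry in rs.conf() can then be matched by host name. A shell sketch under that assumption:

    var me  = rs.status().members.filter(function (m) { return m.self; })[0];
    var cfg = rs.conf().members.filter(function (m) { return m.host === me.name; })[0];
    cfg.priority;   // this node's configured priority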
[12:23:17] <Neptu> I think I already know the answer but I want to ask: does anyone know if mongo treats a uuid v1 as an oid or just as a string...
[12:37:29] <Nodex> Neptu : ObjectId is a bson type it has nothing to do with uuid
[12:39:49] <Neptu> Nodex, just wanted to figure out whether a uuid in the _id field will be optimized or if i have to store it as binary
[12:43:13] <cheeser> it would be stored as a String, iirc
[12:43:23] <cheeser> heh. String. showing my java roots.
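If the UUID lives in _id, storing it as BinData rather than a 36-character string keeps the values compact; a shell sketch, with the hex string obviously made up:

    // the shell's UUID() helper wraps a 32-character hex string as BinData
    db.things.insert({ _id: UUID("0123456789abcdef0123456789abcdef"), name: "example" });
    // versus storing "01234567-89ab-cdef-0123-456789abcdef" as a plain string _id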
[12:57:05] <nicktodd> I'm trying to use aggregate to group by country name and then, for each country, group by number of instances per city in that country group. I've tried using map reduce and aggregate and can get close to it but not how it should be. Any Ideas how I should go about it? Thanks.
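One approach to nicktodd's grouping is two $group stages: count per (country, city) first, then regroup by country and push the per-city counts; a sketch with hypothetical country/city field names:

    db.places.aggregate([
        { $group: { _id: { country: "$country", city: "$city" }, count: { $sum: 1 } } },
        { $group: { _id: "$_id.country",
                    cities: { $push: { city: "$_id.city", count: "$count" } } } }
    ]);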
[13:21:02] <d0x> Hi, how to add the "index" of a subdocument inside an array to the subdocument itself?
[13:21:18] <d0x> using the aggregation framework
[13:21:39] <d0x> it's hard to google the word "index" with mongodb :)
[13:22:01] <Rhaven> Hello everybody, does anyone know the impact on data of draining a shard while my service is running? What kind of problems should I expect?
[13:41:02] <saml> Rhaven, if you store data in mongodb, you are agreeing to lose your data
[13:41:18] <saml> unless you're careful
[13:41:21] <cheeser> no he's not
[13:41:35] <Derick> saml: uh?
[13:44:00] <remonvv> Not this again..
[13:45:48] <Rhaven> saml: ty for reply, but that doesn't help me
[13:45:59] <saml> i wanted to scare you
[13:46:11] <saml> what's draining a shard?
[13:46:20] <remonvv> A well informed source then.
[13:46:33] <saml> you want to remove a shard?
[13:46:38] <Rhaven> yes
[13:46:39] <remonvv> Rhaven: Performance will be degraded, potentially significantly.
[13:46:42] <saml> http://docs.mongodb.org/manual/tutorial/remove-shards-from-cluster/
[13:46:55] <saml> To remove a shard you must ensure the shard’s data is migrated to the remaining shards in the cluster.
[13:47:00] <saml> make sure that. otherwise you lose data
[13:47:00] <Rhaven> i know this page
[13:47:29] <saml> i haven't used replica set. so from now on, i'll be misleading you. don't listen to me.
[13:47:31] <remonvv> If there's high write contention on that collection (or specifically the chunk being migrated) it will affect performance. In our experience performance is significantly degraded during a drain but I think that's not supposed to be the case.
[13:48:59] <saml> why remove a shard? unless you have a really huge database (terabytes) it could be easier to back up, start a new cluster from the backup and do a delta import from the old cluster to the new cluster.
[13:49:30] <remonvv> And lose all the writes between backup time and recovery time?
[13:49:38] <Rhaven> ^^
[13:49:39] <saml> that's delta import
[13:49:53] <remonvv> That's way more fragile than draining a shard. And it's pretty common for us.
[13:50:03] <saml> ah i agree
[13:50:07] <remonvv> Actually, on that note, https://jira.mongodb.org/browse/SERVER-11328
[13:50:10] <saml> no wonder i lost production data :P
[13:50:34] <remonvv> Well don't let that stop you from giving people advice on data security kind sir ;)
[13:52:31] <Rhaven> for me the draining is still running fine but it's taking too much time..
[13:52:58] <saml> how big was your shard being drained?
[13:53:40] <saml> let's make a web app that calculates time it'll take to finish mongoimport or draining a shard given size and index
[13:53:46] <Rhaven> "remaining" : {
[13:53:46] <Rhaven> "chunks" : NumberLong(673),
[13:53:46] <Rhaven> "dbs" : NumberLong(2)
[13:53:46] <Rhaven> },
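That "remaining" block is the output of the removeShard command itself, which doubles as the progress check; a sketch of the polling pattern, with the shard name made up:

    db.adminCommand({ removeShard: "shard0002" });   // first call starts the drain
    db.adminCommand({ removeShard: "shard0002" });   // later calls report state "ongoing" plus remaining.chunks / remaining.dbs
    // once remaining.chunks reaches 0 (and any databases are moved), the final call reports "completed"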
[13:55:21] <d0x> I've opened a SO question for this: http://stackoverflow.com/questions/19543459/add-the-index-of-each-array-element-to-the-element-itself
[13:56:21] <Rhaven> saml: sure, it should be great
[13:56:39] <Rhaven> ^^
[13:56:53] <Nodex> d0x : the aggregation framework doesn't update documents
[13:57:07] <d0x> Nodex: Ah, maybe update is wrong
[13:57:13] <Nodex> You want a map reduce for that, or just loop over your data
[13:57:15] <d0x> i like to use it in a later stage
[13:57:29] <d0x> so smth. like "projection"
[13:57:32] <Nodex> a later stage of the aggregation pipeline?
[13:57:37] <d0x> yes
[13:58:03] <Nodex> not sure that's possible
[13:58:24] <Nodex> I am not convinced the AF is object aware as in "this"
[13:59:18] <Nodex> but if you $unwind it then, as long as you don't apply a sort to the unwound elements, it will return them in the order they were found
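Since the 2.4 pipeline has no way to emit an element's position, the fallback Nodex describes is a client-side loop that stamps the index onto each subdocument; a shell sketch with a hypothetical "items" array field:

    db.coll.find().forEach(function (doc) {
        doc.items.forEach(function (item, i) {
            item.idx = i;          // record the element's position on the element itself
        });
        db.coll.save(doc);         // write the modified document back
    });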
[14:00:40] <ronakrrb> Hello every one
[14:00:58] <ronakrrb> I am trying to install mongodb on mac mini using brew.
[14:01:09] <saml> that's great
[14:01:17] <saml> or you can use pre-compiled builds?
[14:01:23] <ronakrrb> it installed successfully but showing a warning: mongodb-2.4.6 already installed, it's just not linked
[14:01:27] <saml> http://www.mongodb.org/downloads
[14:01:50] <saml> you can just download that and decompress
[14:02:36] <ronakrrb> saml, how am i supposed to link it?
[14:02:54] <ronakrrb> i had installed through ssh
[14:02:57] <saml> link? oh you're writing a driver?
[14:03:04] <ronakrrb> no
[14:03:24] <ronakrrb> it's giving a warning: mongodb-2.4.6 already installed, it's just not linked
[14:03:54] <Nodex> update your path?
[14:04:00] <Derick> ronakrrb: it's much easier to download the package
[14:04:09] <Nodex> sorry, I don't know about Mac crap
[14:04:19] <cheeser> brew link mongodb
[14:04:23] <saml> brew link
[14:04:27] <saml> http://stackoverflow.com/questions/tagged/homebrew?page=8&sort=votes
[14:07:57] <remonvv> Rhaven : Only way to speed up a drain is to move the chunks yourself which is various kinds of tricky.
[14:08:29] <Nodex> telepathy is the current hipster trending way
[14:22:41] <ronakrrb> i am downloading the package itself.
[14:22:54] <ronakrrb> how to remove the previously installed using brew?
[14:23:58] <cheeser> brew remove mongodb, of course.
[14:24:04] <cheeser> homebrew has documentation for all this.
[14:25:20] <ronakrrb> cheeser, thnx
[14:25:24] <ronakrrb> new to mac
[14:26:24] <Nodex> run while you can still get a refund :P
[14:29:03] <Rhaven> remonvv: while draining a shard, if there is a write operation on the chunk that is moving to another shard, will the changes be lost?
[14:29:50] <Derick> Rhaven: no, that should not happen
[14:31:46] <Rhaven> Derick: so changes on that chunk will be preserved?
[14:34:41] <cheeser> i would think writes to the shard wouldn't happen at all while being drained. i'd think that the new document would be routed to the correct new shard.
[14:35:54] <Derick> that's what I expect as well, but I can't give an authoritative answer
[14:39:30] <Rhaven> Ok so no write operations can happen while the draining is not finished
[14:39:35] <Rhaven> ?
[14:40:18] <oceanx> hi, I have a ~70GB mongodb database in which i stored a lot of files using gridfs (through python). I did a mongodump to migrate the database but restoring is taking a really long time (it's been almost 24 hours and still hasn't finished). is that ok? or are there some issues?
[14:40:39] <cheeser> Rhaven: no. i was talking about writes to the draining shard.
[14:40:52] <cheeser> i can't imagine why the system would write to a shard it's trying to drain.
[14:45:01] <Nodex> oceanx : there are some python tools that someone open-sourced for speeding these kinds of things up
[14:45:19] <Nodex> the links are on the mongodb blog - it's a write-up by a company but I don't recall their name
[14:45:33] <Nodex> iirc they're a server monitoring company with a dark website
[14:45:55] <oceanx> thank you Nodex! I'll take a look
[14:48:58] <Nodex> if the name comes to me I'll let you know
[14:51:39] <saml> ObjectID is always 24 characters?
[14:51:56] <cheeser> always? dunno.
[14:52:08] <saml> it says 12 bytes
[14:52:42] <saml> so i have a db.articles collection. editors create an article (draft). when it gets published, it gets a canonicalUrl
[14:53:10] <saml> i need a way to identify all articles including drafts.
[14:53:17] <saml> maybe i should put drafts in separate collection?
[14:53:39] <cheeser> what does that have to do with ObjectID size?
[14:53:44] <saml> i've been addressing articles using canonicalUrl. db.collection was imported from existing cms
[14:54:13] <saml> so, I was thinking GET /articles/<str of 24 characters> will query mongodb using _id
[14:54:27] <saml> hrm i don't know how to query published articles using canonicalUrl
[14:54:44] <saml> maybe GET /articles/?canonicalUrl=http://reddit.com/some/stuff.html
[14:55:10] <saml> probably frontends won't like that url with ?canonicalUrl=
[14:55:37] <saml> yah objectid size won't matter
[14:56:07] <saml> GET /articles/<str:_id> and GET /articles/<path:canonicalUrlMinusScheme>
[14:56:27] <cheeser> you'll need to url encode those parameters
[14:57:17] <saml> so, /articles/507f191e810c19729de860ea and /articles/nymag.com/daily/intelligencer/2013/10/romney-bookcase-bookshelf-hidden-room-secret.html
[14:57:41] <cheeser> that's not exactly what I said, saml
[14:57:57] <saml> i don't get it
[15:00:30] <saml> you mean /articles/http%3A//nymag.com/daily/intelligencer/2013/10/romney-bookcase-bookshelf-hidden-room-secret.html ?
[15:00:47] <saml> or /articles/?canonicalUrl=http%3A//nymag.com/daily/intelligencer/2013/10/romney-bookcase-bookshelf-hidden-room-secret.html
[15:00:54] <saml> man programming is too hard for me
[15:01:03] <ron> then don't program. quit.
[15:01:06] <saml> thankfully i'm not using php
[15:01:15] <saml> if i used php, i'd cause too much trouble
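For the two lookup styles saml describes, a 24-character hex test is a cheap way to tell an ObjectID from a scheme-less canonical URL in a single route; an Express-flavoured sketch in which the app, db handle, and route names are all assumed:

    var ObjectID = require('mongodb').ObjectID;
    var hexId = /^[0-9a-f]{24}$/;

    app.get('/articles/*', function (req, res) {
        var key = req.params[0];                      // everything after /articles/
        var query = hexId.test(key)
            ? { _id: new ObjectID(key) }              // looks like an ObjectID
            : { canonicalUrl: 'http://' + key };      // otherwise treat it as a URL minus the scheme
        db.collection('articles').findOne(query, function (err, article) {
            if (err || !article) return res.send(404);
            res.json(article);
        });
    });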
[15:15:06] <remonvv> Rhaven: Sorry, didn't see your question. Depending on which step of the chunk migration process is active your write will either go to the old shard or the new shard. In the case where it goes to the old shard the writes are automatically replicated on the new shard. This by the way can cause a chunk migration to take very long if the chunk is "hot", meaning there are so many writes to its shard key range that the balancer can never finalize
[15:16:50] <Rhaven> remonvv: Thank you remonvv for reply. that's good to know :)
[15:17:00] <ron> o/ remonvv
[15:19:24] <remonvv> Rhaven, np!
[15:19:26] <remonvv> Hey ron!
[15:19:39] <ron> sup
[15:46:10] <remonvv> Not much.
[16:02:24] <joshua> I am so used to mongodb now that I wish there was an sqlite type thing called something like mongolite
[16:13:57] <remonvv> I wish I was a little bit taller, I wish I was a baller, I wish I had a rabbit in a hat and a 64' Impala.
[16:16:51] <Nodex> LOL
[16:16:57] <joshua> Same
[16:17:11] <Nodex> I'm to tall
[16:17:13] <Nodex> too *
[16:18:31] <LoonaTick> Hi. I'm trying to set up (as a test) a sharded MongoDB instance on my Windows workstation. I have 3 config servers running on 29017, 29018, 29019, 1 mongos running on 27017, and 3 shards running on 25017, 25018, 25019. I don't understand why, but the mongo shards are also listening on 27017, 27018, 27019. Anybody know what I'm doing wrong?
[16:23:32] <hogepodge> ping: any 10Gen folks around?
[16:24:56] <cheeser> well, there are some mongodb folks around...
[16:24:59] <joshua> I guess the technical answer would be no since they renamed to MongoDB
[16:25:05] <cheeser> :D
[16:25:10] <hogepodge> :-)
[16:25:19] <joshua> Damn I'm too slow
[16:25:28] <remonvv> I wish I had @. Sometimes I just don't feel appreciated enough :(
[16:26:00] <remonvv> What's wrong with "are there any people around somewhat knowledgeable about MongoDB who have no problem spending time on my particular problem even though I'm not paying them?"
[16:26:14] <hogepodge> Shows how far behind I am on MongoDB news. :-)
[16:26:29] <remonvv> Well, in all honesty a company rename isn't the most exciting thing ever.
[16:26:53] <remonvv> With the possible exception of them changing their name to "Baby Pink Croissant" or something.
[16:26:58] <joshua> People were probably calling them MongoDB anyway
[16:27:06] <remonvv> Baby Pink Croissant, the creators of the popular database MongoDB...
[16:27:19] <remonvv> Possibly.
[16:27:19] <Nodex> BPC ftw
[16:27:23] <Derick> that'd be fun
[16:27:37] <hogepodge> I'm with Puppet Labs. We have a MongoDB module that we were using for one of our projects, but now we're really not maintaining it any more.
[16:27:52] <remonvv> AND HOW IS THAT OUR FAULT?!?
[16:27:56] <remonvv> I mean, continue..
[16:28:05] <Nodex> hahah
[16:28:11] <Nodex> power goes straight to his head :P
[16:28:20] <Nodex> (or the illusion of power :P)
[16:28:26] <hogepodge> It's a dependency for OpenStack Ceilometer, so I have a personal interest in keeping it alive and well.
[16:28:27] <remonvv> What power?
[16:28:29] <remonvv> Oh.
[16:28:43] <Derick> it's not a very useful power
[16:28:49] <remonvv> What can I do??
[16:28:53] <remonvv> Show me the buttons!
[16:29:16] <hogepodge> It's a module that has more general use, though, and before moving it to the OpenStack Stackforge infrastructure for maintenance I wanted to give the MongoDB team a chance to voice opinions about it (like where it's hosted, and so on).
[16:29:21] <cheeser> you can speak now, remonvv
[16:29:24] <cheeser> you have a voice!
[16:29:36] <remonvv> I always have a voice. It's just that nobody listens to that voice.
[16:30:04] <Derick> hogepodge: not sure who I should connect you with actually.
[16:30:16] <remonvv> Puppet labs hm. Ah the bones I have to pick with you.
[16:30:45] <hogepodge> Here's the git repo for it. https://github.com/puppetlabs/puppetlabs-mongodb
[16:30:51] <remonvv> How do I unplus myself. I can't be trusted with this power.
[16:30:59] <remonvv> I have a dark side.
[16:31:29] <hogepodge> Not trying to push responsibility onto you, just wanted to give you a chance to direct its disposition. :-)
[16:32:29] <hogepodge> Since I was a MongoDB admin/user in a previous life, I can take on its maintenance.
[16:32:49] <remonvv> dinner time, tc
[16:33:42] <hogepodge> Derick: Yeah, I thought I should check in here first. Do you know if any MongoDB people are going to be in Hong Kong for the OpenStack Summit?
[16:34:11] <Derick> hogepodge: maybe, I think we have a person there now, but not sure :S
[16:34:47] <Derick> off home, back later if you have further comments/questions
[16:36:31] <hogepodge> Ok, I'll hang out in the channel, or you can e-mail me at chris.hoge at puppet labs.
[16:37:12] <rbento> Hey, has anyone been able to install mongodb on OS X Mavericks?
[16:38:22] <hogepodge> remonvv did we kick your puppy or something?
[16:42:30] <joshua> rbento: I haven't tried yet. Are you running into a problem?
[16:43:32] <cheeser> there are ... issues with Mavericks last I heard.
[16:43:52] <rbento> #include <tr1/unordered_map>
[16:43:52] <rbento> ^
[16:43:52] <rbento> 1 error generated.
[16:43:52] <rbento> scons: *** [build/darwin/64/mongo/tools/bsondump.o] Error 1
[16:43:52] <rbento> scons: building terminated because of errors.
[16:44:08] <cheeser> oh, you're trying to build it.
[16:44:11] <rbento> Please, take a look: https://groups.google.com/forum/#!msg/mongodb-user/xbZzM-QMdfI/9aP92YNZ_IYJ
[16:44:30] <rbento> I'm installing with homebrew
[17:05:23] <ehershey> mongodb installs fine on mavericks :)
[17:05:43] <ehershey> the 2.4 branch just doesn't compile happily
[17:06:13] <eucalyptus> ehershey: you mean the binaries
[17:06:27] <ehershey> rbento: you can use --devel to build 2.5.3, which is unstable, or use http://downloads.mongodb.org/osx/mongodb-osx-x86_64-2.4.7.tgz
[17:06:30] <ehershey> yeah
[17:07:04] <rbento> thanks, I'm doing that right now
[18:36:01] <eph3meral> I'd like to migrate from a relational SQL db (postgres 9.1) to MongoDB. I know this is sort of a "do it yourself" thing for the most part, and I've already started/tried - I just used RABL templates from rails controllers to piece things together and it was really quite simple to bring up, but it's extraordinarily slow and gobbles my ram til I must restart :( are there any/many other reasonable options for doing this that others have used to reasonable success?
[18:38:09] <cheeser> i used a combo of hibernate and morphia for that.
[19:17:07] <fjay> if a replica set member has had its ability to vote removed.. and then you remove it... will that still cause a re-election?
[19:18:07] <fjay> remove it from the replica set that is
[19:40:40] <eph3meral> cheeser, k thx for the suggestion
[19:42:00] <cheeser> np