PMXBOT Log file Viewer


#mongodb logs for Thursday the 8th of November, 2012

[00:00:13] <rossdm> is it useful to be able to grab a specific URL? then maybe key-value pair where URL is value. If not, perhaps use an array?
[00:01:40] <spartango> it's generally not all that useful to grab a specific url, although it'd be useful to have information about the ordering
[00:02:43] <spartango> note that the appends have no guarantee of being done in order (concurrent) so i'd be doing some kind of "put at position" with an array
[00:05:24] <spartango> i'll go look more carefully at the docs for array. thanks though
[01:46:31] <ckd> cjhanks: did you file a ticket?
[01:50:37] <cjhanks> ckd: First one I tried was Fedora. There was a ticket marked resolved about the issue with other people still claiming issue persisted -- I did not want to add noise. Debian installer I didn't bother since I assumed it must be endemic.
[01:51:10] <ckd> cjhanks: Interesting, thanks for the info… I'm so used to living in Ubuntu-land I completely forgot that there's a world outside :)
[02:26:53] <thomas___> Is there any way for me to return multiple counts in one request?
[02:29:57] <thomas___> rather than running users.find({gender:"male"}).count(); and getting 231, then running users.find({gender:"female"}).count(); and getting 416, something that could return {male:"231",female:"416"}. I know map reduce can do this, is that my best bet?
[02:45:42] <IAD> thomas___: look at http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-{{count%28%29}} : a = users.find({gender:"male"}) && all = a.count(true) && m = a.count() && f = all - m
[02:57:04] <thomas___> IAD: thanks
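As an aside on thomas___'s question: MongoDB 2.2 (current at the time of this log) can return both counts in one request via the aggregation framework's $group stage. A minimal sketch, with made-up sample data and the same grouping logic in plain JavaScript so it runs without a server:

```javascript
// In the mongo shell (2.2+), one request for all gender counts:
//   db.users.aggregate({ $group: { _id: "$gender", n: { $sum: 1 } } })
// Below, the equivalent grouping over an in-memory array (illustrative data).
const users = [
  { gender: "male" }, { gender: "female" },
  { gender: "female" }, { gender: "male" }, { gender: "female" },
];
const counts = users.reduce((acc, u) => {
  acc[u.gender] = (acc[u.gender] || 0) + 1;
  return acc;
}, {});
console.log(counts); // { male: 2, female: 3 }
```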
[03:01:05] <IAD> thomas___: this will work only if you have a men and a woman in collection(not gay, lesbian, ...) =)
[03:05:10] <mrpro> IAD: what about shemale
[03:07:25] <IAD> mrpro: if the sex is a list - other variants are possible =)
[03:08:02] <squawknull> IAD: what about "pat"
[03:10:06] <IAD> I think there are many exceptions, such as zombies, orcs =)
[03:32:35] <thomas___> I just updated my database via homebrew, when I got back into the console all my dbs were gone. Are they still on my computer somewhere? Or are they whiped?
[03:32:41] <thomas___> wiped*
[03:36:02] <thomas___> I updated mongodb not database*
[03:40:33] <ckd> do you have anything in your data directory?
[03:41:36] <thomas___> I don't remember what the data directory was or if it changed from version 2.0 to 2.2
[03:42:20] <ckd> check in /etc/mongod.conf
[03:42:31] <thomas___> ckd: there is stuff in data/db/ that I'm not seeing in the console
[03:43:47] <ckd> one possibility is that your new install points to a different data dir
[03:44:00] <ckd> and that your original data is happily gathering dust in its original location
[03:49:37] <thomas___> ckd: I'm not on linux, i'm on mac and installed with homebrew so my conf isn't in etc
[03:52:41] <ckd> you should be able to see where your config file is in your process list
[03:55:42] <ckd> also you can run db.serverCmdLineOpts(); from the console
[04:05:02] <thomas___> ckd: can i just drop the old folder into the new
[04:07:45] <ckd> yep! obviously make backups first
[04:07:59] <ckd> does it look like your new install IS in fact using a different data dir than your original install?
[04:08:16] <thomas___> yes it is
[04:08:26] <ckd> awesome
[04:08:45] <ckd> yeah, shut her down, and replace your new dir with a copy of the orig and start it up again
[04:38:30] <ckd> thomas___: any luck?
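The approach ckd describes (check the process list, the config file, or db.serverCmdLineOpts() in the shell) boils down to reading the dbpath setting. A sketch of parsing a 2.x-era mongod.conf for it; the file contents below are a made-up example (Homebrew installs typically kept the conf under /usr/local/etc/), and /data/db was the compiled-in default:

```javascript
// Example 2.x-style "key = value" config; this string is illustrative only.
const conf = "bind_ip = 127.0.0.1\ndbpath = /usr/local/var/mongodb\n";
// Pull out the configured data directory, if any.
const match = conf.match(/^\s*dbpath\s*=\s*(.+)$/m);
// Fall back to the historical default when no dbpath line is present.
const dbpath = match ? match[1].trim() : "/data/db";
console.log(dbpath); // /usr/local/var/mongodb
```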
[06:28:37] <samurai2> hi there, is making a backup using mongodump for a sharded replica set cluster better than using lvm? thanks :)
[06:58:09] <kizzx2> hi all
[06:58:45] <kizzx2> the "faults" column of my `mongostat` is constantly over 100 during normal load
[06:58:51] <kizzx2> should i start worrying? what's the safe range usually?
[07:06:32] <ckd> how big is your db and how much RAM do you have?
[07:07:13] <kizzx2> ckd: the DB is about 70GB and i have 16GB ram on an EC2 EBS instance
[07:08:02] <kizzx2> insert query update delete getmore command flushes mapped vsize res faults locked % idx miss % qr|qw ar|aw netIn netOut conn
[07:08:02] <kizzx2> 0 0 0 0 0 1 0 68.2g 139g 4.92g 131 18.1 0 0|1 3|1 62b 1k 20 07:04:53
[07:08:06] <kizzx2> ^ random sample
[07:08:23] <ckd> well, someone more informed can chime in, but basically… it just means that it's having to read from disk
[07:08:36] <ckd> so if you're ok with that and it's not visibly affecting performance, i wouldn't worry about it
[07:08:57] <kizzx2> yeah, i guess for any reasonably sized DB it's inevitable to read from disk
[07:09:23] <kizzx2> just wondering how big a number would warrant some usage pattern optimization
[07:14:55] <ckd> well, that sorta thing only gets worse as your dataset gets bigger
[07:15:02] <ckd> so might as well start sorting it out now
[08:36:11] <[AD]Turbo> hola
[08:45:35] <NodeX> lol ron, you child
[08:45:43] <ron> :D
[08:58:49] <codemagician> I'm using the PHP $obj = new \Mongo(); in my web app. I stop the mongod and start my webapp again, without restarting the mongod, but my app still gets a connection back. Is this possible?
[08:59:25] <codemagician> Is there some kind of persistence happening at the PHP level, that doesn't check to see if the mongod server has gone away?
[09:01:06] <fleetfox> doubt that
[09:01:11] <fleetfox> are you sure it's running?
[09:05:45] <codemagician> yes 100%
[09:06:01] <codemagician> I refresh the web app and after about 10-20 times it knows it's gone
[09:06:06] <codemagician> which suggests a connection pool
[09:06:14] <codemagician> fleetfox: ^
[09:06:31] <NodeX> there is a connection pool yes
[09:06:51] <ron> NodeX: we miss you!
[09:07:01] <NodeX> unlucky ;)
[09:07:06] <ron> haha ;)
[09:07:52] <codemagician> NodeX: is my web app going through the collection of PHP processes, or a connection pool of mongodb connections
[09:08:31] <NodeX> when you call new Mongo() it's checking the pool from the driver
[09:08:47] <codemagician> NodeX: is there a way to force the driver to check if the db has gone away
[09:09:06] <codemagician> for development purposes
[09:09:18] <NodeX> well a try / catch on new Mongo() will check it
[09:09:26] <codemagician> i have this
[09:09:31] <codemagician> it passes it fine
[09:09:35] <codemagician> even with no running server
[09:09:38] <codemagician> that's the problem
[09:09:50] <NodeX> try catch on your query?
[09:10:14] <codemagician> http://pastebin.com/GDvVGJWQ
[09:10:53] <NodeX> perhaps try the selectDb() call instead
[09:11:29] <NodeX> it's not something I have ever needed to do tbh
[09:11:30] <codemagician> NodeX: $this->connection = new \Mongo(); this line still gets a connection back regardless of selecting db
[09:11:53] <NodeX> great, but selecting a DB will throw an error if there is no database (no connection to the db)
[09:11:59] <codemagician> so will $this->connection = new \Mongo();
[09:12:13] <codemagician> as it does on the command line
[09:12:50] <NodeX> wrap your selectDB() in a try/catch
[09:12:51] <codemagician> when running PHP to test it on the command line, as soon as the mongo server is gone, an exception is throw indicating that the connection failed
[09:13:07] <codemagician> it is
[09:13:12] <NodeX> on it's own
[09:13:15] <codemagician> it's already inside a try catch
[09:13:19] <NodeX> on its own
[09:13:22] <codemagician> it makes no difference
[09:13:35] <codemagician> i can remove it and it still failed
[09:13:36] <NodeX> ok great, you have your answer as you obviously know more than me ;)
[09:13:57] <codemagician> NodeX: $this->connection = new \Mongo(); this line raises an exception so the line you are referring to is not relevant
[09:14:30] <codemagician> if I remove the selectDB() then it still gets a connection, so what's the point?
[09:15:27] <NodeX> you're not listening to what I am telling you
[09:15:42] <ron> NodeX: you're unlistenable.
[09:15:44] <ron> ;)
[09:15:51] <NodeX> die
[09:16:04] <ron> you <3 me
[09:16:24] <codemagician> PHP website says "When the PHP process exits, all connections in the pools will be closed."
[09:17:13] <codemagician> but this doesn't happen
[09:18:37] <codemagician> anyone any idea?
[09:18:55] <ron> codemagician: you should listen to NodeX.
[09:19:21] <codemagician> has no idea
[09:19:40] <codemagician> i'll simplify the question for NodeX
[09:19:46] <NodeX> right I'll tell you the answer again
[09:20:09] <NodeX> put a try catch on selectDB - this will Throw an exception if it can't select a DB
[09:20:25] <NodeX> and it can't select one because it has NO CONNECTION to the database
[09:20:29] <NodeX> *server
[09:21:11] <codemagician> http://pastebin.com/NirFk8Rb
[09:21:37] <codemagician> there is no need to discuss selectDB
[09:21:52] <codemagician> this STILL returns a connection after the PHP process dies
[09:21:59] <NodeX> and?
[09:22:05] <codemagician> and the server isn't running
[09:22:07] <NodeX> and?
[09:22:17] <codemagician> are you broken?
[09:22:22] <NodeX> I just told you how to determine that
[09:22:54] <codemagician> why do I care to select a DB
[09:23:07] <codemagician> my web app thinks there is a connection already
[09:23:18] <codemagician> the problem is there, not later
[09:23:21] <NodeX> ok good luck
[09:23:35] <codemagician> the connection is not removed from the connection pool
[09:24:12] <codemagician> NodeX: your idea of suggesting to test to see if I can select a DB from a connection that doesn't exist is dumb
[09:24:23] <NodeX> ok dude, it's dumb, good luc
[09:24:25] <NodeX> luck*
[09:24:49] <codemagician> so be quiet
[09:24:58] <NodeX> lol, go back to using SQL
[09:25:28] <ron> NodeX: is codemagician one of the fans you talked about in #solr? ;)
[09:25:45] <NodeX> probably, one of those "thinks he knows best"
[09:26:07] <codemagician> NodeX: you have no idea what you are talking about
[09:26:17] <codemagician> you are referencing code not even in my current example
[09:26:20] <ron> NodeX: how long have you been using PHP and mongo?
[09:26:22] <NodeX> no dude, I have only been using Mongo for 2 or so years, no clue at all
[09:26:23] <NodeX> LOL
[09:26:28] <ron> \o/
[09:26:54] <NodeX> anyway, moving on
[09:26:56] <ron> NodeX: now come back to #solr and help the poor souls ;)
[09:27:05] <NodeX> so obama for another term, who called that on
[09:27:08] <NodeX> one *
[09:29:16] <NinjaPenguin> morning guys
[09:29:28] <NinjaPenguin> we have a production box that has gone live with logging verbose on
[09:29:36] <NinjaPenguin> i think this is hammering performance :(
[09:29:42] <NinjaPenguin> can I disable this without a restart?
[09:30:04] <NodeX> no :(
[09:30:45] <remonvv> use admin; db.runCommand( { setParameter: 1, logLevel: 1 } )
[09:31:11] <remonvv> You're very welcome NodeX ;)
[09:31:12] <NinjaPenguin> ooh nice!
[09:31:13] <codemagician> NodeX: I've been using it for 1 day, but long enough to know this is a connection pooling problem
[09:31:14] <NinjaPenguin> thank you
[09:31:24] <ron> remonvv: Dude! SUP! you haven't been here for a long time!
[09:31:32] <NodeX> how many other commands can be run like that
[09:31:48] <remonvv> ron, yeah had a holiday ;)
[09:32:07] <ron> codemagician: I've been developing for 10+ years to know the problem is that you use PHP instead of a real programming language. that doesn't help solve your problem though.
[09:32:13] <ron> remonvv: ooh.. good stuff
[09:32:15] <remonvv> NodeX, basically all command parameters that you'd expect could be changed runtime.
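remonvv's logging fix above is the general setParameter pattern: use admin; db.runCommand({ setParameter: 1, logLevel: 1 }). A small sketch of the shape of these command documents; the helper function is illustrative, not part of any driver:

```javascript
// Build a setParameter command document like the one quoted above.
// It would be sent with db.runCommand(...) against the admin database.
function setParameterCmd(name, value) {
  const cmd = { setParameter: 1 };
  cmd[name] = value; // e.g. logLevel, and other runtime-tunable parameters
  return cmd;
}
const quiet = setParameterCmd("logLevel", 0);
console.log(JSON.stringify(quiet)); // {"setParameter":1,"logLevel":0}
```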
[09:32:48] <NodeX> see ron, I told you he was on vacation
[09:32:54] <codemagician> ron: the drivers are written in c, so what's your point
[09:32:59] <NodeX> ron has been crying because he thought you died remonvv
[09:33:00] <ron> NodeX: no, you didn't.
[09:33:11] <NodeX> I did and have the chan logs to prove it ;)
[09:33:16] <ron> codemagician: who cares about the drivers?
[09:33:32] <ron> remonvv: wanted to tell you about my new job!
[09:35:26] <codemagician> ron: listen, driver, PHP or C. The original question was about connection pooling, since I want my development server to recognise when the mongod instance exits. PHP caches a pool of connections and this is the issue, not using a workaround that attempts to select DBs. The select works too, because PHP caches that also
[09:36:31] <codemagician> ron: PHP will not raise an exception for either attempting to create a new connection, nor select a DB because it caches both in a pool.
[09:37:54] <remonvv> ron, do share ;)
[09:38:11] <NodeX> he's a fluffer
[09:38:13] <NodeX> hahahahahaha
[09:39:55] <codemagician> Anyone else know why PHP MongoDb driver does not close the connections in the pool when the process exits?
[09:41:44] <ron> codemagician: I don't care.
[09:41:51] <codemagician> ron: then keep quiet
[09:42:00] <ron> codemagician: nope. feel free to /ignore me.
[09:42:01] <codemagician> ron: I don't remember asking you a god damn thing
[09:42:12] <ron> codemagician: you asked the channel. I _AM_ the channel.
[09:42:18] <ron> remonvv: well, not here.
[09:42:20] <remonvv> I feel it's about time to start doing our daily group hug again.
[09:42:37] <remonvv> ron, alright ;)
[09:46:27] <remonvv> codemagician, most drivers will throw exceptions only if you perform an actual operation on the connection.
[09:46:41] <remonvv> Things like selecting database is local state and will not do anything to raise exceptions.
[09:47:41] <IAD1> codemagician: persistent connections http://php.net/manual/en/mongo.construct.php
[09:47:44] <codemagician> remonvv: when I refresh my browser window again and again, the web app gets a connection every N cycles, which shows there is likely a pool of connections held inside PHP
[09:47:52] <NodeX> LOL ... I ignored this kid and he's still on about the same thing
[09:48:17] <codemagician> running it from the command line means that there is no connection pooling so the exception is thrown instantly
[09:48:36] <codemagician> the documentation says when the PHP process exits the connection is closed, but this doesn't happen
[09:49:15] <codemagician> so the local state doesn't match the underlying state
[09:51:52] <codemagician> NodeX: you don't even understand the problem. just be quiet I was programming whilst you were in nappies
[09:52:19] <stevie-bash> hello.
[09:53:37] <stevie-bash> I once used a mongo shell command with the database name specified in it. Now I can't recall this. How is the syntax to use the db name in a db.'dbname'.collection.command()?
[09:53:48] <NodeX> use db
[09:53:55] <NodeX> db.collection.command();
[09:54:11] <stevie-bash> NodeX: I don't want to use 'use'
[09:55:40] <NodeX> are you usre you've done this before?
[09:55:43] <NodeX> sure*
[09:55:46] <stevie-bash> yes
[09:56:22] <stevie-bash> I have the problem, that my shard config thinks, that databases exist on some shards, which are not there
[09:56:23] <codemagician> remonvv: when you say local state, do you mean local state of the PHP
[09:56:28] <stevie-bash> so I want to drop these
[09:56:41] <codemagician> remonvv: or is this some concept of mongo
[09:56:45] <stevie-bash> I can't use ' use dbname'
[09:57:01] <stevie-bash> since the name of the database is "*"
[09:57:09] <IAD1> One day some NoSQL guys walk into an RDBMS bar. But they couldn't open the door! Anyone know why?
[09:57:23] <remonvv> codemagician, driver side
[09:57:40] <remonvv> like, doing something like setDB(..) or whatever the equivalent is in your driver does nothing with the server.
[09:57:53] <NodeX> IAD1 : no
[09:58:11] <NodeX> they couldnt relate to the door?
[09:58:17] <IAD1> they don't have foreign key! =)
[09:58:21] <NodeX> were not allowed to join
[09:58:22] <NodeX> oh LOL
[09:58:33] <codemagician> remonvv: then the implementation of the PHP mongodb should disassociate the connection from the pool, but doesn't?
[09:58:35] <NodeX> :D
[09:58:45] <remonvv> If the connection has issues for whatever reason it will throw exceptions only if you actually do something that requires the driver to send operations to the server. Also note that most connection pool implementations will quietly attempt to create new connections if one timed out or had IO issues (rather than MongoDB oriented errors)
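remonvv's point, that a pooled handle only surfaces errors when an actual operation hits the socket, is the crux of codemagician's confusion. A conceptual model (not the real PHP driver, all names invented) of why new Mongo() can "succeed" after the server dies:

```javascript
// Toy connection pool: get() hands back a cached handle without any I/O,
// so it never throws; only query() touches the "server".
class FakePool {
  constructor() {
    this.handles = [{ alive: true }];
    this.serverUp = true;
  }
  get() { return this.handles[0]; }          // no I/O here, like new Mongo()
  query(handle) {                            // I/O happens only here
    if (!this.serverUp) throw new Error("connection refused");
    return { ok: 1 };
  }
}
const pool = new FakePool();
pool.serverUp = false;                       // "mongod was stopped"
const conn = pool.get();                     // still returns a handle
let failed = false;
try { pool.query(conn); } catch (e) { failed = true; }
console.log(failed); // true: the error surfaces only on a real operation
```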
[09:58:47] <ppetermann> whywouldyounameadatabase*?
[09:59:29] <stevie-bash> ppetermann: I don't know how or who created this
[09:59:35] <NodeX> lmao
[09:59:42] <stevie-bash> you can't switch to it
[09:59:51] <NodeX> someone needs to be shot
[09:59:57] <stevie-bash> Thu Nov 8 09:57:08 TypeError: rest.trim is not a function src/mongo/shell/utils.js:1369
[10:00:00] <NodeX> in the face
[10:00:07] <stevie-bash> if you try use "*"
[10:00:13] <codemagician> remonvv: but in the case of the $obj = new Mongo(); for PHP it returns an object with a connection inside, even after the original PHP process has died, and the mongo server too.
[10:00:21] <stevie-bash> I also see a db called "E00"
[10:00:29] <stevie-bash> I can't switch to this either
[10:00:50] <remonvv> codemagician, I'm not familiar with the PHP implementation but the Java driver closes the connection wrapper (and obviously in that case the actual connection isn't there anymore)
[10:01:09] <codemagician> remonvv: The PHP implementation is broken
[10:01:22] <NodeX> stevie-bash : try to copy the db name to a new name
[10:01:25] <Gargoyle_> stevie-bash: You are not trying hard enough!
[10:01:34] <NodeX> db.copyDatabase('*','new_name');
[10:01:35] <stevie-bash> I found in the doc something like this db['name'] , does this refer to a collection?
[10:01:46] <remonvv> codemagician, highly unlikely given the usage. Again I'm really unfamiliar with PHP but in most drivers the Mongo object is a singleton and should not be created more than once.
[10:01:53] <remonvv> Throughout the lifespan of the application, that is.
[10:02:02] <Gargoyle_> stevie-bash: There's no reason for the mongo shell not to allow a DB called *
[10:02:04] <Gargoyle_> http://pastie.org/private/hp6xu3syxhvry0y34xnw
[10:02:05] <ron> Derick: fyi, codemagician just said that your driver implementation is broken.
[10:02:07] <NodeX> not sure how to drop it but at least it will get you online
[10:02:19] <ron> Gargoyle_: dude, use your original nick. you have a different color.
[10:02:30] <Gargoyle_> nick Gargoyle
[10:02:34] <Gargoyle_> doh!!
[10:02:35] <NodeX> lol
[10:02:36] <stevie-bash> Gargoyle_: I didn't use escape
[10:02:41] <NodeX> damn
[10:02:44] <codemagician> fyi, maybe I can check?
[10:02:45] <NodeX> I was gonna steal it
[10:02:55] <stevie-bash> ty, it's dropped now
[10:03:06] <Gargoyle> NodeX: I have nick protection with nickserv! :)
[10:03:07] <NodeX> stevie-bash : sorry I assumed you used an escape else I would've told you that
[10:03:19] <NodeX> nickserv is my best friend
[10:03:40] <Derick> ron: okay
[10:03:45] <codemagician> whoops, Derick, maybe I can check with you on PHP implementation issue please?
[10:03:46] <remonvv> Does PHP have global scope? If so I'm going to guesstimate your Mongo instance has to be assigned to a global variable.
[10:03:58] <ron> Derick: are you american or british? (if that's okay to ask)
[10:04:02] <NodeX> remonvv : it has a connection pool
[10:04:06] <Derick> ron: I'm Dutch, but living in London
[10:04:07] <NodeX> dutch
[10:04:17] <ron> Derick: oh, didn't realize that. cool stuff.
[10:04:24] <Derick> codemagician: ask away
[10:04:34] <codemagician> Derick: shall I private message?
[10:04:50] <Derick> maybe here, as I see some other people having misconceptions
[10:05:18] <codemagician> Derick: http://pastebin.com/GDvVGJWQ
[10:05:38] <codemagician> Derick: my connections appears to be living on after the PHP process has existed
[10:05:41] <codemagician> *exited
[10:05:59] <Derick> after the PHP *process* has exited, or the *request*
[10:06:07] <codemagician> Derick: such that the exceptions are not caught even if the mongo db server goes away
[10:06:09] <Derick> a process/worker does many more than one request
[10:06:48] <Derick> codemagician: there are issues in the 1.2 release - the 1.3 series should solve quite a few things
[10:06:53] <Derick> and it's also shiny new code
[10:06:58] <Derick> but it *does* detect it
[10:07:01] <Derick> just it takes some time
[10:07:20] <codemagician> Derick: can you help explain something just to clear up some mystifying behaviour
[10:08:00] <IAD1> codemagician: call it before destruction: http://www.php.net/manual/en/mongo.close.php
[10:08:09] <codemagician> Derick: when I stop the mongodb server and hit refresh on the browser, the page gives an exception, an exception, .. and then returns an apparently valid connection instance as if it's cycling through the connection pool
[10:08:26] <Derick> IAD1: no, don't do that
[10:08:29] <Derick> making connections is slow
[10:08:55] <Derick> codemagician: possible - the pooling code is confusing. (Hence we got rid of it in 1.3)
[10:09:06] <codemagician> Derick: So before you mention a distinction between PHP request and PHP process. I assume the PHP process continues running, even when a new HTTP request occurs?
[10:09:13] <Derick> yes
[10:09:21] <Derick> back in 5
[10:10:42] <IAD1> Derick: yep, but if he wants it...
[10:11:42] <codemagician> Derick: am I correct in assuming that the PHP request isn't informing the PHP process to update its connection state when the PHP program terminates?
[10:12:33] <codemagician> Derick: that is, currently in 1.2 the PHP script ends, but should inform its container PHP process and in tern update the connection state for that pool instance internally ?
[10:13:32] <codemagician> *turn
[10:18:20] <Derick> codemagician: there is no such thing as a "container PHP process"
[10:18:25] <Derick> there is state in the apache module
[10:18:33] <codemagician> Derick: I see
[10:18:38] <Derick> and that state is shared, not separate from the request
[10:18:45] <codemagician> Derick: gotcha
[10:19:03] <codemagician> Derick: so why did you ask about request versus process?
[10:19:10] <codemagician> is there a distinction ?
[10:19:14] <Derick> yes
[10:19:19] <Derick> one process runs more than one request
[10:19:22] <Derick> 10000s even
[10:19:27] <Derick> all in series
[10:19:53] <codemagician> is the mongo php state held in the master apache process ?
[10:19:59] <Derick> no
[10:20:04] <Derick> in each worker process
[10:20:16] <codemagician> Derick: is that why connection pooling is confusing?
[10:20:18] <Derick> so the worker processes don't share state among each other
[10:20:20] <Derick> yes :-)
[10:20:27] <Derick> pooling is confusing, that's why we removed it :-)
[10:20:29] <codemagician> Derick: Now I see the problem you have
[10:20:44] <remonvv> Derick has many problems.
[10:20:51] <Derick> remonvv: hush ;-)
[10:20:53] <NodeX> not as many as ron
[10:20:54] <remonvv> Haha.
[10:20:55] <codemagician> Derick: so there is no way for an exiting script to signal that the connection has terminated
[10:21:06] <Derick> codemagician: that *should* happen automatically
[10:21:11] <Derick> practice is that it doesn't
[10:21:28] <Derick> codemagician: do you have the opportunity to try 1.3.0rc1? (or the very soon upcoming rc2?)
[10:21:35] <codemagician> sure
[10:21:55] <Derick> need to fix/change one more thing before I can make rc2
[10:22:30] <codemagician> Derick: is there no interprocess communication between apache workers?
[10:22:38] <Derick> correct
[10:22:54] <NodeX> is that "fix" adding back domain sockets? :P
[10:22:56] <codemagician> Derick: it would have been nice to be able to configure connection pools
[10:23:06] <Derick> NodeX: they're back already - even in rc1
[10:23:22] <NodeX> is Rc1 stable?
[10:23:25] <codemagician> Derick: when will I be able to test rc1
[10:23:47] <Derick> codemagician: IMO, that stuff should go in a proxy... it's very hard to do that from an apache/php application with so many distinct workers
[10:23:53] <Derick> NodeX: it is an RC
[10:24:02] <Derick> codemagician: you can already - we released it on monday
[10:24:05] <codemagician> NodeX: why now do you care? You spent the last hour flaming me and telling me PHP isn't a real programming language?
[10:24:16] <remonvv> codemagician, that was ron
[10:24:25] <remonvv> He's our resident anti-PHP enthusiast.
[10:24:32] <Gargoyle_> Derick: Just spotted your msg from last night. :)
[10:24:53] <codemagician> NodeX: sorry, ron
[10:24:56] <Derick> Gargoyle: bjori is on the case
[10:25:03] <Gargoyle> :)
[10:25:06] <Derick> Gargoyle: but can't find it yet
[10:25:22] <Gargoyle> :(
[10:25:31] <codemagician> i've been ducking and bobbing the PHP abuse so much I've forgotten who was throwing
[10:25:43] <NodeX> Gargoyle must be the most unlucky person in the world with the number of bugs he's had
[10:26:13] <Gargoyle> NodeX: To be expected when you run beta :)
[10:26:24] <Gargoyle> And I'm not the one that has to fix them!
[10:26:35] <Derick> NodeX: That's exactly the type of user we (as devs) really value. People wanting to try out new shiny things in production.
[10:26:43] <codemagician> Derick: is there any way for the mongodb to signal that it has quit back up to the PHP layer?
[10:26:48] <NodeX> I run 1.3 and I dont hit these bugs
[10:26:50] <Derick> But yes, things then will break.
[10:26:52] <Derick> codemagician: nope
[10:27:06] <Gargoyle> NodeX: You running a RS ?
[10:27:08] <Derick> NodeX: Gargoyle is just a better (ab)user then :-)
[10:27:12] <Gargoyle> :P
[10:27:15] <Gargoyle> \o/
[10:27:28] <codemagician> Derick: aren't there any callbacks?
[10:27:33] <NodeX> (ab) ?
[10:27:37] <Derick> codemagician: only by the connection disappearing - and no, no callbacks
[10:27:47] <Gargoyle> NodeX: abuser!
[10:27:48] <codemagician> Derick: that makes your jobs harder
[10:27:50] <NodeX> ah
[10:27:56] <NodeX> no, I dont run replica sets
[10:28:13] <Derick> and on that bombshell, I need to go back fixing that for RC2
[10:28:35] <Gargoyle> NodeX: I only seem to get the problems when connecting to the RS. When running on dev to just a local copy of the db, it's fine.
[10:28:57] <codemagician> Derick: one last question. as a stop-gap, is there a way to force $obj = new MongoDb(); to throw an exception when the server has died
[10:29:14] <Gargoyle> ^^ Derick, bjori ^^ (Think I have already mentioned that, haven't I?)
[10:29:25] <Derick> codemagician: no, you can't even close all connections in 1.2
[10:29:27] <Derick> Gargoyle: yes
[10:30:12] <codemagician> Derick: not sure who maintains the PHP page, but the page says it does
[10:31:06] <codemagician> Derick: or I guess that's the process worker
[10:31:17] <Derick> codemagician: where is that?
[10:31:19] <codemagician> Derick: as opposed to the PHP process
[10:31:53] <Derick> it should throw an exception, in theory
[10:31:54] <codemagician> Derick: http://php.net/manual/en/mongo.connecting.php connection pooling section. last few words of that
[10:32:17] <Derick> but as I said, there is quite a bit of code-rot there that we now replaced
[10:32:45] <codemagician> Derick: can I ask, in that document what does "PHP process" mean exactly?
[10:32:59] <codemagician> Derick: as I understand it from here it means apache worker thread
[10:33:04] <Derick> yes
[10:33:11] <codemagician> Derick: but at glance it seems to mean instance of PHP script
[10:33:35] <Derick> that's not what it means :-)
[10:34:37] <codemagician> Derick: I'm looking forward to the connection pool going
[10:35:00] <durre> I have a noob question: I have a document like this: http://pastebin.ca/index.php … how do I query all the documents that has "SE" in the markets array?
[10:35:23] <NodeX> LOL
[10:35:30] <NodeX> you need to pastebin the actual doc
[10:35:46] <durre> NodeX: whoops, sorry :) http://pastebin.ca/2249682
[10:36:09] <Derick> { 'filters.markets': 'SE' }
[10:36:11] <NodeX> db.collection.find({""});
[10:36:12] <NodeX> ^^
[10:36:18] <NodeX> what Derick said
[10:36:59] <durre> ahh, a simple dot… thx :)
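Derick's "simple dot" answer works because when a dotted path resolves to an array, MongoDB matches a document if any element equals the queried value. A sketch of that matching rule over in-memory documents mimicking durre's pastebin shape:

```javascript
// { "filters.markets": "SE" } matches any doc whose markets array contains "SE".
// Documents below are illustrative.
const docs = [
  { filters: { markets: ["SE", "DK", "GB"] } },
  { filters: { markets: ["DE", "FR"] } },
];
const matches = docs.filter((d) => d.filters.markets.includes("SE"));
console.log(matches.length); // 1
```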
[10:37:32] <NinjaPenguin> Derick: can I ask how (if at all) this connection pooling is handled differently under FPM
[10:42:09] <Derick> NinjaPenguin: for PHP-FPM, substitute apache worker for FPM worker
[10:42:21] <Derick> i'll make sure to put that in my article!
[10:42:31] <Derick> (on the new connection handling in 1.3)
[10:42:43] <NinjaPenguin> that would be great :)
[10:43:11] <NinjaPenguin> when can we expect the article? and is there a rough ETA on the final release?
[10:43:25] <NinjaPenguin> we will be testing the RC over the next few days against our RS
[10:45:10] <Derick> RC2 coming out probably on monday, then the article as well
[10:45:25] <NinjaPenguin> great stuff
[10:45:31] <Derick> I will fix something else today, which you probably want to use when connecting to an RS
[10:46:12] <NinjaPenguin> ah ok, what is that?
[10:47:11] <Derick> it doesn't always see all connections
[10:47:21] <Derick> this is something that I broke after RC1 though, so that one should be good
[10:47:44] <NinjaPenguin> cool ok, we will get testing then
[13:13:20] <NodeX> http://www.zdnet.com/microsoft-patent-spies-on-consumers-to-enforce-drm-7000007102/
[13:13:31] <NodeX> Whatever will they come up with next
[13:16:46] <Zelest> lol
[13:18:27] <NodeX> "watching a film with freidns
[13:18:33] <NodeX> freinds* is not allowed ... wtf ?
[13:22:50] <remonvv> Microsoft has a lot of extremely smart people forced into implementing some rather bad ideas.
[13:25:17] <NodeX> this corproate mentality about always having to maximise profits really is not good
[13:25:33] <NodeX> corporate*
[13:25:48] <NodeX> responsibility > greed
[13:26:44] <Gargoyle> new TV has a camera!
[13:26:55] <Derick> NodeX: it's required by law though
[13:27:10] <NodeX> Derick : In the USA yes, but still doesn't make it right
[13:27:15] <Derick> NodeX: you as a company have to do everything to please your shareholders :-/
[13:27:19] <Derick> NodeX: and yes, it's sick
[13:27:29] <NodeX> the world is a mess because of the top 5% of people
[13:27:41] <NodeX> imo within 5 years the whole system will crumble
[13:28:23] <NodeX> Gargoyle : tape the lens up LOL
[13:28:28] <NodeX> I do on my laptop webcam
[13:28:40] <Derick> it has a light when it turns on
[13:29:08] <NodeX> LOL, like that can't be turned off by Microsoft :P
[13:29:33] <NodeX> perhaps I'm paranoid
[13:30:01] <NodeX> iirc the UK is the most camera'd up place in the world
[13:31:32] <kali> NodeX: depends how big a place you consider... i've heard there is a village in france with 1 camera for 17 people
[13:31:43] <kali> NodeX: but bigger than that, yes, you're right.
[13:31:54] <NodeX> kali : I think it's on average cameras per capita or something
[13:32:12] <remonvv> You know. For a support channel for a NoSQL database we do tend to get horribly off topic on occasion.
[13:32:17] <NodeX> london is terrible for it
[13:32:20] <Derick> london is crazy
[13:32:21] <Derick> hah
[13:32:30] <NodeX> Manchester and Liverpool are bad
[13:32:45] <NodeX> Liverpool not as bad coz the scousers keep stealing the cameras
[13:32:47] <NodeX> LOL
[13:32:54] <kali> NodeX :)
[13:33:03] <NodeX> remonvv : gotta have some lite chat every now and again
[13:33:10] <NodeX> keeps me from working to hard :P
[13:33:26] <NodeX> lol
[13:33:40] <NodeX> fortunately I answer to no-one
[13:33:50] <remonvv> We should have a RL #mongodb meet and force everyone to wear nametags with their IRC handle
[13:34:02] <NodeX> I'm trying to develop the ultimate CSRF prevention system atop mongo
[13:34:25] <NodeX> I imagine Kali to look like a blonde surfer dude LOL
[13:34:36] <kali> NodeX: arf
[13:34:42] <NodeX> kali - cali :P
[13:34:43] <kali> NodeX: i wiesh :)
[13:34:46] <kali> wish
[13:35:07] <NodeX> lol
[13:35:21] <NodeX> WormDrink
[13:35:43] <NodeX> anyone seen MongoDBIdiot recently?
[13:35:48] <NodeX> not seen him for a week or so
[13:36:04] <NodeX> he's probably off trolling #mysql or something LOL
[13:36:11] <remonvv> Yes, can't help but feel somewhat happy I didn't go for my second name choice "inappropriatefondling"
[13:36:20] <NodeX> Lmao
[13:36:22] <remonvv> Probably.
[13:36:35] <NodeX> I used to use female names
[13:36:40] <NodeX> I find you get helped quicker
[13:37:02] <NodeX> HelenHunt was always a good one in #security on linknet
[13:38:29] <NodeX> anyways, time for a haircut then Halo 4 me thinks
[13:41:39] <remonvv> Yeah. You're really busy with work.
[13:47:03] <durre> with this document: "filters" : { "markets" : [ "SE", "DK", "GB" ] } … how do I find all the documents that exist in the market SE or GB?
[13:47:37] <remonvv> find({'filters.markets':{$in:["SE", "GB"]}})
[13:49:22] <remonvv> that's OR, for AND use find({'filters.markets':{$all:["SE", "GB"]}})
[13:51:12] <durre> remonvv: cool, thx!
[13:53:55] <remonvv> You're welcome.
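For reference, the matching semantics remonvv describes can be sketched in plain JavaScript. This is an illustrative in-memory model, not driver code; the sample documents follow durre's shape, and `matchIn`/`matchAll` are hypothetical helpers:

```javascript
// Plain-JS sketch of how $in and $all match against an array field.
const docs = [
  { filters: { markets: ["SE", "DK", "GB"] } },
  { filters: { markets: ["DK"] } },
  { filters: { markets: ["SE"] } },
];

// $in: the array contains at least one of the given values (OR)
const matchIn  = (doc, vals) => vals.some(v => doc.filters.markets.includes(v));
// $all: the array contains every one of the given values (AND)
const matchAll = (doc, vals) => vals.every(v => doc.filters.markets.includes(v));

const inHits  = docs.filter(d => matchIn(d, ["SE", "GB"]));   // docs 1 and 3
const allHits = docs.filter(d => matchAll(d, ["SE", "GB"]));  // doc 1 only
```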
[14:22:49] <rshade98> is this the proper way to setup a replica set
[14:22:54] <rshade98> http://pastebin.ca/2249730
[14:24:22] <remonvv> Yep, provided your config document is in order.
[14:27:03] <durre> using java, is there a way to see the query being generated by the mongodb driver?
[14:30:00] <rshade98> remonvv this is my whole string
[14:30:01] <rshade98> db.command("{ replSetInitiate : { _id: 'repl1', members: [{_id: 0, host: '10.202.18.48'},{_id: 1, host: '10.144.153.251'},{_id: 2, host: '10.144.15.96'}, ] } }")
[14:30:12] <durre> starting to really hate java btw :) working with json is a pain
[14:51:04] <nopz> Hi !
[14:51:17] <nopz> I have to rename a database, what is the best way to do it?
[14:55:09] <rshade98> any idea why I get this: Mongo::MongoArgumentError: command must be given a selector
[14:55:22] <rshade98> on the replSetInitiate
[14:56:54] <Derick> nopz: http://stackoverflow.com/questions/9201832/how-do-you-rename-a-mongodb-database
[14:57:08] <nopz> Derick, yeah that's actually what I
[14:57:12] <nopz> I'm using
[14:57:58] <Derick> I suggest you vote up https://jira.mongodb.org/browse/SERVER-701 then
[14:58:00] <nopz> by chance my database is only 19 GB so it will take just 1 or 2 hours, but it's really a problem anyway
[14:59:10] <nopz> Yup
[15:28:07] <jtomasrl> how can i just get the id's of nested objects in an array?
[15:30:45] <squawknull> jtomasrl: you could use closures, but you'll have to wait until java 8 :)
[15:31:06] <squawknull> honestly, i think you're just going to have to iterate through the array and add them to a list
[15:31:12] <squawknull> or a second array
[15:31:25] <jtomasrl> i think so
[15:31:27] <ckd> rshade98: in ruby?
[15:31:40] <squawknull> oh crap, wrong channel, i thought i was in #java :)
[15:31:46] <NodeX> you can add "foo._id" to the fields
[15:32:06] <squawknull> ignore everything i've said as it's totally non-relevant unless you happen to be using java :)
[15:32:41] <NodeX> where foo is your nested field
[15:32:54] <NodeX> s/field/key
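NodeX's tip, sketched in plain JavaScript: a projection like `{"foo._id": 1}` in the second argument to find() keeps only the nested _ids (plus the document _id). The extraction below is a hypothetical in-memory stand-in for what you'd then read out of the projected result:

```javascript
// One sample document; "foo" is the nested array key from NodeX's advice.
const doc = {
  _id: "doc1",
  foo: [
    { _id: "a", name: "first" },
    { _id: "b", name: "second" },
  ],
};

// db.coll.find({}, { "foo._id": 1 }) would strip everything but the _ids;
// iterating the projected array then yields just the nested ids:
const nestedIds = doc.foo.map(item => item._id);
```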
[15:36:39] <jtomasrl> lol
[15:40:24] <rshade98> ckd, yes
[15:40:55] <rshade98> I am getting close, I got it to start the init, however it does not seem to like my config doc
[15:41:07] <Derick> ckd: close to having the fix ready
[15:41:08] <ckd> rshade98: selector should be a hash
[15:41:42] <rshade98> yeah, I keep getting
[15:41:44] <rshade98> "info2"=>"no configuration explicitly specified -- making one",
[15:42:11] <rshade98> str="{ _id : 'repl1', members : [{_id : 0, host : ' 10.84.246.236'},{_id : 1, host : '10.190.235.118'},{_id : 2, host : '10.211.61.49'}, ]}"
[15:42:11] <rshade98> db.command({ "replSetInitiate" => str.to_json } )
[15:42:33] <ckd> Derick: hooray!
[15:42:44] <Derick> just testing it now actually
[15:46:01] <rshade98> the second part of the config doc looks right. I have to have it quoted or ruby gripes. should make it into a hash and then parse it to json
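As ckd says, the command selector must be a native hash/object, not a JSON string; that is exactly what the "command must be given a selector" error is complaining about. A plain-JavaScript sketch of the document shape (hosts taken from rshade98's earlier paste; actually initiating a set needs live mongods, so this only builds and inspects the document):

```javascript
// replSetInitiate command document built as a native object, not a string.
const replSetInitiate = {
  replSetInitiate: {
    _id: "repl1",
    members: [
      { _id: 0, host: "10.202.18.48" },
      { _id: 1, host: "10.144.153.251" },
      { _id: 2, host: "10.144.15.96" },
    ],
  },
};

// With a driver you pass this object/hash directly, e.g. in Ruby:
//   db.command({ "replSetInitiate" => config_hash })   # a Hash, NOT str.to_json
const memberIds = replSetInitiate.replSetInitiate.members.map(m => m._id);
```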
[15:48:49] <ckd> Derick: sounds like you had a long night then!
[16:02:04] <Derick> ckd: I'm in London, it's night all year round :-)
[16:24:46] <jtomasrl> need some help with this plz http://stackoverflow.com/questions/13292393/find-object-inside-json-using-nested-id/13292960#13292960
[16:27:38] <ppetermann> jtomasrl: that document looks weird
[16:28:26] <jtomasrl> ppetermann: maybe, I'm new to this mongodb thing and working with JSON
[16:29:38] <remonvv> jtomasrl, pastie the document itself please.
[16:29:58] <Gargoyle> jtomasrl: You've not really explained what you are trying to do very well. So it's hard to follow!
[16:30:45] <nopz> does copyDatabase lock the whole mongo server?
[16:31:44] <jtomasrl> https://gist.github.com/4f33aaab970ccd0c7c09
[16:32:33] <jtomasrl> Gargoyle: From the songSchedule song_id's I need to get the corresponding song name and artist from Songs
[16:33:58] <Gargoyle> jtomasrl: So "songs", and "songsSchedule" are not related?
[16:34:38] <jtomasrl> Gargoyle: each song has its own ID and songSchedule saves that ID
[16:36:44] <Gargoyle> jtomasrl: Are we talking lots of songs, and song Schedules? (It's looking like you might have your schema a little complicated)
[16:38:28] <Gargoyle> Well actually, considering your top-level item is an array, I'm going to suggest you have a total rethink of your schema
[16:39:06] <jtomasrl> Gargoyle: is there a guide for making these schemas? I'm so used to relational DBs
[16:39:58] <Gargoyle> jtomasrl: There is a good section in the mongodb docs - I haven't got the exact url.
[16:40:12] <ckd> http://www.mongodb.org/display/DOCS/Schema+Design
[16:40:24] <jtomasrl> thx
[16:40:52] <NodeX> jtomasrl : here is a good tip, think of how you would design it for a relational DB and do the exact opposite
[16:41:23] <NodeX> forget ease of management because it goes out of the window
[16:41:41] <ckd> think about why you're actually using MongoDB in lieu of a relational db
[16:41:51] <NodeX> ^^
[16:42:14] <ckd> and if your reasons are legit, play to those strengths
[16:43:02] <ckd> but just sorta shoehorning table rows into a document is never a good idea
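To make the "don't shoehorn rows into documents" point concrete for the songs/songSchedule case above, here is a hedged sketch of embedding the fields that are always read together. The field names are assumptions based on the discussion, not jtomasrl's actual schema:

```javascript
// Relational habit: the schedule stores only song_id, forcing a second lookup.
const scheduleRelational = { song_id: "s1", starts: "2012-11-08T21:00" };

// Document habit: embed the fields you always read together (denormalize).
const scheduleEmbedded = {
  song: { _id: "s1", name: "Some Song", artist: "Some Artist" },
  starts: "2012-11-08T21:00",
};

// One read now answers "what plays at 21:00 and who is it by?"
const answer = `${scheduleEmbedded.song.name} by ${scheduleEmbedded.song.artist}`;
```

The tradeoff is duplication: if a song's name changes you must update every schedule entry that embeds it, which is usually acceptable for mostly-read data like this.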
[16:44:25] <jdevelop> Hi all! given a list of ObjectId, I need to return only documents with those IDs and containing specific properties. How do I do that in mongo?
[16:44:50] <jdevelop> it's like "where id in (1,2,4,5) and property=value" in terms of SQL
[16:45:22] <Gargoyle> jdevelop: using $in - check querying in the docs.
[16:45:56] <ckd> jdevelop: http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-%24in
[16:46:41] <jdevelop> thanks )
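jdevelop's SQL translates directly into a single query document; below it, a plain-JS evaluation of the same predicate as an illustration (the in-memory filter is not a driver API):

```javascript
// "where id in (1,2,4,5) and property=value" as a MongoDB-style query document:
const query = { _id: { $in: [1, 2, 4, 5] }, property: "value" };

const docs = [
  { _id: 1, property: "value" },
  { _id: 2, property: "other" },
  { _id: 3, property: "value" },
  { _id: 5, property: "value" },
];

// Hypothetical in-memory equivalent of what the server matches:
const hits = docs.filter(
  d => query._id.$in.includes(d._id) && d.property === query.property
);
```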
[16:47:04] <jtomasrl> cxd, NodeX: thx :P im getting it now
[16:47:13] <blueprints> if i run db.foo.find().sort({foo:1}).limit(int), can i be sure that sort() is applied before limit() on the collection?
[16:47:24] <blueprints> can't see a note on this in the docs
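The answer to blueprints' question is yes: the server applies the sort before the limit (internally it keeps a top-n, but the result is identical to sorting and then slicing). A plain-JS model of the semantics:

```javascript
// db.foo.find().sort({foo: 1}).limit(n) behaves like sort-then-slice:
const rows = [{ foo: 3 }, { foo: 1 }, { foo: 2 }];

const sortThenLimit = (docs, n) =>
  [...docs].sort((a, b) => a.foo - b.foo).slice(0, n); // copy, sort asc, take n

const top2 = sortThenLimit(rows, 2); // the two smallest foo values
```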
[17:08:12] <eka> hi all
[17:08:47] <eka> I have a mongodb working on an 8gb machine... and it is faulting... but mongostat shows that it is only using 3.5gb. any clue?
[17:17:47] <ckd> how big is your db?
[17:19:53] <kali> eka: your machine is 64bit ?
[17:20:04] <eka> kali: yes
[17:20:20] <eka> ckd: binary dump is ~14gb
[17:20:37] <eka> kali: I think let me check
[17:20:52] <eka> kali: yes
[17:21:11] <eka> kali: https://groups.google.com/forum/?fromgroups=#!topic/mongodb-user/pPrc3zal6p4
[17:21:54] <kali> well, you have 45.9g mapped
[17:22:44] <eka> kali: cause I loaded the exported dump on the shard0 then added the shards
[17:23:16] <kali> dump are smaller than active db, they contain no padding, and no indexes
[17:23:22] <eka> kali: but why is it using only 3g of the 8g available?
[17:24:52] <kali> eka: it's mapping 45g. the 3g are the process space; they don't contain the data, but the "running" stuff: thread stacks, connections, temporary stuff, that kind of thing
[17:25:38] <eka> kali: AFAIK mapping is the file size... so that doesn't map to the whole DB size... or am I wrong?
[17:26:28] <Tobsn> you guys mean fragmentation?
[17:27:15] <kali> eka: i don't understand what you mean. "DB size" is ambiguous
[17:27:22] <eka> I mean, now it has 1/3 of the data since it is sharded
[17:28:48] <eka> kali: so should I check my storage size in db.stats()
[17:28:49] <eka> ?
[17:28:54] <kali> ha, so Tobsn is probably right, these files must be full of air
[17:29:02] <Tobsn> yep
[17:29:11] <Tobsn> if you look at the pure database files
[17:29:11] <kali> you probably can benefit from a compact()
[17:29:13] <Tobsn> those are mostly air
[17:29:26] <Tobsn> it preallocates the space for the next higher tear
[17:29:28] <Tobsn> *tier
[17:29:39] <Tobsn> so 500mb of data takes 2GB in filesize
[17:29:40] <Tobsn> etc.
[17:29:44] <Tobsn> gets bigger and bigger
[17:30:12] <eka> Tobsn: ok.. so how do I find out why it is really faulting?
[17:30:13] <kali> Tobsn: yep, but it's even worse in eka's case, because by sharding his data, he moved data away to other servers
[17:30:21] <kali> eka: run a compact()
[17:30:24] <kali> eka: or a repair()
[17:30:24] <eka> ok
[17:30:30] <Tobsn> yeah run a compact
[17:30:34] <Tobsn> and what does "faulting" mean?
[17:30:50] <Tobsn> it crashes? it doesnt find data? shards are not connecting?
[17:30:57] <kali> Tobsn: it means mongodb tries to access a memory mapped file which is not in memory
[17:31:33] <Tobsn> oh
[17:31:34] <Tobsn> yeah thats bad
[17:31:49] <Tobsn> i'd run a memory test first to see if not one of the sticks is broken
[17:32:05] <Tobsn> then, compact, turn off, turn back on
[17:32:11] <Tobsn> maybe even kill the db and resync
[17:32:19] <kali> no need for that now. there is no evidence of a hardware issue here
[17:32:30] <kali> it's just regular kernel page caching
[17:32:31] <Tobsn> k
[17:32:35] <eka> ok still data size is 10gig :P
[17:32:39] <Tobsn> hehe
[17:32:51] <kali> your compact is already finished ?
[17:33:02] <Tobsn> hehe
[17:33:06] <Tobsn> fastest compact ever
[17:33:10] <eka> kali: stopping applications so I can start the compact
[17:33:17] <kali> ha, ok
[17:33:45] <Tobsn> btw. does anyone know by chance how I define unknown children in mongoose?
[17:33:52] <eka> kali: but db stats says that data size is 10g on each shard... need moar shards
[17:34:07] <Tobsn> hmm you could try that
[17:34:10] <Tobsn> add a shard
[17:34:12] <Tobsn> let it balance
[17:34:14] <Tobsn> see what happens
[17:34:40] <kali> eka: you don't necessarily need to have 100% of your DB in RAM
[17:35:06] <eka> kali: running repairDB so it will do all the collections... then I'll see... and add shards accordingly
[17:36:39] <eka> still... if mongodb maps everything into memory ... why the res was 3g...
[17:37:01] <kali> because memory mapped does not count in res
[17:37:03] <kali> :)
[17:37:26] <Tobsn> god damn magic
[17:37:27] <Tobsn> ;)
[17:37:38] <cyounkins> Is anyone using a PyMongo pool which does NOT create a new socket per greenlet?
[17:38:00] <kali> memory-mapped can be swapped out (and they are all the time, actually). res is for binary libraries, thread stacks, very alive stuff
[17:38:08] <eka> kali: where does it count?
[17:38:17] <kali> eka: in mapped, for instance :)
[17:38:19] <eka> I mean reading the mongotop
[17:38:44] <kali> i think you mean mongostats
[17:38:49] <eka> yes sorry
[17:38:50] <kali> mongostat
[17:38:58] <kali> well, it's in the "mapped" column :)
[17:39:23] <eka> kali:
[17:39:27] <eka> res
[17:39:27] <eka> The amount of (resident) memory used by the process at the time of the last mongostat call.
[17:39:42] <kali> yes. mapped files are not resident
[17:40:00] <kali> eka: http://en.wikipedia.org/wiki/Memory-mapped_file
[17:40:36] <eka> kali: yes but why didn't it assign 7g instead of 3g for the memory mapped files?
[17:40:52] <eka> I think I'm lost
[17:40:53] <eka> lol
[17:41:01] <kali> eka: yes, you are :)
[17:41:11] <kali> eka: please read and understand the wiki page first :)
[18:22:19] <nicobn_> hi, I'm trying to shard a collection and even if the index exists, I still get the following error message:
[18:22:28] <nicobn_> please create an index over the sharding key before sharding.
[18:22:35] <nicobn_> what am I doing wrong ?
[18:26:11] <frsk> As the message says.. Create an index for the data you'll be using as a sharding key
[18:31:56] <nicobn_> fredix: of course, yet the index already exists
[18:32:17] <nicobn_> I wish my problem were that easy
[18:33:06] <ckd> nicobn_: can you pastie the exact commands
[18:34:08] <nicobn_> ckd: http://pastie.org/5347028
[18:36:07] <jonno11> Hi - I have a node app accessing my MongoDB instance on my local machine. For some reason, when I log into mongod (shell) to the same database, the data doesn't seem to be the same as what the node instance is accessing. Any ideas as to why this is?
[18:56:31] <jonno11> Hi - I have a node app accessing my MongoDB instance on my local machine. When I log into mongod (shell) to the same database, the data doesn't seem to be the same as what the node instance is accessing. (ie. a find({}) won't return anything!) Any ideas as to why this is?
[19:03:18] <_m> jonno11: The commands "mongo" and "mongod" are quite different. You should be using the former to connect locally.
[19:04:12] <jonno11> _m: How so? Is mongod not a cli client?
[19:04:25] <ckd> jonno11: mongod is the server, mongo is the client
[19:04:35] <_m> mongoDaemon
[19:04:39] <jonno11> _m: sorry yes
[19:04:52] <jonno11> _m: got it wrong in the question
[19:05:10] <jonno11> _m: but the issue still is happening
[19:05:25] <ckd> nicobn_: sorry, wasn't ignoring, you just happened to hit the rather shallow ceiling of my knowledge on sharding… your command LOOKS correct to me…. what's that last index though?
[19:05:52] <nicobn_> composite index of some sort
[19:05:54] <nicobn_> we don't use it
[19:05:54] <ckd> jonno11: make a pastie of the entire sequence of commands you're using in the shell
[19:05:58] <nicobn_> could it "confuse" mongodb ?
[19:06:10] <jonno11> db.User.find({})
[19:06:21] <ckd> jonno11: pastie.org
[19:06:22] <jonno11> ckd: Literally, that's it
[19:06:34] <ckd> jonno11: and the entire sequence, from starting the shell on
[19:07:29] <jonno11> ckd: http://pastie.org/5347171
[19:07:47] <ckd> nicobn_: test it out on a duplicate, but see if dropping that last index helps
[19:08:24] <ckd> jonno11: thanks! in the shell, looks like, unless you're actually using the db "test" in node as well, you aren't switching to the appropriate DB
[19:08:45] <jonno11> ckd: Yep, I am using 'test' in node too
[19:09:36] <ckd> what is the output of show dbs?
[19:10:52] <jonno11> ckd: I've switched to using another db, other than test. encore-test 0.203125GB
[19:11:14] <jonno11> ckd: But the problem still occurs on the other database
[19:11:34] <ckd> stick your node.js stuff on pastie
[19:14:36] <ckd> also possible that the collection names are mismatched… they are case sensitive
[20:34:15] <fatninja_> I want to do db.coll.save(obj);
[20:34:19] <fatninja_> how do I get the value inserted?
[20:34:31] <fatninja_> more specifically, the _id
[20:34:32] <fatninja_> generated
[20:35:53] <ckd> fatninja_: which driver?
[20:35:56] <eka> fatninja_: is returned IMHO
[20:36:03] <eka> AFAIK
[20:36:14] <fatninja_> in the mongodb console
[20:36:20] <fatninja_> I want to populate a database with demo data
[20:36:23] <fatninja_> by running a .js file
[20:36:40] <fatninja_> and some data is related to other
[20:36:54] <fatninja_> after an Insert it would be nice to get the id of the inserted value
[20:37:56] <ckd> fatninja_: you can just create the _id in advance when you build your object
[20:38:25] <fatninja_> ckd: I found another way: obj = { xxx }, db.coll.save(obj), obj._id is now the object id
[20:39:04] <fatninja_> hope it will work
[20:39:11] <ckd> oh, right, yeah, that does work
[20:39:38] <eka> kali: found that I had 2 other collections with 10M docs each ... sharding those also
[20:39:48] <eka> kali: those were trashing shard0
[20:42:01] <fatninja_> what is the diff between .save and .insert ?
[20:51:56] <crudson> fatninja_: read about save() here http://www.mongodb.org/display/DOCS/Updating
[20:52:33] <fatninja_> crudson: got it, thanks
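The shell pattern discussed above can be modeled in a few lines of plain JavaScript: save() upserts by _id and fills in a generated _id on the object you pass (which is why obj._id is populated after db.coll.save(obj)), while insert() refuses duplicates. This is a sketch of the semantics, not the shell's real implementation:

```javascript
// Tiny in-memory model of the shell's save/insert behavior.
let counter = 0;
const store = new Map();

function save(obj) {
  if (obj._id === undefined) obj._id = `oid_${++counter}`; // shell generates an ObjectId
  store.set(obj._id, obj);                                 // insert or overwrite by _id
  return obj._id;
}

function insert(obj) {
  if (obj._id !== undefined && store.has(obj._id)) {
    throw new Error("duplicate key");                      // insert never overwrites
  }
  return save(obj);
}

const obj = { name: "demo" };
save(obj);  // obj._id is now populated
save(obj);  // saving again updates in place, no duplicate error
```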
[21:15:57] <cdehaan> Hello! I don't know a ton about MongoDB, but I'm looking at using it to store data from the Twitter firehose, as compared to using MySQL. What kind of performance tradeoff might I be looking at?
[21:16:14] <ron> 1:1
[21:17:00] <cdehaan> Cool. I'm thinking I might be better off with MongoDB since Twitter will be handing me JSON that I can just store as-is, instead of doing processing on it and then storing in MySQL
[21:17:19] <ron> umm, I was kidding. there's no real answer to that.
[21:18:21] <kali> cdehaan: mongodb does not store json, it stores bson
[21:18:51] <kali> cdehaan: you probably want to consider other criteria for choosing a DB
[21:21:53] <mspitz> Hello!
[21:22:06] <ron> goodbye!
[21:22:07] <mspitz> May I ask a quick question about WriteResult in a sharded environment? I'm a little confused as to the format.
[21:22:24] <ron> mspitz: as IRC goes, don't ask to ask. just ask.
[21:22:56] <mspitz> Sho' nuff.
[21:22:59] <mspitz> I like being polite. :)
[21:23:06] <mspitz> I've got a sharded setup.
[21:23:16] <mspitz> And I'm running an update query that uses $addToSet
[21:23:27] <ron> irc isn't polite. people hate greetings too.
[21:23:49] <mspitz> And in my write result, I'll occasionally get result["updatedExisting"] = false, but the item was added to the set.
[21:24:17] <mspitz> The other thing of note is that in those writeresults, I'll also get "writeback": ObjectId("...")
[21:24:20] <mspitz> What's the writeback?
[21:24:28] <mspitz> And what's the writebackGLE?
[21:27:59] <crudson> cdehaan: you can treat it as json-in and json-out to a degree if you like. It's not 100% accurate, but that difference shouldn't be used as a dismissing factor for mongo if you value it in your case. Different languages will make it easier/harder, more/less transparent.
[21:32:41] <mspitz> If it helps, I'm running mongod/mongos 2.2.1
[21:42:47] <cdehaan> crudson: That makes sense, I guess the only reason I know anything about it is due to this Python library I'm using that treats it like json-in json-out.
[22:20:37] <mspitz> Does anyone know what the 'writeback' field (an ObjectId) means in a WriteResult in a sharded environment?
[22:24:44] <fatninja_> I want to aggregate two collections
[22:24:56] <fatninja_> and I've seen that can be done via reduce() function
[22:25:02] <fatninja_> but before I dig deep
[22:25:28] <eka> fatninja_: AFAIK no... but...
[22:25:32] <fatninja_> I want to ask: if I choose to add more values after the map reduce, will they still be available in the new meta-collection ?
[22:26:29] <fatninja_> eka: no to aggregating two collections ?
[22:26:45] <eka> is there any way to automate log rotation?
[22:27:15] <eka> fatninja_: not with aggregation framework AFAIK
[22:27:56] <eka> fatninja_: http://stackoverflow.com/questions/5681851/mongodb-combine-data-from-multiple-collections-in-to-one-how
[22:28:33] <eka> fatninja_: but as said in a comment "I would probably never do that in production software, but it's still a wicked cool technique"
[22:28:49] <fatninja_> I understand
[22:28:59] <eka> fatninja_: http://tebros.com/2011/07/using-mongodb-mapreduce-to-join-2-collections/
[22:29:03] <fatninja_> so there isn't any nice way to join collections, is there?
[22:29:10] <eka> fatninja_: still MongoDB is no joins
[22:29:23] <fatninja_> I understand
[22:29:26] <eka> fatninja_: it's NoSQL...
[22:29:31] <eka> no relations
[22:30:49] <eka> I mean... there are relations via DBRefs... but no joins
[22:31:15] <fatninja_> the thing is that I can emulate this join via Model and Db Model, but my main problem is that using js server side, the module for mongo is async, which basically means when data is received a handler is called with the results. The only viable solution I see is to make this module sync
[22:31:51] <eka> fatninja_: you can always do it client side...
[22:33:04] <fatninja_> yes, that is indeed a great option, to show a loader and when data is received to display it, but that means more http requests to the server
[22:33:17] <eka> fatninja_: but mongo is fast
[22:34:30] <fatninja_> Wasn't concerned about MongoDB, I was referring to the http server, but now that I think about it, multiple messages can be sent/received through a single connection, but I don't know for sure
[22:34:58] <fatninja_> thanks for the tips !
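The client-side join eka suggests usually looks like this in application code: fetch both result sets, index one side by _id, then decorate the other. A plain-JS sketch with arrays standing in for the two collections (collection and field names are made up):

```javascript
// Two "collections" as plain arrays:
const orders = [
  { _id: 1, userId: "u1", total: 10 },
  { _id: 2, userId: "u2", total: 25 },
];
const users = [
  { _id: "u1", name: "Ann" },
  { _id: "u2", name: "Bob" },
];

// Index one side by _id, then merge: one pass each, done by the app
// because the server offers no join.
const byId = new Map(users.map(u => [u._id, u]));
const joined = orders.map(o => ({ ...o, user: byId.get(o.userId) }));
```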
[22:51:46] <Baribal> Hi guys. I just set up a mongod, played with it a bit via pymongo, and then I thought... "SECURITY!" So I added a user (use test; db.addUser('name', 'password')), set auth=true in mongo.conf, service restart... yet I can still access the db without authentication. What did I miss?
[22:52:34] <eka> Baribal: it should work, maybe there is a misstep
[22:52:56] <eka> Baribal: you set the user on which db? test? and then you try to connect to which db ? test?
[22:52:59] <unknet> H1
[22:53:04] <unknet> Hi
[22:53:12] <eka> hi there
[22:53:41] <Baribal> eka, yes.
[22:54:37] <eka> strange
[22:54:52] <eka> Baribal: sure you restarted mongodb?
[22:55:46] <Baribal> Twice now. AND the ipython.
[22:56:14] <Baribal> Okay, the connection will reset anyways, but I've learned to be paranoid.
[22:57:03] <eka> Baribal: db.system.users.find() what does that return?
[22:57:27] <Baribal> { "_id" : ObjectId("509c35e0c0753218efb2959e"), "user" : "project64198", "readOnly" : false, "pwd" : "4b07178bb3b0fe357275ba13969ceab9" }
[22:57:42] <eka> Baribal: and when the db is admin?
[22:57:46] <eka> use admin
[22:57:47] <eka> db.system.users.find()
[22:58:00] <Baribal> Nothing.
[22:58:07] <eka> Baribal: http://docs.mongodb.org/manual/tutorial/control-access-to-mongodb-with-authentication/
[22:58:20] <eka> Baribal: read that... it says that you need a user in the admin
[22:58:33] <Baribal> *facepalm* Thanks...
[22:58:39] <eka> Baribal: http://docs.mongodb.org/manual/tutorial/control-access-to-mongodb-with-authentication/#adding-users
[22:58:44] <eka> :)
[23:00:39] <Baribal> OperationFailure: db assertion failure, assertion: 'unauthorized db:test lock type:-1 client:127.0.0.1', assertionCode: 10057
[23:00:41] <Baribal> \o/
[23:06:14] <frsk> *sigh*, just noticed I have to rewrite my scripts for log rotation :( <https://jira.mongodb.org/browse/SERVER-4739>
[23:22:40] <PedjaM> Hey guys is someone available to help me a bit?
[23:23:41] <PedjaM> I have an issue with sharding. I had capacity issues with a single node and decided to go with sharding, but now the balancer is making trouble with chunk movement
[23:24:05] <PedjaM> It has moved only 88 chunks in 24h (out of ~8000)
[23:24:22] <PedjaM> in mongos logs i have all the time things like
[23:24:33] <PedjaM> Thu Nov 8 15:21:38 [cursorTimeout] killing old cursor 7588257232297401619 idle for: 600147ms
[23:24:38] <PedjaM> Thu Nov 8 15:21:38 [Balancer] shard0000 is unavailable
[23:24:43] <PedjaM> Thu Nov 8 15:21:38 [Balancer] distributed lock 'balancer/mongo3:28017:1352406473:1804289383' unlocked.
[23:25:31] <PedjaM> I assume that it just doesn't have enough time to gather all the data (?)
[23:26:08] <PedjaM> with this speed it will balance the chunks in a couple of months, which is unacceptable
[23:27:01] <PedjaM> when I stop the background jobs which work with mongo it is a lot faster (most of the chunks are moved over that time) but I just can't afford to stop the background jobs for long
[23:27:09] <PedjaM> can someone give me some advice on how to proceed?
[23:27:23] <PedjaM> should I maybe make the chunk size smaller so it can get data faster?
[23:28:14] <PedjaM> Anyway, if someone has time to help me a bit, it would be greatly appreciated ;)
[23:43:54] <Baribal> Okay, so I have to create an admin user, authenticate as him, then create a user on the actual db, and THEN authentication works, eka. :D
[23:44:09] <eka> Baribal: :)
[23:44:18] <Baribal> And inbetween, after creating the admin user, I think, I have to restart the mongod.
[23:44:34] <eka> no need AFAIK
[23:44:43] <Baribal> And now I'll annihilate my db files and try to replicate.
[23:44:59] <Baribal> eka, normally, but to switch on auth for the first time I do.
[23:45:18] <Baribal> Otherwise I shouldn't be able to create an admin user, I hope.
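For anyone following along, Baribal's working sequence looks roughly like this in the mongo shell. This is a command fragment, not runnable standalone: it needs a mongod restarted with auth=true in mongo.conf, and the user names and passwords are placeholders:

```javascript
// mongo shell, after restarting mongod with auth=true
db = db.getSiblingDB("admin");
db.addUser("admin", "adminPassword");   // 1. create the admin user first
db.auth("admin", "adminPassword");      // 2. authenticate as it
db = db.getSiblingDB("test");
db.addUser("name", "password");         // 3. only now can per-db users be added
db.auth("name", "password");            // 4. regular clients authenticate per db
```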
[23:46:08] <Baribal> And tomorrow I'll do all that stuff all over again for RabbitMQ... ^^
[23:46:11] <PedjaM> Hey Baribal, eka, can you guys give me a bit of advices with my issue ?
[23:47:13] <Baribal> PedjaM, with the balancing, and yes, it looks like your cursor died because it wasn't used in 10 minutes.
[23:47:27] <Baribal> PedjaM, I meant, no with the balancing.
[23:47:35] <Baribal> I'm a somewhat novice user.
[23:48:08] <eka> PedjaM: mmm sharding is hard... I'm dealing with it now... but it doesn't take 24hs and my collection has > 10M docs
[23:48:30] <eka> PedjaM: the balancing is degraded by DB activity yes
[23:48:47] <eka> PedjaM: I stop all the apps for first balancing...
[23:49:21] <PedjaM> thanks, but I just can't stop everything for a long time
[23:49:37] <Baribal> But maybe only 88 chunks have moved because after that the cluster was balanced enough? Was it really chunk movement that made the cursor be idle?
[23:49:56] <Baribal> Or was it maybe at app level?
[23:49:57] <eka> PedjaM: so... you are up to that... balancing takes resources and your app takes resources... and balancing locks also
[23:50:06] <PedjaM> well it is not balanced enough, I wouldn't say that 88 of 8000 is balanced enough ;)
[23:50:16] <eka> Baribal: check sh.status() or sh.stats()
[23:50:35] <Baribal> eka, sh?
[23:50:42] <eka> Baribal: shard
[23:50:49] <eka> PedjaM: connect to a mongos
[23:50:54] <eka> PedjaM: and use that
[23:51:03] <eka> PedjaM: maybe the shard key is bad?
[23:51:06] <PedjaM> yes but it doesn't tell me a lot
[23:51:16] <PedjaM> well that's another issue
[23:51:26] <PedjaM> I have only two fields, _id and pp:...
[23:51:32] <PedjaM> id is auto increment
[23:51:37] <PedjaM> and _id is used as shard key
[23:51:39] <eka> frsk: thats the problem :P
[23:51:43] <PedjaM> it is not ideal i know but...
[23:51:44] <eka> sorry
[23:51:47] <eka> PedjaM: ^^
[23:52:06] <PedjaM> well is there a way to make shard key something like _id%2 ?
[23:52:17] <PedjaM> that would be a lot better than just _id
[23:52:23] <eka> PedjaM: did you read the guide on shard keys?
[23:52:24] <PedjaM> or _id%number_of_shards
[23:52:28] <Baribal> eka, oh, you meant PedjaM, I guess. printShardingStatus: not a shard db!
[23:52:39] <Baribal> (Not... yet.)
[23:52:57] <eka> Baribal: printShardingStatus works at db level; sh.stats() is at sharding level
[23:52:57] <PedjaM> eka yes I did read...
[23:53:33] <PedjaM> hm sh.stats() doesn't work ;)
[23:53:40] <PedjaM> Thu Nov 8 15:50:35 TypeError: sh.stats is not a function (shell):1
[23:53:49] <eka> PedjaM: so there it talks about how to choose your shard key... you may want to add a random number to a new field... but you can't recreate the shard key
[23:54:00] <eka> PedjaM: sh.status() ?
[23:54:19] <PedjaM> that one works and is the same as printShardStatus
[23:54:25] <eka> PedjaM: you should be connected to your mongos
[23:54:41] <eka> ok
[23:55:13] <eka> that tells you how balanced your dbs are... but again... with an incremental key you are in a kind of bad position
[23:55:15] <PedjaM> so you think that if i change sharding key i can speed it up ?
[23:55:18] <eka> _id is incremental
[23:55:27] <PedjaM> yes
[23:55:41] <eka> PedjaM: you can't change shard key... you have to redo everything
[23:55:42] <PedjaM> well, i don't think it is bad position when db is full
[23:55:45] <eka> PedjaM: I did it
[23:55:53] <PedjaM> it is supposed to move starting ids
[23:56:12] <eka> PedjaM: the docs say that... don't start sharding when db is full... will make everything slower!!
[23:56:14] <PedjaM> I mean it is problem when you insert, all the inserts would go into one shard
[23:56:16] <PedjaM> for example
[23:56:29] <PedjaM> but balancing is not supposed to have a problem with a bad sharding key
[23:56:30] <eka> PedjaM: same with queries
[23:57:02] <eka> PedjaM: the problem is... your DB is slow because you are overburdened so... now you added sharding... that has to balance an overburdened system
[23:57:07] <eka> PedjaM: do you follow?
[23:57:16] <PedjaM> yes i do
[23:57:40] <eka> Warning
[23:57:40] <eka> It takes time and resources to deploy sharding, and if your system has already reached or exceeded its capacity, you will have a difficult time deploying sharding without impacting your application.
[23:57:40] <PedjaM> i know that i am overburdened, that's why i went to sharding ;)
[23:57:58] <eka> PedjaM: http://docs.mongodb.org/manual/core/sharding/#when-to-use-sharding
[23:58:04] <eka> PedjaM: there is the WARNING
[23:58:13] <eka> As a result, if you think you will need to partition your database in the future, do not wait until your system is overcapacity to enable sharding.
[23:58:19] <eka> quoted
[23:58:39] <eka> so that's your problem...
[23:58:55] <PedjaM> yes i was thinking that's the issue ;)
[23:59:08] <PedjaM> but anyway, do you have any advice how to proceed from here ?
[23:59:29] <PedjaM> is it only way to stop all the requests to mongo for a day or two
[23:59:32] <PedjaM> so it can balance
[23:59:42] <PedjaM> or there is somehting else that i can do
[23:59:51] <eka> PedjaM: you need to 1. stop the app so the DB can speed up 2. since it's stopped, redo your shard key: add a random value in a new field
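eka's last suggestion (a random value in a new field as a more spreadable shard key, since an auto-increment _id sends all inserts to one shard) can be sketched like this. The field name `bucket` and the bucket count are assumptions for illustration:

```javascript
// Add a random bucket field at insert time and shard on it: an ad-hoc
// stand-in for hashed sharding, which MongoDB of this era lacks.
const BUCKETS = 8;
const withBucket = doc => ({ ...doc, bucket: Math.floor(Math.random() * BUCKETS) });

// Simulate tagging 1000 documents before insert:
const docs = Array.from({ length: 1000 }, (_, i) => withBucket({ _id: i }));

// Every bucket value falls in [0, BUCKETS), so writes spread across chunks
const allInRange = docs.every(d => d.bucket >= 0 && d.bucket < BUCKETS);
```

Note that, as eka says, the shard key cannot be changed after the fact: the collection has to be re-created and re-loaded with the new field already present.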