[00:00:13] <rossdm> is it useful to be able to grab a specific URL? then maybe key-value pair where URL is value. If not, perhaps use an array?
[00:01:40] <spartango> it's generally not all that useful to grab a specific url, although it'd be useful to have information about the ordering
[00:02:43] <spartango> note that the appends have no guarantee of being done in order (concurrent) so i'd be doing some kind of "put at position" with an array
[00:05:24] <spartango> i'll go look more carefully at the docs for array. thanks though
[01:50:37] <cjhanks> ckd: First one I tried was Fedora. There was a ticket marked resolved about the issue with other people still claiming issue persisted -- I did not want to add noise. Debian installer I didn't bother since I assumed it must be endemic.
[01:51:10] <ckd> cjhanks: Interesting, thanks for the info… I'm so used to living in Ubuntu-land I completely forgot that there's a world outside :)
[02:26:53] <thomas___> Is there any way for me to return multiple counts in one request?
[02:29:57] <thomas___> rather than running users.find({gender:"male"}).count(); and getting 231, then running users.find({gender:"female"}).count(); and getting 416, is there something that could return {male:"231",female:"416"}? I know map reduce can do this, is that my best bet?
[02:45:42] <IAD> thomas___: look at http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-{{count%28%29}} : a = users.find({gender:"male"}) && all = a.count(true) && m = a.count() && f = all - m
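IAD's count arithmetic (total minus one gender) can be sketched in plain Python with an in-memory stand-in for the collection; the `users` list and its contents here are made up for illustration:

```python
# In-memory stand-in for the users collection: count both genders in one pass.
# On a real server you'd use the count() arithmetic shown above (or map-reduce).
from collections import Counter

users = [
    {"name": "a", "gender": "male"},
    {"name": "b", "gender": "female"},
    {"name": "c", "gender": "male"},
]

counts = Counter(doc["gender"] for doc in users)
result = {"male": counts["male"], "female": counts["female"]}
print(result)  # {'male': 2, 'female': 1}
```

The same single-pass grouping is what a server-side map-reduce (or, in later versions, the aggregation framework) does for you.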
[03:10:06] <IAD> I think there are many exceptions, such as zombies, orcs =)
[03:32:35] <thomas___> I just updated my database via homebrew, when I got back into the console all my dbs were gone. Are they still on my computer somewhere? Or are they wiped?
[08:58:49] <codemagician> I'm using the PHP $obj = new \Mongo(); in my web app. I stop the mongod and I start my webapp again, without restarting the mongod but my app still gets a connection back. Is this possible?
[08:59:25] <codemagician> Is there some kind of persistence happening at the PHP level, that doesn't check to see if the mongod server has gone away?
[09:10:53] <NodeX> perhaps try the selectDb() call instead
[09:11:29] <NodeX> it's not something I have ever needed to do tbh
[09:11:30] <codemagician> NodeX: $this->connection = new \Mongo(); this line still gets a connection back regardless of selecting db
[09:11:53] <NodeX> great, but selecting a DB will throw an error if there is no database (no connection to the db)
[09:11:59] <codemagician> so will $this->connection = new \Mongo();
[09:12:13] <codemagician> as it does on the command line
[09:12:50] <NodeX> wrap your selectDB() in a try/catch
[09:12:51] <codemagician> when running PHP to test it on the command line, as soon as the mongo server is gone, an exception is thrown indicating that the connection failed
[09:32:07] <ron> codemagician: I've been developing for 10+ years, long enough to know the problem is that you use PHP instead of a real programming language. that doesn't help solve your problem though.
[09:33:11] <NodeX> I did and have the chan logs to prove it ;)
[09:33:16] <ron> codemagician: who cares about the drivers?
[09:33:32] <ron> remonvv: wanted to tell you about my new job!
[09:35:26] <codemagician> ron: listen, driver, PHP or C. The original question was about connection pooling, since I want my development server to recognise when the mongod instance exists. The PHP caches a pool of connections and this is the issue, not a workaround that attempts to select DBs. The select works too, because PHP caches that also
[09:36:31] <codemagician> ron: PHP will not raise an exception for either attempting to create a new connection, nor select a DB because it caches both in a pool.
[09:47:44] <codemagician> remonvv: when I refresh my browser window again and again, the web app gets a connection every N cycles, which shows there is likely a pool of connections held inside PHP
[09:47:52] <NodeX> LOL ... I ignored this kid and he's still on about the same thing
[09:48:17] <codemagician> running it from the command line means that there is no connection pooling so the exception is thrown instantly
[09:48:36] <codemagician> the documentation says when the PHP process exits the connection is closed, but this doesn't happen
[09:49:15] <codemagician> so the local state doesn't match the underlying state
[09:51:52] <codemagician> NodeX: you don't even understand the problem. just be quiet I was programming whilst you were in nappies
[09:53:37] <stevie-bash> I once used a mongo shell command with the database name specified in it. Now I can't recall this. How is the syntax to use the db name in a db.'dbname'.collection.command()?
[09:58:45] <remonvv> If the connection has issues for whatever reason it will throw exceptions only if you actually do something that requires the driver to send operations to the server. Also note that most connection pool implementations will quietly attempt to create new connections if one timed out or had IO issues (rather than MongoDB oriented errors)(
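remonvv's point, that a pool hands back a cached connection without touching the network and only notices (and quietly replaces) a dead one when an operation is attempted, can be sketched in plain Python. All class and method names here are invented for illustration; this is not the PHP driver's actual code:

```python
# Toy model of a connection pool that recovers silently from dead connections.

class FakeConnection:
    """Stand-in for a socket to mongod."""
    def __init__(self):
        self.alive = True

    def send(self, op):
        if not self.alive:
            raise IOError("connection reset")
        return "ok"

class Pool:
    def __init__(self):
        self._conn = FakeConnection()

    def get(self):
        # Handing out a connection does NOT touch the network,
        # so a dead server is not detected here.
        return self._conn

    def send(self, op):
        try:
            return self.get().send(op)
        except IOError:
            # Quietly replace the broken connection and retry once.
            self._conn = FakeConnection()
            return self._conn.send(op)

pool = Pool()
pool.get().alive = False   # simulate mongod going away
print(pool.send("ping"))   # still "ok": the pool recovered without surfacing an error
```

This is why `new Mongo()` can appear to succeed even after the server died: construction only checks out a pooled handle, and the failure surfaces (or is papered over) later.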
[10:00:13] <codemagician> remonvv: but in the case of the $obj = new Mongo(); for PHP it returns an object with a connection inside, even after the original PHP process has died, and the mongo server too.
[10:00:21] <stevie-bash> I also see a db called "E00"
[10:00:29] <stevie-bash> I can't switch to this either
[10:00:50] <remonvv> codemagician, I'm not familiar with the PHP implementation but the Java driver closes the connection wrapper (and obviously in that case the actual connection isn't there anymore)
[10:01:09] <codemagician> remonvv: The PHP implementation is broken
[10:01:22] <NodeX> stevie-bash : try to copy the db name to a new name
[10:01:25] <Gargoyle_> stevie-bash: You are not trying hard enough!
[10:01:35] <stevie-bash> I found in the doc something like this db['name'] , does this refer to a collection?
[10:01:46] <remonvv> codemagician, highly unlikely given the usage. Again I'm really unfamiliar with PHP but in most drivers the Mongo object is a singleton and should not be created more than once.
[10:01:53] <remonvv> Throughout the lifespan of the application, that is.
[10:02:02] <Gargoyle_> stevie-bash: There's no reason for the mongo shell not to allow a DB called *
[10:07:20] <codemagician> Derick: can you help explain something just to clear up some mystifying behaviour
[10:08:00] <IAD1> codemagician: use it before at destructing : http://www.php.net/manual/en/mongo.close.php
[10:08:09] <codemagician> Derick: when I stop the mongodb server and hit refresh on the browser, the page gives an exception, an exception, .. and then returns an apparently valid connection instance as if it's cycling through the connection pool
[10:08:55] <Derick> codemagician: possible - the pooling code is confusing. (Hence we got rid of it in 1.3)
[10:09:06] <codemagician> Derick: So before you mention a distinction between PHP request and PHP process. I assume the PHP process continues running, even when a new HTTP request occurs?
[10:10:42] <IAD1> Derick: yep, but if he want it...
[10:11:42] <codemagician> Derick: Am I correct in assuming that the PHP request isn't informing the PHP process to update its connection state when the PHP program terminates?
[10:12:33] <codemagician> Derick: that is, currently in 1.2 the PHP script ends, but should inform its container PHP process and in turn update the connection state for that pool instance internally ?
[10:23:25] <codemagician> Derick: when will I be able to test rc1
[10:23:47] <Derick> codemagician: IMO, that stuff should go in a proxy... it's very hard to do that from an apache/php application with so many distinct workers
[10:28:13] <Derick> and on that bombshell, I need to go back fixing that for RC2
[10:28:35] <Gargoyle> NodeX: I only seem to get the problems when connecting to the RS. When running on dev to just a local copy of the db, it's fine.
[10:28:57] <codemagician> Derick: one last question. as a stop-gap, is there a way to force $obj = new MongoDb(); to throw an exception when the server has died
[10:29:14] <Gargoyle> ^^ Derick, bjori ^^ (Think I have already mentioned that, haven't I?)
[10:29:25] <Derick> codemagician: no, you can't even close all connections in 1.2
[10:34:37] <codemagician> Derick: I'm looking forward to the connection pool going
[10:35:00] <durre> I have a noob question: I have a document like this: http://pastebin.ca/index.php … how do I query all the documents that has "SE" in the markets array?
[13:37:02] <NodeX> HelenHunt was always a good one in #security on linknet
[13:38:29] <NodeX> anyways, time for a haircut then Halo 4 me thinks
[13:41:39] <remonvv> Yeah. You're really busy with work.
[13:47:03] <durre> with this document: "filters" : { "markets" : [ "SE", "DK", "GB" ] } … how do I find all the documents that exist in the market SE or GB?
[15:46:01] <rshade98> the second part of the config doc looks right. I have to have it quoted or ruby gripes. should make it into a hash and then parse it to json
[15:48:49] <ckd> Derick: sounds like you had a long night then!
[16:02:04] <Derick> ckd: I'm in London, it's night all year round :-)
[16:24:46] <jtomasrl> need some help with this plz http://stackoverflow.com/questions/13292393/find-object-inside-json-using-nested-id/13292960#13292960
[16:27:38] <ppetermann> jtomasrl: that document looks weird
[16:28:26] <jtomasrl> ppetermann: maybe, I'm new to this mongodb thing and working with JSON
[16:29:38] <remonvv> jtomasrl, pastie the document itself please.
[16:29:58] <Gargoyle> jtomasrl: You've not really explained what you are trying to do very well. So it's hard to follow!
[16:30:45] <nopz> does copyDatabase lock the whole mongo server?
[16:42:14] <ckd> and if your reasons are legit, play to those strengths
[16:43:02] <ckd> but just sorta shoehorning table rows into a document is never a good idea
[16:44:25] <jdevelop> Hi all! given a list of ObjectId, I need to return only documents with those IDs and containing specific properties. How do I do that in mongo?
[16:44:50] <jdevelop> it's like "where id in (1,2,4,5) and property=value" in terms of SQL
[16:45:22] <Gargoyle> jdevelop: using $in - check querying in the docs.
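jdevelop's SQL `where id in (1,2,4,5) and property = value` translates to a single query document combining `$in` with an equality condition, e.g. `{"_id": {"$in": ids}, "property": value}`. A plain-Python sketch of the matching logic (documents and the `property` field are made up for illustration):

```python
# Pure-Python evaluation of {"_id": {"$in": ids}, "property": "x"}.

docs = [
    {"_id": 1, "property": "x"},
    {"_id": 2, "property": "y"},
    {"_id": 4, "property": "x"},
    {"_id": 7, "property": "x"},
]

ids = [1, 2, 4, 5]

# Conditions in a query document are ANDed together.
matches = [d for d in docs if d["_id"] in ids and d["property"] == "x"]
print([d["_id"] for d in matches])  # [1, 4]
```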
[17:22:44] <eka> kali: cause I loaded the exported dump on the shard0 then added the shards
[17:23:16] <kali> dump are smaller than active db, they contain no padding, and no indexes
[17:23:22] <eka> kali: but why is using only 3g from 8g available?
[17:24:52] <kali> eka: it's mapping 45g. the 3g are the process space, they don't contain the data, but the "running" stuff, threads stacks, connection, temporary stuff, that kind of thing
[17:25:38] <eka> kali: AFAIK mapping is the file size... so that doesn't map to the whole DB size... or I'm wrong?
[18:36:07] <jonno11> Hi - I have a node app accessing my MongoDB instance on my local machine. For some reason, when I log into mongod (shell) to the same database, the data doesn't seem to be the same as what the node instance is accessing. Any ideas as to why this is?
[18:56:31] <jonno11> Hi - I have a node app accessing my MongoDB instance on my local machine. When I log into mongod (shell) to the same database, the data doesn't seem to be the same as what the node instance is accessing. (ie. a find({}) won't return anything!) Any ideas as to why this is?
[19:03:18] <_m> jonno11: The commands "mongo" and "monogd" are quite different. You should be using the former to connect locally.
[19:04:12] <jonno11> _m: How so? Is mongod not a cli client?
[19:04:25] <ckd> jonno11: mongod is the server, mongo is the client
[19:04:52] <jonno11> _m: got it wrong in the question
[19:05:10] <jonno11> _m: but the issue still is happening
[19:05:25] <ckd> nicobn_: sorry, wasn't ignoring, you just happened to hit the rather shallow ceiling of my knowledge on sharding… your command LOOKS correct to me…. what's that last index though?
[19:07:47] <ckd> nicobn_: test it out on a duplicate, but see if dropping that last index helps
[19:08:24] <ckd> jonno11: thanks! in the shell, looks like, unless you're actually using the db "test" in node as well, you aren't switching to the appropriate DB
[19:08:45] <jonno11> ckd: Yep, I am using 'test' in node too
[21:15:57] <cdehaan> Hello! I don't know a ton about MongoDB, but I'm looking at using it to store data from Twitter firehose, as compared to using MySQL. What kind of performance tradeoff might I be looking at?
[21:17:00] <cdehaan> Cool. I'm thinking I might be better off with MongoDB since Twitter will be handing me JSON that I can just store as-is, instead of doing processing on it and then storing in MySQL
[21:17:19] <ron> umm, I was kidding. there's no real answer to that.
[21:18:21] <kali> cdehaan: mongodb does not store json, it stores bson
[21:18:51] <kali> cdehaan: you probably want to consider other criteria for choosing a DB
[21:27:59] <crudson> cdehaan: you can treat it as json-in and json-out to a degree if you like. It's not 100% accurate, but that difference shouldn't be used as a dismissing factor for mongo if you value it in your case. Different languages will make it easier/harder, more/less transparent.
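crudson's "json-in, json-out to a degree" can be illustrated with just the stdlib: incoming JSON text is parsed into native structures (which a driver then stores as BSON), and plain string/number/null fields round-trip cleanly, while types like dates and binary are where JSON and BSON diverge. The sample tweet fields below are invented:

```python
# JSON text -> native dict (what a driver would then encode as BSON) -> JSON text.
import json

incoming = '{"id": 9007199254740993, "text": "hi", "coords": null}'
doc = json.loads(incoming)               # parse the "firehose" payload
out = json.dumps(doc, sort_keys=True)    # serialize it back

# Plain scalars survive the round trip unchanged; BSON-specific types
# (dates, ObjectId, binary) would need extra handling on the way out.
print(out)
```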
[21:32:41] <mspitz> If it helps, I'm running mongod/mongos 2.2.1
[21:42:47] <cdehaan> crudson: That makes sense, I guess the only reason I know anything about it is due to this Python I'm using that treats it like json-in json-out.
[22:20:37] <mspitz> Does anyone know what the 'writeback' field (an ObjectId) means in a WriteResult in a sharded environment?
[22:24:44] <fatninja_> I want to aggregate two collections
[22:24:56] <fatninja_> and I've seen that can be done via reduce() function
[22:30:49] <eka> I mean... there is relations via DBREfs... but no joins
[22:31:15] <fatninja_> the thing is that I can emulate this join via Model and Db Model, but my main problem is that, using js server side, the module for mongo is async, which basically means that when data is received a handler is called with the results. The only viable solution I see is to make this mod sync
[22:31:51] <eka> fatninja_: you can always do it client side...
[22:33:04] <fatninja_> yes, that is indeed a great option, to show a loader and when data is received to display it, but that means more http requests to the server
[22:34:30] <fatninja_> Wasn't concerned about MongoDB, I was referring to the http server, but now that I think about it, multiple messages can be send/received through a single connection, but don't know for sure
[22:51:46] <Baribal> Hi guys. I just set up a mongod, played with it a bit via pymongo, and then I thought... "SECURITY!" So I added a user (use test; db.addUser('name', 'password')), set auth=true in mongo.conf, service restart... yet still I can access the db without authentication. What did I miss?
[22:52:34] <eka> Baribal: should work, maybe there is a misstep
[22:52:56] <eka> Baribal: you set the user on which db? test? and then you try to connect to which db ? test?
[23:06:14] <frsk> *sigh*, just noticed I have to rewrite my scripts for log rotation :( <https://jira.mongodb.org/browse/SERVER-4739>
[23:22:40] <PedjaM> Hey guys is someone available to help me a bit?
[23:23:41] <PedjaM> I have issue with sharding, I had capacity issues with single node and decided to go with sharding, but now balancer is making troubles with chunk movement
[23:24:05] <PedjaM> It has moved only 88 chunks in 24h (out of ~8000)
[23:24:22] <PedjaM> in mongos logs i have all the time things like
[23:24:33] <PedjaM> Thu Nov 8 15:21:38 [cursorTimeout] killing old cursor 7588257232297401619 idle for: 600147ms
[23:24:38] <PedjaM> Thu Nov 8 15:21:38 [Balancer] shard0000 is unavailable
[23:25:31] <PedjaM> I assume that it just doesn't have enough time to gather all the data (?)
[23:26:08] <PedjaM> with this speed it will balance chunks in couple of months, which is unacceptable
[23:27:01] <PedjaM> when I stop the background jobs which work with mongo it is a lot faster (most of the chunks are moved over that time), but I just can't afford to stop background jobs for a long time
[23:27:09] <PedjaM> can someone give me some advice how to proceed ?
[23:27:23] <PedjaM> should I maybe make the chunk size smaller so it can get data faster?
[23:28:14] <PedjaM> Anyway, if someone has time to help me a bit, it would be greatly appreciated ;)
[23:43:54] <Baribal> Okay, so I have to create an admin user, authenticate as him, then create a user on the actual db, and THEN authentication works, eka. :D
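What Baribal likely hit is the "localhost exception": mongod versions of this era with `auth=true` but no admin user yet still grant full access to localhost connections, so auth only starts being enforced once the first admin user exists. A hedged toy model in plain Python (the function and its arguments are invented for illustration, not the real server logic):

```python
# Toy model of why auth "didn't work" until an admin user was created.

def access_allowed(admin_users, authenticated, from_localhost):
    """Illustrative sketch of the localhost-exception auth check."""
    if not admin_users:            # no admin user defined yet:
        return from_localhost      # localhost gets a free pass
    return authenticated           # otherwise valid credentials are required

# Before any admin user exists, an unauthenticated localhost client gets in:
print(access_allowed([], authenticated=False, from_localhost=True))          # True
# Once an admin user exists, the same unauthenticated client is rejected:
print(access_allowed(["admin"], authenticated=False, from_localhost=True))   # False
```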
[23:48:08] <eka> PedjaM: mmm sharding is hard... I'm dealing with it now... but it doesn't take 24h and my collection has > 10M docs
[23:48:30] <eka> PedjaM: the balancing is degraded by DB activity yes
[23:48:47] <eka> PedjaM: I stop all the apps for first balancing...
[23:49:21] <PedjaM> thanks, but I just can't stop everything for a long time
[23:49:37] <Baribal> But maybe only 88 chunks have moved because after that the cluster was balanced enough? Was it really chunk movement that made the cursor be idle?
[23:49:56] <Baribal> Or was it maybe at app level?
[23:49:57] <eka> PedjaM: so... you are up to that... balancing takes resources and your app takes resources... and balancing locks also
[23:50:06] <PedjaM> well it is not balanced enough, I wouldn't say that 88 of 8000 is balanced enough ;)
[23:50:16] <eka> Baribal: check sh.status() or sh.stats()
[23:53:40] <PedjaM> Thu Nov 8 15:50:35 TypeError: sh.stats is not a function (shell):1
[23:53:49] <eka> PedjaM: so there it talks about how to choose your shard key... you may want to add a random number to a new field... but you can't recreate the shard key
[23:57:02] <eka> PedjaM: the problem is... your DB is slow cause you are overburdened so... now you added sharding... that has to balance an overburdened system
[23:57:40] <eka> It takes time and resources to deploy sharding, and if your system has already reached or exceeded its capacity, you will have a difficult time deploying sharding without impacting your application.
[23:57:40] <PedjaM> i know that i am overburdened, that's why i went to sharding ;)
[23:58:13] <eka> As a result, if you think you will need to partition your database in the future, do not wait until your system is overcapacity to enable sharding.