[07:22:30] <gigo1980> no, i don't know what the pooling does?
[07:23:00] <gigo1980> so if i read correctly the "persistent" connections are no longer available
[07:23:35] <gigo1980> on my application stack i have one mongos node on each application server
[07:24:00] <Gargoyle> I'm pretty sure they are still available. They are just automatically handled for you.
[07:24:42] <gigo1980> and at the moment each call to the webapp creates a connection to the mongos router ...
[07:25:20] <gigo1980> so is there a way to make a pool so that the webapp handles about (40) persistent connections
[07:25:30] <gigo1980> is that what the mongoPool does ?
[07:27:38] <Gargoyle> I guess that depends on the setup you have. I'm sure with apache + php the persistent connections and pooling are all handled automatically
[07:28:19] <LambdaDusk> I have some trouble with proper database design for mongo... if I have several users, and each of them can have several blogs... should the blogs and the posts always be their own collections? I have heard mongo is about redundancy everywhere
[07:28:36] <gigo1980> yesterday i had a problem with the ulimit of my operating system …
[07:29:03] <gigo1980> so there were too many file locks, and this depends on the connections that the application makes to the mongos
[07:29:34] <Gargoyle> gigo1980: I don't think that is *just* about connections.
[07:29:53] <Gargoyle> gigo1980: What version are you running?
[09:58:50] <gigo1980> hi, i want to remove a shard, but the state is "draining ongoing" and no chunks are being moved
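A minimal mongo shell sketch of starting and monitoring a shard removal (the shard and database names here are hypothetical); removeShard has to be re-run to see progress, and draining cannot finish while the shard is still the primary shard of some database:

    use admin
    db.runCommand({ removeShard: "shard0003" })   // starts draining
    db.runCommand({ removeShard: "shard0003" })   // run again to see remaining chunks/dbs
    // if the draining shard is still the primary shard for a database,
    // its unsharded data has to be moved off before draining can complete:
    db.runCommand({ movePrimary: "mydb", to: "shard0000" })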
[10:04:00] <Gargoyle> Derick: Driver is holding strong. pretty consistent 200ish connections to each of the 3 nodes in the RS. No segfaults spotted in apache, and delivered 25,000+ pageviews already this morning. :)
[10:19:27] <NodeX> if you like I can give you my database to help you out, it's made up of 2million or so places in the UK including every postcode and placename (towns/villages)
[10:19:46] <NodeX> I spent 2 years creating it and fine tuning it
[10:23:02] <Mmike> Hi. How do I run the --repair option? I had a server crash, and now when I start the server I get 'mongo.lock file exists...'. It is stated that I should run mongod with the --repair option, but it just spits the same error into the log file
[10:23:21] <lupisak> NodeX: ok, none from PAF - good to hear
[10:23:43] <Zelest> Mmike, check the log.. if the repair is fine and all, remove the lockfile.
[10:23:50] <NodeX> Derick : I am willing to open source my data, but in general UK postcode databases are made from the PAF which carries a licence charge from the Royal Mail (UK Post people)
[10:23:51] <Zelest> (someone tell me if I'm wrong here though)
[10:24:20] <solars> hey, is there a java object mapper much like mongoid for ruby?
[10:24:20] <BurtyB> think I had to remove the lock file manually to do the repair
[10:24:35] <NodeX> I geocoded (added a lat/long pair) to every single postcode, place, placename and geohashed it all too
[10:24:38] <Gargoyle> NodeX: Derick: Isn't some of that incorporated into the OS open data?
[10:24:45] <lupisak> NodeX: I can probably tell you if that's possible if you tell us where the data is from - which various sources?
[10:24:57] <Derick> Gargoyle: no, PAF is special :-(
[10:26:38] <NodeX> Gargoyle : did you get it from PAF ? because I released one into the wild a few years back
[10:26:57] <Zelest> Mmike, Perhaps the repair should remove the lock-file once it's done.. but I think I had a similar issue when I experienced the same thing.
[10:27:01] <Gargoyle> Derick: NodeX: Yeah, and I think geocoded by a third party
[10:27:05] <NodeX> At the time I geocoded the 28million PAF database for a company
[10:27:16] <NodeX> company didn't pay me, data went out to the world ;)
[10:27:25] <NodeX> it was probably mine that you had!!
[10:27:28] <Derick> not sure how shady that is for OSM really
[10:27:33] <lupisak> NodeX: sorry to bug you on this, but could you explain exactly how you derived the data? If done the right way, this data could be interesting for the projects mentioned.
[11:20:55] <PDani> I have a replicated environment with two members and an arbiter. I'm experiencing periodical slowdowns when running relatively intensive insert queries. I'm using 2.2.0. I investigated this issue a lot, I tried to turn off journaling, and I set syncdelay to 0 (for testing purposes), but it didn't help. I'm trying to find out what could cause this issue.
[11:22:15] <kali> PDani: have you checked for preallocation messages in the log ?
[11:22:34] <kali> PDani: what file system is your data on ?
[11:24:06] <PDani> At every slowdown, I see this in the log: http://goo.gl/RnDzr
[12:32:16] <NodeX> one of my clients is a massive domaineer
[12:32:28] <NodeX> he has me doing all manner of crap with domains
[13:27:42] <yhpark> Use a DBRef when you need to embed documents from multiple collections in documents from one collection. DBRefs also provide a common format and type to represent these relationships among documents.
[13:27:51] <yhpark> from http://docs.mongodb.org/manual/applications/database-references/
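A small mongo shell sketch of what the quoted page describes (the collection names are hypothetical); the shell's DBRef helper stores the target collection and _id together in the referencing document:

    var authorId = ObjectId()
    db.users.insert({ _id: authorId, name: "alice" })
    db.posts.insert({ title: "hello", author: new DBRef("users", authorId) })
    db.posts.findOne()   // -> { ..., author: DBRef("users", ObjectId("...")) }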
[13:40:25] <gigo1980> "errmsg" : "MR post processing failed: { errmsg: \"exception: could not initialize cursor across all shards because : socket exception [CONNECT_ERROR] for set12/mongo07.luan.local:10022,mongo08.luan....\", code: 14827, ok: 0.0 }"
[13:40:35] <gigo1980> but this shard no longer exists
[13:40:44] <gigo1980> i also restarted the config servers ...
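One hedged guess for this situation: if the shard really has been removed from the cluster, each mongos may still hold stale cached metadata, and asking it to reload its cluster configuration sometimes clears errors like the one quoted above (run this against every mongos):

    use admin
    db.runCommand({ flushRouterConfig: 1 })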
[14:46:54] <W0rmDrink> Hi, I have a system consisting of n computers - a request for each user can reach each system - but only one request per user may be processed at any one time in the system overall - is there a name for the mechanisms used to ensure this - I normally just refer to it as synchronization but that's slightly vague
[14:49:51] <sysdef> is there a way to get mongodb support? a mailing list or something like that? or an official support channel?
[14:50:53] <meghan> https://groups.google.com/group/mongodb-user is the best place
[14:51:55] <sysdef> looks like i don't have access to that page
[14:51:56] <W0rmDrink> further - to achieve this we typically do something like: try_lock: db.collection.update( { userId : ..., ownerId : NONE }, { $set : { ownerId : ?self? } }, true, false );
[14:51:56] <W0rmDrink> - and I would like to know if anybody has better ideas
[14:52:58] <W0rmDrink> and if this is not best place to ask - where should I ask
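A sketch of the same try_lock idea using findAndModify instead of update, so the caller learns in one atomic round trip whether it actually acquired the per-user lock (the collection name, userId value, and "worker1" owner below are hypothetical):

    var doc = db.locks.findAndModify({
        query:  { userId: 42, ownerId: null },
        update: { $set: { ownerId: "worker1", lockedAt: new Date() } },
        new:    true
    })
    if (doc === null) {
        // another node already owns this user's lock; back off and retry later
    }

Releasing the lock is then a matching update that sets ownerId back to null.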
[14:55:53] <landstalker> sysdef: 10gen provide paid support plans
[14:56:56] <Derick> sysdef: you need to join the google groups before you can access it - but we (10gen) also provide paid support plans
[14:57:16] <Derick> sysdef: or you can just ask here
[14:58:42] <abourget> what's the primary shard used by mongos if sharding is not enabled ? is it the first server listed as 'configdb' in mongos.conf ?
[14:59:27] <sysdef> i asked the question two times and didn't get a response yet. that's why i asked for another way
[14:59:31] <sysdef> i read there is no version for android. is there (still) no version?
[15:06:07] <Derick> abourget: it's the server where you ran "enable sharding"
[15:06:44] <Derick> sysdef: MongoDB isn't made to run on an embedded device really.
[15:06:56] <abourget> Derick, but where will mongos store the data, if it's configured to deal with 3 shards, but none of them have been enabled with "enable sharding" ?
[15:07:35] <Derick> Will mongos even be able to talk to any shard in that case? I don't think it would.
[15:07:45] <Derick> I don't have a definitive answer on that one though
[15:07:48] <sysdef> Derick: ok, thank you. i'll stay with BerkeleyDB
[15:11:35] <abourget> Derick, ok.. I just saw, the shards do have 'shardsvr' == true
[15:11:55] <abourget> Derick, can we connect directly to a server that has `shardsvr` == true ? and store data there ? or *must* it be accessed through mongos ?
[15:12:12] <Derick> you don't have to, but you really should
[15:13:00] <Derick> anyway, you can find out what your primary shard is too
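A quick mongo shell sketch of that, assuming a connection to mongos; the example output line is illustrative only:

    // each database's primary shard is recorded in the config database
    db.getSiblingDB("config").databases.find()
    // e.g. { "_id" : "mydb", "partitioned" : false, "primary" : "shard0000" }
    sh.status()   // prints the same information in a readable summary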
[15:49:57] <NodeX> anyone know if i can do an $inc on a sub-document that doesn't exist yet?
[15:51:14] <NodeX> I want to do something like db.foo.update({ymd:20120911,type:'geohash'},{$inc:{"foo.myhash":1}});
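For what it's worth, $inc creates the field (and the enclosing embedded document) when it does not exist yet, so the update above should work even on a fresh document; a quick shell check reusing NodeX's field names:

    db.foo.update({ ymd: 20120911, type: 'geohash' }, { $inc: { "foo.myhash": 1 } }, true)   // upsert
    db.foo.findOne({ ymd: 20120911, type: 'geohash' })
    // -> { ..., ymd: 20120911, type: "geohash", foo: { myhash: 1 } }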
[15:52:03] <jsaacmk> http://pastebin.com/5JJUA3Uf is there a more concise way for me to do the query at the bottom of the post (ideally with mongodb directives). Also, why does the result print twice when I run it in the mongo CLI?
[15:59:12] <nostalgeek> Hi all, I am fairly new to Mongo. I think I've got replication between 2 mongod instances working (ports 27018). I also have 2 mongos routers (ports 27017). Now, if I understand right, when I connect to the mongos (27017) and issue slaveOk(), this allows routing of find() queries to my secondary. How can I tell if the query was served by the secondary or by the master?
[16:27:39] <astropirate> I am working on a CMS. Documents can have child documents. There is no limit to how deep the tree structure can be. Should I reference the child documents or embed them? Also, what would be the way to get the child documents indexed too, based on their "slug" property?
[16:29:50] <astropirate> Any advice would be very much appreciated
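Since there is no depth limit, embedding the whole tree risks hitting the document size limit; one common alternative is parent references plus a materialized path, with an index on the slug. A rough sketch with hypothetical collection and field names:

    db.docs.insert({ slug: "about", parent: "home",  path: ",home,",       title: "About" })
    db.docs.insert({ slug: "team",  parent: "about", path: ",home,about,", title: "Team" })
    db.docs.ensureIndex({ slug: 1 })
    db.docs.ensureIndex({ path: 1 })
    db.docs.findOne({ slug: "team" })            // look up any document by its slug
    db.docs.find({ path: /^,home,about,/ })      // fetch the whole subtree under about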
[16:53:00] <nostalgeek> answer to my previous question: I enabled profiling on the DB, and confirmed my read requests are logged on my secondary, and any inserts are logged on the primary!
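A sketch of that check, connecting the shell directly to the secondary mongod (database and collection names are hypothetical); system.profile is local to each mongod and is not replicated, so reads that show up there really did run on that member:

    rs.slaveOk()                    // allow reads on this secondary connection
    db.setProfilingLevel(2)         // profile every operation
    db.things.find().itcount()
    db.system.profile.find().sort({ ts: -1 }).limit(5)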
[17:58:20] <camonz> is there a way to coerce all figures returned in a map/reduce to integers?
[17:59:12] <camonz> I'm already wrapping every figure in each operation with a parseInt(...)
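Since the shell's JavaScript numbers are doubles, map/reduce results normally come back as floating-point; a hedged sketch of forcing them to integers in a finalize function with NumberInt (collection and field names are hypothetical):

    db.events.mapReduce(
        function () { emit(this.type, 1); },
        function (key, values) { return Array.sum(values); },
        {
            out: { inline: 1 },
            finalize: function (key, reduced) { return NumberInt(reduced); }
        }
    )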
[19:01:40] <iyp> Anyone have advice on dynamically generating json-schema compliant docs on homogeneous(ish) collections?
[19:16:09] <crudson> iyp: that's going to come from one of your application layers somewhere, and depending on what language you are using. There's nothing currently in mongodb to associate a schema with a collection.
[19:17:46] <iyp> crudson: Gotchya. Do you know if theres anything in the json-schema spec that allows for summarizing populated data? Or is that another problem entirely?
[19:19:29] <crudson> iyp: I have no idea, sorry. That would be a question for the json-schema folks.
[19:42:52] <iyp> crudson: Got any insight on data warehousing with mongodb?
[19:44:45] <iyp> I intend to keep a cache of the data sources with something like Hbase and then transform that data into mongodb collections to sit under a web interface for exploration.
[19:45:16] <iyp> See anything blatantly wrong with that? (input welcomed from anyone)
[20:43:16] <Rhaven> hi all, i have just added a shard to a running sharded cluster, and since then i have been getting some high latency... and i got some errors like that
[20:45:44] <_m> Have you checked here: https://groups.google.com/forum/?fromgroups=#!topic/mongodb-user/lm7Ht1yCpeQ
[20:46:48] <_m> There's a lot of information on the error messages you linked available via google. I suggest turning there first. Apologies in advance if you've already exhausted that route.
[21:08:26] <tomlikestorock> anyone here know how to connect to a replica set in mongohub over an ssh tunnel successfully?
[21:36:42] <R-66Y> db.games.find({ "$where": a + " == 1" });
[21:36:49] <R-66Y> seems a little dirty but it works
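If `a` just holds a field name, building the query object avoids $where (which runs JavaScript per document and cannot use an index); a small sketch:

    var q = {};
    q[a] = 1;               // e.g. turns into { score: 1 }
    db.games.find(q);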
[21:37:28] <Gaddel> hello, i'm brand new to mongodb. if i have a collection of documents and i want a string from one of the fields to be the primary key, is my best option just duplicating the field value as "_id" as well?
[21:40:58] <kchodorow> Gaddel: just use the string as the _id field
[21:41:48] <crudson> Gaddel: you can just use it as _id providing it's unique and logically appropriate. Downside is that it may not be obvious to consumers of that document what that field means without further knowledge.
[21:42:03] <Gaddel> crudson: right, that was my concern. in this case the field is an IP address.
[21:42:50] <Gaddel> which isn't naturally very similar to an "ID" so to speak. but i could probably store it there and then re-assign it to "IP" when presenting the document
[21:47:34] <crudson> Gaddel: if your collection is 'ip_address_stats' or something, then it's obvious what the _id means, otherwise I personally see benefit in making it obvious what attributes mean. Just my opinion.
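A minimal sketch of kchodorow's suggestion, assuming the IP address is unique per document and using a hypothetical collection name:

    db.ip_address_stats.insert({ _id: "192.0.2.10", hits: 1, lastSeen: new Date() })
    db.ip_address_stats.findOne({ _id: "192.0.2.10" })
    // _id is always unique and indexed, so inserting the same IP twice
    // fails with a duplicate key error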
[22:22:40] <fg3> I can search for my-field: true -- how can I search for my-field: false -- or where the field does not exist?
[22:37:20] <_m> fg3: Check the $exists documentation. http://bit.ly/xpiM0V
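A sketch of the query fg3 is describing, with a hypothetical collection name; the two $or branches cover "explicitly false" and "field missing":

    db.things.find({ $or: [ { "my-field": false }, { "my-field": { $exists: false } } ] })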