[07:49:55] <desiac> I'm running the simplest of mongo setups. The database is about 2GB and yesterday I noticed that every 30 mins the write IO on Windows would shoot to 100% and mongo was pretty much locking up.
[07:50:55] <desiac> I upgraded from 2.2.0 to 2.2.1 - same issue. I downgraded to 2.0.7 - the IO spikes are still happening, but at least mongo isn't locking up. Can anyone please assist? I got no help on https://groups.google.com/forum/?fromgroups=#!topic/mongodb-user/T0v8zq3QElY
[07:50:56] <tiglionabbit> what's the difference between $min/$max and $gt/$lt ?
[07:51:54] <desiac> tiglionabbit, - see here http://www.mongodb.org/display/DOCS/min+and+max+Query+Specifiers
[08:08:55] <timroes> Hi, is there any way to save a BSONObject from Java into the database? I know the MongoDB driver uses this internally, but can I give it a BSONObject directly to store in the db?
[09:42:24] <NodeX> being new to EC2 (just testing a few things) - does one have to create EBS volumes as well as the instance, or do instances come with a small hard drive? - sorry, a bit off topic
[09:42:54] <Zelest> even more off-topic, I can truly suggest DigitalOcean :o
[10:27:21] <dev-null> we are facing a mongodb seg fault in production for the last 2 days ... here are the logs http://pastie.org/5188181 ...
[10:57:55] <desiac> Is it normal to have so much activity on a journal file? I have j.__0 which is 1GB (the limit), so it's made a j.__1 which is now 712MB, and every 30 mins IO goes ape - I'm so lost
[11:08:41] <NodeX> [10:51:30] <muatik> there is a line: [initandlisten] ** WARNING: You are running in OpenVZ. This is known to be broken!!!
[11:08:52] <NodeX> I think that's pretty self-explanatory
[11:11:32] <desiac> What happens in mongo every 30 mins? This is causing Mongo 2.2 to lock everything and write log warnings such as "Sun Nov 04 20:32:12 [DataFileSync] flushing mmaps took 48563ms for 11 files"
[11:12:00] <desiac> I downgraded to 2.0.7 - the lock doesn't happen anymore, but for a good 2-3 minutes the disk usage hits 100%, and the log doesn't contain messages like the above
[13:42:56] <Satsuma> I'm configuring a sharded cluster with 3 members (each is both configsvr and shardsvr). All configsvrs start normally, but when I try to start the first shard I get this message in the logs: replication should not be enabled on a config server
[13:43:14] <Satsuma> But the configsvr has no replSet option.
[13:43:48] <Satsuma> Is there a workaround, or have I made a known mistake? Thank you for any help ;)
[13:49:42] <adam_clarey> Hi, is there a common reason for getting "exception 'MongoConnectionException' with message 'Unknown error'"? I can't seem to use mongo from PHP. The extension is set up.
[13:51:14] <Derick> Gargoyle: we haven't had time yet to look at it, sorry
[13:51:31] <Gargoyle> Hi Derick. Just wondering if there's any more help I can give you?
[13:51:31] <desiac> is there any option to buy a once off support ticket from 10gen?
[13:52:39] <moian> Hi ! I have a few questions about sharding
[13:53:38] <dev-null> Can someone pls take a look into this why am getting Seg Faults in mongo production :- http://pastie.org/private/9ijbeuduujuv3cdgry26a
[13:53:42] <Derick> desiac: we do something called lightning consults
[13:53:50] <Derick> Gargoyle: thanks for offering, but not right now I think
[13:55:14] <Derick> Gargoyle: we've just put 1.3.0RC1 out - so now we have time to look at some issues again
[13:59:01] <moian> I'll have 4-5 collections which will get very big (500GB, indexes included). I'm wondering if it's better to let all collections use the same cluster, or if I should separate them
[15:08:53] <rasto_> if I sort a query that returns 0 results, it takes about 2 seconds and costs a lot of CPU. Has anybody else encountered this problem?
[15:09:28] <toast_> is there a kind of percolator/events/triggers for mongodb? E.g. so I can send an event when a new document is inserted matching certain criteria?
[15:28:27] <toast_> no need, i am in the context lol
[15:28:42] <NodeX> I dont know what that means sorry
[15:31:17] <toast_> if I use a tailable, awaitData cursor query for each user on my site, what would be best practice to not crash the system or overuse mongodb? And should I use connection pooling or a connection per query?
[15:33:54] <NodeX> right, so I still don't get the problem
[15:34:03] <NodeX> you want us to tell you how to build your app
[15:34:14] <NodeX> but you don't give any context other than a new document being added
[15:34:39] <raboof> hi! is there a way to programmatically find out whether a given string is JSON or BSON? doesn't seem to be any 'magic numbers'-like identification there, right?
[15:38:27] <toast_> there aren't many solutions; I either have 1) a thread pool with a list of queries firing events, or 2) a limit on concurrent users on the app side
[15:38:52] <NodeX> pass. If PHP, you could try to decode it, which casts to an array, then test the type. I don't know with Java, sorry
[15:39:00] <toast_> yes it's an alert type system, with queries and bounding box geolocation
[15:39:17] <NodeX> toast_ : I do the same thing - alerts for Jobs in X miles
[15:39:42] <raboof> NodeX: if you know a way to do it in PHP i can translate that to Java no problem :) - but i wonder if it's possible in a reliable way
[15:42:28] <NodeX> I suppose you could regex it and look for it starting with "{" also
[15:42:41] <NodeX> as a JSON object is only valid if it starts and ends with { / }
[15:43:16] <NodeX> not sure which would be faster in either language
[15:43:48] <raboof> hmm, BSON *could* start with '\s*{' but I guess i can rule out those cases
[15:44:14] <raboof> might be a good approach, i'll try it out. thanks for thinking with me.
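raboof's heuristic can be made a little more robust than "starts with {": a BSON document carries no magic number, but it does begin with a 4-byte little-endian int32 holding its own total length and end with a 0x00 terminator. A rough stand-alone sketch in Python (no driver involved; the function names are invented for illustration):

```python
import json
import struct

def looks_like_bson(data: bytes) -> bool:
    """Heuristic: a BSON document starts with a 4-byte little-endian
    int32 giving the total document length (prefix and trailing 0x00
    terminator included) and ends with that 0x00 byte."""
    if len(data) < 5:
        return False
    (length,) = struct.unpack_from("<i", data, 0)
    return length == len(data) and data[-1] == 0

def looks_like_json(data: bytes) -> bool:
    """JSON is text: decode as UTF-8 and try to parse it outright."""
    try:
        json.loads(data.decode("utf-8"))
        return True
    except (UnicodeDecodeError, ValueError):
        return False

# The minimal BSON document {} encodes as length=5 plus a terminator.
empty_bson = struct.pack("<i", 5) + b"\x00"
```

Checking the length prefix against the actual byte count rules out most text that happens to start with a brace-like byte, which addresses raboof's worry about BSON that "could start with '\s*{'".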
[15:44:15] <toast_> «test then against that single job» is rather vague
[15:44:57] <toast_> you mean that in mongo you can test a document against another (i.e. a saved alert one)?
[15:45:07] <Elhu> hi! I have two mongodb instances, one on my dev machine (mongo 2.2.1) and one on my staging env (mongo 2.0.1), with roughly the same data set. I run the same query on both DBs, and they don't use the same index for some reason. The one on my dev machine is nearly instant, while the one on my staging env takes ~15 secs. If I try to force the use of the index with hint on the staging server, the query takes forever (it's been running for minutes). I tried rebuilding the index; the problem is still the same. Was there some bug-fix/improvement between mongo 2.0 and mongo 2.2 that could cause such a drastic difference?
[15:45:10] <NodeX> err.... I run each alert against that single job (new document)
[15:46:21] <toast_> ahhhhhhhhhh then you freakin made an appside percolator :P
[15:46:45] <NodeX> lol, that's what handling it appside implies ;)
[15:46:52] <toast_> so, i need to make a geocoding and fulltext search appside
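NodeX's app-side approach ("run each alert against that single job") amounts to a tiny matcher over saved criteria: on every insert, the new document is tested against all stored alerts instead of re-querying the database. A hedged Python sketch, with invented field names and only a toy subset of Mongo-style operators:

```python
def matches(doc: dict, criteria: dict) -> bool:
    """Test one document against one saved alert. Supports only
    equality, $gt, $lt, and $in - a toy subset, not Mongo's matcher."""
    for field, cond in criteria.items():
        value = doc.get(field)
        if isinstance(cond, dict):
            if "$gt" in cond and not (value is not None and value > cond["$gt"]):
                return False
            if "$lt" in cond and not (value is not None and value < cond["$lt"]):
                return False
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

# Hypothetical saved alerts, e.g. "jobs within X miles" as NodeX described.
saved_alerts = [
    {"user": "alice", "criteria": {"type": "job", "miles": {"$lt": 10}}},
    {"user": "bob", "criteria": {"type": "job", "miles": {"$lt": 50}}},
]

def fire_alerts(new_doc: dict) -> list:
    """Return the users whose saved alert matches the new document."""
    return [a["user"] for a in saved_alerts if matches(new_doc, a["criteria"])]
```

This is the "app-side percolator" toast_ names below: the cost scales with the number of saved alerts per insert, not with collection size.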
[15:56:28] <Elhu> ok, I have a really weird situation here. A query uses an index, takes 10 minutes to return, and the nscanned field in explain has a value of about 9 million, but I only have 160 000 documents in my collection oO… any idea what could cause that?
[16:14:20] <Elhu> I found the cause, it's a $or on the indexed field that just kills everything. It's rarely necessary and was mainly here as a safeguard, and removing it makes the query nearly instantaneous. I'll file a bug with MongoDB, but I don't think it's going to get fixed since we still run Mongo 2.0, and it works just fine in 2.2 :)
[16:33:36] <mikejw> I'm trying to figure out how to use map-reduce, but all I'm getting is what looks like a meta object for the collection itself
[17:12:43] <doubletap> hey everyone, I just switched from one plan to another at mongohq and I noticed the ids went from alphanumerics to numbers approaching infinity. Are _ids approaching infinity normal?
[18:05:22] <doubletap> today the ids have started coming up all starting with 5097, and intermittently all numbers with just an 'e' in them, which turns them into numbers approaching infinity
[18:05:23] <nbargnesi> doubletap: that's part of it - the _id's are considered to be "monotonically increasing"
[18:09:55] <doubletap> so I was switching based on isNaN(some_id), and for IDs created yesterday, it looks like they come up as numbers approaching Infinity, like 5097e79240744435429805
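What doubletap is seeing are almost certainly ObjectId strings, not numbers: the first 4 bytes of an ObjectId are its creation timestamp, so ids minted around early November 2012 all share the hex prefix 5097, and a hex string whose other characters happen to be digits plus a single 'e' parses as a JavaScript number in exponential notation, which is why isNaN misfires. A Python sketch of the right check (the example id in the test is made up):

```python
import datetime
import string

def is_object_id(s: str) -> bool:
    """An ObjectId string is exactly 24 hex characters. Testing
    'is it a number' (e.g. JS isNaN) misfires on ids like
    '5097e7...' because digits-plus-'e' reads as exponential
    notation."""
    return len(s) == 24 and all(c in string.hexdigits for c in s)

def object_id_timestamp(s: str) -> datetime.datetime:
    """The first 8 hex chars are a big-endian Unix timestamp,
    which is why ids minted the same day share a prefix."""
    seconds = int(s[:8], 16)
    return datetime.datetime.fromtimestamp(seconds, tz=datetime.timezone.utc)
```

So the robust switch is "24 hex chars", never a numeric cast.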
[18:10:00] <mikejw> ..ok getting the latest (stable) version now
[18:58:55] <cdave> I need to back up mongodb from a remote backup server, and I plan to use mongodump. The question I have is: is there a mongodb client-only package which provides the mongodump utility?
[19:09:53] <kevinprince> hey, quick question. My mongo instance is using a lot of RAM, so I dropped some unneeded indexes and ran a repair, but the RAM usage doesn't seem to have decreased. Any ideas?
[19:09:59] <Gargoyle> cdave: Don't think so. But why don't you have your backup server invoke mongodump locally on the mongo server via ssh, tar/zip, etc. and then transfer the file?
[19:10:23] <Gargoyle> kevinprince: It's supposed to. It's memory mapped!
[19:11:06] <kevinprince> Gargoyle: so I shouldn't see a reduction in RAM usage? ubuntu 12.04
[19:11:20] <cdave> Gargoyle, I think that's what I'm going to do now
[19:11:43] <Gargoyle> kevinprince: No, because the RAM is allocated as buffers. Under Linux, free RAM = wasted RAM!
[19:12:14] <Gargoyle> If nothing else needs the RAM, it's used for buffers. As soon as another program needs some ram, linux will use some of the buffer space.
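Gargoyle's point is the standard memory-mapped-storage story: mongod (in these versions) maps its data files into the process address space and lets the OS page cache decide what stays resident, so apparent "RAM usage" is mostly reclaimable cache. The same mechanism can be demonstrated with Python's stdlib mmap module (a toy stand-in for a data file, not mongod itself):

```python
import mmap
import os
import tempfile

# Map a file into memory, write through the mapping, and flush -
# the kernel owns the pages in between, just as with mongod's
# data files (cf. the periodic "flushing mmaps" log line).
fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, 4096)           # pre-size the "data file"
    with mmap.mmap(fd, 4096) as mm:
        mm[0:5] = b"hello"           # a plain memory write...
        mm.flush()                   # ...made durable on request
    with open(path, "rb") as f:
        on_disk = f.read(5)          # the bytes reached the file
finally:
    os.close(fd)
    os.remove(path)
```

Between the write and the flush, those dirty pages show up as the process's resident memory even though the kernel can write them back and evict them whenever something else needs the RAM.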
[19:12:31] <kevinprince> Gargoyle: agreed ;) need it for something else. Will let the kernel work it out then ;)
[19:14:04] <kevinprince> we are doing some election data analysis, and noticing it spiking but cool :)
[20:15:43] <munro> does a sharded cluster provide redundancy, or do I also have to set up replication? If it does, how do I know the amount of redundancy my data has? And how do I specify geographic redundancy?
[20:16:17] <wereHamster> munro: sharding is for performance. replication is for redundancy
[20:16:38] <wereHamster> the docs have all the details you need.
[20:22:22] <munro> http://docs.mongodb.org/manual/administration/sharding-architectures/#sharding-high-availability <-- just to be clear, I should replicate every shard I have correct? not the database as a whole
[20:22:51] <kali> yes. every shard must be a replica set
[20:28:33] <kali> eka: yes, it does work with standalone servers. but munro asked about redundancy, so i would not want him confused
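To make kali's point concrete: sharding decides *which* shard a document lands on (scaling out), while each shard being a replica set is what produces the extra copies (redundancy). A toy Python model, with invented shard and member names:

```python
import hashlib

# Each shard is a replica set: a list of members that all hold
# the same data. Names here are made up for illustration.
cluster = {
    "shard0": ["s0-primary", "s0-secondary1", "s0-secondary2"],
    "shard1": ["s1-primary", "s1-secondary1", "s1-secondary2"],
}

def route(shard_key: str) -> str:
    """Hash the shard key to pick exactly one shard
    (loosely akin to hashed sharding)."""
    shards = sorted(cluster)
    digest = int(hashlib.md5(shard_key.encode()).hexdigest(), 16)
    return shards[digest % len(shards)]

def insert(stores: dict, shard_key: str, doc: dict) -> None:
    """Write the document to every member of its shard's
    replica set - replication is where redundancy comes from."""
    for member in cluster[route(shard_key)]:
        stores.setdefault(member, []).append(doc)
```

In this model a document exists on exactly one shard but on every member of that shard, so losing a single member loses no data, while losing a whole (unreplicated) shard would.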
[20:28:55] <eka> kali: ok ... sorry didn't see that
[20:30:03] <eka> q: I need to change my shard key (I know, bummer). How would I do that without losing all the data? I thought of removing all shards one by one, so data ends up on shard1, and then? How do I remove the key?
[20:30:55] <Gargoyle> cdave: Good find. I wish Ubuntu would put the version numbers after the names. I forget which ones are which!
[20:39:42] <eka> how do you know when you need more shards? How many chunks per shard is reasonable?
[20:47:21] <roxlu> hi, when I use mongo_insert (c-driver), it seems that it blocks till the data is inserted. Is there a way to use non-blocking inserts?
[20:47:46] <eka> roxlu: by default it should be fire and forget... did you check the API?
[20:48:17] <roxlu> hmm yes I saw that... but it seems to be blocking.. or am I wrong (?)
[21:03:56] <eka> roxlu: high traffic to your mongo?
[21:06:02] <roxlu> eka: I'm new to mongo, but I'm inserting about 50 entries per second
[21:10:01] <kali> so basically you're pumping tweets on one side, pushing them to mongo on the other, and when you get to 50/s, you complain about mongo? :)
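If the driver's insert still appears to block (e.g. on the socket write itself), the usual app-side workaround is a background writer thread fed by a queue, so the caller returns immediately. A generic Python sketch of that pattern; `do_insert` stands in for the real driver call (e.g. mongo_insert), it is not a real API:

```python
import queue
import threading

def make_async_inserter(do_insert):
    """Return (insert_async, close). insert_async enqueues a doc and
    returns immediately; one background thread drains the queue and
    performs the blocking driver call."""
    q = queue.Queue()

    def worker():
        while True:
            doc = q.get()
            if doc is None:          # shutdown sentinel
                break
            do_insert(doc)

    t = threading.Thread(target=worker, daemon=True)
    t.start()

    def insert_async(doc):
        q.put(doc)                   # non-blocking from the caller's view

    def close():
        q.put(None)                  # drain remaining docs, then stop
        t.join()

    return insert_async, close
```

A single worker keeps insert order; the trade-off is that writes queued but not yet sent are lost if the process dies, which is roughly the same guarantee as fire-and-forget writes anyway.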
[22:07:25] <smsfail> Is there a way to implement PubSubHubbub or something similar with MongoDB? I haven't used mongo in a few years so am out of the loop. Any help is appreciated.