PMXBOT Log file Viewer


#mongodb logs for Monday the 5th of November, 2012

[01:19:20] <sent_hil> Can someone point to the place in source where mongod looks for incoming connections?
[03:31:34] <borei> hi all
[03:31:44] <borei> i'm new with mongodb
[03:31:51] <borei> just started to learn it
[03:32:27] <borei> in the mongodb startup script (RH init script), there is the following:
[03:32:52] <borei> DBPATH=`awk -F= '/^dbpath/{print $2}' "$CONFIGFILE"`PIDFILE=`awk -F= '/^dbpath\s=\s/{print $2}' "$CONFIGFILE"`
[03:33:13] <borei> sorry
[03:33:20] <borei> DBPATH=`awk -F= '/^dbpath/{print $2}' "$
[03:33:46] <borei> DBPATH=`awk -F= '/^dbpath/{print $2}' "$CONFIGFILE"`
[03:33:52] <borei> PIDFILE=`awk -F= '/^dbpath\s=\s/{print $2}' "$CONFIGFILE"`
[03:34:08] <borei> is it wrong, or am I missing something?
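borei's second awk line still matches `^dbpath`, so PIDFILE gets the dbpath value; presumably it should match the `pidfilepath` key instead (and note POSIX awk doesn't support `\s`). A minimal Python sketch of the intended extraction, assuming a standard `key = value` style mongod.conf (the key names and paths below are illustrative):

```python
# Sketch of what the two awk one-liners try to do. The bug in the log:
# the PIDFILE line reuses the /^dbpath.../ pattern, so it extracts the
# dbpath value rather than the pid file path.
import re

def conf_value(text, key):
    """Return the value for `key` from key = value config text, or None."""
    for line in text.splitlines():
        m = re.match(r'^%s\s*=\s*(.+)$' % re.escape(key), line)
        if m:
            return m.group(1).strip()
    return None

config = """\
dbpath = /var/lib/mongo
pidfilepath = /var/run/mongodb/mongod.pid
"""

print(conf_value(config, "dbpath"))       # /var/lib/mongo
print(conf_value(config, "pidfilepath"))  # /var/run/mongodb/mongod.pid
```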
[07:47:04] <Zelest> # pkg_version -vL=
[07:47:05] <Zelest> mongodb-2.0.6_1 < needs updating (port has 2.2.0)
[07:47:07] <Zelest> oh yeah! :-D
[07:47:15] <Zelest> shame 2.2.1 is out.. lol
[07:47:19] <Zelest> but 2.2.x at least
[07:49:55] <desiac> I'm running the simplest of mongo setups. The database is about 2gb and yesterday i noticed that every 30 mins the Write IO on windows would shoot to 100% and mongo was pretty much locking up.
[07:50:55] <desiac> I upgraded from 2.2.0 to 2.2.1 - same issue. I downgraded to 2.0.7 - the IO writes are still happening but at least mongo is not locking up. Can anyone please assist? I got no help on https://groups.google.com/forum/?fromgroups=#!topic/mongodb-user/T0v8zq3QElY
[07:50:56] <tiglionabbit> what's the difference between $min/$max and $gt/$lt ?
[07:51:54] <desiac> tiglionabbit, - see here http://www.mongodb.org/display/DOCS/min+and+max+Query+Specifiers
[07:55:18] <tiglionabbit> o
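For the archive, the distinction the linked page draws: `$gt`/`$lt` are ordinary match operators evaluated against each document, while `$min`/`$max` directly constrain the index scan bounds (lower bound inclusive, upper bound exclusive) and need a usable index. Shown here as pymongo-style query documents; the field name `x` is hypothetical:

```python
# $gt / $lt: per-document range predicates, usable with or without an index.
range_query = {'x': {'$gt': 10, '$lt': 20}}

# $min / $max: index bound specifiers rather than predicates. In shell
# syntax this would be roughly:
#   db.coll.find().min({'x': 10}).max({'x': 20})
# min is inclusive, max is exclusive, and an index on x is required.

print(range_query)
```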
[08:08:55] <timroes> Hi, is there any possibility to save a BSONObject from Java in the database? I know the MongoDB driver uses this internally, but is it possible to give it a BSONObject directly to store into the db?
[08:35:16] <[AD]Turbo> hello
[08:36:42] <desiac> [AD]Turbo, hey, this place is a ghost town
[08:38:12] <[AD]Turbo> I like ghost towns
[09:02:01] <dev-null> we have been facing mongodb seg faults in production for the last 2 days
[09:02:11] <dev-null> here are the logs
[09:02:12] <dev-null> http://pastie.org/5188181
[09:02:24] <dev-null> can someone point me to what could be wrong here?
[09:02:39] <vr_> we're using 2.0.7
[09:12:45] <IAD> ьс
[09:12:54] <IAD> oops
[09:42:24] <NodeX> being new to EC2 (just testing a few things) - does one have to create EBS volumes as well as the instance, or do instances come with a small hard drive? - sorry, a bit off topic
[09:42:54] <Zelest> even more off-topic, I can truly suggest DigitalOcean :o
[10:27:21] <dev-null> we are facing mongodb seg fault in production for last 2 days ... here is the logs http://pastie.org/5188181 ...
[10:27:42] <NodeX> post it in the groups
[10:49:11] <muatik> hi, my mongodb is on a virtual machine. It often shuts down by itself. why?
[10:53:04] <lando`> have a look at the logfile :)
[10:54:15] <muatik> there is a line: [initandlisten] ** WARNING: You are running in OpenVZ. This is known to be broken!!!
[10:55:36] <muatik> can't mongodb run on a vm?
[10:56:20] <desiac> im running mongo on VM Ware VM
[10:56:40] <desiac> and sure there are thousands running on EC2
[10:56:47] <Zelest> i've run it on vmware, qemu and xen. :-)
[10:57:03] <muatik> is the problem openvz?
[10:57:24] <Zelest> ah yeah, tried openvz too.. which doesn't work. :P
[10:57:55] <desiac> Is it normal to have so much activity on a journal file? I have j.__0 which is 1gb (the limit) so it's made a j.__1 which is now 712MB, and every 30 mins IO goes ape - I'm so lost
[10:58:53] <muatik> no, there is no heavy traffic
[10:59:05] <muatik> I think openvz is the problem
[11:08:41] <NodeX> [10:51:30] <muatik> there is a line: [initandlisten] ** WARNING: You are running in OpenVZ. This is known to be broken!!!
[11:08:52] <NodeX> I think that's pretty self-explanatory
[11:11:32] <desiac> What happens in mongo every 30 mins - this is causing Mongo 2.2 to lock everything and write log warnings such as "Sun Nov 04 20:32:12 [DataFileSync] flushing mmaps took 48563ms for 11 files"
[11:12:00] <desiac> i downgraded to 2.0.7 - the lock doesn't happen anymore, but for a good 2-3 minutes the disk usage hits 100%, and the log does not contain messages as per above
[11:20:59] <NodeX> where is your filesystem
[11:21:12] <NodeX> is it on the same machine or somewhere else, and how much data is syncing?
[11:21:34] <ron> I wonder if remonvv is okay.
[11:27:56] <NodeX> how come?
[11:28:32] <desiac> I am on Windows server 2008R2 - 64 bit mongo - all on the same file system
[11:28:54] <desiac> this is as vanilla a setup as you can get - the total db is 2GB - not very busy - only 500 users online
[11:29:17] <desiac> the journal is always being written to and there are 2 journals - it seems like too much journal activity?
[11:29:47] <ron> NodeX: he hasn't been around for a while.
[11:30:13] <NodeX> !seen remonvv
[11:30:13] <pmxbot> I last saw remonvv speak at 2012-10-24 16:38:37+00:00 in channel #mongodb
[11:30:29] <NodeX> nearly 2 weeks
[11:30:43] <NodeX> maybe he's on vacation?
[11:31:51] <desiac> is it normal that j._1 is being written to more than my database.3 file?
[11:32:18] <NodeX> I assume "j" means journal so I would think it would be
[11:32:59] <desiac> yes j is the journal file... what about the fact that i have 1000 - 2000 page faults / sec all the time (seeing this in MMS)
[11:33:28] <NodeX> does your working set exceed ram?
[11:34:51] <desiac> i got 16gb ram - mongo reporting to use only 330MB working set
[11:35:50] <desiac> but in MMS it shows memory - just under 2gb which is indeed the size of the DB
[11:51:41] <NodeX> I am not familiar enough with Winblows Mongodb to be of any help sorry
[11:52:13] <desiac> yeah i am in total agreement - need to move it to a linux box - my redis windows port is a bit of a disaster on windows too
[12:04:16] <NodeX> http://www.reddit.com/r/funny/comments/12kce9/apple_resizes_website_so_that_the_samsung_apology/
[12:04:18] <NodeX> LOL
[12:30:37] <NodeX> @/12add9fcc6045f1cd09a49426c2bf660015f8f12/
[12:30:40] <NodeX> oops
[12:31:08] <Derick> password changing time? :)
[12:31:40] <NodeX> wasn't a password luckily
[13:40:03] <Satsuma> Hi
[13:42:56] <Satsuma> I'm configuring a sharded cluster with 3 members (each is a configsvr and shardsvr). All configsvrs start normally, but when I try to start the first shard I get this message in the logs: replication should not be enabled on a config server
[13:43:14] <Satsuma> But the configsvr has no replSet option.
[13:43:48] <Satsuma> Is there a workaround, or have I made a known mistake? Thank you for any help ;)
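For context on Satsuma's error: in the 2.2-era setup, the config server and the shard member usually run as separate mongod processes with separate config files, and only the shard's file carries a replSet option. If a shared init script or command line feeds the same options to both processes, the config server sees replSet and aborts with exactly that message. A hedged sketch of the split; file paths, ports, and the replica-set name here are hypothetical:

```ini
# /etc/mongod-configsvr.conf  -- config server: no replSet here
configsvr = true
port = 27019
dbpath = /var/lib/mongo-configdb

# /etc/mongod-shardsvr.conf  -- shard member: replSet lives only here
shardsvr = true
replSet = rs0
port = 27018
dbpath = /var/lib/mongo-sharddb
```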
[13:46:01] <Gargoyle> ping Derick, bjori .
[13:49:42] <adam_clarey> Hi, is there a common reason for getting "exception 'MongoConnectionException' with message 'Unknown error'"? I can't seem to use mongo from php. the extension is set up
[13:50:57] <Derick> Gargoyle: hi
[13:51:14] <Derick> Gargoyle: we haven't had time yet to look at it, sorry
[13:51:31] <Gargoyle> Hi Derick. Just wondering if there's any more help I can give you?
[13:51:31] <desiac> is there any option to buy a once off support ticket from 10gen?
[13:52:39] <moian> Hi ! I have a few questions about sharding
[13:53:38] <dev-null> Can someone pls take a look into this why am getting Seg Faults in mongo production :- http://pastie.org/private/9ijbeuduujuv3cdgry26a
[13:53:42] <Derick> desiac: we do something called lightning consults
[13:53:50] <Derick> Gargoyle: thanks for offering, but not right now I think
[13:55:14] <Derick> Gargoyle: we've just put 1.3.0RC1 out - so now we have time to look at some issues again
[13:55:36] <Gargoyle> Derick: :)
[13:59:01] <moian> I'll have 4-5 collections which will get very big (500Go, indexes included); I'm wondering if it's better to let all collections use the same cluster, or if I should separate them
[13:59:16] <Tobsn> 500Go
[13:59:17] <Tobsn> :)
[13:59:41] <Tobsn> moian, depends on the data and the read/writes etc.
[13:59:44] <moian> very big for me ^^
[14:01:58] <moian> Tobsn: There will be lots of reads/writes on each collection
[14:05:07] <moian> Tobsn: have you an example where a case could be better than the other ?
[14:08:54] <Tobsn> well if you have a shit ton of writes you might want to use replication instead of sharding
[14:09:05] <Tobsn> if you have a shit ton of random reads you might want to do the same
[14:09:09] <Tobsn> it depends on what youre doing
[14:16:24] <moian> I want every collection to stay in RAM; my servers have 64gb RAM, so I need to use sharding
[14:19:00] <moian> but I'm not sure what is best. Use the same sharding cluster for every collection or not?
[14:24:50] <moian> Tobsn: ?
[14:31:08] <Tobsn> yeah you can use the same cluster for all collections
[14:31:11] <Tobsn> nothing bad about that
[14:31:55] <moian> Tobsn: OK thank you !
[14:32:30] <Tobsn> ;)
[14:32:50] <ppetermann> hey Tobsn
[14:33:18] <Tobsn> well hello there mr. petermann
[14:33:52] <ppetermann> havent seen you in a while
[14:33:57] <ppetermann> except on facebook
[14:34:47] <Tobsn> yep
[14:34:53] <Tobsn> but im often here
[14:34:57] <Tobsn> for the past 4 years
[14:34:58] <Tobsn> ;)
[14:35:20] <ppetermann> yeah, but im not watching this window overly active =)
[14:35:40] <Tobsn> first time i see you in here ;)
[15:08:09] <toast_> hello
[15:08:53] <rasto_> if I sort a query that returns 0 results, it takes about 2 seconds and costs a lot of CPU. Has anybody encountered this problem?
[15:09:28] <toast_> is there a kind of percolator/events/triggers for mongodb? So i.e. i can send an event when a new document is inserted matching certain criteria?
[15:11:04] <NodeX> create an index on the query
[15:14:49] <rasto_> the query is fast, it uses index, but once the empty result set is sorted in the end, it is slow.
[15:15:17] <NodeX> does the sort use an index also?
[15:16:27] <rasto_> probably not, but since there's nothing to sort it should be fast, no?
[15:16:42] <NodeX> you said it was slow sorting
[15:17:34] <jwm> check if it has zero results? heh
[15:17:39] <toast_> or what is best practice to stream a tailable query to a user?
[15:18:00] <toast_> i.e. i could have a lot of users making queries... :/
[15:18:51] <NodeX> do you sort on the query?
[15:20:40] <toast_> yes i do
[15:20:49] <NodeX> ok, does the sort have an index?
[15:21:02] <toast_> it's a $natural sort
[15:21:27] <toast_> but i guess you're not talking to me heh
[15:21:34] <NodeX> yes I am talking to you
[15:21:42] <NodeX> I don't see what the problem is here
[15:22:39] <rasto_> jwm: generally, is this how mongodb works, or is there something wrong?
[15:23:48] <toast_> well, to put it simply, my server would crash if i have a thread with a tailable query for each user
[15:24:02] <NodeX> why are you trying to tail the cursors?
[15:24:27] <toast_> i have a live stream, so i want to notify the user if any new notification happens
[15:25:03] <NodeX> right, sorry I got your question mixed up
[15:25:19] <NodeX> there is no percolator/pub&sub for mongo
[15:26:50] <toast_> yeah, i figured that out, but then, how should i build it?
[15:27:02] <NodeX> that's an appside problem
[15:27:45] <toast_> yes, but a problem that might happen a lot here in the context...
[15:27:56] <NodeX> what context?
[15:28:02] <NodeX> you have not given one
[15:28:27] <toast_> no need, i am in the context lol
[15:28:42] <NodeX> I dont know what that means sorry
[15:31:17] <toast_> if I use a tailable, awaitdata custom query for each user on my site, what would be best practice to not crash the system or overuse mongodb? And should i use connection pooling or a connection per query?
[15:31:42] <toast_> *conext
[15:32:09] <NodeX> that will never scale
[15:33:27] <toast_> thats what i am saying!
[15:33:54] <NodeX> right so I still don't get the problem
[15:34:03] <NodeX> you want us to tell you how to build your app
[15:34:14] <NodeX> but you dont give any context other than a new document being added
[15:34:39] <raboof> hi! is there a way to programmatically find out whether a given string is JSON or BSON? doesn't seem to be any 'magic numbers'-like identification there, right?
[15:34:54] <NodeX> raboof look at $type
[15:35:01] <toast_> because the rest of the context is irrelevant?
[15:35:33] <NodeX> ok, good luck
[15:35:41] <NinjaPenguin> morning - any MMS users in here?
[15:36:02] <NinjaPenguin> we're just wondering if we should run one agent per machine, or one agent per replica set
[15:36:02] <toast_> so i need to put something like a percolator in front of it (i.e. elastic search)?
[15:36:19] <NodeX> no, you need to handle it appside like I said in the first place
[15:37:06] <NodeX> I assume it's an alerts type system?
[15:37:56] <raboof> NodeX: ah, the string I'm looking at is a string that I'm about to insert into a mongodb database, it's not in there yet...
[15:38:15] <NodeX> raboof : what language?
[15:38:21] <raboof> NodeX: Java
[15:38:27] <toast_> there are not many solutions; i either have 1) a thread pool with a list of queries firing events, or 2) a limit of concurrent users on the appside
[15:38:52] <NodeX> pass - if php, you could try to decode, which makes a cast array, then test the type; I don't know with Java, sorry
[15:39:00] <toast_> yes it's an alert type system, with queries and bounding box geolocation
[15:39:17] <NodeX> toast_ : I do the same thing - alerts for Jobs in X miles
[15:39:26] <NodeX> I do it on the appside
[15:39:42] <raboof> NodeX: if you know a way to do it in PHP i can translate that to Java no problem :) - but i wonder if it's possible in a reliable way
[15:41:08] <NodeX> $t = json_decode('JSON-STRING', true); if (gettype($t) == 'array') { /* it's json */ }
[15:41:17] <toast_> nice, NodeX, but then how do you handle it appside?
[15:41:48] <raboof> NodeX: ah, basically 'try to parse it as JSON, if that fails, it must be BSON' ?
[15:41:57] <NodeX> when the job is added I run thru the alerts collection and test them against that single job
[15:42:04] <NodeX> raboof : exactly
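The "try to parse it as JSON, else assume BSON" idea, sketched in Python for clarity since raboof will translate anyway. This also shows a cheap structural check that is more reliable than looking for a leading `{`: a BSON document begins with its total length as a 32-bit little-endian integer and ends with a NUL byte:

```python
import json
import struct

def looks_like_json(data):
    """NodeX's approach: if it decodes as JSON, treat it as JSON."""
    if isinstance(data, bytes):
        try:
            data = data.decode('utf-8')
        except UnicodeDecodeError:
            return False  # not valid UTF-8, so not JSON text
    try:
        json.loads(data)
        return True
    except ValueError:
        return False

def looks_like_bson(data):
    """Structural check: int32 length prefix matching the total size,
    plus the trailing NUL that terminates every BSON document."""
    if len(data) < 5 or data[-1:] != b'\x00':
        return False
    (length,) = struct.unpack('<i', data[:4])
    return length == len(data)

empty_bson = b'\x05\x00\x00\x00\x00'   # {} encoded as BSON
print(looks_like_json('{"a": 1}'))     # True
print(looks_like_bson(empty_bson))     # True
print(looks_like_json(empty_bson))     # False
```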
[15:42:28] <NodeX> I suppose you could regex and look for it starting with "{" also
[15:42:41] <NodeX> as json is only valid to start and end with { / }
[15:43:16] <NodeX> not sure which would be faster in either language
[15:43:48] <raboof> hmm, BSON *could* start with '\s*{' but I guess i can rule out those cases
[15:44:14] <raboof> might be a good approach, i'll try it out. thanks for thinking with me.
[15:44:15] <toast_> «test them against that single job» is rather vague
[15:44:57] <toast_> you mean that in mongo you can test a document against another(i.e. a saved alert one)?
[15:45:07] <Elhu> hi! I have two mongodb instances, one on my dev machine (mongo 2.2.1) and one on my staging env (mongo 2.0.1), with roughly the same data set. I run the same query on both DBs, and they don't use the same index for some reason. The one on my dev machine is nearly instant, while the one on my staging env takes ~15secs. If I try to force the use of the index with hint on the staging server, the query takes forever (it's been running for
[15:45:08] <Elhu> minutes). I tried rebuilding the index, the problem is still the same. Was there some bug-fix/improvements between mongo 2.0 and mongo 2.2 that could cause such a drastic difference?
[15:45:10] <NodeX> err.... I run each alert against that single job (new document)
[15:46:21] <toast_> ahhhhhhhhhh then you freakin made an appside percolator :P
[15:46:45] <NodeX> lol, that's what handling it appside implies ;)
[15:46:52] <toast_> so, i need to make a geocoding and fulltext search appside
[15:47:10] <toast_> thanks NodeX
[15:47:17] <NodeX> app side = in your code
[15:47:25] <NodeX> it's not a magic "thing"
[15:47:47] <NodeX> my way doesn't really scale all that well on a high write system
[15:48:16] <NodeX> but there are limited choices as you say
[15:49:15] <toast_> with very huge write
[15:50:01] <NodeX> certainly full text is not great on Mongo if you're regexing
[15:50:23] <NodeX> you might be better to put this functionality in ElasticSearch
[15:51:39] <toast_> gtg, thanks again!
[15:56:28] <Elhu> ok, I have a really weird situation here. A query uses an index, takes 10 minutes to return, and the nscanned field in explain has a value of about 9 million, but I only have 160 000 documents in my collection oO… any idea what could cause that?
[15:57:04] <NodeX> that is weird
[15:57:14] <NodeX> db.your_collection.count();
[15:57:23] <NodeX> make sure there aren't 9 million docs
[15:58:52] <Elhu> just did: "count" : 166303
[15:59:02] <Elhu> "nscanned" : 9904649
[16:12:10] <Gargoyle> Elhu: Pastebin the query
[16:14:20] <Elhu> I found the cause, it's a $or on the indexed field that just kills everything. It's rarely necessary and was mainly here as a safeguard, and removing it makes the query nearly instantaneous. I'll file a bug with MongoDB, but I don't think it's going to get fixed since we still run Mongo 2.0, and it works just fine in 2.2 :)
[16:33:36] <mikejw> I'm trying to figure how to use map reduce but all I'm getting is what looks like a meta obect for the collectin itself
[16:34:03] <mikejw> s/obect/object s/collectin/collection
[16:34:13] <_m> gist/pastie us your sample query.
[16:50:47] <mikejw> the only reason I'm trying to use map reduce is because this is what happens when I try and use group: http://pastebin.com/Rz0KXSW8
[16:52:05] <mikejw> (with doctrine odm)
[17:01:24] <kali> mikejw: you should use the new aggregation framework instead
[17:01:56] <kali> mikejw: it will be more efficient by an order of magnitude, and will look much more like what you want it to look like
[17:05:52] <_m> mikejw: I agree with Kali. Try to leverage the aggregation framework.
[17:05:54] <mikejw> kali: so that would have to be in 'raw' php code though?
[17:06:23] <kali> mikejw: what do you mean ?
[17:06:40] <kali> mikejw: at least, you won't have JS code
[17:07:00] <_m> He's using Symphony or whatever. Think the question is whether that's available. To which I say, "Scope your documentation."
[17:07:08] <mikejw> :)
[17:07:14] <mikejw> it doesn't look like it..
[17:07:44] <mikejw> well specifically it's Doctrine
[17:07:56] <mikejw> ..ODM
[17:09:50] <kali> mikejw: whatever. map/reduce can not be used for user-fronting requests
[17:10:14] <mikejw> don't get me wrong I wish I didn't know what it was either
[17:10:22] <mikejw> what do you mean by user-fronting?
[17:10:51] <kali> mikejw: well, in a web page for instance
[17:12:01] <kali> mikejw: it's slow, and only one map/reduce query can run at one given time on a mongod server
[17:12:25] <kali> mikejw: (note that it's actually the same for "group")
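To make kali's recommendation concrete: a 2.2-era aggregation pipeline replacing a typical group/map-reduce, e.g. counting documents per status field (the collection and field names here are hypothetical). The plain-Python loop below mirrors what the `$group` stage computes, so the sketch is self-contained:

```python
# With pymongo against a 2.2+ server, the pipeline form would be roughly:
#   db.things.aggregate([{'$group': {'_id': '$status',
#                                    'count': {'$sum': 1}}}])
# The loop below performs the same grouping in plain Python to show
# what $group with a {'$sum': 1} accumulator does.
from collections import defaultdict

docs = [
    {'status': 'open'},
    {'status': 'open'},
    {'status': 'closed'},
]

counts = defaultdict(int)
for doc in docs:
    counts[doc['status']] += 1   # the {'$sum': 1} accumulator

print(dict(counts))  # {'open': 2, 'closed': 1}
```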
[17:12:40] <mikejw> ok
[17:12:43] <doubletap> hey everyone, i just switched from one plan to another at mongohq and i noticed the id's went from alphanumerics to numbers approaching infinity. are _id's approaching infinity normal?
[17:12:44] <mikejw> makes sense
[17:14:35] <mikejw> ..I'd much rather be creating my own custom wrapper functionality to the native mongo php extension
[17:15:11] <mikejw> but that would be too easy
[17:18:09] <doubletap> is it normal for _id's to be numbers approaching infinity rather than alphanumeric in composition?
[17:44:28] <doubletap> how do i change the way _id's are generated so that they become numbers approaching infinity rather than alphanumerics?
[17:50:11] <ron> doubletap: generate them however you want.
[17:50:49] <doubletap> i mean, is there a config that changes the way mongodb creates _id's?
[17:55:10] <mikejw> by the way how do I run mongo 2.1 on debian?
[17:55:49] <mikejw> ..sorry maybe I already am
[17:55:53] <Derick> 2.1 ?
[17:55:57] <Derick> do you mean 2.2.1?
[18:02:31] <mikejw> yes I do :)
[18:04:11] <mikejw> ..although maybe I'm already running it..
[18:04:29] <mikejw> I'm using mongo shell version 2.0.7..
[18:04:29] <doubletap> ok, this is crazy, are the _id's generated by some clock calculation?
[18:04:57] <mikejw> doubletap: I think a portion of them are releated to time
[18:05:16] <mikejw> s/releated/related
[18:05:22] <doubletap> today the id's have started coming up all starting with 5097 and intermittently with all numbers and just an 'e' in them which turns them to numbers approaching infinity
[18:05:23] <nbargnesi> doubletap: that's part of it - the _id's are considered to be "monotonically increasing"
[18:08:42] <kali> doubletap: http://www.mongodb.org/display/DOCS/Object+IDs#ObjectIDs-BSONObjectIDSpecification
[18:09:55] <doubletap> so, i was switching based on isNaN(some_id) and for ID's created, it looks like, yesterday, they come up as numbers approaching Infinity like 5097e79240744435429805
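What doubletap is seeing: an ObjectId is 24 hex characters, and when the only letter in the id happens to be an `e`, a numeric parser (like the `isNaN`-style check) reads it as scientific notation with an enormous exponent, which overflows to infinity. A small demonstration using the id from the discussion, plus a sounder "is this an ObjectId" check:

```python
import math

oid = '5097e79240744435429805'   # the id quoted above (note: only 22 chars)

# Numeric parsing treats this as 5097 * 10**79240744435429805 -> infinity.
val = float(oid)
print(math.isinf(val))           # True -- "numbers approaching infinity"

# A sounder check: a real ObjectId is exactly 24 hex digits.
def is_object_id(s):
    if len(s) != 24:
        return False
    try:
        int(s, 16)
        return True
    except ValueError:
        return False

print(is_object_id('5097e792' + '0' * 16))  # True  (24 hex digits)
print(is_object_id(oid))                    # False (too short)
```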
[18:10:00] <mikejw> ..ok getting the latest (stable) version now
[18:58:55] <cdave> i need to backup mongodb from a remote backup server, and I plan to use mongodump. question i have is: is there a mongodb client-only package which provides the mongodump utility?
[19:09:53] <kevinprince> hey, quick question. my mongo instance is using a lot of ram, so i dropped some unneeded indexes and ran a repair, but the ram usage doesn't seem to have decreased. any ideas?
[19:09:59] <Gargoyle> cdave: Don't think so. But why don't you have your backup server invoke mongodump locally on the mongo server via ssh, tar/zip, etc. and then transfer the file?
[19:10:23] <Gargoyle> kevinprince: It's supposed to. It's memory mapped!
[19:10:57] <Gargoyle> kevinprince: Also, what OS?
[19:11:06] <kevinprince> Gargoyle: so i shouldn't see a reduction in ram usage? ubuntu 12.04
[19:11:20] <cdave> Gargoyle, i think that's what i'm going to do now
[19:11:43] <Gargoyle> kevinprince: No. because the ram is allocated as buffers. Under linux, free ram = wasted ram!
[19:12:14] <Gargoyle> If nothing else needs the RAM, it's used for buffers. As soon as another program needs some ram, linux will use some of the buffer space.
[19:12:31] <kevinprince> Gargoyle: agreed ;) need it for something else. will let the kernel work it out then ;)
[19:13:23] <kevinprince> thanks
[19:13:26] <Gargoyle> kevinprince: Just keep an eye out for excessive swapping.
[19:13:33] <kevinprince> will do
[19:14:04] <kevinprince> we are doing some election data analysis, and noticing it spiking but cool :)
[20:15:43] <munro> does a sharded cluster provide redundancy? or do I also have to setup replication? if it does, how do I know the amount of redundancy my data has? and how do I specificy geographic redundancy?
[20:16:17] <wereHamster> munro: sharding is for performance. replication is for redundancy
[20:16:38] <wereHamster> the docs have all the details you need.
[20:22:22] <munro> http://docs.mongodb.org/manual/administration/sharding-architectures/#sharding-high-availability <-- just to be clear, I should replicate every shard I have correct? not the database as a whole
[20:22:51] <kali> yes. every shard must be a replica set
[20:23:47] <munro> http://docs.mongodb.org/manual/administration/backups/#sharded-cluster-backups <-- interesting!
[20:24:14] <munro> ok, thank you all for making things explicit to me!
[20:26:08] <cdave> Gargoyle, i just came across this.. https://launchpad.net/ubuntu/natty/+package/mongodb-clients
[20:26:34] <eka> kali: should? or if one wants durability?
[20:28:33] <kali> eka: yes, it does work with standalone servers. but munro asked about redundancy, so i would not want him confused
[20:28:55] <eka> kali: ok ... sorry didn't see that
[20:30:03] <eka> q: I need to change my shard_key (I know, bummer). How would I do that without losing all the data? I thought of removing all shards one by one, so data ends up on shard1, and then? how to remove the key?
[20:30:55] <Gargoyle> cdave: Good find. I wish Ubuntu would put the version numbers after the names. I forget which ones are which!
[20:31:24] <cdave> yep
[20:31:36] <kali> eka: irk
[20:31:42] <kali> eka: is dumping/reloading an option ?
[20:31:45] <ron> seriously, you don't remember which release is ooga booga?
[20:32:05] <eka> kali: ah... dumping... yea... is there a dump tool?
[20:32:15] <kali> eka: mongodump ? :)
[20:32:17] <ron> mongodump!
[20:32:23] <eka> kali: so I dump... destroy everything... and start fresh right?
[20:32:57] <kali> eka: honestly, if you can afford the downtime, i think it's the easiest way to go
[20:33:12] <eka> kali: yes I can... is beta still
[20:33:36] <eka> kali: the dump will not dump shard config?
[20:34:10] <kali> mmm the dump dumps some metadata now, but i think it's just the indexes
[20:34:30] <kali> you can disable it anyway
[20:34:50] <eka> indexes I want... shard key I dont
[20:35:16] <kali> well, it's quite easy to recreate the indexes, if that's the case
[20:35:24] <eka> right
[20:39:42] <eka> how to know when you need more shards? how many chunks per shard is reasonable ?
[20:47:21] <roxlu> hi, when I use mongo_insert (c-driver), it seems that it blocks till the data is inserted. Is there a way to use non-blocking inserts?
[20:47:46] <eka> roxlu: by default should be fire and forget... did you check the api?
[20:48:17] <roxlu> hmm yes I saw that... but it seems to be blocking.. or am I wrong (?)
[21:03:56] <eka> roxlu: high traffic to your mongo?
[21:06:02] <roxlu> eka: I'm new to mongo, but I'm inserting about 50 entries per second
[21:06:11] <eka> mmm ok
[21:06:29] <kali> roxlu: this is ridiculous. you should get several thousand on a laptop
[21:06:40] <roxlu> hmm
[21:06:46] <kali> roxlu: do you want to paste your code somewhere, so we can have a look ?
[21:06:53] <roxlu> sure, one sec
[21:07:43] <roxlu> https://gist.github.com/eaaf5dac1210c62fdb82
[21:10:01] <kali> so basically you're pumping tweets on one side, pushing them to mongo on the other, and when you get at 50/s, you complain about mongo ? :)
[21:11:18] <eka> lol
[21:11:19] <kali> i think you should write a main() that loops over onTweet with some constant data and check it works on this side
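kali's suggestion, sketched out: loop over the insert path with constant data to see whether the bottleneck is mongo or the tweet feed. The driver call is stubbed here so the harness is self-contained; swapping `insert_stub` for the real `mongo_insert` / collection insert call (hypothetical names) would measure the driver itself:

```python
import time

def insert_stub(doc):
    """Stand-in for the real insert call; replace to benchmark the driver."""
    pass

def benchmark(n=10000):
    doc = {'text': 'constant tweet', 'user': 'benchmark'}  # constant data
    start = time.time()
    for _ in range(n):
        insert_stub(doc)
    elapsed = time.time() - start
    return n / elapsed if elapsed > 0 else float('inf')

rate = benchmark()
print('inserts/sec: %.0f' % rate)
```

If this loop alone reaches thousands per second but the application only manages 50/s, the limit is upstream of mongo (here, the tweet feed).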
[21:13:18] <nbargnesi> yes
[21:13:30] <nbargnesi> d'oh, wrong channel
[22:07:25] <smsfail> Is there a way to implement PubSubHubbub or something similar with MongoDB? I haven't used mongo in a few years so am out of the loop. Any help is appreciated.
[22:33:19] <silverfix> hello
[22:34:17] <silverfix> where can i find the api reference of mongo function ?
[22:35:57] <silverfix> found. http://docs.mongodb.org/manual/reference/command/
[22:36:27] <silverfix> no -_-
[22:36:40] <silverfix> i'd want to read the doc of db.collection.update
[22:37:57] <crudson> http://www.mongodb.org/display/DOCS/Updating
[22:42:13] <manveru> anybody know whether $addToSet is atomic?
[23:03:49] <manveru> ah, looks like
[23:04:02] <manveru> http://www.mongodb.org/display/DOCS/Atomic+Operations says all modifiers are atomic
[23:12:08] <Bilge> DOCS
[23:33:53] <smsfail> anyone?
[23:35:21] <rossdm> if you want to do some kind of pubsub thing you can use capped collections + tailable cursors
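rossdm's capped-collection + tailable-cursor pattern, simulated with an in-memory ring buffer so the sketch runs standalone: in MongoDB, a capped collection plays the role of the fixed-size buffer (old documents fall off the end), and a tailable cursor (optionally with awaitData) plays the role of the consumer loop that keeps reading as new documents arrive:

```python
from collections import deque

class CappedLog:
    """Toy analogue of a capped collection plus a tailing reader."""
    def __init__(self, max_docs):
        # Like a capped collection, the oldest docs are discarded
        # once the size limit is hit.
        self.docs = deque(maxlen=max_docs)

    def insert(self, doc):
        self.docs.append(doc)

    def tail(self, start=0):
        """One polling pass: yield docs from position `start` onward,
        in insertion order -- what a tailable cursor sees."""
        for i, doc in enumerate(list(self.docs)):
            if i >= start:
                yield doc

log = CappedLog(max_docs=3)
for n in range(5):
    log.insert({'n': n})

# Only the last 3 inserts survive, in insertion order:
seen = list(log.tail())
print(seen)  # [{'n': 2}, {'n': 3}, {'n': 4}]
```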