#mongodb logs for Wednesday the 10th of October, 2012

[00:28:48] <hdm> anyone know if you can cancel a background indexing operation?
[00:29:21] <hdm> it doesn't seem to show up in db.currentOp() list and this particular one went into background mode by mistake and is killing db performance while it runs (3T dataset)
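
If the build does show up in db.currentOp(), it can in principle be killed with db.killOp(); a shell sketch of the idea (illustrative only, and killing an in-flight index build on servers of this era was not guaranteed to be safe):

    // scan in-progress operations for a background index build
    db.currentOp().inprog.forEach(function (op) {
        if (op.msg && /bg index build/.test(op.msg)) {
            printjson(op);
            // db.killOp(op.opid);  // uncomment to actually request the kill
        }
    });
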
[00:55:38] <tpae> what's the best practice for indices.. setting fields ascending/descending? does it affect insertion or retrieval or both? should ones that are called more frequently be at the "top"?
[01:45:01] <crudson> tpae: direction isn't enormously important for a simple index
[01:45:15] <crudson> tpae: for compound indexes it matters more
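
A minimal shell sketch of crudson's point, with hypothetical collection and field names:

    // single-field index: direction rarely matters, it can be traversed either way
    db.events.ensureIndex({ ts: 1 });

    // compound index: mixed directions matter, e.g. this one supports
    // sort({ user: 1, ts: -1 }) without an in-memory sort
    db.events.ensureIndex({ user: 1, ts: -1 });
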
[02:52:58] <Oddman> if i have a whole stack of asset dependencies, upon each other - is it better to use some sort of slug or something as the identifier, rather than an ID? The reason I ask is because knowing the IDs when creating assets and sending to another is a little annoying.
[04:24:25] <tpae> what's the best way to build a queue via mongo?
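
One common pattern (a sketch, not the only approach) is to claim jobs atomically with findAndModify; collection and field names here are hypothetical:

    // atomically grab the oldest unclaimed job and mark it as taken
    var job = db.queue.findAndModify({
        query:  { state: "new" },
        sort:   { created: 1 },
        update: { $set: { state: "taken", takenAt: new Date() } }
    });
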
[05:22:52] <sat2050> hi
[05:22:59] <sat2050> can someone tell me the meaning of this error
[05:23:02] <sat2050> Wed Oct 10 05:17:48 [rsSync] info DFM::findAll(): extent 5:74314000 was empty, skipping ahead. ns:local.replset.minvalid
[05:25:07] <amitprakash> Hi.. i am constantly getting a DFM::findAll(): extent 5:74314000 was empty, skipping ahead. ns:local.replset.minvalid on my secondary servers.. should I be worried?
[06:08:00] <chovy> is there a url type for an object?
[06:08:04] <chovy> or should i just use string
[06:09:02] <tpae> i just use string, dunno about url types
[06:16:27] <chovy> tpae: thanks
[06:33:07] <tpae> can you perform regex find on keys?
[06:34:03] <wereHamster> tpae: have you tried reading the documentation?
[06:34:18] <tpae> i'll get to it.. sorry
[07:46:35] <[AD]Turbo> hi there
[07:53:33] <Neptu> hi there
[08:03:42] <Null_Route> hi there
[08:35:44] <roderyk> Question: is it safe to stop mongodb while it's in the middle of a --repair operation?
[08:36:22] <roderyk> are there any consequences? Or is it all in tmp files until finished?
[09:41:38] <Jrz> Hey guys, I'm creating a turnbased async game, but what's the best way to model a game+participants?
[09:42:03] <Jrz> For now they are 2-player games, but maybe I'll add X-player in the future
[09:42:55] <Jrz> brb
[10:17:25] <Jrz> back
[10:23:29] <NodeX> the best model is whatever suits your application
[10:23:38] <NodeX> that's the beauty of no schema :)
[10:24:42] <deverdi> hi guys, how can I groupby DAY using 2.2's aggregation framework?
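
With 2.2's aggregation framework this is usually done with the date operators inside $group; a sketch with hypothetical collection and field names:

    db.hits.aggregate([
        { $group: {
            _id: {
                year:  { $year:       "$ts" },
                month: { $month:      "$ts" },
                day:   { $dayOfMonth: "$ts" }
            },
            count: { $sum: 1 }
        } }
    ]);
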
[10:26:42] <Jrz> NodeX, well ug :0
[10:27:19] <Jrz> NodeX, I was thinking of: game(state, participants(player, removed), currentplayer) and players(….)
[10:28:46] <Jrz> Should I move removed / archived games to a different collection, or does it not matter?
[10:29:09] <Jrz> I also wanted to reference the activegames in players… so I don't have to search with an index in games
[10:41:47] <coalado> Hi. I'd like to da a query like where doc.field1 != doc.field2
[10:43:32] <Derick> How would that work? What are you trying to query exactly?
[10:47:09] <coalado> I already solved the problem
[10:47:16] <coalado> by using a $where clause
[10:47:58] <kali> coalado: you're aware of the performance implications
[10:48:01] <kali> coalado: ?
[10:48:12] <coalado> kali: I am. thanks
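
For reference, the $where form of coalado's query looks something like this (field names hypothetical); kali's warning applies because $where evaluates JavaScript per document and cannot use an index:

    // full collection scan: compares two fields of each document in JS
    db.things.find({ $where: "this.field1 != this.field2" });
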
[11:13:23] <NodeX> Jrz : I dont have a clue what "game(state, participants(player, removed), currentplayer) and players(….)" means
[11:42:47] <timroes> Hi I have a question about a load test. I used the following script: http://pastebin.com/wA2DXWwt and ran it 20 times in parallel. It's been running for around two hours now and memory is slowly but steadily filling. It started with around 200MB memory load, now it's at 1.5GB and you can watch it increasing. I guess there is some mistake in the script, since I guess MongoDB won't run out of memory just because of some write/read operations
[11:52:00] <NodeX> timroes : the memory is filled up by MRU
[11:52:53] <timroes> okay, as it seems it is also stopping now around 1.5GB (on a 2GB machine)
[11:53:04] <timroes> so it takes care that it will never run out of memory?
[11:53:19] <NodeX> yes, they're mmapped
[11:53:37] <timroes> okay thanks a lot :)
[12:45:31] <Jrz> NodeX: collection games, with each document to have a state, and a list of participants, which point to a player and field 'removed'..
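
A purely illustrative sketch of the document shape Jrz seems to be describing:

    db.games.insert({
        state: "active",
        currentPlayer: ObjectId("507f1f77bcf86cd799439011"),
        participants: [
            { player: ObjectId("507f1f77bcf86cd799439011"), removed: false },
            { player: ObjectId("507f191e810c19729de860ea"), removed: false }
        ]
    });
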
[12:52:53] <stevie-bash> Hello, can somebody explain to me where we lose the 226 millis in this explain: http://pastie.org/private/pffchctikhnabkyzofbfua. It's mongodb 2.2.0 and the query was done via a router.
[12:52:59] <stevie-bash> mongodb-router
[14:21:13] <sag> do we have to manage the connection to master and slave for read from slave and write to master
[14:22:08] <sag> i mean do we have to create connection accordingly in client
[14:30:23] <NodeX> no
[14:31:32] <sag> ok
[14:31:44] <sag> then how can i ensure reads from slave and writes to master
[14:32:04] <NodeX> by setting the preference in your connection string
[14:32:18] <sag> ?
[14:32:25] <NodeX> http://docs.mongodb.org/manual/applications/replication/
[14:32:32] <NodeX> first result on google - how strange
[14:32:53] <NodeX> http://docs.mongodb.org/manual/applications/replication/#read-preference
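
The connection-string form NodeX means looks roughly like this (hosts, database, and replica-set name are placeholders; as the next lines discuss, read-preference support in the PHP driver only arrived with the 1.3 series):

    mongodb://host1:27017,host2:27017/mydb?replicaSet=rs0&readPreference=secondaryPreferred
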
[14:33:14] <sag> i am using php-mongo-1.2.12
[14:33:20] <sag> driver
[14:33:27] <NodeX> awesome, it still applies
[14:37:01] <Derick> but 1.3.0 isn't out yet
[14:37:04] <sag> first line on http://in2.php.net/manual/en/mongo.readpreferences.php
[14:37:45] <sag> i know its in beta...thats why i asked
[14:38:22] <sag> there's confusion between the mongodb docs and php.net
[14:38:34] <Derick> what's the prob?
[14:39:05] <sag> it can be a huge problem for legacy systems
[14:40:03] <sag> consider this, i am on an old kernel, say Centos 3.06, with apache 2.0.63, php 5.2.6 ..now tell me can i use the mongo driver safely
[14:40:19] <sag> ?
[14:40:40] <sag> i dont think so, as 1.3.0beta2 says it will need php >=5.3
[14:40:48] <doxavore> If I want to copy a database to another name on the same machine, can I shut down mongod, copy the files (with the new DB name in them), and start the server? Or am I asking for trouble?
[14:41:27] <sag> no need to shutdown
[14:41:53] <sag> no trouble..i did it half an hour ago with 0.5GB of db
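
For what it's worth, the server can also do this itself without touching files on disk; a shell sketch with hypothetical database names:

    // copy 'olddb' to 'newdb' on the same mongod, no shutdown required
    db.copyDatabase("olddb", "newdb");
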
[14:43:51] <sag> again if i am running apache in worker mpm
[14:44:28] <sag> trust me i have spent many days in pain..pulling hairs outta my and my colleagues' heads
[14:45:09] <sag> to set up a working environment for mongo
[14:47:01] <sag> @Derick: i want to design in that way it was described in http://docs.mongodb.org/manual/applications/replication/#read-preference
[14:47:26] <Derick> sag: we don't support 5.2 anymore
[14:47:28] <Derick> upgrade!
[14:47:33] <sag> where it was mentioned "you might use secondary read for"
[14:47:44] <sag> oops!!
[14:47:49] <sag> yep..
[14:48:01] <Derick> also, don't use the worker MPM
[14:48:25] <sag> i know i have to upgrade..but its painful..on environments like mine..where there are many dependent services
[14:48:50] <sag> i have figured it out today somehow..
[14:49:07] <sag> i have configured apache+fpm+mod_fcgi_php5.3
[14:49:31] <sag> apache + fpm + mod_fcgi + php5.3 + php-mongo-1.2.12
[14:49:40] <Derick> that ought to work
[14:49:50] <sag> yep..its working now on centos 3.6
[14:50:00] <sag> but it seems slow..
[14:50:11] <sag> at least after my httperf test
[14:50:20] <sag> i mean it is slow..
[14:51:04] <sag> compared to apache-worker
[14:51:20] <NodeX> apache is always slow :P
[14:51:30] <Derick> NodeX: so if your face :P
[14:51:33] <Derick> seriously though
[14:51:38] <NodeX> if my face?
[14:51:41] <Derick> I wouldn't use apache with mod fast-cgi
[14:51:46] <Derick> NodeX: meh, s/if/is
[14:51:50] <NodeX> fail :P
[14:51:52] <sag> then?
[14:51:59] <Derick> i would just nginx or lighttpd with fpm
[14:52:11] <NodeX> nginx ftw
[14:52:19] <Derick> s/just/use
[14:52:21] <NodeX> (better support)
[14:52:22] <Derick> I can't type!
[14:52:28] <sag> ok..
[14:53:06] <_m1> sag: As the others stated, this is why we use nginx.
[14:53:11] <Derick> apache with fast cgi and fpm is just a lot of overhead
[14:53:14] <sag> @Derick but which handler for fpm
[14:53:22] <Derick> sag: ? What do you mean?
[14:53:31] <Derick> nginx talks fastcgi directly to php-fpm
[14:53:58] <sag> ok..so it is fastcgi or fastcgid
[14:54:04] <Derick> ?
[14:54:11] <Derick> fastcgi is just a protocol
[14:54:32] <NodeX> doesnt the latest php have it's own daemon baked in also?
[14:54:39] <sag> i just read that fastcgi is obsolete and now there is something called "fastcgid" or "fastcgi handler"
[14:54:49] <Derick> NodeX: yes, but not for production use
[14:54:53] <Derick> sag: that's with apache
[14:55:19] <sag> yep..but that daemon is listening on a port..and a server like nginx or apache should delegate to that daemon via fastcgi handlers
[14:55:29] <sag> ohh ok..
[14:55:31] <NodeX> correct
[14:55:48] <NodeX> nginx proxies the script and spits out the response
[14:55:53] <sag> so for nginx there is a separate module for fastcgi
[14:56:09] <NodeX> it comes built in, it's called fastcgi_pass
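
A minimal sketch of the nginx config NodeX is describing, assuming php-fpm listens on 127.0.0.1:9000 (paths and port are placeholders):

    location ~ \.php$ {
        include       fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass  127.0.0.1:9000;  # hand the request to php-fpm over FastCGI
    }
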
[14:56:22] <sag> ok which version for nginx
[14:56:28] <NodeX> (well the part you want anyway)
[14:56:31] <Derick> "latest" :-)
[14:56:39] <sag> :)
[14:56:45] <NodeX> [09:41:52] * Topic is 'Welcome to the Nginx support channel! | Latest Releases: [development 1.3.7] [stable 1.2.4] |
[14:56:46] <sag> is it stable?
[14:57:01] <NodeX> very
[14:57:13] <NodeX> and very predictable
[14:57:23] <NodeX> ~30mb footprint
[14:58:06] <NodeX> just don't expect all the bells and whistles of apache like .htaccess and dynamic rewriting on the fly
[14:58:27] <NodeX> (rewriting from a .htaccess file)
[14:59:02] <Derick> there is some rewriting in lighttpd though
[14:59:15] <NodeX> you can rewrite in both
[14:59:16] <sag> but it can be done in apache also, i mean i still couldn't figure out..why it's overhead with apache only..
[14:59:24] <NodeX> you just have to reload the config every time you do it
[14:59:29] <Derick> because apache is old and ancient
[14:59:39] <Derick> just like comparing MySQL with MongoDB f.e. :)
[15:00:01] <NodeX> or disco with bach!
[15:00:05] <_m> Or MySQL with… anything.
[15:00:12] <NodeX> LOL + 1
[15:00:18] <sag> c'mon man..it's been proven stable for many years..and its 2.4 came with gr8 control and gr8 documentation..
[15:00:19] <Derick> NodeX: hey, no dissing on disco or bach :-)
[15:00:28] <NodeX> I like them both, honest!
[15:00:28] <sag> LOL
[15:00:37] <Derick> sag: I have stopped using apache about 4/5 years ago
[15:00:40] <Derick> it's too big and complex
[15:00:50] <NodeX> and it eats resources
[15:00:59] <Derick> with lighttpd I can run many different php versions at the same time, with much better control over resources
[15:01:00] <_m> Nginx + Thin = winning
[15:01:05] <ppetermann> i prefer varnish over nginx though
[15:01:09] <NodeX> nginx = 10 mins to compile, configure, install
[15:01:21] <NodeX> varnish blows and it's a cache not a webserver !
[15:01:28] <NodeX> well it doesnt blow
[15:01:31] <NodeX> but it's not needed
[15:01:54] <sag> yep..
[15:01:56] <NodeX> it was in the days when no-one could get above 10k req/s
[15:01:59] <sag> reverse proxy
[15:02:13] <NodeX> it's just a CPU hog and another thing that can go wrong imo
[15:02:19] <ppetermann> most use of nginx is the proxy as well
[15:02:45] <NodeX> nginx flies with static files and is comparable to apache on cgi
[15:02:47] <sag> @NodeX so can u get 10k req/s in nginx
[15:02:49] <ppetermann> besides i prefer to not have the same software do webserver, reverse cache and mail proxy
[15:02:59] <NodeX> I can get 55k req/s with nginx :)
[15:03:19] <NodeX> straight out of memory but a lot all the same
[15:03:28] <sag> just tell me your hardware man..it must be a powerhouse
[15:03:31] <Derick> ppetermann: your mail server shouldn't be on the same machine :P
[15:03:36] <NodeX> 64gb ram, quad xeon's
[15:03:41] <Derick> ppetermann: also, are you off to IPC next week?
[15:04:11] <ppetermann> Derick: nope, im not
[15:04:16] <ppetermann> Derick: not this year, maybe next
[15:04:20] <Derick> ok
[15:04:26] <ppetermann> tons of work =)
[15:04:27] <sag> what about dual core and 4GB
[15:04:43] <NodeX> Requests per second: 20980.02 [#/sec] (mean) <--- 16gb ram and dual core Xeon's
[15:05:02] <NodeX> sag, I dont have a box that small but It will out perform apache
[15:05:12] <NodeX> and it will use far less resources
[15:05:26] <_m> sag: I can confirm this.
[15:05:27] <NodeX> and it is the future
[15:05:37] <ppetermann> apache is the past
[15:06:01] <NodeX> apache is ok for mysql based apps with a few users but dont ever dream of scaling with it
[15:06:21] <NodeX> nginx and mongo share the same parallel scaling principles
[15:06:32] <NodeX> as does fpm to some extent
[15:07:05] <sag> ok..i have that small box..
[15:07:07] <_m> sag: I'm with a much smaller company and we generally serve ~500 req/s with a 60ms response time. Nginx and thin backed by Mongo.
[15:07:22] <_m> Our boxes are approximate to the 4GB one you described.
[15:07:34] <NodeX> perfect
[15:07:36] <_m> With apache, we couldn't get half of that. It's just too dang slow.
[15:07:38] <NodeX> nice low running costs
[15:07:49] <NodeX> and it's good for the environment!
[15:08:59] <sag> i ran httperf today and got ~500 req/sec with ~90ms response time
[15:09:25] <NodeX> from apache?
[15:09:26] <_m> Note that by "boxes" I mean the web nodes. Our mongo machines are sultry behemoths.
[15:09:41] <NodeX> _m : you love your big words lol
[15:09:43] <sag> i mean it was ~15 concurrent reqs and it served ~500 req per second
[15:10:06] <_m> NodeX: I like articulation, yes. ;)
[15:10:06] <sag> yep from apache 2.0.63
[15:10:27] <NodeX> I know that one ... is it to do with large lorries ? :P
[15:18:04] <_m> While we're completely off-topic, can anyone recommend a software load balancer akin to haproxy?
[15:18:49] <NodeX> I dont think anything quite rivals haproxy tbh
[15:19:29] <NodeX> Derick : is the .sock connection string going to make it into the next driver update?
[15:19:43] <Derick> it is merged!
[15:19:53] <Derick> i think
[15:19:55] <Derick> let me check
[15:20:17] <Derick> oh, not yet
[15:20:31] <Derick> https://github.com/mongodb/mongo-php-driver/pull/199
[15:20:33] <Derick> watch that
[15:21:17] <NodeX> " Removed superfluous argument. " LOL
[15:21:33] <NodeX> my commits are always "New commit" or "bug fix"
[15:21:44] <NodeX> or sometimes "sadkfjlaskjdh"
[15:23:04] <NodeX> good thing I'm the only maintainer and I never need to go back and see history
[15:40:22] <wdev> Trying to sort out the docs. Can somebody confirm or deny that if you do an rs.remove('someHiddenMember') with priority:0 it will force a master election (thus breaking writes)? Stuck on 2.0.1 for this set.
[18:54:09] <SeyelentEco> Anyone seen this error: threw away reply, please try again
[18:54:19] <SeyelentEco> Google turns up nothing
[18:54:31] <SeyelentEco> MongoCursorException [ 0 ]: threw away reply, please try again
[18:54:46] <SeyelentEco> only happens when connecting to a server on a different machine
[18:54:57] <gabbah> hi guys. I'm using play 2 and Salat to access mongodb. My issue is I'm not getting any sort of exception when I do a update operation with an objectId that does not exist. How can I find out if the update actually found an object to update at all?
[18:57:24] <crudson> gabbah: the mongodb docs on 'Updating' will tell you
[18:58:11] <gabbah> crudson, as far as i can see, the ONLY way is issuing a getLastError command AFTER... is that really the way? no exception?
[18:59:05] <gabbah> I can't get the update command to throw an exception (via Salat)?
[19:01:20] <SeyelentEco> FYI I figured it out. I was using persistent connections. Apparently that doesn't work?
[19:05:16] <crudson> gabbah: that's how it works. Or findAndModify will return null if it matched no doc for an update
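
In shell terms, the mechanism crudson describes looks like this (Salat wraps the same thing; collection and field names are illustrative):

    db.players.update({ _id: someId }, { $set: { score: 10 } });
    var gle = db.runCommand({ getLastError: 1 });
    // gle.n is the number of documents matched; 0 means nothing was updated
    print(gle.n);
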
[19:06:39] <gabbah> have you used Salat?
[19:07:21] <crudson> never heard of it, sorry
[19:07:25] <gabbah> according to http://repo.novus.com/salat-presentation/#14 i'm supposed to get a result back actually, from Salat, informing me if it was updated
[19:12:49] <_m> gabbah: You must enable safe mode to get a return value. Otherwise, mongo is "fire and forget"
[19:13:17] <gabbah> yes i have WriteMode.SAFE
[19:13:49] <_m> Could be a bug in your driver then. Never heard of Salat, sorry.
[19:14:44] <crudson> gabbah: also be sure you're not doing an upsert
[19:14:48] <gabbah> ok it's a wrapper for mongodb for Play 2
[19:15:08] <gabbah> yeah it's not upsert.. that is set to false. Also checking the collection after shows no change
[19:39:31] <drudge> Hello. I'm having some issues doing bulk insertion with the mongodb node driver
[19:40:01] <drudge> I have an array of 107k docs I want to insert, however the insertion fails with: Document exceeds maximal allowed bson size of 16777216 bytes
[19:40:51] <drudge> the documents themselves are rather small, not sure why I'm getting the error
[19:51:34] <_m> You cannot insert more than 16MB at a time.
[19:52:00] <_m> that limit applies to the total batch as well as to a single document
[19:53:15] <_m> I generally batch in groups of 1,000.
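
A sketch of the batching _m describes, against the node driver of the era (the `collection` handle, `docs` array, and batch size of 1,000 are assumptions):

    // assumes `collection` is an open node-driver collection handle
    // and `docs` is the full 107k-element array
    function insertBatch(i) {
        if (i >= docs.length) return;
        collection.insert(docs.slice(i, i + 1000), { safe: true }, function (err) {
            if (err) throw err;
            insertBatch(i + 1000);  // insert the next batch once this one is acknowledged
        });
    }
    insertBatch(0);
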
[22:28:27] <manveru> heya
[22:28:35] <manveru> how do you do an upsert on a sharded collection?
[22:29:43] <manveru> tried using the shard key only in the find argument, but the resulting doc has a key of "\u0010" for some strange reason
[22:30:30] <manveru> and if the doc exists already, mongo complains if i try to set it
[22:30:59] <manveru> pretty much similar to this: http://stackoverflow.com/questions/7005604/what-does-cant-modify-shard-keys-value-fieldid-for-collection-foo-foo-mean
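
The shape such an upsert needs to take: the query must carry the full shard key, and the update must leave it unchanged (names hypothetical; the shard key here is assumed to be { uid: 1 }):

    db.profiles.update(
        { uid: 42 },                         // full shard key in the query
        { $set: { lastSeen: new Date() } },  // must not modify the shard key itself
        true                                 // upsert
    );
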
[23:10:46] <crill> Hey, I'm trying to add an element an embedded document array of time series data and I'm not sure what the best way to do it is.
[23:16:47] <VooDooNOFX_> crill, can you give us an example, and possibly an example of the type of query you're looking to perform on it.
[23:19:48] <crill> VooDooNOFX_: sure (I'm very new to mongo). I have a document users = { "username" : "crill",
[23:19:56] <crill> err, that was early
[23:20:43] <crill> users = { "username" : "crill", "money": [] }
[23:21:29] <crill> I want to have a timeseries array representing how much money a user has
[23:22:53] <crill> I want to push values into that array { "timestamp" : <datetime>, "dollars" : 17 }
[23:24:12] <crill> I effectively want to have an array that I can add to. I've read that capped collections would be a good choice, but I'll get there once I come up to speed
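
A sketch of the $push crill is after, using the field names from his example:

    db.users.update(
        { username: "crill" },
        { $push: { money: { timestamp: new Date(), dollars: 17 } } }
    );
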
[23:40:13] <krispyjala> hi all, so there's no way to set a read preference such that, in a nonsharded replica set on 2.2.0, if the primary is locked due to writing, it switches to the secondary?
[23:41:06] <krispyjala> seems like primaryPreferred or secondaryPreferred just sets them to only access those, except for failovers