#mongodb logs for Thursday the 17th of January, 2013

[00:03:17] <linsys> owen1: If the master is on the side with the least number of nodes yes, it will demote itself
[00:11:43] <owen1> got it. thanks
[00:18:01] <chanks> Does anyone know if the MongoDB JS library will spawn children which will close socket descriptors in their context? I.e., maybe spawn a child with FD_CLOEXEC and then afterwards re-capture the frame?
[00:19:36] <chanks> Because given a persistent HTTP connection (where only the headers have yet been read and not the body), it seems to be either closing an unrelated connection...or corrupting it with no direct knowledge of its existence.
[00:25:33] <owen1> one of my hosts (primary) has an old conf file. how do i make it use the new one?
[00:30:53] <owen1> i ended up adding the conf to the primary. not sure if that's the correct way.
[01:10:03] <owen1> when initializing a replica set with a configuration file, will members[0] be the primary?
[02:04:37] <kreedy> Can someone tell me what typical memory usage on a mongo box looks like? top only shows the mongo process as using 20% of RAM. The box has ~15GB RAM. the free command shows 95% of it being used as cache. does that cache include memory mapped files?
[02:04:46] <kreedy> we are having some periods of high iowait on the box
[03:41:08] <bean> I have a main server that i want to turn into a replica set. Does it just work if I initiate the RS and add the secondaries?
[03:41:17] <bean> and the data will be replicated?
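That is the standard path; a minimal shell sketch, with placeholder hostnames: restart the existing mongod with a --replSet name, then from its shell:

    rs.initiate()                            // the existing server, data intact, becomes primary
    rs.add("secondary1.example.com:27017")   // new members initial-sync the data automatically
    rs.add("secondary2.example.com:27017")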
[06:20:50] <salentinux> hi guys, I just read that db.eval() cannot be used in a sharded environment.
[06:21:12] <salentinux> I use it heavily in my not-yet-sharded environment.
[06:21:46] <salentinux> Is there any workaround to that problem?
[06:26:44] <sweatn> hello is anyone using mongoosejs with a replica set?
[07:15:01] <samurai2> hi there
[07:15:41] <samurai2> has anybody encountered this kind of error in Java: Jan 17, 2013 3:11:44 PM com.mongodb.DBPortPool gotError
[07:15:41] <samurai2> WARNING: emptying DBPortPool to /localhost:27017 b/c of error
[07:15:46] <samurai2> thanks :)
[08:00:59] <sweatn> hello
[08:12:09] <ron> sweatn: is it me you're looking for?
[08:12:59] <sweatn> I'm not sure but hi
[08:13:09] <sweatn> do you use mongoose?
[08:13:23] <sweatn> or the node native driver
[08:19:29] <ron> I don't use node.js.
[08:19:41] <ron> then again, I also don't use mongodb currently. :)
[08:20:08] <NodeX> you dont use anything
[08:20:18] <ron> I use you.
[08:23:05] <NodeX> you wish
[08:23:32] <ron> dude, every time I type your name, you come running.
[08:35:07] <[AD]Turbo> ciao
[08:41:09] <NodeX> yeh, watch how that changes
[08:45:33] <solars> quick question: I've got a collection with 12 million entries, however, I'm only interested in the entries of the last 6 months (total is 3 years) - if I add a flag "recent": true for the last 6 months, and use it in an index - is this performance-wise the same as if I move the old ones to another collection so they are not processed?
[08:46:01] <solars> or does this still affect the performance somehow if they are lying around in this collection, although not matched by the query
[08:47:55] <NodeX> an index will suffice
[08:48:19] <NodeX> but .... the old docs will still take up room in any other indexes you may have on them
[08:48:51] <chickamade> hey guys, I am experiencing a query that seems to freeze the whole mongodb server (2.2.0) incl. mongotop, mongostat; the query includes a geospatial operator. The only thing that is possible for me is to call db.currentOp() and db.killOp(), otherwise mongodb is completely frozen.
[08:49:01] <solars> NodeX, yeah right, the index space remains of course
[08:49:05] <solars> thanks!
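A sketch of the flag-plus-index approach discussed above; the collection and date field names are assumptions, using 2013-era ensureIndex syntax:

    db.entries.ensureIndex({recent: 1, created_at: -1})      // flag first, then the sort key
    db.entries.find({recent: true}).sort({created_at: -1})   // touches only the recent branch of the index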
[08:51:15] <NodeX> chickamade : pastebin your query
[08:56:25] <chickamade> NodeX: https://gist.github.com/96b4dca19d90aee50c69#file-mongodb-log
[08:57:30] <NodeX> can you pastebin your indexes on that collection?
[08:57:53] <chickamade> NodeX: obviously the query was unoptimized but I'm very surprised that it renders the server down to a piece of rock (so to speak), while cpu & load of the system is minimal
[09:00:00] <chickamade> NodeX: https://gist.github.com/96b4dca19d90aee50c69#file-indexes
[09:01:19] <chickamade> NodeX: I know that the query is not optimized now, but is there any reason why the query could hold a global lock leading to mongotop and mongostat hanging?
[09:05:42] <NodeX> I'm surprised it attempted a sort() tbh
[09:05:54] <NodeX> how many docs are in the collection?
[09:08:04] <chickamade> 27mil in total, the p1 $in clause result set is about 400,000
[09:12:18] <chickamade> it looks like we could use composite (geo, p1) index, correct?
[09:12:36] <chickamade> however i'm concerned about server hanging while the geo query runs
[09:12:59] <chickamade> file a bug report?
[09:14:37] <NodeX> you also need a sort on there
[09:14:49] <NodeX> geo, p1, cr :-1
[09:19:00] <chickamade> ok, my assumption is that the resultset after the (geo, p1) condition is very small (hundreds of rows)
[09:20:29] <chickamade> have you got any idea about the server freeze, though?
[09:24:57] <NodeX> you still need a sort on the index
[09:27:24] <chickamade> NodeX: could you explain? i could just sort it at my app level, no? or does not having a sort confuse the query planner?
[09:28:43] <NodeX> no, your query needs a sort
[09:28:47] <NodeX> *index
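In other words, one index covering both the filter and the sort, as NodeX suggests. A sketch using the field names from the gist (collection name and $in values are placeholders):

    db.coll.ensureIndex({xy: "2d", p1: 1, cr: -1})   // 2d field must come first in a compound geo index
    db.coll.find({xy: {$within: {$centerSphere: [[-122.39, 37.78], 0.0005]}},
                  p1: {$in: [1, 2, 3]}}).sort({cr: -1})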
[09:39:26] <fleetfox> Hey guys. Can i add where to aggregated field?
[09:41:50] <chickamade> NodeX: for better or worse, the query with a lone geo clause of 300m radius (1/10 of before) also hangs the server: coll.count({xy: {$within: {$centerSphere: [[-122.3907852172852,37.78129802378191],0.00004703595114532541]}}})
[09:43:05] <fleetfox> I have db.invoices.aggregate({$group:{_id: "$hansa_id", invoices: { $push: "$_id" }}})
[09:43:49] <fleetfox> how do i filter to have results only where invoices.length > 1 ?
[09:45:43] <fleetfox> I can add a counter and use match? Or is there a better way
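The counter-plus-$match idea works; a sketch built on the pipeline above, in 2.2-style shell syntax:

    db.invoices.aggregate(
        {$group: {_id: "$hansa_id", invoices: {$push: "$_id"}, n: {$sum: 1}}},
        {$match: {n: {$gt: 1}}}   // keep only hansa_ids with more than one invoice
    )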
[10:06:53] <remonvv> \o
[10:14:13] <ron> o/
[11:03:21] <remonvv> ron, happy newyear!
[11:05:24] <ron> remonvv: but it's not.
[11:05:43] <ron> remonvv: but happy new year to you ;)
[11:22:00] <remonvv> it isn't?
[12:12:06] <frqnck> Hiya all, does any1 here know how to forcefully run Mongod garbage collection?
[12:13:14] <frqnck> I am trying to enforce an 'expireAfterSeconds'
[12:51:11] <MatheusOl> frqnck: AFAIK, expireAfterSeconds is checked each minute
[12:52:33] <frqnck> MatheusOl: yes, that's right.. do you know if we can push it sooner tho?
[12:53:16] <MatheusOl> frqnck: humm... Not sure. But from the docs I'd say "no"
[12:53:42] <MatheusOl> Do you really have a situation where you need it to happen more frequently?
[12:54:12] <MatheusOl> Perhaps always checking the date when you select the documents could be a solution
[13:06:06] <frqnck> MatheusOl: For instance, unit-testing-wise it would be good to enforce this expiration.
[13:06:59] <frqnck> As it stands, .. the expiration of data using the standard mechanism is not ensured to even happen every 60s.. I have cases where it took 20mins... it is very random somehow!
[13:10:22] <MatheusOl> humm... It also depends on the workload
[13:10:29] <frqnck> It seems that automatic expiration using MongoDB is not reliable so having an extra date check is required.
[13:10:53] <MatheusOl> That's for sure
[13:11:02] <MatheusOl> If you're really depending on that, yes.
[13:15:05] <frqnck> It would have been convenient to be able to run gc :-(
[13:15:48] <NodeX> 20 mins wait on GC ... something is wrong there
[13:17:37] <frqnck> it is sometimes within 60s, then a few minutes.. and up to 20mins....
[13:17:42] <frqnck> try it.
[13:18:03] <frqnck> There are many reports of this online..
[13:24:30] <xbuzz> when doing mapReduce(map, reduce, {out : {inline : 1}, query {date : "2013-01-17"}}) the reduce function does not appear to work. but when i remove the "query" restriction everything is fine. is there something i'm missing about how query works. my understanding was that query will restrict the results instead of using an entire collection.
[13:25:16] <xbuzz> the query executes fine but does not restrict the data to the date specified.
[13:29:32] <MatheusOl> have you copied that? the syntax is wrong
[13:29:36] <MatheusOl> xbuzz: ^
[13:30:45] <xbuzz> MatheusOl - i just typed that out…
[13:31:29] <frqnck> NodeX: MatheusOl: Here is a console output showing the expire time:
[13:31:29] <frqnck> > new Date
[13:31:29] <frqnck> ISODate("2013-01-17T13:29:49.926Z")
[13:31:30] <frqnck> > db.cache.find({data:'ttl-1'},{expire:1});
[13:31:32] <frqnck> { "_id" : ObjectId("50f7fa130b1dea840d000021"), "expire" : ISODate("2013-01-17T14:18:11Z") }
[13:31:51] <xbuzz> what is wrong with it though?
[13:32:40] <xbuzz> ah i see the missing colon after query. typo. but still it doesn't appear to work.
[13:33:04] <frqnck> The collection above has "expireAfterSeconds" set to 1 and "expire" as the key.
[13:34:00] <MatheusOl> frqnck: In this specific case it is within 1 minute
[13:34:11] <MatheusOl> Oh sorry, it is not
[13:34:12] <MatheusOl> =P
[13:34:19] <frqnck> No.. expiring is still not happening.
[13:34:36] <MatheusOl> Any entry in the logs about that?
[13:37:03] <frqnck> Note that there is something wrong with the date above!!!!! One is one hour ahead.. wtf!!
[13:37:44] <frqnck> ISODate("2013-01-17T13:29:49.926Z") < ISODate("2013-01-17T14:18:11Z") }
[13:48:11] <MatheusOl> So it shouldn't expire after all
[13:48:40] <MatheusOl> =P
[13:51:19] <frqnck> That was a typo in the test.. it would eventually expire but in an hour's time + the next run of internal gc
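For reference, a sketch of the setup under discussion plus the date guard MatheusOl suggested; since the TTL monitor only wakes about once a minute (and sometimes later), reads should not rely on it for precision:

    db.cache.ensureIndex({expire: 1}, {expireAfterSeconds: 1})   // reaper removes docs ~1s past 'expire'
    db.cache.find({data: 'ttl-1', expire: {$gt: new Date()}})    // exclude already-expired docs at read time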
[13:52:53] <CrawfordComeaux> if I'm accessing mongodb via mongoose, can my mongoose schemas only define the data I care about and not necessarily all fields in my documents?
[13:53:51] <xbuzz> MatheusOl: Regarding the issue with map reduce not working when passing "query". http://pastebin.com/7kPgF0aF
[14:07:17] <MatheusOl> CrawfordComeaux: I don't think I understood what you want. But isn't it a case of filtering the selected fields when issuing a find?
[14:09:12] <MatheusOl> xbuzz: Your map function should return all fields
[14:10:37] <MatheusOl> xbuzz: There are not enough documents in your example to check the correctness
[14:16:06] <xbuzz> thanks for looking into it.
[14:18:01] <MatheusOl> xbuzz: humm... Looking now, the schemas returned from map and reduce are different
[14:18:35] <MatheusOl> xbuzz: map uses {daily: {accept: ..., block: .., ...} } and reduce uses {accept: ..., block: ..., ...}
[14:19:06] <MatheusOl> Notice that reduce may receive values from map but also from other reduce
[14:19:49] <MatheusOl> Also I couldn't find what you stated, the results with and without query are NOT equal
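That shape mismatch is the classic mapReduce pitfall: reduce output can be fed back into another reduce, so map must emit exactly the shape reduce returns. A sketch with the field names from the pastebin (collection name assumed):

    var map = function () {
        emit(this.date, {accept: this.accept, block: this.block});   // same shape reduce returns
    };
    var reduce = function (key, values) {
        var out = {accept: 0, block: 0};
        values.forEach(function (v) { out.accept += v.accept; out.block += v.block; });
        return out;
    };
    db.stats.mapReduce(map, reduce, {out: {inline: 1}, query: {date: "2013-01-17"}});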
[14:34:45] <CrawfordComeaux> MatheusOl: I have a collection of tweets that contains way more fields than I need. I'm wondering if the existence of fields that aren't defined in the mongoose schema for the collection will make mongoose angry
[14:36:01] <MatheusOl> humm... I got it now
[14:36:13] <MatheusOl> yeah, I can answer that one
[14:36:29] <CrawfordComeaux> can or can't?
[14:39:08] <MatheusOl> Oh! Sorry! Can't
[14:39:09] <MatheusOl> =P
[14:39:16] <CrawfordComeaux> hehe no problemo
[14:39:33] <CrawfordComeaux> I'm gonna take a nap and then give it a whirl to see what crops up
[14:40:08] <CrawfordComeaux> oooh...unless you know how I can take the results of a find and make a new collection (or remove the unwanted fields)
[14:40:37] <NodeX> do it appside
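App-side can be as simple as projecting the wanted fields and copying them into a slimmer collection; a sketch, with the field names purely hypothetical:

    // keep only the fields the mongoose schema cares about
    db.tweets.find({}, {text: 1, user: 1, created_at: 1}).forEach(function (doc) {
        db.tweets_slim.insert(doc);   // _id is included by default, so identity is preserved
    });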
[15:22:36] <NodeX> Derick : I dont suppose there is a way in php to convert an ObjectId into an integer and back again is there?
[15:23:39] <Zelest> $a = (int)$foo->_id->__toString(); ?
[15:23:43] <Zelest> ugly as hell though
[15:24:01] <NodeX> what about backwards?
[15:24:16] <Zelest> $a = new MongoID($a);
[15:24:18] <Zelest> but..
[15:24:19] <NodeX> back to an ObjectId()
[15:24:28] <NodeX> how hackish is this lol
[15:24:33] <Zelest> though, not sure how (int) handles the hash
[15:24:41] <Zelest> (seeing you use the default one)
[15:24:46] <Zelest> assuming*
[15:25:39] <Derick> NodeX: not really
[15:25:46] <NodeX> does the __toString() method just cast the oid to a string?
[15:25:51] <Derick> yes
[15:27:30] <NodeX> Zelest : that doesn't work :P
[15:27:48] <NodeX> 50f817c1586477716c00003b = (int)50
[15:28:03] <Zelest> mhm
[15:28:14] <Zelest> well, keep it as a string?
[15:28:25] <NodeX> can't, need it as an int for something :/
[15:29:08] <Zelest> hexdec() ?
[15:29:14] <Zelest> *shrugs*
[15:29:38] <NodeX> tried
[15:29:57] <NodeX> it's ok, I was just being picky about a loop that I didn't want to do
[15:31:04] <Zelest> ah, hexdec makes it into a float :P
[15:31:38] <Zelest> anyways, time to go home to my new awesome coffee machine! see ya in an hour or so :D
[15:31:41] <Zelest> o/
[15:33:34] <NodeX> how long is the float
[15:33:36] <NodeX> \o
[15:38:16] <MatheusOl> Would an ObjectId fit in an int?
[15:38:37] <MatheusOl> Isn't ObjectId 12-bytes long?
[15:42:16] <MatheusOl> But I think you can achieve that with gmp_init
[15:42:39] <MatheusOl> Something like: gmp_init($oid_str, 16)
[15:44:36] <NodeX> I think an int is 24bits but I can't remember
[15:46:08] <MatheusOl> 32bits
[15:46:12] <MatheusOl> On most system
[15:46:15] <MatheusOl> *systems
[15:46:31] <MatheusOl> 4 bytes
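MatheusOl is right about the size: an ObjectId is 12 bytes (96 bits), so no native int holds it losslessly; a quick shell illustration of the precision loss:

    parseInt("50f817c1586477716c00003b", 16)   // ~2.49e+28 as a JS double; the low bits are already gone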
[15:51:07] <jmien> Hello everyone. I have a question about deletions. When you delete in MongoDB, does Mongo keep the document at all? I have been using couchdb, and it seems to store the document indefinitely. I was wondering if mongo is the same
[15:55:24] <NodeX> no, it gets removed
[15:56:58] <jmien> ok, thanks.
[16:17:53] <xcat> is calling count() on a cursor free or expensive?
[16:18:38] <xcat> i.e. db.foo.find().count()
[16:18:44] <xcat> How expensive is this?
[16:22:36] <NodeX> expensive - very
[16:42:42] <revoohc> how do you check the log verbosity level on a running mongod?
[16:54:49] <joe_p> revoohc: not sure if getCmdLineOpts will show it
[16:57:40] <mansoor-s> How does one go about escaping for regex search input?
[17:11:07] <MatheusOl> mansoor-s: What language?
[17:11:22] <mansoor-s> MatheusOl, JS
[17:11:48] <mansoor-s> I guess I only want to remove any special characters
[17:11:55] <mansoor-s> leaving only letters and numbers
[17:13:16] <MatheusOl> Actually a simple replace would work
[17:13:31] <MatheusOl> We just need to know all the possible characters
[17:14:05] <MatheusOl> By memory: \/[]{}().*?$^
[17:14:29] <MatheusOl> +|
[17:14:38] <MatheusOl> There must be a list on the net
[17:15:09] <bean|work> input.replace(/\W/g, '')
[17:16:01] <bean|work> or
[17:16:08] <bean|work> replace(/[^0-9A-Za-z]/g, '');
[17:16:22] <bean|work> I think
[17:16:41] <MatheusOl> bean|work: The last one isn't right, there are accents
[17:16:49] <bean|work> :| ok
[17:17:31] <bean|work> actually the \W one probably works best
[17:17:31] <MatheusOl> But the first one was nice
[17:17:34] <MatheusOl> str.replace(/\W/g, '\\$&');
[17:17:42] <bean|work> yep
[17:17:51] <bean|work> "\W: A symbol which is neither from Latin alphabet, nor a digit, nor an underscore, the inversion of \w"
[17:17:56] <MatheusOl> See any case that wouldn't work?
[17:18:18] <bean|work> it would appear that underscores would still make it through the \W
[17:18:39] <MatheusOl> no, it didn't work with accents also
[17:18:45] <bean|work> hmm odd
[17:18:54] <MatheusOl> '^(|testeã'.replace(/\W/g, '\\$&'); => \^\(\|teste\ã
[17:19:07] <MatheusOl> I think that's because of UTF8 chars
[17:19:10] <bean|work> probably.
[17:20:39] <MatheusOl> humm... I forgot, there is the \Q ... \E
[17:20:56] <MatheusOl> it will work better
[17:23:32] <MatheusOl> Too bad, it doesn't work in the mongo shell
[17:26:03] <MatheusOl> I think the best would be this one:
[17:26:15] <MatheusOl> str.replace(/[\[\\\^\$\.\|\?\*\+\(\)]/g, '\\$&')
[17:26:47] <MatheusOl> ops
[17:26:48] <MatheusOl> str.replace(/[\[\\\^\$\.\|\?\*\+\(\)\]]/g, '\\$&')
[17:27:43] <MatheusOl> I got the characters from here: http://www.regular-expressions.info/characters.html
[17:27:47] <MatheusOl> I think it's right
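For the original goal of escaping (rather than stripping) user input, the standard formulation is essentially MatheusOl's character class wrapped in a helper; userInput is a placeholder:

    // escape every regex metacharacter before building a pattern from user input
    function escapeRegExp(str) {
        return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
    }
    new RegExp('^' + escapeRegExp(userInput));   // e.g. an anchored prefix search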
[17:32:14] <jiffe98> I am trying to add a member to a replica set and I followed http://docs.mongodb.org/manual/tutorial/expand-replica-set/ but the primary shows the new member is still initializing and the new member shows "errmsg" : "can't currently get local.system.replset config from self or any seed (EMPTYUNREACHABLE)"
[17:32:33] <jiffe98> the log on the new member shows http://nsab.us/public/mongodb
[17:39:55] <MatheusOl> Have you set the replSet config properly?
[17:40:19] <jiffe98> MatheusOl: the config was copied from another replica
[17:50:08] <MatheusOl> jiffe98: Might it be an issue with bind_ip or so?
[17:50:45] <jiffe98> MatheusOl: they are all bound on 0.0.0.0
[17:51:34] <jiffe98> I just noticed there is a version difference, the other two are running 2.2.0, this is 2.2.2
[17:51:39] <jiffe98> I don't know if that would be an issue
[17:52:07] <MatheusOl> humm
[17:52:17] <MatheusOl> I'm not sure either
[17:52:41] <tworkin> imo a config file shouldnt be invalidated by a patch-level version increment. that should be a feature level increment
[17:54:03] <MatheusOl> I also think that, but this doesn't look like a config file issue
[17:56:05] <MatheusOl> jiffe98: What is the result of rs.status() on both nodes?
[17:56:18] <MatheusOl> both = primary and the new secondary
[17:58:59] <jiffe98> MatheusOl: http://nsab.us/public/mongostatus
[18:04:43] <MatheusOl> Is your database huge?
[18:05:16] <jiffe98> not substantially, 4.5tb or so
[18:05:49] <MatheusOl> Perhaps it couldn't sync
[18:05:59] <MatheusOl> I don't remember the error when it happens
[18:06:07] <jiffe98> the logs don't say anything like that though
[18:06:17] <MatheusOl> Or it is still doing sync
[18:06:41] <jiffe98> I keep seeing "[rsStart] replSet info Couldn't load config yet. Sleeping 20sec and will try again" in the logs
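For reference, the expand-replica-set tutorial boils down to starting the new mongod with the same --replSet name and an empty dbpath, then adding it from the primary; a sketch with a placeholder hostname:

    rs.add("newmember.example.com:27017")   // run on the PRIMARY, not on the new node
    rs.status()                             // the member should move through STARTUP2/RECOVERING to SECONDARY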
[18:15:32] <jhammerman> Hi MongoDB IRC. Does anyone know if Standalone Mongo instances can be sharded?
[18:20:24] <revoohc> jhammerman: yes they can. Do it all the time when I'm testing. Don't run in production that way though.
[18:30:19] <jhammerman> revoohc: Thanks!
[18:37:13] <nemothekid> It's been my experience that in a sharded setup, if I create a collection on a non-primary shard C, the mongos will read the collection from shard C despite the fact that it's not primary. Is this documented anywhere?
[18:43:47] <kchodorow_> nemothekid: you should always create collections through mongos
[18:43:56] <kchodorow_> otherwise mongos won't know how to find them
[18:44:50] <zip_> nodejs mongodb native driver works fine outside a child process; inside a child process (forked), the db object's serverConfig has the correct server information, however connected is false.... tldr node-mongodb-native won't connect from inside a child process, any ideas?
[18:44:56] <nemothekid> So what I'm seeing is an undocumented side effect? Reason I'm asking is that we have some smaller collections - not large enough to be sharded - that we want to live on machines other than the primary
[18:46:55] <nemothekid> we found in some cases like turning profiling on on Shard C, or creating a collection on Shard C, then querying from a mongos would return the data on Shard C. But if this isn't documented it could be just a side effect and gone in the next version
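For what it's worth, the documented way to control where a database's unsharded collections live is movePrimary, rather than relying on where a collection happened to be created; a hedged sketch, database and shard names being placeholders:

    // run against a mongos: moves "mydb" (and its unsharded collections) to shard C
    db.adminCommand({movePrimary: "mydb", to: "shardC"})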
[19:02:16] <addisonj> hey, we are running into a weird case where mongodumping from a secondary and then restoring in a different environment is creating duplicate documents. Once the indexes start being built we get a dupe key error and it crashes
[19:03:55] <addisonj> running on 2.2.2
[19:06:26] <n3wguy87> hello. Not sure if what I want is possible.. I'm looking to build a database structure based on an xml schema from mitre.org - is there any way to import from xsd?
[19:06:49] <n3wguy87> I've scoured the internet to no avail... :(
[19:08:39] <JoeyJoeJo> Is there a command in mongo that will tell me how long a query took?
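For reference, two common shell-side ways to time a query (collection and filter are placeholders):

    db.foo.find({x: 1}).explain()   // the "millis" field reports execution time
    db.setProfilingLevel(1, 100)    // or log every operation slower than 100 ms to system.profile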
[19:12:52] <JoeyJoeJo> I'm doing some performance tests in a test environment where I have 4 servers. I was going to make 2 replica sets with 2 members each. Can I use one additional server to act as an arbiter for both replica sets or do I need two separate arbiters?
[19:14:13] <kali> for performance tests, i would not even bother with arbiters
[19:14:42] <JoeyJoeJo> Oh, I thought I had to use an arbiter
[19:16:03] <zip_> cannot connect to db in a nodejs child process, works fine in parent, any ideas? - sample code: https://github.com/mongodb/node-mongodb-native/issues/852
[19:16:05] <kali> an arbiter is necessary if you want the failover to work, but i don't think you care too much about availability there
[19:16:27] <JoeyJoeJo> You're right, I don't care about failover yet
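On the arbiter question itself: an arbiter belongs to exactly one replica set, so two sets need two arbiter processes, though both can share one machine on different ports; a sketch with placeholder names:

    rs.addArb("arb.example.com:30000")   // from set A's primary
    rs.addArb("arb.example.com:30001")   // from set B's primary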
[19:47:12] <JoeyJoeJo> What's the difference between a replica set and a shard? Is it that one replica set can contain many shards?
[19:50:58] <kali> no.
[19:51:09] <kali> replica sets are servers with the same data
[19:51:41] <owen1> if i have 5 hosts. i want the primary to be #1 and if it dies #2 and if it dies #3. #4 and #5 should never get elected. can i set priority 3, 2, 1, 0, 0 respectively?
[19:52:24] <kali> sharding allows you to deal with a bigger data set by slicing it across several shards, so each shard has a different piece of the data
[19:52:33] <kali> usually a shard is a replica set
[19:54:11] <JoeyJoeJo> Ok, so I have one database that I want to shard to my 4 servers. Does that mean I'll have one shard and one replica set or 4 shards and one replica set?
[19:55:22] <kali> JoeyJoeJo: two shards, each shard a replica set of 2
[19:57:45] <JoeyJoeJo> I guess I still don't understand. I thought the point of having multiple replica sets is just for failover, which I don't need in my test environment
[19:59:15] <kali> then four shards of standalone servers
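A sketch of that four-standalone-shard layout, driven through mongos; all names and the shard key are placeholders:

    sh.addShard("server1.example.com:27017")      // repeat for each of the four servers
    sh.enableSharding("mydb")
    sh.shardCollection("mydb.mycoll", {_id: 1})   // choose a real shard key; _id is only illustrative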
[20:00:26] <owen1> is there a common way for backing up mongo? if i have an RS of 5 hosts, can i think about it as a backup as well as redundancy?
[20:13:17] <jiffe98> owen1: a backup is necessary if something gets deleted or corrupted
[20:18:29] <owen1> jiffe98: got it. reading about it and sounds simple. i guess a daily cron that runs mongodump on one of my secondaries can do that.
[20:18:58] <UForgotten> owen1: yes. always have a mongodump :)
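A sketch of such a cron entry; host and backup path are placeholders, and note that % must be escaped in crontab:

    0 3 * * * mongodump --host secondary1.example.com --port 27017 --out /backups/$(date +\%F)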
[20:26:35] <owen1> if i have 5 hosts. i want the primary to be #1 and if it dies #2 and if it dies #3. #4 and #5 should never get elected. can i set priority 3, 2, 1, 0, 0 respectively?
[20:41:26] <kchodorow_> owen1: yes
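A sketch of that configuration via reconfig, run from the primary's shell:

    cfg = rs.conf()
    cfg.members[0].priority = 3
    cfg.members[1].priority = 2
    cfg.members[2].priority = 1
    cfg.members[3].priority = 0   // priority 0 members can never be elected
    cfg.members[4].priority = 0
    rs.reconfig(cfg)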
[21:02:25] <JoeyJoeJo> I've started my config servers as well as my mongod servers, but when I try to run mongos, I get this message - error command line: too many positional options
[21:03:05] <kchodorow_> JoeyJoeJo: what does your command look like?
[21:03:13] <UForgotten> mongos can't grok the karma sutra eh?
[21:03:33] <JoeyJoeJo> mongos myhostname
[21:03:53] <JoeyJoeJo> I also tried "mongos 127.0.0.1" with the same results
[21:04:36] <JoeyJoeJo> I have 4 servers, all of which will run mongod and 3 will also be config servers
[21:04:48] <JoeyJoeJo> Could that be the problem?
[21:14:00] <kchodorow_> JoeyJoeJo: see step #3 http://docs.mongodb.org/manual/administration/sharding/#set-up-a-sharded-cluster
[21:15:26] <JoeyJoeJo> I figured out the problem. User error. Mongos was already running
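For reference, the "too many positional options" error comes from passing a host as a positional argument; mongos takes its config servers via --configdb (hostnames are placeholders):

    mongos --configdb cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019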
[21:17:32] <kreedy> does anyone know off hand if replicaset heartbeats are sent over TCP or UDP
[21:21:44] <kchodorow_> kreedy: tcp
[21:28:54] <kreedy> damn. i was hoping it was UDP. it would explain my issues better :)
[21:30:59] <JoeyJoeJo> Three of my servers are config + regular mongod shards. Each has two mongod processes, one on port 27017 and one on 27019. Which of those is for the config server and which is the regular db server?
[21:31:13] <kreedy> i end up with dropped heartbeats on AWS (3 servers, each in separate AZ)
[21:31:55] <kreedy> our mongo driver (mongoid 3) freaks out when this happens, even when set to read from primary and even when the dropped heartbeat is between the two secondaries
[21:32:49] <kreedy> https://gist.github.com/78aad723265d361dc1f3 is the log (mongo01 and mongo03 are the secondaries; mongo02 is the primary). kchodorow_: if you or anyone have some insight into where i should go from here, it'd be much appreciated
[21:36:10] <kreedy> we are using auth as well if that's an issue
[21:45:46] <ehershey> JoeyJoeJo: the best way to tell is the command line of the running processes
[21:45:50] <ehershey> and their actual config
[21:46:32] <JoeyJoeJo> Thanks
[21:46:35] <ehershey> but 27017 is the default for mongos/normal mongod and 27019 is the default for config servers
[21:46:47] <ehershey> According to this - http://docs.mongodb.org/manual/administration/security/#security-port-numbers
[21:49:41] <JoeyJoeJo> One more question - I was adding shards and messed up, so I want to start over from scratch. I ran sh.runCommand ({removeShard: "shard1"}), but it's been stuck in "draining" for a while now, even though there is no data. Is there a fast way to just wipe out all sharding settings start over?
[21:52:45] <ehershey> do you have any data at all?
[21:52:48] <JoeyJoeJo> no
[21:52:51] <ehershey> can you just shut everything down and delete your data directories?
[21:53:09] <JoeyJoeJo> I'll give that a shot
[21:53:19] <ehershey> there might be easier ways
[21:53:28] <JoeyJoeJo> I don't know, that sounds pretty easy
[21:53:32] <kali> +1 for stop, rm, start
[21:54:02] <ehershey> :)
[22:33:52] <owen1> kchodorow_: thanks!
[22:40:01] <sirious> i have a script that does a lot of queries against the db, but part way through it crashes with this: [Errno 99] "Cannot assign requested address"
[22:40:42] <sirious> anyone know where i might poke around to find out why it stops allowing new connections to the db?
[23:17:10] <ehershey> what kind of script?
[23:19:12] <sirious> it's a python script that goes over the entries in one collection, and based on the users that have subscribed, updates their accounts with notifications
[23:20:53] <sirious> doing a db.serverStatus() the number of connections never grows past 5
[23:21:14] <sirious> and i ran the script with a user whose ulimit -Sn was over 9000
[23:22:24] <ehershey> I am thinking
[23:22:26] <ehershey> no idea
[23:22:45] <ehershey> can you reproduce it reliably?
[23:22:48] <sirious> yep
[23:22:56] <sirious> i can rerun the script immediately after
[23:23:03] <sirious> and it happens every time after the same amount of time
[23:23:07] <ehershey> does it die in the same place?
[23:23:21] <ehershey> can I see the script?
[23:24:12] <sirious> unfortunately i can't share it. it's on a classified network
[23:25:21] <sirious> definitely seems like its hitting some limit of connections though
[23:25:47] <sirious> i put in a catch where if it detects that it couldn't connect, time.sleep(5) and try again
[23:25:54] <sirious> and it stumbles a little and then eventually dies
[23:26:07] <ehershey> maybe a low level connection leak ?
[23:26:27] <sirious> possibly. but it would have to be outside of python
[23:26:29] <ehershey> where the tcp connection is still there somehow but not from the pov of the server
[23:26:35] <sirious> because if i start the script immediately, it will have issues right away
[23:26:52] <ehershey> can you try to run the same script somewhere else to compare?
[23:26:52] <sirious> like it didn't finish cleaning
[23:26:58] <ehershey> like on a desktop or different server
[23:27:21] <sirious> i could give it a shot. i'd have to get another box in there with a mongo router
[23:28:26] <ehershey> another interesting test would be to swap out the mongodb code with something else
[23:28:41] <ehershey> but that might be more trouble than it's worth
[23:28:55] <sirious> hmm, it would have to be something else attempting to open a tcp connection
[23:28:58] <sirious> but i could give it a try
[23:29:29] <ehershey> a thought
[23:29:41] <ehershey> are you on the latest driver version?
[23:30:23] <sirious> netstat -an | grep 27017 | wc -l
[23:30:25] <sirious> 28239
[23:30:27] <sirious> lollll
[23:30:58] <sirious> umm, i'm using pymongo 2.2 currently on this code release
[23:31:58] <ehershey> that seems like a lot
[23:33:02] <sirious> it's ripping through a LOT of events and making a LOT of notifications for a ton of users it seems
[23:36:26] <sirious> i'm going to venture a guess that's the issue and see what i can do about it :) thanks for the help!
[23:37:52] <ehershey> good luck!
[23:38:00] <ehershey> report back!
[23:38:03] <sirious> will do :)
[23:41:37] <sirious> ehershey: i think i got it :)
[23:41:52] <sirious> net.ipv4.tcp_tw_recycle = 1
[23:41:53] <sirious> net.ipv4.tcp_tw_reuse = 1
[23:41:58] <sirious> no more problem
[23:43:21] <ehershey> wow aha
[23:44:04] <ehershey> we should put a retroactive stackoverflow question up or something
[23:44:17] <ehershey> I'm going to make a blog post at least
[23:44:20] <ehershey> for future googlers
[23:44:29] <sirious> sounds good :)
[23:44:34] <sirious> http://stackoverflow.com/questions/410616/increasing-the-maximum-number-of-tcp-ip-connections-in-linux
[23:44:46] <sirious> once i went down that path, it led me there :)
[23:45:10] <ehershey> nice
[23:53:35] <ehershey> https://twitter.com/ehershey/status/292056635680096256
[23:55:42] <sirious> ehershey: excellent :)
[23:56:03] <ehershey> I will attribute you if you want!
[23:56:12] <ehershey> but figured short and sweet and anonymous was fine
[23:56:31] <sirious> haha, much appreciated, but it can stay as is :)
[23:57:19] <hadees> So I need to figure out what page a comment is on when I paginate comments for a post. What I'm having trouble doing is figuring out how many documents are ahead of it when ordering by created_at desc
[23:58:38] <sirious> hadees: can you just get a .count() of what would be returned if you didn't limit?
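A sketch of that count() approach, with hypothetical field names: count the documents newer than the target comment, then derive its page:

    var ahead = db.comments.count({post_id: post._id, created_at: {$gt: comment.created_at}});
    var page  = Math.floor(ahead / pageSize) + 1;   // 1-based page the comment falls on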