[02:42:30] <leptone> 2015-08-13T19:36:26.548-0700 warning: Failed to connect to 127.0.0.1:27017, reason: errno:61 Connection refused
[02:42:30] <leptone> 2015-08-13T19:36:26.549-0700 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
[03:05:57] <leptone> huh it seems my databases are all gone... :/
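A common cause of databases "disappearing" like this is mongod being restarted with a different dbpath than before (e.g. the package default instead of a custom data directory). Once the server is reachable again, the shell can confirm which path it is actually using; a minimal sketch, the path shown is just an example:

    // Ask the running mongod which dbpath it was started with
    db.adminCommand({ getCmdLineOpts: 1 }).parsed.storage
    // e.g. { "dbPath" : "/data/db" } -- if the old database files live
    // elsewhere, restart mongod pointed back at that directory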
[03:18:00] <one00handed> having an issue connecting to mongo from a remote server. bind address is commented out, firewall is opened up. I can ssh into the box and log in to the database with the credentials I'm trying on the remote box, but trying the exact same user/password on the remote box gives me an error 18 (authentication failed)... can anyone help me here?
[03:19:49] <Boomtime> one00handed: you can "ssh in to the box and log in to the database with the credentials" <- how do you connect to the database? the mongo shell?
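An error 18 that only appears over the network, while the same credentials work locally, is often just the remote shell authenticating against the wrong database; the user must authenticate against the database it was created in. A hedged sketch, host and names are placeholders:

    // From the remote box, name the auth database explicitly:
    //   mongo --host db.example.com -u appUser -p secret --authenticationDatabase admin
    // or equivalently from inside an already-open shell:
    db.getSiblingDB("admin").auth("appUser", "secret")  // returns 1 on success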
[08:01:01] <Boomtime> gcfhvjbkn: so if the database can't access the files it uses to back the datastore it should just plow on anyway?
[08:02:06] <gcfhvjbkn> Boomtime: it should stop receiving requests maybe? i don't really know though, just wanted to hear tips/opinions
[08:02:17] <gcfhvjbkn> anyway, what can I do to avoid this? I've got a concurrent app that sends requests to the db at a high rate; it uses one mongo client instance with one connection pool (as far as I can judge), so my guess is that this file descriptor overflow happens if I try to make too many queries simultaneously
[08:02:33] <gcfhvjbkn> any way i can force mongo client to cap its connection pool?
[08:05:18] <gcfhvjbkn> apart from increasing ulimit -n, which i did ofc
[08:13:40] <Boomtime> gcfhvjbkn: apparently you didn't raise ulimits, or at least not by enough
[08:14:19] <Boomtime> also, your log file shows around 200 connections being made very quickly, do you know why this is?
[08:25:03] <gcfhvjbkn> Boomtime: i guess raising ulimits far enough will do the trick; as to connections, i am not sure how mongo-java-driver allocates connections
[08:26:25] <gcfhvjbkn> as of now it looks like it creates one per akka thread and doesn't close it until the actor is done processing its message
[08:26:49] <gcfhvjbkn> which means that if 200 messages are being processed simultaneously the pool has 200 connections
[08:27:39] <Boomtime> yes, that is how it works, did you issue 200 requests at once?
[08:29:00] <Boomtime> just that you don't sound very sure
[08:29:24] <Boomtime> anyway, the default pool size in the java driver is 100, so do you have multiple clients too, or did you change the pool size?
[08:33:45] <gcfhvjbkn> well, I should be making somewhere between 256 and 1024 now; no, I use one client and I didn't change the pool size, unless there's a bug in my code or the relationship between connection count and # of file descriptors used by mongo is not 1-to-1; what happens if the pool is depleted? does it block?
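The cap gcfhvjbkn is asking about can be set via the connection string, which the Java driver's MongoClientURI also honors; when the pool is exhausted, further requests queue and block (up to a wait timeout) rather than opening new connections. A sketch in Node.js driver style with placeholder values:

    var MongoClient = require('mongodb').MongoClient;

    // maxPoolSize caps the pool; waitQueueTimeoutMS bounds how long a request
    // blocks waiting for a free connection before failing. Each pooled
    // connection still costs mongod a file descriptor, so ulimit -n must
    // comfortably exceed the pool size times the number of clients.
    var uri = 'mongodb://127.0.0.1:27017/mydb?maxPoolSize=100&waitQueueTimeoutMS=2000';

    MongoClient.connect(uri, function (err, db) {
      if (err) throw err;
      // ... issue queries; at most 100 sockets stay open at once
      db.close();
    });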
[09:46:55] <zlatko> I have a question about a mapReduce. I have a global variable, and I can't access it in my `map` function. Is that expected?
[09:47:20] <zlatko> FWIW, I set that global variable in a previous query (as a result of a cursor that I run).
[09:48:25] <zlatko> ie. var cursor = db.something.find(); var globalVar = <loop cursor and do the thing>; db.mapReduce(functionWhereICantAccessGlobalVar...)
[10:14:24] <zlatko> Well, apparently mapReduce supports a `scope` thing, so db.collection.mapReduce(map, reduce, {scope: myStuff})
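For anyone finding this later: shell globals are not visible inside server-side map/reduce functions, and scope is the supported way to pass values in. A minimal sketch along the lines zlatko describes, with invented collection and field names:

    // Compute something client-side first
    var threshold = 0;
    db.something.find().forEach(function (doc) { threshold += doc.weight; });

    // Hand it to the server via scope; inside map() it appears as a global
    db.orders.mapReduce(
      function () { if (this.total > threshold) emit(this.customer, this.total); },
      function (key, values) { return Array.sum(values); },
      { out: { inline: 1 }, scope: { threshold: threshold } }
    );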
[11:41:31] <supersym> can anyone tell me how I convert a stored UUID as SUBTYPE_UUID v4, back to string?
[11:46:37] <supersym> There is a https://github.com/confuser/node-mongo-uuid-helper/blob/master/index.js
[11:47:00] <supersym> But it converts to v3 and I am unclear on what changes to make in order to patch it for a v4 conversion
[11:48:03] <joannac> I'm not sure what this has to do with MongoDB. Sounds like a Node.js issue?
[11:48:50] <joannac> Anyway, I don't know. Maybe someone else will
[11:49:06] <supersym> joannac: erh, well... having it stored using special mongodb bson helpers, one would think it has to do with the format it is stored in
[11:49:14] <supersym> but anyway, thanks for the effort!
[11:55:51] <supersym> ugh..... too simple: nodejs buffer .toString ><
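What supersym landed on, spelled out: subtype 4 stores the UUID bytes in order (unlike the byte-swapped legacy subtype 3), so the Node.js Buffer inside the BSON Binary can be hex-encoded and dashed directly. A sketch; the document shape is an assumption:

    // doc.uuid is a BSON Binary with sub_type 4 (standard UUID)
    var hex = doc.uuid.buffer.toString('hex');  // 32 hex chars, no reordering
    var uuid = [
      hex.slice(0, 8), hex.slice(8, 12), hex.slice(12, 16),
      hex.slice(16, 20), hex.slice(20)
    ].join('-');
    // a subtype 3 value (legacy Java/C# encodings) would need byte
    // swapping first, which is what the helper linked above handles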
[15:35:52] <MacWinne_> GothAlice, how's your comfort level with using WiredTiger now?
[15:36:58] <GothAlice> MacWinne_: Still waiting for enough bugs to get ironed out to become the default engine, before I switch production.
[15:36:59] <StephenLynx> speaking about wt, is it the default engine yet?
[15:39:24] <StephenLynx> diegoaguilar: db.collection.drop(), if I am not mistaken
[15:39:26] <MacWinne_> StephenLynx, I think they recommend making a new version of the file and deleting the old one..
[15:39:39] <GothAlice> Generally why I abstract one layer beyond GridFS, using metadata/management documents with references to GridFS files.
[15:39:41] <diegoaguilar> StephenLynx, sorry I meant linux command line
[15:41:18] <StephenLynx> now I just store the last time the file was modified in its metadata.
[15:41:26] <StephenLynx> and don't delete it anymore
[15:42:50] <MacWinne_> how much compression can one reasonably expect with WT and simple JSON documents in a log format, with a lot of repeated content?
[15:43:10] <MacWinne_> like 80%? or more like 30-40%?
[15:43:30] <MacWinne_> curious based on people's experience
[16:14:42] <daveops> MacWinne_: it obviously depends, but for logs which have a lot of duplication, 80% is certainly reasonable
[16:15:23] <MacWinne_> cool.. that'll save lots of cost for us
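For squeezing more out of it: WiredTiger defaults to snappy block compression, and zlib trades CPU for a noticeably better ratio on repetitive log data; it can be enabled per collection from the shell. A sketch, collection name invented:

    // Create a log collection using zlib instead of the default snappy
    db.createCollection("applogs", {
      storageEngine: { wiredTiger: { configString: "block_compressor=zlib" } }
    });

    // Later, compare logical size vs on-disk size to measure the real ratio
    var s = db.applogs.stats();
    print("on disk: " + (100 * s.storageSize / s.size).toFixed(1) + "% of logical size");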
[16:39:23] <diegoaguilar> Hello, can I create an index on a nested attribute?
[16:58:15] <jamiel> Well, 25,000 records is not very big, so that sounds right... also, if you're using the mongo shell, createIndex will block your shell until it's done, and you'll get feedback saying it succeeded.
[16:58:39] <diegoaguilar> that last run was without background set to true
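To capture the answer to diegoaguilar's question: yes, nested attributes are indexed with dot notation, and background: true avoids blocking (at the cost of a slower build). A sketch with placeholder names:

    // Index a field inside an embedded document
    db.users.createIndex({ "address.city": 1 });

    // Non-blocking variant: shell returns immediately, build runs in background
    db.users.createIndex({ "address.city": 1 }, { background: true });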
[17:36:40] <cheeser> mongo doesn't support multi-document locks/transactions
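The usual workaround in this era: keep data that must change together inside a single document, since single-document updates are atomic. A hedged sketch of the pattern, names invented:

    // Atomic within one document: adjust a balance and append a ledger
    // entry together, guarded by a precondition in the query
    db.accounts.findAndModify({
      query:  { _id: accountId, balance: { $gte: amount } },
      update: {
        $inc:  { balance: -amount },
        $push: { ledger: { ts: new Date(), delta: -amount } }
      },
      new: true
    });
    // Everything above applies or nothing does; spanning two documents
    // requires an application-level pattern such as two-phase commit.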
[18:53:43] <diegoaguilar> Hello, I'm trying to perform an unordered bulk operation
[18:54:00] <diegoaguilar> I have a find with upsert and update chainable calls
[18:54:16] <diegoaguilar> but it's not inserting or updating
[18:54:24] <diegoaguilar> this is what Im trying http://kopy.io/8MCWt
[18:54:47] <diegoaguilar> I read that I should only use update operators, and that's what I'm doing; I guess the $push is what keeps it from updating
[18:55:24] <diegoaguilar> before adding it, it worked properly
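For reference, since the paste link has since expired: the shape being described would look roughly like the sketch below, and one classic failure is $set and $push touching the same field path in one update, which the server rejects as a conflict. Names are invented:

    var bulk = db.scores.initializeUnorderedBulkOp();

    // upsert + update using update operators only
    bulk.find({ player: "alice" }).upsert().updateOne({
      $set:  { lastSeen: new Date() },
      $push: { games: { at: new Date(), score: 42 } }  // must not overlap a $set path
    });

    var res = bulk.execute();  // per-operation errors surface here
    printjson(res);            // nUpserted / nModified show what actually happened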
[22:17:24] <kopasetik> PROBLEM: when I try sorting on 2 parameters, I keep getting a "BadValue bad sort specification" error
[22:32:45] <kopasetik> nodejs using the mongodb driver
[22:33:00] <joannac> kopasetik: that looks wrong for the nodejs driver. go look at the docs again
[22:33:23] <kopasetik> just used the options version of sort and it works now
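For the archive: the Node.js driver of this era accepts sort as an array of [field, direction] pairs or through the options object (the form kopasetik switched to); malformed specs are what trigger "bad sort specification". A sketch, collection and field names assumed:

    // Array-of-pairs form, which keeps the key order explicit
    db.collection('events').find({})
      .sort([['createdAt', -1], ['name', 1]])
      .toArray(function (err, docs) { /* ... */ });

    // Options-object form of the same query
    db.collection('events').find({}, { sort: [['createdAt', -1], ['name', 1]] })
      .toArray(function (err, docs) { /* ... */ });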
[23:38:04] <diegoaguilar> Hello, when running a multi update, what could cause not all of the matches to be updated? WriteResult({ "nMatched" : 22036, "nUpserted" : 0, "nModified" : 4812 })
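Nothing is necessarily wrong there: nMatched counts documents the query selected, while nModified counts only those whose contents actually changed, so documents that already hold the target values match but are left untouched. A toy illustration with invented data:

    db.items.insert([{ n: 1, flag: true }, { n: 2, flag: false }]);

    // Both documents match, but only one needs changing
    db.items.update({}, { $set: { flag: true } }, { multi: true });
    // => WriteResult({ "nMatched" : 2, "nUpserted" : 0, "nModified" : 1 })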