PMXBOT Log file Viewer


#mongodb logs for Friday the 14th of August, 2015

[02:42:28] <leptone> i can't seem to connect to my mongodb
[02:42:29] <leptone> $ mongo
[02:42:29] <leptone> MongoDB shell version: 2.6.7
[02:42:29] <leptone> connecting to: test
[02:42:30] <leptone> 2015-08-13T19:36:26.548-0700 warning: Failed to connect to 127.0.0.1:27017, reason: errno:61 Connection refused
[02:42:30] <leptone> 2015-08-13T19:36:26.549-0700 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
[02:42:33] <leptone> exception: connect failed
[02:42:38] <leptone> oh shit sry
[02:42:52] <leptone> http://pastebin.com/P29dJTxh
[02:45:01] <Boomtime> leptone: have you looked at the server log?
[02:45:24] <leptone> Boomtime, no, where can i find it
[02:45:27] <leptone> on mac?
[02:45:58] <cheeser> did you use homebrew to install?
[02:46:12] <cheeser> /usr/local/var/log/mongodb/
[02:46:46] <leptone> cheeser, i think so
[02:47:32] <leptone> cheeser, i don't see anything in that directory
[02:48:21] <leptone> Caseys-MacBook-Pro:mongodb Casey$ pwd
[02:48:21] <leptone> /usr/local/var/log/mongodb
[02:48:21] <leptone> Caseys-MacBook-Pro:mongodb Casey$ ls -a
[02:48:21] <leptone> . ..
[02:48:22] <leptone> Caseys-MacBook-Pro:mongodb Casey$
[02:48:41] <Boomtime> is the server actually running?
[02:48:58] <leptone> Boomtime, idk...
[02:49:13] <leptone> how can i check?
[02:49:44] <cheeser> well, ps, of course
[02:49:56] <cheeser> but try this: mongod --config /usr/local/etc/mongod.conf
[02:50:11] <cheeser> see what prints out. i'm guessing you lack rights on a directory somewhere
[02:51:29] <leptone> cheeser, seems stuck
[02:51:30] <leptone> $ mongod --config /usr/local/etc/mongod.conf
[02:51:37] <leptone> and then it hangs
[02:52:16] <leptone> cheeser, i don't think it is
[02:52:17] <leptone> http://pastebin.com/D22zHYeT
[02:52:39] <joannac> leptone: cat /usr/local/etc/mongod.conf
[02:52:45] <joannac> and pastebin the result
[02:53:07] <cheeser> it's not hanging, then. use --fork and it will return to the prompt
[02:53:20] <leptone> http://pastebin.com/B7PF5TZs
[02:53:29] <leptone> joannac, ^
[02:53:53] <cheeser> so look in /usr/local/var/log/mongodb/mongo.log
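For anyone following along, the pastebinned config isn't preserved, but a Homebrew-installed mongod.conf on OS X in this era typically looked like the fragment below (illustrative only; paths may differ from this user's install):

```yaml
# Typical Homebrew mongod.conf (OS X, MongoDB 2.x era).
# Paths are illustrative and may not match a given install.
systemLog:
  destination: file
  path: /usr/local/var/log/mongodb/mongo.log
  logAppend: true
storage:
  dbPath: /usr/local/var/mongodb
net:
  bindIp: 127.0.0.1
```

The `systemLog.path` entry is why cheeser points at `/usr/local/var/log/mongodb/mongo.log` next.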
[02:54:58] <leptone> cheeser, now mongod --config /usr/local/etc/mongod.conf runs but nothing in standard output
[02:55:43] <cheeser> so try connecting
[02:55:46] <leptone> cheeser, http://pastebin.com/WePd2f8S
[02:55:47] <leptone> cheeser, http://pastebin.com/WePd2f8S
[02:56:06] <cheeser> it works.
[02:56:17] <cheeser> so since you're using homebrew, do this:
[02:56:42] <cheeser> launchctl load /usr/local/opt/mongodb/homebrew.mxcl.mongodb.plist
[02:56:52] <cheeser> ^C that mongod process first, though
[02:57:26] <leptone> so it connected to mongo when i ran the 'mongo' command
[02:57:33] <leptone> what did we do to start it?
[02:57:36] <leptone> and here is that
[02:57:38] <leptone> $ launchctl load /usr/local/opt/mongodb/homebrew.mxcl.mongodb.plist
[02:57:38] <leptone> /usr/local/opt/mongodb/homebrew.mxcl.mongodb.plist: No such file or directory
[02:57:42] <leptone> cheeser, ^
[02:58:05] <cheeser> you're using homebrew right?
[02:59:17] <leptone> almost certain
[02:59:20] <leptone> cheeser,
[02:59:32] <leptone> 99%
[03:00:05] <cheeser> brew info mongodb
[03:01:47] <leptone> cheeser, http://pastebin.com/wrqwSTT1
[03:05:57] <leptone> huh it seems my databases are all gone... :/
[03:18:00] <one00handed> having an issue connecting to mongo from a remote server. bind address is commented out. firewall is opened up. i can ssh in to the box and log in to the database with the credentials I'm trying on the remote box but trying the exact same user/password on the remote box gives me an error 18 authentication failed... can anyone help me here?
[03:19:49] <Boomtime> one00handed: you can "ssh in to the box and log in to the database with the credentials" <- how do you connect to the database? the mongo shell?
[03:20:02] <one00handed> yes
[03:20:41] <Boomtime> .. and the command-line you use on the remote host is the same as the one you use in the ssh session?
[03:20:49] <Boomtime> including the hostname?
[03:21:13] <one00handed> on the database server:
[03:21:23] <one00handed> mongo dbname -u user -p pass
[03:21:39] <one00handed> on the other server (the one trying to connect):
[03:21:50] <one00handed> mongo host:port/db -u user -p pass
[03:22:00] <one00handed> if i turn off auth in the config
[03:22:12] <one00handed> it works just fine without the credentials, obviously
[03:22:20] <one00handed> turning on auth and it fails login
[03:23:29] <Boomtime> please specify the hostname in the ssh session and try again
[03:23:40] <Boomtime> i want to know if it still succeeds
[03:24:33] <one00handed> two seconds
[03:25:09] <joannac> what about just 'mongo host:port'. If the shell comes up, 'use db; db.auth(username, password)'
[03:26:21] <one00handed> Boomtime yep - worked just fine
[03:27:19] <Boomtime> goodo, try joannac's suggestion too
[03:27:21] <one00handed> joannac im testing the connection because mongoid gem is trying to connect in a rails app and cant
[03:27:46] <one00handed> so I believe that line has to work - and im more curious as to why its not
[03:27:51] <joannac> one00handed: okay? can you try what I said anyway
[03:27:52] <one00handed> i'm new to mongo
[03:28:00] <one00handed> joannac sure
[03:29:25] <one00handed> joannac able to get into the console but auth did fail
[03:30:09] <one00handed> joannac if i do that on the database server it works
[03:30:26] <joannac> sure you didn't typo?
[03:30:39] <joannac> wait
[03:30:39] <one00handed> copy and past
[03:30:42] <one00handed> *e
[03:30:46] <joannac> run mongo --version
[03:30:53] <joannac> on both servers
[03:31:15] <one00handed> HA
[03:31:21] <one00handed> 3.0.5 on database
[03:31:27] <one00handed> 2.4.9 on the app
[03:31:29] <one00handed> ...
[03:32:05] <one00handed> i think i can take it from here...
[03:35:37] <one00handed> thank you joannac and Boomtime for your help
[03:37:43] <one00handed> that was the reason...I need to take a break
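The version mismatch found above is a classic pitfall: MongoDB 3.0 changed the default authentication mechanism to SCRAM-SHA-1, which 2.4-era shells and drivers cannot negotiate, so auth fails only from the older client. A hypothetical helper sketching the version gate (not a real library API):

```javascript
// Hypothetical helper: which auth mechanism does a server of this
// version default to for new users? MongoDB 3.0+ defaults to
// SCRAM-SHA-1; earlier servers used MONGODB-CR. A 2.4-era mongo
// shell cannot speak SCRAM, so auth fails only from the old client.
function defaultAuthMechanism(serverVersion) {
  const [major, minor] = serverVersion.split('.').map(Number);
  return (major > 3 || (major === 3 && minor >= 0))
    ? 'SCRAM-SHA-1'
    : 'MONGODB-CR';
}
```

Here `defaultAuthMechanism('3.0.5')` and `defaultAuthMechanism('2.4.9')` disagree, which is exactly the situation one00handed hit.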
[06:15:29] <lufi> in mongoose, what is executed first? the validator or the setter?
[06:35:14] <joannac> lufi: might be better off asking mongoose devs
[07:58:09] <gcfhvjbkn> i'm continuing to wrestle with mongo and mongo-java-client; now it crashes on me with this
[07:58:10] <gcfhvjbkn> http://pastie.org/private/qdyeapzrzkjxg9g8xhrug
[07:58:33] <gcfhvjbkn> i see it doesn't have enough file descriptors, but should it let itself crash for that reason?
[07:58:35] <gcfhvjbkn> looks like a bug
[08:01:01] <Boomtime> gcfhvjbkn: so if the database can't access the files it uses to back the datastore it should just plow on anyway?
[08:02:06] <gcfhvjbkn> Boomtime: it should stop receiving requests maybe? i don't really know though, just wanted to hear tips/opinions
[08:02:17] <gcfhvjbkn> anyway, what can i do to evade this? i've got a concurrent app that sends requests to the db at a high rate; it uses one mongo client instance with one connection pool (as far as i can judge), so my guess is that this file descriptor overflow happens if i try to make too many queries simultaneously
[08:02:33] <gcfhvjbkn> any way i can force mongo client to cap its connection pool?
[08:05:18] <gcfhvjbkn> apart from increasing ulimit -n, which i did ofc
[08:13:40] <Boomtime> gcfhvjbkn: apparently you didn't raise ulimits, or at least, not by enough
[08:13:51] <Boomtime> http://docs.mongodb.org/manual/reference/ulimit/
[08:14:19] <Boomtime> also, your log file shows around 200 connections being made very quickly, do you know why this is?
[08:25:03] <gcfhvjbkn> Boomtime: i guess raising ulimits far enough will do the trick; as to connections, i am not sure how mongo-java-driver allocates connections
[08:26:25] <gcfhvjbkn> as of now it looks like it creates one per akka thread and doesn't close it until the actor is done processing its message
[08:26:49] <gcfhvjbkn> which means that if 200 messages are being processed simultaneously the pool has 200 connections
[08:27:39] <Boomtime> yes, that is how it works, did you issue 200 requests at once?
[08:27:55] <gcfhvjbkn> yeah?
[08:28:14] <Boomtime> heh
[08:28:19] <gcfhvjbkn> why not?
[08:29:00] <Boomtime> just that you don't sound very sure
[08:29:24] <Boomtime> anyway, the default pool size in java driver is 100, so do you have multiple clients too, or did you change the pool size?
[08:33:45] <gcfhvjbkn> well i should be making something in between 256 and 1024 now; not at all, i use one client and i didn't change the pool size, unless there's a bug in my code or the relationship between connections count and # of file descriptors used by mongo is not 1-to-1; what happens if the pool size is depleted? does it block?
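One app-side mitigation for the descriptor exhaustion discussed above, regardless of driver, is to cap how many requests are in flight at once. A minimal generic concurrency limiter (a sketch of the idea, not the Java driver's actual pool):

```javascript
// Minimal concurrency limiter: at most `max` tasks run at once;
// extra callers wait in a FIFO queue. Capping in-flight requests
// bounds how many pooled connections (and thus file descriptors)
// can be in use at any moment.
class Semaphore {
  constructor(max) {
    this.max = max;
    this.active = 0;
    this.queue = [];
  }
  async run(task) {
    if (this.active >= this.max) {
      // Wait until a running task finishes and wakes us.
      await new Promise(resolve => this.queue.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      const next = this.queue.shift();
      if (next) next();
    }
  }
}
```

With `new Semaphore(256)` wrapping every query, 1000 simultaneous akka messages would still open at most 256 connections.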
[09:46:55] <zlatko> I have a question about a mapReduce. I have a global variable, and I can't access it in my `map` function. Is that expected?
[09:47:20] <zlatko> FWIW, I set that global variable in a previous query (as a result of a cursor that I run).
[09:48:25] <zlatko> ie. var cursor = db.something.find(); var globalVar = <loop cursor and do the thing>; db.mapReduce(functionWhereICantAccessGlobalVar...)
[10:14:24] <zlatko> Well, apparently mapReduce supports a `scope` thing, so db.collection.mapReduce(map, reduce, {scope: myStuff})
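zlatko's finding is correct: `map` and `reduce` are serialized and executed server-side, so shell-side globals are out of reach and only values passed via `scope` are visible. A toy in-memory mimic of that serialization boundary (illustrative, not the real server implementation):

```javascript
// Toy mimic of mapReduce's scope handling: the map function is
// re-created from its source text (as the server effectively does
// after serialization), so the only non-local names it can see are
// the ones explicitly injected from `scope`, plus `emit`.
// Real shell usage: db.coll.mapReduce(map, reduce, {out: {inline: 1}, scope: {...}})
function toyMapReduce(docs, mapSrc, scope) {
  const emitted = [];
  const emit = (k, v) => emitted.push([k, v]);
  // Rebuild the function with scope keys as parameters.
  const fn = new Function(...Object.keys(scope), 'emit',
    `return (${mapSrc}).call(this);`);
  for (const doc of docs) {
    fn.call(doc, ...Object.values(scope), emit);
  }
  return emitted;
}
```

A map source referencing `factor` works only because `{factor: 2}` is injected; a true shell global never makes the trip.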
[11:41:31] <supersym> can anyone tell me how I convert a stored UUID as SUBTYPE_UUID v4, back to string?
[11:44:54] <joannac> supersym: ...language? ODM?
[11:46:10] <supersym> Node.js, sorry
[11:46:16] <supersym> I guess that is relevant
[11:46:37] <supersym> There is a https://github.com/confuser/node-mongo-uuid-helper/blob/master/index.js
[11:47:00] <supersym> But it converts to v3 and I am unclear on what changes to make in order to patch it for a v4 conversion
[11:48:03] <joannac> I'm not sure what this has to do with MongoDB. Sounds like a Node.js issue?
[11:48:50] <joannac> Anyway, I don't know. Maybe someone else will
[11:49:06] <supersym> joannac: erh, well... having it stored using special mongodb bson helpers, one would think it has to do with the format it is stored in
[11:49:14] <supersym> but anyway, thanks for the effort!
[11:55:51] <supersym> ugh..... too simple: nodejs buffer .toString ><
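supersym's "too simple" answer can be made concrete: a stored subtype-4 UUID is just 16 bytes, so a Node.js `Buffer` plus `toString('hex')` and some slicing recovers the canonical 8-4-4-4-12 string (a sketch; the helper name is illustrative):

```javascript
// Convert a 16-byte binary UUID (e.g. BSON binary subtype 4) back to
// the canonical 8-4-4-4-12 string form using only Buffer.toString.
function bufferToUuidString(buf) {
  const hex = buf.toString('hex'); // 32 hex characters
  return [
    hex.slice(0, 8),
    hex.slice(8, 12),
    hex.slice(12, 16),
    hex.slice(16, 20),
    hex.slice(20),
  ].join('-');
}
```

Given the bytes `00112233445566778899aabbccddeeff`, this yields `00112233-4455-6677-8899-aabbccddeeff`.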
[15:35:52] <MacWinne_> GothAlice, how's your comfort factor with using wiredtiger now?
[15:36:58] <GothAlice> MacWinne_: Still waiting for enough bugs to get ironed out to become the default engine, before I switch production.
[15:36:59] <StephenLynx> speaking about wt, is it the default engine yet?
[15:37:03] <GothAlice> No.
[15:37:06] <StephenLynx> hm
[15:37:10] <GothAlice> 3.1, I believe.
[15:37:35] <StephenLynx> btw, I found a possible developer error that can be made using gridfs.
[15:37:48] <StephenLynx> this is what I was doing:
[15:38:07] <diegoaguilar> Hello, how can I delete a database collection from command line?
[15:38:10] <StephenLynx> I was deleting a file before updating it
[15:38:27] <StephenLynx> that caused a race condition
[15:38:48] <StephenLynx> if the file were to be read while still indexed by the fs.files collection
[15:38:52] <StephenLynx> it would fail
[15:38:53] <MacWinne_> StephenLynx, i think it's mentioned in the docs that it can happen?
[15:38:59] <StephenLynx> oh
[15:39:00] <StephenLynx> :v
[15:39:24] <StephenLynx> diegoaguilar db.collection.drop() if I am not mistaken
[15:39:26] <MacWinne_> StephenLynx, I think they recommend making a new version of file and deleting old one..
[15:39:39] <GothAlice> Generally why I abstract one layer beyond GridFS, using metadata/management documents with references to GridFS files.
[15:39:41] <diegoaguilar> StephenLynx, sorry I meant linux command line
[15:41:18] <StephenLynx> now I just store the last time the file was modified in its metadata.
[15:41:26] <StephenLynx> and don't delete it anymore
[15:42:50] <MacWinne_> how much compression can one reasonably expect with WT and simple JSON documents that are log format? with much content that is repeated
[15:43:10] <MacWinne_> like 80%? or more like 30-40%
[15:43:30] <MacWinne_> curious based on people's experience
[16:14:42] <daveops> MacWinne_: it obviously depends, but for logs which have a lot of duplication, 80% is certainly reasonable
[16:15:23] <MacWinne_> cool.. that'll save lots of cost for us
[16:39:23] <diegoaguilar> Hello, can I create an index on a nested attribute?
[16:39:33] <diegoaguilar> like {"team.id": 1}
[16:40:07] <jamiel> yes
[16:44:47] <diegoaguilar> in a local database, for about 25k records
[16:45:01] <diegoaguilar> without use, how long would it take to apply the index
[16:45:03] <diegoaguilar> on whole docs
[16:45:03] <diegoaguilar> ?
[16:45:07] <diegoaguilar> jamiel,
[16:45:08] <diegoaguilar> ?
[16:46:40] <jamiel> Depends on the server specs more than the number of records
[16:46:50] <diegoaguilar> 8 cores, 15 GB
[16:46:55] <diegoaguilar> RAM
[16:47:28] <jamiel> I'd hazard a guess about a minute max
[16:47:50] <joshua> Thats not a very big dataset. Might depend on how big the records are as well.
[16:50:39] <jamiel> On similar spec box I did a compound index 3 keys on 48million records yesterday, if I remember correctly it took about 12 minutes
[16:50:56] <diegoaguilar> Ok, so for example now I looked at db.home.find({"team.userFormatId": "akira9248"})
[16:51:11] <diegoaguilar> with .explain, it gives nscannedObjects: 21622
[16:51:26] <diegoaguilar> that's because of the indexless collection
[16:51:28] <diegoaguilar> right?
[16:51:32] <jamiel> yes
[16:51:37] <diegoaguilar> so I gotta do ...
[16:51:50] <diegoaguilar> db.home.ensureIndex({"team.id": 1})
[16:51:52] <diegoaguilar> ?
[16:52:14] <diegoaguilar> db.home.ensureIndex({"team.userFormatId": 1})
[16:52:55] <jamiel> db.home.createIndex({"team.userFormatId": 1}, { background: true });
[16:53:01] <jamiel> If you often query team.userFormatId alone
[16:53:11] <jamiel> with no other fields
[16:53:22] <diegoaguilar> yeah right,
[16:53:28] <diegoaguilar> what does {background: true}
[16:53:30] <diegoaguilar> do?
[16:53:50] <cheeser> creates the index in the background
[16:54:18] <jamiel> http://docs.mongodb.org/manual/reference/method/db.collection.createIndex/#ensureindex-options
[16:55:13] <diegoaguilar> on a production database, how can I test if the index was fully applied?
[16:56:05] <cheeser> getIndexes()
[16:57:04] <diegoaguilar> ok, it already appears there
[16:57:10] <diegoaguilar> instantly with background: true
[16:57:15] <diegoaguilar> that means it ended to index all?
[16:57:16] <diegoaguilar> :o
[16:57:43] <cheeser> afaik, indexes won't appear in that list until building them is done
[16:58:09] <diegoaguilar> awesome
[16:58:13] <diegoaguilar> it took seconds to build it
[16:58:14] <diegoaguilar> :o
[16:58:15] <jamiel> Well 25,000 records is not very big so sounds right ... also if using mongo shell createIndex would block your shell until it was done and you will get feedback saying it did it.
[16:58:39] <diegoaguilar> that blocking is without background set to true
[16:58:47] <diegoaguilar> background: true makes it "async"
[16:58:50] <diegoaguilar> right?
[16:59:12] <jamiel> It will still block your shell, it just doesn't block other database operations
[16:59:17] <diegoaguilar> ohh ok
[16:59:30] <diegoaguilar> well now I did a find with .explain and got: "cursor": "BtreeCursor team.id_1",
[16:59:37] <diegoaguilar> nscanned: 0
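The before/after `explain` numbers in this exchange can be illustrated with a toy model: without the index, a query on `"team.userFormatId"` examines every document (nscannedObjects ≈ collection size); with the index, lookup jumps straight to the matches. This is a deliberately simplified in-memory sketch, not how MongoDB's BTree actually works:

```javascript
// Toy model of what the index changes. collectionScan mimics an
// unindexed query: every document is examined. buildIndex mimics an
// index on "team.userFormatId": equal keys are grouped, so a lookup
// touches only the matching documents.
function collectionScan(docs, id) {
  let scanned = 0;
  const out = [];
  for (const d of docs) {
    scanned++; // every doc is examined, like nscannedObjects: 21622
    if (d.team.userFormatId === id) out.push(d);
  }
  return { out, scanned };
}

function buildIndex(docs) {
  const idx = new Map();
  for (const d of docs) {
    const k = d.team.userFormatId;
    if (!idx.has(k)) idx.set(k, []);
    idx.get(k).push(d);
  }
  return idx; // idx.get(id) touches only the matches
}
```

With 21k documents and one match, the scan examines all 21k while the indexed lookup examines one, which is exactly the drop diegoaguilar saw in `.explain()`.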
[17:01:39] <jamiel> Job done, grab a beer :)
[17:03:06] <diegoaguilar> Hey, just curious, how do u use mongodb in production?
[17:03:16] <diegoaguilar> achieving millions of records?
[17:03:17] <diegoaguilar> :)
[17:06:04] <jamiel> Powering a social platform, the millions of records are social data
[17:08:47] <diegoaguilar> awesome
[17:08:49] <diegoaguilar> :)
[17:09:01] <diegoaguilar> how do you manage write locks?
[17:16:55] <StephenLynx> afaik, you don't.
[17:17:22] <StephenLynx> you just avoid them if they become a concern using your application design.
[17:33:11] <diegoaguilar> well what ive thought is if a collection is often written to, it should probably get its own database
[17:33:19] <diegoaguilar> what do u think StephenLynx
[17:33:21] <diegoaguilar> ?
[17:33:46] <StephenLynx> afaik, write locks lock the document
[17:33:50] <StephenLynx> not the whole collection.
[17:33:55] <StephenLynx> that would be insane
[17:34:18] <diegoaguilar> aren't they "database" locks
[17:34:36] <diegoaguilar> u cant read a doc if something is written to the database
[17:34:41] <diegoaguilar> uh?
[17:34:47] <cheeser> with WT, it locks the documents. with mmapv1, it locks the collection
[17:35:54] <diegoaguilar> with WT (even though mongo is not relational), how could I handle locks on multiple documents or collections if needed
[17:36:29] <cheeser> you wouldn't
[17:36:40] <cheeser> mongo doesn't support multidocument locks/transactions
[18:53:43] <diegoaguilar> Hello, I'm trying to perform a bulkUnorderedOperation
[18:54:00] <diegoaguilar> I have a find with an upsert and update chainable calls
[18:54:16] <diegoaguilar> but its not inserting or deleting
[18:54:24] <diegoaguilar> this is what Im trying http://kopy.io/8MCWt
[18:54:47] <diegoaguilar> I read that I should only use update operators but that's what I'm doing, I guess the $push is what makes it not to update it
[18:55:24] <diegoaguilar> before having it, it worked properly
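The kopy.io paste is gone, so the exact bug can't be recovered, but diegoaguilar's reading is right that an update document may contain only update operators, and `$set` and `$push` are both valid side by side. A toy in-memory applier to sanity-check the operator shapes (hypothetical helper, not the driver's bulk API):

```javascript
// Toy applier for a subset of update-operator semantics: $set
// overwrites a field, $push appends to an array field (creating the
// array if absent, as an upsert would). Illustrative only.
function applyUpdate(doc, update) {
  const out = JSON.parse(JSON.stringify(doc)); // cheap deep copy
  for (const [field, value] of Object.entries(update.$set || {})) {
    out[field] = value;
  }
  for (const [field, value] of Object.entries(update.$push || {})) {
    if (!Array.isArray(out[field])) out[field] = [];
    out[field].push(value);
  }
  return out;
}
```

For example, `applyUpdate({tags: ['y']}, {$set: {n: 1}, $push: {tags: 'x'}})` yields `{tags: ['y', 'x'], n: 1}` — the `$push` coexists with `$set` without preventing the update.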
[22:17:24] <kopasetik> PROBLEM: when i try sorting 2 parameters, i keep getting a BadValue bad sort specification error
[22:17:31] <kopasetik> Gist code: https://gist.github.com/kopasetik/377da1ac4ca35926992e
[22:32:21] <joannac> kopasetik: nodejs? mongoose?
[22:32:45] <kopasetik> nodejs using the mongodb driver
[22:33:00] <joannac> kopasetik: that looks wrong for the nodejs driver. go look at the docs again
[22:33:23] <kopasetik> just used the options version of sort and it works now
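For the record, a multi-key sort in the legacy Node.js driver is usually expressed either as an object in the options (`{sort: {age: -1, name: 1}}`) or as an array of `[field, direction]` pairs, not as separate arguments; exact accepted forms vary by driver version. A tiny normalizer between the two shapes (illustrative helper):

```javascript
// Normalize the array sort form ([['age', -1], ['name', 1]]) into the
// object form ({age: -1, name: 1}) accepted by the options-style sort.
// Note: plain JS objects preserve insertion order for string keys, so
// the key order (and thus the sort precedence) survives the conversion.
function sortArrayToObject(pairs) {
  const spec = {};
  for (const [field, direction] of pairs) {
    spec[field] = direction;
  }
  return spec;
}
```

A "BadValue bad sort specification" error typically means the server received a sort document in neither of these shapes.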
[23:38:04] <diegoaguilar> Hello, when running a multi update, what could cause not all of the matches to be updated: WriteResult({ "nMatched" : 22036, "nUpserted" : 0, "nModified" : 4812 })
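No one answered this in-channel, but the usual explanation is benign: nModified counts only documents whose contents actually changed, so nModified < nMatched typically means most matched documents already held the target values. A toy illustration of the counting (simplified model, `$set`-style update only):

```javascript
// Toy model of WriteResult counting for a multi update: every matched
// doc bumps nMatched, but nModified increments only when the write
// actually changes the document's contents.
function multiSet(docs, field, value) {
  let nMatched = 0;
  let nModified = 0;
  for (const d of docs) {
    nMatched++;
    if (d[field] !== value) {
      d[field] = value;
      nModified++;
    }
  }
  return { nMatched, nModified };
}
```

With `[{a: 1}, {a: 2}, {a: 2}]` and `$set`-ing `a` to 2, all three match but only one is modified — the same shape as the `22036` vs `4812` result above.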