PMXBOT Log file Viewer


#mongodb logs for Wednesday the 1st of October, 2014

[00:55:39] <nelas> hi!
[00:56:24] <nelas> i have a 4 node mongodb setup, a primary and 3 secondaries in a replica set, with a mix of 2.6.0 and 2.6.4 hosts
[00:56:53] <nelas> the thing is that now and then one mongod instance dies
[00:57:14] <nelas> and i can't find any errors in the messages log or mongod.log
[00:57:38] <nelas> any hint how to troubleshoot the mongod crashes?
[01:04:19] <Boomtime> @nelas: what you describe matches the behavior of the linux OOM killer
[01:04:39] <Boomtime> check dmesg or the like
[01:05:07] <Boomtime> if mongod has nothing in its own log then it was something external that took it out with the equivalent of a kill -9
[01:06:10] <Boomtime> @nelas: also you should always run with an odd number of voting members
[01:06:26] <Boomtime> http://docs.mongodb.org/manual/core/replica-set-architectures/#deploy-an-odd-number-of-members
[01:08:53] <nelas> boomtime yes but i dont see any OOM message in the centos message log
[01:09:36] <Boomtime> then probably something else killed it
[01:10:25] <nelas> @Boomtime the thing is that i would like to find the reason, so it wont happen again.
[01:10:33] <Boomtime> obviously
[01:12:05] <nelas> @Boomtime a few days ago I removed one replica set member, following the instructions: shut down the instance and rs.remove(node)
[01:12:42] <nelas> @Boomtime but i can see a funny error in the repl log, saying that replset can't find slave with Id X
[01:13:17] <nelas> @boomtime there is a jira ticket saying something about a 2.6.0 bug
[01:13:34] <Boomtime> you are now describing a completely different problem.. "the thing is that now and then one mongod instance dies" <- what happened to this statement?
[01:14:24] <nelas> @boomtime i'm checking whether the error in the repl log is correlated with the crash
[01:14:38] <Boomtime> "there is a jira ticket saying something about a 2.6.0 bug" <- if there weren't any, there would be no need for 2.6.1, or 2.6.2, or 2.6.3, or 2.6.4, right?
[01:14:57] <Boomtime> ok, that is a good way forward
[01:15:22] <Boomtime> document the symptoms
[01:15:39] <nelas> @boomtime could be also because 2 nodes are 2.6.0 and the other 2.6.4 ?
[01:15:55] <Boomtime> possible, though unlikely
[01:16:14] <Boomtime> compatibility between revisions is usually very good
[01:18:42] <nelas> @boomtime obviously the mongod.log wont log any memory pressure info?
[01:20:15] <Boomtime> you would need active monitoring to capture that info - mongodb can supply it, for example, to mms.mongodb.com or any other service that wants it
[01:20:36] <Boomtime> though there are other system monitoring systems which would capture such too
[01:20:54] <nelas> @Boomtime like nagios
[01:20:59] <Boomtime> right
[03:57:37] <eyad> Hi guys
[03:58:02] <joannac> hi
[03:58:37] <eyad> I was looking for a BI/ adhoc reporting tool that can be hooked up to a mongodb server
[03:58:42] <eyad> any clue?
[04:07:49] <eyad> I'm looking for an ad hoc reporting/ business intelligence tool that can be directly hooked up to a mongodb
[04:07:55] <eyad> any idea?
[05:46:48] <diegoaguilar> Hello, what's the best data type or strategy to save a user's birthday info?
[05:47:02] <diegoaguilar> I guess Date is definitely a waste of bytes ...
[05:47:24] <diegoaguilar> wonder whether a String with MMDDYY would do it
[05:59:37] <Boomtime> @diegoaguilar: you think that Date is a waste of bytes.. why?
[06:00:50] <Boomtime> your suggested alternative would save 1 byte per document.. at the expense of having something that is harder to handle/manipulate
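Boomtime's point about handling can be illustrated with a quick node sketch (the dates and values here are made up): a real Date compares chronologically for free, while an MMDDYY string sorts by month first and breaks chronological order.

```javascript
// A real Date compares chronologically out of the box:
const a = new Date(Date.UTC(1990, 4, 20));  // 1990-05-20
const b = new Date(Date.UTC(1985, 11, 1));  // 1985-12-01
console.log(a > b); // true: 1990 is after 1985

// The same birthdays as "MMDDYY" strings sort by month first,
// so the comparison no longer reflects actual age:
const as = "052090", bs = "120185";
console.log(as > bs); // false: "05..." < "12...", even though 1990 > 1985
```

The string also has to be parsed back apart before any date arithmetic, which is where the real cost shows up.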
[07:02:21] <ut2k3> Hi guys, I have a problem with replication. I cannot get my slave to sync ... We have a huge database (about 2.5TB) that cannot be offline for the ~10 hours needed, but it needs a secondary. After the "initial sync cloning db: ..." step it's always too stale .... I already set up an oplog of about 350GB on the secondary. Do you know how to solve this problem and get replication working on a huge database?
[07:08:54] <joannac> ut2k3: mongo shell to primary, db.printReplicationInfo()
[07:08:57] <joannac> and pastebin the result?
[07:09:14] <ut2k3> ok
[07:10:00] <joannac> Also, how long does it take to clone 2.5TB of data?
[07:11:10] <ut2k3> approx 12-14h
[07:11:12] <ut2k3> http://pastebin.com/YSsn0EUG
[07:11:25] <ut2k3> (currently a resync is running)
[07:11:27] <joannac> right
[07:11:40] <joannac> your oplog gives you ~6.6 hours
[07:11:56] <ut2k3> Ok so i have to raise the oplog of the primary and not the secondary?
[07:11:59] <joannac> yes
[07:12:41] <joannac> but that will involve downtime
[07:12:56] <ut2k3> Can this be easily done without a long downtime? About 10 - 20 Minutes is all right
[07:13:08] <joannac> what filesystem?
[07:13:25] <ut2k3> ext4
[07:13:29] <joannac> depends how long it takes to preallocate your oplog
[07:13:33] <joannac> yeah, that should be fine
[07:14:28] <ut2k3> Ok, can you explain shortly how to do this? Would be nice... I've spent too many hours on this problem :/
[07:14:34] <joannac> http://docs.mongodb.org/manual/tutorial/change-oplog-size/
[07:16:18] <ut2k3> thanks
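The sizing logic behind joannac's advice can be sketched numerically. The 6.6-hour window and ~14-hour clone time come from the conversation; the current oplog size is a hypothetical placeholder, since the primary's actual size isn't shown.

```javascript
// For a steady write load, the oplog window scales roughly linearly with
// oplog size. The oplog must cover at least the initial-sync clone time,
// or the new secondary will be "too stale" before it can catch up.
const currentOplogGB = 100;     // hypothetical: primary's current oplog size
const currentWindowHours = 6.6; // from db.printReplicationInfo() on the primary
const cloneHours = 14;          // observed time to clone ~2.5TB

const requiredGB = currentOplogGB * (cloneHours / currentWindowHours);
console.log(requiredGB > currentOplogGB); // true: the primary needs a bigger oplog
```

Whatever the real numbers, the ratio is the point: a 6.6-hour window can never survive a 12-14 hour clone, which is why the primary's oplog (not the secondary's) has to grow.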
[07:36:37] <diegoaguilar> Hello I wonder why I can't set up my models properly
[07:36:37] <diegoaguilar> I get the ObjectID not defined error log
[07:36:37] <diegoaguilar> here is my code http://www.hastebin.com/zesiwuvude.pas
[07:39:18] <Climax777> Hello all. I have data containing the total data usage measured per sample time. Is there a quick and dirty way to aggregate/map reduce this into deltas per sample time
[07:52:02] <joannac> Climax777: I would process it client-side
[07:52:44] <Climax777> so send n+1 data points for the time range query and then to a quick delta calculation client side?
[07:53:51] <Climax777> That is a great suggestion, thanks. However I may need to generate alerts depending on the delta value server side
[07:54:50] <Climax777> The only solution I have come up with so far is using the last sample total value and subtract it from each new sample and then increment a document for that minute
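joannac's client-side suggestion can be sketched in plain JavaScript (the `t`/`total` field names are assumptions): fetch n+1 cumulative samples for the range, then subtract consecutive totals to get per-interval deltas.

```javascript
// Turn cumulative usage samples into per-interval deltas client-side.
// Each sample: { t: sampleTime, total: cumulativeUsage }.
function deltas(samples) {
  const out = [];
  for (let i = 1; i < samples.length; i++) {
    out.push({ t: samples[i].t, delta: samples[i].total - samples[i - 1].total });
  }
  return out;
}

// Hypothetical cumulative byte counts per sample time:
console.log(deltas([
  { t: 0, total: 100 },
  { t: 1, total: 160 },
  { t: 2, total: 230 },
])); // [ { t: 1, delta: 60 }, { t: 2, delta: 70 } ]
```

Note the n+1 trick Climax777 mentions: to get n deltas for a time range you need one extra sample just before the range starts.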
[08:13:36] <dandre> Hello,
[08:14:11] <dandre> I am trying to backup my database with mongodump --db mydatabase
[08:14:53] <dandre> I have read access to this database as required by the docs but my dump is empty. I get an 'authorized' error code
[08:15:04] <dandre> can anyone help me?
[08:37:39] <diegoaguilar> Hello, I need help with creating users at my vps
[08:37:43] <diegoaguilar> for mongo, of course
[08:38:16] <diegoaguilar> i guess i already modified mongo.conf setting auth = true
[08:38:39] <diegoaguilar> and created a user with userAdminAnyDatabase role
[08:38:42] <LouisT> if you want to add users, you'd want it as false until you add your admin user with root perms
[08:39:11] <diegoaguilar> but when I do mongo -u username -p
[08:39:19] <diegoaguilar> and try with my actual saved password
[08:39:28] <diegoaguilar> it wont ever pass the auth :(
[08:39:39] <LouisT> did you try to just do "mongo"
[08:39:45] <LouisT> then switch to the DB you want
[08:39:50] <diegoaguilar> yep, did
[08:39:55] <diegoaguilar> and thats ok
[08:39:55] <LouisT> then db.auth('user','password')
[08:41:19] <diegoaguilar> did
[08:41:21] <diegoaguilar> and nothing
[08:41:31] <diegoaguilar> I even have the shell previous log
[08:41:36] <diegoaguilar> I can see im trying my pass
[08:49:24] <Const> Hello, I have this kind of data: http://pastie.org/9609635 It's a history of the types of pages viewed by one user. I need to keep track of views by week, month... (I'm not stuck on this structure if you know a better one). My question is: how would you increase the 'score' for week 40 of year 2014?
[08:51:49] <Const> Maybe my structure is not adapted
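One possible pattern, sketched without seeing the pastie (the `pageviews` collection name and the per-week document layout are assumptions): key a document per user and year, then $inc a per-week counter, letting upsert create the document on first view.

```javascript
// In the mongo shell, an upserted counter per (user, year, week) could look like:
//   db.pageviews.update(
//     { userId: 42, year: 2014 },
//     { $inc: { "weeks.40": 1 } },
//     { upsert: true }
//   )

// What $inc {"weeks.40": 1} does to the document, mirrored in plain JS:
function incWeek(doc, week) {
  doc.weeks = doc.weeks || {};          // create the subdocument if missing
  doc.weeks[week] = (doc.weeks[week] || 0) + 1; // create-or-increment the counter
  return doc;
}
console.log(incWeek({ userId: 42, year: 2014 }, "40"));
// { userId: 42, year: 2014, weeks: { '40': 1 } }
```

$inc creates the field at 1 when it doesn't exist yet, so no separate "initialize the week" step is needed.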
[09:10:15] <stava> Say I have a collection with a nested structure like { foo: { bar: 'baz' } }, how do I use find() to fetch documents where foo.bar is a certain value?
[09:11:33] <diegoaguilar> stava, please try {bar: value}
[09:15:01] <stava> diegoaguilar: https://pastebin.mozilla.org/6678393
[09:15:15] <stava> diegoaguilar: The last two queries dont seem to return anything
[09:18:44] <diegoaguilar> sorry stava
[09:18:51] <diegoaguilar> I guess it's foo.bar
[09:19:02] <diegoaguilar> stava, please try {foo.bar: value}
[09:19:48] <stava> diegoaguilar: I get "unexpected token ." - https://pastebin.mozilla.org/6678418
[09:20:06] <stava> Oh, quoting foo.bar works
[09:20:10] <stava> thanks :)
[09:23:20] <diegoaguilar> sure :)
[09:23:32] <diegoaguilar> honestly I almost always quote the query fields
[09:23:47] <diegoaguilar> where are u from stava
[09:24:21] <stava> diegoaguilar: Using the dot-format field names doesn't seem to work with update() though - https://pastebin.mozilla.org/6678434
[09:26:07] <stava> and if I write update(..., {foo: {baz: 'Testing!'}}); it will replace the entire foo object
[09:27:04] <diegoaguilar> for update you have to use $ operator
[09:27:05] <diegoaguilar> wait ...
[09:27:37] <diegoaguilar> yeah, u have to use $set
[09:27:58] <diegoaguilar> http://docs.mongodb.org/manual/reference/operator/update/set/
[09:28:08] <stava> Yeah I should have seen that in the manual already :D
[09:28:09] <stava> thanks
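A sketch of what $set with a quoted dotted path does (the collection and field names follow stava's example; the `setPath` helper below is just a plain-JS mirror of the semantics, not a MongoDB API):

```javascript
// In the mongo shell, $set with a quoted dotted path updates only that nested
// field instead of replacing the whole subdocument:
//   db.coll.update({ "foo.bar": "baz" }, { $set: { "foo.baz": "Testing!" } })

// Mirrored on a plain object: only foo.baz changes, foo.bar survives.
function setPath(doc, path, value) {
  const keys = path.split(".");
  let cur = doc;
  for (let i = 0; i < keys.length - 1; i++) {
    cur = cur[keys[i]] = cur[keys[i]] || {}; // walk/create intermediate objects
  }
  cur[keys[keys.length - 1]] = value;
  return doc;
}
console.log(setPath({ foo: { bar: "baz" } }, "foo.baz", "Testing!"));
// { foo: { bar: 'baz', baz: 'Testing!' } }
```

This is exactly the difference stava hit: `update(..., {foo: {baz: 'Testing!'}})` replaces the whole `foo` object, while `$set` with `"foo.baz"` touches only one leaf.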
[09:29:27] <diegoaguilar> stava I advise u to get any free course from here: https://university.mongodb.com/
[09:29:34] <diegoaguilar> they're really good and not boring
[09:30:50] <stava> not boring is the best :)
[09:32:15] <Derick> "really good" is better ;-)
[09:35:10] <Const> Is it possible for mongo to return the numeric index of an element inside an array?
[09:41:09] <diegoaguilar> yes Const
[09:41:55] <Const> diegoaguilar, how, please...? ;)
[09:42:50] <diegoaguilar> Const, I'm really not an expert but I guess either a map function or multikey indexes might help
[09:43:22] <diegoaguilar> a map function is, in a few words, a function u "assign" to a mongo operation (query, update)
[10:18:42] <Naeblis> Hi. I keep getting "MongoError: ns does not exist" for my js application using Mongoose. How can I fix this?
[10:24:04] <Naeblis> :/
[10:32:39] <braz> Naeblis, you could check the Mongoose Connection object using the collectionNames function to see what collections exist for the specific namespace - http://mongodb.github.io/node-mongodb-native/api-generated/db.html#collectionnames
[10:35:12] <Naeblis> braz: ok I'll try that. Thanks.
[12:21:27] <MatthiasJ> hello...
[12:21:36] <MatthiasJ> didn't expect this room to be so crowded :)
[12:28:04] <Siyfion> MatthiasJ: You kinda need to ask something
[12:28:23] <Siyfion> MatthiasJ: If you come in and say hi, you aren't going to get 200 people saying hi :P
[13:11:04] <_rgn> Siyfion: i don't follow
[13:11:27] <_rgn> do you possess mind reading abilities or what
[13:11:37] <Siyfion> _rgn: Yes, yes I do.
[13:11:50] <_rgn> heh
[13:23:20] <Waheedi> how could i list dynamic keys in a collection
[13:27:15] <latinojoel> Waheedi: Not sure but, i guess, just with map/reduce
[13:27:47] <Waheedi> that was an expected answer, thanks latinojoel
[13:27:48] <aliasc> i asked several times no one answered
[13:27:57] <aliasc> what is the best way to start and stop mongodb from node
[13:28:03] <aliasc> im building an application for windows
[13:28:19] <Waheedi> mongodb is running on a windows machine aliasc ?
[13:28:40] <aliasc> of course it runs
[13:28:46] <Waheedi> lol
[13:28:55] <aliasc> im building an application for windows with node and mongodb
[13:28:58] <aliasc> whats the matter ?
[13:29:01] <Waheedi> you are not looking for an answer aliasc but for a fight
[13:29:12] <Waheedi> nothing else matters
[13:29:17] <aliasc> listen you are one of those smartass guys who complain all the time about windows being a shit
[13:29:21] <aliasc> but all people use it
[13:29:28] <Waheedi> i love windows!
[13:29:30] <aliasc> especially ordinary people not programmers
[13:29:40] <aliasc> so stop being smart
[13:29:42] <Waheedi> my grandfather machine runs windows too
[13:29:49] <aliasc> and if you know a solution lets explain it
[13:30:06] <Waheedi> I'm sorry I'm not that smartass to answer your question
[13:30:14] <aliasc> if mongodb developers made an executable for windows it means it runs perfectly good
[13:30:22] <aliasc> then dont answer
[13:31:45] <latinojoel> aliasc: you can just run a command line from node to start and stop... just a tip (I don't know why you want to start and stop).
[13:32:05] <aliasc> latinojoel what happens when a power goes off suddenly ?
[13:32:14] <latinojoel> :)
[13:33:22] <aliasc> i should smile and compromise data is that right
[13:33:45] <latinojoel> no
[13:33:57] <latinojoel> but that's why exist replicas
[13:34:21] <aliasc> im building an application for windows with node.js and mongodb. everything works fine. just if power goes off mongodb creates a lock file
[13:34:22] <latinojoel> you can have it in diff locations...
[13:34:47] <aliasc> and when the application will try to start again mongodb will not
[13:36:30] <aliasc> what if i just remove the lock file ?
[13:37:14] <aliasc> the mongodb documentation says if you remove the lock file data may get corrupted
[13:37:56] <latinojoel> sometimes not. but you can assume it's corrupted anyway.
[13:38:04] <latinojoel> pack everything in docker... ;)
[13:38:15] <aliasc> its just my noob client doesnt want data to be on the server
[13:39:38] <aliasc> you mean to backup things every time, say once a minute?
[13:41:56] <aliasc> i guess i have to look for a solution on my own rather than ask
[14:48:46] <ginhi_000> hey guys
[14:50:27] <ginhi_000> I have a weird error: on my home network I can connect to my mongodb with the java driver as long as I have a static ip (even tho I use localhost); at work I don't have a static ip. why won't the localhost connection work?
[14:52:15] <ginhi_000> http://pastebin.com/4MGyUxEx
[15:02:59] <ginhi_000> anyone here
[15:45:31] <devn1nja> hey
[15:45:40] <devn1nja> guys
[15:48:38] <devn1nja> i'm a new mongodb user and i can't understand one simple thing. i have one collection with "news" documents. each document in this collection has "metrics" fields (shares etc.).
[15:49:53] <devn1nja> how can i a) sort by some function applied to these metrics, b) get in one query a sorted and limited list of documents plus exactly [_id1, _id2, etc.]
[15:50:18] <devn1nja> does $in do what i need in the b-case?
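For the record: $in does cover case (b), e.g. `db.news.find({ _id: { $in: [id1, id2] } })` (the `news` collection name is assumed). Case (a), sorting by a *function* of the metrics, needs either an aggregation `$project` of a computed score followed by `$sort`, or client-side sorting. A plain-JS sketch of the client-side variant (the scoring function and field names are made up):

```javascript
// Hypothetical news documents with embedded metrics:
const docs = [
  { _id: 1, metrics: { shares: 3, likes: 10 } },
  { _id: 2, metrics: { shares: 9, likes: 1 } },
];

// Hypothetical scoring function applied to the metrics:
const score = (m) => m.shares * 2 + m.likes;

// Sort descending by the computed score, as a server-side $sort on a
// $project-ed field would:
docs.sort((a, b) => score(b.metrics) - score(a.metrics));
console.log(docs.map((d) => d._id)); // [ 2, 1 ]
```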
[16:06:46] <James1x0> Anyone know of a good way to test an async .post(‘save’ hook in mongoose? Should I just use a timeout and create an unsightly race condition? :(
[17:50:51] <znn> how do you create a subcollection?
[17:58:25] <skot> there is no such thing as a subcollection... what do you think it means?
[17:58:59] <skot> how would you use a "subcollection" differently than a collection?
[18:18:00] <znn> skot: a subcollection has an extra property associating it with another resource
[18:18:16] <znn> that's what i think is the difference from a typical collection
[18:19:18] <znn> it's a subset of a larger set but it is associated with an object
[18:40:46] <skot> znn, there is no such thing on the server. Is this some client idea somewhere? Are you talking about a DBRef? Maybe you are thinking about arrays in documents? Do you have an example of how you want it to work?
[20:14:46] <kexmex> this is ridiculous
[20:14:54] <kexmex> i fsyncLocked database
[20:14:54] <kexmex> then did db.dropDatabase() (probably should have unlocked before doing that)
[20:14:54] <kexmex> now i can't even connect with mongo shell
[20:15:24] <kexmex> luckily this isn't a production database
[20:15:29] <kexmex> what should i do?
[20:16:32] <kexmex> 2014-10-01T16:13:49.830-0400 [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[20:23:07] <kexmex> i can't even kill mongod with -9
[21:32:12] <joannac> kexmex: kill -9 should always work
[21:32:24] <kexmex> it finally did
[21:32:28] <kexmex> but that kinda sucks
[21:32:31] <kexmex> that i was able to lock myself out
[21:33:00] <kexmex> dropDatabase() waits for lock to be removed, but how the heck do i remove the lock, if i can't even connect
[21:37:33] <joannac> kexmex: did you read the big red warning box in the docs?
[21:37:34] <joannac> WARNING
[21:37:34] <joannac> When calling db.fsyncLock(), ensure that the connection is kept open to allow a subsequent call to db.fsyncUnlock().
[21:37:37] <joannac> Closing the connection may make it difficult to release the lock.
[21:37:49] <kexmex> :)
[21:37:59] <kexmex> that's pretty terrible
[21:38:12] <joannac> /me shrugs
[21:38:15] <kexmex> say if i am connected remotely, where is the guarantee i will remain connected
[21:38:22] <joannac> you told the database "no writes"
[21:38:31] <joannac> and it goes "okay, no writes"
[21:38:47] <kexmex> ok but shouldn't there be at least some way to tell DB to unlock
[21:38:50] <kexmex> if connection requires writes? :)
[21:38:56] <joannac> yes. keep a connection, and unlock
[21:39:16] <kexmex> but connection is not something guaranteed right
[21:39:48] <joannac> well, no
[21:40:08] <joannac> but if your network is flaky enough that you're worried, maybe fsyncLock is not the solution you need
[21:40:25] <kexmex> even if i am doing that on local machine tho
[21:40:37] <joannac> why would a connection drop on the local machine?
[21:40:49] <kexmex> who knows
[21:41:18] <kexmex> maybe a sun spot flares up
[21:41:20] <kexmex> messes up the ram
[21:41:21] <kexmex> :)
[21:41:29] <joannac> if your connections are dropping on the local machine, you have bigger problems than fsyncLock :p
[21:42:22] <kexmex> ok, what if the machine i am remotely ssh-ing from into the machine that runs mongo
[21:42:23] <kexmex> dies
[21:42:27] <kexmex> that's very likely
[21:42:42] <kexmex> connection drops
[21:42:50] <joannac> run in screen?
[21:42:56] <joannac> wait, which one dies?
[21:43:02] <kexmex> the one i am connecting from
[21:43:10] <kexmex> either the machine dies, or connection dies
[21:43:13] <joannac> actually, let's step back a second. why are you running fsyncLock?
[21:43:19] <kexmex> to backup
[21:43:25] <joannac> mongodump?
[21:43:29] <kexmex> i was fixing another problem, just as bad, which i didnt go into
[21:43:29] <kexmex> yea
[21:43:30] <joannac> filesystem snapshot?
[21:43:31] <kexmex> mongodump
[21:43:36] <joannac> replica set?
[21:43:39] <kexmex> nah, regular
[21:43:48] <joannac> why aren't you running a replica set?
[21:43:48] <kexmex> default config
[21:44:01] <kexmex> this is a dev DB, just testing things
[21:44:17] <joannac> run a single node replica set
[21:44:22] <joannac> then you can mongodump --oplog
[21:44:30] <joannac> and you don't need to fsyncLock at all
[21:44:38] <kexmex> i see
[21:44:38] <joannac> and also, run in screen
[21:44:38] <kexmex> btw
[21:44:53] <kexmex> the reason i was doing a mongodump
[21:45:37] <kexmex> is because of a bad aggregate command (it created millions of rows out of 169)
[21:45:59] <kexmex> and i ran out of space
[21:46:00] <joannac> ouch
[21:46:05] <kexmex> but couldn't run repairDatabase()
[21:46:06] <kexmex> lol
[21:46:14] <kexmex> cause... i don't have as much spare space as the DB takes
[21:46:15] <kexmex> :)
[21:46:21] <kexmex> which was gigs on gigs
[21:47:58] <joannac> yeah
[21:48:17] <joannac> if you have another drive you can mount, you can do repair with --repairPath
[21:48:28] <kexmex> yea i guess in production
[21:48:42] <kexmex> would need to call the datacenter guys
[21:48:46] <kexmex> to mount something quick lol
[21:51:13] <joannac> in any case, yeah. if you are running fsyncLock do *not* close the connection if you can help it
[21:51:25] <joannac> the easiest way to fall into this? auth
[21:51:34] <kexmex> you guys should add some named-pipes command or something
[21:51:49] <kexmex> auth?
[21:51:49] <joannac> fsyncLock, a write comes in and blocks, need a new connection, need to auth, read sits behind write
[21:52:12] <kexmex> ah
[21:52:23] <kexmex> well i did fsyncLock() -> dropDatabase()
[21:52:32] <kexmex> maybe dropDatabase() shouldn't be allowed to run during lock
[21:52:34] <kexmex> in same connection
[21:52:41] <kexmex> basically my connection was open but i locked myself in
[21:52:50] <joannac> yes
[21:53:02] <kexmex> any writes in connection that did the lock
[21:53:02] <joannac> why were you trying to do a write while locked?
[21:53:08] <kexmex> shouldn't be allowed, at least
[21:53:08] <kexmex> i wasn't thinking
[21:53:11] <kexmex> :)
[21:53:15] <joannac> :P
[21:53:36] <kexmex> i guess mongoDB DBAs make big bucks
[21:54:28] <kexmex> in mongoshell, is there a way to kill the current op?
[21:54:35] <kexmex> i.e. Ctrl+C just kills the shell
[21:54:39] <kexmex> or was it Ctrl + Z
[21:55:23] <kexmex> [00:51:46] <@joannac> fsyncLock, a write comes in and blocks, need a new connection, need to auth, read sits behind write
[21:55:49] <kexmex> so if i fsyncLock() and then some write comes in from my app, i can't even do a dump? or does dump just read the files?
[21:56:07] <joannac> depends. do you have auth on?
[21:56:11] <kexmex> yea
[21:56:12] <kexmex> i do
[21:56:20] <joannac> then maybe, maybe not
[21:56:32] <kexmex> seems like fsyncLock() shouldn't be used, EVER :)
[21:57:15] <joannac> the recommendation I always give is if you do fsyncLock, don't close your connection and make sure you can fsyncUnlock on the same connection
[21:57:29] <kexmex> yea
[21:57:40] <kexmex> what about my backup script
[21:57:51] <kexmex> i got it from somewhere, i wonder if it locks before dump
[21:58:30] <kexmex> automongobackup.sh
[22:03:09] <joannac> no idea. read it and see :)
[22:52:00] <freeone3000> Say I have a query, and I'm aggregating a set of records by (domain, subdomain). https://gist.github.com/freeone3000/144197303e1e2c7223f0 I'd like to get a count of unique "requestInfo.userId"s for each _id in the result. How would I specify this?
[23:02:20] <joannac> freeone3000: group on domain, subdomain, userid, then group again on just domain, subdomain, and $sum: 1
[23:28:35] <freeone3000> joannac: Separate group steps?
[23:32:50] <freeone3000> joannac: https://gist.github.com/freeone3000/5a8408f57304c1823ef3 gives me {"count": 113}.
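The two-stage group joannac describes, sketched as a pipeline (the field names follow the question; treat the `records` collection name as an assumption), with a plain-JS mirror of the distinct-count semantics:

```javascript
// First $group collapses to one document per (domain, subdomain, userId);
// second $group counts those documents per (domain, subdomain), which is
// exactly the number of distinct userIds:
//   db.records.aggregate([
//     { $group: { _id: { domain: "$domain", subdomain: "$subdomain",
//                        userId: "$requestInfo.userId" } } },
//     { $group: { _id: { domain: "$_id.domain", subdomain: "$_id.subdomain" },
//                 uniqueUsers: { $sum: 1 } } }
//   ])

// The same distinct-count semantics mirrored with a Set per (domain, subdomain):
function uniqueUsers(records) {
  const sets = {};
  for (const r of records) {
    const key = r.domain + "/" + r.subdomain;
    (sets[key] = sets[key] || new Set()).add(r.userId);
  }
  return Object.fromEntries(Object.entries(sets).map(([k, s]) => [k, s.size]));
}
console.log(uniqueUsers([
  { domain: "a", subdomain: "x", userId: 1 },
  { domain: "a", subdomain: "x", userId: 1 },
  { domain: "a", subdomain: "x", userId: 2 },
])); // { 'a/x': 2 }
```

A single `{"count": 113}` result like freeone3000's suggests the second $group collapsed everything into one bucket, i.e. the second stage grouped on a constant instead of re-reading domain/subdomain out of the first stage's compound `_id`.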