PMXBOT Log file Viewer


#mongodb logs for Wednesday the 27th of November, 2019

[05:20:03] <f00lest> Mongod instance keeps crashing whenever I try to make a query
[05:20:22] <f00lest> when my query is just `find({})` it returns results
[05:20:37] <f00lest> if it's anything more complex than that, it just crashes
[05:20:50] <f00lest> what can be done to prevent this?
[05:21:06] <f00lest> It has over 700 000 documents
[05:21:15] <f00lest> and occupies 177MB of space
[05:22:17] <f00lest> I do want to be able to make queries
[05:26:40] <f00lest> does anyone know what to do when such things happen
[05:26:53] <f00lest> is 700 000 a lot of documents for mongodb?
[05:41:22] <f00lest> What is the best way I can get help
[10:13:52] <zelest> When adding users/roles and running a replicaset, do I need to add the roles and users to all replica members? Or does that replicate as well?
[11:23:07] <zelest> Anyone?
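For reference on zelest's question: in a replica set, users and roles are stored in the admin database, which replicates like any other data, so they only need to be created once, against the primary. A minimal sketch in the mongo shell (user, password, and db name are placeholders):

```
use admin
db.createUser({
  user: "appUser",      // placeholder
  pwd: "changeme",      // placeholder
  roles: [ { role: "readWrite", db: "mydb" } ]  // placeholder db name
})
```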
[14:09:46] <f00lest> does anyone know the reason why mongodb is crashing without any errors, it just restarts
[14:09:55] <f00lest> just restarts that's it
[14:09:59] <f00lest> no errors in the log
[14:10:11] <f00lest> whenever I run a query
[14:12:31] <Fira> well it's kinda vague. i'm no expert on mongo, but sounds like it's not entirely mongo-related to me. how do you run/restart it, how do you know it crashed, where do you run that query from?
[14:12:42] <Fira> @ f00lest
[14:14:20] <f00lest> I use robo3t for making a query
[14:14:53] <f00lest> After I make a query I see `***** SERVER RESTARTED *****`
[14:15:42] <f00lest> while making the query robo3t shows the error `Error: error doing query: failed: network error while attempting to run command 'find' on host 'x.x.x.x:27017' `
[14:16:00] <f00lest> It disconnects because it is getting restarted at the same time
[14:16:09] <f00lest> whenever I run a query it restarts
[14:16:18] <f00lest> I do not run any query it does not restart
[14:16:36] <f00lest> It has 700,000 documents in total
[14:16:37] <Fira> where do you get the log from?
[14:17:15] <f00lest> directly logging into the remote server and looking at `/var/log/mongodb/mongod.log`
[14:18:27] <Fira> is it empty empty or just nothing exploitable? could be sent to syslog - again i'm not sure on how mongo behaves in that regard
[14:19:34] <f00lest> Not empty, just no error messages
[14:19:56] <Fira> logs just stop too? checked dmesg in case it's an oom-kill? (assuming you're running on linux)
[14:21:30] <f00lest> Just a sec I'll paste in logs right before and after restart
[14:21:40] <f00lest> have to cleanse them first though
[14:22:10] <Fira> you didn't answer me regarding the oom killer though :p
[14:22:25] <f00lest> I do not know what that is
[14:23:14] <Fira> it's in the kernel log buffer, run dmesg
[14:23:30] <f00lest> so I just type in `dmesg`?
[14:23:32] <Fira> or most likely if you run systemd they're somewhere in journalctl as well
[14:23:39] <Fira> yeah
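Fira's suggestion as concrete commands (Linux; the grep patterns are just the usual OOM-kill phrasing):

```
# kernel ring buffer
dmesg | grep -iE 'out of memory|killed process'

# same kernel messages via the systemd journal
journalctl -k | grep -iE 'out of memory|killed process'
```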
[14:25:12] <f00lest> wow there are 33 lines here
[14:25:24] <f00lest> is this from the start of the server
[14:25:30] <f00lest> up till now?
[14:25:41] <Fira> not necessarily but usually yes
[14:25:49] <Fira> nothing special at bottom?
[14:26:19] <Fira> like "out of memory" "killed process" ...
[14:26:20] <f00lest> there are entries that say `(mongod)`
[14:26:28] <f00lest> no there are just entries
[14:26:59] <f00lest> do you think I could copy paste into pastebin without revealing anything
[14:28:07] <f00lest> All lines say Out of memory in UB at the start
[14:29:16] <Fira> "Out of memory in UB: OOM killed process ..... (mongod) score ... vm:...." ?
[14:29:25] <f00lest> yes
[14:29:29] <f00lest> there are such entries
[14:29:49] <Fira> well if they match up with your server restarts, it's likely that's it then
[14:30:18] <f00lest> when will the server kill them? Is it like a certain percentage of memory?
[14:30:24] <Fira> mongod probably needs more than your available system RAM (+swap, technically) to run that "find all" query on all 700K docs
[14:30:49] <f00lest> We just have 1GiB RAM
[14:31:11] <Fira> well i'm no kernel guru either for sure but AFAIK basically when it's full, the kernel kinda freaks out, picks the most memory-hungry process, and kills it
[14:31:21] <Fira> so that everything else doesn't crash
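A quick way to sanity-check Fira's explanation on the box itself (standard Linux tooling, nothing mongo-specific):

```
# total vs. used RAM and swap
free -h

# biggest memory consumers; mongod should show up near the top
ps aux --sort=-%mem | head -n 5
```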
[14:31:30] <f00lest> understood
[14:31:35] <f00lest> what could be done in this case
[14:31:47] <Fira> in this case very likely mongo and/or robo3t uses over 1GB for 700K docs i'd guess
[14:31:59] <f00lest> owh ok
[14:32:01] <Fira> and they end up killing themselves indirectly via kernel oom-killer
[14:32:26] <f00lest> can we limit mongodb's memory usage so that it only takes up a certain amount of memory, maybe at the cost of more time?
[14:32:57] <Fira> ummm. well for that kind of stuff i'm probably not qualified at all to answer you. but regardless of what you're doing, 1GB of system RAM for 700K docs definitely seems on the low end
[14:33:09] <Fira> let me test something quick
[14:33:18] <f00lest> okay
[14:34:28] <f00lest> Thing is that these docs are for just a month and will probably increase over time to a lot more
[14:35:35] <Fira> then either way i'd say yeahhhhh you're going to need more than 1GB
[14:35:48] <Fira> do you really need to return all at once though?
[14:36:03] <Fira> i mean letting the dbms do the work is the whole point
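What "letting the dbms do the work" might look like for this collection, sketched in the mongo shell; the projected fields are invented for illustration:

```
db.getCollection('logs')
  .find(
    { createdAt: { $gt: ISODate("2019-11-27T00:00:00Z") } },  // filter server-side
    { createdAt: 1, message: 1 }                              // project only what's needed
  )
  .sort({ createdAt: 1 })
  .limit(1000)                                                // cap the result set
```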
[14:36:25] <f00lest> okay, so you mean I should optimize my queries
[14:36:30] <Fira> well {} isn't a query
[14:36:33] <Fira> well i mean
[14:36:35] <Fira> technically it is
[14:36:40] <Fira> but it's just "dump the whole set"
[14:36:51] <Fira> why do you need to do that?
[14:37:16] <f00lest> no I was trying to do `db.getCollection('logs').find({ createdAt: { $gt: ISODate("2019-11-27 00:00:00.000Z") } })`
[14:37:23] <f00lest> just trying to get today's logs
[14:37:27] <Fira> ahh
[14:37:33] <f00lest> It runs fine to get results for {}
[14:37:40] <f00lest> but the above gives me an issue
[14:37:42] <Fira> oook
[14:38:17] <Fira> i'm not entirely sure, did you try to explain() the query?
[14:38:49] <Fira> i mean, i'd say "add an index so it's more efficient to search through createdAt" but it might not actually be the correct answer to a memory problem >_<
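Fira's two suggestions as shell commands; with the index in place, explain() should report an IXSCAN stage instead of a COLLSCAN:

```
// index createdAt so range queries don't scan every document
db.getCollection('logs').createIndex({ createdAt: 1 })

// inspect the query plan and the actual work done
db.getCollection('logs')
  .find({ createdAt: { $gt: ISODate("2019-11-27T00:00:00Z") } })
  .explain("executionStats")
```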
[14:39:23] <f00lest> yeah, but that is just one query, the client needs to do a lot of analysis on the database
[14:39:42] <f00lest> so he'll most likely not be able to run efficient queries all the time
[14:39:50] <Fira> well either way again if you're going to need to do "lots of analysis" on a db with already 700K docs and many more to come, you need way more than 1GB
[14:40:06] <f00lest> I think he'll just pull a dump
[14:40:12] <f00lest> during analysis
[14:41:27] <Fira> hm well again you could try adding an index but i dunno if it'd really help--
[14:41:28] <Fira> heh
[14:41:29] <Fira> wai
[14:45:52] <f00lest> I'll try to get a dump of db now and see if it works
[14:55:54] <f00lest> I'm not even able to create a dump it says `Failed: error reading from db: read tcp 127.0.0.1:42946->127.0.0.1:27017: read: connection reset by peer`
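For reference, a typical mongodump invocation (db and collection names are placeholders); if mongod is being OOM-killed mid-dump, the "connection reset by peer" error above is the expected symptom, and dumping collection by collection, or from a box with more RAM, may fare better:

```
mongodump --host 127.0.0.1 --port 27017 \
          --db mydb --collection logs \
          --gzip --out ./dump
```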
[14:56:29] <f00lest> do you think indexes will solve this problem?
[15:56:15] <f00lest> do you think setting the WiredTiger config will solve the problem?
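The setting f00lest is likely thinking of is the WiredTiger cache cap in mongod.conf, sketched below (0.25 GB is the documented minimum). Note it caps only the storage-engine cache, not mongod's total footprint, so it mitigates rather than rules out OOM kills:

```
# /etc/mongod.conf
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.25
```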
[19:42:31] <DuckyDev> Hi guys. Do you know if it is possible to clear/delete all items from an array in a document through mongodb-compass?
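Answering DuckyDev's question in shell terms: overwrite the array with an empty one via $set; Compass can apply the same edit from its document editor. Collection name, filter, and field name are placeholders:

```
db.getCollection('mycoll').updateOne(
  { _id: ObjectId("...") },      // placeholder filter
  { $set: { items: [] } }        // empty the array field
)
```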