[05:22:17] <f00lest> I do want to be able to make queries
[05:26:40] <f00lest> does anyone know what to do when such things happen
[05:26:53] <f00lest> is 700,000 a lot of documents for mongodb?
[05:41:22] <f00lest> What is the best way I can get help?
[10:13:52] <zelest> When adding users/roles and running a replicaset, do I need to add the roles and users to all replica members? Or does that replicate as well?
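For what it's worth, user and role documents live in the admin database's system collections, which replicate like ordinary data, so creating them once against the primary should be enough. A minimal mongosh sketch, with hypothetical user/db names:

    // run against the PRIMARY; the user document replicates on its own
    use admin
    db.createUser({
      user: "appUser",                        // hypothetical name
      pwd: passwordPrompt(),                  // prompts instead of an inline password
      roles: [{ role: "readWrite", db: "appDb" }]  // hypothetical db
    })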
[14:12:31] <Fira> well it's kinda vague. i'm no expert on mongo, but sounds like it's not entirely mongo-related to me. how do you run/restart it, how do you know it crashed, where do you run that query from?
[14:14:20] <f00lest> I use robo3t for making a query
[14:14:53] <f00lest> After I make a query I see `***** SERVER RESTARTED *****`
[14:15:42] <f00lest> while making the query robo3t shows the error `Error: error doing query: failed: network error while attempting to run command 'find' on host 'x.x.x.x:27017' `
[14:16:00] <f00lest> It disconnects because it is getting restarted at the same time
[14:16:09] <f00lest> whenever I run a query it restarts
[14:16:18] <f00lest> if I do not run any query, it does not restart
[14:16:36] <f00lest> It has 700,000 documents in total
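Before tuning anything, it's worth confirming why mongod went down. A minimal sketch, assuming a systemd-managed install with the unit named mongod and the default log path (both assumptions; adjust for your setup):

    # why did mongod stop? (assumes a systemd unit named "mongod")
    journalctl -u mongod --since "1 hour ago"
    # last entries mongod itself wrote before dying (assumes default log path)
    tail -n 100 /var/log/mongodb/mongod.log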
[14:31:11] <Fira> well i'm no kernel guru either for sure but AFAIK basically when memory is full, the kernel kinda freaks out, picks the most memory-hungry process, and kills it
[14:31:21] <Fira> so that everything else doesn't crash
[14:32:01] <Fira> and memory-hungry processes like mongod end up getting killed indirectly via the kernel oom-killer
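If the oom-killer theory is right, the kernel will have logged the kill. A quick check with standard tools:

    # the kernel logs every oom kill; -T gives human-readable timestamps
    dmesg -T | grep -iE "out of memory|oom"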
[14:32:26] <f00lest> can we limit mongodb memory usage so that it only takes up a certain amount of memory, maybe at the cost of queries taking more time?
[14:32:57] <Fira> ummm. well for that kind of stuff i'm probably not qualified at all to answer you. but regardless of what you're doing, 1GB of RAM for 700K docs definitely seems on the low end
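For the record, mongod's main memory knob is the WiredTiger cache size; capping it limits the cache but not every allocation the process makes. A minimal sketch using mongod's --wiredTigerCacheSizeGB flag (0.25 GB is the smallest accepted value; the config-file path below is an assumption about your install):

    # cap the WiredTiger cache; mongod can still use memory outside
    # this cache (connections, aggregation, etc.), so it's a cache cap,
    # not a hard process limit
    mongod --config /etc/mongod.conf --wiredTigerCacheSizeGB 0.25

The config-file equivalent is storage.wiredTiger.engineConfig.cacheSizeGB.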
[14:38:17] <Fira> i'm not entirely sure, did you try to explain() the query?
[14:38:49] <Fira> i mean, i'd say "add an index so it's more efficient to search through createdAt" but it might not actually be the correct answer to a memory problem >_<
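In mongosh that would look something like this; the collection name and date are hypothetical stand-ins:

    // does the query walk the whole collection? "COLLSCAN" in the
    // output means no index was used
    db.events.find({ createdAt: { $gte: ISODate("2019-01-01") } })
             .explain("executionStats")
    // single-field ascending index so filters/sorts on createdAt
    // stop scanning every document
    db.events.createIndex({ createdAt: 1 })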
[14:39:23] <f00lest> yeah, but that is just one query, client needs to do a lot of analysis on the database
[14:39:42] <f00lest> so he'll most likely not be able to run efficient queries all the time
[14:39:50] <Fira> well either way, if you're going to need to do "lots of analysis" on a db that already has 700K docs and many more to come, you need way more than 1GB
[14:40:06] <f00lest> I think he'll just pull a dump
[14:45:52] <f00lest> I'll try to get a dump of db now and see if it works
[14:55:54] <f00lest> I'm not even able to create a dump it says `Failed: error reading from db: read tcp 127.0.0.1:42946->127.0.0.1:27017: read: connection reset by peer`
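The dump is likely dying for the same reason the queries do: mongod gets killed mid-read. Dumping one collection at a time keeps each read smaller and limits what a crash costs; the db/collection names below are hypothetical:

    # one collection per run instead of the whole instance
    mongodump --db=appDb --collection=events --out=/backup/
    # --query='{ ... }' can slice a large collection into even smaller
    # dumps if a full-collection read still brings mongod down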
[14:56:29] <f00lest> do you think indexes will solve this problem?
[15:56:15] <f00lest> do you think setting the WiredTiger config will solve the problem?
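One way to sanity-check a WiredTiger change is to compare the configured cache size with actual usage. In mongosh, both counters come straight out of serverStatus() (these are the real wiredTiger.cache field names, reported in bytes):

    const c = db.serverStatus().wiredTiger.cache
    printjson({
      configuredBytes: c["maximum bytes configured"],
      inUseBytes: c["bytes currently in the cache"]
    })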
[19:42:31] <DuckyDev> Hi guys. Do you know if it is possible to clear/delete all items from an array in a document through mongodb-compass?
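Newer Compass versions embed a mongosh shell (and the document editor lets you delete array elements by hand), but the shell one-liner is simpler; the collection, field, and _id below are hypothetical:

    // set the array field to an empty array in one document
    db.users.updateOne(
      { _id: ObjectId("...") },        // the target document's id
      { $set: { items: [] } }
    )
    // updateMany({}, { $set: { items: [] } }) clears it in every document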