[04:14:45] <Trinity> hi guys, i'm reading over the mongodb manual and i'm a bit confused about some subjects
[04:14:51] <Trinity> most notably, data aggregation and sharding
[04:14:56] <Trinity> are they related? or two separate things?
[04:15:13] <Trinity> what is the difference between data aggregation and find() queries?
[04:19:32] <Boomtime> approximately nothing if you only use $match and $sort in aggregation
[04:20:29] <Boomtime> but you'll find aggregation to be much more powerful given all the other pipeline stages available - however, it's also more expensive than a simple query.. use what is appropriate
[04:21:03] <Boomtime> if you only need to query what exists, use a find().. if you want to aggregate documents, use aggregation
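A minimal mongo shell sketch of that point (the collection name "orders" and its fields are made up): the first two statements return the same documents, while the $group stage shows something find() cannot do.

    // equivalent: a plain query vs. an aggregation using only $match and $sort
    db.orders.find({ status: "shipped" }).sort({ date: -1 })
    db.orders.aggregate([
      { $match: { status: "shipped" } },
      { $sort: { date: -1 } }
    ])

    // beyond find(): group the matching docs and sum a field per customer
    db.orders.aggregate([
      { $match: { status: "shipped" } },
      { $group: { _id: "$customerId", total: { $sum: "$amount" } } }
    ])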
[05:06:56] <Sugi> hi. i am trying to do a mongodump --dbpath /path/to/dbs/ but I keep getting an error that dbpath is no longer recognized. i assume there was a change in the most up to date version of mongodump. how can I do that in the most up to date version?
[05:07:03] <Sugi> all i have are .0 and .ns files of a DB.
[05:07:10] <Sugi> so i cant connect to the DB and query it.
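No answer followed in the log, but one common workaround (a sketch, assuming the .0/.ns files form an intact MMAPv1 data directory): recent mongodump versions dropped --dbpath, so start a temporary mongod on that directory and dump through the running server instead. The port and paths below are placeholders; on 3.2+ you may also need --storageEngine mmapv1 since those files are not WiredTiger.

    # point a throwaway mongod at the existing data files
    mongod --dbpath /path/to/dbs/ --port 27099

    # then dump over the connection rather than reading the files directly
    mongodump --host localhost --port 27099 --out /path/to/dump/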
[14:19:28] <ams_> When I run mongod --repair I'm seeing the lock file get created and stay around. Is that correct?
[14:46:25] <deathanchor> ams_: every mongod creates a lock file to prevent any other mongod started afterwards from writing to the same datadir.
[14:59:34] <ams_> deathanchor: ah ok, i was confused because it wouldn't start (complaining about the lock file not being empty) but then when i ran --repair the lock file remained
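For reference, a rough sketch of the sequence (paths are placeholders). The startup error is about the lock file being non-empty after an unclean shutdown, not about it existing, so the file sticking around after a repair is expected.

    # a non-empty mongod.lock suggests an unclean shutdown
    ls -l /data/db/mongod.lock

    # repair the datadir, then start normally
    mongod --dbpath /data/db --repair
    mongod --dbpath /data/db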
[16:14:53] <StephenLynx> not very useful, since mongo has its own memcache
[16:15:08] <StephenLynx> it would add more overhead than it's worth.
[16:31:50] <ChoHag> It takes 2.3 seconds to load the MongoDB perl library. Is there any way to speed that up?
[16:32:34] <StephenLynx> how often do you have to load this library?
[16:36:38] <ChoHag> Often enough that the time taken to load it is noticeable - i.e. >= 1.
[16:39:07] <StephenLynx> I know I am not answering your question, but is there any particular reason you have to load it more than once?
[16:41:17] <ChoHag> Because all the modules in $ork's project must have their syntax tested and that alone increases the time the release process takes by 3 minutes.
[16:45:09] <robsco> ChoHag, you won't speed it up; tests tend to take a while anyway. you'll only get the speed-up when you're running in a persistent env, which I would assume you're doing anyway in prod?
[16:48:11] <robsco> it would also be a question better suited to #perl-help on irc.perl.org
[16:50:21] <ChoHag> Well speeding up the load time of persistent modules is hardly my highest priority but 1.2 seconds seems a bit excessive.
[16:50:55] <robsco> depends, Moose/Catalyst take forever too
[16:51:12] <ChoHag> Especially considering it's not the only part of the project which uses Moose and the entire stack just takes half a second longer.
[16:51:17] <robsco> like I say, for testing, these do take a while
[17:10:12] <robsco> I receive 50m docs each day, and need to store them, knowing if any fields actually change each day (using an updated datetime field), and also if I just saw the same doc again (last_seen datetime field). Could this be done quickly with 2 collections? I currently do this in MySQL, load into an "import" table, then insert into the "main" table, with a "SELECT INTO ON DUPLICATE KEY UPDATE IF ..." - it's
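A rough two-step sketch in the mongo shell of one way to translate that pattern (collection name, doc_key, payload_hash and the in-scope doc variable are all made up; it assumes a unique key plus some hash of the payload to detect changes):

    // 1) if the doc already exists unchanged, only bump last_seen
    var res = db.main.update(
      { doc_key: doc.doc_key, payload_hash: doc.payload_hash },
      { $set: { last_seen: new Date() } }
    );

    // 2) otherwise insert it, or overwrite the changed version and bump updated
    if (res.nMatched === 0) {
      db.main.update(
        { doc_key: doc.doc_key },
        { $set: { payload: doc.payload,
                  payload_hash: doc.payload_hash,
                  updated: new Date(),
                  last_seen: new Date() } },
        { upsert: true }
      );
    }

At 50m docs a day you would presumably batch these through the bulk write API rather than issuing them one at a time; the sketch only shows the per-document logic.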
[18:08:35] <MacWinner> how much overhead does WiredTiger add to queries? does it only add overhead when queries are made that return documents that are not in memory? or does it affect all queries?
[20:50:21] <aps> Primary of my replica set (mongodb version 3.2; WiredTiger) is crashing (killed by OOM killer I think) almost everyday. What could be wrong here or how do I fix this? :(
[20:51:24] <aps> Tailing the dmesg shows this - https://www.irccloud.com/pastebin/zOJJlANj/
[20:57:56] <nofxx> OOM is from systemd, no? there are some options you should add to your mongodb.service
[20:58:13] <nofxx> and also, do you have enough RAM, aps?
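If the box is simply too small for WiredTiger's defaults (on 3.2 the internal cache defaults to roughly 60% of RAM minus 1 GB), one commonly adjusted knob is the cache size. A sketch of the relevant mongod.conf section; the 1 GB figure is just a placeholder to size for your machine:

    storage:
      engine: wiredTiger
      wiredTiger:
        engineConfig:
          cacheSizeGB: 1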
[22:37:25] <the_voice_> If I have an array of subdocuments that each contain a userId, how would I go about updating some of those subdocuments based on the userId?
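A minimal sketch of the usual approach (collection and field names are made up): the positional $ operator updates only the first matching array element in each document, while arrayFilters can hit every matching element but requires MongoDB 3.6+.

    // set a field on the first subdocument whose userId matches, in every document that has one
    db.groups.updateMany(
      { "members.userId": 42 },
      { $set: { "members.$.status": "active" } }
    )

    // MongoDB 3.6+ only: update every matching element in the array
    db.groups.updateMany(
      { "members.userId": 42 },
      { $set: { "members.$[m].status": "active" } },
      { arrayFilters: [ { "m.userId": 42 } ] }
    )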