PMXBOT Log file Viewer


#mongodb logs for Tuesday the 22nd of December, 2015

[04:14:45] <Trinity> hi guys, i'm reading over the mongodb manual and i'm a bit confused about some subjects
[04:14:51] <Trinity> most notably, data aggregation and sharding
[04:14:56] <Trinity> are they related? or two separate things?
[04:15:13] <Trinity> what is the difference between data aggregation and find() queries?
[04:19:32] <Boomtime> approximately nothing if you only use $match and $sort in aggregation
[04:20:29] <Boomtime> but you'll find aggregation to be much more powerful given all the other pipeline stages available - however, it's also more expensive than a simple query.. use what is appropriate
[04:21:03] <Boomtime> if you only need to query what exists, use a find().. if you want to aggregate documents, use aggregation
[04:30:01] <cheeser> +1
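A sketch of the equivalence Boomtime describes, with an invented "events" collection: a find() with a filter and sort is the same operation as an aggregate() pipeline containing only $match and $sort stages, and aggregation only pays off once you add stages find() cannot express.

```javascript
// Hypothetical collection "events"; field names are invented.
const filter = { type: "click" };
const sort = { ts: -1 };

// find() form:        db.events.find(filter).sort(sort)
// aggregate() form, expressing the same query:
const pipeline = [
  { $match: filter }, // same predicate as the find() filter
  { $sort: sort },    // same ordering as .sort()
];

// Aggregation earns its extra cost once you add stages that find()
// cannot express, e.g. grouping:
const grouped = pipeline.concat([
  { $group: { _id: "$userId", clicks: { $sum: 1 } } },
]);
```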
[05:06:56] <Sugi> hi. i am trying to do a mongodump --dbpath /path/to/dbs/ but I keep getting an error that dbpath is no longer recognized. i assume there was a change in the most up to date version of mongodump. how can I do that in the most up to date version?
[05:07:03] <Sugi> all i have are .0 and .ns files of a DB.
[05:07:10] <Sugi> so i cant connect to the DB and query it.
[05:10:51] <cheeser> why not?
[05:14:11] <joannac> yeah, why can't you start up a mongod?
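For what it's worth, Sugi's error is expected with the 3.x tools: mongodump's --dbpath option was removed, so the standalone-dump workflow is now to start a temporary mongod over the old data files and dump through a connection. The paths and port below are placeholders.

```shell
# Point a throwaway mongod at the directory holding the .0/.ns files,
# then dump over a connection instead of via --dbpath:
mongod --dbpath /path/to/dbs --port 27018 --fork --logpath /tmp/dump-mongod.log
mongodump --port 27018 --out /path/to/dump
```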
[08:40:29] <dddh> 2015-12-22T07:53:47.321+0000 E QUERY [conn3] ReferenceError: Aaa is not defined at _funcs1 (_funcs1:1:30)
[08:40:32] <dddh> 2015-12-22T07:53:47.321+0000 F - [conn3] Invalid access at address: 0
[08:40:35] <dddh> 2015-12-22T07:53:47.328+0000 F - [conn3] Got signal: 11 (Segmentation fault).
[11:40:00] <ararog> guys, is there a way to do something like this on mongo?
[11:40:17] <ararog> db.collection.find({date.toLocaleDateString('en-US') === "12/17/2015" })
[11:43:12] <StephenLynx> yes
[11:43:21] <StephenLynx> create a date with the string you got
[11:43:26] <StephenLynx> and then compare to the saved date.
[11:43:32] <StephenLynx> mongo will convert both to UTC
[11:43:50] <StephenLynx> so after you got the date object, you don't have to worry about locales.
[11:45:23] <ararog> hmmm, but here's the thing, the saved date has time as well, I want to compare only the date part
[11:45:36] <StephenLynx> i see
[11:46:06] <StephenLynx> you could try a range
[11:46:22] <StephenLynx> from the start of the day to the end of it
[12:12:06] <ararog> StephenLynx: I believe that one will work, thanks!
[12:12:21] <ararog> db.collection.find({date: { "$gte" : ISODate("2015-04-17"), "$lt" : ISODate("2015-04-18") } })
[12:37:12] <StephenLynx> aye
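StephenLynx's range suggestion, sketched as a small helper (names invented): compute the start of the target day and the start of the next day, then query with $gte/$lt, which is exactly the shape of ararog's final query.

```javascript
// Build a whole-day range query for a date-typed field.
// Field name and date string are placeholders.
function dayRangeQuery(field, dateString) {
  const start = new Date(dateString); // "2015-04-17" parses as UTC midnight
  const end = new Date(start.getTime() + 24 * 60 * 60 * 1000); // next midnight
  return { [field]: { $gte: start, $lt: end } };
}

const query = dayRangeQuery("date", "2015-04-17");
// usable as: db.collection.find(query)
```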
[13:07:49] <G1eb> hi, quick question, where can I find my mongo config file (is there one by default?)
[13:36:57] <G1eb> anyone here? :p
[13:37:04] <G1eb> hello world
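G1eb's question goes unanswered above; for the record, a stock mongod does not require a config file at all. Package installs usually ship one (commonly /etc/mongod.conf on Linux, /usr/local/etc/mongod.conf with Homebrew), and a running instance only uses whichever file it was pointed at.

```shell
# Check whether the running mongod was started with a config file
# (look for a -f or --config argument on its command line):
ps -ef | grep '[m]ongod'

# Start mongod against an explicit config file:
mongod --config /etc/mongod.conf
```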
[14:19:28] <ams_> When I run mongod --repair I'm seeing the lock file get created and stay around. Is that correct?
[14:46:25] <deathanchor> ams_: every mongod creates a lock file, to stop any other mongod started later from writing to the same datadir.
[14:59:34] <ams_> deathanchor: ah ok, i was confused because it wouldn't start (complaining about lock file not being empty) but then when i ran --repair the lock file remains
[14:59:38] <ams_> but it is empty
[14:59:44] <ams_> i should have paid better attention to the error message :_)
[15:02:51] <ShadySQL> hi
[15:02:56] <ShadySQL> quick question
[15:03:03] <ShadySQL> how can I put an entire db on RAM?
[15:03:13] <ShadySQL> is it just using the dd command?
[15:09:09] <StephenLynx> in theory, yes.
[15:09:22] <StephenLynx> in practice, you might as well use redis if you just want a small but fast cache.
[15:31:48] <ShadySQL> how can I cache MongoDB in redis?
[15:31:53] <ShadySQL> that sounds cool
[16:14:53] <StephenLynx> not very useful, since mongo has its own in-memory cache
[16:15:08] <StephenLynx> it would add more overhead than it's worth.
[16:31:50] <ChoHag> It takes 2.3 seconds to load the MongoDB perl library. Is there any way to speed that up?
[16:32:34] <StephenLynx> how often do you have to load this library?
[16:36:38] <ChoHag> Often enough that the time taken to load it is noticeable - i.e. >= 1.
[16:39:07] <StephenLynx> I know I am not answering your question, but is there any particular reason you have to load it more than once?
[16:41:17] <ChoHag> Because all the modules in $ork's project must have their syntax tested, and that alone increases the time the release process takes by 3 minutes.
[16:42:05] <StephenLynx> what is $ork?
[16:45:09] <robsco> ChoHag, you won't speed it up; tests tend to take a while anyway. You'll only get the speed-up when you're running in a persistent env, which I assume you're doing in prod anyway?
[16:48:11] <robsco> it would also be a question better suited to #perl-help on irc.perl.org
[16:50:21] <ChoHag> Well speeding up the load time of persistent modules is hardly my highest priority but 1.2 seconds seems a bit excessive.
[16:50:55] <robsco> depends, Moose/Catalyst take forever too
[16:51:12] <ChoHag> Especially considering it's not the only part of the project which uses Moose and the entire stack just takes half a second longer.
[16:51:17] <robsco> like I say, for testing, these do take a while
[17:10:12] <robsco> I receive 50m docs each day, and need to store them, knowing if any fields actually change each day (using an updated datetime field), and also if I just saw the same doc again (last_seen datetime field). Could this be done quickly with 2 collections? I currently do this in MySQL, load into an "import" table, then insert into the "main" table, with a "SELECT INTO ON DUPLICATE KEY UPDATE IF ..." - it's
[17:10:13] <robsco> a bit of a beast
[18:08:35] <MacWinner> how much overhead does WiredTiger add to queries? does it only add overhead when queries are made that return documents that are not in memory? or does it affect all queries?
[20:50:21] <aps> Primary of my replica set (mongodb version 3.2; WiredTiger) is crashing (killed by OOM killer I think) almost everyday. What could be wrong here or how do I fix this? :(
[20:51:24] <aps> Tailing the dmesg shows this - https://www.irccloud.com/pastebin/zOJJlANj/
[20:57:56] <nofxx> OOM is from systemd no? there's some options you should add to your mongodb.service
[20:58:13] <nofxx> and also, do you have enough RAM, aps?
[20:58:56] <aps> nofxx: it's a 30 GB instance
[20:59:45] <aps> I just changed to WiredTiger, so not entirely sure about the RAM requirements. MMAPv1 used to work fine
[21:00:33] <nofxx> aps, yeah, check the recommended .service ... stuff like LimitNOFILE, LimitNPROC
[21:00:57] <nofxx> systemd is not just a launcher, it's a monitor
[21:01:24] <aps> thanks, I will
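Beyond the systemd limits nofxx mentions, the usual knob for OOM trouble after an MMAPv1-to-WiredTiger switch is capping the WiredTiger cache, which by default is sized from total RAM and shares the box with everything else. A hypothetical mongod.conf fragment (the 10 GB figure is only an example for a 30 GB instance, not a recommendation):

```yaml
# Cap the WiredTiger cache explicitly instead of relying on the default,
# leaving headroom for connections, index builds, and the rest of the box.
storage:
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 10
```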
[22:37:25] <the_voice_> If I have an array of subdocuments that each contain a userId, how would I go about updating some of those subdocuments based on the userId?
[22:39:36] <StephenLynx> dot notation
[22:39:46] <StephenLynx> $set:{'field.subfield':value}
[22:39:59] <the_voice_> same for the find?
[22:42:44] <StephenLynx> yeah
[22:43:12] <the_voice_> Cool
[22:43:12] <the_voice_> thanks
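For completeness, dot notation alone addresses a fixed path; selecting an array element by its userId needs the array match in the filter plus the positional $ operator in the update. A minimal sketch with invented names:

```javascript
const userId = 42; // hypothetical
// The array-field match in the filter picks which element $ refers to.
const filter = { "members.userId": userId };
const update = { $set: { "members.$.role": "admin" } };
// db.collection.updateMany(filter, update)
// Note: $ updates only the FIRST matching element per document; updating
// several elements at once needs arrayFilters with $[ident] (MongoDB 3.6+).
```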