[08:16:15] <markizano> I am trying to mongoexport data from two collections. One has {'metadata.dataDomain': 1} in its keys (let's call it collection "foo"), and the other has {foo_id: 1} (let's call it "bar")
[08:16:42] <markizano> is there a way to get the documents in "bar" whose "foo_id" refers to documents in "foo" that match a given `metadata.dataDomain`?
[10:43:23] <GothAlice> markizano: Aggregate query, https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/, + $unwind and $match.
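A minimal sketch of the pipeline GothAlice points at, written for the mongo shell. It assumes "bar.foo_id" references "foo._id" and filters on a placeholder dataDomain value ("example.com"); adjust both to the real schema:

    db.bar.aggregate([
      { $lookup: {
          from: "foo",           // join against the foo collection
          localField: "foo_id",  // field on bar
          foreignField: "_id",   // assumed: foo_id stores foo's _id
          as: "foo_doc"
      }},
      { $unwind: "$foo_doc" },   // one output document per bar/foo pair
      { $match: { "foo_doc.metadata.dataDomain": "example.com" } }  // placeholder filter
    ])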
[12:38:42] <markizano> GothAlice: I thought I read somewhere that you couldn't use the aggregation pipeline in a `mongoexport` command?
[12:40:05] <GothAlice> markizano: You can $out the result of an aggregate to a concrete collection, or save an aggregate as a view.
[12:42:14] <GothAlice> markizano: Relevant mongodump switch: https://docs.mongodb.com/manual/reference/program/mongodump/#cmdoption-mongodump-viewsascollections — this will trigger export of the view as a concrete collection instead of just metadata to be restored.
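Putting those two answers together, a sketch of both options, reusing the pipeline above. The output collection and view names are placeholders:

    // Option 1: materialize the join into a concrete collection with $out
    db.bar.aggregate([
      { $lookup: { from: "foo", localField: "foo_id", foreignField: "_id", as: "foo_doc" } },
      { $unwind: "$foo_doc" },
      { $match: { "foo_doc.metadata.dataDomain": "example.com" } },
      { $out: "bar_joined" }  // writes results to bar_joined, which mongoexport can read
    ])

    // Option 2: save the pipeline as a view ($out is not allowed in a view definition)
    db.createView("bar_joined_view", "bar", [
      { $lookup: { from: "foo", localField: "foo_id", foreignField: "_id", as: "foo_doc" } },
      { $unwind: "$foo_doc" },
      { $match: { "foo_doc.metadata.dataDomain": "example.com" } }
    ])
    // then dump the view's documents as if it were a real collection:
    //   mongodump --db=mydb --viewsAsCollections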
[12:48:10] <z-memory> I am making a scheduling program that runs 24/7, using the "agenda" JavaScript library
[12:48:51] <z-memory> I noticed now that the way I wrote the program is not so great.
[12:49:11] <z-memory> for example, when I run a job every 3 minutes, it runs successfully for days.
[12:49:35] <z-memory> but if i change 3 minutes to 3 hours, i get "connection to mlab timed out"
[12:52:16] <z-memory> I know the agenda library "checks" the status of jobs to be executed via the "processEvery" value, which also defaults to 3 minutes. So my question is: should I explicitly connect to and disconnect from the remote database every time it checks the state, or try to maintain a constant connection? Or is there a better pattern you recommend?
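For reference, a minimal sketch of the setup z-memory describes, assuming agenda 2.x or later and a placeholder mLab connection string. agenda is normally handed one connection that it keeps open for its processEvery polling, rather than reconnecting on each check:

    const Agenda = require('agenda');

    const agenda = new Agenda({
      db: { address: 'mongodb://user:pass@ds012345.mlab.com:12345/jobs' }, // placeholder URI
      processEvery: '3 minutes'  // how often agenda polls the collection for due jobs
    });

    agenda.define('my job', async job => {
      // the periodic work goes here
    });

    (async () => {
      await agenda.start();                    // opens the connection and starts polling
      await agenda.every('3 hours', 'my job'); // the interval that triggered the timeout
    })();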