PMXBOT Log file Viewer


#mongodb logs for Wednesday the 25th of March, 2020

[02:23:22] <Devastator> what is the easiest way to install mongodb 3.2 on Debian 10 (buster)? thank you
[10:12:46] <neuro_sys> Who is an Aggregate API guru? I wonder if I can pull this off with a single aggregate query.
[10:17:54] <neuro_sys> I'll try to formulate an example, but the idea is to match an array containing an id, which is a reference to another array of objects, where that object should match a simple and/or condition.
[10:21:07] <neuro_sys> Whatever, I'll do two queries
[10:28:43] <neuro_sys> Haha https://docs.mongodb.com/manual/reference/operator/aggregation/in
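
A minimal sketch of the kind of single aggregate query being discussed, assuming a hypothetical "orders" collection whose "item_ids" array references documents in an "items" collection; the collection and field names are placeholders, and the and/or condition is applied to the joined documents (pymongo):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["test"]

    pipeline = [
        # Resolve the array of referenced ids into the matching documents.
        {"$lookup": {
            "from": "items",
            "localField": "item_ids",
            "foreignField": "_id",
            "as": "matched_items",
        }},
        # Keep only documents where at least one referenced object satisfies
        # a simple and/or condition on its fields.
        {"$match": {
            "matched_items": {"$elemMatch": {
                "status": "active",
                "$or": [{"color": "red"}, {"size": "large"}],
            }},
        }},
    ]

    for doc in db.orders.aggregate(pipeline):
        print(doc["_id"], len(doc["matched_items"]))
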
[14:46:44] <oct4v1a> I am looking at the way that GridFS works. I am wondering if it might be possible to deduplicate chunk data in very similar files, by looking for exact matches of the 'n' and 'data' keys and making the 'files_id' field an array. Would pulling all of the documents to assemble the file still be fast enough if files_id was an indexed array, do you think?
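
A rough sketch of the reassembly query being asked about, under the hypothetical schema change where fs.chunks.files_id is an array of owning file ids rather than a single ObjectId (this is not how stock GridFS stores chunks); the multikey compound index is what would keep the per-file lookup fast (pymongo):

    from bson import ObjectId
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["test"]
    chunks = db["fs.chunks"]

    # Multikey index: lookups by a single file id stay indexed even though
    # files_id would hold several ids per deduplicated chunk.
    chunks.create_index([("files_id", 1), ("n", 1)])

    def read_file(file_id: ObjectId) -> bytes:
        """Reassemble one file from (possibly shared) chunks, in chunk order."""
        cursor = chunks.find({"files_id": file_id}).sort("n", 1)
        return b"".join(chunk["data"] for chunk in cursor)
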
[22:04:35] <Sasazuka> My MongoDB is a bit rusty, I was wondering if it's possible to do a group by X and Y where the count > 2, but instead of returning the count I want to return Z - I have the group by and count returned but I'm not quite sure about the return-Z part
[22:05:29] <Sasazuka> first Z of the group, not all Z - I guess I can just project Z if I want all of them
[22:24:22] <Sasazuka> looks like I can use $addToSet, but that gives all of them - I guess I need another $group stage in the pipeline?
[22:55:32] <Sasazuka> I guess the final solution I came up with is: aggregate, $push Z, and then $project a $slice of Z - seems correct
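
Put together, the pipeline Sasazuka describes would look roughly like this; the collection name and the X, Y, Z fields are placeholders from the conversation (pymongo):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["test"]

    pipeline = [
        # Group by X and Y, counting documents and collecting every Z.
        {"$group": {
            "_id": {"x": "$X", "y": "$Y"},
            "count": {"$sum": 1},
            "zs": {"$push": "$Z"},
        }},
        # Keep only groups with more than two documents.
        {"$match": {"count": {"$gt": 2}}},
        # Return just the first Z per group; drop the $slice to keep all of them.
        {"$project": {
            "count": 1,
            "z": {"$slice": ["$zs", 1]},
        }},
    ]

    for group in db.coll.aggregate(pipeline):
        print(group)
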