#mongodb logs for Sunday the 26th of June, 2016

[00:09:00] <st431> hello, I need help with aggregate or something to get distinct values on 2 fields. I have a collection called chat and there are 2 fields: one is _reciever and the other is _sender
[00:09:52] <st431> or just a limit of 1 on both fields
[00:12:11] <cheeser> what are you trying to achieve?
[00:12:18] <oky> enlightenment
[00:12:57] <st431> http://pastie.org/10890447
[00:13:22] <st431> this query returns all matched items
[00:13:56] <st431> I want a filter or something that will return only one document based on the _reciever and _sender
[00:17:06] <st431> http://pastie.org/10890451
[00:17:27] <st431> sorry my english sucks
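
What st431 describes is effectively a distinct on a compound key. A minimal sketch of one way to do that in the mongo shell, assuming the chat collection described above (message and sentAt are hypothetical extra fields): grouping on both fields yields one result per (_sender, _reciever) pair, and $first carries representative values along.

    // One result per distinct (_sender, _reciever) pair; $first picks a
    // representative value from each group. The _reciever spelling follows
    // the schema above; message and sentAt are hypothetical fields.
    db.chat.aggregate([
      { $group: {
          _id: { sender: "$_sender", reciever: "$_reciever" },
          message: { $first: "$message" },
          sentAt:  { $first: "$sentAt" }
      } }
    ])
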
[00:45:53] <st431> aggregate([
[00:45:53] <st431> { "$group": { "_id": { thread: "$thread" } } }
[00:45:53] <st431> ])
[00:46:14] <st431> this only returns thread, is there any way to get the other fields?
[00:51:28] <cheeser> if you're going to group, you need some kind of accumulator: a sum or something
[00:55:36] <st431> cheeser, any example please?
[00:59:37] <cheeser> https://docs.mongodb.com/manual/reference/operator/aggregation/group/
[00:59:44] <cheeser> the docs are pretty good.
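
To make cheeser's point concrete, a sketch building on the thread grouping st431 pasted above: in a $group stage, every output field other than _id has to come through an accumulator (message is again a hypothetical field).

    db.chat.aggregate([
      { $group: {
          _id: { thread: "$thread" },
          count:   { $sum: 1 },                // messages per thread
          senders: { $addToSet: "$_sender" },  // distinct senders per thread
          lastMsg: { $last: "$message" }       // message is a hypothetical field
      } }
    ])
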
[01:08:07] <st431> cheeser, any idea what i am doing wrong here? { $match: { $or: [{ _sender: currentUser.id }, { _reciever: currentUser.id }] } },
[01:08:30] <st431> in find() it works but not in aggregate
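
A likely explanation, assuming the currentUser.id style means Mongoose is involved: find() filters are cast against the schema, but aggregate() pipelines are sent to the server uncast, so a string id will not match fields stored as ObjectId. A sketch, with Chat as a hypothetical model name:

    // Mongoose casts find() filters to schema types, but aggregation
    // pipelines are passed through as-is: cast the id explicitly.
    var mongoose = require("mongoose");
    var userId = mongoose.Types.ObjectId(currentUser.id);

    Chat.aggregate([
      { $match: { $or: [ { _sender: userId }, { _reciever: userId } ] } },
      { $group: { _id: { thread: "$thread" } } }
    ], function (err, results) {
      // results has one entry per thread involving the user
    });
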
[05:23:35] <bros> Alright, not proud of this
[05:23:40] <bros> but I have a query that runs *a lot*
[05:23:52] <bros> That hits 2k docs, and I need them all deserialized in a more performant manner than the JS or C clients offer.
[05:24:00] <bros> The query itself is 5ms
[05:24:05] <bros> The client side parsing is 300ms+
[06:39:24] <diegoaguilar> bros, what could be more performant than C?
[06:39:36] <bros> diegoaguilar: threaded BSON parsing
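
bros never posts his approach, so purely as an illustration of the idea he names, using pieces that postdate this log (worker_threads landed in Node 10): the Node driver's raw option can return each document as an undecoded BSON buffer, and deserialization can then be spread across threads instead of blocking the event loop. Every name below (URI, database, collection) is hypothetical.

    const { MongoClient } = require("mongodb");
    const { Worker } = require("worker_threads");
    const os = require("os");

    async function fetchParsed(uri) {
      const client = await MongoClient.connect(uri);
      // raw: true skips driver-side decoding; each doc is a BSON Buffer.
      const buffers = await client.db("app").collection("orders")
        .find({}, { raw: true }).toArray();
      await client.close();

      // Shard the buffers across one worker per core; each worker runs
      // bson.deserialize on its slice and posts plain objects back.
      // (Structured clone strips prototypes, so ObjectIds arrive as
      // plain data -- fine for a sketch, not for every use.)
      const n = os.cpus().length;
      const shards = Array.from({ length: n }, (_, i) =>
        buffers.filter((_, j) => j % n === i));
      const parts = await Promise.all(shards.map(shard =>
        new Promise((resolve, reject) => {
          const w = new Worker(`
            const { parentPort, workerData } = require("worker_threads");
            const { deserialize } = require("bson");
            parentPort.postMessage(
              workerData.map(b => deserialize(Buffer.from(b))));
          `, { eval: true, workerData: shard });
          w.once("message", resolve);
          w.once("error", reject);
        })));
      return parts.flat();
    }
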
[15:03:25] <ngl> does restarting mongo cause index rebuilds? I'm having a huge problem simply removing two old indexes (~37,000 records) and adding two new indexes. I assume ops with callbacks don't call back until after the operation is done? Can I just restart and voilà, the altered collection metadata causes index builds?
[15:52:06] <bros> What are my options when client-side BSON parsing is the most expensive part of my operations?
[17:08:17] <kurushiyama> ngl: No, and no. But you can put the index build into the background, as documented.
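
For reference, the documented option kurushiyama is pointing at, in mongo shell syntax of that era (the background flag became a no-op with the rebuilt index builds in MongoDB 4.2; collection, index, and field names here are hypothetical):

    // Drop the old indexes by name; a restart does not rebuild anything.
    db.things.dropIndex("old_index_1");
    db.things.dropIndex("old_index_2");

    // Build the new indexes in the background so reads and writes on the
    // collection are not blocked while they build.
    db.things.createIndex({ userId: 1, createdAt: -1 }, { background: true });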