PMXBOT Log file Viewer


#mongodb logs for Thursday the 24th of December, 2015

[00:48:31] <sabrehagen> i'm setting up AWS for the first time. i found the MongoDB Official AMI called "MongoDB with 1000 IOPS". I'm entirely new to this. Should I use this image, or make my own?
[00:54:30] <sabrehagen> also, how do i check the size of my MongoDB working set?
[01:14:44] <on247> So i managed to get mongo to build and run
[01:15:01] <on247> but mongod still doesn't work
[01:15:12] <on247> now i get a bus error
[01:31:17] <jm9000> With NoSQL is it ever a better approach to store the same data in two places to reduce queries?
[01:37:00] <jm9000> I have a situation with a chat app where there is a table for topics, and then there is a table for all the posts. The first post in a topic is effectively the topic summary. To query for the last 100 topics created it takes 101 queries; 1 query for the list, then 1 query on each result. Alternatively I could just store that data in the topic itself, which would reduce it from 101 queries to 1 query.
[02:20:17] <edrocks_> jm9000: organize your data with your access patterns in mind (don't be too afraid to duplicate it). also mongodb calls tables collections as it's a document store
[02:21:21] <jm9000> edrocks_: Thanks, that helps.
[02:21:27] <edrocks_> np
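The duplication edrocks_ is endorsing can be sketched as follows. This is a hedged illustration, not anything from the log: the collection and field names (`topics`, `summary`, `createdAt`) are made up for the example. The idea is to copy the first post's summary onto the topic document at creation time, so listing topics never needs a second query per topic.

```javascript
// Sketch of the denormalization jm9000 is considering (names are illustrative).
// The first post's summary is copied onto the topic document when the topic
// is created, trading a little duplicated data for a single-query listing.
function makeTopic(title, firstPost) {
  return {
    title: title,
    createdAt: firstPost.createdAt,
    // duplicated from the first post so the topic list needs no extra query
    summary: firstPost.body.slice(0, 140),
  };
}

// Listing the latest 100 topics is then one query, e.g. in the mongo shell:
//   db.topics.find().sort({ createdAt: -1 }).limit(100)
```

The cost is that if the first post is edited, the copied `summary` must be updated too; that write-side bookkeeping is the usual price of denormalizing for read patterns.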
[02:21:51] <edrocks_> also there is a way you can query multiple docs in 1
[02:22:00] <jm9000> $lookup?
[02:22:01] <edrocks_> I use it for pagination let me find the name of it
[02:22:24] <edrocks_> $in
[02:22:46] <edrocks_> something like {"_id":{"$in":[list of id's or similar data]}}
[02:22:54] <jm9000> Is that more expensive than data duplication?
[02:23:07] <edrocks_> I'm pretty sure it's cheaper
[02:23:19] <jm9000> That's interesting.
[02:23:35] <edrocks_> I use it with redis to get pagination into the millions without slow down
[02:23:42] <edrocks_> just by keeping the id's in redis
[02:24:10] <edrocks_> much better than typical skip which is unusable after 3-5k documents deep in pagination
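The id-list pagination edrocks_ describes can be sketched like this (a hedged sketch; `pageIds`, the collection name, and the Redis detail are illustrative). The ordered list of `_id`s is kept in a side store such as Redis; a page is just a slice of that list, fetched from MongoDB in one `$in` query, so the cost stays proportional to the page size no matter how deep the page is.

```javascript
// Pure helper: which ids belong on a given 0-based page?
function pageIds(allIds, page, pageSize) {
  const start = page * pageSize;
  return allIds.slice(start, start + pageSize);
}

// With a driver handle `db` (assumed), fetching page 412 of 50 would be:
//   const ids = pageIds(allIds, 412, 50);
//   const docs = await db.collection("posts")
//     .find({ _id: { $in: ids } })
//     .toArray();
// Unlike .skip(n), the server never has to walk past the skipped documents,
// which is why deep pages stay fast.
```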
[02:27:04] <StephenLynx> yeah, I also use $in with pre-aggregation.
[02:31:50] <sabrehagen> what user privileges are needed to run db.runCommand({ serverStatus : 1, workingSet : 1 })?
[02:34:56] <sabrehagen> *role
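For the record, the command sabrehagen is asking about needs the `serverStatus` privilege action, which the built-in `clusterMonitor` role grants. One caveat worth hedging: the `workingSet` section was only reported by older (2.x, MMAPv1) servers; the command below still runs on later versions, but that section may simply be absent.

```javascript
// The command from the log, as a plain object (mongo shell syntax).
const cmd = { serverStatus: 1, workingSet: 1 };

// In the mongo shell, as a user granted clusterMonitor on the admin database:
//   use admin
//   db.runCommand({ serverStatus: 1, workingSet: 1 }).workingSet
// On servers that support it, the reply's workingSet section estimates the
// pages touched recently, which is the closest thing to a working-set size.
```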
[10:06:34] <velikan> hi all! how can I compare embedded values between records in different collections?
[11:35:11] <dddh> velikan: what do you mean by compare ?
[11:36:14] <velikan> wow dddh, I thought no one would answer :) I mean just compare the values of all the fields of an embedded object…
[11:58:35] <dddh> velikan: something like $unwind ?
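A hedged sketch of the `$unwind` approach dddh hints at (the collection and field names are made up for the example): `$unwind` turns each element of an embedded array into its own document, after which `$lookup` (available from MongoDB 3.2) can match the embedded value against a field in the other collection.

```javascript
// Aggregation pipeline as a plain object (mongo shell / driver syntax).
const pipeline = [
  // one output document per element of the embedded `items` array
  { $unwind: "$items" },
  {
    $lookup: {
      from: "other",              // the second collection (illustrative name)
      localField: "items.value",  // embedded field to compare
      foreignField: "value",      // matching field in `other`
      as: "matches",
    },
  },
  // keep only documents whose embedded value also exists in `other`
  { $match: { matches: { $ne: [] } } },
];

// Run as: db.first.aggregate(pipeline)
```

Comparing *all* fields of an embedded object is trickier; with a single compound value one option is to `$lookup` on the whole subdocument, but field order then matters, so normalizing the shape first is safer.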
[14:16:35] <JoWie> is the maximum document size 16 MB or 16 MiB?
[14:28:05] <StephenLynx> I think it's MB
[15:50:50] <deathanchor> MB
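JoWie's question actually has a precise answer the server itself reports: `isMaster` (later `hello`) returns `maxBsonObjectSize` as 16777216 bytes, i.e. 16 × 1024 × 1024. So the limit is 16 MiB in binary terms, even though the documentation writes it as "16 MB" (megabytes).

```javascript
// The BSON document size limit, as the server reports it.
const maxBsonObjectSize = 16 * 1024 * 1024; // 16777216 bytes = 16 MiB

// In the mongo shell this can be checked directly:
//   db.isMaster().maxBsonObjectSize   // 16777216
```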