[00:14:14] <dgaff> hey all! If I have two processes trying to create a document and the only other operation on the document after that is an increment, do I have to worry about locking the document in some way, or is that being taken care of in the background?
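A minimal shell sketch of the usual answer here: a single upsert with $inc is atomic per document, so no application-level locking is needed. Collection and field names are hypothetical.

```javascript
// Both processes can run this concurrently: MongoDB serializes writes to a
// single document, and upsert creates it if it does not exist yet.
// "counters" and "hits" are hypothetical names.
db.counters.updateOne(
  { _id: "page-home" },      // match (or create) this document
  { $inc: { hits: 1 } },     // increment atomically
  { upsert: true }
);
```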
[02:00:16] <rdove> Does anyone know how I would group by multiple fields, e.g. field1 and field2? This works for one field using AggregateIterable -- looking to do this in Java: http://pastebin.com/raw/nGmq7nqX
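The usual trick is a compound _id in $group. Sketched in shell syntax since the pipeline shape is what matters; the Java builders (Aggregates.group / Accumulators in the 3.x driver) mirror it. Collection and field names follow the question.

```javascript
// Group on two fields at once by making _id a sub-document.
db.collection.aggregate([
  { $group: {
      _id: { f1: "$field1", f2: "$field2" },  // compound group key
      count: { $sum: 1 }
  }}
]);
```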
[05:57:00] <tylerdmace> Hey guys! What's best practice when dealing with membership? i.e., if I have users and groups, should I store a list of groups a user is a member of on the user document and a list of users that belong to a particular group on the group document?
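One common answer, sketched under the assumption that "who is in group X?" can be served by an indexed query instead of a second, duplicated list. All names are hypothetical.

```javascript
// Keep the membership list on the user only; a multikey index makes the
// reverse lookup cheap without maintaining two lists in sync.
db.users.insertOne({ _id: "u1", name: "alice", groups: ["g1", "g2"] });
db.users.createIndex({ groups: 1 });   // multikey index over the array

// Members of a group:
db.users.find({ groups: "g1" });
```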
[07:46:28] <chris|> quick question: in what way is a shutdown command preferable to sending a SIGTERM?
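For reference, the shutdown command from the shell; both it and SIGTERM perform a clean shutdown, so the practical difference is mostly that the command goes over an authenticated connection.

```javascript
// Clean shutdown via the admin command (requires appropriate privileges).
db.adminCommand({ shutdown: 1 });
```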
[10:11:18] <Lope> how much space does mongodb require for a small DB? let's say I use small files, and delete the journal at every boot?
[10:38:58] <Lope> I tried to add this to my mongo commandline: --wiredTigerEngineConfig="cache_size=128M,checkpoint=(wait=60,log_size=50M)"
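If memory serves, the mongod flag is --wiredTigerEngineConfigString (note the suffix); a hedged sketch of the corrected invocation, same settings as above:

```
mongod --wiredTigerEngineConfigString="cache_size=128M,checkpoint=(wait=60,log_size=50M)"
```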
[14:46:13] <alexandrugheorgh> I have a collection of users that have a state_history field which is made up of objects containing a date and a state. I have to generate a report that shows users grouped by transitions from one state to another within a certain timeframe. I was wondering how to go about doing this using just mongodb queries.
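A hedged shell sketch of the first half of that report, assuming documents shaped like { _id, state_history: [{ date, state }, ...] }: it pulls each user's in-window states out in date order, after which pairing consecutive entries into transitions (e.g. A -> B) can be finished app-side. Dates are hypothetical.

```javascript
db.users.aggregate([
  { $unwind: "$state_history" },
  { $match: { "state_history.date": {
      $gte: ISODate("2016-01-01"),
      $lt:  ISODate("2016-02-01")
  }}},
  { $sort: { _id: 1, "state_history.date": 1 } },
  { $group: {
      _id: "$_id",
      states: { $push: "$state_history.state" }  // ordered per user
  }}
]);
```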
[16:43:14] <zylo4747> I am trying to install a MongoDB Automation Agent on a CentOS 7 server. I followed the instructions in the Cloud Manager and set the API and Group keys properly. When I start the agent, it looks like there are no errors in the log or the verbose log yet I'm not seeing the agent in the Cloud Manager web interface
[16:43:24] <zylo4747> Can anyone help me troubleshoot this?
[16:45:40] <zylo4747> also, i can go to logs in the cloud manager and see that my new server is communicating but i don't see it listed under agents still
[16:48:19] <zylo4747> nevermind, it finally showed up
[16:49:32] <kurushiyama> zylo4747: Yeah, takes a while, no clue why.
[19:35:51] <jfhbrook> Hi all, trying to use the mongodb client and db.close() is AFAICT straight up not working, as in 30 seconds later I still have an open connection pool. Ideas? What gives?
[19:44:10] <kurushiyama> jfhbrook: Oh. My bad. Late here. Actually, if it follows standard procedure, you actually have to destroy the client, not the database handles. But node is far out of my experience.
[19:44:44] <jfhbrook> kurushiyama, the API for node is MongoClient.connect((err, db) => {
[19:44:49] <jfhbrook> and ostensibly, db.close() works
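For reference, a sketch with the 2.x Node driver, whose db.close() takes an optional force flag and a callback; the URL and database name are hypothetical.

```javascript
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
  if (err) throw err;
  // ... do work ...
  db.close(true, function (err) {
    // `true` forces the underlying pool closed rather than waiting on
    // in-flight operations; this callback fires once it is torn down.
    console.log('closed', err);
  });
});
```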
[19:45:18] <kurushiyama> Progster: Do not use either. If you want a rant about why not to use Mongoose from the conceptual side alone, fasten your seat belts. ;)
[19:45:41] <kurushiyama> jfhbrook: Still unclosed cursors, maybe?
[19:45:46] <jfhbrook> seriously Progster from experience, mongoose is garbage, just use the standard client and use like joi or something
[19:46:16] <jfhbrook> Don't think so kurushiyama my app is running locally and I'm not actually making requests, and db.currentOp() in the repl doesn't show anything
[19:46:23] <jfhbrook> except the call to currentOp() itself, obviously
[19:47:27] <jfhbrook> how about turning on debug logging for this thing?
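The 2.x driver does ship its own logger, if that helps; a minimal sketch:

```javascript
// Enable the Node driver's internal debug logging (driver 2.x).
var mongodb = require('mongodb');
mongodb.Logger.setLevel('debug');
```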
[19:52:10] <jfhbrook> so much for that, according to the client all 5 of them got closed
[20:21:45] <kurushiyama> Progster: That is a technical debt I would _never_ accept.
[20:22:04] <Progster> well then you and I are different
[20:22:07] <kurushiyama> Progster: My suggestion: Refactor now.
[20:22:59] <kurushiyama> Progster: Because technical debts, especially in your persistence layer, tend to bite you into body parts you would like to be unharmed.
[20:25:19] <Progster> refactoring at the correct time is more important than knowing how to do the refactor
[20:26:07] <kurushiyama> Well, the correct time is before you call a feature done, imho. ;)
[20:27:00] <kurushiyama> And if a feature is done, it should not need anything to be tweaked, tested or documented any more, until requirements change.
[20:29:54] <Progster> It's easy to sit and pontificate on when to do refactors.
[20:32:00] <kurushiyama> Progster: Well, I work that way every single day. And you started arguing about _when_ to refactor ;P But do as you please. I am just saying that Mongoose might cause more problems than the time saved by skipping a refactor of a simple model is worth. But your call, ofc, Sir! ;)
[20:32:59] <Progster> I agree with you, but there's a time and place to do a refactor. I appreciate your advice, and will schedule it soon. Not in the middle of a fucking feature I'm working on. Focus...
[20:34:03] <kurushiyama> Well, THAT would surely be a bad idea. Maybe my wording was unfortunate. Now was meant as asaP.
[20:34:26] <Progster> what's the difference between ASAP and asaP ?
[21:12:07] <synic> Say I have a collection full of calendar events. The shard key is company_id, start, and end. Is there no way to efficiently fetch a single record by its ID without passing in company_id, start, and end to the query itself?
[21:25:03] <kurushiyama> synic: Aside from the fact that I think your shard key is... ...not optimal... no, no efficient query without shard key.
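To make the difference concrete, a hedged sketch; field names are from the discussion, values hypothetical.

```javascript
// Targeted: the full shard key is present, so mongos routes to one shard.
db.events.find({
  _id: ObjectId("572f1fd6a1d3c7795b1b2e9b"),
  company_id: 42,
  start: ISODate("2016-05-01"),
  end:   ISODate("2016-05-02")
});

// Scatter-gather: _id alone is not the shard key, so every shard is asked.
db.events.find({ _id: ObjectId("572f1fd6a1d3c7795b1b2e9b") });
```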
[21:36:56] <synic> kurushiyama: hrmm, it was the most optimal key we could come up with, but of course, that doesn't mean anything.
[21:38:54] <synic> what do you feel is non-optimal about it?
[21:48:43] <revolve> is it possible to subscribe to a find() query as a stream?
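In the 2.x Node driver a cursor already exposes a readable stream; for a live "subscribe" you would need a tailable cursor on a capped collection. A sketch, with hypothetical collection names:

```javascript
// One-shot: stream the results of a find().
var stream = db.collection('events').find({}).stream();
stream.on('data', function (doc) { console.log(doc); });
stream.on('end',  function ()    { console.log('done'); });

// Live feed: tailable cursor, only valid on a capped collection.
var tail = db.collection('log').find({})
  .addCursorFlag('tailable', true)
  .addCursorFlag('awaitData', true)
  .stream();
tail.on('data', function (doc) { console.log('new:', doc); });
```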
[21:53:24] <kurushiyama> synic: No matter how you arrange it, it has a monotonically increasing portion.
[21:54:51] <synic> we figured it was ok, since adding a new company doesn't mean old companies will stop getting events. Also considered using a company_uuid instead
[21:58:32] <kurushiyama> synic: It might not be as bad as the timestamp portion of an ObjectId as a shard key. And as I said, it is just not optimal. However, why not use company_uuid instead?
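One hedge against the monotonic portion, sketched with names from the discussion and a hypothetical namespace: shard on a hashed company id (or the uuid) so inserts spread across chunks, at the cost of range queries on that field becoming scatter-gather.

```javascript
// Hashing the key keeps new documents from all landing in the
// "latest" chunk.
sh.shardCollection("mydb.events", { company_id: "hashed" });
```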
[22:03:53] <synic> have actually put quite a bit of thought into it, but this is my first time doing anything with mongo, so there's probably some wrong thinking in there
[22:05:23] <synic> the fact is that the clients using it will always have a daterange, and a company, even if they are trying to select a single event. It's just odd for them to have to pass the original daterange when trying to update the daterange
[22:05:34] <synic> but whatever, the alternative is that we can't scale.