PMXBOT Log file Viewer


#mongodb logs for Friday the 3rd of June, 2016

[00:14:14] <dgaff> hey all! If I have two processes trying to create a document and the only other operation on the document after that is an increment, do I have to worry about locking the document in some way, or is that being taken care of in the background?
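One pattern that answers this (a hedged sketch; the filter and field names like `hits` are illustrative, not from the log): fold the create and the increment into a single upsert. MongoDB applies each update document atomically on the server, so no application-level locking is needed for a create-then-increment race.

```javascript
// Two processes racing to create the same document and then increment it
// can both issue this one upsert; the server serializes the updates.
// All names here are illustrative.
const filter = { _id: "counter-42" };
const update = {
  $inc: { hits: 1 },                         // applied on every call
  $setOnInsert: { createdAt: "2016-06-03" }, // applied only by the call that creates the doc
};
const options = { upsert: true };

// With the Node.js driver this would be roughly:
//   collection.updateOne(filter, update, options, callback);
```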
[02:00:16] <rdove> Do you guys know how I would do multiple group by fields... for example... field1 and field2 -- this works for one field using AggregateIterable -- looking to do this in Java: http://pastebin.com/raw/nGmq7nqX
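One way to group on two fields at once (a sketch; `field1`/`field2` are taken from the question) is to make `$group`'s `_id` a sub-document. The same pipeline shape, expressed as a list of `Document`s, is what would be passed to `AggregateIterable` in Java.

```javascript
// Compound group key: one bucket per distinct (field1, field2) pair.
const pipeline = [
  {
    $group: {
      _id: { field1: "$field1", field2: "$field2" },
      count: { $sum: 1 }, // number of documents in each pair's bucket
    },
  },
];

// e.g. db.collection("things").aggregate(pipeline).toArray(callback)
```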
[05:57:00] <tylerdmace> Hey guys! What's best practice when dealing with membership? ie, if I have a user and groups, should I store a list of groups a user is a member of on the user document and a list of users that belong to a particular group on the group document? What's best practice, here?
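A hedged sketch of the usual trade-off (all document names illustrative): embedding is safe on the side whose array stays small and bounded, while the unbounded side is usually better served by an indexed query instead of a mirrored array.

```javascript
// A user typically belongs to few groups, so embedding group ids on the
// user document is usually safe:
const user = {
  _id: "u1",
  name: "alice",
  groupIds: ["g1", "g2"], // small, bounded array: fine to embed
};

// A group's membership can grow without bound, so rather than mirroring a
// members array on the group document, index users.groupIds and query:
//   db.users.createIndex({ groupIds: 1 })
//   db.users.find({ groupIds: "g1" })
const group = { _id: "g1", name: "admins" };
```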
[07:46:28] <chris|> quick question: in what way is a shutdown command preferable to sending a SIGTERM?
[10:11:18] <Lope> how much space does mongodb require for a small DB? let's say I use small files, and delete the journal at every boot?
[10:38:58] <Lope> I tried to add this to my mongo commandline: --wiredTigerEngineConfig="cache_size=128M,checkpoint=(wait=60,log_size=50M)"
[10:39:17] <Lope> Error parsing command line: unrecognised option '--wiredTigerEngineConfig=cache_size=128M,checkpoint=(wait=60,log_size=50M)'
[10:40:13] <Lope> my mongodb log says: I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=8G,...
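The likely cause of the "unrecognised option" error above: in 3.2 the flag is `--wiredTigerEngineConfigString` (note the `String` suffix), which passes a raw WiredTiger configuration string through to the engine. A sketch of the intended invocation (the WiredTiger settings themselves are from the log):

```shell
mongod --storageEngine wiredTiger \
  --wiredTigerEngineConfigString="cache_size=128M,checkpoint=(wait=60,log_size=50M)"
```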
[12:15:33] <chris|> quick question: in what way is a shutdown command preferable to sending a SIGTERM?
[14:43:12] <alexandrugheorgh> Hello
[14:46:13] <alexandrugheorgh> I have a collection of users that have a state_history field which is made up of objects containing a date and a state. I have to generate a report that shows users grouped by transitions from one state to another within a certain timeframe. I was wondering how to go about doing this using just mongodb queries.
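A sketch of one approach (document shape assumed from the question, all values invented): comparing adjacent array elements inside a 3.2 aggregation pipeline is awkward, so deriving the (from, to) transitions client-side, or materializing them into a `transitions` collection and running `$group` on that, is a reasonable route.

```javascript
// Derive the state transitions that happened within [start, end] for one
// user document. A report would then count users per (from, to) pair.
function transitionsInRange(user, start, end) {
  const out = [];
  const h = user.state_history;
  for (let i = 1; i < h.length; i++) {
    if (h[i].date >= start && h[i].date <= end) {
      out.push({ from: h[i - 1].state, to: h[i].state });
    }
  }
  return out;
}

// Invented sample; ISO date strings compare correctly as plain strings.
const user = {
  _id: "u1",
  state_history: [
    { date: "2016-05-01", state: "trial" },
    { date: "2016-05-20", state: "active" },
    { date: "2016-06-01", state: "churned" },
  ],
};
const t = transitionsInRange(user, "2016-05-15", "2016-06-30");
```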
[16:43:14] <zylo4747> I am trying to install a MongoDB Automation Agent on a CentOS 7 server. I followed the instructions in the Cloud Manager and set the API and Group keys properly. When I start the agent, it looks like there are no errors in the log or the verbose log yet I'm not seeing the agent in the Cloud Manager web interface
[16:43:24] <zylo4747> Can anyone help me troubleshoot this?
[16:45:40] <zylo4747> also, i can go to logs in the cloud manager and see that my new server is communicating but i don't see it listed under agents still
[16:48:19] <zylo4747> nevermind, it finally showed up
[16:48:22] <zylo4747> took like 20 minutes
[16:48:23] <zylo4747> thanks
[16:49:32] <kurushiyama> zylo4747: Yeah, takes a while, no clue why.
[19:35:51] <jfhbrook> Hi all, trying to use the mongodb client and db.close() is afaict straight up not working, as in 30 seconds later I still have an open connection pool. Ideas? What gives?
[19:36:00] <jfhbrook> er
[19:36:00] <Progster> has anyone used mockgoose? I've been getting frustrating errors with it that
[19:36:04] <jfhbrook> the node.js mongodb client, sorry
[19:36:51] <Progster> I'm talking about the testing library for mongoose, called mockgoose
[19:36:57] <Progster> want to test my app
[19:37:22] <jfhbrook> I haven't Progster, I actively avoid mongoose
[19:37:47] <Progster> that's fine. How do you mock mongodb to test your api?
[19:37:56] <jfhbrook> sinon
[19:38:31] <Progster> hmmmm
[19:39:39] <jfhbrook> So, uhh, no ideas as to why db.close wouldn't close the connection pool?
[19:39:50] <jfhbrook> I'm calling it, it's calling back
[19:40:05] <jfhbrook> _getActiveHandles() still shows 5 connections to mongo
[19:40:39] <Progster> I've had the worst day getting JS testing up
[19:42:49] <jfhbrook> my google search is failing me https://encrypted.google.com/search?hl=en&q=mongodb%20nodejs%20db.close%20not%20fucking%20working
[19:42:50] <kurushiyama> jfhbrook: A language would be interesting for starters.
[19:42:55] <jfhbrook> I told you, node.js
[19:44:10] <kurushiyama> jfhbrook: Oh. My bad. Late here. Actually, if it follows standard procedure, you actually have to destroy the client, not the database handles. But node is far out of my experience.
[19:44:44] <jfhbrook> kurushiyama, the API for node is MongoClient.connect((err, db) => {
[19:44:49] <jfhbrook> and ostensibly, db.close() works
[19:44:51] <jfhbrook> ostensibly
[19:44:55] <jfhbrook> apparently not, in my case
[19:45:18] <kurushiyama> Progster: Do not use either. If you want a rant about why not to use Mongoose from the conceptual side alone, fasten your seat belts. ;)
[19:45:41] <kurushiyama> jfhbrook: Still unclosed cursors, maybe?
[19:45:46] <jfhbrook> seriously Progster from experience, mongoose is garbage, just use the standard client and use like joi or something
[19:46:16] <jfhbrook> Don't think so kurushiyama my app is running locally and I'm not actually making requests, and db.currentOp() in the repl doesn't show anything
[19:46:23] <jfhbrook> except the call to currentOp() itself, obviously
[19:47:27] <jfhbrook> how about turning on debug logging for this thing?
[19:52:10] <jfhbrook> so much for that, according to the client all 5 of them got closed
[19:54:19] <jfhbrook> hm
[19:54:23] <jfhbrook> without that call, there are 10 of them
[19:54:30] <jfhbrook> which makes me think I just haven't grepped the right thing
[19:55:13] <jfhbrook> fuck
[19:55:14] <jfhbrook> yeah
[19:55:15] <jfhbrook> okay
[19:55:26] <jfhbrook> someone made a submodule of a submodule that creates its own connection pool
[19:55:27] <jfhbrook> sick
[19:58:06] <jfhbrook> Okay, new question: Is there a getSiblingDb() equivalent for nodejs?
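With the 2.x Node.js driver, the rough equivalent of the shell's `getSiblingDB()` is the `Db#db(name)` method, which returns a handle to another database sharing the same underlying connection pool. A hedged sketch (the wrapper function is illustrative):

```javascript
// Given an open Db handle, return a sibling Db on the same pool.
// Assumes the 2.x driver's Db#db(name) method.
function siblingDb(db, name) {
  return db.db(name);
}

// Usage with the driver would be roughly:
//   MongoClient.connect("mongodb://localhost:27017/app", (err, db) => {
//     const admin = siblingDb(db, "admin");
//   });
```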
[20:05:35] <jfhbrook> welp
[20:05:42] <jfhbrook> I think I'm gettin' close :|
[20:07:53] <jfhbrook> I, uhh
[20:10:52] <jfhbrook> well, I got it ^______^
[20:10:55] <jfhbrook> thanks for, well
[20:10:58] <jfhbrook> yeah
[20:11:01] <jfhbrook> seeya
[20:19:05] <Progster> i'll have to refactor at some point
[20:19:10] <Progster> but for now mongoose is doing a decent job
[20:19:27] <Progster> but you're right, with a simple object like that I can just sinon and get on with my life
[20:19:33] <kurushiyama> Progster: No
[20:19:36] <Progster> however, having an in-memory mongo mock would be useful
[20:19:59] <kurushiyama> Progster: Mongoose actually prevents you from modelling correctly.
[20:20:40] <kurushiyama> Progster: It enforces or at least encourages modelling with ERMs.
[20:20:45] <Progster> ERMs?
[20:20:59] <Progster> I agree - I don't think it's a very good tool. For now it works so I'll save the refactor for another day
[20:21:04] <Progster> also it slows shit down immensely
[20:21:05] <kurushiyama> Progster: Where you should be modelling optimized for your use cases. Entity-Relationship models.
[20:21:45] <kurushiyama> Progster: That is a technical debt I would _never_ accept.
[20:22:04] <Progster> well then you and I are different
[20:22:07] <kurushiyama> Progster: My suggestion: Refactor now.
[20:22:59] <kurushiyama> Progster: Because technical debts, especially in your persistence layer, tend to bite you into body parts you would like to be unharmed.
[20:25:07] <Progster> my data model is simple
[20:25:19] <Progster> refactoring at the correct time is more important than knowing how to do the refactor
[20:26:07] <kurushiyama> Well, the correct time is before you call a feature done, imho. ;)
[20:27:00] <kurushiyama> And if a feature is done, it should not need anything to be tweaked, tested or documented any more, until requirements change.
[20:29:27] <Progster> stop
[20:29:29] <Progster> seriously
[20:29:54] <Progster> It's easy to sit and pontificate on when to do refactors.
[20:32:00] <kurushiyama> Progster: Well, I work that way every single day. And you started the arguing on _when_ to refactor ;P But do as you please. I am just saying that Mongoose might cause more of a problem than saving the time for refactoring of a simple model is worth. But your call, ofc, Sir! ;)
[20:32:59] <Progster> I agree with you, but there's a time and place to do a refactor. I appreciate your advice, and will schedule it soon. Not in the middle of a fucking feature I'm working on. Focus...
[20:34:03] <kurushiyama> Well, THAT would surely be a bad idea. Maybe my wording was unfortunate. "Now" was meant as asaP.
[20:34:26] <Progster> what's the difference between ASAP and asaP ?
[20:40:06] <kurushiyama> The emphasis on the P ;)
[20:40:10] <kurushiyama> Progster: ^
[20:42:02] <kurushiyama> And, there is a difference between "now" and "asaP" or ASAP ;)
[20:46:18] <afroradiohead> any mongoose users here know if it's possible to do what validate() does on one field?
[20:55:01] <StephenLynx> yeah, don't use mongoose
[20:55:41] <afroradiohead> heh yeah? it's so convenient though. How come?
[20:56:42] <StephenLynx> kek
[20:56:45] <StephenLynx> first
[20:56:56] <StephenLynx> its performance is awful, it's about 6x slower than the native driver
[20:57:10] <StephenLynx> second, it forces you to use mongo in a way it wasn't designed to be used
[20:57:17] <StephenLynx> third it breaks when handling dates
[20:57:27] <StephenLynx> fourth it doesn't handle ids correctly either
[20:57:36] <StephenLynx> it is the equivalent of using PHP
[20:57:57] <StephenLynx> it doesn't matter how convenient it is if it's screwing up
[20:58:06] <StephenLynx> "stop eating dog turds"
[20:58:12] <StephenLynx> "but they are so easy to find"
[20:58:20] <afroradiohead> haha gotcha
[20:58:48] <StephenLynx> not to mention its completely unnecessary.
[20:58:58] <StephenLynx> there is literally zero reason to use it.
[20:59:35] <afroradiohead> i see
[21:12:07] <synic> Say I have a collection full of calendar events. The shard key is company_id, start, and end. Is there no way to efficiently fetch a single record by its ID without passing in company_id, start, and end to the query itself?
[21:25:03] <kurushiyama> synic: Aside from the fact that I think your shard key is... ...not optimal... no, no efficient query without shard key.
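For context (a sketch; the field values are invented): mongos can route a query to a single shard only when the query includes the shard key fields, otherwise it must scatter-gather across every shard, which is why an ID-only lookup is inefficient here.

```javascript
// Shard key from the discussion: { company_id, start, end }.
const targeted = {
  company_id: 42,
  start: "2016-06-01",
  end: "2016-06-02",
  _id: "evt1", // narrowed further on the one shard that owns this chunk
};

const scatterGather = { _id: "evt1" }; // no shard key: broadcast to all shards
```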
[21:36:56] <synic> kurushiyama: hrmm, it was the most optimal key we could come up with, but of course, that doesn't mean anything.
[21:38:54] <synic> what do you feel is non-optimal about it?
[21:48:43] <revolve> is it possible to subscribe to a find() query as a stream?
[21:50:48] <revolve> TIL
[21:53:24] <kurushiyama> synic: No matter how you arrange it, it has a monotonically increasing portion.
[21:54:51] <synic> we figured it was ok, since adding a new company doesn't mean old companies will stop getting events. Also considered using a company_uuid instead
[21:58:32] <kurushiyama> synic: It might not be as bad as a timestamp of an objectid as shard key. And as said, it is just not optimal. However, why not use company_uuid instead?
[21:58:53] <synic> no reason. We will use it.
[21:59:07] <synic> we just don't have a uuid on company yet, but easy enough to make one
[22:01:47] <kurushiyama> Hm. You sure there isn't any natural key you can use?
[22:02:44] <synic> not that I can think of
[22:03:53] <synic> have actually put quite a bit of thought into it, but this is my first time doing anything with mongo, so it's probably been somewhat wrong thinking
[22:05:23] <synic> the fact is that the clients using it will always have a daterange, and a company, even if they are trying to select a single event. It's just odd for them to have to pass the original daterange when trying to update the daterange
[22:05:34] <synic> but whatever, the alternative is that we can't scale.
[22:46:35] <dorupikku> hi, anyone active?
[23:04:36] <dorupikku> why isn't ecmascript 6 working for me with the mongo shell (3.2.6)?
[23:05:41] <cheeser> probably the v8 version used doesn't (fully) support it?
[23:07:03] <dorupikku> thank you, but I thought it was supposed to be using spidermonkey
[23:07:51] <cheeser> we've gone back and forth but I think 3.2 is v8
[23:08:54] <dorupikku> release notes for 3.2 (https://docs.mongodb.com/manual/release-notes/3.2/) says spidermonkey, but 3.2.6 might be v8
[23:11:53] <cheeser> oh, i have it backwards.
[23:11:56] <cheeser> https://engineering.mongodb.com/post/code-generating-away-the-boilerplate-in-our-migration-back-to-spidermonkey/
[23:12:02] <cheeser> spidermonkey it is.
[23:14:22] <dorupikku> thanks, I wonder which version of spidermonkey it is
[23:31:58] <dorupikku> @cheeser I realized that some of the es6 features are supported
[23:49:49] <bros> How long should it take to read 10MB out of 1 collection using an index?
[23:50:24] <bros> 12,738 docs, 10,600,529 bytes JSON.stringified