[00:07:48] <joannac> Freman: https://www.mongodb.com/contact and someone will reach out to you with pricing details for your deployment
[00:44:11] <crazyphil> ok, if I've built 3 sets of replSet servers, each with their own replica name, how would I shard across all 3 sets? The documentation is a bit confusing to me for some reason
[00:50:10] <crazyphil> nevermind, some digging on google sorted it out
[12:44:11] <kurushiyama> Sounds reasonable for a sync
[12:48:18] <cheeser> there's mongo-connector you might use to cobble something together...
[12:51:22] <kurushiyama> Integration on DB level? Thought we were past that ;)
[12:53:09] <cheeser> nah. still lots of usefulness there.
[12:53:32] <cheeser> e.g., streaming out to elasticsearch
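[editor's note: the elasticsearch case cheeser mentions can be sketched with mongo-connector roughly as below. Hostnames, ports and the doc-manager package name are assumptions — the correct doc manager depends on your Elasticsearch and mongo-connector versions, and the source must be a replica set member, since mongo-connector tails the oplog.]

```shell
# Install mongo-connector plus an Elasticsearch doc manager
# (package name varies by ES version -- this is one plausible choice):
pip install mongo-connector elastic-doc-manager

# Tail the oplog of a local replica set member and stream changes
# into a local Elasticsearch node (hosts are assumptions):
mongo-connector -m localhost:27017 -t localhost:9200 -d elastic_doc_manager
```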
[12:57:12] <kurushiyama> cheeser Well. And that is about it, isn't it? Aside from DB related tasks such as data visualization and such?
[12:59:10] <cheeser> there are other cases, for sure.
[13:01:36] <kurushiyama> The problem I have with integration on the DB level is that you tie two systems to the same data structure, which makes changes there hard to impossible, to the detriment of both systems. Data exchange via defined interfaces, be it SOAP, JSON/REST, RPC or an ESB, keeps both systems much more maintainable, imho.
[13:04:19] <StephenLynx> rest is way too limited and arbitrary, and doesn't make much sense when you implement machine-to-machine communication.
[13:04:33] <StephenLynx> not to mention that most people don't understand what REST actually is.
[13:04:46] <StephenLynx> and they end up not actually implementing REST
[13:05:30] <kurushiyama> StephenLynx Hm, I think that heavily depends on the maturity level, I agree with that. Personally, I do not use it for different reasons, but I see why it may be appealing.
[13:10:23] <StephenLynx> im reading through that site I linked
[13:10:42] <StephenLynx> and all it does is to bind regular concepts to unnecessary naming.
[13:12:03] <StephenLynx> any of that is just common sense to anyone that knows how to perform HTTP requests.
[13:12:16] <kurushiyama> Resource traversal, for example. And a standard is not necessarily a bad thing. There is a reason for RFCs, for example. Conceptually, there is no difference between [ a:1, b: {x,y,z} ] and {a:1, b:[x,y,z]} ;)
[13:17:57] <StephenLynx> "oh, no, these are not http requests done through browser javascript, its Ajax™"
[13:18:31] <StephenLynx> "you are not just consuming plain text from a web back-end, its Rest™"
[13:19:09] <kurushiyama> Well, enough of playing devil's advocate. I have walked the whole way from RMM0 to HATEOAS, and there is a good reason why I use RPC with protobufs ;)
[13:19:26] <StephenLynx> I just design a json based RPC
[13:22:18] <StephenLynx> what does the rpc package do?
[13:23:36] <StephenLynx> it seems to be just an abstraction for a duplex TCP based communication.
[13:23:46] <StephenLynx> don't you already have a TCP library?
[13:25:06] <kurushiyama> Following your logic, I could say that there are fprintf and various syscalls, so why bother to have a TCP lib (assuming a UNIX system)?
[13:25:19] <StephenLynx> because TCP is a standard.
[13:25:48] <StephenLynx> I don't think there is any standard for this RPC implementation.
[13:26:08] <StephenLynx> it's not about high vs. low level, it's about standards.
[13:26:23] <StephenLynx> it makes sense to have a HTTP lib, a TCP lib, a json lib.
[13:26:39] <kurushiyama> StephenLynx Depends on how you define standard, no?
[13:26:40] <StephenLynx> not an abstraction on top of one of these on the stdlib.
[13:27:34] <StephenLynx> local conventions are not global standards.
[13:30:17] <kurushiyama> Which brings us back to following your argument: If only global standards should be provided with a library, everything but a syscall is basically a local convention. Let's expand on that though and let us say everything defined by ISO or similar organizations is worth a lib. There would be nothing but SQL driver libs, for example, when it comes to persistence.
[13:30:48] <StephenLynx> I never said only global standards should be on it. implementations that make sense to the scope of the language are ok too.
[13:30:58] <StephenLynx> this RPC doesn't make sense on either.
[13:35:03] <kurushiyama> Well, I do not personally use it, but it is an RPC based on JSON. Which is provided for some reason or other. Taking for granted that the stdlib is supposed to provide different means of communication, it did make sense to the core developers, at least two of which I can only call titans in our industry. However, I do not agree with the placement in the stdlib. I agree with you that the stdlib should be as narrow as possible. However,
[13:35:04] <kurushiyama> it even contains a template language, so I assume there is a broader definition of what a stdlib should do by the core devs.
[13:35:41] <StephenLynx> and this is why I don't use go.
[13:36:50] <cheeser> go isn't bad as a language. the tooling still sucks, though.
[13:39:40] <kurushiyama> Hm, the tooling itself isn't that bad. Imho. The ecosystem is much less developed than in other languages, which often enough leaves me with the feeling of (re-)inventing the wheel. However, now I have written what I was missing, and for my use cases, it is awesome.
[14:09:00] <CaptTofu> who here uses mongodb in conjunction with sphinx search engine?
[17:01:49] <spellett> hey guys. does anyone know if there is any documentation online discussing how the aggregation framework works behind the scenes?
[17:03:24] <kurushiyama> spellett What do you exactly want to know?
[17:05:55] <spellett> i'm not really sure if there was anything specific that i was looking for. i've just been writing a lot of aggregations lately and i was hoping to get a better understanding of what was going on.
[17:07:32] <kurushiyama> spellett Aside from the core docs https://docs.mongodb.org/manual/core/aggregation-pipeline/ there is not too much.
[17:08:01] <spellett> that's what i figured. thanks for verifying
[17:11:16] <kurushiyama> There are some intricacies on sharded clusters, but they are documented below that as well.
[17:14:22] <mordof> i'm trying to work with mongoexport -q to pass in a query.. but i can't find how to query by date in that format, since it's json and i can't use an actual Date object. anyone able to help or point me in the right direction?
[17:15:42] <kurushiyama> mordof can I see a sample doc and your actual query?
[17:28:24] <mordof> kurushiyama: a bunch of errors kept complaining about it not being valid JSON, so i didn't think that would work since it's technically not valid JSON either
[17:28:43] <kurushiyama> mordof I tested that on my mongo
[17:28:53] <kurushiyama> mongodump -d test -c camp -q '{date:{$gte:ISODate("2014-11-22T00:00:00Z")}}'
[17:29:57] <mordof> kurushiyama: right - the errors from previous query failures said JSON though. it's just a disconnect between their error messages and my understanding of what they're asking for
[17:30:33] <mordof> moot point; it works, i was just confused :) thanks for the help
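[editor's note: the disconnect mordof hit is that mongodump's -q accepts shell-style helpers like ISODate(), while mongoexport of the same era parses -q as (strict) extended JSON, where a date is written as $date. A sketch using the db/collection/field names from kurushiyama's example:]

```shell
# mongodump accepts mongo-shell syntax in -q:
mongodump -d test -c camp -q '{date:{$gte:ISODate("2014-11-22T00:00:00Z")}}'

# mongoexport is stricter about JSON; the same date filter goes in
# as extended JSON instead of the ISODate() helper:
mongoexport -d test -c camp \
  -q '{"date": {"$gte": {"$date": "2014-11-22T00:00:00Z"}}}' \
  -o camp.json
```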
[19:20:09] <luizdepra> is there a way to group documents and sum a field with accumulation? i mean, imagine i have documents with 'date' and 'debt' and i want to group them by day and sum the debts cumulatively.
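[editor's note: luizdepra's question goes unanswered in the log. One hedged sketch, assuming a hypothetical collection "bills" with Date field "date" and numeric field "debt": group per day server-side, then — since the aggregation framework of that era has no running-total accumulator — compute the cumulative sum client-side.]

```javascript
// Pipeline to run in the mongo shell (collection/field names assumed):
//
//   db.bills.aggregate([
//     { $group: {
//         _id: { $dateToString: { format: "%Y-%m-%d", date: "$date" } },
//         dayTotal: { $sum: "$debt" }
//     } },
//     { $sort: { _id: 1 } }
//   ])
//
// Client-side running total over the sorted daily buckets:
function accumulate(dailyTotals) {
  var running = 0;
  return dailyTotals.map(function (d) {
    running += d.dayTotal;
    return { day: d._id, dayTotal: d.dayTotal, runningTotal: running };
  });
}

var out = accumulate([
  { _id: "2016-04-01", dayTotal: 10 },
  { _id: "2016-04-02", dayTotal: 5 },
  { _id: "2016-04-03", dayTotal: 7 }
]);
console.log(out[2].runningTotal); // 22
```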
[19:33:55] <crazyphil> ok - just need to confirm something about sharding and replica sets, if I have 3 replica sets running, each with 3 servers, do I only need to sh.addShard the first member of a replica set, or do I need to add all 3?
[19:35:11] <crazyphil> nevermind, just answered my own question
[19:48:16] <kurushiyama> crazyphil Which is probably the best way of getting things answered ;)
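[editor's note: crazyphil's self-answered question, spelled out — sh.addShard() takes a "replSetName/host:port" seed string, and naming a single member per replica set is enough; mongos discovers the remaining members itself. All hostnames below are hypothetical:]

```shell
# From a mongo shell connected to a mongos (hostnames are assumptions).
# One seed member per replica set suffices:
mongo --host mongos.example.net --eval '
  sh.addShard("rs0/rs0-a.example.net:27017");
  sh.addShard("rs1/rs1-a.example.net:27017");
  sh.addShard("rs2/rs2-a.example.net:27017");
'
```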
[20:50:31] <varunkvv> hey! We want to copy the contents of a database to a new database within the same server. Seems like db.copyDatabase is the way to go. The docs mention that it does not lock the target database, but does it lock up the source database at all?
[21:07:33] <varunkvv> alright just going ahead with this - #whatcouldpossiblygowrong
[21:11:55] <kurushiyama> varunkvv Personally, I'd do this during a downtime. Or do an fsync lock.
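[editor's note: the local-copy case in shell form, with hypothetical database names. When db.copyDatabase() is given no host argument it copies within the same mongod. The copy is not a point-in-time snapshot — writes arriving mid-copy may or may not land in the target, which is why kurushiyama suggests a downtime window:]

```shell
# On the server itself (database names are assumptions):
mongo --eval 'db.copyDatabase("sourcedb", "targetdb")'
```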