PMXBOT Log file Viewer


#mongodb logs for Friday the 8th of April, 2016

[00:07:48] <joannac> Freman: https://www.mongodb.com/contact and someone will reach out to you with pricing details for your deployment
[00:44:11] <crazyphil> ok, if I've built 3 sets of replSet servers, each with their own replica name, how would I shard across all 3 sets? The documentation is a bit confusing to me for some reason
[00:50:10] <crazyphil> nevermind, some digging on google sorted it out
[08:13:55] <tangorri> hi
[08:21:57] <tangorri> how can I find query with criteria on "child" nodes of my collection ?
[08:22:30] <tangorri> find({child.subchild.uuid = 'cccc' }) ?
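(The shape tangorri is after: in the mongo shell a nested field is matched with dot notation in a quoted key, not with `=`. A minimal sketch, assuming the field path from the question; the collection name is invented:)

```javascript
// The filter is a plain document; a dotted, quoted key reaches into
// embedded documents. The field path is taken from the question above.
const filter = { "child.subchild.uuid": "cccc" };
// In the mongo shell this would be: db.coll.find(filter)
console.log(Object.keys(filter)[0]); // prints child.subchild.uuid
```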
[09:21:18] <gaspaio> hello all. Anybody know of a package source for mongodb 3.2 in Debian jessie ?
[10:23:23] <tangorri> can someone help me ? I try to do my first mapReduce but I got exception: 'out' has to be a string or an object
[10:26:08] <tangorri> https://gist.github.com/tangorri/dd289ed50c3c84382d4e13cf45fd372c
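(That exception points at the options document: mapReduce's `out` must be either a collection name (string) or a placement object such as `{ inline: 1 }`. A hedged sketch of the constraint the error enforces; the option value and collection name are invented:)

```javascript
// "'out' has to be a string or an object" is exactly this check on the
// options passed to mapReduce. The name "mr_results" is illustrative.
const mapReduceOptions = { out: "mr_results" }; // or e.g. { out: { inline: 1 } }
// mongo shell: db.coll.mapReduce(mapFn, reduceFn, mapReduceOptions)
const outType = typeof mapReduceOptions.out;
console.log(outType === "string" || outType === "object"); // prints true
```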
[11:48:37] <ren0v0> Hi, does "copyDatabase" not support a "sync" function? it requires the DB not to exist already?
[12:26:18] <cheeser> ren0v0: or will overwrite it, possibly. might just error out if the target exists. i don't remember offhand.
[12:35:13] <ren0v0> yea it errors out
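(A hedged sketch of the drop-then-copy workaround, since copyDatabase has no incremental mode and errors out when the target exists; database names here are invented:)

```javascript
// mongo shell: there is no built-in "sync"; re-copying means dropping
// the target first. Names are illustrative.
db.getSiblingDB("target_db").dropDatabase()
db.copyDatabase("source_db", "target_db") // errors if target_db still exists
```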
[12:35:25] <ren0v0> is there any ability to "sync" i guess is what i'm asking
[12:35:35] <ren0v0> without having to drop the whole thing and copy the whole thing
[12:37:04] <cheeser> use a replica set?
[12:44:11] <kurushiyama> Sounds reasonable for a sync
[12:48:18] <cheeser> there's mongo-connector you might use to cobble something together...
[12:51:22] <kurushiyama> Integration on DB level? Thought we were past that ;)
[12:53:09] <cheeser> nah. still lots of usefulness there.
[12:53:32] <cheeser> e.g., streaming out to elasticsearch
[12:57:12] <kurushiyama> cheeser Well. And that is about it, isn't it? Aside from DB related tasks such as data visualization and such?
[12:59:10] <cheeser> there are other cases, for sure.
[13:01:36] <kurushiyama> The problem I have with integration on DB level is that you tie two systems onto the same data structure, which makes changes there hard to impossible, for the worse of both systems. Data exchange via defined interfaces be it SOAP, JSONREST, RPC or an ESB keeps both systems much more maintainable, imho.
[13:02:32] <StephenLynx> >JSONREST
[13:02:35] <StephenLynx> just JSON
[13:02:43] <StephenLynx> rest is pretty much bollocks
[13:03:39] <kurushiyama> StephenLynx Can you explain in more detail?
[13:03:42] <StephenLynx> who's Roy?
[13:04:19] <StephenLynx> rest is way too limited, arbitrary and doesn't make much sense when you implement machine-machine communication.
[13:04:33] <StephenLynx> not to mention that most people don't actually understand what REST actually is.
[13:04:46] <StephenLynx> and they end up not actually implementing REST
[13:05:30] <kurushiyama> StephenLynx Hm, I think that heavily depends on the maturity level, I agree with that. Personally, I do not use it for different reasons, but I see why it may be appealing.
[13:06:29] <StephenLynx> for example
[13:06:36] <StephenLynx> from this site:
[13:06:46] <StephenLynx> http://www.acme.com/phonebook/UserDetails/12345
[13:06:50] <StephenLynx> this is just a god damn URL
[13:07:04] <StephenLynx> there is absolutely nothing to it but HTTP
[13:07:24] <kurushiyama> StephenLynx Aye, pointing to a resource.
[13:07:34] <StephenLynx> then people that have absolutely no idea how to use HTTP start calling it "REST" and
[13:07:48] <StephenLynx> rolling a bunch of pointless concepts into it.
[13:08:28] <StephenLynx> >It relies on a stateless, client-server, cacheable
[13:08:52] <StephenLynx> so you have pages with the data and use http codes and headers to control the cache.
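(The plain-HTTP cache control being alluded to can be sketched without any REST vocabulary: an ETag plus a 304 reply. The function name and values below are invented for illustration:)

```javascript
// Conditional caching with nothing but status codes and headers: the
// client echoes the ETag it has via If-None-Match; a match means its
// cached copy is still fresh.
function respond(resourceEtag, ifNoneMatch) {
  if (ifNoneMatch === resourceEtag) {
    return { status: 304, body: null }; // client cache is fresh
  }
  return { status: 200, body: "payload", headers: { ETag: resourceEtag } };
}
console.log(respond('"v1"', '"v1"').status); // prints 304
console.log(respond('"v2"', '"v1"').status); // prints 200
```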
[13:09:02] <cheeser> roy fielding. the author of the REST "spec"
[13:09:15] <StephenLynx> much of what people call "REST" is just new names on top of existing stuff.
[13:09:26] <StephenLynx> it was nothing but a fad that already died.
[13:09:29] <kurushiyama> Hm, I do not think it is as easy as that. RMM Lv3 holds pretty interesting concepts.
[13:09:45] <StephenLynx> such as
[13:10:23] <StephenLynx> im reading through that site I linked
[13:10:42] <StephenLynx> and all it does is bind regular concepts to unnecessary naming.
[13:12:03] <StephenLynx> any of that is just common sense to anyone that knows how to perform HTTP requests.
[13:12:16] <kurushiyama> Resource traversal, for example. And a standard is not necessarily a bad thing. There is a reason for RFCs, for example. Conceptually, there is no difference between [ a:1, b: {x,y,z} ] and {a:1, b:[x,y,z]} ;)
[13:12:30] <StephenLynx> resource traversal?
[13:12:39] <StephenLynx> like a page that indicates where other pages are?
[13:13:11] <kurushiyama> StephenLynx Aye
[13:13:18] <kurushiyama> RMM3
[13:13:18] <StephenLynx> /list returns ['a','b'] so you know you have /content/a and /content/b?
[13:13:22] <StephenLynx> that is just common sense.
[13:13:45] <StephenLynx> rest is just useless bloat.
[13:14:03] <kurushiyama> StephenLynx Much of CS is. I'd say "See way above", but that'd be a cheap hit. But you get the picture.
[13:14:20] <StephenLynx> this is not CS.
[13:14:28] <kurushiyama> I was referring to common sense, of course.
[13:14:40] <StephenLynx> >of course
[13:14:44] <kurushiyama> StephenLynx Well, insert the name you wish ;)
[13:15:27] <StephenLynx> the first mistake people make is thinking HTTP is special.
[13:15:43] <StephenLynx> IO is IO
[13:16:10] <StephenLynx> this is part of why web developers are so incompetent and inadequate.
[13:16:21] <StephenLynx> they keep jerking off around useless and redundant concepts
[13:16:26] <StephenLynx> rest, ajax
[13:17:57] <StephenLynx> "oh, no, these are not http requests done through browser javascript, its Ajax™"
[13:18:31] <StephenLynx> "you are not just consuming plain text from a web back-end, its Rest™"
[13:19:09] <kurushiyama> Well, enough of playing devils advocate. I have walked the whole way from RMM0 to HATEOAS, and there is a good reason why I use RPC with protobufs ;)
[13:19:26] <StephenLynx> I just design a json based RPC
[13:19:30] <StephenLynx> and that's it.
[13:20:09] <kurushiyama> StephenLynx You might want to get some inspiration here: https://golang.org/pkg/net/rpc/jsonrpc/
[13:20:35] <StephenLynx> eww
[13:20:44] <StephenLynx> that's pretty much a framework.
[13:21:13] <kurushiyama> Go's stdlib
[13:21:20] <kurushiyama> StephenLynx ^
[13:21:50] <StephenLynx> its shiiiiiiiiiiiiiit
[13:22:14] <StephenLynx> >for the rpc package.
[13:22:18] <StephenLynx> what does the rpc package do?
[13:23:36] <StephenLynx> it seems to be just an abstraction for a duplex TCP based communication.
[13:23:46] <StephenLynx> don't you already have a TCP library?
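(For reference, the wire format that package speaks is small: one JSON object per call over a duplex connection, with an id tying responses back to requests. The method name and values below are invented for illustration:)

```javascript
// JSON-RPC 1.0 style exchange, the shape Go's net/rpc/jsonrpc uses.
// "Arith.Add" and the params are hypothetical.
const request = { method: "Arith.Add", params: [{ a: 2, b: 3 }], id: 1 };
const response = { id: 1, result: 5, error: null };
// The id is what matches a response to its request on the shared connection:
console.log(response.id === request.id); // prints true
```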
[13:25:06] <kurushiyama> Following your logic, I could say that there is fprintf and various syscalls, why bother to have a tcp lib (Assuming a UNIX system)?
[13:25:19] <StephenLynx> because TCP is a standard.
[13:25:48] <StephenLynx> I don't think there is any standard for this RPC implementation.
[13:26:08] <StephenLynx> it's not about high-low level, it's about standards.
[13:26:23] <StephenLynx> it makes sense to have a HTTP lib, a TCP lib, a json lib.
[13:26:39] <kurushiyama> StephenLynx Depends on how you define standard, no?
[13:26:40] <StephenLynx> not an abstraction on top of one of these on the stdlib.
[13:26:42] <StephenLynx> no
[13:27:03] <StephenLynx> it's not about how you define it, it's about an organization such as the W3C or ISO defining it.
[13:27:10] <kurushiyama> Well, so basically each programming language defines its own standard, no?
[13:27:14] <StephenLynx> no.
[13:27:34] <StephenLynx> local conventions are not global standards.
[13:30:17] <kurushiyama> Which brings us back to following your argument: If only global standards should be provided with a library, everything but a syscall is basically a local convention. Let's expand on that though and let us say everything defined by ISO or similar organizations is worth a lib. There would be nothing but SQL driver libs, for example, when it comes to persistence.
[13:30:48] <StephenLynx> I never said only global standards should be on it. implementations that make sense to the scope of the language are ok too.
[13:30:58] <StephenLynx> this RPC doesn't make sense on either.
[13:35:03] <kurushiyama> Well, I do not personally use it, but it is a RPC based on JSON. Which is provided for some reason or the other. Taking for granted that the stdlib is supposed to provide different means of communication, it did make sense to the core developers, at least two of which I can only call titans in our industry. However, I do not agree with the placement in the stdlib. I agree with you that the stdlib should be as narrow as possible. However,
[13:35:04] <kurushiyama> it even contains a template language, so I assume there is a broader definition of what a stdlib should do by the core devs.
[13:35:41] <StephenLynx> and this is why I don't use go.
[13:36:50] <cheeser> go isn't bad as a language. the tooling still sucks, though.
[13:39:40] <kurushiyama> Hm, the tooling itself isn't that bad. Imho. The ecosystem is much less developed than in other languages, which often enough leaves me with the feeling of (re-)inventing the wheel. However, now I have written what I was missing, and for my use cases, it is awesome.
[14:08:46] <CaptTofu> hi all!
[14:09:00] <CaptTofu> who here uses mongodb in conjunction with sphinx search engine?
[17:01:49] <spellett> hey guys. does anyone know if there is any documentation online discussing how the aggregation framework works behind the scenes?
[17:03:24] <kurushiyama> spellett What do you exactly want to know?
[17:05:55] <spellett> i'm not really sure if there was anything specific that i was looking for. i've just been writing a lot of aggregations lately and i was hoping to get a better understanding of what was going on.
[17:07:32] <kurushiyama> spellett Aside from the core docs https://docs.mongodb.org/manual/core/aggregation-pipeline/ there is not too much.
[17:08:01] <spellett> that's what i figured. thanks for verifying
[17:11:16] <kurushiyama> There are some intricacies on sharded clusters, but they are documented below that as well.
[17:14:22] <mordof> i'm trying to work with mongoexport -q to pass in a query.. but i can't find how to query by date in that format, since it's json and i can't use an actual Date object. anyone able to help or point me in the right direction?
[17:15:42] <kurushiyama> mordof can I see a sample doc and your actual query?
[17:17:46] <mordof> kurushiyama: http://hastebin.com/yitesamimo.js
[17:18:11] <mordof> (there are items on both sides of the date in my query)
[17:21:29] <kurushiyama> mordof Simply add ISODate
[17:21:31] <kurushiyama> mordof http://hastebin.com/uyixeteceq.pas
[17:22:13] <kurushiyama> mordof The query gets interpreted on the target, so this is valid.
[17:22:44] <kurushiyama> mordof Use string concat/templating, if nothing else works ;)
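(If a tool insists on strict JSON, the extended-JSON spelling of a date is an alternative worth trying: it keeps the query valid JSON with no ISODate() constructor. Untested here; collection and field names are illustrative:)

```shell
# Strict extended JSON spells a date as {"$date": "<ISO-8601>"}:
mongoexport -d test -c camp \
  -q '{ "date": { "$gte": { "$date": "2014-11-22T00:00:00Z" } } }'
```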
[17:25:52] <mordof> kurushiyama: i'll give that a try
[17:26:02] <mordof> thanks
[17:27:35] <mordof> awesome that works ^_^
[17:28:24] <mordof> kurushiyama: a bunch of errors kept complaining about it not being valid JSON, so i didn't think that would work since it's technically not valid JSON either
[17:28:43] <kurushiyama> mordof I tested that on my mongo
[17:28:53] <kurushiyama> mongodump -d test -c camp -q '{date:{$gte:ISODate("2014-11-22T00:00:00Z")}}'
[17:28:56] <mordof> it works on mine too
[17:28:59] <mordof> it's just confusing
[17:29:21] <kurushiyama> mordof BSON != JSON
[17:29:57] <mordof> kurushiyama: right - the errors from previous query failures said JSON though. it's just a disconnect between their error messages and my understanding of what they're asking for
[17:30:33] <mordof> moot point; it works, i was just confused :) thanks for the help
[17:31:55] <kurushiyama> mordof Does it work now?
[17:32:09] <mordof> yep ^_^
[17:32:19] <kurushiyama> mordof Mission accomplished ;)
[19:13:30] <luizdepra> hello
[19:20:09] <luizdepra> is there a way to group documents and sum a field with accumulation? i mean, imagine i have documents with 'date' and 'debt' and i want to group it by day and sum the debts accumulating.
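(One reading of the question, sketched under assumed field names `date` and `debt`: $group per day, $sort, then accumulate the running total client-side, since the 3.2 pipeline has no running-sum stage. The per-day stage is shown as data and the client-side fold is simulated in plain JS:)

```javascript
// Per-day grouping as an aggregation pipeline (field names assumed):
const pipeline = [
  { $group: { _id: { $dateToString: { format: "%Y-%m-%d", date: "$date" } },
              dayDebt: { $sum: "$debt" } } },
  { $sort: { _id: 1 } },
];
// Accumulation over the (here simulated) per-day output happens client-side:
const perDay = [{ _id: "2016-04-07", dayDebt: 10 },
                { _id: "2016-04-08", dayDebt: 5 }];
let running = 0;
const withRunning = perDay.map(d => ({ ...d, running: running += d.dayDebt }));
console.log(withRunning.map(d => d.running)); // prints [ 10, 15 ]
```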
[19:33:55] <crazyphil> ok - just need to confirm something about sharding and replica sets, if I have 3 replica sets running, each with 3 servers, do I only need to sh.addShard the first member of a replica set, or do I need to add all 3?
[19:35:11] <crazyphil> nevermind, just answered my own question
[19:48:16] <kurushiyama> crazyphil Which is probably the best way of getting things answered ;)
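(For the record, the self-answered part: sh.addShard is given the replica set name plus a seed member, once per set; naming one member is enough and the rest are discovered from it. A sketch with invented hostnames:)

```javascript
// mongo shell, connected to a mongos. One addShard per replica set,
// not per member. Hostnames are illustrative.
sh.addShard("rs0/node0.example.net:27017")
sh.addShard("rs1/node3.example.net:27017")
sh.addShard("rs2/node6.example.net:27017")
```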
[20:50:31] <varunkvv> hey! We want to copy the contents of a database to a new database within the same server. Seems like db.copyDatabase is the way to go. The docs mention that it does not lock the target database, but does it lock up the source database at all?
[21:07:33] <varunkvv> alright just going ahead with this - #whatcouldpossiblygowrong
[21:11:55] <kurushiyama> varunkvv Personally, I'd do this during a downtime. Or do an fsync lock.
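(The fsync-lock route mentioned, sketched in the mongo shell and run against the source; untested here:)

```javascript
// Flush pending writes and block new ones for the duration of the copy,
// then release.
db.fsyncLock()
// ... perform the copy ...
db.fsyncUnlock()
```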