[02:27:09] <amyloula> has anyone had these error messages trying to run mongo on a linux ec2 instance? No package mongodb-org-server available. No package mongodb-org-tools available. No package mongodb-org-shell available. Error: Nothing to do
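(The usual cause of "No package ... available" here is that the mongodb-org packages are not in the default Amazon Linux repositories, so yum has nothing to install. A sketch of the fix, assuming Amazon Linux and MongoDB 3.2 — adjust version and paths for your setup:

    # /etc/yum.repos.d/mongodb-org-3.2.repo
    [mongodb-org-3.2]
    name=MongoDB Repository
    baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.2/x86_64/
    gpgcheck=1
    enabled=1
    gpgkey=https://www.mongodb.org/static/pgp/server-3.2.asc

    # then install the metapackage, which pulls in server, shell and tools:
    sudo yum install -y mongodb-org
)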
[04:54:01] <tdelam> Hi, I have an array with some duplicate elements, e.g.: ['member_1','member_2','member_15','member_1','member_1','member_2']. I noticed in the docs that $in matches documents where at least ONE element equals a value in the array, so naturally my output will be only 3 documents. What can I use to get a match for every element of the array, even the duplicates? I will be needing all of the output.
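(One way around this, sketched in the mongo shell — the "members" collection and "name" field are assumptions here: fetch the distinct matches once with $in, then map the original, duplicated list back onto the fetched documents in application code:

    var names = ['member_1','member_2','member_15','member_1','member_1','member_2'];
    var byName = {};
    db.members.find({ name: { $in: names } }).forEach(function (d) {
      byName[d.name] = d;            // one fetch per distinct value
    });
    var results = names.map(function (n) {
      return byName[n];              // one entry per input element, duplicates included
    });
)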
[07:23:31] <KekSi> i'm having trouble understanding db.collection.copyTo(newCollection) - it tells me the number of objects in the current collection
[07:23:42] <KekSi> but doesn't appear to copy them over to newCollection
[07:25:09] <KekSi> according to the documentation "copyTo() returns the number of documents copied. If the copy fails, it throws an exception." but none of the _ids of the source collection show up in newCollection
[07:27:07] <KekSi> neither on 2.4 nor on 3.0 that is
[08:05:28] <kurushiyama> KekSi: wfm. Strange. can you pastebin what exactly you entered?
[08:22:15] <KekSi> kurushiyama: db.myCollectionWithRecalculations.copyTo(db.myCollection) then i get no errors and after a while (it's several GBs) it showed how many objects were copied over
[08:22:30] <KekSi> but checking with db.myCollection.count() before and after it stayed the same
[08:23:16] <KekSi> and db.myCollection.find({ "_id": "an id that exists in myCollectionWithRecalculations" }) yields null
[08:24:06] <KekSi> so i did a db.myCollectionWithRecalculations.find().forEach(function(x){ db.myCollection.insert(x);}) and that worked (obviously)
[08:24:32] <kurushiyama> KekSi: Ok. Got that. Lemme check
[08:25:41] <KekSi> the command appears to work but not do what it's supposed to -- i mean trying with a string ..copyTo("db.myCollection") fails (as it should) and reports an exception
[08:26:51] <KekSi> ...copyTo("databasename.myCollection"), i.e. the full namespace that calling db.myCollection prints, behaves just like the above
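(For reference, the documented signature takes only the target collection's name as a plain string, not a collection object and not a db-qualified namespace:

    db.myCollectionWithRecalculations.copyTo("myCollection")
)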
[08:27:47] <KekSi> since our prod & staging are somehow still on 2.4 there's no "initializeUnorderedBulkOp()" yet :x
[08:28:29] <KekSi> i did that on staging btw since copyTo shouldn't be used in prod because it blocks
[08:28:40] <kurushiyama> KekSi: Off the top of my head, I am not too sure whether you can cross DB borders. Have you checked that?
[08:33:53] <KekSi> unless you really want to know what's going on don't bother
[08:34:43] <KekSi> i could probably strace it and find out which syscalls are going wrong but i cba and doing it with a function worked well enough since there's no concurrent access on the source collection
[08:59:51] <kurushiyama> KekSi: Sorry, this is going to take a while. In general, I strongly suggest updating to 2.6 and then to 3.0, since even 2.6 will be at EOL by the end of this year. Setting aside the huge performance improvements for now. ;)
[09:22:31] <kurushiyama> KekSi: Oh, ofc, with a slightly higher version, you could use the aggregation pipeline's $out stage
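(A sketch of that route, available from 2.6 on — $out writes the pipeline's output to the named collection; note that it replaces the target rather than merging into it:

    db.myCollectionWithRecalculations.aggregate([
      { $match: {} },              // pass every document through
      { $out: "myCollection" }     // replaces myCollection with the results
    ])
)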
[09:38:52] <KekSi> yeah i personally run 3.2 in docker containers here but prod and staging is still in pre-docker state
[09:39:00] <KekSi> and upgrading there is annoying af
[09:47:12] <kurushiyama> KekSi: Is it? With a replica set, it is really not.
[10:45:13] <scruz> assume i have documents which have a field called test_results, which is like this: {test_1: 'passed', test_2: 'failed', test_3: 'failed', test_4: 'passed', …}. how do i query such that i can get all documents which failed *any* test?
[10:46:24] <scruz> i’m not so much interested in which test was failed as much as that there were any that failed in the first place.
[11:00:20] <philipwhiuk> If the name is always test_X why not just use an array and then the query is straightforward
[11:04:00] <scruz> philipwhiuk: that was by example, but i think $elemMatch might help
[11:15:13] <scruz> $elemMatch is useful, but won’t work for my case.
[11:15:55] <kurushiyama> scruz: maybe you should rather have a doc per test?
[11:16:32] <kurushiyama> scruz: In which case querying failed ones would be extremely easy.
[11:18:52] <kurushiyama> scruz: Having values as keys, which is your current approach.
[11:19:34] <philipwhiuk> If you have a fixed list of keys you could just build a very long OR-statement
[11:19:56] <scruz> kurushiyama: if i understand you correctly, it’s better to have immutable values as keys?
[11:20:16] <scruz> philipwhiuk: that’s likely what i’m going to do. thanks for the suggestion!
[11:20:36] <philipwhiuk> (i'm not sure what the perf of that is....)
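(A sketch of that long $or, built programmatically in the mongo shell — the "results" collection and the fixed key list are assumptions here:

    var tests = ['test_1', 'test_2', 'test_3', 'test_4'];
    var clauses = tests.map(function (t) {
      var c = {};
      c['test_results.' + t] = 'failed';   // one clause per known test key
      return c;
    });
    db.results.find({ $or: clauses });     // documents that failed any test
)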
[11:21:33] <kurushiyama> scruz: As per your example, it would be {series:"Foo", tests:[{name:"test1", passed:false},{name:"test2",passed:true}]}, for example
[11:22:20] <scruz> ‘name’ being an immutable key in this case.
[11:22:26] <kurushiyama> scruz: and the query could be either a relatively straightforward aggregation or an $elemMatch in projection, depending on your use case.
[11:22:58] <scruz> kurushiyama: thanks. i’ll watch for using user-defined data as keys in the future.
[11:23:35] <scruz> well, as much as i can, anyway.
[11:24:26] <kurushiyama> scruz: Good way to get you into trouble, as you notice. btw, I'd have single documents like this, most likely: { series:"Foo", name:"test1", passed:true, date:someISODate}
[11:25:19] <kurushiyama> scruz: No risk of hitting the 16MB size limit ever, same functionality, easy to debug and optimize.
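(The corresponding queries against the two shapes, for comparison — collection names are made up:

    // embedded array: any failing element
    db.testRuns.find({ tests: { $elemMatch: { passed: false } } })
    // one document per test: a plain, easily indexed query
    db.tests.find({ passed: false })
)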
[11:25:20] <scruz> kurushiyama: perhaps you could write up some recommended best practices for using mongodb.
[11:25:55] <kurushiyama> scruz: If only I had the time... The only thing I got so far is: http://blog.mahlberg.io/blog/2015/11/05/data-modelling-for-mongodb/
[11:25:59] <scruz> probably seemed okay at the time
[11:26:56] <scruz> we have user-defined quality checks on a Template like so: {quality_checks: [{name: 'foo', expr: 'bar'}, …]}
[11:28:03] <scruz> and on our Response: {quality_checks: {foo: 'passed', foo_1: 'flagged', …}}
[11:28:27] <kurushiyama> scruz: Well, ofc I can not know your use cases. You have to decide yourself. But "values as keys" is a good way to get you into problems, I'd say 99.99999% of the time.
[11:29:06] <kurushiyama> scruz: And you can always convert your data to maintain your outside API.
[11:40:58] <Keksike> can I use the $or operator with a bulk find?
[11:41:08] <Keksike> I get an error when I try to do it in my clojure-program
[11:42:37] <Keksike> https://gist.github.com/Keksike/fa869b31ae855dc16cb380d282cc1526 this is my bulk-find query atm, but I also need to add a parameter so it will also find ones without a :channel
[11:43:51] <Keksike> that link broke https://gist.github.com/Keksike/2d1abe27eb9fc5f9098271604ce21e8a
[11:45:03] <Keksike> https://gist.github.com/Keksike/cfe59afb874777069d61f6b966a94659 maybe now it works
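(For reference, a bulk find with $or in the mongo shell looks like this — the "messages" collection, the "read" field, and the missing-channel interpretation are assumptions; the same query document should carry over to the Clojure driver:

    var channel = "some-channel";    // placeholder
    var bulk = db.messages.initializeUnorderedBulkOp();
    bulk.find({ $or: [ { channel: channel },
                       { channel: { $exists: false } } ] })
        .update({ $set: { read: true } });
    bulk.execute();
)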
[12:20:39] <StephenLynx> also, another piece of advice: disregard your native language and always use English in your code and model.
[12:24:06] <deniz946> StephenLynx, Okay, thank you for the advice. I have a very bad habit of mixing both languages in my code.. Spanish and English
[12:24:28] <deniz946> Like this: "$scope.articles = articulos;"
[12:24:38] <deniz946> :facepalm: i have to work on it
[13:04:30] <deniz946> I want to get all comments of a specific post; comments are subdocuments of the Article document. Should I first find the post by id and then return post.comments?
[13:04:43] <deniz946> this is my schema http://pastebin.com/knMR8zTs
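(A minimal sketch of that, assuming an "articles" collection matching the pasted schema — find the article by _id and project only its embedded comments:

    db.articles.findOne(
      { _id: someArticleId },      // someArticleId is a placeholder
      { comments: 1, _id: 0 }      // return only the comments array
    )
)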
[13:29:44] <Derick> syshape: what are you trying to do? Can't you do a $match first?
[13:32:31] <syshape> I want to $unwind $scores keeping only types homework and exam. Sure, I can do a $unwind then $match, but I thought it would be quicker using a $in
[13:33:47] <Derick> hmm, not sure whether you can do it this way
[13:33:53] <Derick> what do your documents look like?
[13:41:30] <Derick> syshape: i think you should go with the unwind
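(A sketch of the unwind-then-match pipeline, assuming documents with a "scores" array whose elements carry a "type" field:

    db.students.aggregate([
      { $unwind: "$scores" },
      { $match: { "scores.type": { $in: ["homework", "exam"] } } }
    ])
)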
[13:42:24] <Industrial> Hi! I'm trying to use https://github.com/mongodb-labs/mongo-connector with https://github.com/mongodb-labs/elastic2-doc-manager with ElasticSearch and Kibana in Docker and Docker Compose
[13:42:56] <Industrial> https://gist.github.com/Industrial/9ab69727eed58fabfabc978b001a0aef is my Dockerfile
[13:43:32] <syshape> ok Derick, thank you. I'm a real beginner, not sure what I'm doing
[13:44:00] <Industrial> I'm getting "mongodbconnector_1 | Logging to mongo-connector.log." then "elastickibana_mongodbconnector_1 exited with code 0" and everything stops :(
[14:06:07] <Ryzzan> let's say i have a doc like this: { _id: Number, someArray: [ { uniqueId: Number, anotherArray: [ values ] } ] }
[14:06:34] <Ryzzan> how would i update it, pushing new values to anotherArray for the element of someArray with a given uniqueId?
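(A sketch using the positional $ operator, which targets the someArray element matched by the query — collection name and values are made up:

    db.docs.update(
      { _id: 1, "someArray.uniqueId": 42 },
      { $push: { "someArray.$.anotherArray": "newValue" } }
    )
)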
[17:31:05] <bla> Any idea whether the deprecated .eval() will be replaced with something similar or just removed? I have a POC implementation of rollback for one of the operations we might need in our system and it looks fine.
[17:59:51] <yopp> cheeser, what happens if two requests are doing $set => { "a.b.foo": 1 } and $set => { "a.b.bar": 1 } at the same time when "a.b" doesn't exist?
[18:00:21] <yopp> I mean, is it guaranteed that at the end I'll get "a.b": {foo: 1, bar: 1}, or not? :)
[18:14:15] <kurushiyama> yopp: Basically, updates to the same doc are put in a queue, iirc. Since the two operations change different keys, the first one is applied, implicitly creating "a.b" and setting either "foo" or "bar"; then the next op is applied to the resulting "a.b".
[18:14:38] <kurushiyama> yopp: a bit simplified, but sufficient.
[18:16:59] <kurushiyama> Well, basically you have document-level locking, and document operations are guaranteed to be atomic, so it would probably be more precise to say that operation 1 is applied in full, holding a lock that blocks all other operations on that document, and after the lock is released, operation 2 is applied in full.
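(So the two competing updates, sketched against a hypothetical collection "c", end in the same state regardless of order:

    db.c.update({ _id: 1 }, { $set: { "a.b.foo": 1 } })
    db.c.update({ _id: 1 }, { $set: { "a.b.bar": 1 } })
    // each runs atomically under the document lock; whichever runs second
    // sees "a.b" already created, so the result is always:
    // { _id: 1, a: { b: { foo: 1, bar: 1 } } }
)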
[19:54:47] <trompstomp> Can anyone tell me how I may install Mongo 3.2 on a raspberry pi 2 (it is ARM 7). I understand the limitations of this install but it would be perfect for the project I am working on.
[23:48:34] <poz2k4444> hi guys, I'm trying to sync a mongo collection with elasticsearch using mongo-connector, but even though the collection has lots of documents, I'm not able to do the sync; it keeps saying `OplogThread: Last entry is the one we already processed. Up to date. Sleeping.` how can I solve this?