PMXBOT Log file Viewer

#mongodb logs for Tuesday the 26th of April, 2016

[01:44:11] <amyloula> hi there, has anyone tried setting up mongodb on an aws ec2 instance recently? I am not able to find any packages on yum search
[01:50:10] <brad1121> MongoError: connect ECONNREFUSED -> opened firewall, commented out bindIp, restarted VM and mongod. Ubuntu 14.04 LTS
[01:50:27] <brad1121> At a loss
[02:27:09] <amyloula> has anyone had these error messages trying to run mongo on a linux ec2 instance? No package mongodb-org-server available. No package mongodb-org-tools available. No package mongodb-org-shell available. Error: Nothing to do
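The "No package ... available" errors amyloula reports usually mean the MongoDB yum repository was never added; the mongodb-org packages are not in the default Amazon Linux repos. A sketch of the repo file per the install documentation of that era (the baseurl and key URL below are assumptions based on the 3.2-series docs and should be checked against the current install guide):

```ini
# /etc/yum.repos.d/mongodb-org-3.2.repo -- sketch, verify URLs against
# the official install docs for your distribution
[mongodb-org-3.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.2.asc
```

After saving the file, `sudo yum install -y mongodb-org` should find the packages.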
[04:54:01] <tdelam> Hi, I have an array with some duplicate elements, e.g: ['member_1','member_2','member_15','member_1','member_1', 'member_2'] I noticed in the docs $in uses at least ONE element that matches a value in the array so naturally my output will be only 3. What can I use to match all in the array, even if they're duplicates. I will be needing all the output.
[04:54:29] <tdelam> db.members.find({_id: {$in: ['member_1','member_2','member_15','member_1','member_1', 'member_2'] }}) => gives only 3
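What tdelam sees is expected: $in matches documents, and each matching document is returned once, so duplicate ids in the query array cannot multiply the results. The usual workaround is to fetch the unique matches once and re-expand client-side. A minimal sketch (the member documents and their fields are made up for illustration; in a live shell the `docs` array would come from `db.members.find({_id: {$in: ids}}).toArray()`):

```javascript
// The original list, duplicates included.
const ids = ["member_1", "member_2", "member_15", "member_1", "member_1", "member_2"];

// Simulated result of the $in query -- three unique documents.
const docs = [
  { _id: "member_1", name: "Alice" },
  { _id: "member_2", name: "Bob" },
  { _id: "member_15", name: "Carol" },
];

// Index the unique results by _id, then map the original
// (duplicate-bearing) list over the index.
const byId = {};
docs.forEach(function (d) { byId[d._id] = d; });
const expanded = ids.map(function (id) { return byId[id]; });
// expanded now has 6 entries, one per requested id, repeats included.
```

This keeps the database round-trip small (3 documents) while giving the caller all 6 entries in the original order.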
[07:23:31] <KekSi> i'm having trouble understanding db.collection.copyTo(newCollection) - it tells me the number of objects in the current collection
[07:23:42] <KekSi> but doesn't appear to copy them over to newCollection
[07:25:09] <KekSi> according to the documentation "copyTo() returns the number of documents copied. If the copy fails, it throws an exception." but none of the _id's of collection show up in newCollection
[07:27:07] <KekSi> neither on 2.4 nor on 3.0 that is
[08:05:28] <kurushiyama> KekSi: wfm. Strange. can you pastebin what exactly you entered?
[08:22:15] <KekSi> kurushiyama: db.myCollectionWithRecalculations.copyTo(db.myCollection) then i get no errors and after a while (its several GBs) it showed how many objects were copied over
[08:22:30] <KekSi> but checking with db.myCollection.count() before and after it stayed the same
[08:23:16] <KekSi> and db.myCollection.find({ "_id": "an id that exists in myCollectionWithRecalculations"}) yields null
[08:24:06] <KekSi> so i did a db.myCollectionWithRecalculations.find().forEach(function(x){ db.myCollection.insert(x);}) and that worked (obviously)
[08:24:32] <kurushiyama> KekSi: Ok. Got that. Lemme check
[08:25:41] <KekSi> the command appears to work but not do what its supposed to -- i mean trying with a string ..copyTo("db.myCollection") fails (as it should) and reports an exception
[08:26:51] <KekSi> ...copyTo("databasename.myCollection") as in, the correct path that calling db.myCollection hands out works just like the above
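For what it's worth, the documented signature of copyTo() takes the bare *name* of the target collection as a string, not a collection object and not a "database.collection" path, which may explain why the attempts above misfire. A shell sketch using the collection names from the conversation (only meaningful inside a live mongo shell session):

```javascript
// copyTo() expects the plain target-collection name as a string:
db.myCollectionWithRecalculations.copyTo("myCollection");
// returns the number of documents copied, or throws on failure
```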
[08:27:47] <KekSi> since our prod & staging are somehow still on 2.4 there's no "initializeUnorderedBulkOp()" yet :x
[08:28:29] <KekSi> i did that on staging btw since copyTo shouldn't be used in prod because it blocks
[08:28:40] <kurushiyama> KekSi: Off the top of my head, I am not too sure whether you can cross DB borders. Have you checked that?
[08:29:17] <KekSi> they're on the same db
[08:29:22] <KekSi> just different collections
[08:32:49] <kurushiyama> Hm, give me some time.
[08:33:53] <KekSi> unless you really want to know what's going on don't bother
[08:34:43] <KekSi> i could probably strace it and find out which syscalls are going wrong but i cba and doing it with a function worked well enough since there's no concurrent access on the source collection
[08:36:30] <kurushiyama> KekSi: I do.
[08:59:51] <kurushiyama> KekSi: Sorry, is going to take a while. In general, I strongly suggest updating to 2.6 and 3.0 subsequently, since even 2.6 will be at eol by the end of this year. Setting aside the huge performance improvements for now. ;)
[09:22:31] <kurushiyama> KekSi: Oh, ofc, with a slightly higher version, you could use the aggregation pipeline's $out stage
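The $out copy kurushiyama mentions is available from MongoDB 2.6 onward. Built as a plain array so its shape is easy to inspect (collection names from the conversation; in the shell this would run as `db.myCollectionWithRecalculations.aggregate(pipeline)`):

```javascript
// Aggregation pipeline that copies every document into another
// collection. $out must be the final stage.
const pipeline = [
  { $match: {} },            // empty match: pass every document through
  { $out: "myCollection" },  // write the results to the target collection
];
```

Unlike copyTo(), $out does not hold a global write lock for the duration of the copy, which also addresses the "shouldn't be used in prod because it blocks" concern above.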
[09:38:52] <KekSi> yeah i personally run 3.2 in docker containers here but prod and staging is still in pre-docker state
[09:39:00] <KekSi> and upgrading there is annoying af
[09:47:12] <kurushiyama> KekSi: Is it? With a replica set, it is really not.
[10:43:16] <scruz> hi
[10:45:13] <scruz> assume i have documents which have a field called test_results, which is like this: {test_1: 'passed', test_2: 'failed', test_3: 'failed', test_4: 'passed', …}. how do i query such that i can get all documents which failed *any* test?
[10:46:24] <scruz> i’m not so much interested in which test was failed as much as that there were any that failed in the first place.
[11:00:20] <philipwhiuk> If the name is always test_X why not just use an array and then the query is straightforward
[11:04:00] <scruz> philipwhiuk: that was by example, but i think $elemMatch might help
[11:15:13] <scruz> $elemMatch is useful, but won’t work for my case.
[11:15:55] <kurushiyama> scruz: maybe you should rather have a doc per test?
[11:16:32] <kurushiyama> scruz: In which case querying failed ones would be extremely easy.
[11:17:10] <scruz> hi kurushiyama.
[11:17:19] <kurushiyama> scruz: Hoi!
[11:17:29] <scruz> this is existing data. a new requirement showed up.
[11:18:14] <kurushiyama> scruz: Well, if the data model does not fit the requirements, then you are in even more trouble ;)
[11:18:26] <scruz> :)
[11:18:52] <kurushiyama> scruz: Having values as keys being your current one.
[11:19:34] <philipwhiuk> If you have a fixed list of keys you could just build a very long OR-statement
[11:19:56] <scruz> kurushiyama: if i understand you correctly, it’s better to have immutable values as keys?
[11:20:16] <scruz> philipwhiuk: that’s likely what i’m going to do. thanks for the suggestion!
[11:20:36] <philipwhiuk> (i'm not sure what the perf of that is....)
[11:21:33] <kurushiyama> scruz: As per your example, it would be {series:"Foo", tests:[{name:"test1", passed:false},{name:"test2",passed:true}]}, for example
[11:22:20] <scruz> ‘name’ being an immutable key in this case.
[11:22:26] <kurushiyama> scruz: and the query could be either a relatively straightforward aggregation or an $elemMatch in projection, depending on your use case.
[11:22:31] <kurushiyama> scruz: correct
[11:22:34] <kurushiyama> k/v
[11:22:58] <scruz> kurushiyama: thanks. i’ll watch for using user-defined data as keys in the future.
[11:23:35] <scruz> well, as much as i can, anyway.
[11:24:26] <kurushiyama> scruz: Good way to get you into trouble, as you notice. btw, I'd have single documents like this, most likely: { series:"Foo", name:"test1", passed:true, date:someISODate}
[11:25:19] <kurushiyama> scruz: No risk of hitting the 16MB size limit ever, same functionality, easy to debug and optimize.
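Putting kurushiyama's two shapes side by side, the "which documents failed?" query comes out as follows (document and field names from his examples; the queries would run via `db.tests.find(...)` in the shell):

```javascript
// Flat model: one document per test run -- failure is a plain find.
const flatDoc = { series: "Foo", name: "test1", passed: true, date: new Date() };
const failedQuery = { passed: false };

// Embedded model: tests as an array of sub-documents -- $elemMatch
// selects series with at least one failing entry.
const embeddedQuery = { tests: { $elemMatch: { passed: false } } };
```

In both shapes the key names (series, name, passed) are fixed by the schema and the user-supplied test name is a *value*, which is the point being made about values-as-keys.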
[11:25:20] <scruz> kurushiyama: perhaps you could write up some recommended best practices for using mongodb.
[11:25:55] <kurushiyama> scruz: If only I had the time... The only thing I got so far is: http://blog.mahlberg.io/blog/2015/11/05/data-modelling-for-mongodb/
[11:25:59] <scruz> probably seemed okay at the time
[11:26:56] <scruz> we have user-defined quality checks on a Template like so: {quality_checks: [{name: 'foo', expr: 'bar'}, …]}
[11:28:03] <scruz> and on our Response: {quality_checks: {foo: 'passed', foo_1: 'flagged', …}}
[11:28:27] <kurushiyama> scruz: Well, ofc I can not know your use cases. You have to decide yourself. But "values as keys" is a good way to get you into problems, I'd say 99.99999% of the time.
[11:29:06] <kurushiyama> scruz: And you can always convert your data to maintain your outside API.
[11:29:15] <kurushiyama> Like on the fly ;)
[11:29:54] <kurushiyama> Embedding, as written, will make you hit the 16MB size limit, at least conceptually.
[11:30:09] <kurushiyama> Especially with arbitrary length embedding.
[11:37:03] <deniz946> hello, how i can save an item in a subdocument in mongo?
[11:37:37] <deniz946> for example i have this Schema
[11:37:38] <deniz946> http://pastebin.com/knMR8zTs
[11:37:48] <deniz946> and i want to save a comment into comentarios
[11:37:56] <deniz946> how i can do that?
[11:40:58] <Keksike> can I use $or-operator with bulk-finding?
[11:41:08] <Keksike> I get an error when I try to do it in my clojure-program
[11:42:37] <Keksike> https://gist.github.com/Keksike/fa869b31ae855dc16cb380d282cc1526 this my Bulk-find query atm, but I also need to add a parameter where it will find ones without :channel channel
[11:43:51] <Keksike> that link broke https://gist.github.com/Keksike/2d1abe27eb9fc5f9098271604ce21e8a
[11:45:03] <Keksike> https://gist.github.com/Keksike/cfe59afb874777069d61f6b966a94659 maybe now it works
[11:46:00] <Keksike> ah, now the first link works
[11:59:04] <scruz> deniz946: perhaps this would work? db.collection.insert({titulo: 'foo', slug: 'foo-slug', autor: 'bar', categoria: 'baz', contenido: 'lorem ipsum', comentarios: [{usario: 'foo-ser', contenido: 'nullam nunc'}], tags: ['foo-ten', 'foo-shin']})
[12:01:50] <scruz> deniz946: there’s also $addToSet and $push (for updating).
[12:02:09] <scruz> specifically for arrays.
[12:04:51] <scruz> deniz946: so, db.collection.update({_id: someId}, {'$push': {comentarios: {usario: 'foo-user-2', contenido: 'Como estas?'}}})
[12:05:27] <scruz> check the mongodb docs, though i suspect you’re using a Node library/framework with MongoDB
[12:15:57] <deniz946> scruz, Hello! thanks for the advice, yes im using nodejs as backend
[12:18:05] <StephenLynx> <deniz946> for example i have this Schema
[12:18:13] <StephenLynx> you are not using mongoose by any chance, are you?
[12:18:25] <deniz946> Yes im using mongoose :)
[12:18:33] <StephenLynx> don't.
[12:18:53] <StephenLynx> first, performance. it is about 6x slower than the native driver
[12:19:07] <StephenLynx> second, it handles ObjectId's and dates WRONG.
[12:19:22] <StephenLynx> third, it makes you use mongodb WRONG.
[12:19:35] <StephenLynx> it is the single worst ODM that exists.
[12:19:47] <StephenLynx> I suggest you use the driver
[12:19:53] <StephenLynx> npm package mongodb
[12:20:39] <StephenLynx> also another advice, disregard your native language, always use english on your code and model.
[12:24:06] <deniz946> StephenLynx, Okay, thanks for the advice, i have a very bad habit and it's mixing both languages in my code.. spanish and english
[12:24:28] <deniz946> Like this "$scope.articles = articulos;"
[12:24:38] <deniz946> :facepalm: i have to work on it
[12:24:41] <StephenLynx> yeah
[12:24:56] <StephenLynx> but most important
[12:24:58] <StephenLynx> you have to stop using mongoose
[12:25:08] <deniz946> Any recomended guide for working with mongodb original driver?
[12:25:22] <StephenLynx> http://mongodb.github.io/node-mongodb-native/2.1/api/
[12:25:22] <deniz946> i've tried but it looks a bit hard for me to understand
[12:25:30] <StephenLynx> its simpler than mongoose.
[12:25:31] <deniz946> and mongoose was so easy
[12:25:56] <StephenLynx> most of the terminal commands translate 1:1 to this driver.
[12:26:16] <StephenLynx> and the documentation is easy to navigate with plenty of examples.
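A non-runnable sketch of deniz946's comment insert against the 2.1 driver API linked above, to show how close it stays to scruz's shell commands. The connection URL, database name, and someArticleId are placeholders, and the field names are taken from scruz's earlier examples:

```javascript
var MongoClient = require("mongodb").MongoClient;

// Placeholder URL/database; adjust to your deployment.
MongoClient.connect("mongodb://localhost:27017/blog", function (err, db) {
  if (err) throw err;
  // Same $push update scruz showed -- the shell syntax carries over 1:1.
  db.collection("articles").updateOne(
    { _id: someArticleId },  // placeholder id of the article to comment on
    { $push: { comentarios: { usario: "foo-user-2", contenido: "Como estas?" } } },
    function (err2, result) {
      if (err2) throw err2;
      db.close();
    }
  );
});
```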
[12:26:27] <StephenLynx> 10/10 documentation
[12:27:04] <deniz946> Thanks you then :)
[12:27:09] <deniz946> I'll give an eye
[13:01:49] <deniz946> StephenLynx, one question, are you around here?
[13:03:06] <StephenLynx> that's the IRC equivalent of "are you awake?"
[13:03:10] <StephenLynx> :v
[13:03:33] <deniz946> Yeah ;D
[13:04:30] <deniz946> I want to get all coments of a specific post, comments are subdocument of document Article, i should search first the post by id and return post.comments?
[13:04:43] <deniz946> this is my schema http://pastebin.com/knMR8zTs
[13:04:47] <StephenLynx> again
[13:04:51] <StephenLynx> schemas don't exist.
[13:05:07] <StephenLynx> a sub-document is just a value as any other.
[13:05:29] <StephenLynx> so whatever you have to do to get any other field, you do to get a sub-document.
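Concretely, "a sub-document is just a value" means fetching only one post's comments is a findOne with a projection, nothing schema-specific (field names from the pastebin; in the shell this would run as `db.articles.findOne(filter, projection)`):

```javascript
// Select the article by id, project only its comments.
const filter = { _id: "some-post-id" };        // placeholder id
const projection = { comentarios: 1, _id: 0 }; // 1 = include, 0 = exclude
```

The returned document is then `{ comentarios: [...] }`, so the caller reads `result.comentarios` like any other field.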
[13:13:51] <syshape> Hi, can't we use a $in operator for the cond of a filter ? It says invalid operator, but I don't see a reason to
[13:16:05] <Derick> syshape: can you share your query?
[13:19:55] <syshape> sure a few seconds pls
[13:26:05] <syshape> http://pastebin.com/pHVmW2H5
[13:29:44] <Derick> syshape: what are you trying to do? Can't you do a $match first?
[13:32:31] <syshape> I want to $unwind $scores keeping only types homework and exam, sure I can do a $unwind then $match, but I thought it would be quicker using a $in
[13:33:47] <Derick> hmm, not sure whether you can do it this way
[13:33:53] <Derick> how do your documents look like?
[13:35:54] <syshape> https://university.mongodb.com/courses/MongoDB/M101JS/2016_March/courseware/Week_6_The_Aggregation_Framework/56ba914bd8ca39047c3ac2a2
[13:36:28] <syshape> it's an homework of mongodb university :)
[13:36:54] <Derick> i can't see that because I'm not enrolled
[13:39:06] <syshape> http://pastebin.com/dgyX2w32
[13:41:30] <Derick> syshape: i think you should go with the unwind
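For context: at the time, $in existed only as a query operator, not as an aggregation *expression* (that arrived in 3.4), which is likely why $filter's cond rejected it. Derick's unwind-then-match route looks roughly like this (field names from the M101JS sample data; in the shell it would run as `db.students.aggregate(pipeline)`):

```javascript
// Unwind the scores array, then keep only homework and exam entries.
const pipeline = [
  { $unwind: "$scores" },
  { $match: { "scores.type": { $in: ["homework", "exam"] } } },
];
```

Here $in is used in its query-operator form inside $match, which was valid on the versions discussed.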
[13:42:24] <Industrial> Hi! I'm trying to use https://github.com/mongodb-labs/mongo-connector with https://github.com/mongodb-labs/elastic2-doc-manager with ElasticSearch and Kibana in Docker and Docker Compose
[13:42:56] <Industrial> https://gist.github.com/Industrial/9ab69727eed58fabfabc978b001a0aef is my Dockerfile
[13:43:32] <syshape> ok Derick, thank you, I'm a real beginner, not sure of what I am doing
[13:44:00] <Industrial> I'm getting "mongodbconnector_1 | Logging to mongo-connector.log." then "elastickibana_mongodbconnector_1 exited with code 0" and everything stops :(
[14:06:07] <Ryzzan> let's say i have a doc like this: { _id: Number, someArray: [ { uniqueId: Number, anotherArray: [ values ] } ] }
[14:06:34] <Ryzzan> how to update pushing new values to another array, related to uniqueId on some array?
[14:06:44] <Ryzzan> (is my question clear? :) )
[14:07:31] <Ryzzan> pushing values to anotherArray*
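One answer to Ryzzan's question: match the outer array element in the filter, then use the positional $ operator in the update path so the $push lands in that element's inner array (ids and the pushed value below are placeholders; in the shell: `db.docs.update(filter, update)`):

```javascript
// Filter pins down the document AND the someArray element by uniqueId;
// "$" in the update path refers to the matched element.
const filter = { _id: 1, "someArray.uniqueId": 42 };
const update = { $push: { "someArray.$.anotherArray": "newValue" } };
```

Note the positional $ resolves only the *first* matching element per document, which is fine here since uniqueId is unique within the array.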
[17:26:20] <bla> Hello.
[17:31:05] <bla> Any idea if the obsolete .eval() will be replaced with something similar or just removed? I have a POC implementation of rollback for one of the operations we might need in our system and it looks fine.
[17:59:51] <yopp> cheeser, what happens if two requests are doing the $set => { "a.b.foo": 1} and $set => {"a.b.bar": 1} at the same time when "a.b" doesn't exist?
[18:00:21] <yopp> I mean, it's guaranteed that at the end I'll get "a.b": {foo: 1, bar: 1} or not? :)
[18:14:15] <kurushiyama> yopp: Basically, updates are put in a queue if on the same doc, iirc. Since the 2 operations change different keys, the first one is applied, creating "a.b" implicitly, setting either "foo" or "bar", then the next operation is applied and then the resulting "a.b" is modified with the next op applied.
[18:14:38] <kurushiyama> yopp: a bit simplified, but sufficient.
[18:14:58] <yopp> queued, gotcha
[18:16:59] <kurushiyama> Well, basically you have document level locking, and document operations are guaranteed to be atomic, so it probably would be more precise to say that operation 1 is first applied in full, preventing all other operations by means of a lock, and after releasing the lock, operation 2 is applied in full.
[18:17:03] <kurushiyama> yopp: 1
[18:17:08] <kurushiyama> yopp: ^
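kurushiyama's serialization point can be sketched in plain JavaScript: model a dotted-path $set, apply the two operations one after the other (as the document lock forces), and the merged result is the same in either order. This is a simulation of the semantics, not driver code:

```javascript
// Apply a single dotted-path $set to a plain object, creating
// intermediate sub-documents implicitly, as MongoDB does for "a.b".
function applySet(doc, path, value) {
  var keys = path.split(".");
  var cur = doc;
  for (var i = 0; i < keys.length - 1; i++) {
    if (typeof cur[keys[i]] !== "object" || cur[keys[i]] === null) {
      cur[keys[i]] = {};  // implicit creation of the missing level
    }
    cur = cur[keys[i]];
  }
  cur[keys[keys.length - 1]] = value;
  return doc;
}

var doc = {};
applySet(doc, "a.b.foo", 1);  // first update acquires the lock, creates a.b
applySet(doc, "a.b.bar", 1);  // second update applies to the existing a.b
// doc is now { a: { b: { foo: 1, bar: 1 } } }
```

Since the two operations touch different keys, neither overwrites the other, so yopp's guarantee holds regardless of which update wins the lock first.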
[19:54:47] <trompstomp> Can anyone tell me how I may install Mongo 3.2 on a raspberry pi 2 (it is ARM 7). I understand the limitations of this install but it would be perfect for the project I am working on.
[23:48:34] <poz2k4444> hi guys, I'm trying to sync a mongo collection with elasticsearch using mongo-connector, but even though the collection has lots of documents, I'm not able to do the sync, it keeps saying `OplogThread: Last entry is the one we already processed. Up to date. Sleeping.` How can I solve this?