#mongodb logs for Saturday the 15th of August, 2015

[00:18:53] <joannac> diegoaguilar: if there's no change to be made
[00:19:19] <joannac> i.e. if the change is {$set: {a:1}} and a is already 1 in that document
[00:35:21] <diegoaguilar> joannac, I need to make massive updates on a production database, basically they will be 25k inserts and 30k updates
[00:35:30] <diegoaguilar> that database is consumed by an api
[00:35:50] <diegoaguilar> what are the best techniques to do a healthy import AND/OR have minimal downtime
[00:35:51] <diegoaguilar> ?
[01:27:42] <diegoaguilar> Hello, I'm trying to do this to get the "update if found, insert if not found" behaviour, but it's not working properly; I get a "cannot update lfPoints and lfPoints at the same time" error
[01:27:48] <diegoaguilar> this is what I'm trying: http://kopy.io/fAWWi
[01:32:36] <Boomtime> diegoaguilar: it is a known problem: https://jira.mongodb.org/browse/SERVER-10711
[01:32:48] <Boomtime> you will need to plan a way around it
[01:32:56] <diegoaguilar> for example?
[01:33:09] <Boomtime> dunno, it's your schema
[01:33:24] <diegoaguilar> ermm ok my schema is free to change
[01:33:25] <diegoaguilar> so ...
[01:34:03] <Boomtime> the easiest way around it is to not require initializing a field (which you want to increment in an upsert) to anything other than zero
[01:34:11] <Boomtime> if you need a baseline, then store that separately
[01:34:18] <Boomtime> it's shit, but it works
[01:34:41] <diegoaguilar> hmm
[01:34:46] <diegoaguilar> I cant get it
[01:34:59] <diegoaguilar> I mean lfPoints "does not" exist, ok
[01:35:21] <diegoaguilar> so thats the problem?
[01:35:21] <diegoaguilar> :P
[01:35:48] <diegoaguilar> ahh
[01:36:17] <diegoaguilar> how could this help me?
[01:36:18] <diegoaguilar> db.findAndModify("mydb.counters", {_id: "myCounter"}, {$inc: {n: 1}, $setInsert: {n, 0}}, {upsert: true, new: true})
[01:38:15] <Boomtime> diegoaguilar: two things; 1. that line is the same as an update, 2. you don't need setOnInsert at all in that variant
[01:38:59] <Boomtime> all fields initialize automatically to the equivalent of null/zero as soon as you don't anything with tham - $inc assumes a field to be zero if it doesn't already exist
[01:39:20] <Boomtime> *"as soon as you do anything with them"
[01:39:50] <diegoaguilar> Ok, but I just didnt understand what u said first as workaround
[01:40:03] <diegoaguilar> maybe findAndModify will have the same effect
[01:40:07] <diegoaguilar> and wont fail?
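A sketch of the workaround Boomtime describes, assuming the goal is an upsert counter: because `$inc` treats a missing field as zero, the `$setOnInsert` on the same field (which triggers SERVER-10711) can simply be dropped. The collection and counter names are illustrative, taken from diegoaguilar's snippet; this is mongo shell syntax.

```javascript
// $inc alone is enough: on insert the field starts at 0 and becomes 1,
// so there is no conflicting $setOnInsert on the same path (SERVER-10711).
db.counters.findAndModify({
  query: { _id: "myCounter" },
  update: { $inc: { n: 1 } },
  upsert: true,
  new: true
})
```

Note that in the shell, `findAndModify` is a collection method taking a single options document, not a free function taking a namespace string as in diegoaguilar's attempt.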
[03:14:30] <lufi> in mongoose, populate an unselected path?
[04:06:44] <wiltors42> hello
[04:07:02] <wiltors42> I'm wondering how to return an entire collection from MongoDB in javascript
[04:08:45] <wiltors42> Since the entries are not in any particular order, how do I step through all of them in a loop?
[04:09:15] <wiltors42> Should I make a list of all unames to findOne() with?
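A hedged sketch of one answer to wiltors42's question, using the 2.x-era Node.js native driver discussed elsewhere in this log: sorting on `_id` gives a stable order, and the cursor's `forEach` steps through every document without a per-document `findOne`. The database and collection names are made up for illustration.

```javascript
// Node.js native driver (2.x-era API); 'mydb' and 'users' are illustrative.
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
  if (err) throw err;
  // Sort by _id to impose a stable order, then visit each document in turn.
  db.collection('users').find({}).sort({ _id: 1 }).forEach(function (doc) {
    console.log(doc);
  }, function (err) {
    // Called once when iteration is finished (or on error).
    db.close();
  });
});
```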
[04:57:05] <movedx> MongoDB newbie here. Creating replication sets is a good way of offering data redundancy, backup, and enabling you to create read-only slaves so you can offload read operations to another system. What does sharding mainly introduce? Performance?
[05:20:30] <Boomtime> movedx: http://docs.mongodb.org/manual/core/sharding-introduction/#purpose-of-sharding
[05:36:36] <leptone> i have a collection of cafes with: {name, address, speedTestList} how can i use db.cafes.update() to append a new object to the speedTestList attribute
[05:39:30] <Boomtime> leptone: assuming speedTestList is an array: http://docs.mongodb.org/manual/reference/operator/update/push/
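A minimal sketch of what Boomtime is pointing at, assuming `speedTestList` is an array: `$push` appends one element to it in place. The document shape follows leptone's description; the cafe name and measurement fields are invented for illustration (mongo shell syntax).

```javascript
// Append one speed-test result to the matched cafe's speedTestList array.
db.cafes.update(
  { name: "Cafe Grumpy" },  // selector: which cafe to update
  { $push: { speedTestList: { down: 24.1, up: 4.8, at: new Date() } } }
)
```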
[05:46:35] <leptone> Boomtime, like this? https://gist.github.com/leptone/64f7b8f4801d448505cb
[05:49:48] <leptone> ?
[05:50:17] <leptone> anyone?
[06:19:26] <Boomtime> leptone: that isn't valid JSON, but if you need examples there are at least 3 on that page i linked
[06:26:02] <leptone> Boomtime, actually it looks like only one
[06:26:09] <leptone> and then the capped collections after that
[06:40:51] <Boomtime> leptone: there are three examples:
[06:40:52] <Boomtime> 1. http://docs.mongodb.org/manual/reference/operator/update/push/#append-a-value-to-an-array
[06:41:00] <Boomtime> http://docs.mongodb.org/manual/reference/operator/update/push/#use-push-operator-with-multiple-modifiers
[06:41:07] <leptone> i was looking at the wrong doc
[06:41:13] <leptone> think i figured it out
[06:41:14] <leptone> TY!
[06:41:16] <Boomtime> http://docs.mongodb.org/manual/reference/operator/update/push/#append-multiple-values-to-an-array
[06:41:17] <leptone> Boomtime,
[06:41:20] <Boomtime> goodo
[06:41:28] <Boomtime> happy that you got there
[08:38:43] <Axy> I'm using the javascript native driver -
[08:38:49] <Axy> cursor.next() seems deprecated
[08:38:52] <Axy> as well as nextObject()
[08:38:57] <Axy> as well as hasNext()
[08:39:00] <Axy> what should I use then?!
[09:14:56] <Boomtime> Axy: http://mongodb.github.io/node-mongodb-native/2.0/api/Cursor.html#forEach
[09:15:32] <Axy> Boomtime, oh I see
[09:15:34] <Axy> very nice!
[09:15:38] <Boomtime> there doesn't seem to be a good example in the docs though
[09:15:54] <Axy> well the thing is I was using next to get a single item
[09:15:57] <Axy> instead of iteration
[09:16:01] <Boomtime> the docs example uses toArray which is crap because if you have 10,000 docs to process it will grab them all in memory
[09:16:03] <Axy> I mean I was modifying my cursor over time
[09:16:14] <Axy> yeah I see what you mean Boomtime
[09:16:20] <Axy> how should I go without doing that
[09:16:30] <Axy> right now in my db there are around 2m documents
[09:16:34] <Boomtime> does forEach not work for you?
[09:16:34] <Axy> no way I'll fit them in mem
[09:16:44] <Axy> will try now
[09:17:12] <Boomtime> forEach is iterative, it returns to the server in batches and only keeps that number in memory at a time
[09:17:17] <Axy> Boomtime, the problem is -- I want to have an auto incremental field
[09:17:27] <Axy> does "forEach" trigger as much as it can all at the same time
[09:17:35] <Axy> or does it wait for the previous operation to finish
[09:17:51] <Axy> I need to add an "index" field to my documents, and every time I add it I increase a counter in a "counters" collection
[09:17:57] <Axy> so I need to go one by one
[09:18:19] <Boomtime> how could this problem possibly have been solved by next() ?
[09:18:32] <Axy> because normally when I do next I get a single document
[09:18:34] <Axy> I process it
[09:18:38] <Axy> and then I was checking hasNext
[09:18:45] <Axy> if it has next then recursively do the same
[09:18:55] <Axy> so it keeps taking the next document from the cursor, where it left
[09:18:56] <Boomtime> internally next() retrieves a large batch from the server
[09:19:00] <Axy> and keeps adding the index field
[09:19:08] <Boomtime> then it gives you one of them while holding the others in memory
[09:19:15] <Boomtime> it is an identical pattern to forEach
[09:19:32] <Axy> but forEach applies the operation for everything right?
[09:19:39] <Axy> so I can't wait for the operation to finish
[09:19:44] <Boomtime> any code which worked with hasNext() and next() must, by definition, work with forEach
[09:19:46] <Axy> and go one by one
[09:19:57] <Axy> Ok - here is an example
[09:20:13] <Axy> I have already existing documents - they're in a specific order (by _id)
[09:20:21] <Axy> and I want to add a custom "index" field to them
[09:20:32] <Axy> which would be in the exact order with _id
[09:20:43] <Axy> so I have a separate collection, as counters
[09:21:01] <Axy> and I keep a document, for the "final index" of the original collection I want to add indexes to
[09:21:29] <Axy> so I need to go one by one, and every time the new index is added I need to increase the one at counters
[09:21:39] <Boomtime> so?
[09:21:45] <Axy> I just tried with foreach it seems to be getting the same index for large number of documents
[09:21:55] <Axy> because I believe it tries to reach all at once or something
[09:21:58] <Axy> not one by one
[09:22:06] <Axy> (or it gets the index before my operation is completed)
[09:22:16] <Axy> ((for a single document I mean))
[09:22:22] <Boomtime> then it would do so for the next() and hasNext() loop as well, or you are breaking the rules of NodeJS - which seems likely at this point
[09:22:55] <Axy> no, because I only trigger hasNext recursion in the callback of next
[09:23:03] <Boomtime> next() gets more than one from the server at a time, it merely reveals one to you at a time, exactly the same way that forEach does
[09:23:41] <Axy> Yes but I don't have control to trigger the next-item-operation in foreach
[09:23:51] <Axy> in next- hasnext loop I do -- I can just put things in callbacks
[09:24:00] <Axy> and it only processes one at a time
[09:24:15] <Axy> and after finishing one fully, goesto the next
[09:24:46] <Boomtime> you have serious concurrency issues
[09:24:59] <Axy> what do you mean
[09:25:11] <Axy> this is just a single use script
[09:25:14] <Boomtime> what is the point of your counters document if it can give out the same value twice?
[09:25:31] <Boomtime> if that can happen then you have a concurrency issue
[09:25:39] <Axy> well this is only to add an auto incremental field to a whole collection filled with documents
[09:25:48] <Axy> I wish there was an easier way than iterating over everything
[09:26:02] <Axy> So my primary concern when doing this is: keeping the order as it is
[09:26:15] <Axy> by order I mean, _id -- or the order I added things
[09:26:31] <Boomtime> ok, so this is a one off?
[09:26:40] <Axy> yes one time use
[09:26:50] <Axy> then when I'm adding new documents I alreaddy check and increase counters
[09:26:54] <Axy> and no issues
[09:27:13] <Axy> I need to process what I already have.
[09:27:16] <Boomtime> you'll probably need to resort to batching the updates yourself - use toArray so you can control the order of things, but do the find in batches of a few thousand or something
[09:27:33] <Axy> man that's too much hassle
[09:27:38] <Axy> .next was doing it :(
[09:27:43] <Axy> I was happy to go one by one
[09:28:05] <Boomtime> so use .next then
[09:28:14] <Axy> deprecated?
[09:28:14] <Boomtime> it's only deprecated right? doesn't it still work?
[09:28:25] <Axy> I'm afraid to update
[09:28:27] <Axy> i don't kno
[09:28:29] <Axy> *know
[09:28:31] <Axy> hehe
[09:28:49] <Axy> Anyway - thanks
[09:28:49] <Boomtime> why don't you do this one off job before you update then?
[09:29:10] <Axy> I mean Boomtime I just want an auto incremental field
[09:29:16] <Axy> why is this soooo hard and effortful
[09:29:22] <Axy> that's my current main issue
[09:29:48] <Axy> and then I get responses like "if you need an auto inc. field maybe mongo is not what you should use"
[09:29:55] <Axy> but that's wrong, because I'm storing documents
[09:30:00] <Axy> and I need an auto incremental field
[09:30:26] <Axy> There is no way I could pull this off with a relational db - all of my documents have different field structures
[09:30:33] <Axy> so yeah.
[09:30:39] <Axy> An auto incremental field would be nice :(
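A sketch of the one-off script Axy describes, under the assumptions in the conversation: walk the collection in `_id` order, and only advance to the next document after both the counter `$inc` and the `$set` of the new `index` field have completed, so no two documents can receive the same value. Uses the 2.x-era native driver; `next()` still works despite the deprecation warning Axy mentions. All collection and field names are illustrative.

```javascript
// One-off backfill: assign a sequential "index" field in _id order,
// strictly one document at a time (no concurrent counter reads).
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
  if (err) throw err;
  var docs = db.collection('docs');
  var counters = db.collection('counters');
  var cursor = docs.find({}).sort({ _id: 1 });

  function step() {
    cursor.next(function (err, doc) {      // reveals one document at a time
      if (err) throw err;
      if (doc === null) return db.close(); // cursor exhausted, we're done
      counters.findOneAndUpdate(
        { _id: "docIndex" },
        { $inc: { n: 1 } },
        { upsert: true, returnOriginal: false },
        function (err, res) {
          if (err) throw err;
          docs.updateOne(
            { _id: doc._id },
            { $set: { index: res.value.n } },
            function (err) {
              if (err) throw err;
              step();                      // recurse only after the write lands
            });
        });
    });
  }
  step();
});
```

The key difference from a plain `forEach` is that the recursion sits inside the innermost callback, so the next counter increment cannot start until the previous document's update has finished.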
[11:25:04] <luvenfu> hello, is there some mongo client for windows xp?
[17:06:12] <coderman1> shouldnt this return me just the leadid field? db.leads.find({'leadid': 1}).sort({leadid:-1}).pretty()
[17:07:51] <coderman1> oh nm i see the problem
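For context, the problem coderman1 presumably spotted: in his query the projection document was passed as the *first* argument of `find()`, where it acts as a selector. A sketch of the likely intended form (field names taken from his line; mongo shell syntax):

```javascript
// Projection belongs in the second argument of find(); the first is the filter.
db.leads.find({}, { leadid: 1, _id: 0 }).sort({ leadid: -1 }).pretty()
```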
[18:22:05] <coderman1> is there a way to get the mongo cli to show how long a query took after it ran it?
[18:22:48] <diegoaguilar> coderman1, append .explain at the end of the query
[18:22:55] <diegoaguilar> u will see a bunch of interesting info
[18:22:56] <diegoaguilar> :)
[18:23:10] <coderman1> i did, but i dont see timing
[18:23:23] <coderman1> and that appears to not run the query, just show you the plan that it would run
[18:24:08] <coderman1> oic, executionStats
[18:24:11] <coderman1> ty
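Putting the thread's conclusion together: plain `explain()` only shows the plan, but the `"executionStats"` verbosity actually executes the query and reports its running time. A shell sketch using coderman1's collection and field names:

```javascript
// Runs the query and reports timing in executionStats.executionTimeMillis.
db.leads.find({}, { leadid: 1 })
  .sort({ leadid: -1 })
  .explain("executionStats")
  .executionStats.executionTimeMillis
```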
[18:34:23] <happiness_color> in a script with javascript Dates
[20:23:23] <asteele> hi all, i am new to using mongo in production and just yesterday started seeing issues in our application where certain endpoints of the api are very consistently taking a very long time (400 - 1200 ms) to return. I am trying now to find out where the issue is, if it is with mongo or some of the code, i am using mongotop and mongostat and according to those two it shows everything happening very fast, never anything above 1-5 ms, is that
[20:23:24] <asteele> telling me the issue is definitely not with mongo - or are there other places to look
[20:24:42] <asteele> ill be idling so if you see this and have any opinions, your time is very much appreciated <3
[20:28:41] <StephenLynx> asteele we will need more details.
[20:28:57] <asteele> StephenLynx sure what would help?
[20:29:10] <StephenLynx> the application source code, what it does in general
[20:29:36] <StephenLynx> does it use mongoose, by any chance?
[20:29:42] <asteele> yes i am using mongoose
[20:29:48] <StephenLynx> it could be that.
[20:29:58] <StephenLynx> mongoose is about 6 times slower than the native node.js driver.
[20:30:25] <asteele> are there some known issues i should look for there? i am not doing tons of complex work but give me one second and ill try my best to explain whats going on
[20:30:56] <StephenLynx> the issue is mongoose being awful.
[20:31:00] <StephenLynx> the solution is not using it.
[20:32:51] <asteele> lol :( fair enough, but part of me thinks a bigger issue might lie somewhere. everything was working fine for many weeks but just recently there is a very noticable delay in lots of the api calls - reading from the db seems to still be quick
[20:33:17] <asteele> i get your concern though i think, its hard to diagnose slowness when you have mongoose in front of it all
[20:33:29] <StephenLynx> is not hard.
[20:33:36] <StephenLynx> the diagnosis is mongoose being mongoose.
[20:34:00] <StephenLynx> you cannot expect to use it and have a good performance with large datasets.
[20:34:30] <StephenLynx> using mongoose is the most common error people make when using mongodb with node/io
[20:35:31] <asteele> but like 500 ms slow? my data set is really not all that big. We have < 2500 users and most of what they are doing is continually updating one document. I AM using like a populate() query and stuff during some of these calls, but like i said to me the weird part is, up until yesterday all of this stuff ran very fast and fine, even using mongoose
[20:35:47] <StephenLynx> yes
[20:36:06] <StephenLynx> make a prototype performing the same operation with the native database and test it.
[20:36:15] <StephenLynx> native driver*
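A minimal sketch of the prototype StephenLynx suggests: time one of the slow lookups through the native driver directly, bypassing mongoose, and compare against the 400-1200 ms the API endpoints are showing. The connection string, collection, and filter are placeholders for whatever asteele's endpoint actually queries.

```javascript
// Native-driver timing prototype (2.x-era API); names are illustrative.
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
  if (err) throw err;
  var start = Date.now();
  db.collection('users').findOne({ email: 'test@example.com' }, function (err, doc) {
    if (err) throw err;
    console.log('native driver took', Date.now() - start, 'ms');
    db.close();
  });
});
```

If this comes back in single-digit milliseconds (matching what mongotop/mongostat report) while the mongoose path stays slow, the bottleneck is in the application layer rather than in mongod.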
[20:37:01] <asteele> okay brb a few minutes while i test some stuff on local, thank you btw StephenLynx
[20:37:06] <StephenLynx> np