PMXBOT Log file Viewer


#mongodb logs for Friday the 8th of August, 2014

[00:04:53] <pgentoo-> I asked a question earlier about updating a bunch of records, and have a *working* solution, but it's slow (only running around 1-2k documents/sec). Any suggestions on how I can speed this up? http://pastebin.com/C6UPjxmF
[00:05:27] <pgentoo-> My collection is about 500GB with 500M records...
[00:05:55] <rkgarcia> pgentoo-, test with $exists
[00:06:30] <pgentoo-> rkgarcia, you think it's the cursor finding new records that's the slow part?
[00:06:50] <rkgarcia> pgentoo-, you can filter in the first find
[00:07:59] <pgentoo-> rkgarcia, i am filtering by click.utc2 = null though (which is an indexed field). Is that worse than filtering by click.utc2 exists?
[00:08:15] <rkgarcia> it's different
[00:08:40] <rkgarcia> you are searching with these params "{ "click.utc2": null, "click": {$exists: 1} }" and walking through the cursor
[00:09:07] <rkgarcia> and using a manual condition like doc.click.utc2 == null
[00:09:15] <pgentoo-> so you think doing {"click.utc2" : { $exists: 0 }} would be a better check than {"click.utc2" : null} ?
[00:10:16] <pgentoo-> that second check for click:$exists is because the whole click subdocument may be empty in some cases, which means it wouldn't have a click.utc2, meaning i can't proceed anyway.
[00:11:13] <rkgarcia> pgentoo-, try with {"click.utc2":{$not:null}}
[00:11:39] <rkgarcia> obviously with your other filters
[00:11:41] <babykosh> Mongo gods … need some eyes on this SO question…http://stackoverflow.com/questions/25194177/many-to-many-relationships-with-mongoid
[00:12:35] <rkgarcia> pgentoo-, sorry, I see it now
[00:13:21] <rkgarcia> pgentoo-, try with {"click.utc2": null, "click": {$not: null} }
[00:14:01] <pgentoo-> error: { "$err" : "invalid use of $not", "code" : 13041 }
[00:14:56] <rkgarcia> sorry pgentoo- replace $not with $ne
[00:15:41] <rkgarcia> pgentoo-, the final query is db.clicks.update( {"click.utc2": null, "click": {$ne: null} } , {"$set": { "click.utc2": trimmedDate }});
[00:16:08] <rkgarcia> set the value of trimmedDate
[00:17:29] <pgentoo-> but each document has a different trimmedDate, so that is why i'm updating based on _id
[00:17:43] <rkgarcia> pgentoo-, then
[00:18:14] <pgentoo-> that's why i calculate it for each document (just trying to wipe the minutes/seconds/milliseconds from the date and save it in a new field)
[00:18:49] <rkgarcia> ok, that can be like you are doing
[00:19:57] <pgentoo-> ok trying this update to the selection criteria to see how that changes perf
[00:20:00] <rkgarcia> pgentoo-, http://pastebin.com/i356bLrC
[00:20:09] <rkgarcia> pgentoo-, ok
[00:21:55] <pgentoo-> is there an easy way to query for updates/sec? I'm just watching mms and it seems to take forever to update
[00:24:50] <pgentoo-> and, assuming that is the best i can do for query performance on the cursor, is the way i'm creating my new ISODate() efficient, or is there a faster method?
[00:24:59] <pgentoo-> sorry, i'm no javascript guru
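A rough mongo-shell sketch of the per-document loop being discussed (pgentoo-'s actual pastebin code isn't reproduced here; "click.utc" is an assumed name for the source timestamp field):

    db.clicks.find(
        { "click.utc2": null, "click": { $ne: null } },   // the filter rkgarcia suggested
        { "click.utc": 1 }                                // only fetch the field we need
    ).forEach(function (doc) {
        var d = doc.click.utc;                            // assumed source date field
        // wipe minutes/seconds/milliseconds, keeping the hour
        var trimmedDate = new Date(d.getFullYear(), d.getMonth(), d.getDate(), d.getHours());
        db.clicks.update({ _id: doc._id }, { $set: { "click.utc2": trimmedDate } });
    });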
[00:27:48] <pgentoo-> oooh, mongostat for the win. :)
[00:30:36] <pgentoo-> hrm, it crunches along at around 3500/s for probably 75k worth of records, and then it stalls out for about the same amount of time, and then does another 75k, and repeats...
[00:31:53] <pgentoo-> i'm running with basically default configuration, on a replica set with SSD and a good deal of RAM. Any settings that should be tuned to let this data process more fluidly?
[00:37:25] <pgentoo-> maybe .addOption(DBQuery.Option.exhaust) ?
[00:40:47] <pgentoo-> hrm, i set slaveOk and it seems a bit more steady now. I have to run, but thanks for the help.
[01:06:09] <rkgarcia> pgentoo-, you are welcome
[01:19:50] <z3r0> hi guys!
[01:20:14] <z3r0> do you have links for tutorials on how to secure mongodb?
[01:20:40] <z3r0> like how to secure mongodb for dummies?
[01:21:04] <z3r0> tl;dr;
[01:34:29] <joannac> zz_anildigital: http://docs.mongodb.org/manual/security/
[01:34:33] <joannac> oops
[01:34:40] <joannac> oh, he gave up
[01:35:52] <joker666> guys i'm new to nosql/mongo
[01:35:56] <joker666> how do i get started?
[01:36:30] <joannac> have you installed it?
[01:36:36] <joker666> yup
[01:36:43] <joannac> http://docs.mongodb.org/manual/tutorial/getting-started/
[01:36:46] <joker666> i'm using linux machine
[01:37:39] <joker666> no, my question is more generic... how do i start thinking in a nosql way coming from a relational db system?
[01:38:36] <joannac> http://blog.mongodb.org/post/72874267152/transitioning-from-relational-databases-to-mongodb
[01:39:30] <joker666> thanks joannac, let me have a read
[05:25:19] <xdotcommer> decimals ;(
[05:45:56] <xdotcommer> On UPSERT I am getting ----> Cannot increment with non-numeric argument: {v: "0.00797000"}
[05:46:44] <joannac> what's the upsert?
[05:46:53] <joannac> are you trying to $inc that field or something?
[05:47:18] <xdotcommer> $trade_data['$inc']['v'] = "0.00797000"
[05:47:38] <joannac> why are you trying to $inc by a string?
[05:48:22] <xdotcommer> joannac: I am kind of new to mongo not sure how to deal with a decimal
[05:48:49] <joannac> $trade_data['$inc']['v'] = 0.00797000
[05:49:22] <xdotcommer> I don't believe mongo supports a decimal type
[05:50:58] <joannac> umm
[05:51:24] <xdotcommer> I come from a mysql background, they have a decimal type which works magic for storing financial data
[05:52:25] <joannac> I can assure you that mongo supports decimals
[05:53:10] <xdotcommer> http://stackoverflow.com/questions/7682714/does-mongodb-support-floating-point-types
[05:55:14] <xdotcommer> interesting... the first issue was incorrect typecasting by PHP .. hence the string ... and you caught that joannac :)
[05:55:27] <xdotcommer> now it seems to store well as a double :)
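The difference joannac is pointing out, shown in the mongo shell ("trades" is a made-up collection name): $inc rejects a string but happily increments a BSON double.

    // fails with "Cannot increment with non-numeric argument"
    db.trades.update({ _id: 1 }, { $inc: { v: "0.00797000" } }, { upsert: true })
    // works: the value is stored (and incremented) as a double
    db.trades.update({ _id: 1 }, { $inc: { v: 0.00797000 } }, { upsert: true })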
[05:59:39] <joannac> yeah
[06:00:15] <xdotcommer> question is if I can restrict the number of decimals
[06:00:27] <joannac> there's no arbitrary-precision numbers, like mysql decimals
[06:00:31] <joannac> https://jira.mongodb.org/browse/SERVER-1393
[06:01:46] <xdotcommer> would be nice if it gets done
[06:14:44] <xdotcommer> reading through the issue https://jira.mongodb.org/browse/SERVER-1393
[06:17:17] <xdotcommer> "Major - P3 Major - P3 " does that mean its going to be done at some point?
[06:34:26] <joannac> The fix version is "planning bucket A", which means it's in a list of features we want to implement, but it's not scheduled yet
[06:35:02] <xdotcommer> I imagine it's not trivial .. it took mysql a lot of versions to get it right
[07:01:58] <xdotcommer> I guess this is a workaround but it would not work with atomic updates http://ec2-54-218-106-48.us-west-2.compute.amazonaws.com/moschetti.org/rants/mongomoney.html
[07:08:18] <Boomtime> xdotcommer, what do you want to update atomically that you can't with that method?
[07:08:49] <xdotcommer> Boomtime: well for example I use $max $min $inc
[07:09:02] <xdotcommer> $inc i guess is the one that will suffer
[07:10:14] <Boomtime> i haven't read that whole page, what is it that isn't going to work?
[07:11:06] <Boomtime> $max, $min, $inc are just operators, they work on whatever field you specify, the page you linked specifically uses int64 which will work with those operators... have I missed something?
[07:11:35] <Boomtime> (and float it appears)
[07:12:35] <Boomtime> do you need to calculate the updated value of one field from the atomic result of another?
[07:14:40] <xdotcommer> Boomtime: One sec I am just trying to re-read the article... for some reason I thought that approximate value was a string
[07:16:26] <Boomtime> I have not read it all, it's possible there is a problem in there that prevents easy atomicity.. but if so, then that just means you'll have to do it the slightly harder way
[07:18:20] <xdotcommer> Boomtime: I am trying to find the most simple elegant solution
[07:26:08] <xdotcommer> maybe storing it as cents can be a decent solution but feels stoneage like
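A sketch of the "store it as cents" approach, which keeps $inc/$min/$max atomic because the value is an ordinary integer; the collection and field names are made up:

    // 12.34 stored as 1234 cents; NumberLong keeps it a 64-bit integer
    db.accounts.update(
        { _id: "acct1" },
        { $inc: { balanceCents: NumberLong(1234) },
          $max: { largestDepositCents: NumberLong(1234) } },
        { upsert: true }
    )
    // the application divides by 100 for display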
[08:12:15] <k_sze[work]> Can repairDatabase fail if I don't have enough space?
[08:21:23] <rspijker> k_sze[work]: it should give you a message that it can't even start if it detects you don't have enough free disk space
[08:23:54] <k_sze[work]> And is it ok to run repairDatabase on the 'local' db?
[08:24:21] <k_sze[work]> currently it's sitting at 2 GB for some reason.
[08:30:27] <rspijker> k_sze[work]: local will most likely be mostly the oplog. That's a capped collection. It reserves a certain amount of disk space
[08:30:34] <rspijker> by default 5% of your free space iirc
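To see how much of 'local' is really the oplog, something like this in the shell (on a replica set member the oplog lives in local.oplog.rs):

    use local
    db.oplog.rs.stats()        // storageSize shows the space reserved for the capped oplog
    db.printReplicationInfo()  // configured oplog size and the time window it covers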
[09:43:34] <chungbd> hi all, after i removed one member of my replica set, i'm getting error in my PRIMARY member log "replset couldn't find a slave with id 15, not tracking ..."
[09:44:02] <chungbd> Plz help me
[10:38:00] <remonvv> \o
[10:43:39] <Derick> sup
[10:45:02] <remonvv> Still employed?
[10:45:36] <Derick> hehe, yes
[10:45:38] <Derick> why? :-)
[10:46:05] <remonvv> Didn't you have your review yesterday?
[10:46:09] <Derick> oh yeah
[10:46:14] <Derick> it's not that big of a deal
[10:47:15] <remonvv> \o/
[11:04:42] <Derick> ernetas_: ping
[11:51:59] <joannac> Derick: did you pass? ;)
[11:52:13] <Derick> yes !
[12:53:22] <Const> Hello !
[12:55:36] <Derick> hi!
[12:57:17] <Const> I have a question about MongoDB. (I'm using the php connector, but it's not relevant.) I run a script which reads documents from a MongoDB, adds some values to the returned array and needs to update the doc. To save resources and time, because I have a lot of docs to update, I'd like to store all my docs in an array (or anything else) and perform a save (or update, or anything else) for all the documents at once. It would be (but it's not working) something like doing a users->save() with a $each parameter.
[12:57:43] <Const> Any idea if it's possible and how to do it? Or a hint on how to search for it, I can't formulate my search
[13:00:53] <rspijker> Const: is the value you are adding dependent on the document?
[13:01:46] <Const> well, I modify the whole document. I load them with a findOne(), add some values or change others, and want to update the document in the collection.
[13:07:08] <Const> rspijker, I'm not sure what you mean by "dependent on the document". You asked if it's already a document of the collection? In this case, yes, it is.
[13:10:16] <jekle> Const: http://docs.mongodb.org/manual/tutorial/modify-documents/#modify-multiple-documents-with-update-method
[13:11:34] <Const> ah, the title of the page sounds nice... I searched in the docs, but not enough. Thanks jekle, I'll read it to see if that's what I need.
[13:18:02] <rspijker> Const: what I meant is, does the way in which you wish to modify the document depend on the content of the document? If not, then you can use the update method that jekle linked. If it does, then you need to loop over them like you are doing now
[13:20:19] <Const> ah, yes I'm afraid so. I load a list of users, change their values and update them. So $user[1]'s content needs to update the matching entry in the collection
[13:21:35] <Const> So I guess I have to keep my loop :(
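In shell terms (Const is using PHP, but the shape is the same), the distinction rspijker draws: a single multi-update covers the case where the change doesn't depend on each document, otherwise you keep the per-document loop; the field changes below are made-up examples.

    // same change for every matching document: one update with multi
    db.users.update({ active: true }, { $set: { checked: true } }, { multi: true })

    // change depends on each document's content: loop and save, as Const is doing
    db.users.find({ active: true }).forEach(function (u) {
        u.score = (u.score || 0) + 1;   // hypothetical per-document change
        db.users.save(u);
    });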
[13:40:23] <ssaraH> http://pastiebin.com/53c6812d60c4c
[13:41:14] <ssaraH> is this write?
[13:41:33] <ssaraH> right
[13:41:51] <ssaraH> Can i use the object to start the replica sets?
[13:42:05] <Derick> no
[13:42:14] <Derick> you can only have one _id field in rsconf
[13:42:20] <Derick> you need to do it twice, once per shard
[13:42:47] <abhik_> Hello all, I am abhik, a PhD student at Indiana University, USA
[13:42:47] <Derick> (and connected to rs0-0 or rs1-0)
[13:42:59] <Zelest> heya abhik_
[13:43:08] <abhik_> I am trying to implement chemical similarity search using mongodb
[13:43:31] <ssaraH> you talking about the rs id, right, Derick?
[13:43:38] <Derick> yes
[13:43:44] <abhik_> has any one of you come across the datablend post on chemical similarity search?
[13:43:46] <ssaraH> aight, ima breaking it
[13:43:49] <Derick> the rsconf object has two _id fields ... you can't do that
[13:44:15] <Derick> ssaraH: also, you don't start the arbiters as arbiter...
[13:44:22] <Zelest> abhik_, i'm clueless on that.. but on that topic, I have a lysergic acid diethylamide molecule as my wallpaper. ;-)
[13:44:23] <Derick> you start them as normal replicaset members
[13:44:34] <Derick> Zelest: caffeine?
[13:44:37] <Zelest> LSD
[13:45:00] <abhik_> Ok i will move on to my problem
[13:45:03] <ssaraH> why, Derick
[13:45:04] <ssaraH> ?
[13:45:38] <Derick> ssaraH: I am wrong :D
[13:45:49] <Derick> i was confusing config servers
[13:45:56] <ssaraH> trivia: lysergic acid comes from a fungus bone disease that makes you mad
[13:46:11] <ssaraH> ah, ok, ty derick
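A sketch of what Derick describes: the rsconf object can only carry one _id, so each replica set gets its own rs.initiate(), run while connected to a member of that set (hostnames follow the rs0-0/rs1-0 naming from the discussion and are otherwise made up):

    // connected to rs0-0
    rs.initiate({ _id: "rs0", members: [
        { _id: 0, host: "rs0-0:27017" },
        { _id: 1, host: "rs0-1:27017" },
        { _id: 2, host: "rs0-arb:27017", arbiterOnly: true }
    ]})

    // connected to rs1-0
    rs.initiate({ _id: "rs1", members: [
        { _id: 0, host: "rs1-0:27017" },
        { _id: 1, host: "rs1-1:27017" },
        { _id: 2, host: "rs1-arb:27017", arbiterOnly: true }
    ]})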
[13:54:20] <abhik_> can anyone help me on sharding?
[13:56:54] <kali> abhik_: usually irc is not good with meta questions. there are 400 people here. just ask the real question
[14:01:14] <abhik_> I have a big collection and i want to split it up into multiple collections and perform the search based on some key. I have a single machine, any ideas on this?
[14:01:36] <Derick> sharding makes no sense if you only have one machine
[14:04:33] <abhik_> Ok i got my answer @Derick, i was thinking of sharding, so you're saying sharding is only meant for multiple machines
[15:11:40] <remonvv> Derick, there are actually some performance improvements if you run such a setup on a single machine in some use cases.
[15:12:02] <Derick> remonvv: perhaps, not often though
[15:13:48] <remonvv> Wow, that was an hour old.
[15:13:52] <remonvv> And he already left.
[15:14:11] <Derick> aye
[15:15:10] <remonvv> Derick, perhaps you should have mentioned that before I put my time and energy into this sir!
[15:15:25] <Derick> maybe not ;-)
[15:15:27] <remonvv> So for about an hour nobody had any questions about MongoDB usage. Go figure.
[15:15:34] <Derick> it's friday afternoon
[15:15:56] <remonvv> Hm, fair point.
[15:16:48] <Derick> i've found myself a tasty beverage already
[15:17:02] <remonvv> Same. A cold "Hertog Jan"
[15:17:19] <Derick> hm
[15:17:26] <Derick> low alcohol cider to start with
[15:17:35] <Derick> the imperial stouts will have to wait until it's really weekend
[15:17:58] <remonvv> Aww, not allowed anything else in the MongoDB office?
[15:18:07] <Derick> i work from home
[15:18:13] <Derick> and no, nothing to do with allowed or not
[15:18:18] <remonvv> Aha, cool.
[15:18:19] <Derick> there is a beer (and wine) fridge in the office
[15:18:34] <remonvv> You work from home the majority of the time?
[15:18:39] <Derick> yes
[15:18:43] <Derick> nearly 95%
[15:18:50] <Derick> no other techies in London really
[15:18:59] <Derick> sales people, and solution architects
[15:19:03] <remonvv> Oh okay. And is that the exception or the rule?
[15:19:06] <Derick> they're on the phone a lot
[15:19:19] <remonvv> Nice.
[15:19:31] <Derick> i think so, we have a few other remotes in europe; hertfordshire, berlin, sevilla and barcelona
[15:19:33] <kali> well, last friday before shutting down, i guess we can try and find similar beverages here
[15:19:40] <remonvv> You'd think you lot are making enough to be able to afford an engineering lab in EU
[15:19:51] <Derick> remonvv: new york...
[15:20:05] <Derick> that's the main tech hub
[15:20:07] <remonvv> That's not the EU Derick!
[15:20:13] <Derick> drivers people are spread out quite a bit though
[15:20:15] <remonvv> And I think we all know they make bad decisions in NY
[15:20:21] <Derick> kernel is all in nyc
[15:22:46] <remonvv> Who's kernel lead at the moment? Still eliot or has that been delegated
[15:22:47] <remonvv> ?
[15:22:53] <Derick> sorry
[15:22:54] <Derick> hmm
[15:23:06] <Derick> eliot is CTO, but there are more kernel leads now
[15:23:12] <shader> can I create an 'unsecured' database on a secured mongo server? i.e. one that is accessible from localhost with no username/password?
[15:23:21] <Derick> shader: no
[15:23:26] <shader> hmm
[15:23:44] <remonvv> Okay. This is not really the venue for kernel related bitching I suppose.
[15:24:01] <shader> you can't add a user with no name or password?
[15:24:03] <remonvv> shader: What's your use case for that?
[15:24:32] <Derick> remonvv: bitching is only allowed through JIRA tickets ;-)
[15:25:04] <remonvv> Derick: That's just an inefficient way to get your ticket to "works as designed" status ;)
[15:25:16] <Derick> hah
[15:25:18] <shader> I'm lazy, and not quite sure how to configure the username/password for this service i'm installing
[15:25:23] <Derick> remonvv: what's your ticket?
[15:26:49] <remonvv> I don't even remember all of them; query syntax inconsistencies, storage engine issues, upsert is inconsistent, behaviour that changes due to indexes...
[15:26:54] <remonvv> Let me have a quick look at my history
[15:27:43] <Derick> maybe not then...
[15:28:08] <remonvv> Maybe not what?
[15:28:29] <Derick> i thought you had just one ticket, not a bucket load! :-)
[15:28:52] <remonvv> I've been using mongo since beta. There's bound to be a few tickets ;)
[15:29:05] <Derick> fair point
[15:30:23] <remonvv> 28 reported, 5 in works as designed/won't fix
[15:30:44] <remonvv> Let me try and objectively judge how often I feel that's justified.
[15:31:17] <shader> why would you objectively judge your feelings?
[15:31:38] <remonvv> JIRA tickets aren't me sharing my feelings.
[15:31:46] <remonvv> I have www.poemsfromengineers.com for that
[15:32:07] <shader> of course
[15:32:10] <remonvv> Roses are red, violets are, my upserts aren't atomic, boohoohoo
[15:33:27] <remonvv> https://jira.mongodb.org/browse/SERVER-12694
[15:33:36] <remonvv> That's the one I'm really not going to ever agree with.
[15:33:39] <remonvv> That's just broken.
[15:33:46] <remonvv> The rest is debatable.
[15:33:50] <shader> I was just thinking that objectively judging how often you feel something seems like unnecessary effort
[15:34:17] <shader> yeah...
[15:34:26] <remonvv> I think you're metaing it a bit too much ;) I just meant run over the issues I reported and see which ones in hindsight I still think are issues.
[15:34:46] <remonvv> There's a limit on how objective you can be about your own work or efforts.
[15:35:09] <shader> yeah, and I was just making conversation because I don't want to work...
[15:35:14] <shader> :/
[15:35:25] <remonvv> Okay, are you GLSL or HLSL?
[15:35:32] <remonvv> And are you fragment or vertex?
[15:35:42] <Derick> remonvv: can you not reopen it? it's probably just an oversight that they didn't reply after it was already closed
[15:36:39] <remonvv> Derick: I think it's more a "this is how it's designed to work" sort of thing than an "I misunderstood"
[15:37:31] <remonvv> Anyway, I'm well aware I can't really move such huge tech design decisions as a non-paying user but when I'm CTO of MongoDB I'll make these things a priority.
[15:37:40] <remonvv> And that's only days away. I can feel it.
[15:37:54] <remonvv> I'll promote you to captain too Derick
[15:37:58] <remonvv> I like you
[15:38:08] <Derick> hehe
[15:39:31] <Derick> i think i would agree with you on this one though...
[15:39:38] <Derick> but I doubt *I* can do anything about it
[15:40:55] <remonvv> Derick: Such is life ;) But yeah..
[15:53:32] <remonvv> Have a nice weekend!
[16:10:28] <Moonjara> Hi! I don't know if I'm in the right place but I'd like some help with basic manipulation in Java. I have a directory with several links as embedded documents. If I do db.directories.findOne({}) on the command line I get every link in my directory, but in Java the findOne method doesn't get the embedded documents. Does anyone have any clue to help me? Thanks anyway!
[17:48:16] <near77> hi
[17:48:39] <near77> anyone knows where I can get the protocol specification for mongo?
[17:49:15] <Derick> http://docs.mongodb.org/meta-driver/latest/legacy/mongodb-wire-protocol/
[17:49:45] <Derick> and also: https://github.com/mongodb/specifications/blob/master/source/server_write_commands.rst
[17:50:27] <near77> thanks!
[17:50:34] <near77> I'm looking because mongosniff always crashes for me
[17:50:40] <near77> and I need a way to audit mongo queries
[17:50:49] <Derick> near77: can I recommend wireshark? it has a mongodb dissector
[17:51:13] <Derick> near77: and did you report mongosniff crashing in a jira ticket?
[17:51:32] <near77> I thought so, but I need something that I can put on a server that starts monitoring queries and sending them to my Elasticsearch cluster
[17:51:37] <Derick> because I'm sure we'd like to fix that... https://jira.mongodb.org/browse/SERVER
[17:51:54] <Derick> near77: hmm, for that usecase, I think you should look at "oplog tailing"
[17:52:23] <Derick> I've written about that at http://derickrethans.nl/mongodb-and-solr.html (I know it's for solr, but the same idea applies)
[17:53:04] <Moonjara> Hi! I don't know if I'm in the right place but I'd like some help with basic manipulation in Java. I have a directory with several links as embedded documents. If I do db.directories.findOne({}) on the command line I get every link in my directory, but in Java the findOne method doesn't get the embedded documents. Does anyone have any clue to help me? Thanks anyway!
[17:53:32] <near77> do you know if the oplog creates a performance drag on the mongo?
[17:54:05] <Derick> near77: it does a little, but if you're already running a replicaset it's there of course for you to use
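A rough sketch of the oplog tailing Derick mentions, using the 2.6-era shell; it surfaces write operations (not reads), and the forwarding to Elasticsearch would live in the application:

    var local = db.getSiblingDB("local");
    // start from the newest entry currently in the oplog
    var last = local.oplog.rs.find().sort({ $natural: -1 }).limit(1).next().ts;
    var cur = local.oplog.rs.find({ ts: { $gt: last } })
                  .addOption(DBQuery.Option.tailable)
                  .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) {
        var op = cur.next();   // op.ns, op.op ('i'/'u'/'d') and op.o describe the change
        // ship `op` to Elasticsearch here
    }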
[17:55:42] <Derick> dinner time - will be back later
[17:55:52] <near77> ok thanks!
[17:59:16] <near77> hi @Derick
[17:59:37] <near77> When you get back, can you tell me if it's possible to get the source ip address and source port that executes each request on mongo?
[18:19:59] <ssaraH> i have two replica sets, i want to add two config servers, one for each, right?
[18:20:08] <ssaraH> (minimum, I'm saying)
[18:20:42] <ssaraH> or do i need just one config server for the whole thing to work as two shards of the same mongos?
[18:22:35] <kali> no. it's one configuration server (for tests or homework) or three (for production) for the whole cluster
[18:27:12] <ssaraH> aight, ty kali
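A sketch of the 2.6-era layout kali describes, with made-up hostnames: the same one (testing) or three (production) config servers back the whole cluster, and each replica set is then added as a shard through a mongos.

    # config servers (one for testing, three for production)
    mongod --configsvr --dbpath /data/configdb --port 27019
    # a mongos pointing at all of them
    mongos --configdb cfg1:27019,cfg2:27019,cfg3:27019
    # then, in a shell connected to the mongos:
    #   sh.addShard("rs0/rs0-0:27017")
    #   sh.addShard("rs1/rs1-0:27017")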
[18:39:58] <the-erm> How would you restrict access to 1 database and leave another one completely open? Is there a way to restrict 1 database to localhost connections, and another one can be accessed via the net?
[18:41:35] <the-erm> btw I think the default setup should be to bind to 127.0.0.1. I just found out mongo has been exposed this whole time.
[18:46:24] <saml> the-erm, what do you mean?
[18:46:38] <saml> oh i see
[18:46:44] <saml> i doubt you can do that
[18:47:20] <saml> because mongodb databases are in application layer, but you're talking about network layer blockage
[18:47:57] <saml> unless.. mongodb can bind to different ip and port per mongodb database
[18:48:20] <saml> i think once you are using a database, you can also use other databases
[18:48:22] <saml> maybe i'm wrong
[18:55:26] <borkdox> I'm using the mongodb nodejs driver (1.4.8) on an AWS t2.micro instance with mongodb 2.6.3 also on a t2.micro instance. Everything works normally, but every day, once a day, I run a set of queries that gets called repeatedly about 3000 times. Everything starts fine, but at some point queries start to fail with errors such as: "failed to connect to [x.x.x.x:27017]" and "no open connections". I'm using connection pools; everything used to work fine with mongodb 2.4.x and the 1.3.x driver. Any ideas or suggestions? thanks!
[18:56:32] <kali> borkdox: smells like oom kills
[18:57:01] <Lenny[]> Hello. I'm just starting out with mongodb and php. On my current hosted servers batchinsert using the php driver seems to distort the order of the documents in the database. I've read a bit and found out that this happens on update, but I found no mention of it happening on insert. Is this normal or indicative of some error?
[18:58:18] <kali> Lenny[]: normal
[18:58:48] <Lenny[]> Does it have to do with some documents being larger than the default allocated space?
[18:59:29] <Lenny[]> In any case its good to know its not some kind of error.
[19:00:01] <kali> or documents growing and overflowing their space, being moved and generating holes
[19:00:17] <kali> holes can be filled by a subsequent insert
[19:00:30] <kali> Lenny[]: do not make assumptions on the physical order
[19:00:56] <kali> Lenny[]: it's not strictly speaking persistent
[19:01:00] <Lenny[]> I read that it was normal during updates but it was kinda surprising to see it when batch inserting just once into an empty collection
[19:01:16] <kali> Lenny[]: well, with batch inserting, there is a "order" flag
[19:02:33] <Lenny[]> Thanks a lot for the info, Whenever I need to list something it will be with some kind of sorting anyway. I was just surprised and worried that something was going wrong.
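On the flag kali mentions: in the 2.6 shell a batch insert takes an ordered option (the PHP driver exposes a similar choice); it controls whether the inserts run in array order and stop at the first error, not the physical order documents end up in on disk. "things" is a made-up collection name.

    db.things.insert(
        [ { _id: 1 }, { _id: 2 }, { _id: 3 } ],
        { ordered: true }   // insert in array order, abort on the first error
    )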
[19:02:47] <borkdox> kali, hmm, any idea how i can verify it's an oom kill? the same query was working fine on a t1 instance (less memory) under mongodb 2.4
[19:03:01] <kali> borkdox: /var/log/system.log
[19:03:19] <kali> borkdox: grep memory there
[19:03:30] <borkdox> kali, ok great, will take a look. tnx
[19:26:26] <lock> does anyone know how to manually calculate lock percentage without MMS? Such as with db.serverStatus() ? Thanks
[19:37:15] <nycdjangodev> hey there. I don't have much experience with mongo. I am running this query: db.redacted_db_name.update({}, {sent_5_star_posts: true}, {multi: true}) and I am getting this error: "multi-updates require $ops rather than replacement object"
[19:37:31] <Derick> you need:
[19:37:40] <Derick> { $set: { sent_5_star_posts: true } }
[19:37:52] <nycdjangodev> money. thanks!
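Derick's fix next to the original query, as one complete command: a multi-update needs an update operator such as $set rather than a replacement document.

    db.redacted_db_name.update(
        {},                                       // match every document
        { $set: { sent_5_star_posts: true } },    // operator instead of a replacement doc
        { multi: true }
    )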
[19:39:20] <nycdjangodev> it's running now
[19:39:33] <nycdjangodev> I'm worried that i messed something up cuz the collection has 14 million entries
[19:39:38] <nycdjangodev> and it's still going
[19:40:43] <nycdjangodev> @Derick: am I overloading the db?
[19:41:56] <Derick> it's just going to take some time
[19:44:21] <nycdjangodev> @Derick: ok so I shouldn't worry then?
[19:44:36] <Derick> nycdjangodev: well, was the field already there?
[19:45:11] <nycdjangodev> @Derick: not sure if it was for all cases but for most, yes
[19:45:19] <Derick> k
[19:45:23] <Derick> what does mongotop say?
[19:45:42] <nycdjangodev> I am doing it through robomongo
[19:45:46] <nycdjangodev> not sure what mongotop is
[19:46:02] <Derick> it's a command line tool to see what could be slowing down mongodb
[19:46:21] <nycdjangodev> ah
[19:47:45] <Derick> tells you about speed, and locks and all sorts of stats
[19:48:11] <nycdjangodev> could i just download it then auth in to my db?
[19:48:26] <Derick> should work, sure
[19:48:52] <Derick> are you not running mongodb locally?
[19:49:58] <nycdjangodev> this is a bit embarrassing...I am running it locally but not for that collection, cuz that collection is too big
[19:50:11] <nycdjangodev> running that command on prod :/
[20:17:26] <nycdjangodev> @Derick: uhh this query is still going. I don't have access to mongotop. Is it unsafe to kill the process?
[20:19:16] <Derick> yeah, you shouldn't do that
[20:22:51] <nycdjangodev> what should I do?
[21:28:31] <elux> hello
[21:29:13] <elux> if i attach a mongod 2.6 to a 2.4 cluster, will it work fine..?
[21:29:29] <elux> im planning to add some 2.6 nodes, then eventually kill all of the 2.4 ones, and have the 2.6 ones be the primary
[21:29:31] <cheeser> yep
[21:29:42] <elux> looking to rebuild the db and move to bigger servers without much downtime
[21:29:47] <elux> and be pretty safe
[21:32:32] <joannac> elux: do you have auth on?
[21:32:37] <elux> no
[21:35:56] <joannac> Also go through http://docs.mongodb.org/master/release-notes/2.6-compatibility/
[21:36:02] <elux> ok thanks
[21:36:44] <elux> is that the right way to rebuild the db servers and upgrade at the same time..?
[21:36:57] <elux> or would you guys suggest to stop everything.. dump and import?
[21:38:30] <joannac> when you say cluster, do you mean sharded?
[21:38:42] <elux> no.. just 3 replicas
[21:39:39] <joannac> do you have a bandwidth and oplog to initial sync them?
[21:39:59] <joannac> s/a/enough/
[21:47:46] <Number6> joannac: Bed!
[21:47:54] <elux> thanks guys
[22:25:29] <MacWinner> I currently have a 2-node replication set all working with a mongo arbiter. I wanted to move the 2 nodes to different servers.. what would be the high level process of doing this? or things to watch out for.. Could I just shutdown the secondary node, copy a directory over to the new server, make sure the configs have the right hostnames, and then start it up?
[22:25:49] <MacWinner> I'm concerned about how the replication set's IP addresses and hostnames are set
[22:34:09] <joannac> no, you'd have to reconfigure the replica set with the new hostname
[22:50:27] <MacWinner> joannac, got it.. so is the replicaSet configuration information not part of the mongodump?
[22:51:06] <MacWinner> joannac, if you could point me to the recommended tool set for this situation.. I see articles on mongodump.. on straight file copy.. on running copydb
[23:13:58] <joannac> MacWinner: http://docs.mongodb.org/manual/tutorial/change-hostnames-in-a-replica-set/
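Following the tutorial joannac links, the hostname change is a replica set reconfig (it isn't something mongodump carries over); a rough shell sketch with a made-up new hostname:

    cfg = rs.conf()
    cfg.members[1].host = "new-secondary.example.com:27017"   // hypothetical new address
    rs.reconfig(cfg)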