PMXBOT Log file Viewer


#mongodb logs for Thursday the 16th of January, 2014

[03:15:41] <regreddit> do indexes have to be built when a db is restarted?
[03:16:44] <retran> no, regreddit
[03:16:48] <regreddit> we use replicas, so I can't see this happening, but i'm putting the index commands into my server's startup, and want to see if they need to be built or rebuilt ever
[03:16:50] <regreddit> ok
[03:16:56] <regreddit> thanks
[03:16:59] <retran> in fact, you can copy the whole data dir
[03:17:12] <retran> and the indexes will be intact if you put it on a different system
[03:18:38] <regreddit> so, does having ensureIndexes adversely affect my db if my app calls it on startup?
[03:18:47] <retran> no
[03:18:52] <regreddit> or if the indexes are good, does it just return
[03:18:55] <retran> it has an expense on insert/update
[03:18:59] <regreddit> sweet
[03:19:31] <retran> and yes, it would have an expense if any docs aren't indexed on the collection
[03:19:39] <retran> as specified in the command
[03:19:52] <regreddit> but that's desirable, so i'm good
[03:19:54] <retran> but if they're already indexed as specified in the command, it won't have that expense
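To make the exchange concrete, a minimal sketch of idempotent index creation at startup, using the ensureIndex() shell API of this era (collection and field names are invented):

```javascript
// ensureIndex() builds the index only if it doesn't exist; if an identical
// index is already present it returns immediately without rebuilding.
db.orders.ensureIndex({ customerId: 1 });   // built once, a no-op afterwards
db.orders.ensureIndex({ createdAt: -1 });   // safe to run on every startup
// The cost retran mentions is ongoing: each insert/update that touches an
// indexed field must also update the index entries.
```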
[05:29:03] <pootietang> sup peeps!
[05:30:11] <pootietang> I'm having a problem running mongodb on opensuse. I get the following error: JavaScript execution failed: Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:L112
[05:31:29] <pootietang> any tip would be appreciated
[06:32:09] <mark__> what does aggregation mean in mongodb?
[08:19:19] <dtrott> If I have a (Java) object that has its own custom serialization (byte[] about 4k) what's the best way to store that in Mongo? I can create a wrapper DBObject with a single data field but that seems somewhat redundant since I don't really need a wrapper with just one field? PS Also using Spring Data if that helps..
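dtrott's question goes unanswered in the log; for what it's worth, BSON documents must be objects, so a bare byte array can't be stored as a document on its own, and the single-field wrapper he describes is the usual approach, with the bytes held as a BSON binary value. A shell sketch (collection name and base64 payload are invented):

```javascript
// A one-field wrapper document holding custom-serialized bytes as
// BSON binary data (subtype 0, generic).
db.blobs.insert({ _id: 1, data: BinData(0, "3q2+7w==") });
```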
[09:03:37] <goog> "The minimum recommended number of slave servers are 3." <--- why 3? 1 master + 3 slaves?
[09:26:07] <Nodex> are you confusing slaves with replicas?
[10:02:29] <goog> Nodex: maybe. is slave different than secondary of replica set?
[10:03:31] <crashev> Derick: hello, just to make sure, isn't there another way to upgrade mongodb from 1.4.x to 2.2.x than the path 1.6, then 1.8, then 2.0, then 2.2? isn't there something like just dumping the data, upgrading to 2.2.x and then reloading the data, like in relational sql databases?
[10:21:47] <theblackbox> hello all, I'm struggling with starting the mongod process as a service that is then disowned - it currently locks up the shell instance that calls it and so stalls my deployment script. Any suggestions?
[10:35:14] <theblackbox> sorry, got called into a meeting
[10:35:33] <HarryT> Could somebody tell me how I can update my questions object? If a value isn't there, it needs to be added. If a value is there, it needs to be updated. This is the json equivalent of my object - http://pastebin.com/G47GfZ2w
[10:35:40] <HarryT> Any help would be greatly appreciated!
[10:39:08] <Derick> crashev: hmm, it is something to try
[10:39:48] <Derick> crashev: I suggest you: 1. shut down mongod. 2. make a backup of the data files. 3. start MongoDB 2.4 and see whether it works first
[11:29:55] <crashev> Derick: ok, I will try that as well, but how about doing 1. dump of data like mysqldump or pg_dumpall, 2. shutdown current mongodb 1.4, 3. upgrade to 2.x (clean install), 4. restoring data in 2.2 from dump made using mongodb 1.4 -> is this possible, or does this logic not apply to mongodb?
[11:38:14] <Derick> crashev: you can try that too - don't go to 2.2 though, upgrade to 2.4!
[11:38:41] <Derick> crashev: I think dump and restore is just going to be slower
[11:41:32] <crashev> Derick: ok, will upgrade to 2.4 then
[11:41:36] <Derick> crashev: do not forget a backup
[11:49:20] <crashev> Derick: sure thing :)
[11:55:19] <Garo_> I have a strange problem: My 300GB database is doing around 650 queries per second and around 200 updates per second. For some reason the database reads over 8000 i/o operations per second from the SSD storage, that's over 200MB (megabytes) per second. I can't find any logical reason why it's doing so many reads. Any ideas? All my slaves (two) are in sync. mongostat reports around 300 page faults per second. The documents are quite small, around a few kilobytes per document.
[12:49:35] <joannac> readahead?
[13:00:57] <NoReflex> hello! I'm trying to find the average and sum for some fields in a mongo collection; all the examples I've seen mention using a group clause but I don't need to group anything
[13:01:14] <NoReflex> so far the only solution for me is to use $group but use null for _id
[13:02:24] <kali> NoReflex: yep, that's it
[13:02:47] <kali> NoReflex: null, or anything constant will do
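A minimal sketch of the approach kali confirms, grouping on a constant _id (collection and field names are invented):

```javascript
// Grouping on a constant _id (null, or any literal) collapses the whole
// collection into a single result document carrying the aggregates.
db.sales.aggregate([
  { $group: {
      _id: null,                          // no real grouping key
      totalAmount: { $sum: "$amount" },   // sum over all documents
      avgAmount:   { $avg: "$amount" }    // average over all documents
  }}
]);
```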
[13:03:16] <NoReflex> kali, I understand; does mongo shell provide a way to save a query so I don't have to type the whole thing again?
[13:04:38] <kali> NoReflex: yes. http://docs.mongodb.org/manual/reference/program/mongo/#mongo-mongorc-file
[13:04:41] <NoReflex> besides the history I mean
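The mongorc file kali links to is evaluated by the shell at startup, so a saved query can be wrapped in a function there. A hypothetical ~/.mongorc.js entry:

```javascript
// Hypothetical helper saved in ~/.mongorc.js; the shell loads this file on
// startup, making the function available in every session without retyping.
function salesSummary() {
  return db.sales.aggregate([
    { $group: { _id: null, total: { $sum: "$amount" }, avg: { $avg: "$amount" } } }
  ]);
}
```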
[13:09:00] <Derick> crashev: any luck?
[13:09:33] <NoReflex> kali, thank you!
[13:17:36] <trupheenix> is there anyone here who can help me with using $regex and $in on an array in pymongo?
[13:17:41] <trupheenix> I am not getting results
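trupheenix gets no reply here; one common pitfall with this combination, for the record, is that a { $regex: ... } operator document cannot be nested inside $in. Instead, $in accepts regular-expression values directly (in pymongo, compiled re patterns). A shell sketch with invented names:

```javascript
// $in matches when the field (or any element of an array field) equals one
// of the listed values; regex literals are legal values here, but a nested
// { $regex: ... } document is not.
db.items.find({ tags: { $in: [ /^foo/, /bar$/ ] } });
```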
[13:40:26] <jeanre> hi all
[13:41:25] <jeanre> can anyone see anything wrong with this
[13:41:50] <jeanre> db.books.update({ "pages.comments._id": '1' }, { $set: { "pages.comments.$.deleted": true }})
[13:44:42] <jeanre> I'm getting can't append to array using string field name: comments
[13:45:04] <jeanre> there is an array of pages which contain an array of comments
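For context on the error: at this time the positional $ operator resolves only the first array matched by the query, so it cannot reach a comments array nested inside the pages array. A sketch of the distinction (names follow jeanre's schema):

```javascript
// One level of array nesting works: $ binds to the matched comments element...
db.comments.update(
  { "comments._id": "1" },
  { $set: { "comments.$.deleted": true } }
);
// ...but in jeanre's schema comments lives inside the pages array, so the
// path "pages.comments.$" gives the server no position for pages, hence
// "can't append to array using string field name: comments".
```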
[14:06:12] <HarryT> Could somebody tell me how I can update my questions object? If a value isn't there, it needs to be added. If a value is there, it needs to be updated. This is the json equivalent of my object - http://pastebin.com/G47GfZ2w
[14:16:12] <NoReflex> take a look at upsert http://docs.mongodb.org/manual/reference/method/db.collection.update/
[14:24:56] <HarryJT> I tried doing an update with upsert and it still removed the previous values though?
[14:25:06] <HarryJT> I'll try again in about 15 minutes, gotta do a windows reinstall on a machine first!
[14:45:01] <HarryJT> http://pastebin.com/5hwGmm0M
[14:45:21] <HarryJT> Why is there an unexpected { somewhere, it looks valid to me?
[14:47:53] <NoReflex> Use dot notation to update values in subdocuments
[14:47:57] <NoReflex> http://docs.mongodb.org/manual/reference/method/db.collection.update/
[14:49:21] <grexican> Hey Derick. You were helping me yesterday with a $match issue where I was trying to match on subdocuments and filter them out of a subdocument. I came up with a solution using a bit of rolling and unrolling and using $eq like you suggested. I can pastie it if you're interested in seeing it. And of course no hard feelings if you couldn't care less to see it :)
[14:49:58] <HarryJT> Thanks NoReflex!
[14:51:17] <NoReflex> HarryJT, yw
[14:51:46] <HarryJT> Am I not supposed to use a number as a variable name?
[14:52:35] <HarryJT> (Cause now I have this - http://pastebin.com/hJ64DCuN )
[14:53:35] <HarryJT> * http://pastebin.com/32bD0pYi
[15:00:52] <Derick> grexican: it's ok - glad you found it though :)
[15:01:31] <grexican> roger
[15:09:55] <Nodex> HarryJT : if your update breaks strings in JSON you have to quote it
[15:10:16] <Nodex> "foo.bar" = ok. foo.f100 = false, foo.100 = ok
[15:23:44] <HarryJT> I did try it with quotes though!
[15:23:44] <HarryJT> db.data.update({ _id : 8 }, { "questions.q100": "It worked"},{ upsert: true })
[15:23:50] <HarryJT> Thu Jan 16 14:57:54.234 can't have . in field names [questions.q100] at src/mongo/shell/collection.js:143
[15:28:11] <cheeser> dotted names imply subdocuments
[15:39:53] <Nodex> that's strange, try {"questions" : {"q100" : "Worked"}}
[15:45:34] <Nodex> anyone know off the top of their head whether https -> http is susceptible to a man in the middle sniff? - I forgot lol
[15:45:44] <Nodex> slightly OT I know but hey
[15:56:43] <Katafalk_> Hey, is there a way to find write-heavy, read-heavy queries in mongo ?
[15:56:47] <Katafalk_> some tools maybe ?
[16:04:38] <Nodex> tail the log
[16:04:50] <Nodex> anything over 100ms shows up in there iirc
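The 100ms figure Nodex half-remembers is mongod's default slowms threshold; operations slower than that are written to the log. The profiler exposes the same data in queryable form:

```javascript
// Record operations slower than 100 ms into the capped system.profile
// collection of the current database, then inspect the most recent ones.
db.setProfilingLevel(1, 100);
db.system.profile.find().sort({ ts: -1 }).limit(5);
```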
[16:36:32] <HarryJT> Nodex, sorry I had to go and add 2 or 3 printers manually to about 15 computers...
[16:37:30] <HarryJT> When I then run the same code, but change the question id, it still removes the original question unfortunately.
[16:42:26] <Nodex> you need $set then
[16:42:52] <Nodex> db.foo.update({CRITERIA},{$set:{foo:"bar"}},{upsert:true});
[17:13:40] <HarryJT> Nodex, it's still removing the old value!
[17:13:47] <HarryJT> > db.data.update({ _id : 8 }, { $set:{ "questions": {"q201": "It worked"}}},{ upsert: true })
[17:13:58] <HarryJT> > db.data.update({ _id : 8 }, { $set:{ "questions": {"q202": "blah blah"}}},{ upsert: true })
[17:14:13] <HarryJT> The only one that I can see is q202: blah blah, unfortunately. Any ideas why?
[17:15:58] <jeanre> hi all
[17:16:03] <jeanre> is it possible to do this
[17:16:33] <jeanre> db.collection('books').update({ 'pages.comments._id': id }, { $set: { "pages.comments.$.deleted": true }}, function(err, result) {
[17:16:41] <jeanre> I need to only update the one comment
[17:22:30] <cheeser> HarryJT: because you're setting the subdocument value to be {"q202": "blah blah"}
[17:22:49] <cheeser> HarryJT: "questions" should be an array of subdocuments and then you can use $push
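To make cheeser's diagnosis concrete: $set with a whole subdocument replaces it, while $set with a quoted dotted path touches a single key (dotted paths are only legal inside update operators, which is why the earlier attempt without $set raised "can't have . in field names"). A sketch using HarryJT's names:

```javascript
// Replaces the entire questions subdocument, so q201 disappears:
db.data.update({ _id: 8 }, { $set: { questions: { q202: "blah blah" } } }, { upsert: true });

// Dot notation adds or updates one key, leaving the rest of questions intact:
db.data.update({ _id: 8 }, { $set: { "questions.q202": "blah blah" } }, { upsert: true });

// cheeser's alternative schema: questions as an array of subdocuments, grown with $push:
db.data.update({ _id: 8 }, { $push: { questions: { id: "q202", text: "blah blah" } } }, { upsert: true });
```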
[18:16:20] <jeanre> there has to be a way to do this
[18:18:00] <regreddit> jeanre, are you saying you want to update a single item in an array?
[18:18:17] <jeanre> regreddit it's an array in an array
[18:19:52] <regreddit> i think dotted index notation works on mongo shell, have you tried pages.comments.N.deleted?
[18:21:02] <jeanre> I can't use . because they're not named documents
[18:22:10] <jeanre> looks like I will have to update the whole document
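regreddit's suggestion spelled out: when the positions are known (found by reading the document first), numeric indexes can stand in for the positional operator. The indexes below are invented:

```javascript
// Flip the deleted flag on comment 3 of page 0; both positions are
// hypothetical and would have to be looked up per document first.
db.books.update(
  { "pages.comments._id": "1" },
  { $set: { "pages.0.comments.3.deleted": true } }
);
```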
[18:38:45] <patricioe> is there any way to store an empty array or map using morphia? (need to keep consistency with avro default values)
[18:39:19] <ron> cheeser
[18:45:56] <cheeser> ron
[18:47:26] <ron> patricioe had a question for you
[18:47:43] <ron> cheeser
[18:48:28] <patricioe> I was about to post this question on morphia email list :)
[18:48:48] <patricioe> :cheesed, is it possible to store an empty list using morphia?
[18:48:56] <patricioe> :cheeser ^
[18:51:22] <cheeser> if you look at MapperOptions, there's a setStoreEmpties()
[18:51:30] <cheeser> and setStoreNulls() for that matter.
[18:51:55] <cheeser> set up your MapperOptions how you'd like and create a Mapper with those options then pass that Mapper to Morphia when you create it.
[18:53:29] <patricioe> awesome. Will look into that. Thanks.
[19:01:27] <cheeser> patricioe: you? https://github.com/mongodb/morphia/issues/567
[19:01:56] <patricioe> yeah.
[19:02:17] <cheeser> yeah, that's more of a mailing list thing than file an issue thing.
[19:02:33] <cheeser> i'm going to close that out as it should work just given the steps above.
[19:02:36] <patricioe> I thought it was a feature.
[19:02:59] <cheeser> that feature is there already
[19:03:05] <patricioe> just commented there and closed it.
[19:03:11] <cheeser> great. thanks.
[19:03:17] <patricioe> sure. But i wasn't able to find it on the mailing list. sorry.
[19:16:00] <Logicgate> hey guys, having problem with an aggregation
[19:16:19] <Logicgate> it has worked for a very long time, and yesterday the behavior of the aggregation changed
[19:17:02] <Logicgate> http://pastebin.com/TDQNgRAX
[19:17:14] <Logicgate> I want it to aggregate the count per date
[19:17:18] <Logicgate> unique date that is
[19:17:30] <Logicgate> now, every date is doubled up
[19:19:51] <Logicgate> http://pastebin.com/5HYzdCeN
[19:19:58] <Logicgate> here's a sample of the data returned
[19:20:25] <Logicgate> i'm trying to figure out why some dates get doubled up
[19:23:17] <Logicgate> anyone?
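Logicgate's pastebins aren't reproduced here, but one common cause of duplicated dates in a $group is keying on a full timestamp rather than its date parts, so two times within one day form two buckets. A guess at the intended shape (field names invented):

```javascript
// Group on year/month/day extracted from the timestamp so that differing
// times within the same day collapse into a single bucket.
db.events.aggregate([
  { $group: {
      _id: {
        year:  { $year: "$created" },
        month: { $month: "$created" },
        day:   { $dayOfMonth: "$created" }
      },
      count: { $sum: 1 }
  }}
]);
```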
[20:07:23] <patricioe> :cheesed I tried your suggestion and still I'm not able to store an empty array. Any other suggestion ?
[20:07:50] <patricioe> I have debugged the code a bit and I see the options properly set right before calling the DAO.save().
[20:08:19] <patricioe> after that, the database still shows null in the field that should contain an empty array.
[20:13:08] <gswain> I'm not sure if i can ask questions related to the php driver here: but I have a collection named schools and each document has a numerically indexed array called zips. i want to find all schools that have the zip code '83757' in their zip code array. I see examples for querying associative arrays but not for arrays where you don't know the index you are looking for
[20:13:25] <patricioe> :cheeser (sorry I keep misspelling your name)
[20:17:55] <ehershey> 1
[20:20:02] <gswain> ok so if you had tags in your document ['soda', 'juice'] how would you use find to get all documents with the tag soda in it?
[20:24:41] <Joeskyyy> gswain: I'm not 100% on the php driver side, but on the shell side that'd be $in
[20:26:48] <Joeskyyy> http://www.php.net/manual/en/mongocollection.find.php
[20:26:51] <Joeskyyy> See example#4
[20:30:32] <gswain> i was just looking at that one ill look again
[20:30:39] <ruphos> gswain: if you only care that the array has a value in it somewhere, you can just do: tag => "soda"
[20:31:04] <gswain> ruphos: exactly what i was looking for
[20:31:07] <gswain> thanks!
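ruphos's point in shell form (the PHP driver query document has the same shape): matching an array field against a plain value selects documents whose array contains that value, with no index position needed.

```javascript
// "zips" is an array; a plain equality value matches any element of it.
db.schools.find({ zips: "83757" });   // gswain's schools case
db.drinks.find({ tags: "soda" });     // his follow-up tags example
```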
[20:31:23] <ruphos> np
[21:48:12] <pasichnyk> hey guys, i'm just rolling out a mongodb deployment (3 node replicaset) and when i have a decent volume of writes ($inc/upserts) cpu pegs at 100% and the set becomes unresponsive off and on, resulting in growing replication lag. Is the only way to get around this to change my write patterns and/or shard, or are there settings that would impact this behavior in a positive way?
[21:50:09] <pasichnyk> If i completely stop the update calls, it will recover around 1000s of replication lag very quickly, say less than a minute. just seems odd that it lags so hard if the PRIMARY can continue to handle the write patterns...
[21:52:35] <ruphos> your document schema can have an effect. I've experienced significant issues with having subdocument arrays that were growing often, as the bson data has to get grown and shifted around
[21:53:21] <kali> http://docs.mongodb.org/manual/reference/command/collMod/#usePowerOf2Sizes can help when documents are growing
[21:53:26] <pasichnyk> my documents do need to grow a bit, as the data going into them isn't known ahead of time, but most of the updates are just $inc of existing elements. I did set powerOfTwo sizes already
[21:53:48] <ruphos> what is your "decent volume" of writes?
[21:55:58] <pasichnyk> i was processing from a queue, and it was saying between 50 and 100 items processed/sec. Each item is a single document update, that runs $inc operations on 1..n values in document. (where n could be in the 100's, but probably more like double digits)
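For context, the write pattern pasichnyk describes (a single pre-aggregated document per entity, many counters bumped with $inc, upsert creating it on first touch) might look roughly like this; all names and keys are invented:

```javascript
// One queue item becomes one update: $inc several hourly counters plus a
// daily total in the same document, creating it if it doesn't exist yet.
db.metrics.update(
  { _id: "site42:2014-01-16", metadata: { site: 42, day: "2014-01-16" } },
  { $inc: { "hourly.03": 1, "hourly.04": 7, daily: 8 } },
  { upsert: true }
);
```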
[21:57:52] <pasichnyk> i just double checked collections, and the stats reports "userFlags" : 1, so it looks like powerOf2Sizes is enabled.
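What that check amounts to, for reference (the collection name is invented):

```javascript
// userFlags of 1 in the stats output means usePowerOf2Sizes is enabled;
// it can be toggled per collection with collMod.
db.metrics.stats().userFlags;                                   // 1 => enabled
db.runCommand({ collMod: "metrics", usePowerOf2Sizes: true });  // enable it
```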
[22:01:58] <pasichnyk> ruphos, i don't have subdocument arrays, i do have a level or two of schema though, following the pattern recommended in the blog about how mms is built. keeping track of data at say a hourly and daily level in a single document.
[22:04:44] <ruphos> I'm not sure then. The subdocs were the majority issue that I've experienced. Other than that, I've only run into replication issues when migrating between collections and doing 30k+ inserts/sec. Which definitely does not seem to be applicable.
[22:05:01] <pasichnyk> most of my schemas though (18 of 22 collections) have a schema more like this: http://pastebin.com/BHeRh36T
[22:05:42] <pasichnyk> yeah, these boxes have raid10 ssd storage, and also WAY more memory than the dataset currently...
[22:06:25] <pasichnyk> the query for my update specifies the _id value (indexed) as well as the full metadata node (also indexed now), so that it can create the schema if its not already there.
[22:06:38] <pasichnyk> anything seem alarming there, or that pretty standard?
[22:07:07] <ruphos> doesn't look too crazy to me
[22:07:26] <pasichnyk> me either... :/
[22:18:18] <ruphos> I got nothin. sorry.
[22:25:54] <pasichnyk> np, thanks for the thoughts. Maybe someone else has a silver bullet. :)
[22:36:34] <pasichnyk> how's this for a 'locked db' output from mongostat: summaries:1853.7% (that makes no sense, right?)
[22:44:45] <DrStein> Hi. Is it possible to change all 3 config servers' hostnames in one operation?
[22:45:10] <DrStein> I used the actual hostnames and I'd like to start using CNAMEs instead.
[22:48:07] <pasichnyk> hrm, this looks odd. I see that when the cpu is spiked on a box, in db.currentOp() i see "serverStatus":1 running for over 2 minutes. That should be an instant query right?
[22:48:26] <pasichnyk> I'm guessing it's from my mms agent