#mongodb logs for Wednesday the 24th of September, 2014

[00:13:23] <Tug> Hi, I ran removeShard about 12 hours ago; now it's stuck at 7 chunks. Looking at the mongos log, here is what I have:
[00:13:28] <Tug> [Balancer] warning: can't find any chunk to move from: shard2 but we want to. numJumboChunks: 7
[00:17:14] <Boomtime> you have some chunks that are too big to migrate, probably caused by a shard key that has poor cardinality
[00:20:09] <Tug> what do you mean too big to migrate ?
[00:21:07] <Tug> it wants to keep a similar number of documents per chunk so it refuses to migrate data ?
[00:21:28] <Boomtime> no, chunks do not keep document counts
[00:22:05] <Boomtime> chunks are targeted by storage size - when a chunk is detected as being above the storage size allowed for a chunk, the chunk is split into 2 pieces
[00:22:44] <Boomtime> to split a chunk into 2 pieces requires that each piece have a unique range over the total shard key range to call its own
[00:23:09] <Boomtime> this means that if your shard key is not granular enough the chunk cannot be split
[00:23:28] <Tug> my key is more granular than an ObjectId
[00:23:38] <Tug> it's just a 8 chars string
[00:23:44] <Tug> randomly generated
[00:23:48] <Boomtime> apparently not
[00:25:00] <Boomtime> also, what you described is less granular than ObjectId
[00:25:33] <Tug> mm yeah ObjectId is 8 bytes too
[00:25:49] <Boomtime> ObjectId is 12 bytes
[00:25:56] <Tug> ok
[00:25:59] <Boomtime> also, it is unique
[00:26:07] <Boomtime> it has perfect cardinality, not random
[00:26:27] <Boomtime> it makes a bad shard key for a different reason, but it will never result in a jumbo chunk
[00:27:05] <Tug> I see, any remedy for my situation ?
[00:27:26] <Boomtime> your immediate problem might be addressed by temporarily increasing the chunk size
[00:27:32] <Boomtime> http://docs.mongodb.org/manual/tutorial/modify-chunk-size-in-sharded-cluster/#modify-chunk-size-in-a-sharded-cluster
[00:27:57] <Boomtime> but you really need to extract all the data out of this collection, choose a better shard key and then re-import
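For reference, the procedure in the linked tutorial amounts to one write against the config database from a mongos shell; a minimal sketch (the value is the maximum chunk size in MB, and 1024 is just an example):

```javascript
// Run against a mongos; raises the split/migration threshold cluster-wide.
db.getSiblingDB("config").settings.save({ _id: "chunksize", value: 1024 })
```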
[00:28:12] <Tug> ok I'll try that
[00:28:22] <Tug> so what is a better shard key in that case ?
[00:28:31] <Boomtime> depends on your data
[00:28:49] <Boomtime> the simplest "good" shard key is the hash of the ObjectId
[00:29:00] <Boomtime> i.e { _id: "hashed" }
[00:29:20] <Boomtime> but that is a total cop-out, something you do if you can't be bothered determining a good shard key
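For context, a hashed shard key is declared when the collection is sharded; a rough sketch with a hypothetical namespace (on a non-empty collection the hashed index has to exist before sharding on it):

```javascript
sh.enableSharding("mydb")
db.mycoll.ensureIndex({ _id: "hashed" })
sh.shardCollection("mydb.mycoll", { _id: "hashed" })
```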
[00:29:22] <Tug> (btw, my actual shard key is unique too)
[00:29:32] <Boomtime> it cannot be
[00:29:45] <Boomtime> a unique shard key would not result in jumbo chunks
[00:29:59] <Boomtime> unless something else is wrong
[00:30:18] <Tug> mmm wait a sec :)
[00:30:37] <Tug> true it isn't
[00:32:01] <Tug> well at the moment I have a date for each document, it might be a better shard key :s ?
[00:32:13] <Tug> (the date of insertion)
[00:33:10] <Tug> otherwise I'll have to compute a hash of the _id and store it in each document ?
[00:39:07] <Tug> I modified the chunkSize about 30min ago (I saw the advice you gave me after searching for my error on google) but it does not seem to be doing anything
[00:48:10] <Boomtime> you may need to manually migrate the jumbo chunks now, with them marked as "jumbo" the balancer won't touch them
[00:48:43] <Boomtime> i'm not certain if there is a way to toggle the jumbo status off - you should try searching for that
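A manual migration is done with sh.moveChunk; a hedged sketch where the namespace, shard key field/value, and destination shard are all placeholders (the query only has to match a document inside the chunk to be moved):

```javascript
sh.moveChunk("mydb.mycoll", { r: "someShardKeyValue" }, "shard0001")
```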
[00:52:35] <Tug> Will the moveChunk command work for those chunks ?
[00:53:02] <Tug> trying...
[01:20:45] <Tug> I have an error when moving some chunks
[01:20:51] <Tug> "errmsg" : "the collection metadata could not be locked with lock migrate-{ r: \"TSZKk2A8\" }"
[01:21:00] <Tug> "errmsg" : "move failed"
[01:22:09] <Tug> but looking at sh.status it looks like it actually moved
[01:23:56] <Boomtime> is the balancer still running?
[01:24:25] <Tug> true
[01:24:28] <Boomtime> only one chunk move can be in-flight at a time, so if the balancer is running, or you have another manual migration running then that will be it
[01:24:41] <Tug> yes it's still running
[01:24:58] <Tug> It seems to be working anyway
[01:25:41] <Tug> because if I run the command again I get "errmsg" : "migration already in progress"
[01:27:01] <Boomtime> give it some time, balancing is hardly a fast process at the best of times and you have very large chunks to move
[01:27:33] <Tug> "msg" : "removeshard completed successfully"
[01:27:39] <Tug> Thanks Boomtime :)
[01:30:02] <Tug> I have to go, thanks again for your help
[01:33:08] <Boomtime> cheers, and good luck
[02:06:06] <jabnib> Quick "schema" design question from a newbie. I'm planning on making a web-based reporting application with NodeJS/mongo. Users can embed text and images into each report (there may be anywhere from 5 to 50 images per report). My question is: do you think it makes more sense to store text/images in a single document using GridFS? Or store references to each image, similar to how you would in a relational DB?
[02:06:42] <jabnib> I'm leaning toward the latter because I imagine the document sizes could get very big, but I'd appreciate any advice/considerations from you more experienced people :)
[02:12:56] <Boomtime> @jabnib, how big is big? will you place limits on what a user may select?
[02:14:47] <jabnib> @Boomtime, max image size is 15 MB, so worst case a single document would be around ~750 MB (15 MB * 50 images + text)
[02:16:02] <jabnib> @boomtime typically there's only about 20-30 images though
[02:33:20] <Boomtime> @jabnib: ok, so your options are valid, you can fit each image in one document, but not more than one (16MB document limit)
[02:34:17] <Boomtime> at this point it's largely an application use case question - if you will always need the images and retrieving them via gridfs every time is a sensible thing to do then do that
[02:34:46] <Boomtime> if you need the images only one at a time, or only optionally, then use a separate collection just for them
[02:35:15] <Boomtime> perhaps there are other considerations - it is a use case question for what makes sense for you
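To make the "references" option concrete, a hedged sketch (collection and field names are made up): each image lives in its own document, or as a GridFS file, and the report only stores ids, so it stays well under the 16MB document limit:

```javascript
var imgId = ObjectId();
// One document per image (the raw bytes could equally go through GridFS).
db.images.insert({ _id: imgId, reportId: 1, contentType: "image/png" })
// The report itself stays small and only references its images.
db.reports.insert({ _id: 1, title: "Weekly report", text: "...", imageIds: [ imgId ] })
// Fetch a single image on demand instead of pulling all of them with the report.
db.images.findOne({ _id: imgId })
```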
[03:34:43] <netameta> good night all
[07:15:45] <Strekerud> Hi! Is there anyone here with any experience using MongoSessionManager?
[09:55:25] <TomHawkin> can anyone tell me how to do a case insensitive search using linq and the c# mongo driver. i have tried items = items.Where(i => listToCheck.Contains(i.Property.ToLower())); and items = items.Where(i => i.Property.ToLower().In(listToCheck)); but both result in an unsupported where clause error. i worked out that it is because the toLower() method wraps the string in a regex but there doesn't
[09:55:25] <TomHawkin> seem to be a way to wrap all items in a list in a regex
[13:03:41] <TomHawkin> can anyone tell me how to do a case insensitive search using linq and the c# mongo driver. i have tried items = items.Where(i => listToCheck.Contains(i.Property.ToLower())); and items = items.Where(i => i.Property.ToLower().In(listToCheck)); but both result in an unsupported where clause error. i worked out that it is because the toLower() method wraps the string in a regex but there doesn't
[13:03:41] <TomHawkin> seem to be a way to wrap all items in a list in a regex
[13:18:36] <inad922> hi
[13:19:03] <inad922> When I use pymongo's MongoClient to make a query like client.db_name.collection_name.find({....})
[13:19:09] <inad922> I get back a cursor object
[13:19:19] <inad922> How can I get all the results as a list?
[13:28:10] <skot> just use list(...find())
[13:49:36] <jordz> TomHawkin: Not sure if you'll find an answer here mate.
[13:50:00] <jordz> TomHawkin: I reckon you should just store the field like we said, lower case and have a display field.
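In other words, the usual workaround is to store a lowercased copy of the field next to the display value and query that, instead of wrapping everything in regexes; a sketch in mongo shell terms (field names are hypothetical; the C# LINQ side would then query the lowercased field with Contains):

```javascript
db.items.insert({ name: "Foo Bar", nameLower: "foo bar" })
db.items.ensureIndex({ nameLower: 1 })
// Case-insensitive membership test: lowercase the candidate list on the client first.
db.items.find({ nameLower: { $in: [ "foo bar", "another value" ] } })
```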
[13:54:12] <jontyy> Hi, I'm wondering if somebody would be able to help out with a schema design http://pastebin.com/7cYaeXkq
[13:55:11] <jontyy> Basically we are using mongo to track listening sessions, I need to be able to query the data for analytics reports to show things like how many listeners there were per hour, uniques etc
[13:58:39] <jonathan2> I have a mongodump taken from a slave (master/slave repl) and I have the oplog position as found in db.sources from immediately prior to the dump.
[13:58:42] <jonathan2> If I restore the dump onto a brand new slave, and start slaving by writing the db.sources info that I have, will this work as expected (meaning new slave is consistent with master) since the oplog is idempotent?
[13:58:46] <jonathan2> Example: > db.sources.find()
[13:58:48] <jonathan2> { "host" : "the_master", "source" : "main", "only" : "the_db", "syncedTo" : Timestamp(1411507679, 1) }
[15:00:56] <Max-P> Hello, just a quick question: is replication compatible between mongod 2.0.6 and 2.6.x, or am I better off doing a dump + import?
[15:02:22] <cheeser> that's a pretty big version leap but it *should* be. mixed clusters aren't uncommon especially during upgrades.
[15:03:53] <Max-P> cheeser, I would have tried right away if it was 2.2, but to my surprise 2.0 isn't even documented anymore and I think it uses the old master + slave model :/
[15:06:29] <cheeser> ahhhhh.
[15:09:21] <Max-P> I think I'll go with the risky "rsync the db files". I don't really need actual replication, I just need to move the data to a new server ASAP
[15:10:06] <Derick> in order to upgrade from 2.0 you need to run it through 2.2 and 2.4 too
[15:10:21] <Derick> the data format and indexes need/should be upgraded
[15:10:23] <cheeser> maybe this? http://docs.mongodb.org/manual/reference/method/db.copyDatabase/
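For reference, the linked helper is run from a shell connected to the destination server; a rough sketch with hypothetical host and database names:

```javascript
// Run on the new server: pulls "mydb" from the old host over the wire.
db.copyDatabase("mydb", "mydb", "old-server.example.com:27017")
```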
[15:10:49] <cheeser> i'd probably just mongodump/mongorestore
[15:10:59] <Derick> i think that's better too
[15:25:18] <Max-P> Yeah, that's what I'd normally do, but I'm stuck migrating a live standalone database
[15:27:21] <cheeser> rolling upgrade it is!
[15:27:40] <hydrajump> hi
[15:28:09] <hydrajump> Derick: hi are you a MongoDB employee? Can I discuss the official AMIs you provide for AWS?
[15:28:12] <cheeser> how big is the database?
[15:29:46] <Derick> hydrajump: I am an employee, but I am not sure how much I know about those AMIs :)
[15:31:33] <hydrajump> Derick: hehe no worries. Do you know someone you can refer me to? Just like to have a brief discussion about using them. Planning to move from self-installed mongodb 2.4.4 to the official AMIs 2.6.4.
[15:31:49] <Derick> hydrajump: what are the questions?
[15:32:55] <Derick> not actually sure where to direct those to - I think the mongodb-user google groups list might be best
[15:33:22] <hydrajump> Derick: So no one directly at mongodb that I can chat to?
[15:34:34] <Derick> perhaps people here?
[15:37:28] <Max-P> cheeser, It's 10GB. I'm going for an rsync, that's my best option. I'll do a hot rsync while it's running, shut it down, delta-rsync it again, and switch to the new instance. I'll get downtime, but that's my only option I think
[15:38:05] <cheeser> hopefully 2.6 can read those files...
[15:38:09] <hydrajump> Ok so some of my questions are. 1) MongoDB provides the AMIs as well as instructions for installing mongo manually on EC2 instances. Is there any reason for not choosing the official AMIs if the instructions will produce the same result?
[15:38:33] <Max-P> cheeser, Apparently yes it does. Authentication has issues, but it does work
[15:38:37] <hydrajump> 2) When mongodb releases a new version, how are the AMIs updated?
[15:38:54] <Derick> hydrajump: if you don't get an answer here... I think the mongodb-user group is best
[15:39:17] <hydrajump> 3) Is it ok to SSH into the official AMIs and for instance configure iptables? Or should the AMIs be left unmodified?
[15:58:35] <blizzow> Is it possible to make a DB that doesn't require auth from a remote client?
[16:31:33] <hydrajump> Has anyone heard of mongodb replica set being run in docker containers in production?
[16:32:53] <Max-P> hydrajump, No, but I can't see why it wouldn't work. I run mine in a virtual machine, and it's essentially the same thing
[16:39:21] <hydrajump> Max-P: sure I just wanted to hear of someone doing it and maybe get some info.
[17:06:14] <Max-P> Is there an easy way to copy only differences between two databases?
[17:08:01] <cheeser> Max-P: https://gs1.wac.edgecastcdn.net/8019B6/data.tumblr.com/b8996fa01159503164bbb1eeddade189/tumblr_n4x8xcdNkg1s3ulybo1_250.gif
[17:08:04] <cheeser> :D
[17:08:14] <Max-P> I'm still copying my database to the new server, but with an array of drives that runs at 500KB/s there's no way I will ever even delta-transfer in any reasonable time, so I'd like to fire up the new database now and let the remaining data slowly copy over
[17:08:34] <electronplusplus> I'm creating a document with an object that is going to hold relations to my images collection. Profile = { userMedia : { "userMediaId" : { mediaId: "mediaId" } } }. Is there a way I can use functions like find, findOne() in this object? I don't want to create a new collection to save the relation.
[17:09:44] <Max-P> cheeser, That's more accurate to my situation: http://38.media.tumblr.com/0752d1a24082482a00fb98f0b1660732/tumblr_n2etdfXSuy1rsale3o1_400.gif
[17:10:13] <cheeser> ha! +1
[17:10:52] <Max-P> electronplusplus, You want to find Profile by its mediaId?
[17:11:05] <electronplusplus> no
[17:11:37] <Max-P> Please explain a little more what you want to do, I don't understand your question
[17:11:41] <electronplusplus> I have the Profile collection that holds information about the user and also what images the user has uploaded. There is a second collection for Images.
[17:11:54] <cheeser> Max-P: i have this on my laptop: http://society6.com/product/i-am-a-giddy-goat_laptop-skin#2=51
[17:12:10] <electronplusplus> I want to store the relation between the profile and images inside the Profile user document.
[17:12:45] <electronplusplus> If I use an array, I have to replace the entire array; it's stupid and slower.
[17:13:19] <electronplusplus> So I decided to use an object to save this, and I wonder if I can use the functions find, findOne, etc because I'm going to have ids. Max-P
[17:14:51] <Max-P> electronplusplus, So, you want to load up the document in Images by a Profile id?
[17:15:26] <electronplusplus> I'll gist Max-P
[17:16:14] <Max-P> Yeah, would probably help. I have trouble figuring out what you want with what.
[17:17:18] <Max-P> If you want the profile from its image, {"userMedia.userMediaId.mediaId": ObjectId("...")}, if you want the image from the profile you need to do two separate queries. But I can't see what else you could want
[17:17:33] <electronplusplus> Max-P: https://gist.github.com/mariorodriguespt/4a8dbc46d28c760e2c22
[17:18:11] <electronplusplus> Max-P: The problem is updating this object. It's like an embedded collection that I'm treating like an object.
[17:19:37] <electronplusplus> My teammates wrote some code for this but used an Array, since we're using Meteor Js, every time the client updates something: 1- get the array 2- write shitty code to update the array ( O( N^2 ) ) 3- replace the entire array in the database with absolutely no validation.
[17:19:43] <electronplusplus> Max-P:
[17:21:25] <Max-P> electronplusplus, Ah, yeah. Well you can filter what you get with find() using the second argument. So for example if you want to avoid reloading the entire profile, you can pass find({..}, {media: false}) and you'll get everything except the media{} object
[17:21:55] <electronplusplus> Max-P: But I want to perform find inside media.
[17:22:50] <Max-P> electronplusplus, can you give a concrete example of the query you'd like to do?
[17:23:07] <electronplusplus> sure
[17:24:51] <electronplusplus> Profiles.find({ _id: 'someUserID' }, { media: 1 }).remove({ _id: 'This_is_the_media_object_id' })
[17:24:53] <electronplusplus> Max-P:
[17:46:24] <Max-P> I see
[17:48:05] <Max-P> electronplusplus, Since your media field is an object and not an array, you can use .update({_id: "someUserId"}, {$unset: {media: 1}})
[17:49:17] <Max-P> electronplusplus, And if it's an array, you can use $pop or $pull for similar results
[17:49:37] <electronplusplus> I know I can use the dot notation, what I don't want is to perform data manipulation operations in the client ( MeteorJs ). This way I still have to download all media items and then perform the search.
[17:50:06] <electronplusplus> Max-P: Well that approach is very limited, I had a situation in which I wanted to remove an element by index and I had to push a new array.
[17:51:25] <Max-P> electronplusplus, But you don't need to do it on the client at all.
[17:52:39] <Max-P> electronplusplus, update({_id: "userId"}, {$pull: { "media.userMedia._userMediaId.mediaId": "The_media_object_id"}})
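Since userMedia is an object keyed by id rather than an array, a hedged alternative to the $pull above is $unset with dot notation, which drops just that one nested key server-side (ids and collection name are placeholders):

```javascript
db.profiles.update(
  { _id: "someUserID" },
  { $unset: { "userMedia.someUserMediaId": "" } }
)
// If the media were stored as an array instead, $pull would remove the matching element:
db.profiles.update(
  { _id: "someUserID" },
  { $pull: { userMedia: { mediaId: "This_is_the_media_object_id" } } }
)
```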
[17:53:01] <electronplusplus> True but I also don't want to do it on the server for simple reasons. N clients * time to perform the operation, it might slow the server. These operations need to be validated, more time. It goes "against" MeteorJs philosophy
[17:53:19] <Max-P> This does exactly what you asked for with your find().remove() operation
[17:53:36] <Max-P> but otherwise, if you do it on the client, then yes you're pretty much stuck pushing an updated array
[17:53:51] <electronplusplus> A way to remove by id, without dumping all the data to the client.
[17:54:34] <Max-P> I really don't understand what your problem is here, other than really complicating it with that meteorjs thing
[17:55:04] <Max-P> Just make a server page that takes a profileID and mediaId as input, send the update() query to mongo, all done
[17:55:14] <Max-P> You don't even need to return anything at all to the client
[17:56:44] <phishin> can anyone provide input on how to handle multiple users updating the same document in mongo?
[17:57:35] <electronplusplus> Max-P: More or less, in meteorjs it really tends to make the entire application slow, especially if I rely on server methods every time I want to perform CRUD.
[18:01:03] <Max-P> electronplusplus, Then yeah your only option is to do everything you need on the client and update only once you're done. Logging the change operations could be an idea, so at the end when you save you can push a list of grouped small operations to do on the server. But at this point that's far out of MongoDB's scope, I'd suggest checking meteorjs's channel instead
[18:02:50] <Max-P> phishin, Depends on what you want to happen when it happens. If you update the whole document at once, then by default only the last changed one will be kept. If you use $set, they will merge, potentially in a weird way. That's probably something you want to handle in your application.
[18:03:52] <Max-P> phishin, You could probably use memcached or similar to keep track of a lock, and make sure only one process can access the data at once
[18:04:17] <electronplusplus> Thanks for your time Max-P, it's really hard to find Meteor developers that actually know this, currently I'm refactoring a meteor app and I see shit everywhere. My point with the question was to find a way of using an object as a first-class citizen but my research tells me that currently mongodb doesn't support it.
[18:05:29] <Max-P> electronplusplus, Well yes, it does. It's your application that doesn't. With the appropriate layer on top of your objects, you could easily make sub-objects that know how to delete themselves. I did it in one of my apps
[18:06:09] <asturel> is it possible to write to secondary mongodb replication ?
[18:06:17] <asturel> and it replicates back to primary?
[18:06:41] <electronplusplus> Max-P: can you point to an article about this?
[18:06:53] <asturel> or how should i write to master if i specify host1,host2 in connection string?
[18:06:59] <Max-P> I basically wrapped all my objects into JavaScript classes that are reconstructed from the raw JSON. Then I can perform a remove on a sub-object, and it will call the appropriate server-side script that will do exactly the update call I gave you
[18:07:00] <asturel> (c#)
[18:08:02] <phishin> Max-P, thanks. My application has to allow multiple users to access the data at any time. My concern, and what I've come across, is when user A saves a document, and user B saves it right after, without getting a chance to see what user A did. And other similar scenarios.
[18:09:19] <Max-P> phishin, Ah, yeah. In that case I'd use either a cache of recently updated documents, or put an update timestamp in the document itself. Then, on the server, when a client wants to save a document, check against that timestamp and either merge it if you can, or push back the updated version to the client for revision
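One way to sketch that timestamp check (collection and field names are hypothetical): make the last-seen timestamp part of the update's query, so the write only applies if nobody has saved in between:

```javascript
var docId = 1;
var lastSeen = ISODate("2014-09-24T00:00:00Z");   // what this client last read
var res = db.docs.update(
  { _id: docId, updatedAt: lastSeen },            // matches only if unchanged since then
  { $set: { body: "edited text", updatedAt: new Date() } }
)
if (res.nModified === 0) {
  // someone else saved first: reload and let the user review/merge
  printjson(db.docs.findOne({ _id: docId }))
}
```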
[18:09:24] <asturel> e.g. mongodb://db1.example.net,db2.example.net:2500/?replicaSet=test and db1 is primary, db2 is slave, how can i know which is the 'master' ? i mean it's possible that when the master falls out the secondary becomes the primary (?)
[18:10:12] <Derick> asturel: the driver will figure out which one is the master, and send write requests to it
[18:10:26] <Derick> asturel: if master and slave switch, the driver will figure that out
[18:10:30] <phishin> Max-P, thanks for the input
[18:10:42] <Derick> asturel: you can not write to secondaries
[18:12:34] <asturel> oh thanks, so basically i dont have to do anything, thanks
[18:13:31] <Derick> right :)
[18:13:56] <asturel> and it will use always master for read too?
[18:14:09] <asturel> i mean it would be 'faster' if it used the local slave server
[18:14:33] <Max-P> Probably the closest slave, otherwise having slaves would be useless...
[18:14:50] <asturel> probably ?:)
[18:15:14] <Max-P> I don't know about replication yet, so that's really just a logical guess
[18:16:46] <Max-P> Derick probably knows better and can correct me if I'm wrong :)
[18:16:53] <asturel> i dont rly have experience with mongodb replication; on SQL databases i just use 2 connection strings, so it always uses the local db for selects (slave or master doesn't matter) and the master for writes
[18:17:10] <asturel> but mongo is pretty different
[18:17:44] <Derick> by default it will use master for read. If you configure it to also read from slaves, then it will pick a random secondary that is within 15ms of the server that replied the fastest to a ping.
[18:18:10] <Derick> Some drivers pick a random node from this set, others pin a session to a specific node. I don't recall how the C# driver does this.
[18:18:37] <Derick> I think you can configure it to also read from secondaries per query even - i know that the PHP driver allows that for sure.
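In mongo shell terms the equivalents look roughly like this (driver APIs vary, and the collection name is made up):

```javascript
db.getMongo().setReadPref("secondaryPreferred")              // whole connection
db.mycoll.find({ status: "active" }).readPref("secondary")   // single query
```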
[18:18:53] <Derick> But slaves can be (a little) behind
[18:19:12] <asturel> how little?
[18:20:12] <Derick> depends on the load
[18:20:14] <asturel> since "pingMs" : 1
[18:20:18] <Derick> anything from 0ms to hours
[18:20:31] <Derick> it's usually within 100ms though
[18:20:39] <asturel> ah :D
[18:21:04] <Derick> you can even configure secondaries to always be behind by a certain amount!
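That delayed-secondary setup is a replica set reconfig; a sketch assuming member 2 is the one to delay:

```javascript
cfg = rs.conf()
cfg.members[2].priority = 0       // a delayed member should never become primary
cfg.members[2].hidden = true
cfg.members[2].slaveDelay = 3600  // stay one hour behind the primary
rs.reconfig(cfg)
```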
[18:22:03] <asturel> well we have 2 cluster so i just want it to work when one drops out for any reason (not likely tho)
[18:22:38] <Derick> a 2 node cluster?
[18:22:48] <Derick> if one node drops out of that, you can no longer write to the other one
[18:22:57] <Derick> you should have 3 nodes (or 2 nodes and an arbiter node)
[18:23:00] <asturel> yeah but then the secondary becomes the master, doesn't it?
[18:23:14] <Derick> asturel: no, the majority of nodes (more than 50%) needs to be up
[18:23:19] <Derick> hence the use of an arbiter
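Adding an arbiter is a one-liner from the primary's shell (hostname hypothetical); it votes in elections but stores no data:

```javascript
rs.addArb("arbiter.example.net:27017")
```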
[18:23:45] <asturel> and will the c# driver handle the writes if the master is offline?
[18:23:56] <Derick> you will get "can not write" exceptions
[18:23:59] <asturel> like it wouldnt try to write on secondary
[18:24:02] <asturel> ah :D
[18:24:30] <asturel> but i remember that i was able to make the secondary 'writeable'
[18:24:51] <Derick> you can't really do that yourself - or rather, you really shouldn't mess with it
[18:25:13] <Derick> you can "step down" a node to elect another one as primary, but if a node is really down then that is another matter
[18:28:06] <asturel> thanks
[19:08:58] <jonathan2> I have a mongodump taken from a slave (master/slave repl) and I have the oplog position as found in db.sources from immediately prior to the dump. If I restore the dump onto a brand new slave, and start slaving by writing the db.sources info that I have including the oplog position, will this work as expected (meaning new slave is consistent with master) since the oplog is idempotent?
[19:09:04] <jonathan2> Example: > db.sources.find()
[19:09:06] <jonathan2> { "host" : "the_master", "source" : "main", "only" : "the_db", "syncedTo" : Timestamp(1411507679, 1) }
[19:12:33] <RandomProgrammer> Hi, are there any modelling tools for Mongodb that can be used in a way similar to Power Designer's database generation tool?
[19:58:02] <jblancett> how can I tell which node is the master in my repl set?
[19:58:48] <Zelest> rs.status()
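For completeness, either of these picks out the primary from any member's shell:

```javascript
db.isMaster().primary   // "host:port" of the current primary
rs.status().members.filter(function (m) { return m.stateStr === "PRIMARY" })
```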
[20:05:16] <mjuszczak> Is there a way to get MongoDB to operate in JSON only and skip the JSON -> BSON and BSON -> JSON conversions entirely?
[20:05:20] <mjuszczak> Sorry, BSON*
[20:06:08] <Derick> no
[20:06:16] <Derick> MongoDB stores everything as BSON
[20:06:37] <mjuszczak> Right, but there's no way to drop the data in as BSON and then fetch it as BSON and skip the conversion entirely?
[20:07:33] <cheeser> when would it *not* be bson?
[20:07:52] <cheeser> at least in the java driver, it's always BSON.
[20:10:41] <mjuszczak> Is it MongoDB that's fetching the BSON and converting to JSON or is it the client driver?
[20:12:29] <Derick> nothing is converted to JSON
[20:12:43] <Derick> the PHP driver f.e., never even reads or writes it
[20:12:55] <Derick> but transfers BSON to PHP structures and back
[20:13:25] <Derick> the PHP driver has no means to let you deal with raw bson though.
[20:13:32] <Derick> and I don't think most other languages have either
[20:17:40] <cheeser> iirc, the java driver lets you get at the byte[] of the bson data but generally you deal with it as a Map<String, Object>
[20:20:12] <Derick> ah, neat
[20:20:23] <Derick> I think we're going to do that for the new version of the PHP driver too
[20:20:25] <Derick> but not sure yet
[21:08:01] <sejo> hey all is there a channel for mongoose, or can I ask questions about it here?
[21:14:45] <mjuszczak> Thank you!
[21:22:40] <jblancett> has anybody else noticed setting fork = true causes upstart to think mongod isn't running?
[21:31:35] <obiwahn> jblancett: http://upstart.ubuntu.com/cookbook/#expect-fork
[21:34:01] <obiwahn> why bother with upstart at all?
[21:34:35] <jblancett> I see so the upstart conf needs to be modified
[21:34:39] <jblancett> why would you not use it?
[21:39:21] <obiwahn> a) i am not sure - how mongodb exactly starts - so that might be just a hint
[21:41:03] <obiwahn> b) afaik systemd is the new default init-system. upstart will vanish!
[21:45:58] <jblancett> upstart is the default init system in newer ubuntu releases
[21:46:03] <jblancett> replacing init
[21:46:32] <obiwahn> http://en.wikipedia.org/wiki/Upstart :) when i had to mess with it the last time it was really ugly - it could not compare in any way to systemd with fancy stuff like systemd-nspawn
[21:48:14] <obiwahn> jblancett: even ubuntu deprecated it... http://www.markshuttleworth.com/archives/1316
[22:05:45] <Tug> Hi, I'm back with my chunks stuck in one shard after issuing a removeShard command. Moving the jumbo chunks manually did work for the first shard. Now I have one jumbo chunk stuck on the last shard. If I run moveChunk I get the following error: "errmsg" : "chunk too big to move"
[22:06:40] <Tug> the full error cause: { "chunkTooBig" : true, "estimatedChunkSize" : 350842828, "ok" : 0, "errmsg" : "chunk too big to move" }
[22:07:09] <Tug> any idea how to solve this ?
[22:07:41] <flok420> maybe pass it through pkzip! i heard great stories about that
[22:09:53] <Tug> flok420, are you serious ?
[22:10:11] <Tug> a chunk is a mongodb builtin
[22:10:21] <Tug> it's not a file or anything
[22:10:52] <flok420> Tug: no i was pulling your leg
[22:11:38] <joannac> Tug: up the chunkSize temporarily
[22:11:48] <joannac> actually, no, i take that back
[22:12:15] <Tug> joannac, yeah it's already at the maximum size
[22:12:20] <joannac> is it actually a jumbo chunk? Has a single value of the shard key?
[22:12:30] <joannac> if so, pick a better shard key
[22:12:54] <Tug> yes I will, but I'd like to end the removeShard procedure
[22:14:22] <joannac> increase the chunkSize to be bigger than your jumbo chunk
[22:15:32] <Tug> I have set it to 1024 but no luck
[22:16:19] <Tug> Looks like my jumbo chunk is 350MB so it should work
[22:26:21] <Tug> I can't split it either, I guess it probably has the same shard key value in all documents :s
[22:26:44] <Tug> can I remove it ?
[22:27:12] <Tug> I can try to remove all documents in it
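Before deleting anything it may be worth confirming the chunk's bounds and how many documents sit inside it; a hedged sketch where the namespace is hypothetical and the shard key field "r" is only inferred from the earlier lock error:

```javascript
var conf = db.getSiblingDB("config")
var c = conf.chunks.findOne({ ns: "mydb.mycoll", jumbo: true })
printjson(c)   // c.min / c.max hold the chunk's shard key bounds
// Count the documents covered by that chunk before removing anything.
db.getSiblingDB("mydb").mycoll.find({ r: { $gte: c.min.r, $lt: c.max.r } }).count()
```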
[22:35:35] <aydin> hey, i need to upgrade my mongodb server 2.4.1 to 2.4.11 what's the best way to do this?
[22:35:53] <aydin> i run mongodb on centos 6.5 with 10gen repositories
[23:47:32] <hydrajump> anyone using backup gem to backup mongodb to S3 encrypted http://meskyanichi.github.io/backup/v4/database-mongodb/
[23:48:03] <hydrajump> it looks as if it's following mongo's docs on locking the db, flushing and using mongodump
[23:48:11] <hydrajump> any first hand experience?
[23:52:55] <joannac> hydrajump: no real experience, but it seems okay
[23:53:07] <joannac> whether it does what it says it does though... :)