#mongodb logs for Saturday the 16th of July, 2016

[01:23:02] <GitGud> hi so i am not dealing with this in my current project but i was wondering if there was a way to virtualize 2 mongo instances on say 2 different servers in such a way that it's application-transparent?
[01:27:46] <cheeser> meaning?
[05:48:40] <Kralian> mongod + mongos ?
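A replica set is the usual way to make two mongod instances on two servers look like one logical database to the application (a sharded cluster behind mongos, as Kralian hints, is the other option). A minimal mongo shell sketch, with "server1"/"server2" and "rs0" as placeholder names:

```javascript
// Hypothetical sketch: initiate a replica set spanning two servers.
// The hostnames and set name are placeholders, not from the discussion.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "server1:27017" },
    { _id: 1, host: "server2:27017" }
  ]
})
// The application connects with a replica-set URI and fails over
// transparently: mongodb://server1:27017,server2:27017/?replicaSet=rs0
```

Note that two data-bearing members alone cannot elect a primary if one goes down; in practice a third member or an arbiter is added.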
[07:52:18] <cs02rm0> is there a library method somewhere for converting an org.bson.Document to a java Map?
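No library method is needed here: in the MongoDB Java driver, org.bson.Document already implements Map<String, Object>. A short sketch (the field names are illustrative):

```java
import org.bson.Document;
import java.util.HashMap;
import java.util.Map;

public class DocumentToMap {
    public static void main(String[] args) {
        Document doc = new Document("name", "test").append("count", 42);

        // org.bson.Document implements Map<String, Object>,
        // so it can be used as a Map directly:
        Map<String, Object> asMap = doc;

        // For a detached copy, the Map copy constructor works:
        Map<String, Object> copy = new HashMap<>(doc);

        System.out.println(asMap.get("name") + " / " + copy.get("count"));
    }
}
```

One caveat: nested document values remain Document instances, so a fully "plain" nested Map would need a recursive copy.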
[12:13:31] <ernetas> Hey guys
[12:13:40] <ernetas> I have MongoDB 3.0 cluster running.
[12:13:46] <ernetas> (3.0.11 to be exact)
[12:13:54] <ernetas> It's a sharded replica set.
[12:14:36] <ernetas> One of the shards is going to run out of space soon, so I started playing with maxSize in the config database's shards collection.
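For context, maxSize is set per shard in the config database's shards collection, in megabytes, and only stops the balancer from placing new chunks on the shard; it does not shrink data already there. A hedged sketch, with the shard name and size as placeholders:

```javascript
// Hypothetical sketch: cap a shard at ~250 GB.
// "shard0000" and 250000 (MB) are placeholders, not from the discussion.
db.getSiblingDB("config").shards.update(
  { _id: "shard0000" },
  { $set: { maxSize: 250000 } }
)
```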
[12:15:34] <ernetas> Long story short, I tried to resync one member from scratch, so as to compact everything along the way, but the freshly synced member now takes ~60 GB more disk space than the old ones!
[12:16:34] <ernetas> My best guess is that something changed in journaling or indexing somewhere between 3.0.x and 3.0.11 and disk space usage has grown.
[12:16:48] <ernetas> Is there anything I'm missing, and how could I now reduce the size of the shard?
[12:18:42] <ernetas> The primary issue here is that there is a ~60 GB difference in 'show dbs' output when I run it on a secondary versus on a primary. Where does that come from, and shouldn't it be exactly the same?
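It need not be the same: 'show dbs' reflects on-disk file size, and under mmapv1 preallocation and fragmentation differ per member, so members with identical data can occupy different amounts of disk. A diagnostic sketch to run on each member and compare (the database name is a placeholder):

```javascript
// Hedged sketch: compare per-member storage statistics.
rs.slaveOk()                     // allow reads on a secondary (3.0-era shell)
db.getSiblingDB("mydb").stats()  // "mydb" is a placeholder database name
// dataSize (logical data) should match across members;
// storageSize and fileSize (what 'show dbs' roughly reports)
// reflect each member's own allocation and can legitimately differ.
```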
[13:14:13] <koren> do you use the same storage engine?
[13:14:39] <koren> wiredtiger reduces disk space usage a lot, maybe your other shards are using it
[13:14:49] <koren> it is not the default storage engine in 3.0
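A quick way to verify which engine each member is actually running, by connecting to each mongod in turn:

```javascript
// Reports the active storage engine, e.g. { "name" : "mmapv1" }
// or { "name" : "wiredTiger", ... }
db.serverStatus().storageEngine
```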
[13:28:19] <cheeser> /1/1
[19:30:00] <ernetas> koren: yes, same storage engine.
[19:30:09] <ernetas> (both on mmapv1)