[09:06:52] <krion> i've got a huge moveChunk directory on the primary node of a replica set
[09:07:01] <krion> the thing is i've almost no space left
[09:07:18] <krion> i've since switched over to the secondary node
[09:07:27] <kurushiyama> thapakazi Not me. A storage engine whose implications for use with MongoDB are not well understood (and hardly battle-tested), from a project with rather basic (and severe) bugs still open, is too much risk for me.
[09:07:47] <krion> in order to do the trick of rm -rf'ing the data to free space. But i'm wondering about the moveChunk directory
[09:15:33] <kurushiyama> krion Now, you have the balancer enabled, and it might be running (or not).
[09:16:22] <kurushiyama> krion To be absolutely safe when deleting the moveChunk directory, we should ensure that the balancer is not running and will not run while we do maintenance.
[09:17:21] <kurushiyama> krion So before we do anything, we should first check whether the balancer is currently running, wait until it finishes, and then disable it.
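A minimal sketch of that check-then-disable sequence in the mongo shell, run against a mongos (2.6-era helpers):

    sh.getBalancerState()    // true if the balancer is enabled at all
    sh.isBalancerRunning()   // true if a migration round is in flight right now
    sh.stopBalancer()        // disables the balancer and waits for an active round to finish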
[09:17:29] <krion> (FYI, i was ready to follow this procedure: https://docs.mongodb.com/v2.6/faq/storage/#resync-the-member-of-the-replica-set )
[09:17:52] <krion> Oh, never did it before, except for upgrading.
[09:20:00] <krion> i've already stopped mongod on the primary that was causing me trouble.
[09:20:28] <krion> (the old primary, since i've also already forced the secondary to become primary and vice versa)
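A sketch of the failover-then-shutdown sequence krion describes, using standard shell helpers (the step-down timeout here is an example value):

    rs.stepDown(60)          // on the primary: step down for 60s so a secondary takes over
    // then, connected to the member that is to be taken down:
    use admin
    db.shutdownServer()      // clean shutdown of that mongod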
[09:21:13] <kurushiyama> well, that should be obvious. As a rule of thumb: I always disable the balancer before doing maintenance work on a sharded cluster.
[09:21:34] <krion> Is it bad to stop it afterwards?
[09:22:00] <kurushiyama> krion Well, after maintenance, I tend to reenable it ;P
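Re-enabling it afterwards is the mirror image, again from a mongos:

    sh.setBalancerState(true)   // or sh.startBalancer()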
[09:22:29] <krion> Of course. I mean, since i've already stopped one of the mongods.
[09:22:57] <kurushiyama> krion Well, it is not ideal, at least in my book.
[09:23:05] <krion> Should i restart the secondary, then stop the balancer?
[09:36:38] <krion> maybe this helps to see the whole picture
[09:36:58] <krion> Still 'Waiting for the balancer lock'
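That lock can be inspected directly in the config database; a sketch of what to look at (field semantics per 2.6, where state 2 means the lock is held):

    use config
    db.locks.find({ _id: "balancer" })               // who holds the balancer lock, and since when
    db.changelog.find().sort({ time: -1 }).limit(5)  // recent migration activity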
[09:39:36] <kurushiyama> krion Ok. While we are waiting. Do you have an idea why only one shard runs full and there are supposedly a lot of migrations going on?
[09:40:25] <krion> runs full, you mean out of disk space?
[09:40:44] <krion> and we recently migrated from 2.4 to 2.6 and have deleted collections
[09:41:15] <krion> on mongodb-02, 113G /data/db/mongodb/moveChunk
[09:41:21] <kurushiyama> krion Well, 2.6 is close to EOL, as a side note.
[09:41:29] <krion> on mongodb-12, 215M /data/db/mongodb/moveChunk
[09:41:42] <kurushiyama> krion I am talking about shards,
[09:41:54] <kurushiyama> krion not replica-set members.
[09:42:21] <kurushiyama> krion You do seem to have the same problem on the other shard, right?
[09:43:41] <krion> I'm not sure, sorry. My customer handles the shards.
[09:43:46] <krion> (FYI, msg: "Waited too long for lock balancer to unlock")
[10:16:10] <krion> Now i'm wondering whether to delete only the moveChunk directory or do a complete resync as mentioned here https://docs.mongodb.com/v2.6/faq/storage/#resync-the-member-of-the-replica-set
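For reference, the resync in the linked docs amounts to emptying the member's dbpath and letting it initial-sync from the current primary; a rough sketch, with the dbpath taken from the sizes quoted above and the service name assumed:

    # run only on the already-stopped (old primary) member; the rest of the set must stay healthy
    rm -rf /data/db/mongodb/*    # empty the whole dbpath, not just moveChunk
    service mongod start         # on restart the member performs a full initial sync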
[18:49:26] <Ben_1> is there a way to configure mongodb to write all log output to a file?
[18:52:49] <UberDuper> As opposed to writing to syslog?
[18:53:03] <UberDuper> There's a config option to specify a filename.
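In a 2.6-style YAML config file that option looks roughly like this (the path is just an example):

    systemLog:
       destination: file
       path: /var/log/mongodb/mongod.log
       logAppend: true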
[18:54:19] <Ben_1> My problem is that I get bulk write errors with the async Java driver and I want detailed information. I could start mongod in the terminal, but that is inconvenient; that's why it would be better for me to log to a file.
[18:54:47] <cheeser> turn up the logging level prior to the writes
[18:57:20] <Ben_1> cheeser: I searched for a "logging level" option in my config file. Is it the verbosity setting?
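The level can also be raised at runtime from the mongo shell, without touching the config file; a sketch using the 2.6 admin commands:

    use admin
    db.adminCommand({ setParameter: 1, logLevel: 5 })  // 0 is the default, 5 is most verbose
    db.adminCommand({ getParameter: 1, logLevel: 1 })  // confirm the current level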
[19:00:12] <Ben_1> I set the verbosity to 5, but after deleting all content of my log file and running the write operation again, it is still empty