PMXBOT Log file Viewer

#mongodb logs for Monday the 29th of August, 2016

[01:20:13] <husel> Hey, could someone help me with this issue I've encountered while working through a book? http://pastebin.com/PPxwiJyh
[14:38:49] <jayjo> Is there any way to process the errors from a mongoimport file, to 1) figure out how many documents were uploaded out of how many should've been uploaded and 2) store the error JSON lines so they can be processed later
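(jayjo's question goes unanswered in-channel. One practical approach, sketched below under the assumption that the input is a newline-delimited JSON file of the kind mongoimport consumes: pre-validate the file, count the parseable records, and set the unparseable lines aside for later processing. This is not a mongoimport feature; mongoimport itself only reports per-line errors and a final import count on stderr.)

```python
import json

def split_valid_json_lines(lines):
    """Partition newline-delimited JSON records into parseable and broken
    ones, so the broken lines can be stored and retried later. Blank lines
    are skipped. Returns (good_lines, bad_lines)."""
    good, bad = [], []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            json.loads(line)
            good.append(line)
        except ValueError:
            bad.append(line)
    return good, bad
```

Counting `len(good)` against the total gives "how many out of how many", and writing `bad` back to a file preserves the error lines for later inspection.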
[18:31:26] <cr0mulent> Hello, I would like to use mongorestore to overwrite an existing collection. Do I have to manually drop the connection in the console or will mongorestore overwrite?
[18:31:39] <cr0mulent> s/connection/collection
[18:31:47] <cheeser> mongorestore --help
[18:31:52] <cheeser> there's a --drop option
[18:32:09] <cheeser> also, https://docs.mongodb.com/manual/reference/program/mongorestore/
[18:35:30] <cr0mulent> cheeser: thanks
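(For reference, the `--drop` option cheeser points to tells mongorestore to drop each collection from the target database before restoring it, so no manual drop in the shell is needed. A small sketch that assembles the invocation; the `dump/` path is a placeholder:)

```python
def build_restore_command(dump_dir, drop_existing=True):
    """Assemble a mongorestore command line. With --drop, mongorestore
    drops each collection it is about to restore before recreating it."""
    cmd = ["mongorestore"]
    if drop_existing:
        cmd.append("--drop")
    cmd.append(dump_dir)
    return cmd

# To actually run it (requires mongorestore on PATH):
#   import subprocess
#   subprocess.run(build_restore_command("dump/"), check=True)
```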
[19:42:41] <n1colas> Hello
[19:46:00] <jayjo> Is there a mailing list that's recommended to ask questions, or should I reach out directly to mongo? I'm trying to parse errors on mongoimport, I've asked here and stackoverflow but am not gaining much traction.
[19:46:25] <cheeser> https://groups.google.com/forum/#!forum/mongodb-user
[21:17:58] <Doyle> Hey. If you have a DB, create a 100GB collection, then drop the collection, a compact is needed to shrink the DB down again, right?
[21:18:43] <cheeser> i'm not sure WT needs that.
[21:18:53] <Doyle> MMAP
[21:19:17] <Doyle> Rewrites and defragments all data and indexes in a collection. On WiredTiger databases, this command will release unneeded disk space to the operating system.
[21:19:22] <cheeser> then, yes, i believe so
[21:20:53] <Doyle> cool, ty
[21:21:27] <Doyle> On MMAPv1, compact defragments the collection’s data files and recreates its indexes. Unused disk space is not released to the system, but instead retained for future data. If you wish to reclaim disk space from a MMAPv1 database, you should perform an initial sync.
[21:21:42] <Doyle> *rage*
[21:22:40] <cheeser> there's a reason we introduced WT :)
[21:23:37] <Doyle> lol, and it's a blocking command. The doc says only 'blocks operations', so I expect that means all operations
[21:23:44] <cheeser> yes
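(The compact being discussed is a per-collection command, run against the database that holds the collection. Since it blocks operations while it runs, on a replica set it is normally issued against secondaries one node at a time rather than the primary. A minimal sketch; the driver usage in the comment assumes pymongo and a reachable mongod:)

```python
def build_compact_command(coll_name):
    """The compact command document: rewrites and defragments a single
    collection's data files and indexes (per the manual text quoted above)."""
    return {"compact": coll_name}

# Against a live node (pymongo assumed; host is a placeholder):
#   from pymongo import MongoClient
#   client = MongoClient("mongodb://secondary-host:27017")
#   client["mydb"].command(build_compact_command("mycoll"))
```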
[21:24:16] <cheeser> is this a replset?
[21:27:11] <Doyle> yep
[21:27:29] <Doyle> I could do the init-sync
[21:27:41] <cheeser> you could do a rolling upgrade to WT
[21:27:43] <Doyle> it's a long process with 2.8 TB of data on it though
[21:27:54] <Doyle> yeeeee, that'd be nice
[21:28:53] <cheeser> yikes. yeah. that'll take a bit. :)
[21:29:13] <cheeser> you could just run db.repair() on each node sequentially
[21:29:34] <cheeser> run the repair. let the sync catch up. move to the next node. repeat.
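(The rolling procedure cheeser describes can be laid out as an ordered plan: repair each secondary in turn, letting replication catch up before moving on, then handle the primary last. Stepping the primary down before repairing it is an assumption on my part, not something cheeser states, and the host names are placeholders:)

```python
def rolling_repair_plan(primary, secondaries):
    """Ordered steps for a rolling repair across a replica set:
    repair each secondary and wait for it to catch up, then step the
    primary down and repair it too."""
    steps = []
    for node in secondaries:
        steps.append(("repair", node))
        steps.append(("wait_for_sync", node))
    steps.append(("step_down", primary))
    steps.append(("repair", primary))
    return steps
```

As cheeser warns below, each repair must finish well inside the oplog window, or the repaired node can no longer catch up by replication and needs a full initial sync instead.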
[21:31:37] <Doyle> ohh, that's interesting
[21:32:04] <cheeser> the danger is, though, if the repair takes too long on the secondary, you might outrun your oplog window
[21:32:50] <Doyle> My window is big, but how long does it take?
[21:33:17] <Doyle> 57 hours
[21:33:35] <Doyle> I've never used repair. It might take that long to complete
[21:34:45] <cheeser> i don't know how long it'd take. it's a function of your data size, afaik.
[21:37:34] <Doyle> I could dump/restore that db
[22:14:20] <Doyle> Wait, does db.runCommand( { repairDatabase: 1 } ) run on the currently used db, or all dbs?
[22:23:57] <Doyle> nm, the wrapper is the answer