#mongodb logs for Friday the 11th of December, 2015

[00:09:11] <justanyone> Hey
[00:09:42] <justanyone> question - I'm running 3.0.7 WiredTiger w/ a 4-shard, single-replica (2 copies) set. I'm dumping in massive #'s of updates/sec w/ bulk updates. MongoDB starts OK, goes to 100k updates/sec (4M unique recs updated once/minute), then STOPS for several seconds with 0 records/sec.
[00:10:05] <justanyone> fine for several minutes, then .... crickets.
[00:10:50] <justanyone> then several minutes again, then crickets. QW is huge - several hundred on each shard. those drop, and then it takes updates again. what can I do to elim. this delay?
[00:11:34] <justanyone> (pretty soon, it's waiting minutes between resuming, then nothing at all.)
[00:12:03] <justanyone> I tried turning off compression of indexes/prefixes/storage, but behavior continues.
[00:12:33] <justanyone> I tried changing tunables: db.adminCommand( { setParameter: 1, wiredTigerConcurrentReadTransactions: 1024 } ); db.adminCommand( { setParameter: 1, wiredTigerConcurrentWriteTransactions: 1024 } );
[00:12:46] <justanyone> but no go there either, so I set 'em back to 128, which is default.
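For reference, those concurrency parameters can be read back and reset from the shell; a minimal sketch, assuming it is run against each shard's mongod rather than the mongos (128 is the WiredTiger default in 3.0):

    // Read the current ticket limits back.
    db.adminCommand({ getParameter: 1, wiredTigerConcurrentReadTransactions: 1 })
    db.adminCommand({ getParameter: 1, wiredTigerConcurrentWriteTransactions: 1 })
    // Restore the defaults if the raised values didn't help.
    db.adminCommand({ setParameter: 1, wiredTigerConcurrentReadTransactions: 128 })
    db.adminCommand({ setParameter: 1, wiredTigerConcurrentWriteTransactions: 128 })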
[00:28:59] <joannac> justanyone: get better disks?
[00:29:21] <joannac> you have a high number of queued writes...
[00:34:39] <justanyone> IOwait is not a problem, disks are idle. SSDs are fast, no waits there. They're doing 1k IOPS and can do 60k+ IOPS.
[00:35:22] <justanyone> I'm aware I have queued writes. I don't know how to eliminate them. These are hefty boxes - 32 cores each x 4 primaries (same with the 1 replicaset).
[00:35:34] <justanyone> 256 GB memory each.
[00:36:46] <justanyone> I'm watching mongostat now, nearly zero (1, 2, 0, whatever, nearly 0) updates on all shards. But, QR/QW and AR/AW numbers are changing.
[00:37:10] <justanyone> where is this activity coming from? I've shut off my writers, and nothing should be happening.
[00:38:00] <justanyone> still have 870 connections to each shard, don't know why that is. I shouldn't have more than just a couple.
[00:39:52] <justanyone> host01a:3301 *0 *0 1 *0 1 8|0 0.1 31.2 0 44.2G 41.0G n/a 0|0 1|83 899b 34k 870 shard01 PRI 18:33:12 host02a:3302 *0 *0 2 *0 2 9|0 0.1 31.1 0 46.4G 42.9G n/a 0|5 1|128 1k 35k 896 shard02 PRI 18:33:12 host03a:3303 *0 *0 1 *0 1 6|0 0.0
[00:40:17] <justanyone> wow, that came out crappy. Sorry.
[00:41:49] <justanyone> Trying again: host01a:3301 *0 *0 1 *0 1 8|0 0.1 31.2 0 44.2G 41.0G 0|0 1|83 899b 34k 870 shard01 PRI host02a:3302 *0 *0 2 *0 2 9|0 0.1 31.1 0 46.4G 42.9G 0|5 1|128 1k 35k 896 shard02 PRI host03a:3303 *0 *0 1 *0 1 6|0 0.0 31.1 0 44.0G 40.9G 0|0 1|84 791b 33k 884 shard03 PRI host04a:3304 *0 *0 *0 *0 2 9|0 0.0 32.0 0 45.3G 41.9G 5|110 1|31 1k 34k 870 shard04 PRI host01b:3401 *0 *0 *2 *0 0 8|0 0.0 29.8 0
[00:41:58] <justanyone> dammit.
[01:07:31] <steffan> justanyone: What FS is your data volume?
[01:16:14] <justanyone> Steffan - ext4. IO is not the holdup. Something internal is cached and stopping all updates from happening.
[01:17:04] <justanyone> I'm watching the QR|QW and AR|AW numbers constantly change on each shard now, even though technically no activity is going into it. Where is this activity coming from?
[01:18:14] <justanyone> IO in is in the 200 bytes to 2k range, and 30k to 300k outbound, which is confusing. But no one is connected to the mongos's.
[01:21:09] <justanyone> Just did some killLongRunningOps() over 1 second, found 4. now there's none.
[01:21:11] <justanyone> 2015-12-10T19:14:59.941-0600 I COMMAND [conn1047] command megamaid.$cmd command: update { update: "metricValue", updates: 125, writeConcern: { wtimeout: 3000, w: 1 }, ordered: false, metadata: { shardName: "shard03", shardVersion: [ Timestamp 504000|16, ObjectId('565ee5753eb2b79cd6c23366') ], session: 0 } } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:155 locks:{ Global: { acquireCount: { r: 433, w: 433 } }, Databa
[01:21:45] <justanyone> acquireCount: { w: 433 }, acquireWaitCount: { w: 103 }, timeAcquiringMicros: { w: 66454746815 } }, Collection: { acquireCount: { w: 308 } }, oplog: { acquireCount: { w: 125 } } } 5268988ms
[01:21:55] <justanyone> that's a lot of milliseconds.
[01:22:02] <justanyone> or microseconds.
[01:22:33] <justanyone> does this mean anything to anyone?
[01:24:31] <justanyone> Can I, from a central point, kill all connections?
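killLongRunningOps() above isn't a stock shell helper; the built-in pieces are db.currentOp() and db.killOp(), which kill operations rather than connections. A rough sketch along those lines, assuming it is run against each primary:

    // Kill client operations that have been running for more than 1 second.
    // Be careful not to kill internal/replication operations.
    db.currentOp().inprog.forEach(function (op) {
        if (op.secs_running && op.secs_running > 1 && op.op !== "none") {
            print("killing opid " + op.opid);
            db.killOp(op.opid);
        }
    });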
[06:15:47] <troypayne> anyone here ever use $geoWithin? I have a collection of documents that have lat/long data. I want to query the things that are inside of a 10 mile radius from a center lat/long. Can someone point me in the right direction?
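A minimal sketch of that kind of radius query, with invented collection and field names: store points as [longitude, latitude], index them with 2dsphere, and use $geoWithin with $centerSphere, whose radius is expressed in radians (miles divided by the Earth's radius of roughly 3963.2 miles):

    // Hypothetical collection "places" with a GeoJSON point in "loc".
    db.places.createIndex({ loc: "2dsphere" })

    // Everything within 10 miles of a center point (longitude first, then latitude).
    db.places.find({
        loc: { $geoWithin: { $centerSphere: [ [ -73.9857, 40.7484 ], 10 / 3963.2 ] } }
    })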
[06:55:53] <troypayne> if i can’t use $where as an aggregate pipeline stage, then what the heck
[07:04:24] <Upgreydd> Hello, I have a question. I'm using mongodb with mongoose. I want to save a document with a nested document via Schema.Types.ObjectId. Is there a way to "update, or create if it doesn't exist" sub-documents?
[08:11:18] <Upgreydd> Anyone please?
[08:20:22] <troypayne> nm i figured this shit out myself
[10:21:55] <Zelest> My entire database is 5.2GB and my data directory is 8.2GB.. yet, my mongod is allocating and locking around 24GB.. why is that and how can I prevent this?
[10:22:16] <Zelest> mongodb v2.6.x and freebsd 10.x
[10:27:18] <Derick> sup?
[10:27:39] <Zelest> ^
[10:27:55] <Derick> Zelest: the data is mapped two times, and so is the journal
[10:28:05] <Zelest> oh
[10:28:09] <Zelest> why so? :o
[10:28:10] <Derick> it doesn't actually *use* that much memory
[10:28:16] <Zelest> well, it locks it up :(
[10:28:23] <Zelest> Mem: 2882M Active, 24G Inact, 4022M Wired, 574M Cache, 1714M Buf, 130M Free
[10:28:24] <Zelest> Swap: 1024M Total, 1000M Used, 24M Free, 97% Inuse
[10:28:36] <Zelest> causing the machine to swap rather than using the 24GB "inact" memory :(
[10:29:06] <Derick> that sounds like a freebsd thingy? never seen that on linux
[10:29:20] <Zelest> ah
[10:29:36] <Zelest> i'll upgrade it to the latest version in a couple of days.. if it persists, i'll poke the freebsd people :)
[10:30:52] <Derick> 3.2 uses WiredTiger by default, which doesn't use MMAPv1, and hence probably doesn't exhibit this issue
[10:31:04] <Derick> (you do need to convert to WiredTiger though - it's not automatic)
[10:32:32] <Zelest> Ah, and 3.x doesn't exist in FreeBSD ports yet :)
[10:32:48] <Derick> not even 3.0?
[10:32:56] <Zelest> nope :(
[10:33:02] <Derick> doesn't look like we make freebsd binaries...
[10:33:16] <Zelest> i'll build a CDN soon-ish.. then I'll run it on Linux :)
[11:54:18] <Upgreydd> Can someone tell me what the best option is to save a document with nested/populated (relationship) documents? I think first of all I need to save all nested docs before I save the main doc? Or is there a way to save the whole document with relations in one save?
[11:55:23] <StephenLynx> you just save the document at once.
[11:55:23] <StephenLynx> sub-documents make no difference.
[11:55:34] <Upgreydd> StephenLynx: but my subdocuments are in another schema
[11:55:42] <StephenLynx> then they are not sub-documents.
[11:55:47] <StephenLynx> and there are no schemes.
[11:55:53] <StephenLynx> schemas*
[11:56:11] <Upgreydd> StephenLynx: I'm talking about mongoose ;) sorry
[11:56:12] <Upgreydd> And one more thing, how about edit? I can have shuffled new subdocuments and old subdocuments. Is there a way to do something like "when no id - save, when id - update"?
[11:56:23] <StephenLynx> I strongly suggest you don't use mongoose.
[11:56:34] <StephenLynx> and in the end, as far as mongo is concerned, schemas do not exist.
[11:57:20] <StephenLynx> yes, there is update.
[11:57:35] <Upgreydd> StephenLynx: wait a minute, I'll show you my code ;)
[11:58:01] <StephenLynx> http://mongodb.github.io/node-mongodb-native/2.0/api/Collection.html#update
[11:58:17] <StephenLynx> however, the author of the driver suggests using updateOne or updateMany, but that is a minor detail.
[11:59:42] <Upgreydd> StephenLynx: http://pastie.org/10625106
[12:00:02] <StephenLynx> that is all martian to me, I don't use mongo.
[12:00:06] <StephenLynx> mongoose*
[12:00:46] <Upgreydd> StephenLynx: ok ;) how about update? If I use update, will subdocs that don't exist be created?
[12:01:04] <StephenLynx> if you use upsert, yes.
[12:01:17] <StephenLynx> ah
[12:01:18] <StephenLynx> hold on
[12:01:50] <StephenLynx> I am not sure.
[12:02:24] <StephenLynx> I know that, for example, if you use $push, an array will be created if it doesn't exist on the document.
[12:03:30] <StephenLynx> so probably if you use something like $set:{'a.b':1}, an object called a will be created as a subdocument so mongo can set the field b on it.
[12:03:45] <StephenLynx> but I am not 100% sure of that.
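That guess matches documented update behaviour; a small shell sketch with made-up collection and field names:

    // With upsert:true the document is created if the filter matches nothing;
    // $set with dot notation creates the intermediate subdocument "a",
    // and $push creates the "tags" array if it is missing.
    db.things.update(
        { _id: 1 },
        { $set: { "a.b": 1 }, $push: { tags: "new" } },
        { upsert: true }
    )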
[14:04:41] <dddh> hm
[15:10:56] <nerder> hello guys
[15:11:26] <nerder> i'm wondering why this returns undefined: var a = db.getCollection('product_category').find({_id: ObjectId("566aa2520b4fd17e070854d3")}); print(a._id);
[15:14:14] <cheeser> you probably want findOne()
[15:14:20] <cheeser> find() returns a cursor
[15:14:42] <cheeser> (assuming it's saying '_id' undefined)
[15:26:32] <nerder> cheeser: thank you! :)
[15:29:01] <cheeser> np
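A short shell illustration of the difference, reusing nerder's query:

    // find() returns a cursor, so reading ._id off it gives undefined.
    var cur = db.getCollection('product_category').find({ _id: ObjectId("566aa2520b4fd17e070854d3") });
    print(cur._id);      // undefined

    // findOne() returns the matching document itself (or null if there is no match).
    var doc = db.getCollection('product_category').findOne({ _id: ObjectId("566aa2520b4fd17e070854d3") });
    print(doc._id);      // ObjectId("566aa2520b4fd17e070854d3")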
[15:56:29] <nerder> cheeser: but if i have an object in DBref?
[15:56:32] <nerder> i mean
[15:57:56] <nerder> https://gist.github.com/nerder/19cf1696e6b277e58667
[16:04:50] <nerder> cheeser: i see doc.parent.$id is already an ObjectId
[16:05:10] <cheeser> i'm not sure what you're asking
[16:05:35] <nerder> so it doesn't need to be wrapped as ObjectId(doc.parent.$id); findOne({_id: doc.parent.$id}) is enough
[16:06:01] <nerder> i was asking why my query doesn't work, but i found it out by myself :)
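In other words, the $id of a DBRef in the shell is already an ObjectId and can go straight into a filter; a small sketch, assuming the parent reference points back into the same collection:

    var doc = db.getCollection('product_category').findOne({ _id: ObjectId("566aa2520b4fd17e070854d3") });
    // doc.parent is a DBRef; its $id field is already an ObjectId.
    var parent = db.getCollection('product_category').findOne({ _id: doc.parent.$id });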
[16:37:50] <watmm> Trying to find out what versions of mongo a certain doctrine dbal version supports?
[16:43:57] <Derick> watmm: do you have a slightly more specific version in mind?
[16:45:04] <watmm> I want to upgrade to the current mongo 3.2, and i know what version of doctrine the devs currently use (orm 2.4.8, dbal 2.4.4), but i can't find a page detailing whether the two will work
[16:45:25] <watmm> doctrine-bundle ~1.4
[16:54:37] <Derick> watmm: i've poked the author, but he has not responded yet
[16:55:28] <cheeser> not in the office yet that i've seen
[16:56:15] <Derick> cheeser: he's not coming in - we already had a meeting ;-)
[16:56:36] <cheeser> ah! slacker.
[16:56:37] <jmikola> watmm: doctrine orm and dbal have nothing to do with mongodb
[16:57:02] <cheeser> oh, there he is! :D
[16:57:37] <jmikola> if you're thinking of doctrine/mongodb and doctrine/mongodb-odm, those should work with mongodb 3.2 -- although the legacy driver they use (ext-mongo) hasn't been updated with the bypassDocumentValidation option for 3.2 (nor does ODM support it)
[16:57:57] <cheeser> and that's not really a blocker
[16:58:11] <cheeser> unless, of course, you want to use that feature :)
[16:58:36] <Derick> cheeser: the legacy driver won't support it either
[16:59:02] <cheeser> we're debating morphia support for it.
[16:59:07] <cheeser> *that* will probably happen.
[17:22:52] <dddh> it seems that m202 course will not help much ;(
[17:30:49] <cheeser> with?
[17:36:31] <dddh> cheeser: I do not remember what I expected
[17:41:35] <dddh> I mean I read the book Instant MongoDB https://www.packtpub.com/big-data-and-business-intelligence/instant-mongodb-instant
[17:41:42] <dddh> read docs
[17:42:51] <dddh> and I learned almost nothing new from these courses
[17:42:52] <dddh> ;(
[17:53:47] <dddh> earlier I passed m101p and m102, but it seems to me that it is better to read books and practice
[17:54:01] <dddh> for example Star Schema: The Complete Reference http://www.amazon.com/gp/product/B003Y8YWAE
[17:54:27] <dddh> and The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling http://www.amazon.com/gp/product/B00DRZX6XS
[17:56:31] <dddh> cheeser: do you recommend something else ?
[18:06:43] <cheeser> recommend for what? to learn mongo?
[18:08:48] <saml> what's dimensional modeling?
[18:09:08] <dddh> mongo data modeling
[18:09:22] <saml> sounds like a good word to put on my resume. thanks
[18:09:26] <cheeser> nothing comes to mind
[19:13:37] <troypayne> anyone know if there are predefined $geometry JSON objects that represent regions like New York City, Los Angeles, San Francisco, etc
[19:54:31] <bjpenn> anyone know how to determine the time it takes to build mongodb indexes?
[20:09:36] <kali> bjpenn: you can check the progress in db.currentOp() or your server logs
[20:17:35] <bjpenn> kali: checking the "progress" assumes one is being built right now, right? I'm not sure if one is being built. What I meant was how to determine how long it would take to build if I wanted to restore data from backup and build one
[20:17:46] <bjpenn> without actually doing it :p
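There's no built-in estimator for how long a future index build will take, but while one is running its progress does show up in db.currentOp(); a rough sketch (the exact msg text varies by version):

    // List in-flight index builds with their progress message and runtime.
    db.currentOp().inprog.forEach(function (op) {
        if (op.msg && /Index Build/.test(op.msg)) {
            printjson({ ns: op.ns, msg: op.msg, secs_running: op.secs_running });
        }
    });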
[21:43:29] <ShadySQL> hi guys
[21:43:31] <ShadySQL> new to mongo
[21:43:38] <ShadySQL> I am having a hard time
[21:43:44] <ShadySQL> with permissions
[21:44:18] <ShadySQL> I am getting this error: http://pastebin.com/GB3Y3wut
[21:44:23] <ShadySQL> can anyone help pls
[22:09:44] <troypayne> how much of a nightmare is it to upgrade from mongo 2.6 to 3.0?
[22:17:14] <Derick> more like a day dream?
[22:28:23] <troypayne> Derick: seems like i’ve upgraded but when i do mongodb --version it still says 2.6
[22:44:48] <Doyle> How do you go about replacing config servers?
[22:45:08] <Doyle> Can I build 3 more, adjust the mongos configs then restart them
[22:45:09] <Doyle> ?
[22:45:48] <Doyle> ah, found the replace config server docs
[22:45:54] <Doyle> Last place I looked.
[22:57:20] <Doyle> When one config server is down "This renders all config data for the sharded cluster “read only.”"
[22:58:26] <Doyle> That means there'll be no chunk migrations and such, but databases will still be readable and writeable?
[22:59:38] <Derick> yes
[23:00:24] <Doyle> tyvm, that's what I thought.
[23:00:37] <Doyle> I'm sure I'd read that somewhere before, but couldn't find it this time.
[23:00:58] <Doyle> Can you create a new DB Derick ?
[23:01:16] <Derick> yes, because you need to instruct mongodb to shard a *collection*
[23:01:29] <Derick> you can also create new collections, but not shard them
[23:01:38] <Derick> until you have 3 working config servers again, that is
[23:02:39] <Doyle> So in a sharded environment, the mongos don't rely on the config servers for information on DB locations?
[23:02:52] <Doyle> only for collection shard and chunk details?
[23:03:11] <Derick> hmm, not quite
[23:03:19] <Derick> there is a "default shard" for new databases and collections
[23:03:53] <Derick> Doyle: then again... if you have a broken config server you probably should try to fix that first :)
[23:04:47] <Doyle> well, I've got a sharded cluster, but sharding isn't being used.
[23:04:49] <Doyle> yet
[23:05:32] <Doyle> Wondering what the impact of having a downed config server is
[23:05:57] <Derick> ah, ok
[23:06:06] <Doyle> as long as I can create/drop dbs and collections, and r/w to them, no issue
[23:06:11] <Doyle> balancing chunks doesn't factor in