#mongodb logs for Monday the 17th of June, 2019

[18:21:46] <sirsquishy> I am looking for someone who understands MongoDB's indexes well enough to create new ones based on lookup latency hits.
[18:25:40] <GothAlice> sirsquishy: https://github.com/mongolab/dex is an interesting tool, though dated at this point, as its interpretation of index use reflects MongoDB <= 2.6. (I.e. it doesn’t anticipate the ability to combine multiple indexes to service a query.)
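
For reference, dex is pointed at either a mongod log file or the system.profile collection; the flags below are as the project README described them at the time, so verify against the current README before use:

    # Analyze a mongod log file; the URI lets dex inspect existing indexes.
    dex -f /var/log/mongodb/mongod.log mongodb://user:pass@host:27017/mydb

    # Or analyze the database's system.profile collection directly.
    dex -p mongodb://user:pass@host:27017/mydb
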
[18:26:47] <GothAlice> Lookup latency by itself isn’t… that helpful as a hint for which fields might require indexing. A combination of the filter portion and possibly the sort choice, plus the use of “covered queries” (where all data returned exists in indexes), is more of a determiner of index construction.
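
To illustrate the covered-query idea with hypothetical collection and field names: if a single index contains every field a query filters on, sorts by, and returns, the server can answer it from the index alone.

    // A hypothetical compound index:
    db.orders.createIndex({ status: 1, total: -1 })

    // This query is "covered": filter, sort, and projection all use indexed
    // fields, and _id is explicitly excluded from the projection.
    db.orders.find({ status: "shipped" }, { _id: 0, status: 1, total: 1 }).sort({ total: -1 })
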
[18:28:40] <GothAlice> sirsquishy: Do you have the output of an $explain of the filter/query?
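For anyone following along, pulling that output from the shell looks roughly like this (collection and filter hypothetical):

    // "executionStats" adds the counters most useful for index decisions.
    db.orders.find({ status: "shipped" }).explain("executionStats")

    // In the result, compare totalKeysExamined and totalDocsExamined against
    // nReturned, and check whether winningPlan uses IXSCAN rather than COLLSCAN.
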
[18:29:07] <sirsquishy> Yeah, I'm aware of that, but the issue I'm facing is that I don't know MongoDB all that well, and certainly not well enough to start creating indexes. This MongoDB instance is part of an ERP package branded by Sage. They deploy their own index keys with the install. However, the 3 tables that show up with 50ms-14000ms hits have less than 1% of the total hits on the index keys. So I feel we can do better by rebuilding or adding new indexes for these tables.
[18:29:56] <sirsquishy> and fucking Sage has no one on staff who can help with MongoDB problems. They literally told me we are on our own... even though it's a branded installer from them.
[18:30:14] <sirsquishy> so... beyond frustrating
[18:30:48] <sirsquishy> I'd rather just hire a contractor who knows Mongo to handle this, but I have been having issues finding someone who understands what I'm trying to get done.
[18:31:08] <GothAlice> Enable level 3 profiling (log all queries), explore the cardinality of the fields being queried (i.e. what is most commonly filtered on, or the pairs, sequences, or other groupings that are common), note any queries marked explicitly as slow, and create some indexes.
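
As a sketch of that cardinality exploration (collection and field names hypothetical):

    // Bucket a candidate filter field by value: a few huge buckets means low
    // cardinality (a weak index prefix); many small buckets means high
    // cardinality (a strong one).
    db.orders.aggregate([
        { $group: { _id: "$status", n: { $sum: 1 } } },
        { $sort: { n: -1 } }
    ])
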
[18:31:32] <sirsquishy> so profile it, just like I would in SQL
[18:31:32] <GothAlice> As long as you always utilize background index construction (watching $currentOps for progress if desired), it’s perfectly OK to experiment with index construction!
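On 3.4 that looks roughly like the following (field names hypothetical); the shell helper is db.currentOp(), despite the shorthand above:

    // background: true keeps the build from blocking the database for its
    // duration (the foreground default takes a long exclusive lock).
    db.orders.createIndex({ status: 1, createdAt: -1 }, { background: true })

    // From another shell, check progress: in-flight builds appear in the
    // output with a "msg" field reporting a percentage complete.
    db.currentOp()
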
[18:31:49] <sirsquishy> even in a pretty heavy production instance?
[18:32:35] <GothAlice> sirsquishy: “Profiling” here is not the C meaning of the term. The profiling level determines per-query logging: 0 = off, no logging of queries; 1 = log slow queries only; 2 = log all queries. (And yeah, sorry, level 2, not 3.)
[18:32:59] <sirsquishy> gotcha
[18:33:03] <sirsquishy> also we are on 3.4.16
[18:33:04] <GothAlice> https://docs.mongodb.com/manual/reference/method/db.setProfilingLevel/
[18:33:13] <GothAlice> (0, 1, 2… 2 is the third option. Thanks, computer science! ;)
[18:33:19] <sirsquishy> will dex interface with that version of mongo?
[18:35:08] <GothAlice> Yup.
[18:35:35] <sirsquishy> OK, perfect. I'll run the logging after lunch and see what dex outputs.
[18:35:59] <GothAlice> Note that this tool performs that statistical analysis of the most frequently filtered fields for you. It won’t be a magic bullet, but it can help if combined with some thought.
[18:36:18] <sirsquishy> not looking for a magic bullet, just better information to go off of
[18:36:55] <GothAlice> See also: https://gist.github.com/amcgregor/4fb7052ce3166e2612ab#how-much-ram-should-i-allocate-to-a-mongodb-server — indexes must fit in RAM (even if your data itself doesn’t) for them to help in any way. (If they don’t, adding more indexes might actually hurt instead of help.)
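
A quick way to check that fit from the shell (collection name hypothetical):

    // Per-index sizes, in bytes, for a single collection.
    db.orders.stats().indexSizes

    // Total index size across the current database; this is the figure that
    // should fit comfortably in RAM.
    db.stats().indexSize
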
[18:37:20] <sirsquishy> I think RAM is fine atm: 1.3GB/16GB in use.
[18:37:43] <sirsquishy> VMs, on ESXi, with Flash storage
[18:40:58] <GothAlice> THP?
[18:41:48] <GothAlice> THP can be a subtle and damned difficult to track down source of performance issues, given a memory page fault can end up servicing a 2 MiB huge page instead of only a 4 KiB one.
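
If it is unclear whether THP is enabled on the host, mongod complains about it at startup, and those warnings can be retrieved without leaving the mongo shell:

    // Returns startup warnings, including the Transparent Huge Pages warning
    // if /sys/kernel/mm/transparent_hugepage/enabled was not set to "never".
    db.adminCommand({ getLog: "startupWarnings" })
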
[22:37:02] <warrshrike> guys
[22:37:20] <warrshrike> what's the fastest and best way to quickly get up to speed with MongoDB?