PMXBOT Log file Viewer


#mongodb logs for Wednesday the 29th of June, 2016

[00:05:23] <TommyTheKid> OK, so I *know* I am being dense or missing something obvious. I am creating a new "user" in an existing mongo 3.0.x cluster, I can get the user to authenticate if I specify "admin" then change dbs, but I can't connect directly to its "application" db. Another user on the same system works properly :-/ (there is no system.users in the working application database)
[00:06:24] <TommyTheKid> the user has role "dbOwner" and db: "applicationDB2"
[00:11:19] <TommyTheKid> I am authenticating with a user with role "root" to create the user (if that matters)
[00:23:28] <TommyTheKid> ugh, apparently you create the users on the database, but you can't see the collection "system.users" in it... however "show users" after "use appdb" does show appdb.appdbuser ... so ... wow suckify
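[Editor's note: what TommyTheKid ran into matches how auth works in 3.0: users are created on the database they are scoped to, but the credential documents themselves live in admin.system.users, which is why no system.users collection shows up in the application db. A minimal mongo shell sketch, with hypothetical names:]

```javascript
// run while authenticated as a user with userAdmin/root privileges
use applicationDB2
db.createUser({
  user: "appUser",                                  // hypothetical name
  pwd: "secret",
  roles: [ { role: "dbOwner", db: "applicationDB2" } ]
})

// the user can now authenticate directly against its own db:
//   mongo applicationDB2 -u appUser -p secret
// "show users" lists it here, but the documents are stored
// in admin.system.users, not applicationDB2.system.users
```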
[08:10:12] <sumi> hello
[08:15:03] <Boomtime> hi sumi
[09:47:52] <spleen> Hello,
[09:48:05] <spleen> i migrated a mongo server
[09:48:23] <spleen> i have now a problem with the config server
[09:48:56] <spleen> could not verify that config servers are in sync :: caused by :: config servers xxxx27019 and xxxx:27019 differ
[09:49:56] <spleen> i forgot to stop configsvr to do the rsync of the configserver path..
[09:50:06] <spleen> how could i repair that ?
[10:22:17] <Mmike> Hi! Is there a way to change the log level verbosity on a RUNNING mongod instance? I am running mongo 2.4.9
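[Editor's note: for Mmike's question, log verbosity can be raised on a live mongod via setParameter, with no restart; this works on 2.4. A sketch, run against the admin database:]

```javascript
// raise log verbosity to level 2 (valid range 0-5) on a running instance
db.adminCommand({ setParameter: 1, logLevel: 2 })

// the same knob is exposed at startup as -v / --verbose
```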
[10:31:05] <thapakazi> Please recommend me best storage engine for my production: (app demanding optimised for fast disk I/O), I was looking into perconaft, but they are ditching it on future release :(
[11:00:24] <thapakazi> how good is rockdb ?
[11:45:13] <cheeser> thapakazi: wired tiger
[12:14:56] <HoierM> Is there anything for mongo that resembles the hibernate xml configuration? I'm using Java, but i can't annotate the model. It's not possible to change the model in any way.
[12:24:36] <StephenLynx> thats because there is no schema validation.
[12:25:11] <StephenLynx> one document can be {banana:9001} while the other can be {zebra:{omg:'kek'}}
[12:25:24] <StephenLynx> I mean
[12:25:29] <StephenLynx> there is a very new feature for that.
[12:25:39] <StephenLynx> i always forget about that one.
[12:25:58] <StephenLynx> https://docs.mongodb.com/manual/core/document-validation/
[12:26:00] <StephenLynx> HoierM,
[12:26:29] <StephenLynx> it was released on the latest version, so use at your own risk.
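[Editor's note: the feature StephenLynx links is document validation, new in 3.2 at the time of this log. Validators are expressed with ordinary query operators; a minimal sketch with hypothetical fields:]

```javascript
// reject inserts/updates unless the document matches the validator
db.createCollection("people", {
  validator: { name: { $exists: true }, age: { $gte: 0 } },
  validationLevel: "strict",   // also applies when updating existing docs
  validationAction: "error"    // "warn" only logs instead of rejecting
})
```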
[12:26:31] <kurushiyama> HoierM SpringDataMongoDB does quite some stuff.
[12:26:43] <StephenLynx> >ODMs
[12:26:49] <kurushiyama> StephenLynx Aye
[12:27:00] <kurushiyama> StephenLynx But quite a good one, to be fair.
[12:27:10] <StephenLynx> you might as well use PHP like a caveman :^)
[12:27:18] <kurushiyama> StephenLynx ;P
[12:27:52] <kurushiyama> StephenLynx Seriously, that one is good. It takes away boilerplate, but lets you go down to the driver in case you want to.
[12:28:02] <StephenLynx> to be honest, I see zero reason to use any kind of schema validation on a database that doesn't implement relations.
[12:28:18] <kurushiyama> StephenLynx hibernate is not only validation.
[12:28:21] <StephenLynx> at best it's a crutch for developers that shouldn't be allowed to program to begin with.
[12:29:25] <StephenLynx> sure is not just validation, after all, how could they justify it without loads and loads of useless bloat.
[12:29:30] <kurushiyama> StephenLynx Well, I use it. I validate the user input multiple times. Frontend before submit, server on submit, and I use document validation to make sure the data stays as it should be, no matter what.
[12:29:56] <StephenLynx> the problem is adding a whole dependency for something that is not even a dozen LOC.
[12:30:13] <kurushiyama> StephenLynx But using SpringData just for the sake of it will cause serious problems ;)
[12:31:08] <HoierM> I'm more worried about the serialization/deserialization part, i'm not doing the web module, just the persistence layer... I wanted to use SpringData, just not sure if it's too big a dependency for this project.
[12:31:22] <StephenLynx> it is.
[12:31:42] <StephenLynx> and there is no serialization/deserialization if its web.
[12:31:44] <kurushiyama> HoierM Depends on the scale of the project. It takes away all the basic CRUD stuff, that is not a bad thing.
[12:31:50] <StephenLynx> you get strings, you store strings
[12:31:54] <StephenLynx> you read strings, you output strings
[12:31:59] <HoierM> I wanted to use Morphia at first, but i couldn't find anything about it and external mapping.
[12:32:04] <StephenLynx> when needed you cast to another type.
[12:32:14] <kurushiyama> HoierM external mapping?
[12:32:31] <StephenLynx> by external mapping you mean, how to automatically expose your database?
[12:32:49] <StephenLynx> through a back-end?
[12:33:08] <HoierM> It's a really small project. More like a microservice than anything.
[12:33:21] <HoierM> No, no, i mean those @Entity and @Id notations.
[12:33:24] <StephenLynx> what
[12:33:50] <kurushiyama> HoierM Uhm... Those are ERM notions
[12:34:01] <StephenLynx> and microservice is a buzzword.
[12:34:13] <StephenLynx> you just have a web back-end like any other.
[12:34:45] <kurushiyama> HoierM http://mongodb.github.io/morphia/1.2/guides/annotations/
[12:35:28] <HoierM> I'm going at it from what i saw on the Morphia documentation, though.
[12:35:43] <kurushiyama> If it is a small project, I suggest using Morphia directly. If you have to, you can pass around the client object or (if the project has the potential of getting bigger) use Guice.
[12:36:52] <HoierM> Yeah, exactly that. I'm quite new to mongo, so i might be saying bullshit on this side... The thing is that i cannot annotate the classes directly. Any direct model change would be seen as a hack, so i can't do that. The model is on a separated module.
[12:36:53] <kurushiyama> HoierM Although I would not underestimate SpringData. If you just need basic CRUD, this can be achieved with a few dozen lines of codes on top of _all_ of your entity classes.
[12:37:32] <kurushiyama> HoierM WHAT? Why can't you annotate the classes directly?
[12:38:05] <HoierM> I cannot change the model. That's the catch. Before i could control it, but they changed it to a module, so i cannot make changes to it.
[12:38:30] <kurushiyama> HoierM runtime instrumentation is a drag, to say the least.
[12:38:58] <HoierM> To explain better, this is for my university, and since there's a load of people doing different solutions, they decided to do this.
[12:39:13] <kurushiyama> HoierM Well... fork the code?
[12:39:57] <HoierM> kurushiyama well... that'd still be a model change, and my professor would hate it.
[12:40:04] <kurushiyama> WHAT
[12:40:13] <kurushiyama> Adding annotations?
[12:40:24] <kurushiyama> In which parallel dimension is that a model change?
[12:40:59] <HoierM> Yeah, anything vendor-specific. It was kinda of absurd when i heard it the first time too, i cannot even picture this without annotation...
[12:41:32] <HoierM> But then i remembered that hibernate had those old xml external mappings, because i only used the annotation-based ones, so i was looking for something like it
[12:43:49] <kurushiyama> HoierM Hm, the models are beans?
[12:44:54] <HoierM> I pretty much wanted this: https://github.com/DavideD/ogm-mongodb-example/blob/master/src/main/resources/hibernate-contact.hbm.xml
[12:45:37] <kurushiyama> HoierM Would you answer my question? I might have a solution for you, if that is the case.
[12:45:39] <HoierM> They "are being converted into". They didn't even made get and set for all the fields, so i can't call it exactly a bean.
[12:45:49] <HoierM> *getters and setters
[12:46:37] <kurushiyama> HoierM Well, subclass them in a bean and override. Lombok might be helpful here.
[12:47:14] <kurushiyama> HoierM Or, to be more precise override and hide.
[12:47:54] <kurushiyama> HoierM Then use annotations on your bean and Bob's your uncle.
[12:50:33] <HoierM> kurushiyama Heh, now that's one nice expression. I'll look into that, then. Thanks for the help.
[12:54:32] <kurushiyama> HoierM You probably can use Lombok for the boilerplate. But with hiding(where necessary only!) and subclassing, your problems should be solved.
[13:05:02] <kantoniak> Hey there
[13:05:13] <kantoniak> Can I set disk quota for a single database?
[13:05:37] <kantoniak> I'm creating a playground for users and I'd like anyone to have their own database.
[13:29:34] <StephenLynx> http://stackoverflow.com/a/10001832
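[Editor's note: the linked answer points at mongod's per-database quota options, which only apply to the MMAPv1 storage engine; there is no built-in per-database disk quota under WiredTiger. Roughly, in mongod.conf:]

```yaml
# MMAPv1 only: cap each database at N data files
# (data files grow up to 2GB each, so this bounds db size)
storage:
  quota:
    enforced: true
    maxFilesPerDB: 4
```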
[13:56:50] <scruz> hi. how do i write a an expression (agg framework) checking if a value is null or missing?
[13:57:16] <scruz> i’m not entirely sure i can use the null literal
[13:58:05] <scruz> entirely forgot about $ifNull
[13:58:09] <scruz> never mind. thanks.
[14:53:37] <kurushiyama> scruz $exists should work too, no?
[14:53:59] <kurushiyama> scruz nvm, no, since the field could be empty.
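[Editor's note: the $ifNull expression scruz remembered covers both cases kurushiyama raises, since it substitutes a fallback whether the field is null or missing entirely. A sketch with hypothetical collection and field names:]

```javascript
// aggregation pipeline: normalize a null/missing "status" field
db.orders.aggregate([
  { $project: {
      status: { $ifNull: [ "$status", "unknown" ] }  // null or absent -> "unknown"
  } }
])
```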
[14:57:47] <scruz> hey kurushiyama
[14:57:51] <scruz> what’s up?
[14:58:04] <kurushiyama> scruz Everything fine, so far.
[14:58:27] <scruz> i’m using an $eq, so as long as it’s not a valid value, it’s allowed to be null or empty
[14:58:53] <scruz> i hate complicated agg pipelines
[14:59:07] <scruz> they’re like regexes - write once, read never
[16:03:36] <speedio> anyone know what i can use for wordpress that handles documentation for software with different versios(similar to msdn documentation)?
[16:05:37] <StephenLynx> wat
[16:05:59] <StephenLynx> ʇɐʍ
[16:08:24] <speedio> StephenLynx: i need to setup a documentation management system and thought maybe i could use wordpress for that..?
[16:08:34] <StephenLynx> :v
[16:08:41] <Derick> speedio: sure, but what does that have to do with MongoDB?
[16:08:50] <speedio> sorry wrong channel
[16:08:52] <speedio> :)
[18:17:59] <jayjo> Because '.' is invalid for keys, is there a recommended or default replacement?
[18:20:09] <idioglossia> Hi everyone, do collection names have to be alphanumeric, like javascript identifiers?
[18:20:17] <idioglossia> or can they contain some special characters that js can't
[18:29:41] <jayjo> Or is that a decision entirely up to me as the programmer?
[18:48:25] <StephenLynx> that.
[18:55:33] <jayjo> ...it's up to me?
[18:59:56] <kurushiyama> jayjo Ayejo ;)
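[Editor's note: since the answer to jayjo is "up to you", one common convention is to swap '.' for a visually similar character such as U+FF0E (fullwidth full stop), or for '_' if collisions are acceptable. A minimal recursive sketch in JavaScript; the function name is hypothetical:]

```javascript
// Recursively replace '.' in keys, since the server rejects dotted
// keys in stored documents. '\uFF0E' is one common stand-in.
function sanitizeKeys(value) {
  if (Array.isArray(value)) return value.map(sanitizeKeys);
  if (value === null || typeof value !== 'object') return value;
  const out = {};
  for (const key of Object.keys(value)) {
    out[key.split('.').join('\uFF0E')] = sanitizeKeys(value[key]);
  }
  return out;
}

// e.g. {"example.com": {hits: 1}} becomes {"example\uFF0Ecom": {hits: 1}}
```

The same mapping has to be applied in reverse when reading documents back out, so whatever replacement is chosen should be unlikely to occur in real keys.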
[19:07:28] <kurushiyama> idioglossia Actually, just like JavaScript identifiers, if my lack of JS knowledge does not betray me,
[19:07:47] <idioglossia> kurushiyama, thanks :)
[19:11:59] <kurushiyama> idioglossia Take it with a grain of salt from a near JS illiterate
[22:00:37] <bjpenn> using 2.6.9 mongodb, anyone know if theres a bug that causes operations to not get killed properly?
[22:03:50] <kurushiyama> bjpenn Not off the top of my head.
[22:05:22] <bjpenn> kurushiyama: any reason why there would be super long running process that doesnt end?
[22:06:00] <kurushiyama> bjpenn Depends on the process and what you call "super long". Can you be more specific?
[22:06:11] <bjpenn> like over 2000 seconds
[22:06:59] <kurushiyama> bjpenn And what is the type of the operation?
[22:07:30] <kurushiyama> bjpenn Depending on your data size, an index creation for example can take quite a while.
[22:07:52] <bjpenn> trying to get some info to answer the questions :p
[22:08:30] <kurushiyama> bjpenn Regarding the type of operation or your use cases?
[22:08:50] <bjpenn> its a query
[22:10:04] <bjpenn> type == query... :p
[22:10:24] <kurushiyama> Oh. That is a long query. This can happen when you set the timeout for the query to a big number or to 0 (worst case) and the client does not properly close the cursor, iirc.
[22:11:01] <bjpenn> kurushiyama: i see...
[22:11:21] <bjpenn> kurushiyama: we were seeing hundreds of connections to the database...
[22:11:24] <bjpenn> so these connections were staying open
[22:11:45] <bjpenn> even though nothing we shut down the app, which connects to the DB
[22:11:56] <kurushiyama> bjpenn You are aware of the fact that each time you create a client, actually a connection pool is created?
[22:12:16] <bjpenn> kurushiyama: and its not good form to "close" a connection right?
[22:12:21] <bjpenn> because opening connections is really resource intensive?
[22:12:46] <kurushiyama> bjpenn You close cursors. And closing a connection should really return it to the pool.
[22:13:04] <kurushiyama> bjpenn What language do you use?
[22:13:07] <bjpenn> php
[22:13:16] <kurushiyama> Lemme check.
[22:14:46] <kurushiyama> bjpenn Legacy or new (HHVM-capable) driver?
[22:15:02] <bjpenn> hmm trying to figure that out
[22:15:09] <bjpenn> its a mongo driver for php... let me see if i can find which version
[22:15:53] <bjpenn> mongo.so is the file :o
[22:17:05] <kurushiyama> bjpenn As far as I can see (from my rather limited knowledge and interest in PHP), both open a connection pool.
[22:17:16] <Derick> no pool
[22:17:31] <Derick> there is one connection per PHP process
[22:17:42] <bjpenn> kurushiyama: looks like 1.6.10
[22:17:47] <Derick> but it's up to the server to kill operations
[22:17:54] <bjpenn> Derick: ahh
[22:18:06] <bjpenn> Derick: is there a pool for other drivers, just not PHP?
[22:18:24] <Derick> yeah
[22:18:41] <Derick> it makes no sense to have a pool for single threaded drivers
[22:19:38] <bjpenn> Derick: so do php users use something like maxTimeMS() to kill serverside connections?
[22:19:44] <bjpenn> operations* rather
[22:20:07] <Derick> they can set the max length, yes
[22:20:15] <Derick> but that hardly ever should *happen*
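[Editor's note: maxTimeMS is set per operation; from the mongo shell the same limit is a cursor modifier. A hedged sketch with a hypothetical collection and filter:]

```javascript
// abort server-side after 5s of processing time (idle time excluded)
db.collection.find({ status: "open" }).maxTimeMS(5000)

// exceeding the limit raises an ExceededTimeLimit error instead of
// leaving a long-running operation behind on the server
```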
[22:20:20] <Derick> what sort of ops are you running?
[22:20:37] <kurushiyama> @Derick Oh, good to know that.
[22:20:37] <bjpenn> Derick: these long running ones when i look into mongo are type "query"
[22:21:04] <Derick> what sort of queries are you running that take so long?
[22:21:48] <bjpenn> Derick: thats what i dont know :p
[22:22:10] <Derick> the mongo log will tell you, as well as the place where you seeing where it says "query"
[22:22:33] <bjpenn> ok let me see
[22:24:01] <kurushiyama> @Derick Interesting. Didn't know PHP was single threaded. Per instance, I guess, but yes, that makes sense.
[22:24:22] <Derick> sure, a web server spawns many instances
[22:28:46] <kurushiyama> @Derick Not sure how this relates when PHP is used as an FCGI process.
[22:31:38] <bjpenn> looks like a normal query... i can paste it
[22:31:43] <bjpenn> Derick: looks like this: https://gist.github.com/3283096e24cfef4cc8cf512eedb9d681
[22:32:03] <bjpenn> looks kind of normal to me
[22:32:34] <bjpenn> if its normal and its running a long time could it be that the client died mid transaction?
[22:32:39] <bjpenn> and thus it stays on forever?
[22:33:32] <bjpenn> operation rather..
[22:33:33] <Derick> kurushiyama: i was talking from the POV of PHP-FPM processes
[22:33:46] <bjpenn> but i guess the operation would just complete, and operation would finish right?
[22:34:01] <Derick> sure
[22:34:03] <kurushiyama> @Derick Ah, ok.
[22:34:13] <Derick> do you have any indexes bjpenn ?
[22:34:46] <kurushiyama> Well, I am not so sure. What if the timeout was set to 0 and the client just broke away?
[22:35:08] <Derick> which timeout?
[22:35:12] <Derick> there are many different ones
[22:36:00] <bjpenn> Derick: i believe so, trying to figure out how to determine that
[22:38:46] <kurushiyama> @Derick cursor?
[22:42:13] <Derick> kurushiyama: that's a client one
[22:42:26] <Derick> bjpenn: "i believe so" didn't quite answer my question of "which timeout?"
[22:43:19] <bjpenn> Derick: which timeout? what do u mean
[22:43:32] <Derick> there are cursor timeouts, connection timeouts, server timeouts
[22:43:36] <bjpenn> ohhh
[22:43:49] <bjpenn> i would see it in the same output that i pasted?
[22:43:54] <Derick> no
[22:44:01] <Derick> only your code will tell
[22:44:21] <Zelest> bjpenn, ufc fan i assume? :)
[22:44:34] <Zelest> </off-topic>
[22:45:00] <bjpenn> Derick: nothing from the client side
[22:45:08] <bjpenn> we didnt look there or have any logs that can indicate what happened
[22:45:13] <bjpenn> just see that theres long running operations on the server side
[22:45:34] <Derick> the server log will probably tell you why then
[22:46:46] <bjpenn> does each operation correspond to a connection? like a many to one relation?
[22:46:53] <bjpenn> ill try and check server logs to see
[22:46:56] <kurushiyama> @Derick Curious. https://docs.mongodb.com/manual/reference/method/cursor.noCursorTimeout/#definition
[22:47:03] <bjpenn> but the operation was just long running, never timed out, i had to kill it
[22:47:07] <bjpenn> it was eating too much server resources
[22:49:02] <Derick> kurushiyama: don't touch that
[22:49:16] <kurushiyama> @Derick huh?
[22:49:21] <Derick> bjpenn: no, each PHP request is a connection
[22:49:29] <Derick> kurushiyama: it's a really difficult thing to handle
[22:49:35] <Derick> kurushiyama: maxTimeMS is the way forwards
[22:50:13] <bjpenn> each php request is an operation right? or many operations right?
[22:50:16] <kurushiyama> @Derick I would not, but in my (admittedly rusty) memory it was the server managing the server timeouts.
[22:50:34] <Derick> bjpenn: many I think
[22:50:36] <Derick> depends on the code
[22:50:42] <Derick> i got to go
[22:50:48] <kurushiyama> @Derick s/server timeouts/cursor timeouts/
[22:52:35] <kurushiyama> bjpenn You have a replset?
[22:54:11] <bjpenn> kurushiyama: yah
[22:54:12] <bjpenn> replset
[22:54:54] <kurushiyama> bjpenn Well, you _could_ have your primary step down and restart it then...
[22:55:17] <kurushiyama> bjpenn Good test for your failover code, as well ;)
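[Editor's note: the step-down kurushiyama suggests is a single shell call on the primary; by default it refuses if no secondary is caught up:]

```javascript
// on the primary: step down and refuse re-election for 60 seconds
rs.stepDown(60)
```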
[22:58:37] <bjpenn> kurushiyama: hahah :)
[22:58:54] <bjpenn> kurushiyama: yeah... but i wonder why my queries running so long in the first place :/
[22:59:09] <UberDuper> php is the devil.
[22:59:27] <UberDuper> Check the slow query log.
[22:59:38] <UberDuper> Or, well, the log for slow queries.
[23:00:49] <UberDuper> By default any query that takes > 100ms will get written to the mongo log.
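[Editor's note: the 100ms figure UberDuper mentions is mongod's default slow-operation threshold (slowms). It can be tuned, or the profiler turned on, on a running instance; a sketch:]

```javascript
// profile ops slower than 50ms (recorded to system.profile and the log)
db.setProfilingLevel(1, 50)

// check current settings: returns { was: <level>, slowms: <threshold> }
db.getProfilingStatus()
```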
[23:01:05] <Derick> UberDuper: PHP has nothing to do with slow queries...
[23:01:18] <UberDuper> Derick: That doesn't make PHP not the devil.
[23:01:38] <UberDuper> PHP is just awful for mongo because of the connection handling.
[23:01:42] <UberDuper> Or lack thereof.
[23:02:00] <Derick> UberDuper: https://twitter.com/grmpyprogrammer/status/743100733063581696
[23:02:15] <Derick> UberDuper: really? Can you write it down and email me? I did write it after all.
[23:03:49] <UberDuper> Oh? My perspective is as the mongo server operator dealing with apps that end up hitting connection limits.
[23:04:24] <UberDuper> Or apps that don't properly close their connections.
[23:05:48] <UberDuper> The one thing I'll give it credit for is that you don't have cursor not found issues when load balancing mongos nodes.
[23:08:49] <kurushiyama> UberDuper Reaching the connection limit? Even on my 6 year old laptop with 4Gb of RAM, running several databases and my complete dev environment my mongod reports a max connection count of some 7k...
[23:09:32] <UberDuper> In 2.4 and earlier, there was an arbitrary connection limit of 20k.
[23:10:08] <UberDuper> I've seen primaries hit that a lot.
[23:10:22] <UberDuper> It's much better in 2.6+ due to mongos pooling connections to replsets.
[23:10:34] <kurushiyama> UberDuper May well be, but from my perspective, running into your max connection count is a question of lazy resource management and lack of tests thereof.
[23:10:58] <UberDuper> So even if you get to 40k connections into your mongos nodes, you could still only have a couple hundred connections to the primary.
[23:10:59] <kurushiyama> UberDuper And that is not a driver issue or an issue of MongoDB.
[23:11:16] <kurushiyama> UberDuper So?
[23:11:55] <UberDuper> So? You hit the connection limit and heartbeats fail and you get elections.
[23:11:59] <kurushiyama> UberDuper As long as RAM and fds are available, I do not see an issue. Not closing connections properly is a dev issue.
[23:12:29] <kurushiyama> UberDuper I am managing several high load clusters, and that _never_ happened.
[23:13:28] <kurushiyama> UberDuper If that happens, there is something severely wrong, either on the ressource side or on the configuration side.
[23:14:13] <UberDuper> Like I said, it's not really a problem anymore in 2.6+
[23:14:38] <kurushiyama> Which itself is sort of archeology ;)
[23:15:17] <UberDuper> I've still got some 2.2.x around here.
[23:16:04] <kurushiyama> UberDuper That would admittedly make me nervous.
[23:16:19] <kurushiyama> To say the least.
[23:16:34] <UberDuper> It's mostly alright.
[23:21:23] <UberDuper> 2.2 scares me less than wiredtiger
[23:23:43] <bjpenn> hey guys
[23:23:44] <bjpenn> Specifies a cumulative time limit in milliseconds for processing the operation on the server (does not include idle time). If the operation is not completed by the server within the timeout period, a MongoExecutionTimeoutException will be thrown.
[23:23:51] <bjpenn> thats a copy and paste from the docs, whats the "idle time"
[23:23:52] <bjpenn> ?
[23:24:11] <bjpenn> if my php client connects to mongo to make a query, where in that example is the idle time?
[23:24:20] <bjpenn> like if the db is locked or something?
[23:24:32] <bjpenn> and it has to wait to perform the operation
[23:24:32] <bjpenn> ?
[23:24:33] <kurushiyama> bjpenn Which docs exactly?
[23:24:57] <kurushiyama> bjpenn Looks like Java docs.
[23:25:52] <bjpenn> let me link
[23:26:00] <bjpenn> http://www.php.net/manual/en/mongocollection.findone.php
[23:26:02] <bjpenn> thats where i pasted it from
[23:29:46] <kurushiyama> bjpenn To be taken with a pound of salt and then some, here is how I read it: When a query is sent to the MongoDB server, it does not get executed right away. It is run through the query optimizer and – as you noted, and especially for MMAPv1 – the requested resource or parts thereof might not be available. Say you have an unindexed query, aka collscan. Even with WT, documents might be locked while the query traverses the collection.
[23:29:46] <kurushiyama> I guess those are the idle times meant.
[23:30:19] <bjpenn> kurushiyama: awesome thanks!
[23:30:52] <kurushiyama> bjpenn Please read the disclaimer at the beginning again ;)
[23:31:06] <bjpenn> kurushiyama: haha i get it :)
[23:33:29] <kurushiyama> Basically, it is a wild guess taking into account a few things. I am most likely totally off, thinking about it twice. The collection or documents therein being locked could very well count toward the timeout (and actually should, from my perspective.)
[23:33:35] <kurushiyama> bjpenn ^
[23:34:53] <bjpenn> kurushiyama: do you know how to tell whats happening for a long running operation
[23:34:59] <bjpenn> if it says "secs_running" like 3000 secs
[23:35:00] <UberDuper> If I had to guess, I'd say idle time has more to do with batches and time between getmores.
[23:35:06] <bjpenn> what part of that is idle time?
[23:36:16] <kurushiyama> UberDuper Dont know. Time between getmores is actually what I would call inactivity...
[23:36:17] <UberDuper> From the server perspective there's really no idle time when it comes to queries.
[23:36:39] <bjpenn> kurushiyama: reading this link now... http://blog.mlab.com/2014/02/mongodb-currentop-killop/
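[Editor's note: the linked mlab post covers the currentOp()/killOp() workflow bjpenn needs; the gist, as a mongo shell sketch (the opid placeholder is whatever currentOp reports):]

```javascript
// list operations that have been running for over 1000 seconds
db.currentOp().inprog.forEach(function (op) {
  if (op.secs_running > 1000) {
    printjson({ opid: op.opid, ns: op.ns, query: op.query });
  }
});

// then terminate a specific one by its opid:
// db.killOp(<opid>)
```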
[23:36:46] <kurushiyama> UberDuper Fair point.
[23:37:58] <UberDuper> bjpenn: Best place to start with finding out why that query is slow is to check the mongo logs.
[23:39:02] <kurushiyama> bjpenn Well, that is not exactly new. The docs state pretty much the same. And UberDuper is right, especially when you go for mlogvis.
[23:39:53] <UberDuper> It'll tell you if it's using an index and if so, which ones.
[23:41:17] <UberDuper> Assuming *any* of those queries have finished.
[23:42:08] <UberDuper> It'd be nice if currentOp() showed the query plan it's using.
[23:44:01] <bjpenn> UberDuper: what do i look for in the logs
[23:44:18] <bjpenn> i guess one way is to find the time when i did the currentOp() and then minus the seconds its been running to get a rough estimate when it started running
[23:44:21] <bjpenn> and then look via timestamp
[23:44:29] <bjpenn> i got a 9gb log file though, so first i guess i have to rotate it :p
[23:44:33] <UberDuper> grep "ms$" /path/to/log | grep -vi "writeback\|lockping"
[23:45:01] <bjpenn> whats "ms$"?
[23:45:17] <UberDuper> The last two characters of the log entry will be ms
[23:45:24] <UberDuper> Like 3000ms
[23:45:24] <bjpenn> oh ok
[23:45:29] <bjpenn> thats assuming the query finished right?
[23:45:32] <bjpenn> i killed it
[23:45:37] <bjpenn> so i dont think it would have logged that it finished
[23:45:50] <bjpenn> but there might have been others that finished that also took long
[23:45:51] <kurushiyama> bjpenn Is that a regular problem you have?
[23:45:53] <UberDuper> Yea. I'm not sure what shows up in the log when you kill a long runner. I don't think anything.
[23:46:05] <bjpenn> kurushiyama: yeah
[23:46:39] <UberDuper> grep "ms$" /path/to/log | grep -vi "writeback\|lockping" | awk '{print $NF" "$0}' | sort -n
[23:46:55] <UberDuper> To sort the slow queries by time spent.
[23:47:16] <kurushiyama> bjpenn Not sure whether they still work, but Thomas Rückstiess's mtools might be quite handy: https://github.com/rueckstiess/mtools/wiki/mlogfilter
[23:47:35] <UberDuper> Or that.
[23:48:40] <bjpenn> kurushiyama: nice!
[23:48:47] <kurushiyama> I heavily use mlogvis for optimizations.
[23:48:53] <kurushiyama> Or used to use.
[23:49:23] <bjpenn> i ran that grep, its really nice
[23:49:29] <bjpenn> im getting all queries back though even the fast ones
[23:49:33] <bjpenn> that last like 115ms
[23:49:43] <bjpenn> how do i get the ones that are like over 1000ms
[23:49:50] <UberDuper> They should be at the end
[23:49:53] <bjpenn> oh
[23:49:55] <bjpenn> i see
[23:49:56] <UberDuper> It should sort them.
[23:49:57] <bjpenn> just another regex :P
[23:50:22] <bjpenn> it doesnt sort, it just greps through and shows every matching one that ends with ms
[23:50:43] <UberDuper> Oh I also gave you this one
[23:50:43] <UberDuper> grep "ms$" /path/to/log | grep -vi "writeback\|lockping" | awk '{print $NF" "$0}' | sort -n
[23:53:11] <bjpenn> hey i got a bunch of log entries of really long running queries
[23:53:23] <bjpenn> some that took millions of seconds
[23:53:32] <bjpenn> nice
[23:56:33] <UberDuper> kurushiyama: Yeah. My awk-fu is weak. So I just keep piping till I get what I want.