PMXBOT Log file Viewer


#mongodb logs for Thursday the 10th of September, 2015

[00:16:20] <_syn> im trying to configure this: http://docs.mongodb.org/master/tutorial/manage-the-database-profiler/
[00:16:31] <_syn> im just wondering where that data actually logs to
[00:17:10] <_syn> when I run a db.system.profile.find().limit(10).sort( { ts : -1 } ).pretty() - I can see some relevant data, but I cant find that in my .log file
[00:17:27] <_syn> root@mongo1:/data/insight# cat /etc/mongod-insight.conf |grep log
[00:17:28] <_syn> #where to log
[00:17:28] <_syn> logpath=/data/insight/log/mongodb.log
[00:17:47] <_syn> heres an example of one of the mongod instances im trying to get db profiling on
[00:18:17] <_syn> is there any further configuration I need? I would have expected that the logpath I specified there would contain the db profiling
[00:22:39] <sqwishy> Does it make sense to have a replica set on the same machine?
[00:23:44] <_syn> further more.. I connected via Robomongo and looked in both the admin database and the database I was trying to get profiling on, and couldnt see a system.profile collection
[00:23:49] <_syn> only other system. collections
[00:24:27] <_syn> could it be because im running an older version of Mongo that this is a problem?
[00:36:31] <Doyle> sqwishy, why would you? Your IO would suffer. If you really don't have a second machine, then yes, but the dbpath should be on different physical drives/raids/whatever...
[00:37:44] <Doyle> If one of your dbpath storage devices goes bad, you'd have a secondary ready to take over, but if any other component of the system goes bad, both are down.
[00:50:26] <_syn> okay so I figured out that the reason why it wasnt appearing in robomongo is b/c I hadnt switched to the right database, but that still doesnt explain why I cant see any of the data from system.profile collection in my logfile..
[01:51:30] <joannac> _syn: why do you expect to see it in your logfile?
[01:52:52] <joannac> _syn: do the docs say somewhere that it also goes to the logfile? Link?
[02:01:22] <_syn> yeah
[02:01:25] <_syn> not really
[02:01:29] <_syn> http://tech.rhealitycheck.com/visualizing-mongodb-profiling-data-using-logstash-and-kibana/
[02:01:32] <_syn> thats why though
[02:01:48] <_syn> you can see he has specified the path as: path => "/var/log/mongodb/mongod.log"
[02:02:00] <_syn> which is indicative that the profiling will be there also
[02:02:17] <_syn> since .. thats what hes trying to monitor
[02:32:05] <joannac> _syn: yeah, that's from someone writing something not-wrong-but-ambiguous, and then someone else misinterpreting it
[02:32:19] <joannac> The important part of that is db.setProfilingLevel(1,50)
[02:32:33] <joannac> and it's not the profiling log level that's important, it's the slowms
[02:32:49] <joannac> which means anything slower than 50ms gets logged, which can then be processed
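joannac's point above: with `db.setProfilingLevel(1, 50)`, level 1 profiles only operations slower than the `slowms` threshold (50 ms here), and those slow operations also get written to the mongod log. A minimal Python sketch of that threshold test, using made-up `system.profile`-style documents (the `op`/`ns`/`millis` field names follow the profiler schema; the data is invented for illustration):

```python
# Filter profile-style documents the way slowms does: only operations
# whose execution time exceeds the threshold count as "slow" and get logged.
SLOW_MS = 50  # matches db.setProfilingLevel(1, 50)

def slow_ops(profile_docs, slow_ms=SLOW_MS):
    """Return the operations that took longer than slow_ms milliseconds."""
    return [doc for doc in profile_docs if doc.get("millis", 0) > slow_ms]

# Hypothetical system.profile-style entries.
docs = [
    {"op": "query",  "ns": "test.users", "millis": 120},
    {"op": "insert", "ns": "test.users", "millis": 3},
    {"op": "update", "ns": "test.users", "millis": 75},
]

print(slow_ops(docs))  # the 120 ms and 75 ms operations
```

With `slowms` at 100 (as _syn has it below), only the 120 ms operation would make it into the log.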
[02:33:26] <joannac> _syn: what's your slowMS value?
[03:21:49] <_syn> 100 joannac
[03:22:18] <_syn> when i do a db.system.profile.find() I can see lots of entries matching the slowMS value that I set too
[04:36:24] <joannac> _syn: okay, so what you're saying is, you see a query in system.profile that you do not see in the logs, and the query takes > 100ms?
[04:36:44] <joannac> can you pastebin the entry in system.profile and the log snippet ?
[04:38:01] <kashike> after spending hours earlier getting mongodb 3.0.6 to run on ubuntu 15.04, I wouldn't wish it upon my worst enemies
[04:42:01] <_syn> thats what im saying joannac , sure, give me a few
[06:56:25] <xcesariox> hi i have some error
[06:56:27] <xcesariox> require(process.cwd() + '/lib/connection'),
[06:56:30] <xcesariox> how is it wrong
[06:58:00] <xcesariox> https://gist.github.com/shaunstanislaus/17586024875e259d3912
[07:01:14] <xcesariox> BadCodSmell : could you help me out? https://gist.github.com/shaunstanislaus/17586024875e259d3912
[07:58:45] <KekSi> is there a way to stop logging this: 2015-09-10T07:54:54.836+0000 I SHARDING [Balancer] distributed lock 'balancer/0f3ac73dcf0d:27017:1441269234:1804289383' acquired, ts : 55f1374e8338f856aae518ed
[07:59:01] <KekSi> its spamming logs quite a lot
[07:59:45] <KekSi> 3 lines every 10 seconds seems excessive for something like that
[08:57:03] <nekyian> I have imported 300k documents in a collection during the night and now mongod service refuses to start... And there is nothing in the mongo /var/log/mongodb/mongod.log
[08:57:15] <nekyian> is there any other location where mongo keeps logs?
[09:07:55] <endikamgm> hi there, anyone with experience in pymongo? I would like to know the limitations of the array of dictionaries to bulk with the insert_many function
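On endikamgm's question: the hard server-side limit is per document (16 MB of BSON), and pymongo's `insert_many` already splits oversized batches into server-acceptable chunks, so the practical limit on the list itself is client memory. If you want to bound memory yourself, a simple chunking sketch (the `chunked` helper and the batch size of 1000 are illustrative, not a pymongo API):

```python
def chunked(seq, size):
    """Yield successive slices of seq, each no longer than size."""
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

# Hypothetical pymongo usage (collection assumed to exist):
#   for batch in chunked(docs, 1000):
#       collection.insert_many(batch)

docs = [{"n": i} for i in range(2500)]
batches = list(chunked(docs, 1000))
print([len(b) for b in batches])  # [1000, 1000, 500]
```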
[09:20:55] <joannac> nekyian: start on the command line, with just --dbpath and see what it says
[09:22:55] <KekSi> any word on minimizing the debug output generated on query routers for balancer rounds even with --quiet option?
[09:23:01] <KekSi> joannac
[09:23:45] <joannac> KekSi: define "debug output"
[09:32:28] <KekSi> joannac: what i posted earlier -- its 3 lines every 10 seconds that the balancer round started, ended and that it pinged all cluster members
[09:32:43] <KekSi> 2015-09-10T07:55:49.745+0000 I SHARDING [Balancer] distributed lock 'balancer/0f3ac73dcf0d:27017:1441269234:1804289383' acquired, ts : 55f137858338f856aae518f7
[09:32:46] <KekSi> 2015-09-10T07:55:50.128+0000 I SHARDING [Balancer] distributed lock 'balancer/0f3ac73dcf0d:27017:1441269234:1804289383' unlocked.
[09:32:49] <KekSi> etc
[09:32:54] <pwlarry> has anyone got any examples of component based designs eg. a post could have multiple components, a component can have a component type (which could have multiple field types <-- this is the part that confuses me)
[09:33:05] <pwlarry> component types reference templates
[09:33:18] <pwlarry> it's for constructing an article with multiple components
[10:44:25] <dcsquare> Anyone knows why when reconnecting to a replica set the C driver connects 4 time to each server?
[10:52:05] <Derick> hmm, I haven't noticed that it did that
[10:52:28] <Derick> perhaps it just retries if it doesn't work the first 3 times?
[10:54:32] <xcesariox> Derick: hi derick can you help me out? https://gist.github.com/shaunstanislaus/71efc7f1dbf9c670942a#file-employees-js-L4
[10:55:01] <xcesariox> Derick: i defined my employee but it says i didn't define my schema.
[10:55:07] <xcesariox> Derick: as in register.
[10:55:21] <Derick> xcesariox: it's not polite to single out a single user for questions. Ask the whole channel.
[10:55:38] <Derick> and I don't know anything about Mongoose
[10:56:02] <xcesariox> Derick: am i asking in the right channel or is there a channel called mongoose?
[10:56:25] <Derick> I don't know, I haven't looked for it.
[10:56:44] <xcesariox> Derick: i've looked. there isn't
[10:56:57] <Derick> xcesariox: so ask here, but don't ask a *user* (like me) directly.
[10:57:01] <xcesariox> can anyone please help me out with my mongodb (mongoose module) error? https://gist.github.com/shaunstanislaus/71efc7f1dbf9c670942a#file-employees-js-L4
[10:57:45] <xcesariox> Derick: you must be working as customer service.
[10:58:23] <Derick> I am not
[10:58:33] <xcesariox> Derick: then how could you not know about mongoose?
[10:58:48] <Derick> I know of mongoose
[10:58:52] <Derick> I just don't know how it works
[10:59:11] <xcesariox> Derick: you are not a js person?
[10:59:15] <Derick> no
[10:59:20] <xcesariox> Derick: okay that explains.
[10:59:43] <xcesariox> Derick: no one is replying to my question.
[11:00:03] <Derick> yeah, have patience
[11:14:36] <Derick> dcsquare: did you just post to mongodb-user ?
[11:14:46] <Derick> I don't think your log says it connects
[11:15:30] <Derick> from what I see, it connects to server1 (and sees 3 and 2) - 10 ms later it connects to 2 (and sees 3 and 1) - 210 ms later again it connects to 1 and sees 2 and 3
[11:22:29] <dcsquare> @Derick: yes
[11:23:29] <Derick> replying to your mail now :)
[11:23:59] <dcsquare> I don't understand the "sees" part
[11:24:09] <Derick> the log says:
[11:24:12] <Derick> > 2015/09/10 13:14:29.0235: [44619]: DEBUG: cluster: Registering potential peer: server1:27017
[11:24:19] <Derick> that doesn't mean it's making a connection...
[11:24:40] <Derick> the "Sees" means that after every server it connects to, it calls "isMaster" to find out other members of the replicaset
[11:25:54] <dcsquare> So "Registering potential peer" means that it calls "isMaster" on that peer?
[11:26:03] <Derick> it's a grouping:
[11:26:08] <Derick> 2015/09/10 13:14:29.0235: [44619]: DEBUG: cluster: Reconnecting to replica set.
[11:26:11] <Derick> 2015/09/10 13:14:29.0235: [44619]: DEBUG: cluster: Registering potential peer: server1:27017
[11:26:14] <Derick> 2015/09/10 13:14:29.0235: [44619]: DEBUG: cluster: Registering potential peer: server3:27017
[11:26:17] <Derick> 2015/09/10 13:14:29.0235: [44619]: DEBUG: cluster: Registering potential peer: server2:27017
[11:26:21] <Derick> it spoke to server1, and found server3 and server2 through isMaster
[11:27:02] <dcsquare> oh, so it got an answer from server1, that the replica set contains server2 and server3?
[11:27:11] <Derick> yup, that's what I expect
[11:27:56] <dcsquare> If that's true, I would expect each grouping to start with another server
[11:28:10] <dcsquare> but the first and the third both start with server1
[11:29:02] <Derick> true :-)
[11:29:12] <Derick> i'd check with a network analyser
[11:29:16] <Derick> brb, lunch
[11:29:21] <dcsquare> me too :)
[11:59:52] <pokEarl> Hey friends, so id like to connect to a remote mongodb that uses ssl for authentication, do i connect using a ssl tunnel or how should I/Do I go about this exactly? :(
[12:13:47] <pokEarl> wait thats supposed to be ssh
[12:15:47] <deathanchor> pokEarl: just like anything else... ssh remotehost -L 6666:localhost:27017; mongo --port 6666
[12:56:46] <nekyian> how do I count a very large collection in mongo without getting a timeout at 30 secs. I am using PHP and doing Model::count() and it breaks
[12:57:22] <Derick> nekyian: you need to set the cursor timeout to something higher
[12:57:35] <nekyian> I did that and it still breaks
[12:57:47] <Derick> which setting did you make?
[12:57:58] <nekyian> I have only 300k documents now in the collection, but I shall have 2 mil
[12:58:09] <nekyian> $total_records_nr = DocumentAPI::timeout(180000)->count();
[12:58:19] <Derick> that's not a driver setting
[12:58:28] <nekyian> how can I do the driver setting?
[12:59:11] <Derick> you need to set it on the Cursor: http://php.net/manual/en/mongocursor.timeout.php
[13:38:02] <CustosL1men> hi
[13:39:21] <CustosL1men> I have an array in my documents, and the array contains patterns with optional wildcards. I want some way to find all documents whose array contains a pattern that matches a specific string
[13:39:27] <CustosL1men> should I use map reduce for this ?
[14:35:19] <roo00t> hi
[14:36:00] <saml> hi roo00t
[15:41:14] <kaseano> hi, I'm still trying to figure out when to use FKs vs having nested data in mongo
[15:41:42] <kaseano> right now I have objects with nested arrays of objects 5 levels deep, should I pull some out into their own collection and use FKs?
[15:41:49] <StephenLynx> yes
[15:41:51] <StephenLynx> god, yes.
[15:42:16] <StephenLynx> I personally wouldn't go beyond the first nesting.
[15:42:17] <kaseano> thanks StephenLynx
[15:42:28] <kaseano> I feel more comfortable using FKs without any nested data aside from arrays of FKs
[15:42:46] <kaseano> oh I thought mongo performed better without the FKs though, like that was the whole point vs relational dbs
[15:42:56] <StephenLynx> just make sure you don't model it a way where you would require joins.
[15:43:05] <StephenLynx> otherwise you are better using a rdb.
[15:43:19] <kaseano> oh I get it, as long as you're only pulling 1 fk at at time, then separate collections work
[15:46:26] <StephenLynx> yeah.
[15:46:42] <StephenLynx> since you would have to perform the joins at application level or perform several queries.
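The "joins at application level" StephenLynx mentions usually means two separate queries plus an in-memory index on the foreign key. A Python sketch with plain dicts standing in for documents fetched from two collections (the `authors`/`posts` names and data are invented for illustration):

```python
# Two "collections" fetched with separate queries, joined in application code.
authors = [
    {"_id": 1, "name": "alice"},
    {"_id": 2, "name": "bob"},
]
posts = [
    {"_id": 10, "author_id": 1, "title": "first"},
    {"_id": 11, "author_id": 2, "title": "second"},
    {"_id": 12, "author_id": 1, "title": "third"},
]

# Build a lookup table once, then resolve each foreign key in O(1).
by_id = {a["_id"]: a for a in authors}
joined = [dict(p, author=by_id[p["author_id"]]["name"]) for p in posts]

print(joined[0])
```

The dict index makes the whole join O(n + m) instead of the O(n * m) you would get from a naive nested loop.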
[15:47:01] <StephenLynx> knowing when to use rdbs is very important when you start using nosql.
[15:47:20] <StephenLynx> because most people get into it thinking its a silver bullet and you can use it everywhere.
[15:47:59] <kaseano> well performing the joins via a hash at the server or even JS on the client performs really well, so mongo seems fine even when you do need to join
[15:51:49] <saml> sounds like http will be your bottleneck
[15:54:44] <StephenLynx> hm?
[15:55:00] <StephenLynx> what do you mean?
[16:19:11] <saml> how can I get ip or hostnames of all connected clients?
[16:20:16] <Owner> netstat -t
[17:20:38] <deathanchor> saml: I usually scrub the logs
[19:07:27] <deathanchor> having an issue trying to project sub-sub-doc info: https://gist.github.com/deathanchor/6a6774536630679dec9a
[19:39:13] <wflynnvey> Does anybody have any thoughts on the best way to warm up a secondary before promoting it to primary? Specifically when I have more data than available RAM? Everything I'm seeing online is 2-3 years old. What's current best practice?
[20:01:27] <acovrig> Is it possible to ‘search’ a DB: I.E. find an entry based on a key and/or value regex (as opposed to the find function I’ve seen that requires a key and a regex of a value)
[20:04:59] <StephenLynx> you can check if a key exists or not.
[20:05:11] <StephenLynx> aside from that, I don't think so.
[20:05:33] <acovrig> OK, so I guess my best option is to run an empty find() and pipe it through grep
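acovrig's fallback (dump everything and grep it) can also be done client-side in one pass: fetch the documents and test every key and every value against a regex. A Python sketch over plain dicts — a real run would iterate `collection.find()` instead of the sample list, and this only handles flat documents:

```python
import re

def matches(doc, pattern):
    """True if any key or stringified value in a flat doc matches pattern."""
    rx = re.compile(pattern)
    return any(rx.search(k) or rx.search(str(v)) for k, v in doc.items())

# Invented sample documents for illustration.
docs = [
    {"name": "widget", "color": "red"},
    {"title": "gadget", "size": 3},
]

print([d for d in docs if matches(d, r"gadg")])  # matches a value
print([d for d in docs if matches(d, r"^col")])  # matches a key
```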
[20:16:35] <deathanchor> wflynnvey: you can grab some slow queries from the primary and run them on the secondary
[20:16:50] <deathanchor> wflynnvey: that would load some info to memory
[20:17:49] <deathanchor> anyone good with find() projections? https://gist.github.com/deathanchor/6a6774536630679dec9a
[20:18:23] <cheeser> you can't
[20:18:41] <deathanchor> poop, so you can only do single level projection.
[20:19:14] <deathanchor> or aggregate and messing with things there with multi project/unwinds
[20:19:24] <cheeser> you can project as deep as you want. you're basically trying to do a nested query.
[20:19:33] <cheeser> unwinding in a project would work.
[20:19:47] <deathanchor> yeah, which aggregation would help with, but I'm trying to avoid the aggregation :D
[20:20:02] <cheeser> you *might* be able to use certain positional qualifiers to project that subdoc but i'm not sure.
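For readers unfamiliar with the `$unwind` stage cheeser and deathanchor are weighing up: it emits one output document per element of an array field, which is what makes sub-document projection tractable in the aggregation pipeline. A Python sketch of that behavior (the document shape is invented, since the gist isn't reproduced here):

```python
def unwind(doc, field):
    """Mimic aggregation $unwind: one copy of doc per element of doc[field]."""
    return [dict(doc, **{field: item}) for item in doc.get(field, [])]

doc = {"_id": 1, "items": [{"sku": "a"}, {"sku": "b"}]}
print(unwind(doc, "items"))
# one document per array element, with "items" replaced by that element
```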
[20:20:43] <deathanchor> oh well
[20:20:45] <deathanchor> no biggie
[20:20:46] <deathanchor> thx
[20:26:36] <wflynnvey> deathanchor: I didn't think of using slow queries, that's a good call. thx.
[20:27:39] <deathanchor> wflynnvey: you can figure out which queries are common, (using specific indexes) and run some of those to load the index to memory.
[20:29:16] <wflynnvey> deathanchor: I'm seeing using vmtouch or dd to warm up the filesystem cache, as well as using mongo touch() to page in data as well. Any thoughts on those?
[20:29:39] <deathanchor> no clue what those are
[20:33:55] <wflynnvey> @deathanchor thanks :)
[22:56:28] <jpfarias> hey guys
[22:56:30] <jpfarias> with pymongo
[22:56:41] <jpfarias> can I get a distinct count on a key?
[22:56:52] <jpfarias> or do I need to get all the values and then do a len(values)
[22:56:54] <jpfarias> ?
[23:01:37] <joannac> jpfarias: first result in google for "pymongo distinct"
[23:01:45] <jpfarias> I did that
[23:01:56] <jpfarias> distinct returns a list of the distinct values
[23:02:02] <jpfarias> I just want the count
[23:02:07] <jpfarias> not the actual values
[23:02:37] <joannac> Okay, then no
[23:02:50] <joannac> Even if there was, it would involve getting all the values and then counting them
[23:02:51] <ams__> .count() at the end but not sure if that's a great idea
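As the thread notes, `distinct` hands back the full list of unique values, so with pymongo the count is simply `len(collection.distinct("key"))` — there's no server-side shortcut that avoids materializing the values. The same idea on plain data (sample values are invented):

```python
# distinct() returns the unique values; counting them is just len().
values = ["a", "b", "a", "c", "b", "a"]  # sample field values

distinct_values = sorted(set(values))  # what distinct() would hand back
distinct_count = len(distinct_values)

print(distinct_values)  # ['a', 'b', 'c']
print(distinct_count)   # 3
```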
[23:28:28] <stickperson> pretty new to mongo and having a hard time with dbrefs/not using joins from rdbs. let’s say i have a collection of “things” and each thing has a list of keywords. how do i say “get me all the things that have these keywords?”
[23:29:01] <stickperson> and would it help if i had a separate collection of keywords i’m interested in. like so: http://jsfiddle.net/ffm33gq5/
[23:40:35] <joannac> stickperson: db.articles.find({keywords: {$in: ["one", "five"]}})
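joannac's query matches any document whose `keywords` array shares at least one element with the given list — `$in` against an array field is effectively an intersection test. The same predicate sketched in Python (the collection and sample documents are illustrative):

```python
def keywords_in(doc, wanted):
    """Mimic {keywords: {$in: wanted}}: match if any keyword is in wanted."""
    return any(k in wanted for k in doc.get("keywords", []))

articles = [
    {"title": "a", "keywords": ["one", "two"]},
    {"title": "b", "keywords": ["three"]},
    {"title": "c", "keywords": ["five", "six"]},
]

print([d["title"] for d in articles if keywords_in(d, ["one", "five"])])
# ['a', 'c']
```

To require *all* of the keywords rather than any, the shell operator would be `$all` instead of `$in`.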
[23:51:12] <dddh> no plans to implement $agg on gpu?
[23:58:58] <pEYEd> how do I show the data from an illegally named collection?
[23:59:00] <pEYEd> https://bpaste.net/show/8fc9ae616b94
[23:59:14] <joannac> mongodump?
[23:59:43] <cheeser> db.getCollection("some bad name").find()