#mongodb logs for Monday the 31st of August, 2015

[01:02:22] <jasonroyle> I had a mongo replica set configured and running across 3 hosts. Someone has now changed the IPs of the hosts
[01:02:39] <jasonroyle> All replica set members are in the REMOVED state
[01:03:25] <jasonroyle> I need to be able to get the same rs running again on the same hosts using the new IPs
[01:03:41] <jasonroyle> Can anyone suggest anything?
[01:09:56] <joannac> jasonroyle: google 'mongodb force reconfig'
[01:10:20] <jasonroyle> @joannac just looking at that now
[01:23:23] <jasonroyle> Hmm I tried force reconfig on all hosts using the same details
[01:23:43] <jasonroyle> It's set each host to SECONDARY state
[01:24:18] <jasonroyle> And successfully saved the config but the hosts are still not talking
[01:25:52] <jasonroyle> I have set MONGODB-01, MONGODB-02, MONGODB-03 in my hosts file and used these as the host names in the config
[01:26:01] <jasonroyle> Should that work?
[01:26:27] <joannac> test if you can connect using those hostnames?
[01:45:54] <jasonroyle> @joannac ... silly me the IPs changed so I need to update the firewall settings
[01:46:34] <jasonroyle> I think my brain has switched off!
[01:55:00] <jasonroyle> @joannac legend, thank you, it's up!
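A minimal sketch of the force-reconfig approach joannac pointed to, assuming the default port and the three hosts-file names jasonroyle mentions:

    // in the mongo shell, connected to one of the members
    cfg = rs.conf()
    cfg.members[0].host = "MONGODB-01:27017"
    cfg.members[1].host = "MONGODB-02:27017"
    cfg.members[2].host = "MONGODB-03:27017"
    rs.reconfig(cfg, { force: true })   // force is required while the set has no primary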
[04:48:28] <oRiCLe> hey all, just a quick one: looking at the findAndModify function, if I have multiple instances running it against a queue, each using findAndModify to mark the record it's working on as owned by its instance ID, am I going to experience any overlaps, or will each record be claimed uniquely when there are many simultaneous requests? :)
[04:49:56] <joannac> as long as you structure it right, there should be no overlaps
[04:50:34] <oRiCLe> well the findAndModify would be looking for any records without an "instance_id" key in them, and setting one, basically
[04:54:08] <joannac> should work. might not be very efficient though
[05:09:09] <oRiCLe> hmm thanks, are there any other suggestions for that style of approach? was looking for a Redis-style push and pop (not going to be high load, so efficiency isn't too much of a problem), just didn't want a record to get processed twice
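A sketch of the pattern oRiCLe describes; findAndModify operates on a single document atomically, so two workers can never claim the same record (the collection, field, and worker names here are placeholders):

    db.queue.findAndModify({
        query:  { instance_id: { $exists: false } },                             // any unclaimed record
        update: { $set: { instance_id: "worker-42", claimed_at: new Date() } },  // claim it
        new:    true                                                             // return the claimed document
    })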
[08:19:07] <netcho> hi all
[08:20:37] <netcho> I have installed mongodb 2.6.11 in a Docker container, and when I run it I get an error
[08:20:45] <netcho> supervisorctl
[08:20:47] <netcho> mongo FATAL can't find command '/opt/mongodb-linux-x86_64.2.6.11/bin/mongod'
[08:21:33] <netcho> root@3d5e96642a01:/# ls /opt/mongodb-linux-x86_64-2.6.11/bin/
[08:21:34] <netcho> bsondump mongod mongoexport mongoimport mongoperf mongos mongotop
[08:21:36] <netcho> mongo mongodump mongofiles mongooplog mongorestore mongostat
[08:26:30] <netcho> i'm running it with supervisor
[08:26:56] <netcho> [program:mongo]
[08:26:58] <netcho> command=/opt/mongodb-linux-x86_64.2.6.11/bin/mongod --config /opt/mongodb-linux-x86_64.2.6.11/mongo.conf
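Judging by the ls output above, the directory is mongodb-linux-x86_64-2.6.11 (hyphen before the version) while the supervisor command uses x86_64.2.6.11 (dot), which would explain the "can't find command" error. A corrected stanza would presumably be:

    [program:mongo]
    command=/opt/mongodb-linux-x86_64-2.6.11/bin/mongod --config /opt/mongodb-linux-x86_64-2.6.11/mongo.conf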
[10:51:18] <arussel> I have a mongo instance running on AWS, a few queries per second, none above 100ms, but AWS shows CPU usage at 100%. Is that expected?
[11:03:17] <cheeser> and mongo's eating that cpu? or something else is?
[11:08:55] <arussel> cheeser: mongo (using top)
[11:09:22] <arussel> and same for the secondary server
[11:09:31] <arussel> (so unlikely to be a hardware problem)
[11:13:11] <arussel> I've got 537 page faults in admin ...
[11:14:50] <arussel> how can I have so many page faults for the admin db ?
[11:22:18] <cheeser> check the logs. something's definitely screwy there.
[11:22:43] <cheeser> high RAM usage would not be abnormal. but eating all your CPU is definitely not right.
[11:28:28] <arussel> my indexes are 600M and my ram is 3.75G, why do I have so many page faults ...
[11:29:10] <yopp> arussel, you're on 3.0.6?
[11:29:25] <arussel> no, 2.6
[11:29:33] <yopp> ah.
[11:29:59] <arussel> 2.6.9
[11:30:29] <yopp> We're seeing some issues with high CPU usage in 3.0.6, but it's related to WiredTiger, not mmap
[11:30:45] <yopp> check mongotop
[11:31:08] <yopp> and if there's any collection popping up with unusual read/write times, try to validate that collection
[11:35:08] <arussel> what would be 'unusual'? 100ms? 2s?
[11:35:52] <arussel> I do have some long inserts (200ms), but otherwise the server is very quiet ...
[11:35:59] <arussel> nothing to explain such high cpu
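A few shell checks that can help narrow this down on a 2.6 MMAPv1 node (a sketch, not a full diagnosis):

    db.serverStatus().extra_info     // page_faults counter (MMAPv1 on Linux)
    db.serverStatus().mem            // resident vs. mapped memory
    db.serverStatus().connections    // current vs. available connections
    db.currentOp({ active: true })   // anything long-running right now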
[11:51:00] <velikan> hello everyone! does anybody know how to deal with "exception: unknown group operator '$cond'"
[11:51:05] <velikan> in aggregation framework
[11:51:37] <cheeser> what version are you running? what does your pipeline look like?
[11:51:54] <velikan> cheeser: I'm running version 3
[11:52:01] <cheeser> 3.0?
[11:53:02] <velikan> cheeser: 3.0.1
[11:53:20] <velikan> but in production I'm running 2.6.10
[11:53:31] <velikan> cheeser: please take a look https://gist.github.com/velikanov/0cb2dfa17a32984e0fed#file-query-json
[11:55:26] <velikan> oh, I got it
[11:55:27] <velikan> thanks
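For anyone hitting the same error: $cond is an expression, not an accumulator, so inside $group it has to be wrapped in an accumulator such as $sum. A sketch with made-up collection and field names:

    // fails:  { $group: { _id: "$status", paid: { $cond: [ ... ] } } }
    // works:
    db.orders.aggregate([
        { $group: {
            _id: "$status",
            paid: { $sum: { $cond: [ { $eq: [ "$paid", true ] }, 1, 0 ] } }
        } }
    ])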
[12:53:29] <arussel> a completely idle DB is doing 75% CPU, any kind of query pushes it to 100%, and I can't find a trace of activity anywhere
[12:53:40] <arussel> mongotop, log, db.currentOp() ...
[12:54:08] <arussel> Both on primary and secondary
[12:54:11] <cheeser> nothing in the logs?
[12:54:27] <cheeser> maybe a kink in the replication?
[12:54:36] <cheeser> errant index build?
[12:54:44] <arussel> even the f**g arbiter is at 100% CPU
[12:54:53] <cheeser> that it's happening on both makes me wonder about replication or indexing.
[12:55:12] <cheeser> disk space?
[12:55:27] <arussel> the arbiter is neither replicating nor indexing anything, is it
[12:56:15] <cheeser> is the arbiter on its own hardware?
[12:57:02] <arussel> all of them are on their own HW
[12:57:23] <cheeser> strange.
[13:04:09] <arussel> is it normal that the secondary has over 300 connections open to the primary?
[13:05:03] <arussel> [clientcursormon] connections:333, replication threads:32
[13:07:46] <deathanchor> arussel: disk i/o?
[13:09:38] <arussel> on the primary, under 2 million bytes
[13:10:11] <arussel> I'm seeing very high page faults too, when my indexes are 600M and my RAM is 3.5G
[13:10:50] <arussel> how can an arbiter be at 100% CPU?
[13:11:36] <cheeser> sounds like it's time for a support ticket. or at least a post to mongodb-users
[13:20:10] <deathanchor> does a mongoexport require a db lock?
[13:22:05] <cheeser> probably. mongodump does.
[13:23:17] <kali> i think it's an option
[13:41:25] <Trudko> hi guys, is there a problem importing ISODate from CSV using mongoimport? do I have to use JSON with $date instead?
[14:04:17] <Trudko> guys, I am getting a simple error when importing a JSON file into mongo using mongoimport: exception: BSON representation of supplied JSON array is too large: code FailedToParse: FailedToParse: Date expecting integer milliseconds: offset:31 file: http://pastie.org/10387465
[14:09:38] <deathanchor> Trudko: dates in json are epoch seconds: ex. { "$date" : 1433721951672 }
[14:09:47] <deathanchor> sorry epoch milliseconds
[14:10:42] <Trudko> changed to http://pastie.org/10387474 but same error
[14:10:54] <Trudko> PS: the command I am using is mongoimport --port 3001 -d meteor -c posts --jsonArray --file post-import.json --headerline
[14:11:10] <Trudko> *removed the headerline
[14:11:23] <deathanchor> remove the " around the number
[14:14:22] <deathanchor> you should always do a mongoexport first to see the format the data should be in; mongoimport is very, very picky.
[14:14:23] <Trudko> deathanchor: lol ofc
[14:14:28] <Trudko> yeah
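So the shape mongoimport 2.6 accepts for a date is an unquoted millisecond count under "$date"; a minimal --jsonArray file (field names made up) looks like:

    [
        { "title": "first post", "createdAt": { "$date": 1433721951672 } }
    ]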
[14:14:39] <Trudko> and is there some problem when importing CSV? I read something on Stack Overflow
[14:14:42] <Trudko> I mean with dates...
[14:15:08] <deathanchor> some momo in my company did a find().pretty() outputting to a file and I'm like, that's not a freaking backup!
[14:15:31] <deathanchor> I don't know, I don't use csv imports
[14:15:45] <cheeser> mongodump is a better approach. mongoexport will convert to json and you'll lose certain type information.
[14:16:15] <deathanchor> cheeser: well he's importing from csv
[14:16:26] <cheeser> ewww. ;)
[14:16:29] <deathanchor> I'm sure he's migrating from an RDBMS to mongo or something of the like
[14:16:57] <Trudko> sure, but I have existing CSV data which needs to be imported into the db, and I'd rather not do that by hand
[14:17:11] <deathanchor> Trudko: there is this option, which is basically import it as string and postprocess it on mongo: http://stackoverflow.com/questions/22890082/convert-to-date-mongodb-via-mongoimport
[14:17:31] <Trudko> deathanchor: yeap i found that one
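That linked option, sketched in the shell against the posts collection Trudko is importing into (the "when" field name is a placeholder): import the date column as a plain string, then convert it in place:

    db.posts.find({ when: { $type: 2 } }).forEach(function (doc) {    // $type 2 = string
        db.posts.update({ _id: doc._id }, { $set: { when: new Date(doc.when) } });
    });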
[14:17:58] <deathanchor> yeah, the only other method is to write your own program to read your data and update the db
[14:18:34] <deathanchor> something like python where you interpret the date then do an insert via pymongo into your db
[14:19:09] <deathanchor> Trudko: you might be able to find someone else's code that might do that for you :)
[14:19:32] <Trudko> well, I am writing a connector; originally it produced CSV, but I had problems with dates so I am experimenting with JSON now
[14:20:16] <deathanchor> godspeed.
[17:20:42] <deathanchor> anyone know of a good way to search for documents in a collection that are approaching 16MB?
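One way to do that from the shell; it is a full collection scan, so it will be slow on a big collection, and "mycoll" is a placeholder:

    db.mycoll.find().forEach(function (doc) {
        var size = Object.bsonsize(doc);           // BSON size in bytes
        if (size > 15 * 1024 * 1024) {             // within ~1MB of the 16MB document cap
            print(doc._id + " : " + size + " bytes");
        }
    });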
[18:36:51] <jr3> If I'm building up an in-memory object to be saved at the very end of a processing cycle, should I be saving throughout processing every so often? currently we wait till the end to save, and in some cases this results in a 4MB object being written to mongo
[18:37:01] <jr3> which can take around ~3.5 seconds to do
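The incremental-save approach jr3 is asking about would look roughly like this sketch, flushing each finished piece with $set instead of rewriting the whole document at the end (collection and field names are invented):

    var jobId = ObjectId();                        // placeholder id for the in-progress document
    var step3Result = { /* partial output */ };
    db.results.update(
        { _id: jobId },
        { $set: { "parts.step3": step3Result } },  // write each finished piece as it completes
        { upsert: true }
    );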
[19:50:31] <durre> I want to store some dates (without time). is it possible / a good idea to store them as strings? can I do stuff like db.myDocs.find({when: {$gt: "2015-08-31"}})
[19:51:58] <StephenLynx> you could also do that if you were storing dates.
[19:52:05] <StephenLynx> I strongly suggest you store date objects.
[19:54:21] <durre> but then I have to think about timezones :(
[20:11:16] <saml> hey, I see lots of NETWORK logs from replica set members. And then very slow QUERY and COMMAND
[20:11:21] <saml> how can I diagnose this?
[20:12:07] <saml> 2015-08-31T15:20:25.616-0400 I NETWORK [initandlisten] connection accepted from *.*.*.*:53509 #12468284 (3031 connections now open)
[20:12:24] <saml> lots of these where *.*.*.* is ip of replica set members
[20:12:41] <saml> then first query took 53282ms
[20:21:50] <StephenLynx> no you don't
[20:21:57] <StephenLynx> mongo stores all timezones as uct
[20:21:59] <StephenLynx> durre
[20:22:11] <StephenLynx> udt
[20:22:18] <StephenLynx> that universal something
[20:22:22] <StephenLynx> utc?
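What StephenLynx is reaching for is UTC: BSON dates are stored as UTC milliseconds, so durre's range query works the same with real Date objects (using the collection from durre's example):

    db.myDocs.insert({ when: new Date("2015-08-31") });          // stored as a UTC timestamp
    db.myDocs.find({ when: { $gt: new Date("2015-08-31") } });   // same comparison as the string version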
[20:23:31] <saml> 5630 connections now open, what is this?
[20:23:36] <saml> why so many open connections?
[20:29:57] <cheeser> you're probably opening new clients when you don't need to.
[20:30:16] <cheeser> or not properly closing cursors if you're not exhausting them.
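Two quick shell checks for what cheeser is suggesting (both fields exist on 2.6 and 3.0):

    db.serverStatus().connections      // current vs. available vs. totalCreated
    db.serverStatus().metrics.cursor   // open and timed-out cursor counts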
[20:34:06] <StephenLynx> lel
[20:35:19] <saml> but would opening many connections slow down mongod?
[20:35:36] <saml> I'm trying to see if Linux has limits on the number of open connections
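The usual places to look for those limits on Linux (paths and names can differ by distro):

    # per-process file-descriptor limit of the running mongod
    grep "open files" /proc/$(pidof mongod)/limits
    # kernel-wide caps
    sysctl fs.file-max net.core.somaxconn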
[20:37:25] <saml> how do you monitor mongod performance?
[20:37:56] <saml> mongod brought my site down twice today. the node.js app was waiting minutes for mongod queries to return
[20:38:17] <saml> restarted mongod and that fixed it for now.
[20:38:43] <saml> 2015-08-31T15:20:26.548-0400 I QUERY [conn11797566] killcursors keyUpdates:0 writeConflicts:0 numYields:0 locks:{} 670412ms
[20:38:50] <saml> did things like this
[20:57:14] <saml> it's happening again. such slow mongod what do i do
[20:57:46] <cheeser> do you see a bajillion connections again?
[21:04:09] <deathanchor> netstat -an and count how many time_wait and close_wait you got?
[21:23:23] <saml> cheeser, no, I see there's only 150MB left in RAM. no swap used yet
[21:28:43] <Melamo> I recently took on a legacy app backed by mongo 2.4 on the production server. I installed the latest 3.0.x today, and noticed some breakage while using the 3.x mongo client on the 2.4 server. I'm new to mongo and am not familiar w/ the major version changes. What big versions changes should I be aware of?