PMXBOT Log file Viewer


#mongodb logs for Monday the 23rd of May, 2016

[06:30:43] <jbu> hi all, I have an object which must keep track of a potentially long (worst case) list of IDs (ints) and possibly check for existence in this list as well as remove elements from the list. I'm new to document DBs, and realize this might be better suited for RDBMS, but is there a good way about doing this for mongodb?
[06:33:03] <jbu> nevermind, i was able to find a solution to this common issue
[10:12:39] <mick27> Hey folks, I am trying out Cloud Manager. My replica set has issues; for instance, the primary has high CPU all the time, north of 90%. Where should I start to understand what's up?
[10:28:43] <Zelest> Who may I contact regarding a typo/bug in the documentation at http://mongodb.github.io/mongo-php-library/tutorial/crud/ ?
[10:30:19] <Derick> jira.mongodb.org/browse/PHPLIB - or make a PR against https://github.com/mongodb/mongo-php-library/blob/master/docs/tutorial/crud.md :-)
[10:30:37] <Zelest> Ah :)
[11:52:07] <Zelest> Derick, I think I did right... https://github.com/mongodb/mongo-php-library/pull/179 :D
[11:53:32] <Derick> looks good - i'll let jeremy merge it
[11:54:06] <Zelest> \o/
[13:34:38] <NYTimes> hey
[13:35:48] <NYTimes> who's using the official image of mongo in a docker container ?
[14:12:21] <fiatjaf> exit
[16:43:57] <serff> i'm trying to insert a json document in a file into mongo that has a "$date" / "$currentDate" and it's telling me fields can't start with "$", yet every example I see does it this way. I need the document to be a valid json document, so I can't use ISODate()/new Date(). can anyone help?
[16:44:31] <StephenLynx> in some places you can refer to a field using $
[16:44:40] <StephenLynx> that doesn't mean the name of the field contains it.
[16:44:45] <cheeser> field names in documents that are stored can't start with "$"
[16:45:12] <serff> so how would you insert a document with a date?
[16:45:18] <cheeser> in an aggregation pipeline "$foo" means "use the value of the field named 'foo' rather than the literal value of 'foo'"
[16:45:25] <StephenLynx> date : dateObject
[16:48:26] <serff> well: {"now":"2013-10-21T13:28:06.419Z"} wouldn't insert as a date object, right?
[16:48:35] <StephenLynx> no
[16:48:37] <serff> it would be a string?
[16:48:39] <StephenLynx> yes
[16:48:56] <StephenLynx> you would have to use ISODate() or something
[16:49:01] <StephenLynx> if using the terminal
[16:49:01] <serff> and {"now": new Date()} isn't valid json
[16:49:11] <StephenLynx> neither are
[16:49:14] <StephenLynx> it depends on the driver
[16:49:26] <StephenLynx> neither = new Date or ISODate
[16:49:41] <StephenLynx> is the driver's task to parse the object you gave and insert a date
[16:49:42] <serff> ya, i’m trying to figure out how to do this outside of the terminal. or like mongo test < mytest.json
[16:49:48] <StephenLynx> node.js?
[16:49:55] <serff> java driver
[16:49:58] <StephenLynx> I dunno then
[16:50:03] <StephenLynx> cheeser might know
[16:50:23] <cheeser> document.put("now", new Date())
[16:50:49] <serff> every example I've seen says to do {"date": {"$date": 1234564655}}
[16:51:10] <serff> that’s not valid json either cheeser
[16:51:41] <serff> the "$date" is valid json, but the java driver says the field can't start with $.
[16:51:42] <cheeser> it's not?
[16:51:57] <cheeser> "$date" is a valid json field name
[16:52:02] <cheeser> $date is not.
[16:52:15] <serff> right
[16:52:27] <kurushiyama> serff: Iirc, Date is accepted.
[16:52:54] <uuanton> funny question. Can I run 2.6 and 3.2 under same replica set ?
[16:52:59] <kurushiyama> no
[16:53:05] <cheeser> yes, you can
[16:53:18] <cheeser> otherwise rolling upgrades wouldn't be possible
[16:53:18] <kurushiyama> cheeser: Only one minor release.
[16:53:30] <cheeser> oh, well, yeah. that leap might be too large.
[16:53:32] <uuanton> im trying to migrate from 2.6 to 3.2 prod cluster
[16:53:44] <cheeser> you'll want to bounce across 3.0 first in any case.
[16:53:47] <serff> what do you mean kurushiyama? in a json document?
[16:53:48] <kurushiyama> uuanton: Then you have to first update all to 3.0
[16:53:59] <kurushiyama> serff: No, in your processing logic ;)
[16:54:35] <cheeser> in your java code you wouldn't be dealing with json documents anyway. you'd have a Document reference which is bson internally.
[16:54:47] <serff> i’m taking a json document and parsing it into a Document (java) then inserting that
[16:54:50] <uuanton> is there a online document somewhere 2.6 -> 3.0 -> 3.2 ?
[16:55:11] <cheeser> uuanton: https://docs.mongodb.com/v3.0/release-notes/3.0-upgrade/
[16:55:13] <serff> really my problem is that I’m trying to use JOLT transforms and it won’t transform unless it’s valid json
[16:55:20] <cheeser> https://docs.mongodb.com/v3.2/release-notes/3.2-upgrade/
[16:55:53] <serff> so using “$date” is valid, but then the java driver gives me: java.lang.IllegalArgumentException: Invalid BSON field name $date
[16:56:09] <cheeser> just use new Date()
[16:56:39] <uuanton> cheeser muchas gracias
[16:56:46] <cheeser> de nada
[16:57:28] <serff> cheeser, i can’t put new Date() in the json document. it’s not valid json.
[16:57:49] <cheeser> the java driver doesn't work on json documents
[17:01:11] <serff> like i said, my real problem is im using a JOLT transform first. so I go json -> JOLT Transform -> Java convert to Document -> Mongo
[17:01:18] <serff> the json has to be valid json to transform it
[17:01:46] <cheeser> how are you parsing the json to get it in to the driver?
[17:02:30] <serff> final Document doc = Document.parse(new String(content, charset));
[17:05:01] <cheeser> Document.parse("{\"date\": {\"$date\": 1234564655}}")
[17:05:03] <cheeser> works for me
[17:05:34] <serff> now try to insert that
[17:08:53] <cheeser> worked just fine
[17:09:45] <cheeser> Document document = Document.parse("{\"date\": {\"$date\": 1234564655}}");
[17:09:48] <cheeser> database.getCollection("test").insertOne(document);
[17:10:51] <serff> what version of the driver are you using?
[17:11:04] <cheeser> master
[17:11:11] <cheeser> 3.3.0-SNAPSHOT
[17:11:19] <cheeser> but this isn't new behaviour
[17:11:58] <cheeser> if it was going to fail, it'd fail on the parse. after that, there's a java.util.Date in the Document not a subdocument with "$date" in a field name.
[17:14:00] <serff> ya, that’s what i would think…weird…looking at what’s different..
[17:14:34] <cheeser> can you post a recreation of this somewhere?
[17:15:07] <serff> does $currentDate work in that same test?
[17:18:36] <cheeser> yep
[17:18:48] <cheeser> Document update = Document.parse("{$currentDate: { lastModified: true, \"cancellation.date\": { $type: \"timestamp\" }},$set: { status: \"D\", \"cancellation.reason\": \"user request\"} }");
[17:18:52] <cheeser> database.getCollection("test").updateOne(find, update);
[17:20:55] <serff> weird. ok, thanks. i’ll keep looking at this
[17:22:06] <cheeser> the validations done to an update document are different than those of a ... "data" document.
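The round trip cheeser demonstrates above is the key point: the extended-JSON {"$date": ...} wrapper is consumed by the driver's parser and never stored as a literal field name. For readers working in Python rather than Java, a minimal sketch of the same idea using pymongo's bson.json_util (the URI, database, and collection names are placeholders):

    from datetime import datetime
    from bson import json_util
    from pymongo import MongoClient

    # json_util.loads understands MongoDB Extended JSON, so the "$date"
    # wrapper is converted into a real datetime before insertion; no field
    # name starting with "$" ever reaches the server.
    doc = json_util.loads('{"date": {"$date": 1234564655}}')
    assert isinstance(doc["date"], datetime)

    client = MongoClient("mongodb://localhost:27017")   # placeholder URI
    client["test"]["dates"].insert_one(doc)             # placeholder namespace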
[17:22:46] <saml> how can i flatten a document so that i can grep?
[17:23:21] <cheeser> what?
[17:23:23] <saml> i'm given a collection where documents are pretty much unstructured. and I need to find all documents that contain some pattern
[17:23:36] <hardwire> are you doing this from the command line?
[17:23:51] <hardwire> if so.. install jq and learn how to use the awesomeness it is.
[17:23:54] <saml> i'm writing a program
[17:24:03] <hardwire> saml: your program calls /sbin/grep ?
[17:24:09] <saml> i don't know which field i need to look at
[17:24:26] <hardwire> err bin :)
[17:24:30] <saml> db.docs.find({'*': /foobar/}) i want something like that
[17:24:51] <hardwire> oh.
[17:24:56] <hardwire> do you only need to do this once?
[17:25:01] <hardwire> like for data cleaning?
[17:25:04] <saml> yup
[17:25:07] <hardwire> mongodump
[17:25:41] <saml> yeah i thought about that.. but then i don't get _id field or i need to parse json to get _id once i find match
[17:25:57] <hardwire> uhm.
[17:25:58] <hardwire> yes?
[17:26:03] <saml> but that seems to be the most reasonable way
[17:26:07] <hardwire> are you a programmer or a whiner baby?
[17:26:11] <cheeser> easy
[17:26:17] <saml> i'm a whiner baby
[17:26:26] <hardwire> didn't you read the sign?
[17:27:38] <saml> actual task is to find all <iframe>s from certain video hosts, and then find the canonical id of the video using each video provider's api
[17:27:52] <hardwire> you could use pymongo in a snap
[17:27:56] <saml> so such html is in various different fields
[17:28:24] <cheeser> you'd be better off pulling the html through parser and pattern matching the DOMs
[17:28:50] <saml> field names are generated dynamically. so it's like components.fieldtype[i].fielddata
[17:29:30] <saml> components is an array of objects where a key could be video[0] or video or videoembed or video[2] or movieembed or html[24] ...
[17:30:25] <saml> i could walk components tree of each document. find html-like fields. make an html element out of them. run html parser. find <iframe> . filter by hosts
[17:31:37] <hardwire> or you could use mongodump or mongoexport
[17:31:52] <saml> yeah
[17:32:26] <saml> i could unassign myself from this trello card and give it to frontend dev
[17:39:06] <StephenLynx> kek
[17:40:02] <hardwire> saml: https://gist.github.com/whardier/b9e938aa852f83a7956b5f60a4dfbf72
[17:40:51] <cheeser> 404
[17:40:55] <hardwire> boooo!
[17:41:00] <cheeser> whoa. now it works.
[17:41:02] <cheeser> weird
[17:44:09] <saml> github is weird
[17:44:29] <hardwire> saml: you python at all?
[17:44:32] <saml> 12:53 EDT Some users may experience a delay in pushes or other changes appearing on the site.
[17:44:42] <saml> i'm python
[17:46:05] <hardwire> saml: https://gist.github.com/whardier/5c233778a716a053f78b1d7998e40dda
[17:46:20] <saml> oh thanks hardwire
[17:46:21] <hardwire> replace 'worries' with a regular expression or keyword you're searching for.
[17:46:30] <saml> didn't know json_util.dumps
[17:47:38] <hardwire> it's pretty special for mongodb so that it encodes a few things properly
[17:48:36] <hardwire> but the base python json module can still do the work if you create a new encoder class
[17:48:39] <hardwire> http://stackoverflow.com/questions/16586180/typeerror-objectid-is-not-json-serializable
[17:48:44] <hardwire> which is pretty much all that module does.
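The gists linked above are not reproduced in the log, but the approach hardwire describes — serialize each document with json_util.dumps and "grep" the flat string with a regex, printing the _id on a match — would look roughly like the sketch below. The URI, namespace, and the iframe/host pattern are placeholders.

    import re
    from bson import json_util
    from pymongo import MongoClient

    # Placeholder pattern: <iframe> tags pointing at a couple of example hosts.
    pattern = re.compile(r"<iframe[^>]+(youtube|vimeo)", re.I)

    client = MongoClient("mongodb://localhost:27017")    # placeholder URI
    for doc in client["mydb"]["docs"].find():            # placeholder namespace
        # json_util.dumps knows how to serialize ObjectId, datetimes, etc.,
        # so each document becomes one flat JSON string that can be searched.
        if pattern.search(json_util.dumps(doc)):
            print(doc["_id"])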
[17:57:26] <StephenLynx> github is crap
[17:58:02] <hardwire> ?
[18:02:26] <StephenLynx> <saml> github is weird
[18:17:40] <hardwire> hey, so.. Ubuntu 14.10 packages for MongoDB?
[18:18:06] <hardwire> I'm not sure what the "clang" version is.
[18:19:31] <cheeser> https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/
[18:19:48] <cheeser> the 14.04 packages should install ok
[18:20:03] <hardwire> different init system
[18:20:05] <hardwire> but ok
[18:20:20] <cheeser> oh. interesting. that seems drastic within the same version line...
[18:21:31] <hardwire> it'll be fine for development until it is straight forward
[18:21:44] <kurushiyama> Well, I'd always use LTS, if I could not avoid using Ubuntu altogether.
[18:22:02] <hardwire> err.. I didn't mean 14.04
[18:22:05] <hardwire> err
[18:22:07] <hardwire> 14.190
[18:22:08] <hardwire> asld;kfjasdklfj
[18:22:10] <hardwire> long day already
[18:22:14] <cheeser> gentoo++
[18:22:15] <hardwire> I'm using 16.04
[18:22:22] <cheeser> wut? :)
[18:25:11] <kurushiyama> hardwire: Good time to take a long break called home time ;) 10h is mandatory by law in Germany ;)
[18:25:24] <hardwire> I've only been here an hour
[18:26:39] <hardwire> https://www.digitalocean.com/community/tutorials/how-to-install-mongodb-on-ubuntu-16-04
[18:26:50] <kurushiyama> hardwire: Well, if you feel the day has been too long already, it probably was.
[18:26:51] <hardwire> that has the systemd service config I was concerned with
[18:27:19] <kurushiyama> hardwire: My suggestion: Dont.
[18:27:29] <hardwire> not sure I follow
[18:27:44] <kurushiyama> hardwire: What exactly would be your advantage by using 16.04 compared to 14.04?
[18:28:00] <hardwire> what's my advantage of using 14.04 vs 12.04?
[18:28:21] <hardwire> I'm not really a boycott systemd kinda person.
[18:28:26] <hardwire> it's a nice solution
[18:28:29] <kurushiyama> I agree
[18:28:32] <kurushiyama> totally
[18:28:45] <kurushiyama> BUT: 14.04 is supported by MongoDB as of now.
[18:28:50] <hardwire> keeping servers up to date is a priority.
[18:28:51] <kurushiyama> 16.04 is not.
[18:28:56] <kurushiyama> Aye
[18:29:00] <kurushiyama> And hence LTS.
[18:29:07] <hardwire> 16.04 is LTS :)
[18:29:08] <kurushiyama> Security patches delivered.
[18:29:22] <kurushiyama> hardwire: Aye, but not officially supported by MongoDB
[18:29:31] <hardwire> I imagine that will change
[18:29:44] <kurushiyama> If you can wait for it...
[18:29:45] <hardwire> if not, I'm not too worried. Just was a bit surprised it wasn't available
[18:29:49] <kurushiyama> As of now, it is not.
[18:30:02] <hardwire> I don't need to set up 14.04 boxes just for MongoDB
[18:30:11] <hardwire> I can always dockerize it as well, if needed.
[18:30:59] <kurushiyama> Well... I have played with dockerized MongoDB a bit. The additional layer does not bring any advantages, imho.
[18:31:38] <kurushiyama> Metal, supported OS, MongoDB version > 3.0.0 . KISS.
[18:31:43] <hardwire> I think the only reason I'd do it is because we have a cluster of docker hosts.
[18:31:58] <hardwire> Most of those are going away as well.
[18:32:36] <hardwire> I'm pretty happy with docker and lxc/runc but it's not needed in any way any more.
[19:18:11] <enoch> hi all
[19:18:23] <enoch> [initandlisten] Assertion: 16762:Automatic environment recovery failed. There may be data corruption.
[19:18:44] <enoch> how to fix it?
[19:23:37] <kurushiyama> enoch: Too little fame. How to fix it? ;P Really, a little more info would be helpful.
[19:24:16] <enoch> IS mongod --repair safe?
[19:24:48] <enoch> i don't have much info
[19:24:58] <enoch> the server crashed and now mongo doesn't start
[19:24:58] <enoch> lol
[19:25:36] <enoch> http://pastebin.com/nDTAuAFd
[19:25:46] <kurushiyama> Single server or replset?
[19:26:56] <enoch> single
[19:27:59] <kurushiyama> "Safe" as in "you are guaranteed to get your data back"? No, not at all. The database is only guaranteed to be in a usable state after you run --repair.
[19:28:04] <cheeser> probably your only option short of a restore from backup. nothing in the logs?
[19:31:09] <enoch> nothing
[19:31:17] <enoch> i think this server is not safe anymore
[19:31:33] <enoch> btw we should have a backup
[19:32:33] <kurushiyama> "Should have a backup" always makes my toenails curl.
[19:36:40] <enoch> I'm not the sysadmin
[19:37:09] <cheeser> www.mongodb.com/cloud ;)
[19:43:42] <kurushiyama> enoch: Uh, that is ... not good...
[19:58:02] <enoch> hi
[19:58:08] <enoch> mongod --repair is not working
[19:58:09] <enoch> http://pastebin.com/nwi8FsW6
[19:58:17] <enoch> maybe i have some bin corrupted?
[19:58:29] <cheeser> oh. tokumx.
[19:59:07] <enoch> ??
[19:59:27] <cheeser> you're using the toku file store it seems.
[20:00:14] <enoch> so?
[20:00:28] <cheeser> well, that complicates things quite a bit.
[20:00:51] <enoch> :(
[20:00:53] <cheeser> 1) it's 3rd party. 2) i've never used it. 3) i think that company is ... gone? bought out?
[20:01:48] <enoch> :(
[20:08:13] <cheeser> yeah. percona bought them.
[20:08:20] <cheeser> they might be able to tell you what's what.
[20:08:48] <cheeser> https://www.percona.com/downloads/percona-server-mongodb/LATEST/
[20:10:32] <StephenLynx> kek toku went out of business?
[20:10:53] <StephenLynx> wasn't it that miraculous thing that didn't support unique indexes or something?
[20:11:49] <enoch> don't know why they used tokumx
[20:15:02] <StephenLynx> it does some crazy optimizations
[20:15:05] <StephenLynx> mostly about compressions
[20:15:12] <StephenLynx> but it sacrifices so much
[20:15:27] <StephenLynx> so it never took off
[20:16:45] <cheeser> they used to tout transactions. until someone asked about sharding.
[20:17:04] <StephenLynx> :v
[21:21:33] <wjBj> hi guys
[21:22:01] <wjBj> what does a negative number in secs_running mean?
[21:24:14] <wjBj> for example https://gist.github.com/Garont/540f83b0e3a1a174bac34dd0af81ade3
[21:24:32] <kurushiyama> wjBj: Your host is so fast that your queries travel back in time? ;P
[21:25:09] <wjBj> kurushiyama: very funny
[21:26:11] <mick27> folks, Cloud Manager reports 0 reads from my replica set. How is this possible? This is production
[21:27:25] <kurushiyama> wjBj: What version are you running? My first guess would be an int value exceeding its positive range...
[21:27:43] <kurushiyama> mick27: You probably should ask support ;)
[21:31:18] <wjBj> kurushiyama: 3.2.4
[21:34:39] <kurushiyama> strange.
[21:35:38] <kurushiyama> wjBj: Have you had a look at the issuetracker?
[21:38:18] <wjBj> kurushiyama: nope, just tried to google it and didn't find anything useful
[21:38:41] <kurushiyama> Just did. Seems to be unknown from first sight
[21:39:53] <kurushiyama> wjBj: Ah. Are you using virtualization?
[21:41:15] <wjBj> kurushiyama: yep, openvz
[21:41:46] <kurushiyama> wjBj: Well, then it might be related to https://jira.mongodb.org/browse/SERVER-4740
[21:50:34] <alexi5> hello
[21:57:29] <kurushiyama> alexi5: Hello!
[22:03:04] <alexi5> what type of applications is mongodb normally used in ?
[22:05:44] <kurushiyama> alexi5: That is quite a broad question. Personally, I have
[22:06:21] <kurushiyama> seen it used for everything from booking data through social media analysis to time series data
[22:06:37] <alexi5> i think it is better for me to tell what I am thinking of using mongodb for
[22:06:55] <kurushiyama> Aye, that would be easier..
[22:07:17] <alexi5> Interested in developing a catalog application for our Engineering department that can store information about various pieces of equipment at various mobile sites
[22:07:50] <alexi5> as well as their settings , and images of the layout of the sites
[22:08:24] <alexi5> i did an ER model on paper and came up with about 25 tables
[22:09:10] <alexi5> i came up with 3 collections for 3 different models using json
[22:10:41] <kurushiyama> alexi5: Well, from the start: Get your user stories done first, derive the questions you need the data to answer and then do optimized data modelling, focussing on squeezing out the best performance for the most common use cases
[22:10:42] <alexi5> i like the document model better as it can be slowly extended as new equipment is added with different attributes
[22:11:25] <kurushiyama> Upfront data modelling more often than not leads to tears and PITAs.
[22:11:28] <alexi5> ok
[22:12:12] <kurushiyama> But in general, that sounds like a proper use case – though not too easy to model.
[22:12:45] <alexi5> cool
[22:13:01] <alexi5> will get more input from the engineers to see if I missed anything
[22:13:13] <kurushiyama> alexi5: However, data modelling could be quite tricky.
[22:13:40] <kurushiyama> What language will you use?
[22:15:14] <alexi5> using C#
[22:19:47] <kurushiyama> Uh, can not say much there
[22:20:49] <alexi5> ok
[22:31:07] <kurushiyama> alexi5: I can still help with the data modelling
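To make the modelling discussion a bit more concrete, one possible shape for a "site" document that embeds its equipment is sketched below as a plain Python dict; every field name and value here is invented for illustration and is not taken from alexi5's actual design.

    # Hypothetical document for one mobile site; all field names are invented.
    site = {
        "name": "Site 042",
        "location": {"lat": 61.19, "lon": -149.87},
        "layout_images": ["layout-overview.png", "rack-photo.jpg"],
        "equipment": [
            {
                "type": "antenna",
                "model": "XYZ-1200",                      # placeholder model
                "settings": {"tilt_deg": 4, "azimuth_deg": 120},
            },
            {
                "type": "rectifier",
                "model": "PWR-48",                        # placeholder model
                "settings": {"float_voltage": 54.5},
            },
        ],
    }

Because each embedded piece of equipment carries its own "settings" sub-document, new equipment types with different attributes can be added without schema changes, which is the extensibility alexi5 mentions.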
[23:11:52] <tombin> is there a reason mongos 3.2.6 would return unrecognized option net.ssl ?
[23:13:49] <kurushiyama> tombin: I guess so. Programs rarely do stuff without reason ;)
[23:13:55] <kurushiyama> Lemme check
[23:15:44] <tombin> all the documentation i read on the mongodb site says this is a real thing :)
[23:15:58] <tombin> mongos --help shows ssl options
[23:17:52] <kurushiyama> tombin: Yes, and the docs, as far as I can see, require quite a bit more: https://docs.mongodb.com/manual/reference/configuration-options/#net-ssl-options
[23:18:26] <tombin> yep, thats what i have
[23:18:40] <kurushiyama> Show
[23:19:04] <tombin> net:
[23:19:04] <tombin> bindIp: 0.0.0.0
[23:19:04] <tombin> port: 27017
[23:19:05] <tombin> ssl:
[23:19:05] <tombin> mode = requireSSL
[23:19:05] <tombin> PEMKeyFile = /etc/ssl/certs/mongodb.pem
[23:19:05] <tombin> CAFile = /etc/ssl/certs/mongodb-ca.pem
[23:19:22] <kurushiyama> tombin: Next time, please use pastebin!
[23:19:54] <tombin> May 23 20:47:15 mongodb-router01 mongos: Starting mongos: Unrecognized option: net.ssl
[23:20:00] <tombin> oh i'm sorry
[23:20:07] <tombin> will do
[23:20:39] <kurushiyama> you might want to use colons instead of equals...
[23:20:48] <tombin> omfg
[23:20:58] <tombin> i can't believe i didn't notice that
[23:22:35] <kurushiyama> tombin: Happens
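For anyone who hits the same "Unrecognized option: net.ssl" error: the configuration file is YAML, so the ssl options need colons and indentation rather than equals signs. A corrected version of the paste above would look roughly like this (paths as in tombin's original):

    net:
      bindIp: 0.0.0.0
      port: 27017
      ssl:
        mode: requireSSL
        PEMKeyFile: /etc/ssl/certs/mongodb.pem
        CAFile: /etc/ssl/certs/mongodb-ca.pem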
[23:25:02] <mick27> anyone would happen to have a doc on what goes on when a secondary becomes primary
[23:25:07] <mick27> what task are run and so forth
[23:25:12] <mick27> tasks
[23:26:37] <kurushiyama> mick27: I guess you checked the basic docs?
[23:26:39] <kurushiyama> mick27: https://docs.mongodb.com/manual/core/replica-set-elections/
[23:27:20] <kurushiyama> mick27: Is there something specific you want to know?
[23:29:52] <mick27> kurushiyama: Yeah I checked that. well I am still having issues when switching to a new primary, the CPU goes at 100% forever, and I can't seem to find out what's wrong, I can pretty surely rule out i/o issue for the disks. so I am looking at what gets activated on the new primary that could cause this
[23:30:21] <mick27> I googled all kinds of variations of my problem and haven't found anything solid
[23:31:08] <kurushiyama> define "switching" and "new", please.
[23:32:40] <kurushiyama> Stepdown? Failover?
[23:34:08] <kurushiyama> mick27: ^
[23:34:29] <mick27> kurushiyama: sorry
[23:34:38] <mick27> new is a new vm that I spun up
[23:34:51] <mick27> switching is done via priority change
[23:35:01] <kurushiyama> DO NOT DO THAT!
[23:35:10] <mick27> ah
[23:35:14] <mick27> this is progress
[23:35:31] <mick27> why ? (as it is documented on the documentation)
[23:35:49] <kurushiyama> fiddling with priorities is only slightly better than fiddling with votes.
[23:36:10] <kurushiyama> Say you have 2 priority 0 members and 1 priority 1 member.
[23:36:30] <kurushiyama> The latter will become primary.
[23:36:41] <kurushiyama> Now what happens if that member goes down?
[23:36:54] <mick27> I do it the other way around
[23:37:00] <mick27> I raise the new one to 2
[23:37:04] <mick27> trigger an election
[23:37:08] <cheeser> you really shouldn't muck with priorities.
[23:37:27] <mick27> alright, what is the recommended way ?
[23:37:32] <cheeser> to do what?
[23:37:33] <kurushiyama> stepdown
[23:37:39] <mick27> ok
[23:37:40] <kurushiyama> and freeze
[23:37:51] <mick27> could it be related to my high cpu issue though ?
[23:38:01] <cheeser> if you want a primary to not be primary anymore you use rs.stepDown() on the primary
[23:38:07] <kurushiyama> In general there should be no reason whatsoever to have a specific member be primary
[23:38:37] <mick27> well we are trying to migrate from one gen of hw to another
[23:38:54] <mick27> I don't do that for funs
[23:39:07] <kurushiyama> mick27: The thing is that fiddling with priorities can have side effects. May that cause your problems? Maybe. May your VMM's real time clock be off, making your newly elected primary's virtual head spin? Very possible ;)
[23:39:23] <kurushiyama> mick27: So?
[23:39:37] <kurushiyama> Add the new servers to the replset, remove the old ones, done.
[23:39:50] <mick27> that's where I wanted to go
[23:39:58] <cheeser> add new rs members on the new hardware. once the replication is caught up (you can copy files to speed the sync) you can do an rs.stepDown() and remove the old members.
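The stepDown part of that recipe is normally run from the mongo shell, but it can also be issued from a script. A minimal pymongo sketch, with a placeholder host name, of asking the current primary to step down so another member is elected:

    from pymongo import MongoClient
    from pymongo.errors import AutoReconnect

    # Connect to the member that is currently primary (placeholder host; on
    # newer drivers you may need directConnection=True to pin to one member).
    client = MongoClient("mongodb://current-primary.example:27017")

    try:
        # Step down for 60 seconds and let the replica set hold an election.
        # The server closes connections when it steps down, so a network
        # error here is expected and harmless.
        client.admin.command("replSetStepDown", 60)
    except AutoReconnect:
        pass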
[23:40:12] <mick27> but somehow the new ones are getting crazy with the cpu to the point it causes a disruption
[23:40:19] <kurushiyama> So why do you need to have a selected member to be primary?
[23:40:33] <cheeser> reset the priorities to the defaults (1, iirc) and try again
[23:40:38] <kurushiyama> mick27: Well, as I said: check VMMs real time clock, for starters
[23:41:28] <kurushiyama> mick27: if members are more than a second apart, things can become pretty nasty.
[23:41:28] <mick27> ok
[23:42:05] <mick27> would ntp help ? even though I am pretty sure it is running already
[23:42:16] <cheeser> yes, you should be running ntpd
[23:42:41] <kurushiyama> mick27: It depends. If the RTC is too far off, ntpd refuses to set the date, iirc.
[23:43:04] <kurushiyama> mick27: Check, do not assume. ;)
[23:43:15] <cheeser> right. so you manually bump it to the "correct" time and ntpd takes over.
[23:43:25] <cheeser> running ntpdate is usually sufficient
[23:43:51] <kurushiyama> cheeser: Not with virtualizations. Some of them have _really_ nasty problems with timekeeping.
[23:52:17] <mick27> kurushiyama: clocks are good
[23:54:35] <kurushiyama> mick27: What I'd do is remove the new members, remove their dbpath's contents, reset the priorities (if necessary), then add a member on the new machines and see what happens
[23:54:59] <mick27> ok
[23:55:28] <mick27> I have the CM installed now, should be pretty straight forward
[23:56:19] <kurushiyama> Well, I'd use the shell, but that may just be a personal preference.
[23:58:14] <mick27> kurushiyama cheeser thx for your help guys, I have to sign off
[23:59:10] <kurushiyama> mick27: You are welcome!