PMXBOT Log file Viewer


#mongodb logs for Tuesday the 3rd of December, 2013

[00:36:38] <gimlet_eye> ok
[00:36:48] <gimlet_eye> when I initialize my replicaset
[00:36:57] <gimlet_eye> how do I make node 1 of 3 use the longname
[00:37:13] <gimlet_eye> like jack01.prod.me.com
[00:37:17] <gimlet_eye> not use jack01
[00:58:32] <qswz> com.mongodb.WriteConcernException: { "serverUsed" : "ds053978.mongolab.com/10.34.229.86:53978" , "err" : "Mod on _id not allowed" ...
[00:58:47] <qswz> I can change this global setting I think
[01:01:20] <gimlet_eye> I reinitialised a replicaset
[01:01:32] <gimlet_eye> and the orig has local.1 .. .10
[01:01:38] <gimlet_eye> I see on the node 2 slave
[01:01:42] <gimlet_eye> local 1.. 20
[01:01:46] <gimlet_eye> heh
[01:01:49] <gimlet_eye> whats that all about?
[01:02:03] <gimlet_eye> and I dont see the files like appblah1..10 on node 2 slave
[01:02:04] <qswz> I was trying to do an update({_id: myid}, {$set: allDoc} , true, false)
[01:02:15] <gimlet_eye> hmm
[01:02:20] <qswz> where the collection doesn't have _id: myid yet
[01:05:31] <qswz> so I do have to check for document existence first, probably
[01:10:51] <qswz> k, just need to remove the $set
[01:16:48] <qswz> hmm, no rather making sure _id is not in allDoc
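qswz's fix above (drop `_id` from the document before putting it in `$set`) avoids the "Mod on _id not allowed" error, because an upsert may match on `_id` but must not try to modify it. A server-free sketch; `strip_id` is a hypothetical helper, not a driver API:

```python
def strip_id(doc):
    """Return a copy of doc without _id, safe to use inside a $set upsert."""
    return {k: v for k, v in doc.items() if k != "_id"}

all_doc = {"_id": "myid", "name": "qswz", "score": 3}
# the filter still matches on _id; the modifier no longer touches it
query = {"_id": all_doc["_id"]}
update = {"$set": strip_id(all_doc)}
```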
[01:30:48] <patrickod> I'm writing a client at the moment but I'm having issues w/ the responseTo field in the header
[01:30:57] <patrickod> http://docs.mongodb.org/meta-driver/latest/legacy/mongodb-wire-protocol/ the documentation states that if present it'll be the ID of the original client request
[01:31:22] <patrickod> yet with MongoDump I'm seeing the server issue messages that have responseTo values set to the previous response message from the server
[01:32:12] <patrickod> server version 2.4.8
[01:32:22] <patrickod> is this expected behavior ?
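The header patrickod is parsing is the standard 16-byte wire-protocol preamble: four little-endian int32s (messageLength, requestID, responseTo, opCode). A minimal parser over raw bytes, with a made-up example message:

```python
import struct

def parse_header(buf):
    """Parse the 16-byte standard MongoDB wire-protocol header:
    messageLength, requestID, responseTo, opCode (little-endian int32s)."""
    return dict(zip(("messageLength", "requestID", "responseTo", "opCode"),
                    struct.unpack("<iiii", buf[:16])))

# a hypothetical OP_REPLY (opCode 1) answering client request 42
hdr = parse_header(struct.pack("<iiii", 36, 7, 42, 1))
```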
[01:52:14] <gimlet_eye> "errmsg" : "couldn't initiate : can't find self in the replset config my port: 17017"
[01:52:17] <gimlet_eye> whats this mean?
[01:54:09] <jkitchen> gimlet_eye: means you're trying to configure a replication set and you don't have the local machine in the set
[01:54:43] <gimlet_eye> I started mongo
[01:54:49] <gimlet_eye> I want to create brand new set
[01:54:55] <gimlet_eye> this I think worked before...
[01:54:59] <gimlet_eye> or was it initialize
[01:56:04] <gimlet_eye> hm
[01:56:07] <gimlet_eye> er
[01:56:16] <gimlet_eye> shouldnt it add itself to the local replicaset?
[01:56:21] <gimlet_eye> as it initializes?
[01:56:59] <joannac> it will if you call initiate()
[01:57:59] <gimlet_eye> I called rs.initiate() and got that err
[01:58:59] <sharondio> Did you check the logs?
[01:59:28] <sharondio> Remember also that the video mentioned that goofiness with using "localhost"...and then proceeded to use "localhost" in their own questions.
[02:00:09] <gimlet_eye> hm
[02:00:11] <gimlet_eye> log says same
[02:00:28] <sharondio> Are you running mongod for something else?
[02:00:41] <sharondio> I did a full restart and then confirmed mongo wasn't running. Then re-ran all the commands to get it working right.
[02:00:57] <sharondio> But you're running out of time.
[02:02:13] <gimlet_eye> mybad
[02:02:17] <gimlet_eye> hostname borked
[02:02:21] <gimlet_eye> on linux
[02:03:33] <sharondio> :-(
[02:03:43] <sharondio> did you get it on time?
[02:04:07] <sharondio> Last week I lost 3 out of 4 questions because I wasn't watching the time. :-(
[02:19:21] <sharondio> gimlet_eye: Did you get your answer in on time?
[02:20:12] <gimlet_eye> uh
[02:20:13] <gimlet_eye> well
[02:20:21] <gimlet_eye> I just stopped mongo
[02:20:24] <gimlet_eye> fixed hostname
[02:20:27] <gimlet_eye> started mongo
[02:20:33] <gimlet_eye> and did rs.initiate()
[02:20:36] <gimlet_eye> and it worked
[02:20:40] <gimlet_eye> then turned on mongo 2 and 3
[02:20:50] <gimlet_eye> and did rs.add("name:port")
[02:21:00] <gimlet_eye> turned on
[02:21:01] <gimlet_eye> --
[02:21:04] <gimlet_eye> then life good
[02:21:10] <gimlet_eye> 1 busily copying to 2 and 3
[02:30:13] <gimlet_eye> mongo is crashing on node 3
[02:30:16] <gimlet_eye> thats annoying
[02:42:24] <sharondio> gimlet_eye: That sucks.
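gimlet_eye's "can't find self in the replset config" came down to a broken hostname: the string mongod builds from the local hostname and port must match a `host` entry in the config. A toy version of that membership check (pure Python, no server; `find_self` is a hypothetical name):

```python
def find_self(config, host, port):
    """Return the member entry whose host field matches host:port,
    mirroring the check behind "can't find self in the replset config"."""
    target = f"{host}:{port}"
    for member in config["members"]:
        if member["host"] == target:
            return member
    return None

config = {"_id": "rs0", "members": [
    {"_id": 0, "host": "jack01.prod.me.com:17017"},
]}
```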
[08:31:18] <[AD]Turbo> hi there
[09:25:57] <bin> mornin guys, how can i create multiple replica set members on a standalone machine ?
[09:26:48] <bin> i read the convert standalone machine to replicaset docs, but in the description it says to add an instance on a different machine
[09:27:01] <bin> i dont have another machine so i want to create instance on my single machine
[09:28:53] <bin> anyone ?
[09:46:51] <Nodex> bin, use separate config files
[09:47:05] <bin> but what with the host:port ?
[10:32:12] <kali> use a different port and a different dbpath for each replica
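kali's recipe (one port and one dbpath per member on the same box) can be sketched as a command-line builder; the set name, ports, and paths below are made up:

```python
def member_cmd(name, port, base="/data"):
    """Hypothetical helper: build a mongod command line for one local
    replica-set member; each member needs its own port and dbpath."""
    return ["mongod", "--replSet", "rs0",
            "--port", str(port),
            "--dbpath", f"{base}/{name}"]

cmds = [member_cmd(n, p) for n, p in
        [("m0", 27017), ("m1", 27018), ("m2", 27019)]]
```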
[11:33:59] <bin> i dont get it ... normally you have primary + 2 secondaries, right, and the secondaries number is even
[11:34:23] <bin> but when we create custom pattern we should be aware to keep secondaries' count to be not even ..
[11:35:35] <joannac> bin: Huh? Not sure what you're referring to
[11:35:55] <bin> Primary with Two Secondary Members A replica set with three members that store data has:
[11:36:30] <bin> and as long as i understand from what i have read its a standard pattern
[11:36:34] <bin> when creating RS
[11:36:37] <joannac> Yes.
[11:36:48] <bin> but the number of secondaries is 2..
[11:36:50] <bin> right
[11:37:02] <joannac> Yes.
[11:38:01] <bin> but also is said not to leave a set with even number of members else might suffer from TIED ELECTIONS
[11:38:20] <bin> so if we have 2 secondaries and primary gone down .. we might have that situation ..
[11:38:35] <joannac> No, it's said to not have a set with an even number of votes
[11:38:46] <bin> well yes ..
[11:38:55] <bin> but if you have 2 secondaries
[11:38:58] <bin> you have 2 votes
[11:39:07] <joannac> No, you have 3 votes
[11:39:11] <bin> from where ?
[11:39:14] <joannac> 1 from the primary, 1 from each secondary
[11:39:21] <bin> but the primary is down ?
[11:39:44] <joannac> So it's lucky you only need 2 votes to win :)
[11:41:38] <joannac> The number of votes in a set, and the number of votes you need to become a primary, is not affected by how many nodes are up.
[11:41:57] <bin> aha...
[11:42:05] <bin> so when i calculate i also must take into consideration
[11:42:08] <bin> the primary..
[11:43:47] <bin> well thanks joannac :)
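joannac's point reduces to one rule: a candidate needs a strict majority of all *configured* votes, whether or not the other members are up. A sketch of that rule (one vote per member assumed):

```python
def can_elect(total_votes, votes_up):
    """A candidate needs a strict majority of all configured votes,
    not just the votes of members currently up."""
    return votes_up > total_votes // 2

# 3-member set (1 vote each): with the primary down, the 2 surviving
# secondaries still hold 2 of 3 votes, a majority.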
[11:53:40] <Styles> How does GridFS handle duplicate files?
[11:56:41] <kali> it does not :)
[11:57:37] <Styles> kali, ok so assuming 2 files have the same md5 how would I handle that?
[11:57:57] <Styles> Would I want to compute Md5s locally, check if it's in gridfs then if so just assign that given id?
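As kali says, GridFS itself does no deduplication; Styles' plan (hash locally, look up the digest before storing) might look like this, with a plain dict standing in for a query against fs.files:

```python
import hashlib

def dedup_key(data: bytes) -> str:
    """Compute the md5 hex digest locally; before storing a file in
    GridFS, look this up (GridFS itself does no deduplication)."""
    return hashlib.md5(data).hexdigest()

seen = {}  # md5 -> stored file id (stand-in for a fs.files query)

def store(data, next_id):
    key = dedup_key(data)
    if key in seen:
        return seen[key]   # reuse the existing file's id
    seen[key] = next_id
    return next_id
```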
[12:12:16] <banzounet> Hey guys is there something wrong with this query : {
[12:12:16] <banzounet> "barId": {
[12:12:16] <banzounet> "$id": "518a06a9ea55093a11000000"
[12:12:16] <banzounet> },
[12:12:16] <banzounet> "createdAt": {
[12:12:18] <banzounet> "$lt": {
[12:12:21] <banzounet> "sec": 1386071567,
[12:12:23] <banzounet> "usec": 0
[12:12:26] <banzounet> }
[12:12:28] <banzounet> }
[12:12:30] <banzounet> fuck
[12:12:33] <banzounet> ops :(
[12:12:36] <banzounet> https://gist.github.com/BaNzounet/4b062b1e2d35f2fd5a45
[12:12:39] <banzounet> ./clear
[12:12:53] <banzounet> is there something wrong with that query?
[12:13:23] <kali> banzounet: gist the document you expect it to match
[12:15:54] <banzounet> I've updated the gist
[12:15:59] <banzounet> kali: ^
[12:16:20] <banzounet> could it be the $iod? vs $id ?
[12:17:49] <kali> mmm... $id, dates as string... how did you get this dump ? and what do you send your query with ?
[12:20:37] <banzounet> it's generated from a script a colleague has done (otherwise PHP/MongoClient object)
[12:20:44] <banzounet> and the database is on mongolab
[12:23:19] <kali> so you cut and paste that from the mongodb shell ?
[12:24:10] <banzounet> (I printed the query from the script and yeah I copied the document from the shell)
[12:27:41] <banzounet> I should get n results but I get nothing
[12:28:12] <kali> the thing is the document looks weird: I would expect ids to appear as ObjectId(...) and dates as ISODate(...), not $oid and strings (respectively)
[12:28:29] <bin> can someone translate : ) in understandable english for me .. what follows:
[12:28:30] <bin> A replica set does not hold an election as long as the current primary has the highest priority value and is within 10 seconds of the latest oplog entry in the set. If a higher-priority member catches up to within 10 seconds of the latest oplog entry of the current primary, the set holds an election in order to provide the higher-priority node a chance to become primary.
[12:28:48] <bin> we have a primary with a highest priority which is within 10 seconds
[12:29:07] <bin> but if we have a higher .. ? How do you have a higher priority if there is already the highest .. :S
[12:30:15] <joannac> The two statements are different scenarios.
[12:31:54] <kali> banzounet: i commented with a sample of what all this is supposed to look like
[12:32:22] <bin> ahhhhhhh so i guess if the primary is not within 10 seconds .. and if there is a higher priority member within 10 seconds , there will be an election right ?
[12:33:50] <banzounet> kali, Okay thanks I'll investigate that :)
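kali's comment is the key: `$id`/`$oid` and `{sec, usec}` are the printed debug forms of driver objects, not things to put in a query. The query must carry native id and date values. A sketch; the `ObjectId` class below is a stand-in for the real driver type (MongoId in banzounet's PHP case):

```python
from datetime import datetime, timezone

# What the gist showed: the *debug dump* of a PHP MongoId and MongoDate,
# not a sendable query:
dumped = {"barId": {"$id": "518a06a9ea55093a11000000"},
          "createdAt": {"$lt": {"sec": 1386071567, "usec": 0}}}

class ObjectId:
    """Stand-in for the driver's real ObjectId type."""
    def __init__(self, hex_str):
        self.hex = hex_str

# The query the driver should actually send: native id and date objects.
query = {"barId": ObjectId(dumped["barId"]["$id"]),
         "createdAt": {"$lt": datetime.fromtimestamp(
             dumped["createdAt"]["$lt"]["sec"], tz=timezone.utc)}}
```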
[12:35:51] <joannac> bin: if the current primary is not within 10 seconds, there will be an election
[12:36:51] <joannac> bin: completely separately, if there is a node that is within 10 seconds of the latest optime, and has a higher priority than that current primary, there will be an election
[12:37:33] <bin> and joannac saved the day for a second time :) ..
[12:37:41] <bin> on three -> free beer :D
[12:37:51] <joannac> I'm going to file a docs ticket... that's not clear enough for my liking
[12:39:09] <bin> that i couldnt get it .. sorry :)
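The two election triggers joannac separates can be written as one predicate (a sketch; the 10-second threshold is the one quoted from the docs above):

```python
def election_needed(primary_priority, primary_lag, others):
    """others: list of (priority, lag_seconds) for the other members.
    An election is held if the primary lags more than 10s behind the
    newest oplog entry, or if a higher-priority member has caught up
    to within 10s of it."""
    if primary_lag > 10:
        return True
    return any(pri > primary_priority and lag <= 10
               for pri, lag in others)
```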
[12:43:43] <Styles> If I have 3 servers and want to shard the db for additional storage. Does the connection string stay the same?
[12:46:23] <joannac> Styles: No.
[12:46:38] <joannac> You'll be connecting to a mongoS instead of a mongoD like you are now
[12:48:45] <Styles> Do you need a dedicated mongos server?
[12:49:07] <Styles> Or does every shard run mongos
[12:49:25] <Derick> every application server should run a mongos
[12:49:33] <Derick> *not* every shard
[12:49:49] <joannac> morning Derick
[12:49:53] <Styles> Ahh
[12:50:01] <Derick> joannac: it's afternoon, but good evening!
[12:51:01] <Styles> Is every application server supposed to be told about every shard?
[12:51:19] <Derick> no, but mongos will read that info from the config servers
[12:51:26] <Derick> and they know where all data lives
[12:51:34] <joannac> the application server just needs to know about a mongoS
[12:51:35] <Derick> mongos uses that information to route requests/data accordingly
[12:51:42] <joannac> the mongoS does the routing
[12:52:15] <Styles> ok
[12:52:47] <Styles> So for 2 shards you should have 3 servers and 1 is your routing server (mongoS)?
[12:53:15] <Derick> the mongoS can run on the server that runs your application (so you can use unix domain sockets and avoid another TCP/IP overhead)
[12:53:32] <Styles> Derick, ah perfect
[12:53:35] <Styles> Thank you :)
[12:53:38] <Derick> 2 shards, is 2 * 3 mongoD data nodes, 3 config servers (which can live on the data nodes) and at least one mongoS
[12:53:50] <Derick> just make sure you put the config servers distributed enough
[12:54:09] <Styles> how so?
[12:54:12] <Styles> like geolocation?
[12:54:33] <kali> like on three hardware independent system :)
[12:54:46] <Styles> ah yeah
[12:54:51] <Derick> Styles: what kali says
[12:55:36] <kali> if on ec2 -> three different zones.
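Derick's sizing above (2 shards is 2 x 3 mongoD data nodes, 3 config servers, at least one mongoS) as a small calculator:

```python
def cluster_nodes(shards, rs_size=3, config_servers=3, mongos=1):
    """Count the processes in a sharded cluster: each shard is a
    replica set of rs_size mongod nodes, plus config servers and at
    least one mongos router (usually co-located with the app)."""
    return {"mongod": shards * rs_size,
            "config": config_servers,
            "mongos": mongos}
```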
[12:55:39] <Styles> Right now we have ~7tb of files stored. We have 1 really big server and it's quite annoying to maintain. I was looking at migrating to MongoDB and setting up a small server.
[12:56:01] <Styles> OVH, RedStation & Online.net
[12:56:42] <Derick> do think about network latency though
[12:56:57] <Styles> I don't think that should be too big of an issue.. but true
[12:57:07] <traplin> has anyone used Monk a lot?
[12:57:25] <Styles> Online offers 400mbps guaranteed. RedStation is 1gig & ovh is like 400
[12:57:46] <Styles> i can see the latency adding up though humm
[12:58:06] <Nodex> Redstation suck
[12:58:12] <Nodex> take it from me, do NOT use them
[12:58:35] <Styles> Nodex, whats your horror story?
[12:59:03] <Nodex> you name it then it's happened with redstation
[12:59:07] <Styles> LOL
[12:59:16] <Styles> I did hear about a guy having to fight for an HD to be replaced
[12:59:23] <Styles> I wish OVH had stock :(
[12:59:33] <Nodex> their network is not as good as it makes out
[12:59:37] <Styles> ahh
[12:59:45] <traplin> MongoHQ is quite nice
[12:59:53] <Nodex> the latency to their main DC has terrible peering
[12:59:53] <Styles> Online.net is pretty damn nice Nodex
[13:00:03] <Styles> haha
[13:01:07] <Nodex> if you're after a good all round network I would recommend OVH but do NOT use their low end servers
[13:01:23] <Styles> Yeah but OVH is out of stock
[13:01:26] <Styles> I want EG 2R :(
[13:01:43] <Styles> 2 2tb servers, 64 gb ram & 12 threads clocked at 3.2 Ghz
[13:02:27] <Styles> Did you see they're getting a US DC next?
[13:02:51] <Nodex> they're not out of stock you just have to ring them
[13:02:57] <Nodex> they have turned off the online ordering
[13:03:01] <traplin> could anyone help me with Monk?
[13:03:16] <Nodex> wth is "Monk" ?
[13:03:17] <Styles> Nodex, I didn't have a server with them so I couldn't place a new order
[13:03:24] <Styles> I tried very hard
[13:03:44] <Styles> I'm thinking of buying a cheap server like a KS then asking to buy another and canceling lol
[13:03:54] <traplin> Nodex: its a layer that makes using Mongo easier, https://github.com/LearnBoost/monk
[13:04:29] <Nodex> Styles : that's probably a good idea
[13:04:39] <Nodex> you can get the cheap ones for next to nothing
[13:04:56] <Styles> yeah 15 /m w/e
[13:05:52] <Nodex> traplin : it looks easy, what's the problem?
[13:06:32] <traplin> my findAndModify method just won't modify the entry, here's the code :http://pastebin.com/SAqMnTr2
[13:07:58] <Nodex> try $set
[13:08:09] <Nodex> + instead of $update
[13:08:15] <traplin> $set also doesn't work
[13:08:48] <Nodex> then it's probably not finding the document
[13:09:23] <traplin> the weird thing is if i insert (commented out), it inserts a document with the exact same ID
[13:09:30] <Nodex> doesn't findAndModify() have an error object?
[13:10:04] <traplin> i think it does, let me see if an error is produced
[13:11:52] <traplin> i dont think it has an error object, or i am just doing it wrong
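Nodex's advice boils down to: `$update` is not an update operator, `$set` is, and a misspelled operator means the modify does nothing. A toy validator of that idea (the operator list here is partial and the helper name is made up):

```python
VALID_OPS = {"$set", "$inc", "$unset", "$push", "$pull"}  # a few common ones

def check_update(update_doc):
    """Return the top-level keys that look like update operators but
    are not real ones (e.g. the non-existent $update)."""
    return [k for k in update_doc
            if k.startswith("$") and k not in VALID_OPS]
```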
[14:29:14] <Scrawny> hello?
[14:29:40] <Scrawny> anyone there/
[14:29:41] <Scrawny> ?
[14:31:54] <cheeser> way to stick with it.
[15:04:24] <qswz> is there a 'batch' insert, where you can insert a series of document in one shot?
[15:04:46] <qswz> batch update* rather
[15:07:35] <kali> jerome: go ahead, I'll probably have some stuff, but don't wait for me
[15:07:39] <kali> oops, not here :)
[15:09:39] <Nodex> qswz : no
[15:18:33] <qswz> Nodex ok, too bad :)
[15:19:07] <qswz> but it's nothing crazy to write a loop
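Nodex's "no" reflects the state of things at the time: with no batch update command, qswz's loop is the way. A server-free stand-in for issuing N update() calls:

```python
def apply_updates(docs, updates):
    """Apply a list of ({_id filter}, {$set: fields}) pairs one by one,
    the loop qswz describes; a stand-in for N update() calls."""
    by_id = {d["_id"]: d for d in docs}
    for flt, upd in updates:
        doc = by_id.get(flt["_id"])
        if doc is not None:
            doc.update(upd["$set"])
    return docs
```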
[17:19:29] <kraljev> Hey, why doesn't aggregate $match use index?
[17:20:03] <kraljev> data.ensure_index({creativeVersion: 1})
[17:20:05] <kraljev> match = {'$match' => {creativeVersion: 224}}
[17:20:07] <kraljev> query = {"$group" => { _id: {ver: "$creativeVersion", sdk: "$sdk"}, requests: {'$sum' => 1}, max: {'$max' => "$adServerLoadAvg"}, avg: {'$avg' => "$adServerLoadAvg"}}}
[17:20:27] <kraljev> the query takes 17 seconds regardless of the index.
[17:35:50] <kraljev> is this irc dead?
[17:39:09] <cheeser> nope
[17:39:32] <ron> all irc is dead. not just this one.
[17:43:13] <TheUnnamedDude> Pretty much how IRC works
[17:54:53] <kraljev> so, then please answer me :)
[17:55:13] <kraljev> data.ensure_index({creativeVersion: 1})
[17:55:17] <kraljev> match = {'$match' => {creativeVersion: 224}}
[17:55:19] <kraljev> query = {"$group" => { _id: {ver: "$creativeVersion", sdk: "$sdk"}, requests: {'$sum' => 1}, max: {'$max' => "$adServerLoadAvg"}, avg: {'$avg' => "$adServerLoadAvg"}}}
[17:55:23] <kraljev> the query takes 17 seconds regardless of the index.
[17:55:25] <kraljev> Hey, why doesn't aggregate $match use index?
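One likely culprit in kraljev's snippet: `match` and `query` are built separately, and `$match` can only use an index when it sits at the head of the pipeline actually passed to aggregate. A toy check of that rule (helper name is made up):

```python
def match_uses_index(pipeline, indexed_fields):
    """$match can use an index only when it is the first stage of the
    pipeline and filters on an indexed field."""
    if not pipeline or "$match" not in pipeline[0]:
        return False
    return any(f in indexed_fields for f in pipeline[0]["$match"])

match = {"$match": {"creativeVersion": 224}}
group = {"$group": {"_id": {"ver": "$creativeVersion"},
                    "requests": {"$sum": 1}}}
```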
[18:18:00] <name_vhosts> ok
[18:18:09] <name_vhosts> 157G sync keeps crashing node 2 and 3 mongo
[18:18:13] <name_vhosts> any way to stop that?
[18:18:13] <kurtis> Hey guys, my google-fu is failing me. What version will support either 32MB Limits for the Aggregation Query or outputting the results to a collection?
[18:23:59] <name_vhosts> the newest
[18:24:04] <name_vhosts> smallfiles setting
[18:33:59] <name_vhosts> so why does mongo abort after an fassert failure?
[18:34:10] <kurtis> name_vhosts: Thanks, but I don't think that is the same. The docs look like that's for the data file size. I'm dealing with the Aggregation Results File Size limit
[18:34:11] <name_vhosts> when I am trying to have node 1 sync 150G data to 2 and 3?
[18:36:00] <name_vhosts> guess ill cron the bitch to start every 5min
[18:36:11] <name_vhosts> but cmon
[18:36:15] <name_vhosts> why in hells it failing?
[18:37:40] <kurtis> name_vhosts: I'm not sure if this helps, but did you read this? https://groups.google.com/forum/#!topic/mongodb-user/UFV1Olo-ZnQ
[18:39:16] <name_vhosts> hm I didn't see dup key error but I will look
[18:40:27] <name_vhosts> replSet initial sync exception: 13106 nextSafe(): { $err: "assertion src/mongo/db/pdfile.h:392" } 4 attempts remaining
[18:40:31] <name_vhosts> hm
[18:40:45] <name_vhosts> I wonder if there is some built in limit setting to how big an inital sync can be?
[18:40:50] <name_vhosts> I am trying to sync 150G
[18:40:58] <name_vhosts> which might be outside what most do..
[18:41:13] <name_vhosts> I should probably blank the journal and local files and rsync the files into place
[18:41:16] <name_vhosts> and re initialize
[18:41:22] <name_vhosts> but im stubborn!
[18:41:23] <name_vhosts> lol
[18:43:03] <name_vhosts> I wonder if mongo is smart enough to compress its transfers
[18:43:06] <name_vhosts> probably not huh
[18:43:28] <name_vhosts> www.prevayler.org I wish I knew this
[18:46:15] <name_vhosts> replSet initial sync exception: 13106 nextSafe():
[18:57:00] <name_vhosts> maybe im hitting the oplog limit or some crap?
[19:29:29] <lazypower> Greetings, I'm having some trouble mapping my MongoDB BSON Documents to classes in C#. This has been trivial historically thanks to mongoid on the rails side, however I have need of consuming this data in C# for another project. My model contains a reference to documents in another collection, and this link is throwing the error "Expected element name to be '_t', not '_id' - i'm not finding much about this on google
[19:29:44] <lazypower> Can someone point me in the right direction of whats going on? Is the MongoDB Driver having an issue determining type?
[19:29:58] <ron> C#. heh.
[19:30:16] <lazypower> I'm not much of a fan, but this is a constraint that I have to deal with now .
[19:53:08] <joannac> lazypower: What kind of reference?
[19:55:49] <KaT0R> hey guys how do you search globally within your values eg i want to search james and see results whether that be in the email or name or company
[19:56:44] <cheeser> $or
[19:58:46] <joannac> Pull all your stuff into an array field and index that instead
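cheeser's `$or` and joannac's copy-into-an-array approaches, side by side as query/document builders (both helper names are hypothetical):

```python
def or_query(term, fields):
    """Build the $or query cheeser suggests: match term in any field."""
    return {"$or": [{f: term} for f in fields]}

def keywords_doc(doc, fields):
    """joannac's alternative: copy the values into one array field
    and index that single field instead."""
    return dict(doc, keywords=[doc[f] for f in fields if f in doc])
```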
[20:16:06] <shadfc> hey guys, any ideas on where the "right" place to add some instrumentation to pymongo would be? I'd like to send various metrics to statsd (per collection query response times, counts, etc)
[20:34:15] <tastycode> Given i have a table of posts (like blog posts) that have tags (an array of strings). I need to get all posts whose tags include one of the following "lgbt", "humanrights", "equality" .. whose tags also include at least one of the following "petition", "fundraiser"… So far here is what i've come up with db.posts.find({tags: {"$in": ["lgbt", "human rights", "equality"], "$in": ["petition", "fundraiser"]}}) but it also pulls up things that don't
[20:34:16] <tastycode> have an lgbt tag, can anyone give me some guidance (also, there doesn't seem to be a real guide to querying in mongodb … just the docs on Query in the mongoose docs that aren't terribly helpful)
[20:37:20] <tastycode> Turns out i was wrong about the docs, mongodb itself has better documentation… I'm a little closer, but still not getting what i expect db.post.find({"$and": [{"tags": {"$in": ["lgbt", "human rights", "equality"]}}, {"tags": {"$in": ["petition", "fundraiser"]}}]})
[20:48:37] <kali> tastycode: paste somewhere the document you're expecting to find
[20:49:36] <joannac> tastycode: also a document that should not be matched, that is currently being matched
[20:51:29] <tastycode> kali, joannac: http://pastie.org/8526477
[20:51:54] <tastycode> this always happens.. i figure the problem out if i post it
[20:52:00] <tastycode> I was using $all instead of $and
[20:52:31] <kali> that was easy
[20:53:53] <tastycode> thanks … any caveats or best practices i should consider for making queries on array fields (i.e. tags) performant
[20:55:03] <kali> FYI, i think this one will work the same: db.tastycode.find({tags: { $all: ["lgbt", "petition"] } })
[20:55:40] <tastycode> yes but lgbt js one of a list of tags, and petition is one of a list of tags… the post needs to have at least one in each list
[20:55:52] <tastycode> ["lgbt", "human rights", "equality"] and ["petition", "fundraiser"]
[20:55:58] <kali> ha
[20:55:58] <kali> ok
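tastycode's first attempt fails before it ever reaches the server: a document literal cannot hold two `$in` keys, so the second silently overwrites the first (true in the Python below, and in most JSON/object literals). The `$and` form keeps both clauses:

```python
# Two "$in" keys in one literal: the second one wins, the first is lost.
broken = {"tags": {"$in": ["lgbt", "human rights", "equality"],
                   "$in": ["petition", "fundraiser"]}}

# The working form: two independent $in clauses joined with $and.
fixed = {"$and": [
    {"tags": {"$in": ["lgbt", "human rights", "equality"]}},
    {"tags": {"$in": ["petition", "fundraiser"]}},
]}
```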
[21:06:58] <lazypower> joannac: Its an array of ID's for foreign documents.
[21:07:23] <Hummingbird> I am looking for some information about obtaining a range of bytes from gridFS via the java driver. There is a sentence in the docs that mentions this is possible, but I couldn't find any further information. Do I need to get and assemble the packages manually or is there a better way?
[21:07:29] <lazypower> I can provide a document structure sample, and the class translation code if that will help
[21:07:40] <cheeser> Hummingbird: did you ask that on SO, too?
[21:07:58] <Hummingbird> cheeser, jep, that was me :)
[21:08:13] <cheeser> heh. saw that this morning and opened it. haven't had time to really look at it though
[21:08:50] <Hummingbird> ok, then it's just me being too impatient :)
[21:39:14] <cshaffer> my mongo server keeps shutting itself down for like no reason and the logs aren't saying anything about a crash or shutdown command. would anyone like to offer speculation as to the cause?
[21:40:10] <dangayle> cshaffer: gremlins
[21:40:29] <dangayle> sorry, I couldn't help myself. Wish I could help
[21:40:54] <cshaffer> it's alright. gremlins are already my #1 suspect
[21:41:02] <dangayle> ha
[21:52:12] <h4k1m> hello guys
[21:52:42] <h4k1m> how can I access my db in the shell using `mongo --eval ` command
[21:53:09] <h4k1m> putting 'db.collection-name...' as a parameter isnt recognized
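For h4k1m: the JS expression has to be quoted and the database named on the command line, and a hyphenated collection name needs bracket access (`db['collection-name']`) rather than dot notation. A sketch that only builds the command line (no mongod required; `eval_cmd` is a made-up helper):

```python
def eval_cmd(dbname, collection, expr="findOne()"):
    """Build a `mongo <db> --eval '<js>'` invocation; hyphenated
    collection names need bracket access, not db.collection-name."""
    js = f"printjson(db['{collection}'].{expr})"
    return ["mongo", dbname, "--eval", js]
```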
[22:10:24] <azathoth99> ok
[22:10:34] <azathoth99> how do I ...
[22:10:51] <azathoth99> I mean say 1 node of 3 si slow
[22:11:02] <azathoth99> how do I tell mongo to like rely on that one LAST?
[22:11:20] <azathoth99> basic replicaset setup
[22:12:13] <cheeser> rely on it how?
[22:19:30] <azathoth99> well the 1 and 3 box
[22:19:35] <azathoth99> have about 400MB/s disk
[22:19:43] <azathoth99> but the 2 box si a dog like 60MB/s
[22:19:45] <azathoth99> so
[22:20:01] <azathoth99> id rather not have 2 be like promoted etc
[22:20:08] <azathoth99> but im too lazy to redo 2 as 3
[22:20:10] <azathoth99> lol
[22:20:15] <azathoth99> so it there some config magic
[22:20:21] <azathoth99> to liek make 1 and 3 the top dawgz
[22:20:22] <azathoth99> ?
[22:23:23] <cheeser> you can set the priority of that box to be a bit lower. or 0. but there are subtle repercussions to doing that.
[22:23:36] <cheeser> the docs have some info about that.
[22:27:43] <azathoth99> interesting
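cheeser's suggestion (lower the slow node's priority, possibly to 0, then reconfigure) as a config transformer; priority 0 means the member is never elected primary but still votes and replicates:

```python
def demote(config, member_id):
    """Hypothetical helper: return a new replset config (version
    bumped) with one member's priority set to 0, so it can never
    become primary. Pass the result to rs.reconfig() in the shell."""
    cfg = dict(config, version=config["version"] + 1)
    cfg["members"] = [dict(m, priority=0) if m["_id"] == member_id else m
                      for m in config["members"]]
    return cfg
```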
[22:56:10] <klj613> hello - the rpm repo (to install mongo) been very slow past 3 days. is this a known issue?
[23:09:50] <lakmuz> how to configure db path
[23:10:02] <lakmuz> is there some cfg file?
[23:11:43] <ghostbar> lakmuz: /etc/mongodb.conf?
[23:11:55] <lakmuz> on windows? :)
[23:12:33] <ghostbar> lakmuz: oh... I don't know on Windows but this should help http://docs.mongodb.org/manual/reference/configuration-options/
[23:12:41] <ghostbar> Ctrl+F for dbpath
[23:14:58] <lakmuz> rly, no default cfg file for windows?
[23:17:10] <lakmuz> you really do need pull request
[23:28:30] <joannac> lakmuz: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-windows/#configure-the-system
[23:33:21] <joannac> cheeser: you around?
[23:34:08] <lakmuz> i don't need a service
[23:38:15] <joannac> lakmuz: You don't need a service, that was to show you that you can create a config file on Windows?
[23:38:23] <joannac> lakmuz: did i misunderstand your question?
[23:45:27] <lakmuz> joannac, mongod.cfg in mongod.exe dir doesn't work for me
[23:48:57] <joannac> lakmuz: how are you running it?
[23:52:33] <lakmuz> C:\Work\mongodb>mongod
[23:52:46] <joannac> you need to tell it where the config file is
[23:53:38] <lakmuz> what i am asking is: don't you have a default config file? and if not, why
[23:54:20] <lakmuz> i'm in the middle of a pull request
[23:54:37] <lakmuz> if u want it
[23:55:28] <joannac> As in sample config file? Some of the linux packages do, i think?
[23:55:39] <joannac> I don't believe there is one on windows
[23:57:19] <lakmuz> not a sample
[23:57:31] <lakmuz> real :D
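The answer lakmuz was circling: mongod on Windows (in the 2.4 era under discussion) reads no config file by default, so it has to be passed explicitly, e.g. `mongod --config C:\Work\mongodb\mongod.cfg`. A minimal ini-style file of the kind the Windows tutorial describes (paths are hypothetical):

```ini
dbpath=C:\Work\mongodb\data
logpath=C:\Work\mongodb\mongod.log
```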