#mongodb logs for Wednesday the 16th of January, 2013

[01:59:23] <aZnmAn> Hi guys -- new here. Wondering how to find() a nested document... http://pastebin.com/z9Zd9j6g for JSON. Need to find posts by an author (I'd like a nested setup for categories too)...
[02:02:18] <aZnmAn> db.listings.find({author: /Smith/}) for instance doesn't return anything
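A minimal sketch of the usual answer to aZnmAn's question, assuming the author is stored as an embedded document; the pastebin's actual structure isn't shown here, so "author.name" and "categories.name" are illustrative field names only:

```javascript
// Dot notation reaches into embedded documents; a bare {author: /Smith/}
// only matches when "author" is a plain string field.
db.listings.find({ "author.name": /Smith/ })

// The same idea covers a nested categories setup, including arrays of
// subdocuments:
db.listings.find({ "categories.name": "mongodb" })
```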
[05:37:18] <domo1> whats the difference between using mongo references vs referencing doc IDs manually?
[05:37:24] <domo1> should refs always be used now?
[06:04:50] <kreedy> anyone have a good first place to look for why MMS only sees one secondary and no primary in my 3-box replica set? I added all 3 boxes with auth, and there is also an agent running on each one
[06:04:58] <kreedy> i get type "no data" from two of them
[06:08:54] <jeremy-> Is there any way to use update/upsert to NOT do the upsert (create an additional entry) if a second query is true? I can think of one way of doing it, which is doing a .find() first, then just programmatically not doing the upsert query if that is true. Any way in one mongo request? I suppose it's wanting an additional query parameter for upsert that I don't see.
[06:11:13] <jeremy-> my question was kind of vague so I'll give an example from my pymongo: self.collection.update({'date' : q}, { '$set' : {type : q, typestatus : 'ready'}}, True) IMAGINE THERE IS AN ADDITIONAL FLAG... DONT DO UPSERT/ANYTHING IF... typestatus : { '$nin' : ["csv", "complete"]} ..
[06:12:20] <jeremy-> I think this is just not possible so i guess i'll just use two queries
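A rough sketch of the two-query workaround jeremy- settles on, written as mongo shell rather than pymongo; the collection and field names mirror his example and are assumptions, and the find/update pair is not atomic, so a concurrent writer can slip in between the two calls:

```javascript
// Only run the upsert when no document for this date is already in a
// "blocking" state. There is no flag on update() itself that does this.
var someDate = ISODate("2013-01-16");   // illustrative value
var blocked = db.jobs.findOne({ date: someDate, typestatus: { $in: ["csv", "complete"] } });
if (!blocked) {
    db.jobs.update(
        { date: someDate },
        { $set: { typestatus: "ready" } },
        true            // upsert
    );
}
```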
[06:26:22] <abhi9> is there any way to set matched item as a key from an array in mapreduce, except loop?
[08:22:44] <ranksu> hi, I believe I can possibly get some help from here regarding the mongo
[08:23:41] <ron> well, if you believe, it must be true!
[08:23:56] <ranksu> I'm a firm believer!
[08:25:56] <ranksu> the thing is that I have a large collection of POIs in mongo. When I do a bounding box query, I get the POIs only from the center of the box. however I'd like to have them newest first
[08:26:43] <ranksu> that's the problem since it doesn't seem to work. The stack is java with spring data
[08:27:11] <ranksu> whenever I specify the sort order in spring data the query will be god slow
[08:33:03] <kali> ranksu: have you checked what query spring sends to mongodb ?
[08:33:11] <kali> ranksu: and do you have the matching index ?
[08:34:34] <ranksu> I do have a matching index. the query _seems_ to run fine in console, but from spring it's slow. I've tried to debug the query, but I really cannot see the proper query that's been sent to mongo
[08:34:58] <[AD]Turbo> yo all
[08:36:58] <ranksu> Also do you know what's the default order when querying with bounding box?
[08:40:39] <kali> ranksu: ha, it's a geo query
[08:41:21] <ranksu> it is
[08:42:55] <ranksu> and the use case is rather trivial. "50 newest inside the bounding box"
[08:43:14] <kali> well, at least try to check out what spring does: you can probably bump the log level in your java app, or use mongosniff to sneak on the network, or use the mongodb profiler to get all queries
[08:47:20] <ranksu> I believe that's the route I have to take
[09:09:21] <NodeX> spatial queries wont even run without an index
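A sketch of ranksu's "50 newest inside the bounding box" case using the 2.2-era geo syntax; "loc" and "created" are assumed field names and the coordinates are placeholders. Comparing this shape of query against whatever Spring Data actually sends (via the profiler or mongosniff, as kali suggests) is the point:

```javascript
// Compound 2d index: the geo field first, then the sort field.
db.pois.ensureIndex({ loc: "2d", created: -1 });

// $within/$box returns documents inside the box (not ordered by distance),
// so an explicit sort gives "newest first".
db.pois.find({
    loc: { $within: { $box: [ [ 24.0, 60.0 ], [ 25.0, 61.0 ] ] } }
}).sort({ created: -1 }).limit(50);
```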
[09:21:19] <waheedi> a quick question on oplog
[09:21:54] <waheedi> i just checked my servers and saw the oplog configured as oplog size: 9118.617382812501MB / log length start to end: 8378451secs (2327.35hrs)
[09:22:24] <waheedi> do you think i need to decrease that size and length ?
[09:23:17] <kali> i don't think there is any significant downside of having a big oplog
[09:23:31] <kali> except the disk space, but 10GB is nothing
[09:23:46] <waheedi> thank you Kali
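For reference, the numbers waheedi quotes come from the shell helper below; the commented output just shows the style of what it prints, with his values filled in:

```javascript
db.printReplicationInfo()
// configured oplog size:   9118.617382812501MB
// log length start to end: 8378451secs (2327.35hrs)
// oplog first event time:  ...
// oplog last event time:   ...
```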
[09:54:58] <ranksu> one more thing. in mongo shell, is it possible to use a unixlike more/less since not everything fits in my terminal window?
[09:55:22] <NodeX> "it"
[09:55:41] <ranksu> getIndexes() for example won't give me that option
[09:57:09] <NodeX> you must have a lot of indexes then lol
[09:57:33] <ranksu> not that many but my screen resolution is not that big
[10:07:34] <NodeX> I dont think there is sorry
[10:09:32] <ranksu> ok
[10:09:34] <ranksu> ta
[10:14:55] <waheedi> what's the best way to measure my mongos performance ?
[10:20:17] <chickamade> hey guys, mongodb explain without running, is it possible?
[10:21:16] <ron> chickamade: now... try asking the same question again, only this time make a little more sense ;)
[10:23:33] <chickamade> can you ask mongodb to spit out the query plan without actually running the query?
[10:23:52] <ron> oh. hmm. don't know.
[10:23:55] <ron> sorry :)
[10:24:25] <NodeX> no
[10:25:48] <Baribal> Hi. A friend of mine just tried to use a collection's .copyTo() (without having found API docs on it, btw), using a collection as parameter. The result was: "SyntaxError: missing exponent (shell):1". So... Does .copyTo() actually copy collections? And more importantly, what does copyTo do and where are the docs?
[10:27:30] <ron> chickamade: it's not a bad suggestion though. maybe you should open a feature request.
[10:28:04] <NodeX> no such command as "copyTo"
[10:28:22] <NodeX> in the shell at least
[10:38:52] <chickamade> ron: thanks, will create one
[10:39:19] <ron> chickamade: no, thank you. thank you for moving the OSS world forward :)
[10:44:21] <invisib> How can I see all the collections in a db?
[10:44:48] <xcat> Is it OK to call ensureIndex regularly, when the index already exists?
[10:50:15] <NodeX> invisib : "show collections"
[10:50:31] <NodeX> xcat : once you've setup the index it should never need calling again
[10:50:43] <NodeX> unless it's dropped and needs rebuilding
[10:57:23] <xcat> That's not what I'm asking
[10:57:29] <xcat> I know it doesn't NEED to be called
[10:57:34] <xcat> I'm asking if it's OK to keep calling it
[10:57:47] <NodeX> but why would you want to add indexes over and over again?
[10:57:54] <xcat> I don't
[10:58:06] <xcat> But I want to know what happens if you keep calling the function
[10:58:14] <xcat> My application would find it easier to just ensure the index exists than to try to determine if it exists first
[10:58:17] <NodeX> it will keep trying to add the index
[10:58:35] <xcat> In other words, it would be easier to blindly call ensureIndex than to try and introspect the collection to figure out whether the same index already exists
[10:58:49] <NodeX> I'll take your word for it
[10:59:09] <xcat> I take it you're not a programmer
[10:59:30] <NodeX> no, I am a programmer, I just know how to program without doing that ^
[10:59:43] <xcat> What would you do
[11:00:08] <NodeX> either setup the indexes with an install script or manualy add them
[11:00:20] <NodeX> do you know how long some indexes take to add?
[11:00:27] <Derick> xcat: most drivers cache the ensureIndex, so you can call it as much as you want
[11:00:39] <NodeX> Derick : that's not the point
[11:00:41] <Derick> calling ensureIndex repeatedly is not a problem
[11:00:53] <xcat> OK, thanks Derick :)
[11:01:33] <xcat> I want to do this with the TTL index; it won't have any adverse effect there either? Like restarting the background thread or anything weird
[11:01:41] <NodeX> what happens if you've got a massive collection that has no index and your app creates one...... it will hang your app
[11:01:55] <Derick> yes, that you need to avoid
[11:02:00] <xcat> Oh
[11:02:05] <NodeX> which is my point
[11:02:11] <Derick> (I've had that happen a few times before i was a 10genner)
[11:02:27] <NodeX> you can "background" it but it's not ideal
[11:02:32] <NodeX> it still locks the db
[11:02:42] <xcat> NodeX: not what we're talking about
[11:03:05] <xcat> TTL cleanup runs on a background thread
[11:03:39] <NodeX> xcat : it's exactly what you're talking about .. you asked about creating indexes over and over in your app, and i explained why it's a bad idea
[11:04:11] <xcat> That was 5 minutes ago, try to keep up
[11:05:07] <Derick> xcat: no need to get catty
[11:05:12] <NodeX> kids :/
[11:08:33] <xcat> Grandfathers :/
[11:14:20] <NodeX> This is the trouble with a good product becoming popular
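Pulling the ensureIndex discussion above together as a sketch (collection and field names are made up): repeated calls against an existing index are cheap and most drivers cache them, but the first build on a large collection blocks unless it runs in the background, which is the case NodeX is warning about:

```javascript
// First build on a big collection: ask for a background build so the app
// doesn't hang while the index is created (it still takes time and load).
db.events.ensureIndex({ userId: 1 }, { background: true });

// TTL index (2.2+). Re-running ensureIndex with an identical spec is a
// no-op, so it shouldn't disturb the background expiry thread; changing
// expireAfterSeconds typically means dropping and recreating the index.
db.sessions.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 });
```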
[11:52:38] <ABK> is a RHEL5/CentOS5 x86_64-specific RPM of 2.0.2-mongodb_1 available anywhere?
[11:52:55] <ABK> I require this specific version of the RPM only
[11:53:12] <Derick> There is a 2.0.8 - don't install older versions as they have bugs.
[11:54:04] <ABK> the issue is we currently don't want to change anything in Prod, so to have the same version across the board we require it
[11:54:43] <ABK> we'll only switch in Prod after long regression testing... that will take time... so until then we are stuck
[11:55:05] <Derick> http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/RPMS/ is all we have online
[11:55:28] <ABK> yeah checked that... it has 2.2.2
[11:55:52] <Derick> and 2.0.8
[11:56:07] <ABK> would there be any back-up available... of required 2.0.2
[11:56:19] <Derick> maybe, but you really should not install that
[11:56:22] <ABK> else I'll have to prepare one from the source :(
[11:57:08] <ABK> I understand the reason for not using that... but as I said, pushing the infra change to Production won't happen without enough testing
[11:57:54] <ABK> I'll be testing it with 2.2.2 (hoping that's the latest stable build)
[11:58:15] <Derick> 2.2.3 soon though
[11:58:30] <Derick> also, what you build from source is not going to be the same as the RPM I suppose
[11:58:58] <ABK> do you have access to the RPMSpec used to build RPM by any chance...
[11:59:20] <Derick> I've no idea where they are, sorry.
[11:59:45] <Derick> https://github.com/mongodb/mongo/tree/master/rpm has them it seems
[11:59:53] <ABK> aaahhh... the tar I got just has binaries :(
[12:00:08] <ABK> I'll check the github, thanks
[12:00:38] <Derick> https://github.com/mongodb/mongo/tree/r2.0.2
[12:00:42] <Derick> will have it
[12:00:50] <Derick> https://github.com/mongodb/mongo/archive/r2.0.2.zip is a zip file of it
[12:00:57] <ABK> yeah the spec seems to be there @ https://github.com/mongodb/mongo/blob/master/rpm/mongo.spec
[12:01:05] <ABK> thanks dude
[12:01:21] <Derick> do note that we probably won't support this, though, and really recommend upgrading to at least the latest 2.0.x
[12:02:03] <ABK> sure... this is just for the new upcoming nodes using the same config-management tool with a fixed Mongo version because of Production
[12:02:15] <ABK> once we are done testing a swift move to 2.2.2
[12:02:23] <ABK> I'll update it
[12:03:12] <ABK> just can't take chances with Prod data right :(
[13:02:11] <lgbr> my reduce function only seems to be called on a very limited set of my database. So I might have hundreds of objects in a collection whose map key (first parameter of emit()) should have hundreds of entries in it, but when reduce is called, only ~70 or so items are in the second parameter of the reduce() function. Am I understanding map reduce wrong?
[13:24:30] <kali> lgbr: the reduce is only called if there is something to reduce: if you only have one value emitted for a given key, it will not need a reduce
[13:25:11] <kali> lgbr: also, consider using the aggregation framework instead of m/r if it can handle your task as it is much more efficient
[13:26:11] <lgbr> kali: My problem is that I have a key that should have >100 values, but it is consistently calling reduce with <80 values. They're not being broken up into different reduce calls or something are they?
[13:26:36] <kali> lgbr: oh yeah, that can happen too
[13:27:06] <kali> lgbr: if a key has 1000 emitted values, mongodb can choose to reduce the first half, then the second one, and then reduce the two results again
[13:28:47] <lgbr> kali: Hmm. The second parameter of my emit() function should have the same format as the return value of my reduce() function?
[13:29:09] <kali> lgbr: yes
[13:29:21] <lgbr> kali: thank you for the monumental epiphany
[13:29:28] <kali> :)
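A small sketch of the point kali just made: because MongoDB may re-reduce partial results, the value passed to emit() and the value returned by reduce() must share the same shape (the collection and field names here are illustrative):

```javascript
var map = function () {
    // emit a value with the same shape reduce() will return
    emit(this.author, { count: 1 });
};

var reduce = function (key, values) {
    var total = 0;
    values.forEach(function (v) { total += v.count; });
    return { count: total };   // same shape as the emitted values
};

db.posts.mapReduce(map, reduce, { out: { inline: 1 } });
```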
[13:36:49] <jonesy> I use Diamond to monitor mongodb, and it's creating metrics for, like, every mapreduce job. Where can I learn more about what's happening behind the scenes when I launch a mapreduce job, so I can better understand how to work with my monitor's data collector?
[14:28:45] <BostjanWrk> hi
[14:28:56] <BostjanWrk> i'm having problem with multi document update
[14:29:00] <BostjanWrk> http://docs.mongodb.org/manual/reference/operator/inc/
[14:29:14] <BostjanWrk> i used this command (name: john)
[14:29:23] <BostjanWrk> but only the first row gets updated
[14:29:29] <BostjanWrk> an idea why?
[14:29:53] <lgbr> BostjanWrk: You need to use the option 'multi'
[14:30:45] <BostjanWrk> lgbr am....10x and....why isn't this noted in manual? :D
[14:31:06] <lgbr> BostjanWrk: it's here: http://docs.mongodb.org/manual/applications/update/
[14:32:23] <BostjanWrk> huh lgbr thanks a lot
[14:35:36] <BostjanWrk> khm... how does upsert work with multi?
[14:37:04] <lgbr> never tried it
[14:40:15] <NodeX> yes
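BostjanWrk's actual command isn't pasted, but a multi-document $inc in the 2.x shell looks roughly like the sketch below; without the multi flag only the first matching document is changed, and when upsert and multi are combined, a non-matching query still inserts only a single document:

```javascript
// update(query, update, upsert, multi) in the 2.x shell; newer shells also
// accept an options document like { multi: true }.
db.people.update(
    { name: "john" },
    { $inc: { age: 1 } },
    false,   // upsert
    true     // multi
);
```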
[15:52:56] <rickibalboa> Can anyone help me out with an issue. I have a query which I can run directly on the database https://gist.github.com/4548097 and it returns results which is fine, the next query is exactly the same, just PHP-ified, I run that in my PHP code and I don't get any results, database is definitely connected etc. Any ideas?
[15:59:02] <NodeX> is it a large query?
[16:00:25] <rickibalboa> Nah
[16:00:42] <rickibalboa> It's returning about 30 results, narrowed it down to the timestamp part
[16:36:02] <owen1> what triggers an election for a new primary? only loss of connection to a member or are there more conditions like high cpu/memory/diskspace?
[16:38:11] <StarX> any developer here ?
[16:38:18] <StarX> why does mongo replication have a 12 member limit?
[16:44:17] <Baribal> owen1, AFAIK, loss of the master, and that's it. Could be wrong though.
[16:44:46] <Derick> StarX: because the algorithm to vote the primary doesn't scale beyond that (I think)™
[16:45:38] <owen1> Baribal: got it
[16:46:12] <Baribal> owen1, handle with care, consume with big grains of salt. :)
[16:47:35] <StarX> Derick, hmmm.... for my case... i need 5 voting members and 20 non-voting members..... the 12 server limit is killing me :/
[16:48:21] <Baribal> StarX, you need 25 replicas per actual server?
[16:48:34] <Baribal> Sounds like a whole new world of read load.
[16:49:20] <StarX> Baribal, exactly
[16:49:55] <StarX> Baribal, 5 eventual master servers....and 20 servers for local reads...
[16:50:25] <Baribal> Ah, that'll be 4 replicas per master then?
[16:51:09] <NodeX> that's a lot of read load
[16:51:21] <NodeX> you're gonna have to shard to get what you need
[16:51:53] <StarX> NodeX, i'm thinking of slaves for reads
[16:53:19] <StarX> Baribal, maybe my ideal world....is a mix....of Master-Slave and Replica
[16:54:18] <kali> StarX: what's wrong with the usual sharding over replica set setup ?
[16:54:26] <StarX> 5 servers in Replica mode.... and 20 "slaves" (from Master/Slave mode) using (if crash) the 5 "masters" in replica
[16:56:03] <StarX> kali, i need all the data locally.. on 25 servers... using Replica i'm limited to 12 servers
[16:56:39] <StarX> in my case.... 90% of the access is reads and 10% is writes..
[16:57:55] <StarX> any ideas ?
[16:58:23] <Derick> shard
[16:58:56] <NodeX> ^^
[16:58:59] <StarX> hmmmm
[16:59:30] <NodeX> how many reads do you expect?
[16:59:30] <StarX> i will read again Shard concept
[16:59:39] <NodeX> sharding will only get you so far
[17:00:36] <kali> StarX: "i need all local data" i don't know what that means
[17:01:01] <NodeX> I think he means he wants 25 copies of his data in 25 servers so he can read from them
[17:01:20] <StarX> NodeX, yeah ^^
[17:01:45] <NodeX> how many reads do you expect?
[17:02:04] <NodeX> because your bottleneck will not be mongo @ 12 replicas
[17:02:26] <NodeX> the mongos will unless you're connecting direct to the rs
[17:02:30] <NodeX> (secondary)
[17:02:35] <StarX> this case is for a email marketing work
[17:02:44] <NodeX> correction, the network card in the primary will be
[17:02:51] <StarX> all emails for all clients and all lists is on mongo
[17:03:19] <StarX> each email is on mongodb
[17:03:45] <NodeX> I would restructure your data then
[17:06:12] <StarX> NodeX, my idea is to use mongodb... to know how many times... each "email address" received an "email" for each client.
[17:07:57] <NodeX> I dont understand what that means sorry
[17:08:25] <StarX> i will read Shard concept again....
[17:08:51] <StarX> NodeX and kali , thanks for help
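A very rough sketch of the "shard instead of 25 full copies" suggestion above; the database, collection, and shard key below are pure assumptions, and choosing a good shard key for an email-marketing workload needs real thought:

```javascript
// Run against a mongos. Each shard can itself be a small replica set, and
// reads can go to secondaries where slightly stale data is acceptable.
sh.enableSharding("marketing");
sh.shardCollection("marketing.deliveries", { emailAddress: 1 });
```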
[17:09:44] <WarheadsSE> Need some help sorting out mongodb builds on ARM for Arch Linux ARM
[17:12:07] <kali> mongodb on ARM ? i must admit the usecase eludes me
[17:12:54] <WarheadsSE> kali: for the users requests, and e.g. small nodejs based things
[17:13:05] <kali> WarheadsSE: on phones ?
[17:13:43] <WarheadsSE> right, because only phones have ARM chips.
[17:16:00] <NodeX> lol
[17:16:01] <WarheadsSE> You've never seen server grade ARm equipment have you
[17:16:13] <NodeX> sarcasm doesn't generally get people helped any faster ;)
[17:17:10] <WarheadsSE> True, but assuming I'm here because I want my Android phone to run a nodejs webchat server with mongodb doesn't help in understanding the problem in any way either.
[17:17:24] <NodeX> where did node.js come from?
[17:17:35] <WarheadsSE> use case example
[17:17:54] <WarheadsSE> i'm not worried about that, I am here specifically about mongodb & its ASM opcode failures.
[17:21:45] <WarheadsSE> Here is what we are hitting on a compile for armv7-a vfpv3-d16 http://sprunge.us/QeOT
[17:22:24] <WarheadsSE> Essentially identical for armv6l vfpv2 & armv5te
[17:23:22] <WarheadsSE> build scripts are located here: https://projects.archlinux.org/svntogit/community.git/tree/trunk?h=packages/mongodb
[17:24:01] <WarheadsSE> The compile was last tried with gcc/glibc 4.7
[17:34:31] <owen1> i got 2 datacenters. is it fine to have RS of 3 nodes, 2 on east coast and 1 on west?
[17:34:56] <owen1> (priority 1 for each node)
[17:35:48] <NodeX> define "fine"
[17:36:30] <owen1> safe? common usage?
[17:36:55] <NodeX> not sure what common is tbh, my common differs from yours
[17:37:10] <NodeX> (not trying to be difficult - just saying)
[17:37:20] <owen1> ok. what is your setup?
[17:37:36] <NodeX> better to explain yours
[17:37:41] <owen1> all i need is redundency across data centers
[17:38:05] <NodeX> that should be fine I would say .. if east is stable
[17:39:09] <owen1> the only issue is if i lose 2 primaries, i'll have to manually set the last one to primary
[17:40:22] <owen1> so if i'll have total of 5 i can lose 2 primaries and a 3rd one will become primary.
[17:40:56] <owen1> and i need to lose 3 primaries until i need to do some manual intervention.
[17:42:50] <NodeX> I would have 1 master in east and 2 secondaries in the west
[17:43:26] <NodeX> I can only assume east is better than the west in terms of redundancy
[17:45:06] <owen1> NodeX: what's the reason u have 1 primary alone?
[17:46:01] <NodeX> redundancy
[17:46:08] <Saturation> is there anyway to get latest stable working on raspberry pi?
[17:46:11] <NodeX> no point in having a pri and sec in same place
[17:46:18] <Saturation> any way
[17:47:20] <NodeX> (if you're trying to avoid network failure that is )
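A sketch of the layout owen1 is asking about, with made-up hostnames: two members in the data centre you trust more and one in the other, so the larger side keeps a majority (and therefore a primary) if the cross-country link drops:

```javascript
rs.initiate({
    _id: "rs0",
    members: [
        { _id: 0, host: "east-1.example.com:27017" },
        { _id: 1, host: "east-2.example.com:27017" },
        { _id: 2, host: "west-1.example.com:27017" }
    ]
});
```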
[17:49:06] <evan_> I'm doing some very detailed reporting on 100million events and was looking to use map-reduce. Should I use cassandra or mongodb or what?
[17:49:56] <NodeX> better to use the aggregation framework
[17:54:01] <evan_> NodeX: Which one do you recommend ?
[17:54:47] <NodeX> aggregation framework
[17:58:47] <WarheadsSE> Saturation is the user that has me looking for a way to get the package fixed in our builds.
[18:58:20] <owen1> i don't understand the requirement for odd number of nodes. let's say i have 4. primary dies. 3 can vote. what's the problem?
[18:59:30] <kali> owen1: the problem is when 2 die
[19:01:03] <owen1> i guess. still a bit confused.
[19:03:09] <strnadj> You need an ODD number of voters, because if you have an even number of votes there can be a situation where no node has the majority of votes - 4 can split 2:2, 6 can split 3:3, etc. If you have an ODD number of voters this situation is not possible - 7 can only split 3:4, etc...
[19:05:07] <JaredMiller> what if you have 5 nodes.. and one dies?
[19:05:41] <owen1> JaredMiller: if it's the secondary, nothing happens. if it's the primary, a new primary will be elected
[19:06:00] <JaredMiller> right, wouldn't there be 4 votes?
[19:06:12] <JaredMiller> so potential for a 2:2 split
[19:06:14] <kali> 4 votes out of 5 is a strict majority
[19:06:16] <owen1> based on which primary has the recent oplog
[19:06:59] <owen1> in 5 nodes, 2 primary can die and the rest can still vote
[19:07:30] <owen1> in 3 nodes only 1 primary can die
[19:08:31] <owen1> the same for 4 nodes. that's why it makes no sense to use an even number!
[19:08:58] <kali> the problem is not a tie during the voting process. the problem is to make sure there is only one election taking place at a given time. so for an election to occur, you need a strict majority of nodes to be able to talk to each other
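A common way to keep an odd number of voters without paying for another data-bearing node is an arbiter; the hostname below is made up:

```javascript
// An arbiter votes in elections but holds no data.
rs.addArb("arbiter-1.example.com:27017");
```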
[19:23:19] <tworkin> in pymongo, there is collection.find().count(), but why isnt there len(collection.find()) ?
[19:26:37] <owen1> i have 3 nodes. 2 are dead. how to turn the survivor into primary?
[19:29:05] <kali> owen1: http://docs.mongodb.org/manual/tutorial/reconfigure-replica-set-with-unavailable-members/
[19:33:45] <owen1> kali: nice!
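Condensed from the tutorial kali links, roughly: on the surviving member, keep only the reachable hosts in the config and force the reconfiguration. The member index below depends on your own rs.conf(), so check it first:

```javascript
cfg = rs.conf();
cfg.members = [ cfg.members[0] ];      // keep only the surviving member
rs.reconfig(cfg, { force: true });     // force is required without a majority
```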
[19:53:39] <owen1> i had 1 primary that survived out of 3. i did some config change to one of the secondaries (added itself and the other secondary to the conf) and reconfigured it. it seems like the surviving primary is not aware of the 2 nodes that came back to life.
[19:53:41] <owen1> why?
[19:54:10] <owen1> do i need to run the reconfigure on the primary as well?
[19:55:00] <owen1> i thought the configuration would be shared whenever i change it on any host.
[20:00:38] <owen1> also rs.conf() there shows the old configuration. the 2 secondaries have the new conf
[20:03:00] <owen1> maybe i should delete the data in the secondaries before changing their config?
[20:35:51] <WarheadsSE> kali: NodeX: would either of you know who might be able to lend a hand in sorting out the issues seen by my users when building mongodb?
[20:37:39] <kali> WarheadsSE: you need help from the actual developers... most of the people here are users, and most of the discussion here is about usage
[20:37:58] <kali> WarheadsSE: you could try to open a jira, on contact 10gen for some consulting
[20:38:06] <kali> s/on /or /
[20:42:30] <WarheadsSE> Thanks for the pointers.
[21:10:50] <andrewwinterman> hello!
[21:17:19] <andrewwinterman> do you guys know when you'd want to use multiple databases rather than multiple collections?
[23:02:26] <owen1> let's say i have 3 nodes (primary is here too) on west coast and 2 on the east. if the connection between the coast is lost, what ensures that i don't end up with 2 primaries? is it because of the fact that the east coast is not a majority?
[23:10:06] <owen1> another question - is there a need to restart my node client after reconfiguring a replica set (let's say new primary is elected, etc)?
[23:12:39] <linsys> owen1: What client are you running? I think node.js might have a bug if auth is enabled, but besides that, no to your question about restarting the node app.
[23:13:29] <linsys> owen1: for a member to become master it must have a majority of the votes. If the west coast went down, 2 nodes is less than a majority so no node on the east would ever become master
[23:14:28] <owen1> linsys: i use the 10gen official client. not sure if auth is enabled. what does enabling it gives me?
[23:14:54] <linsys> owen1: Well there are 10gen clients for many things.. what programming language is your app in?
[23:15:03] <owen1> nde
[23:15:05] <owen1> node
[23:15:08] <linsys> owen1: auth provides authentication
[23:15:16] <owen1> is there any other language? (:
[23:15:29] <linsys> LOL yes
[23:15:30] <owen1> linsys: i don't use auth
[23:15:35] <linsys> Then you should be fine
[23:15:38] <owen1> do i need to?
[23:15:49] <linsys> Up to you... that is a business question
[23:16:06] <linsys> I don't use it, but I block all my mongodb deployments off behind a firewall
[23:16:28] <owen1> what does auth mean in practise?
[23:16:33] <owen1> practice
[23:18:17] <linsys> owen1: it means you would need to pass a username/password when your app connects to mongodb, same when you connect with the shell
[23:18:27] <owen1> linsys: got it
[23:18:32] <linsys> vs now where you can just do like mongodb --host <ip> and look at all the dbs and such
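What enabling auth looks like in practice, very roughly; the user, password, and database names below are made up:

```javascript
// Shell: mongo --host <ip> -u appUser -p s3cret mydb
// Or, after connecting without credentials:
db.getSiblingDB("mydb").auth("appUser", "s3cret");
```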
[23:20:34] <owen1> linsys: regarding the other question, what if i have 1 host (primary) on the west and 2 secondaries on the east and there is a network issue between east and west. the 2 secondaries might think that the west is down so they will elect a primary and i'll have 2 primaries.
[23:23:06] <linsys> No, if that were to happen - if there is a network issue - the primary in the west will demote itself since it lost the majority of the votes, thinking it's isolated
[23:26:41] <owen1> so if the primary is among minority, it will demote itself? let's say i have 2 nodes(1 of them is primary) in the west and 3 nodes in the east, and there is a network issue, the primary will demote itself as well?