PMXBOT Log file Viewer


#mongodb logs for Thursday the 16th of June, 2016

[00:16:45] <poz2k4444> Hi guys, one quick question, when I create a replication infrastructure, do I need to do the rs.initiate() on each member or just on one?
[00:33:49] <Boomtime> @poz2k4444: rs.initiate on only one member - if you have data already be sure to run that command on the member you want to preserve
[00:34:09] <Boomtime> when you want to add other members, use rs.add from the host where you ran rs.initiate
[00:34:27] <poz2k4444> so the replica won't be done in the secondaries right?
[00:34:54] <Boomtime> wut? the replica-set is all members combined - not sure what you are asking
[00:35:41] <Boomtime> all members of the replica-set must be started with the same value for replSet name and keyFile if using auth (cert subject if using x509)
[00:36:05] <Boomtime> the replica-set is then initialized from exactly one member
[00:36:18] <Boomtime> that one member will contact the others, and replicate the configuration
[00:36:37] <poz2k4444> Boomtime: ok, so every configuration file (in my case) has the same rs name, then I run rs.initiate() in only one member and then I add the other members?
[00:38:39] <Boomtime> right
[00:38:59] <Boomtime> you'll need to wait a few seconds after rs.initiate before you try rs.add
[00:46:05] <poz2k4444> Boomtime: aight man, thank you, i'll give it a try tomorrow though
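The sequence Boomtime describes, sketched in the mongo shell (hostnames are placeholders):

```javascript
// run on ONE member only -- the one whose data should be preserved
rs.initiate()
// wait a few seconds for this node to become PRIMARY, then, still on it:
rs.add("mongo2.example.net:27017")
rs.add("mongo3.example.net:27017")
rs.status()   // all members should eventually reach PRIMARY/SECONDARY
```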
[03:43:23] <TheTank111> can someone tell me why update({},{"$set":{"groups":[]} }) won't clear the contents of an array that contains other json objects?
[03:43:44] <TheTank111> db.Logins.update(....) (I'm doing the command from the shell)
[03:50:06] <TheTank111> i figured it out, i needed to add {multi:true} at the end
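For the record, a sketch of the fix (collection name from the log, fields as TheTank111 described):

```javascript
// update() touches only the FIRST matching document by default;
// {multi: true} applies the $set to every document that matches
db.Logins.update(
  {},                         // match all documents
  { $set: { groups: [] } },   // empty the array, whatever it contained
  { multi: true }
)
```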
[04:53:26] <bambanx> hello
[04:53:51] <bambanx> how are you tonight guys?
[05:50:38] <TheTank111> gewd you?
[05:56:42] <bambanx> am cool man TheTank111 sorry for late answer was abduced on a dream
[05:57:15] <bambanx> the documents have a max size ?
[06:43:47] <tsturzl> I'm having a really strange error
[06:44:13] <tsturzl> I can open shell and use db.auth() to authenticate but I cannot via command line arguments `-u -p`
[06:44:31] <tsturzl> I get "Failed: error connecting to db server: server returned error on SASL authentication step: Authentication failed."
[06:44:53] <tsturzl> I'm specifying the DB to authenticate with in the cli args
[06:45:01] <tsturzl> So I have no idea why it is failing
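A likely culprit for anyone hitting the same SASL error: db.auth() inside the shell authenticates against whatever database is current, while on the command line the authentication database must be named explicitly, otherwise the connection database is used for the SASL step. A sketch, assuming the user was created in admin:

```sh
mongo mydb -u myuser -p 'secret' --authenticationDatabase admin
```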
[08:12:47] <shayla> Hi guys, i'm having a problem with a simple match in mongodb/php
[08:13:09] <shayla> The query I do in mongo works : { $match : { $and: [ {_id : { $in: [ ObjectId("575a8c5a2620e1df4c8b4567"), ObjectId("575a8c5a2620e1df4c8b4568") ] }}, { "content.type": 'ticket'} ] } }
[08:13:34] <shayla> I try to use it in php and it seems that the "content.type": 'ticket' doesn't work
[08:13:44] <shayla> The match with $in [ObjectId .... works
[08:13:54] <shayla> ['$match' => ['$and' => [['id' => ['$in' => $factsId]], ['content.type' => 'ticket']]]]
[08:14:04] <shayla> Any suggest?
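One visible difference between the two versions shayla pasted: the working shell query matches on _id, while the PHP array uses 'id'. A corrected sketch (note also that $factsId must hold ObjectID instances, not plain strings, for the $in to match):

```php
['$match' => ['$and' => [
    ['_id' => ['$in' => $factsId]],
    ['content.type' => 'ticket'],
]]]
```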
[08:43:36] <gregf_> hello
[08:43:58] <gregf_> is there much difference between mongodb and dynamodb?
[08:45:06] <gregf_> im using dynamodb atm and there is a restriction on #indexes that can be created. also adding a index for example takes quite some time.
[08:45:24] <gregf_> so was thinking about moving to mongodb
[08:46:41] <gregf_> is this channel active?
[08:48:37] <Derick> well, yes
[08:48:41] <Derick> but you need some patience
[10:49:55] <jacoscaz> Hello everyone. I'm experiencing a weird issue with the nodejs driver version 2.x for which I haven't been able to find a solution no matter how much time I throw at it. Basically, let's assume M1 and M2 are two different machines running the same code and connecting to the same replicasets R1 and R2. Using the 1.x driver, the avg. query for both M1 and M2 is ~10ms both on R1 and R2. Using the 2.x driver, the avg.
[10:49:55] <jacoscaz> query for both M1 and M2 is ~50ms, both on R1 and R2. The query itself is the same (a simple find by id), the codebase is the same.
[10:51:35] <kurushiyama> jacoscaz What was the reason for updating the driver?
[10:51:44] <jacoscaz> However, machine M3 and machine M4 - which are identical to M1 and M2 - do not experience this difference. Their query times are all around ~10ms.
[10:52:58] <kurushiyama> jacoscaz Well, the evidence proves that there _is_ a difference.
[10:55:29] <jacoscaz> @kurushiyama yes, there is a difference. I just haven't been able to find it.
[10:56:25] <kurushiyama> jacoscaz With identical, you are talking of the hardware, I assume? Versions and storage engine are identical, too? You sure the disk storage is identical?
[10:57:39] <jacoscaz> @kurushiyama I should have worded that in a different way. What I mean is that for what I can see (which is clearly not enough), they're identical. Hardware and disk storage is identical across all replica sets. The client machines all share the same hardware specs, the same OS, the same nodejs codebase.
[10:58:32] <kurushiyama> Ok, then check the mongod versions and the respective config.
[10:58:47] <jacoscaz> Same versions, same configs.
[11:00:06] <kurushiyama> jacoscaz I doubt that. MongoDB is pretty predictable when it comes to scaling. Since presumably you did not change the network, there has to be a difference.
[11:03:29] <jacoscaz> I have triple-checked. Also, I don't think it's a matter of scaling as some clients are seeing the different query time while some other clients are not for the replicaset/database.
[11:04:20] <jacoscaz> * same
[11:12:46] <chujozaur> Hi, is there any way to check replication progress in MongoDB?
[11:13:12] <Derick> the initial sync you mean?
[11:13:14] <Derick> or the lag
[11:13:18] <chujozaur> I see data being copied, but can I give reasonable time estimate for replication process to finish
[11:13:28] <chujozaur> yeah, basically I had one replica set on 3 nodes
[11:13:31] <chujozaur> about 1.5 TB
[11:13:45] <chujozaur> and just created 4th host and make it join replicaset
[11:14:04] <Derick> what's the reason for wanting 4 data nodes in a RS?
[11:14:24] <chujozaur> I see that mongo data dir on the newest host is ~490 GB so far
[11:14:46] <chujozaur> (I am slowly adding and removing hosts one by one to separate AZs in AWS)
[11:14:49] <kurushiyama> jacoscaz What I meant with that is that MongoDB tends to behave identical on identical config and HW...
[11:15:43] <chujozaur> I'd like to know how to see replication progress
[11:15:58] <chujozaur> rs.status tells just { "_id" : 20, "name" : "10.43.38.247:27017", "health" : 1, "state" : 5, "stateStr" : "STARTUP2", "uptime" : 6768, "optime" : Timestamp(0, 0), "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2016-06-16T10:58:00.029Z"), "lastHeartbeatRecv" : ISODate("2016-06-16T10:57:59.896Z"), "pingMs" : 0, "syncingTo" : "10.43.32.153:27017", "configVersion" :
[11:16:04] <kurushiyama> chujozaur As soon as you added one, you should remove one, even during initial sync. Reason: You do not want to sit around the whole night, do you?
[11:16:12] <kurushiyama> chujozaur Use Pastebin!!!
[11:16:48] <chujozaur> why I would have to sit around all night?
[11:17:46] <chujozaur> I know that 4 nodes have the same fault tolerance as 3
[11:18:08] <jacoscaz> @kurushiyama that is my experience, too, and the reason why I've spent hours looking for a difference that would explain this. I'll keep looking...
[11:19:17] <chujozaur> but apart from paying more for the additional node that brings no value in terms of cluster durability
[11:19:25] <kurushiyama> jacoscaz What I tend to do in those situations is to use diff (vim-diff) actually for the config files. Take a known good and diff the others against it.
[11:19:38] <kurushiyama> chujozaur Wrong.
[11:19:47] <chujozaur> is there any serious risk related to that?
[11:20:01] <kurushiyama> chujozaur You actually increase the chance of your replica set entering secondary state.
[11:20:11] <kurushiyama> chujozaur Since more nodes are involved.
[11:20:59] <kurushiyama> chujozaur The probability is simply higher that one of two nodes fails than that a single node fails.
[11:21:08] <jacoscaz> @kurushiyama did that already, both server-side and client-side. Client-side, we've also written a couple of scripts that do not go through the rest of our system but merely connect through the native drivers (1.x and 2.x) to make sure it's something code-related.
[11:21:24] <jacoscaz> @kurushiyama * not
[11:21:32] <chujozaur> okay
[11:21:52] <jacoscaz> @kurushiyama thanks for pitching-in btw. Much appreciated.
[11:21:53] <chujozaur> but still if one of the node dies PRIMARY should get reelected right
[11:21:53] <kurushiyama> chujozaur So you pay more to increase the chances of your replset relegating.
[11:22:04] <kurushiyama> jacoscaz Just thinking with my fingers ;)
[11:22:22] <chujozaur> yes, but 4 nodes is a temporary state
[11:22:53] <kurushiyama> chujozaur Sure. But do you really want to _wait_ until the initial sync is done?
[11:23:27] <chujozaur> unfortunately I have to as there are some crazy software-related dependencies
[11:23:35] <kurushiyama> chujozaur Whut?
[11:23:40] <chujozaur> being resolved by backend team which I work with
[11:23:48] <kurushiyama> chujozaur Ah. Write concern?
[11:24:01] <chujozaur> my mongo cluster is a backend for couple of web services
[11:24:02] <chujozaur> true
[11:24:27] <kurushiyama> chujozaur And I assume a fixed value instead of 'majority'.
[11:24:41] <chujozaur> right
[11:25:25] <chujozaur> anyway, what I can check to tell about actual progress of replication after adding that node?
[11:26:05] <chujozaur> I'd like to have some kind of +/- estimate of when I can expect my newly added node to be 100% valid, up-to-date secondary machine
[11:26:22] <Derick> chujozaur: the log should show progress, does it not?
[11:26:57] <cheeser> you could rsync the files over and then let replication pick up from there.
[11:27:35] <kurushiyama> A snapshot comes in handy, there.
[11:27:48] <cheeser> https://docs.mongodb.com/manual/tutorial/resync-replica-set-member/#sync-by-copying-data-files-from-another-member
[11:28:06] <chujozaur> yes, but I've decided not to copy data
[11:28:13] <chujozaur> and let it sync from scratch
[11:28:34] <cheeser> why?
[11:28:42] <kali> the log will give you an indication. not an eta, but at least some indication
[11:31:14] <chujozaur> will check it
[11:31:30] <chujozaur> you mean log on primary node? or that one which has just joined
[11:31:31] <chujozaur> ?
[11:32:20] <kurushiyama> cheeser I am not even sure rsync is actually faster. Have tried it only once or twice, though.
[11:32:54] <cheeser> oh, yeah?
[11:33:12] <cheeser> i would think it would be since you're *just* copying bytes. oplog sync requires adjusting bits in flight.
[11:36:17] <chujozaur> my log says nothing about replication progress
[11:36:27] <chujozaur> is any rs.* command to check that?
[11:51:00] <jamiel> Don't think so - disk space used can be a decent indicator if your replicas aren't heavily fragmented - if you are using WiredTiger you will see the collections being created in the data directory.
[11:51:26] <jamiel> When it's creating indexes it should print in the log
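For reference, the closest the shell helpers of this era come to a progress read-out (no true percent-complete exists):

```javascript
// on the PRIMARY: per-member lag; a member still in initial sync
// shows an optime of Timestamp(0, 0), as in the rs.status() paste above
rs.printSlaveReplicationInfo()
// on the syncing member: growth of on-disk data size is a rough proxy
db.getSiblingDB("mydb").stats().dataSize   // "mydb" is a placeholder
```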
[11:53:48] <sumi> hello
[11:57:32] <chujozaur> unfortunately I'm using MMAPv1
[11:57:40] <chujozaur> that disk copying is pretty fast
[11:58:00] <chujozaur> but I was wondering how much time it would take after all data gets copied
[11:58:28] <chujozaur> as there is oplog to get applied and indexes to be built..
[12:04:53] <kurushiyama> chujozaur Considerable amounts. Let us hope your oplog window is big enough for the 1.5TB anyway ;)
[12:05:06] <kurushiyama> Hello sumi !
[12:06:18] <kurushiyama> cheeser Well, rsync needs to validate the files first, iirc. This can cause quite an overhead.
[12:07:42] <dddh_> I've seen several times this month that initial syncing to new replica set members on modern hardware is limited by network bandwidth
[12:09:33] <dddh_> of course data could be compressed before transferring to new nodes with rsync -rltpvaz --compress-level=9
[12:10:37] <dddh_> or something like tar -I'pixz -p4'
[12:10:38] <dddh_> ;)
[12:10:56] <kurushiyama> chujozaur To give you an impression. A 800GB data set which I used to resync regularly took some 3h over a bonded 2* 1Gb network, exclusive for database access. including index rebuild and oplog application.
[12:12:02] <kurushiyama> dddh_ Well, and then you can hear the fans from miles away and the lights will start to flicker in the datacenter ;)
[12:14:13] <dddh_> kurushiyama: that is why I use only -p4 and not all possible cpus/cores
[12:15:04] <kurushiyama> dddh_ Funny when only 2 are there ;)
[12:16:09] <kurushiyama> dddh_ Actually, I use snappy when transferring stuff. But then, I tend to pipe tar | snap | nc ;)
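The tar | snappy | nc pipeline kurushiyama mentions might look like this, assuming a snappy CLI such as snzip is installed (paths, host, and port are placeholders):

```sh
# on the receiving node, started first:
nc -l 7000 | snzip -d | tar -x -C /var/lib/mongodb
# on the sending node (mongod stopped, or a filesystem snapshot mounted):
tar -c -C /var/lib/mongodb . | snzip | nc receiver.example.net 7000
```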
[12:21:58] <wtiger> Hi! is mongodb persistent by default?
[12:28:16] <dddh_> kurushiyama: usually there are about 32 available cores on modern production servers
[12:28:38] <kurushiyama> wtiger Yes.
[12:28:47] <wtiger> cool :)
[12:28:48] <kurushiyama> dddh_ I guess that _heavily_ depends.
[12:29:23] <kurushiyama> dddh_ Actually, I am building an application which tends to behave _very_ well with 2-4 cores.
[12:29:44] <kurushiyama> dddh_ And I could argue that 32 cores are way past the sweet spot.
[12:31:32] <kurushiyama> wtiger How did you come to the conclusion that it might be different, if I may ask?
[12:31:58] <wtiger> kurushiyama: thought redis was persistent too
[12:32:09] <kurushiyama> wtiger It can be.
[12:32:16] <wtiger> kurushiyama: by default
[12:32:43] <kurushiyama> wtiger Well, that would be a bit strange for a DB advertised as in-memory ;)
[12:32:53] <wtiger> yeah mate, my bad :)
[12:33:23] <dddh_> kurushiyama: imho, physical servers should have more than 4 cores
[12:33:26] <kurushiyama> wtiger What will you use MongoDB for?
[12:34:01] <wtiger> learning php-codeigniter..
[12:34:24] <kurushiyama> dddh_ Less cores, higher clock rate as for MongoDB. and RAM. Tons of RAM. As for frontends and middleware: I rather scale out than up with that.
[12:34:36] <kurushiyama> wtiger Oh.
[12:35:33] <wtiger> btw, how can I configure mongodb(setup username/password, create database) on manjaro/arch? using mongo-tools?
[12:36:45] <kurushiyama> wtiger First piece of advice: If you start out with MongoDB, use a) a supported distribution (or Docker for development) b) The documentation. Heavily. c) MongoU.
[12:37:41] <wtiger> kurushiyama: sure
[12:37:49] <kurushiyama> wtiger I had a few things to say about codeigniter as well, but since this is not bash #php, I'll refrain. If you want to hear my thoughts on that, PM me.
[12:58:15] <Gnjurac> hi can somone help me i am kinda noob in web dev, can i use php7 with MongoDB 2.4
[12:59:15] <Gnjurac> in my nginx folder config/php.d #extension=mongodb.so is commented out i think
[13:03:37] <cheeser> yes, you can.
[13:03:54] <cheeser> though why 2.4? we've EOLd that one.
[13:04:10] <Gnjurac> cuz stupid openshift has a cartridge for that only
[13:06:08] <Gnjurac> is it OP to use mongodb for leaderboard for my game
[13:06:38] <lfamorim> Hello! Someone know why this update can't remove the item from array? http://pastebin.com/AHZe2ymb
[13:07:18] <cheeser> is it OP?
[13:07:23] <lfamorim> I think this is a major bug or my stupidity
[13:07:48] <lfamorim> cheeser, sorry, what you mean for OP?
[13:09:13] <Gnjurac> cuz on #php they tell me i shoudent use it for this
[13:09:16] <cheeser> lfamorim: that was for someone else. i don't know what "OP" means in the context of Gnjurac's question.
[13:09:23] <cheeser> lfamorim: as for yours, that update works just fine.
[13:09:46] <Gnjurac> too much for something so simple
[13:09:57] <lfamorim> cheeser, the fact is, is wasn't
[13:09:59] <lfamorim> =(
[13:10:03] <cheeser> lfamorim: i just ran it.
[13:10:03] <lfamorim> I don't know why.
[13:10:35] <lfamorim> http://pastebin.com/KLAzwav2
[13:11:53] <lfamorim> cheeser, I ran it too, but in my environment it does not work. =(
[13:12:22] <edrocks> lfamorim: you know antecipate is spelled with an i? anticipate
[13:12:23] <lfamorim> As you can see, I think is a pretty simple thing or a major bug, I use mongo for years and I never saw something like this.
[13:12:37] <cheeser> what version? it just worked for me again.
[13:12:58] <lfamorim> 3.2.6
[13:13:04] <cheeser> that's what i'm running.
[13:13:25] <lfamorim> strange
[13:13:28] <lfamorim> =/
[13:13:33] <cheeser> instead of limit(1).pretty() do .count()
[13:14:25] <Gnjurac> can someone pls tell me how to make a db that stores name, score, time. cuz i want my client, when it sends a post request, to read/write that stuff
[13:14:25] <kurushiyama> Gnjurac I do not think so. Think of Gall's law.
[13:14:35] <lfamorim> cheeser, I figured it out!
[13:14:49] <kurushiyama> count is dangerous.
[13:14:52] <lfamorim> cheeser, sorry, and thank you a lot for you time!
[13:15:07] <kurushiyama> Well, not dangerous, but misleading in a lot of cases.
[13:15:12] <cheeser> kurushiyama: it's not dangerous. it's just ... complicated in certain scenarios.
[13:15:16] <cheeser> there you go. :)
[13:15:19] <cheeser> lfamorim: what was it?
[13:15:19] <edrocks> kurushiyama: are you refering to it returning possible duplicates if you don't use `sort()` with it?
[13:15:42] <kurushiyama> edrocks or the fact that orphans may be counted in sharded envs.
[13:15:43] <lfamorim> cheeser, {multi: true} rs...
[13:15:51] <cheeser> lfamorim: yeah. i thought so. :)
[13:15:52] <lfamorim> Your count trick was genius.
[13:15:56] <lfamorim> rs....
[13:16:03] <kurushiyama> edrocks Or the fact that the indices are actually having race conditions when it comes to count.
[13:16:09] <lfamorim> cheeser, what can I do for you? =D
[13:16:13] <edrocks> kurushiyama: oops I was thinking of update all being dangerous. I'm not sure if you need to sort with count
[13:16:29] <edrocks> kurushiyama: Really?
[13:16:38] <kurushiyama> edrocks Yep.
[13:16:49] <kurushiyama> edrocks Wait a sec.
[13:16:49] <edrocks> kurushiyama: That's scary
[13:17:12] <kurushiyama> edrocks https://github.com/mwmahlberg/incomplete-returns
[13:17:17] <edrocks> kurushiyama: On sharded collections that happens https://docs.mongodb.com/manual/reference/method/db.collection.count/#behavior
[13:18:02] <kurushiyama> edrocks Yeah, aka SERVER-3645: https://jira.mongodb.org/browse/SERVER-3645
[13:18:13] <jacoscaz> @kurushiyama we tried bisecting our way through the different driver versions. Reverting to mongodb@2.0.33 makes the issue disappear. Upping the version to mongodb@2.0.34 makes it reappear.
[13:18:37] <kurushiyama> jacoscaz Interesting.
[13:18:42] <edrocks> kurushiyama: Why wouldn't they just make count automatically use the aggregation workaround on sharded collections?
[13:18:49] <kurushiyama> jacoscaz Do you use SSL?
[13:18:58] <edrocks> What's the benefit of having a broken count
[13:19:03] <kurushiyama> edrocks Because it is expensive?
[13:19:14] <kurushiyama> edrocks Even pretty much so.
[13:19:49] <kurushiyama> edrocks Well, it is a delicate problem, actually.
[13:19:54] <edrocks> Are you kidding me? "After an unclean shutdown of a mongod using the Wired Tiger storage engine, count statistics reported by count() may be inaccurate."
[13:20:06] <edrocks> kurushiyama: https://docs.mongodb.com/manual/reference/method/db.collection.count/#accuracy-after-unexpected-shutdown
[13:20:32] <dino82> Testing!
[13:20:35] <dino82> Hmm ok.
[13:20:37] <kurushiyama> edrocks If you run this "incomplete-returns" you will actually see that it is pretty easy to cause enough noise to make count return wrong results.
[13:21:23] <kurushiyama> BUT: The only remedy would be a RWMutex on the indices.
[13:22:06] <kurushiyama> Not much of a problem with MMAPv1's collection level locking, I guess.
[13:22:55] <cheeser> lfamorim: you can make something awesome :)
[13:25:14] <edrocks> kurushiyama: Has this caused you any issues? I only use count for pagination totals
[13:26:03] <kurushiyama> edrocks Personally? SERVER-3645 did.
[13:27:45] <edrocks> kurushiyama: At least I don't use shards. But it could screw me with invoices. Thanks for warning me
[13:29:04] <jacoscaz> @kurushiyama I confirm - the downgrade to 2.0.33 fixed the issue on all the machines that had it.
[13:29:28] <jacoscaz> @kurushiyama we're using SSL on a host and not using it on the other - doesn't seem to be a factor.
[13:30:19] <kurushiyama> jacoscaz Well, iirc, there was a bug where each connection opened in the pool actually re-verified the complete SSL chain.
[13:31:07] <jacoscaz> But wouldn't that come up at connection time? What we're experiencing comes up at query time.
[13:36:38] <kurushiyama> jacoscaz The driver maintains a connection pool, in case you did not notice ;)
[13:37:32] <kurushiyama> edrocks For invoices, I would definitely walk the mile and use an aggregation.
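The trade-off in shell terms: count() with no predicate answers from collection metadata, which is why it can drift on sharded clusters (SERVER-3645) or after an unclean WiredTiger shutdown, while an aggregation walks the actual documents. A sketch:

```javascript
// fast but metadata-based:
db.invoices.count()
// slower but counts real documents, skipping orphans when run via mongos:
db.invoices.aggregate([
  { $group: { _id: null, n: { $sum: 1 } } }
])
```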
[13:38:31] <edrocks> kurushiyama: Is there any way to tell if you need to run validate to fix the wired tiger info after a crash? I just have a replica set
[13:38:38] <jacoscaz> @kurushiyama yeah. However, I presume that a single connection before the process exits only causes a single entry in the pool. We always wait until the connection has been established before querying.
[13:38:52] <edrocks> I assume logs would be one way
[13:39:15] <kurushiyama> edrocks Yes. Other than that – I tend to resync after a hard crash, anyways.
[13:39:32] <jacoscaz> @kurushiyama do you think this is worth opening an issue on JIRA?
[13:39:58] <Derick> jacoscaz: check whether there is one first
[13:40:15] <jacoscaz> @Derick absolutely, sure
[13:40:35] <Derick> but otherwise, yeah, I think so
[13:59:34] <dino82> I think I found my replication problem -- [reSync] 9001 socket exception [SEND_ERROR]
[13:59:44] <dino82> Wipes all progress of the replication and starts over
[14:22:09] <jayjo_> When I run a mongoimport, can I specify that I want a field to be written to disk as a ISODate() if it's currently an integer of seconds since epoch?
[14:22:17] <lfamorim> cheeser, =)
[14:22:26] <cheeser> jayjo_: no
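mongoimport does accept MongoDB Extended JSON, though, so one workaround is to rewrite the field before import. A minimal node sketch (the field name ts and the input shape are hypothetical):

```javascript
// convert an epoch-seconds integer into Extended JSON's {"$date": ...}
// so mongoimport stores it as an ISODate
function epochToExtendedJSON(line) {
  const doc = JSON.parse(line);
  if (typeof doc.ts === "number") {
    doc.ts = { "$date": new Date(doc.ts * 1000).toISOString() };
  }
  return JSON.stringify(doc);
}

// e.g. map each input line through this before piping into mongoimport
console.log(epochToExtendedJSON('{"ts": 1466035200, "v": 1}'));
```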
[17:08:18] <Forbidd3n> If I have a sub-set list of objects and I am assigning an id to each one, how would I get $addToSet to ignore the id on insert if a property value matches
[17:09:13] <Forbidd3n> for example. I have lines -> ships -> _id,name : I want it to add to ships with $addToSet only if the name doesn't exist. Right now it is adding them all because it is basing it on _id and name
[17:20:41] <Forbidd3n> basically is it possible to $addToSet based on one field and not all fields?
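The usual workaround for Forbidd3n's case: $addToSet compares the entire subdocument, so differing _ids defeat the dedupe. Guarding a $push on the name field instead does what he wants (collection and values here are hypothetical, modelled on his description):

```javascript
// matches the parent document only when NO existing ship has this name,
// so the push becomes a no-op for duplicates
db.lines.update(
  { _id: lineId, "ships.name": { $ne: "Vostok" } },
  { $push: { ships: { _id: ObjectId(), name: "Vostok" } } }
)
```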
[17:56:05] <nob> Hi, I am new to mongodb and I ran into problems with the BSON size limit. Does anyone know, if using the $pushAll (or $push with $each) operator does any querying before the update? I ask, because the data I try to push should not be anywhere near the limit, but the data contained in the target array of the document may be.
[18:12:55] <StephenLynx> you will get an error.
[18:13:02] <StephenLynx> then you can deal with said error.
[18:13:16] <StephenLynx> however, if you are expecting to have an array that large, I think you have other issues.
[18:13:17] <StephenLynx> nob
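To StephenLynx's point, the error surfaces at update time: the 16MB BSON limit is enforced on the resulting document, not on the pushed payload, so a $push into an already-huge array fails even when the new data is tiny. $slice can cap growth (field names are hypothetical):

```javascript
db.docs.update(
  { _id: id },
  { $push: { entries: { $each: newEntries, $slice: -1000 } } }  // keep newest 1000
)
```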
[19:14:18] <n1colas> Hello
[19:50:11] <Forbidd3n> is it possible to $addToSet based on one field and not all fields?
[20:01:38] <StephenLynx> what do you mean?
[20:39:44] <cn28h> probably a basic question, but can't seem to get a good answer. If I have a replica set with 3 members (2 + 1 arbiter), what might cause the "real" members to get stuck in STARTUP2 state? I see logs like "replSet initial sync pending" and "replSet initial sync need a member to be primary or secondary to do our initial sync"
[20:40:43] <cn28h> and the arbiter says "replSet I don't see a primary and I can't elect myself"
[20:41:13] <cn28h> and fwiw, the members have priorities set to 2, 3
[20:43:10] <cn28h> and as far as I can tell, there isn't any connectivity issue between the members
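For cn28h's symptom, a quick diagnostic runnable from any member; if every data-bearing member reports STARTUP2, none of them can serve as the other's initial-sync source, and without a syncable member no primary can be elected:

```javascript
rs.status().members.forEach(function (m) {
  print(m.name + "  " + m.stateStr + "  health=" + m.health);
})
```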