[13:25:42] <kurushiyama> Was on sth. Interesting, be back in 30
[13:27:26] <pokEarl> If I do System.exit(0) or whatever in a Java app using the MongoDB driver, does the MongoClient close connections safely on its own, or do I need to call MongoClient.close()?
[13:30:12] <cheeser> there's a shutdown hook that gets run and closes the connections.
[13:58:33] <R1ck> hi. A customer is asking us why an aggregate command runs slower on his server than on his VirtualBox VM, with the same indexes. Are there any specific settings I could look at?
[14:24:35] <kurushiyama> cheeser: Yeah, but it's like "von hinten durch die Brust ins Auge", as we say in German. This roughly translates to "trying to stab somebody from the back through the chest into the eye" ;)
[14:43:24] <kurushiyama> Did a diff over the contents of the 3.2.4 and 3.2.5 rpms. The only major difference seems to be the binaries.
[14:53:26] <StephenLynx> that is the service file.
[14:53:33] <StephenLynx> it is a sysvinit script being run by systemd
[14:54:31] <StephenLynx> anyway, the problem does seem to be something related to permissions.
[14:54:41] <StephenLynx> since the init script is working if you use root
[14:59:05] <kurushiyama> StephenLynx: Yeah, I get that. But a service file is a service file. And a SysV init script for a systemd distribution is... not optimal.
[14:59:32] <StephenLynx> I agree there is room for improvement, but systemd is designed to run sysvinit scripts without issues.
[14:59:53] <kurushiyama> StephenLynx: Well, I'll test sth
[15:00:32] <kurushiyama> I still try to understand the "why", though.
[15:04:32] <kurushiyama> The freezing is... astonishing. And the fact that the init file does not seem to be run properly is rather disturbing. The worst part being that I cannot seem to find a reason.
[15:56:48] <StephenLynx> distro-sync put it on that version
[15:57:29] <kurushiyama> Hm, just an idea: Can you export the "broken" VM and run it on a different machine? Just to rule out it's sth with the virtualization. Not very likely, but I am running out of ideas.
[15:57:46] <StephenLynx> it happened on 2 different host machines
[15:57:58] <StephenLynx> both running centOS 7.2 though
[15:58:27] <StephenLynx> one on a laptop with an Intel CPU, the other on a desktop with an AMD CPU
[16:09:35] <kurushiyama> Well, starting it as root might have changed some permissions. Please double-check. Other than that, it is getting weirder and weirder...
[16:10:34] <StephenLynx> it still doesn't explain why it starts failing before being started as root
[16:19:28] <StephenLynx> child process started successfully, parent exiting
[16:19:37] <StephenLynx> sudo service mongod start
[16:19:47] <StephenLynx> Job for mongod.service failed because the control process exited with error code. See "systemctl status mongod.service" and "journalctl -xe" for details.
[17:06:48] <pamp> Where can I check our MongoDB Enterprise license status?
[17:06:49] <kurushiyama> Well, try it. Set SELINUX in /etc/sysconfig/selinux to "permissive" (no quotes), reboot, and try the service _as root_. That way, we should have ruled out any related problem, at least.
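(A minimal sketch of the test suggested above, assuming a stock CentOS 7 box; the file path is the one named in the message:)
    # edit /etc/sysconfig/selinux and set:  SELINUX=permissive
    reboot
    # after the reboot, as root:
    service mongod start
    systemctl status mongod.service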
[17:07:19] <kurushiyama> pamp: You have been told before ;) It should be possible to do so online.
[17:08:15] <StephenLynx> I disabled it before with no changes, though
[17:08:27] <StephenLynx> but since we're running out of ideas, I'll try it anyway
[17:20:11] <kurushiyama> Nah, I doubt that. With SELinux enforcing and "semanage port -a -t mongod_port_t -p tcp 27017", it should run again. My _assumption_ is that we had security context problems, maybe because you started it as root. Or screwed permissions. Or sth.
[17:20:27] <kurushiyama> "semanage port -a -t mongod_port_t -p tcp 27017" did _not_ work?
[17:30:51] <StephenLynx> unless you are running a VPS service, you might as well use a VM
[17:32:37] <kurushiyama> StephenLynx: Not really. Docker images are lighter in weight and have less overhead. But let's stick to this. I can confirm that SELinux "enforcing" + "semanage port -a -t mongod_port_t -p tcp 27017" does _not_ work.
[17:34:00] <kurushiyama> StephenLynx: So, as far as I can see, it seems to me that there is a problem with mongodb-org-server's selinux settings.
[17:37:59] <StephenLynx> then why would older versions not work anymore?
[17:39:20] <kurushiyama> StephenLynx: I can rule out mongod's SELinux context, since I can start it manually as user mongod
[17:39:37] <kurushiyama> StephenLynx: Not necessarily
[17:40:52] <kurushiyama> StephenLynx: If it is a side effect of the interaction between mongod, systemd and SELinux, it can be anything. I'm just starting to track it down, now that I can reproduce it again.
[17:59:00] <aguilbau> hi, is there a way to match the last element of an array? like 'field.$.value', but for the last element.
[18:03:55] <aguilbau> let's say I do this: 'db.tests.insert( { a: [ { b: 1 }, { b: 2 }, { b: 3 }, { b: 4 } ] } )'
[18:05:12] <aguilbau> now can I find all documents where the last element of 'a' has b = 4?
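(One way to do this in the mongo shell, using the collection from the insert above: find() has no "last element" positional operator, but an aggregation can project the last value out. "lastB" is just a field name chosen for the example; $arrayElemAt requires MongoDB 3.2+:)
    db.tests.aggregate([
      { $project: { doc: "$$ROOT", lastB: { $arrayElemAt: [ "$a.b", -1 ] } } },
      { $match: { lastB: 4 } }
    ])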
[18:20:25] <kurushiyama> aguilbau: kurushiyama's Rule of thumb: If you depend on the order of an array, there are good chances there is something wrong with your data model.
[19:19:49] <jr3> Mongo write-locks updates by default, correct? If I update a document to add to a set only if its length is < 20, there shouldn't be any kind of "race" where I actually end up with 21 elements in the set, right?
[20:02:57] <StephenLynx> that depends on how you are doing it.
[20:03:40] <StephenLynx> then yes, you won't end up with 21 elements
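(A common way to make this safe, sketched in the mongo shell; collection and field names are made up for illustration. Putting the size check into the query filter makes the check and the $addToSet a single atomic operation on the document:)
    db.things.update(
      { _id: docId, "members.19": { $exists: false } },   // matches only while the array has fewer than 20 elements
      { $addToSet: { members: newMember } }
    )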
[20:11:12] <nalum> hello, I'm trying to set up tls connections on my replicaset. I have the certs I need for the mongod, but I'm confused as to what certs I need for the cloud manager agents (automation, monitor and backup)?
[20:22:03] <kurushiyama> Still have no clue why CM should ask for _keyfiles_.
[20:22:23] <nalum> It's not asking for them to be uploaded, just for paths
[20:22:44] <cheeser> right. because mongo needs to be configured to find them.
[20:23:12] <nalum> Mongo does, but do the agents also need them?
[20:23:22] <kurushiyama> cheeser: You probably understand better what's going on. I am totally puzzled.
[20:23:54] <cheeser> I haven't really been paying attention, tbh. wrangling tests.
[20:24:05] <kurushiyama> nalum: Usually, the agents connect to CM, which has a cert from a CA known to the agents.
[20:24:50] <nalum> Okay, so these paths are really only if you are using self signed certs?
[20:25:09] <kurushiyama> cheeser: Quick one: unless sth has changed _considerably_, agents dial CM to push data (monitoring, backup) and pull orders (automation), right?
[20:27:01] <kurushiyama> nalum: Well, sorry, not exactly. You need to give the paths to the cert and keyfile for the server so that the server can utilize SSL.
[20:39:50] <nalum> Okay, so I've set up with allowSSL. When I use the mongo command line tool to connect with 'mongo --host host.com --ssl' it fails with: Failed global initialization: BadValue need to either provide sslCAFile or specify sslAllowInvalidCertificates
[20:40:22] <nalum> So do I need to point it at the specific sslCAFile or am I using the wrong flag?
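(Assuming the server's certificate was issued by a public CA, pointing the shell at the system CA bundle is usually enough; the path below is just an example for a CentOS/RHEL host:)
    mongo --host host.com --ssl --sslCAFile /etc/ssl/certs/ca-bundle.crt
    # or, to skip certificate validation entirely (not recommended outside testing):
    mongo --host host.com --ssl --sslAllowInvalidCertificates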
[20:42:38] <nalum> Okay, is there a way to have it just look for the CAFile as it is a public CA and not self signed? Maybe I should not have the cloud manager set up with the sslCAFile?
[20:47:11] <kurushiyama> nalum: Wait a sec. Didn't you say it IS a self signed certificate?
[20:47:32] <kurushiyama> nalum: The one you are using for mongod...
[20:48:36] <Derick> i should try using that with mongodb
[20:49:23] <nalum> But that means it doesn't validate the cert, right?
[20:49:46] <kurushiyama> nalum: No, it is not. And it does not make sense to do any validation, either.
[20:51:06] <nalum> Why does it not make sense? Is that not part of what TLS certs are for, verifying that I am connecting to the server I think I am?
[20:51:25] <kurushiyama> nalum: As you can only verify the _host_ and not the issuer.
[20:52:25] <nalum> So sslAllowInvalidCertificates is for verifying the issuer and the clients should continue to verify the host even with that set?
[20:53:15] <kurushiyama> nalum: No. it validates the certificate the server identifies itself with against well known certification authorities.
[20:53:56] <kurushiyama> nalum: The thing is this: A CA is not only there for encryption, but to validate the certificate holder. Take your bank, for example.
[20:54:51] <kurushiyama> nalum: the CA of the bank's certificate guarantees you (to a varying extent) that they made sure that the certificate holder is who he claims to be.
[20:56:17] <kurushiyama> nalum: Since Let's Encrypt only verifies a host, you _really_ do not want them to be trusted by default.
[20:57:05] <kurushiyama> nalum: Of course, this is simplified a bit.
[20:57:45] <nalum> kurushiyama: Thanks for the info, much appreciated. So would you recommend not using them in production systems?
[20:58:29] <kurushiyama> nalum: For business? I would not consider it for a second, especially given the prices for wildcard certificates nowadays.
[20:59:29] <kurushiyama> nalum: As for secured communication between known hosts, it might be ok. But then, there is no reason to validate it, really.
[21:00:31] <kurushiyama> nalum: As for communication with the outside world? I'd run from a business which is that sloppy with its digital identity...
[21:07:30] <nalum> kurushiyama: Okay, thanks again :)
[21:12:06] <nalum> Okay, so last question for today. With cloud manager is it possible to have it use custom domains rather than mongodbdns.com or should I CNAME or ALIAS the domain name that it generates?
[21:15:24] <nalum> so repl-test-[0-9]+.test-group.[0-9]+.mongodbdns.com
[21:19:38] <nalum> I've not seen anything in the cloud manager, but maybe I'm looking in the wrong places. Is it possible to change that to use my own domain name or should I create a CNAME or an ALIAS DNS entry for the mongodbdns.com that is created?
[21:24:13] <nalum> cheeser: as a CNAME or through cloud manager?
[21:24:50] <nalum> Or are you not using the automation system?
[21:28:05] <cheeser> i have a CNAME set up on my hoster's DNS
[21:29:07] <nalum> so mydb.mydomain.com cnamed to ???.mongodbdns.com
[21:30:34] <nalum> then is the replicaset configured to use the CNAME or the mongodbdns.com name?
[21:31:31] <zsoc> It appears that while doing a Model.create(arr), the process stops after it hits the first error. Maybe not, but even while trying to use the .then() promise it appears that way. I'd like to be able to catch all the error 11000's because they shouldn't be fatal.
[21:31:43] <zsoc> Sorry, i guess .create() is a mongoose thing, for context.
[21:37:37] <cslcm> Hi folks. I'm trying to get my head round MongoDB sharding. I think I've successfully created a three-host cluster following the documentation, which is good, however I didn't see any way to configure how much redundancy there is. For example, does each host contain a full copy of the data, or is it split into three?
[21:40:37] <nalum> cslcm: Assuming that when you say 3, you mean 3 nodes? I think it's split across the 3 nodes, but for redundancy you would have those 3 nodes actually be replica sets. So 3 shards of 3 node replica sets.
[21:42:17] <cslcm> ah i see, at the moment I have a single replica set with the three servers as members
[21:42:21] <cslcm> I need additional replica sets?
[21:42:58] <nalum> If the 3 servers are part of the same replica set then they are not sharded, if I understand it correctly
[21:43:51] <cslcm> so I have a single replica set on each server
[21:44:02] <cslcm> then connect to them from mongos
[21:44:17] <cslcm> how is the data distributed across the servers in that case?
[21:44:34] <cslcm> What i'm aiming for is for each one of the servers to contain 66% of the data
[21:44:47] <cslcm> so we can lose 1 server without impacting the service
[21:45:24] <nalum> I don't know if that can be done, but I've not looked too deeply into sharding yet
[21:48:28] <nalum> My understanding would be: you have 3 shards, each shard is a replica set with 3 nodes, and each node is on its own server. This then allows for single nodes to fail but keeps the system available. Obviously the actual number of nodes is up to you, and for production systems 5 nodes in a replica set is recommended.
[21:51:17] <nalum> cslcm: In your suggested setup of 3 servers, you could have 3 nodes per replica set, one on each machine, so the whole replica set is not on one machine, meaning each machine has 100% of the data.
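(A sketch of that layout in the mongo shell: one replica set spanning the three servers. The set name and host names are placeholders:)
    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "serverA.example.com:27017" },
        { _id: 1, host: "serverB.example.com:27017" },
        { _id: 2, host: "serverC.example.com:27017" }
      ]
    })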
[21:53:54] <cslcm> nalum: The problem is (and this is one of the problems which is supposed to be tackled by sharding, according to the mongo docs) - my data cannot fit on a single machine. There's too much data. Hence why I only want 66% on each machine
[21:53:59] <cslcm> I thought that was kinda the point of sharding
[21:54:05] <cslcm> to provide scalability and redundancy
[22:03:30] <kurushiyama> more than 1 instance/node
[22:03:38] <cslcm> i'm more than happy to run a single instance on each node, I'm not suggesting otherwise
[22:04:00] <kurushiyama> cslcm: > i think I understand, so if I have host A,B,C.. I can create three replication sets, one with A+B, one with B+C and one with A+C
[22:07:43] <kurushiyama> cslcm: So, your minimal setup would be 3 config servers, 4 data bearing nodes and 2 arbiters. The latter 6 would each form a replset of 2 data nodes + an arbiter.
[22:08:34] <kurushiyama> Now for the good news: the 3 config servers and the arbiters can be relatively cheapo VPS (though they ofc should be on different machines)
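(For reference, wiring up such a layout from a mongos looks roughly like this; shard/replica set names, hosts, database and shard key are all placeholders:)
    sh.addShard("shard0/data0.example.com:27017,data1.example.com:27017")
    sh.addShard("shard1/data2.example.com:27017,data3.example.com:27017")
    sh.enableSharding("mydb")
    sh.shardCollection("mydb.events", { userId: 1 })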
[22:09:53] <cslcm> It sounds like MongoDB can't be used for my application. I only have three physical servers to run this on
[22:09:59] <kurushiyama> cslcm: I cannot suggest running a sharded cluster on standalones or (while technically possible) with only 1 config server.
[22:10:32] <kurushiyama> cslcm: Oh, it could. But your requirements are a bit like "I want to have the cake _and_ eat it."
[22:11:11] <kurushiyama> cslcm: You want redundancy + sharding + only 3 servers. Choose any 2
[22:12:49] <cslcm> I can't, so mongodb won't work for me
[22:13:42] <cslcm> What I'm asking for is very achievable in principle... split the database into three chunks, 1 2 3, one server has 12, one server has 23, one server has 13
[22:13:51] <kurushiyama> cslcm: Scale up and put more space into them. Plus: Which DBMS do you expect to behave better? We are not talking of implementation, but hard facts and logic.
[22:21:23] <cslcm> 95% of my data will never be accessed
[22:21:24] <kurushiyama> each instance will a) try to use 50% of the available physical RAM as WiredTiger cache
[22:21:49] <kurushiyama> b) try to create a utilization of the available physical RAM of 85-90%
[22:22:23] <kurushiyama> If you are going to run that stunt, it most likely will be best to run the individual instances as docker images for resource limitation.
[22:23:02] <kurushiyama> Do not even bother to think about HDDs, and give each instance a dedicated LVM partition.
[22:23:56] <kurushiyama> And you _really_, _really_ should have a deployment diagram handy in case you need help.
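(If several mongod instances do share a machine, the 50% default mentioned above can be capped per instance in mongod.conf; the 1 GB figure is just an example value:)
    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 1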
[22:24:20] <cslcm> Fortunately I'm not deploying this to production right away :)
[23:29:57] <cslcm> Question: The docs make very clear that the config servers must be backed up. Do I need to back up ALL of them, or will just one be enough?
[23:31:10] <cslcm> I'm guessing each config server is identical?