PMXBOT Log file Viewer


#mongodb logs for Monday the 18th of April, 2016

[08:38:01] <Ange7> Hey all
[08:38:30] <Ange7> one question, I have lines like these in my mongo log:
[08:38:34] <Ange7> 2016-04-18T10:30:40.186+0200 D STORAGE [conn16237] WT begin_transaction
[08:38:36] <Ange7> 2016-04-18T10:30:40.187+0200 D STORAGE [conn16294] WT rollback_transaction
[08:38:48] <Ange7> what is it ?
[08:39:37] <Rumbles> wired tiger beginning and rolling back a transaction? at a guess
[08:40:36] <Bodenhaltung> lol
[08:42:07] <Ange7> Ok......
[08:43:49] <Bodenhaltung> Ange7: Which version?
[08:46:15] <Ange7> mongo 3.2.0
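For context on the lines Ange7 pasted (not stated in the channel): the 3.x log format is `<timestamp> <severity> <component> [context] <message>`, so `D STORAGE` marks debug-level messages from the storage layer. If that verbosity was raised somewhere, it can be checked and lowered from the mongo shell; a minimal sketch:

```javascript
// Show per-component log verbosity, then drop the storage component back to 0
// (informational); debug-level WT begin/rollback messages should stop.
db.getLogComponents()
db.setLogLevel(0, "storage")
```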
[09:31:58] <Ange7> for each op on mongodb there is a lock ?
[09:52:05] <aguilbau> hi, is there a way to unset all fields but the specified ones ?
[09:58:27] <Derick> aguilbau: not that I know of
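A possible workaround, sketched with hypothetical collection and field names (not something suggested in the channel): there is no update operator for "unset everything else", but a `$project` whitelist plus `$out` can rewrite documents with only the kept fields:

```javascript
// Keep only name and email (_id is kept by default); $out replaces the
// hypothetical target collection "users_trimmed" with the projected docs.
db.users.aggregate([
  { $project: { name: 1, email: 1 } },
  { $out: "users_trimmed" }
])
```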
[12:42:32] <kurushiyama> StephenLynx: I can verify that there are problems with the install.
[12:42:42] <StephenLynx> yup
[12:42:50] <StephenLynx> I am going to try different versions now
[12:43:10] <StephenLynx> I just upgraded to the latest one on a VM on another computer and it broke
[12:43:17] <kurushiyama> We should probably join efforts?
[12:43:44] <StephenLynx> ill report you anything I find
[12:44:00] <StephenLynx> you know where I can find all available versions for a package?
[12:44:03] <kurushiyama> I am just rebooting for the disabled SELinux
[12:44:12] <StephenLynx> is not SELinux
[12:44:24] <StephenLynx> this VM updated ONLY mongo.
[12:44:25] <kurushiyama> StephenLynx: Basically, in the repo dir.
[12:44:28] <StephenLynx> and mongo can run.
[12:44:38] <StephenLynx> where is the repo dir?
[12:44:40] <kurushiyama> Strange. I have mongod only, too.
[12:44:52] <StephenLynx> my repo is just a .repo file
[12:45:18] <StephenLynx> you managed to run it using sudo mongod -f /etc/mongod.conf too?
[12:45:27] <kurushiyama> StephenLynx: https://repo.mongodb.org/yum/redhat/
[12:45:47] <kurushiyama> StephenLynx: I haven't tried. but since root is root...
[12:46:19] <StephenLynx> hm, I never log as root.
[12:46:29] <kurushiyama> > sudo mongod -f /etc/mongod.conf
[12:46:35] <StephenLynx> I know
[12:46:39] <kurushiyama> euid root
[12:46:44] <StephenLynx> but I don't su root, you know
[12:46:49] <StephenLynx> I always use sudo
[12:47:04] <StephenLynx> the important part is using -f to inform the conf file.
[12:47:12] <StephenLynx> the funny thing is
[12:47:20] <StephenLynx> on my server I got the very same version for mongo
[12:47:27] <StephenLynx> installed in the same way
[12:47:31] <StephenLynx> and I got no issues there.
[12:47:41] <StephenLynx> centOS 7.2
[12:47:46] <StephenLynx> image provided by linode.
[12:48:34] <kurushiyama> Now that's double strange. Maybe we need to get down to verifying the image checksums with the main repo.
[12:49:04] <StephenLynx> check the centOS image?
[12:49:08] <kurushiyama> Aye
[12:49:17] <StephenLynx> I would expect the one on a host to not be the same you download.
[12:49:25] <StephenLynx> I know they are not the same.
[12:49:41] <kurushiyama> StephenLynx: Nah. The ISOs
[12:49:52] <StephenLynx> I am positive my ISO didn't corrupt.
[12:50:18] <kurushiyama> I rather have a greater fear. Unlikely, but lemme check.
[12:50:36] <StephenLynx> that someone tampered the ISO?
[12:50:46] <StephenLynx> is not that, this same ISO was working before.
[12:52:31] <kurushiyama> StephenLynx: Still, just to rule it out and not for my peace of mind.
[12:52:39] <StephenLynx> go for it
[12:52:47] <StephenLynx> im installing 3.2.4
[12:53:08] <StephenLynx> if it works, ill get to mongodb dudes and tell them they dun goofed with 3.2.5
[12:53:33] <kurushiyama> Well, I raise: CentOS freezes during boot.
[12:54:07] <StephenLynx> wooooooooooooooooooot
[12:54:09] <StephenLynx> :::::vvvvvvvvv
[12:54:55] <StephenLynx> 3.2.4 with the same issue
[12:55:16] <kurushiyama> Have you disabled SELinux just to make sure?
[12:55:41] <StephenLynx> if it were SELinux, it wouldn't start with -f
[12:55:48] <kurushiyama> StephenLynx: Nope
[12:56:06] <StephenLynx> ill try with 3.2.3
[12:56:09] <kurushiyama> StephenLynx: As root, you might have different permissions.
[12:56:15] <StephenLynx> if it doesn't work, ill disable it
[12:56:36] <kurushiyama> StephenLynx: Froze again.
[12:56:55] <StephenLynx> kek
[12:58:08] <kurushiyama> The strange thing is that obviously this passed IT.
[12:58:20] <StephenLynx> hm?
[12:58:49] <kurushiyama> StephenLynx: I guess that there are integration tests before release. I know I had some.
[13:00:03] <StephenLynx> I wonder if its something with the date.
[13:00:22] <StephenLynx> 3.2.3 not working
[13:00:39] <kurushiyama> StephenLynx: Can well be. I have noticed very strange behavior, though only in non-standalones.
[13:01:27] <kurushiyama> StephenLynx: I guess the problem is with centos7.2 then, because I have 3.2.3 running on CentOS7
[13:02:04] <StephenLynx> this VM updated nothing but mongo and then it broke
[13:04:55] <kurushiyama> My resources are limited, so it might be a problem there. Will retry, but I think we have sth major here.
[13:06:10] <StephenLynx> my god, there is some gigantic cocksucker OP on #centos
[13:06:26] <StephenLynx> I just asked if anyone running mongo had issues like mine, even said I don't think its centos fault
[13:06:31] <StephenLynx> "HURF DURF OFF TOPIC"
[13:06:38] <kurushiyama> StephenLynx: Do not bother there.
[13:07:10] <kurushiyama> StephenLynx: They are OT... well, you know the guys from my home country with the brown shirts and such.
[13:07:23] <StephenLynx> i don't even know your home country
[13:07:25] <StephenLynx> :v
[13:07:28] <kurushiyama> Germany ;)
[13:07:37] <StephenLynx> what about brown shirts?
[13:07:47] <kurushiyama> Like 70 years ago.
[13:08:03] <StephenLynx> disabled SELinux, no cigar
[13:08:19] <kurushiyama> wt...
[13:08:26] <StephenLynx> told you it wasn't it
[13:08:32] <kurushiyama> Ok, let me think for a sec.
[13:08:39] <kurushiyama> a) It is not SELinux
[13:08:48] <kurushiyama> b) Root can run it
[13:09:31] <kurushiyama> c) Conclusion: It is sort of a permission or resource limitation problem only applying to user mongodb.
[13:09:44] <StephenLynx> mongod.conf can be read by anyone though
[13:10:06] <kurushiyama> StephenLynx: I am not convinced it is directly related to mongod.conf.
[13:10:29] <kurushiyama> Oh
[13:10:41] <kurushiyama> Have you checked whether there IS a mongodb user?
[13:11:40] <StephenLynx> pretty sure there is, I don't think an update would remove it. but ill check it
[13:12:49] <StephenLynx> yup
[13:12:53] <StephenLynx> its there
[13:13:37] <StephenLynx> gave mongod.conf 777 and nothing
[13:15:22] <StephenLynx> joannac, you there? who packages mongo for RHEL there?
[13:16:16] <kurushiyama> Hm, probably something with dropping privileges?
[13:16:51] <StephenLynx> i dunno man
[13:16:54] <kurushiyama> I know these are just guesses but maybe we can narrow it down.
[13:16:57] <StephenLynx> it worked until a very recent update.
[13:17:01] <StephenLynx> from mongo
[13:17:11] <StephenLynx> nothing from centos was upgraded
[13:17:15] <kurushiyama> Ok, I'll diff the service files.
[13:19:05] <kurushiyama> From 3.0 to 3.2?
[13:19:31] <StephenLynx> i tried 3.0 too
[13:19:34] <StephenLynx> nothing
[13:20:22] <StephenLynx> jesus christ, why is it so hard to report stuff on mongo
[13:20:42] <kurushiyama> Strange
[13:21:30] <StephenLynx> >hurr create a jira account
[13:21:38] <StephenLynx> >durr your password can't be the old one
[13:21:48] <StephenLynx> >deprderp find the correct sub-thing
[13:24:00] <StephenLynx> Derick, cheeser GothAlice anyone have any idea on what might be happening here?
[13:25:05] <StephenLynx> HAHAHAHAHAHAHA
[13:25:08] <StephenLynx> look at that
[13:25:11] <StephenLynx> kurushiyama,
[13:25:17] <kurushiyama> Yes?
[13:25:22] <StephenLynx> I changed the user and group to root on the init script
[13:25:23] <StephenLynx> it
[13:25:23] <StephenLynx> just
[13:25:24] <StephenLynx> fukken
[13:25:26] <StephenLynx> werked
[13:25:42] <kurushiyama> Was on sth. Interesting, be back in 30
[13:27:26] <pokEarl> If I do System.Exit(0) or whatever in a Java app using the mongoDB Driver, does the mongoclient close connections safely on its own or do I need to do MongoClient.close() ?
[13:28:15] <StephenLynx> pretty sure it closes.
[13:30:12] <cheeser> there's a shutdown hook that gets run and closes the connections.
[13:58:33] <R1ck> hi. A customer is asking us why an aggregate command is running slower on his server than when running on his virtualbox vm, with the same indexes. Are there any specific settings I could look at?
[13:58:54] <StephenLynx> did you do an explain?
[13:59:09] <StephenLynx> in an aggregation it might not be actually using an index.
[13:59:34] <R1ck> no :) I'll have to ask the customer for the details then
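The explain StephenLynx refers to can be requested on the pipeline itself; a sketch with a made-up pipeline:

```javascript
// With { explain: true } the server returns the plan for the pipeline's
// leading stages instead of results; compare this output on both machines
// to see whether the initial $match actually uses an index.
db.orders.aggregate(
  [ { $match: { status: "A" } }, { $sort: { created: -1 } } ],
  { explain: true }
)
```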
[14:00:28] <zeld> hi, is there a simple image that shows the difference between a sql database and nosql databases?
[14:00:36] <StephenLynx> yeah
[14:00:47] <StephenLynx> sql db = uses sql, nosql = doesn't use sql
[14:01:01] <zeld> StephenLynx: :D too simple hahaha but nice :)
[14:01:15] <StephenLynx> the term "nosql" is pretty bad, also.
[14:01:16] <zeld> i'm searching for something which is more graphical, intuitive....
[14:01:24] <StephenLynx> and doesn't tell you much.
[14:01:29] <StephenLynx> for example
[14:01:42] <StephenLynx> you have a document database, a key value database and a graph database
[14:01:50] <StephenLynx> all of these serve completely different purposes
[14:01:53] <StephenLynx> yet all are nosql.
[14:02:14] <StephenLynx> separating dbs into "sql" and "nosql" is awful.
[14:02:21] <StephenLynx> instead you want to separate into how they behave.
[14:02:28] <StephenLynx> relational, document, graph, key value
[14:02:58] <StephenLynx> for example
[14:02:59] <zeld> so... maybe an image which shows a table and a graph with nodes and key-values can be a good example?
[14:03:14] <StephenLynx> never seen one
[14:03:20] <zeld> table = sql; graph = key-value (nosql)
[14:03:28] <StephenLynx> eh
[14:03:41] <StephenLynx> no, graph db is one thing, key value is another, document is another
[14:03:55] <zeld> mmm this is true.....
[14:04:14] <StephenLynx> and you could have a db with tables and be relational without sql.
[14:04:30] <StephenLynx> and for all purposes to behave like sql dbs.
[14:04:39] <zeld> ehhehe
[14:04:41] <StephenLynx> I don't know if they actually exist.
[14:04:44] <StephenLynx> but is not impossible.
[14:04:55] <StephenLynx> there is nothing inherent to relatinal dbs that require sql
[14:04:59] <StephenLynx> relational*
[14:05:57] <StephenLynx> is this for you or for someone dumb? like a client?
[14:06:01] <StephenLynx> or undergrads?
[14:06:27] <zeld> is for me
[14:06:46] <StephenLynx> I suggest you just use text.
[14:07:01] <StephenLynx> these 4 are the main types of DB.
[14:07:03] <zeld> i'm developing an application that uses big data... and i would like to use a nosql database to manage it
[14:07:11] <StephenLynx> hm
[14:07:18] <zeld> however my professor does not understand anything about databases :D
[14:07:23] <StephenLynx> do you have to relate this data?
[14:07:26] <zeld> he knows only the name "database"
[14:07:29] <StephenLynx> a lot?
[14:07:56] <StephenLynx> have you written down your model?
[14:08:03] <zeld> 11.000.000 records
[14:08:07] <StephenLynx> not that.
[14:08:18] <zeld> not yet
[14:08:27] <StephenLynx> with that is easier to suggest a database.
[14:08:43] <StephenLynx> if your main concern is the amount of data, I suggest mongo.
[14:08:46] <zeld> write down a simple model?
[14:08:49] <StephenLynx> yeah
[14:08:51] <StephenLynx> that helps a lot
[14:08:54] <StephenLynx> brb lunch
[14:08:58] <zeld> k
[14:09:28] <zeld> yes... i went to a presentation about mongodb once and i was really impressed by its scalability, flexibility and so on.
[14:10:23] <zeld> anyway i will try to explain by using a simple model :-)
[14:10:31] <zeld> thanks StephenLynx for your advices.
[14:18:57] <kurushiyama> StephenLynx: Back
[14:22:11] <kurushiyama> StephenLynx: Fun fact: No service file present in the 3.2.4 rpm
[14:22:58] <cheeser> seems like a bug
[14:23:27] <kurushiyama> cheeser: There _is_ a sysV init file.
[14:23:36] <cheeser> well, there you go. :)
[14:24:35] <kurushiyama> cheeser: Yeah, but it's like "von hinten durch die brust ins auge" as we say in German. This roughly translates to "trying to stab somebody from the back through the chest into the eye" ;)
[14:24:52] <cheeser> i love the germans.
[14:43:24] <kurushiyama> Did a diff over the contents of the 3.2.4 and 3.2.5 rpms. The only major difference seems to be binary.
[14:53:26] <StephenLynx> that is the service file.
[14:53:33] <StephenLynx> it is a sysvinit script being run by systemd
[14:54:31] <StephenLynx> anyway, the problem does seem to be something related to permissions.
[14:54:41] <StephenLynx> since the init script is working if you use root
[14:59:05] <kurushiyama> StephenLynx: Yeah, I get that. But a service file is a service file. And a SysV init script for a systemd distribution is... not optimal.
[14:59:32] <StephenLynx> I agree there is room for improvement, but systemd is designed to run sysvinit files without issues.
[14:59:53] <kurushiyama> StephenLynx: Well, I'll test sth
[15:00:32] <kurushiyama> I still try to understand the "why", though.
[15:00:50] <StephenLynx> why what?
[15:00:52] <kurushiyama> Especially with my test case
[15:01:03] <kurushiyama> Or test result, rather.
[15:03:03] <StephenLynx> why what?
[15:04:32] <kurushiyama> The freezing is... astonishing. And the fact that the init file does not seem to be run properly is rather disturbing. The worst part is that I cannot seem to find a reason.
[15:04:36] <StephenLynx> ah
[15:05:02] <StephenLynx> the freezing happened when?
[15:05:14] <StephenLynx> after you enabled mongodb to init on boot?
[15:05:18] <kurushiyama> During mongodb startup while booting.
[15:05:25] <StephenLynx> :v
[15:05:30] <StephenLynx> that is kind of expected, though.
[15:05:45] <StephenLynx> I had that once trying to develop a init script for my project
[15:05:55] <StephenLynx> you should have just tried to init it.
[15:05:55] <kurushiyama> In the context of the problems? Yes. In general? Not so much.
[15:06:14] <StephenLynx> without enabling it on boot or rebooting
[15:06:18] <kurushiyama> Well, it tells us sth. I hope the boot logs reveal sth
[15:06:34] <kurushiyama> StephenLynx: Well, POSTIN adds it via chkconfig
[15:42:07] <kurushiyama> StephenLynx: Hm, interestingly enough, my SELinux caused the freezing...
[15:43:15] <StephenLynx> :v
[15:51:04] <kurushiyama> StephenLynx: I have good news and bad news.
[15:51:50] <kurushiyama> StephenLynx: The good news is: I have installed and run 3.2.4 and 3.2.5 successfully on CentOS 7.2
[15:52:51] <kurushiyama> StephenLynx: The bad news is that this does not help you too much.
[15:52:59] <StephenLynx> how did you get it to run?
[15:53:31] <kurushiyama> StephenLynx: Basic install, permissive SELinux, SELinux mode=targeted.
[15:53:54] <StephenLynx> I didn't have issues with selinux, remember?
[15:53:58] <StephenLynx> I turned it off
[15:54:23] <kurushiyama> Well, hence I said that this does not help you much.
[15:55:23] <kurushiyama> What drives me nuts about that is that it is reproducible for you on various versions of MongoDB
[15:55:57] <StephenLynx> weirder is that it works fine on my server
[15:55:59] <StephenLynx> 3.2.5
[15:56:07] <kurushiyama> We checked the permissions of almost everything.
[15:56:22] <kurushiyama> StephenLynx: You use 7.2 on server, too? or on older version?
[15:56:29] <StephenLynx> 7.2
[15:56:48] <StephenLynx> distro-sync put it on that version
[15:57:29] <kurushiyama> Hm, just an idea: Can you export the "broken" VM and run it on a different machine? Just to rule out it's sth with the virtualization. Not very likely, but I am running out of ideas.
[15:57:46] <StephenLynx> it happened on 2 different host machines
[15:57:58] <StephenLynx> both running centOS 7.2 though
[15:58:27] <StephenLynx> one on a laptop with an intel CPU, the other on a desktop with an AMD CPY
[15:58:29] <StephenLynx> CPU*
[16:02:59] <pamp> Hey... How can I check my mongodb enterprise license status
[16:04:12] <cheeser> i think those details are in cloud app
[16:05:04] <kurushiyama> StephenLynx: can you try to start mongod with "sudo -u mongod /usr/bin/mongod --dbpath …" ?
[16:05:23] <kurushiyama> StephenLynx: without pointing to the config file, that is?
[16:06:18] <StephenLynx> let me try
[16:08:10] <StephenLynx> if I specify -u mongod, it won't start even with -f conffilehere
[16:08:35] <kurushiyama> wt...
[16:09:35] <kurushiyama> Well, startung it at root might have changed some permissions. Please doublecheck. Other than that, it is getting weirder and weirder...
[16:09:57] <kurushiyama> s/startung it at root/starting it as root/
[16:10:34] <StephenLynx> it still doesn't explain why it starts failing before being started as root
[16:12:14] <StephenLynx> let me get a clean VM
[16:12:21] <kurushiyama> StephenLynx: Agreed. Hence I wanted to start it as mongod, but in foreground without a log file to find out whats going on.
[16:18:13] <StephenLynx> welp
[16:18:15] <StephenLynx> it werked
[16:19:04] <StephenLynx> then I tried to start it using the service, didn't work.
[16:19:21] <StephenLynx> sudo -u mongod mongod -f /etc/mongod.conf
[16:19:28] <StephenLynx> child process started successfully, parent exiting
[16:19:37] <StephenLynx> sudo service mongod start
[16:19:47] <StephenLynx> Job for mongod.service failed because the control process exited with error code. See "systemctl status mongod.service" and "journalctl -xe" for details.
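A few commands that usually narrow down this kind of "works by hand, fails as a unit" failure, assuming a systemd host with auditd running as in this thread:

```shell
systemctl status mongod.service -l       # full, unellipsized failure text
journalctl -u mongod.service --no-pager  # log scoped to the unit only
ausearch -m avc -ts recent               # recent SELinux denials, if any
```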
[16:20:52] <StephenLynx> http://pastebin.com/RLjArKdd
[16:25:01] <StephenLynx> https://bbs.archlinux.org/viewtopic.php?id=202302 if there is any similarity to this
[16:25:19] <StephenLynx> the problem is something the mongo user doesn't have permission to handle
[16:28:15] <StephenLynx> now
[16:28:32] <StephenLynx> why would it be able to start if you run mongod directly?
[16:28:34] <StephenLynx> logging?
[16:39:17] <kurushiyama> Presumably
[16:39:31] <StephenLynx> hm
[16:39:36] <StephenLynx> let see
[16:39:37] <kurushiyama> Nope
[16:39:45] <StephenLynx> the settings work with the mongo user.
[16:39:54] <StephenLynx> the init script work with the root user.
[16:40:08] <kurushiyama> Because if that was the issue, starting it as mongod with the config would have a problem...
[16:40:19] <StephenLynx> yeah
[16:40:41] <StephenLynx> so what's in the init script besides the settings file that is making it fail?
[16:40:44] <kurushiyama> journalctl -xe output?
[16:40:48] <StephenLynx> pasted
[16:40:53] <StephenLynx> http://pastebin.com/RLjArKdd
[16:42:29] <kurushiyama> StephenLynx: erm, can you su to root for a sec?
[16:43:01] <StephenLynx> ok
[16:43:03] <StephenLynx> done
[16:43:56] <kurushiyama> the "chown mongod.mongod -R /var/lib/mongodb && chown mongod.mongod -R /var/log/mongodb/"
[16:44:50] <kurushiyama> After that, try a "service mongod start"
[16:45:05] <StephenLynx> chown: invalid user: ‘mongod.mongod’
[16:45:39] <kurushiyama> Whut?
[16:46:08] <kurushiyama> grep mongo /etc passwd
[16:46:13] <kurushiyama> sorry
[16:46:21] <kurushiyama> grep mongo /etc/passwd
[16:46:47] <StephenLynx> weird
[16:47:12] <StephenLynx> nothing
[16:47:44] <StephenLynx> i think I'm not logged on the VM :v
[16:47:54] <kurushiyama> Might be a reason
[16:47:59] <StephenLynx> v:
[16:49:50] <StephenLynx> chown: cannot access ‘/var/lib/mongodb’: No such file or directory
[16:50:36] <kurushiyama> sorry, /var/lib/mongo
[16:50:44] <kurushiyama> wait
[16:51:05] <kurushiyama> "chown mongod.mongod -R /var/lib/mongo && chown mongod.mongod -R /var/log/mongodb/"
[16:51:35] <StephenLynx> the chown ran, but still won't start the service
[16:52:17] <kurushiyama> Hm. try to run "sudo -u mongod /usr/bin/mongod ..." as _root_
[16:52:35] <kurushiyama> dbpath should be sufficient
[16:52:54] <StephenLynx> sudo -u mongod mongod -f /etc/mongod.conf worked
[16:54:02] <kurushiyama> killall -TERM mongod && /etc/init.d/mongod start
[16:54:42] <StephenLynx> sec, installing killall :v
[16:54:55] <StephenLynx> you know what
[16:54:58] <kurushiyama> simply kill the mongod
[16:55:01] <StephenLynx> yeah
[16:55:03] <StephenLynx> I already did that
[16:55:14] <kurushiyama> Did the init.d work?
[16:55:16] <StephenLynx> no
[16:55:21] <kurushiyama> wt...
[16:55:32] <kurushiyama> Well, obviously I know not enough...
[16:55:38] <StephenLynx> Apr 18 13:45:22 localhost.localdomain systemd[1]: Unit mongod.service entered fa
[16:55:39] <StephenLynx> Apr 18 13:45:22 localhost.localdomain systemd[1]: mongod.service failed.
[16:55:39] <StephenLynx> Apr 18 13:45:22 localhost.localdomain polkitd[620]: Unregistered Authentication
[16:56:10] <StephenLynx> Apr 18 13:45:22 localhost.localdomain polkitd[620]: Unregistered Authentication Agent for unix-process:1077:42583 (system bus name :1.17, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
[16:56:20] <StephenLynx> hm
[16:56:31] <StephenLynx> I wonder if this is relevant
[16:57:00] <kurushiyama> Not sure, tbh. All I can tell you is that it seems to be the only difference.
[16:57:27] <StephenLynx> that the service doesn't work while mongod does.
[16:58:31] <kurushiyama> Aye. I come more and more to the conclusion that there is some problem with dropping privileges
[16:59:00] <StephenLynx> one thing
[16:59:12] <StephenLynx> I noticed all packages got -1 at the end now on mongo's repositories
[16:59:28] <StephenLynx> I wonder if that has been always there
[16:59:35] <kurushiyama> Yep. rpm package version identifier.
[16:59:44] <StephenLynx> so what the hell changed?
[16:59:52] <StephenLynx> why won't older versions work anymore?
[16:59:54] <kurushiyama> I have no f... clue.
[17:00:23] <kurushiyama> Hence my assumption is that there is some problem during priv dropping.
[17:01:08] <kurushiyama> Please put the output of "yum list installed" on pastebin.
[17:01:22] <kurushiyama> I am not going to give up.
[17:02:29] <StephenLynx> you can't reproduce it anymore?
[17:03:09] <kurushiyama> For quite a while. As soon as I got the SELinux stuff fixed, it worked like a charm
[17:03:31] <StephenLynx> something funny
[17:03:45] <StephenLynx> I haven't had to touch selinux for a while now to get mongo working
[17:04:03] <kurushiyama> Well, you disabled it, no?
[17:04:06] <StephenLynx> no
[17:04:12] <kurushiyama> Oh.
[17:04:19] <StephenLynx> telling you, I didn't have to touch it at all.
[17:04:47] <kurushiyama> Well, docs state that if you try to bind to anything other than 127.0.0.1, you do. ;)
[17:04:48] <StephenLynx> http://pastebin.com/3WENyN1p
[17:04:52] <StephenLynx> nop
[17:04:55] <StephenLynx> I didn't have to.
[17:06:48] <pamp> Where can I check our mongodb enterprise license status?
[17:06:49] <kurushiyama> Well, try it. Set SELINUX in /etc/sysconfig/selinux to "permissive" (no quotes), reboot, and try the service _as root_. That way, we should have ruled out any related problem, at least.
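For what it's worth, the same switch can be made at runtime without a reboot (it does not persist across one either, unlike editing /etc/sysconfig/selinux):

```shell
getenforce         # prints Enforcing, Permissive or Disabled
sudo setenforce 0  # permissive until next boot; setenforce 1 switches back
```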
[17:07:19] <kurushiyama> pamp: You have been told before ;) It should be possible to do so online.
[17:08:15] <StephenLynx> i disabled it before with no changes, though
[17:08:27] <StephenLynx> but since we're running out of ideas, I'll try it anyway
[17:11:33] <StephenLynx> kek
[17:11:50] <StephenLynx> oh fuck me
[17:11:53] <StephenLynx> it worked
[17:12:06] <kurushiyama> Jay!!!! ... ... sort of.
[17:13:00] <StephenLynx> ill try again with disabled
[17:16:10] <StephenLynx> it worked again
[17:16:33] <kurushiyama> Ok, can we check something?
[17:16:42] <StephenLynx> I think I know what happened then
[17:16:48] <StephenLynx> it was because I started it as root once
[17:17:21] <StephenLynx> ill try again with enforced then apply the exception for mongo
[17:17:50] <StephenLynx> i wonder if the whole problem was a bug with selinux that got fixed
[17:19:51] <StephenLynx> the exception didn't work
[17:20:07] <StephenLynx> rebooting
[17:20:11] <kurushiyama> Nah, I doubt that. with enforced and semanage port -a -t mongod_port_t -p tcp 27017, It should run again. My _assumption_ is that we had security context problems, maybe, because you started it as root. Or screwed permissions. Or sth
[17:20:27] <kurushiyama> "semanage port -a -t mongod_port_t -p tcp 27017" did _not_ work?
[17:20:33] <StephenLynx> nope
[17:20:45] <kurushiyama> Did you reboot after setting it?
[17:20:56] <StephenLynx> yep
[17:20:58] <StephenLynx> still not working
[17:21:02] <kurushiyama> wtf.
[17:21:15] <StephenLynx> so, bug on selinux?
[17:21:29] <StephenLynx> did you managed to get it working with enforced and the exception?
[17:22:11] <kurushiyama> No. Not yet...
[17:22:36] <StephenLynx> either that or the command on https://docs.mongodb.org/manual/tutorial/install-mongodb-on-red-hat/ is wrong
[17:23:16] <kurushiyama> Iirc, you need to have a perm to bind to any IP other than 127.0.0.1 as well. But I'll check that
[17:23:23] <StephenLynx> no
[17:23:26] <StephenLynx> you don't
[17:23:41] <StephenLynx> you need to bind on ports lower than 1024 or so
[17:23:50] <StephenLynx> but that's it
[17:23:59] <StephenLynx> nothing to do on interfaces.
[17:24:57] <kurushiyama> Ok, installed policycoreutils. now it gets interesting.
[17:27:21] <kurushiyama> rebooting
[17:29:12] <cpama> has anyone tried to install mongodb on alpine linux?
[17:29:30] <kurushiyama> cpama: Yes. Advice: use docker.
[17:29:36] <cpama> ok
[17:30:22] <StephenLynx> >docker
[17:30:51] <StephenLynx> unless you are running a VPS service, you might as well use a VM
[17:32:37] <kurushiyama> StephenLynx: Not really. Docker images are lighter in weight and have less overhead. But lets stick to this. I can confirm that SELinux="enforce" + "semanage port -a -t mongod_port_t -p tcp 27017" does _not_ work.
[17:34:00] <kurushiyama> StephenLynx: So, as far as I can see, it seems to me that there is a problem with mongodb-org-server's selinux settings.
[17:37:59] <StephenLynx> then why would older versions not work anymore?
[17:38:38] <StephenLynx> joannac, you there?
[17:38:40] <kurushiyama> Correct. I think it _may_ be something which is a side effect of running the init.d file.
[17:39:00] <StephenLynx> ok, so if I get a clean VM
[17:39:12] <StephenLynx> and install an over version, it should work, by that logic
[17:39:19] <StephenLynx> and older*
[17:39:20] <kurushiyama> StephenLynx: Since I can rule out it is mongod's selinux context. I can start it manually as user mongod
[17:39:37] <kurushiyama> StephenLynx: Not necessarily
[17:40:52] <kurushiyama> StephenLynx: If it is a side effect of the interaction between mongod, systemd and SELinux, it can be anything. I just start to track it down, now that I can reproduce it again.
[17:45:57] <kurushiyama> I f... KNEW it.
[17:48:28] <kurushiyama> As far as I can see, the user mongod is denied access to ld.so.cache under systemd's runuser
[17:51:24] <kurushiyama> http://pastebin.com/Y0Mj3p3k
[17:51:50] <StephenLynx> yeah I got that too
[17:51:56] <kurushiyama> line 1 is where I set the policy from "enforce" to permissive, activating the audit logging.
[17:52:02] <StephenLynx> type=AVC msg=audit(1460997506.961:34): avc: denied { execute } for pid=843 comm="mongod" path="/etc/ld.so.cache" dev="dm-0" ino=17001639 scontext=system_u:system_r:mongod_t:s0 tcontext=unconfined_u:object_r:ld_so_cache_t:s0 tclass=file
[17:53:15] <kurushiyama> Ok, now the puzzle pieces fall into place, I think.
[17:53:59] <kurushiyama> Most likely, the SELinux metadata on that file have changed.
[17:57:58] <kurushiyama> Which would explain why it basically happens on every version.
[17:58:14] <kurushiyama> But only on 7.2
[17:58:22] <kurushiyama> ?
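Denials like the ld.so.cache one above can also be turned into a local policy module instead of running permissive; a hedged sketch, assuming audit2allow (from policycoreutils-python) is installed and the denials are in the default audit log:

```shell
# Generate a local module from mongod's logged AVC denials, then load it.
# Review the generated mongod_local.te before using this outside a test VM.
grep mongod /var/log/audit/audit.log | audit2allow -M mongod_local
semodule -i mongod_local.pp
```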
[17:59:00] <aguilbau> hi, is there a way to match the last element of an array? like 'field.$.value', but for the last element.
[18:03:55] <aguilbau> lets say i do this: 'db.tests.insert( { a: [ { b: 1 }, { b: 2 }, { b: 3 }, { b: 4 } ] } )'
[18:05:12] <aguilbau> now can I find all documents where the last element of 'a' has b = 4?
[18:20:25] <kurushiyama> aguilbau: kurushiyama's Rule of thumb: If you depend on the order of an array, there are good chances there is something wrong with your data model.
[18:21:01] <StephenLynx> that
[18:22:21] <cheeser> +1
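If the array order does have to carry meaning, one option on 3.2 is `$arrayElemAt` with a negative index, sketched against the sample document above (there is no plain query operator for "last element"):

```javascript
// Project the last element of a, then filter on its b field.
db.tests.aggregate([
  { $project: { a: 1, last: { $arrayElemAt: [ "$a", -1 ] } } },
  { $match: { "last.b": 4 } }
])
```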
[19:19:49] <jr3> Mongo write-locks updates by default, correct? If I have a document where I update to add to a set if length is < 20, there shouldn't be any kind of "race" where I actually end up with 21 elements in the set, right?
[20:02:57] <StephenLynx> that depends on how you are doing it.
[20:03:15] <StephenLynx> jr3
[20:03:22] <StephenLynx> is it a single operation?
[20:03:28] <jr3> yes
[20:03:40] <StephenLynx> then yes, you won't end up with 21 elements
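The single-operation pattern implied here puts the length guard into the query, so the check and the push are atomic per document; a sketch with hypothetical names ("items", a cap of 20):

```javascript
// Matches only while items has no element at index 19, i.e. fewer than 20.
// If a concurrent update fills slot 19 first, this one simply matches nothing.
db.coll.update(
  { _id: someId, "items.19": { $exists: false } },
  { $push: { items: newElement } }
)
```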
[20:11:12] <nalum> hello, I'm trying to set up tls connections on my replicaset. I have the certs I need for the mongod, but I'm confused as to what certs I need for the cloud manager agents (automation, monitor and backup)?
[20:13:18] <nalum> Anyone able to help?
[20:16:18] <kurushiyama> nalum: Do you need em? You have an on premise CM?
[20:17:08] <nalum> kurushiyama: I'm using letsencrypt
[20:17:23] <kurushiyama> nalum: That does not really
[20:17:30] <kurushiyama> answer the question
[20:17:37] <kurushiyama> ;)
[20:17:42] <StephenLynx> kek
[20:17:58] <nalum> cloudmanager is asking me to provide pem key files for each of the agents
[20:18:35] <kurushiyama> nalum: Do you use an installed version of CM or do you use The cloud version?
[20:18:44] <nalum> the cloud version
[20:20:51] <nalum> sorry, kurushiyama I had read CM as CA :S
[20:21:22] <cheeser> OpsManager is the local install product
[20:21:32] <kurushiyama> cheeser: Tyvm!
[20:21:36] <cheeser> np
[20:22:03] <kurushiyama> Still have no clue why CM should ask for _keyfiles_.
[20:22:23] <nalum> It's not asking for them to be uploaded, just for paths
[20:22:44] <cheeser> right. because mongo needs to be configured to find them.
[20:23:12] <nalum> Mongo does, but do the agents also need them?
[20:23:22] <kurushiyama> cheeser: You probably understand better what's going on. I am totally puzzled.
[20:23:54] <cheeser> i haven't really been paying attention, tbh. wrangling tests.
[20:24:05] <kurushiyama> nalum: Usually, the agents connect to CM, which has cert from a CA known by the agents.
[20:24:50] <nalum> Okay, so these paths are really only if you are using self signed certs?
[20:25:09] <kurushiyama> cheeser: Quick one: unless sth has changed _considerably_, agents dial CM to push data (monitoring, backup) and pull orders (automation), right?
[20:26:04] <kurushiyama> nalum: Aye
[20:26:15] <kurushiyama> nalum: As far as I have understood your setup.
[20:26:37] <nalum> Okay, I'll give it a shot so.
[20:27:01] <kurushiyama> nalum: Well, sorry, not exactly. You need to give the paths to the cert and keyfile for the server so that the server can utilize SSL.
[20:27:43] <cheeser> kurushiyama: yep
[20:27:44] <kurushiyama> nalum: Think of the cert file as the servers passport, and the key file as the servers keyring.
[20:27:54] <kurushiyama> cheeser: Thanks.
[20:28:41] <kurushiyama> nalum: The agents actually are "clients" to CM, if you will.
[20:32:17] <nalum> kurushiyama: Thanks, so the client will look for the relevant CA files on the local machines?
[20:33:00] <kurushiyama> nalum: I'd guess so. Never had the reason to debug it ;)
[20:33:22] <kurushiyama> nalum: May be hardcoded as well... well, in theory.
[20:35:13] <nalum> kurushiyama: Cool, thanks
[20:39:50] <nalum> Okay, so I've set up with allowSSL; when I use the mongo command line tool to connect with mongo --host host.com --ssl it fails with: Failed global initialization: BadValue need to either provide sslCAFile or specify sslAllowInvalidCertificates
[20:40:22] <nalum> So do I need to point it at the specific sslCAFile or am I using the wrong flag?
[20:40:40] <kurushiyama> nalum: At the specific.
[20:42:38] <nalum> Okay, is there a way to have it just look for the CAFile as it is a public CA and not self signed? Maybe I should not have the cloud manager set up with the sslCAFile?
[20:47:11] <kurushiyama> nalum: Wait a sec. Didn't you say it IS a self signed certificate?
[20:47:32] <kurushiyama> nalum: The one you are using for mongod...
[20:47:48] <nalum> No it's not a self signed cert
[20:47:59] <nalum> I'm using letsencrypt
[20:48:05] <kurushiyama> Ah. Sorry.
[20:48:19] <nalum> no worries :)
[20:48:24] <kurushiyama> use sslAllowInvalidCertificates then
[20:48:24] <Derick> letsencrypt++
[20:48:36] <Derick> i should try using that with mongodb
[20:49:23] <nalum> But that means it doesn't validate the cert, right?
[20:49:46] <kurushiyama> nalum: Right, it does not. And it does not make sense to do any validation here, either.
[20:51:06] <nalum> Why does it not make sense? Is that not part of what TLS certs are for, verifying that I am connecting to the server I think I am?
[20:51:25] <kurushiyama> nalum: As you can only verify the _host_ and not the issuer.
[20:52:25] <nalum> So sslAllowInvalidCertificates is for verifying the issuer and the clients should continue to verify the host even with that set?
[20:53:15] <kurushiyama> nalum: No. Validation checks the certificate the server identifies itself with against well-known certification authorities.
[20:53:56] <kurushiyama> nalum: The thing is this: A CA is not only there for encryption, but to validate the certificate holder. Take your bank, for example.
[20:54:51] <kurushiyama> nalum: the CA that issued the bank's certificate guarantees you (to a varying extent) that they made sure the certificate holder is who he claims to be.
[20:56:17] <kurushiyama> nalum: Since Let's Encrypt only verifies a host, you _really_ do not want them to be trusted by default.
[20:57:05] <kurushiyama> nalum: Of course, this is simplified a bit.
[20:57:45] <nalum> kurushiyama: Thanks for the info, much appreciated. So would you recommend not using them in production systems?
[20:58:29] <kurushiyama> nalum: For business? I would not consider it for a second, especially given the prices for wildcard certificates nowadays.
[20:59:29] <kurushiyama> nalum: As for secured communication between known hosts, it might be ok. But then, there is no reason to validate it, really.
[21:00:31] <kurushiyama> nalum: As for communication with the outside world? I'd run from a business that is that sloppy with its digital identity...
[21:07:30] <nalum> kurushiyama: Okay, thanks again :)
[21:08:17] <kurushiyama> nalum: https://farm8.staticflickr.com/7454/16377486875_bd27a6a6a2_o.jpg ;)
[21:08:52] <nalum> :)
[21:10:31] <kurushiyama> You... erm... get the picture ;)
[21:10:57] <nalum> :P
[21:12:06] <nalum> Okay, so last question for today. With cloud manager is it possible to have it use custom domains rather than mongodbdns.com or should I CNAME or ALIAS the domain name that it generates?
[21:13:04] <kurushiyama> Whut? You lost me...
[21:13:47] <nalum> When you use cloud manager to bring up a Machine Instance in AWS it assigns it a domain name.
[21:14:05] <nalum> which is based on the cloud manager group
[21:14:16] <nalum> and replicaset name
[21:15:24] <nalum> so repl-test-[0-9]+.test-group.[0-9]+.mongodbdns.com
[21:19:38] <nalum> I've not seen anything in the cloud manager, but maybe I'm looking in the wrong places. Is it possible to change that to use my own domain name or should I create a CNAME or an ALIAS DNS entry for the mongodbdns.com that is created?
[21:23:21] <cheeser> i use my own hostname fwiw.
[21:24:13] <nalum> cheeser: as a CNAME or through cloud manager?
[21:24:50] <nalum> Or are you not using the automation system?
[21:28:05] <cheeser> i have a CNAME set up on my hoster's DNS
[21:29:07] <nalum> so mydb.mydomain.com cnamed to ???.mongodbdns.com
[21:30:34] <nalum> then is the replicaset configured to use the CNAME or the mongodbdns.com name?
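The CNAME cheeser describes is a single DNS record. In zone-file syntax it would look roughly like this; both names are placeholders built from the pattern nalum quoted, since the actual generated name was not shown:

```
mydb.mydomain.com.    300    IN    CNAME    repl-test-0.test-group.12345.mongodbdns.com.
```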
[21:31:31] <zsoc> It appears that while doing a Model.create(arr), the process stops after it hits the first error. Maybe not, but even while trying to use the .then() promise it appears that way. I'd like to be able to catch all the error 11000's because they shouldn't be fatal.
[21:31:43] <zsoc> Sorry, i guess .create() is a mongoose thing, for context.
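An ordered insert stops at the first error, but an unordered bulk insert keeps going past duplicate-key (11000) errors and reports all write errors at the end. A mongo-shell sketch, with made-up database and collection names (mongoose's Model.insertMany(docs, {ordered: false}) behaves similarly in recent 4.x versions):

```shell
# Unordered inserts continue past duplicate-key (11000) errors;
# all write errors are reported together when the batch finishes.
# Database and collection names are placeholders.
mongo mydb --eval '
try {
    db.items.insertMany(
        [{_id: 1}, {_id: 1}, {_id: 2}],  // second document is a duplicate
        {ordered: false}                 // do not stop at the first error
    );
} catch (e) {
    printjson(e);                        // inspect writeErrors for every 11000
}'
```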
[21:37:37] <cslcm> Hi folks. I'm trying to get my head round MongoDB sharding. I think I've successfully created a three-host cluster following the documentation, which is good, however I didn't see any way to configure how much redundancy there is. For example, does each host contain a full copy of the data, or is it split into three?
[21:37:58] <cslcm> this is using 3.2.5
[21:40:37] <nalum> cslcm: Assuming that when you say 3, you mean 3 nodes? I think it's split across the 3 nodes, but for redundancy you would have those 3 nodes actually be replica sets. So 3 shards of 3 node replica sets.
[21:42:17] <cslcm> ah i see, at the moment I have a single replica set with the three servers as members
[21:42:21] <cslcm> I need additional replica sets?
[21:42:58] <nalum> If the 3 servers are part of the same replica set then they are not sharded, if I understand it correctly
[21:43:34] <cslcm> ok
[21:43:51] <cslcm> so I have a single replica set on each server
[21:44:02] <cslcm> then connect to them from mongos
[21:44:17] <cslcm> how is the data distributed across the servers in that case?
[21:44:34] <cslcm> What i'm aiming for is for each one of the servers to contain 66% of the data
[21:44:47] <cslcm> so we can lose 1 server without impacting the service
[21:45:24] <nalum> I don't know if that can be done, but I've not looked too deeply into sharding yet
[21:48:28] <nalum> My understanding would be: you have 3 shards, each shard is a replica set with 3 nodes, each node on its own server. This then allows single nodes to fail while keeping the system available. Obviously the actual number of nodes is up to you, and for production systems 5 nodes in a replica set is recommended.
[21:51:17] <nalum> cslcm: In your suggested setup of 3 servers, you could have 3 nodes per replica set, one on each machine, so the whole replica set is not on one machine, meaning each machine has 100% of the data.
[21:53:54] <cslcm> nalum: The problem is (and this is one of the problems which is supposed to be tackled by sharding, according to the mongo docs) - my data cannot fit on a single machine. There's too much data. Hence why I only want 66% on each machine
[21:53:59] <cslcm> I thought that was kinda the point of sharding
[21:54:05] <cslcm> to provide scalability and redundancy
[21:57:18] <kurushiyama> cslcm: scalability
[21:57:30] <kurushiyama> cslcm: Redundancy is provided by replsets
[21:57:53] <kurushiyama> cslcm: A sharded cluster _can_ consist of replsets. But it does not _have_ to.
[21:57:55] <cslcm> okay
[21:58:18] <cslcm> i think I understand, so if I have host A,B,C.. I can create three replication sets, one with A+B, one with B+C and one with A+C
[21:58:28] <cslcm> and shard them together?
[21:58:49] <kurushiyama> cslcm: Ok, lets assume node B goes down
[21:59:10] <kurushiyama> cslcm: Then two replsets would be affected.
[21:59:37] <cslcm> but one would remain with a full copy of my data, if i'm not mistaken?
[21:59:39] <kurushiyama> node B would hold the data of two shards.
[22:00:15] <kurushiyama> cslcm: First of all, the minimal replset consists of 3 nodes. 2 data bearing nodes + 1 arbiter.
[22:00:22] <cslcm> ok
[22:00:38] <kurushiyama> cslcm: You want to _minimize_ risks.
[22:00:51] <kurushiyama> And you want to scale.
[22:01:10] <kurushiyama> If you do everything right, you maximize the usage of a node.
[22:01:19] <kurushiyama> with a _single_ instance.
[22:01:53] <kurushiyama> Because there are usually 2 factors that force you to shard: IO and disk space.
[22:02:17] <cslcm> right, and both are a problem for me
[22:02:27] <kurushiyama> Those are _constants_ per node
[22:02:44] <kurushiyama> No matter whether you have 1 or 23 instances on that node.
[22:03:16] <kurushiyama> But with more than 1 node, you maximize overhead instead of utilization.
[22:03:24] <kurushiyama> sorry,
[22:03:30] <kurushiyama> more than 1 instance/node
[22:03:38] <cslcm> i'm more than happy to run a single instance on each node, I'm not suggesting otherwise
[22:04:00] <kurushiyama> cslcm: > i think I understand, so if I have host A,B,C.. I can create three replication sets, one with A+B, one with B+C and one with A+C
[22:04:06] <kurushiyama> cslcm: wrong.
[22:04:09] <kurushiyama> ;)
[22:04:14] <cslcm> A,B,C are three machines
[22:04:18] <kurushiyama> Yes
[22:04:19] <cslcm> not three instances on the same machine
[22:04:38] <kurushiyama> But you always want only one instance per machine
[22:04:52] <kurushiyama> Since you want to maximize utilization.
[22:04:56] <cslcm> oh, I see what you mean, an additional replication set would require another node
[22:05:01] <cslcm> instance*
[22:05:06] <kurushiyama> cslcm: Aye
[22:05:22] <cslcm> Okay, that's fine, but is there any way of accomplishing what I need?
[22:05:35] <kurushiyama> cslcm: So, given you have, say, 1TB of data, and each node can hold 600GB.
[22:05:41] <cslcm> that's what I want yes
[22:05:44] <kurushiyama> cslcm: You would need two shards
[22:05:53] <cslcm> ok
[22:06:00] <kurushiyama> cslcm: But, let's say one of those machines fails.
[22:06:14] <kurushiyama> cslcm: Half of your data would not be accessible.
[22:06:28] <cslcm> that makes it entirely pointless, surely
[22:06:34] <kurushiyama> cslcm: Hence, you would have two shards, each consisting of a replset.
[22:06:59] <cslcm> ok
[22:07:43] <kurushiyama> cslcm: So, your minimal setup would be 3 config servers, 4 data bearing nodes and 2 arbiters. The latter 6 would each form a replset of 2 data nodes + an arbiter.
[22:08:16] <cslcm> ok, I don't have four nodes
[22:08:34] <kurushiyama> Now for the good news: the 3 config servers and the arbiters can be relatively cheapo VPS (though they ofc should be on different machines)
[22:09:53] <cslcm> It sounds like MongoDB can't be used for my application. I only have three physical servers to run this on
[22:09:59] <kurushiyama> cslcm: I cannot recommend running a sharded cluster on standalones or (while technically possible) with only 1 config server.
[22:10:32] <kurushiyama> cslcm: Oh, it could. But your requirements are a bit like "I want to have the cake _and_ eat it."
[22:11:11] <kurushiyama> cslcm: You want redundancy + sharding + only 3 servers. Choose any 2
[22:12:49] <cslcm> I can't, so mongodb won't work for me
[22:13:42] <cslcm> What I'm asking for is very achievable in principle... split the database into three chunks, 1 2 3, one server has 12, one server has 23, one server has 13
[22:13:51] <kurushiyama> cslcm: Scale up and put more space into them. Plus: Which DBMS do you expect to behave better? We are not talking of implementation, but hard facts and logic.
[22:13:58] <kurushiyama> Sure
[22:14:18] <kurushiyama> cslcm: The problem is that you overuse the servers.
[22:14:24] <cslcm> That's irrelevant
[22:15:28] <kurushiyama> cslcm: If you think so. If you ask whether it is technically possible, the answer is yes.
[22:15:37] <cslcm> how is that done?
[22:15:48] <cslcm> if load becomes an issue, I can scale out and add additional nodes later on
[22:15:50] <kurushiyama> you put 4 instances on each node.
[22:15:57] <cslcm> 4?
[22:16:07] <kurushiyama> config servers + arbiters
[22:16:34] <cslcm> what's the role of the arbiter?
[22:16:45] <cslcm> from what I read, it's just used for elections?
[22:16:47] <kurushiyama> so node 1 would hold 1+2+arbiter 3 + config1
[22:17:05] <kurushiyama> cslcm: Without an arbiter, if one node goes down, the whole replset goes down.
[22:17:10] <cslcm> ah
[22:17:23] <cslcm> i'd have thought the config server would deal with that
[22:17:26] <cslcm> but ok
[22:17:33] <kurushiyama> since 1 node alone cannot decide whether it is the most up-to-date. It might just be cut off by a network partition.
[22:17:41] <cslcm> right, fair enough
[22:18:10] <kurushiyama> So. node 2 would hold 2+3+arbiter 1 + config 2.
[22:18:13] <kurushiyama> And so on.
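Spelled out, node 1 in that layout would run four processes. A rough sketch of how they could be started; all ports, paths, and replica-set names are invented for illustration, and the arbiter and set members still have to be wired up afterwards with rs.initiate()/rs.addArb():

```shell
# Node 1 of 3: members of shards 1 and 2, an arbiter for shard 3,
# and one config server. Ports, dbpaths and set names are placeholders.
mongod --shardsvr --replSet shard1 --port 27018 --dbpath /data/shard1 --fork --logpath /data/shard1.log
mongod --shardsvr --replSet shard2 --port 27028 --dbpath /data/shard2 --fork --logpath /data/shard2.log
mongod --replSet shard3 --port 27038 --dbpath /data/arb3 --fork --logpath /data/arb3.log
mongod --configsvr --replSet cfg --port 27019 --dbpath /data/cfg --fork --logpath /data/cfg.log
```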
[22:18:40] <cslcm> and if one node goes down, the data should be intact
[22:18:48] <kurushiyama> Plan with _a lot_ of RAM if you want to use WT
[22:18:52] <kurushiyama> Yes
[22:19:08] <cslcm> what's WT? :)
[22:19:24] <kurushiyama> wiredTiger, the default storage engine as of 3.2
[22:19:28] <cslcm> oh, right
[22:19:45] <cslcm> it doesn't need to cover the entire dataset though, does it?
[22:19:52] <cslcm> that's the reason I need to move away from redis :P
[22:20:20] <kurushiyama> oh, put this down as a note: you have to set WT's cache to < 1/4 of your physical RAM, and you need tons of swap space.
[22:20:32] <kurushiyama> no
[22:20:46] <cslcm> my machines each have 128gb of ram, that shouldn't be an issue
[22:20:58] <kurushiyama> Depends on your data.
[22:21:23] <cslcm> 95% of my data will never be accessed
[22:21:24] <kurushiyama> each instance will a) try to use 50% of the available physical RAM as WT cache
[22:21:49] <kurushiyama> b) try to create a utilization of the available physical RAM of 85-90%
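With several instances sharing one machine, that 50% default has to be overridden per instance, e.g. in each instance's own config file. A fragment; the 8 GB figure is only an illustration for a 128 GB machine split between multiple instances, not a recommendation:

```
# mongod.conf fragment: cap the WiredTiger cache for this instance
storage:
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 8
```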
[22:22:23] <kurushiyama> If you are going to run that stunt, it most likely will be best to run the individual instances as docker images for resource limitation.
[22:23:02] <kurushiyama> Do not even bother to think about HDDs, and give each instance a dedicated LVM partition.
[22:23:56] <kurushiyama> And you _really_, _really_ should have a deployment diagram handy in case you need help.
[22:24:20] <cslcm> Fortunately I'm not deploying this to production right away :)
[22:24:27] <cslcm> thank you for your advice
[22:24:48] <cslcm> you've been very helpful!
[22:24:54] <kurushiyama> cslcm: You are most welcome. I honor your courage...
[22:26:19] <cslcm> In the long term I will expand and add more nodes, and then it can become more sane
[22:26:34] <cslcm> but you have to start somewhere ;)
[22:27:08] <cslcm> Just to confirm though, by default, a replication set splits the data evenly over all the nodes?
[22:27:16] <cslcm> over all the set members, i mean
[22:29:24] <cslcm> never mind, I get it now
[22:29:35] <cslcm> the sharding does that
[22:29:50] <cslcm> I feel so dense sometimes :)
[22:32:48] <Freman> in other news, might finally do something with that domain related to twitter I bought years ago lol
[22:36:32] <cslcm> do you plan to store deleted tweets by any chance? :)
[22:44:12] <Freman> for one of my projects yes, there's an election coming
[22:44:27] <Freman> for the other one... not so much
[22:55:11] <cslcm> kurushiyama: Is this what you were suggesting? http://pastebin.com/raw/Rjj0QsKc
[23:13:41] <cslcm> seems to work well. :)
[23:29:57] <cslcm> Question: The docs make very clear that the config servers must be backed up. Do I need to back up ALL of them, or will just one be enough?
[23:31:10] <cslcm> I'm guessing each config server is identical?
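The config servers do hold the same cluster metadata, so the usual approach is to stop the balancer (via a mongos) and dump a single config server. A sketch with placeholder host names and ports:

```shell
# Host names and ports are placeholders.
mongo --host mongos.example.com --eval 'sh.stopBalancer()'
mongodump --host cfg1.example.com --port 27019 --out /backup/configdb
mongo --host mongos.example.com --eval 'sh.setBalancerState(true)'
```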