PMXBOT Log file Viewer


#mongodb logs for Tuesday the 19th of April, 2016

[00:12:41] <ellistaa> can someone take a look at my schema outline
[00:12:43] <ellistaa> https://gist.github.com/ellismarte/b709b3ac7aa2aa60fb05da5e218d0c4a
[00:13:40] <ellistaa> so there are accounts, and accounts have many pieces of property, and instead of duplicating images of those pieces, i want them to reference an item and grab the image from the item. do u think i should use a relational db for this?
[00:14:09] <Skaag> does anyone have a link to a document that explains why mongodb decided to use a uuid as the document's _id instead of some sequential number like other databases do?
[00:16:00] <carpetfizz> Anyone able to help with tailable cursors?
[01:36:28] <CaptTofu> howdy all
[01:36:32] <CaptTofu> question
[01:36:48] <CaptTofu> so, preparing a talk about automation
[01:36:52] <CaptTofu> I wanted to demo sharding
[01:36:58] <CaptTofu> enabled sharding on a db
[01:37:24] <CaptTofu> I want to remove that so it doesn't show up in sh.status()
[01:37:29] <CaptTofu> it was done wrong
[01:37:34] <CaptTofu> can I do that?
[01:37:38] <CaptTofu> and how?
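For reference: MongoDB of this era has no sh.disableSharding() helper, so the commonly cited (unofficial) approach is to edit the config database directly and flush the routers. A hedged sketch, with `demo` as a hypothetical database name; it assumes no collection in that db is still sharded:

```javascript
// Unofficial sketch: mark a database as no longer sharded so it stops
// showing as partitioned in sh.status(). Via mongos:
//   use config
//   db.databases.update({ _id: "demo" }, { $set: { partitioned: false } })
//   db.adminCommand({ flushRouterConfig: 1 })   // repeat on every mongos
// The filter and update documents used above:
function unpartitionDb(dbName) {
  return { q: { _id: dbName }, u: { $set: { partitioned: false } } };
}
console.log(JSON.stringify(unpartitionDb("demo").u));
```

Backing up the config database before touching it by hand is strongly advisable.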
[04:03:13] <mylord> how to find number of properties in a subdoc?
[04:06:39] <mylord> I want to remove docs which have just 1 property in the users subdoc
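On servers of this vintage (no $objectToArray aggregation operator), one way to express "exactly one property in the users subdocument" is a $where clause. A hedged sketch; the collection name is hypothetical, and note $where runs server-side JavaScript and cannot use indexes:

```javascript
// Sketch: remove docs whose `users` subdocument has exactly one key.
//   db.coll.remove({ $where: "Object.keys(this.users).length === 1" })
// The predicate itself, as plain JS:
function hasSingleUser(doc) {
  return !!doc.users && Object.keys(doc.users).length === 1;
}
console.log(hasSingleUser({ users: { alice: 1 } }));          // true
console.log(hasSingleUser({ users: { alice: 1, bob: 2 } }));  // false
```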
[08:20:19] <Ange7> est all
[08:20:23] <Ange7> someone here ?
[08:39:54] <Ange7> I have 3,000,000 jobs in my Job Queue (Gearman). Each job makes 3 updates in my mongoDB. I have 1200 jobs running at the same time, permanently. But sometimes, with mongostat, I see that mongodb is waiting: 0 insert, 0 update, 0 delete, 0 query. I don't know why. Can someone explain to me why mongodb is waiting? thank you guys
[08:47:41] <oznt> hi everyone, can someone point me to how to set the file path for logging while mongod is running?
[09:01:17] <Derick> oznt: there is a logpath in mongodb.conf
[09:01:41] <Derick> mine looks like:
[09:02:07] <Derick> http://pastebin.com/eUt1UnBb
[09:06:29] <oznt> Derick: yes, I am aware of that option, but it will require me to edit the file, and restart mongo
[09:10:35] <Derick> oh, I missed the "while running"
[09:10:38] <Derick> I don't think you can
[09:21:52] <kurushiyama> oznt: You can _try_ to change the path in the config file and send a SIGUSR1 to mongod. This usually triggers a log file rotation. If you are lucky, the log file rotation is re-read, too.
[09:26:07] <oznt> kurushiyama: thanks, I will try that
[09:26:47] <kurushiyama> oznt: Of course I meant that the log file _location_ might be re-read ;)
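Both runtime options mentioned above, sketched with hedges. Whether a changed systemLog.path in the config file is re-read on rotation is not guaranteed; mongod normally renames the current log and reopens the configured path:

```javascript
// Two ways to rotate mongod's log at runtime:
// 1) From a system shell, send SIGUSR1:
//      kill -USR1 "$(pgrep -x mongod)"
// 2) From the mongo shell, the supported admin command:
//      db.adminCommand({ logRotate: 1 })
// The command document for option 2:
const cmd = { logRotate: 1 };
console.log(JSON.stringify(cmd));
```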
[09:46:40] <Ange7> I have 3,000,000 jobs in my Job Queue (Gearman). Each job makes 3 updates in my mongoDB. I have 1200 jobs running at the same time, permanently. But sometimes, with mongostat, I see that mongodb is waiting: 0 insert, 0 update, 0 delete, 0 query. I don't know why. Can someone explain to me why mongodb is waiting? thank you guys
[10:20:50] <kurushiyama> Ange7: What makes you think that this is MongoDB? Does it occur to you that your application code might cause the delay?
[10:23:24] <kurushiyama> Ange7: I am pretty sure that MongoDB does not cease to work randomly, for that matter ;)
[10:28:49] <Ange7> kurushiyama: TCP max connection 32k
[10:28:59] <Ange7> cause i'm not sure my connection is persistent
[10:34:32] <chinychinchin> Advice here - is it possible to have 2.6 sharded replicas mixed with 3.2 sharded replicas
[10:35:04] <chinychinchin> from what i have read it appears it is possible but need some confirmation
[10:39:25] <kurushiyama> Ange7: Sounds weird. You are aware of the fact that the driver usually creates a connection pool? So each time you create a client, in reality you create a pool.
[10:39:51] <kurushiyama> Ange7: And, of course, connections should be closed.
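The pooling advice above, as a hedged sketch for the Node.js driver of that era (option names and placement vary by driver version; the URL and pool size are illustrative):

```javascript
// Create ONE client (= one pool) at startup and share it across workers,
// instead of connecting per job:
//   const { MongoClient } = require('mongodb');
//   MongoClient.connect('mongodb://localhost:27017/jobs',
//     { poolSize: 20 },               // each client maintains a pool
//     (err, db) => { /* hand `db` to all 1200 workers */ });
// 1200 workers each opening their own client can exhaust connections
// and make the server appear to sit idle ("waiting") between bursts.
const poolSize = 20; // hypothetical tuning value, not a recommendation
console.log(poolSize);
```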
[10:40:42] <kurushiyama> chinychinchin: It is technically possible, though I would strongly advise against it.
[10:41:59] <chinychinchin> kurushiyama: the problem i have is that the application and current mongodb setup is old - and there is reluctance to change even though we desperately need to
[10:42:25] <kurushiyama> chinychinchin: Have the cake and eat it? Never worked and never will.
[10:43:38] <chinychinchin> kurushiyama: so are you saying that i cannot add a 3.2 replica shard to an existing 2.6.12 replica shard - just need confirmation
[10:44:06] <kurushiyama> chinychinchin: You _can_. But then, I am pretty sure you need to update mongos
[10:44:31] <kurushiyama> chinychinchin: Then, depending on the language of your application, you'd need a driver update.
[10:44:43] <kurushiyama> chinychinchin: Most likely.
[10:45:47] <kurushiyama> chinychinchin: To keep things easy, I'd rather add another 2.6.X shard and make sure the migration process is started _now_, as 2.6 reaches end of life by Q4
[10:46:02] <chinychinchin> kurushiyama: that's the gotcha - they are reluctant to upgrade from the java mongodb driver 2.12 to 2.14, which is compatible with 3.2 -
[10:46:17] <chinychinchin> kurushiyama: that's great information - thanks
[10:46:46] <kurushiyama> chinychinchin: Uhm, but "they" are aware of a notion called "integration tests"?
[10:46:59] <kurushiyama> chinychinchin: ;)
[10:47:45] <chinychinchin> kurushiyama: not everyone has heard of agile, extreme programming etc etc - that's why the reluctance - ;-)
[10:49:01] <kurushiyama> chinychinchin: Well, I could argue that ITs are neither agile nor extreme. But my suggestion stands: Scale out on 2.6, but make sure some IT tests are done with the 2.14 driver. From my experience, it is a drop-in replacement.
[10:50:04] <kurushiyama> chinychinchin: If the ITs are successful, update the application, then do a rolling update of MongoDB to 3.0 and then 3.2
[10:51:14] <chinychinchin> kurushiyama: your suggestion is going straight to email to send to business
[10:51:19] <chinychinchin> :-)
[11:15:19] <gahan> Hi. I'm using 3.2.5 which no longer provides an init script. Is there a nice downloadable file or init generator?
[11:15:42] <Derick> that sounds like a bug, there should be an init script
[11:16:11] <gahan> As of version 3.2.5, there are no init scripts for mongos. The mongos process is used only in sharding. You can use the mongod init script to derive your own mongos init script for use in such environments. See the mongos reference for configuration details.
[11:17:23] <gahan> just discovered /etc/init/mongod.conf
[11:17:41] <Derick> that's systemd I suppose
[11:17:45] <gahan> service doesn't pick it up, should I use `start mongodb` or something else?
[11:17:59] <Derick> tbh, I don't know
[11:18:52] <gahan> says # Ubuntu upstart file at /etc/init/mongod.conf
[11:20:26] <kurushiyama> Did mongo_s_ ever have an init script?
[11:22:45] <kurushiyama> gahan: For mongos, I tend to use the mongodb user's crontab with an @reboot statement.
[11:24:17] <gahan> kurushiyama: I dont think I want mongos, just mongod
[11:25:41] <kurushiyama> gahan: Sorry, got that wrong, then. But then, what is the problem? systemctl start mongod
[11:37:15] <R1ck> hiya. I'm comparing two "explain" aggregate commands, and one of them has a stage "KEEP_MUTATIONS" as inputStage while the rest is identical - can someone tell me what this means?
[11:46:24] <gahan> hope it's not flooding, kurushiyama
[11:46:25] <gahan> crugo-demo# systemctl start mongo
[11:46:26] <gahan> Failed to start mongo.service: Unit mongo.service not found.
[11:46:26] <gahan> crugo-demo# systemctl start mongod
[11:46:26] <gahan> Failed to start mongod.service: Unit mongod.service not found.
[11:46:55] <kurushiyama> gahan: OS and version?
[11:47:00] <gahan> ubuntu 16.04
[11:47:07] <gahan> soon LTS
[11:47:14] <kurushiyama> gahan: Unsupported, afaik. ;)
[11:47:38] <gahan> ideas?
[11:48:01] <gahan> anyone have a script ready? I don't like messing with distros, I love it when everything works out of the box
[11:49:30] <gahan> mongodb-org-server package has /etc/init/mongod.conf in it but I think xenial uses systemd over upstart
[11:49:54] <kurushiyama> gahan: Then use a supported version ;)
[11:51:09] <gahan> I need 3.2.5
[11:53:39] <kurushiyama> gahan: Runs on 14.04
[11:54:40] <kurushiyama> gahan: https://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/#overview : "Currently, this means 12.04 LTS (Precise Pangolin) and 14.04 LTS (Trusty Tahr)."
[11:58:24] <gahan> 16.04 is becoming LTS next week
[11:59:18] <kurushiyama> gahan: Well, that does not change the statement, does it?
[12:03:05] <gahan> it is not very helpful towards the goal I'm asking for support to achieve
[12:04:09] <kurushiyama> gahan: a) Have a running mongodb b) Have it running out of the box c) Have an underlying OS that is not supported yet. Choose any 2. ;)
[12:07:21] <kurushiyama> gahan: It is not that I do not _want_ to help. It is just that we are talking of hard facts.
[12:08:27] <dojobo> gahan have you given it a shot on a vm or container?
[12:08:43] <dojobo> (using 14.04 obv)
[13:28:02] <R1ck> I'm comparing two "explain" aggregate commands, and one of them has a stage "KEEP_MUTATIONS" as inputStage while the rest is identical - can someone tell me what this means?
[14:13:19] <owg1> I'm looking into benchmarking MongoDB's performance on various filesystems, including ZFS, as there doesn't appear to be much information out there for it. Does anyone have any resources that might be helpful, or any thoughts on the project in general?
[14:38:48] <kurushiyama> owg1: Just thoughts. ZFS vs XFS. Benchmarking done. ;)
[14:40:32] <StephenLynx> hipsters
[14:56:18] <owg1> kurushiyama: Benchmarking done?
[14:56:39] <owg1> We were planning on doing XFS, ZFS and ext4
[14:57:11] <owg1> StephenLynx: We use Node.JS, and pug too, aren't we trendy
[14:57:23] <StephenLynx> what's pug?
[14:58:34] <owg1> Jade, just they got threatened with legal action by another company that has the trademark for Jade
[14:58:39] <StephenLynx> kek
[14:58:40] <Derick> StephenLynx: a small ugly dog
[14:58:42] <kurushiyama> owg1: Comparing ZFS with XFS is a bit like comparing apples with peas
[14:58:50] <cheeser> mmmm. peas.
[14:58:52] <StephenLynx> I like node, but I avoid templating engines like the plague.
[14:59:05] <StephenLynx> I use jsdom for generating HTML
[14:59:07] <kurushiyama> cheeser: Want some? ;P
[14:59:19] <cheeser> are they whirled?
[14:59:29] <Derick> what tf is a whirled pea?
[14:59:36] <kurushiyama> cheeser: I am afraid not ;)
[14:59:38] <StephenLynx> No need for FE devs to use anything but standard html, css and js
[14:59:46] <cheeser> "visualize whirled peas"
[14:59:52] <cheeser> "give peas a chance"
[14:59:59] <Derick> oh
[15:00:00] <Derick> hah!
[15:00:10] <Derick> that joke works better when you say it out loud
[15:00:22] <cheeser> along the veins of "save the whales. collect the whole set."
[15:00:30] <kurushiyama> owg1: The whole idea of MongoDB is to be able to scale horizontally on relatively cheap hardware. For large datasets, IO and RAM are the limiting factor, not the available disk space.
[15:01:04] <kurushiyama> owg1: So ZFS is a bit pointless when comparing it to a filesystem explicitly designed for fast access
[15:01:45] <owg1> kurushiyama: We want to be able to compare the performance of the different FS's and then have a benchmark to compare tweaks to ZFS against
[15:02:07] <owg1> Disk space doesn't factor into it
[15:02:48] <kurushiyama> owg1: You can tweak ZFS as much as you like, it wont even play in the same league as XFS, much less a tweaked one. But if you wish. You just asked for thoughts, and these are mine ;)
[15:03:03] <kurushiyama> owg1: Especially not the Linux port.
[15:04:31] <owg1> kurushiyama: Good to know, thank you.
[15:15:32] <jokke> hello
[15:17:21] <jokke> a project of ours is handling very frequent writes (every 2 seconds into ~360 documents) of time based measurement data.
[15:17:38] <jokke> as of now we're still using MMAPv1 as storage engine
[15:17:58] <jokke> the plan is to upgrade to WiredTiger to benefit from document based concurrency
[15:19:04] <jokke> an ex co-worker of mine did some research regarding the document structure with regard to performance
[15:19:51] <jokke> in MMAPv1 it was necessary to pre-allocate hourly documents which have slots for minutes and then one level deeper for seconds
[15:20:21] <jokke> to make use of fast update operations using the dot notation
[15:22:06] <jokke> this schema has complicated the application code almost every time we read from the db and we want to simplify it. iirc storing only timestamp and value in a single document isn't very smart performance wise (even with wiredTiger), but that's the way we'd like to access the data.
[15:23:28] <jokke> the co-worker i mentioned earlier had found out that with WiredTiger it'd be possible to simplify the schema so that one document would still contain 1 hour worth of data but stored flat in an array.
[15:23:59] <jokke> because of performant push operations
[15:24:14] <jokke> is this correct?
[15:25:14] <jokke> it's still not ideal because getting a timespan of values involves finding out which document(s) the values is/are in.
[15:25:45] <jokke> *are
[15:25:48] <cheeser> http://blog.mongodb.org/post/65517193370/schema-design-for-time-series-data-in-mongodb
[15:26:02] <cheeser> https://www.mongodb.com/presentations/mongodb-time-series-data-part-1-setting-stage-sensor-management
[15:26:22] <cheeser> https://www.mongodb.com/presentations/mongodb-time-series-data-part-2-analyzing-time-series-data-using-aggregation-framework
[15:26:47] <jokke> hm those are pretty old posts/presentations
[15:26:53] <cheeser> https://www.mongodb.com/presentations/mongodb-time-series-data-part-3-sharding
[15:26:55] <jokke> before WiredTiger
[15:27:15] <cheeser> i imagine they're still mostly applicable though.
[15:27:27] <cheeser> at least they might give you a nudge to something less cumbersome.
[15:27:51] <jokke> well at least what i skimmed through in the blog post it looks exactly like our schema
[15:27:59] <jokke> and it's very cumbersome indeed :)
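The two schemas discussed above, sketched as update documents (field names are hypothetical). Whether the flat-array variant performs well enough under WiredTiger is exactly the open question in this thread; this only illustrates the shapes:

```javascript
// MMAPv1-era: pre-allocated hourly document with minute/second slots,
// updated in place via dot notation:
function slotUpdate(minute, second, value) {
  return { $set: { ["values." + minute + "." + second]: value } };
}
// WiredTiger-era idea: one hourly document holding a flat array,
// appended to with $push:
function pushUpdate(ts, value) {
  return { $push: { samples: { t: ts, v: value } } };
}
console.log(JSON.stringify(slotUpdate(12, 30, 42.5)));
console.log(JSON.stringify(pushUpdate(1461078000, 42.5)));
```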
[15:41:13] <owg1> does anyone know how to set directoryPerDB in mongo 3.2 /etc/mongo.conf? Nothing I try seems to be working.
[15:41:53] <cheeser> https://docs.mongodb.org/v3.2/reference/configuration-options/#storage-options
[15:42:05] <cheeser> can you pastebin your conf file?
[15:45:27] <owg1> cheeser: I've noticed that our company is using a non-YAML version, but I tried converting it to YAML and had the same issues. I'll paste it in one second
[15:46:52] <owg1> Damn, preferred pastebin is down
[15:47:00] <cheeser> gist.github.com
[15:47:24] <owg1> http://lpaste.net/160761
[15:47:52] <cheeser> iirc, you can only change this on a new instance
[15:48:11] <cheeser> so if you're adding this to an existing instance, i don't think it'll change anything...
[15:48:27] <cheeser> but the file looks right.
[15:49:04] <owg1> It is a brand new installation of mongo, there is no data. Although the service has been started prior to the changes to the config
[15:52:06] <owg1> It is returning 100, which is an uncaught exception, oh golly!
[15:52:36] <cheeser> run mongod manually and crank up the verbosity and see what you get
[15:55:07] <owg1> Ah nice idea, thanks
[15:59:10] <cheeser> np
[15:59:54] <owg1> This is fun, it exits with 100, but gives no output even with verbose
[16:00:03] <cheeser> awesome
[16:04:24] <owg1> So if I have the directoryPerDB option, included like this: http://lpaste.net/160762
[16:04:47] <owg1> It returns 100 with uncaught exception, if I remove it it starts just fine. Do you think this is worth creating an issue over?
[16:05:26] <owg1> Clean install of Mongo 3.2 on Xenial, with a unit file added to let it run under systemd, and no data at all.
[16:07:05] <owg1> Better log: http://lpaste.net/160763
[16:09:58] <owg1> Found an issue: https://jira.mongodb.org/browse/SERVER-20440
[16:27:41] <owg1> Isn't really a solution provided
[16:30:02] <kurushiyama> owg1: Xenial is not supported, yet.
[16:46:39] <owg1> kurushiyama: Yeah we know, its a pain in the ass
[16:47:31] <kurushiyama> owg1: As said before: I would not waste time on an unsupported OS.
[16:49:24] <owg1> kurushiyama: We just want to be ahead of the curve and up to scratch with Systemd which appears to be the future. Xenial comes out in a couple of weeks, so one would assume support is around the corner
[16:50:14] <kurushiyama> owg1: tbh, I rather have a stable system than the bleeding edge. ;)
[16:52:53] <owg1> kurushiyama: Yeah we aren't using it in production, just want to be in a position to move seamlessly to systemd/xenial when it looks like a good time to move, rather than rushing things later on. Basically just doing R&D for the future.
[16:53:43] <kurushiyama> owg1: Well, understandable. Especially given the distribution's... ...interesting decisions in the past.
[16:56:06] <orriols> hi. I'm using MongoDB 2.6. Does count operation leverage index intersection feature?
[16:57:44] <kurushiyama> AAA_awright: You there?
[16:58:03] <orriols> or I need a compound index for the count operation?
[16:58:21] <kurushiyama> orriols: Can you show us the according query?
[16:58:33] <orriols> sure
[16:59:31] <orriols> db.historics.find({ $and: [ { $and: [ { $and: [ { asset_id: { $eq: ObjectId('570da26f661f2d739ea4f761') } }, { signals_triggered: { $in: [ "Turbidity" ] } } ] }, { $or: [ { _group: { $exists: false } }, { _group: { $in: [ "test-group" ] } } ] } ] }, { __type__: "historic_testcoll" } ]}).count()
[17:00:25] <kurushiyama> orriols: In this case, the cursor length is counted, iirc. So basically, it depends on your query.
[17:00:42] <kurushiyama> orriols: And please, use pastebin, even when it is short ;)
[17:00:49] <orriols> sure
[17:00:59] <kurushiyama> orriols: But wait a sec
[17:01:34] <kurushiyama> orriols: Ah, nvm
[17:01:49] <orriols> In one side
[17:02:14] <orriols> I cannot find a way to optimize the {$exists: false} query
[17:02:27] <orriols> so that it does not need to scan the collection
[17:02:28] <Derick> yeah, that can't make use of an index
[17:02:44] <orriols> not even with a sparse one, right?
[17:02:49] <Derick> correct
[17:02:59] <orriols> I'm sure I can find my way around it
[17:03:01] <Derick> you can only use an index for something that's there - not something that *isn't* there
[17:03:08] <orriols> yeah, that's what I thought
[17:03:14] <orriols> in any case
[17:03:32] <kurushiyama> Derick: Wouldn't a check for equality to null work?
[17:03:37] <Derick> kurushiyama: that would work
[17:03:40] <orriols> I'm not sure I understand the index limitations of the count operation, that's why I was asking
[17:03:46] <Derick> but then the field is there - the value is just null
[17:03:58] <Derick> (I don't think they're part of a sparse index though)
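The three predicates being contrasted above, as a sketch (the `_group` field name is taken from orriols's query). Note the caveat: querying for null matches both missing fields and explicit nulls, which is not always what a {$exists: false} query intended:

```javascript
// Cannot use an index; forces a scan:
const missingOnly = { _group: { $exists: false } };
// Index-eligible, but matches missing fields AND explicit nulls:
const nullOrMissing = { _group: null };
// Matches only explicit BSON nulls (type 10), not missing fields:
const explicitNullOnly = { _group: { $type: 10 } };
console.log(JSON.stringify(nullOrMissing));
```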
[17:04:39] <orriols> one question is: does the count operation use index intersection?
[17:05:01] <kurushiyama> orriols: count is a method on the cursor...
[17:05:07] <Derick> kurushiyama: no, it's not
[17:05:12] <kurushiyama> Derick: Huh?
[17:05:17] <Derick> kurushiyama: it looks like one, but it really just runs another command to get that data
[17:06:24] <kurushiyama> Derick: http://pastebin.com/7xncr4Qm ?
[17:06:36] <Derick> yup
[17:07:10] <kurushiyama> Derick: The SERVER-3645 is beyond my comprehension.
[17:07:12] <Derick> it's not on a cursor, it just creates a different command to run
[17:07:21] <kurushiyama> Derick: s/The/Then/
[17:07:34] <orriols> and the other: could any of you explain "the query predicates access a single contiguous range of index keys"? the following paste includes the examples from the docs I don't understand: http://pastebin.com/YfXdc5G7
[17:08:19] <Derick> kurushiyama: what don't you get?
[17:10:28] <kurushiyama> Derick: The problem presumably is that orphaned docs are counted, right?
[17:10:56] <Derick> yes, that's a problem
[17:11:07] <Derick> count does a different path than querying
[17:11:38] <kurushiyama> So, if the query was executed as normal, it would return the non-orphaned docs only, right?
[17:12:16] <kurushiyama> Derick: Ah. Could you go into a bit of detail?
[17:14:37] <Derick> kurushiyama: count in general does not always need to fetch documents
[17:14:43] <Derick> so some optimisations are made
[17:15:34] <kurushiyama> Derick: Well, basically it has to do the query part in _some_ way.
[17:15:46] <kurushiyama> Derick: Ah, got it.
[17:16:03] <Derick> yeah... tbh, i'm not 100% certain either
[17:16:53] <kurushiyama> Derick: Well, as far as 3645 is concerned, most likely routing/filtering is skipped, then as part of the optimization.
[17:17:28] <Derick> the filtering in the router, I presume
[17:19:57] <kurushiyama> Derick: Makes sense.
[17:28:33] <AAA_awright> kurushiyama: Yo
[17:29:22] <AAA_awright> Progress report, on another very similar server, deploying the exact same build on a similar Gentoo setup doesn't show the problem
[17:30:03] <AAA_awright> It might not be using the wiredTiger engine for some reason, even though the config file is the same?
[17:32:04] <kurushiyama> AAA_awright: I doubt that. But tbh, all I remember is that I promised you to help ;)
[18:01:06] <AAA_awright> My mongod instance is constantly using 150-200% cpu on my system, it started doing this apparently in the middle of operation about a week ago
[18:01:32] <AAA_awright> It operates just fine, I can insert and query records without any apparent delay
[18:01:55] <AAA_awright> It just brings my system load from 0.5 up to 3.0
[18:03:51] <AAA_awright> It does this on v3.2.4 with a default config and completely new datadir
[18:04:10] <AAA_awright> A comparable system with the exact same build doesn't show this problem
[18:07:52] <AAA_awright> The configurations and builds are literally exactly the same, but for some reason the problematic instance seems to be populating datadir with wiredTiger files
[18:15:30] <kurushiyama> AAA_awright: I'd guess build flags, but you have thought of this, I presume. Very strange.
[18:17:44] <AAA_awright> Forcing my good system to use wiredTiger uses an above-average amount of CPU - but only 2%
[18:17:58] <AAA_awright> Normally mongod is totally idle without any connections
[18:22:07] <AAA_awright> nvm, once mmapv1 preallocates the journal it climbs back up to 150% CPU again
[18:47:40] <mkjgore> hey folks, just curious, but calling db.setProfilingLevel(…) applies only to the current collection, correct?
[18:48:04] <cheeser> no, it sets it on that mongod
[18:48:52] <mkjgore> OK, then to view the profile data is just calling db.system.profile.find({})
[18:49:12] <mkjgore> is that also a global call or "use system" first then db.profile.find({})
[18:49:58] <cheeser> "system.profile" is the collection name so you'd need to switch the correct db first.
[18:50:11] <mkjgore> OK, then it's "use system.profile"
[18:50:18] <cheeser> no.
[18:50:26] <mkjgore> yeah, just saw that fail
[18:50:39] <cheeser> show dbs
[18:50:45] <cheeser> you'll see the "admin" database
[18:51:29] <mkjgore> gotcha, then "db.system.profile.find({}) etc
[18:51:34] <cheeser> yep
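A sketch of the profiling workflow being discussed. One point worth making explicit, since the thread wobbles on it: per the docs, profiling is enabled per database, and its output lands in that same database's system.profile collection, not in a global one:

```javascript
// In the mongo shell, against the database you want to profile:
//   use mydb
//   db.setProfilingLevel(1, 100)   // level 1: log ops slower than 100 ms
//   db.system.profile.find().sort({ ts: -1 }).limit(5)
// The profiling levels, for reference:
const levels = { off: 0, slowOps: 1, all: 2 };
console.log(levels.slowOps);
```

An empty system.profile usually means profiling is off in that db, or no operation has crossed the slowms threshold yet.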
[18:51:51] <mkjgore> welp, still empty, guess I'll wait
[18:58:09] <mkjgore> this is very frustrating, I know this rs is dealing with heavy page faults but the primary doesn't register anything in the system.profile :-(
[19:00:31] <cheeser> are you a cloud manager customer?
[19:00:52] <Ryuken> Is MongoDB an unreliable store?
[19:01:11] <cheeser> no
[19:01:22] <Ryuken> Then why do so many people say it is?
[19:01:29] <Ryuken> And that it loses data?
[19:01:30] <cheeser> people say lots of things that aren't true.
[19:07:59] <StephenLynx> Ryuken, FUD
[19:08:16] <StephenLynx> are you familiar with the concept of "write concern"?
[19:09:43] <Ryuken> StephenLynx: No but I just googled it
[19:10:03] <StephenLynx> what happened is that originally the DEFAULT write concern was 0
[19:10:27] <StephenLynx> that would give people a success status even if the operation didn't go through completely.
[19:10:39] <StephenLynx> some people used that to spread FUD.
[19:12:05] <StephenLynx> and a couple of grossly misinformed articles started to rank up
[19:12:27] <StephenLynx> because that's what happens with grossly misinformed people who are looking for reasons to get outraged on the internet
[19:12:48] <StephenLynx> and that's why many people say mongo is not reliable.
[19:13:39] <cheeser> +1
[19:13:58] <cheeser> once a noise gets into the echo chamber, it's hard to excise it.
[19:20:41] <kurushiyama> Plus: Nobody will write articles with the bottom line "All well, no problems, works like a charm."
[19:21:07] <cheeser> yeah. squeaky wheel and all that.
[19:21:20] <AAA_awright> Really? You've never seen a MongoDB is Web Scale puff piece?
[19:21:31] <cheeser> i have that t-shirt
[19:21:36] <kurushiyama> ?
[19:21:49] <cheeser> but that's not really a counterpointer to the argument at hand...
[19:21:53] <Ryuken> #webscale #sharding #infinitehorizontalscalability #web4.0 #2016
[19:23:22] <Ryuken> I have a hard time believing people because I don't know enough to make my own judgement, so I like to hear different sides
[19:23:32] <Ryuken> There are many cases in the world where the majority, or rather the vocal minority are wrong
[19:24:02] <Ryuken> Why do people say document stores are bad?
[19:24:03] <cheeser> thousands of businesses of all sizes rely on mongodb and their world hasn't ended in flames and anguish yet.
[19:24:09] <kurushiyama> Ryuken: StephenLynx pretty much summed it up.
[19:24:54] <Ryuken> I'm a noob but I like MongoDB because I like the syntax, I like document stores, Mongoose and schemas are easy to work with, ect.
[19:25:00] <kurushiyama> Ryuken: People were too lazy to read the documentation of their persistence technology and ended up with write concerns not matching their use case. They ignored rollbacks, too.
[19:26:35] <Ryuken> Document stores just make way more sense to me than this weird-ass convoluted web of tables I've had to work with in MySQL for years
[19:26:50] <Ryuken> But everyone keeps telling me they're bad
[19:26:58] <kurushiyama> Ryuken: Rollbacks happen in the rare case in which data was written to a primary but didn't make it to a secondary before a failover. Not much of a problem with a write concern of >1.
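The write-concern point StephenLynx and kurushiyama are making, sketched in mongo shell syntax (collection name hypothetical):

```javascript
// Old default behavior: w: 0 reported "success" before the server
// acknowledged anything, so errors and lost writes went unnoticed:
//   db.coll.insert(doc, { writeConcern: { w: 0 } })
// Waiting for the primary plus one secondary (and the journal) guards
// against the failover-rollback case described above:
//   db.coll.insert(doc, { writeConcern: { w: 2, j: true } })
const safe = { w: 2, j: true };
console.log(JSON.stringify(safe));
```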
[19:27:30] <kurushiyama> Ryuken: Well, there were serious scientists predicting death through acceleration in trains.
[19:46:20] <StephenLynx> Ryuken " Why do people say document stores are bad?"
[19:46:22] <StephenLynx> imprinting
[19:46:28] <StephenLynx> a.k.a baby duck syndrome
[19:47:50] <StephenLynx> most people come from a PHP+mysql background and can't learn anything new for the life of them
[19:48:16] <StephenLynx> so what do they do when something new and popular happens? do they learn other options fit for different scenarios?
[19:48:34] <StephenLynx> hell nah, they just start spouting their way is the One True Way.
[19:49:49] <kurushiyama> Well, as an example: I have avoided InfluxDB and its ecosystem as much as I could. Took some time now to dig into it. If only I had done that earlier...
[19:51:04] <kurushiyama> StephenLynx is right: some people avoid learning anything new. If only they'd know better.
[19:52:45] <kurushiyama> Plus, and this is probably the biggest "problem" with document stores: Data modelling is a much, much more demanding process, as it is much, much more flexible. If you do it wrong, the results are... horrible.
[19:53:27] <StephenLynx> yeah. its way easier to screw up the model with document dbs.
[19:53:34] <StephenLynx> that's what happened at diaspora.
[19:53:55] <StephenLynx> they didn't know what the hell they were doing, got boned HARD because of it and blamed mongo instead of admitting they used the wrong tool.
[19:55:04] <StephenLynx> that was another popular anti-mongo article.
[20:33:33] <kurushiyama> StephenLynx: You got a link for me? I love dissecting them...
[20:34:30] <StephenLynx> sec
[20:34:40] <StephenLynx> http://www.sarahmei.com/blog/2013/11/11/why-you-should-never-use-mongodb/
[20:39:17] <StephenLynx> kurushiyama, https://ayende.com/blog/164483/re-why-you-should-never-use-mongodb
[20:39:24] <StephenLynx> an article on why the first one is full of shit
[20:39:41] <StephenLynx> by someone who knows what hes doing
[20:39:42] <kurushiyama> Ah, good old Sarah. Didn't have in mind that it was diaspora. She is one of my favs, though. ;)
[20:40:19] <cheeser> and my favorite response: https://ayende.com/blog/164483/re-why-you-should-never-use-mongodb
[20:40:24] <kurushiyama> What I like most about Sarah's article is the reasoning that she builds a lot of web applications and hence is a good DBA.
[20:40:49] <StephenLynx> :v
[20:41:16] <StephenLynx> yeah, webdevs in general are laughably inadequate and extremely arrogant.
[20:42:18] <kurushiyama> That is a bit like me arguing that I do some programming and hence I am a developer.
[20:42:39] <StephenLynx> in general I avoid webdevs as a whole.
[20:42:56] <StephenLynx> In a couple of months I learned more about web dev than most of them will ever know.
[20:43:13] <StephenLynx> I know I learned more than my friend who had worked with it for 15 years
[20:44:09] <kurushiyama> Well, I have to admit that I am impressed by those Angular guys – I tried to wrap my head around that for days _and still do not get the point_.
[20:44:17] <kurushiyama> React is nice, though.
[20:44:32] <StephenLynx> kek
[20:44:36] <StephenLynx> any front-end lib is cancer.
[20:44:49] <StephenLynx> and anyone depending on them shouldn't be allowed near a computer.
[20:45:15] <StephenLynx> they take the simplest thing ever, HTML and JS, and turn it into an incomprehensible ball of mud.
[20:46:34] <StephenLynx> "what do you mean, you just write the actual page? no no no, this is fukken srs bizniz, you have to pretend you are writing something hard and pretend you have to compile it"
[20:48:03] <kurushiyama> Nah, the thing is that I have like 20 charts (guess which ones) and need to sync their rendering. And basically the only difference is the heading and the retrieval url. Made my life easier – by far.
[20:48:42] <StephenLynx> still a bad solution.
[20:48:53] <cheeser> yes. because productivity is an evil
[20:48:59] <StephenLynx> kek
[20:49:06] <cheeser> mv cheeser /dev/train
[20:49:23] <StephenLynx> just because one can't figure a good solution, it doesn't make bad solutions any less invalid.
[20:50:05] <StephenLynx> you could load the template in front-end js from a static file and manipulate it.
[20:50:13] <StephenLynx> no need to add a front-end lib
[20:50:30] <kurushiyama> Time syncing and an arbitrary number of renderings? Not sure if react is a bad solution. Well, for sure I do not have a better one, and I refuse to invent one. Frontend work is not my favourite, I spend as little time on it as possible.