PMXBOT Log file Viewer


#mongodb logs for Saturday the 16th of April, 2016

[08:44:32] <bweston92> How can I search for objects in a list inside a document, like so: http://pastebin.com/raw/KXH2Aimc
[09:01:35] <kurushiyama> You want all or one?
[09:01:53] <kurushiyama> bweston92: ^
[11:24:41] <bweston92> kurushiyama: I've found the answer, wanted the how document :D
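The paste above is no longer readable here, so the following is only a sketch of how such a query usually looks in the mongo shell, assuming the document holds an array of sub-documents (collection and field names are invented):

    // match documents where at least one array element satisfies both conditions
    db.things.find({ items: { $elemMatch: { name: "foo", qty: { $gt: 1 } } } })
    // dot notation is enough when filtering on a single field of the embedded documents
    db.things.find({ "items.name": "foo" })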
[11:34:54] <dojobo> hi guys
[11:35:30] <dojobo> after using relational db's for years i've just started looking at mongo this week
[11:35:46] <dojobo> i think i'm in love
[11:46:08] <kurushiyama> dojobo: Take care not to go blind ;)
[11:46:58] <kurushiyama> dojobo: Learning MongoDB properly coming from RDBMS _can_ be a painful process
[11:47:03] <dojobo> hey, it's a drop-in replacement for sql, better in any situation, right?
[11:47:12] <dojobo> (kidding ;) )
[11:47:18] <kurushiyama> dojobo: Ehm... Thank god!
[11:47:57] <kurushiyama> dojobo: You would not believe how many people think, or at least behave, like that. If I had gotten a $ for each...
[11:48:01] <dojobo> i'm reading the manual and trying out the tutorials in there right now
[11:48:34] <kurushiyama> dojobo: Reading the docs is a very good start. Are you more of a DBA or more of a dev?
[11:49:09] <dojobo> dev
[11:49:56] <kurushiyama> dojobo: Language?
[11:50:11] <dojobo> well, mostly php, have done a bit with python and ruby
[11:50:25] <dojobo> and a bit of client-side js
[11:50:41] <kurushiyama> dojobo: Well, I can not help you much there.
[11:50:59] <dojobo> i'm toying with the idea of the MEAN stack for a new project
[11:51:12] <kurushiyama> don't
[11:51:29] <dojobo> hmm?
[11:51:59] <kurushiyama> Learn MongoDB first. Properly. Modelling and such. Inner workings. Why MongoDB is _not_ an object store.
[11:52:29] <kurushiyama> (and why one really should not treat it as such).
[11:53:02] <dojobo> yeah i've been reading about when and when not to use it
[11:53:20] <dojobo> in my case, i need to make a restful metadata store
[11:53:33] <dojobo> so i think a doc db is perfect for that
[11:54:12] <dojobo> my org is minting DOIs and needs to manage the associated metadata
[11:55:48] <kurushiyama> My advice: With SQL, you basically identify your entities and their properties, model the data accordingly, and then bang your head against the wall to get your LEFT OUTER JOINs right to get your questions answered.
[11:57:39] <kurushiyama> With MongoDB, I usually suggest reversing this process. Identify your use cases/user stories and the questions that derive from them. Then model your data so that those questions can be answered in the most efficient way.
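A minimal sketch of that query-first approach (collection, field, and variable names below are invented for illustration): if a recurring question is "what are a user's ten most recent orders?", shape and index the data so that exact question is a single cheap query:

    // index matching the question: filter by user, sort by recency
    db.orders.createIndex({ userId: 1, placedAt: -1 })
    // the question itself, answered by one indexed query
    // (someUserId stands in for an actual user's id)
    db.orders.find({ userId: someUserId }).sort({ placedAt: -1 }).limit(10)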
[11:58:00] <dojobo> ah, thanks, i think that's a good way to frame it
[11:58:36] <dojobo> martin fowler calls mongo an aggregate-oriented db
[11:58:46] <kurushiyama> dojobo: I have compiled a doc on why one should not overuse embedding: http://blog.mahlberg.io/blog/2015/11/05/data-modelling-for-mongodb/
[11:59:04] <dojobo> the idea being you've pre-aggregated the data in a way that makes sense for the domain
[11:59:06] <kurushiyama> dojobo: He is right with that, imho
[11:59:30] <dojobo> thanks kurushiyama :)
[11:59:32] <kurushiyama> dojobo: Or dynamically aggregated data, depending on...
[11:59:35] <kurushiyama> ?
[11:59:51] <dojobo> for the link
[12:00:48] <dojobo> ah you're in cologne, i'm in munich :)
[12:02:35] <kurushiyama> dojobo: ;)
[12:02:58] <kurushiyama> dojobo: Then I know where to stay for the next MongoDays ;P
[12:03:21] <dojobo> ha
[12:05:09] <kurushiyama> Another piece of advice: Do _not_ overlook the aggregation framework. It is MongoDB's killer feature imho (aside from the no-single-point-of-failure HA architecture with failover capabilities and transparent data partitioning ;) )
[12:07:24] <dojobo> yeah i glanced at it, but need to spend some more time with it
[12:07:36] <dojobo> the ability to pass in a js function seems super powerful
[12:08:02] <dojobo> esp. compared to traditional sql grouping
[12:08:44] <kurushiyama> Well, you can, but it tends to be a performance killer, as I was told.
[12:09:51] <kurushiyama> Have a look at $group in the aggregation framework. Never needed more than that. Combined with an early $match, you can basically do everything natively.
[12:10:11] <kurushiyama> (when compared to a query on the same match)
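As a hedged illustration of the early-$match-plus-$group pattern described above, with an invented "orders" collection and field names:

    db.orders.aggregate([
      { $match: { status: "shipped" } },          // filter first so an index can be used
      { $group: {
          _id: "$customerId",                     // group key
          total: { $sum: "$amount" },             // per-customer sum
          count: { $sum: 1 }                      // per-customer document count
      } }
    ])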
[12:12:17] <dojobo> nod
[12:14:49] <kurushiyama> Last piece of advice before I leave for the archery range: Use the shell while learning, as much as you can. Shell syntax and results are the lingua franca in MongoWorld, and more often than not, you'll get answers to your questions in shell syntax.
[12:17:17] <dojobo> ah yeah, i've been doing that (and really pleased with how easy it is to write, as well as .pretty() )
[12:17:32] <kurushiyama> `.explain()`
[12:17:42] <dojobo> cool i'll check that one out
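For reference, a small shell example of the two helpers just mentioned (collection and filter are made up):

    db.things.find({ status: "A" }).pretty()                   // pretty-prints the result documents
    db.things.find({ status: "A" }).explain("executionStats")  // shows the chosen plan and index usage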
[12:17:56] <dojobo> thanks for the advice for a newb and have fun at the range :)
[12:18:08] <kurushiyama> tyvm
[19:50:25] <Wulf> Hello
[19:51:19] <Wulf> my mongodb server has a full hard drive. Can I add another hard drive and ask mongo to use both drives?
[20:04:17] <AAA_awright> I've upgraded MongoDB to 3.2.4 and pointed datadir to a totally new directory, and mongodb is still taking up 150% CPU usage, no connections, no databases
[20:04:27] <AAA_awright> (kurushiyama)
[20:05:07] <AAA_awright> In 10 minutes it's racked up 15 CPU-minutes of computation
[20:05:42] <AAA_awright> And my system load has gone from 0.4 to 7.5
[20:05:46] <AAA_awright> (5-minute)
[20:42:49] <kurushiyama> AAA_awright: Ok, gimme 5, just had dinner, need a fag
[20:44:16] <kurushiyama> Wulf: OS?
[20:45:01] <Wulf> kurushiyama: linux
[20:45:49] <kurushiyama> Wulf: There are like 10 gazillion distributions out there. A bit more precisely?
[20:46:38] <Wulf> kurushiyama: ubuntu 14.04
[20:46:55] <kurushiyama> Uargs
[20:47:06] <kurushiyama> Ok, let me think for 5
[20:51:19] <kurushiyama> Wulf: Not easily
[20:51:55] <Wulf> I can always copy the data to a bigger hard drive, just looking for alternatives
[20:52:01] <kurushiyama> Wulf: LVM?
[20:52:05] <Wulf> nope
[20:52:11] <kurushiyama> Bad idea
[20:52:20] <Wulf> don't blame me :-)
[20:52:24] <kurushiyama> ALWAYS use a dedicated LVM partition
[20:54:22] <kurushiyama> I fear that under these circumstances, your best option is to use another hard drive. But this time, use an LVM partition.
[20:56:15] <kurushiyama> Wulf: ^
[20:57:34] <Wulf> will try. And need to check if migrating to LVM is possible without long downtime
[20:58:23] <kurushiyama> Hmmm
[20:59:40] <kurushiyama> Basically, you'd set up the partition first, of course. Stop the server, copy the data (changing dbpath in the config meanwhile), and once the copy is finished, bring the server back up. How big is the data?
[20:59:50] <kurushiyama> Wulf: Or do you have a replset?
[21:00:39] <kurushiyama> Wulf: Presumably not. One of the reasons to have one ;)
[21:00:48] <Wulf> kurushiyama: 800 GiB. Only one standalone server.
[21:02:02] <Wulf> last two colleagues who know a bit about this server left years ago
[21:02:33] <kurushiyama> Wulf: I do not dare to ask which version it is, then.
[21:03:18] <Wulf> ii mongodb-10gen 2.4.11 amd64 An object/document-oriented database
[21:03:54] <kurushiyama> But the procedure I showed you should be sufficient. Next time you have a maintenance window (and you will, more on that later), just remount the new LVM partition to /var/lib/mongodb and change dbpath to there.
[21:04:38] <kurushiyama> Ok, here is the problem: 2.4 is EOL, and 2.6 will be in October, I think. But you _have_ to upgrade to 2.6 before you can upgrade to 3.0.
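Before stepping through the 2.4 to 2.6 to 3.0 upgrade path, it can help to confirm which version is actually running; a quick check from the mongo shell (no assumptions beyond a reachable server):

    db.version()                                // e.g. "2.4.11"
    db.adminCommand({ buildInfo: 1 }).version   // same information via a server command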
[21:06:48] <cheeser> i think you'll still be able to download 2.6. there just won't be any bug fixes to speak of outside of extraordinary circumstances.
[21:08:08] <kurushiyama> cheeser: Well, I would rather not bet my data on it ;)
[21:08:45] <cheeser> i upgrade pretty much day of so ... :D
[21:11:28] <kurushiyama> cheeser: I'd guess so. Which reminds me of a PR...
[21:14:27] <kurushiyama> cheeser: EOL does not imply out-of-support, I assume ;)
[21:16:23] <cheeser> i don't remember about that one.
[21:20:55] <kurushiyama> cheeser: Well, all my/my customers' servers are on 3.0 at least, so this was academic, anyway ;)