[01:00:27] <joannac> how will the members talk to each other?
[01:01:14] <duallain> @joannac Thank you for the quick answer!
[09:02:10] <stangeland> hi. i am creating a datamatrix every second on a multiprocessor system. the datamatrix is essentially a float array which could be saved as a BLOB with an attached timestamp. My question is the following: If i have a lot of cores creating such datamatrices each second, will mongodb be able to efficiently store them and handle the synchronization?
[09:54:59] <oskie> zero downtime manual failover is not possible with MongoDB 3 replica sets, is it possible with sharding by adding a new shard to replace an old one?
[09:57:09] <Lujeni> Hello - is there a way to limit the memory utilization of mongodb? thanks
[09:57:38] <oskie> Lujeni: you can always try ulimit
[09:58:14] <Lujeni> oskie, do you know what happens when the limit is reached?
[10:00:22] <oskie> I don't know how MongoDB would react, but probably the same way as if the memory of the machine was full
[10:01:46] <Derick> why do you want to limit the memory?
[10:02:01] <Derick> and I doubt ulimit is going to do what you want with the MMAP storage engine
[10:03:01] <Lujeni> Derick, i use the wiredTiger engine
[10:03:48] <Lujeni> mongodb takes 90-95% of the memory. imho it's pretty risky
[10:04:06] <Derick> memory is good for a database, don't try to outsmart it
[10:06:55] <Lujeni> Derick, still, don't you think that limiting the memory of a database could be useful?
[10:07:13] <Lujeni> i don't want to use a cgroup :_
[10:12:31] <Derick> Lujeni: don't limit it at the OS level - configure the database to use less memory if you really *must*
[10:15:05] <Lujeni> Derick, ok what's the best way? tuning cacheSizeGB?
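For WiredTiger the relevant option is `storage.wiredTiger.engineConfig.cacheSizeGB` (there is no `cachedSizeDB`). Note it caps only the WiredTiger cache, not the total memory of the mongod process; the OS file-system cache sits on top of it. A minimal mongod.conf fragment (the 2 GB value is just an example):

```yaml
# mongod.conf - cap the WiredTiger cache (MongoDB 3.0+)
storage:
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
```

The same setting is available on the command line as `mongod --wiredTigerCacheSizeGB 2`.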
[12:11:48] <_KryDos_> Hello. Could you please help me with the next question. Collection in my database has 6M of records. I'm trying to get some information from this collection for one month (it's about 3M of records). I'm using aggregation and execution time of the query is about 11 seconds. All fields involved in aggregation query have indexes. I just want to know if 11 seconds is bad result or not? and if you have any suggestions how I can improve it
[12:14:55] <_KryDos_> It's actually not so smart question, since I didn't explain what kind of hardware I have and what exactly my aggregation query do... but anyway... maybe you have some experience or examples that you can share. I will try to compare your examples with my result.
[14:45:29] <the_voice_> Pretty new to mongo here so sorry if this comes across as a stupid question
[14:45:46] <the_voice_> I am using Meteor so I am learning mongo too
[14:46:23] <the_voice_> If I am doing a comment and a reply system for a page. Would I be best to have one document for each page that contains all of the comments and then a subdocument with replies
[14:46:28] <the_voice_> or should it be one doc per comment
[14:51:09] <the_voice_> meh, it's irrelevant to the question
[14:51:28] <StephenLynx> not really, does it abstract the database in any shape or form?
[14:52:20] <StephenLynx> I mean, more than the native driver does?
[14:54:06] <the_voice_> Meteor is really an abstraction of node.js
[14:55:42] <the_voice_> You write regular mongo commands on the server side to access the database. What meteor does is it makes everything realtime, so it will for instance read the oplog(if you have one) and automatically update the client side when you make changes. If you re not using oplog then it polls the database automatically(which is not ideal)
[14:56:43] <the_voice_> On the client side you have minimongo which is basically a JS version of mongodb that runs on the client. So you can also write most mongodb queries on the client side.
[14:57:09] <the_voice_> Security as to what the client can see is handled by a publish/subscribe system(you choose what to publish to what client)
[14:57:49] <the_voice_> Pros: Insanely rapid development, one language, less lines of code do more.
[15:01:37] <the_voice_> so Meteor lets me put out an MVP in a month. I can still scale it to above 100K users without a problem and at that point I can raise real money and go to a real framework
[15:01:52] <bogn> so, the thing with embedding or not is, will you need those embedded docs on their own often
[15:01:54] <StephenLynx> I wouldn't be surprised by how many of them die because they go with crappy software made by incompetent and lazy developers using crap frameworks.
[15:01:56] <the_voice_> Meteor is answering a very important need.
[15:02:11] <the_voice_> That is the dumbest comment i have ever read
[15:02:14] <StephenLynx> no, meteor is just "shitty framework n 23423"
[15:02:25] <StephenLynx> it is even worse than express
[15:02:26] <the_voice_> no user gives two shits about the software running their applications, they care it works
[15:02:34] <StephenLynx> at least express doesn't bundle db and front-end on it.
[15:02:40] <the_voice_> if it works, they are happy. For all they care it's a bunch of monkeys pushing buttons
[15:02:50] <the_voice_> Case in point, twitter was built on Ruby On Rails which scaled like shit back then
[15:02:58] <StephenLynx> it won't work for long when the framework starts crapping itself and putting a ceiling on the developer
[15:03:11] <StephenLynx> because web frameworks are all crap.
[15:03:15] <the_voice_> They rewrote their backend in Java and C and well they are doing fine
[15:03:19] <bogn> StephenLynx it solves real-time updating on all clients for people who need that. Think of a collaborative CMS or what not.
[15:03:21] <StephenLynx> ruby on rails, laravel, meteor
[15:03:39] <StephenLynx> that is like, 2 hours work.
[15:03:39] <winem_> Stephen, which stack do you prefer for web applications?
[15:04:06] <StephenLynx> I was not answering you.
[15:04:15] <the_voice_> You'll take a year to release something while the guy writing in Meteor will have released three different versions getting feedback from real users
[15:04:15] <bogn> (15:56:51) bogn: so, the thing with embedding or not is, will you need those embedded docs on their own often
[15:04:35] <the_voice_> Well, they are replies, they will never be on their own
[15:04:39] <the_voice_> they will always be with the comment
[15:05:43] <the_voice_> Does it matter that it makes the embedding two deep?
[15:05:46] <winem_> can I interrupt the discussion with a dumb question regarding handling users in replica sets? did not work with mongodb for 8 months and can't remember how I did it in prior projects
[15:06:04] <the_voice_> I mean you have the page which contains an array of comment documents, and then you have the replies inside the comment
[15:09:59] <bogn> ah forgot that one: If, in the future, you want to switch from chronological to threaded or from threaded to chronological, this design would make that migration quite expensive.
[15:10:27] <StephenLynx> <the_voice_> You'll take a year to release something while the guy writing in Meteor will have released three different versions getting feedback from real users
[15:15:04] <the_voice_> StephenLynx, what costs more money usually, hardware or coders?
[15:15:39] <StephenLynx> developers. and you will have to waste a lot of man power fixing the software deficiencies when you start with a "just make it work" mindset.
[15:15:39] <the_voice_> I can scale the shit out of most of those shitty frameworks cheaply using AWS
[15:18:12] <the_voice_> that is all it is.. I make products for my users and it has to give my user the best experience possible
[15:18:41] <bogn> he's a startup guy, that's trying to build prototypes fast, and if the prototype works out, he'll rewrite it with the right tool and resources to afford more dev power
[15:19:02] <the_voice_> also my wife is an interactive designer (UI/UX etc..)
[15:19:16] <the_voice_> so I believe in the user...
[15:19:31] <the_voice_> Also Learned the hard way previously that working a year on something no one wants is a pretty big waste of time :)
[15:20:52] <the_voice_> Honestly I learned so much from my wife when I met her like three years ago. I had started to learn it previously from a failed venture, but then seeing a UI/UX person's perspective changed everything for me
[15:21:10] <bogn> regarding the MongoDB docs I can tell you, that they are quite extensive and provide useful tips for use cases and such (as you saw)
[15:21:33] <bogn> and there's also https://www.mongodb.com/presentations/
[15:22:07] <the_voice_> Thanks, I was always a mysql guy. So I am now really trying to learn how to use mongodb correctly
[15:22:22] <bogn> above is a link to all presentations, that page has a search bar as well
[15:22:39] <bogn> "schema" might be a good search query for you
[15:22:44] <the_voice_> I think the biggest thing that I need to remember is that mongo doesn't have transactions so if you write to multiple docs you may have a problem
[15:23:07] <bogn> not if you have the "multiple" embedded
[15:25:07] <bogn> but be aware of that, maybe catch an exception or something like that. You might continue comments on a different doc then and post a link on the original article or something. And if this happens more often than close to never, rethink.
[15:25:35] <the_voice_> The other concerns I have are Meteor related. Massive updates can cause oplog flooding which can crash the meteor server(although I think they patched that now). But again those are problems for when I scale which I'll deal with then
[15:25:51] <the_voice_> It won't happen. It's for a tool where you won't have more than a 100 comments max on a page
[15:36:21] <bogn> update({_id: doc._id}, {$set: {name: "newName"}}) instead of update({_id: doc._id}, monsterDoc). The small $set document is exactly what's sent to the oplog; with the full replacement the whole monsterDoc is.
[15:37:37] <the_voice_> but if you are using subdocuments and you include for instance the username and userId(so you don't have to call the users collection)
[15:37:59] <the_voice_> And then you wanted to update all of those subdocs is it the same thing?
[15:52:59] <bogn> this inclusion of username and userId is called denormalization (some call it caching)
[15:53:23] <bogn> updating multiple sub-docs is something akin to this, if I recall correctly
[15:58:36] <the_voice_> yah I can see it causing issues
[16:04:26] <winem_> hi. is it correct that all databases except for local are replicated in a replicaset?
[16:04:56] <the_voice_> bogn do I need to worry about storage fragmentation as the document grows? If I understand correctly MongoDB 3.0 has really helped solve that problem
[16:40:59] <ruairi> Okay, if you're running it on OSX, run mongod --dbpath=/data
[16:45:51] <ruairi> mylord: if you want it to just start on boot on your Mac, try this: https://alicoding.com/how-to-start-mongodb-automatically-when-starting-your-mac-os-x/
[16:46:17] <mylord> looks like that was the path anyhow, the db files are there
[16:46:26] <mylord> not sure why the dbs all disappeared though
[16:46:36] <mylord> but what about restoring them somehow?
[16:47:11] <ruairi> maybe that wasn't the path. It's possible you were using another path for your dbs, and that you just launched it in /data once and created those files.
[16:47:21] <mylord> anyhow, i think the files in /data got overwritten now with new <mydb>