[14:24:01] <StephenLynx> yeah, that would be the optimal :v
[14:59:38] <triplefoxxy> I use Meteor.js (and therefore MongoDB) as part of a stage show. My MongoDB databases only exist for 5 minutes and hold < 10MB of data. I need the data to survive a crash (so no in-memory) and I need low latency. What storage engine should I use?
[15:14:26] <StephenLynx> if your databases are vanishing on crashes, it has nothing to do with mongo, afaik.
[15:15:00] <triplefoxxy> StephenLynx: They're not vanishing. Just saying I cannot use in-memory.
[15:15:23] <StephenLynx> have you tested the default engine?
[15:15:46] <StephenLynx> wt (wiredtiger) will become the default in 3.1, so you might want to test it too.
[15:16:23] <StephenLynx> by the way, if you use mongo just for a cache, redis might be what you need.
[15:16:56] <triplefoxxy> Yep, mmapv1 causes my machine to stutter as the kernel moves the large db files into cache memory. I'm just about to test with smallFiles: true.
[15:17:18] <StephenLynx> from what I heard, redis is faster than mongo IF all your data fits on RAM.
[15:17:47] <triplefoxxy> StephenLynx: I have to use MongoDB for Meteor.js.
[15:17:59] <StephenLynx> why do you need to use meteor?
[15:18:46] <triplefoxxy> Because the development benefit outweighs the issues caused by MongoDB.
[15:20:20] <triplefoxxy> Since posting my initial question, I found the smallFiles option. It looks like what I need, and it's what Meteor.js uses for its dev MongoDB instance. I'll report back.
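A minimal sketch of the setting being discussed, assuming a MongoDB 3.0-era mongod running the MMAPv1 engine; the dbPath is a placeholder:

    # mongod.conf -- smallFiles only applies to the MMAPv1 engine
    storage:
      dbPath: /path/to/db   # placeholder path
      engine: mmapv1
      mmapv1:
        smallFiles: true    # preallocate smaller data files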
[19:17:46] <ollien> Hey there. I have a collection with an array of objects. I want to be able to use a find query to just get the matching objects from those arrays. For example, say a document in the collection is {array:[{a:1,b:4},{a:1,b:4},{a:2,b:4}]}, I want to get all objects that have {a:1} in them. My attempt was to use db.collection.find({array:{$elemMatch:{a:1}}}) but that just returns the entire document, rather than the specific objects. Could someone point me in the right direction?
[19:25:52] <ollien> partycoder I'm not sure how I would go about it, to be quite honest. I know I could use a projection to, say, only return the array from the document, but not how to return just the objects within the array
[19:43:16] <ollien> partycoder right. That's essentially what I'm using now, but you said that would have the problem of searching through every document, no?
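A sketch of one way to return only the matching array elements rather than the whole document, reusing the collection and field names from ollien's example; note that a plain find() projection with $elemMatch would return only the first matching element per document:

    // mongo shell; "collection" and "array" come from the example above
    db.collection.aggregate([
      { $match: { "array.a": 1 } },  // keep only documents containing a match
      { $unwind: "$array" },         // one pipeline document per array element
      { $match: { "array.a": 1 } },  // drop the non-matching elements
      { $project: { _id: 0, element: "$array" } }
    ])

Without an index on "array.a", the first $match still scans every document, which is the cost being discussed here.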
[19:43:37] <ollien> StephenLynx what if an attacker manages to log in, how would he remove it then?
[19:52:34] <partycoder> you are just saying "i don't like this"
[19:52:38] <StephenLynx> I was not defending arrays.
[19:53:11] <StephenLynx> "At the moment (and probably in the future, too) it's not possible to query MongoDB collections with wildcards in fieldnames (thanks to @gWiz). "
[19:53:32] <partycoder> well, there you have an argument
[19:53:39] <partycoder> schema validation is more difficult
[19:53:48] <partycoder> correct, it is more difficult but not impossible
[19:53:55] <StephenLynx> dynamic schemas really, really narrow your use case.
[19:53:59] <partycoder> at least you don't run into concurrency problems
[20:07:36] <StephenLynx> I remember performing some tests, and it would take a few seconds, but I don't remember them staying around for up to a whole minute before being removed.
[20:07:37] <partycoder> well, it should not be a really high cost
[20:07:58] <StephenLynx> but I wasn't very thorough.
[20:08:02] <partycoder> but that would lock the collection
[20:08:52] <partycoder> and, well... i messed that statement up
[20:09:27] <partycoder> if the collection is not really large it should not be a problem. in blogs most users are unregistered users so you don't need sessions for them
[20:10:03] <partycoder> you might need sessions only for access control purposes
[20:10:27] <partycoder> and if that is the case you can make it as inefficient as you want
[20:10:50] <ollien> partycoder what if, theoretically, it was large
[20:10:53] <ollien> how would TTL be inefficient?
[20:10:58] <ollien> because of the amount of time it takes to expire?
[20:11:05] <partycoder> let's say it runs every 60 seconds
[20:11:15] <partycoder> and 1000 documents have to be removed
[20:11:33] <partycoder> then 1000 documents will be removed and during that time the db will lock
[20:11:41] <StephenLynx> the collection will lock.
[20:12:14] <ollien> partycoder so would it be better to implement my own TTL process?
[20:12:16] <partycoder> that depends on your mongo version
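For reference, the TTL mechanism under discussion: a background thread wakes roughly every 60 seconds and removes expired documents, so expiry is approximate and the deletes land in bursts. A minimal sketch, assuming a hypothetical sessions collection with a createdAt date field:

    // documents become eligible for deletion expireAfterSeconds after createdAt;
    // the sweep runs about once a minute, so removal is not exact
    db.sessions.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })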
[21:04:38] <bonhoeffer> hey -- i'm trying to use https://github.com/sheharyarn/mongo-sync to sync my db -- but the db is not accessible through port 80
[21:04:57] <bonhoeffer> for example, my url: '127.0.0.1' port: 27017
[21:05:28] <bonhoeffer> stands -- any ideas? what really needs to happen is for mongo-sync to log on locally
[21:05:42] <bonhoeffer> or for me to use a mongo cloud server or web-enabled database
[21:08:06] <bonhoeffer> any thoughts, partycoder ?
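One common workaround for the situation bonhoeffer describes, where the mongod only listens locally: forward the remote port over an SSH tunnel and point the sync tool at the local end. The user and host below are placeholders:

    # forwards localhost:27018 on this machine to 127.0.0.1:27017 on the server
    ssh -N -L 27018:127.0.0.1:27017 user@your-server.example.com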