[00:33:02] <bros> I have a collection called orders that has a field called "items" that represents an array of objects. What's the best way to loop/aggregate over this? Map/reduce, aggregation?
[00:35:21] <cheeser> aggregation is better than map/reduce
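A minimal sketch of cheeser's suggestion. Field names inside "items" (sku, price) are assumptions, not from the log; the pipeline is shown as a shell comment, and the same $unwind + $group semantics run in plain JS so they can be checked without a server.

```javascript
// Shell form of the aggregation over the embedded "items" array:
//
//   db.orders.aggregate([
//     { $unwind: "$items" },
//     { $group: { _id: "$_id", total: { $sum: "$items.price" } } }
//   ])
//
// Equivalent logic in plain JS over sample documents:
const orders = [
  { _id: 1, items: [{ sku: "a", price: 5 }, { sku: "b", price: 7 }] },
  { _id: 2, items: [{ sku: "c", price: 3 }] },
];

// $unwind: one document per array element
const unwound = orders.flatMap(o => o.items.map(i => ({ _id: o._id, items: i })));

// $group: sum item prices back up per order
const totals = {};
for (const doc of unwound) {
  totals[doc._id] = (totals[doc._id] || 0) + doc.items.price;
}

console.log(totals); // { '1': 12, '2': 3 }
```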
[01:45:49] <C0nundrum> can you stream an aggregate query?
[01:49:53] <cheeser> you can get a cursor back, yes.
[01:52:45] <C0nundrum> I see docs for batches using cursor but would this work http://mongoosejs.com/docs/api.html#querystream_QueryStream on aggregate ?
[01:54:15] <cheeser> i'm not a mongoose user so i don't know how that'd look
[01:55:31] <C0nundrum> hm i see. I will try with cursors then
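For reference, a hedged sketch of cursor-based consumption of an aggregate. The driver and mongoose calls in the comments are the standard forms; the stand-in async iterator below plays the cursor's role so the consumption pattern can run without a server.

```javascript
// With the Node.js driver, aggregate() returns a cursor that can be
// iterated rather than materializing all results at once:
//   const cursor = db.collection("orders").aggregate(pipeline);
//   for await (const doc of cursor) { ... }
// Mongoose exposes the same idea via Model.aggregate(...).cursor().

// Stand-in for a cursor: the real one fetches from the server in batches.
async function* fakeCursor(docs) {
  for (const d of docs) yield d;
}

// Consume one document at a time, never holding the full result set.
async function consume(cursor) {
  const seen = [];
  for await (const doc of cursor) seen.push(doc._id);
  return seen;
}

consume(fakeCursor([{ _id: 1 }, { _id: 2 }])).then(ids => console.log(ids)); // logs [ 1, 2 ]
```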
[07:16:37] <Kosch> hey guys. mongo 3.0.7: I've set the systemlog to syslog, verbosity to 0 and profiling is also 0, but inserts/updates/queries... are still logged. Have I missed something?
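One possible explanation (an assumption, not confirmed in the log): even with the profiler off and verbosity 0, mongod logs any operation slower than the slow-operation threshold (slowms, default 100 ms). A mongod.conf sketch raising that threshold:

```yaml
systemLog:
  destination: syslog
  verbosity: 0
operationProfiling:
  mode: "off"              # profiler disabled, but slow ops are still logged
  slowOpThresholdMs: 10000 # raise the threshold so routine ops stay quiet
```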
[07:24:08] <C0nundrum> Is there a reason mongodb has the arbitrary memory limit of 100MB for aggregation pipelines... when there is more than enough memory free on the system?
[07:26:26] <C0nundrum> Is it to control the amount of time pipeline stages are allowed to take?
[07:28:02] <Kosch> "100MB is just a arbitrary number and as for my knowledge the 100MB limit was because if we consider each document to be of size 100KB and each pipeline returns 1000 documents so it reaches 100MB limit." - seems indeed just a number they chose.
[07:28:58] <C0nundrum> Well, I can understand that as a default. What I don't understand is why it isn't configurable
[07:35:35] <Kosch> so maybe mount the _tmp directory as a ramdisk... as a workaround :)
[07:37:19] <Kosch> smth like mount -t tmpfs -o size=8G tmpfs /my/little/pony/dbpath/_tmp ...
[07:37:39] <C0nundrum> hm true, i could create a ram disk and have the _tmp directory be a symbolic link to it
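For context: the 100 MB cap is per pipeline stage and is not configurable, but the documented escape hatch is allowDiskUse, which spills large stages to files under the dbPath _tmp directory, which is exactly why the tmpfs mount above helps. A sketch (collection and pipeline names assumed):

```javascript
// Shell form:
//   db.orders.aggregate(pipeline, { allowDiskUse: true })
// allowDiskUse lets $sort/$group stages exceeding 100 MB write
// temporary files to <dbPath>/_tmp instead of aborting.
const options = { allowDiskUse: true, cursor: { batchSize: 1000 } };
console.log(JSON.stringify(options));
```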
[08:06:33] <Rabenklaue> Hi, I need to implement document versioning, but I can't find a way to query all documents and return those with the highest (version) values. http://codepad.org/6sggdwGW
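A sketch of one common answer to Rabenklaue's question, assuming each version is stored as its own document with docId/version fields (an assumed shape, since the codepad paste isn't in the log): sort by version descending, then $group with $first. The plain-JS loop mirrors the pipeline's selection logic.

```javascript
// Shell form:
//   db.docs.aggregate([
//     { $sort: { docId: 1, version: -1 } },
//     { $group: { _id: "$docId", latest: { $first: "$$ROOT" } } }
//   ])
//
// Same "newest version per document" selection in plain JS:
const docs = [
  { docId: "a", version: 1 }, { docId: "a", version: 3 },
  { docId: "b", version: 2 }, { docId: "a", version: 2 },
];
const latest = {};
for (const d of docs) {
  if (!latest[d.docId] || d.version > latest[d.docId].version) latest[d.docId] = d;
}
console.log(latest.a.version, latest.b.version); // 3 2
```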
[08:28:50] <msn> is there a way i can make sure that read queries do not go to one secondary at all
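One standard answer (hedged, since msn's topology isn't shown): mark that member hidden with priority 0 in the replica set config; hidden members are invisible to clients, so drivers never route reads to them. The member index and hostname below are assumptions.

```javascript
// Shell form (index 2 is an assumption):
//   cfg = rs.conf();
//   cfg.members[2].priority = 0;   // can never become primary
//   cfg.members[2].hidden = true;  // invisible to drivers: no client reads
//   rs.reconfig(cfg);
const member = { _id: 2, host: "db3.example.net:27017", priority: 0, hidden: true };
console.log(member.hidden);
```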
[08:31:03] <sabrehagen> the syntax of the following criteria is suboptimal. will mongodb perform more poorly than if it were structured correctly, or does it normalise it internally so there is no performance degradation?
[17:07:29] <Boemm> Hi Guys, I'm searching for a web UI to connect to mongodb 3.0+ ... can somebody give me some tips about such a tool?
[17:08:27] <Boemm> It's mainly for doing queries ... but everything I found seems pretty out of date ... and doesn't work with mongodb 3+
[17:11:03] <GothAlice> Has anyone tried storing gettext string translations in MongoDB? I'm thinking one record per language a la {_id: {lang: <IETF BCP-47 language code>, domain: 'messages'}, strings: [{key: value}, …]}
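A sketch of the record shape GothAlice describes, with illustrative values; the explicit key/value sub-documents are one reading of her {key: value} notation.

```javascript
// One record per language+domain, keyed by a compound _id:
const record = {
  _id: { lang: "de-DE", domain: "messages" },       // IETF BCP-47 code + gettext domain
  strings: [{ key: "Hello", value: "Hallo" }],      // one sub-document per translation
};
// A whole catalog then loads in a single read, e.g.:
//   db.i18n.findOne({ _id: { lang: "de-DE", domain: "messages" } })
console.log(record.strings[0].value);
```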
[18:08:49] <Kristjan85> how can i remove the first entry in a collection?
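A hedged sketch for Kristjan85: "first entry" usually means the document with the lowest _id, since ObjectIds sort by creation time. The shell call in the comment does the find-and-remove atomically; the plain-JS lines mirror the selection step.

```javascript
// Shell form:
//   db.coll.findAndModify({ query: {}, sort: { _id: 1 }, remove: true })
// i.e. sort ascending by _id and remove the first match.
const coll = [{ _id: 3 }, { _id: 1 }, { _id: 2 }];
coll.sort((a, b) => a._id - b._id);  // ascending by _id
const removed = coll.shift();        // take the entry with the lowest _id
console.log(removed._id, coll.length); // 1 2
```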
[19:08:45] <vivoo> I'm wondering about using mongoDB for an app, but i need to import (with small processing) data from a mysql db and keep the mongo one up to date
[19:09:36] <vivoo> Any advice about the correct way to do that is welcome :)
[19:11:08] <cheeser> you want a constant sync from mysql to mongo? or just a one off?
[19:13:00] <vivoo> Well, I'd like to alert the admin that mysql changed, and let him approve the sync.
[19:14:15] <vivoo> The mysql db has an updates table, so it's pretty easy to detect changes.
[19:16:42] <vivoo> But i'd like to add processed fields during sync
[19:41:12] <StephenLynx> vivoo I would implement something on application level.
[19:41:19] <StephenLynx> I don't think there is anything out of the box.
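A minimal sketch of the application-level approach StephenLynx suggests: poll the MySQL updates table, transform each row (adding vivoo's processed fields), and upsert into Mongo. All names here are assumptions; the Mongo call is left as a comment so the transform step runs standalone.

```javascript
// Per changed row, the Mongo side would be an idempotent upsert:
//   await coll.updateOne({ _id: doc._id }, { $set: doc }, { upsert: true });
// The transform adds computed ("processed") fields during the sync:
function transform(row) {
  return {
    _id: row.id,                        // reuse the mysql primary key
    name: row.name,
    nameLower: row.name.toLowerCase(),  // example of a processed field
  };
}

console.log(transform({ id: 1, name: "Widget" }).nameLower); // "widget"
```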
[19:42:52] <dddh> "there are at least 35,000 publicly available, unauthenticated instances of MongoDB running on the Internet. That's about 5,000 more than there used to be in July"
[21:24:32] <edrocks> that must be a pain for them to update
[21:25:27] <GothAlice> Luckily I could imagine most changes can be rolled out progressively and non-disruptively.
[21:25:53] <GothAlice> As users roll off a given server/balancer, retire it, while passing new connections to the new machines. :)
[23:12:53] <sabrehagen> the syntax of the following criteria is suboptimal. will mongodb perform more poorly than if it were structured correctly, or does it normalise it internally so there is no performance degradation?
[23:13:00] <sabrehagen> (i have an index on requestUser)
[23:15:32] <Boomtime> sabrehagen: the best way to know for certain is to run an explain - i'm not sure if it will trip over the $nin, which has sucky performance (because indexes are for presence matching, not absence matching)
[23:16:54] <sabrehagen> Boomtime: okay, thanks. if i have any trouble interpreting the results, i guess i'll ask again :)
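For reference, a sketch of Boomtime's suggestion; the field name requestUser is from the log, the values are assumed.

```javascript
// Shell form:
//   db.coll.find({ requestUser: { $nin: ["alice", "bob"] } })
//          .explain("executionStats")
// In the output, compare totalKeysExamined / totalDocsExamined against
// nReturned: a $nin that scans most of the requestUser index (or the
// whole collection) shows examined counts far above nReturned.
const criteria = { requestUser: { $nin: ["alice", "bob"] } };
console.log(JSON.stringify(criteria));
```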
[23:17:28] <sabrehagen> also, as a general survey, when do you (and everyone else in the channel) pick mongodb as the db over competing technologies e.g. couch*, postgres.