[08:54:59] <ManneW> Hi! I'm using the Aggregation Framework to do some $unwind on a field and later on a $group to stitch things together. I want a couple of fields from the original document (which is also present after the $unwind stage) to be preserved after the $group operation. What is the "correct" way of doing it? Right now I'm using $first to select the first occurrence, but is there any better operator to use?
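($first is indeed the idiomatic way to carry unchanged fields through a $group that follows an $unwind. A minimal shell sketch, assuming a hypothetical 'orders' collection whose documents have scalar fields plus an 'items' array:

    db.orders.aggregate([
      { $unwind: '$items' },
      { $group: {
          _id: '$_id',
          title:    { $first: '$title' },     // identical in every unwound doc,
          customer: { $first: '$customer' },  // so $first is safe, not arbitrary
          total:    { $sum: '$items.price' }
      } }
    ])

Because the group key is the original _id, every document in a group comes from the same source document, so $first never picks a "wrong" value.)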
[10:39:17] <pithagorians> hi all. how can I do load balancing across N nodes?
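(The usual MongoDB answers are a replica set with read preferences for read scaling, or a sharded cluster for spreading data and writes across N nodes. A minimal sharding sketch, run from a mongos; the shard, database, and key names are assumptions:

    sh.addShard('rs0/shard0.example.com:27017')
    sh.addShard('rs1/shard1.example.com:27017')
    sh.enableSharding('mydb')
    sh.shardCollection('mydb.events', { userId: 1 })

The balancer then moves chunks between shards automatically.)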
[11:16:50] <Fajkowski> hey everyone, I have a problem with a query, or maybe with my db schema. I have this in my db: http://pastebin.com/SrCW065i and I run this query: db.match.find({'win_team.username': 'a a'}, {'win_team.points.score': 1, '_id': 0}) and I get this:
[12:15:04] <Nodex> why troll on twitter when you can troll on IRC ;)
[12:21:05] <Nodex> haha, plus development time, plus cheaper hardware, plus not everything needs to be relational, plus MongoDB NEVER claims to be a relational database - it's one of many tools for one of many jobs
[12:25:15] <Nodex> epic troll fail, pretty poor effort for a Friday
[14:01:12] <qrada> hi everyone.. I've started using mongodb recently. I'm mostly accessing the db via mongoose for node.js. I was wondering if anyone knows if there is a way to access a specific nested subdocument without having to grab all of the data (document + all subdocuments) and then parse it... right now I'm doing: Object.find({}, function(err, obj) { obj.subdocument.id('some_object_id') ... }) ... So, I have to query the entire document + subdocuments and then parse it.
[14:02:16] <qrada> if I do something like: db.objs.find({_id: ObjectId('51d6062ae0cd85b2af90d9c2'), 'comments.id': ObjectId('...')}).pretty(), it does the same thing.. it will return everything and then I have to parse it.. I know there's some unwind/filter stuff, but I'm just wondering if there's a way to achieve it without that.
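(A projection with $elemMatch, or the positional $ operator, returns just the first matching array element instead of the whole array, avoiding the fetch-then-parse round trip. A sketch against the names above, assuming the comment subdocuments carry an _id field:

    // Returns _id plus only the first matching comment,
    // not the entire comments array.
    db.objs.find(
      { _id: ObjectId('51d6062ae0cd85b2af90d9c2') },
      { comments: { $elemMatch: { _id: ObjectId('...') } } }
    )

The caveat: $elemMatch projection returns only the first match, so pulling out several matching subdocuments still needs the $unwind/$match aggregation route.)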
[14:10:09] <[AD]Turbo> why is there still a limit (100) on the number of results returned by geoNear ????
[14:35:57] <berryciderspider> guys, why should I use mongo over postgres?
[14:47:26] <[AD]Turbo> mongoimport gives me this error: "error: Can't parse geometry from element: pos: null". I imported data from a 2.0.4 db to the new 2.4.5 version; the old version had a 2d index on 'pos', the new one a 2dsphere
[14:47:50] <[AD]Turbo> how can I import data where some records didn't have a pos field?
[14:47:51] <Nodex> berryciderspider : you tried this troll earlier and failed, are you expecting a different response now?
[14:48:42] <[AD]Turbo> troll??? my question was serious; on older versions of mongodb there wasn't a limit on the number of results given by geoNear
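(The 100-result cap is just the geoNear command's default, and the num option raises it. A sketch, with an assumed 'places' collection carrying a 2dsphere index:

    // 'num' overrides the default limit of 100 results.
    db.runCommand({
      geoNear: 'places',
      near: { type: 'Point', coordinates: [ -73.97, 40.77 ] },
      spherical: true,
      num: 500
    })

The coordinates here are illustrative; any GeoJSON point works.)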
[17:28:04] <boutell> but that requires that I know the names of all valid properties like 'title'. Unfortunately (and I failed to specify this), I don't have that kind of knowledge at this point in the code. And since MongoDB is schemaless, I would suggest that I should not have to. Right?
[17:28:19] <kali> what about projecting negatively then? 'areas.body.text': false
[17:30:25] <boutell> kali: alas, I don't know the name of every unwanted area name either.
[17:30:33] <boutell> these would be very good solutions for many applications, I realize.
[17:31:19] <kali> well, even if marketing says mongodb is schemaless, there are schemas that mongodb handles better than others
[17:31:49] <kali> for instance, it's better if the document keys are things your app knows, not arbitrary stuff
[17:32:33] <kali> an alternative, easier-to-deal-with schema would be: { title: 'koozbane', areas: [ { name: 'body', text: 'whatever' }, { name: 'sidebar', text: 'whatever' } ] }
[17:33:15] <boutell> because then I could use elemMatch, I presume. But then I would still be stuck if I wanted other top level properties and didn't know their names, right?
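(Right: with kali's schema, $elemMatch works in both the query and the projection. A sketch, assuming a 'pages' collection:

    // Match pages with a 'sidebar' area and project only that element.
    db.pages.find(
      { areas: { $elemMatch: { name: 'sidebar' } } },
      { title: 1, areas: { $elemMatch: { name: 'sidebar' } } }
    )

This only solves the areas half, though; an inclusion projection like this still requires naming the other top-level fields you want back, so boutell's unknown-property concern stands.)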
[17:48:58] <awpti> That works -- paste it into a text file, then mongo < file.txt
[17:49:00] <pquery> does anyone else here run their mongo with ssl? I'm having issues with mongorestore and the ssl option, even though my client can make the ssl connection
[17:49:53] <awpti> Oh, nevermind. That doesn't work for the longer strings. It does indeed appear that the mongo client has string length limits. That's no fun.
[17:59:37] <pquery> nobody else deals with a secure data channel?
[18:02:42] <pquery> nm, I fixed it. I'm going to have to fix my pointers and init.d scripts since I compiled from source
[18:06:15] <perplexa> is this really correct? I mean, it works and gives the result as expected, but I think the way I do it is really nasty: $collection->find(array('$or' => array(array('bId' => 683322), array('bId' => 553929))));
[18:06:48] <perplexa> is there a way that abuses arrays less, with a single $or for multiple values on one field?
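(The idiomatic operator for "one field, several possible values" is $in rather than a hand-built $or. The shell equivalent of the PHP above:

    // Matches documents whose bId is any of the listed values.
    db.collection.find({ bId: { $in: [ 683322, 553929 ] } })

In the PHP driver that is array('bId' => array('$in' => array(683322, 553929))), which drops one level of array nesting.)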
[18:10:43] <boutell> Nodex: the trouble with "joins are stupid" is that it's true for certain use cases (basically, "web scale" big data), but false for most of the projects I actually get paid to do (building websites that aren't Twitter or LinkedIn or Facebook). I'm assuming here that the objection to joins is performance in the case where the database is super huge.
[18:11:03] <boutell> and mongo doesn't seem to have a very strong opinion that it should only be used for super huge.
[18:11:40] <boutell> I don't have much trouble implementing join-y things in userspace when I need them. But I find the logic behind the omission interesting at this point in mongodb's development.
[19:45:33] <Neptu> hey, simple question: can I have a blob that's more than 16MB?
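(Not as a single document; the 16MB limit is hard. The standard answer for larger blobs is GridFS, which splits a file across many chunk documents. A sketch with the bundled mongofiles tool; the db and file names are assumptions:

    # Store, list, and fetch a >16MB blob via GridFS.
    mongofiles --db mydb put big-video.bin
    mongofiles --db mydb list
    mongofiles --db mydb get big-video.bin

Most drivers expose the same GridFS API programmatically.)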
[20:55:20] <Infin1ty> going to upgrade from 2.0 to 2.2 and from 2.2 to 2.4. I noticed that none of the config collections has the new index version (the metadata was created before 2.0). The docs simply say to run a dropIndex and ensureIndex on the config collections - but which collections? Some of them have an _id index, and I think it's not possible to drop those. It would be nice to have a reference; I wonder why this isn't included in the upgrade command
[21:06:23] <spacysp1ff> anyone using node.js with mongodb here? how do you go about loading fixtures?
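(One driver-agnostic option is a fixture file per collection, loaded with mongoimport before each test run; db, collection, and paths here are assumptions:

    # --drop empties each collection first, so runs are repeatable.
    mongoimport --db myapp_test --collection users --drop --file fixtures/users.json
    mongoimport --db myapp_test --collection posts --drop --file fixtures/posts.json

The alternative is inserting documents through the driver in a test setup hook.)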
[22:28:27] <BobFunk> I seem to be unable to connect to a replica set from the ruby driver
[22:28:39] <BobFunk> the set is running and I have other services using it
[22:28:54] <BobFunk> if I use MongoClient.new I can connect to the master
[22:29:20] <BobFunk> but if I use MongoReplicaSetClient.new I get: Mongo::ConnectionFailure: Failed to connect to any given member.
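(A common cause of that error is a set name or seed list that doesn't match what the members themselves advertise, since drivers reconnect using the host names in the replica set config rather than the seed list. Worth verifying from the shell first; the set and host names below are assumptions:

    # Connect with the same setname/seedlist the driver is given:
    mongo --host myReplSet/db1.example.com:27017,db2.example.com:27017
    > rs.conf()   // the members[].host values must resolve from the client machine

If the shell fails the same way, the problem is the config or DNS, not the ruby driver.)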
[23:25:33] <BobFunk> this replica set issue is really strange