[01:04:52] <fredix> I'm trying to compile my app against mongodb 2.4 but I get this error: external/mongodb/src/mongo/base/status.h:21: error: mongo/base/error_codes.h: No such file or directory
[01:26:18] <jzk> fredix: I just came for the same thing
[01:26:56] <jzk> fredix: it seems to have moved to: mongo/build/linux2/normal/mongo/base
[01:27:25] <fredix> jzk: it works with the c++ driver http://dl.mongodb.org/dl/cxx-driver
[01:28:26] <fredix> jzk: in fact scons generates the file: /usr/bin/python src/mongo/base/generate_error_codes.py src/mongo/base/error_codes.err build/mongo/base/error_codes.h build/mongo/base/error_codes.cpp
[01:29:07] <jzk> fredix: for me it was under the src/ directory, but now it builds into the build/ directory, so that has to be part of the include path
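A rough sketch of the fix discussed above, assuming a 2.4 server source tree and a linux2/normal build variant (the directory under build/ depends on platform and scons flags):

    # error_codes.h is generated by scons rather than shipped under src/
    scons mongoclient
    # so the generated directory must be on the include path as well, e.g.
    g++ -Isrc -Ibuild/linux2/normal -c myapp.cpp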
[02:12:30] <akamensky> Folks, I'm having a problem getting a subset of embedded documents. Please check this question on SO http://stackoverflow.com/questions/15496071/finding-embedded-documents-in-mongoose-odm
[02:31:28] <bodik> ok i probably found it, i have bad sharding key in the output collection
[03:57:35] <akamensky> Folks, I'm having a problem getting a subset of embedded documents. Please check this question on SO http://stackoverflow.com/questions/15496071/finding-embedded-documents-in-mongoose-odm
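For anyone who lands here later, a hedged sketch of the usual ways to pull a subset of embedded documents (collection and field names are made up, not taken from the SO question):

    // $elemMatch projection returns only the FIRST matching embedded doc
    db.posts.find({ "comments.author": "bob" },
                  { comments: { $elemMatch: { author: "bob" } } })

    // to get ALL matching embedded docs, unwind them in the aggregation pipeline
    db.posts.aggregate([
      { $unwind: "$comments" },
      { $match: { "comments.author": "bob" } }
    ])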
[10:49:21] <ron> millun: I have experience with morphia. while it's nice, it has its faults and isn't really under active development anymore. I'd try looking at jongo as well.
[11:13:43] <millun> ron, cheers. didn't know about jongo. had experience with morphia, but I figured it was dead too
[11:14:11] <ron> millun: there's a list of available wrappers on mongodb's website
[11:15:12] <millun> ok. last time I checked, jongo wasn't there. or maybe I was browsing another section of the site. maybe they assigned it to a framework category.
[12:30:55] <GeertJohan> I'm using userId now, but would like to stick to what's being used 'out there'..
[12:32:24] <kali> GeertJohan: the closest thing to this is the DBRef convention http://docs.mongodb.org/manual/applications/database-references/
[12:33:14] <kali> GeertJohan: but for the rest... you'd better avoid the "user-id" variant, it will drive you crazy when you need to query the database in the mongodb shell
[12:33:35] <GeertJohan> yeah I don't like a dash either..
[12:33:57] <GeertJohan> DBRef is more about the value of the field, but definitely worth checking out.. Now using manual references..
[12:34:11] <GeertJohan> As the example uses underscore I think I'll use that too...
[12:47:12] <GeertJohan> Hmm, a small note on DBRef support. The Go driver for MongoDB (named 'mgo') does support DBRef: http://godoc.org/labix.org/v2/mgo#DBRef
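For reference, a rough mongo-shell sketch of the two conventions discussed here (collection and field names are illustrative):

    // manual reference: store the ObjectId under a conventional field name
    db.orders.insert({ userId: ObjectId("5143b2c8e4b0d3a6f2a1c111"), total: 42 })
    db.users.findOne({ _id: db.orders.findOne().userId })

    // DBRef convention: the value itself names the target collection
    db.orders.insert({ user: new DBRef("users", ObjectId("5143b2c8e4b0d3a6f2a1c111")) })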
[12:52:26] <CupOfCocoa> Hey guys, does anybody know how to dump multiple dbs at once with mongodump?
[12:52:39] <CupOfCocoa> e.g. I have three dbs a, b and c and I want to dump a and b
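As far as I know mongodump only accepts a single --db per invocation, so the usual approach is to loop (database names and output path are placeholders):

    for db in a b; do
        mongodump --db "$db" --out /backups/$(date +%F)
    done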
[13:53:09] <Nodex> faceting is not part of mongodb
[14:42:16] <underguiz> hi, is it safe to run a mongos 2.4 instance with the --upgrade option while 2.2 mongod and mongos processes are still running in the same sharded cluster?
[14:44:26] <underguiz> the documentation says "The upgrade to MongoDB 2.4 adds epochs to the meta-data for all collections and chunks in the existing cluster. MongoDB 2.2 processes are capable of handling epochs, even though 2.2 did not require them."
[14:49:01] <kali> underguiz: mongos 2.4 will refuse to start if you haven't run the upgrade before (unless it's the one running with --upgrade, obviously)
[14:49:59] <underguiz> I'm trying to upgrade my mongos instances
[14:50:32] <underguiz> but I'll keep the mongod processes running for a while until I can find a safe upgrade window
[14:50:50] <underguiz> that's why I'm trying to figure out the safety of this operation
[14:53:34] <underguiz> there's also this section of the doc: http://docs.mongodb.org/manual/release-notes/2.4-upgrade/#rolling-upgrade-limitation-for-2-2-0-deployments-running-with-auth-enabled
[14:53:47] <underguiz> That says "MongoDB cannot support deployments that mix 2.2.0 and 2.4.0, or greater, components. MongoDB version 2.2.1 and later processes can exist in mixed deployments with 2.4-series processes. Therefore you cannot perform a rolling upgrade from MongoDB 2.2.0 to MongoDB 2.4.0. To upgrade a cluster with 2.2.0 components, use one of the following procedures."
[14:54:21] <underguiz> if I understood it correctly, I can't upgrade it, since my mongod processes are all running 2.2.0
[14:58:40] <Nodex> you will need to update to 2.2.1 first, then upgrade everything after that
[15:00:18] <underguiz> Nodex, that will be a PITA, but thanks
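Roughly the order that follows from the docs quoted above (hostnames are placeholders; check the 2.4 upgrade notes before running anything):

    # 1. rolling-upgrade every 2.2.0 component to 2.2.1 or later
    # 2. upgrade the cluster meta-data through a single 2.4 mongos
    mongos --configdb cfg1.example.com,cfg2.example.com,cfg3.example.com --upgrade
    # 3. then upgrade the remaining mongos and mongod processes to 2.4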
[15:03:27] <GeertJohan> I have a collection containing documents that have a timestamp, reference_id and other fields.. What would be the best approach to get the latest document (based on the timestamp) for each reference_id?
[15:03:33] <GeertJohan> (This is only done at server init)
[15:05:15] <GeertJohan> I could probably use the aggregation pipeline with $group. But won't that leave me with just the fields I have named in the $group properties? Seems like the wrong approach...
[15:05:54] <GeertJohan> also I don't want a sum or anything.. just the last doc..
[15:15:31] <Codewaffle> GeertJohan, If you figure it out, let me know - I'm sort of doing the same thing and $grouping feels clunky
[15:17:16] <GeertJohan> indeed, it seems that with $group you need to name each field.. I don't want to do that..
[15:17:32] <GeertJohan> Also, I don't think that could be the proper way..
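In case it helps, a sketch of the 2.4-era workaround (collection name and the extra field are made up): sort first, then $first every field you want to keep, which is exactly the clunky part being complained about:

    db.readings.aggregate([
      { $sort: { timestamp: -1 } },
      { $group: {
          _id: "$reference_id",
          timestamp: { $first: "$timestamp" },
          value:     { $first: "$value" }   // repeat for each field you need
      }}
    ])

Later server versions let you keep the whole document with { $first: "$$ROOT" } instead of naming each field.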
[18:29:51] <azylman> What happens if I have an expireAfterSeconds index on a field in a collection and some of the documents in that collection don't have that field? Will they just not ever expire, or could worse things happen?
[18:30:12] <kali> azylman: they will just stay forever
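For context, a minimal TTL index sketch (collection and field names assumed); documents that lack the indexed field, or where it is not a BSON date, are simply never removed:

    db.sessions.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })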
[18:34:03] <Mmike> But this is such a serious issue that I'm having doubts about the rest of mongodb, if an installer can't be fixed for so long
[18:38:23] <kali> Mmike: numa is highly problematic for mongodb, it is explicitly said in the production notes
[18:38:48] <Mmike> kali, but the installation is not working, package is broken
[18:39:08] <Mmike> one cannot install mongodb on debian the way it is explained in the documentation
[18:39:17] <Mmike> and it's been like that since mongo 2.2.0 at least
[18:40:32] <kali> Mmike: the problem is that numa and mongodb do not work well together... there is not much interest in making the installer work since the real stuff will not be working properly anyway
[18:41:05] <Mmike> kali, what you're saying is - mongodb is not supported on debian, unless you patch the installer yourself?
[18:41:39] <Mmike> btw, I have several mongodb clusters that work pretty well, once I applied a patch to the startup script
[18:41:44] <kali> i'm saying numa is bad for databases, including mongodb.
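The workaround from the production notes, for reference (paths and options are illustrative):

    # start mongod with interleaved memory allocation
    numactl --interleave=all mongod --config /etc/mongodb.conf
    # and disable zone reclaim
    echo 0 > /proc/sys/vm/zone_reclaim_mode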
[19:10:22] <kali> well i don't know what they are for :)
[19:10:38] <fjay> me neither.. that's what I am wondering.
[19:11:02] <fjay> trying to troubleshoot some connection-reset-by-peer errors between replica set members.. which are causing random elections that should not happen
[19:12:05] <fjay> kali: have you seen similar on ec2?
[19:12:32] <kali> ok. i've seen this kind of thing on ec2... the cause was that changes to a security group would sometimes mess with firewalling rules they had no reason to touch, for a short amount of time
[19:13:14] <kali> enough to be picked up as a node failure by the mongodb replica set, but not enough for us to understand what was happening for quite some time :)
[19:14:44] <fjay> one theory someone else has.. (which i call shenanigans on) is that we have too many mongos running and it's causing contention with the config servers.. hence my question about config servers earlier.. but that has NO bearing on primary elections for the shards
[19:15:20] <kali> i've had connection pool issues too, but the logs were very explicit on that one
[19:16:04] <fjay> this is connection reset by peer between two members of the replica set.. and getting some transport errors in the logs from the java driver..
[19:46:50] <ron> Nodex: bad linkedin picture.. bad.. bad.. bad...
[20:01:54] <hydrozen> Hey, just wondering, how do people deal with super long uuids (like those generated by mongodb or couch by default) in the url? It's passable when you only have one in your url, but as soon as you have nested resources… having super long ids makes URLs pretty intense.
[20:04:13] <ron> you're overwhelmed by it when you type the URLs by hand?
[20:04:43] <hydrozen> I guess all there is to do is generate a slug or something else unique and shorter..
[21:44:21] <ixti> would like to ask a question about MongoDB ruby driver
[21:44:57] <ixti> sorry. just realized that i know the answer
[21:53:45] <yfeldblum> after a replSetInitiate on the local mongod which is the only member of the replset (i'll add peers later), and calling isMaster, it reports the local node as a secondary ... what i'd like to do is to figure out if it's currently voting for a new primary and wait for that voting to be finished ... how can that be done?
[21:57:26] <fjay> yfeldblum: you could give it 2 votes
[21:57:56] <yfeldblum> fjay, that's not the problem; the problem is i need to wait until the voting is finished
[21:58:07] <yfeldblum> fjay, so i need to know what flag or property to wait on
[21:58:35] <fjay> i guess I am confused as to what you are doing
[21:59:38] <yfeldblum> fjay, automating some sysadmin
[22:00:07] <yfeldblum> fjay, i'm setting up a replset with a single node, and waiting for the state to stabilize to primary, and after the state is stabilized, do some more things
[22:00:57] <fjay> can you just create it standalone, then make it a replica set later?
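One way to do what yfeldblum describes, sketched in the mongo shell (the polling interval is arbitrary): poll rs.status() until myState reports PRIMARY (1) rather than STARTUP or SECONDARY:

    while (rs.status().myState !== 1) {
        print("waiting for this node to become primary...");
        sleep(1000);
    }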
[22:19:09] <fjay> ooh.. i think i found my issue..
[22:19:21] <fjay> connection refused because too many open connections: 20000
[22:23:13] <fjay> kali: looks like i found my smoking gun :)
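A quick way to watch this from the mongo shell is db.serverStatus().connections:

    db.serverStatus().connections   // { current: ..., available: ... }

As far as I recall, the 20000 figure is a hard ceiling in the 2.x series (maxConns can only lower it), so the fix is usually fewer mongos processes or smaller driver connection pools rather than a server setting.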
[22:55:39] <djangoobie> Hi, I'm very new to mongodb and I'm using mongoose. When defining a schema with a field referring to a related schema, I sometimes see people using ObjectID with a ref attribute and sometimes the related schema itself. Is there a difference between the two?
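The difference, roughly, is referencing versus embedding. A hedged Mongoose sketch (model and field names are made up):

    var mongoose = require('mongoose'),
        Schema   = mongoose.Schema;

    var commentSchema = new Schema({ body: String });

    var postSchema = new Schema({
      // reference: only the ObjectId is stored; fetch the doc later with .populate('author')
      author:   { type: Schema.Types.ObjectId, ref: 'User' },
      // embedding: full comment subdocuments are stored inside the post document
      comments: [commentSchema]
    });

A reference keeps the related data in its own collection (extra query or populate() to load it); an embedded schema lives inside the parent document and comes back with it.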
[23:58:14] <nemothekid> How long does mongos --configdb <config server> --upgrade take?
[23:58:50] <nemothekid> and can I run 2.2 mongos processes in parallel (so I don't have any downtime?)