[02:30:35] <OilFanInYYC> I am trying to use a mongodb callback twice in a function but I keep getting the following error: "db object already connecting, open cannot be called multiple times" the callback returns a result from a findOne query then closes the connection afterwards. I'll upload the code in a minute.
[08:43:06] <kroehnert> the current C++ driver for MongoDB (2.4.6) does not provide a method for GridFS to remove a file by _id
[08:44:37] <kroehnert> this can be replaced by deleting a file by _id from fs.files and fs.chunks if you don't want to enhance the driver code
[08:45:16] <kroehnert> now, is it safe to assume that this method will work with future versions of MongoDB, or is the GridFS specification bound to changes which makes this incompatible?
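kroehnert's workaround (deleting directly from fs.files and fs.chunks) can be sketched as follows. This is a hedged illustration, not the driver's API: `gridfs_remove_filters` is a hypothetical helper, and `fs.files`/`fs.chunks` are the default GridFS collection names; the function only builds the two delete filters a driver-level remove() would issue.

```python
def gridfs_remove_filters(file_id):
    """Build the delete filters for manually removing a GridFS file by _id.

    GridFS stores the file metadata in fs.files (keyed by _id) and the
    payload in fs.chunks (keyed by files_id), so removing a file means
    one delete against each collection.
    """
    return {
        "fs.files": {"_id": file_id},    # the metadata document
        "fs.chunks": {"files_id": file_id},  # every chunk of the payload
    }
```

As kroehnert asks, this is only as stable as the GridFS collection layout itself; a driver-provided remove() would insulate you from that.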
[08:50:54] <kali> kroehnert: you'll need to check out the jira database, and maybe create a bug... the general principle is that all drivers should expose the same interface (and the other drivers do have a remove(_id) )
[08:58:22] <kroehnert> but I'd rather not create yet another account for one single issue
[08:59:09] <kali> come on. it will take you less time than it took me to help you
[09:01:02] <kroehnert> if only Jira supported something like OpenID or Mozilla Persona
[09:05:47] <kroehnert> but I could write to the mailing list so someone else can add it to the Jira tracker
[09:58:48] <Repox> I'm having a really strange issue. I can't start mongodb at all. I made a pastie of my steps attempting to start mongodb (also doing a repair) - http://pastie.org/8406032 - anyone might have a suggestion?
[10:00:27] <Derick> what does the log look like when you start it yourself on the CMDline without --repair?
[10:17:57] <Neptu> Hey, I see the python driver has a lot of stuff and MongoDB is super awesome, but I'm still wondering why I need to care about reconnects myself... in which case is it useful to be able to handle that?
[10:22:27] <Derick> Neptu: the driver doesn't necessarily know what to do if a write fails. Sometimes you (or your app) need to decide whether to write again, or just ignore the failure.
[10:22:34] <DragonBe> Which is the preferred MongoDB PHP library promoted by MongoDB team?
[10:22:39] <Derick> or you might just want to show a "we're having issues" message
[10:22:45] <Derick> DragonBe: library? You just use the extension.
[10:27:52] <DragonBe> Derick: can you bring some MongoDB stickers to ZendCon Europe?
[10:28:20] <Derick> DragonBe: i dunno whether we have any left here in the office
[10:28:51] <Derick> DragonBe: but get the zendcon people to fill in https://docs.google.com/forms/d/1FpnzawRzTzyysfM2VT_Ke3OpgZ2rAPjQS0afzfWQcik/viewform to request them :-)
[10:29:06] <Derick> (that also means I don't have to carry them)
[10:32:11] <DragonBe> Derick: is this the same form we need to fill in for sponsorship of PHPBenelux?
[10:42:50] <Neptu> Derick, I mean a lazy programmer would expect an exception to catch on the app side; retry is a basic choice... dunno why I need to add a lot of control to handle that on the app side. I mean, the choices are quite clear: retry x times and/or act on the exception...
[10:43:31] <Derick> you can't just retry an update like $inc: 1, for example...
[11:02:20] <Neptu> Derick, on a connection error you still want the $inc: 1... whatever you are trying to do, you want it to happen, e.g. on a replica switch or other transient hiccups... retry is basically for that, and will fix the update if it tries again shortly... We had a damn amount of problems until we realized connections die on replica switches...
[11:12:55] <joannac> Neptu: say you do a write, and you get a connection died. How do you tell between "the write failed to make it" and "the write succeeded but the success message failed to make it?"
[11:13:11] <joannac> In the first case, you want to try again. In the second, you don't.
[11:16:28] <joannac> Like Derick said, not all writes can be retried multiple times without side effects.
[11:39:07] <Neptu> joannac, I thought we were in fire-and-forget mode in mongo... we do not really care about messages back...
[11:39:35] <Neptu> sort of like how atomic operations are not supported
[11:39:45] <Neptu> you only want to know the message got there
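joannac's and Derick's point — that only idempotent writes are safe to retry blindly — can be sketched as a generic retry wrapper. This is a pure-Python illustration under assumed names (`ConnectionDied` stands in for whatever network error the driver raises; it is not a real pymongo exception):

```python
class ConnectionDied(Exception):
    """Stands in for a driver-level network error."""

def with_retry(op, retries=3, idempotent=False):
    """Retry op() on connection errors, but only when it is safe to repeat.

    A non-idempotent write (e.g. an $inc) may already have been applied
    even though the success message was lost, so blindly retrying it can
    double-apply the change; in that case we re-raise immediately and let
    the application decide.
    """
    for attempt in range(retries):
        try:
            return op()
        except ConnectionDied:
            if not idempotent or attempt == retries - 1:
                raise
```

The application-side decision Derick describes is exactly the `idempotent` flag: the driver can't know it, only your code can.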
[11:49:38] <durre> thinking about modelling followers. would one collection looking sort of like this: {followeeId: 1, followerId: 2} work, with queries like db.followers.find({followeeId: 1}).count() to find my followers? does this scale?
[11:49:50] <durre> having indexes on those two ids
[11:51:37] <kali> i think count() now handles this optimisation (it's relatively recent, 2.4). but anyway, you probably want the followee/follower counts denormalized on your main user document
[11:52:22] <kali> for such a collection, i recommend using very short field names, or the noise-to-signal ratio will be huge
[11:54:17] <kali> not the array of followers/followees, just the two counters
[11:55:43] <durre> I can do that. sometimes you want to show the actual followers … db.followers.find({followerId: ...}) … will that also scale just as well (using skip and limit)?
[12:54:23] <kali> yes. be aware also that mongodb stores the field names every time they occur, so you may not want to pick something too long in some cases
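kali's advice (short field names plus denormalized counters) can be sketched like this. The field names `fr`/`fe`/`nfr`/`nfe` and the `follow` helper are made-up for illustration, not an established convention:

```python
def follow(follower_id, followee_id):
    """Documents and updates for recording one follow relationship.

    Short field names ("fr" = follower, "fe" = followee) matter because
    MongoDB stores field names inside every document. The two $inc updates
    keep denormalized counters on the user documents, so showing a
    follower count never needs a count() over the edge collection.
    """
    edge = {"fr": follower_id, "fe": followee_id}          # insert into followers
    bump_follower = ({"_id": follower_id}, {"$inc": {"nfe": 1}})  # "following" count
    bump_followee = ({"_id": followee_id}, {"$inc": {"nfr": 1}})  # "followers" count
    return edge, bump_follower, bump_followee
```

With indexes on `fr` and `fe`, listing actual followers stays an index walk, and the counters answer the common "how many" question for free.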
[13:52:31] <Touillettes> Hi everyone, I'm looking for an idea about a production environment for MongoDB. I have 2 replica sets used as shards in my infra, and I want to use the secondary nodes of both RSs as shards for another infra. is that possible or not?
[14:03:41] <kali> Touillettes: you can start another mongod on a different port
[14:04:30] <Derick> (don't do that in production!)
[14:05:26] <Touillettes> yes, but if i add one node of RS0 and RS1 as a shard to another config server, it discovers all the members of the replica set
[14:06:29] <Derick> Touillettes: the answer is really "no" to your original question I think
[14:08:30] <Touillettes> ok thanks Derick, but is there any way to copy all the data of the 2 RSs onto another mongo instance live, or not?
[14:09:02] <Derick> sorry, I'm not quite sure what you're trying to do.
[14:10:43] <Touillettes> Hum, I have 2 shards which are 2 RSs. i want to use this infra for writes only, and i want an instance where all the data is available for my read process
[14:11:41] <Derick> Touillettes: it helps to use capitals and punctuation. I'm having a hard time parsing what you say :S
[14:18:16] <Touillettes> i need to replicate all the data of the 2 RSs onto one other server. in the end i need to have all the data twice
[14:18:52] <Derick> but a replicaset already provides you with two extra copies surely
[14:19:12] <Derick> you can not have one node be part of two replicasets
[14:19:26] <Derick> so for a backup node, you need one per shard/replica set
[14:19:43] <czajkowski> http://www.lczajkowski.com/2013/10/16/mongodb-community-kit/ Announcing the MongoDB Community Kit
[14:19:47] <Derick> you can mark a node as "hidden" so that the application doesn't see it for reading, and it can then also never become primary for writing
[14:20:14] <Touillettes> OK thanks for this information ^^
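The hidden backup node Derick describes corresponds to a member entry in the replica set configuration. A minimal sketch of that entry (the host name is made up; in practice you would append this to `rs.conf().members` and call `rs.reconfig()` in the shell):

```python
def hidden_member(host, member_id):
    """Replica-set member config for a hidden backup node.

    hidden: True keeps the node invisible to application reads, and
    priority: 0 makes it ineligible to become primary, so it only ever
    replicates data -- one such node is needed per shard/replica set.
    """
    return {"_id": member_id, "host": host, "hidden": True, "priority": 0}
```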
[15:07:20] <tonni> can someone explain why bson_ext is not loading: http://pastebin.com/9KqCbVQJ ?
[15:31:31] <ssalat> Anybody out there who has a working mongodb-hadoop installation running hive?
[15:35:05] <flow_> Can somebody help me with a mongodb-hadoop connector problem?
[15:38:50] <Nodex> Can anybody tell me please what time it is later.
[19:17:01] <kali> and there are certainly many other issues that i'm not thinking of. mongodb is still a young technology; you don't necessarily want to be the one to push the envelope
[19:18:16] <metzen> awesome community is awesome! :D
[19:47:37] <blubblubb> i need to stream data into mongodb. how can i do this via GridFS?
[19:52:30] <tkeith> If I have a document called group like this: {_id: ..., name: ..., members: {<member1_id>: {name: ...}, <member2_id>: {name: ...}, ...}} -- How can I remove one member from members?
[20:17:34] <tkeith> cheeser: Ok, I'm going to try a list instead
[20:17:45] <tkeith> I'm starting to feel like maybe I should create a new top level collection for membership
[20:19:49] <kali> tkeith: remove with $pull, and for add, you can make your update conditional: db.users.update({ _id: ..., members: { $ne: new_member_id} }, { $push : { members : { _id: new_member_id, name: ...} } })
[20:20:37] <kali> tkeith: values as key names will be impossible to index, and nearly impossible to use in the aggregation pipeline
[20:20:57] <kali> tkeith: everything is designed for document keys to be keywords only
[20:21:34] <tkeith> kali: Ok, that conditional update thing is what I'm missing... your example doesn't quite make sense though
[20:24:40] <tripflex> your keys should always be static
[20:24:45] <tripflex> with the key value being dynamic
[20:24:54] <tripflex> or you will have all kinds of fun trying to get it to work
[20:26:26] <kali> tkeith: sorry if it does not make sense, but it works: http://uu.zoy.fr/p/guyunuji#clef=kexcboexjaxoyfxw :P
[20:30:08] <tkeith> kali: Perfect! I was just experimenting and it seems like this works too: update({_id: ..., members: {$ne: {id: ...}}}, {$push: ...}) -- Is there any difference?
[20:30:31] <tkeith> members: {$ne: {id ... vs members.id: {$ne: ...
[20:32:19] <joannac> tkeith: why the conditional $push instead of $addToSet?
[20:34:05] <tkeith> joannac: Sorry, I wasn't really clear on this but each "member" element of members consists of changing fields (so member A won't always be represented by exactly the same element in members)
[21:03:12] <tkeith> joannac: Ah, that makes sense, thank you for the example!
[21:13:32] <joannac> tkeith: actually, even doing members: {$ne: {id:3}} (where id:3 is already there) will trigger it -- the document is {id:3, name:"..."} and will not match
[21:15:47] <tkeith> joannac: Right, so I need to do the members.id: {$ne: ...} way right?
[22:14:09] <joannac> tkeith: if you have more than one condition, $elemMatch. If not, I don't know which is more efficient.
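The pattern kali and joannac converge on — members as an array of subdocuments, with a conditional $push guarded by dot notation — can be sketched as plain query/update documents. These helpers and the field names are illustrative only:

```python
def add_member(group_id, member_id, name):
    """Conditional $push that only fires when member_id is absent.

    Matching on "members.id" (dot notation) checks the id field of any
    array element. Matching members: {$ne: {id: ...}} instead would
    compare whole subdocuments, so a member whose name field changed
    would no longer match -- the trap joannac points out above.
    """
    query = {"_id": group_id, "members.id": {"$ne": member_id}}
    update = {"$push": {"members": {"id": member_id, "name": name}}}
    return query, update

def remove_member(group_id, member_id):
    """$pull removes the array element whose id matches."""
    return {"_id": group_id}, {"$pull": {"members": {"id": member_id}}}
```

This is also why $addToSet doesn't fit here: it deduplicates on the whole element, and these elements carry changing fields.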
[22:15:27] <ramsey> joannac: thanks. I figured it out using the group() method
[22:15:49] <ramsey> joannac: also, long time, no see! :-)
[22:23:12] <tPl0ch> Hi, I have created a replicaSet with 3 nodes on different machines and I am trying to write sessions from a distributed system to it
[22:24:02] <tPl0ch> I have added a keyFile on each server, and I was able to login in via shell and use db.auth()
[22:26:53] <Derick> ramsey: you want the aggregation framework
[22:27:33] <Derick> ramsey: look at example #2 at http://docs.php.net/mongocollection.aggregate
[22:28:19] <ramsey> Derick: cool.. this is what I ended up using, since I'm using the CLI: http://docs.mongodb.org/manual/reference/method/db.collection.group/
[22:38:38] <Bohemian_> I need some help with 2dsphere index .. i have checked my generated json by linting it many times .. but i am still not able to setup that index
[22:39:56] <Bohemian_> I am using PHP and my code looks something like this : http://pastebin.com/NjNp5SFa
[22:41:37] <Bohemian_> This is the generated (geo)JSON > http://pastebin.com/fV3Kfvaf which gets linted .. still i cannot setup that index... any pointers ?? anyone
[23:17:05] <tonni> Would you say that MongoMapper is mature enough for a production environment?
[23:17:52] <tonni> assuming you have some experience with MM
[23:32:45] <zmansiv> hi, i have a question about indexing. say if i have a "users" collection, and among other things, each user also has an array of "sessions" documents, each of which have a "token" value. can i create an index for the session token such that i could speed up looking up a user by one of its session tokens?
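zmansiv's case is what MongoDB's multikey indexes are for: an index on "sessions.token" indexes the token of every element of the sessions array. A sketch of the index spec and query (the helper name is made up; in pymongo these would go to create_index() and find_one()):

```python
def session_token_lookup(token):
    """Index spec and query for finding a user by one of its session tokens.

    The index on the dotted path is multikey: each array element
    contributes an entry, so the query resolves via the index instead of
    scanning the users collection.
    """
    index_spec = [("sessions.token", 1)]          # ascending multikey index
    query = {"sessions.token": token}             # matches any element's token
    return index_spec, query
```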
[23:37:54] <rafaelhbarros> about the ulimits for mongodb
[23:37:57] <rafaelhbarros> how to make them persist?
[23:39:49] <joannac> rafaelhbarros: "Depending on your system's configuration, and default settings, any change to system limits made using ulimit may revert following a system restart. Check your distribution and operating system documentation for more information."
[23:40:29] <rafaelhbarros> joannac wrong channel then, I should be asking about ubuntu and I feel stupid
[23:57:03] <logic_prog> is there something that does "git-like revisioning" in mongodb? I have documents that I'd like to store in mongodb, but I'd also like to have git-like revision history on them.
[23:57:16] <logic_prog> To have the latter, I'd need to store them in the file system + use git; but I'd prefer to put the files in mongodb instead
[23:57:27] <logic_prog> Is there a way to make this work -- to somehow get git-like persistent trees in mongodb?
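One way to approximate logic_prog's git-like persistent trees without leaving MongoDB is to store each revision as its own content-addressed document with a parent pointer, so history forms the same DAG git uses. A minimal sketch (the `revisions`/`heads` collection names and the helper are assumptions, not an existing library):

```python
import hashlib
import json

def make_revision(doc, parent=None):
    """Build a git-flavoured revision document.

    The _id is a SHA-1 over the content plus the parent pointer, like a
    git commit id, so identical edits on different histories get distinct
    ids. In MongoDB you would insert these into a `revisions` collection
    and keep a `heads` collection mapping each logical document to the
    _id of its latest revision.
    """
    payload = json.dumps({"doc": doc, "parent": parent}, sort_keys=True).encode()
    rev_id = hashlib.sha1(payload).hexdigest()
    return {"_id": rev_id, "parent": parent, "doc": doc}
```

Walking the parent chain recovers the full history, and because revisions are immutable and content-addressed, concurrent writers can never corrupt old versions — they can only add new heads.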