#mongodb logs for Wednesday the 16th of October, 2013

[02:27:02] <elizar> http://www.wired.com/underwire/2013/10/marvel-new-tv-shows/?cid=co13181354
[02:30:35] <OilFanInYYC> I am trying to use a mongodb callback twice in a function, but I keep getting the following error: "db object already connecting, open cannot be called multiple times". The callback returns a result from a findOne query, then closes the connection afterwards. I'll upload the code in a minute.
[02:34:21] <OilFanInYYC> SEND file:///home/zsolt/Documents/XChat/test.js
[02:40:21] <tripflex> why don't you just leave the connection open?
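A minimal sketch of tripflex's suggestion, assuming the Node.js driver of the time; the connect URL and the findUser helper are invented, not OilFanInYYC's actual code:

    // open the connection once at startup and keep the handle around,
    // instead of calling open()/close() around every query
    var MongoClient = require('mongodb').MongoClient;

    var db = null;
    MongoClient.connect('mongodb://localhost:27017/test', function (err, database) {
        if (err) throw err;
        db = database; // reused for the life of the process
    });

    // later callers reuse the already-open handle -- no second open(), so no
    // "open cannot be called multiple times" error
    function findUser(name, callback) {
        db.collection('users').findOne({ name: name }, callback);
    }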
[08:42:16] <kroehnert> Hi, I got a small question
[08:43:06] <kroehnert> the current C++ driver for MongoDB (2.4.6) does not provide a method for GridFS to remove a file by _id
[08:44:37] <kroehnert> this can be replaced by deleting a file by _id from fs.files and fs.chunks if you don't want to enhance the driver code
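A mongo shell sketch of that manual workaround, assuming the default fs. GridFS prefix (kali confirms below that the schema is unlikely to change):

    // remove a GridFS file by _id: delete its metadata and all of its chunks
    var fileId = ObjectId("526073d1e4b0a1b5e2d8f0aa"); // the fs.files _id (example value)
    db.fs.files.remove({ _id: fileId });
    db.fs.chunks.remove({ files_id: fileId });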
[08:45:16] <kroehnert> now, is it safe to assume that this method will work with future versions of MongoDB, or is the GridFS specification bound to changes which makes this incompatible?
[08:50:54] <kali> kroehnert: you'll need to check out the jira database, and maybe create a bug... the general principle is that all drivers should expose the same interface (and the other drivers do have a remove(_id))
[08:51:38] <kroehnert> kali: I see
[08:51:55] <kroehnert> so far I didn't check the interfaces of other drivers
[08:52:13] <kali> kroehnert: i've checked the java one
[08:52:33] <kali> kroehnert: that said, the c++ driver is a bit atypical because it's part of the server code too
[08:52:41] <kroehnert> yes
[08:53:00] <kali> kroehnert: but that does not justify a smaller interface.
[08:54:09] <kali> kroehnert: as for the gridfs schema spec, i think it is highly unlikely to change
[08:54:13] <kroehnert> yes, essentially when multiple files with the same name can exist
[08:54:22] <kroehnert> okay
[08:54:26] <kali> kroehnert: so the manual workaround should work for a while
[08:55:42] <kroehnert> thanks for the info
[08:56:22] <kroehnert> then I'll probably go with the workaround first
[08:57:27] <kali> kroehnert: that will certainly be a faster and sensible fix. but jira-ing the issue is important for the next guy :)
[08:58:02] <kroehnert> yes, I know
[08:58:22] <kroehnert> but I don't really want to create yet another account for one single issue
[08:59:09] <kali> come on. it will take you less time than it took me to help you
[09:01:02] <kroehnert> if Jira at least supported something like OpenID or Mozilla Persona
[09:05:47] <kroehnert> but I could write to the mailing list so someone else can add it to the Jira tracker
[09:58:48] <Repox> I'm having a really strange issue. I can't start mongodb at all. I made a pastie of my steps attempting to start mongodb (also doing a repair) - http://pastie.org/8406032 - anyone might have a suggestion?
[10:00:27] <Derick> what does the log look like when you start it yourself on the command line without --repair?
[10:00:59] <Repox> Derick: http://pastie.org/8406040
[10:01:21] <Derick> Wed Oct 16 11:59:24.143 [initandlisten] ERROR: Insufficient free space for journal files
[10:01:25] <Derick> Wed Oct 16 11:59:24.144 [initandlisten] Please make at least 3379MB available in /data/db/journal or use --smallfiles
[10:01:27] <Derick> is quite explanatory
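The two fixes the message itself offers, as a sketch (dbpath assumed):

    # either free at least 3379MB on the volume holding /data/db/journal, or
    # run with smaller preallocated journal files:
    mongod --dbpath /data/db --smallfiles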
[10:01:42] <Repox> omg... i missed those lines like fifty times...
[10:02:21] <Repox> Derick: Thank you :)
[10:02:28] <Derick> np :-)
[10:17:57] <Neptu> Hey, I see the Python driver has a lot of stuff and MongoDB is super awesome, but I'm still wondering why I need to take care of reconnects myself... in which case is it useful to be able to handle that?
[10:22:27] <Derick> Neptu: the driver doesn't necessarily know what to do if a write fails. Sometimes you (or your app) need to decide to write again, or just ignore the failure.
[10:22:34] <DragonBe> Which is the preferred MongoDB PHP library promoted by MongoDB team?
[10:22:39] <Derick> or you might just want to show a "we're having issues" message
[10:22:45] <Derick> DragonBe: library? You just use the extension.
[10:22:50] <Derick> no library needed
[10:22:55] <DragonBe> extension?
[10:23:07] <Derick> DragonBe: http://pecl.php.net/mongo
[10:23:07] <DragonBe> pecl?
[10:23:15] <Derick> yes
[10:23:20] <DragonBe> ok, thanks
[10:23:43] <Derick> there are also ODMs, but I wouldn't mess with them until you know how to deal with MongoDB directly.
[10:23:48] <Derick> DragonBe: how's the swag btw? :-)
[10:24:01] <Derick> mug large enough for the amount of coffee you drink? :D
[10:24:14] <joannac> swag!
[10:24:20] <DragonBe> Derick, claimed by my coworker who's now a MongoDB fan
[10:24:30] <Derick> joannac: he visited the P.A. office when I was there on Friday
[10:24:44] <Derick> DragonBe: btw, you remember me introducing you to Max?
[10:25:11] <DragonBe> yes
[10:25:22] <Derick> he's our president/CEO :-)
[10:25:31] <DragonBe> joannac: swag pack from zendcon (with MongoDB mug and shirt) https://www.dropbox.com/s/vlxyhqwvbjs9rwr/2013-10-11%2020.34.56.jpg
[10:25:41] <Derick> lol
[10:25:46] <Derick> you got that much stuff? :-)
[10:26:02] <DragonBe> Derick: nice to meet him
[10:26:11] <joannac> whoa!
[10:26:26] <DragonBe> Derick: yes, all packed in a single suitcase + my dirty laundry in a smaller bag
[10:26:28] <joannac> Niiiiiice!
[10:26:50] <Derick> I only brought a red elephpant with me home
[10:26:57] <Derick> i've no space for anything else ever
[10:27:14] <joannac> I want a brown mug
[10:27:21] <Derick> i want a brown t-shirt ;-)
[10:27:46] <DragonBe> Came to ZC with 2 suitcases as I know how crazy it goes at these kind of conferences
[10:27:51] <joannac> Derick: I'll swap you
[10:27:52] <DragonBe> Derick: can you bring some MongoDB stickers to ZendCon Europe?
[10:28:20] <Derick> DragonBe: i dunno whether we have any left here in the office
[10:28:51] <Derick> DragonBe: but get the zendcon people to fill in https://docs.google.com/forms/d/1FpnzawRzTzyysfM2VT_Ke3OpgZ2rAPjQS0afzfWQcik/viewform to request them :-)
[10:29:06] <Derick> (that also means I don't have to carry them)
[10:32:11] <DragonBe> Derick: is this the same form we need to fill in for sponsorship of PHPBenelux?
[10:32:25] <Derick> DragonBe: yeah
[10:32:39] <DragonBe> or are you giving us an email address we can have Thijs use his magic on ;-)
[10:32:40] <Derick> DragonBe: see PM
[10:42:50] <Neptu> Derick, I mean a lazy programmer would expect an exception to capture on the app side; retry is a basic choice... dunno why I need to add a lot of control to handle that on the app side. I mean, the choices are quite clear: retry x times and/or act on the exception...
[10:43:31] <Derick> you can't just retry an update like an $inc: 1, for example...
[11:02:20] <Neptu> Derick, on a connection error you still want to inc by 1... whatever you are trying to do, you want it to happen, e.g. on a replica switch or other punctual things... retry is basically for that, and will fix the update if it tries again shortly... We had a damn amount of problems until we realized connections die on replica switches...
[11:12:55] <joannac> Neptu: say you do a write, and you get "connection died". How do you tell the difference between "the write failed to make it" and "the write succeeded, but the success message failed to make it"?
[11:13:11] <joannac> In the first case, you want to try again. In the second, you don't.
[11:16:28] <joannac> Like Derick said, not all writes can be retried multiple times without side effects.
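A shell sketch of that point (collection and field names invented): the $inc update is not idempotent, so a blind retry after a lost acknowledgement double-counts, while a $set can be retried safely:

    // NOT safe to blindly retry: if the first attempt landed but its ack was
    // lost, the retry increments a second time
    db.stats.update({ _id: "page1" }, { $inc: { views: 1 } });

    // safe to retry: re-running $set to the same value is a no-op
    db.users.update({ _id: 42 }, { $set: { status: "active" } });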
[11:39:07] <Neptu> joannac, I thought we were in fire-and-forget mode in mongo... we do not really care about messages back...
[11:39:35] <Neptu> kinda atomic operations are not supported
[11:39:45] <Neptu> you want to know only the message got there
[11:39:54] <Neptu> maybe check getLastError
[11:39:58] <Neptu> but in any case
[11:40:04] <Neptu> that's a long discussion
[11:40:18] <Neptu> the focus is on replicas changing primary
[11:40:18] <cheeser> document updates are atomic.
[11:40:39] <Neptu> that is where i had a problem
[11:40:58] <Neptu> cheeser, you are absolutely right and the item is locked...
[11:41:07] <cheeser> not quite.
[11:41:10] <Neptu> ??
[11:41:16] <cheeser> write locks are at the database level.
[11:41:32] <Derick> Neptu: most drivers default to acknowledged writes now
[11:41:50] <Neptu> Derick, ok that is news for me
[11:42:10] <cheeser> since 2.4 really
[11:42:12] <Neptu> i thought it depended on the level of confirmation, but it was kinda fire & forget
[11:42:26] <Neptu> ok we moved to 2.4 not so long ago
[11:42:33] <kali> it's on the driver side
[11:42:57] <Neptu> maybe I need to read the "whats new on 2.4" document
[11:44:13] <kali> it's on the driver side
[11:44:22] <kali> you won't find it in the 2.4 changelog
[11:44:36] <kali> (but you probably want to read it anyway)
[11:44:40] <Derick> actually, i think we did put it in there
[11:44:55] <kali> Derick: ha ? ok then :)
[11:45:23] <Number6> It *used* to be "Fire and forget". It is now journal safe
[11:45:26] <kali> but it's still on the driver side, so if you bumped the server and not the client, you may still have the old default
[11:45:53] <Derick> yes
[11:46:19] <Derick> it's in a separate doc under release notes: http://docs.mongodb.org/manual/release-notes/drivers-write-concern/
[11:46:46] <Derick> it's possible that soon there might not be a real w=0 either.
[11:47:15] <kali> and actually, you need to 1/ bump the driver 2/ instantiate it as a MongoClient and not a Mongo for the new default to apply
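The same split existed in the Node.js driver, as a sketch; the defaults shown follow the drivers write-concern page linked above, and the URL and options are invented:

    var mongodb = require('mongodb');

    // legacy connection style: writes stay unacknowledged unless you opt in
    var legacy = new mongodb.Db('test',
        new mongodb.Server('localhost', 27017), { w: 0 });

    // MongoClient (newer drivers): acknowledged writes (w: 1) by default
    mongodb.MongoClient.connect('mongodb://localhost:27017/test',
        function (err, db) {
            // inserts/updates here return only after the server confirms them
        });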
[11:47:39] <Derick> yes
[11:49:38] <durre> thinking about modelling followers. would one collection looking sort of like this: {followeeId: 1, followerId: 2} work, with queries like db.followers.find({followeeId: 1}).count() to find my followers? does this scale?
[11:49:50] <durre> having indexes on those two ids
[11:51:37] <kali> i think count() handles this optimisation now (it's relatively recent, 2.4). but anyway, you probably want the followee/follower counts denormalized on your main user document
[11:52:22] <kali> and for such a collection, i recommend using very short field names, or the noise-to-signal ratio will be huge
[11:53:10] <durre> kali: you mean storing the count of followers, followees in the user object… not the array I guess?
[11:53:24] <Derick> I agree, store the count() as extra fields
[11:53:51] <kali> what array ?
[11:53:59] <kali> ha yeah
[11:54:17] <kali> not the array of followers/followees, just the two counters
[11:55:43] <durre> I can do that. sometimes you want to show the actual followers … db.followers.find({followeeId: …}) … will that also scale just as well (using skip and limit)?
[11:56:31] <kali> skip is linear
[11:56:51] <kali> and you want a composite index on follower+sortkey
[11:58:48] <kali> i think the skip part will be negligible compared to the cost of fetching the data from the matching followee user documents anyway
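Pulling the thread together, a shell sketch of durre's model with kali's suggestions applied ("f"/"t" are made-up short field names):

    // one document per (follower, followee) edge, with short keys
    db.fol.insert({ f: 2, t: 1 });          // user 2 follows user 1
    db.fol.ensureIndex({ t: 1, f: 1 });     // "who follows X?", composite
    db.fol.ensureIndex({ f: 1, t: 1 });     // "whom does X follow?"

    // denormalized counters on the main user documents
    db.users.update({ _id: 1 }, { $inc: { followers: 1 } });
    db.users.update({ _id: 2 }, { $inc: { followees: 1 } });

    // cheap count from the user document, paged listing from the edges
    db.users.findOne({ _id: 1 }).followers;
    db.fol.find({ t: 1 }).sort({ f: 1 }).skip(20).limit(20);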
[12:05:10] <durre> cool, thanks for the input!
[12:49:55] <maasan> what is the best coding style for mongodb fields?
[12:50:58] <Derick> maasan: ?
[12:51:20] <maasan> coding convention for mongodb fields?
[12:51:39] <kali> nothing has emerged
[12:51:57] <Derick> pick whatever you want
[12:52:06] <Derick> just stick to a-z0-9_/i
[12:52:15] <Derick> just stick to A-Za-z0-9_
[12:52:28] <kali> i suspect nothing will... nothing did in 40 years for SQL
[12:53:33] <maasan> kali: okay. so lowerCamelCase will be good, right?
[12:53:51] <Derick> just as good as lower_camel_case
[12:53:54] <Derick> pick what you like
[12:54:23] <kali> yes. be aware also that mongodb stores the field name every time it occurs, so you may not want to pick something too long in some cases
[13:52:31] <Touillettes> Hi everyone, I'm looking for an idea about a production environment for MongoDB. I have 2 replica sets used as shards on my infra, and I want to use the secondary nodes of both RSs as shards for another infra. Is that possible or not?
[14:03:41] <kali> Touillettes: you can start another mongod on a different port
[14:04:30] <Derick> (don't do that in production!)
[14:04:35] <leifw> Derick: holy god why
[14:04:45] <Derick> leifw: test suite for the php driver
[14:04:51] <leifw> ha ok
[14:05:26] <Touillettes> yes, but if I add one node of RS0 and RS1 as shards to another config server, it discovers all the members of the replica set
[14:06:29] <Derick> Touillettes: the answer is really "no" to your original question I think
[14:08:30] <Touillettes> ok thanks Derick, but is there any solution to copy all the data of the 2 RSs to another mongo instance live, or not?
[14:09:02] <Derick> sorry, I'm not quite sure what you're trying to do.
[14:10:43] <Touillettes> Hmm, I have 2 shards which are 2 RSs. I want to use this infra for writes only, and I want an instance where all the data is available for my read process
[14:11:41] <Derick> Touillettes: it helps to use capitals and punctuation. I've a hard time parsing what you say :S
[14:12:17] <Touillettes> Sorry i do it Again ^^
[14:12:31] <Derick> If I understand correctly, you currently have two shards, where each of them is a two node replicaset?
[14:13:01] <Touillettes> yes, each shard is a 3-node RS
[14:13:17] <Derick> okay, so two shards, three nodes per shard
[14:13:21] <Derick> that's six nodes in total
[14:13:25] <Touillettes> Yes
[14:13:31] <Touillettes> exactly
[14:13:41] <Derick> okay, and what do you want new?
[14:13:57] <kzim_> hello, any advice on choosing between XFS and EXT4 for mongo?
[14:14:54] <Touillettes> so I use this for my write process. I want to create a live backup of these 2 replica sets on another mongo instance
[14:15:24] <Derick> your write process is?
[14:15:44] <Derick> you're writing from a script into this two shard, three node set?
[14:16:00] <Derick> and you want to backup your two replicasets (one for each shard) to what?
[14:16:06] <Derick> just one MongoDB server?
[14:16:52] <Touillettes> yes, another instance which I will use only for reads
[14:17:19] <Derick> but you want *two* replicasets also replicating to *one* other node?
[14:17:32] <Touillettes> Yes exactly
[14:17:38] <Derick> or one node *per* replicaset?
[14:18:16] <Touillettes> i need to replicate all the data of the 2 RSs onto one other server. At the end i need to have all the data twice
[14:18:52] <Derick> but a replicaset already provides you with two extra copies surely
[14:19:12] <Derick> you can not have one node be part of two replicasets
[14:19:26] <Derick> so for a backup node, you need one per shard/replica set
[14:19:43] <czajkowski> http://www.lczajkowski.com/2013/10/16/mongodb-community-kit/ Announcing the MongoDB Community Kit
[14:19:47] <Derick> a node you can mark as "hidden" so that the application doesn't see it for reading, and it can then also never become primary for writing
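A shell sketch of that setup, run against the primary of each shard's replica set (host name invented):

    // add the backup node, then mark it hidden and never-primary
    rs.add("backup1.example.com:27017");
    var cfg = rs.conf();
    cfg.members[cfg.members.length - 1].priority = 0;
    cfg.members[cfg.members.length - 1].hidden = true;
    rs.reconfig(cfg);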
[14:20:14] <Touillettes> OK thanks for this information ^^
[15:07:20] <tonni> can someone explain why bson_ext is not loading: http://pastebin.com/9KqCbVQJ ?
[15:31:31] <ssalat> Anybody out there who has a working mongodb-hadoop installation running hive?
[15:35:05] <flow_> Can somebody help me with a mongodb-hadoop connector problem?
[15:38:50] <Nodex> Can anybody tell me please what time it is later.
[15:39:06] <cheeser> 12:57
[15:39:19] <Nodex> excellent, thanks, that's been plaguing me all day
[15:39:41] <cheeser> any time. (see what I did there?)
[15:40:00] <Nodex> haha, play on words, I like it, you truly are "second" to none
[15:40:03] <Derick> no, i was looking the other way
[15:40:05] <Nodex> see what I did there?
[15:40:56] <cheeser> nope. maybe next time.
[15:41:03] <Nodex> dang, gotta work on my delivery
[16:36:15] <Psi|4ward> Hi there, how do I find rows within subdocuments, like https://gist.github.com/psi-4ward/b6c123ba68cfe3e063ad
[16:36:31] <Psi|4ward> i want to find the document containing a formats-subdocument with a given MD5 string
[17:10:23] <astropirate> Psi|4ward, use the $or operator
[17:10:36] <astropirate> in my novice opinion
[17:11:52] <Psi|4ward> astropirate: thanks, but the keys of the subdocuments are variable
[17:12:01] <Psi|4ward> so $or .. $or .. $or is not an option
[17:12:08] <Psi|4ward> i think i have to transform this into an array
[17:12:17] <astropirate> Psi|4ward, the keys for the subdocuments?
[17:12:25] <astropirate> yah, you need an array in that case
[17:12:32] <Psi|4ward> yeah, "thumb" and so on
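A shell sketch of that restructuring (collection and field names guessed from the description, not taken from the gist): once formats is an array, the variant name and MD5 become values rather than keys, and a single query finds the parent document.

    // before: formats: { thumb: { md5: ... }, poster: { md5: ... } }  -- variable keys
    // after: the variant name moves inside the array elements
    db.files.insert({
        name: "clip.mp4",
        formats: [
            { kind: "thumb",  md5: "d41d8cd98f00b204e9800998ecf8427e" },
            { kind: "poster", md5: "0cc175b9c0f1b6a831c399e269772661" }
        ]
    });

    // find the document containing a format with a given MD5
    db.files.find({ "formats.md5": "d41d8cd98f00b204e9800998ecf8427e" });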
[17:23:01] <Ken2> So, I've got to ask... why is the brand logo on MMS a Mario level?
[17:23:17] <Ken2> Is it an MMS / Mario Monitoring System joke?
[17:23:34] <Ken2> https://mms.mongodb.com/static/images/logo-mms-white-landing.png
[17:24:37] <Nomikos> How is that a Mario level?
[17:24:53] <Ken2> It just changed... it was.
[17:27:07] <Nomikos> hmkay, maybe it'll come back
[18:02:09] <HashMap> Hi people.. I am building a schema for a file collection, what are the reasonable attributes that i should store?
[19:07:52] <metzen> Hi, does mongodb have any limitation on the number of databases?
[19:11:31] <kali> http://docs.mongodb.org/manual/reference/limits/ look for namespaces
[19:11:34] <kali> metzen: ^
[19:11:51] <kali> but i would not recommend taking that path
[19:12:33] <metzen> using a database per user (since users share nothing) is a good way to go?
[19:12:36] <metzen> if not, why not?
[19:13:14] <kali> depends if we're talking 10 users or 10k users
[19:13:30] <metzen> lets say 10k users
[19:13:56] <kali> files are allocated per database (or per collection as an option, but in your case that would be even worse)
[19:14:29] <kali> even with 2 or 3 files per database (which is about the minimum), you already consume 30k file descriptors...
[19:15:08] <metzen> I see.
[19:15:31] <metzen> File descriptors are the only problem that you see?
[19:16:09] <kali> well, with standard preallocation, any database will preallocate a lot of space. so you need to take that into consideration too
[19:16:10] <metzen> Or there are other issues... idk, like the overhead of multiple indexes
[19:16:24] <metzen> Yep, this is solved for now
[19:17:01] <kali> and there are certainly many other issues that i'm not thinking of. mongodb is still a young technology, you don't necessarily want to be the one pushing the envelope
[19:17:11] <kali> "here be dragons" :)
[19:18:08] <metzen> thank you kali!
[19:18:16] <metzen> awesome community is awesome! :D
[19:47:37] <blubblubb> i need to stream data into mongodb. how can i do this via GridFS?
[19:52:30] <tkeith> If I have a document called group like this: {_id: ..., name: ..., members: {<member1_id>: {name: ...}, <member2_id>: {name: ...}, ...}} -- How can I remove one member from members?
[19:52:37] <tkeith> (by id)
[19:53:35] <cheeser> $pull
[19:54:51] <tkeith> cheeser: Docs say $pull is for arrays... does it also work on documents?
[19:55:10] <cheeser> oh, i see. i misread that.
[19:55:19] <cheeser> maybe $unset?
[19:55:57] <tkeith> cheeser: Hmm yeah, that looks like what I need. Can you give me an example of how I can use it on a subdocument (members)?
[20:01:14] <tkeith> Can I use an ObjectId as a key in a subdocument?
[20:09:45] <kali> tkeith: never use a variable as a key
[20:10:13] <kali> members: [{ id: member1_id, name: ...}, { id: member2_id, name: ...}, ...]
[20:10:36] <kali> tkeith: or you'll regret it every step of the way
[20:11:41] <tkeith> kali: I was originally doing that, but then how can I say "remove member with id=x"?
[20:12:00] <tkeith> kali: And is there a way to do an $addToSet type operation based on only the id part of each element?
[20:14:27] <cheeser> db.collection.update({your query}, { $unset : { "members.<member2_id>" : "" } })
[20:14:30] <cheeser> more or less
[20:15:29] <tkeith> cheeser: Hmm ok, would you agree with kali about not using ids as keys?
[20:16:09] <cheeser> it feels weird for sure
[20:17:34] <tkeith> cheeser: Ok, I'm going to try a list instead
[20:17:45] <tkeith> I'm starting to feel like maybe I should create a new top level collection for membership
[20:19:49] <kali> tkeith: remove with $pull, and for add, you can make your update conditional: db.users.update({ _id: ..., members: { $ne: new_member_id} }, { $push : { members : { _id: new_member_id, name: ...} } })
[20:20:02] <kali> tkeith: or something like that.
[20:20:37] <kali> tkeith: values as key names will be impossible to index, and nearly impossible to use in the aggregation pipeline
[20:20:57] <kali> tkeith: everything is designed for the document key to be keywords only
[20:21:34] <tkeith> kali: Ok, that conditional update thing is what I'm missing... your example doesn't quite make sense though
[20:24:40] <tripflex> your keys should always be static
[20:24:45] <tripflex> with the key value being dynamic
[20:24:54] <tripflex> or you will have all kinds of fun trying to get it to work
[20:26:26] <kali> tkeith: sorry if it does not make sense, but it works: http://uu.zoy.fr/p/guyunuji#clef=kexcboexjaxoyfxw :P
[20:30:08] <tkeith> kali: Perfect! I was just experimenting and it seems like this works too: update({_id: ..., members: {$ne: {id: ...}}}, {$push: ...}) -- Is there any difference?
[20:30:31] <tkeith> members: {$ne: {id ... vs members.id: {$ne: ...
[20:32:19] <joannac> tkeith: why the conditional $push instead of $addToSet?
[20:34:05] <tkeith> joannac: Sorry, I wasn't really clear on this but each "member" element of members consists of changing fields (so member A won't always be represented by exactly the same element in members)
[20:35:16] <joannac> Ah, got it.
[20:43:58] <joannac> tkeith: I have an explanation for you, but I'm trying to figure out how to explain it.
[20:48:51] <joannac> tkeith: http://pastebin.com/bK1W13qG
[21:03:12] <tkeith> joannac: Ah, that makes sense, thank you for the example!
[21:13:32] <joannac> tkeith: actually, even doing members: {$ne: {id:3}} (where id:3 is already there) will trigger it -- the document is {id:3, name:"..."} and will not match
[21:15:47] <tkeith> joannac: Right, so I need to do the members.id: {$ne: ...} way right?
[21:19:20] <joannac> tkeith: or $elemMatch
[21:54:25] <tkeith> joannac: Which of those two ways would you say is better?
[22:05:37] <ramsey> Derick: any pointers on doing "group by" style queries, where I can get counts of each group?
[22:12:59] <joannac> ramsey: aggregation framework
[22:14:09] <joannac> tkeith: if you have more than one condition, $elemMatch. If not, I don't know which is more efficient.
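A shell sketch of the guards discussed above (collection name and values invented), plus the $pull removal kali mentioned:

    // document shape: { _id: 1, members: [ { id: 3, name: "ann" } ] }

    // whole-element $ne: { id: 3 } != { id: 3, name: "ann" }, so this guard
    // passes even though id 3 exists, and the $push adds a duplicate
    db.groups.update({ _id: 1, members: { $ne: { id: 3 } } },
                     { $push: { members: { id: 3, name: "ann" } } });

    // dot notation compares ids only: blocks the push if any element has id 3
    db.groups.update({ _id: 1, "members.id": { $ne: 3 } },
                     { $push: { members: { id: 3, name: "ann" } } });

    // $elemMatch guard, for conditions spanning several fields of one element
    db.groups.update({ _id: 1, members: { $not: { $elemMatch: { id: 3 } } } },
                     { $push: { members: { id: 3, name: "ann" } } });

    // removing a member by id
    db.groups.update({ _id: 1 }, { $pull: { members: { id: 3 } } });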
[22:15:27] <ramsey> joannac: thanks. I figured it out using the group() method
[22:15:49] <ramsey> joannac: also, long time, no see! :-)
[22:23:12] <tPl0ch> Hi, I have created a replicaSet with 3 nodes on different machines and I am trying to write sessions from a distributed system to it
[22:24:02] <tPl0ch> I have added a keyFile on each server, and I was able to log in via the shell and use db.auth()
[22:26:53] <Derick> ramsey: you want the aggregation framework
[22:27:02] <ramsey> thanks
[22:27:33] <Derick> ramsey: look at example #2 at http://docs.php.net/mongocollection.aggregate
[22:28:19] <ramsey> Derick: cool.. this is what I ended up using, since I'm using the CLI: http://docs.mongodb.org/manual/reference/method/db.collection.group/
[22:28:32] <Derick> same thing really :-)
[22:28:38] <ramsey> :-)
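For the archive, a shell sketch of a "group by with counts" in the aggregation framework (collection and field names invented):

    // count documents per distinct status, like SQL's GROUP BY status
    db.orders.aggregate([
        { $group: { _id: "$status", count: { $sum: 1 } } }
    ]);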
[22:38:38] <Bohemian_> I need some help with a 2dsphere index.. i have checked my generated json by linting it many times.. but i am still not able to set up that index
[22:39:56] <Bohemian_> I am using PHP and my code looks something like this : http://pastebin.com/NjNp5SFa
[22:41:37] <Bohemian_> This is the generated (geo)JSON > http://pastebin.com/fV3Kfvaf which gets linted.. still i cannot set up that index... any pointers? anyone
[22:50:29] <Bohemian_> anyone ....
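For comparison, a minimal 2dsphere setup in the shell (names and coordinates invented); note that GeoJSON expects [longitude, latitude] order, a frequent cause of index and query problems:

    db.places.ensureIndex({ loc: "2dsphere" });
    db.places.insert({
        name: "test",
        loc: { type: "Point", coordinates: [4.4025, 51.2194] } // lng first!
    });
    db.places.find({
        loc: { $near: { $geometry: { type: "Point", coordinates: [4.40, 51.22] } } }
    });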
[23:17:05] <tonni> Would you say that MongoMapper is mature enough for a production environment?
[23:17:52] <tonni> assuming you have some experience with MM
[23:32:45] <zmansiv> hi, i have a question about indexing. say i have a "users" collection, and among other things, each user also has an array of "sessions" documents, each of which has a "token" value. can i create an index on the session token such that i could speed up looking up a user by one of its session tokens?
[23:34:08] <zmansiv> {"user" : { "sessions" : [ { "token" : "efuheifeiifh"} ] }} <-- for clarification
[23:35:18] <joannac> zmansiv: multikey index?
[23:35:51] <joannac> http://docs.mongodb.org/manual/core/index-multikey/
[23:36:09] <zmansiv> perfect, thanks
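A shell sketch of that multikey index, using zmansiv's field names (each users document shaped like { sessions: [ { token: ... } ] }):

    // indexing a field inside an array makes the index multikey automatically
    db.users.ensureIndex({ "sessions.token": 1 });

    // finds the user owning that session token via the index
    db.users.find({ "sessions.token": "efuheifeiifh" });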
[23:37:43] <rafaelhbarros> hey, question
[23:37:54] <rafaelhbarros> about the ulimits for mongodb
[23:37:57] <rafaelhbarros> how to make them persist?
[23:39:49] <joannac> rafaelhbarros: "Depending on your system's configuration and default settings, any change to system limits made using ulimit may revert following a system restart. Check your distribution and operating system documentation for more information."
[23:40:29] <rafaelhbarros> joannac wrong channel then, I should be asking about ubuntu and I feel stupid
[23:41:19] <joannac> rafaelhbarros: http://askubuntu.com/questions/34557/permanently-set-process-limit ?
[23:57:03] <logic_prog> is there something that does "git-like revisioning" in mongodb? I have documents that I'd like to store in mongodb, but I'd also like to have git-like revision history on them.
[23:57:16] <logic_prog> To have the latter, I need to store them in the file system + use git; but I'd prefer to put the files in mongodb instead
[23:57:27] <logic_prog> Is there a way to make this work -- to somehow get git-like persistent trees in mongodb?