[00:54:49] <ranman> hey guys is there a common design pattern for tracking "trending" data (increment % datetime or something?)
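(No one answered ranman in the log, but the question is a common one. There is no single canonical pattern; a frequently used approach is time-bucketed counters updated with `$inc` upserts, then ranking by the sum of recent buckets. A minimal sketch of the write side, with a hypothetical schema of one document per tracked item and counts keyed by hour bucket:

```python
from datetime import datetime, timezone

def trending_update(item_id, now=None):
    """Build a MongoDB upsert pair (query, update) that increments this
    hour's hit counter for item_id. Schema is hypothetical: one doc per
    item, with an embedded 'hits' subdocument keyed by hour bucket."""
    now = now or datetime.now(timezone.utc)
    bucket = now.strftime('%Y-%m-%dT%H')              # e.g. '2011-03-05T14'
    return {'_id': item_id}, {'$inc': {'hits.%s' % bucket: 1}}

# With a live collection (hypothetical handle):
# q, u = trending_update('page:42')
# db.counters.update_one(q, u, upsert=True)
```

To compute "trending", you would periodically sum the last N hourly buckets per item and sort; old buckets can be pruned with `$unset`.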
[05:52:40] <Gekz> hey guys, I was wondering, if I have three records, each with a field pointing to another record, such that record 1 -> record 2 -> record 3 -> record 1...
[05:52:49] <Gekz> is there an easy way to detect that loop?
[05:52:57] <Gekz> other than writing a simple function to do it?
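(Gekz's question also went unanswered. MongoDB has no server-side cycle detection for references between documents; the usual answer is exactly the "simple function" he mentions, run client-side while following the chain. A sketch, assuming the records have been fetched into a mapping of id to next-id (in MongoDB this would be a field like `next_id`, a hypothetical name):

```python
def find_cycle(records, start):
    """Follow the next-reference from `start`. Return the list of ids
    forming the cycle if one is reached, or None if the chain terminates.
    `records` maps record id -> id of the record it points to (or None)."""
    seen = {}          # id -> position in the walk
    order = []         # ids in visit order
    current = start
    while current is not None and current not in seen:
        seen[current] = len(order)
        order.append(current)
        current = records.get(current)
    if current is None:
        return None                      # chain ended; no loop
    return order[seen[current]:]         # the looping portion of the chain
```

For 1 -> 2 -> 3 -> 1, `find_cycle({1: 2, 2: 3, 3: 1}, 1)` returns `[1, 2, 3]`.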
[13:24:30] <wereHamster> johanhar: anyone? How can I use your db?
[14:02:46] <trepidaciousMBR> What happens if I generate an ObjectId myself, and insert a document, but that id turns out not to have been unique?
[14:06:51] <wereHamster> trepidaciousMBR: the same as when you try to insert a document with a field that is not unique
[14:06:53] <kali> trepidaciousMBR: the insert will fail
[14:07:14] <trepidaciousMBR> Ah great, so I can just try another ObjectId until it works
[14:07:31] <trepidaciousMBR> This is with Casbah driver, I guess I have to make sure I see that error
[14:07:41] <wereHamster> trepidaciousMBR: the chance that an ObjectId is not unique is pretty slim.
[14:12:24] <trepidaciousMBR> Yup, but I can catch it and regenerate pretty easily, so I might as well
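(The catch-and-regenerate approach trepidaciousMBR describes looks roughly like the following. Casbah is a Scala driver, so this is a Python sketch of the same idea; the exception class here is a stand-in for the driver's real duplicate-key error, such as `pymongo.errors.DuplicateKeyError`:

```python
class DuplicateKeyError(Exception):
    """Stand-in for the driver's duplicate-key exception, so this sketch
    is self-contained. Real drivers raise their own class on a duplicate
    _id insert."""

def insert_with_retry(insert, make_id, doc, max_attempts=3):
    """Insert `doc`, regenerating its _id on duplicate-key failures.
    `insert` is the driver's insert call; `make_id` generates a fresh id
    (e.g. a new ObjectId). Both are passed in to keep the sketch generic."""
    for _ in range(max_attempts):
        doc['_id'] = make_id()
        try:
            return insert(doc)
        except DuplicateKeyError:
            continue                     # collision: new id, try again
    raise DuplicateKeyError('gave up after %d attempts' % max_attempts)
```

As wereHamster notes, ObjectId collisions are vanishingly rare (they encode timestamp, machine, process, and a counter), so the retry loop is cheap insurance rather than an expected code path.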
[14:25:41] <Metathink> Is there a way to exclude the _id field with pymongo ? I tried that: db.posts.find({'tags' : 'tennis' }, {' _id' : 0 }), but it's not working with pymongo :/
[15:06:45] <wereHamster> Metathink: I think the id field is always returned
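(That guess is not quite right: `_id` is the one field pymongo *can* exclude in a projection. Metathink's snippet failed because the projection key was `' _id'` with a leading space, which names a field no document has, so `_id` still came back:

```python
# The original projection key was ' _id' (note the leading space), which
# matches nothing, so the exclusion was a no-op.
bad_projection = {' _id': 0}
good_projection = {'_id': 0}

# With a live collection (hypothetical handle):
# db.posts.find({'tags': 'tennis'}, good_projection)   # docs without _id
```

Note the asymmetry in MongoDB projections: every other field must be excluded implicitly (by listing only the fields you want), but `_id` is returned by default and is the only field you can explicitly switch off inside an otherwise-inclusive projection.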
[16:09:34] <bika> hello, how do you guys search for a string with accented letter? Is there a way to do this without storing a slugged version of the word?
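(bika's question also got no reply in the log. MongoDB string matching is accent-sensitive by default, so the "slugged version" workaround bika mentions is in fact the usual answer: store an accent-folded copy of the field alongside the original and index/query that. The folding itself is a few lines of standard-library Python:

```python
import unicodedata

def fold_accents(text):
    """Strip combining accents: 'café' -> 'cafe'. Storing this folded
    form next to the original field, and querying the folded field, is
    the common workaround for accent-insensitive search."""
    decomposed = unicodedata.normalize('NFD', text)
    return ''.join(ch for ch in decomposed if not unicodedata.combining(ch))
```

A case/accent-insensitive regex query is possible without the extra field, but it cannot use an index efficiently, so for anything beyond tiny collections the folded-copy approach wins.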
[17:38:28] <barcelona0202> im using tailable cursors, but i'm only interested in the NEW entries (after i start my python script) - so how can I start the iteration on the last (or in waiting mode for new ones) ? (using pymongo). thanks
[17:42:44] <barcelona0202_> i'm currently doing cursor = coll.find(tailable=True) // while cursor.alive // doc = cursor.next() .... but i want to make sure I start on the last one, and not iterate thru the whole collection. I'm only interested in new entries after script start
[17:43:22] <barcelona0202_> (im on a bad connection, but still here!)
[17:47:05] <barcelona0202_> could i do an addOption(32)?
[17:56:45] <barcelona0202_> how can i start at the last item (and simply wait for new ones) when using tailable cursors (pymongo) ?
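(To answer the `addOption(32)` question: 32 is the AwaitData wire-protocol flag, which makes the server block briefly waiting for new data instead of returning an empty batch, but it does not skip the documents already in the capped collection. The usual way to start at the end is to look up the newest document first and then tail with a filter on `_id` greater than it, which works when `_id` is increasing (ObjectIds roughly are, being timestamp-prefixed). A sketch with the filter-building part separated out; the cursor names are from modern pymongo (`CursorType.TAILABLE_AWAIT`), while the `tailable=True` keyword in the log is the older API:

```python
def new_entries_filter(last_id):
    """Build the query that skips everything up to and including last_id.
    An empty collection (last_id is None) means tail from the start."""
    return {'_id': {'$gt': last_id}} if last_id is not None else {}

# With a live capped collection (hypothetical handles):
# from pymongo import CursorType
# last = coll.find_one(sort=[('$natural', -1)])       # newest existing doc
# cursor = coll.find(new_entries_filter(last and last['_id']),
#                    cursor_type=CursorType.TAILABLE_AWAIT)
# while cursor.alive:
#     for doc in cursor:
#         handle(doc)                                 # only post-start docs
```

The tailable cursor must be re-created if it dies (e.g. the collection is dropped), so production loops usually wrap the whole thing in a retry.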
[20:41:10] <jeromegn> is it a good strategy to both embed and reference? for instance: a blog post document has comments, embedded seems to be a good idea here, but what if I want to retrieve all comments by a specific user? You'd have to find all blog posts, and then filter all the comments for a user id. what if I also stored comments in their own collection with a reference to the parent (or something)? Would that be overkill? Any downsides (except for the added complexity of managing data in two places)?
[20:42:03] <timeturner> references are the way to go if your dataset is expected to grow without limit
[20:43:47] <jeromegn> timeturner: but both referencing and embedding would make lookups faster, right? only one request vs 2. (I would embed more than just the ID, I'd also put in the content of the comment, a summary user object, etc.)
[20:44:57] <timeturner> but the question is, do you expect to embed a number of comments that will exceed the 16MB document size limit? and if you don't know, then how will you deal with it then?
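(The hybrid jeromegn describes, embedding a trimmed copy for fast post rendering while keeping the full comment in its own collection for per-user queries, can be sketched as two writes per new comment. All names here are hypothetical, and note the dual-write has no transaction protecting it, which is part of the "two places" complexity being discussed:

```python
def comment_writes(post_id, comment):
    """Build the two writes for the embed-and-reference pattern:
    an update pushing a trimmed copy onto the post document, and a
    standalone document for a 'comments' collection carrying a
    post_id back-reference."""
    embedded = {'_id': comment['_id'],
                'user_id': comment['user_id'],
                'summary': comment['body'][:140]}     # trimmed copy only
    push_spec = {'$push': {'comments': embedded}}
    standalone = dict(comment, post_id=post_id)       # full comment + back-ref
    return push_spec, standalone

# With live collections (hypothetical handles):
# push, full = comment_writes(post['_id'], comment)
# db.posts.update_one({'_id': post['_id']}, push)
# db.comments.insert_one(full)
# db.comments.find({'user_id': uid})    # all of one user's comments, one query
```

Embedding only a summary (rather than the full comment) also bounds how fast the post document grows toward timeturner's 16MB concern, though it does not remove the limit for very popular posts.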