[08:52:17] <coalado> I wonder if it is possible to have kind of "Views" in Mongodb. for example a Self Updating collection based on a map/reduce command
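(MongoDB of that era has no native views, but map/reduce can write its results into a named output collection with `out: {merge: …}` or `out: {reduce: …}`, which approximates a self-updating "view" if the job is re-run over new data. A minimal pure-Python sketch of the reduce-into-an-output-collection idea; the `ip` field and the count logic are illustrative, not from the log:)

```python
from collections import defaultdict

def map_reduce(docs, map_fn, reduce_fn, out):
    """Group mapped (key, value) pairs, reduce each group, and fold the
    results into the existing `out` dict, which plays the role of the
    persistent 'view' collection."""
    emitted = defaultdict(list)
    for doc in docs:
        for key, value in map_fn(doc):
            emitted[key].append(value)
    for key, values in emitted.items():
        if key in out:                       # re-reduce against the old view value
            values = values + [out[key]]
        out[key] = reduce_fn(key, values)
    return out

# Count requests per ip, then fold a second batch into the same "view".
map_fn = lambda d: [(d["ip"], 1)]
reduce_fn = lambda key, values: sum(values)

view = {}
map_reduce([{"ip": "10.0.0.1"}, {"ip": "10.0.0.2"}, {"ip": "10.0.0.1"}],
           map_fn, reduce_fn, view)
map_reduce([{"ip": "10.0.0.1"}], map_fn, reduce_fn, view)
```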
[09:12:49] <superMustafa67_> NodeX: thanks for your answer
[09:13:13] <superMustafa67_> Another question: I have 2 kinds of data source
[09:13:21] <superMustafa67_> One is ticket from logger
[09:13:26] <coalado> Does anybody use MongoVue? There should be a Save/Open Option to save and open map/reduce queries. But the button is missing in my version somehow.
[09:13:45] <superMustafa67_> And another is tunnel from another machine
[09:14:08] <superMustafa67_> I need to make a resolution between ticket specific entry and the tunnel specific entry
[09:14:53] <superMustafa67_> yes, for example : One ticker is : <ip> <request> ...
[09:15:02] <NodeX> https://jira.mongodb.org/browse/SERVER-164?page=com.atlassian.jira.plugin.system.issuetabpanels%3Aall-tabpanel <--- you may want to watch that for compression
[09:15:24] <superMustafa67_> And one tunnel instance is : <number><ip> ..
[09:15:40] <superMustafa67_> and at insertion , I want to find the tunnel reference for the ticket
[09:15:45] <superMustafa67_> and merge to one entry
[09:16:03] <NodeX> the best thing to do in that case is an upsert on a familiar field
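(In the shell syntax of that era, NodeX's suggestion would be roughly `db.entries.update({ip: "..."}, {$set: {...}}, true)`, where the third argument enables upsert. A pure-Python sketch of the merge-on-a-familiar-field behavior; the field names `ip`, `request`, and `tunnel` are illustrative:)

```python
def upsert(collection, query, fields):
    """Mimic an upsert on a 'familiar field': if a document matching
    `query` exists, merge `fields` into it (like $set); otherwise
    insert a new document built from the query plus the fields."""
    for doc in collection:
        if all(doc.get(k) == v for k, v in query.items()):
            doc.update(fields)               # merge ticket and tunnel data
            return doc
    doc = {**query, **fields}
    collection.append(doc)
    return doc

entries = []
# Ticket arrives first: <ip> <request>
upsert(entries, {"ip": "10.0.0.1"}, {"request": "GET /"})
# Tunnel record for the same ip merges into the same entry: <number> <ip>
upsert(entries, {"ip": "10.0.0.1"}, {"tunnel": 42})
```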
[12:56:17] <PDani> i would like to write to mongodb asynchronously, and for every write operation i'd like to send a getlasterror asynchronously, and read the results, if any are received, pair the results with my requests by request id, and asynchronously decide which writes succeeded and which didn't. is it possible?
[12:57:09] <algernon> if you use a thread for each write, then, as far as I remember, yes.
[12:57:40] <PDani> is it possible without threading using select()?
[12:58:07] <algernon> if you make sure not to send another write before you have the result of the previous getlasterror, then yes
[12:59:55] <PDani> that's a problem, because right now i have two bottlenecks: context switches (threading is the enemy), and network roundtrip time (i can't wait for the previous write's result)
[13:01:29] <algernon> well, you could use a thread pool of writers
[13:01:48] <algernon> where each writer would do one write/getlasterror at a time, but you could do as many writes as threads you have.
[13:02:31] <algernon> better than a thread/write, and perhaps allows a bit more throughput than a single thread that always has to wait
[13:02:55] <PDani> ok, but i still have many context switches in the client, that's why i should avoid threading
[13:04:03] <algernon> well, you either wait for results, always, or not, and pay the price for threading. (or find another way to check whether a write succeeded)
[13:04:52] <algernon> ie, if you don't need the error message, and only want to check if the write arrived, you could query it for later
[13:06:03] <algernon> erm, query it later. eg, insert({_id: "foo", ...}) and later find({_id: "foo"}, {_id: 1}) - if the find returns something, the insert hit the db, and the only requirement is that the find happens later than the insert
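(algernon's check-it-later pattern, sketched in pure Python with an in-memory stand-in for the collection so no server is needed; the stand-in's `insert`/`find_one` only mirror the driver calls named in the log:)

```python
import uuid

class FakeCollection:
    """In-memory stand-in for a MongoDB collection."""
    def __init__(self):
        self.docs = {}
    def insert(self, doc):
        self.docs[doc["_id"]] = doc          # fire-and-forget, no getlasterror
    def find_one(self, query, fields=None):
        return self.docs.get(query["_id"])

def write_confirmed(coll, _id):
    """Instead of pairing each write with a getlasterror, read the _id
    back later; a hit means the insert reached the database."""
    return coll.find_one({"_id": _id}, {"_id": 1}) is not None

coll = FakeCollection()
doc_id = uuid.uuid4().hex                    # client-generated id so it can be queried
coll.insert({"_id": doc_id, "payload": "..."})
```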
[13:06:04] <PDani> how can i query a specific write?
[13:10:13] <PDani> another thing came to my mind: what if i have a connection pool in client? Every connection has a state form (free, waiting_for_getlasterror), and when i have to write, I choose a free connection, send a request and a getlasterror command, put the connection in waiting_for_getlasterror state, and when i'm out of free connection, i try to read some responses from connections
[13:10:40] <PDani> and it can be accomplished in one process
[13:10:53] <algernon> that's what I meant with the thread pool
[13:11:16] <algernon> but connection pool works just as well, yes
[13:11:18] <PDani> the "thread" word misled me, because i'd implement this with select()
[13:38:48] <remonvv> connection pool isn't an alternative to a thread pool. If you have the option to have a thread pool (or, multiple threads in general) you should use it for things like this.
[13:48:33] <remonvv> PDani, which language are we talking about here? You shouldn't need many context switches at all for a MongoDB driver.
[13:55:49] <remonvv> asynchronous on what level? It has to be invoked on the same connection and since you're not allowed to use that connection in between the driver parks the connection (i assume it does) until the selector for that channel has reads ready. That shouldn't result in any cpu load.
[13:57:59] <remonvv> On higher levels than that an asynchronous getLastError doesn't make much sense. The whole point of the GLE call is to prohibit asynchronous writes for w > 0 writes. If you want w <= 0 writes just skip the GLE.
[14:00:24] <PDani> remonvv, yeah, finally i decided to use threading. i thought i could implement something like writing-writing-writing, and sometimes reading acks for my writes, but mongodb obviously isn't designed for it. so i will use separate threads with separate connections to speed things up at the price of some context switches
[14:07:36] <remonvv> well you can do that on a threading level but you need to park the busy connections
[14:07:40] <remonvv> the two things aren't that related
[14:08:08] <remonvv> a parked connection takes 0 cpu and a blocking thread takes very, very few
[14:08:31] <remonvv> your max throughput should not be bottlenecked by CPU, not even close.
[14:08:48] <remonvv> The only CPU intensive thing happening driver side is BSON serialization and some housekeeping.
[14:28:21] <NodeX> I don't know, I'm on 2.0.5, I haven't had the chance to update
[14:29:09] <venom00ut> I'll give v8 a try, I just hope it works, I'm not on a production system
[15:29:48] <remonvv> At some point someone will have to explain to me why SM vs V8 is a hugely relevant issue for production systems ;)
[15:31:10] <Derick> SM is not reentrant so can't run more than one at the same time...
[15:37:05] <venom00ut> remonvv, just because I've been told that v8 support is experimental in mongodb
[15:43:48] <remonvv> Derick, still not that relevant for production systems though. V8 being re-entrant "fixes" JS concurrency somewhat but that's about it. Performance is still vastly inferior to native functionality and most functionality that currently requires JS and scales is replaced by the new aggregation framework. MongoDB should probably drop JS altogether.
[15:44:24] <Derick> not disagreeing there. I've always advocated to stay away from M/R or JS as much as you can
[15:50:28] <remonvv> Yeah exactly, it just doesn't scale very well.
[18:24:29] <e-dard> Hi. If I have a list of dictionaries in Mongo, and I'm searching for all documents where one of the dictionaries in the list has a certain value for one of its keys, how do I then unset the matching dictionaries, so that they are removed from their parent lists without the other dictionaries in the list being affected etc?
[18:24:50] <e-dard> Hmmm ^ let me know if the last message was over 512 and got cut off..
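(e-dard's question goes unanswered in the log; the usual answer is `$pull`, which removes every array element matching a condition while leaving the other elements alone - in the shell of that era, something like `db.coll.update({"items.k": 1}, {$pull: {items: {k: 1}}}, false, true)`, with the fourth argument enabling a multi-document update. A pure-Python sketch of the `$pull` semantics; `items` and `k` are illustrative names:)

```python
def pull(doc, array_field, condition):
    """Mimic MongoDB's $pull: drop every element of doc[array_field]
    whose fields all match `condition`; other elements are untouched."""
    doc[array_field] = [
        el for el in doc[array_field]
        if not all(el.get(k) == v for k, v in condition.items())
    ]
    return doc

doc = {"items": [{"k": 1, "x": "a"}, {"k": 2, "x": "b"}, {"k": 1, "x": "c"}]}
pull(doc, "items", {"k": 1})
```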
[19:34:29] <hadees> so i'm trying to figure out what i'm doing wrong in this query db.request_end_events.find({"t": {$gte: new Date(2012, 6, 18), $lt: new Date(2012, 6, 19)} }) if say the document is { "t" : ISODate("2012-06-18T22:05:07Z") }
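(The likely bug here: JavaScript's `Date(year, month, day)` counts months from 0, so `new Date(2012, 6, 18)` is July 18, not June 18, and the range misses `ISODate("2012-06-18T22:05:07Z")`; in the shell the fix would be `new Date(2012, 5, 18)` / `new Date(2012, 5, 19)`. The same range check in Python, whose `datetime` uses one-based months:)

```python
from datetime import datetime

# For June 18 in Python, month is simply 6 (one-based, unlike JS Date).
start = datetime(2012, 6, 18)
end = datetime(2012, 6, 19)
doc_t = datetime(2012, 6, 18, 22, 5, 7)      # the stored ISODate value

in_range = start <= doc_t < end              # mirrors {$gte: start, $lt: end}
```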