[00:25:29] <Axy> because I believe it explains the issue
[00:25:53] <Axy> I need to add "index" values to all my existing documents. I already have a method that adds an index to new documents, but I need to add it to the old ones too
[00:36:44] <StephenLynx> function B calls next and checks if anything arrived. if it did, it adds the operation to the operations array and calls function B again, passing the cursor and the operations array
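(A minimal Node.js sketch of the recursive walk described above, assuming an already-connected `db`, a hypothetical `students` collection, and a driver version that has bulkWrite; all names are illustrative, not from the log:)

    // function B: pull one document off the cursor; if one arrived, queue an
    // update and recurse with the same cursor and the growing operations array.
    function addIndexes(cursor, operations, nextIndex, callback) {
      cursor.next(function (err, doc) {
        if (err) return callback(err);
        if (!doc) {
          // Cursor exhausted: flush all queued updates in one bulk write.
          if (!operations.length) return callback();
          return db.collection('students').bulkWrite(operations, callback);
        }
        operations.push({
          updateOne: {
            filter: { _id: doc._id },
            update: { $set: { index: nextIndex } }
          }
        });
        addIndexes(cursor, operations, nextIndex + 1, callback);
      });
    }

    // Only the old documents lack the field, so start from those, in insertion order.
    var cursor = db.collection('students')
      .find({ index: { $exists: false } })
      .sort({ _id: 1 });
    addIndexes(cursor, [], 0, function (err) { /* done */ });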
[00:51:26] <StephenLynx> yeah, using the incremented field is an option
[00:51:34] <StephenLynx> as some sort of pre-aggregation
[00:52:05] <Axy> but I'm just unable to imagine a solution for an (example) university database where there are only 3 classes, maths, physics, and history --- and all students (separate documents) have scores for all three fields
[00:52:17] <Axy> what if I want to get the 10th student by history score
[00:52:33] <Axy> I was imagining this to work like: indexing all fields and somehow getting the nth item, easily
[00:56:13] <StephenLynx> because instead of generating the page on every request, you generate it once
[00:56:26] <Axy> right, but if the number is static
[00:56:35] <Axy> I mean, if 41 returns the same thing every time
[00:56:54] <Axy> but if it's a ranking that changes, then it matters
[00:57:10] <Axy> I mean if 41 is the "41st highest-scoring student in the maths class"
[00:57:10] <StephenLynx> i understand your case now
[00:57:20] <StephenLynx> you use the path as a parameter
[00:57:26] <StephenLynx> instead of using a regular get parameter
[00:57:30] <Axy> so yeah, we already have indexes, we can sort things in a specific order, but we can't reach them by index number
[00:57:51] <StephenLynx> this is what I would do in your case:
[00:58:08] <StephenLynx> I would have pages listing several elements per page.
[00:58:20] <StephenLynx> each element would have a link using its _id
[00:58:46] <StephenLynx> now the page that opens the element doesn't need to use its index, because the index is already being used on the page that links to the element.
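(A sketch of that listing/detail split, assuming Express routes, a connected `db`, and an `images` collection with a sequence field `seq`; all of these names are assumptions:)

    var ObjectID = require('mongodb').ObjectID;
    var PAGE_SIZE = 20;

    // Listing page: position only matters here, handled with sort/skip/limit.
    app.get('/board/:page', function (req, res) {
      var page = parseInt(req.params.page, 10) || 1;
      db.collection('images').find({})
        .sort({ seq: -1 })
        .skip((page - 1) * PAGE_SIZE)
        .limit(PAGE_SIZE)
        .toArray(function (err, docs) {
          // Each rendered element links to /image/<_id>.
          res.render('board', { images: docs });
        });
    });

    // Detail page: fetched straight by _id, no index involved.
    app.get('/image/:id', function (req, res) {
      db.collection('images').findOne(
        { _id: new ObjectID(req.params.id) },
        function (err, doc) { res.render('image', { image: doc }); });
    });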
[01:10:32] <Axy> and all of the added images have a number.. for instance the 5000th image
[01:10:35] <StephenLynx> if it is an imageboard of sorts, I think I would still use saved HTML
[01:10:37] <Axy> it's the 5000th image that's added
[01:10:55] <Axy> that way, without having access to the board index or anything like that, people can just try different URLs and hunt for cool stuff
[01:11:16] <StephenLynx> I just create URLs using this sequence
[01:11:28] <StephenLynx> and each board has a counter that I $inc to get the next one
[01:11:45] <StephenLynx> files use it, threads, posts
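(A sketch of that per-board $inc counter, assuming a `boards` collection with a `lastId` field; both names are guesses:)

    // Atomically bump the board's counter and hand back the new value, which
    // becomes the sequential number of the next file/thread/post.
    function nextSequence(boardUri, callback) {
      db.collection('boards').findOneAndUpdate(
        { uri: boardUri },
        { $inc: { lastId: 1 } },
        { returnOriginal: false },  // return the incremented document
        function (err, result) {
          callback(err, result && result.value.lastId);
        });
    }

    // Usage: nextSequence('art', function (err, n) { /* n-th image of the board */ });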
[01:11:57] <Axy> maybe it's the right way to do this, I just want to do this dynamically. For two reasons -- one is that I really love dynamic stuff and two is that this is not really a commercial project or something, this is just me learning nodejs and mongodb that's all
[02:41:50] <akp> is it possible to use limit_execute with 2 backends? i.e. something like limit_execute data="db outbound carrier limit redis tn-limit tn limit2"
[02:44:21] <akp> is it possible to use limit_execute like this - <action application="limit_execute" data="db realm resource1 limit1 redis realm resource2 limit2 " />
[02:48:54] <cheeser> that looks like a redis question
[02:49:29] <akp> ok, then change it to <action application="limit_execute" data="db realm resource1 limit1 db realm resource2 limit2 " />
[06:46:38] <steamio> hi all, I have a quick question about replica sets - The scenario is that I have a large single instance and I'm creating a replica set. Other than restarting the main instance for replication support, will there be any downtime while the secondary is syncing?
[07:34:33] <joannac> steamio: no downtime, but you'll see some load on the primary
[13:52:43] <lmatteis> hi guys. so i have a collection with over 11 thousand documents, each representing a log item that recorded the status of my HTTP server
[13:53:00] <lmatteis> now i'd like to present these results to a user as a webpage, but of course i can't send all 11k records to the user
[13:53:30] <lmatteis> what could be a nice query that would show, say, the average for every month?
[13:55:26] <lmatteis> the problem is that my date is stored as an epoch number :(
[14:25:52] <StephenLynx> so you weed out anything unwanted.
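(A sketch of such a pipeline, assuming the epoch field is called `ts` and stored in milliseconds, and that the measured value is `responseTime`; both names are guesses:)

    db.collection('logs').aggregate([
      // Weed out anything unwanted first.
      { $match: { responseTime: { $exists: true } } },
      // Adding a number to a Date inside the pipeline yields a Date, which
      // converts the stored epoch millis without touching the documents.
      { $project: { responseTime: 1, date: { $add: [new Date(0), '$ts'] } } },
      { $group: {
          _id: { year: { $year: '$date' }, month: { $month: '$date' } },
          avg: { $avg: '$responseTime' },
          count: { $sum: 1 }
      } },
      { $sort: { '_id.year': 1, '_id.month': 1 } }
    ]).toArray(function (err, months) {
      // roughly one small document per month, safe to send to the user
    });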
[14:26:39] <lmatteis> StephenLynx: thanks. aggregate queries are read-only right? i don't want to screw things up in this db
[14:27:48] <StephenLynx> by default yes. and even then, you can only write it to a separate collection.
[14:28:01] <StephenLynx> the collection you read cannot be changed on the aggregate, afaik
[14:28:04] <lmatteis> k there's just too much data i guess :(
[14:28:11] <lmatteis> cuz a $match is much slower than a find
[14:28:36] <StephenLynx> yeah, so I heard aggregate is slower than a regular find.
[14:28:44] <StephenLynx> that is why I suggested you pre-aggregate this data.
[14:28:53] <cheeser> i think you *can* write back to the same collection, actually, so you need to be really careful because it'll clobber whatever's in that collection
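(A sketch of the pre-aggregation idea: run the heavy pipeline once, e.g. from a cron job, and serve requests with a cheap find() against the result. `monthly_stats` is an assumed name, and note cheeser's warning, since $out replaces its target wholesale:)

    db.collection('logs').aggregate([
      { $project: { responseTime: 1, date: { $add: [new Date(0), '$ts'] } } },
      { $group: {
          _id: { year: { $year: '$date' }, month: { $month: '$date' } },
          avg: { $avg: '$responseTime' }
      } },
      // $out must be the last stage; it replaces 'monthly_stats' entirely,
      // so never point it at a collection whose contents you still need.
      { $out: 'monthly_stats' }
    ]).toArray(function (err) {
      // Web requests now just run db.collection('monthly_stats').find()
    });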
[15:00:34] <MalteJ> I am new to mongodb and have a problem: I have a document "user". Every user is allowed to have a maximum of, let's say, 5 documents of type "virtualmachine". How can I realize this so I don't have problems with concurrency (e.g. when reading all virtualmachines of a user, counting them and then adding a new one if the count is < 5)?
[15:05:52] <mantovani> MalteJ: you don't want to have concurrency problems? don't use mongodb
[15:06:17] <mantovani> MalteJ: why not use postgres?
[15:07:12] <MalteJ> for most of my stuff eventual consistency is ok
[15:07:52] <MalteJ> I am not sure if I should add an RDBMS to the stack just for some edge cases
[15:08:46] <mantovani> why are you using mongodb?
[15:10:36] <JamesHarrison> MalteJ: you can atomically update a document
[15:10:48] <mantovani> ACID is what make your DB reliable
[15:10:53] <JamesHarrison> what you can't do, which mantovani is alluding to, is transactions
[15:11:23] <JamesHarrison> ACID isn't what makes your DB reliable; it's just a convenient way to describe a particular pattern for consistency management, and it's not the only way
[15:11:27] <mantovani> if you need transactions, you should use an ACID DB.
[15:11:38] <MalteJ> JamesHarrison: I could use the update method and query for the user and a timestamp of last change, right?
[15:11:43] <mantovani> JamesHarrison: it's the only way that works
[15:13:48] <JamesHarrison> you can do two-phase commits, yes, or you can use the "update if current" pattern or similar
[15:13:55] <MalteJ> JamesHarrison: can I query for an array length?
[15:14:25] <JamesHarrison> MalteJ: you could do, though I'd probably store a counter
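(Two sketches of that atomic guard, one per approach; the `users` collection, `vms` array, and `vmCount` field are assumptions:)

    // Array variant: 'vms.4 exists' means there are already 5 entries, so the
    // $push only matches while the user is under the limit. The check and the
    // insert happen in one atomic update, so there is no read-count-write race.
    db.collection('users').updateOne(
      { _id: userId, 'vms.4': { $exists: false } },
      { $push: { vms: newVm } },
      function (err, result) {
        if (result.modifiedCount === 0) { /* at the limit, or no such user */ }
      });

    // Counter variant, as suggested above: the same guard on a plain field.
    db.collection('users').updateOne(
      { _id: userId, vmCount: { $lt: 5 } },
      { $push: { vms: newVm }, $inc: { vmCount: 1 } },
      function (err, result) { /* modifiedCount === 0 means at the limit */ });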
[15:14:50] <JamesHarrison> mantovani is correct that if you're implementing from scratch, this doesn't sound like something mongodb is likely to be hugely suited for
[15:16:49] <MalteJ> I have to synchronize my data with the VM hypervisor anyway. So an RDBMS with strong consistency does not really help when the DB and the real world are not consistent.
[15:16:51] <mantovani> so these operations can only be used if you have your data on a single instance?
[15:17:25] <MalteJ> mantovani: no, your writes go through the master
[15:17:55] <JamesHarrison> MalteJ: basically what I'd do would be to store an updated_at timestamp or version field on the object, query for that version/timestamp, and then findAndModify using the ID _and_ version to insert the document and update the version/timestamp
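(A sketch of that version-guarded pattern; the `version` field, the `vms` array, and the retry wiring are all assumptions:)

    // Read the document, decide, then write only if nobody wrote in between:
    // the filter matches both _id and the version we read, so a concurrent
    // writer makes the update match nothing instead of racing us.
    db.collection('users').findOne({ _id: userId }, function (err, user) {
      if (user.vms.length >= 5) return callback(new Error('at the limit'));
      db.collection('users').findOneAndUpdate(
        { _id: userId, version: user.version },
        { $push: { vms: newVm }, $inc: { version: 1 } },
        function (err, result) {
          if (!result.value) {
            // Lost the race: the version moved. Re-read and retry.
          }
        });
    });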
[15:24:51] <JamesHarrison> in most use cases it's unnecessary
[15:25:18] <JamesHarrison> if you need real HA (and most people don't) then you can do it quite readily these days without any additional software, just core postgresql
[15:34:41] <JamesHarrison> brave new world - I'd argue stored procedures have very limited use cases, and the overhead of database agnosticism is outweighed heavily by the improvement in developer productivity and reduced lock-in risk
[15:34:55] <JamesHarrison> there are still places where stored procedures/triggers etc make sense, sure
[15:37:06] <MalteJ> the question is, what is cheaper? coding, testing, debugging for the super-hyper proprietary rdbms functions or just adding more server hardware?
[15:37:55] <JamesHarrison> most of the time unless you're dealing with very tight performance requirements or very large amount of data, the latter is usually the best option
[15:38:12] <mantovani> MalteJ: or, much simpler: try amazon and you will have your answer.
[15:41:12] <mantovani> MalteJ: actually when you use an RDBMS, since it does a lot for you, you will save on code, tests and debugging
[15:41:35] <mantovani> if you need to implement manually what it does for you, you will have more code, tests and debugging