[02:31:49] <Mia> I need to get the nth document in my collection, what's the most efficient way to do it?
[02:32:02] <Mia> .skip seems really, really slow with a huge number of docs
[02:32:32] <Boomtime> yep, that's because .skip() is basically the same as enumerating the documents you asked for, just on the server instead of at the client
[02:32:49] <Mia> I realized that, Boomtime - so what else can I do?
[02:32:52] <Boomtime> you are far better off coming up with a query to get what you actually want
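(A minimal sketch of the difference in mongo shell syntax; the collection name "images" and the indexed field "index_number" are hypothetical placeholders for whatever the schema ends up using:)

    // .skip(n) walks and discards n documents on the server before returning one:
    db.images.find().skip(99999).limit(1)
    // a query on an indexed field seeks straight to the document instead:
    db.images.find({ index_number: 99999 }).limit(1)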
[02:33:25] <Mia> well this is for an imageboard thing, all of my images have domain.com/id urls, so domain.com/1 is the first image
[02:33:35] <Mia> so I just want to do domain.com/n to get the nth image
[02:33:47] <Boomtime> why not just ask for that then?
[02:36:13] <Boomtime> how about you pastebin (or equivalent) your current query, and an example document
[02:36:16] <Mia> the image board is mine, I have the images stored in my mongodb (I mean the URLs of the images, not the actual images) and I want to render the page with the corresponding URLs when a domain.com/id page is visited
[02:45:54] <Mia> Boomtime, what is not clear about my problem
[02:46:32] <Mia> The problem is: I want to reach my db documents with an order number
[02:46:38] <Boomtime> if you cannot change the URLs that exist already, then change the documents that are stored to have the parameter you need to search on
[02:47:03] <Boomtime> then add a specific field to the document which is "index_number" or some such
[02:47:11] <Boomtime> that way you can hit it exactly and efficiently
[02:47:23] <Boomtime> btw, you can use the _id for this purpose
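(A sketch of that approach in mongo shell; the collection and field names are illustrative:)

    // One-time setup: a unique index makes lookups by index_number fast
    // and guards against duplicate numbers being assigned.
    db.images.createIndex({ index_number: 1 }, { unique: true })
    // Serving domain.com/17 then becomes a single indexed lookup:
    db.images.findOne({ index_number: 17 })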
[02:47:24] <Mia> That makes sense - but then I will need to increment manually, right?
[02:47:32] <Mia> because it's not relational it won't auto-increment
[02:47:58] <Mia> Boomtime, enlighten me please, this might be my solution
[02:48:00] <Boomtime> that is true, but it's not hard to construct
[02:48:21] <cheeser> well, relational isn't really relevant to *that*
[02:48:25] <Boomtime> the simplest (though slightly naive) solution is to use .count() as the index number
[02:48:38] <Mia> I mean I was just thinking, since there is an index, I could just *magically* use that index/order to get the nth document -- but looks like it's not that easy
[02:48:54] <cheeser> you can use findAndModify() for atomically increasing numbers if you need
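(The counters pattern cheeser is referring to, roughly as the MongoDB docs describe it; the "counters" collection and "imageid" key are the conventional names from that tutorial:)

    // A one-document "counters" collection hands out sequence numbers atomically.
    db.counters.insert({ _id: "imageid", seq: 0 })

    function getNextSequence(name) {
        var ret = db.counters.findAndModify({
            query: { _id: name },
            update: { $inc: { seq: 1 } },
            new: true          // return the document *after* the increment
        });
        return ret.seq;
    }

    // Unlike the naive .count() approach, two concurrent inserts can never
    // be handed the same number.
    db.images.insert({ index_number: getNextSequence("imageid"), url: "..." })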
[02:49:19] <Mia> cheeser, excuse my noobness - I was dealing with getting a random document from my db last week, and everyone was suggesting I use a relational db just because of its powers for random and auto-increment stuff
[02:49:24] <Boomtime> well, you've kind of dug yourself a hole - you've implemented something without understanding its behavior
[02:50:32] <Mia> and this is a beautiful pain in the end
[02:50:35] <cheeser> neither random access nor "auto increment stuff" is an aspect of a relational database.
[02:50:36] <Boomtime> every implementation of random I've seen (e.g. mysql ORDER BY RAND()) only works at small scales
[02:51:13] <Mia> I really like mongodb because of its ease of use with nodejs
[02:51:38] <cheeser> by random, do you mean indexed access? "get me item 17?"
[02:51:39] <Mia> I really don't plan on going for a relational db at this point - so I will probably just pick the best solution possible for my case and go with it
[02:51:48] <Boomtime> consider that the very term "random" is the exact opposite of what a database is intended to do
[02:52:16] <Mia> cheeser, now I don't need it any more, but basically what I needed was to get a random document from the whole db
[02:52:38] <cheeser> fwiw, 3.2 will get you that: https://jira.mongodb.org/browse/SERVER-533
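(That ticket became the $sample aggregation stage, so in 3.2+ random selection happens server-side:)

    // Pick one document at random, server-side (MongoDB 3.2+, SERVER-533):
    db.images.aggregate([ { $sample: { size: 1 } } ])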
[02:52:41] <Mia> I did it by adding a random float to each of my documents and doing a greater-than query with limit 1
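(Mia's scheme, as described, sketched with a hypothetical "rand" field; Boomtime's bias criticism below applies to exactly this shape:)

    // Each document stores a random float at insert time:
    db.images.insert({ url: "...", rand: Math.random() })
    // A lookup draws a fresh random value and takes the first doc past it.
    // Bias: a doc's chance of selection equals the gap between its rand value
    // and its predecessor's, and those gaps vary wildly.
    db.images.find({ rand: { $gte: Math.random() } }).sort({ rand: 1 }).limit(1)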
[02:53:29] <Mia> so cheeser now what I need is to get document by index
[02:53:43] <Mia> and Boomtime was guiding me about it
[02:54:19] <Mia> since find().skip(n).limit(1) is too slow for a huge number of docs - I need an alternate method
[02:54:35] <Boomtime> "Mia: I did it by adding a random float to each of my documents and doing a greater-than query with limit 1" <- this is a horrendously biased method
[02:55:03] <Mia> I know, but I don't need accuracy for that one
[02:55:08] <Boomtime> if you actually test the distribution you get from that method you'll find that some documents get selected far more often than others - often by orders of magnitude
[02:55:30] <Mia> and the other method I tried was getting the total number of documents and using the magically inefficient .skip() :/ Boomtime
[02:55:33] <Boomtime> and some documents may not ever be selected (ever, ever - it might actually be impossible)
[02:55:52] <Mia> Boomtime, yes I know, if no docs are returned I just run it once again
[02:56:07] <Mia> I know it's not a good solution it's just what I could find
[02:56:35] <Mia> if I can learn how this is done efficiently I will correct my stupid random method as well
[02:57:11] <cheeser> get the first and last on whatever range you want. subtract their time components. multiply that by some random number between 0 and 1. add *that* to your min key's timestamp. search for any _id greater than that. limit(1)
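(A rough mongo shell sketch of that recipe, assuming default ObjectId _ids; note it is still biased toward documents that follow long gaps in insertion time:)

    // Find the timestamp range covered by the collection's ObjectIds.
    var minId = db.images.find().sort({ _id: 1 }).limit(1).next()._id;
    var maxId = db.images.find().sort({ _id: -1 }).limit(1).next()._id;
    var lo = minId.getTimestamp().getTime(), hi = maxId.getTimestamp().getTime();
    // Pick a random instant in that range and build an ObjectId from it
    // (8 hex chars of epoch seconds, remaining 16 chars zeroed).
    var secs = Math.floor((lo + Math.random() * (hi - lo)) / 1000);
    var randId = ObjectId(("00000000" + secs.toString(16)).slice(-8) + "0000000000000000");
    // Take the first document at or past that point.
    db.images.find({ _id: { $gte: randId } }).sort({ _id: 1 }).limit(1)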
[03:06:34] <Mia> I mean what I want to do is to provide a specific number and get the document at that index
[03:06:40] <cheeser> well, /n/ comes from somewhere. i don't think i know enough of your data model and needs to really bang together a solution as such
[04:31:32] <Boomtime> ok, that doesn't really say what you're doing with it though :p
[04:32:01] <defk0n> can anyone help with this weird behaviour I'm getting with the $and operator? http://pastebin.com/4f6trdY2 - it seems like mongodb tries to fetch each field independently
[07:44:42] <kenalex> are there any mongodb use cases outside CMS, online gaming stats data stores, and social networking?
[07:47:06] <jamiel> kenalex: To begin with, what would make you think those are the only three use cases out of every possible piece of software which requires a database?
[08:03:10] <kenalex> jamiel: those are what I came across so far
[08:03:52] <kenalex> I am trying to find other use cases to understand how mongodb is used to solve those problems and the challenges encountered when using it
[08:15:09] <dddh> kenalex: you need big data sets to play with?
[08:30:58] <amz3> I know mongodb because of analytics actually
[08:31:27] <amz3> not only for gaming stats - more like analytics in general
[08:32:52] <amz3> I have a question regarding mongodb implementation. How do you implement indexes using wiredtiger, do you use a particular table for each column?
[08:51:14] <alexi5> Is it normal to have mongodb deployments with only one node ?
[09:00:22] <jamiel> Morning all, having an issue where we have added some new shards, and can see that they are balancing and receiving chunks but from our app servers we are seeing the following error: multiple errors for op : write results unavailable from 192.168.3.16:27170 :: caused by :: Location28563 cannot send batch write operation to server 192.168.3.16:27170
[09:00:23] <jamiel> (192.168.3.16) :: and :: write results unavailable from 192.168.3.15:27172 :: caused by :: Location28563 cannot send batch write operation to server 192.168.3.15:27172 (192.168.3.15)
[09:00:38] <jamiel> have confirmed that all nodes can talk to each other
[09:03:07] <jamiel> This happens when performing an update on any of the sharded collections
[09:22:11] <amz3> seems like mongodb doesn't use the index facility of wiredtiger
[09:22:17] <alexi5> Do you guys think a polling application is a good use case for mongodb ?
[17:57:56] <Axy> it's an existing collection, I just want to give an incremental "custom_id" to all of the previous items
[17:58:05] <Axy> and from now on I would like to add things incrementally
[18:50:37] <Axy> cheeser, I've been looking for a solution to the problem I explained yesterday - I've made a topic for it, maybe you can give me some insight http://stackoverflow.com/questions/31897103/get-nth-item-from-a-collection
[21:00:55] <carver404> hi, i'm working with minimongo using browserify. However, it seems to be adding only a max of 50 documents per collection! why is
[21:01:07] <carver404> this happening? afaik 50 is really small compared to the usual max number of docs in a collection. any pointers?
[21:04:15] <Axy> I've followed this tutorial to create an auto incrementing field. http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/ It works when I'm adding new items, but how can I use it to update a collection I already have?
[21:04:20] <Axy> I use the "counters" method in the link
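(One way to backfill, sketched in mongo shell: walk the existing documents in _id order and hand each one a number from the same counter, reusing getNextSequence from that tutorial; collection and field names match Axy's description but are otherwise assumptions:)

    // Number existing documents in insertion order (_id order for ObjectIds);
    // new inserts keep drawing from the same counter afterwards.
    db.images.find({ custom_id: { $exists: false } }).sort({ _id: 1 }).forEach(function (doc) {
        db.images.update(
            { _id: doc._id },
            { $set: { custom_id: getNextSequence("imageid") } }
        );
    });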
[22:05:32] <defk0n> someone told me that only the first operator inside an aggregation query is subject to indexes, but I need to $unwind an array so I can group on the array fields inclusively
[22:06:57] <defk0n> or do I need to add an index on the array itself and the array fields respectively, so $unwind won't use a BasicCursor (without indexes)
[22:07:11] <defk0n> when doing $unwind as the first operator