[00:44:41] <acidjazz> i can't really think of an ideal way to just get the next document
[01:17:23] <hexsprite> hey, im wondering what sort of index I would need to optimize a query like this? db.freelist.find({userId: 'dtyb7ZSJoSMtT96Wo', duration: {$gte: 1}, end: {$gt: new ISODate()}}).explain()
[01:18:16] <hexsprite> I tried the obvious {userId: 1, duration: 1, end: 1} but I get a bunch of nscannedObjects
[01:20:32] <hexsprite> It does say BtreeCursor userId_1_duration_1_end_1 but I would assume nscannedObjects should be 0 if the indexes are working correctly?
[01:21:04] <hexsprite> here's the explain results http://pastebin.com/PRayVx1F
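A note on the question above: with one equality predicate (userId) and two range predicates (duration, end), a compound index can only apply tight bounds to one of the ranges, so nscannedObjects will not drop to 0; the realistic target is nscannedObjects equal to n, the number of documents returned. The usual guideline is equality fields first, then the more selective range field. A sketch in the mongo shell, assuming `end` is the more selective predicate (that selectivity is an assumption; compare both orders with explain()):

```javascript
// Candidate index: equality field first, more selective range field next.
db.freelist.createIndex({ userId: 1, end: 1, duration: 1 })

// Force each candidate with hint() and compare the explain() output:
db.freelist.find({
  userId: 'dtyb7ZSJoSMtT96Wo',
  duration: { $gte: 1 },
  end: { $gt: new Date() }
}).hint({ userId: 1, end: 1, duration: 1 }).explain()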
[01:34:50] <jkevinburton> anyone have a sec to save me from my sanity about indexes? What would be the proper index for this query? find({name: /gaga/ig}, {name: 1, sku: 1, status: 1}).sort({status:-1,fans:-1,name:1}).limit(5)
[02:00:58] <StephenLynx> If I am not mistaken, regex queries do not use indexes.
[02:04:45] <jkevinburton> it's a search anywhere in string
[02:05:13] <jkevinburton> can you use a sort index if you aggregate using regex?
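For context on the regex question: StephenLynx is close but not exactly right. A case-sensitive, left-anchored regex can use an index as a prefix scan; an unanchored or case-insensitive regex like /gaga/i must examine every index key (and the g flag is meaningless in a query). A sketch in the mongo shell; the collection and the `name_lc` shadow field are hypothetical:

```javascript
// An anchored, case-sensitive prefix regex can use the index:
db.products.createIndex({ name: 1 })
db.products.find({ name: /^Gaga/ })   // index-bounded prefix scan

// /gaga/i cannot. A common workaround is storing a lowercased copy of
// the field at write time and anchoring the query against it:
db.products.createIndex({ name_lc: 1 })
db.products.find({ name_lc: /^gaga/ })
```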
[08:20:24] <neuromute9> hey folks! I've got a strange problem. I have a node/express app that runs fine in Chrom{e|ium} on Ubuntu, until I populate the database with an old import. Then it crashes the browser on every page load. Weirdly, it doesn't crash the browser on another Ubuntu machine running Chromium. It also doesn't crash Chrome on OSX or Windows. Just this one machine. If I drop the database, the site works fine. I'm guessing
[08:20:25] <neuromute9> it's a character/validation issue, but how can I check/fix this?
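If the suspicion is bad string data from the old import, one way to narrow it down is to scan documents for control characters or unpaired surrogate halves before they reach the page. A hypothetical standalone checker (plain Node, no driver; run it over documents the app fetches):

```javascript
// Walk a document and collect the paths of string values containing
// control characters (other than \t \n \r) or lone surrogate halves,
// which can break rendering after a bad import.
function findBadStrings(value, path, out) {
  out = out || [];
  if (typeof value === 'string') {
    var controlChars = /[\u0000-\u0008\u000B\u000C\u000E-\u001F]/;
    var loneSurrogate =
      /[\uD800-\uDBFF](?![\uDC00-\uDFFF])|(?:^|[^\uD800-\uDBFF])[\uDC00-\uDFFF]/;
    if (controlChars.test(value) || loneSurrogate.test(value)) {
      out.push(path);
    }
  } else if (value && typeof value === 'object') {
    for (var k in value) findBadStrings(value[k], path + '.' + k, out);
  }
  return out;
}
```

Documents flagged by the checker are the ones worth diffing against the source of the import.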
[08:56:26] <amitprakash> Hi, I am getting Unrecognized option: storage.dbpath while trying to run mongodb
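The option name in the YAML config file is camelCase: `storage.dbPath`, not `storage.dbpath`, which is why mongod rejects it as unrecognized. A minimal fragment (the path shown is only an example):

```yaml
# mongod.conf -- the key is dbPath, not dbpath
storage:
  dbPath: /var/lib/mongodb
```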
[10:55:22] <kenalex> do any of you guys know how to update all embedded documents in an array field using the typed methods of the C# mongodb driver?
[12:51:47] <jamiel> Hi all, we removed records with a specific shard key from a sharded collection and are seeing some strange behaviour. We can't locate the records using find(), but count() reports them and we can see them on the shard if we go directly to the shard. Are we missing a step? Important point is probably that we remove all records from a single shard key and it
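What jamiel describes matches orphaned documents: copies left on a shard by an aborted or incomplete chunk migration. On a mongos, find() filters them out via shard versioning, but count() in that era counted raw documents per shard, so the totals disagree. The cleanupOrphaned admin command (available from 2.6, run against each shard's primary rather than through mongos) removes them. A sketch, with a hypothetical namespace:

```javascript
// Run on each shard's primary. cleanupOrphaned deletes documents that
// fall outside the chunk ranges the shard owns; loop until it no longer
// returns a stoppedAtKey.
var next = {};
while (next) {
  var res = db.adminCommand({
    cleanupOrphaned: 'mydb.mycoll',   // namespace is hypothetical
    startingFromKey: next
  });
  next = res.stoppedAtKey;
}
```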
[13:50:51] <christo_m> hello, i currently have a schema like this: http://pastie.org/10335533
[13:50:57] <christo_m> but i dont think im designing this properly
[13:51:30] <christo_m> i basically want queue items to belong to many users . but i want each individual user to have info about that queue item (like whether they like it or not etc) , i guess im looking for some concept of a join table?
[13:51:52] <christo_m> in my current system queue items are duplicated which may have the same videoId but they're referencing the same content
[13:52:07] <christo_m> they're duplicated for each user though, which is fine since its mongo and to my understanding the denormalization there is desired.
[14:42:55] <christo_m> dddh: are you able to see my earlier question?
[14:43:00] <christo_m> it was just a design question
[14:44:47] <christo_m> i basically want a single source of truth of a queueItem, and allow users to add these items plus extra info relevant to them.. like whether they like that content or not
[14:49:47] <christo_m> dddh: i followed the one to squillions part, because i figured this system is something like twitter.. where it would be easier to reference the owner of the tweets rather than embedding or referencing the tweets in the user object.. which would exceed the 16mb limit eventually.
[14:50:14] <christo_m> the problem is, i dont want to duplicate my "tweets" so to speak. i still want to know what the original one is so i can count likes or retweets etc
[14:52:18] <christo_m> the problem with that tutorial is it isnt one to squillions i want, its actually squillions to squillions i want lol
[14:57:45] <deathanchor> christo_m: you are facing the usual, many-to-many problem, just need to figure out a way or use another db type.
[14:58:01] <christo_m> and if i take the same approach.. that is, storing object ids of queueitems in users and objectids of users in queueitems, it isnt hard to see that if a particular queueitem is liked by too many different users, or if a particular user likes too many pieces of different content, that the 16 mb will be exceeded
[14:58:13] <christo_m> deathanchor: well im just thinking aloud here, but i dont think it will work for me..
[14:59:20] <deathanchor> maybe try something like neo4j?
[14:59:59] <christo_m> deathanchor: its a little late im pretty entrenched in mongo..
[15:00:13] <deathanchor> it's never too late for anything
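For the many-to-many design christo_m describes, the standard MongoDB answer is exactly the join-table idea he mentioned: a separate collection with one small document per (user, queue item) pair, so neither side ever grows an unbounded array and the 16MB limit never comes into play. A sketch with hypothetical collection and field names:

```javascript
// One document per (user, queueItem) pair -- the "join table".
// The per-user info (liked, etc.) lives here, not on either parent.
db.userQueueItems.insert({
  userId: ObjectId("..."),        // placeholder ids
  queueItemId: ObjectId("..."),
  liked: true,
  addedAt: new Date()
})

// Each pair at most once, and fast lookups from either direction:
db.userQueueItems.createIndex({ userId: 1, queueItemId: 1 }, { unique: true })
db.userQueueItems.createIndex({ queueItemId: 1 })

// Count likes for one piece of content from the single source of truth:
db.userQueueItems.count({ queueItemId: ObjectId("..."), liked: true })
```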
[15:39:36] <gentunian> I'm new to mongo and I'm trying to do some aggregation that I'm not sure is possible. It's all summarized in this gist: https://gist.github.com/gentunian/cfb826d6ca49bd3d9e38
[15:39:43] <gentunian> can I do this just with aggregation?
[15:40:45] <amz3> I have a question regarding the way wiredtiger deals with indexes and raw values. I've set up a table with the config key_format=SS value_format=U. There is an index over this table. When I fetch the U column, the value is prefixed with a size integer. Seems like a bug to me.
[15:42:14] <deathanchor> gentunian: nope. your end result is taking a value and making it into a field
[16:24:08] <Mia> StephenLynx, skip processes the entire collection, very slow
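On the skip() point: since skip() still walks every document it skips, the usual fast alternative is range ("keyset") pagination on an indexed field such as _id, so each page resumes where the last one ended. A mongo shell sketch on a hypothetical collection:

```javascript
// First page, ordered by _id:
var page = db.items.find().sort({ _id: 1 }).limit(20).toArray();

// Next page: resume after the last _id seen instead of skipping,
// which is an index-bounded seek no matter how deep the page is.
var last = page[page.length - 1]._id;
db.items.find({ _id: { $gt: last } }).sort({ _id: 1 }).limit(20)
```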
[16:36:22] <kba> I'm getting: "Error: Argument passed in must be a single String of 12 bytes or a string of 24 hex characters" when trying to compare two documents, but only when I'm comparing them in a mongoose model method
[16:37:57] <StephenLynx> I got that error once, when I passed an invalid _id
[16:38:02] <StephenLynx> I suggest you stop using mongoose.
[16:38:07] <StephenLynx> it is nothing but trouble.
[16:38:30] <kba> sounds more like a mongoose problem, I suppose, but it just seems too odd and I've been at it for more than 4 hours; I already wrote a bug report for mongoose, but haven't submitted it, because I keep being confused about where the bug comes from
[16:38:40] <kba> yes, I've found that out by now, but think I'm going to stick it out
[16:38:55] <kba> rewriting everything seems like too much work now, but I won't be using mongoose in the future
[16:39:38] <kba> either way, yes, I'm just comparing the objects, like this.ownA.equals(otherA)
[16:40:02] <kba> should I be doing ._id? Because when I do that, mongoose throws an error instead (which I found a patch for)
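That error text comes from the ObjectId constructor, which means something that is not an ObjectId (often a whole document, null, or undefined) is reaching a place that expects an id; comparing `this._id.equals(other._id)` and checking the shape of the value first usually isolates which side is wrong. A hypothetical helper mirroring the check behind that message (an ObjectId can be built from a 12-byte string or a 24-character hex string):

```javascript
// Returns true if v has a shape the ObjectId constructor would accept
// as a string argument; useful for logging the bad value before the
// driver throws.
function looksLikeObjectId(v) {
  if (typeof v !== 'string') return false;
  return v.length === 12 || /^[0-9a-fA-F]{24}$/.test(v);
}
```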
[19:24:35] <deathanchor> I should just do a drop db at UTC midnight when the collection name changes
[19:26:03] <StephenLynx> if I were in charge of that, I would save this data as plain text
[19:26:19] <StephenLynx> and move them when needed.
[19:43:29] <deathanchor> it's query-able data, and I have queried it a few times
[19:48:56] <StephenLynx> IMO, a few times is not enough to justify keeping it in the database if it's that much data.
[19:49:08] <StephenLynx> being in plain-text files doesn't keep the application from reading it.
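If the data only needs to live until the next day anyway, a TTL index is MongoDB's built-in alternative to both the midnight drop and the plain-text files: a background thread expires documents by timestamp while they stay query-able until then. A sketch with hypothetical collection and field names:

```javascript
// Expire documents roughly 24 hours after their createdAt value,
// instead of dropping the collection at UTC midnight.
db.events.createIndex({ createdAt: 1 }, { expireAfterSeconds: 86400 })
db.events.insert({ createdAt: new Date(), payload: 'query-able until it expires' })
```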
[21:00:18] <kenalex> in your opinion does it make sense to develop an app to run on mongodb that will never need more than one node (not a lot of data)?
[21:25:24] <alexi5> I was wondering what type of applications are document databases more suited for than rdbms? And also are document databases best used when run on more than one node ?