PMXBOT Log file Viewer


#mongodb logs for Thursday the 31st of December, 2015

[09:52:01] <vagelis> Hello, yesterday someone suggested i not use $where on big collections, and i just wanted to use it for testing purposes with really small collections.
[09:52:32] <vagelis> Turns out that i have to use it on big collections. He suggested something to use instead but i dont remember what.
[09:52:50] <vagelis> So if someone can help me please do. I use pymongo and mongodb 3.0
[09:53:57] <Derick> don't use $where - it can't use an index and means a full table scan
[09:54:09] <Derick> what are you trying to do in your $where clause?
[09:54:51] <vagelis> compare 2 fields in the same doc
[09:55:02] <vagelis> I know thats why i came to ask :)
[09:55:59] <Derick> ok
[09:56:10] <Derick> And I think somebody else mentioned "aggregation framework" ?
[09:56:26] <Derick> can you show your whole query - preferably in shell format - in a gist or pastebin?
[09:56:41] <vagelis> yep
[09:59:19] <vagelis> https://bpaste.net/show/e024da755347
[10:00:23] <vagelis> damn i forgot the $ after receipt
[10:00:38] <vagelis> sry again wrong pff im confused
[10:00:52] <vagelis> Ok so, receipt is an array of dictionaries
[10:01:06] <vagelis> qty and returnedQty are keys inside these dictionaries
[10:01:28] <vagelis> do u follow me? :S
[10:01:34] <Derick> As a possible update, I suggest that you actually store a flag for this with your documents. It is much faster.
[10:01:51] <vagelis> i think something like that the 'yesterday' guy told me
[10:01:53] <Derick> As, with this schema, you can not avoid a full table scan.
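Derick's suggestion (precompute a flag at write time so a plain indexed query can replace the per-document comparison) might be sketched like this; the collection layout matches what vagelis describes, but every name here is illustrative:

```python
# Sketch of the "store a flag" idea, assuming a document whose 'receipt'
# field is a list of dicts with 'qty' and 'returnedQty' keys.

def has_short_return(receipt):
    """True if any line item was returned in a smaller quantity than ordered."""
    return any(item['returnedQty'] < item['qty'] for item in receipt)

doc = {
    'status': 'open',
    'receipt': [
        {'qty': 3, 'returnedQty': 3},
        {'qty': 5, 'returnedQty': 2},   # short return -> flag the document
    ],
}

# Compute the flag once, at write time, instead of with $where at query time.
doc['toosmall'] = has_short_return(doc['receipt'])

# With pymongo and a live server you would then store and index it, e.g.:
#   coll.insert_one(doc)
#   coll.create_index('toosmall')
#   coll.find({'toosmall': True})   # simple equality match, can use the index

print(doc['toosmall'])
```

The payoff is that the expensive comparison runs once per write instead of once per document per query, turning the full scan into an index lookup.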
[10:02:33] <vagelis> well we use limit but w8 does limit, "limits" the scan?
[10:02:47] <vagelis> I mean search the first 1000 docs when u limit by 1000?
[10:02:53] <Derick> In any case, using A/F (aggregation framework) will at least make things faster, as it doesn't have to go through javascript
[10:02:56] <Derick> vagelis: yes - a limit helps
[10:03:13] <Derick> but it still doesn't use an index (which is a lot faster than scanning 1000 documents)
[10:03:35] <vagelis> i know, u mean aggregation wont use an index still?
[10:04:04] <Derick> correct, as you can not construct an index to use for the "if field1 is smaller than field2" case
[10:04:26] <Derick> anyway, in A/F, you'd do something like (give me a few moments to try it out):
[10:04:46] <vagelis> i went through the 3.2 changes but well they were many and i dont remember all. Do u think i can solve this with 3.2?
[10:06:05] <Derick> (still trying it out, give me a few mins to get back)
[10:06:12] <vagelis> oh thanks!
[10:06:26] <vagelis> I just asked generally if u have an idea about the changes but thanks!
[10:11:25] <vagelis> Ok, turns out that i dont need to make that query yay!
[10:14:31] <Derick> db.irc.aggregate( [ { '$match' : { 'status' : 'open' } }, { '$project' : { 'orig' : '$$ROOT', 'toosmall' : { '$cond' : { if: { '$lt': [ '$receipt.returnedQty', '$receipt.qty' ] }, then: true, else: false } } } }, { '$match' : { 'toosmall' : true } } ] );
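Since vagelis is on pymongo, Derick's shell pipeline maps almost one-to-one onto Python dicts; the only wrinkle is that the `if`/`then`/`else` keys, which the shell accepts unquoted, must be quoted in Python. A sketch ('irc' is the collection name from Derick's example):

```python
# Derick's aggregation pipeline expressed as pymongo-style Python dicts.

pipeline = [
    {'$match': {'status': 'open'}},
    {'$project': {
        'orig': '$$ROOT',
        'toosmall': {
            '$cond': {
                'if': {'$lt': ['$receipt.returnedQty', '$receipt.qty']},
                'then': True,
                'else': False,
            }
        },
    }},
    {'$match': {'toosmall': True}},
]

# Against a live server this would be run as:
#   results = db.irc.aggregate(pipeline)
```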
[10:14:35] <Derick> see - not easy!
[10:14:51] <Derick> AFAIK, there are no changes in 3.2 that make this easier
[10:26:41] <vagelis> haha easy one!
[10:27:08] <vagelis> wow what a query :O nop i will pass :D
[10:27:17] <vagelis> Thanks a lot for ur help!
[10:32:22] <livcd> hi guys what's this msg ? Can i ignore it ?
[10:32:24] <livcd> [HostnameCanonicalizationWorker] Failed to obtain name info for an address: nodename nor servname provided, or not known
[12:15:42] <synthmeat> i'm having some problems with getting the last inserted document from a capped collection with mongoose
[12:15:45] <synthmeat> code here https://gist.github.com/91783863068c48154fe6
[12:16:19] <synthmeat> error shown there too
[12:19:52] <synthmeat> nvm, i have it (i've put options where filter should be)
[14:46:16] <schu-r> Anybody got a good idea how to spread a mongodb across multiple hosting providers?
[14:47:07] <StephenLynx> sharding
[14:47:49] <schu-r> I need one huge db, but ping times from other countries should not be too high.
[14:52:08] <schu-r> Sharding splits the data and adds a single point of failure. Or are there better methods in mongo?
[19:11:07] <livcd> how do i display my current role ?
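livcd's question goes unanswered in the log; for reference, MongoDB's `connectionStatus` command reports the authenticated users and their roles. A sketch (the `status` dict below is a hypothetical example of the response shape, not captured server output):

```python
# Sketch: reading the current user's roles via MongoDB's connectionStatus
# command. With pymongo and a live server you would run:
#   status = db.command('connectionStatus')
# The 'status' dict below is a hypothetical example of that response shape.

def role_names(status):
    """Extract 'role@db' strings from a connectionStatus response."""
    roles = status['authInfo']['authenticatedUserRoles']
    return ['%s@%s' % (r['role'], r['db']) for r in roles]

status = {
    'authInfo': {
        'authenticatedUsers': [{'user': 'livcd', 'db': 'admin'}],
        'authenticatedUserRoles': [{'role': 'readWrite', 'db': 'test'}],
    },
    'ok': 1,
}

print(role_names(status))   # ['readWrite@test']
```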