[00:43:06] <leandroa> hi, is there a way to sort by list values? for example I have these docs: https://gist.github.com/lardissone/5448683 and I want to sort by the doc with the smallest value in the lists (no matter how many values in the list)
[01:08:34] <yeukhon> so I am using mongoengine for my pyramid project. and recently, my user registration part is failing. here is the gist showing traceback https://gist.github.com/yeukhon/8c99e1d7ff5c0b4308e0
[01:08:41] <yeukhon> mongo said "AutoReconnect: could not connect to localhost:27017: timed out"
[03:28:18] <bmatusiak> is it possible to use a db like git repo?
[03:32:07] <yeukhon> bmatusiak: not any easy way, i am working on virtualizing mercurial using mongodb and i still don't have time to do the file system backend using nosql
[03:32:28] <yeukhon> bmatusiak: what i am doing is save the tar of .hg (or .git) extract to tmpfs in memory
[03:32:58] <yeukhon> bmatusiak: but again, depends on how you want to call your db a "git repo".
[03:33:33] <bmatusiak> well the idea I'm looking into is trying to have a fork-able db for the local client
[03:34:29] <bmatusiak> then or if the network db become online then pushes and merges back into primary db
[03:34:49] <yeukhon> pretty new to mongodb but isn't that what replication is for?
[03:41:07] <yeukhon> bmatusiak: sorry i wish i could be more helpful than that. but if u want to backup db u should look at replication or dump or something similar. other than that, i'd say given it's night time in the US u probably will have better answers on the mailing list.
[03:41:27] <yeukhon> but i'd like to hear what u actually want to accomplish and the solution later.
[03:42:45] <bmatusiak> i might just make a file struct to hold json files and use git for the remote storage
[03:43:45] <bmatusiak> what i need to do is not so data intensive :P
[03:49:34] <yeukhon> well periodically u can do a dump. idk if git is good with that as long as u have a way to dump the text, rather than a binary. to me hg may be better at handling it.
[13:32:20] <leandroa> hi, is there a way to sort by list values? for example I have these docs: https://gist.github.com/lardissone/5448683 and I want to sort by the doc with the smallest value in the lists (no matter how many values in the list)
[13:37:47] <Nodex> have you tried sort({values:1}); ?
[14:15:57] <CupOfCocoa> I need a query to fetch all documents in a collection that share the value for a given key. Any pointers on how to go about that?
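[Editor's note: one hedged sketch of CupOfCocoa's "documents sharing a value for a key" question, which goes unanswered in the log. In the shell this is commonly done with an aggregation that groups on the key and keeps groups with more than one member; the field name "city" and the sample docs below are made up for illustration, and the grouping is simulated in plain JS.]

```javascript
// Shell equivalent (sketch):
//   db.coll.aggregate([
//     { $group: { _id: "$city", docs: { $push: "$$ROOT" }, n: { $sum: 1 } } },
//     { $match: { n: { $gt: 1 } } }
//   ])
const docs = [
  { _id: 1, city: "NYC" },
  { _id: 2, city: "SF" },
  { _id: 3, city: "NYC" }
];

// Group documents by the shared key...
const groups = new Map();
for (const d of docs) {
  if (!groups.has(d.city)) groups.set(d.city, []);
  groups.get(d.city).push(d);
}
// ...then keep only groups where the value is actually shared.
const shared = [...groups.values()].filter(g => g.length > 1);
```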
[14:25:40] <marcqualie> When using the aggregation framework if you don't specify $project, does it project all fields from the document? Can't find any reference to not including it in the docs
[14:38:09] <leandroa> Nodex: oh, that simple like that.. thanks
[14:48:05] <Nodex> I have never tried, I was asking if you've tried before I worked out something different for you
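[Editor's note: why Nodex's suggestion works for leandroa. When MongoDB sorts ascending on an array field, each document is keyed by its smallest array element. A sketch of that behavior in plain JS, with made-up values standing in for the gist's docs:]

```javascript
// sort({values: 1}) semantics on an array field, ascending:
// each doc is compared by the minimum element of its array.
const docs = [
  { _id: 1, values: [10, 5, 8] },  // min 5
  { _id: 2, values: [3, 99] },     // min 3
  { _id: 3, values: [7] }          // min 7
];

const sorted = [...docs].sort(
  (a, b) => Math.min(...a.values) - Math.min(...b.values)
);
// sorted _ids: 2, 1, 3
```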
[15:34:40] <chandru_in> I have a multi-key index of size ~33M on a collection. The box has 7G of RAM and mongod's RSS is about 3.5G. I'd expect the entire index to be in memory but I see a lot of disk hits on the server. What could I be doing wrong?
[15:35:18] <chandru_in> mongostat shows ~40 pagefaults per sec
[15:43:23] <Number6> chandru_in: What's the output of $blockdev --report
[15:44:44] <chandru_in> RO RA SSZ BSZ StartSec Size Device
[15:48:06] <Number6> chandru_in: Try halving the Readahead (RA) value. You'll need to restart MongoDB for the changes to take effect
[15:48:52] <Number6> ReadAhead, basically, takes in more data from the disk than is needed, as a method to speed up disk accesses - by caching blocks from the filesystem in RAM.
[15:49:19] <chandru_in> Number6, given there is enough memory for the entire index, how will reducing read-ahead help? Also, our queries are pretty much across the entire data-set.
[15:49:48] <Number6> For random data access patterns, a high read-ahead value can impact performance by lessening what the OS thinks it should cache
[15:53:46] <Number6> chandru_in: A high readahead can take up a fair bit of RAM, as it is the OS trying to cache disk data in memory - the OS is being helpful but at the cost of losing a fair amount of RAM
[15:54:16] <chandru_in> ok, will try setting RA to a lower value
[15:58:59] <scoates> adjusting the readahead made a huge difference for us, chandru_in
[15:59:10] <scoates> this is also helpful: http://www.snailinaturtleneck.com/blog/2012/04/05/thursday-4-blockdev/
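[Editor's note: the commands being discussed, as a sketch. The device name is an assumption; check the `--report` output for the device backing your dbpath, and note `--setra` takes 512-byte sectors.]

```shell
# Inspect current readahead (RA column) for all block devices
sudo blockdev --report

# Lower readahead on the device holding the mongo data files
# (hypothetical device; substitute your own from the report)
sudo blockdev --setra 32 /dev/sda

# Restart mongod afterwards so the change takes effect
```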
[16:08:01] <chandru_in> Reducing RA made it worse. ~90 faults per sec now
[16:19:36] <scoates> reducing RA will just help you fill up more RAM (more of your working set in RAM)
[16:20:09] <scoates> is your ram exhausted? if so, your working set is too large for your RAM. if not, then it'll fault until your RAM fills up.
[16:32:37] <ron> did you consider that THE WAY YOU DO THING IS BAD?!
[16:32:37] <theRoUS> i'm getting this on almost every operation of a new 2.2.3 install: mongo: symbol lookup error: mongo: undefined symbol: _ZN7pcrecpp2RE4InitEPKcPKNS_10RE_OptionsE
[16:33:23] <theRoUS> what does this mean, and what should i do about it?
[16:35:30] <theRoUS> oh, d'oh, needs the pcre package (undocumented in the docs i've looked at)
[18:05:41] <manny> will the $in operator supplied with an array of _id's return documents in the order of the _id's?
[19:24:12] <fejes> oh, no, sorry, it's :'{xlat: "status"}'
[19:24:36] <scoates> I suggest you reduce your code down to the part that's failing. Get a working coll.find(…) and then add it to the wrapper, then add it to the prototype change.
[19:25:23] <fejes> scoates: thanks... I actually have done that, but wasn't able to get a working coll.find()
[19:26:04] <fejes> I haven't done javascript in a good decade, and what I did back then was pretty trivial.
[19:27:08] <fejes> is there an obvious reason why this doesn't work: coll.find('{}', '{}', function(e, data) {
[19:28:17] <kali> fejes: no quotes around the arguments
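[Editor's note: the bug kali spotted, demonstrated. `'{}'` is a string, not an object, and `find()` expects plain objects for the filter and projection:]

```javascript
// What fejes passed vs. what the driver expects
const wrong = '{}';   // a 2-character string
const right = {};     // an empty object

// broken:  coll.find('{}', '{}', function(e, data) { ... })
// fixed:   coll.find({},   {},   function(e, data) { ... })
console.log(typeof wrong, typeof right);
```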
[20:39:43] <pygmael> anyone have much experience with MongoDB's GeoJSON stuff? Specifically trying to query within a rectangle using the 2dsphere index?
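[Editor's note: pygmael's question goes unanswered in the log. With a 2dsphere index, a rectangle query is normally expressed as `$geoWithin` with a closed GeoJSON Polygon (first and last ring points equal). The field name `loc` and the corner coordinates below are made up for illustration:]

```javascript
const lowerLeft = [-74.0, 40.0];   // [lng, lat]
const upperRight = [-73.0, 41.0];

const rectangleQuery = {
  loc: {
    $geoWithin: {
      $geometry: {
        type: "Polygon",
        coordinates: [[
          lowerLeft,
          [upperRight[0], lowerLeft[1]],
          upperRight,
          [lowerLeft[0], upperRight[1]],
          lowerLeft                  // ring must be closed
        ]]
      }
    }
  }
};
// In the shell: db.places.find(rectangleQuery)
```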
[20:39:50] <zamnuts> kali, yes in the js driver: http://docs.mongodb.org/manual/reference/method/db.collection.find/
[20:40:07] <kali> zamnuts: this is the mongodb shell, not node.js
[20:40:53] <zamnuts> kali, fejes' question was in the context of nodejs, that and the JS driver is hosted on mongodb.org... wouldn't that make it relevant?
[22:57:48] <ghanima> I am troubleshooting a performance problem with my mongo cluster and trying to determine if this is truly a system constraint or optimization needs to be done within the mongo database... I get an alert that the load average on a mongo box is above 110 utilization so I begin to investigate. When I see this condition I notice that there are 4000 read OPS happening per sec, which to me seems to be a lot but not to the level of driving load
[22:58:16] <ghanima> I ran iostat on the box and nothing out of the ordinary, except CPU IO wait is always at about 85% to 95%
[22:58:27] <ghanima> Looked at top and the same metric....
[22:59:26] <ghanima> Memory has been steady and not increasing, and the mongo process is fluctuating in its usage, not staying constant
[23:00:17] <ghanima> Because it's CPU IO wait I did a ps -aux | grep " D" to see if there were any processes being blocked and it referred to the kjournald process, but when looking at top it's not in the top 10
[23:00:39] <ghanima> At this point I know my constraint is IO Wait but not sure how to track down what is causing the I/O wait
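[Editor's note: a sketch of standard tools for tracking down an I/O wait source, since the question goes unanswered. Tool availability varies by distro (`pidstat` is in the sysstat package):]

```shell
# Per-device utilization and average wait times, 5 one-second samples
iostat -x 1 5

# Per-process disk I/O, to see which process is generating it
pidstat -d 1 5

# Cumulative I/O counters for the mongod process itself
cat /proc/$(pidof mongod)/io
```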
[23:00:53] <livinded> inserting into mongo is a blocking operation right?
[23:01:47] <ghanima> livinded: That's the thing, when doing a mongostat all my ops are reads, I am doing no writes... I was told that this DB only gets updates once every few months and that the only ops I should see are reads
[23:02:51] <livinded> oh, I'm talking about something else