[00:12:54] <synthmeat> can i somehow "flush" working set? at app start i need to run through all the documents, but not during later operation of the app
[06:32:05] <fishstiicks> howdy. why might i be getting incorrect documents back from a simple find()? my collection and schema are both very straightforward. i'm using mongolab and have double checked my documents in the browser multiple times. i'm only "selecting" on one string value.
[07:27:00] <fishstiicks> joannac: there's another 'vendor' string that i can use, and interestingly enough, it returns all matches PLUS some of the 'gafy' matches (but not all of them)
[08:44:30] <Freman> this is slow db.log_info.find({preview: /upstream/i, date : {$gte: ISODate("2010-10-15T15:25:00.000+10"), $lt : ISODate("2010-10-15T15:35:00.000+10")}})
[08:44:32] <Freman> can I do it in a way that reduces the range of data to search to the date range first, before running regex over it?
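The order of predicates in the query document doesn't control this; what usually helps is an index on `date`, so the range predicate is satisfied from the index and the case-insensitive regex only runs over documents inside the window. A minimal shell sketch, assuming it is acceptable to add that index:

```javascript
// Index the date field so the range is narrowed via the index
// and /upstream/i only scans documents within the 10-minute window.
db.log_info.createIndex({ date: 1 })

db.log_info.find({
  date: { $gte: ISODate("2010-10-15T15:25:00.000+10"),
          $lt:  ISODate("2010-10-15T15:35:00.000+10") },
  preview: /upstream/i
})
```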
[09:01:38] <vfuse> is it normal behaviour for mongodb when writing to hourly documents to build up CPU as well as disk I/O till the end of the hour like this http://imgur.com/ngevNs5
[09:04:10] <bendem> vfuse, what do you mean by "writing to hourly documents"?
[09:04:47] <vfuse> upserting data every minute for each hour to a document
[09:13:58] <bogn> and this whole preallocation thing that you need for time series using MMAP should no longer be used with WiredTiger
[09:14:21] <bogn> preallocation is actually a joy to get rid of
[09:19:43] <vfuse> bogn: I see it has to do with getting rid of padding factor and reusing space so it will have to rewrite the whole document every time
[09:22:52] <bogn> as far as I recall using in-place updates with WiredTiger is close to writing the whole document every time, yes
[09:23:55] <bogn> array appending is the way to go with metrics stored in WiredTiger. That's just my theoretical knowledge so far, gathered from the docs and MongoDB Days events
[09:27:40] <vfuse> I’m assuming you mean using $push when talking about array appending instead of using $set?
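For contrast, a hedged sketch of the two update shapes being discussed, using hypothetical field names (`host`, `hour`, `minutes`, `values`, `ts`, `v`); the $set form rewrites a minute-keyed slot inside the hourly document, while the $push form appends a sample to an array:

```javascript
// $set style: write one minute slot inside the hourly document
db.metrics.update(
  { host: "web-1", hour: ISODate("2015-11-18T09:00:00Z") },
  { $set: { "minutes.17": { v: 42 } } },
  { upsert: true }
)

// $push style: append a sample to an array on the hourly document
db.metrics.update(
  { host: "web-1", hour: ISODate("2015-11-18T09:00:00Z") },
  { $push: { values: { ts: ISODate("2015-11-18T09:17:00Z"), v: 42 } } },
  { upsert: true }
)
```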
[10:13:59] <m3t4lukas> with a TTL index, does the record get excluded from the index after expiration or does the document get deleted after expiry of the TTL?
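For reference: a TTL index deletes the whole document once the indexed date is older than expireAfterSeconds (a background monitor runs roughly every 60 seconds); it does not merely drop the entry from the index. Collection and field names here are hypothetical:

```javascript
// Documents whose createdAt is more than one hour old are removed
// by the TTL monitor; the document itself is deleted, not just the index entry.
db.sessions.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })
```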
[10:15:20] <bogn> vfuse do you have any indications that mongod writes whole documents every time even though you are using $push?
[10:15:56] <vfuse> bogn: I’m in the process of switching my upserts from $set to $push i’ll let you know when it’s done
[10:16:32] <bogn> when you say upsert, you actually mean update, don't you?
[10:20:02] <bogn> the fact that WiredTiger doesn't like in-place updates stems from it using an MVCC approach
[10:23:42] <bogn> or rather: the fact that $set with WT results in full document rewrites instead of in-place updates stems from it using an MVCC approach
[13:30:23] <livcd> doh .explain() does not return number of scanned objects
[13:30:51] <livcd> do i need to enable something ?
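In MongoDB 3.0 the default explain verbosity ("queryPlanner") omits scan counts; asking for "executionStats" includes them. A quick shell example (the query itself is just illustrative):

```javascript
// executionStats contains totalKeysExamined and totalDocsExamined,
// which replace the old nscanned / nscannedObjects fields.
db.log_info.find({ preview: /upstream/i }).explain("executionStats")
```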
[14:43:38] <mrmccrac> anyone know if there are issues with the redhat repo?
[14:43:40] <mrmccrac> http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/RPMS/mongodb-org-3.0.7-1.el6.x86_64.rpm: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
[17:15:17] <pehlert> Hey folks. I'm wondering whether there is a method to remove multiple documents and return them? Like findAndModify, just for more than one document
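findAndModify only operates on a single document, so one common workaround is to read the matching documents first and then remove exactly those by _id. This is only a sketch with hypothetical collection and query, and it is not atomic across the two steps (another writer can slip in between):

```javascript
// Fetch the documents we intend to delete, then remove exactly those by _id.
var query = { status: "expired" };
var docs = db.items.find(query).toArray();
var ids = docs.map(function (d) { return d._id; });
db.items.remove({ _id: { $in: ids } });
// `docs` now holds the documents that were removed.
```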
[17:37:17] <JKON> Hi..! I'm using running mongodb mapreduce calculations from node/mongoose env and I was wondering if there is some way I can define the map and reduce on client side and for finalize call function stored on server?
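One approach, based on MongoDB's stored-JavaScript feature (db.system.js), is to keep the finalize logic on the server and call it from a thin finalize wrapper while map and reduce stay defined client-side. A sketch with hypothetical names; whether the stored function is visible inside finalize is worth verifying against the docs for your server version:

```javascript
// One-time setup: store the finalize logic on the server.
db.system.js.save({
  _id: "finalizeStats",
  value: function (key, reduced) {
    reduced.avg = reduced.total / reduced.count;
    return reduced;
  }
});

// Client-defined map/reduce; finalize just delegates to the stored function.
db.events.mapReduce(
  function () { emit(this.type, { total: this.value, count: 1 }); },
  function (key, vals) {
    var out = { total: 0, count: 0 };
    vals.forEach(function (v) { out.total += v.total; out.count += v.count; });
    return out;
  },
  {
    out: { inline: 1 },
    finalize: function (key, reduced) { return finalizeStats(key, reduced); }
  }
);
```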
[20:26:01] <user1> Hi. Newbie question, I'm just getting started with mongo. My app consists of status data (flat key/value list) for different devices. What would be preferable to deal with this: each device gets its own flat collection (collectionX pymongoclient.db.deviceX), or a single collection where key = devicename, and value = list of status k/v pairs? (collection is pymongoclient.db.all, filter for name=deviceX when writing data).
[20:26:01] <user1> The former seems better but if it doesn't matter then I'd rather do the 2nd one, to simplify things for the reader of the data.
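A sketch of the single-collection shape (the second option), with one document per device keyed by its name; the status field names are hypothetical:

```javascript
// One document per device; readers simply filter on `name`.
db.all.update(
  { name: "deviceX" },
  { $set: { status: { temp: 21.5, fan: "ok", uptime: 86400 } } },
  { upsert: true }
)

db.all.find({ name: "deviceX" })
```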
[21:33:18] <shlant> hi all. Anyone know why I would get a SELF_SIGNED_CERT_IN_CHAIN error when trying to connect to mongo with mongoose? I know my key, cert and ca are correct as I can connect outside of mongoose (with shell)
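SELF_SIGNED_CERT_IN_CHAIN usually means the Node TLS layer was never handed the custom CA, so it falls back to the default bundle. With mongoose of that era (4.x over node-mongodb-native 2.x) the CA is passed through the server options; the option layout below is an assumption based on that driver's SSL settings, so treat it as a sketch rather than the definitive API:

```javascript
// Assumed option layout for mongoose 4.x / node driver 2.x:
// pass the CA (and client cert/key if required) so Node can verify the chain.
var fs = require('fs');
var mongoose = require('mongoose');

mongoose.connect('mongodb://db.example.com:27017/app', {
  server: {
    ssl: true,
    sslValidate: true,
    sslCA: [fs.readFileSync('/etc/ssl/mongo-ca.pem')],
    sslCert: fs.readFileSync('/etc/ssl/client.pem'),
    sslKey: fs.readFileSync('/etc/ssl/client-key.pem')
  }
});
```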
[21:45:16] <NotBobDole> Okay all, i'm running mongodb in a centos docker container. it times out when setting up the prealloc files on a fresh install
[21:57:11] <NotBobDole> I'm installing mongodb 3, by the way
[22:04:54] <NotBobDole> Not a mongo problem I guess. Mongod doesn't provide a mongod.service file. i added in a longer timeout to my custom mongod.service file. Trying it now
[22:10:41] <StephenLynx> yeah. having an actual systemd init file would be good.
[22:10:55] <NotBobDole> I've got my custom one which I ripped off some post online
[22:11:21] <NotBobDole> seems to work, but I had that timeout issue. I'm standing up the environment again, it'll be a few more minutes till I find out if it worked.
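For reference, a minimal custom mongod.service of the kind being described, with the start timeout raised so journal/prealloc file creation on a fresh install isn't killed by systemd; the paths and the timeout value are assumptions:

```ini
# /etc/systemd/system/mongod.service -- minimal sketch, paths are assumptions
[Unit]
Description=MongoDB server
After=network.target

[Service]
User=mongod
ExecStart=/usr/bin/mongod --config /etc/mongod.conf
# Allow extra time for journal/prealloc file creation on first start
TimeoutStartSec=300

[Install]
WantedBy=multi-user.target
```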
[23:39:57] <Bioblaze> Can you store an Array of Objects inside of another Array of Objects?
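Yes, BSON documents nest freely. A quick shell example with hypothetical field names, showing an array of objects where each element itself holds an array of objects:

```javascript
// Each element of `levels` is an object that contains its own array of objects.
db.games.insert({
  name: "demo",
  levels: [
    { id: 1, enemies: [ { type: "slime", hp: 10 }, { type: "bat", hp: 5 } ] },
    { id: 2, enemies: [ { type: "ghost", hp: 20 } ] }
  ]
})
```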