[03:29:38] <sorabji> howdy. i'm working on an application to tail the oplog in a write intensive mongo cluster. i have a hidden secondary that i'm connecting to and creating a tailable cursor to its local oplog
[03:31:04] <sorabji> the issue i'm having is keeping up with the rate that documents are being inserted into the oplog. i'm wondering if there are any pointers you can offer to speed the process up
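A minimal sketch of such a tailable cursor in the mongo shell, assuming a replica-set member (local.oplog.rs, the ts field, and the cursor options below are standard; oplogReplay is the usual way to speed up the initial ts-range scan):

    rs.slaveOk();  // allow reads from the hidden secondary
    var oplog = db.getSiblingDB("local").oplog.rs;
    // start tailing from the newest existing entry
    var last = oplog.find().sort({ $natural: -1 }).limit(1).next().ts;
    var cursor = oplog.find({ ts: { $gt: last } })
        .addOption(DBQuery.Option.tailable)
        .addOption(DBQuery.Option.awaitData)    // block briefly instead of busy-polling
        .addOption(DBQuery.Option.oplogReplay); // optimizes the ts range scan
    while (cursor.hasNext()) { printjson(cursor.next()); }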
[07:24:40] <psuriset__> Boomtime, Two things. 1) Thu Oct 1 03:19:03.084 [initandlisten] exception in initAndListen: 10310 Unable to lock file: /var/lib/mongodb/mongod.lock. Is a mongod instance already running?, terminating
[07:25:05] <psuriset__> 2) Interleaving is already enabled, as you can see in the logs, but it still warns about interleaving
[07:25:29] <Boomtime> -> exception in initAndListen: 10310 Unable to lock file: /var/lib/mongodb/mongod.lock. Is a mongod instance already running?, terminating
[07:25:57] <psuriset__> Boomtime, yes. same thing. i killed mongod and started with numactl interleave
[07:26:20] <psuriset__> Boomtime, then when i start mongod service i see this
[07:27:19] <Boomtime> yep, sorry i was looking at the log same time as you apparently..
[07:27:38] <psuriset__> Boomtime, Without interleave : https://gist.github.com/psuriset/b465710fdb4560322a25
[07:29:47] <Boomtime> and whenever you specify interleaving (which doesn't "take" btw since the process reports none in both cases), the lockfile error occurs instead?
[07:30:15] <psuriset__> Boomtime, After a little googling, I realised the init script needed to be updated. Added that too.
[07:34:09] <Boomtime> centos isn't deb though right? and fixed before your version... otherwise, yeah, looks similar
[07:35:54] <psuriset__> Boomtime, yes. looks similar
[07:37:31] <Boomtime> ok, can you test this without the init.d? like, try using a command-line and don't even bother with --fork or log, just try launching a server direct in a shell
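For example, a minimal direct launch (the dbpath here is assumed from the lockfile path in the log above):

    numactl --interleave=all mongod --dbpath /var/lib/mongodb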
[07:38:17] <Boomtime> when possible, you should at least get the latest 2.4, or better, upgrade to the latest 3.0
[12:12:49] <terabyte> in my mongodb directory I have a number of files: myschema.0, .1, .2, .3, .4, .5; they double in size each time. I understand each file is 'preallocated', so given the latest is 2gb I haven't actually used the 2gb yet. just wondering, is the database data all contained in the latest 2gb file, or is it spread across all 5 files?
[12:30:28] <Grummfy> hello, a question about how to do something. I need to find, in a single collection, all records where an element 'a' matches another element 'b', but 'b' is inside an array. So $elemMatch + $where is not possible, and I tried aggregate + $unwind, but $where is not possible inside it. So apart from a map-reduce I don't see an easy way to do it
[12:33:36] <sorabji> howdy. i'm working on an application to tail the oplog in a write intensive mongo cluster. i have a hidden secondary that i'm connecting to and creating a tailable cursor to its local oplog
[12:33:51] <sorabji> the issue i'm having is keeping up with the rate that documents are being inserted into the oplog. i'm wondering if there are any pointers you can offer to speed the process up
[12:43:22] <Bookwormser> Good morning. Did the method for importing a schema change between 2.6.9 and 3.0.6? I keep getting a bad JSON array format - found no opening bracket '[' in input source error, though the schemas have never contained brackets.
[12:50:06] <Grummfy> solved my issue with a big where function
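For reference, a minimal sketch of that approach, with hypothetical field names a (scalar) and b (array) taken from the question above:

    db.coll.find({
        $where: function () {
            // match when the scalar field a appears in the array field b
            return this.b.indexOf(this.a) !== -1;
        }
    });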
[12:56:10] <jamiel> Hi all, does WiredTiger do any prefix compression of field names for collection storage like it does on indexes? i.e. does it make using short field names redundant as a space-saving technique, or will that still help..? Thanks.
[13:16:11] <NoReflex> hello! I have a collection of about 121 million rows with an average size of 168 bytes; I'm running a query which uses an index that should retrieve around 1.4 million rows
[13:16:50] <NoReflex> this query runs in about 5000 seconds on my machine, while postgres with the same database and index runs it in about 200 seconds (that is, 25x faster in PG)
[13:18:59] <NoReflex> the collection holds data sent by many devices; ti represents the device, mo is the moment, rt is the record type while the other data is not useful for indexing
[13:19:46] <NoReflex> here you can see the definition in PG and the queries used: http://pastebin.com/92yyQU3w
[13:20:32] <NoReflex> the only difference is that some key-values are transformed into columns in postgres
[13:20:57] <NoReflex> (the ones that appear in every document)
[13:21:08] <NoReflex> where could this huge difference come from?
[13:26:10] <jamiel> Extra 10 years of development on PG? :) j/k ... you probably want to send us the explain() results for that query and the index definition
[13:29:00] <NoReflex> jamiel, index on Mongo: "v" : 1,
[13:29:07] <Ulrar> Hi, I installed a mongodb for a client and I have some problems with access rights. I have a user with a userAdmin role on a database, but that user still gets "not authorized" errors when trying to insert data in that database
[13:29:25] <Ulrar> Any idea what could be going wrong ?
[13:32:28] <NoReflex> jamiel, I pasted the useful parts of the explain output here: http://pastebin.com/pTmmFN1j
[13:32:56] <NoReflex> I just used ... in order not to paste the whole ranges for the ti value
[13:35:21] <Ulrar> Okay my bad, I added the readWrite role and it works. feels a little strange that userAdmin can't write, but okay
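A minimal sketch of granting that role from the shell (the user and database names are placeholders):

    use mydb
    db.grantRolesToUser("appUser", [ { role: "readWrite", db: "mydb" } ]);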
[13:40:32] <jamiel> @NoReflex you should try creating the index with the field that has the highest cardinality first, or whichever would be the most restrictive alongside the dates ... "rt" should definitely be the last field in the index as it provides no real benefit. It really depends on your data, but you should experiment with { ti: 1, mo: 1, rt: 1 } and { mo: 1, ti: 1, rt: 1 }
[13:41:01] <jamiel> Also do a db.coll.stats(1024 * 1024) and check the size of the index, to see whether it fits in memory or if the query is going to disk
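A sketch of that experiment in the mongo shell, assuming a hypothetical collection named records:

    // build both candidate orderings and compare query timings
    db.records.createIndex({ ti: 1, mo: 1, rt: 1 });
    db.records.createIndex({ mo: 1, ti: 1, rt: 1 });
    // stats scaled to MB; check indexSizes against available RAM
    db.records.stats(1024 * 1024).indexSizes;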
[13:48:18] <NoReflex> jamiel, there are around 2200 distinct ti values in the collection of which I'm selecting around 500, 4 rt values (0,1,2,3) of which I'm selecting only one
[13:48:46] <NoReflex> regarding the moment (mo) the data spans around 10 months of which I'm selecting 6 months
[13:50:24] <NoReflex> here are the stats: http://pastebin.com/WCqr28nM
[13:55:47] <MadWasp> Hi, I need to change approximately 20 million documents in my mongo db. They currently reference another collection in which documents have a params field. This params field should become part of the 20 million documents. What would be the fastest way to achieve this?
[14:18:42] <MadWasp> For now I've written a PHP script to do it, but it takes like a couple of days to run.
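A commonly faster pattern is to loop over the referenced collection and issue one multi-document update per referenced document, instead of one write per target document. A minimal mongo-shell sketch, with hypothetical names (items holds the 20 million documents, each with a refId pointing into refs):

    db.refs.find().forEach(function (ref) {
        // copy this document's params into every document that references it
        db.items.update(
            { refId: ref._id },
            { $set: { params: ref.params } },
            { multi: true }
        );
    });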
[16:23:55] <bmarick> Hey, I have come across an SSL host validation issue. Is there anyone here that could point me to a location in the code base so I can try to track down the error?
[16:33:01] <brianboyko> Hello. I'm building out the schemas now using Mongoose. How do you create an array of objects? Specifically, each test has questions, each question has a correct response, a student response, and a point value.
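For reference, a minimal Mongoose sketch of an array of subdocuments, with field names assumed from the question:

    var mongoose = require('mongoose');

    // each question is a subdocument with its own schema
    var questionSchema = new mongoose.Schema({
        correctResponse: String,
        studentResponse: String,
        pointValue: Number
    });

    // a test embeds an array of question subdocuments
    var testSchema = new mongoose.Schema({
        questions: [questionSchema]
    });

    var Test = mongoose.model('Test', testSchema);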
[16:33:27] <StephenLynx> I strongly suggest you don't use mongoose.
[16:33:39] <StephenLynx> it is slow, incomplete and inconsistent.
[16:33:55] <StephenLynx> hands down the worst ODM in existence.
[19:32:26] <StephenLynx> usually you put the projection on the second argument of the find.
[19:32:32] <StephenLynx> but I don't know pymongo or python.
[19:33:48] <blizzow> StephenLynx: Thanks. I'll read up on projection.
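In the mongo shell the same idea looks like the following (pymongo's find() takes a similar second parameter; the field names here are hypothetical):

    db.coll.find(
        { status: "active" },            // query filter
        { name: 1, status: 1, _id: 0 }   // projection: keep name and status, drop _id
    );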
[19:48:27] <teknopaul> Hi, I'm finding that update() or updateOne() wipes all the data in a document. my update has no need for "dot notation"; AFAICS it's a flat object, so I'm just trying to add some fields: { '$set': { desc: 'baa', lastUpdate: 1443728439630 } }
[19:52:30] <cheeser> can you pastebin the document and your entire update command?
[19:59:40] <teknopaul> if (! update.$set) throw new Error() :)
[20:01:06] <StephenLynx> you might want to make an update that does other stuff without a $set though
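The pitfall from this exchange, sketched in the mongo shell: without an update operator such as $set, the second argument replaces the entire document:

    db.coll.update({ _id: id }, { desc: 'baa' });            // replaces the whole document
    db.coll.update({ _id: id }, { $set: { desc: 'baa' } });  // only sets desc, keeps other fields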
[20:39:08] <pagios> pagios: Don't use MongoDB - it has many issues, and is essentially never the right solution to your problem. Here's a list of issues: http://cryto.net/~joepie91/blog/2015/07/19/why-you-should-never-ever-ever-use-mongodb/ and a more technical article at http://www.sarahmei.com/blog/2013/11/11/why-you-should-never-use-mongodb/ CAN SOMEONE please explain this to me once and for all, because i am tired of people's comments and noise
[20:39:27] <pagios> i am new to databases and i want to choose a db to use with node.js