[07:27:44] <VeeWee> Mongod won't start. I tried repairing, removing lock file, killing old process and removing pid file. No logs are written. Any ideas?
[07:32:03] <kees_> permissions on the log/pid/data files VeeWee?
[07:34:40] <VeeWee> switched them to mongod user, but they got switched back to root for some reason
[07:35:25] <kees_> prolly because you ran it as the root user to test it once or twice?
[07:38:48] <VeeWee> Could be ... Can't start the process with the mongod user though.
[07:41:24] <yruss972> Hi, I'm trying to understand the serverStatus output. Is it normal for recordStats.local.accessesNotInMemory to be as high as 2666252 ?
[07:52:33] <VeeWee> kees_: got it ... emptied the log directory a while ago *facepalm*
[07:53:13] <rspijker> yruss972: it could be… how long has the process been running?
[07:56:46] <yruss972> rspijker: "uptime" : 19928429 (not sure what units those are)
[08:00:13] <rspijker> so, basically. That number for local.accessesNotInMemory is the amount of times mongod had to fetch a page that wasn’t in memory for the local DB
[08:00:49] <yruss972> yeah- I read it but I'm just not sure how bad it is
[08:00:56] <rspijker> that’s not all that strange. The local DB is written to every time you write to the DB (since the oplog lives there)
[08:01:03] <yruss972> I have some other dbs with high numbers also
[08:01:24] <rspijker> are you seeing bad performance?
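[Editor's note: a minimal way to pull the counters discussed above from the shell; the recordStats layout is an assumption based on the MMAPv1-era (2.x) serverStatus output.]

    // per-database page-fault counters; "local" holds the oplog
    var rs = db.serverStatus().recordStats;
    printjson(rs.local);   // e.g. accessesNotInMemory, pageFaultExceptionsThrown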
[08:38:50] <shambat> is there some way to limit the amount of ram mongodb uses?
[08:44:33] <sflint> shambat: there isn't a configuration in mongod for that
[08:44:37] <sflint> why are you trying to limit it?
[08:46:21] <shambat> sflint: I want to use mongodb as a db instead of storing data in huge json files, not for speed, but to save memory usage. I'm using the data in a python script, and the json files are quite large, taking up a lot of memory
[08:46:46] <shambat> so I'd rather query a db than load in the whole json file into memory
[08:47:06] <JT-EC> mongodb is not the best choice if you're trying to cut down on memory usage.
[08:47:10] <sflint> sure....then why do you want to limit the RAM mongod is using?
[08:51:02] <yruss972> seems like a lot for such a small DB
[08:51:10] <rspijker> yruss972: it can be. It's a capped collection (well, the most important one in there, the oplog, is) and it defaults to 5% of free FS space on 64bit linux
[08:54:49] <yruss972> sflint: "The oplog (operations log) is a special capped collection that keeps a rolling record of all operations that modify the data stored in your databases." ?
[08:55:23] <yruss972> editing pages in backend, etc.
[08:56:32] <shambat> sflint: sorry I was afk there ... right now I have nothing in mongodb. I have a python script that does json.load(myfile), which is very demanding on memory. I was hoping that storing the data in mongodb would allow me to just query the database instead of loading the whole thing into memory, thus preventing the server from crashing due to lack of memory
[08:57:27] <rspijker> yruss972: can you run this script:
[08:57:29] <rspijker> var c=0;use admin;db.runCommand({"listDatabases":1}).databases.forEach(function(x){db=db.getSiblingDB(x.name);c+=db.stats().dataSize;})print(c);
[08:57:38] <sflint> if you use an index on the data you will be fine....then the query will be fast as well
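[Editor's note: a sketch of sflint's point about indexing; collection and field names are placeholders. ensureIndex was the shell helper of the time (later renamed createIndex).]

    // build an index once, so queries on that field stop scanning the whole collection
    db.mydata.ensureIndex({ myfield: 1 })
    db.mydata.find({ myfield: "some value" })   // now answered via the index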
[08:58:11] <shambat> sflint: so mongo will use disk if memory is getting full?
[08:58:28] <rspijker> shambat: no, mongo itself won’t
[08:58:47] <rspijker> it uses memory mapping, so it will always use memory
[08:58:57] <rspijker> the kernel will swap pages in and out as required
[09:00:48] <rspijker> yruss972: try this one instead: var c=0;db.adminCommand({"listDatabases":1}).databases.forEach(function(x){db=db.getSiblingDB(x.name);c+=db.stats().dataSize;});print(c);
[09:01:11] <shambat> sflint: this is a script that runs hourly and will add data and compare stored data
[09:42:56] <JT-EC> yruss972: The issue seems to be that SmartOS zones don't have absolute memory limits and mongo doesn't seem to be able to cope very well with that. SmartOS lets you take more memory but swaps it and it kills performance as you're finding.
[09:43:41] <JT-EC> Because of the soft mem limits and (from the docs) "MongoDB automatically uses all free memory on the machine as its cache" this tends to screw SmartOS users.
[09:44:04] <JT-EC> Not sure how Joyent make this all work.
[09:44:44] <yruss972> JT-EC: any idea how mongo determines how much free memory there is?
[09:44:54] <Derick> yruss972: it doesn't care, it just allocates more
[09:45:16] <Derick> it leaves it to the OS to manage the swapping
[09:45:50] <yruss972> ok- but I still don't understand why it should be trying to take more memory when I only have 105MB of data :?
[09:51:16] <JT-EC> yruss972: Look at what I quoted from the docs - "all free memory" - doesn't mention anything about your data size only mongo's greed.
[09:52:09] <JT-EC> So you could try ulimits to fix this.
[10:46:21] <_Nodex> dragoonis : if it works for your app then I don't see why not :)
[10:49:04] <dragoonis> I had another mongoDB question about indexes, i just made it now - maybe you have some feedback for me here too? :) http://stackoverflow.com/questions/24734811/hiow-to-mongodb-ensure-index-upon-collection-creation
[11:11:41] <phr3ak> so if i set the database server ip in our application and that node becomes unreachable, how does the application learn the ip address of a working node? how does the HA work?
[11:13:05] <phr3ak> is there any virtual ip support in mongodb?
[11:13:35] <_Nodex> your mongos takes care of that iirc
[11:13:50] <rspijker> if you have just a single replica set (no mongos, no sharding)
[12:33:04] <backSlasher> we are experiencing really slow queries (>3s) with a collection that holds medium (<10MB) files.
[12:33:04] <backSlasher> I don't have a very high lock percentage (~10%-20%) but I do see a very high write queue on my disk, once every ~20s.
[12:33:04] <backSlasher> AFAIK it's not the data flush because the console says that they are one minute apart. "accessesNotInMemory" is lower than 1 for the relevant collection.
[12:33:05] <backSlasher> Any thoughts about what to check next?
[12:34:31] <cheeser> do an explain on your query. make sure you're using an index.
[12:35:19] <backSlasher> it's an upsert based on id, and idhack is true for these queries, so index is probably not the issue
[12:47:07] <backSlasher> dragoonis, that "basicCursor" means no index - just scanning
[12:48:07] <backSlasher> cheeser, any thoughts about my problem? Any way to trace mongo's writes to disk?
[12:48:27] <dragoonis> backSlasher, ok got it, i added an index on results_date field and then queried on it.
[12:48:32] <dragoonis> I got this: "cursor" : "BtreeCursor results_date_1",
[12:48:44] <dragoonis> which is good, but I have "indexOnly" : false,
[12:48:50] <backSlasher> dragoonis, this means that it used the index. It should also scan less objects
[12:48:50] <dragoonis> Does this imply I have more work to do ?
[12:49:20] <dragoonis> The docs on indexOnly said ... "indexOnly is a boolean value that returns true when the query is covered by the index indicated in the cursor field."
[12:49:36] <dragoonis> and the only condition I have is results_date, so it should fully cover this index?
[12:49:42] <dragoonis> > db.company_category_score_months.find( { results_date: { $gte: new Date(1404169200000), $lte: new Date(1406761200000) } }).explain()
[12:49:55] <backSlasher> dragoonis, why are you not using range?
[12:50:18] <dragoonis> backSlasher, will a Range give me an indexOnly trigger?
[12:51:46] <backSlasher> ofc it wouldn't serve the same results, but just for the experiment
[12:52:22] <backSlasher> I think indexOnly is when mongo doesn't have to do any scanning, and in your case you'll always have to scan something because you have no specific number, only a range
[12:52:47] <dragoonis> > db.company_category_score_months.find( { results_date: new ISODate("2014-07-01T00:00:00Z") }).explain()
[12:52:57] <dragoonis> found the 2 matched rows, but indexOnly is false
[12:53:16] <backSlasher> dragoonis, well don't know then... try by id? :)
[12:54:56] <dragoonis> same result - https://gist.github.com/dragoonis/d171cbd9cb69f27610a4
[12:56:49] <backSlasher> maybe try only fetching the id like find({id:1234546},{id:1})
[12:59:50] <backSlasher> it means that it used an index
[13:00:23] <backSlasher> the real thing to note is to compare "n" (number of objects returned) to "nscanned" (or something similar), which is how many objects mongo manually compared
[13:00:42] <backSlasher> you have to keep the ratio high, meaning that mongo doesn't scan objects you don't want
[13:01:24] <dragoonis> n = 2 and nscanned = 2 .. so we're good here :)
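[Editor's note: for reference, a sketch of the covered-query idea backSlasher was circling: indexOnly becomes true only when the projection is restricted to indexed fields and _id is excluded, since _id is not part of the results_date index.]

    db.company_category_score_months.find(
        { results_date: { $gte: new Date(1404169200000), $lte: new Date(1406761200000) } },
        { results_date: 1, _id: 0 }    // project only what the index holds
    ).explain()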
[13:01:44] <dragoonis> backSlasher, thanks for your time and helpful explanations!
[13:05:50] <backSlasher> dragoonis, sure thing. I actually came here seeking help for my own problem... :-)
[13:07:16] <dragoonis> We're all at diff points in the food chain
[13:07:30] <dragoonis> I can now help people with indexing questions a bit :)
[13:17:33] <backSlasher> we are experiencing really slow upserts (>3s) with a collection that holds medium (<10MB) files. They have "idhack", meaning the native "_id" index is used.
[13:17:33] <backSlasher> I don't have a very high lock percentage (~10%-20%) but I do see a very high write queue on my disk, once every ~20s.
[13:17:33] <backSlasher> AFAIK it's not the data flush because the console says that they are one minute apart. "accessesNotInMemory" is lower than 1 for the relevant collection.
[13:17:33] <backSlasher> Any thoughts about what to check next?
[13:21:00] <alanhoff> Hey guys, I'm willing to receive a JSON and use it as the query param inside find(), is this idea ok?
[13:46:11] <Cactusarus> can anyone help me figure out if I can use browserify with mongodb in node.js?
[13:50:09] <Cactusarus> I saw the npm package, I'll install separately
[14:28:30] <uehtesham90> hey, i want to make a copy of a collection in a database....there are two options: using copyTo() and cloneCollection....im not sure which is a better option
[14:34:09] <obiwahn> http://paste.debian.net/109545/ -- Is there a faster/better way to delete the homework with the lowest score?
[14:36:53] <uehtesham90> im doing the same course...lol
[14:37:18] <uehtesham90> @obiwahn....i used python though
[14:38:49] <obiwahn> i just think it is best to do stuff you can do in the db in place instead of sending stuff over the driver
[14:39:05] <pseudo_> I have a 3 node cluster that "ran out" of disk space. I am getting "Can't take a write lock while out of disk space" errors. However, each machine in the cluster has at least 900MB of disk space left. Why am I getting these messages still and how can I force mongo to delete some data?
[14:39:28] <uehtesham90> @obiwahn u can have a look at my code in the above link
[14:40:04] <rspijker> obiwahn: you probably have the quickest way
[14:40:37] <rspijker> pseudo_: after a while, mongod will allocate disk space in chunks of 2GB
[14:40:38] <cheeser> findAndModify would be atomic
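[Editor's note: a sketch of cheeser's findAndModify suggestion; the grades collection and its {student_id, type, score} shape are assumptions based on the course dataset obiwahn seems to be working with.]

    // atomically find one student's lowest homework score and remove that document
    db.grades.findAndModify({
        query: { student_id: 1, type: "homework" },
        sort: { score: 1 },    // ascending, so the lowest score comes first
        remove: true
    })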
[14:41:16] <rspijker> pseudo_: to get mongod to return disk space to the filesystem, either remove the files and resync from another node in the replica set, or run repairDatabase
[14:42:24] <pseudo_> rspijker: so if i remove the files directly from the filesystem, i can run repairDatabase and all the other data should be fine?
[14:43:42] <rspijker> pseudo_: no… you EITHER remove the files and sync back from another replica set member, OR you run repairDatabase
[14:43:49] <rspijker> for repairDB you don’t have to remove anything
[14:44:11] <rspijker> it will just rewrite the files, ‘unfragmented’
[14:44:52] <rspijker> however, if your DB files are already fairly unfragmented, which can happen if you don't have many removals or updates, then repairDatabase won't give you much space back... but then again, neither will anything else, because you just have that amount of data
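[Editor's note: for reference, the repairDatabase option rspijker describes can be run from the shell like this; "mydb" is a placeholder.]

    // rewrites the database's files compactly; needs free disk space to work in
    db.getSiblingDB("mydb").repairDatabase()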
[14:45:41] <pseudo_> yeah, repair database didn't give a whole lot. i dont think i had a single removal/update
[14:45:51] <obiwahn> uehtesham90: lowest_score = min(entry['scores'][2],entry['scores'][3]) - ok, it works, but it's not really elegant; you could have matched for homework
[14:46:34] <pseudo_> each node in the cluster is short on disk space and the only place with 2gb of data is /var/lib/mongodb. is it safe to delete files straight from here?
[14:46:46] <obiwahn> :P on the other hand i have wasted too much time over it :P
[14:48:15] <rspijker> pseudo_: if repairDB doesn't give you back any space, neither will a deletion
[14:48:34] <rspijker> apparently, you just have that much data in your system
[14:48:42] <pseudo_> rspijker: i have a 98GB database that I want to completely drop. I just don't have enough disk space to perform the drop operation
[14:48:46] <rspijker> or, it’s distributed really uncomfortably and you have a lot of overhead
[14:49:01] <rspijker> o, then you can just remove the file :)
[14:49:15] <rspijker> just, shut down the db first
[14:49:18] <pseudo_> okay, it won't screw up metadata or anything?
[14:49:42] <rspijker> as long as you remove the correct files, no...
[14:49:55] <pseudo_> also, should i stop the whole cluster and remove it all at once? will the cluster be able to come back online if they all go down?
[15:01:56] <pseudo_> rspijker: i just restored the files that i deleted. i took away the 0th file from the packets database. i am going to try removing a different one.
[15:02:20] <rspijker> I thought you wanted to drop the entire thing?
[15:02:45] <pseudo_> rspijker: i do, but i was just trying to free enough disk space to perform the drop command in mongo
[15:03:01] <rspijker> well… there is a packets.ns file
[15:08:48] <dragoonis> Derick, I believe upsert and multi need a 1 line description of each. I'm happy to add this now if you can approve my wording of 'upsert' and you tell me what 'multi' means :)
[15:09:13] <Derick> multi means that one update statement can update more than one document
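[Editor's note: a minimal sketch of both flags in the 2.x shell update; collection and fields are hypothetical.]

    // upsert: insert a new document if nothing matches the query
    // multi:  apply the update to every matching document, not just the first
    db.things.update(
        { status: "pending" },
        { $set: { reviewed: true } },
        { upsert: false, multi: true }
    )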
[15:09:40] <og01> If i want a replicaset with only 2 instances, how can I ensure that both instances become primary when network connectivity goes down?
[15:10:05] <og01> or is that the normal situation anyway
[15:24:28] <ejb> Is there a way to specify that id's should be strings not ObjectIds? Meteor uses strings so it doesn't play nice with documents that I've loaded via mongoimport
[15:31:20] <saml> ejb, doc json can contain _id field and mongoimport would respect that
[15:32:08] <ejb> saml: hm, ok. How can I generate good ids to put in my json file?
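[Editor's note: a sketch of one way to pre-generate Meteor-style string ids for an import file; the 17-character length and alphabet are assumptions modelled on Meteor's Random.id().]

    function randomId() {
        var chars = "23456789ABCDEFGHJKLMNPQRSTWXYZabcdefghijkmnopqrstuvwxyz";
        var id = "";
        for (var i = 0; i < 17; i++)
            id += chars.charAt(Math.floor(Math.random() * chars.length));
        return id;
    }
    // each line of the mongoimport file then supplies its own string _id:
    // { "_id": "YxvFYNYQyRyvNXH8e", "name": "..." }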
[16:34:31] <jParkton> Im having a couple of simple problems connecting to my mongodb via node. I was wondering if someone could help point me in the right direction?
[16:35:01] <jParkton> ignore the .find() I know thats off
[16:36:49] <jParkton> Im trying to find my collections and return them. I looked here http://runnable.com/UW3vaGTkq498AACC/query-for-records-in-mongodb-using-mongodb-native-for-node-js but its not working in my case. Maybe I am missing some package?
[17:04:10] <og01> jParkton: you seem to have various race conditions in your code
[17:13:31] <og01> nodejs (and/or javascript in general) runs an event loop. this loop is native to nodejs (in some languages it must be implemented). This loop allows the programmer to setup watchers, listeners and fire events
[17:14:38] <og01> instead it should setup callbacks/events that occur some time in the future (my description here isnt excellent, but hopefully you follow)
[17:17:52] <jParkton> Im making a simple login / logout app
[17:18:11] <og01> so with the connect command you only have access to the db object once the callback is called... you cant reliably use this object, and you can't just set it to a global variable because you wont actually know when its ready
[17:18:12] <jParkton> so I need to somehow setup a backend so I am not hard coding it all in my exposed files
[17:18:29] <og01> really what you end up with is a load of nested callbacks....
[17:20:20] <og01> also the return from function calls/methods become meaningless
[17:21:00] <og01> now, with this understanding, you can work around things, you can set flags on variables when things are ready, setup timers and work around things...
[17:21:16] <og01> delays can be setup and all sorts to help bodge things into working
[17:21:26] <og01> but this is not a good way to go around things
[17:21:36] <og01> you can pass around callbacks, and such
[17:22:51] <og01> im not explaining myself well at this point... sorry but do you think you follow
[17:22:57] <jParkton> I think I get why it was failing
[17:23:16] <jParkton> I think so, but I have been wrong before lol
[17:24:00] <og01> MongoClient.connect() completes immediately - it does not block, but the function you provide to it is called some time in the future when the connection is complete
[17:24:26] <og01> but by that time you've already left your function (connectDatabase or getCollection)
[17:25:09] <og01> now I like promises, but some people I work with do not, i personally think some people simply fail to understand them.
[17:25:31] <og01> a promise is the promise of a result some time in the future.
[17:26:20] <og01> it is an object you can store, and you can call promise.then(function(db) {...do something with ...});
[17:29:15] <og01> it makes it so you can rebuild a more familiar flow to your code: you can return a promise of your results, be it success or failure; it makes returning a value useful again, and also provides the equivalent of a thrown error via rejection
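[Editor's note: a sketch of the promise idea og01 describes, wrapping the callback-style node driver of the time; the URL is a placeholder and an ES6-style Promise implementation is assumed.]

    var MongoClient = require('mongodb').MongoClient;

    // connect once; every later caller chains on the same promise
    var dbPromise = new Promise(function(resolve, reject) {
        MongoClient.connect('mongodb://localhost:27017/test', function(err, db) {
            if (err) reject(err);     // the equivalent of a thrown error
            else resolve(db);
        });
    });

    dbPromise.then(function(db) {
        // ...do something with db...
    });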
[17:30:11] <og01> I think you should perhaps grasp how to pass around callbacks through your code first
[17:43:40] <og01> take a look at this (again untested) example
[17:43:59] <og01> please note that i wouldnt write code like this, but it is an example for you to learn from
[17:44:49] <og01> in this example getDatabase reuses the same db object every time it is called
[17:45:10] <og01> if it isnt already created, it connects, stores the db object and calls the callback
[17:45:33] <og01> if it already is created then it calls the callback with the already existing db object
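[Editor's note: og01's paste is not preserved; this is a sketch of the getDatabase pattern as described, against the callback-style node driver of the time. Names and the URL are illustrative.]

    var MongoClient = require('mongodb').MongoClient;
    var cachedDb = null;

    function getDatabase(callback) {
        if (cachedDb) {
            // already connected: reuse the stored db object
            return callback(null, cachedDb);
        }
        MongoClient.connect('mongodb://localhost:27017/test', function(err, db) {
            if (err) return callback(err);
            cachedDb = db;    // store it for subsequent calls
            callback(null, db);
        });
    }

    // usage: everything that touches the db happens inside the callback
    getDatabase(function(err, db) {
        if (err) throw err;
        db.collection('users').find().toArray(function(err, docs) {
            // ...work with docs here, not at the top level...
        });
    });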
[17:46:41] <og01> as for your querying from your login page, you'd need to know your webserver/webframework and have an understanding of HTTP and potentially XHR/AJAX
[17:47:24] <og01> perhaps lookup how to write a RESTful interface in express or some such
[17:48:33] <og01> jParkton: i have to go in 5 min, so any final questions ask now
[17:48:46] <og01> in any case i hope i've helped more than hindered :)
[18:15:45] <apetresc> Hey, guys; I'm having trouble getting scons to respect the targets I tell it to build. Basically, even if I say something like "scons tools install --prefix=/whatever" it will insist on building and installing *everything*
[18:38:26] <rickibalboa> Hi, I have a strange situation where a mongo query performed from a nodejs process hangs and ends up timing out. The database connection is made, as soon as I run any find() queries they hang, but they work if I use mongodb shell / robomongo.
[18:38:37] <context> with the aggregation framework can i $project a field of embedded documents (this is an array of documents not a single embedded document)
[18:38:43] <context> im thinking i might have to mapReduce it
[18:40:03] <apetresc> What do you mean, context? Like, you want to only project some of the documents in the array, or the entire field (which happens to be an array)
[18:40:26] <context> well im doing a count based on the embedded documents
[18:41:07] <context> primary { blah: [secondary] }, many secondaries; i want to project fields out of every embedded doc, some records only have one in the array, some have more
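[Editor's note: a sketch of one way to do what context describes; collection and field names are guesses. $unwind turns the embedded array into one document per element so later stages can reach its fields.]

    db.primary.aggregate([
        { $unwind: "$blah" },                              // one doc per array element
        { $project: { field: "$blah.field", _id: 0 } },    // pull fields out of each embedded doc
        { $group: { _id: "$field", count: { $sum: 1 } } }  // count based on them
    ])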
[18:41:53] <ranman> apetresc: scons mongo -j 8 or something like that seems to just build the shell for me
[18:43:28] <apetresc> ranman, not me :( I create a brand new directory (/Users/apetresc/tmp/mongo) and run exactly: `scons mongo install --prefix=/Users/apetresc/tmp/mongo`
[18:44:24] <apetresc> And then `ls ~/tmp/mongo/bin` returns everything... mongo, mongod, mongos, etc
[18:45:12] <apetresc> I feel like maybe even if the `mongo` target only builds the shell, the `install` target accidentally depends on `all`, perhaps?
[18:47:15] <ranman> apetresc: seems to be the case, I don't really want to go through the sconstruct atm though :/ my recommendation: scons mongo -j8, then copy the build artifacts to wherever you want them to be
[18:47:54] <apetresc> ranman, sigh, I see :( If I track this down and submit a patch for it, there's no reason it wouldn't get pulled, right?
[18:48:15] <ranman> I doubt it would get pulled actually
[18:48:20] <ranman> install is an alias, not a target
[18:48:29] <ranman> and it's probably depended on by something
[18:48:51] <apetresc> So... what's the point of having individual fine-grained targets if it's by-design impossible to install any of them?
[18:49:45] <ranman> I don't think install was intended to be used that way -- nor do I think mongodb is meant to be installed on machines that way -- better to use the packages
[18:50:04] <ranman> if you're just doing development then you rarely need to "install" anything you just need to run tests
[18:50:47] <apetresc> No, but what if you're building it from source on a machine that just needs a client? You can't envision any scenarios where someone would want to talk to a mongo server without BEING a mongo server?
[18:51:17] <apetresc> Well anyway, I think I'll just workaround it by manually cp-ing the binaries out
[18:51:47] <ranman> apetresc: there are packages for just the client, why are you building from source?
[18:51:49] <apetresc> But for the record, I don't understand why installing a target doesn't install just that target :P But to be fair, I don't know too much about scons
[18:51:51] <ranman> apetresc: it's not that at all
[18:52:34] <apetresc> ranman, in this particular case it's because I'm trying to modify the Homebrew recipe for Mongo to support a --no-server option
[18:52:43] <ranman> apetresc: it's that scons and master and all of these things are for development of mongodb itself -- if you're using them your results are not guaranteed -- when most people deploy a database they want a battle-tested release and package
[18:52:44] <apetresc> Homebrew insists on building from source, they don't take binary packages
[18:53:02] <ranman> apetresc: homebrew has a bottle for mongodb
[19:00:39] <ranman> apetresc: if you're interested in trying to fix it I think the bug is in here somewhere: https://github.com/mongodb/mongo/blob/master/src/mongo/SConscript
[19:02:08] <ranman> apetresc: I don't have a lot of time to look into it right now but you have a good point
[19:22:34] <Gargoyle> Are there any channels for the university courses - can't see anything obvious. If there are no other suggestions, I'll be hanging out in #mongodb-m102 for the next seven weeks.
[19:33:45] <coreyfinley> My document has a field that is an array of dates where the given object meets some boolean requirement. so given a date range, could anyone help me craft a query that can find objects that have at least one date in the array that is within the given date range?
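[Editor's note: a sketch for coreyfinley's question; the collection and the field name "dates" are assumed. $elemMatch requires a single array element to satisfy both bounds at once.]

    db.things.find({
        dates: { $elemMatch: { $gte: ISODate("2014-07-01"), $lte: ISODate("2014-07-31") } }
    })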
[20:17:16] <lfamorim> Hello! Why did Mongo lock my database during the indexRebuilder operation? I'm crying ... ;(
[20:17:41] <lfamorim> I have a severe outage because of it
[20:19:39] <kali> lfamorim: well, for this kind of operation, you usually take replicas out of the replica set one at a time...
[20:22:22] <lfamorim> kali, my server is standalone =(
[20:22:58] <lfamorim> the server got into this condition after a background index build finished
[20:22:59] <kali> in production ? this is a disaster waiting to happen
[20:23:21] <lfamorim> kali, does mongo always need a replica?
[20:23:36] <patricklewis_> Hey, I have a question, does anyone have a sec to try and help me out?
[20:24:09] <kali> any critical database needs replication of some sort.
[20:24:33] <patricklewis_> More or less a way to find the next document in the DB
[20:24:55] <patricklewis_> meaning if I am viewing information on a document and I want to go to the next document how can I do so?
[20:25:26] <patricklewis_> a great example would be viewing a portfolio of web work. Each project is a document
[20:25:42] <patricklewis_> and I need to be able to find the next and previous documents
[20:25:55] <kali> next and previous according to what ?
[20:25:55] <patricklewis_> so that way I can navigate using arrow icons
[20:26:14] <patricklewis_> so if I am viewing a project and I want to view the next one based on when it was added
[20:27:24] <kali> patricklewis_: you need to have something to materialize this "added" operation. if you're using standard ObjectIds as _id, you're lucky, because they have a built-in timestamp you can use
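[Editor's note: the built-in timestamp kali mentions can be read in the shell like this; the hex string is a hypothetical _id.]

    var id = ObjectId("53c1a5a0e4b0a1b2c3d4e5f6");
    id.getTimestamp()    // returns an ISODate taken from the id's leading 4 bytes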
[20:27:44] <patricklewis_> yep I am using object ids
[20:28:02] <patricklewis_> btw i just started using mongodb so Im using the wrong terminology let me know
[20:31:11] <patricklewis_> Basically I am building a site/webapp using meteor js, and the trailing part of the url is a parameter telling the app which document to pull the information from
[20:31:29] <patricklewis_> so in this case YxvFYNYQyRyvNXH8e is my _id
[20:31:40] <patricklewis_> however I would like to find the next document
[20:31:47] <kali> well, it does not look like an ObjectID
[20:31:56] <kali> so the objectId trick will not work
[20:32:09] <kali> you need to add a timestamp to your document model
[20:32:13] <patricklewis_> hmm.. I guess meteor does it differently
[20:32:17] <kali> but i don't know how to do that in meteor
[20:32:38] <patricklewis_> I do, how would I find the next document based off of the time stamp?
[20:33:43] <kali> just look for the smallest value of the timestamp strictly higher than the one you're on
[20:34:43] <patricklewis_> how do I do so in mongo?
[20:36:38] <kali> or whatever will work with meteor
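[Editor's note: a sketch of kali's suggestion in shell terms; the collection, the "createdAt" field, and the "current" document are hypothetical.]

    // next document: the smallest createdAt strictly greater than the current one
    db.projects.find({ createdAt: { $gt: current.createdAt } }).sort({ createdAt: 1 }).limit(1)
    // previous document: flip both the comparison and the sort
    db.projects.find({ createdAt: { $lt: current.createdAt } }).sort({ createdAt: -1 }).limit(1)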
[20:50:03] <cozby> question... if an application is supplied one mongo url replica endpoint, is that application aware of the other replica set members?
[20:50:20] <cozby> or does one need to supply ALL replica set endpoints for the application to be aware of them
[20:57:08] <kali> cozby: one is enough when everything is right. the driver will contact the one it knows about and get the full list from there
[20:57:56] <kali> cozby: but if the one you're passing it does not work when the client starts, then you're in trouble. so it's better to provide the driver every replica you know about
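[Editor's note: a sketch of kali's advice as a node-driver seed list; host names, db name, and set name are placeholders.]

    var MongoClient = require('mongodb').MongoClient;
    // list every member you know about so discovery still works if the first is down
    var uri = 'mongodb://host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0';
    MongoClient.connect(uri, function(err, db) { /* ... */ });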
[21:51:33] <maxmanders> I know every mongodb configuration is different, but can anyone offer advice on useful values from mongostats to use as alerting thresholds?
[21:51:49] <maxmanders> I.e. if queued reads/writes are > X then potential problem; if > Y then significant problem etc?
[22:03:49] <joannac> maxmanders: what does your normal pattern look like?
[22:04:51] <maxmanders> joannac: I think unfortunately my answer will be 'give it a shot and find out' - I'm looking to put forward some useful indicative suggestions as part of a management offering. Would immediately go for MMS, but in this case it's not an option.
[22:05:29] <maxmanders> I just hoped there might be some indicative useful figures / ratios to look out for - but I really don't have all the details that I'd like to offer for a decent answer (own worst enemy)
[22:05:49] <maxmanders> joannac: among other things - not a reservation on my part at all, but client governance requirements
[22:07:10] <joannac> you probably don't want queued reads or writes at all
[22:07:27] <joannac> because it means you're not keeping up with the load on the database
[22:08:06] <maxmanders> So as a starter-for-ten, nominally these values should be zero, and anything non-zero is worth investigating, and further tweaking as we understand more about the application and its behaviour?
[22:08:14] <joannac> but of course, you don't provision for your absolute peak load, so you will occasionally get spikes of queued readers and writers
[22:08:57] <maxmanders> so as reads or writes increase, would increased queue length necessarily inform a decision to add more replica sets to spread the load?
[22:11:09] <maxmanders> Er yeah :-\ iiuc scaling sharded replica sets can offer better performance, but premature sharding can be counterproductive compared to having entire data sets on multiple replicas?
[22:13:16] <joannac> yes, assuming your data is easily separated into multiple replica sets
[22:14:01] <maxmanders> So unless there's a need to shard to spread writes across different replica set members, would I be right in saying it would be good to start with having the entire data set on multiple replica set members? *with your assumption above*?
[22:14:18] <maxmanders> Newer tech to me, still trying to get it right in my head :-)
[22:21:42] <maxmanders> And I thought that one or more schemas were present on a primary member, and were made highly available through other secondary replica set members.
[22:22:56] <maxmanders> Anyways - you've cleared things right up - and I've been awake too long - so thank you again, very much appreciated.
[22:32:34] <wc-> hi all, im having a lot of trouble creating an admin user on a local mongo server