PMXBOT Log file Viewer

#mongodb logs for Thursday the 15th of September, 2016

[01:15:58] <_bahamas> Hi! After doing rs.initiate() I do rs.add('hostname') and then I receive this message: "errmsg" : "Our config version of 2 is no larger than the version on hostname:27017, which is 3" can someone help me decode this errmsg?
[01:17:21] <joannac> yes
[01:17:33] <joannac> the member you're on has config version 2
[01:17:41] <joannac> the member you're adding has config version 3
[01:17:57] <joannac> that is not an acceptable situation. why does the node you're adding already have a configuration?
[01:19:13] <_bahamas> how can I make member2 have the same version number? can I force it?
[01:19:25] <_bahamas> member2 is the member I'm trying to add.
[01:19:35] <joannac> can you answer my question, so I understand what you're trying to do?
[01:19:42] <joannac> why does the node you're adding already have a configuration?
[01:20:02] <_bahamas> because I created it with a configuration.
[01:21:55] <_bahamas> @joannac I started both servers (master and slave) with the same configuration (same mongod.conf)
[01:22:10] <joannac> connect to both servers, and run rs.status()
[01:22:17] <joannac> and pastebin what you see from both servers
[01:24:48] <_bahamas> @joannac http://pastebin.com/ZSKGH5Gy
[01:25:20] <joannac> okay, so on the primary
[01:25:42] <joannac> rs.add("hostname2"), what's the error you get?
[01:26:16] <_bahamas> "errmsg" : "Our config version of 2 is no larger than the version on prd-mp-mongo-catalogo-n2.ns2online.com.br:27017, which is 3",
[01:26:37] <joannac> on the machine where the primary is, type mongo --host hostname2:port
[01:26:45] <joannac> and then in the mongo shell, type rs.status()
[01:28:19] <_bahamas> @joannac you helped me find the error. now I know what is going on.
[01:28:24] <joannac> bad DNS?
[01:28:26] <_bahamas> Maybe I need some sleep. sorry.
[01:28:33] <_bahamas> I'm using ansible
[01:28:58] <_bahamas> and my ansible yml was pointing to another ReplicaSET
[01:29:14] <_bahamas> thank you.
[01:30:09] <joannac> no problem
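The check joannac decodes above can be sketched in plain JavaScript. This is a hypothetical illustration of the rule, not actual mongod source: a reconfig is rejected unless the proposed config's version strictly exceeds the version already held by the member being contacted.

```javascript
// Hedged sketch (not mongod source) of the rule behind
// "Our config version of X is no larger than the version on <host>, which is Y".
function checkConfigVersion(ourVersion, theirVersion, host) {
  if (ourVersion <= theirVersion) {
    return {
      ok: 0,
      errmsg: `Our config version of ${ourVersion} is no larger than ` +
              `the version on ${host}, which is ${theirVersion}`,
    };
  }
  return { ok: 1 };
}

// _bahamas's situation: the primary holds config version 2, while the node
// being added already carries a config at version 3 (from a stale ansible yml).
const result = checkConfigVersion(2, 3, "hostname:27017");
```

This is why joannac's question matters: a node being freshly added should have no config at all, so its presence at a higher version indicates it already belongs to another replica set.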
[10:48:38] <OPB2K> Hi Mongo people. I have a question about using mongos as a proxy for a replica set. We currently have an AWS-hosted PHP app connected to our unsharded 3-node replica set. For legacy reasons, our driver wrapper does not support replica sets, so we currently have the primary at a higher priority than the secondaries, and we manually point to that IP (very bad, I know!). So my question... could we use mongos as a proxy
[10:48:50] <OPB2K> to the replica set without sharding? mongos is replica-set aware, so will direct the queries as needed. We just point our driver to the mongos instance. Obviously this then introduces the mongos instance as a single point of failure, so we're no better off. So could we then have two independent mongos instances behind a load balancer (ELB, as we're on AWS) for redundancy. Would be interested to hear thoughts on this.
[10:48:57] <OPB2K> Thanks in advance!
[10:56:01] <Derick> OPB2K: you can, but mongos *does* take some overhead, and there are minute changes in behaviour possible - but unlikely anything worse than connecting to just the primary by hardcoding it :)
[10:56:21] <Derick> You end up having to create a *one*-shard cluster though
[10:56:32] <Derick> and you can have two mongos
[10:56:48] <Derick> the PHP driver supports listing both if you want, but a load balancer could also work
[10:56:59] <Derick> You *do* need config servers too though, that also need to be maintained.
[10:57:15] <Derick> I would suggest you rather fix your PHP app to just be able to use a replicaset directly.
[11:00:52] <OPB2K> @Derick: thanks
[11:01:25] <OPB2K> developer capacity for upgrading the framework, dependencies, etc. is pretty limited, so I was hoping to go for an angle where we could bypass that
[11:01:45] <Derick> you can, but you introduce a *lot* of extra DBA work
[11:01:58] <OPB2K> yeah, definite tradeoff
[11:02:32] <OPB2K> i might go back to the drawing board and see if I can squeeze more out of our devs. Thanks for the advice
[11:02:50] <Derick> which framework is this? or is it custom?
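Derick's two options, listing both mongos routers in the driver versus pointing the driver at the replica set directly, come down to two connection strings. A hedged sketch with hypothetical hostnames, built here as plain strings:

```javascript
// Hypothetical hostnames; both forms are just seed lists in a MongoDB URI.
const mongosHosts = ["mongos1.example.com:27017", "mongos2.example.com:27017"];
const rsHosts = [
  "db1.example.com:27017",
  "db2.example.com:27017",
  "db3.example.com:27017",
];

// Option 1: list both mongos routers; the driver fails over between them
// (an ELB in front of them is the load-balancer alternative).
const viaMongos = `mongodb://${mongosHosts.join(",")}/appdb`;

// Option 2 (Derick's suggestion): connect to the replica set directly,
// letting the driver track the primary instead of hardcoding its IP.
const viaReplicaSet =
  `mongodb://${rsHosts.join(",")}/appdb` +
  `?replicaSet=rs0&readPreference=primaryPreferred`;
```

Option 2 avoids the extra mongos and config-server machinery entirely, which is the DBA overhead Derick warns about below.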
[12:09:23] <Jeroenimoo0> Hey, does anyone have some good articles on data consistency with denormalized data models? It doesn't have to be MongoDB specific
[13:37:20] <Guest24> So I have a match object, which contains a players array, which contains 10 player "objects", each with an accountId... if I want to query all of the matches and return those that have a player with a specific accountId, how might I actually do that?
[13:39:28] <cheeser> $unwind the array. match on the accountId
[13:48:17] <AndrewYoung> { players: [ { accountId: 1 }, {accountId: 2 } ] }
[13:48:21] <AndrewYoung> Is that right?
[13:52:36] <zylo4747> has anyone ever tried running a hybrid windows / linux mongodb replica set? is it possible? the reason i ask is because we have a windows replica set that I want to switch to linux but i'd rather add new linux nodes then remove the windows nodes later
[13:52:51] <AndrewYoung> Yes, it is possible.
[13:53:15] <zylo4747> any caveats that you're aware of?
[13:53:24] <AndrewYoung> All replica set communication is done over the network, so the nodes can be different operating systems, storage engines, etc.
[13:53:33] <zylo4747> ok great
[13:53:35] <zylo4747> thank you
[13:53:54] <AndrewYoung> You just need to make sure that all the machines in the replica set can keep up with the primary in terms of processing power, etc.
[13:54:06] <zylo4747> that shouldn't be a problem
[13:54:31] <AndrewYoung> Guest24: db.match.find({"players.accountId": 1})
[13:54:43] <AndrewYoung> That worked in my testing just now.
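AndrewYoung's answer works because of MongoDB's dot-notation semantics: when a path like "players.accountId" crosses an array, the predicate matches the document if any element matches. A hedged in-memory sketch of that behavior:

```javascript
// In-memory sketch of why db.match.find({"players.accountId": 1}) works:
// a dot path through an array matches if ANY element satisfies the predicate,
// so no $unwind is needed just to filter.
function matchesPlayersAccountId(matchDoc, accountId) {
  return (matchDoc.players || []).some(p => p.accountId === accountId);
}

const matches = [
  { _id: 1, players: [{ accountId: 1 }, { accountId: 2 }] },
  { _id: 2, players: [{ accountId: 3 }, { accountId: 4 }] },
];

// Equivalent in spirit to the shell query AndrewYoung tested.
const found = matches.filter(m => matchesPlayersAccountId(m, 1));
```

cheeser's $unwind-then-match pipeline is the aggregation-framework route; it is useful when you want one output document per matching player rather than the whole match document.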
[14:57:50] <jwpapi> I am querying the same database for two different collections in the same way, but one doesn't get a response
[14:57:50] <jwpapi> i checked everything 3 times already
[14:57:50] <jwpapi> http://pastebin.com/HTTfm38k
[14:58:01] <jwpapi> rooms is empty even though there is stuff in the rooms collection
[14:59:07] <StephenLynx> try without meteor.
[14:59:13] <StephenLynx> that code makes absolutely no sense to me.
[15:04:52] <jwpapi> i tried
[15:05:10] <jwpapi> like db.getCollection('rooms').find({}); it works
[15:16:02] <StephenLynx> no
[15:16:05] <StephenLynx> not on the terminal.
[15:16:06] <StephenLynx> on your code.
[15:18:20] <Robbilie> joannac: i am pretty sure by now that the timeout error is because of a really slow filesystem (1hdd shared by a couple kvms)
[15:18:33] <Robbilie> is there a way to increase mongo's filesystem read/write timeout?
[15:37:58] <jokke> i have a short question about the aggregation framework: Is it possible to use $fetch _from_ a sharded collection if the collection to be fetched is unsharded?
[15:38:40] <cheeser> you mean $lookup?
[15:39:02] <jokke> uh yes
[15:39:04] <jokke> sorry
[15:39:15] <cheeser> no worries. just clarifying. yes, i believe that's possible.
[15:39:25] <jokke> mhm good to know
[15:39:45] <cheeser> it's not supported against sharded collections because data locality cannot be guaranteed and there are performance issues, to say the least.
[15:41:29] <jokke> i see
[15:44:50] <jokke> cheeser: i assume localField and foreignField can be given in dot notation?
[15:46:35] <cheeser> i believe so, yes.
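A hedged in-memory sketch of what $lookup does (a left outer join against an unsharded foreign collection), with dot notation on both localField and foreignField as jokke asks about. The collections and field names here are hypothetical:

```javascript
// Resolve a dot path one level at a time, e.g. get(doc, "item.sku").
function get(doc, path) {
  return path.split(".").reduce((v, k) => (v == null ? v : v[k]), doc);
}

// Minimal sketch of $lookup semantics: for each input doc, attach under "as"
// every foreign doc whose foreignField equals the doc's localField value.
function lookup(localDocs, foreignDocs, { localField, foreignField, as }) {
  return localDocs.map(doc => ({
    ...doc,
    [as]: foreignDocs.filter(f => get(f, foreignField) === get(doc, localField)),
  }));
}

const orders = [{ _id: 1, item: { sku: "a1" } }];
const products = [
  { _id: 10, meta: { sku: "a1" } },
  { _id: 11, meta: { sku: "b2" } },
];

const joined = lookup(orders, products, {
  localField: "item.sku",   // dot notation on the local side
  foreignField: "meta.sku", // and on the foreign side
  as: "product",
});
```

The locality problem cheeser mentions is visible in this sketch: the join needs the whole foreign collection in one place, which a sharded foreign collection cannot guarantee.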
[16:10:10] <Robbilie> anyone here who can answer a mongod source code question? i don't quite understand under what circumstances ErrorCodes::ExceededTimeLimit is thrown (other than a set maxTimeMS)
[16:52:35] <AlmightyOatmeal> ugh, come on mongod, build 3 indices faster! i don't care if there are 26.2M documents, INDEX ALL THE THINGS!
[16:55:38] <AlmightyOatmeal> what would be the best approach to shard a single instance? i've got many cores and ~10k IOPS to utilize, and my server is barely breaking a sweat using only one core.
[16:58:22] <cheeser> if you only have one node, there's not much to do when sharded i would think.
[16:59:24] <AlmightyOatmeal> cheeser: what do you mean there isn't much to do?
[17:01:22] <cheeser> if you have one node, what do you expect sharding to do?
[17:01:48] <AlmightyOatmeal> cheeser: parallel operations utilizing more CPU cores and disk I/O instead of one process/thread doing *all* the work
[17:02:25] <cheeser> there's nowhere to move docs to...
[17:02:40] <Derick> mongodb does thread...
[17:02:52] <AlmightyOatmeal> cheeser: different shards?
[17:02:59] <cheeser> you said you had one node
[17:04:59] <AlmightyOatmeal> cheeser: yes.
[17:05:12] <cheeser> so. there's nowhere to move docs. no work to do.
[17:05:51] <AlmightyOatmeal> cheeser: moving docs to their respective shards would be something to do. then multiple shards could be accessed in parallel increasing I/O, no?
[17:06:06] <cheeser> what shards? you have one node, you said.
[17:06:32] <AvianFlu> if you have one node doing very little work, that sounds like it's overprovisioned
[17:06:37] <AlmightyOatmeal> cheeser: because i haven't found a single other option that could increase resource utilization
[17:06:41] <AvianFlu> adding more underworked servers will just increase complexity for nothing
[17:06:55] <AvianFlu> is the one core pegged to the point that you're slowing down?
[17:07:11] <AlmightyOatmeal> AvianFlu: mongod is only using one core and little disk I/O
[17:07:20] <AvianFlu> but is that a bottleneck
[17:07:25] <AvianFlu> or is that too much server for your workload
[17:07:32] <AvianFlu> also, what mongo version?
[17:08:32] <AlmightyOatmeal> AvianFlu: if i'm running complex queries over 26.2M+ documents, i expect more resources to be consumed and multiple readers accessing a collection to expedite results -- i'm coming from a MySQL world so my expectations are probably misaligned.
[17:08:40] <AlmightyOatmeal> AvianFlu: v3.2
[17:09:17] <AvianFlu> oh you're expecting multithreaded *queries*
[17:09:38] <AvianFlu> these queries are a bottleneck for you, or are you just wondering about the cpu usage
[17:10:43] <AlmightyOatmeal> i don't think the query is a problem, the fact that i have a large number of items to sift through is a problem when there is very little resource usage happening
[17:11:05] <AlmightyOatmeal> indices are probably going to take a couple of days to build
[17:12:59] <AvianFlu> sharding, to get back to your original question, isn't something you'd do on one host
[17:13:03] <AvianFlu> you'd want a few smaller instances
[17:13:11] <AvianFlu> and then you'd split that collection up across them
[17:13:24] <AvianFlu> it's generally used to parallelize writes, or to allow a collection to exceed disk/ram
[17:13:38] <AvianFlu> so I dunno if it's gonna help you a lot there either
[17:13:55] <AvianFlu> parallel queries, parallel index builds... I'm not sure if mongo actually does any of that
[17:14:12] <Derick> queries, certainly; indexes, maybe
[17:14:30] <Derick> but if you have load during index building, it's going to slow things down
[17:14:32] <AlmightyOatmeal> in all honesty, it sounds a little ridiculous to run multiple instances on a single host when mongo should be intelligent enough to handle that by itself. again, i'm coming from MySQL, which is significantly more mature for heavy demand, so my expectations are a little high
[17:14:36] <AvianFlu> parallel queries in the sense that one query uses multiple concurrent readers on one instance though?
[17:14:41] <Derick> even more if you build them in the background
[17:14:43] <AvianFlu> I wasn't aware that was a thing in mongo
[17:15:02] <Derick> AvianFlu: no, multiple queries at the same time - not one query using multiple readers
[17:15:05] <AvianFlu> right
[17:15:08] <AvianFlu> that's what he's asking about
[17:15:16] <AvianFlu> of course it supports multiple concurrent ops :)
[17:15:20] <AlmightyOatmeal> yeah, i'm referring to one query and multiple readers
[17:15:27] <AvianFlu> yeah I don't think mongo does that
[17:15:35] <AvianFlu> and yeah, my point was sharding wasn't gonna do what you expected there
[17:15:37] <AlmightyOatmeal> thats where i thought sharding would help
[17:15:41] <AlmightyOatmeal> ah
[17:15:57] <AvianFlu> it's for the "oh crap my collection is bigger than the most RAM I could ever put in one machine" situations
[17:16:01] <AvianFlu> or "shit I'm out of disk"
[17:16:10] <AvianFlu> or for scaling writes
[17:16:29] <AvianFlu> what kind of data is this that you're working with?
[17:16:43] <AlmightyOatmeal> another problem i run into is a lack of resource control in a single instance -- it's caused other services to crash, and when i limit what resources are available to mongod, it bombs out with an OOM stack, which is frustrating
[17:16:45] <AvianFlu> the indexes can certainly make it faster by limiting what you're doing to subsets of the big collection
[17:17:09] <AvianFlu> so you're trying to run this mongod on the same host as the apps using it?
[17:17:11] <AlmightyOatmeal> i'm loading ElasticSearch results to perform data analysis/statistics
[17:17:19] <AlmightyOatmeal> AvianFlu: no
[17:17:52] <AlmightyOatmeal> but there are system monitoring applications running, as well as kernel resources used for filesystem caching and I/O handling, which it has choked off
[17:18:13] <AvianFlu> oh, that makes sense
[17:18:25] <AvianFlu> it's surprising to hear that alongside "why isn't this using more resources"
[17:19:09] <AlmightyOatmeal> memory usage exceeds what WiredTiger is configured to use, which is one annoyance, but the resources i'm expecting to see more usage of are multiple cores and disk I/O, not all the ram in the box
[17:19:32] <AvianFlu> oh yeah, WT will use all your ram for its cache
[17:19:46] <AvianFlu> I expected it to play nice with the fs cache but I don't run it in production yet
[17:19:55] <AlmightyOatmeal> but it should not when it's configured to use X amount of ram, or 60% (the default), not 100%
[17:20:48] <AlmightyOatmeal> i've tried configuring WT to use as little as 1G of ram but it consumed all the ram it could find; logs confirm that mongod is configured properly with the WT cache set correctly, but it still uses more than it's configured to use
[17:20:58] <AlmightyOatmeal> which has led to multiple OOM conditions on this box
[17:23:06] <AlmightyOatmeal> http://pastebin.com/AqMvqpfj
[17:23:53] <AlmightyOatmeal> from what i can see, it is not behaving as configured
[17:25:04] <AlmightyOatmeal> oops, apparently the log line i grabbed wasn't the latest because it's configured to run with a 6GB cache
[17:25:29] <AlmightyOatmeal> http://pastebin.com/mjngQuGc <-- that's better
[17:25:40] <AvianFlu> yeah I'm not sure what's up with that
[17:25:45] <AvianFlu> might be worth reporting as a bug
[17:30:39] <AlmightyOatmeal> http://pastebin.com/J4bR3MTy <-- i would expect my disks to be thrashed more than drunk girls at a kegger but storage array IOPS are around 10-20% -- there is so much more power that could be used in terms of CPU and disk I/O but it's not being harnessed :(
[17:30:45] <AlmightyOatmeal> that makes for a very sad panda.
[17:30:52] <AlmightyOatmeal> almost makes sad panda want to cry.
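For reference, the WT cache limit under discussion is set like this in mongod.conf on 3.2. A sketch, with one caveat that may explain part of what's observed: this caps only the WiredTiger cache, not mongod's total resident memory, and the filesystem cache sits on top of both.

```yaml
# mongod.conf sketch: caps the WiredTiger cache only.
# mongod's total RSS (connections, in-flight operations, index builds)
# will exceed this figure; it is not a whole-process memory limit.
storage:
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 6
```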
[17:31:26] <AvianFlu> what kind of processing are you ultimately trying to do with this data?
[17:31:34] <AvianFlu> what's the pattern of putting it into mongo gonna look like?
[17:32:14] <AlmightyOatmeal> right now it's very very basic using $and to match 6 criteria
[17:32:21] <AlmightyOatmeal> in one collection
[17:36:12] <AlmightyOatmeal> i should also mention that i'm using find()
[17:39:07] <AvianFlu> index it, and try running the query in the mongo shell with .explain()
[17:39:11] <AvianFlu> see how bad it is
[17:39:45] <AlmightyOatmeal> AvianFlu: i have run .explain() before and i'm building indices but it's going to take at least 24 hours to complete
[17:40:00] <AlmightyOatmeal> because mongo is not using very much CPU nor disk I/O
[17:40:19] <AlmightyOatmeal> ergo my original problem of trying to find a solution
[17:40:25] <AvianFlu> yep
[17:40:34] <AvianFlu> it sucks, but I think you're basically at a crossroads here
[17:40:48] <AvianFlu> 1) think about how this might be turned into several collections for more efficient use with mongo
[17:40:56] <AvianFlu> 2) consider other data stores that don't have this fail you're seeing
[17:41:17] <AvianFlu> it's hard to give a rec for 2 without knowing much about these millions of docs though
[17:41:25] <AlmightyOatmeal> i don't understand why though... why is it not performing more parallel operations? why is it not taking advantage of more CPU power? why is it not taking advantage of a low-latency storage array capable of 10x I/O that it's currently under?
[17:41:49] <AvianFlu> because it's running a single query in javascript
[17:41:53] <AvianFlu> using a single threaded js vm
[17:42:16] <AlmightyOatmeal> that shouldn't be a factor considering that's a remote client connection
[17:42:36] <AvianFlu> mongo doesn't split up a single huge operation though
[17:42:39] <AvianFlu> that's what I'm trying to say
[17:42:44] <AlmightyOatmeal> ah
[17:42:48] <AvianFlu> one query, one big collection, one action
[17:43:00] <AvianFlu> one big collection, N large indexes
[17:43:09] <AvianFlu> the large indexes take time to build, and to search
[17:43:11] <AlmightyOatmeal> this database is already split up into 15 collections (give or take a few)
[17:43:16] <AvianFlu> within how mongo works, splitting this up would be better for you
[17:43:25] <AvianFlu> but, not everything is ideal for mongo
[17:44:12] <AlmightyOatmeal> dropping elasticsearch results into mongo is an absolute wet dream, and the efficiency of disk storage has been the best of the many nosql document stores i've tried; many people boast about how well mongo does with huge collections
[17:44:34] <AlmightyOatmeal> couchbase was absolutely ridiculous in terms of on-disk storage...
[17:45:42] <AlmightyOatmeal> and even if there is one query, one big collection, and one action, there should be much more CPU usage and significantly more disk I/O. it seems like it's tripping over its own feet
[17:45:57] <AlmightyOatmeal> would building indices run into lock contention for a collection?
[17:46:14] <AvianFlu> it'll certainly make your db slower while it's running
[17:46:26] <AvianFlu> lock contention with WT is per-doc though
[17:46:47] <AlmightyOatmeal> oh, interesting
[17:46:57] <AvianFlu> yeah that's one of the big advantages over older mongo
[17:47:05] <AvianFlu> along with the on-disk compression
[17:47:51] <AvianFlu> you should consider the aggregation framework
[17:48:00] <AvianFlu> it might help you with the kind of query work you're doing here
[17:49:27] <AlmightyOatmeal> how would that be beneficial? i've used the aggregation framework to iterate through arrays that are stored within a document
[17:51:31] <AvianFlu> I think it can parallelize more than a regular query can
[17:52:31] <AlmightyOatmeal> hmm, it's worth a shot anyway
[17:54:41] <AvianFlu> googling a little, it seems like you're gonna want to chunk up your collection no matter what you're trying to do here
[17:54:47] <AvianFlu> but the aggregation may be faster for complex things
[17:55:12] <AvianFlu> but, all the examples I see did their own splitting up of the collection, and then aggregating on different pieces separately
[17:55:20] <AvianFlu> letting the individual ops be concurrent since the queries can't
[17:56:45] <AlmightyOatmeal> if i keep splitting up the collections then i might as well not store json documents and set up a parser to store json data in a relational database
[17:56:50] <AlmightyOatmeal> at some point we start to defeat the purpose
[17:57:25] <AlmightyOatmeal> on the plus side, there are only a few collections that are ridiculously large
[17:57:49] <AlmightyOatmeal> essentially i create one database per company and each company has the same collections that represent respective ElasticSearch types
[17:58:10] <AlmightyOatmeal> i could break those down even further but then it starts to get more complicated trying to pull different types of data from multiple collections within a single query
[18:00:56] <AvianFlu> yeah
[18:01:08] <AvianFlu> the aggregation framework has $lookup in 3.2, which is limited left join support
[18:01:09] <AvianFlu> but yeah
[18:01:17] <AvianFlu> that's my data store motto
[18:01:20] <AvianFlu> use it till it falls over
[18:01:25] <AlmightyOatmeal> ha
[18:01:27] <AvianFlu> when it falls over, think about what else won't
[18:01:29] <AvianFlu> and then run like hell
[18:01:32] <AvianFlu> not to knock mongo
[18:01:45] <AvianFlu> but if you're hoping for one query to get magically parallelized, mongo doesn't have your back
[18:01:46] <AvianFlu> you know?
[18:02:24] <AlmightyOatmeal> oh yes. and in my humble opinion, mongo is very immature and has a long way to go before it can truly compete with the big dogs.
[18:02:34] <AlmightyOatmeal> however, it is still really neat and shows a lot of potential/promise
[18:02:46] <AvianFlu> I can help you think about ways to model what you've got, insofar as it works to keep in mongo
[18:02:58] <AvianFlu> but yeah, there's some "regular old school database" stuff that it just doesn't have
[18:03:31] <AlmightyOatmeal> oh i'm not referring to the relational aspect of it; that's comparing apples and oranges. i'm simply referring to the back-end
[18:04:01] <AlmightyOatmeal> i'm very much open to help/guidance/opinions from others. i'm a firm believer that there is no such thing as too much information :)
[18:05:36] <AlmightyOatmeal> my goal for a nosql database (or simply a json doc store) is to cache data from elasticsearch so my long-running complex queries don't take down our production environment; i slowly scroll through what i need and store it in mongo, where i can beat the pants off it without taking down production
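The chunk-and-aggregate pattern AvianFlu describes above, splitting the key space and aggregating the pieces separately since a single query won't parallelize, can be sketched in memory. The boundaries and the summing "aggregation" are hypothetical stand-ins:

```javascript
// Stand-in for one aggregation over { key: { $gte: lo, $lt: hi } }.
// In a real deployment each chunk would be a separate query, which could be
// fired concurrently (e.g. via Promise.all) so the ops overlap even though
// no single query does.
function aggregateChunk(docs, lo, hi) {
  return docs
    .filter(d => d.key >= lo && d.key < hi)
    .reduce((sum, d) => sum + d.value, 0);
}

// 100 hypothetical documents with a shardable key.
const docs = Array.from({ length: 100 }, (_, i) => ({ key: i, value: 1 }));
const boundaries = [[0, 25], [25, 50], [50, 75], [75, 100]];

// Run one "aggregation" per partition, then merge the partial results.
const partials = boundaries.map(([lo, hi]) => aggregateChunk(docs, lo, hi));
const total = partials.reduce((a, b) => a + b, 0);
```

The merge step is the catch: it only works for aggregations whose partial results combine cleanly (sums, counts, maxima), which is why this is a workaround rather than a general substitute for server-side parallelism.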
[18:05:58] <StephenLynx> AlmightyOatmeal, you probably haven't worked with node.
[18:06:10] <AlmightyOatmeal> StephenLynx: ?
[18:06:18] <StephenLynx> > i'm a firm believer that there is no such thing as too much information :)
[18:06:41] <AlmightyOatmeal> i'm missing something
[18:06:49] <AlmightyOatmeal> sorry :\
[18:07:26] <StephenLynx> also, mongo is mature by now.
[18:07:32] <AlmightyOatmeal> brb, grabbing lunch
[18:07:54] <StephenLynx> you can just look at the big businesses using it, like craigslist.
[18:08:32] <StephenLynx> if I were to criticize anything, it would be features exclusive to the enterprise edition.
[18:16:45] <crazyphil> ok, it would appear that no matter what I do, my 2.6 mongod does not like --setParameter authenticationMechanisms, suggestions?
[18:17:57] <StephenLynx> upgrade to 3.2
[18:18:30] <crazyphil> I've considered it, but I just have so many things running against this, I'm worried something important will break
[18:20:35] <AlmightyOatmeal> StephenLynx: i find trying to run a mongo cluster quite cumbersome when you have half a dozen services you need to run, configure, maintain, etc.; it would be better to reduce the number of separate services and build the core functionality in, similar to MySQL FS or ElasticSearch clusters
[18:21:06] <StephenLynx> wat
[18:23:52] <AlmightyOatmeal> shard servers, replica servers, routers, config servers, and who knows what else i'm missing -- that is cumbersome when routing and configs could be built into a "node", or shard/replica server, which would not only simplify the entire infrastructure but allow for better scalability and redundancy, as new "masters" can be elected and there is no single point of failure
[18:23:59] <AlmightyOatmeal> that's just my humble opinion of course :)
[18:24:09] <StephenLynx> on what?
[18:24:27] <AlmightyOatmeal> StephenLynx: what do you mean by "on what"?
[18:24:41] <StephenLynx> your opinion on what?
[18:26:12] <AlmightyOatmeal> StephenLynx: oh, the aforementioned statement about the various components of a mongo cluster, where i think the clustering models of ElasticSearch and things like Isilon's OneFS, where small clusters are more inclusive than individual components, work better
[18:26:22] <StephenLynx> :v
[18:26:40] <StephenLynx> are you sure you were having that convo with me?
[18:26:47] <StephenLynx> I don't remember discussing any of that.
[18:27:22] <AlmightyOatmeal> StephenLynx: you mentioned mongo is mature by now and i was posing another point that disagrees with your opinion but my point is also based on my opinion
[18:27:29] <AlmightyOatmeal> so it was trying to open a dialogue for discussion :)
[18:27:49] <StephenLynx> you went on a roundabout about elastic
[18:27:57] <StephenLynx> I really don't see the relation.
[18:28:06] <AlmightyOatmeal> i explained the relation, twice
[18:28:20] <AlmightyOatmeal> sorry for the confusion.
[18:28:33] <kexmex> hey guys, why is there a backtrace after this?
[18:28:36] <StephenLynx> have you tried a mysql cluster?
[18:28:37] <kexmex> 2016-09-15T18:04:52.865+0000 I - [conn13] Assertion: 10334:BSONObj size: 17149427 (0x105ADF3) is invalid. Size must be between 0 and 16793600(16MB) First element: findAndModify: "subscriptions"
[18:28:42] <cheeser> "mature" is different than "there are other ways to do something"
[18:29:34] <StephenLynx> yeah, it's performant, feature rich and reliable. I saw a bunch of words completely unrelated to what maturity means.
[18:29:49] <crazyphil> I don't understand how this setParameter doesn't work - I'm at 2.6, all the docs say 2.6 supports MONGODB-CR for an auth mechanism
[18:30:27] <StephenLynx> even more now with wired tiger.
[18:30:46] <AlmightyOatmeal> cheeser: yes and no. my opinion isn't quite that black and white; i do see how that would fit into simply another way of doing things, but in my opinion maturity means being well put together for speed and ease of setup, maintenance, scalability, etc., which does not fit with the numerous different services needed to set up a Rube Goldberg configuration :)
[18:31:12] <StephenLynx> complex setups are complex to set up.
[18:31:23] <AlmightyOatmeal> StephenLynx: my point exactly.
[18:31:31] <StephenLynx> and how is that mongo's fault?
[18:31:42] <StephenLynx> just set a local db then.
[18:31:48] <StephenLynx> sure it won't go too far but
[18:31:54] <AlmightyOatmeal> StephenLynx: i've reiterated my points on that
[18:32:06] <StephenLynx> tl,dr
[18:32:14] <AlmightyOatmeal> how is that my fault? :)
[18:32:15] <AlmightyOatmeal> hehe
[18:32:24] <cheeser> because you're the one trying to be understood :)
[18:32:29] <StephenLynx> that.
[18:32:47] <AlmightyOatmeal> cheeser: touche
[18:33:14] <StephenLynx> plus you have to keep in mind the features mongo offers.
[18:33:43] <AvianFlu> kexmex, there's a 16MB doc size limit
[18:33:50] <AvianFlu> you seem to have exceeded it there
[18:33:57] <StephenLynx> it might be more laborious to set mongo complex setups, but could you have a tool that offers the same features on these setups and also are simple to setup?
[18:34:00] <kexmex> yea so why is there a trace? :)
[18:34:22] <AvianFlu> dunno, so you can figure out which thing failed?
[18:34:45] <AvianFlu> unless I'm missing something that seems like proper error behavior
[18:34:48] <AvianFlu> fail, print backtrace
[18:35:14] <StephenLynx> for example, can you add servers to a mysql cluster without stopping it at all?
[18:36:45] <AlmightyOatmeal> StephenLynx: yes.
[18:41:56] <kexmex> AvianFlu : but like, if the Doc is > 16MB, that's the error, why would i care about a backtrace? :)
[18:42:25] <AvianFlu> heh, fair point :)
[18:42:29] <AvianFlu> still, default error behavior
[18:42:33] <AvianFlu> is the "why"
[18:44:54] <kexmex> AvianFlu : just the first time i am seeing a backtrace, that's why :)
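The assertion kexmex hit is the server-side BSON document size limit (the log shows the hard cap of 16793600 bytes, 16MB plus a small allowance). A hedged client-side sketch of failing fast before the server does; JSON length only approximates BSON size, so treat the check as a heuristic:

```javascript
// Heuristic pre-check against the server-side assertion
// "BSONObj size: ... is invalid. Size must be between 0 and 16793600(16MB)".
// JSON serialization is not BSON, so sizes are approximate.
const MAX_BSON_BYTES = 16793600; // cap from the log line above

function approxTooLarge(doc) {
  return Buffer.byteLength(JSON.stringify(doc), "utf8") >= MAX_BSON_BYTES;
}

const small = { subscriptions: ["a", "b"] };
const huge = { blob: "x".repeat(17 * 1024 * 1024) }; // ~17MB payload
```

In kexmex's case the findAndModify was growing a document past the cap; the usual fixes are moving the growing array (here, "subscriptions") into its own collection or capping its length.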
[18:54:54] <replicaSetPrimar> Hello, I have a question for someone well versed with mongo. Any takers?
[18:57:04] <AlmightyOatmeal> replicaSetPrimar: it is advised to ask a question rather than asking to ask; not everyone is online all the time, so someone may see your question at a later date/time and respond
[19:01:36] <replicaSetPrimar> Thanks. New to the irc thing. I'm experiencing an error "Connection to host <remote_host> destroyed" and it happens pretty randomly throughout the day. My node application connects to a replica set with three nodes and has readPreference set to primaryPreferred. Another possibly helpful piece of information is that the stack trace for the error originates in mongodb/lib/cursor.js (node mongo driver). I'm wondering if anyone has encountered this before, or if there are thoughts on why the connection may be getting destroyed. Thanks in advance for the help!
[19:12:17] <AlmightyOatmeal> replicaSetPrimar: have you checked the mongod logs to see the dropped connection and when it was established? it should give you some additional information. also, is that connection active for the entire duration or could it be an idle timeout?
[19:26:55] <n1colas> Hello
[19:30:50] <AndrewYoung> That error seems to mean that the node socket object instance was destroyed (as opposed to the underlying OS socket, although I'm sure it's also gone by then.)
[19:30:55] <replicaSetPrimar> AlmightyOatmeal: I have checked the mongod logs, I looked for anything relating to connection destroyed, and was unable to find results via grep. The application uses a poolsize of 10 so as connections are opened and become unused I would assume they become idle
[19:31:15] <AndrewYoung> Yeah, "destroyed" is talking about the node side, not the server side.
[19:31:28] <AndrewYoung> The server will just show that the connection was closed if it shows anything.
[19:32:11] <replicaSetPrimar> so AndrewYoung you think it could be related to timeouts?
[19:32:42] <AndrewYoung> Possibly. All I know for sure is that the node objects have had destroy() called on them.
[19:33:09] <AndrewYoung> https://nodejs.org/api/net.html#net_socket_destroy_exception
[19:34:34] <replicaSetPrimar> Ahh ok I see. Thank you for the insight
[19:37:08] <AndrewYoung> Are you using the official NodeJS driver?
[20:19:51] <AlmightyOatmeal> is it possible to specify a sort order in a findOne() query?
[20:20:19] <StephenLynx> I don't think so.
[20:20:36] <StephenLynx> because that would be a find, a sort and a limit.
[20:20:50] <StephenLynx> but I'm not 100% sure.
[20:20:58] <AlmightyOatmeal> is that handled server-side or client-side?
[20:21:08] <AvianFlu> server side, generally
[20:25:37] <AlmightyOatmeal> sweet :)
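StephenLynx's answer, that "findOne with a sort" is really a find plus a sort plus a limit of one, can be sketched in memory; as AvianFlu notes, the sort happens server-side before the single document comes back:

```javascript
// In-memory sketch of find(filter).sort({field: direction}).limit(1):
// filter, sort (1 = ascending, -1 = descending), then take the first hit.
function findOneSorted(docs, filter, sortField, direction) {
  const hits = docs.filter(filter);
  hits.sort((a, b) => (a[sortField] - b[sortField]) * direction);
  return hits[0]; // limit(1)
}

const events = [
  { _id: 1, ts: 30 },
  { _id: 2, ts: 10 },
  { _id: 3, ts: 20 },
];

// Newest event, akin to db.events.find({}).sort({ts: -1}).limit(1)
const newest = findOneSorted(events, () => true, "ts", -1);
```

With an index on the sort field, the server can walk the index in order and stop after one document, so this pattern is cheap despite the explicit sort.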
[22:29:01] <herriojr> quick question, I had to migrate an old mongo instance from 2.4 to 2.6 and I did the auth schema upgrade which moves the system.users to admin instead of being in each db — how do I remove the old system.users collection in each db since they are no longer used?