#mongodb logs for Thursday the 14th of January, 2016

[00:21:21] <gimpy2938> How do I create a full god-mode admin user on 3.2?
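A minimal sketch of one way to do this on 3.2, with hypothetical credentials; the built-in "root" role is the closest thing to full god mode:

    db.getSiblingDB("admin").createUser({ user: "admin", pwd: "secret", roles: [ "root" ] })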
[01:39:53] <zivester> is there any way to install the mongo shell on an Ubuntu that is not an LTS, like 15.10? Can't seem to find an easy way...
[01:57:26] <bros> https://gist.github.com/brandonros/79354208e7d4f33d399e Am I using indices correctly?
[01:59:41] <StephenLynx> >mongoose
[01:59:42] <StephenLynx> go figure.
[02:30:17] <Waheedi> basically one mongod went down for some reason (I know why) on a replica set with 3 secondaries; the primary detected it was unhealthy/out of reach, then I removed the unhealthy node
[02:31:12] <Waheedi> after that I re-added it once the node came back up, but the stateStr does not make sense
[02:31:18] <Waheedi> "REMOVED"
[02:31:22] <Waheedi> lol
[02:33:15] <Waheedi> my question is: when does the REMOVED status change to anything else?
[03:10:31] <Boomtime> @Waheedi: are you checking the status on the removed node itself?
[03:10:46] <Boomtime> because, if so, that will never change - the node is removed right?
[03:11:24] <Boomtime> once removed, the node will generally not show up anymore at all in the status of the other members
[03:14:07] <Waheedi> Boomtime: the node is not removed, it's re-added, and I'm checking from the primary for sure!
[03:25:22] <Boomtime> @Waheedi: how did you re-add it? just rs.add()?
[03:25:35] <Boomtime> and also, how was it removed originally; just rs.remove()?
[03:30:45] <Waheedi> yes
[03:30:49] <Waheedi> Boomtime: :)
[03:31:22] <Waheedi> I think i know whats going on, thank you Boomtime
[03:35:48] <flaf> Hi @all.
[03:37:39] <flaf> I use mongo 2.4 and with rs.conf() I can see the priority of each host only when the value is != 1 (which is the default). Is there a way to print the priority of each host even if it is equal to 1?
[03:38:36] <flaf> To be clear, I would like a command that explicitly shows the priority of each node.
[03:55:51] <Waheedi> and FYI Boomtime: even if you check from the node that has the REMOVED status, the status will eventually change
[03:58:30] <Waheedi> flaf: you can loop through the conf on each node and get the priority, but if it's not showing, that means it's 1 as you say, so always assume that
[04:00:36] <flaf> Waheedi: ok, but for instance “rs.conf()["members"][0]["priority"]” displays nothing.
[04:00:50] <flaf> (because the priority is 1 I guess)
[04:00:55] <Waheedi> that's the default
[04:00:59] <flaf> Is that normal?
[04:01:13] <Waheedi> yeah, because it's written somewhere in the code
[04:01:20] <flaf> ok I see.
[04:02:04] <Waheedi> change it to 1000
[04:02:05] <Waheedi> lol
[04:02:11] <flaf> :)
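A small sketch of the loop Waheedi describes, assuming a 2.4 shell where rs.conf() omits the default priority of 1:

    rs.conf().members.forEach(function (m) {
        // 2.4 leaves the field out when it is the default
        var p = (m.priority === undefined) ? 1 : m.priority;
        print(m.host + " priority: " + p);
    });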
[05:06:44] <Lonesoldier728> hey, so I'm thinking of making a table for caching search categories
[05:06:58] <Lonesoldier728> does it make sense, or should I just query the regular db of all items?
[06:45:50] <Logicgate> hey guys
[06:46:07] <Logicgate> Is there a way to update a value in a nested array for a specific object?
[06:46:37] <m3t4lukas> yep, there is. One moment, let me find the docs
[06:46:44] <Logicgate> Example: {subscriptions: [{objProp1: 'hello'}, {$objProp2}]}
[06:47:11] <Logicgate> Example: {subscriptions: [{objProp1: 'hello', otherProp: 'world'}, {$objProp2: 'foo'}]}
[06:47:26] <m3t4lukas> yeah, all covered. Here you go: https://docs.mongodb.org/manual/reference/operator/update/positional/
[06:47:33] <Logicgate> thanks
[06:47:47] <m3t4lukas> no problem, have fun ;)
[06:48:29] <Logicgate> m3t4lukas, how can I do the query selection though?
[06:48:40] <Logicgate> I got it
[06:48:42] <Logicgate> :)
[06:48:46] <Logicgate> $elemMatch
[06:52:55] <m3t4lukas> I don't think you need the $elemMatch operator for that: https://docs.mongodb.org/manual/reference/operator/update/positional/#update-documents-in-an-array
[06:53:10] <m3t4lukas> a simple update with dot notation is enough
[06:53:39] <Logicgate> okay cool
[06:55:45] <m3t4lukas> Logicgate: at least with the examples you gave. If you want to match multiple criteria within the documents in the array you want to update, then you'd need the $elemMatch operator :)
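A sketch of both cases, with hypothetical collection and field names taken from the examples above:

    // one criterion: dot notation in the query, positional $ in the update
    db.users.update(
        { "subscriptions.objProp1": "hello" },
        { $set: { "subscriptions.$.otherProp": "world" } }
    );

    // several criteria on the same array element: $elemMatch in the query
    db.users.update(
        { subscriptions: { $elemMatch: { objProp1: "hello", otherProp: "world" } } },
        { $set: { "subscriptions.$.otherProp": "earth" } }
    );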
[06:56:15] <Logicgate> m3t4lukas: http://puu.sh/mv7kY/faf4baa07c.png
[06:56:21] <Logicgate> does that snippet make sense?
[07:02:20] <m3t4lukas> Logicgate: I think so ;) I never work with lambdas so I don't know whether the lambda constructor of array actually produces the right thing :P
[07:02:33] <Logicgate> I'll try it and see!
[07:34:16] <Logicgate> hey m3t4lukas, I'm having problems with $pull now
[07:34:23] <Logicgate> can I do a subselection?
[07:34:48] <Logicgate> "7: cannot use the part (subscriptions of subscriptions.stripe_subscription_id) to traverse the element"
[07:35:59] <m3t4lukas> you can only use $pull on arrays
[07:36:10] <Logicgate> ah I see my mistake
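For reference, the usual shape: $pull takes the array field as the key and a condition document as the value; the names here are guessed from the error message above:

    db.users.update(
        { _id: someUserId },
        { $pull: { subscriptions: { stripe_subscription_id: "sub_123" } } }
    );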
[09:17:26] <mapc> morning; I am in a situation where I need to process a small subset of documents from a collection; small in this case still means millions
[09:18:25] <mapc> the subset is defined by a query; now I'd like to distribute the work among multiple workers, so I need some way to split the set of documents into smaller parts
[09:19:13] <mapc> I would like to avoid using a controller if possible, because that's hard to scale. So optimally I'd have some way to just query for each batch
[09:19:47] <mapc> now my idea was to hash a field (probably the id field), mod $number_of_workers, and then just select those
[09:20:00] <mapc> This could be done as a map reduce query in JS
[09:20:36] <mapc> now, I know that mongodb already does exactly what I want for sharding: it takes a hash index over the shard key field and just uses the hash values from the index
[09:21:01] <mapc> now my question is: Is there any way to access the hash value from the hash index in a query?
[09:22:14] <mapc> I mean, if I could use that, I bet it would be a lot more efficient than coding and running a hash function in JS, specifically because I'd need to recompute the hash on every query
[09:24:05] <mapc> I mean, if that doesn't work, I could also compute the hash once, store the hash in an _id_hash field and index that, but that's also not as beautiful as using the hash directly :)
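A sketch of that fallback, with hypothetical names; hex_md5() is a mongo shell built-in, and 4 stands in for the number of workers:

    db.docs.find({ /* the subset query */ }).forEach(function (doc) {
        var bucket = parseInt(hex_md5(doc._id.str).substr(0, 8), 16) % 4;
        db.docs.update({ _id: doc._id }, { $set: { _id_hash: bucket } });
    });
    db.docs.createIndex({ _id_hash: 1 });
    // worker i then selects its share with:
    // db.docs.find({ _id_hash: i /* plus the subset query */ })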
[11:03:22] <_shaps_> Hi, I'm getting an odd error when trying to connect to a rs with pymongo. i'm using 2 different versions
[11:03:25] <_shaps_> http://ur1.ca/oewvi
[11:05:11] <_shaps_> with v2.7.2 that works ok; with 3.2 it comes back with "port must be int"
[11:06:00] <_shaps_> Has the connection handling changed? According to the docs I should now pass each node
[11:06:06] <_shaps_> and MongoClient will figure it out.
[11:33:57] <pchoo> Hi all, quick question: say I have a collection with the following document structure {userId: ..., type: ..., isDefault: true, name: ...} - I want a unique index on userId, type and name. I also want at most one document with isDefault: true to be possible. Is there a way of preventing an insert via an index if there is already a document with {isDefault: true}, or is this a case of search first, insert later?
[12:20:12] <cheeser> pchoo: you'd have to enforce that in your app i'm afraid
[12:20:30] <pchoo> cheeser: thanks for the confirmation, that's what I thought :)
[12:20:58] <StephenLynx> or you can try and insert anyway and catch the error.
[12:21:19] <StephenLynx> but that would require an unique index on it.
[12:21:40] <cheeser> StephenLynx: a unique index would fail on all false values.
[12:21:47] <StephenLynx> oh
[12:21:50] <StephenLynx> its a boolean
[12:22:08] <StephenLynx> welp
[12:22:11] <StephenLynx> nvm then
[12:22:18] <cheeser> now, 3.2 provides partial indexes, so you might be able to create that unique index on only documents with true in that field.
[12:22:28] <cheeser> it would be weird but would probably work.
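A sketch of that index, assuming 3.2 and one default per user; the collection name is hypothetical:

    db.prefs.createIndex(
        { userId: 1, isDefault: 1 },
        { unique: true, partialFilterExpression: { isDefault: true } }
    );
    // a second { isDefault: true } doc for the same userId now fails with a
    // duplicate key error; docs where isDefault is false or absent are not
    // in the index at all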
[12:24:12] <pchoo> Hmm, we're currently on 3.0; it's a Meteor-based project, and I haven't looked into whether they support 3.2 yet. I'll have a look at partial indexes then, but I suspect I'll just enforce it in code :)
[12:24:29] <cheeser> meteor should work fine on 3.2.
[12:24:47] <cheeser> you just might not be able to access certain new features on the server.
[12:25:13] <pchoo> Oh cool, thanks
[12:25:14] <StephenLynx> >meteor
[12:25:21] <StephenLynx> ack
[12:25:40] <StephenLynx> you might as well suggest a pill of cyanide to him.
[12:26:59] <pchoo> StephenLynx: lol
[12:27:38] <pchoo> I get the feeling you love Meteor
[12:27:51] <StephenLynx> any web framework is cancer.
[12:28:04] <StephenLynx> you already got high-level enough tools.
[12:28:27] <StephenLynx> all you get from these is performance compromises, loss of flexibility and increased complexity.
[12:28:37] <pchoo> Do you use any packages ever, or just write everything yourself?
[12:28:54] <StephenLynx> I use libraries that implement things that I don't care how they are done.
[12:28:58] <StephenLynx> such as:
[12:29:04] <StephenLynx> sending e-mails
[12:29:07] <StephenLynx> manipulating DOM
[12:29:15] <StephenLynx> reverse-proxying
[12:29:21] <StephenLynx> parsing ips
[12:29:24] <StephenLynx> database drivers
[12:29:38] <StephenLynx> all I care about these tasks is that they get done correctly and efficiently.
[12:29:51] <StephenLynx> they do ONE thing and they do it right.
[12:30:20] <StephenLynx> a framework abstracts high-level tasks into an even higher-level task and makes it overly streamlined.
[12:30:50] <StephenLynx> like if your steering wheel were replaced by two buttons: left and right
[12:31:15] <StephenLynx> and your car could only move in a straight line or 90° to either side.
[12:32:03] <StephenLynx> did you ever look at the meteor source code? it's impossible to make sense of it.
[12:32:25] <StephenLynx> I think express is bad, but meteor takes it to a whole new level of overly complex and bloated web framework.
[12:33:01] <StephenLynx> because not only does it abstract the high-level tools for handling http, it also couples the database to the application.
[12:33:08] <pchoo> Yeah, I've dug about in the source a fair bit, it was ok. It provides what the project needs, and has allowed our team to all work with consistent principles.
[12:33:24] <pchoo> StephenLynx: it's a tool for a job, not the be-all-and-end-all of JS
[12:33:29] <StephenLynx> a job?
[12:33:34] <StephenLynx> and what job is that?
[12:33:56] <StephenLynx> besides >consistent principles
[12:34:02] <StephenLynx> is just subjective buzzwording.
[12:34:37] <pchoo> fair enough :)
[12:35:05] <StephenLynx> https://github.com/meteor/meteor/tree/devel/packages
[12:35:07] <StephenLynx> look at this
[12:35:22] <StephenLynx> how is this remotely reasonable?
[12:35:55] <StephenLynx> nearly A THOUSAND open issues.
[12:36:08] <StephenLynx> node itself has HALF of that.
[12:36:13] <pchoo> how is it not? It's all the packages they use internally. It uses their framework to provide what you need.
[12:36:26] <StephenLynx> and 3k fewer commits.
[12:36:49] <StephenLynx> it's the very definition of bloat.
[12:37:00] <pchoo> I'm not writing it, it's just what was chosen for the project I'm working on
[12:37:12] <StephenLynx> I don't care about that. my point is: it's bad.
[12:37:17] <pchoo> ok
[12:37:25] <pchoo> my point is "thanks"
[12:37:51] <StephenLynx> >Meteor is an ultra-simple environment for building modern web applications.
[12:37:54] <StephenLynx> my sides
[12:38:30] <StephenLynx> node is the new PHP.
[12:38:52] <StephenLynx> it's already got crap flavor-of-the-month frameworks
[12:39:08] <StephenLynx> at least the base technology isn't awful.
[12:39:28] <StephenLynx> I remember less than 2 years ago everyone was sucking express' dick like no tomorrow.
[13:59:44] <mehakkahlon> hi, I have a question regarding mongodb sharding.
[14:00:16] <mehakkahlon> I sharded my collection based on a compound index of 2 fields: {x:1, y:1}
[14:00:41] <mehakkahlon> there are also more indexes on the collection.
[14:00:54] <mehakkahlon> So for a query, does mongo use only 1 index?
[14:01:20] <mehakkahlon> i.e. does it pick my shard key as the index, so no other index will be used in the query?
[14:29:29] <deathanchor> hmm.. you can use compound indexes for sharding?
[14:30:32] <deathanchor> mehakkahlon: for v2.x mongo I know that the shard key is required for updates, but it could use a different index on the actual shard depending on what the query planner says.
[14:33:24] <mehakkahlon> mine is mongo 3.0.8
[14:33:31] <dvayanu> hello, I have an interesting error in my logs: Caused by: com.mongodb.DBPortPool$SemaphoresOut: Concurrent requests for database connection have exceeded limit of 25. How can I avoid this? is it a driver setting, and where does the 25 come from? I have driver 2.10.1 against a 2.4.9 db
[14:40:38] <metame> zivester: (asked how to install MongoDB on Ubuntu 15.10) - I followed these steps to get mongo working on my 15.10 machine: http://www.liberiangeek.net/2015/06/how-to-install-mongodb-in-ubuntu-15-04-easily/
[14:45:38] <metame> mehakkahlon: just run an explain() operation on your query to see what the query planner does, basically replace .find with .explain in your query
[14:45:54] <metame> if it is a fully covered query, it should show that no docs were scanned
[14:46:03] <metame> make sure to project out your _id
[14:46:49] <zivester> metame, thanks, but I don't think there are packages for my lsb-release, aka wily, in that repo
[14:46:57] <metame> and that will also tell you what indices were used to fulfill your query
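A sketch of the check, with hypothetical names; 3.0+ explain syntax:

    db.coll.find({ x: 1, y: 2 }, { _id: 0, x: 1, y: 1 }).explain("executionStats")
    // a fully covered query reports totalDocsExamined: 0, and the
    // winningPlan's IXSCAN stage names the index that was used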
[14:47:07] <metame> zivester: hmmm... maybe that was when I had 15.04
[14:47:48] <metame> zivester: I imagine this is on your dev box, not a prod env?
[14:48:46] <StephenLynx> the easiest route is to install a centOS VM
[14:48:55] <StephenLynx> RHEL packages are by far the most supported
[14:50:03] <zivester> need to check today, but I got it installed with the debian wheezy packages.. those seem to have worked: MongoDB shell version 3.2.1
[14:50:09] <zivester> just need to make sure everything is compatible
[14:50:26] <metame> zivester: my mind is blank on how I installed it. I haven't yet upgraded to 3.2, so I'll probably figure it out then
[14:51:43] <zivester> i get some unrelated system warnings when connecting locally, but nothing that matters I don't think
[14:52:23] <metame> zivester: cool. also found this that might help if you do run into any issues: http://theleancoder.net/index.php/2015/11/05/install-mean-stack-on-ubuntu-15-10/
[14:52:23] <zivester> does mongo 3 use "mongod" instead of "mongodb" as the service name now?
[14:53:01] <metame> zivester: mongod is the server service
[14:53:33] <metame> zivester: think it's been that way since at least 2.6 (that's where I started anyway)
[14:54:27] <zivester> nah, my 2.6 install only has mongodb as a service, at least as one I can call from `service mongodb`
[14:56:28] <metame> zivester: interesting. may be that I just have my bash path setup differently
[14:57:54] <zivester> yah.. or the debian wheezy packages do it differently
[14:58:41] <zivester> i wonder when they'll have mongo 3 support for 16.04... hopefully soon
[15:03:17] <m3t4lukas> zivester: if you're newly creating software you should really work with mongodb 3.2
[15:03:35] <m3t4lukas> or 3.3, which is released as far as I know
[15:04:37] <Derick> 3.3 is a development release series
[15:04:44] <Derick> so no, don't use that
[15:04:53] <Derick> even is production, odd is dev
[15:05:00] <Derick> like the linux kernel
[15:09:15] <metame> Derick: so are you saying the debian package is best for Ubuntu 15.x over the Ubuntu 14.x packages?
[15:10:03] <StephenLynx> I don't think that's wrong. I believe debian 8 has more in common with ubuntu 15 than ubuntu 14 does.
[15:10:26] <StephenLynx> I think ubuntu 14 is behind even centOS 7.2 at this point
[15:10:41] <Derick> metame: sorry, I meant that *our* apt packages are better than using the packages coming with Ubuntu or Debian itself
[15:10:47] <metame> StephenLynx: well right now they have a pkg for deb 7
[15:11:23] <StephenLynx> anyway, it would still be easier to run a VM with centOS.
[15:11:41] <metame> Derick: ok cool. Just seeing how much I needed to do to get it to work (when i upgrade to 3.2). I think last time I did just use the wheezy pkg
[15:11:48] <StephenLynx> and if you've got ubuntu 15 on a server, I've got bad news for you.
[15:11:55] <Derick> yes, that's what I run too
[15:12:24] <metame> StephenLynx: haha ya, just on my dev box. So I don't want to run a vm just to run a local mongod
[15:12:37] <metame> Derick: thx
[15:12:44] <Derick> you can also just download the tarball...
[15:13:12] <StephenLynx> how much RAM you got?
[15:13:32] <StephenLynx> you could run a dev mongo deploy with 256mb
[15:15:07] <metame> StephenLynx: ya my setup is working fine for now on 3.0, was just trying to help zivester and thinking about upgrading to 3.2 when I get around to it (unfortunately currently don't get to work with Mongo at work)
[15:18:59] <Ryan_> If I'm sharding and using multiple mongos instances (one per api server), do I configure all of them exactly the same (i.e. connect and set the same shard key)? and can they share the same config mongod instances?
[15:23:49] <metame> Ryan_: the docs on sharding are pretty good imho https://docs.mongodb.org/manual/core/sharding-introduction/
[15:24:20] <Ryan_> indeed, I just don't see anything on setup for multiple mongos instances, besides high level diagrams
[15:25:50] <metame> Ryan_: here's an example shell script to set up sharding that may answer your q - https://github.com/metame/mongoCertification/blob/master/notes/init_sharded_env.sh
[15:26:39] <metame> Ryan_: obviously that script is setting it all up on a single server for demonstration purposes
[15:26:40] <Ryan_> metame: useful to reference, but he only has one mongos instance (line 80)
[15:30:56] <metame> Ryan_: based on the diagram in the docs, it seems you'd just run the same mongos command on each server you want it to run on
[15:31:00] <StephenLynx> is there any way to fetch the highest _id I got on a collection?
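For StephenLynx's question, a one-liner sketch: sort descending on the _id index and take one document:

    db.coll.find().sort({ _id: -1 }).limit(1)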
[15:31:39] <metame> Ryan_: since they're all connected to the same rs's
[15:31:47] <metame> ... i.e. mongod's
[15:32:19] <Ryan_> metame: ok, probably so, thanks! this script is actually pretty useful btw. How exactly, on lines 61-68, are the config commands being piped to the mongo connection?
[15:32:57] <Ryan_> " << 'EOF'" is weird to me
[15:34:19] <metame> Ryan_: hmm... not sure.
[15:36:06] <Pinkamena_D> on mongo 2.4, how can I grant a user permission to drop a database? I feel like I have tried every available permission but I still get unauthorized.
[15:36:22] <metame> Ryan_: had to look up some bash docs on bitwise operators
[15:37:38] <metame> << bitwise left shift (multiplies by 2 for each shift position)
[15:40:21] <metame> Ryan_: ahh this explains EOF: http://forums.devshed.com/programming-languages-139/eof-328074.html
[15:42:23] <Ryan_> wow, thanks! For some reason I thought that you actually had to be inside of the mongo shell, but i guess this opens it first
[15:43:55] <metame> Ryan_: ya pretty nice to show how you could script your config
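For reference: the construct is a bash heredoc, not a bitwise shift; everything up to the closing EOF marker is fed to the mongo shell's stdin, and quoting 'EOF' stops bash from expanding variables inside the block. Port and commands here are illustrative:

    mongo --port 27019 <<'EOF'
    sh.addShard("localhost:27018")
    sh.enableSharding("test")
    EOF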
[17:21:44] <Voyage> If I have a huge JSON doc for each id, how do I search for a specific JSON property/value?
[17:22:07] <StephenLynx> dot notation
[17:22:18] <StephenLynx> 'a.x' : thing
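A sketch with hypothetical names: dot notation reaches into embedded documents in a find filter:

    db.docs.find({ "a.x": "thing" })           // embedded document field
    db.docs.find({ "a.b.c": { $gt: 5 } })      // works at any depth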
[17:33:58] <mapc> hi, is there any way I can embed multiple queries in one query?
[17:34:44] <mapc> the idea is that I have to run a lot of queries of the form {propA = x, propB = y}, {propA = x2, propB = y2}, ...
[17:34:49] <StephenLynx> depends on the query.
[17:34:56] <StephenLynx> if they are write queries on the same collection
[17:35:02] <StephenLynx> you can use bulkwrite
[17:35:18] <mapc> so I'd essentially like to combine all those and get the results of each subquery in a read
[17:35:29] <StephenLynx> take a look at aggregation
[17:35:36] <StephenLynx> it might do what you need
[17:35:38] <mapc> StephenLynx: hi, thanks for the response :). I am already using bulk writes, but these are reads
[17:36:41] <mapc> to clarify: I know about bulk writes; are there bulk reads?
[17:37:18] <StephenLynx> have you looked into aggregation?
[17:37:30] <mapc> doing that right now :)
[17:43:29] <dlopes> hello, can someone help me with an upgrade issue?
[17:49:44] <mapc> StephenLynx: so, is it correct then, that I could express my query as a series of match stages?
[17:52:01] <StephenLynx> probably.
[17:52:05] <StephenLynx> don't know what your query is.
[17:54:44] <mapc> argh, no. multiple matches would apply the first match to the entire collection, then the second match to the result of the first, and so on
[17:55:36] <mapc> but that's incorrect for my case. I need multiple queries, optimally run in parallel, then dumped into an array and sent back to me, optimally as a cursor
[17:56:19] <mapc> each of the queries could be run separately and entirely independently. I want to combine them so they can run in parallel and save some latency
[18:01:05] <mapc> it could also be written with a series of {$or: [{propA: x1, propB: y1}, {propA: x2, propB: y2}, ...]}
[18:01:11] <mapc> but that wouldn't return the result of each subquery as its own array
[18:02:32] <StephenLynx> no but you could group on aggregate.
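A sketch of that shape, using mapc's propA/propB placeholders: one $or $match, then a $group that rebuilds one array per subquery:

    db.docs.aggregate([
        { $match: { $or: [ { propA: "x1", propB: "y1" },
                           { propA: "x2", propB: "y2" } ] } },
        { $group: { _id:  { propA: "$propA", propB: "$propB" },
                    docs: { $push: "$$ROOT" } } }
    ])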
[18:03:26] <bros> https://gist.github.com/brandonros/79354208e7d4f33d399e Am I using indices correctly?
[18:05:24] <mapc> StephenLynx: well, then I would have a very long $match and a very long $group listing each of the subresults?
[18:05:40] <StephenLynx> v:
[18:05:49] <StephenLynx> I dunno. maybe?
[18:06:58] <StephenLynx> or you could try performing one query at a time.
[18:07:27] <StephenLynx> and take in the latency as a trade-off for maintainability
[18:07:37] <MacWinner> just an fyi for anyone using node + mongo: if you set a socketTimeout with the node driver and all the connections in the pool are not used within the socketTimeout period, your connections will be disconnected and reconnected all the time..
[18:07:41] <StephenLynx> and not having one huge query taking longer
[18:07:53] <mapc> StephenLynx: I am optimizing for performance right now. batch processing around 1G documents ^^
[18:08:25] <StephenLynx> fewer operations does not necessarily equal better performance.
[18:08:36] <StephenLynx> what if you hog your db with a single query by doing that?
[18:11:23] <mapc> StephenLynx: the queries should involve around 10k documents max, and I am working with a cursor batch size of 6k
[18:11:31] <mapc> but you are right, that would need testing
[18:12:35] <metame> MacWinner: thx for sharing
[18:16:11] <MacWinner> metame, I had gigs of logs per day of just connect/disconnect mongod.log messages, with only 4 servers connecting in my cluster. I actually didn't notice it until I started investigating a very rare "Primary server unavailable" error that was happening even though our mongo cluster was healthy. turns out the connection pool was getting torn down at the same time a write was happening
[18:16:47] <StephenLynx> why did you have to set a socket timeout, though?
[18:17:29] <metame> bros: your indices look fine. are you sure you need both of your first 2 indices though?
[18:17:40] <MacWinner> StephenLynx, I think I just had it from a copy-and-paste of some other config I found when I first started
[18:17:46] <StephenLynx> :v
[18:18:04] <bros> metame: Almost. What can I do to kind of "keep an eye" on my production server for a day and see what is taking the longest?
[18:19:20] <metame> MacWinner: good reminder to (when possible) know what the code is doing when you copy/paste.
[18:19:40] <metame> MacWinner: and good job getting to the bottom of your cluster err
[18:21:12] <metame> bros: check this out https://docs.mongodb.org/manual/reference/method/db.setProfilingLevel/
[18:21:49] <bros> Awesome. Set it to 1, then read system.profile at the end of the day?
[18:22:00] <bros> Is the slowOpThresholdMs a sane default?
[18:23:11] <kur1j> can someone maybe explain how these two benchmarks obtained opposite results between Cassandra and Mongo on a single node? http://www.datastax.com/wp-content/themes/datastax-2014-08/files/NoSQL_Benchmarks_EndPoint.pdf vs http://info-mongodb-com.s3.amazonaws.com/High%2BPerformance%2BBenchmark%2BWhite%2BPaper_final.pdf
[18:24:09] <metame> bros: ya 100ms
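A sketch of that setup; 100 ms is the default slowOpThresholdMs:

    db.setProfilingLevel(1, 100)   // log ops slower than 100 ms
    db.system.profile.find().sort({ millis: -1 }).limit(10)   // worst offenders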
[18:26:09] <StephenLynx> version, configuration
[18:26:16] <StephenLynx> maybe the first dudes lied?
[18:26:52] <StephenLynx> it's hard to see how mongodb got as genuinely #rekt as it did in that first paper without something wrong going on
[18:28:08] <bros> metame: i already have 6. should i kill myself?
[18:28:53] <kur1j> StephenLynx: they both use Cassandra 2.1.x and Mongo 3.0.x.
[18:29:50] <kur1j> in the paper from united software I don't see actual config information other than the durability settings
[18:30:39] <metame> bros: not yet.
[18:31:02] <bros> Is it ok if they are aggregations?
[18:32:44] <metame> bros: take a look at the logs and see what sorts of queries are taking the longest
[18:32:55] <metame> bros: aggregation is definitely going to hurt perf
[18:35:27] <metame> bros: aggregation is best for analytics and such, not as a response to user behavior
[19:07:08] <MacWinner> if I have a 3-node replica set, and I want to upgrade from 3.0.8 to 3.2, what would be the best high level way to do it? I currently don't use wiredtiger but I want to. should I migrate my existing cluster to wiredtiger first, and then upgrade? keep in mind that I'm limited to the hardware of the 3-nodes for doing the upgrade
[19:12:53] <cheeser> you could take down one node, bump to 3.2, bring it up as wiredtiger and let it resync. repeat for the others.
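A sketch of one such step, with hypothetical paths; repeat per node, secondaries first, primary last after a stepDown:

    # stop the node, then clear its data so it initial-syncs into WiredTiger
    mongod --shutdown --dbpath /data/db
    rm -rf /data/db/*
    # restart on the 3.2 binary with the new engine; it rejoins and resyncs
    mongod --replSet rs0 --dbpath /data/db --storageEngine wiredTiger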
[19:28:09] <sbinq> Hello, what could be considered a reasonable connection pool limit for a mongo server (e.g. a single instance)? Just some ballpark numbers (it would probably depend on the specific app/deployment/whatever, but I'm still curious what the numbers look like). E.g. the java driver defaults to about 10 connections, and the golang driver defaults to about 4096 connections - which is best?
[19:41:59] <stickperson> my collection looks something like this: https://jsfiddle.net/7833mq4c/ . I want to dynamically calculate a “score” for each document by adding the values in the values array, but only if a certain condition is met (the text equals what I ask for)
[19:42:05] <stickperson> do i have to use $unwind for that?
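Yes, $unwind is the usual route; a sketch with field names guessed from the description, assuming each entry in values looks like { text: ..., value: ... }:

    db.coll.aggregate([
        { $unwind: "$values" },
        { $match: { "values.text": "what i ask for" } },
        { $group: { _id: "$_id", score: { $sum: "$values.value" } } }
    ])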
[22:36:59] <alexi5> hi guys
[22:42:40] <jbrhbr> hey folks. I have a collection that I'm doing a `.count({state_enum: 1})` on, and I also have an index on this field. with the verbose explain syntax I see that this index is even being favored for the count, but the query doesn't seem to benefit from it; performance is slow (1~2 seconds for 4.4mil records)
[22:43:15] <jbrhbr> any ideas? is my expectation that it would be faster unreasonable?
[22:58:35] <jbrhbr> it's actually taking about 600ms
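A quick check, 3.0+ syntax: a COUNT_SCAN stage means the count is answered from the index alone, while IXSCAN + FETCH means documents are still being read:

    db.coll.explain("executionStats").count({ state_enum: 1 })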