[02:30:17] <Waheedi> basically one mongos went down for some reason (I know why) on a replica set with 3 secondaries, the primary detected it was unhealthy/out of reach, then I removed the unhealthy node
[02:31:12] <Waheedi> after that I re-added it once the node came up again, but the stateStr does not make sense
[03:37:39] <flaf> I use mongo 2.4 and with rs.conf() I can see the priority of each host only when the value is != 1 (which is the default). Is there a way to print the priority of each host even if it is equal to 1?
[03:38:36] <flaf> To be clear, I would like a command that explicitly shows the priority of each node.
[03:55:51] <Waheedi> and FYI Boomtime, even if you check from the node that has the removed status, it will change its status
[03:58:30] <Waheedi> flaf: you can loop through the conf on each node and get the priority, but if it's not showing, that means it's 1 as you say, so always assume that
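A minimal sketch of that loop for the mongo shell, treating a missing priority field as the default of 1 as described above:

    // print each member's priority, defaulting to 1 when the field is absent
    rs.conf().members.forEach(function (m) {
        var priority = (m.priority === undefined) ? 1 : m.priority;
        print(m.host + " -> priority " + priority);
    });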
[06:52:55] <m3t4lukas> I don't think you need the $elemMatch operator for that: https://docs.mongodb.org/manual/reference/operator/update/positional/#update-documents-in-an-array
[06:53:10] <m3t4lukas> a simple update with dot notation is enough
[06:55:45] <m3t4lukas> Logicgate: at least with the examples you gave. If you want to match multiple criteria within the documents in the array you want to update, then you'd need the $elemMatch operator :)
[06:56:21] <Logicgate> does that snippet make sense?
[07:02:20] <m3t4lukas> Logicgate: I think so ;) I never work with lambdas so I don't know whether the lambda constructor of array actually produces the right thing :P
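A hedged sketch of the distinction m3t4lukas draws (the collection and field names here are made up for illustration):

    // one criterion on the array: dot notation plus the positional $ is enough
    db.orders.update(
        { "items.sku": "abc" },
        { $set: { "items.$.qty": 5 } }
    );

    // several criteria that must match the same array element: use $elemMatch
    db.orders.update(
        { items: { $elemMatch: { sku: "abc", qty: { $lt: 5 } } } },
        { $set: { "items.$.status": "reorder" } }
    );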
[09:17:26] <mapc> morning; I am in a situation where I need to process a small subset of documents from a collection; small in this case still means millions
[09:18:25] <mapc> the subset is defined by a query; now I'd like to distribute the work among multiple workers, so I need some way to split the set of documents into smaller parts
[09:19:13] <mapc> I would like to avoid using a controller if possible, because that's hard to scale. So optimally I'd have some way to just query for each batch
[09:19:47] <mapc> now my idea was to hash a field (probably the id field), mod $number-of-workers and then just select those
[09:20:00] <mapc> This could be done as a map reduce query in JS
[09:20:36] <mapc> now, I know that mongodb already does exactly what I want for sharding: it takes a hash index over the shard key field and just uses the hash values from the index
[09:21:01] <mapc> now my question is: Is there any way to access the hash value from the hash index in a query?
[09:22:14] <mapc> I mean, if I could use that, I bet it would be a lot more efficient than coding and running a hash function in JS. Specifically because I'd need to recompute the hash on every query
[09:24:05] <mapc> I mean, if that doesn't work, I could also compute the hash once, store it in an _id_hash field and index that, but that's also not as beautiful as using the hash directly :)
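A sketch of that fallback for the mongo shell, assuming ObjectId _ids; the hash function, names, and worker count are illustrative, not MongoDB's internal hash:

    var N = 8; // number of workers (assumed)

    // one-off pass: store a stable numeric hash of _id on each matching document
    db.docs.find({ /* the subset query */ }).forEach(function (doc) {
        var s = doc._id.str, h = 0;
        for (var i = 0; i < s.length; i++) {
            h = (h * 31 + s.charCodeAt(i)) % 1000000007;
        }
        db.docs.update({ _id: doc._id }, { $set: { _id_hash: h } });
    });
    db.docs.ensureIndex({ _id_hash: 1 });

    // worker k then selects only its slice of the subset:
    db.docs.find({ _id_hash: { $mod: [N, k] } /* plus the subset query */ });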
[11:03:22] <_shaps_> Hi, I'm getting an odd error when trying to connect to a rs with pymongo. i'm using 2 different driver versions
[11:05:11] <_shaps_> in v2.7.2 it works OK, in 3.2 it comes back with "port must be int"
[11:06:00] <_shaps_> Has the connection handling changed? According to the docs I should now pass each node
[11:06:06] <_shaps_> and mongoclient will figure it out.
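One driver-agnostic form that sidesteps separate host/port arguments is the seed-list connection string; the hosts, database, and replica set name below are placeholders:

    // every member plus the replicaSet name in a single URI
    var uri = "mongodb://host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0";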
[11:33:57] <pchoo> Hi all, quick question: say I have a collection with the following document structure {userId: ..., type: ..., isDefault: true, name: ...} - I want a unique index on userId, type and name. I also want only one document with isDefault: true to be possible. Is there a way of preventing an insert via an index if there is already a document with {isDefault: true}, or is this a case of search first, insert later?
[12:20:12] <cheeser> pchoo: you'd have to enforce that in your app i'm afraid
[12:20:30] <pchoo> cheeser: thanks for the confirmation, that's what I thought :)
[12:20:58] <StephenLynx> or you can try and insert anyway and catch the error.
[12:21:19] <StephenLynx> but that would require a unique index on it.
[12:21:40] <cheeser> StephenLynx: a unique index would fail on all false values.
[12:22:18] <cheeser> now, 3.2 provides partial indexing so you might be able to create that unique index on only documents with true in that field.
[12:22:28] <cheeser> it would be weird but would probably work.
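A sketch of cheeser's partial-index idea for 3.2+ (collection and key names assumed); the partial filter restricts the unique constraint to default documents only:

    // at most one isDefault:true document per user; isDefault:false documents
    // are not indexed at all, so the all-false failure cheeser mentions is avoided
    db.settings.createIndex(
        { userId: 1, isDefault: 1 },
        { unique: true, partialFilterExpression: { isDefault: true } }
    );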
[12:24:12] <pchoo> Hmm, we are currently on 3.0, it is a Meteor based project, and I haven't looked into whether they now support 3.2 or not. I'll have a look at partial indexes then, but I feel I will just enforce in code :)
[12:24:29] <cheeser> meteor should work fine on 3.2.
[12:24:47] <cheeser> you just might not be able to access certain new features on the server.
[12:29:38] <StephenLynx> all I care about with these tasks is that they get done correctly and efficiently.
[12:29:51] <StephenLynx> they do ONE thing and they do it right.
[12:30:20] <StephenLynx> a framework abstracts high-level tasks into an even higher-level task and makes it overly streamlined.
[12:30:50] <StephenLynx> like if your steering wheel were replaced by two buttons: left and right
[12:31:15] <StephenLynx> and your car can only move in a straight line or 90º to either side.
[12:32:03] <StephenLynx> did you ever look at meteor's source code? it's impossible to make sense of it.
[12:32:25] <StephenLynx> I think express is bad, but meteor takes it to a whole new level as an overly complex and bloated web framework.
[12:33:01] <StephenLynx> because not only does it abstract the high-level tools for handling http, it also couples the database to the application.
[12:33:08] <pchoo> Yeah, I've dug about in the source a fair bit, it was ok. It provides what the project needs, and has allowed our team to all work with consistent principles.
[12:33:24] <pchoo> StephenLynx: it's a tool for a job, not the be-all-and-end-all of JS
[14:29:29] <deathanchor> hmm.. you can use compound indexes for sharding?
[14:30:32] <deathanchor> mehakkahlon: for v2.x mongo I know that the shard key is required for updates, but it could use a different index on the actual shard depending on what the query planner says.
[14:33:31] <dvayanu> hello, I have an interesting error in my logs: Caused by: com.mongodb.DBPortPool$SemaphoresOut: Concurrent requests for database connection have exceeded limit of 25. I would like to ask how I can avoid this. Is this a driver setting, and where does the 25 come from? I have driver 2.10.1 against a 2.4 db
[14:40:38] <metame> zivester: (asked how to install MongoDB on Ubuntu 15.10) - I followed these steps to get mongo working on my 15.10 machine: http://www.liberiangeek.net/2015/06/how-to-install-mongodb-in-ubuntu-15-04-easily/
[14:45:38] <metame> mehakkahlon: just run an explain() operation on your query to see what the query planner does, basically replace .find with .explain in your query
[14:45:54] <metame> if it is a fully covered query, it should show that no docs were scanned
[14:46:03] <metame> make sure to project out your _id
[14:46:49] <zivester> metame, thanks, but I don't think there are packages for my lsb-release (aka wily) on that repo
[14:46:57] <metame> and that will also tell you what indices were used to fulfill your query
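A minimal illustration of what metame is describing, with index and field names assumed: once the projection excludes _id and returns only indexed fields, the query is covered and explain should report no documents examined.

    db.coll.ensureIndex({ a: 1 });
    // project only the indexed field and exclude _id so the query is covered
    db.coll.find({ a: 1 }, { a: 1, _id: 0 }).explain("executionStats");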
[14:47:07] <metame> zivester: hmmm... maybe that was when I had 15.04
[14:47:48] <metame> zivester: imagine this is on your dev box, not a prod env?
[14:48:46] <StephenLynx> the easiest route is to install a centOS VM
[14:48:55] <StephenLynx> RHEL packages are by far the most supported
[14:50:03] <zivester> need to check today, but i got it installed with the debian wheezy packages.. those seemed to have worked MongoDB shell version: 3.2.1
[14:50:09] <zivester> just need to make sure everything is compatible
[14:50:26] <metame> zivester: my mind is blank on how I installed it. I haven't yet upgraded to 3.2, so I'll probably figure it out then
[14:51:43] <zivester> i get some unrelated system warnings when connecting locally, but nothing that matters I don't think
[14:52:23] <metame> zivester: cool. also found this that might help if you do run into any issues: http://theleancoder.net/index.php/2015/11/05/install-mean-stack-on-ubuntu-15-10/
[14:52:23] <zivester> does mongo 3 use "mongod" instead of "mongodb" as the service name now?
[14:53:01] <metame> zivester: mongod is the server service
[14:53:33] <metame> zivester: think it's been that way since at least 2.6 (that's where I started anyway)
[14:54:27] <zivester> nah, my 2.6 install only has mongodb as a service, at least as one I can call via `service mongodb`
[14:56:28] <metame> zivester: interesting. may be that I just have my bash path setup differently
[14:57:54] <zivester> yah.. or the debian wheezy packages do it differently
[14:58:41] <zivester> i wonder when they'll have mongo 3 support for 16.04... hopefully soon
[15:03:17] <m3t4lukas> zivester: if you're creating new software you should really work with mongodb 3.2
[15:03:35] <m3t4lukas> or 3.3, which is released as far as I know
[15:04:37] <Derick> 3.3 is a development release series
[15:10:03] <StephenLynx> I don't think that's wrong. I believe debian 8 has more in common with ubuntu 15 than ubuntu 14 does.
[15:10:26] <StephenLynx> I think ubuntu 14 is behind even centOS 7.2 at this point
[15:10:41] <Derick> metame: sorry, I meant that *our* apt packages are better than using the packages coming with Ubuntu or Debian itself
[15:10:47] <metame> StephenLynx: well right now they have a pkg for deb 7
[15:11:23] <StephenLynx> anyway, still would be easier to run a VM with centOS.
[15:11:41] <metame> Derick: ok cool. Just seeing how much I needed to do to get it to work (when i upgrade to 3.2). I think last time I did just use the wheezy pkg
[15:11:48] <StephenLynx> and if you've got ubuntu 15 on a server, I've got bad news for you.
[15:13:32] <StephenLynx> you could run a dev mongo deploy with 256mb
[15:15:07] <metame> StephenLynx: ya my setup is working fine for now on 3.0, was just trying to help zivester and thinking about upgrading to 3.2 when I get around to it (unfortunately currently don't get to work with Mongo at work)
[15:18:59] <Ryan_> If I'm sharding and using multiple mongos instances (one per api server), do I configure all of them exactly the same (i.e. connect and set the same shard key)? And can they share the same config mongod instances?
[15:23:49] <metame> Ryan_: the docs on sharding are pretty good imho https://docs.mongodb.org/manual/core/sharding-introduction/
[15:24:20] <Ryan_> indeed, i just don't see anything on setup for multiple mongos instances, besides high-level diagrams
[15:25:50] <metame> Ryan_: here's an example shell script to set up sharding that may answer your q - https://github.com/metame/mongoCertification/blob/master/notes/init_sharded_env.sh
[15:26:39] <metame> Ryan_: obviously that script is setting it all up on a single server for demonstration purposes
[15:26:40] <Ryan_> metame: useful to reference, but he only has one mongos instance (line 80)
[15:30:56] <metame> Ryan_: based on the diagram in the docs, it seems you'd just run the same mongos command on each server you want it to run on
[15:31:00] <StephenLynx> is there any way to fetch the highest _id I got on a collection?
[15:31:39] <metame> Ryan_: since they're all connected to the same rs's
[15:32:19] <Ryan_> metame: ok probably so, thanks! this script is actually pretty useful btw. How exactly on lines 61-68, are the config commands being piped to the mongo connection?
[15:36:06] <Pinkamena_D> on mongo 2.4, how can I grant a user permission to drop a database? I feel like I have tried every available permission but I still get unauthorized.
[15:36:22] <metame> Ryan_: had to look up some bash docs on bitwise operators
[15:37:38] <metame> << bitwise left shift (multiplies by 2 for each shift position)
[15:40:21] <metame> Ryan_: ahh this explains EOF: http://forums.devshed.com/programming-languages-139/eof-328074.html
[15:42:23] <Ryan_> wow, thanks! For some reason I thought that you actually had to be inside of the mongo shell, but i guess this opens it first
[15:43:55] <metame> Ryan_: ya pretty nice to show how you could script your config
[17:21:44] <Voyage> If I have a huge JSON doc for each id, how to search for a specific JSON.property/value?
[17:52:05] <StephenLynx> don't know what your query is.
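For a nested property, dot notation in the query reaches arbitrarily deep regardless of the document's size; the names here are placeholders:

    // match documents whose nested property equals a value
    db.coll.find({ "outer.inner.property": "value" });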
[17:54:44] <mapc> argh, no. multiple matches would apply the first match to the entire collection, then apply the second match to the result of the first match and so on
[17:55:36] <mapc> but that's incorrect for my case. I need multiple queries, optimally run in parallel, then dumped into an array and sent back to me, optimally as a cursor
[17:56:19] <mapc> each of the queries could be run separately and entirely independently. I want to combine them so they can run in parallel and I can save some latency
[18:01:05] <mapc> it could also be written with a series of {$or: [{propA: x1, propB: y1}, {propA: x2, propB: y2}, ...]}
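Spelled out as a runnable query, that $or form would look like this (values illustrative):

    db.docs.find({
        $or: [
            { propA: "x1", propB: "y1" },
            { propA: "x2", propB: "y2" }
        ]
    });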
[18:06:58] <StephenLynx> or you could try performing one query at a time.
[18:07:27] <StephenLynx> and take in the latency as a trade-off for maintainability
[18:07:37] <MacWinner> just an fyi for anyone using node + mongo: if you set up a socketTimeout with the node driver and all the connections in the pool are not used within the socketTimeout period, your connections will be disconnected and reconnected all the time..
[18:07:41] <StephenLynx> and not having one huge query taking longer
[18:07:53] <mapc> StephenLynx: I am optimizing for performance right now. batch processing around 1G documents ^^
[18:08:25] <StephenLynx> fewer operations does not necessarily equal better performance.
[18:08:36] <StephenLynx> what if you hog your db with a single query by doing that?
[18:11:23] <mapc> StephenLynx: The queries should involve around 10k documents max. And I am working with a cursor batch size=6k
[18:11:31] <mapc> but you are right, that would need testing
[18:16:11] <MacWinner> metame, i had gigs of logs per day of just connect/disconnect mongod.log messages.. only with 4 servers connecting in my cluster. actually didn't notice it until i started investigating a very rare "Primary server unavailable" error that was happening even though our mongo cluster was healthy. turns out the connection pool was getting torn down at the same time a write was happening
[18:16:47] <StephenLynx> why did you have to set a socket timeout, though?
[18:17:29] <metame> bros: your indices look fine. are you sure you need both of your first 2 indices though?
[18:17:40] <MacWinner> StephenLynx, i think i just had it from a copy and paste of some other config i found when I first started
[18:18:04] <bros> metame: Almost. What can I do to kind of "keep an eye" on my production server for a day and see what is taking the longest?
[18:19:20] <metame> MacWinner: good reminder to (when possible) know what the code is doing when you copy/paste.
[18:19:40] <metame> MacWinner: and good job getting to the bottom of your cluster err
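The shape of the setting MacWinner describes, as a hedged sketch for the 2.x-era node driver (option placement and names may differ by driver version; URL and values are placeholders):

    var MongoClient = require('mongodb').MongoClient;

    // a socketTimeoutMS shorter than the pool's idle periods causes the
    // teardown/reconnect churn described above; 0 disables the timeout
    MongoClient.connect('mongodb://localhost:27017/test', {
        server: { socketOptions: { socketTimeoutMS: 0, connectTimeoutMS: 30000 } }
    }, function (err, db) {
        if (err) throw err;
        // ... use db ...
    });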
[18:21:12] <metame> bros: check this out https://docs.mongodb.org/manual/reference/method/db.setProfilingLevel/
[18:21:49] <bros> Awesome. Set it to 1, then read system.profile at the end of the day?
[18:22:00] <bros> Is the slowOpThresholdMs a sane default?
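A sketch of the workflow behind metame's link; 100 ms is the server's default slowOpThresholdMs:

    // profile only operations slower than 100ms
    db.setProfilingLevel(1, 100);
    // later: inspect the slowest recorded operations
    db.system.profile.find().sort({ millis: -1 }).limit(10).pretty();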
[18:23:11] <kur1j> can someone maybe explain how these two benchmarks obtained opposite results between Cassandra and Mongo on a single node? http://www.datastax.com/wp-content/themes/datastax-2014-08/files/NoSQL_Benchmarks_EndPoint.pdf vs http://info-mongodb-com.s3.amazonaws.com/High%2BPerformance%2BBenchmark%2BWhite%2BPaper_final.pdf
[18:31:02] <bros> Is it ok if they are aggregations?
[18:32:44] <metame> bros: take a look at the logs and see what sorts of queries are taking the longest
[18:32:55] <metame> bros: aggregation is definitely going to hurt perf
[18:35:27] <metame> bros: aggregation is best for analytics and such, not as a response to user behavior
[19:07:08] <MacWinner> if I have a 3-node replica set, and I want to upgrade from 3.0.8 to 3.2, what would be the best high level way to do it? I currently don't use wiredtiger but I want to. should I migrate my existing cluster to wiredtiger first, and then upgrade? keep in mind that I'm limited to the hardware of the 3-nodes for doing the upgrade
[19:12:53] <cheeser> you could take down one node, bump to 3.2, bring it up as wiredtiger and let it resync. repeat for the others.
[19:28:09] <sbinq> Hello, what could be considered a reasonable connection pool limit for a mongo server (e.g. a single instance)? Just some average numbers maybe (it would probably depend on the specific app/deployment/whatever, but still curious what the numbers are like). E.g. the java driver defaults to about 10 connections and the golang driver defaults to about 4096 connections - which is best?
[19:41:59] <stickperson> my collection looks something like this: https://jsfiddle.net/7833mq4c/ . i want to dynamically calculate a “score” for each document by adding values in the values array only if a certain condition is met (the text equals what i ask for)
[19:42:05] <stickperson> do i have to use $unwind for that?
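One way to do it with $unwind, sketched from the description above (the field names are guesses at the fiddle's structure):

    // sum the values of matching array elements into a per-document score
    db.coll.aggregate([
        { $unwind: "$values" },
        { $match: { "values.text": "the text i ask for" } },
        { $group: { _id: "$_id", score: { $sum: "$values.value" } } }
    ]);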
[22:42:40] <jbrhbr> hey folks. i have a collection that i'm doing a `.count({state_enum: 1})` on, and I also have an index on this field. when I use the verbose explain syntax I see that this index is even being favored for my count, but the query doesn't seem to benefit from the index; i.e., performance is slow (like 1-2 seconds for 4.4mil records)
[22:43:15] <jbrhbr> any ideas? is my expectation that it would be faster unreasonable?
[22:58:35] <jbrhbr> it's actually taking about 600ms
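For reference, a hedged way to confirm what that count is actually doing in the 3.0+ shell (collection name assumed):

    // executionStats shows whether the count runs as an index-only COUNT_SCAN
    db.coll.explain("executionStats").count({ state_enum: 1 });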