[00:28:18] <starseed> Is there a way to determine if a specific cursor id is still valid?
[00:29:17] <starseed> pymongo or mongo shell would both work, but I can't find a method that allows me to ask a MongoD if cursor_id xxxxxxxxxx is still alive
[00:31:49] <starseed> unfortunately I have notimeout cursors to deal with and am not in a position to change the application code which generates them, so I've written some code to clean them up - but it would be more efficient if I could ask mongo if each cursor I'm vetting still exists before parsing logs and issuing a pymongo.MongoClient.close_cursor(id)
[00:38:45] <starseed> currentOp doesn't seem to use cursor IDs
[00:38:55] <starseed> so I'm not sure I could cross reference
[00:39:34] <cheeser> yeah. i was looking over it, too. i'm not sure those values are exposed.
[00:41:31] <starseed> it isn't the end of the world I guess. The use case is I get a list of cursor IDs from log output which are blocking chunk migrations. I then search logs for each ID and determine when that cursor ID first appeared in logs (determine its age). If it has existed longer than x amount of time (our longest running jobs are like 20 hours)...then I kill it
[00:41:52] <starseed> but it would have been nice to find out if a cursor was still active before searching two weeks worth of compressed log output
[00:42:46] <starseed> but I think I can blindly issue a close_cursor(id) against each ID that is >24 hours old, worst that can happen is that it can't find the cursor to kill because it no longer exists.
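A minimal sketch of the age check described above (the log-parsing step that produces `first_seen` is left out). Killing is then best-effort: pymongo's `MongoClient.close_cursor(cursor_id)` is harmless if the server has already discarded the cursor, and on MongoDB 3.2+ the `killCursors` command even reports `cursorsKilled` vs. `cursorsNotFound`, which answers the "is this id still alive?" question directly:

```python
from datetime import datetime, timedelta

# Decide whether a notimeout cursor is old enough to kill.
# first_seen is the timestamp of the earliest log line mentioning
# the cursor id; 24 hours matches the threshold discussed above.
def should_kill(first_seen, now, max_age=timedelta(hours=24)):
    return now - first_seen > max_age

# With a real client the kill itself would be:
#   client.close_cursor(cursor_id)   # no-op server-side if already gone
```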
[08:57:51] <dnmngs> Hi folks. Are there separate downloads for MongoDB community and enterprise editions? Or is it possible to just upgrade the community edition to enterprise by using a license?
[08:58:33] <dnmngs> ah I see, it seems to be a different build
[10:35:29] <basil_kurian_> I'm using an invalid ssl cert on my replica set members. I have allowInvalidCertificates: true in my mongo config file. It works fine on mongo 3.0.12, but when I upgrade to mongo 3.2.6 it gives me an error saying the ssl cert is invalid
[10:36:17] <basil_kurian_> when I try rs.status(), I can see the message "lastHeartbeatMessage" : "could not find member to sync from"
[10:36:38] <basil_kurian_> any idea what is the issue here ?
[10:37:29] <tinco> can you read from the loaded config somewhere if it picked the value up?
[10:39:03] <basil_kurian_> @tinco using db.serverStatus() ?
[10:39:56] <basil_kurian_> how can I see the loaded config ?
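A running mongod can report the configuration it actually loaded via the `getCmdLineOpts` admin command (requires a live server; shown here as a mongo shell one-liner):

```shell
# prints both the raw argv and the parsed YAML options, so you can
# check whether net.ssl.allowInvalidCertificates was picked up
mongo --eval 'printjson(db.adminCommand({getCmdLineOpts: 1}))'
```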
[10:48:16] <ultrav1olet> Hi. Perhaps I'm asking a stupid question, but we'd like one server to host two shards. Do I have to run two mongodb instances on this server with different configuration files and TCP ports, or can two shards be created within the same mongodb process?
[10:51:18] <Derick> you need two mongod processes with separate config files and TCP ports
[10:52:56] <Derick> ? why do you want two "shards"?
[10:53:42] <ultrav1olet> We want to have two shards and two replicas on two servers, and an extra server for mongos, config and arbiter - a very tough configuration, but it will allow us to grow horizontally in the future
[10:54:15] <Derick> you still didn't answer why you want two shards
[10:56:14] <ultrav1olet> At the moment we have a replica set across three servers - all based on plain old HDDs - however they are not capable of withstanding our load (we store roughly 100MB of data every few seconds and release roughly 20MB - all randomly)
[10:56:42] <ultrav1olet> Our IO load hovers around 100%
[10:57:26] <ultrav1olet> In fact we omit 95% of data because if we dump everything, then our servers stall completely
[10:58:18] <ultrav1olet> So we want to migrate to SSD based servers and implement sharding so we'll be able to grow even beyond SSD disk bandwidth
[10:58:24] <Derick> having two shards on one box isn't going to fix that IO load
[10:58:36] <Derick> especially when they use the same drive...
[10:59:52] <ultrav1olet> SSD disks will immediately alleviate our problems, and then, when we add new shards, we'll be able to withstand an even higher load
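For reference, the two-mongod layout Derick describes would be started roughly like this (ports, paths, and replica set names are all illustrative):

```shell
# each shard needs its own mongod process, port, and data directory
mongod --shardsvr --replSet shardA --port 27018 --dbpath /data/shardA
mongod --shardsvr --replSet shardB --port 27019 --dbpath /data/shardB
```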
[11:04:47] <Derick> although using the WiredTiger storage engine should make that a bit better
[11:08:14] <ultrav1olet> Actually I'd love to talk about one problem we faced with mongo 3.2.5 (all 3.x.x releases are affected): after we switched to the WiredTiger engine (with compression, and without compression as well), indexing of one of our fields started to take over 2.5 hours, instead of the 2 minutes it took with mmapv1
[11:09:13] <ultrav1olet> at the same time all reads and writes were severely affected - effectively dead, as queries never completed while indexing was running
[16:44:59] <teprrr> hi, I'm doing a find with $lt & time to select the elements not updated in, e.g., the past hour, and if not, updating and setting that field. my question is, will $lt match for non-existing field?
[16:46:42] <teprrr> solution is apparently using $or and $exists :)
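The query teprrr lands on would look roughly like this in pymongo (field name `updated_at` is assumed): `$lt` alone never matches a document where the field is absent, so the missing-field case has to be included explicitly.

```python
from datetime import datetime, timedelta

# match documents not updated in the past hour, including
# documents that have never been updated (field missing)
cutoff = datetime.utcnow() - timedelta(hours=1)
query = {
    "$or": [
        {"updated_at": {"$lt": cutoff}},
        {"updated_at": {"$exists": False}},
    ]
}
```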
[17:16:23] <tom> In my Python app (using pymongo), I'd like to send multiple find_and_modify commands at once and then receive all the responses without wasting round trips in between every find_and_modify command. Is this possible? Would I need multiple sockets or could I write all requests at once into the socket? How would I implement it?
[18:37:16] <teprrr> tom: does find_and_modify do something else than just doing an update?
[18:37:22] <teprrr> because you can filter your updates..
[19:09:34] <shortdudey123> anyone know if the background option can be added to an index w/o reindexing? https://docs.mongodb.com/manual/tutorial/manage-indexes/ only mentioned TTL
[19:12:53] <Derick> shortdudey123: background indexing is only for the initial index creation
[19:13:03] <Derick> so it makes no sense to add it when an index is already existing
[19:13:40] <shortdudey123> hmm did not realize that
[19:14:26] <shortdudey123> new docs are always index in the background or the foreground after the initial index creation?
[19:42:58] <brunoais> Hi. I found in 2-year old and older answers that MongoDB does not support asking for the attributes of a document. Does that still hold true?
[19:43:40] <brunoais> If it does, what prevents such feature from being made? (I'm good with technical stuff so please do tell or point/link me to it)
[19:44:55] <brunoais> I've checked on a schema that seems to be perfect to what I want but, out of the ~15 operations I want to do on those data, 2 seem not possible to do
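At the time of this log there was no simple server-side operator for enumerating a document's keys (the aggregation operator `$objectToArray` only arrived later, in 3.4.4); client-side, though, the driver hands each document back as a plain dict, so key introspection is trivial. A sketch with a made-up document:

```python
# pymongo returns each document as a dict, so its attribute
# (field) names are simply the dict keys
doc = {"_id": 1, "title": "example", "setsOfType1": ["a", "b"]}
field_names = sorted(doc.keys())
```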
[20:51:53] <brunoais> StephenLynx, http://pastebin.com/cBp0DyLg <- 2 schemas for the collection
[20:52:41] <brunoais> StephenLynx, How do I ensure that elements of an array are considered sets according to a certain parameter of the document in it?
[20:53:38] <brunoais> Only the ones that are immediate values of the params with "setsOfTypeX" (where X is a digit) are dynamic. All the rest is stable and static
[21:20:38] <tom> teprrr: not sure what you mean by "filter your updates". my findAndModify does an update and returns the updated document, which is important since I need the exact document back for audit purposes.
[21:33:59] <teprrr> tom: well, it's been a day or two since you started asking these questions, and you still haven't defined really what you want.. makes it really hard to answer without knowing that..
[21:34:22] <teprrr> tom: but okay. can't you just update them & use find afterwards? or should it be atomic?
[21:36:25] <teprrr> probably yes, if it's for auditing.. oh well, can't really help, but maybe someone else can now as your reqs are known :)
[22:53:48] <tom> teprrr: sorry, I just haven't properly set up notifications on my IRC client. it needs to be atomic for auditing purposes
[22:54:34] <tom> maybe to rephrase, could I issue multiple findAndModify requests over the wire at once, and then wait for all the responses to come back, or does it have to be one by one?
[22:58:06] <teprrr> tom: I'm just a beginner working on his first mongodb project, but I'd assume that if there's no info available in the docs, it's not so easily doable. the problem being that for atomicity all the nodes in the cluster need to be synced at the time of the query => needs support from the backend
[22:58:17] <teprrr> so doing it on the wire won't help. but that's just complete guesswork...
[22:58:30] <teprrr> I hope someone knowledgeable would chime in, I'd be interested also in the answer
[22:58:56] <AvianFlu> tom, you can send all the requests you want to, but mongo has no concept of a transaction
[22:59:02] <AvianFlu> so there's no multi-doc atomicity
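Since the server offers no batched findAndModify, one client-side workaround (a sketch, not something from the discussion) is to overlap the round trips with a thread pool; with a real pymongo `Collection`, each worker thread checks out its own pooled socket, and each per-document update remains atomic on the server. The worker function below is a hypothetical stand-in for `collection.find_one_and_update(...)`.

```python
from concurrent.futures import ThreadPoolExecutor

def find_and_modify(doc_id):
    # Stand-in for a real collection.find_one_and_update(...) call that
    # returns the modified document for auditing.
    return {"_id": doc_id, "status": "updated"}

def run_batch(ids, max_workers=8):
    # Overlap the network round trips instead of issuing them one by one.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(find_and_modify, ids))
```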