[00:51:56] <starseed> I'm looking for a way to determine the age or creation date of a cursor. Anyone know a way to do this?
[02:01:00] <starseed> I'm looking for a way to determine the age or creation date of a cursor. Anyone know a way to do this?
[02:02:01] <cheeser> i don't think there is. what are you trying to find?
[02:14:54] <starseed> just what I said, the age of the cursor
[02:15:06] <starseed> We have bad code in an application that unfortunately depends on notimeout
[02:15:32] <starseed> so we have these zombie cursors popping up occasionally which block sharding chunk migrations
[02:16:39] <starseed> I am scraping the Cursor IDs out of our logs, then I need to figure out how long each cursor has existed and kill them if they are older than x hours
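There is no server command that reports a cursor's creation time directly, but once the IDs are scraped from the logs they can be killed from the shell. A minimal sketch, assuming MongoDB 3.2 or newer; the collection name and cursor ID are hypothetical:

    // minimal sketch, assuming MongoDB 3.2+; "events" and the ID are placeholders
    db.runCommand({
        killCursors: "events",                     // collection the cursor was opened on
        cursors: [ NumberLong("8910234567890") ]   // cursor IDs scraped from the logs
    })
    // db.serverStatus().metrics.cursor reports open/timed-out counts, but not per-cursor age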
[02:21:42] <cheeser> i'm surprised they last longer than 10 minutes, tbh
[02:21:53] <cheeser> i thought that was the timeout on cursors.
[02:23:18] <cheeser> i'd start by grepping the source for that option and removing it.
[02:55:08] <starseed> you can specify notimeout, which means they live until killed via command or until you restart the mongod process
[02:56:06] <starseed> we have some very very long running queries. Due to shard distribution, sometimes we will open a query and access some data... then it'll be more than ten minutes until that query needs data located on that shard again. So with the default the cursor would time out and the query would fail
[02:57:24] <starseed> our devs' 'workaround' was to specify notimeout on the cursors... which frankly wouldn't be the end of the world, except that open cursors block chunk migrations.
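For context, the shell exposes this as a cursor flag; a minimal sketch of the pattern described above, with a hypothetical collection and filter:

    // hedged sketch; "readings" and the filter are hypothetical
    var cursor = db.readings.find({ region: "eu" }).noCursorTimeout()
    // such a cursor never times out server-side: it lives until it is exhausted,
    // explicitly killed, or the mongod restarts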
[07:14:39] <skoude> hmm... I would need a unique key for every single mongo document. Should I use ObjectId or should I generate my own? Any best practices for this?
[07:16:47] <joannac> if you have a built in unique key for your documents, then use that
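In other words, a natural key can simply be stored as _id; a quick sketch with hypothetical data:

    // use your own unique key as _id...
    db.users.insertOne({ _id: "alice@example.com", name: "Alice" })
    // ...or omit _id and let the driver generate an ObjectId
    db.users.insertOne({ name: "Bob" })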
[10:01:56] <jokke> my disk space ran out.. i deleted old time series data and wanted to reclaim the space with db.repairDatabase() but i got the following error: Cannot repair database energybox having size: 478605737984 (bytes) because free disk space is: 2851266560 (bytes)
[10:02:02] <jokke> is there really nothing i can do?
[10:02:24] <Derick> you can setup a secondary, and let it sync
[10:02:34] <Derick> then tear down the first node and reconfigure the replicaset
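Roughly, that resync route looks like this from the shell; a sketch assuming hypothetical hostnames and a healthy replica set:

    // add a fresh member with enough free disk and let it perform an initial sync
    rs.add("newhost.example.net:27017")
    rs.status()    // wait until the new member reports SECONDARY
    // once it has caught up, remove the bloated member and reclaim its disk
    rs.remove("oldhost.example.net:27017")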
[10:39:57] <kedare> Simple question, is this still true?
[10:39:59] <kedare> "One MongoDB 3.2 feature we demoed at MongoDB World in June was $lookup. $lookup is an aggregation pipeline stage that lets you insert data from a different collection into your pipeline. This is effectively a left outer join. $lookup will only be included in MongoDB Enterprise Server"
[12:53:41] <atbe> I'm seeing some network discovery going on before I started to use my MongoDriver, is this by design? The mongo code is packaged in with some other code. But the mongo code had not been executed and it already started checking the network for mongo instances
[12:58:50] <cheeser> that's likely the driver discovering the cluster topology. i.e., trying to find the primary
[12:59:05] <cheeser> but that has nothing to do with anything static...
[13:03:13] <atbe> I see, could you point me to the code that does this?
[13:23:03] <cheeser> there's no way for the driver to know what machines to talk to until you give at least one host in a constructor call on MongoClient
[13:23:29] <atbe> so there are no static blocks executing network code when loaded into the jvm?
[13:23:49] <atbe> I am just trying to be as thorough as can be
[13:23:52] <cheeser> of course not. how would they know who to talk to?
[14:06:57] <nikitosiusis> https://docs.mongodb.com/manual/faq/concurrency/#how-does-concurrency-affect-secondaries this says mongo applies the replication oplog in batches and doesn't allow reads while doing so. Can I tune this somehow? When I apply many writes to the primary, I get a significant read-time increase on the secondary
[14:07:55] <cheeser> why are you trying to read from the secondary?
[14:11:32] <nikitosiusis> because master dies when I try to read and write simultaneously
[15:14:14] <AAA_awright> Hey, I popped in here a while back reporting troubles with MongoDB eating 110% CPU sitting idle doing nothing, no I/O, no network, empty data
[15:14:37] <AAA_awright> It appears it was caused by a leapsecond bug fixed with a `sudo date` operation
[15:15:02] <AAA_awright> https://www.mongodb.com/blog/post/mongodb-and-leap-seconds claims MongoDB is leapsecond-safe, is this really the case?
[15:15:43] <AAA_awright> This was bothering me for weeks and I just gave up until recently. A restart may have also helped, but I can't easily do that since it's a production machine
[18:46:50] <boutell> Hi. I’m getting an “Overflow sort stage buffered data usage of 33559541 bytes exceeds internal limit of 33554432 bytes” error. I know this means I’m trying to do a sort() on something that isn’t indexed. And I’m gazing at the mongodb logs to figure out what. But while they tell me exactly what the query is, they don’t seem to tell me what the sort is at all. ): This is 2.6. Any thoughts on how to find out what sort is being requested?
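One way to recover the sort spec when the logs don't show it: briefly turn on the profiler and inspect the recorded operation. A sketch, assuming a short level-2 profiling window is acceptable:

    // capture all operations for a short window (adds overhead; use sparingly in production)
    db.setProfilingLevel(2)
    // ...reproduce the failing query, then look at the most recent profile entries:
    db.system.profile.find({ op: "query" }).sort({ ts: -1 }).limit(5).pretty()
    // on 2.6 the recorded entry should show the orderby alongside the query,
    // which tells you which index the sort() needs
    db.setProfilingLevel(0)    // turn profiling back off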
[20:14:02] <cheeser> if you're doing a $lookup on a field, you'll want that field indexed.
[20:18:21] <xmad> Cool, would a $match followed from a $lookup use an index, if available?
[20:19:40] <cheeser> this is not my area, but i would expect that the optimizer would attempt to move that $match before the $lookup. but if you're trying to $match on the values of the $lookup result, it couldn't do that of course.
[20:19:48] <cheeser> it also couldn't use an index, afaik.
[20:23:30] <xmad> Yeah, assuming the $match is operating on fields that come from $lookup
[20:27:07] <xmad> Thanks so much. I think it's fairly safe to assume that $match doesn't use indexes if it's not at the beginning (given the wording from the doc). Maybe $match can be optimized for $lookup on later versions
[20:27:11] <xmad> $lookup is a fairly new feature after all
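To make that concrete, a sketch with hypothetical collections: a $match on the base collection's own fields can sit before the $lookup (and use an index), while a $match on the joined fields has to come after it:

    // hypothetical "orders" and "customers" collections
    db.orders.aggregate([
        { $match: { status: "shipped" } },           // filters the base collection; can use an index on status
        { $lookup: {
            from: "customers",
            localField: "customerId",
            foreignField: "_id",
            as: "customer"
        } },
        { $match: { "customer.country": "FI" } }     // filters on joined data; must run after the $lookup
    ])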
[20:28:02] <JFlash> hi, i think the last time i got it working by manually passing the config file
[20:28:25] <JFlash> but now i get this error when I try passing -f
[20:28:28] <JFlash> F CONTROL Failed global initialization: FileNotOpen Failed to open "/var/log/mongodb/mongod.log"
[20:33:05] <cheeser> JFlash: permissions problem on that file. you're not starting mongod as the correct user.
[21:02:11] <sivli> Just wondering what the best way is to aggregate data into an existing collection w/o deleting the docs already in it.
[21:02:35] <cheeser> $out will overwrite the existing data.
[21:02:52] <sivli> I am aware, which is my problem.
[21:03:53] <sivli> I could just have it returned to my app server and then bulk insert but that seems a needless usage of network resources.
[21:04:42] <AndrewYoung> There's no easy way for mongo to know if it can cleanly insert all those records or not. If there are _id or other unique index constraints on the existing collection it would cause problems with the insert.
[21:05:04] <sivli> makes sense, I figured that was the reason.
[21:06:50] <sivli> So what I am wondering is there a way to cleanly do a collection to collection copy or do I really need to send the results over the wire both ways?
[21:07:14] <sivli> The col to col option likely has the same issues as $out (assuming that is the reason for the limitation)
[21:07:27] <AndrewYoung> You could use server side javascript to do it.
[21:08:02] <AndrewYoung> But server side javascript requires a global lock.
[21:08:44] <AndrewYoung> It would probably be better to do it the "naive" way by implementing it in code on your app server.
[21:09:21] <sivli> True but my mongo host (Atlas) likely does not have a way to do it locally. So I would have to do it over the wire.
[21:10:05] <sivli> Which I can... but it feels like such a waste. Thus here I am asking for other options :)
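For reference, the over-the-wire version is short; a sketch from the shell with a hypothetical pipeline, appending to the target collection without touching what's already there:

    // hypothetical source/target collections and pipeline
    var docs = db.source.aggregate([
        { $group: { _id: "$category", total: { $sum: "$amount" } } }
    ]).toArray()                  // fine for small results; iterate the cursor in batches for big ones
    if (docs.length > 0) {
        db.target.insertMany(docs, { ordered: false })   // duplicate _id values fail individually, the rest insert
    }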
[21:47:19] <warp0x00> "Nevertheless, systems running MongoDB do not need swap for routine operation. Database files are memory-mapped and should constitute most of your MongoDB memory use. Therefore, it is unlikely that mongod will ever use any swap space in normal operation."
[21:48:00] <warp0x00> but clearly that's wrong, because linux doesn't swap memory-mapped files, only anonymous memory
[21:48:20] <AndrewYoung> That bit of the documentation is talking specifically about the MMAPv1 storage engine.
[21:49:12] <AndrewYoung> Which uses mmap to map files to virtual memory.
[21:49:22] <AndrewYoung> However, the default storage engine for the version you're using is WiredTiger.
[21:55:15] <JFlash> I have to tokenize the text then
[21:55:25] <JFlash> so maybe i need to use map reduce?
[21:56:24] <JFlash> what if I saved the text as tokens prior to insertion
[21:56:36] <AndrewYoung> warp0x00: You might just need to adjust the WiredTiger cache size, but that's just a guess. https://docs.mongodb.com/manual/faq/storage/#wiredtiger-storage-engine
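For reference, the cache size can also be adjusted at runtime; a sketch, assuming WiredTiger and an example size of 2 GB:

    // resize the WiredTiger cache without a restart (example value; tune to the workload)
    db.adminCommand({ setParameter: 1, wiredTigerEngineRuntimeConfig: "cache_size=2G" })
    // the equivalent startup setting is storage.wiredTiger.engineConfig.cacheSizeGB in mongod.conf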
[22:01:31] <AndrewYoung> warp0x00: Is the wording here useful at all? (I'm asking about the documentation quality, specifically) https://docs.mongodb.com/manual/administration/production-notes/?_ga=1.27790261.866869998.1468255429#hardware-considerations
[22:01:40] <AndrewYoung> Look at the bit in green. (The note)
[22:02:13] <AndrewYoung> JFlash: I would do a compound of the _id and the token field, probably.
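A sketch of that idea with a hypothetical collection, storing the pre-tokenized text as an array and indexing it together with _id:

    // hypothetical "docs" collection; tokens are produced before insertion
    db.docs.insertOne({ _id: 42, text: "the quick brown fox", tokens: ["the", "quick", "brown", "fox"] })
    db.docs.createIndex({ _id: 1, tokens: 1 })    // compound index over _id and the token field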
[22:05:17] <AndrewYoung> Yeah, those are all old links.
[22:05:48] <AndrewYoung> You might set the search time to "past year".
[22:06:33] <AndrewYoung> Glad you got it working though. :)
[22:08:58] <warp0x00> if someone made a new SO/Serverfault question it would likely just get marked as a duplicate. I wonder if someone with lots of Stack Overflow points can go fix them, or if there is even a mechanism on SO for fixing answers that are wrong because they're out of date