[00:34:05] <GothAlice> blizzow: I see nothing mentioning 3.4 is released, only announced. The whitepaper and press release aren't overly obfuscated on this. Pointy Haired Bosses should read the first sentence of a press release, not just the title. ;)
[00:36:58] <blizzow> GothAlice: Subject: "Announcing Mongodb 3.4" Body: "We are proud to announce MongoDB 3.4. This new release enables modern mission-critical applications with the support, flexibility, and tooling to ensure your database can keep up with your vision for years to come."
[00:37:30] <blizzow> Nothing in this email says anything about the fact that it's not released yet.
[00:39:06] <GothAlice> Announcing. Proud to announce. Not "announce general availability", or "packages announced"… Nothing there says it's generally available, either. (Of course, it is, if you're willing to try out a release candidate: https://github.com/mongodb/mongo/tree/r3.4.0-rc2)
[00:41:02] <blizzow> My point is that people reading the email infer that if a product is announced, the product is available. They do so reasonably.
[00:41:46] <GothAlice> (So if Pointy Haired Bosses want you on 3.4, you can oblige without too much risk.)
[00:42:09] <GothAlice> And 3.4 _is_ pretty awesome. I want to be on it right now, too. ^_^
[00:43:25] <blizzow> Many PHBs and operations people work for institutions with policies saying don't use beta software.
[00:44:35] <blizzow> Like I said, the mongo propaganda is poorly worded.
[02:34:56] <Sasazuka> I'm using Casbah to retrieve a document; it's a very simple lookup and it succeeds. I then use the _id to perform an update, but I noticed that the _id seems to be off by one (in Mongo: 5823d98712a259121c2b6aad vs. returned: 5823d98712a259121c2b6aae) - perhaps I did something wrong?
[02:35:13] <Sasazuka> I initially did not use _id to perform the lookup
[02:40:02] <Sasazuka> changing it to use the original query field works as expected
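(A way to sanity-check a mismatch like that from the shell side, assuming a hypothetical collection "things" with a lookup field "name": if the _id printed here differs from what Casbah reports, the driver side is suspect, and if the update matches nothing, the _id being reused really is wrong:)

    // look the document up by the original query field
    var doc = db.things.findOne({ name: "example" })
    // the _id as the server stores it, for comparison with what Casbah returns
    print(doc._id)
    // reuse that _id for the update; an off-by-one _id would match zero documents
    db.things.update({ _id: doc._id }, { $set: { updated: true } })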
[04:15:41] <niftylettuce> as soon as 100 people ❤️ or retweet this then I will commit/release the official version of CrocodileJS https://twitter.com/niftylettuce/status/796561776267710465
[05:46:05] <nikitosiusis> how can I force initial sync from specific rs member? I want to sync from hidden one
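(For reference: the shell helper for this is rs.syncFrom(), which wraps the replSetSyncFrom command. It is a temporary hint rather than a permanent setting, and to steer an initial sync it has to be issued on the syncing member while the sync is getting under way; the hostname below is a placeholder:)

    // run on the member that should sync, naming the desired source
    rs.syncFrom("hidden-member.example.com:27017")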
[11:08:07] <gokl> Hi, is it ok to use mongodump on a mongos instance to get a backup of a sharded cluster?
[11:13:48] <gokl> Let me change my question: why do I need db.fsyncLock() when I do backups with mongodump --oplog?
[11:14:02] <gokl> It's in the official documentation: https://docs.mongodb.com/manual/tutorial/backup-sharded-cluster-with-database-dumps/
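(For context, a hedged sketch of the per-shard dump that tutorial builds on, with placeholder host and paths. --oplog gives a point-in-time dump of a single replica set member and cannot be run through mongos; fsyncLock is what the tutorial uses to freeze each shard's backup member so the separate per-shard dumps line up to a consistent moment across the whole cluster:)

    # point-in-time dump of one shard's backup member (not via mongos);
    # host, port, and output path are placeholders
    mongodump --host shard1.example.com --port 27018 --oplog --out /backup/shard1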
[12:00:08] <dsfsdfsdfs_> Hey. Can someone answer a few questions about certs in mongodb?
[12:05:27] <kurushiyama> dsfsdfsdfs_ Do not ask to ask, just ask.
[12:07:41] <dsfsdfsdfs_> well, if nobody is available I don't waste time typing... Anyway: are the CA and CRL for internal (cluster) and external auth necessarily the same?
[12:08:46] <dsfsdfsdfs_> When I disable authentication of clients via certificates, does this include internal communication or only external communication?
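(For reference, a sketch of the 3.x-era settings involved, with placeholder paths. The cluster certificate (clusterFile) is separate from the client-facing one, but there is only a single CAFile/CRLFile pair in these versions, so internal and external certificates are validated against the same CA:)

    net:
      ssl:
        mode: requireSSL
        PEMKeyFile: /etc/ssl/mongod.pem    # this node's client-facing certificate
        clusterFile: /etc/ssl/cluster.pem  # certificate presented to other cluster members
        CAFile: /etc/ssl/ca.pem            # one CA validates both client and cluster certs
        CRLFile: /etc/ssl/crl.pem
    security:
      clusterAuthMode: x509                # use x.509 membership auth internally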
[12:32:36] <deever> does dump'n'restore make directly upgrading from 2.6 to 3.2 possible? there isn't much data in the db...
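(On the dump-and-restore route: the binary upgrade path requires stepping through 3.0, but dumping under 2.6 and restoring into a fresh 3.2 instance sidesteps that for a small data set. A hedged sketch with placeholder hosts and paths; test that users/roles come across as expected:)

    # dump everything from the old 2.6 instance
    mongodump --host old-host.example.com --port 27017 --out /backup/dump26
    # restore into a clean 3.2 instance
    mongorestore --host new-host.example.com --port 27017 /backup/dump26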
[15:44:37] <mautematico> Do you guys have any pointers as to how would I be able to deploy a replicaSet for MongoDB using Docker and ECS, each replica running on a different instance?
[16:04:12] <kurushiyama> mautematico So what is the deal here? Deploy them, mount the conf files, put the same replset name in each node, connect to one instance, initialize the replset and there you go. Albeit, I have to admit that imho dockerizing MongoDB, unless you need it for compose and/or cloud.d.c does not make much sense.
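(A minimal sketch of those last two steps, assuming a set named rs0 and three placeholder hostnames:)

    // run once, connected to any one of the members
    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "node1.example.com:27017" },
        { _id: 1, host: "node2.example.com:27017" },
        { _id: 2, host: "node3.example.com:27017" }
      ]
    })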
[16:05:43] <mautematico> Thanks, Kurushiyama. We want to automate the whole process, from conf and deployment to scaling.
[16:07:01] <mautematico> Currently I am stuck on the service discovery. AFAIK, each mongo needs to know other mongos' address, and we don't have that info in advance as it's automatically assigned by ECS
[16:07:47] <mautematico> We are looking at this https://github.com/tobilg/docker-mongodb-marathon/blob/master/mongodb-configurator/routes/all.js#L201
[16:08:45] <mautematico> tobilg seems able to create/destroy/recreate mongos as desired; he's using Marathon and its APIs.
[16:08:53] <kurushiyama> mautematico Seriously, we are talking of your data. If you have to ask how to do it, it is probably a bad idea to do it in the first place. You are adding at least one layer of complexity. And most likely, if your dimensions are correct (aka you are at the upper end of the "most bang for the buck" range), dynamically adding shards is a bad idea.
[16:10:12] <kurushiyama> mautematico And please get the terminology right. A "mongos" is a query router for sharded clusters.
[16:10:14] <mautematico> Oh, it won't be such a critical replica set (for now). We are actually deploying a Graylog cluster.
[16:11:05] <mautematico> I am still RTFMing; for sure I'll make some mistakes, sorry in advance.
[16:11:13] <kurushiyama> mautematico Like I said: Aim for proper sizing, and scaling is basically as easy as I have described it, plus adding the new replset as a shard.
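(The "adding the new replset as a shard" step, run against a mongos, looks roughly like this; the set name and host are placeholders:)

    // the seed string names the new replica set and any one of its members
    sh.addShard("rs1/node4.example.com:27017")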
[16:14:08] <mautematico> "dynamically adding shards is a bad idea." I get it. I'm surely explaining myself badly.
[16:14:46] <mautematico> We're thinking about the scenario when a mongo node dies.
[16:15:12] <mautematico> An election will take place and a new Primary will be elected.
[16:16:25] <mautematico> We'd like to automate the process that redeploys the dead node.
[16:17:13] <mautematico> Do you have any last advice, kurushiyama? Thank you in advance. I'll keep RTFMing.
[16:22:52] <kurushiyama> mautematico My advice: don't. You do not seem to fully understand the technologies involved, and important data or not, it is easy to screw things up.
[16:38:41] <geaaru> hi, I'm trying to use the mongo command line shell (v3.2.9) to connect to a sharded cluster with authentication roles. I'm trying to pass the password on the command line (as described in --help), but it seems the password is used as a script name (when given as the last option) or as a database name (when given as the first option).
[16:39:23] <geaaru> is there a known issue about this, or how can I pass the password as an argument to avoid the password prompt? thanks in advance
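(What usually bites people here is argument order: the shell parses a bare trailing argument as the database address or a script file. Something like this, with placeholder host and credentials, avoids the prompt:)

    # keep all options before the database address (the final "admin" here);
    # give the password as an explicit argument to --password
    mongo --host mongos1.example.com --port 27017 \
          -u admin --password 's3cret' --authenticationDatabase admin admin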
[18:57:19] <rhqq> greetings, I have nodes in [PRIMARY, STARTUP2, ARBITER] state. can I upgrade the arbiter node and not have problems with quorum?
[19:34:05] <joren> Hey, I'm seeing something a little weird in mongo cloud. I've got a couple replica sets that are showing up as if they have members from two different sets, if that makes sense
[19:34:37] <joren> the members still show up in their proper sets too and everything seems to function properly.
[19:35:24] <joren> the sets showing members from the wrong sets are also missing one of their "real" members (only in the cloud, though)
[19:38:09] <joren> Just for clarity: https://goo.gl/photos/n4skh3feChXPdjVN6 prod-mongo-5-0 is not a member of rs12.
[20:46:42] <teprrr> hmm. I have a sort of scan happening weekly, which emits a list of things. I'm wondering whether I'll hit problems if I push all results as <timestamp, result> into the same collection in the long run?
[20:47:22] <teprrr> I'd like to be able to query result sets per timestamp, and creating one collection per week doesn't sound so nice..
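(A single collection with an indexed timestamp generally holds up fine for this; a minimal sketch using a hypothetical scan_results collection:)

    // one document per result, tagged with its scan's timestamp
    db.scan_results.createIndex({ ts: 1 })
    db.scan_results.insert({ ts: ISODate("2016-11-09T00:00:00Z"), result: "..." })
    // pull back a single week's result set via the index
    db.scan_results.find({ ts: ISODate("2016-11-09T00:00:00Z") })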
[21:03:02] <jeremyquinton> I have a MongoDB-related question and wonder if anyone can help. I'm building a system that scrapes HTML. Initially the HTML was stored on disk, which worked well, but now there are gigabytes of files. I had a look at MongoDB to store the HTML, as it's essentially plain text, and thought the WiredTiger storage engine with 'block_compressor=zlib' might be a solution, since plain text compresses well. I created a test collection and inserted 60MB
[21:03:02] <jeremyquinton> worth of HTML as a test, but per a simple du -sh on /var/lib/mongodb it grew by about 300MB. I'm looking for a way to store HTML in MongoDB and reduce the amount of disk space I'm using compared to flat files.
[21:04:41] <jeremyquinton> I guess it feels a bit hacky to store HTML files in a database, but if it can compress them at storage time that would be great, as well as making retrieval easier
[21:07:10] <jeremyquinton> there is also GridFS; is that the preferred way to do this?
[21:24:26] <teprrr> so hmh, why do you want to store it in a database instead of the filesystem in the first place? what types of queries do you want to run on them?
[21:24:35] <teprrr> I'm just wondering if mongodb is what you are looking for anyhow
[21:33:05] <jeremyquinton> firstly, a database is easier to move around than loose files, but I was also hoping mongo could do some compression
[21:34:42] <jeremyquinton> queries will be relatively straightforward, but I also convert the HTML to JSON, so mongo seemed like a good idea. I want the ability to reprocess the HTML if needed and output the JSON. Having it in mongo seemed like a viable solution.
[21:43:04] <meowzus> hey is there a way to enable vi mode editing in the mongo console?
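(The legacy mongo shell's built-in line editor has no vi mode of its own; a common workaround, assuming rlwrap is installed, is to run the shell under readline and enable vi mode there:)

    # ~/.inputrc
    set editing-mode vi

    # then launch the shell through rlwrap, which honors readline settings
    rlwrap mongo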
[21:44:00] <jeremyquinton> ok, I just noticed that I didn't have the correct storage engine options set for mongodb, so I changed it to blockCompressor: zlib. Added about 10MB of files and the database only grew by about 1MB, so I'm onto something
[21:48:28] <jeremyquinton> got it working; it's compressing 32MB down to about 5MB, which is super cool
[21:49:46] <teprrr> jeremyquinton: oh, that's cool. zlib is not enabled by default?
[21:49:52] <teprrr> or do you have a custom setup?
[21:50:14] <teprrr> anyway, depending on _how_ and _what_ you want to search from those files, there may be better alternatives
[21:52:45] <jeremyquinton> seems like zlib is not the default
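(For anyone following along: the default block compressor is snappy; zlib trades CPU for a better ratio. The setting lives under storage.wiredTiger in mongod.conf and only affects collections created after the change. A sketch:)

    storage:
      dbPath: /var/lib/mongodb
      engine: wiredTiger
      wiredTiger:
        collectionConfig:
          blockCompressor: zlib   # default is snappy; zlib compresses harder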
[21:55:52] <jeremyquinton> the pattern I need is: pull the HTML out, reprocess it to get JSON. Pulling from disk is slow but also takes up space. I need the ability to compress the files too.
[22:12:53] <atbe> I have a database running locally and I wanted to route the dbpath to a new directory because I am moving the database to a new hard drive
[22:13:15] <atbe> what is the safest way to edit the dbpath on the localhost db?
[23:35:37] <atbe> "couldn't open file pre-alloc" is an issue I am having when starting the db from the new drive
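(That error after a move is usually a permissions or incomplete-copy problem; a hedged sketch of the safe sequence, assuming a Linux package install running as the mongodb user, with placeholder paths:)

    # stop mongod cleanly before touching the files
    sudo service mongod stop
    # copy the data directory preserving ownership and permissions
    sudo rsync -a /var/lib/mongodb/ /mnt/newdrive/mongodb/
    sudo chown -R mongodb:mongodb /mnt/newdrive/mongodb
    # point dbPath at the new directory in /etc/mongod.conf, then restart
    sudo service mongod start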