[08:03:10] <kurushiyama> dimaj Not that I am aware of. Why do you need it that way?
[08:04:33] <dimaj> Thanks for the reply kurushiyama! at my work, we have several lab networks and a production network. One lab network has access to production, but the other does not.
[08:04:50] <dimaj> it so happened that I have to deploy my application on the network that does not have access
[08:05:19] <dimaj> so, I was hoping that I could trick my application into thinking that I'm talking to the mongodb on the lab network
[08:05:39] <dimaj> but all traffic will be forwarded to the mongodb on the production network
[08:16:48] <kurushiyama> Derick Well, on the other hand it is not.
[08:17:16] <dimaj> I'm basically hoping that one node is going to delegate all of the heavy lifting to another node and then just return the results back to the application
[08:18:05] <dimaj> i know this is probably the weirdest usecase :)
[08:18:18] <Derick> dimaj: there is no such thing like that with MongoDB - unless you can get a mongos on that network? I suppose you can't install anything either?
[08:18:27] <kurushiyama> dimaj Well, if you can fire up additional instances...
[08:19:18] <dimaj> machine on network C is a Cloud Foundry... so, can't install... can only deploy to it... machine on network B, i have root privileges
[08:20:10] <kurushiyama> dimaj But your application is on C. Is that to be taken as layered?
[08:20:20] <dimaj> Derick, kurushiyama, hang on a sec... googling _mongos_
[08:20:26] <Derick> on network B, install a mongos (sharding router). On network C, install a mongod, and 3 mongo config instances. Configure your app on network C to use the mongos on network B. Configure the sharded environment through mongos on network B to talk to things on network A
[08:25:25] <dimaj> so, again, just to clarify... my final setup is going to look like this: Network A (mongodb); Network B - mongos and mongodb; Network C - node app that is talking to mongos
[08:25:44] <kurushiyama> Well, a bit like baling wire and spit, but it should do the trick
[08:25:51] <dimaj> Derick, that's fine... it's temporary, until my ops department has time to address my ticket
[08:26:26] <dimaj> at which time, my machine on Network B gets destroyed
[08:26:39] <kurushiyama> dimaj Correct setup. However, terminology is _really_ important. In B, it will be mongos and a config server.
[08:27:13] <dimaj> kurushiyama so, mongodb and config server are different things?
[08:27:22] <dimaj> as you can tell, i'm very green with this :)
[08:34:55] <kurushiyama> dimaj Just because of the way I learned it.
[08:35:50] <kurushiyama> dimaj There was a guy hammering it into my head: "ALWAYS use 3 config servers". But he was referring to a proper cluster setup. You have nothing to worry about.
[08:36:37] <dimaj> yeah, my application is an internal tool that won't see the light of day
[08:37:14] <dimaj> if, and when, it'll take off, we'll set up a proper cluster to make sure that it's fast and reliable and not hacky :D
[08:37:22] <kurushiyama> And even if. The worst case scenario would be that you get corrupted metadata. Which, with only a single shard, is pretty easy to overcome.
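A minimal sketch of the network-B hop described above. Hostnames, ports and paths are hypothetical, and the single-config-server layout shown here matches the 3.2-era tooling being discussed (3.4+ requires the config servers to be a replica set):

```shell
# On network B (hypothetical host cfgB): one config server plus a mongos router
mongod --configsvr --dbpath /data/configdb --port 27019
mongos --configdb cfgB.example.com:27019 --port 27017

# Through a mongo shell connected to that mongos, register the
# network-A mongod as the single shard:
#   sh.addShard("mongoA.example.com:27017")
```

The node app on network C then points at `hostB.example.com:27017` and talks to the mongos as if it were a plain mongod.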
[08:54:32] <Ange7> Can someone explain why I get a « duplicate key error » when I make an update with $inc and the option upsert = true?
[08:54:57] <Ange7> I just want to $inc my document, or insert it if it does not exist.
[08:58:34] <kurushiyama> Ange7 can you show some code? In general, this should work. a sample doc would not hurt, either.
[09:07:43] <ToMiles> Any reason that when I mongorestore a database into a freshly initiated replset of mongo servers that my db name got prefixed with "b=" ?
[09:08:10] <kurushiyama> Ange7 Most likely, your match does not match the already existing document with _id $key, hence mongod tries to insert a document, but then it finds that a document with $key exists and throws an error.
[09:08:28] <kurushiyama> ToMiles User error, most likely.
[09:15:37] <kurushiyama> ToMiles you might want to provide us with the command lines used to dump and restore.
[09:20:56] <Ange7> kurushiyama: I'm not sure I understand: it doesn't match, but the document exists?
[09:21:24] <kurushiyama> Ange7 You have a match that is beyond just the _id
[09:21:56] <kurushiyama> Ange7 See lines 5-9 of your code
[09:22:14] <kurushiyama> Most likely, no document matches that
[09:22:29] <kurushiyama> But there _is_ a document with the same id
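A sketch of that failure mode (collection name and values are made up): the filter fails to match the existing document, so the upsert takes the insert path and collides on `_id`:

```javascript
db.counters.insert({ _id: "page1", site: "A", views: 1 })

// The filter includes _id "page1" but also site: "B", which the existing
// document does not satisfy -> no match -> mongod tries to INSERT a new
// document with _id "page1" -> E11000 duplicate key error.
db.counters.update(
  { _id: "page1", site: "B" },
  { $inc: { views: 1 } },
  { upsert: true }
)
```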
[09:39:27] <kurushiyama> Ange7 You should _really_ get into the docs.
[09:40:11] <kurushiyama> Ange7 As you have been told, not for the first time and not only by me. Really, it does not help you if every problem gets spoon-fed to you. You need to learn how to fish, not how to beg for fish.
[09:44:42] <diegoaguilar> kurushiyama, do you think it would be too difficult to get automatic S3 backups on a custom server?
[09:45:07] <diegoaguilar> well, I'd like to keep the last 5 backups ...
[09:45:26] <kurushiyama> diegoaguilar Tbh, if you have to ask, most likely the answer is yes.
[10:08:25] <eje_> should use applySkipLimit for this
[10:08:36] <eje_> mongo is full of shitty surprises
[10:16:44] <kurushiyama> eje_ Well, count actually returns the number of _matched_ docs, iirc.
[10:18:40] <kurushiyama> eje_ So basically, it is correct. limit on the other hand, modifies the cursor. Which is what you want. So call the length method on the cursor, and Bob's your uncle.
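In the mongo shell the distinction looks roughly like this (`applySkipLimit` is the boolean argument to `count()`; the collection and query are placeholders):

```javascript
var q = db.items.find(query).skip(20).limit(10)

q.count()       // number of MATCHED docs; skip/limit are ignored
q.count(true)   // applySkipLimit: counts with skip/limit applied, so at most 10
q.size()        // same as count(true)
q.itcount()     // iterates the cursor and counts what it actually returns
```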
[10:34:45] <crazyphil> I was looking for a global way to let mongo do it itself, like I can do with Elasticsearch or Kafka (basically you can tell them use these X directories)
[10:44:18] <strepsils> Hey folks, I have a replica set upgrade question: going from 2.4 to 3.2 by adding a 3.2 node to the 2.4 replica set, then, once the data is replicated over, making the 3.2 node PRIMARY, removing the 2.4 nodes and adding more 3.2 nodes. This skips the 2.6 upgrade the docs recommend; are there any risks in doing so?
[10:44:21] <crazyphil> it was supposed to be a sharded setup; however, when I told the primary db to start replicating, it told me it couldn't because some dbs were local
[10:44:22] <kurushiyama> Which most likely means that you have less than 1TB worth of data. It is just that the datafiles do not shrink.
[10:44:49] <kurushiyama> crazyphil Uhm... that does not really make sense.
[10:44:54] <crazyphil> show dbs has the db size at 950
[10:46:44] <kurushiyama> strepsils Uhm, iirc you are supposed to update to 2.6 first, and not just for fun. Skipping it may render your data unusable, so I would not.
[10:47:25] <kurushiyama> crazyphil uhm. your data size is bigger than your files? We might have a misunderstanding here... ;)
[10:47:41] <strepsils> @kurushiyama the thing is, there's no data corruption… it seems to work
[10:48:28] <kurushiyama> strepsils Well, cool if _you_ want to bet your data on being lazy. I would not. Ever.
[10:50:00] <kurushiyama> crazyphil A sharded cluster may be composed of replsets or standalones (or both). First step is to start a replica set (if you want to), then do the sharding.
[10:51:49] <crazyphil> I'm pretty sure a replica set is there, as all 3 servers show "RS0" at the mongo prompt
[10:52:34] <kurushiyama> crazyphil Please pastebin rs.status() from the primary. Do not forget to anonymize the host names, if publicly available.
[10:54:13] <strepsils> @kurushiyama I don't want to play with data, it's just that after upgrading a few replica sets using this approach I realized we missed the 2.6 step. RTFM didn't happen when it was supposed to :(
[11:00:16] <kurushiyama> crazyphil Well, now you can do a sharding setup. However, you should be very, _very_, _VERY_ careful with choosing a shard key. It is one of the very few things you can really screw up with MongoDB, since it is _pretty_ hard to change.
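For completeness, the sharding steps at that point look roughly like this (replica set name, hostnames, database, collection and shard key are all hypothetical; the shard key in the last line is the hard-to-undo choice being warned about):

```javascript
// Run against the mongos:
sh.addShard("RS0/hostA1.example.com:27017")
sh.enableSharding("mydb")

// Choose the shard key carefully: it cannot easily be changed later.
sh.shardCollection("mydb.mycoll", { userId: 1 })
```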
[11:05:17] <avner_> Hi all, question re ReplicaSet URI setup, from driver side. docs say: "When connecting to a replica set it is important to give a seed list of at least two mongod instances.."
[11:05:18] <avner_> The "at least 2 are needed" part: is it so that during the first connection I get better HA, or are there runtime implications (because all servers are auto-discovered in any case)?
[11:06:19] <cheeser> if you give just one server the driver (at least java's) will only talk to that one server ever. and that one might not be primary at the moment.
[11:06:55] <cheeser> by giving it at least 2, it knows to discover the topology of your cluster and find the primary/mongos machines
[11:07:08] <kurushiyama> cheeser The replSet option does not force discovery?
[11:12:11] <cheeser> kurushiyama: i'll double check that with the java driver to be sure but i don't believe it works that way.
[11:12:53] <kurushiyama> cheeser Well, I am not too sure whether I just implied it would or whether it is documented that way in the uri docs.
[11:13:16] <avner_> So, using replicaSet, and just one server in the uri, discovery of all servers works. Do I need to worry because of the docs saying “at least 2”?
[11:13:20] <cheeser> it's worth investigating either way just to be sure
[11:13:29] <kephu> got a kinda tricky question: is there any way to get a result where the keys are one field's values, and those keys' matching values come from an aggregation's $push?
[11:13:33] <cheeser> avner_: it makes me curious, personally
[11:14:35] <avner_> @cheeser, I suspect the "at least two" is to get better HA at startup time, i.e. on the first connection, but once the replica set is discovered, it's meaningless.
[11:15:14] <kurushiyama> cheeser Well, actually it isn't too clear about that. It says that if you only set one server _and_ omit the replset option, the client creates a standalone connection. I have to admit that I would conclude that it creates a replSet aware connection (including discovery) if the replSet option is set.
[11:16:58] <kurushiyama> cheeser But that may be just me.
[13:13:04] <kurushiyama> cheeser well, do not know the java driver too well, so I was cautious ;) I was actually pretty astonished that it could be otherwise, as the principle of least surprise seems to be an important one for MongoDB, and the Java driver is supported...
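For reference, the two URI shapes under discussion (hosts, database and set name are placeholders):

```javascript
// One seed plus the replicaSet option: many drivers will still discover the
// full topology, but if host1 is down at connect time there is no fallback.
"mongodb://host1:27017/mydb?replicaSet=rs0"

// Two or more seeds, as the docs recommend: startup survives one dead seed,
// and discovery proceeds from whichever seed answers.
"mongodb://host1:27017,host2:27017/mydb?replicaSet=rs0"
```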
[14:06:52] <StephenLynx> even if it weren't indexed, you wouldn't notice on a collection with 20 or so documents
[14:06:58] <siddc> Hi, I am trying to troubleshoot a mongodb issue. We have one primary, 6 secondary (voting) and 4 secondary (non-voting) nodes. During replication the secondaries become unresponsive, causing mongodb to break the cluster, and the primary becomes secondary. When they reform the cluster, there is no primary, so they drop their DB and start the replication again, and this is stuck in an endless loop. How do we fix this? We are using 2.4.14
[14:50:43] <kurushiyama> pamp That sounds more like an Ubuntu problem than like a MongoDB thing, no?
[14:53:40] <kurushiyama> siddc If you do not have it, we are going to have a problem debugging it. Running an 11 member cluster without metrics is like doing an ultra-low altitude flight at supersonic speed with a mobility cane and a street map for navigation. ;)
[14:54:26] <siddc> kurushiyama: I found the MMS tool. Just looking for metrics :). The replication headroom on one node that I found is 1717s
[14:55:06] <siddc> I dont see a counter for Oplog Window :(
[15:01:54] <kurushiyama> siddc Hm... That should be sufficient headroom to do the initial sync, although depending on your setup, it might not be enough.
[15:03:17] <kurushiyama> siddc I take for granted that we are talking of a geodistributed replset?
[15:03:56] <siddc> kurushiyama: No, they are in the same datacenter
[15:04:17] <kurushiyama> o.O You are serious with redundancy, that is for sure.
[15:15:19] <kurushiyama> And even if that theory proves to be correct, it is not exactly easy to overcome that problem. My guess is that the primary is overburdened.
[15:16:13] <siddc> And to resolve that, I tried replicating with only 6 of the 10 secondaries but still same issue
[15:17:07] <kurushiyama> siddc Well, as said – pretty hard to impossible to find a solution "remotely"
[16:44:55] <Echo6> I have a foundation question about Mongo. As I understand it, I can have "nested" collections. I have a customer that has a hotel booking website. If they store the hotels in one collection, the customer information in another collection and the bookings in a third, and the bookings use the hotels and the customers, does a copy of the hotel and customer information get duplicated into the booking document?
[16:48:58] <kurushiyama> Echo6 a) No, you can not. You can have embedded documents, which are a double-edged sword: http://blog.mahlberg.io/blog/2015/11/05/data-modelling-for-mongodb/ b) You probably better use references, be they implicit or explicit (you have to decide that per use case). c) https://docs.mongodb.org/manual/applications/data-models/
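A sketch of the referencing approach from b) (collection and field names are made up): the booking stores `_id` references instead of copies of the hotel and customer documents:

```javascript
var hotelId    = db.hotels.findOne({ name: "Grand Hotel" })._id
var customerId = db.customers.findOne({ email: "jane@example.com" })._id

db.bookings.insert({
  hotel:    hotelId,     // manual reference into db.hotels
  customer: customerId,  // manual reference into db.customers
  from: ISODate("2016-06-01"),
  to:   ISODate("2016-06-05")
})

// Resolving a reference takes a second query:
db.hotels.findOne({ _id: db.bookings.findOne().hotel })
```

Nothing gets duplicated this way, at the cost of an extra round trip when the booking and the referenced documents are needed together.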
[16:50:38] <Echo6> I'm not saying that I want to do it that way. I'm checking to see if I understand how Mongo works as there is conflicting information on the internet about it.
[16:57:27] <quadHelix> Is there a way for me to get the last element of a nested array? I love mongo, yet my newbishness shows with this data structure :\ I have an array like: "status" : [
[17:00:18] <quadHelix> I can get the results if I know how many entries are in the array using: db.orders.find({'status.2.status':'Printing'})
[17:00:20] <kurushiyama> Echo6 Well, here is the most valuable piece of advice I can give you: Use the official docs _only_ until you feel confident enough to judge the quality of the advice given at other locations.
[17:00:55] <kurushiyama> Echo6 _only_ in the sense of _exclusively_
[17:01:41] <kurushiyama> quadHelix http://blog.mahlberg.io/blog/2015/11/05/data-modelling-for-mongodb/ Most likely, you overembedded.
[17:02:11] <quadHelix> I am realizing this now, I will read the doc - thanks.
[17:03:17] <kurushiyama> quadHelix That is not a doc, just one opinion. However, I am a stern supporter of the "Be as flat as possible" faction.
[17:06:30] <quadHelix> I have been trying to use slice -1 and elemMatch without success. Flatness is my future friend.
[17:08:09] <quadHelix> I knew I would never hit the BSON limit. It is a glorified log file contained in an order. I will change my model to be flatter.
[17:18:56] <FreeSpencer> is there a way I can make certain that a user can only see certain records where a field = some value? In MySQL you can kinda do it with views
[17:20:51] <StephenLynx> others with more experience might confirm it.
[17:20:58] <StephenLynx> but I have never heard of such a feature in mongo.
[17:53:38] <quadHelix> I have been able to search in a nested array using: db.orders.find({'status':{$all:[{$elemMatch:{status:"Submitted"}} ] }} ) Does anybody know how I could limit this to only return the last element?
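One way to do this without changing the model, assuming the field names from the snippets above: keep the match in the query filter and use a `$slice` projection so only the last element of the array is returned:

```javascript
// Matches orders where SOME status entry is "Submitted", then projects
// only the last element of the status array for each matched order.
db.orders.find(
  { status: { $elemMatch: { status: "Submitted" } } },
  { status: { $slice: -1 } }
)
```

Caveat: `$slice` trims the returned array independently of the filter, so the projected last element is the array's last entry, not necessarily the entry that matched.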
[20:34:19] <roonie> mongo 3.2 broke all my tests because I think I can no longer maintain as many connections to the db. anyone else encounter this problem?