[02:01:50] <GothAlice> cslcm: Consider how separate the two actions are. You are issuing a query, and instructing MongoDB to follow that up with an update operation across those matched documents. Upsert says, no match, no problem, we make our own matches 'round here. The document constructed as part of that operation does not qualify as a "result of a find operation".
[02:02:17] <GothAlice> Not unless you override the intended meaning of the return value by passing returnNewDocument=True to the operation.
[02:03:46] <GothAlice> Ref: https://docs.mongodb.com/manual/reference/method/db.collection.findOneAndUpdate/#update-document-with-upsert ← the example uses it, for good reason. See the final sentence of that section.
[02:27:44] <cslcm> GothAlice: my issue was that i was not returning the new document but the old one
[02:42:04] <cgi> GothAlice, even if there is an election - how does the application know about it?
[02:42:41] <cgi> So I've a python program that was talking to the primary's IP - now that failed. How does it know what the new primary is?
[02:45:53] <cgi> GothAlice, I'm asking from the actual application
[12:33:36] <cgi> when the python client connects to a replicaset - does it automatically do the failover?
[12:36:25] <GothAlice> cgi: Yes. https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring.rst I think is what I wanted to link you, but by the time I saw backlog, you had already gone.
[12:37:01] <GothAlice> Specifically note the “Monitoring” section.
[12:37:50] <GothAlice> Noting that while the connection pool will automate handling of failover and discovery/selection of the new primary for application nodes, individual queries will _not_ be automatically retried.
[12:38:10] <GothAlice> If you want that behaviour, wrap in exception handling catching the timeout or failure, and implement retrying yourself.
[12:41:02] <cgi> In MongoDB 3.6 it seems there is one automatic write retry it can do - if configured
[12:50:50] <cgi> GothAlice, Thanks. I will try to see if I can test an HA MongoDB
[12:51:12] <cgi> GothAlice, right now I am experimenting at AWS, and it seems the cost of running 3 machines is kind of painful since I only need a small database
[12:53:57] <GothAlice> Part of the reason database hosting companies that specialize in just hosting databases exist. I.e. you can get allocated space on a “shared cluster” at much lower cost than running your own cluster.
[12:54:19] <GothAlice> Then you have no database administration overhead—monitoring, updating, responding to crises, etc.—too.
[14:45:49] <Frank_45678908it> Might be a stupid question, but I couldn't find the answer online. Is there a better way of updating aggregation output than just updating the field directly using pipeline stages and throwing the output into insert_many ?
[14:49:44] <GothAlice> No need to pull every generated record into your application only to forward it back out to the database. Excessive roundtrips are excessive.
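A sketch of the server-side alternative (collection and field names are illustrative): end the pipeline with `$out`, and MongoDB writes the aggregation result straight into a target collection, so nothing crosses the wire to the application at all.

```python
def totals_pipeline(target="customer_totals"):
    """Aggregation pipeline that sums order amounts per customer and
    writes the result server-side via $out -- no client roundtrip."""
    return [
        {"$group": {"_id": "$customer", "total": {"$sum": "$amount"}}},
        {"$out": target},  # MongoDB 4.2+ also offers $merge for upsert-style output
    ]

# usage (assumes a live collection):
# db.orders.aggregate(totals_pipeline())
```

Note that `$out` replaces the target collection wholesale; on 4.2+ `$merge` can update or insert into an existing collection instead.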