PMXBOT Log file Viewer


#mongodb logs for Tuesday the 20th of August, 2019

[02:01:50] <GothAlice> cslcm: Consider how separate the two actions are. You are issuing a query, and instructing MongoDB to follow that up with an update operation across those matched documents. Upsert says, no match, no problem, we make our own matches 'round here. The document constructed as part of that operation does not qualify as a "result of a find operation".
[02:02:17] <GothAlice> Not unless you override the intended meaning of the return value by passing returnNewDocument=True to the operation.
[02:03:46] <GothAlice> Ref: https://docs.mongodb.com/manual/reference/method/db.collection.findOneAndUpdate/#update-document-with-upsert ← the example uses it, for good reason. See the final paragraph-sentence of this section.
[02:27:44] <cslcm> GothAlice: my issue was that i was not returning the new document but the old one
[02:27:50] <cslcm> thanks for your help :)
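The return-value semantics GothAlice describes can be sketched with a toy in-memory model (plain Python, not pymongo; the store, filter shape, and field names here are hypothetical): findOneAndUpdate hands back the matched document as it was *before* the update, and an upserted document only becomes visible when you explicitly ask for the new document.

```python
# Toy model of findOneAndUpdate's return-value semantics.
# Not pymongo: a plain in-memory list of dicts stands in for a collection.

def find_one_and_update(coll, flt, update, upsert=False, return_new=False):
    """Mimic MongoDB's findOneAndUpdate: return the matched document
    as it was BEFORE the update unless return_new is True."""
    for doc in coll:
        if all(doc.get(k) == v for k, v in flt.items()):
            before = dict(doc)          # snapshot of the old document
            doc.update(update["$set"])  # apply the update in place
            return doc if return_new else before
    if upsert:
        # No match: construct a new document from filter + update.
        new_doc = {**flt, **update["$set"]}
        coll.append(new_doc)
        # The upserted document is not "the result of a find", so the
        # default return is None; only return_new exposes it.
        return new_doc if return_new else None
    return None

coll = [{"name": "alice", "karma": 1}]

old = find_one_and_update(coll, {"name": "alice"}, {"$set": {"karma": 2}})
print(old)  # the pre-update document, karma still 1

new = find_one_and_update(coll, {"name": "bob"}, {"$set": {"karma": 5}},
                          upsert=True, return_new=True)
print(new)  # the upserted document, visible only because return_new=True
```

With pymongo the equivalent switch is `return_document=pymongo.ReturnDocument.AFTER` on `find_one_and_update` (or `returnNewDocument: true` in the shell), matching the linked docs.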
[02:39:33] <cgi> when a machine primary fails - how does the application go to the new primary?
[02:39:36] <cgi> dns seems to take time for this?
[02:41:03] <GothAlice> https://docs.mongodb.com/manual/core/replica-set-elections/ http://lmgtfy.com/?q=mongodb+primary+election
[02:42:04] <cgi> GothAlice, even if there is an election - how does the application know about it?
[02:42:41] <cgi> So I've a python program that was talking to the primary IP - now it failed. How does it know what the new primary is?
[02:45:53] <cgi> GothAlice, I'm asking from the actual application
[12:33:36] <cgi> when the python client connects to a replicaset - does it automatically do the failover?
[12:36:25] <GothAlice> cgi: Yes. https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring.rst I think is what I wanted to link you, but by the time I saw backlog, you had already gone.
[12:37:01] <GothAlice> Specifically note the “Monitoring” section.
[12:37:50] <GothAlice> Noting that while the connection pool will automate handling of failover and discovery/selection of the new primary for application nodes, individual queries will _not_ be automatically retried.
[12:38:10] <GothAlice> If you want that behaviour, wrap in exception handling catching the timeout or failure, and implement retrying yourself.
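The wrap-and-retry approach can be sketched generically: catch the driver's transient failure (with pymongo that would be `pymongo.errors.AutoReconnect` during a failover) and re-issue the operation a bounded number of times. A minimal sketch with the exception types parameterized, so nothing pymongo-specific is assumed:

```python
import time

def with_retry(op, retryable, attempts=3, delay=0.5):
    """Run op(); if it raises one of the `retryable` exception types
    (e.g. pymongo.errors.AutoReconnect while a new primary is being
    elected), wait briefly and retry, up to `attempts` tries total."""
    for attempt in range(1, attempts + 1):
        try:
            return op()
        except retryable:
            if attempt == attempts:
                raise          # out of tries: surface the failure
            time.sleep(delay)  # give the election time to settle

# Usage sketch (collection and query names are hypothetical):
#   doc = with_retry(lambda: db.users.find_one({"_id": uid}),
#                    retryable=(pymongo.errors.AutoReconnect,))
```

The bounded attempt count matters: retrying forever would hide a genuinely down cluster from the application.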
[12:40:48] <cgi> sounds good
[12:41:02] <cgi> In MongoDB 3.6 it seems there is one write retry that it can do automatically - if configured
[12:50:50] <cgi> GothAlice, Thanks. I will try to see if I can test a HA MongoDB
[12:51:12] <cgi> GothAlice, right now, i am experimenting at AWS, and it seems the cost of running 3 machines is kind of painful since I need a small database
[12:51:20] <cgi> 2 would have been ideal
[12:53:57] <GothAlice> That's part of the reason database hosting companies that specialize in just hosting databases exist. I.e. you can get allocated space on a “shared cluster” at much lower cost than running your own cluster.
[12:54:19] <GothAlice> Then you have no database administration overhead—monitoring, updating, responding to crises, etc.—too.
[14:45:49] <Frank_45678908it> Might be a stupid question, but I couldn't find the answer online. Is there a better way of updating aggregation output than just updating the field directly using pipeline stages and throwing the output into insert_many ?
[14:49:17] <GothAlice> Frank_45678908it: https://docs.mongodb.com/v3.2/reference/operator/aggregation/out/
[14:49:44] <GothAlice> No need to pull in every record generated into your application, only to forward it back out to the database. Excessive roundtrips are excessive.
[14:50:35] <kali> and expensive
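The `$out` approach keeps everything server-side: the final pipeline stage writes the aggregation results straight into a target collection, so no documents round-trip through the application. A small sketch of the pipeline shape (the collection and field names are hypothetical); the helper just assembles the stage list you would pass to `aggregate()`:

```python
def out_pipeline(stages, target):
    """Append a $out stage so aggregation results are materialized
    server-side into `target` instead of returned to the client."""
    return list(stages) + [{"$out": target}]

# Hypothetical example: total karma per user, written entirely on the
# server into a "karma_totals" collection.
pipeline = out_pipeline(
    [
        {"$group": {"_id": "$user", "total": {"$sum": "$karma"}}},
        {"$sort": {"total": -1}},
    ],
    "karma_totals",
)
# With pymongo this would run as: db.events.aggregate(pipeline)
```

Note `$out` replaces the target collection wholly; for incremental merging, later MongoDB versions (4.2+) added `$merge`, which was not yet an option for the v3.2 docs linked above.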
[14:52:17] <Frank_45678908it> GothAlice I'm going to take a look. I already found that article but thought it wasn't applicable, thanks