[06:13:08] <dcrouch> Attempting to upsert/replace a whole document where the ID matches. I'm a bit foggy on mongodb. Working with pymongo. I will have a long list of ids to likely replace daily. Should I check to see if they exist, and overwrite? I'm a bit lost.
[06:16:52] <dcrouch> That was actually simple. :) update/upsert set: dict
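What dcrouch describes is pymongo's `replace_one` with `upsert=True` (or `update_one` with `$set`, as in the message above). A minimal sketch of the replace-by-id variant — the collection, documents, and `_id` field here are placeholders, not anything from the log:

```python
def upsert_documents(collection, docs, id_field="_id"):
    """Replace each document whose id matches, inserting it when absent.

    `collection` is expected to expose pymongo's `replace_one`; the
    `id_field` name is an assumption for illustration.
    """
    for doc in docs:
        # upsert=True inserts the document if no existing one matches the filter
        collection.replace_one({id_field: doc[id_field]}, doc, upsert=True)
```

For a long daily list of ids, pymongo's `bulk_write` with a list of `ReplaceOne(..., upsert=True)` operations batches the round trips instead of issuing one command per document.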
[06:40:50] <laroi> Hey guys, do I need to add the IPs of the slave servers in my application code (nodejs) for making the slaves readable?
[09:39:54] <synthmeat> do you have anything else that can vote in there, besides primary and secondary?
[09:40:12] <synthmeat> if so, you should add at least one arbiter to that. but ideally one more secondary
[09:40:49] <laroi> there is only a primary and a secondary, that is all
[09:41:31] <laroi> if I set readPreference to primaryPreferred, will it redirect reads to the secondary in case the primary server is under load?
[09:41:41] <laroi> or does it wait for the primary to completely go down?
[09:42:21] <synthmeat> i think the heuristics take network into account, but i wouldn't count on it. i do all reads from 2 replicas and all writes to primary
[09:43:22] <laroi> other than the connection string changes, is there anything else required to make the read from slave working?
[09:43:24] <synthmeat> PRI+SEC+SEC+hidden non-voting for backups, with readPreference=secondaryPreferred covers a lot of use cases and is very resilient
[09:43:57] <synthmeat> laroi: i don't think anything else is required, no
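The "connection string changes" the two are discussing boil down to adding the replica set members and a `readPreference` option to the URI; the driver then routes eligible reads to secondaries on its own. A sketch of building such a URI — the hostnames and replica set name are placeholders:

```python
def replica_uri(hosts, replica_set, read_preference="secondaryPreferred"):
    """Build a MongoDB connection string that makes reads eligible for
    secondaries. Hosts and the replica set name are illustrative only."""
    return (
        "mongodb://" + ",".join(hosts)
        + f"/?replicaSet={replica_set}"
        + f"&readPreference={read_preference}"
    )

# Pass the result to MongoClient(uri) (pymongo) or the nodejs driver;
# no other application-side change is needed for secondary reads.
uri = replica_uri(["db1.example.com:27017", "db2.example.com:27017"], "rs0")
```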
[09:44:28] <laroi> also, is there any consistency problem if we read from secondary?
[09:44:43] <laroi> the time taken to replicate from pri to sec?
[09:45:21] <synthmeat> of course there are. but you have transactional sessions if you need consistency, or you can hack it out with majority write and majority read (which won't work with just PRI+SEC)
[09:47:16] <laroi> in that case I will try to set up another secondary
[09:48:21] <synthmeat> if it's not helped with replication, you should probably inquire into your own code for query optimizations, and into caching efficiently, if possible
[09:49:56] <synthmeat> in my own use-case where 64 core couldn't cut it, 3x16 core cut it just fine
[09:52:17] <laroi> so you distributed load across three 16 core machines (pri+sec+sec)?
[11:18:44] <ronaldl93> Hey guys. Random question. I got a whole aggregation pipeline written. Is there a way to exclude (like completely ignore) a document when it's got a specific value?
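The usual answer to ronaldl93's question is a `$match` stage with `$ne` placed at the front of the pipeline, so the server drops those documents before the later stages ever see them. A sketch — the field name, value, and example pipeline stages are placeholders:

```python
def exclude_value(pipeline, field, value):
    """Prepend a $match stage that ignores documents where `field`
    equals `value`. Putting $match first lets the server filter
    before the rest of the pipeline runs."""
    return [{"$match": {field: {"$ne": value}}}] + list(pipeline)

# e.g. run the existing pipeline while ignoring archived documents
pipeline = exclude_value(
    [{"$group": {"_id": "$category", "n": {"$sum": 1}}}],
    "status", "archived",
)
```

Note that `$ne` also matches documents where the field is absent, which is usually what "ignore ones with this specific value" intends.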
[14:06:28] <livingbeef> How would I manually remove database(s)? mongod (3.6) refuses to start and complains that the db wasn't upgraded from 3.4. I don't need the databases, but I can't drop them from the mongo shell, bc mongod won't start.
[14:11:28] <Derick> you should be able to remove the files from the dbPath directory — but, make a backup (if you need one)
[14:17:26] <livingbeef> As in I can just remove everything in the folder?
[14:18:20] <Derick> if you have no need for the data, yes.
[19:18:20] <trojanski> hey guys, sorry for going off-topic, quick question: where do you keep your backups in terms of storage - local storage (nfs, ftp, http) or any public storage like S3 or somewhere else? thnx