PMXBOT Log file Viewer


#mongodb logs for Monday the 21st of January, 2019

[06:13:08] <dcrouch> Attempting to upsert/replace whole document where ID matches. I'm a bit foggy on mongodb. Working with pymongo. I will have a long list of ids to likely replace daily. Should I check to see if they exist, and overwrite. I'm a bit lost.
[06:16:52] <dcrouch> That was actually simple. :) update/upsert set: dict
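dcrouch's one-line answer corresponds to pymongo's `replace_one` with `upsert=True`: there is no need to check whether each id exists first, because an upsert inserts when no document matches the filter and replaces it when one does. A minimal sketch (collection and field names are hypothetical):

```python
# Sketch: daily replace-or-insert of whole documents keyed by _id,
# in the pymongo style dcrouch describes. Names are made up.

def id_filter(doc):
    """Filter matching the document to replace by its _id."""
    return {"_id": doc["_id"]}

def upsert_all(collection, docs):
    """Replace each document whose _id matches; insert it if absent."""
    for doc in docs:
        collection.replace_one(id_filter(doc), doc, upsert=True)
```

For a long daily list, pymongo's `bulk_write` with `ReplaceOne(..., upsert=True)` operations would avoid one round-trip per document, but the per-document loop above is the simplest form of the same idea.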
[06:40:50] <laroi> Hey guys, Do i need to add the ips of slave servers in my application code (nodejs) for making the slaves readable?
[06:41:46] <laroi> I'm looking at this
[06:41:47] <laroi> http://mongodb.github.io/node-mongodb-native/driver-articles/mongoclient.html?#read-preference
[09:31:01] <synthmeat> laroi: you'd need to add them all, since which member is primary and which is secondary is a dynamic thing (if set up properly)
[09:31:31] <synthmeat> example connection string "mongodb://127.0.0.1:27001,127.0.0.1:27002,127.0.0.1:27003/dbName?replicaSet=someName&readPreference=secondaryPreferred"
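The seed list in synthmeat's example string can be assembled programmatically. A small sketch (Python for illustration, though laroi's app is Node.js; hosts and names are from the example above):

```python
# Sketch: build a replica-set connection string of the shape
# synthmeat shows, from a seed list of member hosts.

def replica_set_uri(hosts, db, replica_set,
                    read_preference="secondaryPreferred"):
    """Join member hosts into a mongodb:// URI with replica-set options."""
    return (
        "mongodb://" + ",".join(hosts) + "/" + db
        + "?replicaSet=" + replica_set
        + "&readPreference=" + read_preference
    )
```

Drivers discover the full topology from any reachable seed, but listing all members makes the initial connection resilient to any single member being down.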
[09:32:51] <laroi> synthmeat >> you'd need to add them all
[09:33:00] <laroi> that's what I thought too
[09:33:28] <laroi> slave is set up in the server, and the replication is working fine
[09:33:43] <laroi> I just want the slave also to be read from
[09:34:58] <synthmeat> that's what the readPreference=secondaryPreferred is there for, yes
[09:36:34] <laroi> Okay, my question is, is changing the code (connection string) on the app server really required?
[09:37:49] <synthmeat> yes, to my knowledge
[09:38:34] <laroi> Okay, let me try that. Currently the slave server is idle and master is under tremendous load
[09:39:07] <synthmeat> you have just one or more secondaries?
[09:39:29] <laroi> just one secondary
[09:39:54] <synthmeat> do you have anything else that can vote in there, besides primary and secondary?
[09:40:12] <synthmeat> if so, you should add at least one arbiter to that. but ideally one more secondary
[09:40:49] <laroi> there is only a primary and a secondary, that is all
[09:41:31] <laroi> if I set readPreference as primaryPreferred, will it redirect reads to the secondary in case the primary server is under load?
[09:41:41] <laroi> or does it wait for the primary to completely go down?
[09:42:21] <synthmeat> i think the heuristics take network into account, but i wouldn't count on it. i do all reads from 2 replicas and all writes to primary
[09:43:00] <laroi> Okay
[09:43:22] <laroi> other than the connection string changes, is there anything else required to make reads from the slave work?
[09:43:24] <synthmeat> PRI+SEC+SEC+hidden non-voting for backups, with readPreference=secondaryPreferred covers a lot of use cases and is very resilient
[09:43:57] <synthmeat> laroi: i don't think anything else is required, no
[09:44:06] <laroi> okay
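synthmeat's PRI+SEC+SEC+hidden layout corresponds to a replica-set member list shaped roughly like the following (hostnames are placeholders; in the mongo shell this would be the `members` array passed to `rs.initiate()` or edited via `rs.reconfig()`):

```python
# Sketch of the member list for synthmeat's suggested topology:
# primary-eligible node, two secondaries, and a hidden non-voting
# member reserved for backups. Hostnames are made up.

members = [
    {"_id": 0, "host": "db0.example.net:27017"},   # primary-eligible
    {"_id": 1, "host": "db1.example.net:27017"},   # secondary
    {"_id": 2, "host": "db2.example.net:27017"},   # secondary
    {"_id": 3, "host": "db3.example.net:27017",    # hidden backup node
     "hidden": True, "priority": 0, "votes": 0},
]
```

A hidden member must have `priority: 0` so it can never be elected; setting `votes: 0` keeps the voting set at the three data-serving members.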
[09:44:28] <laroi> also, is there any consistency problem if we read from secondary?
[09:44:43] <laroi> the time taken to replicate from pri to sec?
[09:45:21] <synthmeat> of course there are. but you have transactional sessions if you need consistency, or you can hack it out with majority write and majority read (which won't work with just PRI+SEC)
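To illustrate the consistency problem laroi is asking about, here is a toy in-memory simulation of replication lag — not MongoDB code, just the failure mode: a write acknowledged on the primary is not yet visible on a lagging secondary.

```python
# Toy model of replication lag: the secondary only sees a write
# after it has applied the primary's oplog entries.

class Primary:
    def __init__(self):
        self.data = {}
        self.oplog = []        # ordered log of applied writes

    def write(self, key, value):
        self.data[key] = value
        self.oplog.append((key, value))

class Secondary:
    def __init__(self):
        self.data = {}
        self.applied = 0       # how far into the oplog we've replayed

    def replicate(self, primary):
        for key, value in primary.oplog[self.applied:]:
            self.data[key] = value
        self.applied = len(primary.oplog)

pri, sec = Primary(), Secondary()
pri.write("balance", 100)
stale = sec.data.get("balance")   # secondary read before replication
sec.replicate(pri)
fresh = sec.data.get("balance")   # secondary read after replication
```

The mitigations synthmeat names map onto this picture: causally consistent sessions make the read wait until the secondary has caught up to your write, and majority write/read concerns only help once there are enough members for a majority beyond a bare PRI+SEC pair.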
[09:46:00] <laroi> Okay
[09:46:01] <synthmeat> it can create bugs in your code, yes
[09:46:22] <laroi> is there any other (easy) way to share the load across machines?
[09:46:50] <synthmeat> i don't think there's anything easier than this
[09:46:55] <laroi> OKay
[09:47:16] <laroi> in that case I will try to set up another secondary
[09:48:21] <synthmeat> if replication doesn't help, you should probably look into your own code for query optimizations, and into caching efficiently, if possible
[09:49:56] <synthmeat> in my own use-case where 64 core couldn't cut it, 3x16 core cut it just fine
[09:52:17] <laroi> so you distributed the load across three 16-core (pri+sec+sec) machines?
[09:52:25] <synthmeat> yeah
[09:52:34] <laroi> what are the caching options available?
[09:53:15] <synthmeat> depends on your case
[09:53:40] <laroi> I think my use case is similar to yours
[10:01:41] <synthmeat> dunno. i memoize queries to mongo at various intervals. i keep some of the cache in redis too (fast-changing stuff, like <1s)
[10:10:55] <laroi> Okay
[10:11:00] <laroi> Let me see what I can do
[10:11:04] <laroi> Thanks
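synthmeat's "memoize queries at various intervals" can be sketched as a small TTL cache placed in front of the query function; the keys, intervals, and the query callable are hypothetical:

```python
import time

# Sketch: memoize query results for a short interval so repeated
# identical queries skip the database entirely. The same shape works
# whether the backing store is an in-process dict (as here) or redis
# with an expiry, as synthmeat mentions for fast-changing data.

class TTLCache:
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}       # key -> (stored_at, value)

    def get_or_compute(self, key, compute):
        """Return the cached value if fresh, else recompute and store it."""
        now = self.clock()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]
        value = compute()
        self._store[key] = (now, value)
        return value
```

Usage would look like `cache.get_or_compute("top_posts", lambda: list(coll.find(...)))`, where `coll` is a pymongo collection; only the first call inside each TTL window hits mongo.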
[11:18:44] <ronaldl93> Hey guys. Random question. I've got a whole aggregation pipeline written. Is there a way to exclude (like completely ignore) a document when it's got a specific value?
[11:33:49] <Derick> $redact ?
[11:35:02] <Derick> no, maybe just a $match with a negative condition
[11:47:27] <ronaldl93> @Derick - so like not equal to?
[11:51:44] <Derick> yeah
[11:51:53] <Derick> but do that somewhere at the end, as that can't use an index
[11:53:14] <ronaldl93> @Derick. Thanks will try now!
[11:58:44] <ronaldl93> @Derick - Perfect, working! Thanks!
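Derick's suggestion — a `$match` with a negative condition — looks like this as a pipeline stage; the field and value names are made up, and per his note the negated `$match` goes near the end, since a `$ne` condition can't use an index:

```python
# Sketch: append a $match stage with $ne to drop documents (or groups)
# carrying a specific value, as Derick suggests. Names are hypothetical.

def exclude_value(pipeline, field, value):
    """Return the pipeline with a trailing $match that drops field == value."""
    return pipeline + [{"$match": {field: {"$ne": value}}}]

# Hypothetical pipeline: count documents per status, then drop one group.
pipeline = exclude_value(
    [{"$group": {"_id": "$status", "n": {"$sum": 1}}}],
    "_id", "ignored",
)
```

The resulting list would be passed to pymongo's `collection.aggregate(pipeline)`. By contrast, a *positive* `$match` should go first in a pipeline, where it can use indexes.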
[14:06:28] <livingbeef> How would I manually remove database(s)? mongod (3.6) refuses to start and complains that I didn't upgrade the db with 3.4. I don't need the databases, but I can't drop them from the mongo shell, because mongod won't start.
[14:11:28] <Derick> you should be able to remove the files from the dbPath directory — but, make a backup (if you need one)
[14:17:26] <livingbeef> As in I can just remove everything in the folder?
[14:18:20] <Derick> if you have no need for the data, yes.
[14:18:27] <livingbeef> k, thanks
[19:18:20] <trojanski> hey guys, sorry for the off-topic, quick question: where do you keep your backups in terms of storage - local storage (nfs, ftp, http), public storage like S3, or somewhere else? thnx
[19:20:04] <Derick> in another replicaset member
[19:20:32] <trojanski> what about offline backups Derick ?
[19:20:43] <Derick> Mine are tiny, so S3.
[19:22:17] <trojanski> thanks
[19:22:30] <trojanski> @here does anybody else want to help me? )
[20:26:01] <willwh> backups dumped to NFS storage, tape, and s3