
#mongodb logs for Monday the 18th of January, 2021

[01:18:27] <VectorX> hi, i need to create a dashboard (node/express/react) that will show visit/hit counts per site and durations, per min, hour, day, month etc. separately. trying to figure out an efficient schema for this. anyone seen a tut/example for something like this?
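[Editor's note: the usual answer to this kind of question is pre-aggregated counters: one document per site per time bucket, incremented with `$inc` on every hit, instead of aggregating raw events at read time. A minimal sketch in the mongo shell; the collection and field names here are illustrative, not from the discussion:]

```js
// one counter doc per site per hour; upsert creates it on the first hit,
// and $inc keeps subsequent writes cheap (no read-modify-write needed)
db.hitCounts.updateOne(
  { site: "example.com", period: "hour", ts: ISODate("2021-01-18T10:00:00Z") },
  { $inc: { hits: 1 } },
  { upsert: true }
);

// daily/monthly rollups are the same shape with a coarser `ts` bucket
db.hitCounts.updateOne(
  { site: "example.com", period: "day", ts: ISODate("2021-01-18T00:00:00Z") },
  { $inc: { hits: 1 } },
  { upsert: true }
);
```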
[10:15:59] <chovy> any idea why this would hang? db.getCollection('reads').findOne({ shortId: '72-GKAzCg9K'})
[10:16:11] <chovy> i have 65k docs. so it's not like there's that much data
[10:24:20] <synthmeat> chovy: you have shortId as index?
[10:25:17] <synthmeat> another thing to check, is `reads` capped collection?
[10:25:30] <chovy> capped?
[10:25:44] <chovy> synthmeat: i'm using mongoose so it sort of works. i don't know.
[10:25:45] <synthmeat> https://docs.mongodb.com/manual/core/capped-collections/
[10:25:59] <chovy> i have an index, but it is from mongoose
[10:26:38] <chovy> no i don't think its capped
[10:27:00] <chovy> the index that was created has "background: true"
[10:27:06] <chovy> i think it shouldn't be that
[10:27:11] <synthmeat> what does db.reads.getIndexes() tell ya?
[10:28:04] <chovy> http://sprunge.us/87Sqhu
[10:28:22] <chovy> i think it's trying to sort them
[10:28:23] <synthmeat> also, just to be sure, db.reads.isCapped()
[10:28:42] <chovy> false
[10:29:54] <chovy> they are sparse. only newer docs have shortId
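[Editor's note: for reference, the shell equivalent of the index mongoose created would look roughly like the sketch below. A sparse index simply omits docs that lack the field, which matches "only newer docs have shortId". The `background: true` seen in chovy's getIndexes() output is a legacy option: on MongoDB 4.2+ (which chovy's structured log output suggests) it is accepted but ignored, since all builds use the optimized hybrid process:]

```js
// sparse: documents without shortId are left out of the index entirely
db.reads.createIndex({ shortId: 1 }, { sparse: true });
```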
[10:30:31] <synthmeat> db.reads.find({ shortId: '72-GKAzCg9K' }).explain()
[10:31:43] <chovy> http://sprunge.us/Fveily
[10:33:04] <chovy> http://sprunge.us/z0r5YC
[10:33:11] <chovy> here's my entire getIndexes()
[10:35:21] <synthmeat> everything looks fine, tbh. might be mongoose or your code issue. so, just to confirm, from within mongo cli, `db.reads.findOne({ shortId: '72-GKAzCg9K' })` times out?
[10:35:30] <chovy> synthmeat: yes
[10:35:45] <chovy> in my docker logs i have an index query constantly running
[10:36:26] <chovy> briskreader-db | {"t":{"$date":"2021-01-18T10:36:07.583+00:00"},"s":"I", "c":"-", "id":51773, "ctx":"IndexBuildsCoordinatorMongod-6","msg":"progress meter","attr":{"name":"IndexBuild: scanning collection","done":27300,"total":92751,"percent":29}}
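[Editor's note: that log line is mongod reporting index build progress: it has scanned 27,300 of 92,751 documents (29%). In-progress builds can also be watched from the shell; this filter is adapted from the MongoDB currentOp documentation:]

```js
// list in-progress index builds: either the createIndexes command itself,
// or the background build thread, whose msg starts with "Index Build"
db.currentOp({
  $or: [
    { op: "command", "command.createIndexes": { $exists: true } },
    { op: "none", msg: /^Index Build/ }
  ]
});
```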
[10:37:14] <chovy> i suspect this might have something to do with it
[10:37:33] <synthmeat> yeah. it's stuck at 29% or is just super slow at indexing?
[10:37:38] <chovy> just slow
[10:37:46] <chovy> it goes up like 1% every 10 seconds
[10:38:47] <synthmeat> yeah, then that's it. for some reason it's super slow in docker. can't help ya with that, not using docker myself. sorry :(
[10:39:16] <lloydxmas> synthmeat, any xp with linux containers?
[10:39:35] <synthmeat> nope, not at all. i mean, i use docker every once in a while, but never for dbs
[10:40:38] <chovy> synthmeat: do you know how mongoose index: true works on schemas?
[10:40:44] <chovy> does that shit just run constantly?
[10:41:42] <synthmeat> it runs to completion the first time, then it's updated as docs are added/updated/deleted, like all indexes do
[10:43:59] <chovy> so if i add a doc, it re-runs on the entire collection?
[10:44:19] <synthmeat> no, that should be super fast, it's just gonna update the index, not run everything again
[10:44:28] <chovy> ok
[10:44:39] <chovy> cause i'm adding like 30k docs an hour
[10:44:53] <chovy> just want to make sure it's not constantly rebuilding the index on every insert
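[Editor's note: on the mongoose question above: `index: true` in a schema makes mongoose call `createIndex()` for that field once, when the model is first used; after the index exists, each insert only updates it incrementally. A sketch, assuming a schema roughly like chovy's (field and database names inferred from the conversation, not confirmed):]

```js
const mongoose = require("mongoose");

// `index: true` tells mongoose to ensure this index exists at startup;
// it does NOT rescan the collection on every insert
const readSchema = new mongoose.Schema({
  url: { type: String, index: true },
  shortId: { type: String, index: true, sparse: true }
});

// in production it's common to disable mongoose's automatic index builds
// and manage indexes by hand instead
mongoose.connect("mongodb://localhost/briskreader", { autoIndex: false });
```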
[10:45:39] <chovy> it's weird. findOne({ url }); works fine but findOne({ shortId }); doesn't
[10:45:42] <synthmeat> that's fine, i'm at ~20M insert/update/delete per hour :D
[10:46:12] <chovy> god damn
[10:46:29] <chovy> i'm ready to quit mongo. my shit falls apart with < 100k docs
[10:47:25] <synthmeat> works great for me. i'd advise against mongoose, it's really just an unnecessary layer between you and the db
[10:47:27] <chovy> if i restart docker does the indexing happen all over again?
[10:48:03] <synthmeat> you can probably set it up so it doesn't, but iirc general advice is not to keep your dbs in docker
[10:48:06] <chovy> yeah. i liked mongoose for readability
[10:48:22] <lloydxmas> i heard docker storage is nuked on quit
[10:48:22] <chovy> i have a mounted volume
[10:48:31] <chovy> not if you have a volume set up
[10:48:55] <chovy> i just map ./mongo_data:/var/db or whatever it is
[10:49:12] <chovy> that way it persists between restarts
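[Editor's note: the official mongo image keeps its data under `/data/db` (not `/var/db`), so a persistent mapping like the one chovy describes would look something like this docker-compose sketch:]

```yaml
services:
  db:
    image: mongo
    volumes:
      # host directory on the left, mongod's default dbPath on the right
      - ./mongo_data:/data/db
```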
[10:49:21] <chovy> maybe my indexes never finished building
[10:51:41] <synthmeat> yeah, maybe try leaving it until it completes once
[10:55:54] <chovy> yeah
[10:56:09] <chovy> i've been restarting docker because this query is too slow. thinking it was my code
[10:56:18] <chovy> so i guess it restarts the indexing every time
[10:56:37] <chovy> there's no reason find({ url }) should be faster than find({ shortId });
[10:56:47] <chovy> especially with only 100k docs
[11:05:03] <rendar> is the mongodb conf file in YAML?
[11:06:04] <synthmeat> yeah
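[Editor's note: right, mongod.conf uses a YAML hierarchy. A minimal example for reference:]

```yaml
# minimal mongod.conf
storage:
  dbPath: /data/db
net:
  port: 27017
  bindIp: 127.0.0.1
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true
```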
[11:29:22] <chovy> well indexing is done. and it still doesn't return
[11:31:10] <synthmeat> does db.reads.find({ shortId: '72-GKAzCg9K' }) run fine?
[11:35:31] <synthmeat> if not, how about db.reads.find({ shortId: '72-GKAzCg9K' }).noCursorTimeout()
[11:36:53] <chovy> synthmeat: nope
[11:36:56] <chovy> let me try that
[11:38:11] <chovy> synthmeat: just hangs. same as before
[19:12:13] <VectorX> hi, if i have a value like '2001-01-02T04:41:00.000+00:00' how can i match by year 2001
[19:37:12] <VectorX> if i project the date into year and month fields, i can match on those. that would be another option too
[20:45:11] <VectorX> went with a projection for the moment, not sure if an expression would be slower, but if anyone has any other suggestions, thanks
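[Editor's note: on VectorX's question: a `$expr`/`$year` match works but has to evaluate the expression per document, so it can't use an index. The usual advice is a plain range query on the date field, which can. Both sketches assume a hypothetical date field named `createdAt` and collection `visits`:]

```js
// index-friendly: a range scan over [2001-01-01, 2002-01-01)
db.visits.find({
  createdAt: { $gte: ISODate("2001-01-01"), $lt: ISODate("2002-01-01") }
});

// expression form: computes $year for every document, so no index use
db.visits.find({
  $expr: { $eq: [{ $year: "$createdAt" }, 2001] }
});
```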