PMXBOT Log file Viewer


#mongodb logs for Wednesday the 3rd of December, 2014

[00:13:39] <hicker> Hi Everyone :-). When I use model.create({user: mongoose.Types.ObjectId(req.body.userId)}) to create a new document, it generates a new ObjectID instead of storing the object id passed in the request :-(
[00:16:49] <joannac> hicker: there's no _id field in that?
[00:17:18] <hicker> No because I'm creating a new doc
[00:17:21] <joannac> or do you mean, there's a new ObjectId generated for the "user" field?
[00:17:58] <hicker> I'm passing an object id string. Yeah, I want to store a reference to the user doc
[00:18:59] <hicker> Storing just the object id string gives me a validation error, saying it needs to be cast to an ObjectID
[00:19:25] <hicker> But mongoose.Types.ObjectId(req.body.userId) seems to be creating a new one
[00:20:25] <joannac> are you sure req.body.userId exists
[00:20:29] <joannac> and is not null, etc?
[00:20:54] <hicker> I think so. I guess I should double check
[00:21:42] <hicker> Oh crap, you're right
[00:22:15] <hicker> Ok, that fixed it. I'm dumb, sorry
[00:22:25] <hicker> Thank you for helping me
[00:22:31] <joannac> np
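
The culprit above was an undefined req.body.userId: given no argument, mongoose.Types.ObjectId() mints a brand-new id. A minimal sketch of the pattern with a guard, assuming an Express-style handler and a hypothetical Post model whose user field is an ObjectId ref:

    var mongoose = require('mongoose');
    var Post = mongoose.model('Post'); // hypothetical model with a {user: ObjectId} field

    function createPost(req, res) {
        // Guard: an undefined/malformed userId would otherwise yield a fresh
        // ObjectId or a cast error instead of a reference to the existing user.
        if (!mongoose.Types.ObjectId.isValid(req.body.userId)) {
            return res.send(400, 'userId missing or malformed');
        }
        Post.create({ user: mongoose.Types.ObjectId(req.body.userId) }, function (err, doc) {
            if (err) return res.send(500, err);
            res.send(doc);
        });
    }
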
[02:36:25] <harttho> Is there a way to detect if a program is connecting directly to a replica set/shard rather than going through mongos?
[02:37:59] <Boomtime> not detect that i know of, but enabling auth will prevent it
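
A rough sketch of what "enabling auth" looks like on a 2.x sharded cluster: every mongod and mongos shares a keyFile, so nothing without credentials can talk to a shard member directly. Paths, ports, and set names here are illustrative:

    # generate a key and deploy the same file to every node in the cluster
    openssl rand -base64 741 > /etc/mongodb-keyfile
    chmod 600 /etc/mongodb-keyfile

    # keyFile implies auth on the shard members...
    mongod --shardsvr --replSet shardA --dbpath /data/shardA --keyFile /etc/mongodb-keyfile

    # ...and mongos needs the same key to join the cluster
    mongos --configdb cfg1:27019,cfg2:27019,cfg3:27019 --keyFile /etc/mongodb-keyfile
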
[02:46:37] <appledash> Since actually figuring out how to use mongo, I've been finding all sorts of applications for it: stuff I'd previously been using SQL or just a (very huge and slow) flatfile for
[02:47:34] <Boomtime> appledash: awesome, good to hear
[02:48:46] <appledash> Xe: I still don't like you for the record
[03:05:52] <harttho> thanks Boomtime
[03:07:04] <harttho> A follow-up question: we're running into an issue where one of our mongo shards thinks that the chunk size is 1K, when in reality it's much larger. Anyone encounter anything similar to this or know where to investigate? I can share a log momentarily
[03:08:04] <harttho> Tue Dec 2 17:10:34.352 [conn3406773] warning: chunk is larger than 1024 bytes because of key { _id: { date: new Date(1406851200000) } }
[03:08:46] <Boomtime> that's from a mongoD log?
[03:09:11] <harttho> Yes
[03:11:30] <Boomtime> you appear to have used _id as a shard key where _id is a document
[03:11:58] <Boomtime> i'm not sure what happens
[03:14:27] <harttho> We have other collections with the same ID scheme, (which we know is less than ideal), and we don't see that specific error
[03:14:44] <harttho> Just seems weird that it thinks the chunk size is 1024
[03:31:34] <blizzow> I have 10Gigabit ethernet cards on my routers and replica sets. I'm running a 7.1GB mongorestore from another host with a 10GbE card. The restore has 7130219438 records and it's taking upwards of an hour. When I look at mongostat on my router, it says netIn is peaking around 10m. Does anyone have an idea why my routers aren't taking data faster?
[03:32:24] <blizzow> Disk write on my replica set members seems pretty quiet too.
[03:33:17] <Boomtime> indexes?
[03:33:38] <Boomtime> your db might be busy doing other stuff, like building indexes
[03:33:55] <blizzow> Boomtime, I'm doing the restore with --noobjcheck --noIndexRestore
[03:34:31] <blizzow> How would I check if it's building indexes anyway?
[03:35:32] <Boomtime> log output
[03:36:37] <Boomtime> your situation sounds strange though, there must be a bottleneck somewhere - apparently it isn't the network or destination disk
[03:37:06] <Boomtime> check top on both ends, obviously check iotop too to ensure the disk isn't simply waiting the entire time
[03:37:30] <Boomtime> (because waiting will show up as no activity while actually being a bottleneck)
[03:38:27] <blizzow> mongod says it's using 421% CPU on an 8 core system.
[03:38:40] <blizzow> doesn't look like it's waiting either.
[03:39:00] <blizzow> Could the sharding be slowing it down?
[03:39:40] <blizzow> Never mind, that's kind of a dumb question.
[03:39:59] <blizzow> Sharding's supposed to speed it up (if I picked my key right).
[03:49:28] <Boomtime> "Could the sharding be slowing it down?" <- not a dumb question
[03:49:57] <Boomtime> mongos can totally be a bottleneck - consider that any information sent to it must be re-transmitted for storage
[03:50:47] <Boomtime> i.e. at a minimum you should double its network requirements
[03:51:08] <Boomtime> finally, it must also do some processing on the incoming data to determine their destination
[03:51:28] <Boomtime> mongos is not disk bound in any way but it can still bottleneck in other ways
[03:56:19] <blizzow> My routers are 4 CPU cores, 4GB RAM, 2GB swap, and 10GbE cards. iftop -i eth0 shows a rate of about 80Mb/s. nethogs reports something similar.
[03:57:38] <Boomtime> what does top say?
[03:58:15] <blizzow> pretty quiet, 100-180% CPU usage. Not waiting on anything I can see.
[03:58:55] <Boomtime> i actually suspect you are CPU bound, despite having spare cores - certain operations are serial when performed on the same database, and unfortunately importing/restoring works on one DB at a time
[04:00:18] <Boomtime> you'll see a lot of discussion around mongoimport/mongorestore not being very fast - they "get the job done" but hardly in an efficient way; the DB can be driven much harder than either of those tools manages
[04:01:38] <blizzow> ouch. Another router to run the mongorestore through? Or set one up on the host that's running the mongorestore?
[04:02:02] <Boomtime> definitely try setting up one on the mongorestore host
[04:02:50] <Boomtime> this is a common implementation, put the mongos close to the app (on the same host)
[04:04:59] <blizzow> I'll give that a shot.
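
A sketch of that setup, with illustrative hostnames: run a mongos locally on the restore host and point mongorestore at it over the loopback interface:

    # local mongos on the host doing the restore (cfg1-3 are the config servers)
    mongos --configdb cfg1:27019,cfg2:27019,cfg3:27019 --port 27017

    # restore through the local mongos rather than a remote router
    mongorestore --host localhost:27017 --noobjcheck --noIndexRestore /path/to/dump
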
[04:28:48] <CipherChen> hi all, i'm trying to figure out how rs initial sync works, since the primary only keeps a capped oplog and the older oplog entries have already been dropped.
[04:29:57] <blizzow> hrm, set up a router on the host where mongorestore is running. in iotop it seems to hover around a 2-4M/s read rate.
[04:37:11] <Boomtime> blizzow: yeah, restore has to get confirmation back from the server, the smaller your individual documents the worse that ratio will be
[04:37:14] <joannac> CipherChen: okay?
[04:42:27] <CipherChen> what?
[04:44:40] <Boomtime> you don't seem to have asked a question - do you have an issue?
[04:54:47] <CipherChen> we want to auto sync a primary from another primary, acting like: mongosync --from-source primary1 --to-source primary2
[04:55:17] <joannac> what's mongosync?
[04:56:41] <joannac> also, you want to put data on a new node, and have that new node be a primary for a different replica set?
[04:56:51] <joannac> (is that the goal?)
[04:56:52] <CipherChen> a tool i want to implement
[04:57:17] <CipherChen> yes
[04:57:34] <joannac> okay. so you have a question about that?
[04:58:03] <arussel> I'm moving data through mongoexport/import. When looking at the data through the mongo shell, what was before 1444259592914 became NumberLong("1444259592914"). Any reason for that?
[04:58:38] <joannac> http://docs.mongodb.org/manual/reference/program/mongoexport/
[04:58:45] <joannac> "Do not use mongoimport and mongoexport for full-scale production backups because they may not reliably capture data type information. "
[04:59:29] <CipherChen> since mongodb uses the oplog to keep its slaves fresh, how do i replay those oplogs at another rs?
[04:59:49] <joannac> CipherChen: you can't?
[05:00:18] <joannac> you just said you want to have a different replica set... so why should oplogs at replica set 1 be replayed at replica set 2?
[05:00:20] <arussel> joannac: I'm using it to extract production collections to my local machine to test some functions.
[05:00:48] <Boomtime> arussel: use mongodump/mongorestore
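
For pulling a production collection down to a local machine while keeping type fidelity, the dump/restore equivalent might look like this; host, db, and collection names are illustrative:

    # BSON dump preserves type information that JSON export/import loses
    mongodump --host prod.example.com --db mydb --collection events --out /tmp/dump

    # load it into the local mongod
    mongorestore --host localhost --db mydb /tmp/dump/mydb
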
[05:01:19] <CipherChen> joannac: i want to make sure that the two rs data are exactly the same, so i want to do oplog replay myself
[05:01:50] <joannac> CipherChen: why not just have another secondary?
[05:03:02] <CipherChen> because a secondary may go DOWN when it can't reach the primary for a long time
[05:03:35] <CipherChen> so we want to build multiple replica sets across different data centers.
[05:03:43] <joannac> if the secondary can't reach the primary, how will your solution help? how does it get the oplog then?
[05:05:33] <CipherChen> we hope to replay the oplog once the network recovers; we just don't want the secondary to go DOWN/RECOVERING...
[05:05:53] <joannac> I don't think you understood my question.
[05:06:09] <joannac> Your primary is on host A
[05:06:18] <joannac> Your "second node" is on host B
[05:06:28] <joannac> B cannot see A for a long time (your scenario)
[05:06:41] <joannac> how do you get the oplog from host A to host B if B cannot see A?
[05:06:42] <arussel> Boomtime, joannac: will s/export/dump/ ta
[05:08:25] <joannac> also, why do you anticipate long periods where your 2 servers can't see each other?
[05:08:38] <CipherChen> I mean a network failure: a long time later the network recovers, but B has already gone down.
[05:09:21] <Boomtime> CipherChen: you haven't explained how your solution expects to fix that
[05:09:46] <Boomtime> if the network is down then data does not transfer, no matter what clever software you write
[05:09:56] <CipherChen> build multiple rs centers, and do the mongo sync ourselves
[05:10:10] <Boomtime> that doesn't fix anything, the network is down
[05:10:33] <Boomtime> software cannot fix a hardware fault
[05:11:13] <CipherChen> what we want is for host B not to go DOWN/RECOVERING even when the network is down
[05:11:55] <CipherChen> we hope B stays readable all the time
[05:13:01] <Boomtime> i think i understand the problem you are trying to fix, but your solution is incredibly complicated considering it has a simple fix
[05:14:03] <Boomtime> how long a network outage do you need to be able to handle?
[05:15:17] <CipherChen> not sure, several hours/one day
[05:16:36] <CipherChen> we want to avoid the secondary going DOWN due to a long-broken connection with the primary.
[05:16:46] <joannac> How long does the oplog on the primary span?
[05:17:21] <joannac> pastebin the output of db.printReplicationInfo()
[05:17:49] <CipherChen> it's working fine now
[05:18:05] <CipherChen> but we've met this in the past
[05:18:23] <joannac> Yes, I know. Please paste the output of the command I asked for
[05:18:43] <CipherChen> configured oplog size: 4766.09609375MB
[05:18:44] <CipherChen> log length start to end: 23699secs (6.58hrs)
[05:18:44] <CipherChen> oplog first event time: Wed Dec 03 2014 06:42:23 GMT+0800 (CST)
[05:18:44] <CipherChen> oplog last event time: Wed Dec 03 2014 13:17:22 GMT+0800 (CST)
[05:18:44] <CipherChen> now: Wed Dec 03 2014 13:17:24 GMT+0800 (CST)
[05:19:13] <joannac> CipherChen: please use pastebin.com next time, so as not to paste multiple lines in chat
[05:19:36] <joannac> that's 4.5GB of oplog, which is ~ 6.58 hours of oplog operations
[05:19:51] <joannac> if you want to sustain a day, increase the oplog to about 18GB
[05:20:08] <joannac> (if your write load goes up, you'll need an even larger oplog)
[05:21:01] <CipherChen> you can't just keep making the oplog larger and larger, can you?
[05:21:07] <joannac> sure you can
[05:21:25] <joannac> http://docs.mongodb.org/manual/tutorial/change-oplog-size/
[05:21:28] <joannac> so some extent
[05:21:33] <joannac> to some extent*
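
That tutorial boils down to roughly this sequence on a 2.x member; a hedged outline rather than a drop-in script, using the 18GB figure from above:

    // 1. Restart the member standalone (without --replSet), e.g. on another port.
    // 2. In a mongo shell on that member, save the newest oplog entry:
    use local
    var last = db.oplog.rs.find().sort({ $natural: -1 }).limit(1).next()

    // 3. Drop the oplog and recreate it as a larger capped collection (~18GB):
    db.oplog.rs.drop()
    db.runCommand({ create: "oplog.rs", capped: true, size: 18 * 1024 * 1024 * 1024 })

    // 4. Reinsert the saved entry so replication knows where to resume:
    db.oplog.rs.save(last)

    // 5. Shut down and restart the member with its usual --replSet options.
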
[05:22:57] <CipherChen> my plan is to build multiple centers and handle the data consistency ourselves, is that possible?
[05:23:28] <joannac> yes, but I don't understand how that solves your problem
[05:23:29] <CipherChen> i'll try the enlarge oplog size later
[05:23:37] <joannac> server A cannot talk to server B
[05:23:43] <joannac> how are you going to solve that?
[05:24:05] <CipherChen> then just sync until they can communicate
[05:24:10] <joannac> sync what?
[05:24:40] <CipherChen> sync the leftover oplog from the main center
[05:24:53] <joannac> server A cannot talk to server B
[05:25:04] <joannac> how are you going to get the oplog from A to B?
[05:25:35] <CipherChen> when they can talk
[05:25:56] <joannac> by the time they can talk, you have the same problem as before
[05:26:05] <Boomtime> so you need an oplog that is long enough to cover the time the two machines are not in communication
[05:26:20] <joannac> The oplog on server A does not overlap with the last operation on server B
[05:26:44] <joannac> which means you need a bigger oplog, like we've been telling you
[05:27:53] <CipherChen> so i assume that mongodb uses image+oplog to keep data.
[05:28:11] <CipherChen> not sure that's right.
[05:28:37] <Boomtime> what is "image"?
[05:29:32] <CipherChen> a data set containing the older data; when we replay the oplog against the image, we get the newest db.
[05:29:50] <joannac> sure, that's mostly correct
[05:30:07] <CipherChen> that's how hdfs does it, so i figure since mongodb has an oplog, it probably has an image too
[05:30:54] <joannac> not exactly. but I'm not sure how this is relevant
[05:32:05] <CipherChen> if not, what happens when you rs.add() a node? the primary doesn't keep the whole oplog all the time
[05:32:16] <CipherChen> oplog is a capped collection
[05:32:49] <joannac> when you rs.add a node, the new node takes a copy of all data files
[05:33:11] <joannac> and then reads oplog entries from when it started taking a copy, until the current time
[05:34:42] <CipherChen> can we do the copy ourselves and do what rs.add does?
[05:35:32] <joannac> to achieve what? if your secondary goes into DOWN or RECOVERING? yes, that's the correct action to take
[05:36:25] <CipherChen> to build a tree-like mongodb cluster
[05:37:33] <joannac> CipherChen: we are going around in circles. I do not understand the use case you have and why it is not served by running a normal replica set across multiple servers or data centers
[05:38:56] <joannac> CipherChen: if you could explain specifically why a replica set does not work for your environment, that would be good
[05:46:45] <CipherChen> joannac: thanks, i'll try increasing the oplog size to solve the secondary DOWN problem first
[06:05:42] <wizzardx6> Hello
[06:06:14] <joannac> hi
[06:06:18] <joannac> did you have a question?
[06:06:30] <wizzardx6> yes
[06:06:58] <wizzardx6> i am trying to prepare for the c100dba..
[06:07:12] <wizzardx6> anybody have any pointers...
[06:07:26] <wizzardx6> i am enrolling in the mongodb class...
[06:08:12] <joannac> do the classes, play with mongodb, try and break your replica set / sharded cluster
[06:08:34] <wizzardx6> ok...
[06:08:52] <wizzardx6> have any of you taken the cert? is it difficult?
[06:23:17] <wizzardx6> hi
[06:23:23] <wizzardx6> sorry guys got disconnected...
[06:23:44] <wizzardx6> so has any of you done the cert?
[06:23:53] <wizzardx6> is it difficult?
[10:00:17] <zzz> Hi, is there any possibility to set up mongodb to use ids only from the [0-9] range, and not from [0-9a-f]?
[10:01:14] <kali> zzz: you can use any id you want, but you have to do it yourself
[10:01:53] <kali> zzz: http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/ for instance
[10:02:20] <zzz> kali: i meant from the db side, so if i insert a document, the generated id should contain only 0-9 characters automatically in my case.
[10:06:24] <kali> nope
[10:07:21] <zzz> kali: thanks, for the last case i'll use this tutorial.
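
The gist of that tutorial: a counters collection plus findAndModify hands out purely numeric ids atomically. Collection and sequence names here are illustrative:

    // atomically increment and return the next value in a named sequence
    function getNextSequence(name) {
        var ret = db.counters.findAndModify({
            query: { _id: name },
            update: { $inc: { seq: 1 } },
            new: true,
            upsert: true   // create the counter on first use
        });
        return ret.seq;
    }

    // documents then get a [0-9]-only _id instead of the default ObjectId
    db.users.insert({ _id: getNextSequence("userid"), name: "zzz" });
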
[10:59:38] <varadharajan> Hi, Is it possible to restrict particular user to log in from a particular IP when using a mongodb instance configured with Challenge/Response as the auth mechanism?
[14:30:20] <samgranger> Hey guys - let's say I have a collection of users, containing first and last names
[14:30:24] <samgranger> separated
[14:30:32] <samgranger> how can I query full names?
[14:31:06] <samgranger> I tried using a regex: http://pastebin.com/FptPcuUe
[14:31:37] <samgranger> but I can't get it fully functional, it only finds first names or last names, but if I type in John Doe, it doesn't work - any ideas?
[14:41:38] <GothAlice> samgranger: If you have both a first and a last name… couldn't you just query for them explicitly? Why the $or?
[14:41:58] <samgranger> Good one >_<
[14:42:02] <GothAlice> samgranger: User.find(firstName="Bob", lastName="Dole") — will find you only records whose names are "Bob Dole" in the literal sense.
[14:42:10] <GothAlice> samgranger: When in doubt, simplify. ;)
[14:42:37] <samgranger> Just need to think how best to split the first and last name in the query
[14:42:53] <samgranger> maybe splitting on space for now is sufficient
[14:43:01] <GothAlice> (Also, if you're using regexen or $where in MongoDB, you're probably coming at a problem sideways. Both have important negative impacts on your queries.)
[14:43:32] <GothAlice> samgranger: "Alice Bevan McGregor" — that's my name.
[14:43:48] <samgranger> Yeah it'd fail on that
[14:43:53] <GothAlice> samgranger: Bad logic is worse than no logic and simply presenting separate first and last name fields to end-users. ('Cause splitting will fail.)
[14:44:17] <samgranger> so probably best keeping first and last in one field?
[14:44:47] <GothAlice> If that's how you want to query it (always together, not sorted on last name, etc.) then yes.
[14:44:58] <GothAlice> It all depends on how you intend to use the data (query, display, order).
[14:45:53] <samgranger> Let's say i'd like to order on last name - what's best in that case? Having a full name, first name and last name field? Seems a little dirty?
[14:46:39] <GothAlice> Are you utterly stuck with presenting users a unified full name field of some kind?
[14:47:07] <GothAlice> I.e. why are you complicating it again? ;^P
[14:48:52] <GothAlice> If you need to sort on last name, then you need last name by itself. That means it would make sense to have the first name by itself. (The split is usually "family name" and "surname" — since one can have multiple of either as in my multiple family name example.) This would then dictate presenting split first/last name fields to users of your system for the purposes of data entry (to match the data structure) and querying.
[14:49:13] <samgranger> Well, I just want users to find other users, just by typing in their name, preferably still having the names split in mongodb. Just stuck on how best to query e.g. "Alice Bevan McGregor" if "Alice Bevan" is in the first name and "McGregor" is in the last name
[14:49:34] <samgranger> And I want to keep the option open so I can still sort
[14:49:46] <GothAlice> Except "Bevan McGregor" is a pair of family names, not surnames
[14:49:55] <samgranger> Ahh sorry yes
[14:50:02] <GothAlice> "Alice Zoë" would be my surnames. ;)
[14:52:08] <Derick> there is a good article on storing names somewhere...
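
A sketch of the split-field shape GothAlice describes, with illustrative field names: store the name parts separately, query them together, and sort on the last-name field:

    // separate fields support exact matching and sorting independently
    db.users.insert({ firstName: "Alice Zoë", lastName: "Bevan McGregor" })

    // both parts entered separately by the end user:
    db.users.find({ firstName: "Alice Zoë", lastName: "Bevan McGregor" })

    // an ordered listing by last name, then first:
    db.users.find().sort({ lastName: 1, firstName: 1 })
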
[15:21:53] <markotitel> ls
[15:22:09] <markotitel> ahh, sorry.
[15:23:27] <markotitel> I cannot find a config option with which I can log mongodb queries to a file. Is that actually possible, or do I need to read it from a db?
[15:31:00] <winem_> markotitel: I'm also not very familiar with mongodb, but db.setProfilingLevel(2) should do what you want
[15:31:09] <winem_> you can see the current one by db.getProfilingLeve
[15:31:13] <winem_> +l
[15:32:28] <markotitel> winem_, yes I have set that. I'm thinking I may not have slow queries at the moment because of the empty database, although slowms is set to 1ms
[15:39:31] <winem_> do you see your queries in system.getProfilingLevel() ?
[15:40:25] <winem_> sorry, db.system.profile.find()
[16:04:48] <markotitel> winem_, I see
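
The profiler workflow from the exchange above, in one place; the level and query are illustrative:

    // level 2 profiles every operation; level 1 only those slower than slowms
    db.setProfilingLevel(2)

    // confirm the current level
    db.getProfilingLevel()

    // profiled operations accumulate in the capped system.profile collection
    db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()
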
[16:28:03] <pulse00> hi all. i've run into a "no space left on device" error on a development mongo instance. is there a way i can just wipe all data manually and "reset" the db? db.dropDatabase() doesn't work anymore because "Can't take a write lock while out of disk space"
[16:31:48] <cofeineSunshine> pulse00: delete files
[16:31:56] <cofeineSunshine> /usr/var/lib/mongo/*
[16:31:59] <cofeineSunshine> or something
[16:32:32] <pulse00> will mongo restore them after a restart?
[16:32:36] <pulse00> i mean empty ones
[16:55:10] <pulse00> cofeineSunshine: thanks, deleting the files worked
[19:10:49] <ss_> so if i have a field called tags which is an array, how would i find the list of articles that contain a given tag
[19:10:53] <ss_> the most efficient way?
[19:11:23] <ss_> would i need to use $in?
[19:12:16] <ss_> or would it be like db.collection.find({tags: "tagname"})
[19:14:18] <ehershey> that should work
[19:14:53] <ehershey> You only have to use $in if you want to specify an array of values to filter by
[19:15:00] <ss_> ok so i need to get the most recent articles that has that tag.. my current query is db.collection.find({tags: "tagname"}).sort({ 'timestamp': -1 })
[19:15:01] <ss_> oh ok
[19:15:07] <ss_> does that seem about the right way of doing that?
[19:15:16] <ehershey> yeah
[19:15:19] <ss_> cool, thanks.
[19:15:25] <ehershey> no problem!
[19:16:09] <ss_> now i gotta figure out how to write it using the nodejs module
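
With the Node.js native driver of that era, the same query might look roughly like this; the connection string and names are illustrative:

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/blog', function (err, db) {
        if (err) throw err;
        db.collection('articles')
            .find({ tags: 'tagname' })   // matches docs whose tags array contains it
            .sort({ timestamp: -1 })     // most recent first
            .limit(10)
            .toArray(function (err, articles) {
                if (err) throw err;
                console.log(articles);
                db.close();
            });
    });
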
[19:20:15] <hahuang65> is there a way to mongodump everything except certain collections?
[19:26:57] <cheeser> hahuang65: not afaik. you can dump a single collection or all of them. but nothing in between
[19:44:07] <hahuang65> cheeser: okay, thanks :)
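
One workaround sketch: drive per-collection dumps from the shell, skipping an exclusion list. Names are illustrative, and each printed command still has to be run separately:

    // print one mongodump invocation per collection, minus the excluded ones
    var exclude = ['big_logs', 'tmp_cache'];
    db.getCollectionNames().forEach(function (name) {
        if (exclude.indexOf(name) === -1) {
            print('mongodump --db ' + db.getName() + ' --collection ' + name);
        }
    });
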
[19:51:53] <hahuang65> why would something report as "already sharded" when I try to shard it, yet it's not shown in sh.status()?
[20:02:19] <hahuang65> it keeps saying some of the collections are already sharded when I drop a database and recreate it
[20:04:56] <hahuang65> man, had to restart mongos, something was corrupted there
[20:10:16] <lfamorim__> Hello! How could I remove only N records with remove? For example, 2 records out of 3, or 10 out of 20.
[20:10:55] <lfamorim__> db.collection.remove({}, 2); remove 2 records?
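
It doesn't: in the 2.x shell the second argument to remove() is a justOne flag, not a count. A sketch of one workaround: collect N _ids first, then remove exactly those. N and the collection name are illustrative:

    // grab the _ids of the first 2 matching documents...
    var ids = db.collection.find({}, { _id: 1 }).limit(2).toArray().map(function (d) {
        return d._id;
    });

    // ...then remove only those documents
    db.collection.remove({ _id: { $in: ids } });
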
[20:18:13] <piklu> Should I use mongodb to store timeseries data?
[20:18:25] <piklu> I need to do some rigorous analysis based upon them
[20:40:18] <harttho> Any thoughts on this: dbname> db.collectioname.drop()
[20:40:19] <harttho> Wed Dec 3 12:32:02.233 drop failed: {
[20:40:27] <harttho> "code": 13331,
[20:40:27] <harttho> "ok": 0,
[20:40:27] <harttho> "errmsg": "exception: collection's metadata is undergoing changes. Please try again."
[20:40:27] <harttho> } at src/mongo/shell/collection.js:383
[20:42:04] <hahuang65> piklu: use something like hbase.
[20:42:08] <hahuang65> piklu: or cassandra
[20:49:22] <decompiled> strange issue, on a cluster's configuration server I can authenticate against the mongod running as the config server, but can't auth against the mongos running on the same host
[21:44:36] <GothAlice> harttho: If no-one answered your question, certain operations (like building an index in the background) will prevent other administrative operations from happening on the collection until they finish.
[21:51:37] <joannac> decompiled: same user exists on all config servers?
[21:52:48] <decompiled> joannac, they did at some point, but another admin screwed things up. I have a backup of the admin database, but only tried to restore it to 1 of the config servers. Should I restore it to all three?
[21:56:49] <joannac> yes
[21:57:16] <joannac> although, i hope you have a backup of a config server, just in case
[22:01:34] <decompiled> Good call. I've made backups of files I've changed, but will just do a general backup before anything else and try to restore the admin database on all 3
[22:01:37] <decompiled> thanks for the help
[22:23:16] <doug2> Is there a way to create a rs without having to use json?
[22:23:26] <doug2> can I run commands like rs.create() and rs.add() on each member?
[22:30:32] <joannac> doug2: um, sure, if you don't want anything fancy
[22:30:51] <joannac> start 3 mongod instances, connect to one in the mongo shell
[22:31:11] <joannac> rs.initiate(); rs.add("hostname1:port1"); rs.add("hostname2:port2")
[22:31:14] <joannac> and you're done
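
Putting joannac's recipe together, with one detail the shell steps assume: each mongod must be started with the same --replSet name. Ports and paths are illustrative:

    // each member is started along the lines of:
    //   mongod --replSet rs0 --dbpath /data/node1 --port 27017
    // then, from a mongo shell connected to one member:
    rs.initiate()               // this member becomes primary of a one-node set
    rs.add("hostname1:port1")   // grow the set one member at a time
    rs.add("hostname2:port2")
    rs.status()                 // verify all three members come up healthy
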
[22:38:12] <varesa> My mongod service has stopped starting, any ideas? http://i.imgur.com/lTVM1nQ.png http://i.imgur.com/1exoMjv.png
[22:38:52] <varesa> sorry for screenshots instead of paste/whatever, I was running those commands in a VM console so no easy text copying
[22:39:45] <joannac> where's the mongod logs?
[22:40:20] <varesa> joannac: well /var/log/mongod/ is empty
[22:40:40] <joannac> oh, why can't it access the PID file?
[22:41:08] <varesa> -rw-r--r--. 1 mongodb mongodb 6 3.12. 23:46 /var/run/mongodb/mongodb.pid
[22:42:10] <varesa> I tried removing the pidfile, it gets recreated
[22:43:20] <joannac> where's your data directory?
[22:44:44] <varesa> joannac: /var/lib/mongodb/ I guess?
[22:45:04] <varesa> I also tried setting SELinux to permissive, so it shouldn't be a wrong context/etc.
[22:45:11] <joannac> mongod --dbpath /var/lib/mongodb
[22:45:25] <joannac> just to isolate it to your startup script
[22:46:19] <varesa> joannac: goes from "MongoDB starting" to "Waiting for connections on port ..."
[22:46:34] <joannac> right, so it's working
[22:46:38] <joannac> also it looks like this http://stackoverflow.com/questions/18591664/mongod-2-4-not-running-fedora-19-freeze
[22:47:41] <joannac> note the difference in what the error message says, and the file you have
[22:47:47] <joannac> mongodb.pid vs mongod.pid
[22:52:23] <varesa> joannac: thanks for that link
[22:52:40] <varesa> I fixed it by creating a corrected unit file in /etc/systemd/system/
[22:56:53] <joannac> varesa: cool, glad it's fixed now :)
[22:57:46] <varesa> I just need to remember to remove that overriding file when it gets fixed in the repos and possibly gets some other changes :)
[23:23:00] <decompiled> @joannac your suggestion resolved the issue. thanks