[01:36:22] <markizano> getting this message "Insufficient oplog size: The oplog window must be at least 1 hours over the last 24 hours for all active replica set members. Please increase the oplog"
[01:36:34] <markizano> ^ but the oplog is like 10G, and the DB is only 35G on disk.
[01:36:50] <markizano> I can't increase the time window by increasing the size either.
[01:37:01] <markizano> so I must be missing an option Google/DuckDuckGo/Bing isn't showing me....
[01:38:15] <markizano> Can someone help me understand how to increase the oplog window size, please?
[01:45:32] <joannac> markizano: did you do a huge data load or something?
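A minimal sketch for anyone hitting this later: on MongoDB 3.6+ the oplog can be resized in place from the shell (older versions need the manual resize procedure from the docs); the 20480 MB figure below is only an example, pick whatever gives the window you need:

    // Run on each replica set member, secondaries first.
    // Size is in megabytes; 20480 MB (20 GB) is an illustrative value.
    use admin
    db.adminCommand({ replSetResizeOplog: 1, size: 20480 })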
[01:57:35] <zsoc> This is a dumb question but... if my data is modeled properly to an incoming object... is there any reason I can't just create a new document from the entire object, instead of just like {foo: obj.foo, bar: obj.bar, ...} etc.?
[01:59:11] <cheeser> sounds not so much dumb as underspecified.
[02:00:24] <zsoc> Well, I mean, if I validate that the object I have is what I'm expecting... is there some syntax to just save the entire object whole? Maybe I'm misunderstanding mongo syntax... you're creating a data object, so I guess I should just give it the entire object instead of the {...} notation of creating one on the fly with some of the object's properties set
[02:00:49] <cheeser> so I'm guessing... javascript is your language?
[02:00:56] <zsoc> That's my newness to js showing, I guess it makes sense now. Should it be ... deserialized before insertion?
[02:01:03] <zsoc> not my language, but the one I'm using lol
[02:01:13] <cheeser> I'm not a js guy, thankfully.
[02:02:03] <zsoc> Pretty sure I need to deserialize it if it's JSON right?
[02:02:50] <Boomtime> @zsoc: you can just specify the whole object
[02:03:08] <zsoc> Boomtime, ok... even if it's serialized as JSON?
[02:03:16] <Boomtime> but it means you update the whole document at the server - if the doc is small that won't matter, but as it grows that extra burden might be important
[02:03:30] <zsoc> Ah yeah, I'm referring strictly to inserting new documents
[02:03:31] <Boomtime> you're using Node or something right?
[02:05:29] <cheeser> you'd be surprised how many people trip over that one.
[02:05:33] <zsoc> However... if I had a dollar for every time I had to stringify something just to re-parse it to get something to like it...
[02:06:06] <zsoc> I feel JSON is different from a javascript object.
[02:06:22] <Boomtime> that will occur if the object no longer has a sensible JSON view; i.e. if you add a method member it can't be represented as strict JSON anymore
[02:06:32] <zsoc> Like for instance... JSON does the whole {"foo":"bar"} thing but in js it's just {foo: "bar"}
[02:07:12] <Boomtime> right, JSON is forwards compatible, a JS object is not compatible with JSON in many ways
[02:07:32] <Boomtime> JSON does not support methods of any kind
[02:08:04] <zsoc> So I guess then... what is mongo expecting? Assuming I'm using a nodejs driver... I assume it's expecting a javascript object and not JSON
[02:09:01] <Boomtime> no, it would prefer JSON - but it will handle anything that can trivially be recognized as JSON
[02:09:22] <Boomtime> indeed, if you want to be strict, then be strict, and give it JSON only
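What the exchange boils down to, as a minimal sketch (`collection` is assumed to be an already-connected Collection from the official Node.js driver, inside an async function; names are placeholders): insertOne() takes a plain JS object and the driver serializes it to BSON itself, so no JSON string round-trip is needed for locally built objects:

    // Pass the whole object as-is; the driver converts it to BSON.
    const obj = { foo: 'bar', nested: { baz: 42 } };
    await collection.insertOne(obj);

    // JSON only enters the picture when the data arrives as a string,
    // e.g. an HTTP request body — parse it into an object first:
    const fromWire = JSON.parse('{"foo":"bar"}');
    await collection.insertOne(fromWire);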
[02:43:16] <kurushiyama> Freman: xenial is unsupported, and for a good reason. It might well be that some dependencies simply do not match – iirc, libstdc++ is dynamically linked and xenial's version might not be the one required by MongoDB.
[02:43:38] <Freman> that's the opinion we're forming too :D
[02:45:06] <kurushiyama> Well, you gambled by trying (daring for a persistence technology, I might add – the problems could have been more subtle). 2019 for trusty should be good enough, imho.
[02:45:31] <kurushiyama> Freman: By then, I am sure there will be a version with support for xenial.
[10:04:24] <Industrial> I'm trying to create a geospatial query; https://gist.github.com/Industrial/0dfc631a0efd388500753065211181c9
[10:04:42] <Industrial> When I do the query in e.g. robomongo/mongo client it works, I get one result with one key.
[10:04:46] <Derick> Industrial: your lat/long are the wrong way around
[10:07:18] <Industrial> Derick: here it says the latitude value can be from 0 to 60? https://en.wikipedia.org/wiki/Geographic_coordinate_system#Expressing_latitude_and_longitude_as_linear_units
[10:07:44] <Industrial> So, with the query working in the mongodb client, how come I get an error about the index when I try to do the same query from my app (node)?
[10:09:16] <kurushiyama> Industrial: I am pretty sure I pointed you to the correct docs ;)
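The usual culprit, sketched minimally in the shell (collection and field names are illustrative): GeoJSON coordinates are ordered [longitude, latitude], the reverse of the everyday "lat, lng" habit, and the queried field must be the one covered by the 2dsphere index:

    // GeoJSON points are [longitude, latitude].
    db.places.createIndex({ location: '2dsphere' })
    db.places.find({
      location: {
        $near: {
          $geometry: { type: 'Point', coordinates: [4.895168, 52.370216] }, // [lng, lat]
          $maxDistance: 1000 // metres
        }
      }
    })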
[12:04:10] <JWpapi> How do I update multiple documents with different parameters?
[12:05:31] <JWpapi> I have an array of objects with different values and I want to store each one if its id field does not already exist
[12:06:01] <JWpapi> I could use update combined with a forEach, but I'm not sure how to close the db afterwards
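A minimal sketch of one way to do this with the Node.js driver (the URL, db/collection names, and the `id` key are assumptions based on the question): bulkWrite sends the whole array in one round trip, `upsert: true` with `$setOnInsert` only inserts when no document with that id exists, and the client can be closed once the awaited call returns:

    const { MongoClient } = require('mongodb');

    async function storeMissing(docs) {
      const client = await MongoClient.connect('mongodb://localhost:27017');
      try {
        await client.db('test').collection('items').bulkWrite(
          docs.map(doc => ({
            updateOne: {
              filter: { id: doc.id },         // match on the application-level id
              update: { $setOnInsert: doc },  // write the fields only on insert
              upsert: true                    // insert when no match exists
            }
          }))
        );
      } finally {
        await client.close(); // safe once the awaited call has returned
      }
    }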
[12:26:46] <ikus060> Hello, is it possible to preallocate a specific amount of disk space for mongodb? I want to allocate 250GB for a specific db, but then I want it fixed. I don't want mongo to grow.
[12:27:22] <ikus060> Oracle has something called a fixed tablespace. I want to do something similar.
[12:51:51] <Derick> a full disk error shuts mongodb down (or at least, it can)
[12:52:02] <kees_> well, at least you get an error then ;)
[12:52:25] <cheeser> also, the exact space required for your data *and* your indexes (not to mention on-disk query/aggregation processing) is tricky, at best, to estimate.
[12:54:05] <ikus060> Then I'm questioning how you are managing your disk space. Last time, the database grew to 250GB and filled the disk. It was impossible to reduce the database size and ultimately we deleted it.
[12:54:37] <ikus060> How do you guard against such a scenario?
[12:55:26] <kees_> I usually put them in a replicaset, and when the database is too large/fragmented I just delete it and let it resync from another replica
[12:56:05] <kees_> but my database is only ~15G, so it doesn't take too long
[12:57:15] <ikus060> kees_: doesn't seem very reasonable to do this.
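MongoDB has no direct equivalent of a fixed tablespace, but a capped collection is the closest built-in: it is preallocated at a fixed size and evicts the oldest documents instead of growing. A minimal shell sketch, with an illustrative name and size (note that capped collections carry restrictions, e.g. you cannot delete individual documents):

    // Preallocates ~250GB; once full, the oldest documents are
    // overwritten FIFO rather than the file growing.
    db.createCollection('events', { capped: true, size: 250 * 1024 * 1024 * 1024 })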
[13:46:29] <m3t4lukas> I have a slight problem concerning Morphia: it is about polymorphism. I have one base class named Person. Derived from that I have the classes NaturalPerson, LegalPerson, Customer, Supplier. Suppliers are also either LegalPersons or NaturalPersons, just as Customers are. Now, how do I best do this in Morphia, especially concerning DAOs and stuff?
[13:47:37] <m3t4lukas> One more thing: is there a way to do an upsert on all fields of an object without doing it manually on each field? I mean, there are annotations and stuff, I might as well use them :D
[13:48:44] <cheeser> what polymorphism problem are you actually seeing?
[13:56:55] <m3t4lukas> cheeser: I just found out I cannot save class names, otherwise I get cast errors. The other problem I see concerns overwriting objects that have otherwise been saved (e.g. creating a new Customer from a LegalPerson). For example, a customer does not have a legalForm field.
[14:03:49] <StephenLynx> you can implement multiple interfaces.
[14:03:53] <kurushiyama> m3t4lukas: Really? If you had not told me, I would not have known!!! ;P What I was trying to express is that this is your problem.
[14:05:53] <cheeser> composition and delegation in just a keyword or two
[14:06:57] <kurushiyama> cheeser: In Go, composition actually is shorter than a var declaration ;)
[14:07:47] <m3t4lukas> cheeser: if I do a get(ObjectId) with the CustomerDAO (derived from BasicDAO) I get LegalPersons or NaturalPersons depending on how they were saved in the masterdata module. Neither can be cast to a Customer.
[14:08:12] <m3t4lukas> kurushiyama: sry, didn't mean to offend :P
[14:08:30] <cheeser> this isn't really a morphia problem as much as a modeling problem.
[14:08:43] <kurushiyama> m3t4lukas: No harm done. Either way, I hope.
[14:12:51] <m3t4lukas> StephenLynx: it has to be modular. I don't see a way of doing multiple modules extending whatever they need from the model with interfaces only. I mean, Person is abstract, LegalPerson and NaturalPerson are provided by the masterdata module, Customer is provided by the sales module, and Supplier is provided by the purchase module. I don't see a way of easily using that kind of structure otherwise. For example, if a company needs additional attributes for their customers, the model has to be easily extensible.
[14:14:13] <m3t4lukas> StephenLynx: it can also happen that a company only wants to manage Employees, provided by the HR module and deriving from NaturalPerson, without the sales and purchase modules.
[14:17:03] <kurushiyama> m3t4lukas: Gall's law. And I still fail to see why you cannot achieve what you want with interfaces. Actually, they are _made_ for what you want.
[14:20:51] <m3t4lukas> kurushiyama: how would you do it using interfaces? How does a plugin define an interface that a plugin it depends on implements?
[14:21:38] <m3t4lukas> a plugin simply cannot change a plugin it depends on. It may even be maintained by someone completely different
[14:22:10] <kurushiyama> m3t4lukas: Say your base system defines legalPerson and naturalPerson.
[14:22:38] <kurushiyama> m3t4lukas: And now you have a HR module.
[14:23:10] <kurushiyama> m3t4lukas: The entities you manage _within_ that plugin are specific to HR
[14:23:48] <m3t4lukas> what about the class Employee, how would you do that class?
[14:24:04] <kurushiyama> m3t4lukas: So, you can easily subclass naturalPerson into employee, which in turn satisfies all the interfaces you need in HR
[14:24:46] <saira_123> Hi, one question please: how can mongodb handle high-velocity data?
[14:27:50] <kurushiyama> m3t4lukas: You are mixing things criss-cross.
[14:28:10] <kurushiyama> If that were exactly what you do, where would your problem be in the first place?
[14:29:07] <m3t4lukas> kurushiyama: Here is the problem: now that same employee buys something from the company he works at (which does happen in reality) and also becomes a Customer. Worst case scenario: he also sells things to the company he works at
[14:29:43] <kurushiyama> m3t4lukas: Thinking about it twice, imho it does not make sense to do any subclassing of natural persons for either employees, vendors, or customers. They have so little in common that you get yourself into more trouble than you save effort.
[14:31:27] <m3t4lukas> kurushiyama: okay, but what about masterdata management?
[14:31:45] <kurushiyama> m3t4lukas: Aye. And hence, if you want to implement it, use an interface "Customer" with the method "BuysStuff(whatever foo)". Now you take your little employee subclass, implement the interface Customer on it, and you are done.
[14:33:25] <m3t4lukas> kurushiyama: that is another problem with interfaces: they only have functions or methods. Now a customer is either a natural or legal person with one addition: he has a customer reference number
[14:33:40] <kurushiyama> m3t4lukas: What masterdata? I do not know what your business cases look like, but my suppliers and customers have nothing much in common. Surely not enough that I would even start to think about using a common data model for them.
[14:34:30] <m3t4lukas> kurushiyama: they could be the same.
[14:34:35] <kurushiyama> m3t4lukas: So? Make both naturalPerson and legalPerson implement "Customer", which has a getter and setter for both
[14:34:53] <m3t4lukas> and I want to be able to represent that fact in BI if they are the same
[14:35:35] <kurushiyama> m3t4lukas: Those must be quite interesting business cases...
[14:35:43] <m3t4lukas> kurushiyama: they can't implement an interface from a plugin that depends on the masterdata plugin
[14:36:08] <m3t4lukas> legalPerson and naturalPerson still belong to the masterdata module
[14:36:09] <kurushiyama> m3t4lukas: I cannot find a way to explain it to you and will not waste your time any further
[14:36:48] <m3t4lukas> the sales module depends (among other things, for banking and document creation) on the masterdata module
[14:37:19] <m3t4lukas> have to go now, be back in an hour; will read the logs
[18:31:58] <StephenLynx> yeah, I am pretty sure all those corporations with servers that cost tens of thousands of dollars are really concerned about the fact that a software name resembles something in some languages.
[20:29:31] <Whisket> I have MongoDB running on an Ubuntu VM in Azure and my first query after a period of inactivity is always very slow and usually times out. All subsequent queries are fine. Is there a way to fix this?
[20:31:12] <cheeser> sounds like maybe azure is having to "wake up" your VM
[20:35:12] <StephenLynx> stop using ms for anything serious or important.
[20:35:32] <StephenLynx> use a vps or dedicated server