PMXBOT Log file Viewer

#mongodb logs for Friday the 29th of April, 2016
[00:53:07] <kurushiyama> oky: Yes.
[00:56:06] <oky> kurushiyama: cool! i was curious what software you are using for plotting the analytics / query exploration, or what exists already
[00:57:47] <kurushiyama> oky: Since it is a web application, I use c3.js
[00:58:20] <kurushiyama> oky: The queries are sort of static, though.
[00:59:13] <oky> kurushiyama: i was thinking in terms of stuff like kibana, caravel or other query builders / explorers for other DBs
[01:00:44] <kurushiyama> oky: Uhh, iirc, there are some. But I am just kind of sleepless, and my brain probably does not work properly, yet.
[01:01:47] <kurushiyama> oky: cloud9charts might be good if you need sth quick and dirty (and have nothing to hide).
[01:02:48] <kurushiyama> oky: Pentaho, ofc.
[01:08:20] <oky> kurushiyama: do you use anything like those?
[01:08:26] <kurushiyama> oky: That was what was lurking in my mind. I saw a presentation of JasperSoft during MongoDays Germany 2014. Looked very promising back then.
[01:08:57] <oky> thanks for the list, i'm looking at them now - i see they are SaaS / paid products?
[01:09:08] <kurushiyama> oky: I have used Pentaho and cloud9charts occasionally.
[01:09:43] <kurushiyama> oky: Well, a free BI tool is probably worth what you pay for it.
[01:09:59] <oky> kurushiyama: i pay nothing for mongoDB :-D
[01:10:21] <kurushiyama> oky: Well, but they have a much larger audience and can monetize much more easily.
[01:12:23] <oky> if anyone else has other names to look up, i'm also interested
[07:31:31] <aps> Is there a way to set maxTimeMS in mongod server config?
[07:44:22] <Boomtime> @aps: no, maxTimeMS must be passed by the client
[07:46:02] <aps> Boomtime: Really? So if the client forgets or chooses to add a high value, mongo's screwed?
[07:46:12] <Boomtime> @aps: look like somebody else wants that feature too; https://jira.mongodb.org/browse/SERVER-13775
[07:46:35] <Boomtime> @aps: let's not forget you gave that client authorization to use your database
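A minimal sketch of what Boomtime describes, with the client setting maxTimeMS per operation in the mongo shell (collection name and filter are illustrative):

    // Abort this query server-side if it runs longer than 100 ms
    db.orders.find({ status: "pending" }).maxTimeMS(100)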
[07:51:06] <awal> Boomtime: FYI: irc does not use @ for pings. just write the name of the user ;)
[08:03:04] <goyalr41> Hi anyone there?
[08:03:35] <goyalr41> I wanted to know is there a way to append to a field's string value while doing update?
[08:03:45] <goyalr41> Instead of $set, is there any update option?
[08:16:17] <Boomtime> @goyalr41: there are currently no remote string manipulation update operators - arrays, numbers, and bitwise manipulation operators are available though
[08:17:05] <kurushiyama> goyalr41: Not that I am aware of.
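A common client-side workaround, given that no server-side string-append operator exists: read the value, concatenate locally, and write it back with $set (collection and field names are illustrative):

    // Illustrative document: { _id: 1, label: "foo" }
    var doc = db.items.findOne({ _id: 1 });
    db.items.update({ _id: 1 }, { $set: { label: doc.label + "bar" } });
    // Note: this read-modify-write is not atomic; concurrent writers can interleave.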
[08:25:48] <Zelest> kurushiyama, so.. i got my 3.0 cluster running now (haven't moved to 3.2 yet) .. though, it runs the old storage engine (non-wiredtiger) .. would it be a good idea to upgrade/change to wiredtiger before I migrate to 3.2?
[08:31:26] <Derick> no
[08:31:38] <Derick> just start up the 3.2 nodes in wiredTiger and you'll be fine
[08:32:33] <Derick> the data format for mmap and wt is totally different, and a node won't start up in wt mode if there is mmap data available
[08:32:57] <kurushiyama> +1
[09:10:25] <Zelest> Derick, well, the idea was to shutdown a 3.0 node, remove the data, enable wt and start it up again.. :)
[09:10:30] <Zelest> Derick, which I did :P
[09:12:21] <Derick> okay
[09:12:29] <Derick> you could just have replaced it with a 3.2 binary too
[09:12:33] <Derick> at the same time
[09:13:50] <Zelest> Derick, nah, still got 2.7 nodes :/
[09:14:19] <Zelest> also.. in rs.status(), i truly hate "syncingTo" .. :S
[09:14:28] <Zelest> it should be "syncingFrom"
[09:15:50] <Ange7> Hey all
[09:16:00] <Derick> 2.7 nodes?!
[09:16:16] <Zelest> err, 2.6.7
[09:17:11] <Zelest> as in, i still have 2.6.7 nodes in the replica set, so I can't go to 3.2 just yet.. need to migrate them/remove them first :)
[09:18:38] <Ange7> just one question: http://pastebin.com/tsmE5sKi If I want to calculate the average of count per uid per date, I don't think my aggregate query is wrong... :/ why?
[09:20:36] <Ange7> expected result for 2016-04-28: 257.6
[09:23:23] <Ange7> http://pastebin.com/tsmE5sKi (edit)
[09:23:32] <Ange7> i just want to be sure my calc is correct ? :/
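The pastebin is not preserved here, but the usual shape for "average of count per uid per date" is two $group stages: total per (date, uid) first, then average those totals per date (collection and field names are assumptions):

    db.events.aggregate([
      { $group: { _id: { date: "$date", uid: "$uid" }, total: { $sum: "$count" } } },
      { $group: { _id: "$_id.date", avgPerUid: { $avg: "$total" } } }
    ])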
[09:55:02] <Zelest> Euhm, how can I lower the log verbosity? Says the default is 0, but it seems to log more or less everything :(
[09:55:49] <kurushiyama> Zelest: Use syslog, configure accordingly.
[09:56:26] <Zelest> can't i just disable some of it? also, why is https://docs.mongodb.org/manual/reference/configuration-options/#systemLog.verbosity so confusing?
[09:56:58] <Zelest> "0 to 5" .. yet, it doesn't say which of the I, W, F, E or D they correspond to
[09:57:26] <aps> I am getting some very long operations. I turned on the profiling and got this - https://www.irccloud.com/pastebin/kn0wU0KS/
[09:57:30] <Zelest> I wanna log warnings and errors... information feels like a bit of overhead
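For reference, the verbosity Zelest is asking about lives under systemLog in mongod.conf; at level 0 mongod logs Informational (I) messages and above, and levels 1-5 only add increasingly chatty Debug (D) messages (a sketch):

    systemLog:
      verbosity: 0   # F, E, W and I are logged at 0; levels 1-5 add D (debug) detail
      quiet: true    # optionally suppress routine connection/command noise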
[09:57:41] <aps> Can someone please tell me what could be wrong here?
[09:57:57] <Erni_> l
[10:01:29] <Erni_> anyone has experience deploying a mongodb replica set in AWS?
[10:01:48] <kurushiyama> aps: Mongoose.
[10:03:44] <aps> kurushiyama: Oh. Is mongoose that bad? And how did you know I used mongoose? :P
[10:06:07] <kurushiyama> aps: I think it is wrong from the start. The whole concept is wrong in my book. And let's say the models Mongoose creates tend to have their own style. Mongoose's signature field '__v' for optimistic locking (done wrong, if I get it right from that query) does not make it harder. ;)
[10:07:33] <aps> kurushiyama: okay. So what's the correct alternative?
[10:07:44] <kurushiyama> Erni_: What is the exact problem you have?
[10:08:40] <kurushiyama> aps: Use the raw driver. Get your user stories right. Derive the questions you need your data to answer from it. Model accordingly, optimized for the most common use cases.
[10:08:52] <Erni_> the problem is that I deploy a replica set (3 replicas in 3 different AZs)
[10:09:13] <Erni_> and now how do I connect to the replica set from my application?
[10:09:17] <kurushiyama> aps: It is not that Mongoose is wrong at all. It just makes it way too easy to do it all wrong.
[10:09:46] <kurushiyama> Erni_: as if they were standing next to each other, no?
[10:09:47] <Erni_> I mean, do I need to construct the URI with the private IPs AWS gives to the mongodb EC2 instances?
[10:10:34] <Erni_> do I need to do this with the private IPs? https://docs.mongodb.org/manual/reference/connection-string/
[10:10:44] <Erni_> my app is running on AWS as well
[10:11:32] <kurushiyama> Erni_: Aye, that should work, if you can reach each replset member via shell from your application's machine.
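A sketch of such a connection string, listing each member along with the replicaSet option (addresses, database name, and set name are illustrative):

    mongodb://10.0.1.11:27017,10.0.2.11:27017,10.0.3.11:27017/mydb?replicaSet=rs0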
[10:12:43] <kurushiyama> aps: And if you do proper data modelling, there is very little advantage in using Mongoose.
[10:14:26] <aps> kurushiyama: Thanks!
[10:15:18] <Erni_> kurushiyama thank you, I will try
[11:08:32] <Erni_> kurushiyama what if one of the nodes in the replica set goes down? it then starts with a new IP address and is therefore not visible to the application?
[11:10:18] <kurushiyama> Erni_: uhm, that is definitely something you need to prevent. IIRC, Elastic IPs is what it is called...
[11:15:01] <kurushiyama> Erni_: VPC is the other option, ofc.
[11:17:20] <Erni_> VPC the other option? I don't understand
[11:17:43] <kurushiyama> Erni_: Virtual Private Cloud. An AWS option.
[11:18:19] <Erni_> I know what a VPC is, in fact I deploy each replica in a different subnet in the VPC
[11:18:43] <kurushiyama> Erni_: Then you should be able to assign IPs to your instances.
[11:18:46] <Erni_> but I don't understand how that solves the problem of the new IP when a replica goes down
[11:18:55] <Erni_> mmm
[11:19:17] <Erni_> so you mean I should assign a private IP address when I create the instance?
[11:19:40] <kurushiyama> Erni_: Aye
[12:17:01] <livingBEEF> Hi. Docs say that db.collection.copyTo() is deprecated but I don't see it mentioning what it is deprecated in favor of... or is it just deprecated without any replacement?
[12:57:59] <arbitez> hi all. has anyone converted mongodb databases to postgresql here? need some tips on how to mass convert dbs
[12:59:05] <arbitez> asked on #postgresql earlier, got some ideas, but a bit of work, so asking here as well
[13:15:49] <crazyphil> do I need to worry about a [UserCacheInvalidatorThread] warning? I can't seem to find much info about it
[13:18:14] <oky> arbitez: so, going from unstructured data to a table schema? or using postgres jsonb columns?
[13:25:45] <arbitez> oky: a mix
[13:26:11] <arbitez> oky: the stuff we have in mongodb will go into json columns though
[13:26:49] <arbitez> so i suppose a small "read-all-collections, but into json cols in respective tables" script should be easy enough if no tool exists
[13:27:20] <oky> arbitez: so, one column per table stuffed with JSON data inside postgres?
[13:27:49] <arbitez> oky: that will be step 1, yeah
[13:28:08] <oky> f.e. you have table "Foo" in mongo with { bar: baz, honk: dazzle}, you'll get Foo table in postgres with one column (call it Blob for now) that contains the JSON?
[13:29:44] <arbitez> probably, for step 1, then we'll destructure in tables/cols as needed
[13:30:30] <ramnes> jow guys
[13:30:47] <oky> arbitez: i guess you would also want to get rid of the mongo BSON id or whatever it is when transferring the blobs to postgres
[13:30:48] <ramnes> is there any plan for mongodb to support TTL indexes with a better precision?
[13:31:11] <arbitez> oky: dunno, haven't looked at details yet
[13:31:11] <ramnes> like, schedule the document delete rather than checking every 60 seconds
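For context, TTL expiry is declared on an index, and a background task removes expired documents only on its roughly 60-second cycle, which is the precision limit ramnes is asking about (a sketch; collection and field names are illustrative):

    // documents become eligible for deletion 3600s after their createdAt value
    db.sessions.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })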
[13:32:50] <oky> arbitez: fwiw, i stored JSON data inside postgres JSONB columns - the queries are a pain to write ;)
[13:33:15] <owg1> I made a blog post about Mongo + ZFS: http://www.clock.co.uk/blog/mongodb-performance-on-zfs-and-linux
[13:33:16] <oky> arbitez: also, check if there is indexing available for JSON blobs - might be nice to add indices
[13:33:23] <owg1> Any thoughts?
[13:34:46] <arbitez> oky: jsonb supports indexing, yes
[13:35:48] <arbitez> oky: write performance in our case, on indexed jsonb, is two orders of magnitude faster than mongodb, which explains our switch :)
[13:36:30] <arbitez> thanks, i think writing a custom migration tool is best
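A minimal sketch of such a one-shot migration pass, using the Node.js mongodb and pg drivers (connection strings, the collection name, and the single-JSONB-column table are all assumptions; a large collection would want a streaming cursor instead of toArray):

    var MongoClient = require('mongodb').MongoClient;
    var pg = require('pg');

    MongoClient.connect('mongodb://localhost:27017/source', function (err, db) {
      if (err) throw err;
      var pgClient = new pg.Client('postgres://localhost/target');
      pgClient.connect(function (err) {
        if (err) throw err;
        db.collection('foo').find().toArray(function (err, docs) {
          if (err) throw err;
          docs.forEach(function (doc) {
            delete doc._id; // drop the BSON ObjectId, as oky suggests above
            // assumes: CREATE TABLE foo (blob jsonb)
            pgClient.query('INSERT INTO foo (blob) VALUES ($1)',
                           [JSON.stringify(doc)],
                           function (err) { if (err) throw err; });
          });
        });
      });
    });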
[13:36:48] <oky> arbitez: wow, very cool!
[13:37:03] <arbitez> yeah
[13:37:32] <oky> owg1: cool blog post, any tips for tuning ZFS to be as fast? :P
[13:38:57] <oky> arbitez: using JSONB or JSON? (supposedly JSONB has some small overhead during write, so it's even more amazing if that's the one that's much faster than mongo)
[13:39:35] <arbitez> oky: jsonb. It's not that postgresql writes are so fast, it's that mongodb writes are so slow
[13:39:48] <arbitez> at least at our scale
[13:40:14] <arbitez> no difference for read, apparently
[13:40:30] <arbitez> (i didn't do the benchmarking, but trust the guys who did)
[13:40:58] <oky> arbitez: i wonder if you'll be able to horizontal partition with your postgres (or if you are already doing that)
[13:41:16] <arbitez> yup, seems to be great at that
[13:41:24] <arbitez> we need sharding to be top notch
[13:41:33] <oky> (citusDB has been looking cool, in that respect - i'm uncertain how that works with JSON)
[13:41:56] <oky> arbitez: are you logging analytics / event data?
[13:42:12] <arbitez> i think ph_shard and jsonb worked great
[13:42:16] <arbitez> pg_shard
[13:42:22] <arbitez> i'll ask
[13:43:00] <oky> that's so cool, tbh - the simple horizontal sharding + support for JSON + support for column based storage
[13:43:25] <arbitez> it apparently works, is fast and solves our problems :)
[13:43:45] <oky> arbitez: in two to four years, i expect to see: "hey, migrating off postgres..."
[13:44:16] <arbitez> maybe, but postgresql seems super solid. mongodb has been nothing but trouble.
[13:44:50] <Derick> postgresql's jsonp support is nothing like mongodb's bson/json support though
[13:45:08] <Derick> or mongodb's replicasets...
[13:45:21] <arbitez> not of which matters since write performance is killing us
[13:45:26] <arbitez> none
[13:46:19] <arbitez> and we have this statistics system that works on relational data, and the current setup of copying mongo data off to an RDBMS is hopeless
[13:46:21] <Derick> as to your original query of converting to postgresql, they're different types of databases, with different data models. There is no magic to convert between them.
[13:46:34] <arbitez> with postgresql we can simply keep that part in good old tables
[13:46:50] <arbitez> Derick: true, figured I'd ask anyway :)
[13:54:32] <oky> Derick: i'm curious about the difference between mongo's JSON and postgres' JSON
[13:56:06] <Derick> JSONP is more of a single blob, where you can index fields from. Whereas Mongo's BSON really still consists of the individual fields allowing for much greater query flexibility
[13:56:40] <oky> Derick: do you mean JSONB? i'm familiar with JSONP in the context of browsers and cross-domain JSON
[13:56:48] <Derick> sorry, JSONP
[13:56:51] <Derick> erm
[13:56:54] <Derick> JSONB
[13:57:35] <oky> Derick: what do you mean 'greater query flexibility'?
[13:58:27] <Derick> one sec, let me find an example
[13:59:40] <Derick> say you have { a: 'foo', 'b' : 6 }. How do you find all docs with b > 5 ?
[14:00:00] <Derick> MongoDB: db.col.find( { b: { $gt: 5 } } );
[14:00:06] <Derick> how in PGSQL?
[14:01:57] <oky> Derick: i don't remember the exact semantics, i have a query builder i use, but it's something like: SELECT * FROM <db> WHERE blob::json->'b' > 5;
[14:02:25] <ramnes> DROP DATABASE;
[14:02:33] <oky> (and blob is the name of the JSON field)
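For reference, the comparison needs a cast to work, since Postgres's ->> operator extracts the value as text: something like SELECT * FROM col WHERE (blob->>'b')::numeric > 5; (table and column names as in the exchange above).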
[16:26:48] <webm> Hey, I'm new to this web scale stuff. Can you mirror two server databases with mongodb? Aka two servers with mongodb installed sharing a database?
[16:27:58] <StephenLynx> yes.
[16:28:04] <StephenLynx> you can do that in two ways:
[16:28:12] <webm> Oh hey StephenLynx
[16:28:21] <StephenLynx> 1: replica set where you will have a copy of the data
[16:28:35] <StephenLynx> 2: a cluster where your data will be split across multiple servers
[16:28:44] <StephenLynx> you can do it in both ways simultaneously
[16:28:51] <webm> I see, nice, thx man!
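A minimal sketch of option 1, run once in the shell after starting mongod with --replSet on each server (host names and set name are illustrative; a production set wants three data-bearing members, or two plus an arbiter, so elections can reach a majority):

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "server1:27017" },
        { _id: 1, host: "server2:27017" },
        { _id: 2, host: "server3:27017" }
      ]
    })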
[16:28:59] <webm> How's lynxchan? :3
[16:29:03] <StephenLynx> great.
[16:29:24] <StephenLynx> a migration script for vichan has just been finished
[16:29:34] <webm> Ooh, so you can migrate from vichan to lynxchan?
[16:29:35] <StephenLynx> and there are new people looking to adopt it.
[16:29:37] <StephenLynx> yes.
[16:29:39] <webm> Niiiice
[16:29:49] <crazyphil> do I need to worry about a [UserCacheInvalidatorThread] warning? I can't seem to find much info about it
[16:30:32] <StephenLynx> https://github.com/SpaceDustTeapot/infinity2lynx
[16:33:20] <oky> StephenLynx: no javascript, huh
[16:33:28] <oky> cool stuff
[16:33:58] <webm> Are there other databases that support replica set/mirroring with mongodb?
[16:34:36] <StephenLynx> eh?
[16:34:40] <StephenLynx> I don't think so.
[16:35:04] <StephenLynx> oky yeah, nothing requires client-side js. that is a main design goal.
[16:35:45] <webm> there's this; https://github.com/vmware/tungsten-replicator
[16:36:08] <StephenLynx> still middleware.
[16:36:20] <StephenLynx> the databases themselves don't implement it.
[16:36:22] <StephenLynx> it makes no sense.
[16:36:33] <webm> middleware's fiiiine
[16:36:38] <StephenLynx> to have some db's running X and others running Y
[16:36:50] <StephenLynx> to run a cluster of rep set
[16:37:23] <StephenLynx> there is no way to scale how large of a maintenance nightmare that would be
[16:39:24] <oky> webm: something like https://github.com/stripe/mosql for postgres
[16:39:39] <webm> hehe nice
[17:36:28] <kurushiyama> webm: I cannot even count on how many fronts it is wrong to assume that a data river (which basically is a real-time ETL) has the slightest thing to do with MongoDB's notion of replication. StephenLynx is rather guilty of understatement than exaggeration.
[17:43:08] <kurushiyama> webm: If you really need semi-structured data and SQLish access, use Cassandra.
[17:43:34] <webm> not my choice, my boss called me and asked if mirroring to an external mongodb server was possible
[17:47:11] <StephenLynx> well, one thing is possible
[17:47:15] <StephenLynx> another is viable
[17:47:21] <StephenLynx> and another one is reasonable.
[19:08:07] <kurushiyama> webm: Mirroring to an external _MongoDB_ server is easy.
[19:08:14] <kurushiyama> webm: Even reasonable.
[19:10:16] <webm> I'm just thinking I'm on the wrong track here, the data we have is already relational and doesn't need to be non-relational.
[19:11:03] <webm> It's just that this other company needs some of our data, and they use MongoDB. So maybe we should just force them to use a second engine or something.
[19:11:18] <kurushiyama> webm: It is wrong to assume that MongoDB can not handle relations.
[19:11:38] <webm> But wouldn't it be pointless to use MongoDB for relational data?
[19:13:15] <StephenLynx> webm provide an API
[19:13:16] <kurushiyama> webm: Almost all data is relational. Take graph databases, for example. There is not much more than relations. Would you call it a relational database?
[19:13:28] <StephenLynx> that is the most sane way to share this data.
[19:13:46] <kurushiyama> I agree with StephenLynx, btw.
[19:13:48] <webm> StephenLynx: how?
[19:13:58] <StephenLynx> HTTP
[19:14:00] <kurushiyama> webm: Write one...?
[19:14:13] <webm> kurushiyama: come on
[19:14:17] <StephenLynx> v:
[19:14:33] <StephenLynx> you know what HTTP is?
[19:14:38] <webm> wow
[19:14:44] <webm> thanks for patronizing the shit out of me
[19:14:48] <StephenLynx> kek
[19:14:50] <StephenLynx> I wasn't.
[19:14:55] <webm> double wow
[19:16:58] <kurushiyama> webm: Writing a data access API, given all the tools and frameworks, should not take more than two or three work days.
[19:17:25] <StephenLynx> >frameworks
[19:17:36] <kurushiyama> webm: Really, that is most likely the easiest way.
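A minimal sketch of the kind of HTTP data-access API being suggested, using only Node's core http module and the mongodb driver (database, collection, port, and the read-only query are all illustrative):

    var http = require('http');
    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/ourdata', function (err, db) {
      if (err) throw err;
      http.createServer(function (req, res) {
        // expose a read-only slice of the data to the other company
        db.collection('shared').find({}).limit(100).toArray(function (err, docs) {
          if (err) { res.writeHead(500); return res.end(); }
          res.writeHead(200, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify(docs));
        });
      }).listen(8080);
    });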
[19:17:36] <StephenLynx> >shiggidiggy
[19:18:15] <kurushiyama> StephenLynx: Well, we all know your ... fondness... towards frameworks. Especially Mongoose, iirc? ;)
[19:18:49] <StephenLynx> those are two separate subjects, not that I appreciate ODMs either
[19:18:50] <StephenLynx> :V
[19:19:47] <kurushiyama> Well, we could define it as a CRUD framework ;)
[19:19:52] <StephenLynx> kek
[19:20:15] <StephenLynx> a CRUD would be more acceptable than an ODM, especially one like mongoose
[19:20:23] <StephenLynx> that not only does something unnecessary, but does it wrong
[19:48:12] <alanoe> hi. How can I get a value from DictField in mongoengine? I have tried field['a'] and get 'DictField' object has no attribute '__getitem__'
[19:50:12] <StephenLynx> a couple of people here use mongoengine, but you'd have a better chance somewhere dedicated to it
[19:52:05] <alanoe> StephenLynx, I joined #mongoengine and asked there too, ty
[19:52:13] <StephenLynx> welp
[20:06:20] <kurushiyama> StephenLynx: Hm, regarding Mongoose: let's say that you can do data modelling the right way with Mongoose and that it makes it faster. Then there should be a point with the maximum gain, with the smallest overhead, no?
[20:07:49] <StephenLynx> wot
[20:07:57] <StephenLynx> that's impossible.
[20:08:06] <StephenLynx> you can't make something faster by increasing the stack.
[20:08:23] <StephenLynx> 1+1 is always greater than 1
[20:10:35] <kurushiyama> Maximum gain of dev speed, that was, with the minimum loss of performance.
[20:11:03] <StephenLynx> >gain of dev speed
[20:11:05] <StephenLynx> git gud
[20:11:34] <StephenLynx> in the long run the time you save is meaningless.
[20:12:04] <StephenLynx> even if you were to save something like 80 hours, what is 2 weeks in 2 years?
[20:12:09] <kurushiyama> Playing devil's advocate... ;)
[20:12:23] <StephenLynx> and if someone were to save 80 hours by using an ODM, that person should look for a new job.
[20:12:42] <StephenLynx> or the company shouldn't give critical tasks to interns
[20:13:17] <StephenLynx> you are going to waste much more time by having to maintain code that is more complex.
[20:13:29] <StephenLynx> and then you have more bugs because of said complexity
[20:13:49] <StephenLynx> and waste more time scaling because you are wasting performance
[20:14:50] <kurushiyama> The last point is probably the most important one...
[20:32:08] <starfly> although it can be the case, a bigger overall stack doesn’t always translate into slower performance and complexity; that approach can be used to create additional, targeted code sets that minimize code execution paths for high-frequency operations (as an example)
[20:33:57] <StephenLynx> only if the used stack for a given operation differs from the used stack for a different operation.
[20:36:43] <starfly> yes, I wasn’t suggesting redundancy; there are just reasons (at times) for logical code organization that can often result in a bigger stack
[20:37:27] <StephenLynx> in practice that's not a bigger stack.
[20:37:40] <starfly> can be. :)
[20:37:42] <StephenLynx> you just have a more complex system where each scenario uses a different stack
[20:38:12] <starfly> that wasn’t implied, Dude
[20:51:46] <Ryzzan> let's say i have a String variable like var string = "123, 543" and I want to use it with $in in find, like db.collection.find({_id : $in : [ string ]})
[20:52:04] <Ryzzan> is it possible? how am I supposed to use string variable?
[20:53:26] <StephenLynx> do you need that to be a string?
[20:53:34] <StephenLynx> that seems like an array.
[20:53:38] <Ryzzan> not really
[20:54:06] <Ryzzan> StephenLynx: yep... but would it work with $in as an array?
[20:54:25] <Ryzzan> ow...
[20:54:29] <Ryzzan> stupid question
[20:54:31] <Ryzzan> kkk
[20:54:35] <Ryzzan> got it
[20:54:49] <Ryzzan> gonna try
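A sketch of what Ryzzan arrived at: split the string into an array first, since $in expects an array (whether the _id values should stay strings or become numbers depends on the data):

    var ids = "123, 543".split(",").map(function (s) { return s.trim(); });
    db.collection.find({ _id: { $in: ids } });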
[21:27:17] <catbeard> how do i drop an index i just created
[21:27:29] <catbeard> with db.results.ensureIndex()
[21:27:42] <cheeser> db.collection.dropIndex("somename")
[21:28:32] <catbeard> > db.collection.dropIndex({'profile.main().cu': -1})
[21:28:34] <catbeard> { "ok" : 0, "errmsg" : "ns not found", "code" : 26 }
[21:28:46] <catbeard> meant to type cpu instead of cu when i created the index
[21:29:18] <catbeard> or am i referencing the index name incorrectly?
[21:32:16] <catbeard> i'm following instructions from https://github.com/perftools/xhgui @cheeser
[21:37:43] <kurushiyama> catbeard: db.results.getIndices()
[21:38:32] <kurushiyama> catbeard: Then: db.results.getIndices(indexNAME)
[21:38:49] <kurushiyama> sorry db.results.dropIndex(indexNAME)
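A sketch of the sequence kurushiyama describes; the "ns not found" above most likely came from running dropIndex against the literal db.collection instead of db.results:

    db.results.getIndexes()                            // lists each index's key pattern and name
    db.results.dropIndex({ "profile.main().cu": -1 })  // drop by key pattern...
    db.results.dropIndex("profile.main().cu_-1")       // ...or by its default generated name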
[21:41:05] <kurushiyama> My new fav question title about MongoDB on SO: "How to make two successives synchronous queries on collections in MongoDB?"
[21:48:10] <catbeard> ty kurushiyama
[21:48:13] <catbeard> sorted
[21:48:43] <kurushiyama> catbeard: You want to delete your index sorted?!?
[21:48:53] <catbeard> there was nothing in it
[21:49:13] <catbeard> was just getting indexing done first before i start profiling
[21:55:39] <kurushiyama> catbeard: Well, I'd probably do some runs without any indices, to have a complete list, so to say.
[23:16:46] <Lonesoldier728> Hey so I am using mongoose with Nodejs and trying to figure out how I can store DateType as 0 - I do not want it to be blank. Is there a way to have a DateType that is 0?
[23:30:04] <StephenLynx> i suggest you dont use mongoose.
[23:39:08] <Boomtime> @Lonesoldier728: MongoDB Date type is just a 64bit integer under the bonnet; https://docs.mongodb.org/manual/reference/bson-types/#date
[23:39:36] <Boomtime> however, a value of zero maps to the Unix epoch, which is a perfectly valid date; the first second of 1970
[23:39:47] <Boomtime> is this really what you want?
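For illustration, in the mongo shell:

    new Date(0)   // ISODate("1970-01-01T00:00:00Z"), i.e. the Unix epoch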
[23:48:07] <Lonesoldier728> Nope wanted to store a 0
[23:49:16] <StephenLynx> then store a 0
[23:49:20] <StephenLynx> your problem is mongoose
[23:49:23] <Lonesoldier728> I know
[23:49:27] <StephenLynx> then stop using it
[23:49:58] <Lonesoldier728> It is a cool friend to have, just kidding, it is the one field I can just leave blank