[00:41:44] <Boomtime> @llakey: i quoted the YAML file format, which version of mongod are you using?
[00:42:04] <Faeronsayn_> @joannac I am trying to convert my mysql database into mongodb and will be importing csv files
[00:42:31] <llakey> Boomtime: 2.6.3 the config which shipped with the arch linux package isn't in yaml format. is the yaml format required for disabling journaling?
[00:42:47] <Boomtime> it should not be, just so long as you are consistent
[00:45:36] <llakey> ok. that's why i stayed with the k = v pattern. i was hoping journal = false from the example config would also work
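For context, a minimal sketch of the two config styles for disabling journaling on a 2.6 mongod (option names as I recall them from the 2.6 docs; note that journaling defaults to on for 64-bit builds, so the old-style file wants the explicit nojournal option rather than journal = false):

    # old ini-style config (the whole file must stay in this style)
    nojournal = true

    # YAML-style config, new in 2.6 (the whole file must be YAML)
    storage:
      journal:
        enabled: false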
[01:43:36] <balakalaka> how do i create a date type using the c mongo driver?
[01:43:54] <llakey> joannac: for me, i like to use nojournal when testing because i don't want to wait for journal files to be created, because we're just testing. in production, we do journal and backup
[01:49:04] <balakalaka> there's like 5 million people in this channel
[01:49:34] <balakalaka> am i to guess the mongo c driver is not used very much?
[01:49:48] <balakalaka> there seems to be almost no info on it online
[03:00:31] <Boomtime> "balakalaka: am i to guess the mongo c driver is not used very much?"
[03:00:50] <Boomtime> @balakalaka: the C driver is not officially released, it is at 0.98 i believe
[03:02:07] <Boomtime> i think you should definitely expect the docs to lag behind while the C driver matures; that said, the C driver should roughly reflect, in a C style of design, the API the other drivers all present
[03:21:57] <Faeron> Hey guys, where would I check why my mongoimport fails (it doesn't give me an error message :( )
[03:28:03] <Faeron> Hey guys, how would I reference an object_id?
[03:47:02] <awinn> Hi, not sure about the etiquette (new to IRC), is there someone that could help me with a quick question?
[03:49:37] <awinn> I have a 2d index and I am searching using $near
[03:50:14] <awinn> I want to set maxDistance but I need to convert from input in miles
[03:50:58] <awinn> people are giving different formulas online using the earth's radius and whatnot but it doesn't work
[03:57:01] <Boomtime> setting aside the conversion from miles for a moment, does a $near query work when you just stuff a value of 1 in for maxDistance?
[04:00:12] <awinn> no, I am using coordinates that match a document too, but I have to put in a large number, approx 14 thousand, for it to find the document
[04:00:39] <Boomtime> ok, so the conversion is not the first problem, let's solve the first problem
[04:01:10] <awinn> ok.. sounds good. where should i start?
[04:01:41] <Boomtime> can you provide the coordinates from the doc (the bit which is indexed as "2d", or is it 2dsphere?)
[04:10:10] <Boomtime> i would suggest you stop using mongoose for the moment and get your query to work in the shell, it should translate pretty easily from there to mongoose
[04:10:39] <Boomtime> by the 'shell' i mean the 'mongo shell'
[04:13:21] <awinn> I did log the query object and it did look like it was assembling it correctly, however I was assuming the lat/long is in the right order because it isn't logging that part out completely
[04:14:09] <awinn> I will try the shell and log out the coordinates after the query is assembled.
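A minimal sketch of the kind of $near query being discussed, as it would be run in the mongo shell; the places collection, the loc field, and the coordinates are hypothetical:

    // legacy "2d" index over [lng, lat] pairs
    db.places.ensureIndex({ loc: "2d" })
    db.places.find({
        loc: { $near: [ -73.9667, 40.78 ], $maxDistance: 0.25 }  // units are the same as the coordinates (degrees)
    })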
[04:22:23] <cheeser> can you tell mysql to rename that column to _id ?
[04:22:47] <Faeron> I can... but mongodb doesn't use auto-incrementing ids, does it?
[04:23:03] <Boomtime> only if you don't instruct it otherwise
[04:23:11] <Boomtime> you can't avoid having an _id, if you don't supply one it will be generated for you as ObjectId
[04:23:50] <Faeron> so there is no way of telling mongodb to use string instead of objectId for _id?
[04:23:55] <Boomtime> the best option is to claim the _id for your own purposes, if you have a unique pre-existing field which you use as PK in mysql then it's got _id candidate written all over it
[04:24:27] <Boomtime> yes there is, just not with mongoimport
[04:24:46] <Faeron> is there a way I can mass update after the import?
[04:24:50] <Boomtime> you can set _id to whatever you want, but what you are doing is not setting it at all
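Since _id is immutable, a "mass update" after the import really means re-inserting the documents with the _id you want. A rough shell sketch, assuming the old mysql primary key was imported into a hypothetical mysql_id field on a shows collection:

    db.shows.find().forEach(function (doc) {
        doc._id = doc.mysql_id;         // claim the old PK as _id (any unique value works)
        delete doc.mysql_id;
        db.shows_fixed.insert(doc);     // _id can't be changed in place, so write to a new collection
    });
    db.shows_fixed.renameCollection("shows", true);   // swap the fixed collection into place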
[04:29:07] <awinn> now with the conversion do I divide the distance by the radius of the earth then?
[04:29:46] <Faeron> let's say I have tv shows and episodes. in my episode document I have a show_id, what should I set this show_id to? The string version of the objectId?
[04:33:32] <cheeser> you have existing data. why would you want to generate new IDs?
[04:34:13] <Faeron> I want to update to the mongodb way, I feel like auto-incrementing fields aren't really encouraged for mongo
[04:34:32] <cheeser> they're not for the most part
[04:35:34] <Faeron> so back to my original question, should I put the ObjectId in the show_id field?
[04:35:58] <cheeser> why would you? you have a value there already.
[04:36:16] <Faeron> I will update it to the new tv show ids which would now be object ids
[04:37:00] <cheeser> you'll need to update all your FKs, then, with the newly generated ObjectIDs.
[04:37:07] <cheeser> not hard, but will take a little scripting.
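A rough idea of the scripting involved, assuming hypothetical shows and episodes collections where episodes.show_id currently holds the old mysql id:

    db.shows.find().forEach(function (show) {
        var oldId = show._id;
        var newId = new ObjectId();
        show._id = newId;
        db.shows_new.insert(show);                    // _id is immutable, so re-insert under the new id
        db.episodes.update({ show_id: oldId },
                           { $set: { show_id: newId } },
                           { multi: true });          // repoint every episode at the new ObjectId
    });
    // then rename shows_new over shows once you have verified the result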
[04:40:04] <Faeron> yeah, don't have too many relationships atm, so I feel like it's not too big of a deal (saves me a headache later where I am unable to use some cool mongodb stuff because my ids are all over the place)
[04:40:21] <Faeron> so for reference ids I should use ObjectId?
[04:40:54] <cheeser> i prefer ObjectIDs personally
[04:41:24] <Faeron> What are the advantages of ObjectIds over strings?
[04:42:28] <cheeser> you don't have to generate them yourself
[04:42:59] <Faeron> _ids that are strings are not generated by mongodb?
[04:43:33] <Boomtime> "(2:26:37 PM) awinn: now with the conversion do I divide the distance by the radius of the earth then?"
[04:43:39] <awinn> hey anybody: I am using maxDistance/(3959 * 3.14/180) for miles to radians but it appears to be a little inaccurate. is it because i'm not using pi?
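The unit depends on the query form: flat $near against a legacy 2d index takes $maxDistance in the same units as the coordinates (degrees), while the spherical form takes radians. A hedged shell sketch, with a hypothetical places collection and example coordinates:

    var miles = 10;

    // flat $near on a 2d index: $maxDistance is in degrees, same as the coordinates
    var maxDegrees = miles / 69;                 // rough; 1 degree of latitude is about 69 miles
    db.places.find({ loc: { $near: [ -73.9667, 40.78 ], $maxDistance: maxDegrees } })

    // spherical $nearSphere: $maxDistance is in radians
    var maxRadians = miles / 3959;               // 3959 = earth's radius in miles
    db.places.find({ loc: { $nearSphere: [ -73.9667, 40.78 ], $maxDistance: maxRadians } })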
[05:18:34] <balakalaka> no errors, but it just stops
[06:08:49] <zereraz> hello, I have a question about mongodb: why can't I do this in an update: {$inc: {votes[optionNo]: 1}}, where votes is an object and optionNo is a string?
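The object-literal syntax can't compute a key from a variable, so the dotted field path has to be built as a string first. A sketch, with a hypothetical polls collection and pollId variable:

    // optionNo is a string like "3"; build "votes.3" as the field to increment
    var field = "votes." + optionNo;
    var update = { $inc: {} };
    update.$inc[field] = 1;
    db.polls.update({ _id: pollId }, update);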
[06:18:55] <zereraz> but I would like advice on how to improve my schema
[06:19:09] <Boomtime> before you even try to design a schema, make sure you know what you want to do - not what you want to *store* (that part is easy) - i mean, really know what you want to do with your data
[06:19:43] <zereraz> hmm, yeah that is true. Do you people use any tool to plan and design?
[06:19:53] <Boomtime> given what you want to achieve, it is really just a matter of trial and error to find a data layout/form/structure that works well to achieve it
[06:20:18] <zereraz> because I normally have no clue what I want to do, I just somehow hack at it to make it work (which I know is a very bad way :( )
[06:20:20] <rh1n0> that's the damn truth - i'm working on untangling a mongodb setup because the guy who did it didn't know how to properly design the schema. 24 second query times? no thanks
[06:21:35] <Boomtime> rh1n0: it is a common problem, people come from SQL where they spend all their time constructing clever queries to achieve the outcome; in mongodb you should spend all your time designing a clever schema
[06:21:51] <rh1n0> zereraz thats a loaded question :) it takes experience
[11:42:44] <Derick> joker666: sorry, I don't understand what you're asking
[11:44:15] <joker666> Derick, are you familiar with rails or Laravel? The database migration feature is like versioning your db - like git gives you versions of your software
[11:44:27] <huleo> Derick: noob question, so basically we use .aggregate instead of .find? (+proper options for the call)
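Without the earlier context it's hard to say which pipeline was meant, but as a generic sketch, a simple find() with a filter and projection maps onto $match plus $project stages in aggregate() (collection and field names here are hypothetical):

    db.orders.find({ status: "shipped" }, { customer: 1, total: 1 })

    db.orders.aggregate([
        { $match: { status: "shipped" } },
        { $project: { customer: 1, total: 1 } }
    ])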
[11:51:37] <mhh_12345678> Hello. I'm new to MongoDB and nodejs. I'm trying to run a unit test of some mongodb stuff using nodeunit. The mongodb server port is 27017. The test just hangs, it doesn't complete. Do any of you guys have any experience with testing mongodb with nodeunit?
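One common reason a nodeunit test hangs is that test.done() is never reached, or the driver connection is left open so the process never exits. A minimal sketch using the node driver of that era; the database name, collection name, and URL are assumptions:

    var MongoClient = require('mongodb').MongoClient;

    exports.testInsert = function (test) {
        MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
            test.ifError(err);
            db.collection('things').insert({ a: 1 }, function (err, result) {
                test.ifError(err);
                db.close();      // without this the node process can keep running
                test.done();     // without this nodeunit never finishes the test
            });
        });
    };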
[14:07:21] <soupsucka> hey, for some reason the mongo driver for node and mongoose both die when i try to batch insert a lot of data into mongodb
[14:07:46] <soupsucka> they both die without any errors or anything after inserting about 500k records
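Firing half a million inserts at once is a common way to exhaust memory without a visible error. One hedged approach is to insert in fixed-size chunks and wait for each batch's callback before sending the next (all names here are hypothetical):

    // insert a large array of docs in chunks of chunkSize, one batch at a time
    function insertInChunks(collection, docs, chunkSize, done) {
        var i = 0;
        (function next() {
            if (i >= docs.length) return done(null);
            var chunk = docs.slice(i, i + chunkSize);
            i += chunkSize;
            collection.insert(chunk, function (err) {
                if (err) return done(err);
                next();
            });
        })();
    }

    // usage: insertInChunks(db.collection('records'), allDocs, 1000, function (err) { ... });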
[15:25:54] <soupsucka> whats the typical overhead in bytes for storing one record?
[15:26:49] <soupsucka> i'm storing 100mb of csv data (3 fields) and it's using 1GB of disk space :O
[15:27:13] <rspijker> soupsucka: the initial overhead is quite large, due to pre-allocation
[15:27:31] <rspijker> percentage-wise this becomes way better later on though
[15:27:40] <dmitchell> don't forget dbs don't store data compactly but in a form that allows random retrieval
[15:27:51] <soupsucka> how can i check how much is actually used?
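In the mongo shell, the stats helpers separate what the documents actually occupy from what has been pre-allocated on disk:

    db.stats()                  // whole-database figures: dataSize vs storageSize vs fileSize
    db.mycollection.stats()     // per-collection figures ("mycollection" is hypothetical)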
[15:58:26] <cramrod> Has anyone done stuff with NLP in node? I need help figuring out the most effective way to store and retrieve records in Mongo where they are indexed by an NLP n-gram. I don't know enough about javascript data type efficiency to optimize storage/retrieval.
[16:02:31] <cramrod> Is it very bad to use 'document' and 'record' interchangeably in this context?
[23:25:21] <faeronsayn> Aug 15 19:17:33 localhost.localdomain systemd[1]: mongod.service: control proces
Aug 15 19:17:33 localhost.localdomain systemd[1]: Failed to start SYSV: Mongo is
-- Subject: Unit mongod.service has failed
-- Defined-By: systemd