[00:39:57] <kurushiyama> synapse: Here! Had to track down a huge performance problem, 16h straight
[03:46:38] <kalx> Hi all. Is there a way to query a document to see if my query input array contains all the values that are in a document's array value? So like $.all except matching in the other direction.
[03:47:00] <kalx> eg. document is: { tags: ['b','c','d'] }. I want to query with ['a','b','c','d','e'] and find documents where all the document's tags are in my input array
[03:53:11] <Boomtime> however, you can use aggregation to do it today; https://docs.mongodb.org/manual/reference/operator/aggregation/setIsSubset/#exp._S_setIsSubset
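(For reference, a minimal shell sketch of the $setIsSubset approach Boomtime links to; the collection name is made up:)

```js
// Keep only documents whose entire tags array is contained in the input list.
var input = ['a', 'b', 'c', 'd', 'e'];
db.docs.aggregate([
  { $project: { tags: 1, isSub: { $setIsSubset: ['$tags', input] } } },
  { $match: { isSub: true } }
]);
```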
[03:53:35] <kalx> will check those out, much appreciated Boomtime
[04:16:54] <kalx> @Boomtime: Tested $setIsSubset, worked but only in the context of an aggregate call. I'm trying to do this within a findOneAndDelete call (atomic grab+delete for performance reasons). Any idea if that's possible? Seems not, but researching in the meantime
[04:41:33] <kalx> @Boomtime: Just to share. Found a workaround, but it will likely destroy performance for larger collections. As a find condition -> {my_field: {$not: {$elemMatch: {$nin: ['a','b','c','d','e']}}}} . Too many negatives make it confusing... essentially, it finds any document with a my_field array where none of the array elements can satisfy not being in the input list.
[04:42:28] <kalx> so not happy about it (even the other option, $setIsSubset, likely wouldn't have been good performance-wise)
[04:48:53] <Boomtime> @kalx: yeah, i know of that workaround and it suffers the problem you recognized - it cannot use an index, literally there isn't one that can be used - the aggregation pipeline _can_ - it won't be terribly efficient, but it should be much better than the $not/$elemMatch/$nin abomination
[05:07:44] <Trinity> i'm working with mongo in node.js and i'm trying to use a string variable for a key in the document
[05:10:34] <Trinity> shell shows that the key 'throttleKey' is being created/updated instead of 'myIndex'
[05:10:58] <Trinity> ah i think I might be able to create the object and just pass it along?
[05:13:40] <Trinity> nvmd I dont think that's working
[05:14:55] <Boomtime> @Trinity: you can't do what you want in JSON - you'll need to use a real javascript object to create it
[05:15:20] <Boomtime> right now you are describing the update using JSON syntax - which will assume that all keys are literals
[05:16:01] <Boomtime> create a javascript object instead, even if that means instantiating all the other members using JSON first, then adding one more entry using javascript bracket syntax
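(A minimal sketch of what Boomtime describes; the variable, id, and collection names are hypothetical:)

```js
// The key is only known at runtime, so build the update as a real JS object.
var throttleKey = 'myIndex';           // key name held in a string variable
var update = { $set: {} };             // literal parts first, JSON-style
update.$set[throttleKey] = Date.now(); // bracket syntax adds the dynamic key
collection.updateOne({ _id: someId }, update, function (err, result) {
  // the document now has a 'myIndex' field, not a 'throttleKey' field
});
```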
[05:16:04] <kalx> @Boomtime: Unfortunately, aggregate leads to other trouble. Need to find+delete or mark processed at a rapid pace, which means atomicity is best (findOneAndDelete or findOneAndUpdate). Aggregate would lead to a multistep process which is ultimately doomed.
[05:17:23] <Boomtime> you know that all findAndUpdate type operations are multi-step right?
[05:17:52] <Boomtime> i mean, they can do what they say on the packet - which is assure atomicity, but so can you
[05:18:57] <Boomtime> if you find a document that matches your special query and then use an update which specifies the _id you can hit only that document - even if there is a chance of an ABA problem, you can just use another field as a sequencer to avoid it
[05:22:10] <Trinity> Boomtime, thanks got it working
[05:23:22] <Logicgate> Trinity, you would have to create an object first.
[05:23:39] <Logicgate> Then object[throttleKey] = ..
[05:23:40] <Trinity> Logicgate, thanks for the help I got it working with Boomtime's suggestion
[05:24:14] <kalx> Issue is more to avoid multiple processors from identifying the same task to process. Sure, we can easily prevent multiple from grabbing the same record, but then one fails and has to re-identify + grab again. With very high concurrency, it's easy to run into perf problems there.
[05:28:19] <Boomtime> so don't re-identify - use a cursor - then loop 'update' until you get one that succeeds - reads are much cheaper than writes, so a miss on the read portion of an efficient update doesn't matter so much
[05:28:46] <Boomtime> also, given that in order to 'miss' one other _must_ have succeeded, the system must be making progress
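(A sketch of the claim loop Boomtime outlines, under assumed names: a 'pending' state field and a 'version' counter as the sequencer:)

```js
// Read cheaply with a cursor, then try to claim each candidate by _id.
// The version check makes the update miss if anyone else touched the doc.
var cursor = tasks.find({ state: 'pending' });
function tryNext() {
  cursor.next(function (err, doc) {
    if (err || doc === null) return;             // no more candidates
    tasks.updateOne(
      { _id: doc._id, version: doc.version },    // succeeds only if untouched
      { $set: { state: 'claimed' }, $inc: { version: 1 } },
      function (err, result) {
        if (err) return;
        if (result.modifiedCount === 1) handleTask(doc); // we won the race
        else tryNext();                                  // someone else did; move on
      }
    );
  });
}
tryNext();
```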
[12:33:19] <cdunklau> i don't get this... i'm trying to do the update data part of the mongodb getting started thing (https://docs.mongodb.org/getting-started/node/update/), here's my code https://gist.github.com/cdunklau/7eef9f4a3280007bf3ff
[12:33:23] <cdunklau> when i run that, it just outputs undefined, and the object isn't changed
[12:33:39] <cdunklau> what am i missing, and what things could I do to find out what's going on?
[12:34:07] <cdunklau> using the same filter and update in the mongo shell, it works
[12:56:56] <cdunklau> aha. i didn't have an assert.equal(err, null) so the error wasn't getting outputted
[13:02:42] <cdunklau> so here's the error https://gist.github.com/cdunklau/ec2060bfc5efd9d6c742
[13:06:34] <cdunklau> looks like there's something up with my mongodb package. when i do "npm version mongodb" it says it can't find the package.json but when i look in the node_modules/mongodb i find it, and it shows me this: https://gist.github.com/8e0610dadbd39be40f01
[13:07:40] <synapse> kurushiyama are you still there
[13:08:06] <synapse> re: our performance issue regarding AJAX and large responses
[13:08:49] <kurushiyama> synapse: ... ...Had we? You were asking for other poor lads spending their Friday... I was the other one ;)
[13:09:09] <kurushiyama> synapse: But shoot, and I will try to help.
[13:09:27] <synapse> As I said, I've done some tests. I have a small db with under 100 entries, and I also messed about with using the example "restaurants" collection dataset, which is very big. The app works fine with the former, and on the latter the whole app gets killed on like 4 queries
[13:16:22] <kurushiyama> for example, if you have db.coll.find({foo: "bar", answer: 42})
[13:16:46] <olivervscreeper> Hi - I'm trying to get the PHP Mongo driver installed. I've installed the MongoDB driver (not Mongo), and phpinfo() shows it as enabled, but if I try to create a connection with new Mongo() it says that the class is not found. Any ideas?
[13:17:08] <kurushiyama> you should have an index on those two fields (order matters!). So you need to check whether a matching index exists.
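(In the shell, that index would look like this, using the field names from the example above:)

```js
// Compound index supporting find({foo: "bar", answer: 42}); order matters.
db.coll.createIndex({ foo: 1, answer: 1 });
db.coll.getIndexes(); // check what already exists
```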
[13:17:42] <synapse> I am lost when you talk about indexes
[13:18:01] <synapse> kurushiyama I already am using a cursor
[13:18:02] <kurushiyama> synapse: Have a look at the docs.
[13:18:15] <kurushiyama> synapse: Indexing is _extremely_ important
[13:18:26] <kurushiyama> synapse: But let me explain.
[13:19:15] <kurushiyama> without an index, mongod would need to read each and every document from disk to determine whether the document matches.
[13:19:16] <olivervscreeper> My issue's on StackOverflow if that explains it better: http://stackoverflow.com/questions/35514409/uncaught-error-class-mongoclient-not-found
[13:19:41] <kurushiyama> olivervscreeper: Sorry, when I want Pasta, I see an Italian ;)
[13:19:44] <synapse> I have db.users.find({ name: {$regex: 'query', $options: 'i'}});
[13:25:44] <kurushiyama> synapse: An index basically is a specialized data structure which can be searched efficiently and will be kept in RAM, if at all possible. It holds the values of the indexed field(s) and references the position of the document in the data files.
[13:30:02] <kurushiyama> When you do a query on an indexed field, the match is done against the index, and only then the matching docs are read from the disc, (vastly) reducing latency (RAM is orders of magnitude faster than any disk)
[13:31:15] <synapse> Ahh so basically if the query checks significantly more docs than it returns, e.g. checks 10,000 docs to return 2 results, that's a good example of where an index would be ideal
[13:47:10] <kurushiyama> synapse: it is a serverside construct.
[13:47:58] <kurushiyama> synapse: after your index is built, the query looks exactly the same, but mongod will utilize the index to make the query faster ;)
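(One way to confirm that, sketched in the shell:)

```js
// explain() shows the winning plan: IXSCAN means the index was used,
// COLLSCAN means every document was read.
db.coll.find({ foo: 'bar', answer: 42 }).explain('executionStats');
// Also compare executionStats.totalDocsExamined with nReturned.
```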
[13:57:45] <kurushiyama> synapse: Ok, a basic: Identify the questions you have on your data, model your data ___according to your questions___ (as opposed to modelling by entities) and add indexes for the fields you query. That's about all there is to it (details and intricacies omitted) and this is what about 90% of the people ranting about MongoDB get wrong.
[13:57:57] <synapse> should I add .hint to the query to force the use of the index I added
[13:58:08] <kurushiyama> synapse: This usually is not necessary.
[14:08:30] <synapse> I know when i tested on the shell without .hint it still used the index but for some reason in the code i can only assume it wasn't using it
[14:08:48] <synapse> but with the addition of it it hasn't crashed
[14:08:56] <kurushiyama> synapse: well... Dont ask me about JS.
[14:09:10] <synapse> you more of a PHP kinda guy? :)
[14:10:13] <kurushiyama> synapse: Yeah. Absolutely. I'd rather change my job for a promising career as a loaf of bread.
[14:10:41] <synapse> If I add a little timer to the onkeyup AJAX query so it doesn't send off a query on every single keystroke, it should be a lot more solid
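(A minimal browser-side debounce sketch of that idea; the element and search function names are hypothetical:)

```js
// Wait until the user pauses typing before firing the AJAX search.
var timer = null;
searchBox.addEventListener('keyup', function () {
  clearTimeout(timer);               // cancel the previous pending query
  timer = setTimeout(function () {
    searchUsers(searchBox.value);    // hypothetical AJAX call
  }, 300);                           // 300 ms of quiet before querying
});
```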
[14:14:34] <synapse> Is it really the rolls royce of server side web app building?
[14:15:14] <synapse> kurushiyama callback hell is being phased out with the use of chained methods / promises :)
[14:31:41] <kurushiyama> synapse: Rolls Royce? Not really, since it lacks mature frameworks, imho. The better comparison would be Ferrari – it is __fast__. And you compile static binaries. Deployment can be done by copying a (one) file from one machine to the next, if done properly.
[14:33:12] <kurushiyama> synapse: Where it excels is API building. This is where Go really rocks. Think of a Rolls Royce with a formula 1 engine and the capabilities of using it.
[14:34:22] <StephenLynx> 1- callback hell is just as bad as any other uncontrolled nesting. no different from nesting too many loops or conditions
[14:34:40] <StephenLynx> 2- single threaded isn't an issue because of how you have the cluster native module to use all cores of the cpu
[14:35:07] <StephenLynx> 3- neither is type safety because you have unambiguous operators such as '==='
[14:35:58] <kurushiyama> StephenLynx: It still is concurrent instead of parallel. Agreed, node does an impressive job, but still it is single threaded ;) And type safety enforcing during runtime is not exactly the problem. ;)
[14:36:33] <StephenLynx> what is the problem with being concurrent instead of parallel?
[14:37:00] <StephenLynx> you can use the whole CPU if you need.
[14:38:54] <kurushiyama> StephenLynx: Well, let's say you have like 4 users and 4 cores. With true parallelism, those would be processed in parallel. With concurrency, each worker yields from time to time. It is not that this would not work, but it makes less efficient use of the resources. When it comes to scaling out, the difference will become noticeable sooner or later.
[14:40:57] <StephenLynx> each process is single threaded
[14:41:04] <StephenLynx> you just have more processes to use all cores.
[14:41:34] <StephenLynx> 1 master process and as many worker processes, usually you spawn 1 worker for each core
[14:42:13] <StephenLynx> otherwise node software would be severely capped.
[14:43:24] <kurushiyama> StephenLynx: So, if I get you right (paraphrasing to check if I got you right): You have a single master node process, dispatching request to worker processes, for which one is spawned per CPU?
[14:44:15] <cdunklau> i'm following the getting started guide for node (https://docs.mongodb.org/getting-started/node/update/) and I've hit a wall with the update portion. code and error is here: https://gist.github.com/cdunklau/7eef9f4a3280007bf3ff
[14:44:53] <StephenLynx> you spawn the worker processes manually, though. so 1 per core is just what you usually want.
[14:45:17] <StephenLynx> and they can all share the same HTTP listener transparently.
[14:45:21] <StephenLynx> node handles that internally.
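(A minimal sketch of the pattern StephenLynx describes, using node's built-in cluster module:)

```js
var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
  // master: spawn one worker per core
  os.cpus().forEach(function () { cluster.fork(); });
} else {
  // workers: all listen on the same port; node shares the socket internally
  http.createServer(function (req, res) {
    res.end('handled by worker ' + cluster.worker.id + '\n');
  }).listen(8080);
}
```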
[14:45:44] <kurushiyama> StephenLynx: Interesting. One point less on my long list "Why not to do node..." ;) Thanks for the info! But I think cdunklau needs a node specialist, now ;)
[14:47:32] <synapse> kurushiyama well it looks like you have 3 points less not to do node now ;)
[14:47:52] <synapse> don't do loose comparisons; === not ==, as an example
[14:48:58] <kurushiyama> synapse: Nah, that's runtime. Last but not least – I used to be a sysadmin. I like go for the easy way to have a natively compiled application including all resources deployed by merely copying a file.
[14:49:24] <kurushiyama> synapse: No dependencies aside from libc
[14:49:49] <synapse> cdunklau $currentDate should be 'currentDate' as it's a field not an in-built mongo option on line 13 of pastebin
[14:50:08] <StephenLynx> compiled applications are not too practical for quickly iterating on software.
[14:50:22] <StephenLynx> and deploying is not an issue on scripted software when you use git to deploy.
[14:50:30] <cdunklau> synapse: ah, looks like that's new in 2.6 :(
[15:21:51] <Tachaikowsky> If something is written to mongodb using mongoose and secret key, can I use mongodb plugin now to read it without a secretkey?
[15:22:20] <cdunklau> the problem was originally difficult to figure out because the example with updates doesn't have the error === null assertion. i don't know any better so i took it out of my code. then i just got undefined in the console and was confused
[15:22:33] <kurushiyama> Tachaikowsky: If you are talking of the cluster key file: nafaik.
[15:22:51] <cdunklau> the examples before (querying and such) have assert.equal(err, null) in the callback
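(The pattern in question; 'collection', 'filter', and 'update' are placeholders. Without the assertion, the error is silently dropped and the result is just undefined:)

```js
var assert = require('assert');
collection.updateOne(filter, update, function (err, result) {
  assert.equal(err, null);            // surface driver errors immediately
  console.log(result.modifiedCount);  // only reached on success
});
```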
[15:23:06] <cdunklau> kurushiyama: it's just the one on debian sta(b)le
[15:23:17] <Tachaikowsky> kurushiyama, in my case - the key is a 'secretword' that is in my config.js (I am working with node.js)
[15:24:58] <cdunklau> i'm learning mongo and node with the hopes of moving from tech support to development at my company
[15:25:03] <kurushiyama> cdunklau: You'll likely run into driver problems, unless you are very careful etc.
[15:25:06] <cdunklau> and i'll bet we're using an old old version
[15:25:18] <cdunklau> kurushiyama: okay, i'll keep that in mind. thanks!
[15:25:19] <synapse> cdunklau that syntax looks so odd to me, where the function was tagged onto the query, I'm a beginner too but that does look strange as
[15:26:02] <kurushiyama> cdunklau: Use a docker image of a more recent version of Mongo
[15:26:09] <cdunklau> synapse: seemed natural enough to me. i'm just getting started with mongo, but i've played with node and other async stuff to become fairly comfortable with that style
[15:27:36] <synapse> ahh so updateOne in the mongo driver api docs must define an additional callback as a parameter, something you wouldn't have in the shell of course
[15:33:03] <kurushiyama> cdunklau: Sorry ;) Well, as for your playing around, I'd use a docker image for a more recent version of MongoDB. And you might want to have a look at MongoU
[15:33:08] <synapse> I think my synapses are having a day off today
[15:37:19] <kurushiyama> StephenLynx: vastly OT, but do you use bower?
[15:37:22] <synapse> Tachaikowsky just taking a random stab at this as it's not something I know much about but I assume the secret key is only there to validate a session cookie on the client, I don't see why it would affect your ability to read a document from the db?
[15:43:57] <StephenLynx> kurushiyama, dont try to understand webdevs.
[15:44:15] <kurushiyama> StephenLynx: I have to. Have to write a frontend...:/
[15:44:16] <StephenLynx> they are just like that thing about a thousand monkeys on typewriters
[15:44:31] <StephenLynx> eventually spitting out somewhat usable software
[15:45:07] <cdunklau> so there's a lot of mongo hate, some of it's based on old issues that have been fixed, but the arguments about data-you-think-isnt-relational-until-it-is seem to be pretty reasonable. I'm sure you guys have to deal with the hate a lot in here, so i'll try to keep it specific and opinion-free.... What are the use cases where mongo really shines?
[15:45:28] <kurushiyama> StephenLynx: Infinite Monkey theorem. Sounds about right... I could not describe how much I hate frontend development.
[15:45:48] <kurushiyama> cdunklau: Let me give you an example
[15:45:56] <StephenLynx> large datasets that are not too relational and don't follow a strict schema, cdunklau
[15:46:07] <kurushiyama> cdunklau: Many people say MongoDB can not be used for financial transactions.
[15:46:23] <kurushiyama> cdunklau: Which could not be farther from the truth.
[15:46:42] <kurushiyama> cdunklau: When banks do bookings, they document the actual booking.
[15:46:57] <kurushiyama> cdunklau: So they have accounts and bookings.
[15:46:59] <StephenLynx> well, if your system depends deeply on transactions, mongo will not be optimal, because you will have to implement the transactions in application code.
[15:48:06] <kurushiyama> cdunklau: A booking would be {_id:someId, deductFrom: acctId1, transferTo: acctId2, amount:200.00}
[15:49:03] <StephenLynx> that's nothing inherent about financial software, though.
[15:49:05] <synapse> I used to use postgres but I recently switched to mongo purely for the experience of using it with node, the JSON format is handy but I think postgres supports that too
[15:49:16] <kurushiyama> cdunklau: ever wondered why it takes even directly connected cash machines a while to find out whether you can get some money? Because they will do an aggregation on the bookings.
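(A sketch of such a balance aggregation over booking documents like the one above; 'acct' stands in for a real account id:)

```js
// Sum everything transferred to the account, minus everything deducted from it.
var acct = acctId1; // hypothetical account id
db.bookings.aggregate([
  { $match: { $or: [{ transferTo: acct }, { deductFrom: acct }] } },
  { $group: {
      _id: null,
      balance: { $sum: { $cond: [
        { $eq: ['$transferTo', acct] }, '$amount',
        { $multiply: ['$amount', -1] }
      ] } }
  } }
]);
```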
[15:49:40] <kurushiyama> StephenLynx: But that's an example you often see in the context "You cant do this with Mongo"...
[15:50:28] <StephenLynx> well, any "you can't do X" is a bad example.
[15:50:35] <kurushiyama> cdunklau: To make a long story short: There are _very_ few things you absolutely can not do with MongoDB. Everything else is prone to sensible data modelling.
[15:50:37] <StephenLynx> one thing is doing, the other is doing optimally.
[15:51:14] <kurushiyama> StephenLynx: Actually, the format of bookings with a referral to accounts is how banks exchange bookings ;)
[15:51:15] <StephenLynx> if your implementation in X tool is a flawed hack, you might as well be using Y tool.
[15:51:27] <Tachaikowsky> synapse, to confirm - YES. It works. The secret key is associated with the plugin, and the db details can be directly accessed from mongoshell
[15:51:33] <StephenLynx> again, nothing about banks and stuff.
[15:52:30] <synapse> Tachaikowsky thanks for the update, that's how I expected it to work, but as I say it's just instinct as I don't know anything about mongoose. Secret keys on the server side are used to validate the client's session
[15:53:07] <Tachaikowsky> StephenLynx, yes I am moving away from mongoose - exactly why these questions are being asked :)
[15:53:11] <kurushiyama> StephenLynx: I get you. It was just an example. Transactions imho are overestimated. From my experience, if you need a transaction spanning multiple tables, that's a good candidate for a corresponding document in MongoDB.
[15:53:32] <synapse> I was told fiercely to not use MongoDB but the truth is I didn't really understand why the said person was so adamant about it
[15:53:39] <StephenLynx> constant runtime aggregations are too slow, though.
[15:53:42] <cdunklau> kurushiyama: well i was wondering about the transaction/data integrity thing too, but i don't know enough about it to really know one way or the other
[15:53:44] <Tachaikowsky> synapse, I just want to confirm - any query that is executable on mongoshell can also be executed through node.js without much modification?
[15:55:03] <StephenLynx> that one is supported by the driver.
[15:55:06] <kurushiyama> cdunklau: Well, I have _started_ a series of blog posts. Need to finish the second article this WE... never seem to find the time.
[15:59:26] <cdunklau> kurushiyama: please make time, there are not enough of these types of posts
[16:00:33] <kurushiyama> cdunklau: If you enjoyed it, please comment. Most important: Please add what you want to see in the future, so I can focus. Writing on data modelling for MongoDB can easily fill a book.
[16:01:21] <cdunklau> best practice documentation is sorely lacking for a *lot* of things
[16:07:27] <cdunklau> kurushiyama: i can't think of anything to contribute... this is so new to me i can't even have an opinion :)
[16:30:27] <Tachaikowsky> cdunklau, thanks a bunch
[16:31:00] <StephenLynx> Tachaikowsky, you are using the worst syntax for that though
[16:31:16] <StephenLynx> it should be db.col.update({query to match},{update to perform})
[16:31:28] <cdunklau> Tachaikowsky: i do hope you're validating the incoming data :)
[16:31:36] <Tachaikowsky> Oh yes, that's great thanks!
[16:32:35] <StephenLynx> on node you just pass your callback as the last parameter. or if you wish to perform multiple operations on the cursor, like sort and limit, you call toArray() as the last function and pass the callback to it
[16:33:23] <StephenLynx> blabla.find({query},{projection, if desired}).sort({field and direction}).toArray(function gotArray(error, data){});
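(Spelled out with concrete, made-up names, that chain might look like:)

```js
db.collection('users')
  .find({ active: true }, { name: 1 })  // query + projection
  .sort({ name: 1 })                    // ascending by name
  .toArray(function gotArray(error, data) {
    if (error) return console.error(error);
    console.log(data.length + ' users found');
  });
```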
[16:33:51] <Tachaikowsky> StephenLynx, I dont quite understand the last part (from toArray)
[16:34:11] <StephenLynx> it gets all the data from the cursor as an array
[16:35:16] <StephenLynx> your two other options are:
[16:35:25] <StephenLynx> keeping the cursor open and fetching one document at a time
[16:35:39] <StephenLynx> or performing multiple queries, fetching one document at a time on multiple cursors.
[16:36:13] <StephenLynx> usually I fetch everything as a single array if I know there isn't too much data, or use one cursor at a time if I know there might be too much data.
[16:36:44] <StephenLynx> using a single cursor and getting one document at a time usually isn't worth it.
[16:37:00] <StephenLynx> too much code, and your cursor will time out if your operations take too long
[16:37:11] <StephenLynx> you will never get a timeout with the other two options.
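(A sketch of the one-document-at-a-time option on a single cursor; the collection and handler names are made up:)

```js
var cursor = db.collection('bigStuff').find({});
(function step() {
  cursor.next(function (err, doc) {
    if (err || doc === null) return;  // failed, or cursor exhausted
    handleDoc(doc);                   // hypothetical per-document work
    step();                           // only pull the next doc when ready
  });
})();
```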
[17:48:20] <synapse> StephenLynx .toArray... damn I've been using a .push all this time
[17:56:50] <synapse> my function StephenLynx that gets used inside the core of the app doesn't take a db argument, as that's handled on the module page I posted. I just do searchUsers(query, .....
[17:57:29] <StephenLynx> you connect to the db every time you get an instance of search users?
[17:59:14] <synapse> StephenLynx I stress tested it earlier and it was dying so I knew it needed improving that's why I joined the channel to ask about it, I was told to add indexing which I did
[17:59:32] <StephenLynx> the connection dance isn't helping either.
[17:59:51] <StephenLynx> change so you only connect once and get a reference to the collections once.
[18:00:00] <StephenLynx> then you reuse the collection reference or db reference if needed.
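(A minimal 'connect once, reuse everywhere' sketch with the 2.x node driver; the url and collection name are placeholders:)

```js
var MongoClient = require('mongodb').MongoClient;
var users = null; // cached collection reference, filled in at startup

MongoClient.connect('mongodb://localhost:27017/myapp', function (err, db) {
  if (err) throw err;
  users = db.collection('users'); // grab references once
});

// every later call reuses the same pooled connection
function searchUsers(query, callback) {
  users.find({ name: { $regex: query, $options: 'i' } }).toArray(callback);
}
```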
[18:00:08] <synapse> so what it holds the collections in RAM?
[18:13:46] <StephenLynx> the more usage a piece of software gets, the more of this unpredictable and undesirable behaviour is revealed.
[18:13:55] <synapse> I totally agree, I was lazy on the error handling, but it wouldn't be that way if it was a fully working app. I'm literally just trying to get off the ground with mongo and node, ya know?
[18:14:17] <StephenLynx> yeah, but I don't critique based on context.
[18:14:31] <synapse> of course and i appreciate it
[18:14:38] <StephenLynx> i just tell you how it is :v
[18:15:19] <StephenLynx> i have myself some code that would easily break if the input were some unexpected stuff.
[18:15:37] <StephenLynx> but given their context, I don't mind it.
[18:15:50] <StephenLynx> like how the admin of the site inputs settings.
[18:16:04] <StephenLynx> and how he could input weird stuff to make it break.
[18:16:11] <synapse> I have to admit StephenLynx, Node has been a peculiar language for me to get to grips with as it's very different to traditional OOP or procedural languages I've used before. I found coming up with a way to organise the code a minefield
[18:16:23] <StephenLynx> protip: dont use OOP on js.
[18:18:54] <synapse> 'this' was weird before I understood how it works
[18:18:56] <StephenLynx> it makes everything worse.
[18:19:05] <StephenLynx> 'this' is not useless as a concept, just on js.
[18:19:13] <StephenLynx> since you shouldn't be using OOP there in the first place.
[18:19:46] <synapse> The problem is not so much me, but what everyone else does, you know? If I have to depend on libs I kinda need to follow how they're built
[18:20:00] <StephenLynx> most libs on node are garbage
[18:20:15] <StephenLynx> and if the lib dictates how your code is written
[18:20:21] <StephenLynx> that is the biggest red flag you could get.
[18:20:24] <StephenLynx> never use a library like that.
[18:20:26] <synapse> I had to be able to understand them when I read them to know what they were doing, and they all make use of the proto chain
[18:20:43] <StephenLynx> you dont. you just have to understand what they spit out on their callback or their return.
[18:20:52] <StephenLynx> if you have to understand more than that, its a bad lib.
[18:21:07] <synapse> So I ended up implementing it in my own code not because I had to but because I figured well if libs are built this way it can't be that bad?
[18:28:56] <StephenLynx> you should write your js as you write your C
[18:29:03] <StephenLynx> this is the recipe to make js a sane language
[18:29:59] <synapse> The problem comes though as I said, lets say I end up in an environment that means I need to use JS but it's insisted I do it the "conventional" way.... I'd be lost
[19:29:47] <synapse> you don't know about heretics? It was a period in history where anyone who questioned the religious doctrine in society was tortured :)
[19:30:51] <StephenLynx> brazil would be pretty much fixed on twenty years or so.
[19:31:49] <StephenLynx> because they are both the cause and the product of our issues.
[19:32:04] <StephenLynx> they are poor because of shitty politicians and they are the ones that keep electing these.
[19:32:19] <synapse> I do completely understand your view
[19:32:23] <StephenLynx> and as a byproduct we get less personal freedoms because of religious fanatics.
[19:32:48] <synapse> I often feel that way here that it seems like the majority of people don't think or question anything and they end up voting in the governments
[19:33:01] <synapse> so it's like the idiots deciding power
[19:33:39] <synapse> People rarely form an opinion that goes beyond that of a tabloid newspaper
[19:46:16] <StephenLynx> routing is business logic
[19:46:23] <StephenLynx> and libraries have no say in that.
[19:46:57] <vak> how to pass an index hint to an aggregate() ? there is no info on this in docs and a call like this: db.c.aggregate(...).hint() leads to TypeError: Object #<Object> has no method 'hint'
[19:47:17] <StephenLynx> maybe you can't on aggregate?
[19:48:23] <Tachaikowsky> synapse - quick question, is the only way to verify whether something exists to do a find, then pass the output to an array, then check the length??
[19:48:40] <Tachaikowsky> synapse, sorry for pm - I asked in pm coz my question is very noobish
[19:48:40] <synapse> Tachaikowsky SQL has a query called COUNT(*) Mongo must have an equivalent
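(It does; two cheaper existence checks in the shell, with made-up values:)

```js
db.users.count({ name: 'bob' });   // number of matching documents
db.users.findOne({ name: 'bob' }); // the document itself, or null if none exists
```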
[19:48:52] <synapse> it's ok we're all here to learn from each other
[19:49:04] <Tachaikowsky> But I am a distilled idiot, I have nothing to offer :P
[20:54:39] <synapse> Tachaikowsky well there are many problems with how that's constructed. one thing you should know is never place anything after a return; return will exit the function, and nothing after it will be executed
[20:56:29] <synapse> Second, that will cause a connection to the db every single time the query is run, which in a real world setting will kill your app (literally had this problem myself and learned the hard way). you need to create a connection cache / pointer, something which I've yet to implement myself, but StephenLynx just provided me with some well written project code in node js which I will take a look at to learn how to implement such a thing
[20:58:22] <synapse> The idea behind async code is that you can get on with other tasks whilst something is still being executed, i.e. it's non-blocking. e.g. we don't wait to read in a 10gb file before doing anything, we get on with other things; that's where callbacks and events come in, Tachaikowsky
[20:58:53] <Tachaikowsky> how do I learn about async?
[21:09:48] <Tachaikowsky> ok I read that and got the concept of async
[21:10:17] <Tachaikowsky> but I dont really understand how we can do this in an async way
[21:14:47] <synapse> Tachaikowsky Basically when you define a function you specify a bunch of work to do, as you would in any other function; the function solves a task
[21:15:10] <synapse> What's different is what we then do with the result
[21:16:40] <synapse> instead of running a return value; to the caller, we run a function (the callback) that's passed as an argument, with the results of our work as the parameters of the function we call back.
[21:16:55] <synapse> Tachaikowsky By building your entire app this way, your app will be completely async
[21:17:09] <synapse> because nothing waits for results before moving on
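(A toy example of the callback style synapse is describing:)

```js
// The result is handed to a callback argument instead of being returned.
function addLater(a, b, callback) {
  setTimeout(function () {
    callback(a + b);          // 'return' the result by calling back
  }, 1000);
}

addLater(2, 3, function (sum) {
  console.log('sum is', sum); // runs a second later
});
console.log('asked for the sum, moving on...'); // runs immediately
```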
[21:24:34] <Tachaikowsky> I will run the code and let it happen in the background
[21:24:55] <Tachaikowsky> so i neeed it to be non-blocking
[21:24:56] <synapse> No... we'll make that file handler function a callback function so we can write other functions that do other tasks in our code that will also be callbacks. Our app will consist of these types of functions; our app becomes async
[21:26:38] <Tachaikowsky> I am unable to explain it properly
[21:26:54] <synapse> If you look at the node.js API, look at the Filesystem API: you will see there are two functions node offers for reading in files. You can choose the synchronous .readFileSync() function or the async .readFile() version of the same thing. They both achieve the same task; the difference is how the results are handled
[21:27:26] <synapse> One will make your app pause and wait, the other wont
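(The two flavours from node's fs API, with a made-up filename:)

```js
var fs = require('fs');

// Synchronous: the whole program waits here until the file is read.
var data = fs.readFileSync('big.log', 'utf8');

// Asynchronous: node keeps going and calls back when the file is ready.
fs.readFile('big.log', 'utf8', function (err, contents) {
  if (err) return console.error(err);
  console.log('file arrived: ' + contents.length + ' chars');
});
console.log('still doing other work while the file loads');
```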