[03:55:13] <dman777_alter> does db.users.getIndexes() not show partial indexes?
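For reference, getIndexes() does list partial indexes; the partialFilterExpression shows up as a field on the index document. A minimal shell sketch, assuming a hypothetical users collection with a deleted_at field:

    // A partial index appears in getIndexes() output with its filter attached.
    db.users.createIndex(
      { email: 1 },
      { partialFilterExpression: { deleted_at: null } }
    )
    db.users.getIndexes()
    // [
    //   { "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "test.users" },
    //   { "v" : 1, "key" : { "email" : 1 }, "name" : "email_1", "ns" : "test.users",
    //     "partialFilterExpression" : { "deleted_at" : null } }
    // ]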
[09:17:56] <einseenai> hello, everyone! I'm trying to build a basic blog with Mongoose, Node and Express. I can get the list of entries, create an entry and delete entries, but when I proceed to an individual entry from the list, my server crashes and that's it. Here's the piece of code responsible for retrieving a post, and the output that I get - https://jsfiddle.net/pcfyesr4/1/ Could you please have a look and tell me what I am doing wrong here?
[09:19:03] <einseenai> oops, correct one - https://jsfiddle.net/pcfyesr4/1/
[09:21:14] <kurushiyama> einseenai Formally: you should check for the error first, then access the result – that being said, I know next to nothing about Node.
[09:21:54] <kurushiyama> einseenai And in your error check, you should probably log the error returned, so that we can find out more.
[09:24:03] <einseenai> kurushiyama, thank you, will check it now!
[09:37:44] <einseenai> kurushiyama, it works now :-)))
[09:38:08] <kurushiyama> einseenai Great! What was the problem?
[09:38:49] <einseenai> kurushiyama, it was exactly that I try to access entry before error handler
[09:39:24] <einseenai> kurushiyama, this one works https://jsfiddle.net/pcfyesr4/5/
[09:40:05] <einseenai> although, one strange thing I noticed - if I try to access an entry that doesn't really exist - like localhost:3000/blog/non-existent-entry - the server _does not_ crash, it just displays an empty entry
[09:41:44] <einseenai> kurushiyama, but it has to do with how I handle such cases myself, right?
[09:47:43] <kurushiyama> einseenai I do not know too much about node, but I assume that the template language gracefully handles null values.
[10:11:00] <einseenai> kurushiyama, thank you a lot, it made my day :-) for the sake of the channel log, the non-existent entry problem is basically solved like this - https://jsfiddle.net/pcfyesr4/6/
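The fiddle contents are not reproduced in the log; the sketch below captures the two fixes discussed above (error checked before the result is touched, and a missing entry handled explicitly), assuming a hypothetical Express route and a Mongoose model named Entry with a urlSlug field:

    // Hypothetical route: check the error first, then the "no such entry"
    // case, and only then render the result.
    app.get('/blog/:slug', function (req, res) {
      Entry.findOne({ urlSlug: req.params.slug }, function (err, entry) {
        if (err) {
          console.error(err);                               // log it so it can be diagnosed
          return res.status(500).send('Database error');
        }
        if (!entry) {
          return res.status(404).send('Entry not found');   // non-existent slug
        }
        res.render('entry', { entry: entry });
      });
    });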
[10:11:28] <kurushiyama> einseenai Glad I could help!
[10:14:54] <einseenai> kurushiyama, one last question - should I cache the results of database calls? I mean, it can't be a good thing to call the db on every page update, right? Cache to JSON, for example?
[10:16:08] <kurushiyama> einseenai By adding a cache, you add a moving part, and – please forgive me – you do not seem to be very experienced yet.
[10:16:44] <kurushiyama> einseenai So keep your application as simple as possible in the beginning. "Premature optimization is the root of all evil" -- Donald Knuth
[10:17:12] <einseenai> kurushiyama, it's true, I'm in fact, new to Node and Mongo, it's ok :-)
[10:17:34] <einseenai> kurushiyama, thank you for this valuable advice
[10:17:41] <einseenai> but later it should be done, right?
[10:19:07] <kurushiyama> einseenai Not really. The problem with caching is that getting eviction right is _very_ hard to do across a cluster.
[10:19:30] <kurushiyama> einseenai Wait a sec, please, I think I have something for you.
[10:19:43] <hyades> Hi. I have an aggregation cursor stream coming in. I then attempt to push the results into another collection. I am doing this using bulk upserts.
[10:19:44] <hyades> I expect my stream results to run into the millions, so the bulk operation created would be huge. I want to know the recommended way of doing this.
[10:35:11] <kurushiyama> hyades No stream, just simple cursors and bulk ops, kept small.
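A sketch of the "plain cursor plus small bulk batches" approach, written for the mongo shell; the events/summary collection names, the batch size and the empty pipeline are placeholders:

    // Walk the aggregation cursor and flush small unordered bulks of upserts
    // into the target collection instead of building one huge bulk operation.
    var pipeline = [ /* aggregation stages */ ];
    var cursor = db.events.aggregate(pipeline, { allowDiskUse: true });
    var bulk = db.summary.initializeUnorderedBulkOp();
    var pending = 0;
    while (cursor.hasNext()) {
      var doc = cursor.next();
      var id = doc._id;
      delete doc._id;                                  // _id itself must not be $set
      bulk.find({ _id: id }).upsert().updateOne({ $set: doc });
      if (++pending === 1000) {                        // keep each bulk op small
        bulk.execute();
        bulk = db.summary.initializeUnorderedBulkOp();
        pending = 0;
      }
    }
    if (pending > 0) { bulk.execute(); }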
[10:36:28] <hyades> kurushiyama: yea thanks for this. What problems does stream have?
[10:36:43] <kurushiyama> hyades There is no such thing as streams
[10:37:47] <kurushiyama> hyades The problem you _may_ have is that your aggregation needs more than 100MB of RAM, in which case you need to basically swap to disk and you might run into scatter/gather problems in sharded environments.
[10:38:58] <kurushiyama> hyades It is not that my solution will prevent either of them, but you split the problem into individual parts. When implemented properly, you might be able to do parallel processing of your aggregation result set.
[10:41:52] <kurushiyama> hyades BTW: From my side, the alarm bells would ring when you need to do cross database data correlation or/and aggregation. This tends to be a good sign that there is something wrong with your data structure. imvho.
[10:41:59] <hyades> kurushiyama: you mean multiple find() s in parallel?
[10:43:51] <kurushiyama> hyades For example. Your language might be capable of using multiple cores, so you could start multiple threads/processes splitting the $out collection between them.
[10:53:02] <hyades> kurushiyama: yea got it. Thanks. Why is cross db aggregation bad?
[10:54:26] <kurushiyama> hyades Chances are that you split data over different dbs which should not be split. It is sort of circumstantial evidence.
[11:06:43] <ruohki> Hey, is it possible to cut down the characters of a field that is returned? For example if you want to build a "read more" text
[11:09:25] <kurushiyama> ruohki That should typically be done in the presentation layer...
[11:10:24] <ruohki> yeah, i could do that, but i am always trying to find alternative ways of doing things
[11:11:00] <ruohki> if the db server is located in a different location for example
[11:11:12] <ruohki> trying to keep the network load lower
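If the trimming really has to happen on the server to save bandwidth, the aggregation framework can project a truncated copy of the field; the posts collection and the body/title fields below are made up, and note that $substr counts bytes rather than characters:

    // Ships only a 200-byte teaser instead of the full body.
    db.posts.aggregate([
      { $match: { published: true } },
      { $project: { title: 1, teaser: { $substr: [ "$body", 0, 200 ] } } }
    ]);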
[12:12:53] <kryo_> anyone using mongoose can explain this behavior? http://hastebin.com/uxuxudipuj.coffee
[12:21:12] <StephenLynx> yeah, mongoose is broken.
[12:54:04] <kryo_> StephenLynx: you think i should submit a bug report?
[13:42:59] <mkozjak> is there a way to make wiredtiger sync data to another persistent store like elasticsearch? so with every checkpoint (60 seconds), could it sync data to elasticsearch instead of its own default storage on disk?
[13:43:45] <mkozjak> or would i need to replicate data to elasticsearch and use TTL with mongodb?
[14:43:26] <pamp> but then I have to group the data in the second stage
[14:43:44] <pamp> like this --> "_id":{"mo":'$meta.mo.key',"param":'$meta.parameter.key'}
[14:43:55] <pamp> but this second stage is really slow
[14:44:20] <pamp> is there some way to improve performance in the group stage?
[14:46:54] <kurushiyama> pamp a) it would help if you used pastebin for your full aggregation b) indices are only used for $match and $sort _at the beginning of the pipeline_. c) the whole result set of your $match is fed into the subsequent stages
[14:48:01] <kurushiyama> pamp An aggregation is not necessarily meant as a replacement for queries. If you need performance on the result, preaggregate and use an $out stage
[14:51:53] <kurushiyama> pamp A slightly more precise definition of "really fast" and "really slow" could make sense, too. And: Do you run the aggregation on a sharded collection?
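A sketch of the pre-aggregation idea, reusing pamp's group keys; the measurements collection name, the $match stage and the accumulator are assumptions:

    // Run the heavy aggregation once, write it out with $out, and serve reads
    // from the resulting collection with cheap indexed queries.
    db.measurements.aggregate([
      { $match: { "meta.parameter.key": { $exists: true } } },   // indexed, first stage
      { $group: {
          _id: { mo: "$meta.mo.key", param: "$meta.parameter.key" },
          count: { $sum: 1 }
      } },
      { $out: "measurements_by_mo_param" }
    ]);
    db.measurements_by_mo_param.find({ "_id.param": "voltage" });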
[15:26:23] <jr3> hm is there a way to say, update where a date field is oldest
[15:45:19] <kurushiyama> jr3 Since only the first document found will be updated, simply use a sort on your date field in ascending order, which means that the first document found will be the oldest. Set the update document accordingly and you are good to go.
[15:49:57] <kurushiyama> jr3 Bad luck if you are in a sharded cluster, though. In that case, it gets a wee bit more complicated.
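One way to express that on a single replica set is findAndModify with an ascending sort on the date field; the jobs collection and its fields are hypothetical:

    // Sorting ascending on createdAt means the one document modified is the oldest.
    db.jobs.findAndModify({
      query: { status: "pending" },
      sort: { createdAt: 1 },                 // oldest first
      update: { $set: { status: "processing" } }
    });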
[15:56:46] <Perseus0> Hey, I'm still wondering where the db is being stored because this is all I get when I run these cmds on the mongo shell: http://kopy.io/dp9OQ
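For the data-directory question: the shell can report the options mongod was started with, and the dbPath in there (or the /data/db default on Linux when nothing is configured) is where the files live:

    // Look for parsed.storage.dbPath in the output.
    db.serverCmdLineOpts()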
[20:33:53] <dimaj> Hello, I was wondering if there's a way to get the timing of an aggregation query
[20:41:43] <Derick> dimaj: I'm not sure ... file a JIRA ticket?
[20:44:42] <dimaj> @Derick, Isn't Jira used for bug tracking?
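Jira aside, two low-tech ways to time an aggregation from the shell: crude wall-clock timing, or the profiler, whose system.profile entries carry a millis field. The orders collection and the pipeline are placeholders:

    var pipeline = [ /* aggregation stages */ ];

    // Crude wall-clock timing.
    var t0 = new Date();
    db.orders.aggregate(pipeline).toArray();
    print((new Date() - t0) + " ms");

    // Or let the profiler record it (level 2 logs every operation).
    db.setProfilingLevel(2);
    db.orders.aggregate(pipeline).toArray();
    db.system.profile.find().sort({ ts: -1 }).limit(1).pretty();  // check "millis"
    db.setProfilingLevel(0);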
[22:06:06] <hackel> I've got an invalid index and cannot even start mongod after upgrading to 3.2. How can I fix this? Do I have to downgrade mongod just to remove the invalid index?
[22:39:14] <kurushiyama> hackel Can you give a bit more info? Log entries? Startup warnings?
[22:40:56] <hackel> kurushiyama: I actually just ended up downgrading it to remove the index, and that worked fine. Thanks.
[22:41:27] <ellistaa> would mongodb be a good choice for something like tinder?
[22:41:33] <kurushiyama> hackel Well, would be just for my curiosity, then ;)
[22:41:58] <kurushiyama> ellistaa As good and as bad as (almost) any other DBMS. ;)
[22:42:26] <ellistaa> im thinking there arent many relations … so it might be a good hcoice
[22:42:33] <hackel> kurushiyama: The log entry that prevented the server from starting was: Found an invalid index { v: 1, unique: true, key: { email: 1.0 }, name: "email_1", ns: "db.users", sparse: true, partialFilterExpression: { deleted_at: null } } on the quirks.users collection: cannot mix "partialFilterExpression" and "sparse" options
[22:43:09] <hackel> (It allowed me to create it under 3.0.)
[22:43:23] <Derick> hackel: yes, because partialFilterExpression didn't mean anything I suppose
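The mechanical fix, once the server is running again under 3.0, is to drop the offending index and recreate it without the sparse flag, since 3.2 refuses the combination; whether dropping sparse matches the original intent of the index is a schema decision. The names below come straight from the log entry above:

    // Drop the 3.0-era index and recreate it with the partial filter only.
    db.users.dropIndex("email_1");
    db.users.createIndex(
      { email: 1 },
      { unique: true, partialFilterExpression: { deleted_at: null } }
    );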
[22:43:33] <kurushiyama> ellistaa It is a misconception (and a severely wrong one) that MongoDB cannot handle exceptions properly. They are just not as straightforward as people used to SQL know them to be ;)
[22:43:55] <ellistaa> i didnt say anything about exceptions
[22:45:52] <kurushiyama> ellistaa In general, I probably would use MongoDB (surprise, surprise) and use Cayley (a graph database which can use MongoDB as a storage backend and has a REST API) for accessing certain relations.
[22:49:03] <kurushiyama> hackel Thanks a lot, mate!
[22:53:35] <kurushiyama> Jeez, tinder is a single, big, fat data privacy nightmare
[23:03:22] <ellistaa> im using mongoose, can i not have a type object?
[23:04:47] <kurushiyama> ellistaa I suggest you take cover. All JS specialists (I am not one of them) seem to condemn Mongoose. Personally, the problem I see is that it makes it hard to play to some of MongoDB's strengths, since it misuses MongoDB as a simple object store.
[23:05:27] <ellistaa> how would i interact with mongodb if i dont use mongoose?
[23:31:45] <kurushiyama> ellistaa MongoDB is a document-oriented database, not an object store. One of its advantages is that with carefully crafted data models and queries you can achieve amazing performance. With Mongoose, being an ODM, you take away that possibility.
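Interacting with MongoDB without Mongoose means using the official Node.js driver directly: no schemas, just collections and plain documents. A minimal sketch in the 2.x driver style of that era, with the blog database, entries collection and field names made up:

    // Connect, grab a collection, query it.
    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/blog', function (err, db) {
      if (err) throw err;
      db.collection('entries')
        .find({ published: true })
        .sort({ createdAt: -1 })
        .toArray(function (err, docs) {
          if (err) throw err;
          console.log(docs);
          db.close();
        });
    });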
[23:36:55] <kurushiyama> Maybe ODMs actually are easier to grasp for people coming from relational data. Doing data modelling "by hand" can be hard to understand and is highly error prone, as proven by highly over-embedded documents.
[23:37:26] <cheeser> having used ODMs for a long, long time, i can tell you that's wrong.
[23:38:13] <kurushiyama> cheeser What specifically?
[23:38:45] <cheeser> are you saying using ODMs is highly error prone?
[23:52:34] <kurushiyama> With mgo, either you use it as a lightweight ODM using structs and tags, or you use maps. And it is fast. Nice to use. And performance comparisons made me learn Go just to use mgo.
[23:58:38] <kurushiyama> Tbh, Gustavo did an _awesome_ job. Many of the weird things stem from the fact that he tried to stay as close to Go's stdlib as possible, imho.