[10:10:26] <crodjer> Does it make sense to use a hashed index to save space? I have long id keys. Also, what hashing algorithm will be used?
[10:13:35] <kurushiyama> crodjer: Well, hashing takes time. Not much, but it does. I would not. Not sure how index prefix compression works, but for me, that is usually enough.
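(For reference, creating a hashed index looks like the sketch below; the collection and field names are placeholders. The server stores a fixed-size 64-bit hash of the key, so for very long string ids the index entries can end up smaller than a regular index, but hashed indexes only support equality matches, not range queries or sorted scans.)

    // hypothetical collection "items" with long string ids in "longId"
    db.items.createIndex({ longId: "hashed" })
    // equality lookups can use the hashed index transparently:
    db.items.find({ longId: "some-very-long-identifier" })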
[13:27:48] <gain_> hello to all, I need to search by id, but find({_id:"asd123"}) obviously doesn't work because the _id is an object and not a string...
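(If the _id values really are ObjectIds, the query value has to be wrapped in ObjectId(); a quick sketch in the mongo shell, with a placeholder collection name and hex string:)

    // _id stored as an ObjectId:
    db.mycollection.find({ _id: ObjectId("507f1f77bcf86cd799439011") })
    // _id stored as a plain string:
    db.mycollection.find({ _id: "asd123" })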
[14:01:51] <kurushiyama> gain_: To be fair: It is not node. Not even the ecosystem. But sometimes, when I see node related questions on SO, I ask myself whether the self-proclaimed developers should spend less time demanding their jobs be done for them and more time thinking.
[14:02:55] <cheeser> if people did that, we'd have a shot at getting rid of js and putting in something decent.
[14:03:01] <kurushiyama> gain_: (or reading docs, for that matter). Do not get me wrong though. Personally, I avoid node like the devil avoids holy water, but I see the advantages.
[14:06:43] <kurushiyama> StephenLynx: Well, Perl had a thing that you had to announce a package on a mailing list. An "isPositiveInteger" would have been a very, very short joke.
[14:27:00] <edrocks> I'm maxing out all my servers
[14:27:29] <kurushiyama> edrocks: I'd _really_ have a deep look into the memory utilization before doing so.
[14:28:10] <edrocks> kurushiyama: I had to buy a 3rd one anyway and I got a good deal + It costs a lot to drive up and install stuff
[14:29:27] <kurushiyama> edrocks: Usually, people tend to max out memory because of problems.
[14:30:19] <kurushiyama> edrocks: And usually, those problems turn out to be either mass storage IO related or bad indices/query order/you know the stuff.
[14:30:54] <kurushiyama> edrocks: But ofc, more RAM would not hurt.
[14:31:29] <edrocks> kurushiyama: I wanted more ram. I was going to buy 64gb for ea server but I decided to max them
[14:31:30] <kurushiyama> edrocks: As long as you are still below the "sweet spot" adding RAM can actually save money ;)
[16:59:43] <kurushiyama> edrocks: Sorry, got distracted on #go-nuts. Uhm. so you'll have 128Gigs in the members?
[17:00:14] <edrocks> kurushiyama: I'll have 3 servers with 382GB ea
[17:00:59] <edrocks> kurushiyama: they run some other stuff too, so probably around 100GB ea for mongodb
[17:01:08] <kurushiyama> edrocks: My bet is that most likely, your disk IO is much more of a bottleneck.
[17:01:30] <edrocks> at this point I just bought ram disks
[17:02:10] <edrocks> I got a great deal on ssds too
[17:02:15] <kurushiyama> edrocks: What is the other stuff running on those servers?
[17:02:44] <edrocks> kurushiyama: elasticsearch, redis, influxdb(on one server), some internal stuff and the app servers
[17:03:23] <kurushiyama> edrocks: Not ideal, since both InfluxDB and MongoDB are quite IO-heavy.
[17:03:40] <kurushiyama> edrocks: I'd rather scale out than scale up.
[17:03:51] <edrocks> kurushiyama: was cheaper to go up at this point
[17:04:13] <edrocks> kurushiyama: I agree with scaling out later on
[17:04:17] <kurushiyama> edrocks: Do you have IOWaits?
[17:04:35] <edrocks> kurushiyama: I don't believe so
[17:04:56] <edrocks> kurushiyama: it was more of a reliability thing with getting a 3rd server so all the cluster stuff worked as intended
[17:04:57] <kurushiyama> edrocks: What is your SSD setup? RAID? Which level, if yes?
[17:05:30] <edrocks> kurushiyama: RAID 1 iirc. the third server will be RAID 10 with 4 ssds, the other 2 have 2 larger ssds each
[17:06:04] <kurushiyama> edrocks: Makes sense, assuming the RAID10 one will be the one bearing influxdb.
[17:06:22] <edrocks> the 1tb of ram was mostly so I could forget about going there for a while and I got the price down by buying a bunch at once
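(On the IOWaits question above: a rough way to check, assuming a Linux host; iostat comes from the sysstat package and mongostat ships with MongoDB.)

    # %iowait and the per-device %util columns point at storage pressure
    iostat -x 1
    # the qr|qw (queued readers/writers) column in mongostat often tells the same story
    mongostat 1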
[17:38:01] <shlant> anyone know why I would get "SCRAM-SHA-1 authentication failed for blah on admin from client 172.17.0.2 ; AuthenticationFailed SCRAM-SHA-1 authentication failed, storedKey mismatch"
[17:38:12] <shlant> does that mean the password is wrong?
[17:39:45] <StephenLynx> or the account does not exist
[17:40:14] <StephenLynx> maybe authentication is disabled and the client expects it to be enabled? just guessing.
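(A quick way to narrow that down from the mongo shell, assuming the "blah" user was created in the admin database; the password below is a placeholder. If db.auth() fails with the same storedKey mismatch, the supplied password genuinely does not match the stored credentials; db.getUsers(), run by an admin, shows whether the account exists at all.)

    use admin
    db.auth("blah", "thePassword")   // returns 1 on success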
[18:01:59] <dino82> hi all o/ -- How long does mongodump take before I actually see files being written? Is it supposed to be immediate? Also, can I run a backup while the node I am connecting to is still indexing?
[18:04:58] <kurushiyama> dino82: Foreground or background index?
[18:15:51] <kurushiyama> dino82: Not so simple. I was actually trying to suggest that you read the docs _before_ you use an operation to understand what is going on.
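(For context, mongodump normally starts writing .bson and .metadata.json files under the output directory almost immediately, collection by collection; the flags below are standard, the paths are placeholders. Whether it is safe alongside an index build depends on the build mode in MongoDB versions of this era: a foreground build blocks other operations on the database until it finishes, a background build does not.)

    mongodump --host localhost --db mydb --out /backup/dump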
[19:10:59] <alexi5> basically what I have is a collection with documents for a product and price. but when price changes I want to store the older version of the document
[19:11:52] <alexi5> so far I am thinking of either having an array called pricing versions in the document, but that will load all history when the document is fetched, or having a collection for storing the old product documents
[19:12:11] <kurushiyama> alexi5: I would actually treat this as a time series.
[19:13:04] <alexi5> as in a document has an array that contains all the old documents ?
[19:18:06] <alexi5> so store old documents in another collection ?
[19:21:36] <kurushiyama> alexi5: I'd do so. It is not that collections are expensive. If you need a pricing history, it does not get any cheaper. Use redundancy where necessary. I'd probably have a collection for storing the data I need for a product overview.
[19:22:42] <alexi5> and aggregation can easily be done to see past price history on documents
[19:31:35] <kurushiyama> alexi5: db.prices.find().sort({date:-1}).limit(1) for the current price. Add the fields you need for an overview, and you might be where you want.
[19:33:19] <alexi5> so current pricing and history all in the same collection
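(A minimal sketch of the time-series layout being discussed, with assumed collection and field names: every price change becomes its own document, and the "current" price is simply the newest one per product.)

    // one document per price change
    db.prices.insert({ productId: "p-123", price: 19.99, date: new Date() })
    // current price for a product:
    db.prices.find({ productId: "p-123" }).sort({ date: -1 }).limit(1)
    // full history, oldest first:
    db.prices.find({ productId: "p-123" }).sort({ date: 1 })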
[19:33:48] <Perseus0> I have a users collection in mongodb with a createdAt date field but can't seem to be able to return it when I do Users.find().fetch()
[19:35:20] <Perseus0> any clue how to get this returned?
[21:53:09] <bofh> Hi all! I have a mongodb and I need to export certain records to another database, what is the best way?
[21:58:13] <oky> bofh: there is literally a command called 'mongoexport'
[21:59:16] <bofh> I don't need everything, just records that match some search criteria
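(mongoexport takes a --query filter, so only matching documents are written out; the database, collection, and field names below are placeholders. The resulting file can then be fed to mongoimport against the other deployment.)

    mongoexport --db shop --collection orders --query '{"status": "shipped"}' --out shipped.json
    mongoimport --host otherhost --db shop --collection orders --file shipped.json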