#mongodb logs for Thursday the 16th of July, 2015

[08:10:46] <sooli> Hi there
[08:35:23] <sooli> I have a strange issue: when I try to create a User object from my ruby script, on save I get an ObjectId, but the document is not saved in the DB
[08:35:36] <sooli> If I run the request directly against my DB it's inserted
[08:35:51] <sooli> I tried on another model and it works
[08:45:32] <amitprakash> Hi, I am seeing a lot of command somedb.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", autoAuthorize: 1, payload: BinData(0, ...) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:162 locks:{} 253824ms
[08:45:38] <amitprakash> What exactly does this reflect?
[08:45:55] <amitprakash> Also, only noticing this on the 3.x replicas
[08:56:31] <puppeh> Hello. I'm in the process of converting a map/reduce (group operation) into an aggregation
[08:56:40] <puppeh> and having a little trouble understanding the sorting that happens
[08:58:18] <coudenysj> puppeh: what is happening?
[08:58:31] <puppeh> If I don't specify a "$sort" in my group operation, does it sort by a specific field?
[08:59:57] <puppeh> for ex. in this group operation, http://pastie.org/private/izitl0fnmvjpzll0xeaxlg, what is the sort criteria?
[09:00:15] <puppeh> judging from the results I get, it's the order that the documents were inserted in the first place
[09:00:39] <coudenysj> that would be correct
[09:00:59] <coudenysj> http://docs.mongodb.org/manual/reference/operator/meta/natural/
[09:02:09] <puppeh> oh so it's natural by default?
[09:03:12] <kali> yes. and don't assume it matches the insertion order as updates can move the documents around
[09:03:52] <coudenysj> The results are returned in the order they are found, which may coincide with insertion order (but isn't guaranteed to be) or the order of the index(es) used.
[09:07:10] <puppeh> so if I don't specify an order in my `group` operation, the order is not guaranteed, right?
[09:07:20] <coudenysj> nope
[09:07:44] <puppeh> it's just that in the process of converting that group operation (see link) to an aggregation, I got different result ordering
[09:08:12] <coudenysj> if you really want an order, just specify it
[09:09:31] <puppeh> ok, if I have a document of the format: {"d": "13-05"}
[09:09:41] <puppeh> does it make sense to order by "d" which is a string containing a number?
[09:10:39] <puppeh> in other words, how would you sort documents like this? assuming you want an order of "13-05", "13-06", "13-07" etc.
[09:13:13] <puppeh> as far as I see { "$sort" => { "d" => -1 } } in the aggregation does the trick, but I'm not sure why and if this is correct
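A side note for later readers: MongoDB compares strings byte by byte, so zero-padded values like "13-05" happen to sort lexicographically in chronological order. For the ascending order puppeh describes ("13-05", "13-06", "13-07"), the direction should be 1, not -1. A minimal shell sketch (the collection name is illustrative):

```javascript
// Zero-padded strings sort lexicographically in the intended order;
// 1 = ascending ("13-05", "13-06", ...), -1 = descending.
db.events.aggregate([
  { $sort: { d: 1 } }
])
```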
[09:54:32] <puppeh> any ideas how I can convert this to an aggregation? http://pastie.org/private/8bajoln3tgs0izsvw1wq
[10:20:01] <johnflux_> Stupid question - can I make lots of indexes?
[10:20:06] <johnflux_> I have a large read only dataset
[10:20:14] <johnflux_> can I just create an index for every variable?
[10:20:25] <johnflux_> (it's about 15 variables)
[10:20:38] <johnflux_> s/variable/field/
[10:44:13] <Derick> johnflux_: you can make up to 63 indexes
[11:11:09] <johnflux_> Derick: is that per collection?
[11:11:27] <kali> johnflux_: yes
[11:12:21] <puppeh> hey, I'm trying to convert a ".group" operation to an aggregation or a map/reduce. Any help please? http://pastie.org/private/yrsluiju2ffojxs36efaba
[11:12:23] <johnflux_> if I have: db.actionlaunch.aggregate( { $group : { _id : { "appVersion" : "$appVersion", "token" : "$token" } } },...
[11:12:51] <johnflux_> would that use the index on just 'token' ?
[11:13:07] <johnflux_> or would it only use an index if I specifically had a compound index for appVersion and token?
[11:13:35] <Derick> johnflux_: indexes are used for $match (and $sort) only
[11:13:54] <Derick> a $group needs to scan over the whole collection (or pipeline), so an index is not going to help there
[11:14:05] <johnflux_> oh
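A sketch of the distinction Derick draws: an indexed $match at the front of the pipeline lets the server cut down the input before $group scans whatever remains (the "2.1" value is illustrative):

```javascript
// Only a leading $match (or $sort) can use an index;
// $group always scans its entire input.
db.actionlaunch.createIndex({ appVersion: 1 })
db.actionlaunch.aggregate([
  { $match: { appVersion: "2.1" } },   // index-eligible
  { $group: { _id: { appVersion: "$appVersion", token: "$token" } } }
])
```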
[11:22:41] <puppeh> any insights on how I might convert this?
[11:49:04] <johnflux_> How can I count the number of distinct "token"s that I have?
[11:49:09] <johnflux_> I'm trying: db.actionlaunch.distinct("token").length
[11:49:20] <johnflux_> which worked on my small dataset, but fails on large
[11:49:25] <johnflux_> "errmsg" : "exception: distinct too big, 16mb cap",
[11:57:11] <johnflux_> I think I do it with:
[11:57:13] <johnflux_> db.myCollection.aggregate( {$group : {_id : "$token"} }, {$group: {_id:1, count: {$sum : 1 }}}) .result[0].count;
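One caveat on that line: from MongoDB 2.6 on, aggregate() in the shell returns a cursor, not a document with a .result array, so .result[0].count is pre-2.6 syntax. A version-safe variant of the same idea:

```javascript
// Count distinct tokens without distinct()'s 16MB result cap.
// toArray() is safe here: the pipeline emits a single document.
var res = db.myCollection.aggregate([
  { $group: { _id: "$token" } },                  // one doc per distinct token
  { $group: { _id: null, count: { $sum: 1 } } }   // count those docs
]).toArray();
var total = res.length ? res[0].count : 0;
```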
[13:20:41] <jamieshepherd> I have a collection of reviews, these are posted by authors (users), so they contain an author value (which is the _id of the user). At the moment I was going to embed the author as a subdocument in the review, e.g. review { author { _id: 123, name: jim } }, but if they change their name I'd have to update all those records. Not sure what the best way to do this in mongo is.
[13:23:07] <cheeser> name changes are pretty rare
[13:23:29] <cheeser> it'd be cheaper over time to embed the author name and save those subsequent queries
[13:23:30] <johnflux_> jamieshepherd: just don't update
[13:26:11] <jamieshepherd> You reckon? Interesting, can I query for specific _id's inside of objects then? If I wanted to get all reviews by an author I could get review { author { _id } }?
[13:27:36] <cheeser> sure.
[13:28:10] <cheeser> i'd call the id field in the embedded docs something different, though, because _id has certain connotations.
[13:28:40] <jamieshepherd> Sure, but my _id will be referencing an object id anyway - would that matter?
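A sketch of the embedding pattern cheeser describes, with the embedded identifier named something other than _id (all field names and values here are illustrative):

```javascript
// The review embeds a denormalized author snapshot; "authorId"
// avoids the special connotations "_id" carries in MongoDB.
db.reviews.insert({
  text: "Great product",
  author: { authorId: ObjectId("55a7f1c2e4b0a1b2c3d4e5f6"), name: "jim" }
})

// All reviews by a given author:
db.reviews.find({ "author.authorId": ObjectId("55a7f1c2e4b0a1b2c3d4e5f6") })
```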
[13:36:51] <Mindiell> hi there, simple question: is mongodb ok for storing web pages directly ? Or I have to parse/convert html to json ?
[13:38:42] <StephenLynx> yes. it is.
[13:38:45] <StephenLynx> I do that on lynxchan
[13:38:48] <StephenLynx> works perfectly.
[13:39:09] <StephenLynx> However, I had to convert the strings to a utf-8 buffer before writing the data to gridfs.
[13:39:23] <Mindiell> StephenLynx: so you're storing html directly ?
[13:39:26] <StephenLynx> yes.
[13:39:34] <Mindiell> hmm, interesting :o)
[13:39:39] <Mindiell> I have to try now
[13:39:41] <Mindiell> thx
[13:40:04] <StephenLynx> gitlab.com/mrseth/LynxChan look for src/be/engine/gridFsHandler.js
[13:40:11] <StephenLynx> if you want to see how I do it.
[13:40:18] <deathanchor> Mindiell: just remember the 16mb limit on doc sizes.
[13:40:24] <StephenLynx> no need to.
[13:40:34] <StephenLynx> because gridfs splits your file into multiple chunks.
[13:40:46] <StephenLynx> each chunk uses a document.
[13:41:15] <StephenLynx> being limited to 16mb files would render gridfs useless :V
[13:42:06] <Mindiell> even with such a limit, a webpage of more than 16mb is insane :o)
[13:42:22] <cheeser> indeed.
[13:42:30] <deathanchor> Mindiell: not the webpage, the entire document
[13:42:37] <cheeser> what does that mean?
[13:42:40] <deathanchor> you could have stored other data about the page
[13:42:51] <StephenLynx> you can't hit the limit on the document without a 16mb webpage
[13:42:58] <Mindiell> I think deathanchor is speaking about the total doc, with js, css, and images and maybe videos
[13:43:19] <deathanchor> or any other data you store, but again, that's pure mongodb, not gridfs
[13:43:20] <cheeser> images might push it over, yeah.
[13:43:37] <StephenLynx> and yeah, you can have arbitrary metadata, but that is on a separate document and hitting 16mb of metadata is ludicrous.
[13:44:10] <deathanchor> StephenLynx: programmers do all kinds of ludicrous things
[13:44:13] <lobmys> War and Peach worth of metadata..whew!
[13:44:18] <lobmys> *Peace
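For reference, a minimal Node.js sketch of what StephenLynx describes: encoding an HTML string as a UTF-8 buffer and writing it to GridFS. The GridFSBucket API is an assumption here (his actual code is in the gridFsHandler.js linked above, which may use an older driver interface):

```javascript
// Store an HTML page in GridFS. GridFS splits files into chunks,
// so the 16MB document limit applies per chunk, not per file.
var MongoClient = require('mongodb').MongoClient;
var GridFSBucket = require('mongodb').GridFSBucket;

MongoClient.connect('mongodb://localhost:27017/pages', function (err, db) {
  if (err) throw err;
  var html = '<!DOCTYPE html><html><body>hello</body></html>';
  var upload = new GridFSBucket(db)
    .openUploadStream('page.html', { contentType: 'text/html' });
  upload.end(new Buffer(html, 'utf-8'), function () {
    db.close();
  });
});
```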
[13:44:41] <jamieshepherd> @cheeser, if I'm not to use _id, is there a recommended convention instead (id maybe, or reference)?
[13:44:52] <jamieshepherd> I understand I can't use it as a top level field
[14:32:51] <lobmys> If I store an array of review ids within a product - wouldn't that array technically be considered growing without bounds?
[14:35:46] <cheeser> i would store the product id on the review document
[14:36:36] <lobmys> That makes sense - in general, we should avoid mutable, growing arrays, right?
[14:38:19] <cheeser> if you can, yes
[14:38:55] <lobmys> Data modeling is so much more interesting in Mongo
[14:39:31] <cheeser> :D
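A sketch of the referencing direction cheeser recommends: each review carries its product's id, so the product document never grows (names and values are illustrative):

```javascript
// Reviews point at products; the product stays a fixed size.
db.reviews.insert({ productId: ObjectId("55a7f1c2e4b0a1b2c3d4e5f7"), stars: 5 })

// Fetch all reviews for one product; index productId so the
// lookup stays cheap as reviews accumulate.
db.reviews.createIndex({ productId: 1 })
db.reviews.find({ productId: ObjectId("55a7f1c2e4b0a1b2c3d4e5f7") })
```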
[14:40:42] <Lope> I'm storing documents in a collection that include an array. Most of the time the array will be empty. I'll be querying to see if the array contains values.
[14:41:23] <Lope> Should I A: store the empty array if it's empty. B: Store a zero (or null) instead of an empty array. or should I omit the key entirely when the array is empty?
[14:43:08] <lobmys> Lope: I believe you could just leave it out since null and undefined are treated similarly.
[14:44:00] <lobmys> This might be helpful - http://docs.mongodb.org/manual/faq/developers/#faq-developers-query-for-nulls
[14:46:43] <yous> Is there a big perf difference between using an ObjectId for _id vs a string?
[14:47:48] <daidoji> hello, does anyone in here have experience piping from jq into mongoimport?
[14:48:19] <daidoji> I'm getting "exception bson representation of supplied json is too large" when running jq with --compact-output as a filter
[14:51:17] <lobmys> yous: I'd argue that the ObjectID is smaller but you could use Object.bsonsize(I think...) to see the size difference.
[14:52:05] <lobmys> Whoops - that's only for objects.
[14:54:02] <lobmys> yous: I just made two identical objects, one with an id of "aaa" and the other with an ObjectId - the string was 17 and the object was 21
[14:54:05] <diegoaguilar> Hello Im trying to export a database with this syntax:
[14:54:06] <diegoaguilar> mongoexport --db production-season-lf-199-2015 --collection player --type=csv --fieldFile fields.txt --out players.csv
[14:54:26] <lobmys> Granted, an 8 character string is 22
[14:54:36] <diegoaguilar> where fields.txt looks like _id,first_name,last_name,team.name,position,status
[14:55:37] <diegoaguilar> and it won't actually export properly; the output log says there are 591 documents exported, and in the generated csv I can see there are 591 new lines
[14:55:42] <diegoaguilar> what am I doing wrong?
[14:57:02] <cheeser> so the log says it exported 591 docs. and your file is 591 lines long. and what's the problem?
[14:57:20] <diegoaguilar> 591 new lines with empty content
[14:58:34] <cheeser> hahah. blank lines?
[14:58:39] <diegoaguilar> yep
[14:58:55] <cheeser> awesome
[14:59:48] <Lope> I read this page which suggests that omitting keys causes a performance penalty: http://stackoverflow.com/questions/9009987/improve-querying-fields-exist-in-mongodb
[15:00:07] <Lope> So I'm wondering what would index better: empty arrays, or a zero stored in place of the empty array?
[15:00:25] <lobmys> StephenLynx: If I remember correctly, you don't use _id's in relationships but use other uuid's?
[15:00:38] <StephenLynx> usually, yes.
[15:00:56] <webjay> shouldn't I be able to use ?readPreference=secondaryPreferred or is it uri.readPreference?
[15:01:04] <lobmys> Is that because you leverage the use of ObjectIds for _id?
[15:01:10] <Lope> $size will also involve a performance penalty: Queries cannot use indexes for the $size portion of a query, although the other portions of a query can use indexes if applicable.
[15:01:18] <Lope> http://docs.mongodb.org/manual/reference/operator/query/size/
[15:01:32] <Lope> So it seems I should just use a zero instead of the array, if the array is empty.
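A fourth option beyond the three Lope weighs: keep the array but maintain an indexed counter next to it, sidestepping both the $exists and the $size penalties. Field names below are illustrative:

```javascript
// Answer "does the array have values?" via an indexed scalar.
db.things.insert({ _id: 1, items: [], itemCount: 0 })
db.things.createIndex({ itemCount: 1 })
db.things.find({ itemCount: { $gt: 0 } })   // index-backed query

// Keep the counter and the array in sync in a single update:
db.things.update({ _id: 1 }, { $push: { items: "x" }, $inc: { itemCount: 1 } })
```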
[15:03:43] <johnflux_> i ran a command that took an hour to run
[15:03:49] <daidoji> diegoaguilar: fields.txt fields should be split by newlines not commas
[15:03:51] <johnflux_> the output is now being shown on the screen
[15:03:55] <daidoji> if I remember correctly
[15:03:57] <daidoji> check the docs
[15:04:01] <johnflux_> is it possible to now put that into a variable? or to late?
[15:04:18] <cheeser> diegoaguilar: can you pastebin how you're trying to do the export?
[15:04:21] <johnflux_> (it's a aggregate command)
[15:07:47] <diegoaguilar> daidoji, that was the problem
[15:07:50] <diegoaguilar> thanks, I solved it
[15:07:51] <diegoaguilar> :)
[15:07:56] <diegoaguilar> thanks to u too cheeser
[15:09:40] <daidoji> diegoaguilar: np
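For anyone hitting the same error: --fieldFile expects one field per line; the comma-separated form belongs to --fields. The corrected fields.txt for the command above:

```
_id
first_name
last_name
team.name
position
status
```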
[15:14:03] <webjay> Hi. Shouldn't I be able to use ?readPreference=secondaryPreferred or is it uri.readPreference?
[15:19:30] <daidoji> oh man, jq --unbuffered was the solution in case any of you care :-(
[15:19:45] <daidoji> otherwise it helpfully strips out newlines somehow...
[15:41:02] <johnflux_> what's the right syntax for this:
[15:41:03] <johnflux_> > db.actionlaunch.find().limit(5).forEach(function(x){print("#{x._id}");} )
[15:41:17] <johnflux_> the "#{x._id}" bit
[15:41:40] <johnflux_> I thought I could put fields inside of strings, but I've forgotten the syntax and can't seem to find it on google
[15:43:08] <johnflux_> hmm they do the same as this here: http://stackoverflow.com/questions/14478304/redirect-output-of-mongo-query-to-a-csv-file
[15:43:23] <johnflux_> the only difference is that they are putting the command in a js file
[15:43:29] <johnflux_> and I'm doing it inside mongo
[16:12:19] <symbol> cheeser: Quick followup on Review -> Product and referencing the product id in the review instead of storing an array of review ids in the product: that decision was made because of the growing-without-bounds issue, but if we take the example of a product that has many parts, which is a set size, would you then store the parts in an array on the product?
[16:35:24] <daidoji1> johnflux_: can't you just do what you had but with "this[_id]")?
[16:36:16] <daidoji1> or just pass a projection to find: db.collection.find({}, {_id: 1}).limit(5).forEach(print)
[16:36:24] <daidoji1> that should work right?
[16:42:32] <johnflux_> daidoji1: hmm, thanks
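The root of the problem: "#{x._id}" is Ruby string interpolation, and the mongo shell is plain JavaScript, so it prints literally. Concatenation does the job:

```javascript
// The shell is JavaScript; build the string explicitly.
db.actionlaunch.find().limit(5).forEach(function (x) {
  print("id: " + x._id);
});
```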
[16:43:57] <johnflux_> I want to find the top 5 programs that people run, for each country
[16:44:43] <johnflux_> db.c.aggregate([ { $group : { _id : { continent: "$continent", name: "$name" } }, number : { $sum : 1 } },
[16:45:21] <johnflux_> so I'm doing this to count the number of times the program is run, for each continent (sorry, read continent above)
[16:45:38] <johnflux_> I think I can even sort this with:
[16:45:40] <johnflux_> { $sort : { _id.continent : 1, number } },
[16:45:57] <johnflux_> but how do I now limit it to the top 5 number's, for each continent?
[16:51:25] <daidoji1> johnflux_: {$limit}
[16:51:47] <daidoji1> johnflux_: oh wait per each continent
[16:52:26] <johnflux_> daidoji1: yeah exactly
[16:54:58] <daidoji1> you'll probably have to uhhh, {group continent, number} => {group continent, array_of_numbers} => {map a sort} => {unwind} or something like that
[16:55:17] <daidoji1> for that it might just be easier to write a map/reduce to pick top 5
[16:56:00] <daidoji1> or just use what you had and then walk with a custom javascript function that creates some object to print out top 5
[16:56:18] <daidoji1> map/reduce is probably what i'd pick though
[16:56:36] <johnflux_> is this how to do sort by continent:
[16:56:37] <johnflux_> { $sort : { _id.continent : 1, number : 1 } },
[16:56:44] <johnflux_> is that syntax right?
[16:56:47] <daidoji1> yes
[16:56:58] <johnflux_> sometimes I see pages use id instead of _id
[16:57:00] <johnflux_> is that just a typo?
[16:57:14] <daidoji1> when do you see them use id?
[16:57:32] <daidoji1> documents in mongo all have '_id' fields as far as I'm aware
[16:57:59] <johnflux_> I don't remember now - i'll look out for it next time
[17:12:48] <johnflux_> daidoji1: I can't get that sort syntax to work
[17:12:51] <johnflux_> { $sort : { _id.continent : 1, number : 1 } },
[17:12:58] <johnflux_> it complains that the . is an invalid token
[18:13:04] <daidoji1> johnflux_: use double quotes around that key
[18:13:09] <daidoji1> "_id.continent"
[18:13:19] <daidoji1> _id['continent'] also works I believe
[18:13:22] <daidoji1> whichever you find nicer
[18:14:25] <daidoji1> what's the fastest way to transform keys (i.e. keyfoo => keybar for documents) that won't overload my server with writes?
[18:14:28] <daidoji1> perhaps GothAlice??
[18:14:42] <kali> daidoji1: only "_id.continent" will work in that place
[18:15:28] <daidoji1> kali: ahhh
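Putting the pieces together, the stage johnflux_ was after looks like this (with -1 on number so the most-run programs come first, which the top-5 goal implies):

```javascript
// Dotted paths must be quoted string keys in shell object literals.
db.c.aggregate([
  { $group: { _id: { continent: "$continent", name: "$name" },
              number: { $sum: 1 } } },
  { $sort: { "_id.continent": 1, number: -1 } }
])
```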
[18:16:04] <GothAlice> daidoji1: http://docs.mongodb.org/manual/reference/operator/update/rename/ is the way you "transform keys", unless I'm missing something, which I suspect I am.
[18:16:22] <kali> and that will generate a lot of writes
[18:16:29] <GothAlice> Yup.
[18:16:40] <GothAlice> Any other approach will simply generate more.
[18:16:42] <johnflux_> daidoji1: is that the same for match too? like:
[18:16:44] <johnflux_> { $match : { continent:{$exists: true }, "payload.item.packageName":{$exists:true} } },
[18:17:25] <daidoji1> GothAlice: roger, will it do it in place or should I batch it? (space is an issue unfortunately...) :-(
[18:17:50] <daidoji1> johnflux_: probably
[18:18:42] <GothAlice> johnflux_: Aye. The issue is the "literal syntax" of the language being used. In pure JSON, *all* keys are quoted. In JS, you don't need to quote unless it contains symbols outside the regex: [a-zA-Z_][a-zA-Z0-9_]* (i.e. it's something that can't be used as a variable name). Python uses function call syntax, or dict literal syntax. With the former you are limited to variable name-safe keys, in the latter all are quoted (making it
[18:18:42] <GothAlice> compatible with JSON).
[18:19:19] <StephenLynx> daium GothAlice gridfs is great.
[18:19:39] <StephenLynx> implemented these days so the server would retrieve only a piece of a file.
[18:19:43] <johnflux_> GothAlice: ah thanks
[18:19:59] <GothAlice> I.e. dict(foo=27) is safe, dict(foo.bar=27) is uncompilable nonsense, {"foo.bar": 27} is good, and use of unquoted labels will actually use the variables: foo = "hi", {foo: 27} == dict(hi=27)
[18:19:59] <johnflux_> at the moment I'm doing: db.actionlaunch.aggregate(...
[18:20:17] <johnflux_> for the purpose of just testing, can I somehow limit this to just the first 100 items in actionlaunch ?
[18:20:29] <GothAlice> Conditionally add a $limit stage.
[18:20:30] <johnflux_> just so that I can see that I'm getting the right results before running it on the full thing?
[18:20:42] <johnflux_> GothAlice: right at the top?
[18:20:55] <GothAlice> johnflux_: Depends on the aggregate.
[18:21:13] <GothAlice> At the end will take a subset of the actual results, accounting for sorting, etc. across the whole dataset, which will likely still be intensive.
[18:21:17] <daidoji1> GothAlice: is there a way to use rename similar to a ? b: c
[18:21:23] <GothAlice> At the start would limit the aggregate pipeline to only processing X records out of the overall dataset.
[18:21:36] <johnflux_> GothAlice: right, that's what I want.
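A sketch of that placement, with stages drawn from the earlier discussion (the $match condition is illustrative):

```javascript
// $limit first caps how many documents feed the rest of the
// pipeline; handy for smoke-testing before a full run.
db.actionlaunch.aggregate([
  { $limit: 100 },
  { $match: { continent: { $exists: true } } },
  { $group: { _id: "$continent", number: { $sum: 1 } } }
])
```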
[18:21:48] <GothAlice> daidoji1: That looks like a ternary statement, and I have no idea what you're intending with that.
[18:22:12] <daidoji1> yeah, is there a way to use conditionals alongside the $rename in an update statement?
[18:22:16] <GothAlice> No.
[18:22:41] <daidoji1> GothAlice: :-( that sucks. Okay though, thanks for your help
[18:22:43] <GothAlice> http://docs.mongodb.org/manual/reference/operator/update/rename/ < from the docs, you pass it two field names. There are no expressions being evaluated.
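In other words, $rename takes only a flat old-name/new-name map, applied unconditionally to every matched document (keyfoo/keybar are the hypothetical names from daidoji1's question):

```javascript
// No expressions, no conditionals; multi: true applies it
// to every document that has the field.
db.mycollection.update(
  { keyfoo: { $exists: true } },
  { $rename: { keyfoo: "keybar" } },
  { multi: true }
)
```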
[18:25:03] <GothAlice> StephenLynx: Yeah, once I figured out MP4 frame encoding and forced the chunks to break on frame edges (for that media type) the streaming code I wrote got _insanely_ more awesome. :D GridFS is sweet.
[18:25:34] <johnflux_> when I do: .forEach(function(x){...
[18:25:35] <GothAlice> (No more accidental loading of two chunks just to get a frame header or whatever, after a seek.)
[18:25:44] <johnflux_> can I save javascript variables between calls?
[18:26:11] <GothAlice> johnflux_: http://www.webdevelopersnotes.com/tutorials/javascript/global_local_variables_scope_javascript.php3
[18:45:27] <johnflux_> GothAlice: I'm not sure how to apply that. How do I declare a global variable?
[18:45:44] <johnflux_> GothAlice: like: forEach(... ?
[18:45:51] <johnflux_> how do I put code before the function() call?
[18:46:08] <johnflux_> oh, maybe I put it completely outside
[18:46:30] <johnflux_> ah that's it!
[18:54:04] <johnflux_> daidoji1: I got the javascript code to work, to print out the top 5 programs for each continent
[19:24:52] <daidoji1> johnflux_: sweet
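For the record, a pipeline-only version of "top 5 per continent" needs a second $group that collects each continent's pre-sorted rows into an array and then trims it. The $slice aggregation operator required for the trim only arrived in MongoDB 3.2, after this conversation, which is why the client-side JavaScript route was the pragmatic choice here; the sketch assumes 3.2+:

```javascript
// Top 5 most-run programs per continent (requires $slice, 3.2+).
// $push preserves the order documents enter the $group, so the
// preceding $sort makes each array already ranked.
db.c.aggregate([
  { $group: { _id: { continent: "$continent", name: "$name" },
              number: { $sum: 1 } } },
  { $sort: { "_id.continent": 1, number: -1 } },
  { $group: { _id: "$_id.continent",
              top: { $push: { name: "$_id.name", number: "$number" } } } },
  { $project: { top: { $slice: ["$top", 5] } } }
])
```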
[19:27:18] <daidoji1> anyone got a second to let me know what I'm doing wrong?
[19:27:20] <daidoji1> https://gist.github.com/daidoji/4d818a90dc4997e5aca7
[19:27:31] <daidoji1> this rename seems to work on some of the fields but not on other fields and I'm not quite sure why
[19:28:55] <daidoji1> I have checked by hand all the fields that don't work in the original data set
[19:29:01] <daidoji1> so i'm really confused as to what is going on
[19:45:01] <daidoji1> oh wait nevermind...
[20:24:59] <ericmj> I'm developing a driver: if a user passes an empty list of documents to Collection.insertMany, should I error out or return an InsertResult without sending anything to the database?
[20:25:11] <ericmj> the protocol doesn't allow sending zero documents to insert
[20:25:16] <blizzow> Just upgraded my sharded cluster from 2.6 to 3.0. Now my replica set members are experiencing really high load averages. Is this common behavior for a 3.0 install?
[20:27:54] <GothAlice> ericmj: db.insert({}) is valid. The driver pre-processes it by adding _id as a new ObjectId instance, then submits the {_id: ObjectId(…)} document.
[20:28:11] <GothAlice> (Any time _id is missing, it's up to the driver to add it when inserting.)
[20:28:28] <ericmj> GothAlice: right, I am talking about a list of documents here and the list is empty
[20:28:29] <GothAlice> blizzow: No, not really.
[20:28:39] <ericmj> GothAlice: so there are no documents
[20:29:35] <GothAlice> Oh, right. None of the drivers I use provide an insertMany, as they all provide BulkOperation abstractions.
[20:30:19] <ericmj> GothAlice: i see, I will check with one of the official drivers that provides the function
[20:30:47] <GothAlice> While the up-to-date (3.0) docs for pymongo indicate an insert_many, the driver version I have does not provide it.
[20:31:14] <ericmj> drivers are supposed to implement it according to this https://github.com/mongodb/specifications/blob/master/source/crud/crud.rst#write
[20:31:15] <GothAlice> (Nor does the mongo shell seem to provide it, at least insofar as tab completion can't find it.)
[20:32:32] <ericmj> the official drivers are pretty inconsistent even though 10gen provides a specification for drivers :(
[20:45:44] <GothAlice> This is true. Handily, they do accept pull requests. ;)
[20:57:44] <mastyd> Is this an expensive Mongo query? http://i.imgur.com/YRMSSWk.png
[21:31:11] <mastyd> Anyone who can help? Trying to figure out a bottleneck in my socket.io app; doing anything more than 20 concurrent requests a second shows 90% of request time being taken up by MongoDB. Doing this query: http://i.imgur.com/YRMSSWk.png. Any thoughts?
[21:45:08] <louie_louiie> @mastyd maybe if you put the "var position" into the "FindOneAndUpdate" function/packet?
[21:46:22] <mastyd> @louie_louiie you mean instead of instantiating a JS object and passing it into the find call, just inserting it straight from the function call? Not sure how that can help
[21:46:53] <louie_louiie> if its a processing power issue, there might be some bottle necking in the callback
[21:47:44] <mastyd> New relic is showing me the time being spent is mostly in Mongo: https://scontent-lga1-1.xx.fbcdn.net/hphotos-xpt1/v/t35.0-12/11211860_863179133737281_1639247832_o.jpg?oh=e9710ea2f768474c4756b646cc9b1001&oe=55AAC979
[21:48:04] <louie_louiie> inserting it into the function call might make the process more efficient... it's just a hunch
[21:50:01] <mastyd> Like so? http://i.imgur.com/cAfHBZk.png
[21:57:27] <ericmj> that's not going to make any difference
[22:02:53] <mastyd> ericmj: I didn't think so; shouldn't mongo be able to handle more than 20 concurrent writes a second or am I missing something fundamental here?
[22:03:43] <ericmj> yes, it should, i dont know why it doesnt though
[22:06:14] <mastyd> ugh
[22:06:51] <tejasmanohar> if speed isnt of importance, should you hash all values in your database?
[22:06:54] <tejasmanohar> for security?
[22:11:10] <ericmj> tejasmanohar: hashing values means you wont get them back
[22:21:23] <owen1> how to install mongo-clients from source?
[22:22:15] <owen1> (i use alpine linux and there is no package for ti)
[22:22:17] <owen1> ti=it
[22:50:47] <johnflux_> some of my rows have: rank: 2 and some rank: "2"
[22:51:37] <johnflux_> I'm doing:
[22:51:40] <johnflux_> > db.actionlaunch.aggregate([
[22:51:41] <johnflux_> ... { $group : { _id : "$payload.rank", number : { $sum : 1 } } },
[22:51:51] <johnflux_> can I convert the rank to an int if it's not?
[22:51:56] <johnflux_> somehow in this call
[22:53:17] <johnflux_> ah, google to the rescue
[22:54:04] <johnflux_> I want to use new NumberInt( "$payload.rank")
[22:54:20] <johnflux_> hmm, how do I put this code into the $group stage
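Closing the loop: NumberInt() is a shell constructor, not an aggregation operator, so it can't run inside $group, and server-side converters like $toInt only arrived much later (4.0). A one-off cleanup pass over the string-typed documents is one way out (names follow the discussion above):

```javascript
// Normalize rank so every document stores an int32.
// BSON type 2 selects documents where rank is a string.
db.actionlaunch.find({ "payload.rank": { $type: 2 } }).forEach(function (doc) {
  doc.payload.rank = NumberInt(doc.payload.rank);
  db.actionlaunch.save(doc);
});
```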