PMXBOT Log file Viewer


#mongodb logs for Friday the 24th of June, 2016

[00:41:09] <f00sh> Quick question - Anyone know how I can use .updateMany({ "Derp" : data.Hello}, data) ? data.Hello would be a different value every iteration
[00:41:46] <f00sh> My problem being I need to filter based on data being passed into updateMany
[00:51:26] <f00sh> I think I need to use $in
[00:51:31] <f00sh> But not sure how to integrate it there
[00:52:41] <Boomtime> @f00sh: you mean you want to update many documents but based on a different filter for each one?
[00:52:53] <Boomtime> that would be a bulk update
[01:00:53] <f00sh> Oh yeah thank you Boomtime I'll give that a shot
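A minimal sketch of the per-filter bulk update Boomtime is pointing at, assuming a hypothetical "things" collection and an "incoming" array shaped like f00sh's `data` objects:

```javascript
var incoming = [
  { Hello: "alpha", payload: 1 },
  { Hello: "beta",  payload: 2 }
];

db.things.bulkWrite(incoming.map(function (doc) {
  return {
    updateMany: {
      filter: { Derp: doc.Hello },   // a different filter per incoming record
      update: { $set: doc }
    }
  };
}));
```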
[07:08:07] <sumi> hello
[08:05:52] <Industrial> Hi!
[08:06:38] <Industrial> Say I have a collection with a .location key and a .deviceID key, how do I create an aggregation that picks out all the last locations?
[08:07:30] <Industrial> db.devices.find({}).forEach(function(v) { db.incoming.find({ deviceID: v.deviceID, location: { $exists: 1 } }).sort({ activityID: -1 }).limit(1).forEach(function(w) { db.locations.insert(w); }); });
[08:07:34] <Industrial> I have tried that before
[08:07:40] <Industrial> with weird results :-)
[08:08:29] <Industrial> So for each device I'm fetching the last document with a location key, and write all those to a new collection
[08:08:40] <Industrial> Should give me my users' last known locations, right?
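For reference, the same "last location per device" idea can usually be expressed as a single aggregation rather than nested forEach loops. A sketch reusing the field names from Industrial's snippet (deviceID, location, activityID), not a tested fix:

```javascript
db.incoming.aggregate([
  { $match: { location: { $exists: true } } },   // only documents that carry a location
  { $sort:  { activityID: -1 } },                // newest activity first
  { $group: {
      _id: "$deviceID",
      location:   { $first: "$location" },       // last known location per device
      activityID: { $first: "$activityID" }
  } },
  { $out: "locations" }                          // optional: write the result set to a new collection
]);
```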
[10:43:12] <fontanon> Hi. What is the usual procedure when you need to create temporary collections to perform further aggregations on them? I think creating temporary collections on the same prod-env database is really dirty.
[11:42:15] <chris|> when looking at the stats of a capped collection, I see a maxSize, a size and a storageSize value. Which of the latter two is relevant for when documents get deleted from the capped collection because of maxSize?
[13:12:22] <spleen> Hello All
[13:13:13] <spleen> What is the best way to know on what RSM a collection is written?
[13:13:32] <StephenLynx> wat
[13:13:33] <spleen> on a sharded cluster..
[13:13:39] <StephenLynx> rsm?
[13:13:45] <cheeser> replica set member
[13:13:50] <StephenLynx> ah
[13:13:56] <spleen> i mean Replica set
[13:14:00] <cheeser> spleen: data gets replicated to all members.
[13:14:19] <cheeser> in a replica set at any rate
[13:14:39] <spleen> humm.. i am gonna explain.
[13:14:55] <spleen> i have several Replica sets in a sharded cluster
[13:15:06] <cheeser> one behind each shard
[13:15:16] <spleen> i need to know, for a collection, where it is written
[13:15:44] <cheeser> the collection will "exist" on all shards. it's the documents that get shifted around
[13:16:37] <spleen> cheeser, i heard about the db.chunks.find(..) command to know that..
[13:16:48] <cheeser> what?
[13:22:14] <spleen> cheeser, my collections are present on one shard only.. or two shards when collections are chunked
[13:23:26] <cheeser> the collections should be "there" (as much as mongo supports the notion) everywhere regardless of document location. e.g., if you define a non-ID index, that index should exist on all shards.
[13:23:45] <cheeser> i *think*. they might be lazily propagated across shards.
[13:23:48] <spleen> I am looking for a way to know where a chunk of a collection is actually written: shardA, shardB, or shardC..
[13:24:00] <cheeser> but the big question i have is, what exactly are you trying to do?
[13:24:13] <cheeser> where things live in the sharded cluster is not usually something you have to care about.
[13:25:44] <spleen> cheeser, i have to move a collection to another shard. For that i need to know where a collection is
[13:26:20] <cheeser> that ... doesn't square with my understanding of sharding...
[13:26:34] <cheeser> but i'll confess that's a bit more a grey area for me.
[14:14:41] <kurushiyama> spleen Non-sharded collections are _always_ on the primary shard. Chunks are not written. Chunk ranges are assigned to shards – so you always know where a document gets written to. If a chunk is split, then it remains on the same shard until it gets balanced away.
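For what spleen is after, the chunk-to-shard assignments live in the config database (along the lines of the db.chunks.find(..) he mentions). A sketch, with "mydb.mycoll" standing in for a real namespace and run through a mongos:

```javascript
// Which shard holds each chunk range of a sharded collection:
db.getSiblingDB("config").chunks.find(
  { ns: "mydb.mycoll" },
  { shard: 1, min: 1, max: 1, _id: 0 }
);

// Per-shard document and size breakdown for one collection:
db.getSiblingDB("mydb").mycoll.getShardDistribution();
```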
[14:49:56] <dino82> So apparently I am replicating to a node that doesn't show up in rs.status() or rs.conf() ?
[14:50:58] <kurushiyama> dino82 Huh? That is a bit context free...
[14:52:31] <dino82> Well, if I rs.remove a node, it begins replicating, and if I rs.add it, it says "Our replication set configuration is invalid or does not include us"
[14:52:41] <dino82> It's like... inverted
[14:55:26] <dino82> Oh weird...
[14:56:18] <dino82> It thinks it's a different member for some reason
[15:01:42] <dino82> Nevermind, fixed
[15:01:47] <dino82> That's a weird one though...
[16:00:16] <saml> hey what web UI do you use to do CRUD in an ad hoc way?
[16:06:22] <StephenLynx> >web UI
[16:06:23] <StephenLynx> you
[16:06:24] <StephenLynx> wot
[16:33:21] <kurushiyama> saml None. shell.
[16:33:39] <kurushiyama> dino82 Please share!
[16:51:09] <jsheely> What would be the best way to store a javascript function in mongo? I'm using angular formly for the schema and I want to store both the JSON schema as well as functions
[16:51:51] <jsheely> Convert everything to a string and then do the naughty eval on the client?
[16:53:12] <cheeser> eww
[16:57:27] <StephenLynx> you don't.
[16:57:40] <StephenLynx> if you do that, you will have to run eval.
[16:57:52] <StephenLynx> which is not allowed by strict mode.
[16:57:57] <StephenLynx> which you should be using.
[16:58:40] <StephenLynx> mongo has its types, they are meant to be used.
[16:58:57] <StephenLynx> if your model consists of a bunch of overly complicated strings, it's a bad model.
[17:03:37] <jsheely> Hmm
[17:14:46] <kurushiyama> jsheely Expressive is better than smart.
[17:15:55] <jsheely> Well, it's not a matter of being expressive or whatever. It's literally how the schema works. So I either need to write some complicated abstraction over it or use it as is
[17:16:06] <jsheely> And if I use it as is. I need functions. So I have to put them somewhere
[17:24:22] <cheeser> /1/1
[17:24:56] <StephenLynx> mongo doesn't implement schemas.
[17:25:18] <StephenLynx> so you are hacking around the tool design if you want to do that.
[17:25:40] <StephenLynx> you store in a given type and expect it to not have changed overnight.
[17:26:05] <StephenLynx> and if it does change overnight, your computer is severely defective.
[17:26:16] <cheeser> well, there's document validation...
[17:26:32] <StephenLynx> ok, so there's that.
[17:26:42] <StephenLynx> if you REALLY want to have schema validation, you should use that.
[17:26:58] <StephenLynx> and if that is not enough, you either don't implement it or you use a different tool.
[17:27:05] <StephenLynx> those are the other 2 better options.
[17:28:49] <StephenLynx> anything outside those 3 is a bad move.
[17:59:53] <jsheely> Nevermind, We're not talking about the same thing
[18:05:05] <GothAlice> StephenLynx: You are entirely wrong on the "MongoDB doesn't implement schemas" bit, BTW.
[18:05:28] <GothAlice> https://docs.mongodb.com/manual/core/document-validation/ https://www.mongodb.com/presentations/webinar-document-validation-in-mongodb-3-2
[18:05:42] <StephenLynx> yeah, cheeser already reminded me of that.
[18:06:15] <GothAlice> It's something often overlooked.
[18:06:21] <GothAlice> (Being pretty new.)
[18:06:33] <StephenLynx> not only that, but it's not central to mongo's design.
[18:06:46] <StephenLynx> it's not like in an RDB where stuff needs that in order to work.
[18:06:52] <GothAlice> Considering it hooks every insert and update unless expressly disabled on the query, I'd call that pretty core.
[18:07:08] <GothAlice> It's just not required, unlike under relational DBs.
[18:07:17] <StephenLynx> I was talking about the design.
[18:07:26] <GothAlice> (And even then, some, like SQLite, are almost no-SQL in that they don't enforce their schemas at all, really.)
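A minimal sketch of the document validation feature linked above (available since MongoDB 3.2); the collection name and rules here are made up:

```javascript
db.createCollection("contacts", {
  validator: { $and: [
    { name:  { $type: "string" } },
    { email: { $regex: /@/ } },
    { age:   { $gte: 0 } }
  ] },
  validationLevel:  "strict",   // check every insert and update
  validationAction: "error"     // reject documents that fail
});

db.contacts.insert({ name: 42 });   // refused: fails the validator
```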
[18:08:37] <GothAlice> My own suite of Python MongoDB tools (which is explicitly not going to be an Active Record ODM) I'm writing so that, instead of implementing schema and validation application-side, it generates validation documents for DB-side use. The Way Of The Future™ :D
[18:09:05] <StephenLynx> kek
[18:12:32] <teprrr> hi, I'm new to mongodb, and was wondering if this document structure looks sane to you? https://pastee.org/c3pnu
[18:13:09] <StephenLynx> no.
[18:13:14] <GothAlice> teprrr: Deep nesting suffers from substantial issues in how you can manipulate that data.
[18:13:19] <StephenLynx> answers is too complex.
[18:13:20] <cheeser> the 'ips' array makes me nervous
[18:13:39] <StephenLynx> that too.
[18:13:43] <teprrr> I have a list of domains which are being resolved regularly. now I'd like to do things like: 1) unique IPs per domain 2) augment those IP addresses with extra data
[18:13:45] <GothAlice> teprrr: There are quite a few points to cover, let me gist a copy of this and add notes.
[18:13:48] <teprrr> which raises questions how to do that :)
[18:14:07] <StephenLynx> i'd create a collection for domains
[18:14:12] <StephenLynx> and index it based on the domain itself.
[18:14:20] <StephenLynx> i'd have ips to be an array of ints.
[18:14:31] <StephenLynx> so there would be no need to worry whether it's IPv4 or 6
[18:15:24] <teprrr> yeah, I'm struggling exactly with nesting here, when trying to do anything a bit more complex.
[18:16:10] <teprrr> and well, now I'd like to augment those IPs with extra information, but would/should it then be stored in another collection & joined together on the client-side, or what's a sane approach for it?
[18:16:12] <GothAlice> teprrr: https://gist.github.com/amcgregor/eadb39b155c92a412c3f06c75d698a46
[18:18:20] <GothAlice> teprrr: The key is to not nest arrays; multiple arrays at the same level in a document _might_ be OK, but only if you never need to query subsets of both at the same time. (The $ projection operator only works on one array in a document at a time.)
[18:18:52] <teprrr> GothAlice: ah, thanks. good catch with ordering of the dns tree :) the entries in ips array come directly from elsewhere, although it's adaptable for sure.
[18:19:00] <GothAlice> Arrays of sub-documents are a-OK, as long as those don't also contain arrays. (So my advice there to turn "ips" into a list of strings instead, to avoid storing "address_type", can be ignored if you intend to add more fields there.)
[18:19:08] <teprrr> GothAlice: about having those answers in a separate collection, looks like JOINs are a pretty new feature?
[18:19:21] <GothAlice> teprrr: In this case, it's a clear 1:many.
[18:19:39] <GothAlice> That means two queries, and no need for use of the recent $lookup, but $lookup during aggregate queries would be useful here, too.
[18:19:44] <GothAlice> (Just less useful for normal query operations.)
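A sketch of what that $lookup might look like during an aggregation, with hypothetical "domains" and "ip_info" collections and field names:

```javascript
db.domains.aggregate([
  { $unwind: "$ips" },                 // one document per (domain, ip) pair
  { $lookup: {
      from: "ip_info",                 // the augmentation collection
      localField: "ips",
      foreignField: "ip",
      as: "ip_details"
  } }
]);
```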
[18:21:36] <teprrr> and thanks StephenLynx, encoding those as ints is good, although PITA when doing adhoc debugging with robomongo/pymongo :/
[18:22:03] <StephenLynx> nope
[18:22:08] <StephenLynx> it won't work as ints.
[18:22:14] <StephenLynx> because ipv6 is 128 bits.
[18:22:16] <GothAlice> Yeah, IPv6 is 128.
[18:22:23] <StephenLynx> hence, arrays.
[18:22:41] <GothAlice> You could totally store it as a binary blob…
[18:22:52] <GothAlice> But then yeah, even worse in terms of tooling.
[18:23:03] <teprrr> ah, I meant a string presentation of an integer there
[18:23:09] <GothAlice> Practicality beats purity: strings are fine for IPs. ;P
[18:23:09] <StephenLynx> ah.
[18:23:19] <StephenLynx> not much.
[18:23:25] <StephenLynx> because it's hard to query them as IPs.
[18:23:34] <StephenLynx> like when you search for a range.
[18:24:16] <GothAlice> Well, subnet searching is trivial. True, exact numeric range searching is not so much. teprrr: The question boils down to: how will you use that data? Will you search on IP ranges, exact matches, or not at all?
[18:24:42] <StephenLynx> yeah, if you are just going to use them as strings, you might as well store them as strings.
[18:24:57] <StephenLynx> just be aware that will bring some limitations.
[18:24:59] <GothAlice> MongoDB forces you to really think about how you're going to _use_ and _manipulate_ the data as a focus instead of focusing on "purity of data structure" (which highly nested structures would be… very pure, but not very practical).
[18:26:27] <teprrr> GothAlice: yeah, probably going to do subnet searching too
[18:26:48] <GothAlice> So, subnet searching with strings is just a prefix regex search. :)
[18:26:53] <StephenLynx> kek
[18:26:58] <GothAlice> (Fast with an index.)
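A sketch of that prefix-regex subnet search, assuming IPs stored as dotted-quad strings in an "ip" field of an "ips" collection (and note it only lines up exactly with subnets on octet boundaries, per the caveat a few lines later):

```javascript
db.ips.createIndex({ ip: 1 });

// An anchored prefix regex can walk the index rather than scan:
db.ips.find({ ip: /^192\.168\.1\./ });   // roughly "everything in 192.168.1.0/24"
```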
[18:27:05] <teprrr> yup, did it before with sqlite & bigint :)
[18:27:16] <StephenLynx> I don't think that's as fast or intuitive as an array of ints, though.
[18:27:23] <teprrr> not regex search, but int ranges
[18:27:40] <GothAlice> With IPs stored as integers, subnetting involves bit masks or math (modulo and friends).
[18:27:56] <StephenLynx> not really.
[18:27:57] <GothAlice> (Remember, not all subnets are on the . boundaries. ;)
[18:28:06] <StephenLynx> ah
[18:28:09] <StephenLynx> wait, subnets.
[18:28:10] <StephenLynx> nvm.
[18:28:17] <StephenLynx> I was thinking of ips.
[18:28:19] <StephenLynx> ranges*
[18:28:20] <StephenLynx> ::V
[18:28:30] <GothAlice> Slightly different. Slightly. ;)
[18:30:39] <teprrr> but how would you store augmented info about those ips? a separate collection too? that would give me an RDBMS-like setup with three collections: domains (with name & metadata), results (raw data + some extracted info such as that ip) + ip_info (or somesuch with the data gathered from elsewhere)
[18:30:48] <GothAlice> Well, no.
[18:31:02] <teprrr> or even so, that one collection for ips, which then has backlinks to their respective domains & when that info was gathered
[18:31:07] <GothAlice> As I mentioned, ignore my advice to turn it into a list of strings (instead of a list of sub-documents) if you're going to add more fields there.
[18:31:18] <StephenLynx> you could make a document for each ip.
[18:31:41] <StephenLynx> then you put the ip itself on one field, and the other data on other fields.
[18:31:42] <GothAlice> That'd make blacklist checks really quick.
[18:34:41] <teprrr> StephenLynx: yeah, was thinking about that. but it still needs a backlink towards hostnames including when it was seen (there may be more than one host resolving to the same IP, which is something I'm interested in :)
[18:36:45] <teprrr> having forward links on the domain collection for IPs seen for that domain, and vice versa from the IP collection to the domains it's been seen in
[18:37:14] <teprrr> redundant data, but makes it easier to handle like you said
[18:38:36] <GothAlice> teprrr: Often what appears to be redundant is required to facilitate or optimize a query.
[18:39:33] <GothAlice> For example, consider "owner" references for records. Since the #1 thing I do with the "owner" field is display it with the username (or actual name) as a link to their profile, it makes sense to save an extra query to look that data up and just store the username with the ObjectId reference. E.g. {…, "owner": {id: ObjectId(…), name: "GothAlice"}}
[18:40:05] <GothAlice> "Duplicated", but incredibly efficient to service the #1 typical use for the field.
[18:41:16] <teprrr> that's definitely true
[18:41:56] <teprrr> actually the data structure I have also sucks by not having the exact times when those IPs were resolved. ah, great :)
[18:42:40] <teprrr> what I'd like to do is to make some sort of timeline of those results. and also compare which all domains were pointing out to same addresses at some point etc.
[18:42:50] <GothAlice> teprrr: If you have document-per-ip, the _id ObjectId will contain the creation time automatically. (Not times of each resolution, but people often miss that they don't need a "created" time because of this.)
[18:43:27] <teprrr> ah. objectid stores that kind of info? cool, good to know.
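The creation time can be read straight out of the _id in the shell; for example, against the per-IP collection being discussed:

```javascript
var doc = db.ips.findOne();   // any document with an auto-generated _id
doc._id.getTimestamp();       // ISODate of when that ObjectId was created
```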
[18:43:43] <GothAlice> Also, for the "update some statistics, or create a document if missing" situation, you want upserts.
[18:43:45] <GothAlice> (Update or insert.)
[18:43:57] <teprrr> however, the same IP is resolved again and again, so I need to track that manually
[18:44:17] <teprrr> yes, I'm using upserts actually to inject those entries into ips array
[18:45:00] <teprrr> a nice "oneliner" incrementing the counter, updating the last-checked time and pushing the newest results into that array :)
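A sketch of that kind of upsert one-liner, with illustrative collection and field names: the domain document is created if missing, a counter bumped, the check time updated, and the newest result pushed.

```javascript
db.domains.update(
  { domain: "example.org" },
  {
    $inc:  { resolutions: 1 },                                   // bump the counter
    $set:  { lastChecked: new Date() },                          // remember when we last looked
    $push: { ips: { ip: "93.184.216.34", seen: new Date() } }    // append the newest answer
  },
  { upsert: true }
);
```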
[18:46:03] <GothAlice> teprrr: http://www.devsmash.com/blog/mongodb-ad-hoc-analytics-aggregation-framework is the article I normally link for analytics of time-series data (which is what the IP "hit" events sorta fall into)
[18:46:28] <GothAlice> This article is a touch old, but covers various concerns for different ways of storing such data, from performance of different types of query on the data, to data storage and index overhead size.
[18:47:46] <teprrr> GothAlice: awesome, thanks, will read it up later. what I atm have is a really adhoc hack I built yesterday, and already stumbled upon query problems after wanting more than "how many IPs there are in total"
[18:48:32] <GothAlice> Yeah, pre-aggregation (the technique the article covers) can help, but you'll need to include the statistics you want to gather in your upsert. (That's the pre- part of pre-aggregation.)
[18:49:06] <GothAlice> That makes asking questions of the data easier and more efficient by having the data handy, rather than needing to expensively re-calculate for each question.
[18:49:07] <teprrr> I'm thinking I'm just unsure how much logic I should already build into the queries versus what to process in Python after fetching the data from the db
[18:49:41] <GothAlice> teprrr: https://gist.github.com/amcgregor/d6a507cfba5eed0606d6cc78c2384d75 is an aggregate query from work, missing an initial $match, but you get the idea.
[18:50:03] <GothAlice> The processing done application-side is only to apply the "discount" field's instructions (add, subtract, multiply, etc. by an amount) to the totals.
[18:50:24] <GothAlice> As one non-trivial example. :)
[18:50:33] <teprrr> will need to read and experiment more. anyway, thanks a lot for your time and tips :)
[18:50:39] <teprrr> humm, that looks a bit scary :)
[18:50:46] <GothAlice> Hehehe, thus the "omfg" name.
[18:50:47] <GothAlice> :P
[18:50:48] <GothAlice> It never hurts to help.
[18:50:56] <teprrr> but say, foreign lookups like that aren't considered un-mongo-like anymore?
[18:51:06] <teprrr> I was under impression that people prefer avoiding foreign keys :)
[18:51:15] <cheeser> prefer, yes.
[18:51:17] <teprrr> or that that's the "nosql" part in general
[18:51:23] <cheeser> doesn't mean you can always get away from them.
[18:51:31] <cheeser> teprrr: completely orthogonal
[18:54:11] <teprrr> yup. ookay, I'll give a bit more thought and experiment with it.
[19:00:51] <StephenLynx> you can have relations, they just won't be validated.
[19:01:15] <GothAlice> Quite correct; being non-relational, there is no referential integrity checking.
[19:01:18] <StephenLynx> and you can crash into a wall if you misuse them, but as cheeser said, there are valid use cases.
[19:01:28] <GothAlice> (This means: make sure your client application can handle bad reference edge cases…)
[19:16:38] <jayjo> When I ran my code with line_profiler, I only got back one line that reads "Timer unit: 1e-06 s"
[19:16:44] <jayjo> Not sure how to interpret it
[19:26:36] <StephenLynx> kek
[19:44:08] <synthmeat> so, if i have a collection of user documents (autoindexed), and i have a "rating" Number field in them, how would one go about constructing a query to get 10 users (including one particular user) that are sorted by rating around that user?
[19:44:32] <synthmeat> am i gonna have a bad time?
[19:50:30] <GothAlice> synthmeat: Not quite Sans-level, but yes.
[19:50:46] <synthmeat> also, Mongoose
[19:51:52] <GothAlice> There's a way to do it, that also adds a slight race condition, but it is possible. Expensive, but possible. Mongoose… is unfortunate, but shouldn't overly get in the way. First, you need to determine where the record you want to target is in the sorted data. (Project just IDs, iterate until you find it.) Now that you know the offset, you can subtract 5 from it to find the offset to seek to in the larger query.
[19:52:35] <GothAlice> synthmeat: Unfortunately, skip use is… not very efficient. It requires visiting the leaves of the index to count results (if using an index, otherwise a table scan!) with logarithmically worse performance the further you go.
[19:52:52] <GothAlice> (Linearly worse in the scan situation.)
[19:53:14] <synthmeat> auch
[19:53:30] <kurushiyama> @GothAlice Regarding your validation library that generates server side validations: That sounds like a pretty interesting concept. Would you mind me stealing it?
[19:53:58] <GothAlice> kurushiyama: https://github.com/marrow/mongo — go nuts. There's a few things in there, including a really nice capped collection tailing query helper thing.
[19:54:14] <GothAlice> kurushiyama: If you're targeting Python, though, pull requests are greatly appreciated! :D
[19:54:32] <kurushiyama> @GothAlice "go nuts". Pun intended?
[19:55:16] <GothAlice> kurushiyama: :P Also, it's not a validation library. It's an ODM, just not Active Mapper pattern. The "schema" stuff comes from its own isolated "declarative schema" package: https://github.com/marrow/schema (which does include validation and transformation tools)
[19:55:18] <StephenLynx> synthmeat, dont use mongoose.
[19:55:44] <kurushiyama> @GothAlice Well, I would do a lib on top of mgo.
[19:55:45] <synthmeat> StephenLynx: so you keep on saying and i keep agreeing, but i'm too far off in the project to drop it
[19:56:10] <GothAlice> synthmeat: "Don't use Mongoose" is the general advice; it's known to cause lots of problems, and occasionally encourages bad practices. But yeah, practicality beats purity, and for this problem, it's basically just two simple queries and a for loop.
[19:56:17] <StephenLynx> I forget names easily :v
[19:56:22] <synthmeat> :)
[19:56:25] <GothAlice> kurushiyama: mgo?
[19:56:49] <synthmeat> GothAlice: i'm still not sure i understood the answer though. trying to parse it third time around
[19:56:57] <kurushiyama> GothAlice The Go driver written by this channels niemeyer
[19:57:01] <GothAlice> Right, right.
[19:57:18] <kurushiyama> GothAlice _blazing_ fast.
[19:58:17] <GothAlice> synthmeat: Sadly, my p-code here will be semi-Python, but the idea should transfer. for i, doc in enumerate(collection.find({… criteria …}, {"_id": 1}, order=[…])): if doc._id == target: offset = i
[19:58:52] <GothAlice> offset -= 5; results = collection.find({… criteria …}, {… desired projection …}, order=[… the same order …], skip=offset, limit=10)
[19:59:09] <GothAlice> Er, limit 11, rather.
[19:59:24] <GothAlice> And there you go. results will be the +/- 5 records around the target, and the target. Unless you get hit by the race condition…
[19:59:46] <GothAlice> Oh, also "break" in that "if doc._id ==" bit, no need to keep searching once found.
[20:00:00] <GothAlice> (Between the first query completing and the second starting the data might change, thus the race condition.)
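Translated into plain shell JavaScript, the same two-step idea might look like the sketch below; "users", "rating", and the way targetId is obtained are assumptions, not part of the original p-code:

```javascript
var targetId = db.users.findOne({ name: "someUser" })._id;   // stand-in: the user to centre on

// Pass 1: walk just the _ids in rating order until we hit the target.
var cursor = db.users.find({}, { _id: 1 }).sort({ rating: -1 }),
    offset = 0;
while (cursor.hasNext()) {
  if (String(cursor.next()._id) === String(targetId)) break;   // stop once found
  offset++;
}

// Pass 2: fetch the target plus up to 5 neighbours on each side in one query.
var around = db.users.find({})
  .sort({ rating: -1 })
  .skip(Math.max(offset - 5, 0))
  .limit(11)
  .toArray();
```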
[20:00:30] <synthmeat> that's ok :)
[20:00:45] <synthmeat> GothAlice: don't go anywhere though! :P
[20:00:51] <GothAlice> :P
[20:02:00] <synthmeat> GothAlice: as a side note, how many atomic operations is that?
[20:02:10] <GothAlice> None. Atomic operations apply to updates.
[20:03:20] <synthmeat> GothAlice: wait, wait. i'm gonna have this return ALL the docs each time?
[20:04:26] <GothAlice> No. The first time you're rapidly searching just the IDs for the one you're looking for, counting as you go to find the offset.
[20:04:41] <GothAlice> The second time you're doing your real query, and only getting 11 results back. ±5 around your target, and the target.
[20:06:01] <synthmeat> okie
[20:20:12] <synthmeat> GothAlice: something like this? https://gist.github.com/synthmeat/1550579ba2ff9c4495b8c7df761c00f1
[20:20:38] <GothAlice> Ouch, no find() criteria? Just "all users"?
[20:21:03] <synthmeat> GothAlice: :)
[20:21:18] <synthmeat> it's a high score list
[20:21:24] <GothAlice> Additionally, that code is highly concerning on line 15, seeming to imply that _all results are loaded by that point_, which is going to be a terrifying amount of data, even just IDs, depending on how many users you have.
[20:21:51] <GothAlice> I.e. it'll have to wait to collect everything there, before continuing, instead of the "stop pulling in data once we have a match" process in my p-code example.
[20:22:17] <synthmeat> GothAlice: a MILLION users!
[20:22:43] <GothAlice> Yeah. Mongoose isn't doing you any favours, there.
[20:23:52] <GothAlice> (There's absolutely no question that this process will be slow, though.)
[20:24:11] <GothAlice> There's an alternate approach with a dataset that size. Hmm.
[20:24:44] <synthmeat> (i'm going to check does this even work in the first place meanwhile)
[20:24:58] <GothAlice> Load your user's score and ID, then perform two queries to load records whose score is >, inversely sorted, and a second query to load records whose score is <, sorted normally.
[20:25:11] <GothAlice> Limit 5 on each.
[20:25:27] <GothAlice> (Again, there's race condition, but the overall queries should be faster for… that many records.)
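A sketch of that anchored version, again with "users", "rating", and targetId assumed:

```javascript
var me = db.users.findOne({ _id: targetId });   // load the user's own score

// Five users just above, fetched closest-first then flipped into display order…
var above = db.users.find({ rating: { $gt: me.rating } })
                    .sort({ rating: 1 }).limit(5).toArray().reverse();

// …and five just below, already in display order.
var below = db.users.find({ rating: { $lt: me.rating } })
                    .sort({ rating: -1 }).limit(5).toArray();

var board = above.concat([me], below);   // up to 11 entries centred on the user
```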
[20:25:51] <synthmeat> oh, that sounds better
[20:28:47] <synthmeat> GothAlice: you're always awesome
[20:28:51] <GothAlice> Yeah, apologies; didn't realize you were processing that much.
[20:29:03] <synthmeat> i'm not. i'm hoping for though!
[20:29:14] <GothAlice> Hehehe. Good to plan for that now, for sure!
[20:30:07] <synthmeat> GothAlice: with a hint of sarcasm?
[20:30:09] <synthmeat> :)
[20:30:59] <GothAlice> No, really, important to plan for scaling.
[20:31:10] <GothAlice> If you don't, and you get slashdotted/hackernews'd, you can be screwed.
[20:31:28] <GothAlice> (Happened to us at work!)
[20:31:38] <synthmeat> well, i'm just hoping a 4GB mongodb instance will hold up for now, the rest should be fine
[20:32:08] <synthmeat> GothAlice: this is the infrastructure for now https://www.dropbox.com/s/19ebfymsa27kdj9/Screenshot%202016-06-15%2012.52.46.png?dl=0
[20:32:49] <GothAlice> That's… uh… a lot of moving parts.
[20:33:13] <GothAlice> Not to mention that htop is a pretty heavy load just to get general uptime/load/mem stats. ;P
[20:34:26] <synthmeat> GothAlice: that's just my "emergency" tmux layout, i don't have it running all the time
[20:34:30] <GothAlice> On one game project at work, after I was onboarded as a fresh hire, I replaced about a half dozen other tools with MongoDB, saving immensely on infrastructure and maintenance by reducing the moving part count.
[20:35:28] <synthmeat> GothAlice: worked on a lot of games?
[20:35:34] <GothAlice> A few. XP
[20:35:52] <synthmeat> this is basically tcp server
[20:35:57] <synthmeat> not even websocket
[20:36:20] <GothAlice> Hehe, have you ever looked into Stackless Python, as powers the EVE Online client and persistent server universe?
[20:36:24] <GothAlice> Some amazing stuff.
[20:37:14] <synthmeat> tbh, i'm looking into something typed for beyond prototype/first iteration stuff
[20:37:26] <synthmeat> looked into Go. boy, do you need to type a lot of text in that thing
[20:39:33] <GothAlice> Uhm.
[20:39:49] <GothAlice> Annotations + https://pypi.python.org/pypi/typeguard/
[20:39:59] <GothAlice> "Just enough typing to help, not so much as to suck."
[20:40:45] <synthmeat> yeah, that's what they're at now with Flow in js
[20:41:04] <GothAlice> Plus, Pypy JIT, or for "full, strict typing that enforces singular typing of all variables", there's also RPython.
[20:41:32] <GothAlice> (RPython infers type, but ensures it's constant once inferred. Thus: no need to explicitly write any typing yourself, just be consistent and it won't explode.)
[20:41:48] <GothAlice> (Explode at compile time, in the RPython case.)
[20:42:53] <GothAlice> Just sayin' there be options. ;P
[20:43:31] <synthmeat> GothAlice: you don't do js?
[20:43:39] <synthmeat> that's a bit strange for mongoers
[20:44:10] <GothAlice> synthmeat: I write my JS in Python.
[20:44:23] <GothAlice> I also run JS tools in Python. No Node dependency for my apps. :)
[20:44:27] <GothAlice> (Seriously.)
[20:44:47] <GothAlice> https://gist.github.com/amcgregor/7ce55806409c14ec4c3d < an example
[20:46:09] <synthmeat> hah. you're crazy :D
[20:46:11] <synthmeat> in a good way
[20:46:46] <GothAlice> :) Helps my SPAs to not have multiple megabytes of JS, too.
[20:46:54] <GothAlice> ES6 is beautiful.
[20:47:32] <synthmeat> it is. i'm loving it. can't wait to start on the next project with it (in a week, once i release the game)
[20:51:17] <synthmeat> StephenLynx: ahahaa. that mongoose id not being a string bit me now. again.
[20:52:22] <GothAlice> Always, and never not.
[20:53:04] <StephenLynx> kek
[21:19:27] <synthmeat> twice!
[21:19:38] <synthmeat> i've just spent half an hour of my life on that thing
[21:19:42] <synthmeat> that's just today!
[22:39:45] <troydm> hey all! please suggest some open source free GUI tool for Windows, doesn't need to have much, just to be able to browse the database and edit data