[20:20:08] <jlondon> btubbs: Hehe, I'm still around although busy with other stuff. I was just wondering... I know you've been looking at implementing Docker with velociraptor. Would you say that were I to start working with your tools now, your potential moves there wouldn't break apps down the road?
[20:22:47] <jlondon> So next question... I couldn't tell for sure from the documentation I'd read so far... Is there any ability within the toolset to do scaling of resources (like based on load or connection averages for instance)?
[20:23:19] <btubbs> there's no load monitoring in velociraptor
[20:23:41] <btubbs> the existing API could be used to change the size of a swarm
[20:24:11] <btubbs> you have an upper bound there on the number of hosts though. Velociraptor doesn't make new EC2 instances or anything like that
[20:24:30] <jlondon> Okay, so then I can plug on my own tools (or perhaps existing if I could find something) to do the scaling/request to API.
[20:25:23] <btubbs> what does the variability of load on your app look like?
[20:25:48] <jlondon> Well, in terms of the new backend instances I imagine that would be managed (and then added to the swarm) by the same toolkit I'd have to build for the auto-scaling.
[20:26:22] <jlondon> btubbs: E-commerce sites... so from nearly zero traffic during the night to 100-300x the normal load during christmas, etc :P
[20:27:59] <btubbs> at yougov we worry about a huge survey project coming along that unpredictably spikes traffic for a day or so
[20:28:13] <btubbs> but it hasn't happened yet, and manual scaling is still getting us along
[20:28:25] <jlondon> basically right now we work with Cloudify (not sure if you've heard of it), which can do the auto-scaling bit... but it's quite heavy, since the guest tools are based on Java.
[20:28:47] <jlondon> So we're/I'm looking at other solutions.
[20:29:44] <jlondon> I think lightweight is what we need, but extensible... and what I'd read about velociraptor so far seems to fit that bill in nearly everything but having auto-scaling built-in.
[20:29:51] <jlondon> But that probably is not insurmountable.
[20:30:15] <btubbs> velociraptor is very lightweight on the app server
[20:32:12] <btubbs> 1. the app is launched by Supervisor (http://supervisord.org/), which watches the app and auto-restarts it if it dies
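(For illustration, the auto-restart behavior described above is plain Supervisor functionality, configured per-program in supervisord's ini format. Velociraptor generates these entries itself; the program name and command path below are made-up examples, and only `autorestart`/`startsecs` are the relevant knobs.)

```ini
[program:myapp-web]
command=/apps/myapp/bin/start   ; hypothetical launch command
autorestart=true                ; supervisord restarts the proc whenever it exits
startsecs=5                     ; must stay up 5s before it counts as RUNNING
```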
[20:32:54] <btubbs> 2. There's a Velociraptor-provided plugin to supervisor called proc_publisher that receives events from Supervisor when an app starts/stops/etc, and puts them on a Redis pubsub
[20:33:30] <btubbs> the Velociraptor dashboard (Django web process) listens on that pubsub and streams events through http to the browser
[20:33:34] <jlondon> Got it. Is there the ability to script/control what is considered failure/dying?
[20:37:10] <btubbs> i envision it being implemented similarly to the pluggable balancer interface
[20:37:43] <btubbs> but i haven't thought deeply about it
[20:38:50] <jlondon> So more on the supervisord side of things (forgive me, I'll have to look over the documentation)... but basically, could I define 'this port is no longer responding, assume dead', or is it only triggered when a 'process' is no longer running?
[20:40:02] <btubbs> https://bitbucket.org/yougov/velociraptor/src/65acb94c4c42fe345d1e2e0c164f2f13387641b0/docs/uptests.rst?at=default is worth a read
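(A minimal sketch of what an uptest along those lines might look like, assuming a check is handed a host and port and reports health via an exit-code-style result; that interface is an assumption here, and the linked uptests.rst describes the real contract.)

```python
import socket

def check(host, port, timeout=5.0):
    """Exit-code style health check: 0 if host:port accepts a TCP
    connection within the timeout, 1 otherwise."""
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return 0
    except OSError:
        return 1

# If invoked as a standalone script, it would be called with the target
# host and port as arguments, e.g.:
#   sys.exit(check(sys.argv[1], sys.argv[2]))
```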
[20:40:30] <jlondon> I suppose one easy way around that (if it actually turned out to be an issue) is to have a wrapper around whatever is actually being run, and have the wrapper exit/kill itself if the port stops responding, etc.
[20:42:22] <btubbs> that could work, though my instinct would be to do the monitoring and killing from the outside
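(A sketch of the "monitor and kill from the outside" approach: an external watchdog polls the port and, on failure, asks Supervisor to restart the proc via its real `supervisorctl` CLI. The host, port, and proc name are placeholders; once restarted, Supervisor's own auto-restart handling takes over as usual.)

```python
import socket
import subprocess
import time

def port_alive(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def watch(host, port, proc_name, interval=10):
    """Poll the port forever; restart the proc through Supervisor when
    it stops answering. proc_name is whatever name the proc runs under
    in supervisord."""
    while True:
        if not port_alive(host, port):
            subprocess.call(["supervisorctl", "restart", proc_name])
        time.sleep(interval)
```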
[20:45:08] <jlondon> Last thing I think and then I'll just start playing around with the tools is: In terms of buildpacks, is Velociraptor meant mainly to be controlling the 'app-server' side of things, or 'anything', such as memcached or couchdb, etc?