PMXBOT Log file Viewer


#velociraptor logs for Thursday the 20th of June, 2013

[19:22:09] <jlondon> btubbs: Howdy. Around for a quick question?
[20:18:44] <btubbs> hi jlondon
[20:18:49] <btubbs> late, but i'm here :)
[20:20:08] <jlondon> btubbs: Hehe, I'm still around although busy with other stuff. I was just wondering... I know you've been looking at implementing Docker with velociraptor. Would you say that were I to start working with your tools now, your potential moves there wouldn't break apps down the road?
[20:20:30] <btubbs> apps won't change
[20:20:38] <btubbs> velociraptor has a promise of heroku compatibility
[20:20:48] <btubbs> that much will be maintained
[20:22:05] <jlondon> Cool.
[20:22:47] <jlondon> So next question... I couldn't tell for sure from the documentation I'd read so far... Is there any ability within the toolset to do scaling of resources (like based on load or connection averages for instance)?
[20:23:19] <btubbs> there's no load monitoring in velociraptor
[20:23:41] <btubbs> the existing API could be used to change the size of a swarm
[20:24:11] <btubbs> you have an upper bound there on the number of hosts though. Velociraptor doesn't make new EC2 instances or anything like that
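[The log doesn't show Velociraptor's actual API, so the endpoint path and field name below are purely illustrative assumptions, a sketch of the "use the existing API to change the size of a swarm" idea, not the real interface.]

```python
# Hypothetical sketch: building a request to resize a swarm over HTTP.
# The /api/v1/swarms/<id>/ path and the "size" field are assumptions,
# NOT Velociraptor's documented API.
import json
from urllib import request

def build_resize_request(base_url, swarm_id, size):
    """Build a PUT request asking the (hypothetical) API for `size` procs."""
    if size < 0:
        raise ValueError("swarm size must be non-negative")
    body = json.dumps({"size": size}).encode("utf-8")
    return request.Request(
        url="%s/api/v1/swarms/%d/" % (base_url, swarm_id),
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
```

An external autoscaler, as discussed below, would issue such a request when its own load metrics cross a threshold, within the fixed upper bound of available hosts.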
[20:24:30] <jlondon> Okay, so then I can plug on my own tools (or perhaps existing if I could find something) to do the scaling/request to API.
[20:24:35] <btubbs> right
[20:25:23] <btubbs> what does the variability of load on your app look like?
[20:25:48] <jlondon> Well, in terms of the new backend instances I imagine that would be managed (and then added to the swarm) by the same toolkit I'd have to build for the auto-scaling.
[20:26:22] <jlondon> btubbs: E-commerce sites... so from nearly zero traffic during the night to 100-300x the normal load during christmas, etc :P
[20:27:38] <btubbs> just curious
[20:27:59] <btubbs> at yougov we worry about a huge survey project coming along that unpredictably spikes traffic for a day or so
[20:28:13] <btubbs> but it hasn't happened yet, and manual scaling is still getting us along
[20:28:25] <jlondon> basically right now we work with Cloudify (not sure if you've heard of it), which can do the auto-scaling bit... but is quite heavy due to the guest tools being based off of Java.
[20:28:47] <jlondon> So we're/I'm looking at other solutions.
[20:28:47] <btubbs> haven't heard of cloudify
[20:28:56] <jlondon> http://www.cloudifysource.org/
[20:28:58] <btubbs> there are sooooo many paases out there now
[20:29:03] <jlondon> For sure.
[20:29:44] <jlondon> I think lightweight is what we need, but extensible... and what I'd read about velociraptor so far seems to fit that bill in nearly everything but having auto-scaling built-in.
[20:29:51] <jlondon> But that probably is not insurmountable.
[20:30:15] <btubbs> velociraptor is very lightweight on the app server
[20:30:28] <jlondon> :)
[20:30:32] <btubbs> basically velociraptor doesn't even exist there. Just SupervisorD starting an LXC container
[20:31:03] <jlondon> Okay, so within the container there is no script/daemon doing and sort of watching of the app, etc.?
[20:31:03] <btubbs> well, and a little monitor service
[20:31:10] <jlondon> s/and/any
[20:31:24] <btubbs> there are two things:
[20:32:12] <btubbs> 1. the app is launched by Supervisor (http://supervisord.org/), which watches for if the app dies and auto-restarts it
[20:32:54] <btubbs> 2. There's a Velociraptor-provided plugin to supervisor called proc_publisher that receives events from Supervisor when an app starts/stops/etc, and puts them on a Redis pubsub
[20:33:30] <btubbs> the Velociraptor dashboard (Django web process) listens on that pubsub and streams events through http to the browser
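[A minimal sketch of a consumer for the proc event stream described above. The message shape here, a JSON object with "name" and "state", is an assumption; the real proc_publisher payload may differ, and in practice the raw strings would come from a redis-py `pubsub.listen()` loop.]

```python
# Sketch: decoding proc start/stop events off a Redis pubsub channel.
# The JSON fields ("name", "state") are assumed, not Velociraptor's
# actual proc_publisher message format.
import json

def parse_proc_event(raw):
    """Decode one raw pubsub message into a (proc_name, state) pair."""
    event = json.loads(raw)
    return event["name"], event["state"]

def handle_events(raw_messages, on_event):
    """Feed decoded events to a callback.

    `raw_messages` is any iterable of raw JSON strings -- e.g. the data
    field of messages yielded by redis-py's pubsub.listen().
    """
    for raw in raw_messages:
        on_event(*parse_proc_event(raw))
```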
[20:33:34] <jlondon> Got it. Is there the ability to script/control what is considered failure/dying?
[20:33:46] <btubbs> it's unix process state
[20:33:49] <jlondon> Or is it 'this process is no longer running'
[20:33:53] <btubbs> yeah
[20:33:59] <jlondon> Got it.
[20:35:00] <btubbs> Supervisor lets you configure how many restarts to attempt, and some other things http://supervisord.org/configuration.html
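[The restart knobs mentioned here live in the `[program:x]` section of supervisord.conf. The program name and command below are placeholders; the option names are Supervisor's own.]

```ini
[program:myapp]              ; "myapp" and the command are placeholders
command=/srv/myapp/run.sh
autorestart=unexpected       ; restart only on exit codes not in `exitcodes`
startsecs=5                  ; process must stay up 5s to count as started
startretries=3               ; give up after 3 consecutive failed starts
exitcodes=0                  ; exit codes considered an "expected" exit
```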
[20:35:58] <btubbs> Are you looking to contribute to Velociraptor and improve it as you go, or hoping it'll just work?
[20:36:07] <btubbs> (it's probably not ready for users in the latter group)
[20:36:44] <jlondon> btubbs: Probably would end up doing both with at least getting some type of auto-scaling/scaling working :)
[20:36:56] <btubbs> that'd be a welcome addition
[20:37:10] <btubbs> i envision it being implemented similarly to the pluggable balancer interface
[20:37:43] <btubbs> but i haven't thought deeply about it
[20:38:50] <jlondon> So more on the supervisord side of things, and forgive me I'll have to look over the documentation... but basically could I define 'This port is no longer responding, assume dead', or is it only if a 'process' is no longer running?
[20:39:13] <btubbs> just process
[20:39:18] <btubbs> supervisor doesn't know about ports
[20:39:37] <btubbs> Velociraptor does support 'uptests' that look at the port side of things
[20:39:41] <jlondon> Okay, got it.
[20:40:02] <btubbs> https://bitbucket.org/yougov/velociraptor/src/65acb94c4c42fe345d1e2e0c164f2f13387641b0/docs/uptests.rst?at=default is worth a read
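[A sketch of the kind of port-level check an uptest could perform: exit 0 if a TCP connection succeeds, nonzero otherwise. The doc linked above describes the real contract; taking host and port as argv here is my assumption, not necessarily Velociraptor's calling convention.]

```python
#!/usr/bin/env python
# Sketch of a TCP-connect uptest. Exit status 0 = port is accepting
# connections, 1 = it is not. The argv convention is an assumption.
import socket
import sys

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__" and len(sys.argv) == 3:
    sys.exit(0 if port_open(sys.argv[1], int(sys.argv[2])) else 1)
```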
[20:40:30] <jlondon> I suppose one easy thing to get around that (if it actually needed to be an issue), is have a wrapper for whatever it is actually being run and exit the loop and kill the wrapper if the port stops responding, etc.
[20:42:22] <btubbs> that could work, though my instinct would be to do the monitoring and killing from the outside
[20:45:08] <jlondon> Last thing I think and then I'll just start playing around with the tools is: In terms of buildpacks, is Velociraptor meant mainly to be controlling the 'app-server' side of things, or 'anything', such as memcached or couchdb, etc?
[20:47:20] <btubbs> definitely not anything
[20:47:34] <btubbs> 12 Factor apps as defined at http://12factor.net/
[20:48:03] <jlondon> Well sure, not anything-anything..
[20:48:29] <btubbs> disposable app that maintain all state through network attached backing services and never locally
[20:48:33] <btubbs> apps*
[20:50:41] <jlondon> Got it.
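[The 12 Factor constraint above, all state in network-attached backing services, none local, usually shows up in code as services located through the environment rather than hard-coded. A minimal sketch; the `REDIS_URL` variable name is a common convention used here as an illustrative example.]

```python
# 12-factor style: the app finds its backing services (Redis, Postgres,
# memcached, ...) via environment variables, never local paths or state.
import os
from urllib.parse import urlparse

def backing_service(env_var, default_url):
    """Read a backing-service URL from the environment and split it up."""
    url = urlparse(os.environ.get(env_var, default_url))
    return {"scheme": url.scheme, "host": url.hostname, "port": url.port}
```

This is what makes processes disposable: any instance in the swarm can be started or killed without losing data, which is exactly the property swarm resizing relies on.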