PMXBOT Log file Viewer


#pypa logs for Saturday the 14th of May, 2016

[11:49:16] <apollo13> dstufft: btw I am mostly doing infra stuff recently and if you want I'd happily be a second/third hand for the ops team at pypi -- not that I have aeons of time, but whatever I can do to free you to do other things…
[11:51:45] <apollo13> dstufft: also, I'd like to help on warehouse if you can give me tasks and a little bit of a hand
[11:53:23] <dstufft> apollo13: mind posting an offer to infrastructure@python.org ? We don't have a clear way to authorize a new user for like, sudo access on PyPI or what not but if you make a post to that ML offering to help (with any relevant background info) I can make sure nobody has any objections.
[11:54:14] <[Tritium]> is there any low hanging fruit on warehouse?
[11:55:00] <apollo13> dstufft: can do
[11:57:07] <dstufft> apollo13: [Tritium] The tasks that I think are most important for getting Warehouse launched are tagged https://github.com/pypa/warehouse/milestones/Launch ; beyond those we have a few issues tagged easy, but I don't think we're super great at actively categorizing issues as easy vs not
[11:58:03] <apollo13> hehe, /me is already failing at "make serve" :D
[11:58:47] <dstufft> If you're interested though, you don't have to limit yourself to just the Launch tasks, if a non Launch task looks interesting to you feel free to tackle it (although a few of them will be harder to do until after the old code base dies)
[11:58:58] <dstufft> apollo13: what's the error? It does require docker and docker-compose
[11:59:14] <apollo13> ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running? -- but I am also getting issues with docker ps -- gotta fix some perms
[11:59:20] <dstufft> Ah okay
[11:59:58] <dstufft> I think the only real thing we end up depending on having already setup on the host system is docker and docker-compose (and well make-- but even that's optional if someone felt like copy/pasting commands)
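(Editor's note: the "Couldn't connect to Docker daemon" permission error apollo13 hits above is commonly fixed by adding the user to the `docker` group. These are assumed, distro-dependent setup commands, not anything from the Warehouse docs:)

```shell
# Typical fix for "Couldn't connect to Docker daemon" permission errors:
# the daemon socket is owned by root:docker, so join that group.
sudo groupadd docker 2>/dev/null || true   # group usually exists already
sudo usermod -aG docker "$USER"
# Log out and back in (or run `newgrp docker`), then verify access:
docker ps
```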
[12:01:25] <[Tritium]> who has docker and doesnt have make installed though?
[12:02:31] <apollo13> hurray, it is installing :D time to do something else in the meantime, mail will come later too
[12:03:57] <dstufft> [Tritium]: I think docker can run on Windows!
[12:04:02] <dstufft> or at least the client side of it can
[12:04:08] <dstufft> you need some VM running the daemon somewhere
[12:04:23] <dstufft> No idea if Warehouse is able to be dev'd from Windows or not though :(
[12:04:28] <[Tritium]> dstufft: ok, who has docker installed on windows and doesnt have visual studio?
[12:04:54] <dstufft> [Tritium]: No idea :] I haven't used Windows as a dev platform in like 8 years :)
[12:04:55] <[Tritium]> (vc++ comes with a version of make)
[12:06:54] <[Tritium]> warehouse MIGHT be dev-able on windows IF you dont have to test it against gunicorn
[12:10:40] <dstufft> it runs on gunicorn even in dev, but that's inside of a Linux container
[12:11:08] <dstufft> basically everything in Warehouse runs in a docker container with the host FS mounted inside it
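(Editor's note: the setup dstufft describes, with every service in a Docker container and the host filesystem mounted inside, can be sketched in docker-compose terms. The service names and mounts below are illustrative assumptions, not Warehouse's actual docker-compose.yml:)

```yaml
# Illustrative sketch only -- not Warehouse's real compose file.
services:
  web:
    build: .
    command: gunicorn --reload warehouse.wsgi:application
    volumes:
      - .:/app          # host FS mounted into the container, as described
    depends_on: [db, redis]
  db:
    image: postgres
  redis:
    image: redis
```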
[12:11:28] <[Tritium]> is docker the intended deploy story?
[12:11:37] <dstufft> it's deployed using Heroku currently
[12:11:42] <dstufft> though we might have to not do that
[12:12:06] <[Tritium]> it probably will have to live on psf infrastructure at some point
[12:12:31] <dstufft> Docker is used so that people starting up don't have to get PostgreSQL, RabbitMQ, Elasticsearch, and redis installed
[12:13:28] <dstufft> Well operationally wise the PSF is fine having it run on Heroku, it's easy to move from Heroku to not Heroku if it ever becomes a problem since we purposely don't depend on any Heroku APIs (except during build)
[12:14:24] <dstufft> the problem is I don't know if Heroku is fine with us running there, (they donated credits which is great, but their AUP also states that no app can use more than 2TB/month of bandwidth... and legacy PyPI is currently taking up 10TB/month to the Rackspace backend servers)
[12:14:57] <dstufft> I talked to them like a year ago about it or so, but back then we were hovering right around 2TB/month and they said "well we don't really enforce that limit unless it becomes a problem"
[12:15:14] <dstufft> but sort of hovering around the limit and blowing past it 5x is a bit different I think
[12:16:03] <[Tritium]> pypi uses 10tb/month EVEN with fastly's support?
[12:16:44] <dstufft> Yes
[12:17:03] <dstufft> In April we pushed almost 360TB through Fastly
[12:17:40] <[Tritium]> 1% of that is 10 people running bandersnatch
[12:17:46] <dstufft> https://s.caremad.io/6mWhf6yHpK/
[12:18:37] <[Tritium]> $360k in donated bandwidth...
[12:18:37] <dstufft> (That's not _just_ for PyPI, it's for all of our stuff on Fastly which is largely PyPI, the second largest thing is www.python.org which I think is 5-10% of the bandwidth used by PyPI)
[12:19:02] <[Tritium]> (annually)
[12:19:18] <dstufft> Fastly's donation almost single-handedly keeps PyPI running.
[12:19:37] <dstufft> I mean, the servers that Rackspace donates and the other companies etc are all super important too
[12:19:58] <dstufft> but Fastly really takes the brunt of the load, which allows us to run with a skeleton ops team
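(Editor's note: the figures quoted in this exchange can be sanity-checked with some quick arithmetic. The numbers are the ones stated in the conversation, not measured data:)

```python
# Back-of-the-envelope check of the bandwidth figures from the log.
fastly_tb_per_month = 360   # April traffic pushed through Fastly
backend_tb_per_month = 10   # legacy PyPI traffic to the Rackspace backend
heroku_aup_tb = 2           # Heroku AUP per-app bandwidth cap

offload = 1 - backend_tb_per_month / fastly_tb_per_month
print(f"Fastly absorbs ~{offload:.0%} of PyPI traffic")  # ~97%
print(f"Backend alone exceeds the Heroku cap "
      f"{backend_tb_per_month / heroku_aup_tb:.0f}x")    # 5x, as dstufft says
```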
[12:20:03] <[Tritium]> Fastly has nothing to mirror without rackspace
[12:20:36] <[Tritium]> the only other option is the CPAN model
[12:21:03] <[Tritium]> "Pick the nearest university, and hope they have the module you need"
[12:21:20] <dstufft> We had a mirror network before we used Fastly, the problem was nobody ever ran mirrors regularly. People would set one up, it'd run for awhile, then it'd get stuck and nobody would notice for months and months and we'd eventually delist them
[12:21:58] <dstufft> I think when we finally killed the last bit of automatic mirror support in PyPI/pip, like 80% of the "mirrors" were actually just aliases for PyPI itself at that time
[12:22:43] <dstufft> (due to the way the mirrors were structured we couldn't delete a mirror, we had to keep the hostname pointing somewhere so instead of deleting we just added another hostname to PyPI itself)
[12:22:48] <[Tritium]> I mean, was pypi ever setup to support a mirror network? I think CPAN uses some bulletproof rsync magic
[12:23:19] <dstufft> there was a PEP and mirroring clients. It wasn't rsync easy though (and still isn't)
[12:23:55] <dstufft> I'm sure there was a bit of a chicken and an egg problem too
[12:24:19] <dstufft> nobody used mirrors because they were unreliable, and nobody bothered to keep the mirrors reliable because nobody used them
[12:26:50] <apollo13> dstufft: ok, got warehouse running -- all images (ie http://localhost/static/images/logo-large.87e8a3ef.svg ) are black though, any idea :D
[12:27:33] <dstufft> apollo13: they're _black_? That is odd... I wonder if the image optimization is going haywire for you
[12:27:59] <apollo13> http://pix.toile-libre.org/?img=1463228395.png
[12:28:28] <[Tritium]> can you view-source on those svgs?
[12:29:08] <apollo13> https://dpaste.de/mSjE/raw
[12:29:13] <apollo13> let me put that into gimp or so :D
[12:29:45] <apollo13> mhm, it is firefox failing, eog and gimp show LOGO
[12:29:48] <dstufft> Commenting out https://github.com/pypa/warehouse/blob/master/Gulpfile.babel.js#L146 would remove the image optimization from the asset pipeline, though because of some terrible behaviour with npm you have to rebuild the docker containers to get that to be reflected
[12:30:07] <dstufft> I wonder if that's a problem with the image optimization and firefox in general
[12:30:16] <dstufft> lemme see if I can repro
[12:30:20] <dstufft> I don't use FF normally
[12:30:26] <[Tritium]> is that code live on pypa.io?
[12:30:30] <apollo13> ff 46.0.1
[12:30:44] <apollo13> pypi.io shows fine for me
[12:31:03] <[Tritium]> same
[12:31:33] <dstufft> So, whatever is in the master branch is live on pypi.io modulo something that just got merged and hasn't been automatically deployed yet, _except_ the logos don't currently go through the image optimization because for IP reasons the logo is in a separate repo
[12:31:43] <dstufft> and I was lazy and didn't setup optimization for it yet
[12:31:50] <dstufft> the non place holder logos that is
[12:32:11] <dstufft> okay yea
[12:32:15] <dstufft> I can reproduce locally
[12:33:15] <[Tritium]> ...curious about the ip reasons....and I just figured it out, nevermind
[12:33:31] <dstufft> :]
[12:33:50] <dstufft> That repo mostly just holds the new PyPI logo, and the logos of the sponsor companies across the bottom
[12:34:07] <[Tritium]> keeping it in the warehouse repo would license the image the same as the code...
[12:36:25] <dstufft> Okay, the black placeholder logo happens even with the original image committed to the repo
[12:36:40] <dstufft> so it's not any part of the build process, it's just the placeholder .svg not being compatible with FF I guess
[12:37:31] <dstufft> [Tritium]: Yea, the PSF doesn't want the logo licensed like that for trademark reasons or something
[12:37:48] <dstufft> Van asked me to keep the logo out of the public repo, so I did *shrug*
[12:38:27] <apollo13> if I change code, everything should restart just fine, or do I need to do something else?
[12:39:08] <dstufft> apollo13: gunicorn should restart (it's using --reload)
[12:39:19] <dstufft> for some reason celery doesn't seem to correctly restart
[12:39:29] <dstufft> but our use of celery tasks isn't very heavy at the moment
[12:39:55] <dstufft> (and changing static files should automatically trigger them to get rebuilt)
[12:40:35] <[Tritium]> I need to start sucking at my job so that when someone calls in sick on a weekend... i am not the first person called
[12:44:30] <apollo13> dstufft: so I added child-src to csp (which is the new thing instead of deprecated frame-src) but it will not show up, it properly reloaded but it seems as if config.add_settings({'csp': … }) does not pick up any changes -- is there another config somewhere?
[12:45:53] <apollo13> oh and firefox properly shows the images in the network tab of the dev tools :D
[12:46:31] <apollo13> dstufft: oh nevermind, it is there now, was looking at the wrong cached request
[12:46:36] <dstufft> apollo13: mmm Nope. Some views have dynamically added additional CSP rules but
[12:46:38] <dstufft> ok :)
[12:46:56] <apollo13> I think child-src inherits from default-src doesn't it?
[12:47:33] <apollo13> according to mozilla: "Note: This directive is deprecated. Use child-src instead, unless you are supporting browsers that use CSP 1.0 only (e.g. Safari 9)."
[12:47:46] <apollo13> do we need old browsers? :D
[12:47:57] <dstufft> I could be remembering wrong, I think I looked at child-src and it inherited from default-src whereas frame-src did not
[12:48:16] <apollo13> yes, child-src inherits default-src
[12:48:18] <apollo13> good point
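(Editor's note: the fallback behaviour apollo13 and dstufft settle on here, where `child-src` falls back to `default-src` when not set, can be illustrated with a toy resolver. This is a hypothetical helper for illustration, not Warehouse code:)

```python
# Toy illustration of CSP directive fallback: child-src uses default-src
# when it isn't set explicitly. Hypothetical helper, not Warehouse's CSP code.

def effective_sources(policy: dict, directive: str) -> list:
    """Return the source list a browser would apply for `directive`."""
    fallbacks = {"child-src": "default-src"}  # the inheritance discussed above
    if directive in policy:
        return policy[directive]
    parent = fallbacks.get(directive)
    if parent and parent in policy:
        return policy[parent]
    return ["*"]  # no restriction specified anywhere

policy = {"default-src": ["'self'"]}
print(effective_sources(policy, "child-src"))  # falls back to default-src
```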
[12:48:37] <dstufft> apollo13: https://warehouse.pypa.io/development/frontend/#browser-support
[12:48:47] <apollo13> ah that settles that
[12:49:37] <dstufft> https://github.com/pypa/warehouse/issues/1052 (more info on the decision for what browsers we decided to support if that sort of thing interests you)
[12:49:41] <apollo13> ok, and the rest of the errors is due to ember inspector
[15:00:32] <tdsmith> aw, pity dstufft.toml was passed over
[17:17:02] <pdobrogost_home> Hi all!
[20:24:28] <ionelmc> dstufft: isn't there a conf language that is a subset of yaml?
[20:25:22] <ionelmc> a sane subset, someone must have thought of it
[21:25:14] <rubenwardy> Why isn't my package installable? https://pypi.python.org/pypi?name=phpbb-parser&version=0.1.0&:action=display
[21:27:58] <dstufft> rubenwardy: you didn't upload any files
[23:38:37] <[Tritium]> ionelmc: I think TOML has been settled on.... i think partly because there are six TOML libraries in the wild that could easily be vendored