PMXBOT Log file Viewer


#pypa logs for Monday the 1st of February, 2016

[10:40:55] <linovia> Thanks a lot for the get_pip.py script !!
[11:35:12] <nikolaosk> Hi, I need an opinion on something
[11:35:19] <nikolaosk> I want to make a bugfix release on an old release of some pkg
[11:35:26] <nikolaosk> I realized that the pkg has been supporting py2.6 far too long but I don't want to drop support on a bugfix release
[11:35:43] <nikolaosk> on the other hand, supporting py2.6 means that a requirement will have to be fixed to a very old version
[11:35:49] <nikolaosk> would it be sane to use version_info in setup.py and do something like 'requirement<2.0' if PY26 else 'requirement<10.0' ?
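The dynamic-requirements idea nikolaosk describes — branching on `sys.version_info` inside setup.py — can be sketched like this. The dependency name and version bounds are placeholders; the log never names the actual package.

```python
import sys

def build_install_requires():
    # Dynamic requirements: choose the pin at the time setup.py runs.
    # "somedep" and its versions are hypothetical, not from the log.
    if sys.version_info[:2] <= (2, 6):
        # Pin to the last release line that (we assume) supported Python 2.6
        return ["somedep<2.0"]
    return ["somedep<10.0"]

# In setup.py this would be used as:
# setup(..., install_requires=build_install_requires())
```

As mgedmin notes next, the catch is that the result gets baked into whatever metadata is built on the machine that runs setup.py.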
[11:38:46] <mgedmin> it can cause trouble for pip 7 users because wheel cache
[11:39:22] <mgedmin> (pip 8 made the wheel cache limited to one minor python version so dynamic requirements like this wouldn't leak)
[11:40:18] <mgedmin> the other way to specify dynamic requirements is PEP-(I can't remember which one) environment markers
[11:40:53] <mgedmin> they're underdocumented and *sigh* I just can't, sorry
[11:41:46] <nikolaosk> thanks, this actually can help me a lot already
[11:43:43] <nikolaosk> PEP 496, it has "draft" status
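The environment-marker alternative mgedmin points at (PEP 496, then a draft) was typically spelled in setuptools of that era as conditional `extras_require` keys: the marker after the colon is evaluated at install time, so the built metadata stays static. This is a hedged sketch; "somedep" is again a placeholder.

```python
# Conditional dependencies via environment markers (setuptools-era syntax):
# a key of the form ':<marker>' applies its requirements only when the
# marker is true on the installing interpreter.
conditional_requires = {
    ':python_version == "2.6"': ["somedep<2.0"],
    ':python_version >= "2.7"': ["somedep<10.0"],
}
# setup(..., extras_require=conditional_requires)
```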
[11:44:28] <mgedmin> https://mail.python.org/pipermail/distutils-sig/2015-October/027415.html has more info
[11:52:53] <nikolaosk> that's an entertaining thread
[12:00:47] <mgedmin> oh hey did any distro ever ship pip 7?
[12:00:56] <mgedmin> maybe we can ignore it :)
[12:07:29] <dstufft> mgedmin: don't think so
[12:07:34] <dstufft> we also have numbers!
[12:08:34] <dstufft> https://caremad.io/s/6M9yzjhgty/
[12:08:45] <dstufft> that is what was downloading things from PyPI yesterday.
[12:19:53] <nikolaosk> so, even if I specify dynamic requirements AND env markers
[12:19:55] <nikolaosk> it will break
[12:20:27] <mgedmin> wait, no
[12:20:45] <dstufft> it's a bit complicated
[12:20:54] <dstufft> and it depends on what versions of pip and setuptools you want to support
[12:20:55] <mgedmin> dynamic requirements "break" if you use pip < 6 or setuptools < something really really ancient, or if you use bdist_wheel < 0.24 (iirc)
[12:20:56] <dstufft> sec
[12:20:59] <mgedmin> er
[12:21:00] <nikolaosk> I mean, it will break on some systems
[12:21:03] <mgedmin> s/dynamic reqs/env markers
[12:21:11] <mgedmin> dynamic reqs break if you use pip >= 7 < 8
[12:21:49] <mgedmin> I've never tried to imagine a situation where a packager would apply both solutions at the same time
[12:22:12] <mgedmin> it'd probably break on anything except pip 6 or 8?
[12:22:34] <dstufft> https://github.com/twisted/treq/pull/110/files#r44893644
[12:22:52] <dstufft> that thread has a lot of :words: about it
[12:23:06] <mgedmin> eh, just depend on the thing you need and let your users who need python 2.6 support pin the versions
[12:23:59] <dstufft> (I just woke up and I didn't sleep much last night, so I may not be completely with it)
[12:24:37] <mgedmin> nikolaosk doesn't want to drop python 2.6 support in a bugfix release
[12:25:00] <nikolaosk> I only use setup.py, no setup.cfg nor requirements.txt
[12:25:09] <mgedmin> but one of their deps apparently dropped 2.6 support, so version pinning may be required
[12:26:17] <mgedmin> lol at choose your adventure
[12:27:05] <nikolaosk> I wonder if I put more logic in setup.py to handle some strange cases, would it end up full of hacks?
[12:27:53] <dstufft> Tl;dr -> You can use environment markers in setup.py, pip < 6 won't get the dependency installed OR you can programmatically do a thing and pip >7.1,<8 will get a broken-ish wheel cached, OR you can duplicate the information and do it programmatically in your setup.py and add a setup.cfg like https://github.com/pypa/twine/blob/master/setup.cfg#L9-L15 with a setup.py like https://github.com/pypa/twine/blob/master/setup.py#L29-L32 and it works everywhere,
[12:27:54] <dstufft> but you have to list your dependencies twice
[12:28:18] <dstufft> oh, and doing it programmatically also means if you try to upload wheels to PyPI they'll be wrong
[12:28:26] <dstufft> unless you use the third option of duplicating information
[12:28:38] <dstufft> if that makes sense?
[12:30:11] <mgedmin> oh, cool, there's a way to have it work everywhere!
[12:30:23] <mgedmin> somehow I'd lost track of that
[12:31:12] <dstufft> yea, you just have to duplicate information
[12:31:46] <dstufft> (this works because setup.py bdist_wheel will ignore install_requires if there is a setup.cfg like twine has)
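For reference, the setup.cfg side of the "duplicate the information" trick dstufft links (the twine files) looked roughly like this: `bdist_wheel` of that era read a `[metadata] requires-dist` section with markers and ignored the dynamic `install_requires` from setup.py. Section and key names here are as recalled from the linked twine setup.cfg, and "somedep" is still a placeholder — treat this as a sketch, not the exact file.

```ini
[metadata]
requires-dist =
    somedep<2.0; python_version=='2.6'
    somedep<10.0; python_version>='2.7'
```

The same dependencies would still be listed (with `sys.version_info` logic) in setup.py for pip/sdist installs — hence "you have to list your dependencies twice".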
[12:40:56] <nikolaosk> the combination of environments to test discourages me
[12:41:09] <nikolaosk> perhaps I should compromise
[12:41:24] <nikolaosk> maybe I can keep py2.6
[12:41:27] <mgedmin> you could outsource your QA to your users, like almost everyone else :)
[12:42:04] <nikolaosk> but drop pip-7?
[12:42:35] <nikolaosk> although someone may have pip 7 installed and use other software to install the package
[12:43:09] <nikolaosk> can setup.py know that it's run by pip and which pip?
[12:43:13] <mgedmin> the pip 7 breakage is this: when a user pip installs yourpackage, the requirements are frozen and written into the wheel cache
[12:43:30] <mgedmin> so when a user pip installs the same package again, using a different python version, they get frozen requirements
[12:43:43] <mgedmin> how likely is it that you have users that will pip install the same package for multiple python versions?
[12:44:27] <mgedmin> I think it's mostly only developers who use $yourpackage as a dependency and test against multiple pythons that are affected by this problem
[12:50:39] <nikolaosk> maybe a devops person who finally decided to switch to py2.7
[12:59:24] <nikolaosk> disabling pip's cache from setup.py isn't possible, right?
[13:35:13] <ionelmc> nikolaosk: you mean the wheel cache?
[13:37:04] <nikolaosk> yes
[14:15:54] <Theuni> dstufft: hi!
[14:16:12] <Theuni> dstufft: i have a case of fastly not updating a page for a while
[14:16:19] <Theuni> getting bandersnatch stuck
[14:16:27] <Theuni> Feb 1 15:05:01 services02 bandersnatch[mirror]: 2016-02-01 15:05:01,452 DEBUG: Getting /simple/PyKat/ (serial 1933566)
[14:16:28] <Theuni> Feb 1 15:05:01 services02 bandersnatch[mirror]: 2016-02-01 15:05:01,476 DEBUG: Expected PyPI serial 1933566 for request https://pypi.python.org/simple/PyKat/ but got 1933565
[14:16:37] <Theuni> any chance to get this cleared?
[15:24:06] <ionelmc> nikolaosk: just make bdist_wheel fail, that will make pip install it in the "old" way
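ionelmc's workaround — make `bdist_wheel` fail so pip falls back to installing from the sdist (re-running setup.py on each interpreter, and skipping the wheel cache) — could be sketched as a guard near the top of setup.py. This is an illustration of the idea only; the function name is invented here.

```python
import sys

def refuse_bdist_wheel(argv):
    # If someone tries to build a wheel, bail out: pip will then install
    # from the sdist, which evaluates the dynamic requirements per-Python.
    if "bdist_wheel" in argv:
        raise SystemExit("building wheels is disabled for this package")

# At the top of setup.py: refuse_bdist_wheel(sys.argv)
```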
[15:28:51] <dstufft> Theuni: can try now
[15:29:42] <Theuni> dstufft: that incident makes me really want to implement the purge request upon this specific condition
[15:29:51] <Theuni> (see the existing bug entry)
[15:30:08] <Theuni> especially as people are already doing so, except under a lot more circumstances with a much bigger shotgun ;)
[15:31:56] <dstufft> Theuni: well, I'm aware of one group of people doing so, openstack and I can get them to stop by asking them to do so. Adding it to bandersnatch proper means we have to convince people to upgrade
[15:32:14] <dstufft> it also means we'll end up with a bunch of purges when we don't need them, which will directly cost the PSF more money
[15:32:30] <Theuni> i have no data on whether that effect is really this dramatic
[15:32:45] <Theuni> i would imagine it's not as bad as you think, but again, i might be completely wrong, too :)
[15:33:12] <Theuni> dealing with the unreliability also costs (different) money and time
[15:33:21] <Theuni> it's less frequent than it used to be
[15:33:31] <Theuni> but it also makes bandersnatch mirrors less reliable than what people would like
[15:34:13] <Theuni> we could try something like implementing it in an unreleased (maybe not even publicly pushed) version and monitor the result?
[15:34:39] <Theuni> i'd just like to be able to contribute to progress, not keeping the band-aid alive :)
[15:34:56] <Theuni> (well, progress-like bandaid that would be, but still)
[15:35:19] <Theuni> I have one more idea to make bandersnatch less annoying even without the cache purge
[15:35:38] <Theuni> i don't update the todo list while still having stuff on it, that's what blocks people most.
[15:35:44] <Theuni> i think i could resolve that first
[15:35:55] <Theuni> would you be willing to consider the purge option if I do that?
[15:36:21] <dstufft> tbh, the real solution is to implement retries for the purging side on legacy PyPI, which is what warehouse already does
[15:36:39] <Theuni> so the client does see failures?
[15:36:45] <Theuni> (client == pypi)
[15:37:24] <dstufft> oh yea
[15:37:26] <dstufft> it does
[15:37:34] <Theuni> so ...
[15:37:45] <dstufft> the problem is just that rq (The task queue) doesn't have any retry support
[15:37:49] <Theuni> it shouldn't be hard to do, it's jsut that nobody ame around?
[15:37:53] <Theuni> ah
[15:38:05] <Theuni> i can look at it and try to figure out whether i can force my will on it ... ;)
[15:38:07] <dstufft> so something would need to be figured out
[15:38:20] <dstufft> warehouse uses celery so it's got retries all over it
[15:38:45] <Theuni> well
[15:38:47] <Theuni> i've got 30 minutes
[15:38:50] <Theuni> lets see what happens
[15:39:31] <dstufft> I hadn't done it because the legacy PyPI code base is a pain in the ass, and afaik most of the issues get caught by openstack's purging, which I can ask them to stop when warehouse goes lives
[15:41:19] <Theuni> what's the eta on that? :)
[15:41:25] <Theuni> i was happy to see the announcements
[15:41:33] <Theuni> grats btw, and thanks for all that work!
[15:43:21] <dstufft> Theuni: hoping 2016Q1, the API stuff is all there, it's just trying to fill in missing UI functionality now
[15:47:18] <Theuni> sounds great
[15:47:38] <Theuni> (with the usual margin of error for "just") :)
[15:56:36] <Theuni> dstufft: i poked around a bit. there's exception handling and i found a few stackoverflow examples of people who implemented re-queuing
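The retry dstufft describes as the real fix (re-running a failed purge instead of dropping it) has a simple generic shape, sketched below. This is not rq's or warehouse's actual API — just the pattern a purge task would wrap itself in.

```python
import time

def call_with_retries(fn, attempts=3, delay=0.0):
    # Run fn(); on failure, retry up to `attempts` times total,
    # sleeping `delay` seconds between tries, then re-raise.
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```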
[16:00:53] <Theuni> i should have kept that vagrant file that i created a few years back to get pypi dev bootstrapped
[16:01:03] <Theuni> this is just embarrassing ...
[16:02:52] <Theuni> ok, out of time for now
[16:02:55] <Theuni> dstufft: btw
[16:03:02] <Theuni> i explicitly purged the one url that broke for me
[16:03:03] <Theuni> doesn't help
[16:03:20] <dstufft> Theuni: using curl -XPURGE?
[16:03:24] <Theuni> yup
[16:03:30] <dstufft> if that's the case, sounds like a fastly bug
[16:03:36] <Theuni> well
[16:03:39] <dstufft> which url?
[16:03:42] <Theuni> i have my oppinion on that
[16:03:44] <Theuni> https://pypi.python.org/simple/PyKat/
[16:03:57] <Theuni> s/oppi/opi/
[16:04:02] <dstufft> (well that or PyPI is serving the wrong data)
[16:04:09] <dstufft> what serial are you expecting
[16:04:14] <Theuni> 2016-02-01 16:56:40,250 DEBUG: Expected PyPI serial 1933566 for request https://pypi.python.org/simple/PyKat/ but got 1933565
[16:04:23] <Theuni> off by one :)
[16:05:01] <dstufft> oh wait
[16:05:07] <dstufft> the urls are normalized
[16:05:24] <Theuni> i tried purging without trailing slash, too
[16:05:33] <Theuni> if that's what you mean
[16:05:42] <Theuni> or is there another level of normalization going on
[16:05:43] <Theuni> ?
[16:06:03] <dstufft> re.sub(r"[-_.]+", "-", name).lower()
[16:06:13] <dstufft> gotta purge https://pypi.python.org/simple/pykat/
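The purge URL has to use the normalized project name; the rule dstufft pastes above, wrapped as a function, is:

```python
import re

def normalize(name):
    # Collapse runs of '-', '_', '.' into a single '-' and lowercase,
    # which is how the canonical /simple/ URLs are formed.
    return re.sub(r"[-_.]+", "-", name).lower()
```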
[16:06:28] <Theuni> looks better
[16:06:29] <Theuni> i dont
[16:06:30] <Theuni> i
[16:06:33] <Theuni> fastly
[16:06:34] <Theuni> really
[16:07:06] <Theuni> that's the old discussion again
[16:07:19] <Theuni> should bandersnatch be normalizing those names or not?
[16:07:25] <Theuni> in any case
[16:07:30] <Theuni> fastly should be purging.
[16:07:54] <Theuni> every time i come in contact with fastly i get reminded why middleboxes are a bad idea
[16:08:20] <dstufft> The canonical URL is the normalized URL, so you'll do less HTTP and such if you normalize it first
[16:08:50] <dstufft> Theuni: heh, if it weren't for Fastly PyPI wouldn't scale nearly as well as it does.
[16:08:55] <Theuni> i know
[16:08:56] <Theuni> but still
[16:09:03] <Theuni> it creates pains that didn't exist before
[16:09:23] <Theuni> i've had to have them debug tcp kernel options on some random host on the internet because they killed connections to my datacenter
[16:09:34] <Theuni> and the problem always is: you get errors that are completely unrelated
[16:09:42] <Theuni> the error scenarios are just atrocious
[16:09:46] <Theuni> anyway
[16:09:58] <Theuni> i'll normalize the package names in urls
[16:10:01] <Theuni> in the next days or so
[16:10:09] <Theuni> that should help a bit, too
[16:10:30] <dstufft> Theuni: yea, I thought I checked bandersnatch for normalization already, maybe I did and got confused
[16:10:42] <Theuni> i'll check
[16:10:44] <Theuni> gotta run now
[16:10:47] <dstufft> see ya
[22:20:12] <xafer> Hello, could someone with some setup.py expertise quickly look at https://github.com/spyder-ide/spyder/issues/2962 ?