[12:20:54] <dstufft> and it depends on what versions of pip and setuptools you want to support
[12:20:55] <mgedmin> dynamic requirements "break" if you use pip < 6 or setuptools < something really really ancient, or if you use bdist_wheel < 0.24 (iirc)
[12:27:05] <nikolaosk> I wonder if I put more logic in setup.py to handle some strange cases, would it end up full of hacks?
[12:27:53] <dstufft> Tl;dr -> You can use environment markers in setup.py, but pip < 6 won't get the dependency installed; OR you can compute the dependencies programmatically, but pip >7.1,<8 will get a broken-ish wheel cached; OR you can duplicate the information (programmatic in your setup.py, plus a setup.cfg like https://github.com/pypa/twine/blob/master/setup.cfg#L9-L15 with a setup.py like https://github.com/pypa/twine/blob/master/setup.py#L29-L32) and it works everywhere,
[12:27:54] <dstufft> but you have to list your dependencies twice
[12:28:18] <dstufft> oh, and doing it programmatically also means if you try to upload wheels to PyPI they'll be wrong
[12:28:26] <dstufft> unless you use the third option of duplicating information
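The "duplicate the information" option dstufft links can be sketched roughly as follows; the package names and version pins here are illustrative, not twine's actual dependencies:

```python
# setup.py side of the duplication approach: compute install_requires at
# run time so pip < 6 (which ignores environment markers) still installs
# the right dependencies.  The same conditions are repeated as static
# markers in setup.cfg (e.g. "argparse; python_version == '2.6'"), so
# wheels uploaded to PyPI carry correct metadata for every Python.
import sys

install_requires = ["requests>=2.5.0"]  # hypothetical base dependency

if sys.version_info[:2] == (2, 6):
    # duplicated in setup.cfg as:  argparse; python_version == '2.6'
    install_requires.append("argparse")

# setup(..., install_requires=install_requires) would follow here
```

The cost is exactly what dstufft says: the same condition now lives in two files and must be kept in sync by hand.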
[12:42:35] <nikolaosk> although someone may have pip 7 installed and use other software to install the package
[12:43:09] <nikolaosk> can setup.py know that it's run by pip and which pip?
[12:43:13] <mgedmin> the pip 7 breakage is this: when a user pip installs yourpackage, the requirements are frozen and written into the wheel cache
[12:43:30] <mgedmin> so when a user pip installs the same package again, using a different python version, they get frozen requirements
[12:43:43] <mgedmin> how likely is it that you have users that will pip install the same package for multiple python versions?
[12:44:27] <mgedmin> I think it's mostly only developers who use $yourpackage as a dependency and test against multiple pythons that are affected by this problem
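The failure mode mgedmin describes can be illustrated with a toy version of a dynamic setup.py; the dependency names and version logic are hypothetical:

```python
# Illustration of the pip 7 wheel-cache problem: install_requires
# computed in setup.py is evaluated once, at wheel build time, and the
# result is frozen into the cached wheel's metadata.
def dynamic_requires(version_info):
    """Hypothetical dynamic dependency logic from a setup.py."""
    reqs = ["six"]
    if version_info[:2] < (3, 0):
        reqs.append("futures")  # py2 backport of concurrent.futures
    return reqs

# Building (and caching) a wheel under Python 3 freezes the Python 3
# answer into the wheel:
frozen = dynamic_requires((3, 5))

# A later `pip install` under Python 2 reuses that cached wheel and gets
# the frozen list, so the "futures" backport it needs is missing:
assert "futures" not in frozen
assert "futures" in dynamic_requires((2, 7))
```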
[12:50:39] <nikolaosk> maybe a devops person who finally decided to switch to py2.7
[12:59:24] <nikolaosk> disabling pip's cache from setup.py isn't possible, right?
[13:35:13] <ionelmc> nikolaosk: you mean the wheel cache?
[14:16:28] <Theuni> Feb 1 15:05:01 services02 bandersnatch[mirror]: 2016-02-01 15:05:01,476 DEBUG: Expected PyPI serial 1933566 for request https://pypi.python.org/simple/PyKat/ but got 1933565
[14:16:37] <Theuni> any chance to get this cleared?
[15:24:06] <ionelmc> nikolaosk: just make bdist_wheel fail, that will make pip install it in the "old" way
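ionelmc's workaround can be sketched like this; a minimal stub, kept as a plain class so it's self-contained (in a real setup.py it would subclass setuptools.Command and be passed to setup() via cmdclass):

```python
# Sketch of "just make bdist_wheel fail": override the bdist_wheel
# command with one that always errors, so pip cannot build and cache a
# wheel and falls back to the old `setup.py install` path, where dynamic
# install_requires is re-evaluated on every install.
class no_bdist_wheel:
    """Stub command that refuses to build a wheel."""
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        raise SystemExit("bdist_wheel is disabled for this package "
                         "(dependencies are computed dynamically)")

# setup(..., cmdclass={"bdist_wheel": no_bdist_wheel}) would wire it in
```

The trade-off: every install is slower (no cached wheel), but no user ever gets frozen requirements.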
[15:30:08] <Theuni> especially as people are already doing so, except under a lot more circumstances with a much bigger shotgun ;)
[15:31:56] <dstufft> Theuni: well, I'm aware of one group of people doing so, openstack, and I can get them to stop just by asking. Adding it to bandersnatch proper means we have to convince people to upgrade
[15:32:14] <dstufft> it also means we'll end up with a bunch of purges when we don't need them, which will directly cost the PSF more money
[15:32:30] <Theuni> i have no data on whether that effect is really this dramatic
[15:32:45] <Theuni> i would imagine it's not as bad as you think, but again, i might be completely wrong, too :)
[15:33:12] <Theuni> dealing with the unreliability also costs (different) money and time
[15:33:21] <Theuni> it's less frequent than it used to be
[15:33:31] <Theuni> but it also makes bandersnatch mirrors less reliable than what people would like
[15:34:13] <Theuni> we could try something like implementing it in an unreleased (maybe not even publicly pushed) version and monitor the result?
[15:34:39] <Theuni> i'd just like to be able to contribute to progress, not keep the band-aid alive :)
[15:34:56] <Theuni> (well, progress-like bandaid that would be, but still)
[15:35:19] <Theuni> I have one more idea to make bandersnatch less annoying even without the cache purge
[15:35:38] <Theuni> bandersnatch doesn't update the todo list while it still has stuff on it, and that's what blocks people most.
[15:35:44] <Theuni> i think i could resolve that first
[15:35:55] <Theuni> would you be willing to consider the purge option if I do that?
[15:36:21] <dstufft> tbh, the real solution is to implement retries for the purging side on legacy PyPI, which is what warehouse already does
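The retry idea can be sketched as below; the function name, URL, and backoff values are illustrative, not what warehouse actually runs:

```python
# Sketch of retrying CDN cache purges instead of relying on downstream
# consumers (like openstack's scripts) to purge on mirror mismatch.
import time


def purge_with_retries(purge, url, attempts=3, delay=0.5):
    """Call purge(url); on failure, retry with exponential backoff.

    `purge` is whatever callable issues the PURGE request; the last
    failure is re-raised so the caller can log it.
    """
    for attempt in range(attempts):
        try:
            return purge(url)
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay * (2 ** attempt))
```

With this on the PyPI side, transient purge failures stop surfacing as stale serials in bandersnatch.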
[15:36:39] <Theuni> so the client does see failures?
[15:39:31] <dstufft> I hadn't done it because the legacy PyPI code base is a pain in the ass, and afaik most of the issues get caught by openstack's purging, which I can ask them to stop when warehouse goes live
[15:47:38] <Theuni> (with the usual margin of error for "just") :)
[15:56:36] <Theuni> dstufft: i poked around a bit. there's exception handling and i found a few stackoverflow examples of people who implemented re-queuing
[16:00:53] <Theuni> i should have kept that vagrant file that i created a few years back to get pypi dev bootstrapped
[16:09:03] <Theuni> it creates pains that didn't exist before
[16:09:23] <Theuni> i've had to have them debug tcp kernel options on some random host on the internet because they killed connections to my datacenter
[16:09:34] <Theuni> and the problem always is: you get errors that are completely unrelated
[16:09:42] <Theuni> the error scenarios are just atrocious