[10:35:25] <ojii2> hi. I have a library that depends on the unicode database (of which it bundles a part), how would I make a PEP440 compliant version string that "encodes" that info?
[10:35:43] <ojii2> example, my library code is version 1.0, unicode data version is 6.2.0
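One way to encode that is a PEP 440 local version label (the part after a `+`). A minimal sketch, where the `ucd` tag is an arbitrary, hypothetical choice, not an established convention (note also that indexes such as PyPI may reject local versions on upload):

    # requires the "packaging" library; the "ucd" label is hypothetical
    from packaging.version import Version

    v = Version("1.0+ucd.6.2.0")
    print(v.public)  # 1.0        -> the library's own version
    print(v.local)   # ucd.6.2.0  -> the bundled Unicode data version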
[11:42:53] <MarkusH> what is the suggested setting for my ~/.pypirc to use pypi.io instead of pypi.python.org?
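For reference, a minimal ~/.pypirc sketch; the upload URL below is an assumption based on what Warehouse advertised around this time, and is worth double-checking against current docs:

    ; assumption: https://upload.pypi.io/legacy/ is the Warehouse upload endpoint
    [distutils]
    index-servers = pypi

    [pypi]
    repository = https://upload.pypi.io/legacy/
    username = your-username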
[13:55:18] <tdsmith> hey dstufft, the pip user cache doesn't provide any integrity guarantees right? is it possible a malicious process can plant evil code in the local pip download cache and pip won't be able to tell the difference?
[13:57:00] <dstufft> tdsmith: right and I don't think it'd ever be reasonably possible to mitigate that (see the first immutable law of security)
[13:59:04] <tdsmith> context is i have a branch for adding some pip-using functionality to homebrew core and i'm wondering if we want to allow access to a persistent cache
[13:59:35] <tdsmith> pedantically, pip could ping warehouse for checksums, but i guess that's not stable yet
[13:59:44] <tdsmith> also pip probably doesn't care
[13:59:48] <apollo13> and what stops you from swapping out pip?
[14:00:41] <tdsmith> apollo13: not following, sorry
[14:00:54] <dstufft> tdsmith: the attack surface where you can alter a cache but can't alter pip itself seems pretty narrow and/or non-existent (particularly given things like user site-packages)
[14:01:17] <apollo13> tdsmith: if we are talking about malicious stuff, why just play with the cache instead of replacing pip or whatever
[14:02:17] <dstufft> tdsmith: to be clear though, pip itself does respect the index server's hashes if provided, and both PyPI and Warehouse provide them, but that's also an HTTP request so it can also be cached
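For context: the simple index attaches the expected digest to each file link as a URL fragment, which is what pip verifies downloads against. An illustrative excerpt, not a real page (the path layout and hash name vary between legacy PyPI and Warehouse):

    <a href="../../packages/source/f/flake8/flake8-2.5.4.tar.gz#sha256=cc1e...">
      flake8-2.5.4.tar.gz
    </a>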
[14:03:02] <tdsmith> pip and $HOME aren't persistent on our CI bots but this cache might be; i think the takeaway is it shouldn't be but it also shouldn't matter on a user machine
[14:03:43] <apollo13> tdsmith: so make the cache readonly and only update it from a trusted (tm) machine
[14:03:58] <apollo13> sounds as if you have a shared cache or something
[14:03:59] <dstufft> tdsmith: oh, is this for your builders?
[14:04:35] <dstufft> tdsmith: Maybe have a reused cache from anything that actually lands in `master` but not for PRs
[14:05:08] <dstufft> sort of like how Travis does it: PRs inherit the cache from `master`, but also layer their own cache on top of that
[14:06:28] <dstufft> tdsmith: I guess the other question is what are you hoping to cache here, are you looking to cache downloads, or built wheels
[14:06:44] <dstufft> if you're looking to cache downloads, you could just stick devpi or bandersnatch in the same local network
[14:07:27] <tdsmith> sdists, but i'll pretend fastly is probably as good :D
[14:09:12] <dstufft> tdsmith: just using fastly, and revisiting devpi/bandersnatch if that becomes a problem, is probably a reasonable path
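If a local mirror ever does become worthwhile, pointing pip at a devpi instance is just an index-url switch. A sketch, assuming a devpi-server running locally with its defaults:

    # assumes a devpi-server on its default port (3141);
    # root/pypi is devpi's built-in mirror of PyPI
    pip install --index-url http://localhost:3141/root/pypi/+simple/ flake8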
[14:16:31] <tdsmith> anyway, the homebrew PR is https://github.com/Homebrew/brew/pull/344 if you have any thoughts wrt how homebrew should think about rolling a pipsi-like thing around virtualenv
[14:36:25] <tdsmith> hm, the pip CLI API isn't quite as expressive as setuptools's
[14:36:50] <tdsmith> a pip equivalent of --install-scripts=/foo/bar that will work with whatever future non-setuptools build systems would be handy
[14:37:09] <tdsmith> is there a good place to inject that into a discussion?
[14:37:22] <dstufft> tdsmith: we don't have such a thing yet, largely because we don't know what the future of non-setuptools build systems looks like exactly yet :)
[14:38:08] <dstufft> to be clear, you can add --install-option=--install-scripts=/foo/bar though
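That flag forwards the option to the `setup.py install` command. A sketch with a hypothetical package name (note that install options historically force pip to build from source rather than use a wheel):

    # "somepackage" is a placeholder; --install-scripts is passed through
    # to setuptools' install command
    pip install --install-option="--install-scripts=/foo/bar" somepackage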
[14:38:11] <tdsmith> i guess the pip API has to be a ~strict subset of the intersection of all supported build system APIs :p
[14:38:37] <tdsmith> is it a bad idea to bake that into homebrew in 2016?
[14:39:36] <dstufft> that *probably* won't exist long term as we transition to our glorious new future, but I think that it'll probably be at least a year before we tackle anything beyond the setup.py interface (but I could be wrong)
[14:42:15] <dstufft> tdsmith: that being said, I'm not entirely opposed to such a thing, just know that it'll probably invoke some level of debate about whether it's something we want to support long term as part of the build interface or not (I think either it is, or we want to only support installing from wheel in which case it doesn't matter what the build interface does)
[14:44:20] <tdsmith> oh, so wheels already don't have that flexibility, right?
[14:45:58] <dstufft> tdsmith: right, wheels are a zip file and static metadata
[14:46:18] <dstufft> the zip file has directories that correspond to different things, like purelib, platlib, scripts, data, etc
[14:46:37] <dstufft> we don't currently let you specify a different location for those things via the pip cli, but arguably we should.
[14:46:58] <dstufft> but due to the weird nature of build systems in Python, that'll get dragged into such a discussion too
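Those categories are easy to see by listing a wheel's contents, since wheels are plain zip files. A minimal sketch, with a hypothetical wheel filename:

    # "example-1.0-py2.py3-none-any.whl" is a hypothetical filename
    import zipfile

    with zipfile.ZipFile("example-1.0-py2.py3-none-any.whl") as whl:
        for name in whl.namelist():
            print(name)

    # typical layout:
    #   example/__init__.py                   (purelib, the default)
    #   example-1.0.data/scripts/example-cli  (scripts category)
    #   example-1.0.dist-info/METADATA        (static metadata)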
[14:50:15] <natefoo> is it possible to get `pip -i` behavior from within setup.py or wheel metadata when installing something from pypi? we have some dependencies that are forks and so not hosted on pypi
[14:50:29] <natefoo> i guess we could always upload the forked projects under some other name
[14:51:06] <dstufft> natefoo: not sure what you mean by "from within setup.py"-- you want it to work with ``setup.py install``?
[14:52:06] <natefoo> i want to `pip install galaxy` and have pip check for dependencies on our pypi server as if the user had run `pip install -i http... galaxy`
[14:52:26] <natefoo> sort of like dependency_links (and i have read your blog post. ;)
[14:54:04] <dstufft> natefoo: yea, that's not possible without adding flags. there is a --process-dependency-links flag in pip if you really want to use dependency_links, but other than that, no. The canonical suggestions are to instruct people to use -i or to upload your forked versions with new names
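One middle ground, if you control how users install: pip allows index options inside a requirements file, so the `-i` doesn't have to be typed by hand. A sketch with a hypothetical index URL:

    # requirements.txt -- https://pypi.example.org/simple/ is a placeholder
    --index-url https://pypi.example.org/simple/
    galaxy

then `pip install -r requirements.txt` reads the index URL from the file.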
[14:55:02] <dstufft> tdsmith: just FYI I'm spending 0 time actually reading this ruby code, because ruby makes my brain hurt
[14:55:20] <tdsmith> hahahaha no problem, thanks for taking a look!
[14:57:08] <dstufft> tdsmith: fwiw if you want to keep the same integrity guarantees, something like shit_to_install = {"flake8": ("2.5.4", "sha256:cc1e58179f6cf10524c7bfdd378f5536d0a61497688517791639a5ecc867492f"), ...} for the entire dependency tree would work
[14:57:28] <dstufft> then you just write out a fake requirements.txt like https://github.com/pypa/warehouse/blob/master/requirements/main.txt
[14:58:01] <dstufft> multiple hashes are a 1 of N situation; it lets you accept wheels and sdists across many platforms
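A minimal sketch of that approach; the table below is illustrative, and a real one would need digests for every file (sdist and wheels) you are willing to accept:

    # build a hash-pinned requirements file from a
    # {name: (version, [hashes])} table; pip treats multiple
    # --hash options on one requirement as "1 of N"
    deps = {
        "flake8": ("2.5.4", [
            "sha256:cc1e58179f6cf10524c7bfdd378f5536d0a61497688517791639a5ecc867492f",
        ]),
    }

    with open("requirements.txt", "w") as f:
        for name, (version, hashes) in sorted(deps.items()):
            opts = " ".join("--hash={}".format(h) for h in hashes)
            f.write("{}=={} {}\n".format(name, version, opts))

Installing with `pip install -r requirements.txt` then runs in hash-checking mode automatically, since the file contains --hash options.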
[14:58:05] <tdsmith> can i make pip freeze do this for me? :D
[15:01:05] <dstufft> tdsmith: the main downside besides having to generate hashes for the entire dependency tree is that if you don't disable wheels, someone uploading a more specific wheel that matches your platform will break your build, since a new wheel will require adding a new hash and pip isn't (yet) smart enough to ignore files it doesn't have hashes for
[15:01:27] <dstufft> the same problem exists for sdists too, since you can do .tar.gz, .tar, .tar.xz, .tar.bz2, .zip, etc sdists
[15:01:35] <dstufft> but in practice you're less likely to hit that case
[15:01:51] <dstufft> than you are for someone who uploaded sdists, and later added wheels (or added more wheels)