[10:19:34] <agoose77> Hi all, I'm trying to install a locally checked-out Git package as editable, into a custom target. I tried to use PYTHONUSERBASE, but it seems that after installing the dependencies correctly, it fails when trying to write to the system packages path (which is read-only)
[10:20:06] <agoose77> --prefix also fails for some reason, it seems to ignore/wipe the PYTHONPATH
[12:59:28] <agoose77> Does anyone know how to use PYTHONUSERBASE for editable installs of the form `pip install --user -e ./some-dir` ?
[13:43:09] <agoose77> Something seems to be setting `sys.flags.no_user_site` to False. Anyone know if Pip does that secretly?
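A minimal sketch of what agoose77 describes above, assuming a writable custom target (the /opt/custom-target path is purely illustrative; ./some-dir is the local checkout from the message):

```python
import os
import subprocess
import sys

# Point the "user" install base at a custom writable directory via
# PYTHONUSERBASE, then ask pip for an editable install with --user.
env = dict(os.environ, PYTHONUSERBASE="/opt/custom-target")
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--user", "-e", "./some-dir"],
    check=True,
    env=env,
)
# For imports to work afterwards, the resulting site-packages directory
# (roughly $PYTHONUSERBASE/lib/pythonX.Y/site-packages) has to be on
# PYTHONPATH or be the interpreter's active user site.
```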
[14:06:33] <toad_polo> pradyunsg: Something I was thinking of last night - are there no tests in `pip` for installing a PEP 517 module?
[14:08:15] <toad_polo> It occurs to me that as much as the "wheel hacks break PEP 517 builds" issue is definitely a Debian issue, I assume they are running the test suite in their environment after the hacks go in, so why didn't they detect this before launch?
[14:19:49] <toad_polo> Interesting. I wonder if they're just not running the tests or something.
[14:35:43] <dstufft> toad_polo: afaik Debian does not run a lot of our tests because they require network access and i don't think their machines support that
[14:43:22] <toad_polo> Interesting. I assumed that they were, since they constantly do.
[14:43:57] <dstufft> of course we would rather they not debundle at all
[14:44:19] <dstufft> but that documentation exists in a "if you're going to do it, please do it in this way"
[14:44:35] <toad_polo> Ah. Hah, that does not exactly read as a ringing endorsement.
[14:45:24] <dstufft> mostly because we (well me, I was the only one who worked on it IIRC at least on the pip side) figured it would be better to push them to do something we had some control over
[14:45:25] <dstufft> versus just doing whatever they wanted
[14:45:49] <toad_polo> Yeah. It's certainly murkier than I had thought.
[14:46:20] <dstufft> It was basically the compromise to get them to not just debundle everything and change things to import from site-packages
[14:46:53] <dstufft> because I really didn't want them to do that, because then ``pip install ...`` could break pip with no good way to fix it
[14:47:04] <toad_polo> Though I'm still inclined to see this as frustrating because their system creates more work and confuses the branding.
[14:48:18] <toad_polo> "You are really going to fuck up the brand if you do it the way you want, so we'll sort of support your use case to save ourselves some work." feels like roughly the same thing here.
[14:49:15] <toad_polo> And the message I wish would come across is, "These are the consequences of using LTS distros for interacting with dynamic ecosystems."
[14:49:38] <toad_polo> What actually seems to be coming across is "PEP 517 implementations are not ready yet, don't use them."
[14:49:45] <dstufft> Ehh, from my PoV working with Debian has severely reduced the divergence between pip upstream and Debian, and it also has created a working relationship where I have some measure of input in the patches they do implement
[14:50:41] <dstufft> like prior to this effort to semi support them, the relationship was largely just antagonistic on both sides, and we both just trampled over each other with little regard for the other
[14:51:24] <dstufft> is it the ideal world that I would want? No of course not, but politics rarely is, and politics is just another name for the dynamics of 2 or more people ;)
[14:53:23] <dstufft> IIRC at the time, Debian was about 2 steps away from just removing python-pip (and thus also breaking their shipped versions of virtualenv and venv) because the problem had come to a head and without some give from upstream they were just deciding it was too much work to continue to support pip on Debian at all
[14:53:24] <dstufft> Maybe that would have been a better outcome, idk, I didn't think so (and I still don't)
[15:00:02] <toad_polo> dstufft: I agree that it's a good thing to work with them if possible.
[15:00:37] <toad_polo> I'm not sure how I feel about the possibility of them not shipping pip at all.
[15:01:50] <toad_polo> In some ways I feel like this is the kind of thing where PSF could use trademark and say, "You can't ship something you call Python without including pip and the entire standard library."
[15:02:00] <dstufft> (to be clear, it wasn't like a threat, they were discussing it and barry reached out to me to be like "hey, folks want to do this, I don't want it to happen, lets figure something out")
[15:02:23] <dstufft> that's certainly a possibility yea
[15:02:55] <dstufft> that's probably a worse overall outcome though, because it's not like Debian hasn't just renamed a project before lol
[15:03:07] <toad_polo> Though again that could come down to "Debian doesn't ship something called Python", and the question is whether these minor and occasional impacts to Python's brand are worth that.
[15:03:24] <dstufft> (I'm looking at you, iceweasel)
[15:04:05] <dstufft> overall, the way Debian unbundles pip is roughly equal to them just overwriting our bundled versions with the versions of the libraries they already ship
[15:04:10] <toad_polo> Yeah, and presumably the branding there is tough anyway because I know that I have `alias fd=fdfind` on my Debian-based work computer, and when I do it I think, "Jeez, Debian.", but me from 5 years ago would be like, "What is this weird shit, ah well."
[15:04:33] <dstufft> there's just some gaps in that "roughly"
[15:05:39] <toad_polo> I'd really like to see an improvement in how users can get Python independent of their distro.
[15:06:15] <dstufft> (one of the things I've suggested to them, and they might be forced to do if PEP 517 decides zip import is not supported, is to stop shipping a directory of zipped wheels, and to ship a directory of unzipped wheels, which would lower that delta)
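A rough sketch of the zip-import mechanism dstufft refers to: .whl files are zip archives, so placing them directly on sys.path makes their contents importable via zipimport. The wheel directory here is an assumption, not necessarily Debian's actual layout:

```python
import glob
import sys

WHEEL_DIR = "/usr/share/python-wheels"  # assumed distro-managed wheel directory

# Each .whl added to sys.path is served by zipimport, so pure-Python packages
# inside it become importable without being unpacked.
for wheel in sorted(glob.glob(f"{WHEEL_DIR}/*.whl")):
    sys.path.insert(0, wheel)

# If zip imports stop being supported for PEP 517 builds, the same effect
# requires unpacked wheel contents (plain directories) on sys.path instead.
```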
[15:08:39] <toad_polo> I always want to tell people "The system python is the system's python", but the follow up is, "Well, how do I use Python, then?" and I have to give them a bunch of sub-optimal answers like pyenv.
[15:09:10] <toad_polo> Pyenv is very close to what I want, but they don't have an "X.Y-latest" version, which is almost always what I want.
[15:09:26] <toad_polo> Plus the way to get updates is to do `git pull` in a hidden directory, which is... not good UX...
[15:10:28] <dstufft> yea, I use pyenv on my mac and I hate that X.Y-latest doesn't exist
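A hedged workaround sketch for the missing "X.Y-latest": resolve the newest 3.8.x release that pyenv knows about and install it. This assumes pyenv is on PATH and that `pyenv install --list` prints one version per line:

```python
import re
import subprocess

# Ask pyenv for everything it can build, then keep only CPython 3.8.x releases.
listing = subprocess.run(
    ["pyenv", "install", "--list"],
    capture_output=True, text=True, check=True,
).stdout
versions = [
    line.strip()
    for line in listing.splitlines()
    if re.fullmatch(r"3\.8\.\d+", line.strip())
]

# Pick the highest patch release; --skip-existing makes the install a no-op
# when that version is already present.
latest = max(versions, key=lambda v: tuple(map(int, v.split("."))))
subprocess.run(["pyenv", "install", "--skip-existing", latest], check=True)
```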
[15:10:39] <dstufft> I think Fedora has done some interesting things
[15:10:47] <dstufft> where they have a system python and a user python
[15:27:49] <dstufft> it looks like they might have removed it though?
[17:22:07] <ngoldbaum> toad_polo: yeah i don’t understand why “pyenv update” is a separate project from pyenv itself
[17:42:05] <techalchemy> dstufft, I've been struggling with what to recommend for my team @ Canonical recently as well, but have been emphasizing that system pip and python tend to be very broken
[17:42:30] <techalchemy> mostly pip and related libraries i guess
[17:43:14] <techalchemy> but i don't think trying to forbid Debian from shipping python or pip is the answer...
[17:43:44] <sumanah> di_codes: did you see the recent "Re: TF's 500Mb wheels dedicate 300Mb to 6 CUDA compute capabilities" thread on build@tensorflow.org ? The most recent msg makes me go "wait what"
[17:51:05] <sumanah> ngoldbaum: do you work on Numpy? I forget
[17:51:24] <ngoldbaum> sumanah: nah, the CPython C API scares me
[17:51:35] <ngoldbaum> manual reference counting makes my head hurt
[17:51:42] <sumanah> dstufft: we are ok, thanks .... we don't quite have your record but we have been staying inside a LOT. I'm grateful that my spouse and I have a lot of experience working from home
[17:51:50] <sumanah> ngoldbaum: I don't know C at all so you're ahead of me!
[17:52:05] <techalchemy> sumanah, do you mean to say you've actually been _out_?
[17:52:18] <ngoldbaum> pretty hard to be in NYC and _never_ go out
[17:52:20] <sumanah> techalchemy: I have, yes, unfortunately
[17:52:30] <techalchemy> i live in the middle of nowhere and i've just been hiding
[17:53:25] <sumanah> some of the reasons for the home-leaving get into some kind of personal stuff so I don't want to talk here about it, but yeah we've _nearly_ not been out in a few weeks, and we've been taking as many precautions as possible.
[17:53:58] <sumanah> btw I think some of you know: my spouse Leonard is the creator and maintainer of Beautiful Soup. https://blog.tidelift.com/how-open-source-software-is-fighting-covid-19 I'm proud of my spouse right now -- Beautiful Soup is part of the stack for a COVID-19 info tracker
[17:54:31] <ngoldbaum> lol yes i'm sure many horrible government websites are being scraped with beautiful soup atm
[17:55:59] <dstufft> We have a somewhat paranoid setup here
[17:56:15] <dstufft> even incoming packages/deliveries get quarantined
[17:56:44] <techalchemy> sumanah, i didnt know that actually but thats super cool
[17:57:33] <sumanah> dstufft: I think that's sensible.... we're using a bleach solution when possible, for instance, on the outsides of cans, but otherwise we're trying to, for instance, wait a day to open packages
[18:00:06] <dstufft> sumanah: I imagine it's harder in NYC where you can't devote an entire 3 car garage to your semi insane quarantine procedure as packages move from one bay to the next over a period of several days
[18:01:41] <sumanah> (also, here, it looks like the mail carriers are getting stretched thin, so we are trying to be pretty limited in what packages we are ordering anyway.... and we live walking distance from a lot of shops, and want to save delivery slots for people who really need them)
[18:03:11] <sumanah> that's exactly it .... we are not immunocompromised, so we're trying to save delivery services/slots for folks like you who need it
[18:03:24] <dstufft> sumanah: for me personally, other than the whole constant existential dread nothing's really changed. I imagine the folks who actually go into an office feel differently
[18:03:43] <dstufft> I've been training for this moment for all my life
[18:04:01] <dstufft> "You want me to sit in my house and do computer shit all day? Ok!"
[18:04:02] <sumanah> dstufft: you and a bunch of us, ha-ha-bitter-ha
[18:04:49] <dude-x> i also take a day to process packages, and wipe/spray everything once i open it. use isopropyl alcohol to wipe surfaces i touch, soapy water for the package contents (if possible), hydrogen peroxide spray to spray packages, use disposable gloves.
[18:05:45] <sumanah> I find it darkly funny that we are, in this channel, still talking about how to handle packages. just of a different kind
[18:06:52] <dstufft> someday I'm going to reach out to them to see if they actually realize I'm talking about a different kind of warehouse
[18:07:58] <dude-x> even a warehouse company runs software.
[18:08:23] <techalchemy> more like, the warehouse concept is not actually that inappropriate
[18:08:33] <techalchemy> some ideas really do translate
[18:08:51] <dstufft> my one good attempt at naming software
[18:09:17] <techalchemy> it's quite good tbh, every time i see 'forklift' in there i'm like oh yeah i guess that does work
[18:11:34] <dstufft> trying to decide if I sign up for this TF mailing list or not, because I feel like this idea of a dynamic setup.py that adjusts install_requires on the fly is going to be v painful in the long run for them
[18:13:10] <techalchemy> why are people still doing that :|
[18:13:58] <dstufft> they're trying to split up their package into smaller packages so people don't have to download 1GB of .whl files to use tensorflow
[18:14:21] <dstufft> and the idea is they'll auto detect what the current computer supports at install time, and only download what's needed
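A hedged illustration of the pattern being discussed, not TensorFlow's actual code: install_requires is computed while setup.py runs, based on what the installing machine appears to support. The hardware probe and package names are made up:

```python
import shutil

from setuptools import setup


def detect_requirements():
    # Hypothetical probe; real CUDA capability detection is far more involved
    # than checking for nvidia-smi on PATH.
    if shutil.which("nvidia-smi"):
        return ["example-framework-gpu"]
    return ["example-framework-cpu"]


setup(
    name="example-framework",
    version="0.0.1",
    # The pain point: dependencies now differ per machine, so the metadata
    # baked into a built wheel reflects the build host rather than the install
    # host, and resolvers can't reason about it statically.
    install_requires=detect_requirements(),
)
```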
[18:14:39] <sumanah> dstufft: if you want you can write up something and I will forward it to the list!
[18:16:36] <techalchemy> i mean they can always host their own files and add a bootstrapping step or something, it'd be a lot less hacky
[18:17:26] <ngoldbaum> the CUDA runtime libraries should have their own wheels
[18:17:49] <ngoldbaum> it's weird that tensorflow is solving this problem, nvidia should
[18:17:58] <ngoldbaum> in principle pytorch could use the same technology
[18:18:22] <ngoldbaum> ofc also wrapped up into CUDA licensing issues :/
[20:05:36] <kroitor> hi! looks like pypi is down again, unless i'm the only one having the same problems we had on March 26 – twine fails to upload to the legacy URL, 5 attempts failing with request timeouts, and downloads from pythonhosted fail as well
[20:05:49] <kroitor> can anyone confirm this or share some info?
[20:16:30] <kroitor> now it's back again, sorry, must be a temporary glitch... what's interesting is that the timing matched the spike in CDN cache misses here: https://status.python.org/#day