[02:36:35] <dtux> njs: got it... any relevant reading you'd recommend? could've sworn there was an issue on this, but for some reason i can't find it atm
[03:24:43] <toad_polo> I think it would be really unfortunate if pip started taking over the functions of other packaging tools.
[03:25:51] <toad_polo> It may have made sense from the beginning (though setuptools/distutil already was that monolith in many ways and we're actively breaking it up).
[03:27:58] <toad_polo> But now we've spent a few years actively moving people to twine and pip and introducing loads of necessary churn; moving things around more would be spending our churn budget unnecessarily, to no major advantage.
[03:34:29] <njs> AFAICT the users overwhelmingly find the pip/twine split confusing, and the defense of the split is all coming from the devs who have some vision of scope/architecture/responsibilities
[03:36:01] <njs> which might be a good and sufficient rationale, I'm just saying that I haven't gotten any sense that having 'pip publish' would be bad for users
[03:43:31] <dstufft> having thought about this a lot, I think that users overwhelmingly ask for pip publish to be a thing, and I don't think it would be bad exactly for it to be a thing
[03:43:52] <energizer> why is sdist & bdist_wheel in setuptools instead of twine?
[03:44:07] <dstufft> I think that it's somewhat intellectually cleaner for pip publish to not be a thing
[03:45:49] <energizer> i.e. why isn't the split "pip installs packages, twine deploys packages'
[03:47:59] <dstufft> I think if we did combine them, there are some questions that would need to be answered, namely what about ``twine check``? Also I think it would make sense for there to be a ``twine sdist`` and a ``twine wheel`` that focus only on building from a VCS (e.g. an author-oriented workflow, vs pip wheel and a hypothetical pip sdist that approach it from a consumer-oriented workflow)
[03:49:27] <dstufft> energizer: setuptools has those functions for historical reasons, the split you're talking about is roughly what exists now, but folks regularly ask for us to merge them
[04:02:24] <toad_polo> I think former npm users ask to merge them because that's what they're used to.
[04:02:42] <toad_polo> That thread has a lot of good practical reasons not to merge them.
[04:03:43] <njs> *shrug* I've never used npm, and I'd find it less confusing to learn 1 tool instead of 2
[04:04:20] <toad_polo> energizer: There aren't really good PEP 517 front-ends for building any build artifacts yet, IMO.
[04:04:23] <energizer> since uploading is the only thing people usually need twine for, it's understandable that people don't see it so much as an installing-vs-deploying split, and more of an everything-vs-uploading split
[04:04:40] <toad_polo> njs: It would still be two tools...
[04:05:08] <toad_polo> It would just be a subcommand of pip instead of having its own name.
[04:05:12] <energizer> (and everything-vs-upload suggests "why don't you just put 'upload' into the "everything" bin)
[04:05:34] <toad_polo> pip doesn't do "everything"
[04:05:37] <njs> toad_polo: yes, and I'd find that less confusing
[04:06:05] <toad_polo> It downloads and installs packages.
[04:06:08] <njs> but then, I think we should have a single tool that handles the duties of virtualenv/pip/tox/pipenv/twine together, and if we had that then it would obviate the need for pip publish, so probably better use of my time to agitate for that instead of for pip publish :-)
[04:06:21] <energizer> toad_polo: and builds distributions
[04:06:36] <energizer> and those three things are everything most people need -- except `upload`
[04:06:38] <toad_polo> Creating wheels is a necessary part of the install process, I'd argue it probably shouldn't be a public endpoint.
[04:07:06] <toad_polo> Checking, testing, building docs.
[04:07:17] <njs> energizer: we're in a transition between the old way where everything was ad hoc and organic and 'setup.py' figured prominently, to a new way where there are some standard abstracted commands to work with a source tree without assuming anything about what build system it uses
[04:07:27] <toad_polo> Lots of stuff pip doesn't do.
[04:07:47] <njs> energizer: so in the old style, 'setup.py sdist bdist_wheel' was just how you built sdists and wheels. In the new style... well, it's still under discussion :-)
[04:08:33] <energizer> toad_polo: ya i'm kinda interpreting pip and setuptools as going together
[04:08:48] <toad_polo> Going together in what way?
[04:09:25] <energizer> for most people, pip==setuptools
[04:09:29] <njs> energizer: maybe it will be 'twine sdist', 'twine wheel', maybe it will be 'pip sdist', 'pip wheel'... people have opinions but no-one actually knows :-)
[04:09:42] <toad_polo> If you count the stuff setuptools can do, it's... everything mentioned here.
[04:10:22] <toad_polo> If people think setuptools and pip are the same thing, they are simply wrong.
[04:10:31] <toad_polo> There's not even any overlap in the maintainers.
[04:11:06] <energizer> njs: is it desirable to have "upload stuff without an sdist" or "upload stuff without a wheel" as prominent options?
[04:12:10] <njs> energizer: the decade-long project is "kill setuptools". in as gentle, gradual, non-disruptive a way as possible, but... that's kinda the underlying goal behind a lot of what's been happening.
[04:12:37] <energizer> njs: i dont think so either :)
[04:12:46] <njs> energizer: I mean, both are supported, but we certainly think it's better if projects have both whenever possible
[04:13:07] <energizer> yeah, so having sdist and bdist_wheel commands seems like the wrong affordance
[04:13:11] <njs> sorry, "both are supported" = "leaving one or the other out is supported", "have both whenever possible" = "have both wheels and sdist"
[04:13:12] <toad_polo> I'm not sure that killing setuptools is a desirable outcome.
[04:13:46] <njs> toad_polo: I mean, it will always be there as an option for those who want it
[04:13:47] <toad_polo> There's nothing even close to ready to replace it, for one thing.
[04:14:21] <njs> .....yes see "decade-long project"
[04:15:11] <njs> which is fine but it's obviously not sufficient to replace setuptools by itself :-)
[04:15:27] <toad_polo> Flit considers many things out of scope.
[04:16:01] <njs> energizer: the problem is that in the general case you want to do source tree -> build sdist -> copy sdist onto multiple computers with delicate build environment configuration -> build wheel
[04:16:13] <njs> energizer: so having a single build-sdist-and-wheel command doesn't necessarily work
[04:16:18] <toad_polo> Even for the rare project of mine that it would be suitable for, I don't bother because that involves learning a new tool.
[04:16:48] <toad_polo> Which is harder than just using setuptools for no real benefit.
[04:17:40] <energizer> njs: sure, there's always `pip build --format=sdist` or whatever, but it could default to `--format sdist bdist_wheel`
[04:18:33] <njs> energizer: I guess another possible future is moving to a model where 99% of the time, you upload the sdist and pypi builds the wheels
[04:32:24] <njs> I don't know how many resources would be required, but it doesn't seem totally ridiculous. the piwheels project says they need <5 raspberry pis (!) to keep up with building every package released on pypi
[04:33:14] <njs> and if you heard a description of pypi without knowing it existed, then it would sound pretty ridiculous :-) ("*how* many tens of thousands of dollars a month in donated services?")
[04:33:15] <energizer> that seems hard to believe, honestly
[04:33:30] <energizer> installing numpy on a raspi takes minutes
[04:38:03] <toad_polo> And building the thing in the first place without reinventing conda or having something so limited in scope as to be almost useless would probably be a pretty serious undertaking.
[04:38:18] <njs> energizer: but say it takes 10 minutes to build numpy. that page shows 10 releases since July 23. if I'm calculating right, that means keeping up with numpy would require 0.03% of one raspberry pi :-)
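That 0.03% figure checks out under stated assumptions (the length of the "since July 23" window is a guess, not something given in the log):

```python
# Back-of-envelope check of njs's "0.03% of one raspberry pi" figure.
# Assumptions not in the log: 10 minutes per build, a ~7-month window.
build_minutes = 10      # assumed time to build one numpy wheel on a Pi
releases = 10           # "10 releases since July 23"
window_days = 210       # assumed length of the window

busy = releases * build_minutes       # 100 minutes spent building
available = window_days * 24 * 60     # 302,400 minutes in the window
print(f"{busy / available:.4%}")      # prints 0.0331%, i.e. ~0.03%
```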
[04:39:29] <toad_polo> With numpy you need to manage the LAPACK / BLAS libraries, plus a Fortran compiler.
[04:39:33] <njs> toad_polo: eh, all the major projects have already figured out how to use transient cloud build environments to produce their wheels.
[04:39:51] <energizer> njs: but the thing you're proposing is that *all* packages would be built on pypi
[04:40:10] <toad_polo> Yeah, building a transient cloud build container for your specific project is easy.
[04:40:13] <njs> "<njs> energizer: I guess another possible future is moving to a model where 99% of the time, you upload the sdist and pypi builds the wheels"
[04:42:01] <njs> energizer: it's an important distinction though. this isn't a project where if there's like 1 weird edge case that blocks 1 package from using it, then suddenly it becomes useless. you can get incrementally better over time.
[04:42:17] <toad_polo> It would basically either turn into a free CI offering or conda-forge because of native dependencies.
[04:42:20] <energizer> so for most projects it's just a matter of having a container on each of several platforms run `setup.py bdist_wheel`, that sounds like it's mostly just compute time
[04:43:17] <toad_polo> Of course 99% of projects on PyPI are probably pure python anyway, but a service that builds universal wheels from pure python packages is probably not worth building.
[04:43:23] <energizer> yeah the native dependencies would be important, but probably most packages share a handful of common deps
[04:45:13] <njs> toad_polo: eh, it would have some value. for example, it gives you a guarantee that the sdist and wheel actually correspond.
[04:46:08] <toad_polo> If it were marked as the auto-generated one, maybe, but I don't know why you'd care
[04:46:23] <energizer> unless your setup.py is designed intentionally to make a different bdist
[04:46:30] <toad_polo> Pure python wheels, as I've said before, are easier to audit than sdists.
[04:46:49] <toad_polo> They are just the source files already arranged in the way they are going to be installed.
[04:47:30] <toad_polo> Sdists are the same source files but with scripts that turn them into a wheel, so you need to audit all the same code, plus the build code.
[04:48:44] <toad_polo> Don't get me wrong, people would care about this, they would just be wrong.
[05:37:33] <njs> they seem to be very keen to talk
[05:37:54] <techalchemy> he fixes our build configs quite a lot
[05:38:26] <techalchemy> they were pretty active and hands on with that kind of thing / we did adopt super early but still
[05:38:47] <techalchemy> I see a lot of projects have switched over to azure for CI so i guess that's good
[16:12:58] <dtux> why does build-system require wheel in pyproject.toml?
[16:35:12] <toad_polo> Because setuptools can't build wheels by itself.
[16:36:06] <toad_polo> It may not be necessary to include it, I think `setuptools.build_meta` will add `wheel` automatically as part of the `get_requires_for_build_wheel` hook.
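For reference, the kind of pyproject.toml being discussed, with `wheel` listed explicitly (whether that entry is redundant depends on the setuptools version, per the above):

```toml
[build-system]
# "wheel" may be redundant if the backend adds it itself via its hooks
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
```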
[16:36:40] <toad_polo> I do wonder if we'll want to break out `wheel-requires` and `sdist-requires` in the future.
[16:42:25] <hrw> pypi.org allows authors to upload their python packages as source + wheel. A wheel can be 'just python' (arch=none) or a windows/linux/macos x86-64/i386 binary one. Which kind of sucks when you run aarch64 (arm64) or another !x86 architecture, as each binary version needs to be built during the 'pip install' phase. which means adding a compiler, libs, tools etc. to nearly every system which uses 'pip install'.
[16:43:13] <hrw> is there a way to add some CI to get those binary versions rebuilt on !x86 and uploaded to pypi archive?
[16:43:57] <hrw> or a way to do some kind of a mirror which would fetch from pypi, build and then provide binary wheels?
[16:44:44] <hrw> I am getting tired of working around it in each new project again and again
[16:49:11] <hrw> and any other python version of it
[16:49:28] <toad_polo> What's the "re" part of that?
[16:54:32] <hrw> toad_polo: on x86 I do 'pip install numpy' and it fetches a wheel and is done in 1-2 minutes. on aarch64 I do 'pip install numpy' and get 'sorry, no C compiler'. and even if I hunt down and install all the required packages, it can take 40+ minutes to build.
[16:54:48] <hrw> looked into logs for some extreme example
[16:54:59] <toad_polo> Yeah, I get what the problem is.
[16:55:19] <hrw> toad_polo: so I am asking is there a way to get it built once, uploaded somewhere and have fast 'pip install numpy' next time
[16:55:20] <toad_polo> I just don't understand why you're calling it "rebuilding".
[16:55:30] <toad_polo> You're basically asking for PyPI to support more platforms, right?
[16:58:28] <hrw> toad_polo: manylinux is x86 only so far ;(
[16:59:06] <toad_polo> Anyway, if there's a tag for your platform then I think it's down to asking for /helping the projects you care about to publish wheels on your platform.
[17:02:09] <hrw> I am afraid that to do it in some sensible way, a way to rebuild on !x86 would have to be provided... and then security care etc... argh ;D
[17:08:32] <toad_polo> It's almost certainly not a huge burden to get big projects to build on smaller platforms, particularly if you can provide some resources on the relevant platforms.
[17:10:34] <toad_polo> A lot of the most popular projects are pure python anyway, so you can probably solve a huge fraction of the problem by getting wheels up for matplotlib, numpy, scikit-learn, scipy, pandas and a few others.
[17:11:07] <toad_polo> They tend to be ahead of the curve on supporting built extensions, so they may already have wheels built for your platform.
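The tags in play here: a binary wheel only helps hrw's machines if its platform tag matches what the installing interpreter accepts. A rough stdlib-only sketch of the local tag; the real accepted-tag list is much richer (manylinux variants, ABI tags), and the `packaging` library or `pip debug --verbose` gives the authoritative answer:

```python
# Rough sketch: what platform tag would a binary wheel need on this machine?
# Real tag resolution is richer (manylinux variants, ABI tags); see the
# "packaging" library or `pip debug --verbose` for the full accepted list.
import sys
import sysconfig

platform_tag = sysconfig.get_platform().replace("-", "_").replace(".", "_")
python_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print(python_tag, platform_tag)
# e.g. "cp37 linux_x86_64" on one box, "cp37 linux_aarch64" on another;
# a wheel tagged for one platform never matches the other, hence the sdist
# fallback and compile step hrw is complaining about.
```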
[17:12:52] <hrw> toad_polo: the problem is that I usually land in x86-only-so-far project and work on adding aarch64, ppc64le etc support. starting with 'let us switch from pip to XYZU tool' does not sound good
[17:13:33] <toad_polo> This is for existing open source projects? I'm not sure the context here.
[17:15:25] <hrw> toad_polo: Two years ago I landed in OpenStack Kolla. it took 4 months and over 100 patches/revisions to get it to support aarch64 and ppc64le architectures. all without changing the tools used by the project. Then OpenStack Loci followed, etc.
[17:16:16] <toad_polo> If it's just to speed up CI builds for your dependencies, I think the easiest thing to do is to set up a wheel cache somewhere.
[17:16:53] <hrw> toad_polo: I've been building other people's code for ~15 years now. it gets easier each time
[17:17:09] <hrw> yeah, a wheel cache is a thing I thought about
[17:17:23] <toad_polo> Not sure exactly the right commands to use off-hand, but if you separate the `pip install` into `pip download` / `pip wheel` / `pip install`, then you can cache those first two steps on disk until a new version comes out.
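A sketch of that three-step split as pip command lines (the subcommands and flags are real pip options; the cache-directory layout is just an assumed convention):

```python
# Sketch of toad_polo's suggestion: split `pip install` into download / wheel /
# install steps so the first two can be cached on disk between CI runs.
# The ./wheel-cache directory is an assumption, not an established convention.
import sys

def cached_install_cmds(requirement, cache_dir="./wheel-cache"):
    """Build the three pip invocations; skip steps 1-2 when the cache is warm."""
    pip = [sys.executable, "-m", "pip"]
    return [
        pip + ["download", "--dest", cache_dir, requirement],   # fetch sdists/wheels
        pip + ["wheel", "--wheel-dir", cache_dir,
               "--find-links", cache_dir, requirement],         # build wheels once
        pip + ["install", "--no-index",
               "--find-links", cache_dir, requirement],         # install offline
    ]

for cmd in cached_install_cmds("numpy"):
    print(" ".join(cmd))
    # to actually run: subprocess.run(cmd, check=True) after importing subprocess
```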
[17:18:10] <hrw> toad_polo: that's kind of how Loci works. they first build wheels for anything and then reuse.
[17:18:30] <hrw> toad_polo: in Kolla we install all into openstack-base image and then use it as a base for next ones
[17:20:08] <hrw> just hoped that I can avoid that part
[17:23:01] <toad_polo> I think for the foreseeable future that will be the easiest thing to do.
[18:25:17] <tos9> It blows up depending on the exact end-user setup: specifically, if a pyc was generated, then creating some venv with a different python will blow up with bad magic number errors if it doesn't like the pyc
[18:26:06] <tos9> Does anyone see an obvious suggestion better than "evade the import system to ensure I get a .py rather than a possible .pyc"
[18:26:25] <tos9> (And don't suggest -p :), which is broken for other reasons and is the reason that code does what it does)
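One stdlib way to get at a module's .py without tripping over a cached .pyc is to ask the import machinery for the spec's origin instead of importing the module; whether this dodges tos9's exact venv failure is an assumption:

```python
# Locate a module's .py source path without importing (and thus without ever
# loading a stale .pyc whose magic number doesn't match this interpreter).
# Whether this sidesteps tos9's exact venv breakage is an assumption.
import importlib.util

def source_path(module_name):
    spec = importlib.util.find_spec(module_name)
    if spec and spec.origin and spec.origin.endswith(".py"):
        return spec.origin
    return None  # built-in module, extension module, or bytecode-only

print(source_path("json"))  # a path ending in json/__init__.py
```

Caveat: `find_spec` does import parent packages for dotted names, so this only fully avoids execution for top-level modules.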