[16:58:49] <dstufft> oh man, Github lets you require status checks without requiring the branch is up to date now, that makes it actually useful for OSS projects :3
[17:05:09] <edk> i just wish it'd let the merge button make fastforward merges
[17:06:01] <dstufft> eh, I don't like FF merges :D
[17:07:03] <edk> i do appreciate the record part but i also like having graphs that don't look horrific for bisecting and stuff
[17:07:38] <edk> i guess it's a question of choosing a compromise
[17:10:16] <dstufft> edk: I guess my thing is that fast forward merges can only happen *sometimes*, so you either deal with non-ff merges sometimes (but sometimes not!) or you need something more like a rebase rather than a merge so that you never have a non-ff merge
[18:10:08] <njs> dstufft: oh hey do you have a link on what this github status check change is?
[18:13:22] <njs> it is not hard for computers to make sure that master actually works all the time
[18:13:39] <dstufft> but requiring people keep their branch up to date is, in practice, very hard to do in an OSS project
[18:13:45] <njs> but somehow the travis / github / etc. ecosystem just cannot seem to figure this out, so this switch lets you give up on that :-)
[18:14:01] <dstufft> particularly since contributors don't get any notification that their branch needs updating
[18:14:07] <njs> what rust and friends do is that they merge, re-run the tests, and then push the merged version to master
[18:14:17] <dstufft> right, but they don't use the merge button at all
[18:14:42] <dstufft> they have a robot doing the merges, and it puts all of the pending merges into some order, and runs the tests, then merges them in that order
[18:15:06] <njs> I would have preferred if github had added tooling so that you could flip a settings switch, and then after that hitting the merge button would fire some webhook that told travis etc. to re-run tests :-)
[18:16:17] <dstufft> it'd be nice if github made it possible for the project to more directly control what the merge button does... but I'm not sure if that is really that big of a target for them
[18:16:46] <njs> yeah, they're gonna do whatever they're gonna do
[18:17:32] <njs> (my favorite is that they've made it so that in certain circumstances, the PR "changes" view does *not* show you what will be merged to master if you hit merge. I filed a bug about this and they said that it's intentional to be less confusing.)
[18:18:32] <dstufft> I don't mind the setting as is though, because prior to this we had no required statuses because the "must be up to date" requirement was too onerous for OSS where many contributions are drive by and such
[18:18:41] <dstufft> so this narrows the gap at the very least
[18:18:52] <njs> (specifically what the "changes" view actually shows you is which commits would be merged to master if you were to hit the merge button at a moment when the PR contains the latest commits that it contains, and master contains whatever it contained at the moment the PR was created (not what it contains now))
[18:19:03] <njs> yeah, it's probably an improvement
[18:19:22] <dstufft> for things like pip and such, the downside is that maybe develop will be broken for awhile
[18:19:38] <njs> it might even be enough to help me push through my plot to start handing out commit bits like candy on certain projects
[18:20:03] <dstufft> that's not a huge thing since for pip (and other things with traditional releases) it just means that you can't merge additional PRs until you fix develop
[18:20:13] <njs> (because it gives us a response to the probably-irrational-but-still-scary fear that people will surreptitiously push backdoors directly into master without anyone noticing)
[18:20:42] <njs> it's just... it's so close to the simple obvious correct thing and yet... isn't :-)
[18:20:51] <dstufft> for something like warehouse which auto deploys, you just need your auto deploy script to wait to deploy a new push to master until your CI verifies it
[18:23:46] <njs> ...anyway, on another matter, while I've got your attention :-) can we talk for a minute about whether there's a way to un-stick the alternative build system discussion?
[18:26:43] <dstufft> njs: yea, I was out of commission for a bit and that kind of fell by the wayside. I'm trying to get caught up on all the email and stuff I missed and the build system is on my list of stuff
[18:27:07] <njs> specifically it seems like the two big blockers are (a) the question about whether to build a new sdist format, and (b) the hook versus cmdline API issue
[18:29:09] <njs> I'm wondering what you think of the following proposal: (1) given that the new sdist thing adds a lot of additional complexity and you're overloaded, it isn't going to happen in the short term regardless; so, I'll write a proper spec for the legacy sdist format so at least there's something clean and unambiguous to work with for now, and then we split off the question of doing something better for the future when you have more time to devote to it; (2) it sounds like you and I are both convinced that the hook API approach is better, and lifeless doesn't really care about the resolution so long as there's a resolution, so if you throw your weight behind the hook approach then we can move on?
[18:29:50] <njs> I think given these two things, the debate then drops down to just arguing over the fiddly details of what exactly the hook semantics should be, and lifeless and I are the only ones who care about that so we can probably work something out :-)
[18:30:57] <njs> and in particular it would let you take this off your big list of stuff :-)
[18:31:11] <dstufft> njs: so I started on a PEP and that sort of clarified some of my thinking about the new sdist format and why I was bothered by just adding this to the old format
[18:31:42] <dstufft> this is a bit of a brain dump here, but:
[18:32:55] <dstufft> I don't like things that just sort of break non-obviously for end users. Versions of pip that support wheel also support versioning the wheel file, so if we pull down a wheel that is incompatible we'll check the wheel version number and print out a reasonable error message.
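(Editorial sketch of the versioned-format check dstufft is describing: pip reads Wheel-Version out of a wheel's *.dist-info/WHEEL file and refuses incompatible majors with a readable message. The Wheel-Version field name comes from the wheel spec; the function and error wording here are illustrative, not pip's actual code.)

```python
def check_wheel_version(wheel_metadata: str, supported_major: int = 1) -> None:
    """Refuse wheels whose format major version is newer than we support.

    ``wheel_metadata`` is the text of a *.dist-info/WHEEL file.
    """
    for line in wheel_metadata.splitlines():
        if line.startswith("Wheel-Version:"):
            version = line.split(":", 1)[1].strip()
            major = int(version.split(".")[0])
            if major > supported_major:
                # incompatible format: fail with a comprehensible message
                raise ValueError(
                    f"wheel format {version} is newer than this tool "
                    f"supports; please upgrade your installer"
                )
            return
    raise ValueError("WHEEL file is missing Wheel-Version")
```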
[18:34:01] <dstufft> However, projects using this new build system need to either provide a setup.py (at which point, what we really need to do is define the setup.py interface, and any old project can make a hook interface behind a shim setup.py) or they break under old versions of pip, and they break in ways that someone who doesn't understand python packaging is unlikely to connect to the real reason
[18:34:42] <njs> hmm, what I was expecting is that for projects who don't care about legacy pip, they'll include a setup.py that's just
[18:35:10] <njs> print("Please don't run setup.py directly / your pip is too old! Upgrade to pip >= X.Y and then use pip install")
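(njs's one-liner can be fleshed out into a complete shim setup.py; a minimal sketch, with the pip version "X.Y" left as a placeholder just as in the chat, and the exact message illustrative.)

```python
# Hypothetical shim setup.py for a project that has dropped legacy-pip
# support: an old pip falls back to running this file, and gets a clear
# message instead of a confusing traceback.
import sys

MESSAGE = (
    "Please don't run setup.py directly / your pip is too old! "
    "Upgrade to pip >= X.Y and then use 'pip install .'"
)

def main() -> None:
    # sys.exit with a string prints it to stderr and exits with status 1
    sys.exit(MESSAGE)

# a real shim would end with an unconditional call: main()
```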
[18:35:39] <dstufft> however, if we have a new sdist format (even if it looks remarkably similar to the old format, just with the setup.py removed and the new file added) with a new extension, then old pip will ignore it and new pip will be able to see it (and assuming the new format includes a version number, will be able to give a reasonable error message)
[18:36:15] <dstufft> on top of that though, I think there's an underlying question here that needs to be answered about what the "source" of some data is
[18:36:21] <dstufft> take version number for instance
[18:37:03] <dstufft> the "create a wheel" step needs to acquire a version number for the wheel from someplace
[18:38:28] <dstufft> if we keep the same sort of thing we have now (where you can create a wheel from a VCS checkout, from a sdist, etc) then every step of the process needs to understand how to create that version number
[18:39:44] <dstufft> which means that we're tying every step of the process together still, we're just allowing you to swap out one do-it-all library (setuptools) with another... which isn't a _wrong_ thing to do, but if we're going to do that then I think we need to flesh out the hooks for all of the other things that the one, do-it-all library has that we want to keep
[18:40:48] <njs> so I understand your concern about not wanting the new thing to cause confusing error messages during the transition period, but the other thing seems like it's "wouldn't it be nice if we could also get ...". and my concern is that by worrying about this, what will happen is that we get nothing instead of getting more, because we demonstrably don't have the bandwidth to actually solve these issues right now :-)
[18:42:04] <dstufft> but if we're trying to break apart the pieces so instead of a do-it-all library, we're allowing different things to handle different steps, then I think we need to define a more standard format for the *input* to the wheel build step (which I would assume would be an sdist, or at least the sdist metadata) that already has whatever metadata isn't wheel specific computed, and the wheel just uses that
[18:42:48] <dstufft> njs: I don't think it's like that at all, because what's being proposed is something that replaces one API (a poorly thought out, documented, and not incredibly great API, but still an API) that allows us to do certain things
[18:42:56] <njs> I find it hard to believe that any given project will want to use, like, cmake to generate their sdist and then waf to build their wheel :-)
[18:43:50] <njs> maybe I'm not really understanding what your concern is
[18:44:05] <dstufft> and it's replacing it with a new API, a new API which is hopefully better but which doesn't actually contain all of the functionality that we currently have, so if a project uses this new API we lose functionality
[18:44:19] <dstufft> njs: it's not so much using cmake to generate their sdist and then waf to build their wheel
[18:45:39] <dstufft> let's say I want to compute my version number from VCS
[18:46:31] <dstufft> If we split the steps up so there's an SDIST step which computes the version number (and name, etc) and then poops that out into a sdist (or sdist metadata file), and then another step that takes that and turns it into a wheel, then cython doesn't need a "compute version from VCS" plugin
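(A toy sketch of the split dstufft describes: the sdist step computes dynamic metadata like a VCS-derived version once and freezes it into PKG-INFO, so the wheel step just consumes the frozen value instead of needing its own compute-version-from-VCS plugin. All names here are illustrative, not a real API.)

```python
def freeze_metadata(vcs_version: str) -> str:
    """Sdist step: bake the computed version into static PKG-INFO text."""
    return (
        "Metadata-Version: 1.2\n"
        "Name: example\n"
        f"Version: {vcs_version}\n"
    )

def wheel_version(pkg_info: str) -> str:
    """Wheel step: read the already-computed version, no VCS needed."""
    for line in pkg_info.splitlines():
        if line.startswith("Version:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("PKG-INFO missing Version")
```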
[18:47:10] <njs> in my experience though, every *project* has its own compute version from vcs plugin, and likes it that way :-)
[18:48:05] <njs> is this solving a real problem? if someone makes a really awesome compute-version-from-VCS package, then all of these different toolsets make it trivial to just call that
[18:49:48] <njs> and part of what turns out to be so bad about distutils in practice is that it tries to impose this modular structure on your build... but it's the wrong structure and you can't (easily) turn it off :-/ so I kinda like the idea of version 1 of the new thing being extremely thin, and then waiting to see what kinds of processing pipelines turn out to survive in the ecosystem
[18:50:44] <dstufft> njs: so first off, to be clear, I think the "break it up into smaller pieces" vs "all-in-one" thing is an interesting discussion, but I don't think it's the compelling reason for a new sdist format. What I do think is that we need to pick *one* of them, and if it's the all-in-one (which I think you prefer) then we need to actually replace all of the pieces of setuptools that are "commonly" used in this new, all-in-one API, which includes
[18:51:10] <dstufft> if it's the break-it-all-apart method, then I think we need a new sdist format to feed into the wheel build step
[18:51:57] <dstufft> doing the wheel step by itself worked because it's just a leaf, and the step of the process that it replaces is actually pretty simple (you just take some static stuff and spread it out on disk)
[18:52:23] <dstufft> I don't think we can isolate just building a wheel from everything else, because the rest of this is heavily entwined with each other
[18:52:58] <njs> so for me the rationale for not including generate-sdist as a hook in an "all-in-one" api is not that generating-sdists isn't common, but just that it's not something that pip or end-users need to do -- it's something that release managers care about.
[18:54:17] <njs> there's a huge amount of value in having a standard, works-for-every-package way to install or generate a wheel (pip install / pip wheel), because people do these operations on random third-party packages all the time. There's very little value added to having a standard, works-for-every-package way to generate an sdist, because the only people who do this are individual project release managers, and there's very little burden for them to have to know something about the idiosyncratic details of their project's build system
[18:54:29] <njs> (in fact they probably have to know all kinds of idiosyncratic details.)
[18:54:45] <dstufft> njs: there are open tickets that would be solved by pip being able to generate a sdist, particularly around how slow ``pip install .`` is.
[18:55:06] <njs> compare how in the more-mature autoconf et al. ecosystem, the sdist -> install step is totally standard ("./configure && make && make install"), but rolling source releases is still the wild west
[18:55:47] <njs> dstufft: that particular ticket would be solved even better by supporting in-place builds :-)
[18:56:01] <dstufft> njs: Disagree, in place builds are not what most people expect
[18:56:38] <njs> I don't mean develop-style builds, I just mean builds that are run inside the source directory
[18:56:52] <njs> ....afaict that is what literally everyone expects 'pip install .' to do?
[18:57:54] <dstufft> njs: pip has never done that, and conceptually ``pip install .`` should not modify ``.``, it should treat the source of packages (whether it's a file system or not) as immutable unless -e is passed
[18:58:59] <njs> if that is how pip is going to work, then I need a new tool for doing day-to-day builds of software I'm working on, because I cannot use a tool with those constraints.
[18:59:08] <njs> I'm not sure how to make it clearer than that :-)
[19:02:07] <dstufft> njs: I've never said that you couldn't *opt* in to that behavior (though i didn't make it obvious), probably through the -b flag to specify your own build directory.
[19:03:21] <njs> this is actually kind of typical of what I find frustrating about python packaging tooling :-/ the traditional 'run make-or-equivalent in the source directory' approach has limitations but is solid and familiar and works. but instead of providing that, pip insists on providing two operations that try to fix some perceived limitations of the traditional approach, but don't actually work. ("pip install ." doesn't work in the sense that it breaks incremental builds, "pip install -e ." doesn't work in the sense that editable installs are a great dream with a million jagged corner cases.)
[19:04:59] <njs> have you ever had a user complain that they wish running "make" inside a source tree would copy everything to /tmp and build everything there? surely if this was useful then someone somewhere would have implemented it in their make-based build system? I just don't understand where this constraint that "pip install ." shouldn't mutate "." comes from.
[19:05:34] <njs> yeah, but the goal is to make "pip install ." into the replacement for make :-)
[19:05:58] <njs> end users don't do "pip install ." except in the cases where (if they were working in C) they'd run "make" directly
[19:06:30] <njs> or s/make/python setup.py install/, same comments apply
[19:08:42] <dstufft> njs: so I don't have a handy reference to point it out, but I'm aware of people regularly using ``setup.py install`` and ``pip install .`` in cases where they weren't actively developing that project and where they expect it to behave similarly to ``pip install foobar-1.0.tar.gz`` or similar.
[19:09:04] <dstufft> Your assertion seems to be that those people are wrong, and we shouldn't ever enable that behavior, opt in or opt out
[19:09:34] <njs> behave similarly in the sense that they expect that the source tree will not be modified? I'm curious how that expectation even manifests :-)
[19:12:01] <dstufft> one use case that is springing to mind right now is I'm aware of at least one company that has asked me for help, where they check all of their dependencies into VCS and then they have a bash script that just iterates over them and runs ``pip install path/`` on them. One dependency in particular was self-modifying its .py files as part of the install process and they were confused why it did that
[19:13:01] <dstufft> I forget if I wrote it up in my PEP or not, but my personal intention was that ``twine`` would become the more "I'm developing *THIS* particular package" tool, and would function more similarly to make, but largely act as a front end for the build tool
[19:13:13] <dstufft> so you'd have ``twine sdist`` which created a sdist, ``twine wheel`` which created a wheel, etc
[19:13:16] <njs> ....I guess my intuition is that the solution to this problem is for a company that wants things to work like that, they add 2 lines of code to their bash script to isolate each build, instead of asking every package developer to add an unbreak-me flag to every invocation of pip?
[19:14:36] <njs> (2 lines of code: 'cp -a pkg-src $TMPDIR/$$/pkg-src && pip install $TMPDIR/$$/pkg-src && rm -rf $TMPDIR/$$/pkg-src')
[19:16:09] <dstufft> njs: I think "every package developer to add an unbreak-me flag" is a bit hyperbolic, afaik the downside to not having an isolated build is that your thing compiles slower than it could have possibly compiled if it had used a static build directory, and I further believe that for the average python developer this doesn't really affect them at all because they either are pure python or they have a quick to compile step.
[19:16:45] <dstufft> I recognize that this is particularly painful for the science stuff, since afaik it can often have 30+ minute build times
[19:19:22] <njs> you're right, it's every developer working on packages with non-trivial extension modules
[19:21:39] <njs> ...you could have an option to pip to do the copy-to-tmpdir-and-build-there thing, but I wouldn't; pip already has more options trying to handle obscure cases than can really be maintained, which is how it became a rube goldberg machine of sadness. and this particular option can be trivially implemented by the machinery calling pip without pip having to know or care.
[19:28:36] <dstufft> njs: putting aside how pip should treat its build directories, I'm a bit confused why adding an API to the build system to produce a sdist seems to be so controversial... If we're expecting the build systems to do that anyways (which I think we are? Linux distros are going to be very mad at me if people start releasing only wheels) it doesn't seem to be some huge stretch to expect them to wire that capability up to an API that allows things to programmatically create a sdist.
[19:31:05] <njs> dstufft: sorry if I sound frustrated on some of this; I absolutely know that you're 100% trying to make things better for everyone and balance conflicting interests in doing it. It's just that the whole distutils swamp is a huge ongoing pain point for me and everyone I know, and I just want the pain to stop, or at least be reduced to the levels experienced by non-python build ecosystems, before I care that much about anything fancier...
[19:31:28] <njs> I don't really have a huge objection to having a build_sdist hook in the generic build system API, it'd be pretty trivial to add
[19:32:13] <njs> I would prefer to leave it out because (a) it's not clear anyone would use it, at least in the near term, and (b) it's not on the critical path to a minimum viable product, so I'd rather defer it to version 2
[19:34:10] <dstufft> also it's not really just about producing build_sdist, ``twine upload`` relies on the fact that sdists are going to look a particular way to enable uploading them. That way was basically the "old" standard of "whatever distutils/setuptools would produce", but if there's some new thing that a sdist is going to be, then we need to actually define that so that twine can be updated to be able to upload those new things OR so the tooling producing those sdists (whether there is an API to do it programmatically or not) knows what it needs to produce, because it can't rely on being the de facto tool (a la distutils/setuptools) anymore
[19:34:40] <dstufft> njs: packaging tends to get contentious :) I totally understand, and I know you're trying to make things better too!
[19:34:42] <njs> yeah, this is why my proposal was that I write up a PEP just for "this is how legacy sdists actually work"
[19:36:18] <dstufft> One thing that I'd like to do, even if it looks incredibly similar to the legacy sdist, is define a standard with at least a few small tweaks (one being I'd like a new extension, for a couple of reasons, and another being I'd want a version number of the _format_ being used baked into the format).
[19:36:33] <dstufft> we don't need to go crazy and metadata 2.0 it up
[19:36:55] <dstufft> wheels seems to be pretty successful, and they were basically just eggs with some small tweaks
[19:38:24] <dstufft> but those are specifics; even if we don't change anything, and we just document what existing things there are today, we need to define an actual standard, because you can't replace setuptools without standardizing an interface (or explicitly breaking it and saying we're not going to support that, at least for right now) anywhere that things currently just relied on it looking/behaving a certain way due to an assumption that setuptools was in use
[19:38:49] <njs> is changing the extension actually going to be more or less disruptive in practice? it's not obvious to me so trying to think it through out-loud
[19:40:07] <njs> if people distribute both old-style and new-style sdists for the same releases, then obviously that's fine, old and new pip will both give seamless behavior. But this is somewhat odd -- not everyone will do this, and somewhat legitimately, because if you have two sdists for what's allegedly a single version then which one is canonical?
[19:40:15] <dstufft> So, it makes it slightly harder I think to support both old and new pip, in that you need to upload both a generically named .tar.gz/.tar.bz/.zip/etc and the new .whatever extension. but I think it makes it *easier* to support just the new pip
[19:40:27] <dstufft> so we actually already allow multiple sdists for one version
[19:41:31] <lifeless> external tooling has to support everything
[19:41:33] <dstufft> (and those aren't guaranteed to be sdists, they might also be bdist_dumb's, so we can't promise that it's actually a sdist that we're finding)
[19:41:53] <lifeless> on new file extensions.. https://xkcd.com/927/
[19:42:21] <dstufft> njs: easy, PyPI can watch the numbers of people installing with an old version of pip, and once it reaches some small threshold we can say "OK, we no longer support uploading new things that are .tar.gz/etc"
[19:42:37] <njs> dstufft: given a spec for what legacy sdists look like we can easily disallow bdist_dumb's at least, I think?
[19:43:00] <dstufft> it's been on my TODO list for a bit now to remove uploading bdist_dumb (and rpm, and probably egg)
[19:44:15] <lifeless> dstufft: re: old pip and new filename
[19:44:23] <lifeless> dstufft: I don't think it will error as cleanly as you think
[19:44:42] <lifeless> dstufft: the symptoms users will see will be one of two cases
[19:44:42] <dstufft> basically, what I'd really like to do is get us to a place where there is *one* canonical sdist that can be released per version and then N wheels for that on PyPI. Obviously the extension isn't *mandatory* for that (we could just say .tar.gz is what it is!) but I think the extension is a bit nicer
[19:44:49] <lifeless> a) installs an older version of the package
[19:45:16] <lifeless> b) errors with 'requirement can not be satisfied' of some description
[19:45:29] <lifeless> when a user looks at the package on pypi, they'll see the version there
[19:45:38] <lifeless> and then be thoroughly confused
[19:46:15] <lifeless> and if they don't -know- we've been doing this transition, it's going to be a near-vertical learning curve to understand whats happening
[19:46:31] <dstufft> reusing the old extension generally means they get an error saying it couldn't execute setup.py, the problem is there's a decent chunk of people who have no idea what a setup.py even is.
[19:46:36] <lifeless> *yes*, pip won't be throwing a backtrace, but that doesn't make the error comprehensible
[19:47:03] <lifeless> dstufft: sure - but do you think they'll be less confused by pip just not seeing the version they expect it to see ?
[19:48:23] <dstufft> lifeless: Yes, because it's easier to learn that "pip doesn't support a .whatever, so if that's all you see on PyPI it doesn't work on your version of pip" than "Ok, well to figure out what's wrong, first you have to download the package and unzip it to see if it has a setup.py, and then you need to see if it has a pypa.yaml file, if so that's a new style package and pip doesn't support it"
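(Toy illustration of dstufft's discoverability argument: with a dedicated extension, an old pip never even considers new-style files in an index listing, so its failure mode is "no matching distribution" rather than a crash halfway through running a missing setup.py. The ".sdist" extension used here is purely hypothetical.)

```python
def visible_to_legacy_pip(filenames):
    """Return only the files an old pip would treat as candidate sdists."""
    LEGACY_EXTS = (".tar.gz", ".tar.bz2", ".zip")
    return [name for name in filenames if name.endswith(LEGACY_EXTS)]
```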
[19:49:50] <lifeless> dstufft: I'd really like to do some user testing on this. I'm skeptical that our intuitions are going to serve us well
[19:50:00] <njs> well, there are a few cases if we keep the old version
[19:50:51] <njs> case 1: if the developers have taken the trouble to provide a proper shim (which may be something that build systems can just take care of magically and automatically, once one person has written the shim), then everything just works, end users don't need to learn anything, packagers don't need to distribute multiple copies of the same sdist
[19:51:04] <lifeless> njs: (i've written the shim :P)
[19:51:28] <njs> case 2: the developers have decided that supporting old pip isn't important to them anymore, so they ship a setup.py that provides a meaningful error message when run, telling users what they need to do.
[19:51:47] <lifeless> could do with more fleshing out, but that can be done later (since the shim triggers easy-install, the bulk of it is in the pypi package)
[19:51:57] <njs> case 3: the developers have decided that supporting old pip isn't important to them anymore, and also they're kinda flaky and using poor tooling, so they don't provide a setup.py at all; users get an incomprehensible error message
[19:52:21] <njs> (but even this is arguably better than "silently get downgraded to an older version")
[19:52:22] <lifeless> we could in fact define that case 1/2 must be chosen.
[19:53:18] <njs> also at least some of the backscatter from case 3 will get directed at the developers themselves, so they'll be pretty motivated to fix it
[19:53:18] <njs> that's true, pypi could just require that all sdists must contain some kind of setup.py
[19:53:23] <lifeless> If we go to a new extension, I think we'd want to mandate that a .tar.gz is always supplied, with an erroring setup.py, to prevent silent old-version-usage
[19:53:35] <dstufft> FWIW, I'm planning on, at some point, pushing for a dedicated extension regardless. Whether that's part of this PEP or not.
[19:54:03] <lifeless> I'm much more worried about things that succeed incorrectly than things that don't succeed but do so poorly
[19:55:40] <njs> dstufft: I think there absolutely could be lots of benefits to a proper cleaned up sdist format
[19:55:58] <dstufft> using a generic .tar.gz regularly crops up as a pain point; for instance there was just an XSS available on PyPI because we served .tar.gz with application/octet-stream (because some systems automatically decompress Content-Encoding: gzip, which breaks hashes), but then some browsers, when faced with application/octet-stream, content sniff, which makes it possible to include a "broken" sdist that looks enough like HTML that browsers sniffed it as HTML and would then run arbitrary HTML/JS (this is fixed now, but it's just an example of the kind of random edge cases that happen because we use generic extensions)
[19:56:19] <lifeless> dstufft: oh god content sniffing was the worst idea ever
[19:56:36] <njs> dstufft: I just also think that given the constraints on your time, and how painful the current situation is, we should try to find a way to move forward without that :-)
[19:56:49] <lifeless> dstufft: to be clear, I'm not against a new extension per se, but I am worried about the failure modes intrinsic to changing it
[19:57:41] <dstufft> njs: like I think I said earlier (if not, I meant to say it) I don't care if the standard looks remarkably similar to what we have now as long as there is an actual documented standard since we won't be able to rely on "whatever setuptools does" anymore :)
[19:58:05] <dstufft> I mean I have lots of ideas about what a better sdist would look like, but those specific ideas aren't as important as actually having a standard
[20:00:16] <njs> just yesterday actually I was critiquing someone's draft spec for a new file format, and my first comment was "your magic number is no good; pick one that when passed to file(1) returns 'data'"
[20:00:59] <njs> (this is surprisingly non-trivial, e.g. file thinks that ISO-8859 text can contain nuls, because why not.)
[20:03:42] <dstufft> (I also plan on moving all of the file uploads to PyPI to a different domain that doesn't have any user data on it, but that's going to break all the mirroring, so I need to figure that out still D:)
[20:04:20] <njs> dstufft: okay so to confirm -- are you saying that if I wrote up a spec that just documents the crucial parts of how current sdists work, without adding a new extension (at this time), then you'd be okay with moving forward with that for now?
[20:07:31] <dstufft> njs: I would greatly prefer it if it at least added the concept of a format version number (a la *.dist-info/WHEEL's ``Wheel-Version: 1.0``, but it can be as simple as debian/compat's ``9``).
[20:08:33] <dstufft> and I would probably still argue on the ML for a new extension, but if I lost that I wouldn't feel incredibly bad. (Important to note, Nick is the BDFL delegate for these PEPs now, so while I'm a loud and opinionated voice, the ultimate choice is up to him)
[20:10:51] <dstufft> njs: it occurs to me a place where setup.py sdist is going to be used programmatically
[20:11:18] <dstufft> Gemfury lets you publish new packages to a private index using ``git push gemfury`` (once you've set up a remote)
[20:11:42] <dstufft> and during git push, they run ``setup.py sdist`` and publish that sdist to your index
[20:12:08] <dstufft> if we lose a programmatic way to generate a sdist, then gemfury can't do that anymore, not without learning how each and every build tool does it
[20:13:31] <dstufft> you can have it publish packages to PyPI for you, so that if you tag a new version, it runs setup.py sdist bdist_wheel, then twine uploads to PyPI
[20:15:04] <njs> okay let's talk about how to add SDist-Version to sdists :-)
[20:15:05] <njs> the biggest issue here is that old pips obviously won't know to check for this
[20:15:07] <njs> is the idea just, oh well, let's put it in now, and in a few years then all pips will be checking for it and then we'll be able to bump it if we need to?
[20:15:18] <dstufft> njs: yea that's the basic idea
[20:15:24] <njs> if so then no worries, sounds good to me. just wanted to be clear on the scope.
[20:16:24] <dstufft> nothing is going to make those old pips suddenly start checking (or give those old sdists a version), but if we start writing the data, then at least we move towards a world where we have it
[20:17:58] <dstufft> (I'm just looking at setup.py --help-commands and trying to remember what I know I've seen other people call)
[20:18:22] <dstufft> at least, in what I'd consider a "reasonable" use case
[20:20:03] <njs> so plan for unwedging the sdist part is: (a) I write up a PEP that documents/standardizes the current sdist format (basically: filename convention, PKG-INFO + a list of which fields in PKG-INFO pypi actually cares about, presence of setup.py), and adds some sort of optional-defaulting-to-1 SDist-Version (I guess in a file called SDIST by analogy with WHEEL). And also contains a rationale section explaining the trade-offs of standardizing this versus creating a new format.
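(A sketch of the optional-defaulting-to-1 SDist-Version idea; the SDIST file name and field name are assumptions by analogy with *.dist-info/WHEEL, not a settled spec.)

```python
def sdist_format_version(sdist_files: dict) -> str:
    """Determine an sdist's format version.

    ``sdist_files`` maps archive member names to their text contents.
    Legacy sdists carry no marker at all, so absence of an SDIST file
    (or of the field) means format version 1.
    """
    content = sdist_files.get("SDIST")
    if content is None:
        return "1"  # legacy sdist: no marker, default to 1
    for line in content.splitlines():
        if line.startswith("SDist-Version:"):
            return line.split(":", 1)[1].strip()
    return "1"
```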
[20:23:06] <dstufft> Sorry I'm probably annoying on this D: I have a hard time differentiating sometimes in how I speak about things which i personally think are better options and things which I think are absolutely essential must haves
[20:26:31] <njs> and dstufft will present his case for the new extension thing on the mailing list, and then we'll beg Nick to read over both things and make a ruling so we can move on
[20:29:30] <njs> dstufft: also do you think we can make any progress on the python-versus-cmdline API question?
[20:30:25] <dstufft> I'm down with a python API, particularly because of the ability to do hasattr
[20:30:27] <njs> IIUC the current status is that njs and dstufft think the python thing is better, and lifeless maybe likes the cmdline better but mostly just wants the question to go away. If that's correct then maybe we can just say "okay fine we're going with python let's move on"? but I could be misreading the situation.