PMXBOT Log file Viewer


#pypa logs for Monday the 14th of September, 2015

[00:44:18] <paulproteus> OK so dstufft I'm working on the test suite still for "dirtbike" and the code is kind of a giant mess, but I think it works. I presume for this to be useful, I should also get it into Debian. I'm going to clean up the code in a moment, but I wanted to show you a few things for your feedback.
[00:44:33] <paulproteus> https://travis-ci.org/paulproteus/dirtbike/builds/80158388 -- look at the last ~10 lines, sorry about the noise above those lines.
[00:44:56] <paulproteus> https://github.com/paulproteus/dirtbike/blob/nonsense/tests.sh is the test suite that we run.
[00:45:16] <paulproteus> livereload is a wheel that I install via wget-ing a wheel from PyPI and then pip installing it, and then using this tool to re-generate the same wheel.
[00:45:31] <paulproteus> six is a package we get by apt-getting it, and using this tool to generate a wheel.
[00:47:33] <paulproteus> From what I can tell, many Debian packages like python-requests don't include RECORD.
[00:47:55] <dstufft> that makes sense
[00:48:10] <dstufft> since RECORD is only from wheel and Debian typically does python setup.py install
[00:49:02] <dstufft> I think the best way to deal with that is to have Debian pass --record
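A minimal sketch of the install step being referred to (the record filename is illustrative); --record makes distutils write out the list of installed files, which is the same information pip otherwise gets from a wheel's RECORD:

    python setup.py install --record=installed-files.txt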
[00:50:00] <paulproteus> Per https://github.com/paulproteus/dirtbike/blob/nonsense/dirtbike/__init__.py#L95 I copy the minimal subset of metadata into this wheel.
[00:50:09] <paulproteus> So like it almost definitely loses metadata.
[00:50:19] <paulproteus> But I figure that's OK, but you should consider instead glaring at me and telling me to do something different.
[00:50:30] <paulproteus> The branch name is 'nonsense' because this is my random hackery branch.
[00:50:39] <dstufft> for the use cases we're looking at for this, losing metadata is fine
[00:50:41] <paulproteus> It makes a console_scripts script called 'dirtbike' -- is it OK to grab global namespace like that?
[00:50:47] <paulproteus> I think that's right re: losing metadata is fine.
[00:51:01] <dstufft> maybe someone else will want to use it for something else, but I think it's fine to cross that bridge when we come to it
[00:51:20] <paulproteus> Now look at https://github.com/paulproteus/dirtbike/blob/nonsense/.travis.yml and notice that I trans-grade Travis-CI from Ubuntu 12.04 to Debian jessie.
[00:51:22] <dstufft> (to be completely honest, it doesn't even really need to be a wheel file, just a zip file, but a wheel is a zip)
[00:51:27] <paulproteus> That is honestly one of my favorite things to do ever. But anyway.
[00:51:35] <dstufft> paulproteus: heh, I was trying to figure out where python-six 1.8 was coming from
[00:51:38] <paulproteus> Now forget I said that and let's talk about something that makes me seem less insane.
[00:51:40] <dstufft> it wasn't in any ubuntu
[00:51:44] <paulproteus> : D
[00:52:05] <paulproteus> Maybe I should document that in README.
[00:52:17] <dstufft> so yea, python-six doesn't come with installed-files.txt
[00:52:35] <paulproteus> FWIW this strikes me as not a huge problem; the https://github.com/paulproteus/dirtbike/blob/nonsense/dirtbike/__init__.py#L48 function seems not so insanely bad.
[00:52:44] <paulproteus> I'm happy to do whatever, though.
[00:53:13] <paulproteus> But I should wrap this up and get it into Debian shortly presumably, so python-pip in Debian can use this at build time.
[00:53:26] <paulproteus> How would you like to automate this?
[00:53:50] <paulproteus> Right now it installs a script called 'dirtbike' that takes one Distribution name and adds the wheel for that to dist/*.whl
[00:53:57] <paulproteus> It then prints nothing and exits.
[00:54:02] <paulproteus> You might want a different interface.
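A sketch of the interface just described (the version in the resulting filename is illustrative):

    dirtbike six
    ls dist/
    # six-1.8.0-py2.py3-none-any.whl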
[00:54:48] <dstufft> there are two basic ways of handling this: either bomb out and demand an installation that has that metadata (which will require getting the Debian packages in testing/sid that pip depends on to have a --record file, or getting the Debian Python dh helper to also pass that flag), OR do some guessing by using top_level.txt to get the names of the top-level packages and just assume everything below them belongs to that package
[00:55:05] <dstufft> in a platform independent way*
[00:56:03] <paulproteus> I don't mean to be annoying, but the dpkg -S + dpkg -L strategy I'm using now seems completely perfect already.
[00:56:21] <paulproteus> It uses the dpkg metadata to search for what package contains the egg-info directory, then includes all the dist-packages files that came with that Debian package.
[00:56:41] <paulproteus> Having said that, (a) the downside is this isn't portable cross-distro and (b) I'm probably missing some failure mode.
[00:56:59] <dstufft> paulproteus: (a) is the qualifier I added to my statement :)
[00:57:02] <paulproteus> : D
[00:57:12] <dstufft> e.g. if you want to make it cross platform, those are the only options I can think of
[00:57:15] <paulproteus> Nod.
[00:57:16] <dstufft> of course, dpkg is fine for debian
[00:57:22] <dstufft> and ubuntu and whatever
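Roughly the Debian-specific lookup paulproteus describes (package and path are illustrative):

    dpkg -S /usr/lib/python2.7/dist-packages/six-*.egg-info   # which package owns the egg-info?
    dpkg -L python-six                                        # list every file that package installed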
[00:57:53] <paulproteus> I guess my question is, should we punt on that, and start using this right now in python-pip in Debian (and Ubuntu and whatever)? Or should we hold off trying to integrate this into python-pip in Debian until "dirtbike" is cross-platform-useful?
[00:58:44] <dstufft> Probably that depends on how soon Barry will be able to make python-pip use it :D (Unless you're planning to do that too). I don't really know how to make updates to python-pip, I just whine at barry
[00:58:59] <dstufft> I don't have a problem with it being Debian specific until someone else wants to use it
[00:59:20] <paulproteus> Cool.
[01:02:18] <dstufft> as far as interface goes, that's probably fine? It'll only need to be used at build time for python-pip so I'm not sure if optimizing for anything but that use case is super important, we just need to build wheels from the debian installed packages and stick them in a particular directory as part of the build step
[01:03:56] <dstufft> it'd probably be nice to get pybuild (is it called pybuild?) to drop installed-files.txt into the right place by default
[01:04:12] <dstufft> so things like ``pip show -f`` work
[01:05:08] <dstufft> (That's not really related to dirtbike though, except in that it would remove the need to have the debian specific stuff)
[01:19:53] <dstufft> paulproteus: btw, you can get the installed-files.txt case by doing ``pip install whatever --no-use-wheel``
[01:20:08] <dstufft> that'll disable the code path that installs from wheels, so it installs from sdists instead
[01:26:19] <paulproteus> Thanks for that info.
[01:26:45] <paulproteus> Just to make sure I understand the context properly, are you saying that because you think I should test the installed-files.txt case in the test suite for this? If so, +1.
[01:28:05] <dstufft> paulproteus: ya
[09:43:35] <nanonyme> Anyone have any idea if it's possible to tweak which Python the scripts built during package installation use? It looks like the path is an absolute path figured out at compile time, which is a bit awkward for our purposes
[09:50:19] <[Tritium]> nanonyme: other than hacking post-install?
[09:51:43] <[Tritium]> nanonyme: actually... *clears throat* "What are you really trying to do?©"
[09:59:08] <nanonyme> [Tritium], I'm building a Python environment on a build slave. Then I'm dropping this entire Python installation to a completely different machine under a completely different directory. Both of these are Windows
[09:59:38] <nanonyme> It works almost completely but I noticed there's hashbangs inside .exe files which point to the directory of Python on the build slave
[09:59:53] <[Tritium]> nanonyme: regex the paths. this is a sane reason to do this.
[10:00:17] <nanonyme> Thus far I've been telling everyone just not to use any of the .exe's but I've been playing with the idea of just fixing them
[10:00:29] <nanonyme> Yeah, rewriting would work, sure
[10:01:07] <[Tritium]> OR... install the python on the buildslave in the same directory as the destination host
[10:01:41] <nanonyme> That'd be awkward, there's a directory structure relevant to Jenkins on the build slave and I want to drop the end result closer to root under a fixed location
[10:02:19] <[Tritium]> in that case my best advice is the brute force method of rewriting the paths
[10:02:23] <nanonyme> Yeah
[10:02:38] <[Tritium]> which by the way, is probably not the BEST advice, so stay tuned!
[10:03:08] <nanonyme> We do use the same Python for building itself so it would be totally trivial to just write another Python script run as post-compilation
[10:03:54] <nanonyme> IOW fully self-sufficient Python pulled from Git to workspace, made to install packages to itself and package itself, then sent to a different machine
[10:04:50] <nanonyme> And yes, I know I'm probably going to be totally pulling my hair out with future versions of Python that depend on newer C runtimes
[10:06:15] <[Tritium]> python 3.5
[10:06:21] <[Tritium]> ....but after that, nothing
[10:06:58] <nanonyme> How so?
[10:07:21] <[Tritium]> >=3.5 will always use the same runtime (or compatible runtime. ie. if you have 2015, all further pythons will be fine... or if you have 2018, 3.5 will be fine)
[10:07:35] <nanonyme> Oh, that I didn't know
[10:07:46] <nanonyme> How is that accomplished?
[10:08:00] <[Tritium]> they untied python abi from the c runtime, thanks to a stable abi from microsoft
[10:10:44] <[Tritium]> I mean, if you need in depth details, steve dower (i think thats how you spell his name) has them
[10:13:41] <nanonyme> I mean, that's probably something our developers would be interested in as well
[10:14:02] <[Tritium]> Warning: I might have the details wrong.
[10:14:39] <[Tritium]> python-dev@ I'm sure would be happy to fill you in on all the gory details for your coworkers
[10:23:04] <nanonyme> http://blogs.msdn.com/b/vcblog/archive/2015/03/03/introducing-the-universal-crt.aspx right, so this
[10:23:16] <[Tritium]> that
[10:23:32] <nanonyme> It doesn't really solve the C runtime problem for us though for now. It obviously *eventually* will
[10:23:47] <nanonyme> We support customers all the way down to XP
[10:24:26] <nanonyme> (IOW wrt Python being deployable for all target platforms)
[10:26:09] <[Tritium]> ... 3.5 does not even support xp, fwiw
[10:26:50] <ronny> nanonyme: if you make wheels with console-scripts and install those, instead of just dropping a folder tree somewhere else, things could work
[10:27:07] <ronny> nanonyme: alternatively just rerun the script installers of easy_install/pip
[10:27:52] <nanonyme> ronny, the TA framework assumes you no longer need to install anything at that point but can just start using Python
[10:28:54] <[Tritium]> deploying to non-it customers
[10:28:57] <ronny> nanonyme: then build the python env on the build server with exactly the same paths?
[10:29:13] <nanonyme> Can't, no permissions for paths outside Jenkins root
[10:29:52] <nanonyme> Guess I'll just rewrite the paths inside the exe's, that's pretty simple
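A brute-force sketch of that rewrite, assuming the console-script launchers store the interpreter path as a plain #! line inside the .exe (all paths are illustrative):

    import glob

    OLD = b"#!c:\\jenkins\\workspace\\py\\python.exe"
    NEW = b"#!c:\\python\\python.exe"

    for exe in glob.glob(r"c:\python\Scripts\*.exe"):
        with open(exe, "rb") as f:
            data = f.read()
        if OLD in data:
            with open(exe, "wb") as f:
                f.write(data.replace(OLD, NEW))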
[11:15:11] <nedbat> I'm debugging a problem with get-pip.py on Appveyor, and wondering why get-pip.py is so quiet: it would be useful for it to report what it is doing. Would a PR adding logging like that (controllable with -q -v whatever) be welcome?
[11:33:44] <dstufft> nedbat: I think it already supports -v
[11:33:55] <nedbat> dstufft: i didn't see it in the code.
[11:34:50] <dstufft> You can think of get-pip.py as sort of a specialized invocation of ``pip install pip setuptools wheel``
[11:35:04] <dstufft> and any additional flags you pass to get-pip.py should be passed along with it
[11:35:42] <dstufft> because get-pip.py is really just pip
[11:35:56] <dstufft> that giant blob of base-whatever-encoded shit is a pip wheel
[11:36:45] <nedbat> dstufft: but the problems might be in the tempdir and unpacking of the blob
[11:37:38] <dstufft> oh, you want to add logging and such to the little bit of shim code that's in get-pip.py itself?
[11:37:47] <dstufft> I don't see any reason why that'd be a problem
[11:38:27] <nedbat> i don't see why either :) that's the problem :)
[11:38:37] <nedbat> i'll try -v
[11:38:48] <dstufft> No, I mean I don't see why it'd be a problem to add logging
[11:40:04] <dstufft> I mean, there's not much there, it really just unzips and then calls into pip, but it's a computer so it continuously surprises me with new and inventive ways to fail
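A very rough sketch of what that shim does (illustrative only; the real script's encoding, cleanup, and argument handling differ):

    import base64, os, sys, tempfile

    def bootstrap(encoded_pip_wheel):
        wheel = os.path.join(tempfile.mkdtemp(), "pip.whl")
        with open(wheel, "wb") as f:
            f.write(base64.b64decode(encoded_pip_wheel))
        sys.path.insert(0, wheel)  # a wheel is a zip, so it can be imported from directly
        import pip
        # flags passed to get-pip.py (e.g. -v) are forwarded to this pip invocation
        pip.main(["install", "pip", "setuptools", "wheel"] + sys.argv[1:])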
[11:46:48] <nedbat> dstufft: ok, cool.
[11:47:01] <nedbat> the Appveyor people claim nothing has changed on their end, but something changed somewhere....
[11:47:28] <dstufft> what does the error look like
[13:55:50] <Callek> Is there a way to say a project needs pip>=<some-version> to install correctly
[13:56:19] <Callek> I hit an issue where mock doesn't install correctly due to older pip versions, and would love a way to just force pip to be upgraded when installing mock (or at the least, my own project)
[13:57:21] <Callek> due to https://github.com/testing-cabal/mock/issues/316 from mock
[14:51:46] <breakingmatter> Can anyone explain to me the best practices for using setuptools to copy systemd/upstart/init scripts to their correct locations?
[14:53:48] <tdsmith> it's "don't do it," probably
[14:54:13] <doismellburning> this
[14:54:35] <breakingmatter> I just need to be able to package my python projects *somehow* and get it to the point where you can install it, and then start the service.
[14:55:21] <breakingmatter> There are tons of resources out there on how to do each piece of it individually, but nothing that I can find that puts it all together.
[14:56:02] <doismellburning> it feels like you're conflating Python packages and OS packages
[14:56:57] <breakingmatter> Sure, I might be.
[14:57:29] <ronny> breakingmatter: python packages are basically not really supposed to do that, os packages are
[14:57:53] <breakingmatter> Okay, then that's what I'm asking: can someone point me in the right direction?
[14:57:59] <ronny> breakingmatter: the general idea is make a python package with a runnable service, then make an os package that ships all the surrounding very os-specific details
[14:58:18] <doismellburning> breakingmatter: https://github.com/jordansissel/fpm may be useful
[14:58:19] <ronny> breakingmatter: whats the target distro?
[14:58:37] <breakingmatter> RHEL/CentOS/Fedora
[14:59:10] <ronny> then make an rpm package and, if it's open source, put it on a copr
[14:59:26] <breakingmatter> ronny: So, does the python package need to be built into a wheel for something like this?
[14:59:55] <ronny> breakingmatter: in general distro packages are built from an sdist
[15:00:45] <ronny> so the python side is up until the sdist, and then the distro side takes the sdist and brings it together with the init scripts / default configuration / ...
[15:01:10] <breakingmatter> Long story short, I have a module with my library code, some daemons/processes/threads that need to be run, and an entry_script that sets up the process manager. I'd like a systemd service file that I can use to run that entry script, and I'd like for all of that to be done by the install so I don't have to run ten different commands when I need to add a new host to the environment.
[15:01:30] <breakingmatter> And I'd like the process to be simple enough that I could adapt it to some gunicorn/flask projects we have as well.
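A minimal unit file of the kind being asked for (the names and paths are illustrative; as discussed above, it would normally be shipped by the OS package rather than installed by setuptools):

    [Unit]
    Description=my_python_app
    After=network.target

    [Service]
    ExecStart=/usr/bin/my_python_app
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target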
[15:01:48] <ronny> welcome to hell :)
[15:01:51] <breakingmatter> le sigh
[15:01:53] <breakingmatter> lol
[15:02:35] <breakingmatter> Python application deployment is hard.
[15:03:03] <breakingmatter> It's like all of the time you save during the development process is diverted into figuring out how to install on hundreds of servers painlessly.
[15:04:13] <breakingmatter> Anyways, sorry for the wall of text. Is fpm considered "best practice" for something like this?
[15:04:33] <RoyK> hi all. I'm having something of a messed up pip, so I was told to apt-get purge python-{pip,requests} - are there any files that should be removed manually after this?
[15:04:48] <ronny> breakingmatter: well, unfortunately nobody seems to want to spend the time to make that part painless, and it's like that for most languages
[15:05:50] <ronny> RoyK: depends on how you created the mess
[15:06:22] <doismellburning> breakingmatter: so you'll probably have a hard time making OS packages thanks to virtualenv woe
[15:06:31] <doismellburning> breakingmatter: we've moved to just shipping Docker images
[15:07:57] <breakingmatter> doismellburning: I thought about using Docker for this, but it just felt like a dirty fix. I mean, all I want is to take my python app, package it (somehow), and run it as a service. I feel like there should be something out there already that makes it easy.
[15:08:19] <ronny> breakingmatter: there are tools like yadt, maybe
[15:08:22] <breakingmatter> I mean, building wheels is fairly easy. But trying to combine that with setting up init scripts and such is nigh impossible.
[15:09:49] <breakingmatter> ronny: It still /feels/ like a heavy solution
[15:10:33] <doismellburning> breakingmatter: we _used_ to just use fpm
[15:10:47] <breakingmatter> doismellburning: And now you just use Docker?
[15:10:49] <doismellburning> python service goes in, RPM comes out, with /etc/init.d/badger etc.
[15:10:51] <doismellburning> yep
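Roughly the fpm workflow being described (package name, paths, and the init script are illustrative):

    python setup.py sdist
    fpm -s python -t rpm ./setup.py
    # the init script can be added from a plain directory tree:
    fpm -s dir -t rpm -n badger-init etc/init.d/badger=/etc/init.d/badger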
[15:11:04] <RoyK> ronny: looks like there's some leftovers under /usr/local/lib - anywhere else I should look?
[15:11:22] <breakingmatter> doismellburning: What features made you just move to Docker?
[15:11:33] <breakingmatter> Also, do you use supervisor to manage the process?
[15:11:45] <doismellburning> breakingmatter: greater isolation, simplicity, reusable builds
[15:12:05] <doismellburning> breakingmatter: nope, we used sysvinit / daemontools variously
[15:12:50] <breakingmatter> doismellburning: I thought that Docker doesn't give you an init system in the container?
[15:13:31] <doismellburning> breakingmatter: it doesn't
[15:13:46] <doismellburning> breakingmatter: when you say "do you use supervisor to manage the process", which process do you mean?
[15:14:11] <doismellburning> "our python daemons when we built rpms" -> sysvinit / daemontools
[15:14:19] <doismellburning> our Docker images - various things
[15:14:27] <breakingmatter> doismellburning: I mean whatever python program you're trying to run. Like a flask webapp or whatever
[15:15:12] <breakingmatter> My understanding is that if you don't want to set up an init script the default deployment scheme is to run some kind of detached supervisord process, and it seems fairly common with docker folk too
[15:15:48] <doismellburning> breakingmatter: I'm not sure I follow what you mean by "the default deployment scheme"
[15:16:37] <breakingmatter> Running your service/app on a server. Everything I've read about deploying python apps on Docker says to use supervisord to run the process itself rather than just doing "python app.py".
[15:16:50] <breakingmatter> And basically use supervisord as a replacement init system
[15:17:03] <doismellburning> breakingmatter: _inside_ Docker? I can't say I'd ever do that
[15:19:27] <breakingmatter> doismellburning: So you just run your app script directly?
[15:20:23] <doismellburning> breakingmatter: inside Docker? absolutely
[15:20:52] <doismellburning> the only reason I see to use some sort of process manager _inside_ Docker is if you want to run multiple things in a container
[15:21:57] <breakingmatter> How do you control it as a daemon then? systemd/other init systems offer things I just can't see Docker itself handling.
[15:24:14] <doismellburning> breakingmatter: that's _outside_ Docker
[15:34:05] <breakingmatter> doismellburning: Oh, so you manage the Docker container itself as if it's the service you're running?
[15:34:21] <breakingmatter> i.e., you have a wsgi service inside the container and you manage the container instead of the python service
[15:36:09] <doismellburning> breakingmatter: yes
[15:36:16] <breakingmatter> Interesting.
[15:36:28] <doismellburning> breakingmatter: that way I don't care if it's Python inside Docker, or hand-written ASM
[15:37:09] <breakingmatter> Well our architecture is all microservice oriented, so that's appealing on my end.
[15:37:26] <breakingmatter> I still wish there was a simple way to deploy/package python apps as _services_ though.
[15:37:43] <breakingmatter> I just want to be able to do `systemctl start my_python_app` immediately after a source install.
[15:37:46] <doismellburning> sure; we built a system and are now giving up on it
[15:38:10] <breakingmatter> built a system?
[15:41:15] <nedbat> I'm trying to figure out why pip can't install on Appveyor under python2.6 since late last week: https://ci.appveyor.com/project/nedbat/coveragepy/build/default-98/job/tr3r2eejjdsslnwm
[15:41:44] <nedbat> looking at versions, I see that setuptools updated a week ago: https://pypi.python.org/pypi/setuptools Does this look familiar to anyone?
[15:50:47] <dstufft> nedbat: doesn't look familiar
[15:51:02] <dstufft> oh
[15:51:03] <dstufft> wait
[15:51:05] <dstufft> I see what it is
[15:51:25] <nedbat> yay!
[15:52:14] <dstufft> argparse 1.4 got released Sep 12, 2015 and didn't include a wheel
[15:52:41] <dstufft> and get-pip.py tries to install ``wheel``, which depends on argparse on 2.6
[15:52:47] <nedbat> oh, i didn't look for argparse
[15:52:50] <dstufft> but get-pip.py needs all wheels
[15:53:06] <dstufft> you can do get-pip.py --no-wheel to stop installing wheel
[15:53:10] <dstufft> which should fix it
[15:53:17] <dstufft> or bug whoever manages argparse to upload a wheel
[15:53:58] <nedbat> dstufft: help me understand how you got from the error messages on the appveyor page (about ssl!) to "argparse didn't make a wheel"
[15:54:26] <dstufft> nedbat: the ssl thing was just a warning
[15:54:31] <dstufft> not an error
[15:54:43] <nedbat> so there's no message here about the missing wheel?
[15:54:46] <dstufft> the real error message is on line 36
[15:55:01] <nedbat> ok, cool
[15:55:03] <dstufft> "setuptools must be installed to install from a source distribution"
[15:55:12] <dstufft> so I looked to see why we were installing a source dist
[15:55:20] <dstufft> which is when I saw we were using wheels for everything but argparse
[15:55:24] <nedbat> dstufft: btw: the "get-pip.py is quiet" problem was really a "no one understands Powershell" problem.
[15:56:20] <dstufft> then looked at warehouse.python.org/project/argparse/ to see what files they had available
[15:56:22] <dstufft> nedbat: heh
[15:56:24] <dstufft> makes sense
[15:56:30] <dstufft> powershell is scary
[15:56:34] <nedbat> yes
[16:07:59] <nedbat> dstufft: nice! https://github.com/ThomasWaldmann/argparse/issues/91
[16:32:47] <ronny> nedbat: it's not clear if thomas is allowed to upload tho (he just took it to github)
[16:33:12] <nedbat> ronny: someone has just uploaded the .whl, and it does fix the problem on Appveyor
[16:48:28] <ronny> heh :)
[16:48:39] <ronny> why did it suddenly start? update of get-pip?
[16:58:15] <tdsmith> if anyone complains about wheel caching failing on 3.5, it's https://bitbucket.org/pypa/wheel/issues/146/wheel-building-fails-on-cpython-350b3#comment-21741627 i think
[16:59:09] <raydeo> !logs
[16:59:09] <pmxbot> http://chat-logs.dcpython.org/channel/pypa
[17:03:55] <Callek> how can I do `pip list --outdated` with respect to honoring max-version requirements for related packages. (I know `pip install -U -e .` will respect it, but I want a "no-op" version of that)
[17:11:42] <ronny> dstufft: is there a way to mark projects on pypi as abandoned?
[17:26:38] <Callek> ronny: see https://www.python.org/dev/peps/pep-0426/#obsoleted-by
[17:51:48] <ronny> Callek: thats completely useless
[17:52:11] <Callek> why, since it allows you to mark your project as abandoned aiui
[17:52:22] <ronny> abandoned means there is no replacement and no future ^^
[17:52:59] <Callek> ronny: see that section, "None" is the appropriate value there, for "no replacement"
[17:53:43] <ronny> Callek: that pep is still a draft tho
[17:57:18] <ronny> hmm
[17:57:32] <ronny> i'll use development status inactive and just edit the descriptions
[18:07:43] <nedbat> ronny: the argparse problem was the lack of a wheel for 1.4.0. 1.3.0 had one.
[18:08:03] <nedbat> ronny: not sure when the wheel sensitivity started.
[18:08:20] <ronny> i see
[18:21:06] <nanonyme> dstufft, powershell is a bit less scary than batch though
[19:00:07] <nedbat> nanonyme: they are each scary in their own way...
[19:01:42] <dstufft> ronny: No
[19:02:05] <dstufft> I think there is an issue for it tho
[19:02:12] <dstufft> maybe not that specific thing
[19:05:49] <ronny> dstufft: I see, thanks
[20:51:41] <dstufft> tdsmith: thanks! :D
[20:52:47] <tdsmith> dstufft: you're welcome
[20:53:23] <tdsmith> our issue for that is getting a lot of me-toos so i should probably deal with it
[20:53:50] <tdsmith> sorry for the noise
[20:58:59] <_habnabit> apparently `~=-0.1.0` doesn't match `0.1+upstream.2` but does match `0.1`. how is this possible? pip seems to be really really bad at working with upstream versions
[21:01:04] <dstufft> _habnabit: did you mean ~=0.1.0?
[21:01:34] <dstufft> or I'm not sure what the - is there
[21:02:42] <dstufft> _habnabit: it's probably true that local versions aren't super well tested, they are a brand new thing and I wrote whatever tests I could think of at the time
[21:19:16] <_habnabit> dstufft, yes ~=0.1.0
[21:19:19] <_habnabit> dstufft, i didn't notice the -
[21:23:30] <_habnabit> dstufft, guess i should file an issue
[21:23:37] <dstufft> _habnabit: sec
[21:25:50] <dstufft> _habnabit: Ok yea, open a bug on github.com/pypa/packaging please, specifically the problem is that ==0.1.* doesn't match 0.1+upstream.2 (~= is implemented in terms of >= and == .*)
[21:26:02] <_habnabit> dstufft, ah ok
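A rough reproduction of the report with the packaging library (the results shown are the behaviour described here at the time; later releases may differ):

    from packaging.specifiers import SpecifierSet

    print("0.1" in SpecifierSet("~=0.1.0"))              # True
    print("0.1+upstream.2" in SpecifierSet("~=0.1.0"))   # False, per the report
    print("0.1+upstream.2" in SpecifierSet("==0.1.*"))   # the underlying case that fails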
[21:28:27] <dstufft> _habnabit: once you make the issue, link me it please
[21:29:06] <_habnabit> dstufft, https://github.com/pypa/packaging/issues/41
[21:33:59] <dstufft> thanks
[21:34:04] <dstufft> wrote my notes down on the issue