[00:44:18] <paulproteus> OK so dstufft I'm working on the test suite still for "dirtbike" and the code is kind of a giant mess, but I think it works. I presume for this to be useful, I should also get it into Debian. I'm going to clean up the code in a moment, but I wanted to show you a few things for your feedback.
[00:44:33] <paulproteus> https://travis-ci.org/paulproteus/dirtbike/builds/80158388 -- look at the last ~10 lines, sorry about the noise above those lines.
[00:44:56] <paulproteus> https://github.com/paulproteus/dirtbike/blob/nonsense/tests.sh is the test suite that we run.
[00:45:16] <paulproteus> livereload is a wheel that I install via wget-ing a wheel from PyPI and then pip installing it, and then using this tool to re-generate the same wheel.
[00:45:31] <paulproteus> six is a package we get by apt-getting it, and using this tool to generate a wheel.
[00:47:33] <paulproteus> From what I can tell, many Debian packages like python-requests don't include RECORD.
[00:48:10] <dstufft> since RECORD is only from wheel and Debian typically does python setup.py install
[00:49:02] <dstufft> I think the best way to deal with that is to have Debian pass --record
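The --record flag dstufft refers to is the distutils/setuptools one; a sketch of the invocation Debian's build helper would run (filename illustrative):

```shell
# distutils/setuptools can write out the list of files it installed,
# which is where pip's installed-files.txt comes from:
python setup.py install --record=installed-files.txt
```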
[00:50:00] <paulproteus> Per https://github.com/paulproteus/dirtbike/blob/nonsense/dirtbike/__init__.py#L95 I copy the minimal subset of metadata into this wheel.
[00:50:09] <paulproteus> So like it almost definitely loses metadata.
[00:50:19] <paulproteus> But I figure that's OK, but you should consider instead glaring at me and telling me to do something different.
[00:50:30] <paulproteus> The branch name is 'nonsense' because this is my random hackery branch.
[00:50:39] <dstufft> for the use cases we're looking at for this, losing metadata is fine
[00:50:41] <paulproteus> It makes a console_scripts script called 'dirtbike' -- is it OK to grab global namespace like that?
[00:50:47] <paulproteus> I think that's right re: losing metadata is fine.
[00:51:01] <dstufft> maybe someone else will want to use it for something else, but I think it's fine to cross that bridge when we come to it
[00:51:20] <paulproteus> Now look at https://github.com/paulproteus/dirtbike/blob/nonsense/.travis.yml and notice that I trans-grade Travis-CI from Ubuntu 12.04 to Debian jessie.
[00:51:22] <dstufft> (to be completely honest, it doesn't even really need to be a wheel file, just a zip file, but a wheel is a zip)
[00:51:27] <paulproteus> That is honestly one of my favorite things to do ever. But anyway.
[00:51:35] <dstufft> paulproteus: heh, I was trying to figure out where python-six 1.8 was coming from
[00:51:38] <paulproteus> Now forget I said that and let's talk about something that makes me seem less insane.
[00:52:05] <paulproteus> Maybe I should document that in README.
[00:52:17] <dstufft> so yea, python-six doesn't come with installed-files.txt
[00:52:35] <paulproteus> FWIW this strikes me as not a huge problem; the https://github.com/paulproteus/dirtbike/blob/nonsense/dirtbike/__init__.py#L48 function seems not so insanely bad.
[00:52:44] <paulproteus> I'm happy to do whatever, though.
[00:53:13] <paulproteus> But I should wrap this up and get it into Debian shortly presumably, so python-pip in Debian can use this at build time.
[00:53:26] <paulproteus> How would you like to automate this?
[00:53:50] <paulproteus> Right now it installs a script called 'dirtbike' that takes one Distribution name and adds the wheel for that to dist/*.whl
[00:53:57] <paulproteus> It then prints nothing and exits.
[00:54:02] <paulproteus> You might want a different interface.
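For reference, the interface described above amounts to the following (the wheel filename is hypothetical; it depends on the installed version):

```shell
dirtbike six        # rebuild a wheel from the system-installed 'six'
ls dist/
# e.g. six-1.8.0-py2.py3-none-any.whl
```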
[00:54:48] <dstufft> there are two basic ways of handling this: either bomb out and demand an installation that has that metadata (which will require getting the Debian packages in testing/sid that pip depends on to have a --record file, or getting the Debian Python dh helper to also pass that thing), OR do some guessing by using top_level.txt to get the names of the top-level packages and just assume everything below them belongs to that package
[00:55:05] <dstufft> in a platform independent way*
[00:56:03] <paulproteus> I don't mean to be annoying, but the dpkg -S + dpkg -L strategy I'm using now seems completely perfect already.
[00:56:21] <paulproteus> It uses the dpkg metadata to search for what package contains the egg-info directory, then includes all the dist-packages files that came with that Debian package.
[00:56:41] <paulproteus> Having said that, (a) the downside is this isn't portable cross-distro and (b) I'm probably missing some failure mode.
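The dpkg -S + dpkg -L strategy paulproteus describes, sketched in shell (package name and paths are illustrative):

```shell
# 1. Ask dpkg which package owns the distribution's egg-info...
dpkg -S /usr/lib/python2.7/dist-packages/six-1.8.0.egg-info
# python-six: /usr/lib/python2.7/dist-packages/six-1.8.0.egg-info

# 2. ...then list everything that package installed, keeping the
#    dist-packages files for inclusion in the rebuilt wheel.
dpkg -L python-six | grep dist-packages
```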
[00:56:59] <dstufft> paulproteus: (a) is the qualifier I added to my statement :)
[00:57:53] <paulproteus> I guess my question is, should we punt on that, and start using this right now in python-pip in Debian (and Ubuntu and whatever)? Or should we hold off trying to integrate this into python-pip in Debian until "dirtbike" is cross-platform-useful?
[00:58:44] <dstufft> Probably that depends on how soon Barry will be able to make python-pip use it :D (Unless you're planning to do that too). I don't really know how to make updates to python-pip, I just whine at Barry
[00:58:59] <dstufft> I don't have a problem with it being Debian specific until someone else wants to use it
[01:02:18] <dstufft> as far as interface goes, that's probably fine? It'll only need to be used at build time for python-pip, so I'm not sure optimizing for anything but that use case is super important; we just need to build wheels from the Debian-installed packages and stick them in a particular directory as part of the build step
[01:03:56] <dstufft> it'd probably be nice to get pybuild (is it called pybuild?) to drop installed-files.txt into the right place by default
[01:04:12] <dstufft> so things like ``pip show -f`` work
[01:05:08] <dstufft> (That's not really related to dirtbike though, except in that it would remove the need to have the debian specific stuff)
[01:19:53] <dstufft> paulproteus: btw, you can get the installed-files.txt case by doing ``pip install whatever --no-use-wheel``
[01:20:08] <dstufft> that'll disable the path that allows installing wheels and installs from sdists
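That is (this flag is from pip of that era; it was later superseded by --no-binary):

```shell
# Force pip to skip wheels and install from the sdist, which goes
# through the setup.py install code path and therefore writes
# installed-files.txt:
pip install six --no-use-wheel
```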
[01:26:45] <paulproteus> Just to make sure I understand the context properly, are you saying that because you think I should test the installed-files.txt case in the test suite for this? If so, +1.
[09:43:35] <nanonyme> Anyone have any idea if it's possible to tweak which Python the scripts built during package installation use? It looks like the path is an absolute path figured out at compile time, which is a bit awkward for our purposes
[09:50:19] <[Tritium]> nanonyme: other than hacking post-install?
[09:59:08] <nanonyme> [Tritium], I'm building a Python environment on a build slave. Then I'm dropping this entire Python installation to a completely different machine under a completely different directory. Both of these are Windows
[09:59:38] <nanonyme> It works almost completely but I noticed there's hashbangs inside .exe files which point to the directory of Python on the build slave
[09:59:53] <[Tritium]> nanonyme: regex the paths. this is a sane reason to do this.
[10:00:17] <nanonyme> Thus far I've been telling everyone just not to use any of the .exe's but I've been playing with the idea of just fixing them
[10:00:29] <nanonyme> Yeah, rewriting would work, sure
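The rewrite nanonyme settles on can be sketched as a byte-level search-and-replace over each launcher .exe (the function name here is made up). Caveat: this assumes the old path occurs literally as bytes in the file, and byte-patching a binary can invalidate signatures; depending on the launcher format, a replacement of a different length may or may not be tolerated, so padding the new path to the same length is the safest bet.

```python
def rewrite_launcher_paths(exe_path, old_prefix, new_prefix):
    """Replace an embedded interpreter path inside a script launcher.

    Naive byte-level replacement: assumes the old path appears
    verbatim (e.g. in the shebang line that pip's .exe wrappers
    embed after the launcher stub).
    """
    with open(exe_path, "rb") as f:
        data = f.read()
    data = data.replace(old_prefix.encode("utf-8"),
                        new_prefix.encode("utf-8"))
    with open(exe_path, "wb") as f:
        f.write(data)
```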
[10:01:07] <[Tritium]> OR... install the python on the buildslave in the same directory as the destination host
[10:01:41] <nanonyme> That'd be awkward, there's a directory structure relevant to Jenkins on the build slave and I want to drop the end result closer to root under a fixed location
[10:02:19] <[Tritium]> in that case my best advice is the brute force method of rewriting the paths
[10:02:38] <[Tritium]> which by the way, is probably not the BEST advice, so stay tuned!
[10:03:08] <nanonyme> We do use the same Python for building itself so it would be totally trivial to just write another Python script run as post-compilation
[10:03:54] <nanonyme> IOW a fully self-sufficient Python pulled from Git to the workspace, made to install packages to itself and package itself, then sent to a different machine
[10:04:50] <nanonyme> And yes, I know I'm probably going to be totally pulling my hair out with future versions of Python that depend on newer C runtimes
[10:07:21] <[Tritium]> >=3.5 will always use the same runtime (or a compatible runtime, i.e. if you have 2015, all further Pythons will be fine... or if you have 2018, 3.5 will be fine)
[10:23:32] <nanonyme> It doesn't really solve the C runtime problem for us though for now. It obviously *eventually* will
[10:23:47] <nanonyme> We support customers all the way down to XP
[10:24:26] <nanonyme> (IOW wrt Python being deployable for all target platforms)
[10:26:09] <[Tritium]> ... 3.5 does not even support xp, fwiw
[10:26:50] <ronny> nanonyme: if you make wheels with console scripts and install those, instead of just dropping a folder tree somewhere else, things could work
[10:27:07] <ronny> nanonyme: alternatively just rerun the script installers of easy_install/pip
[10:27:52] <nanonyme> ronny, the TA framework assumes you no longer need to install anything at that point but can just start using Python
[10:28:54] <[Tritium]> deploying to non-IT customers
[10:28:57] <ronny> nanonyme: then build the Python env on the build server with exactly the same paths?
[10:29:13] <nanonyme> Can't, no permissions for paths outside Jenkins root
[10:29:52] <nanonyme> Guess I'll just rewrite the paths inside the exe's, that's pretty simple
[11:15:11] <nedbat> I'm debugging a problem with get-pip.py on Appveyor, and wondering why get-pip.py is so quiet: it would be useful for it to report what it is doing. Would a PR adding logging like that (controllable with -q -v whatever) be welcome?
[11:33:44] <dstufft> nedbat: I think it already supports -v
[11:33:55] <nedbat> dstufft: i didn't see it in the code.
[11:34:50] <dstufft> You can think of get-pip.py as sort of a specialized invocation of ``pip install pip setuptools wheel``
[11:35:04] <dstufft> and any additional flags you pass to get-pip.py should be passed along with it
[11:35:42] <dstufft> because get-pip.py is really just pip
[11:35:56] <dstufft> that giant blob of base-whatever encoded shit is a pip wheel
[11:36:45] <nedbat> dstufft: but the problems might be in the tempdir and unpacking of the blob
[11:37:38] <dstufft> oh, you want to add logged and such to the little bit of shim code that's in get-pip.py itself?
[11:37:47] <dstufft> I don't see any reason why that'd be a problem
[11:38:48] <dstufft> No, I mean I don't see why it'd be a problem to add logging
[11:40:04] <dstufft> I mean, there's not much there, it really just unzips and then calls into pip, but it's a computer, so it continuously surprises me with new and inventive ways to fail
[11:47:01] <nedbat> the Appveyor people claim nothing has changed on their end, but something changed somewhere....
[11:47:28] <dstufft> what does the error look like
[13:55:50] <Callek> Is there a way to say a project needs pip>=<some-version> to install correctly
[13:56:19] <Callek> I hit an issue where mock doesn't install correctly due to older pip versions, and would love a way to just force pip to be upgraded when installing mock (or at the least, my own project)
[13:57:21] <Callek> due to https://github.com/testing-cabal/mock/issues/316 from mock
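There's no standard metadata field for "requires pip >= X", so a guard like Callek wants has to live in setup.py itself. A minimal sketch, not an established convention; the helper names are made up and the version parsing is deliberately coarse:

```python
import sys

def version_tuple(v):
    """Turn a version string like '6.1.1' into (6, 1, 1).

    Stops at the first segment with no digits; good enough
    for a coarse minimum-version guard, not full PEP 440.
    """
    parts = []
    for piece in v.split("."):
        digits = "".join(c for c in piece if c.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def require_pip(minimum):
    """Abort setup.py with a clear message when run under an old pip."""
    try:
        import pip  # usually importable when pip is driving the install
    except ImportError:
        return
    if version_tuple(pip.__version__) < version_tuple(minimum):
        sys.exit("This project needs pip >= %s to install correctly; "
                 "please run 'pip install --upgrade pip' first." % minimum)
```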
[14:51:46] <breakingmatter> Can anyone explain to me the best practices for using setuptools to copy systemd/upstart/init scripts to their correct locations?
[14:54:35] <breakingmatter> I just need to be able to package my python projects *somehow* and get it to the point where you can install it, and then start the service.
[14:55:21] <breakingmatter> There are tons of resources out there on how to do each piece of it individually, but nothing that I can find that puts it all together.
[14:56:02] <doismellburning> it feels like you're conflating Python packages and OS packages
[14:57:29] <ronny> breakingmatter: python packages are basically not really supposed to do that; OS packages are
[14:57:53] <breakingmatter> Okay, then that's what I'm asking: can someone point me in the right direction?
[14:57:59] <ronny> breakingmatter: the general idea is to make a Python package with a runnable service, then make an OS package that ships all the surrounding very OS-specific details
[14:58:18] <doismellburning> breakingmatter: https://github.com/jordansissel/fpm may be useful
[14:58:19] <ronny> breakingmatter: what's the target distro?
[14:59:10] <ronny> then make an RPM package and, if it's open source, put it on a COPR
[14:59:26] <breakingmatter> ronny: So, does the python package need to be built into a wheel for something like this?
[14:59:55] <ronny> breakingmatter: in general, distro packages are built from an sdist
[15:00:45] <ronny> so the Python side is up until the sdist, and then the distro side takes the sdist and brings it together with the init scripts / default configuration / ...
[15:01:10] <breakingmatter> Long story short, I have a module with my library code, some daemons/processes/threads that need to be run, and an entry script that sets up the process manager. I'd like a systemd service file that I can use to run that entry script, and I'd like for all of that to be done by the install so I don't have to run ten different commands when I need to add a new host to the environment.
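What breakingmatter describes usually ends up as a unit file shipped by the OS package, not the Python package. A minimal sketch, assuming the entry point is installed as /usr/bin/myservice (all names hypothetical):

```ini
# Ship this in the RPM/deb (e.g. /usr/lib/systemd/system/myservice.service),
# not in the wheel/sdist.
[Unit]
Description=My Python service
After=network.target

[Service]
ExecStart=/usr/bin/myservice
Restart=on-failure
User=myservice

[Install]
WantedBy=multi-user.target
```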
[15:01:30] <breakingmatter> And I'd like the process to be simple enough that I could adapt it to some gunicorn/flask projects we have as well.
[15:02:35] <breakingmatter> Python application deployment is hard.
[15:03:03] <breakingmatter> It's like all of the time you save during the development process is diverted into figuring out how to install on hundreds of servers painlessly.
[15:04:13] <breakingmatter> Anyways, sorry for the wall of text. Is fpm considered "best practice" for something like this?
[15:04:33] <RoyK> hi all. I'm having something of a messed up pip, so I was told to apt-get purge python-{pip,requests} - are there any files that should be removed manually after this?
[15:04:48] <ronny> breakingmatter: well, unfortunately nobody seems to want to spend the time to make that part painless, and it's like that for most languages
[15:05:50] <ronny> RoyK: depends on how you created the mess
[15:06:22] <doismellburning> breakingmatter: so you'll probably have a hard time making OS packages thanks to virtualenv woe
[15:06:31] <doismellburning> breakingmatter: we've moved to just shipping Docker images
[15:07:57] <breakingmatter> doismellburning: I thought about using Docker for this, but it just felt like a dirty fix. I mean, all I want is to take my python app, package it (somehow), and run it as a service. I feel like there should be something out there already that makes it easy.
[15:08:19] <ronny> breakingmatter: there is tools like yadt maybe
[15:08:22] <breakingmatter> I mean, building wheels is fairly easy. But trying to combine that with setting up init scripts and such is nigh impossible.
[15:09:49] <breakingmatter> ronny: It still /feels/ like a heavy solution
[15:10:33] <doismellburning> breakingmatter: we _used_ to just use fpm
[15:10:47] <breakingmatter> doismellburning: And now you just use Docker?
[15:10:49] <doismellburning> python service goes in, RPM comes out, with /etc/init.d/badger etc.
[15:12:05] <doismellburning> breakingmatter: nope, we used sysvinit / daemontools variously
[15:12:50] <breakingmatter> doismellburning: I thought that Docker doesn't give you an init system in the container?
[15:13:31] <doismellburning> breakingmatter: it doesn't
[15:13:46] <doismellburning> breakingmatter: when you say "do you use supervisor to manage the process", which process do you mean?
[15:14:11] <doismellburning> "our python daemons when we built rpms" -> sysvinit / daemontools
[15:14:19] <doismellburning> our Docker images - various things
[15:14:27] <breakingmatter> doismellburning: I mean whatever python program you're trying to run. Like a flask webapp or whatever
[15:15:12] <breakingmatter> My understanding is that if you don't want to setup an init script the default deployment scheme is to run some kind of detached supervisord process, and it seems fairly common with docker folk too
[15:15:48] <doismellburning> breakingmatter: I'm not sure I follow what you mean by "the default deployment scheme"
[15:16:37] <breakingmatter> Running your service/app on a server. Everything I've read about deploying python apps on Docker says to use supervisord to run the process itself rather than just doing "python app.py".
[15:16:50] <breakingmatter> And basically use supervisord as a replacement init system
[15:17:03] <doismellburning> breakingmatter: _inside_ Docker? I can't say I'd ever do that
[15:19:27] <breakingmatter> doismellburning: So you just run your app script directly?
[15:20:52] <doismellburning> the only reason I see to use some sort of process manager _inside_ Docker is if you want to run multiple things in a container
[15:21:57] <breakingmatter> How do you control it as a daemon then? systemd/other init systems offer things I just can't see Docker itself handling.
[15:41:15] <nedbat> I'm trying to figure out why pip can't install on Appveyor under python2.6 since late last week: https://ci.appveyor.com/project/nedbat/coveragepy/build/default-98/job/tr3r2eejjdsslnwm
[15:41:44] <nedbat> looking at versions, I see that setuptools updated a week ago: https://pypi.python.org/pypi/setuptools Does this look familiar to anyone?
[15:53:17] <dstufft> or bug whoever manages argparse to upload a wheel
[15:53:58] <nedbat> dstufft: help me understand how you got from the error messages on the appveyor page (about ssl!) to "argparse didn't make a wheel"
[15:54:26] <dstufft> nedbat: the ssl thing was just a warning
[16:48:39] <ronny> why did it suddenly start? update of get-pip?
[16:58:15] <tdsmith> if anyone complains about wheel caching failing on 3.5, it's https://bitbucket.org/pypa/wheel/issues/146/wheel-building-fails-on-cpython-350b3#comment-21741627 i think
[17:03:55] <Callek> how can I do `pip list --outdated` with respect to honoring max-version requirements for related packages. (I know `pip install -U -e .` will respect it, but I want a "no-op" version of that)
[17:11:42] <ronny> dstufft: is there a way to mark projects on pypi as abandoned?
[17:26:38] <Callek> ronny: see https://www.python.org/dev/peps/pep-0426/#obsoleted-by
[20:58:59] <_habnabit> apparently `~=-0.1.0` doesn't match `0.1+upstream.2` but does match `0.1`. how is this possible? pip seems to be really really bad at working with upstream versions
[21:01:04] <dstufft> _habnabit: did you mean ~=0.1.0?
[21:01:34] <dstufft> or I'm not sure what the - is there
[21:02:42] <dstufft> _habnabit: it's probably true that local versions aren't super well tested, they are a brand new thing and I wrote whatever tests I could think of at the time
[21:25:50] <dstufft> _habnabit: Ok yea, open a bug on github.com/pypa/packaging please, specifically the problem is that ==0.1.* doesn't match 0.1+upstream.2 (~= is implemented in terms of >= and == .*)
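dstufft's diagnosis follows from how ~= is defined: PEP 440 expands a compatible-release clause into a >= bound plus a prefix match, so whether `~=0.1.0` accepts `0.1+upstream.2` hinges entirely on how `==0.1.*` treats local version labels. A small illustration of that expansion (string manipulation only, not a full PEP 440 parser):

```python
def expand_compatible_release(clause):
    """Expand a PEP 440 '~=' clause into its >= / == .* equivalent.

    '~=0.1.0' means '>=0.1.0, ==0.1.*' -- the prefix match is what
    must (or must not) accept a local version like '0.1+upstream.2'.
    """
    version = clause[len("~="):]
    prefix = version.rsplit(".", 1)[0]  # drop the final release segment
    return ">=%s, ==%s.*" % (version, prefix)
```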