#pypa logs for Thursday the 17th of January, 2019

[07:22:18] <mithro> Any idea why https://pypi.org/project/tinyprog/1.0.24.dev13/ isn't getting rendered?
[07:23:48] <mithro> Description-Content-Type: text/markdown seems to have been correctly set?
[11:52:29] <ronny> mithro: but why the heck is MIME metadata part of the document? something seems completely wrong with the upload
[12:04:04] <mithro> ronny: upload was here -> https://travis-ci.com/mithro/TinyFPGA-Bootloader/jobs/171055939
[12:05:44] <ronny> please ensure an upgrade of pip/setuptools and try again
[12:06:33] <ronny> mithro: it's possible that a too-old setuptools creates issues
[12:12:46] <mithro> ronny: Still no luck -> https://pypi.org/project/tinyprog/1.0.24.dev16/
[12:13:30] <mithro> ronny: build at https://travis-ci.com/mithro/TinyFPGA-Bootloader/jobs/171106213 -- Successfully installed setuptools-40.6.3
[12:14:32] <mithro> ronny: Anyway, bed time for me....
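
For context on the rendering question above: PyPI only renders a markdown description when the content type is declared through sufficiently new tooling; setuptools >= 38.6.0 and twine >= 1.11.0 are the commonly cited minimums. A minimal, illustrative sketch of the relevant setup.py fields (not mithro's actual file):

    # setup.py -- hypothetical sketch
    from setuptools import setup

    setup(
        name="tinyprog",
        long_description=open("README.md").read(),
        # tells PyPI to render the description as markdown
        long_description_content_type="text/markdown",
    )
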
[13:42:23] <MapMan> Hi! Anyone got an idea why installing cryptography bombs? Here's full output, error at the end: https://hastebin.com/capomineri.cs
[14:27:41] <ngoldbaum> MapMan: https://github.com/pyca/cryptography/issues/3605
[14:32:31] <MapMan> ngoldbaum: Thanks, I'm on 1.1.0g-2ubuntu4.3 - is that still the issue on this version? Supposedly it was fixed?
[14:35:08] <ngoldbaum> sure, but the Debian package might not have backported the fix
[14:35:19] <ngoldbaum> or it was backported but in a way that fooled cryptography
[14:35:38] <ngoldbaum> maybe open an issue against the cryptography repo?
[14:37:09] <MapMan> ngoldbaum: I'm stupid. I'm trying to install an old cryptography version, no wonder it's not fixed
[14:37:35] <MapMan> and the old version of cryptography I'm trying to install might simply not support latest openssl headers...
[14:38:08] <ngoldbaum> yup, makes sense
[15:50:56] <fdv> hi! I have now gone through pipenv, poetry and pip-tools, and it seems these tools are all lacking essential (for me) functionality, unless I'm mistaken
[15:52:08] <fdv> pip supports (through the -c flag) a file with constraints that are taken into account when installing from requirements.txt, does anybody know whether pipenv or pip-tools (pip-sync) support that?
[15:53:31] <fdv> sorry, pip-compile, not pip-sync. I'd like my final requirements.txt to contain the actual versions I want (including recursive dependencies) without having to also consider a second constraints file part of the spec
[15:54:48] <fdv> poetry probably isn't an option for me anyhow, as it seems restricted to installing only within a virtualenv (please do correct me if I'm wrong)
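
For reference, the pip constraints mechanism fdv is describing works roughly like this; the package name is hypothetical:

    # constraints.txt -- pins versions without requesting installation
    somepackage<2.0

    # the pin only takes effect if somepackage is pulled in, either
    # directly by requirements.txt or as a transitive dependency
    pip install -r requirements.txt -c constraints.txt
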
[19:54:00] <nanonyme> Is it possible to have --pre only for a single package?
[19:57:08] <ngoldbaum> i think "pip install --pre --no-deps <package>"
[19:58:01] <ngoldbaum> i don't think there's a way to only install the dependencies first
[19:58:12] <ngoldbaum> so i guess you could do "pip install <package>"
[19:58:16] <ngoldbaum> then "pip uninstall <package>"
[19:58:19] <ngoldbaum> then --no-deps --pre
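
Spelled out, the sequence ngoldbaum is suggesting would look something like this, with the package name hypothetical:

    # 1. install normally, pulling in release-only dependencies
    pip install somepackage
    # 2. remove just the package itself; its dependencies stay installed
    pip uninstall somepackage
    # 3. reinstall only the package, now allowing pre-releases
    pip install --pre --no-deps somepackage
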
[19:59:48] <nanonyme> Hm
[20:00:11] <nanonyme> What about if I'd want to pip wheel a package and its dependencies such that my package is a prerelease but dependencies should be release packages?
[20:00:25] <ngoldbaum> sorry, i don't understand
[20:00:32] <ngoldbaum> you want to bundle all the dependencies in a single wheel?
[20:00:50] <nanonyme> Just create wheels of everything into a directory that works as a package cache in TravisCI
[20:01:26] <nanonyme> I currently have the setup, but I just realized it doesn't work as expected for my own package for some reason. I think I broke things when I started using extras
[20:01:41] <ngoldbaum> and you want to create the wheels yourself and not rely on the upstream wheels on pypi?
[20:02:42] <nanonyme> TravisCI has cache that is faster than PyPI
[20:03:05] <nanonyme> I've been putting entire wheel build directory into cache and rotating over builds
[20:05:29] <nanonyme> Never mind. Looks like I got my scheme working again. pip ate the .[extras] syntax
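
A rough sketch of the wheel-cache scheme nanonyme describes; the directory and extras names are assumed for illustration. Quoting the requirement matters, since some shells treat the brackets in .[extras] as a glob pattern:

    # build wheels for the package (with its extras) and all dependencies
    # into a directory that the TravisCI cache preserves between builds
    pip wheel --wheel-dir=wheelhouse ".[extras]"
    # later installs can resolve from the cached directory instead of PyPI
    pip install --no-index --find-links=wheelhouse ".[extras]"
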
[20:13:18] <tos9> fdv: what functionality are you looking for
[20:13:33] <tos9> fdv: all 3 of those are certainly missing plenty of it :), but what thing are you trying to do
[20:47:17] <fdv> tos9: we've (they've) been using pipenv in the project that I joined a little while back, and now I needed to upgrade a package (pika), so I did pipenv install pika==0.12.0 and lo and behold, all dependencies of all other packages were also upgraded to the latest permitted version. Now, this is deployed on systems running python 3.5.2, and some dependencies required python 3.5.3, but they still got upgraded, and compatible packages were of course nowhere to be found in the package repository
[20:47:56] <ngoldbaum> i don't think it's possible in python to have more than one version of the same library in a dependency tree
[20:48:10] <fdv> no, which is also fine
[20:48:35] <fdv> python was set to 3.5 in the Pipfile, didn't make much of a difference
[20:49:47] <fdv> pipenv install even has an option (--selective-upgrade) that by the looks of it should do what I need (upgrade one package and its deps and no others), but that didn't work
[20:49:51] <ngoldbaum> rust does that, it's pretty neat
[20:50:09] <fdv> does what? upgrade one package?
[20:51:13] <ngoldbaum> can have liba depend on libb v1 but have libc depend on libb version 1.2
[20:51:16] <ngoldbaum> with no problems
[20:51:18] <ngoldbaum> cargo just handles it
[20:51:35] <ngoldbaum> and you can depend on both liba and libc
[20:51:40] <fdv> ah. well, if you can do that reliably, kudos, javascript deps are a mess because of that
[20:51:53] <ngoldbaum> yeah, cargo did it in a less shitty way than npm
[20:52:10] <fdv> :-)
[20:52:17] <fdv> that's a nice way of putting it
[20:52:42] <fdv> "the role of a developer is to make the use of a computer less shitty for the user" :-)
[20:53:00] <fdv> anyhow, after a bit of reading, it seems that this behaviour (pipenv upgrading the world) is by design, and won't change
[20:53:52] <fdv> and frankly, I disagree on that stance, but to each his own, by all means. you make a tool, you get to decide its philosophy
[20:54:39] <fdv> (I want to do controlled upgrades of the packages I need. If I need to upgrade or install a package to fix some issue, I want that to be as atomic as possible)
[20:55:40] <fdv> but anyhow, I set about looking for options, and poetry seemed nice. It also seemingly did a better job at dependency resolution, as it didn't upgrade deps to versions incompatible with my python version.
[20:56:02] <fdv> poetry, however, doesn't believe in system installs and will only play with virtualenvs
[20:56:39] <fdv> which, again, is fine, I didn't make the tool and I'm in no position to complain, but I need that functionality, so I keep looking
[20:58:15] <energizer> fdv: can you explain the situation that causes you to want to manage a system install?
[20:58:25] <fdv> enter pip-tools, which is the closest I've come to a good solution. only downside is that I end up with a finalized requirements.txt that includes dependencies with versions I can't use. pip has a solution for that (--constraint), but pip-tools doesn't seem to support it
[20:59:25] <ngoldbaum> messing with the system python installation is a great way to have a bad time
[20:59:26] <energizer> you can include constraints in your requirements.in, which pip-compile will process
[20:59:27] <ngoldbaum> esp on linux
[20:59:31] <fdv> energizer: we deploy to docker images, and those are used in various ways. installing the packages in a virtualenv would require specific knowledge about the platform you deploy on
[20:59:46] <ngoldbaum> ah in docker it's less bad kinda
[20:59:49] <fdv> energizer: can you do that? that's awesome!
[21:00:09] <ngoldbaum> although still easy to fuck up the OS if you aren't careful
[21:00:09] <energizer> yeah
[21:00:10] <fdv> ngoldbaum: yes, exactly, you solve this by having the system be easily rebuilt
[21:00:13] <energizer> i would think using virtualenv requires less knowledge about the platform
[21:00:31] <fdv> and you have a recipe to build the system
[21:01:11] <fdv> energizer: then, you'll need to mess with the deployed application's environment
[21:01:34] <fdv> if you have it installed system-wide, you just put the tape in and press play
[21:01:40] <fdv> so to speak
[21:01:59] <energizer> fdv: when do you mess with the environment?
[21:02:13] <fdv> activate?
[21:02:25] <energizer> ./venv/bin/python
[21:02:44] <fdv> still, it requires that knowledge
[21:02:53] <fdv> and that introduces a dependency
[21:03:02] <fdv> more moving parts
[21:03:05] <energizer> venv is stdlib
[21:03:54] <fdv> yes, but finding python in ./venv/bin/python is not standard (or is it? I do lack knowledge in many areas here)
[21:04:16] <energizer> it is very common, yes
[21:04:37] <fdv> yes, but you cannot expect to find it there
[21:04:49] <energizer> find what where?
[21:05:16] <fdv> to find python in $(pwd)/venv/bin
[21:06:10] <energizer> if you have a version of python that has venv, (this is typical), then `python3 -m venv venv && venv/bin/pip install . && venv/bin/python` will work
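
energizer's one-liner, expanded with comments; no activate step is needed when you invoke the venv's interpreter by path:

    python3 -m venv venv      # create the environment (venv is stdlib)
    venv/bin/pip install .    # install the project into it
    venv/bin/python           # run that environment's interpreter directly
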
[21:06:27] <fdv> my solution so far is using pip-compile to merge a couple of requirements files into one and then use pip install --constraint <constraint_file> --requirement merged_requirements.txt
[21:06:43] <fdv> energizer: sure, but that's a lot of moving parts
[21:06:54] <fdv> and they have to look the same everywhere, or you have more moving parts
[21:07:03] <energizer> it always works like that
[21:07:12] <ngoldbaum> it's one line in a shell script
[21:07:30] <energizer> i just wrote that from memory because i do it 30 times a day
[21:07:40] <fdv> yes, but #!/usr/bin/env python makes you system-agnostic
[21:07:55] <energizer> i'm not sure what that means
[21:08:16] <fdv> I see there's a gap here :)
[21:08:42] <fdv> it uses /usr/bin/env to find the first python in your path
[21:08:53] <fdv> and use that to run your script
[21:09:01] <energizer> what i described, or pex.rtfd.org, are standard practice, and they work for many people in many environments. you can try to blaze a trail, but you don't have to
[21:09:30] <fdv> https://stackoverflow.com/questions/2429511/why-do-people-write-the-usr-bin-env-python-shebang-on-the-first-line-of-a-pyt
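
The shebang fdv is referring to; /usr/bin/env looks python up on PATH, so the script is not tied to one install location:

    #!/usr/bin/env python
    # runs under whichever python is first on PATH -- a system
    # interpreter or an activated virtualenv's
    print("hello")
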
[21:09:33] <ngoldbaum> also, the best practice to always use a virtualenv comes from hard costs and many hours wasted on mistakes caused by not using it
[21:09:46] <ngoldbaum> you're kinda flying in the face of that accrued knowledge
[21:10:38] <fdv> ngoldbaum: I think the state of the debate around pipenv and other package managers speaks against that statement
[21:10:55] <fdv> there isn't one size that fits all
[21:11:11] <fdv> that's why you *have* a lot of exceptions and workarounds
[21:11:20] <energizer> fdv: if you can explain why what i'm describing doesn't work for you, we can talk about it
[21:11:28] <fdv> I'm not saying virtualenvs are bad, by all means, I use them almost all the time
[21:12:00] <energizer> don't forget :) https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence
[21:12:41] <fdv> and you think we're there?
[21:13:14] <fdv> It seems pretty obvious to me that the existing state of affairs is far too complex to be understood
[21:13:20] <energizer> nah
[21:13:37] <energizer> it's just hard to discover because of the noise
[21:14:35] <fdv> then you see something I don't
[21:16:08] <energizer> can you explain why my suggestions (venv/virtualenv or pex) don't work for you
[21:17:42] <energizer> (in combination with pip-compile, generally)
[21:22:09] <ThiefMaster> is it normal that `python -m venv` doesn't update pip when creating the virtualenv like `virtualenv` used to do?
[21:22:42] <energizer> ThiefMaster: which distro?
[21:22:48] <ThiefMaster> gentoo
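
ThiefMaster's question goes unanswered in the log; a common workaround, if the pip bundled by venv is stale, is to upgrade it right after creating the environment:

    python -m venv venv
    venv/bin/pip install --upgrade pip setuptools
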
[21:24:42] <fdv> energizer: currently, we set up the images in one place, and then run applications from elsewhere. as it is today, running the application requires no knowledge about where to find python or the necessary libraries, as they are in standard places
[21:25:08] <energizer> fdv: pex.rtfd.org might be what you want then
[21:25:15] <fdv> so the design is built around docker and that's currently where these dependencies are handled
[21:25:46] <fdv> yes, it *might*, I'm not saying it couldn't work, but it's not really what I want at the moment
[21:25:55] <fdv> it's a different design
[21:26:26] <fdv> we also need these applications to work in other contexts, like in dev, where we *do* use virtualenvs throughout
[21:27:04] <energizer> there's some writing about virtualenv+docker https://hynek.me/articles/virtualenv-lives/
[21:27:04] <fdv> and as I stated, I do lack some knowledge here, and perhaps I would have gone the pex route had I known about that before
[21:27:44] <energizer> there's plenty more options, https://sedimental.org/the_packaging_gradient.html but i believe most of them will be much less familiar to most python developers than virtualenv
[21:29:35] <fdv> and perhaps I still will (thanks for pointing it out), but I really have to do one thing at a time and the Newest Hottest Thing That Solves Everything has never really happened to do that for me (and I've tried my fair share)
[21:29:52] <fdv> so going for something I know and understand makes a lot of sense in the first place
[21:32:08] <fdv> energizer: but you said I could have constraints in requirements.in, do you know if that is documented anywhere?
[21:32:35] <energizer> fdv: it's the same constraint format as in install_requires=
[21:38:58] <fdv> energizer: thanks. a quick round of googling didn't enlighten me much, however. it looks more or less like install_requires isn't that different from just specifying dependencies in requirements.txt?
[21:39:11] <energizer> fdv: right
[21:41:09] <fdv> but with pip's constraint file, I can set a constraint that (as is part of my current issue) async-timeout should be less than 3.0, but I don't have that dependency directly, it's a transitive one from aiohttp (which I do use)
[21:41:54] <fdv> that's different, isn't it?
[21:44:12] <energizer> fdv: i'm not familiar with that, sorry
[21:44:21] <energizer> oh, misread
[21:45:42] <fdv> the packages defined in the constraint file aren't installed just because they are listed there, but *if* they are installed, either because they are mentioned in requirements.txt or are dependencies of those, pip will adhere to the version requirements from the constraint file
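
A concrete sketch of what fdv describes, using the async-timeout case from above:

    # constraints.txt
    # async-timeout is not a direct dependency; this pin only takes
    # effect because aiohttp pulls it in transitively
    async-timeout<3.0

    pip install aiohttp -c constraints.txt
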
[21:45:45] <energizer> if i want to set a constraint, normally i'd add it to requirements.in that way, but maybe there's a way around doing that. i'm not sure
[21:45:58] <fdv> OK, I see
[21:46:15] <fdv> that's the missing part for me to be happy with pip-tools for this task :)
[21:47:01] <fdv> it would be awesome if pip-compile could take in a constraint file as well (as pip can) to create a final requirements.txt that is actually authoritative
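
fdv's interim workaround from earlier in the discussion, written out; the file names are assumed. pip-compile merges the input files into one pinned output, and pip applies the constraint file at install time:

    # merge several requirement sources into a single pinned file
    pip-compile requirements-app.in requirements-dev.in \
        --output-file merged_requirements.txt
    # apply the constraints when installing, since the compiled
    # output alone does not capture them
    pip install --constraint constraints.txt --requirement merged_requirements.txt
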
[21:50:39] <prsn> hi! can anyone explain this output to me? https://paste.pound-python.org/show/eiQYp0oMifUuhLYV09jU/
[21:50:55] <prsn> (ignore the xs in the protocol, that was just to get the paste site to let it through)
[21:51:22] <prsn> is this something as stupid as it looks, where pip thinks 11 is less than 9 because it's comparing the first digit only?
[21:53:44] <energizer> prsn: try --no-use-cache or whatever it's called
[21:54:37] <energizer> prsn: also, i can't reproduce that, maybe upgrade pip
[21:55:22] <prsn> it's 18.1
[21:56:08] <prsn> also, i think i /want/ it to use the cached version (which says 1.8)
[21:56:13] <prsn> i guess i'll try it though
[21:56:20] <prsn> uh
[21:56:23] <prsn> when i uninstalled
[21:56:30] <prsn> "Successfully uninstalled Django-1.8.19"
[21:56:37] <prsn> so now i'm pretty confused
[21:57:09] <prsn> yeah ok i think it's just a text bug in the reporting
[21:57:14] <prsn> because i actually did get 1.8
[23:40:13] <mithro> So it turns out the issue was that if you have newlines in the description value, things break
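
The description argument to setup() maps to the one-line Summary metadata field; embedded newlines can corrupt the generated metadata, which would also explain the MIME headers leaking into the rendered page that ronny noticed earlier. A hedged sketch of the fix:

    from setuptools import setup

    setup(
        # description becomes the Summary field and must stay on one line
        description="A single-line summary of the package",
        # multi-line text belongs in long_description instead
        long_description=open("README.md").read(),
        long_description_content_type="text/markdown",
    )
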