PMXBOT Log file Viewer


#pypa logs for Friday the 6th of February, 2015

[01:08:21] <kevc> ronny: dependencies
[01:08:53] <kevc> which in theory should be solved by packages listing their dependencies correctly
[01:09:35] <kevc> in practice that never works, there's always broken packages. Trying to get all of those resolved would be a losing battle
[08:05:23] <ronny> kevc: why not put fixed versions on an internal devpi and send patches upstream?
[08:05:32] <ronny> (at least that's what I do on my laptop)
[08:07:23] <kevc> ronny: that's pretty mafan
[08:07:44] <ronny> mafan?
[08:07:46] <kevc> annoying
[08:08:15] <ronny> kevc: well, the other way around is troublesome forever
[08:08:33] <ronny> kevc: the only way to fix the world is fixing it one piece at a time
[08:08:53] <kevc> I think the way to fix it would be to have CI on PyPI
[08:09:02] <kevc> but out of my scope :)
[08:14:27] <ronny> kevc: thats in part planned
[08:14:37] <ronny> kevc: stuff like 'installs in clean env'
[08:14:54] <ronny> kevc: long term we are supposed to run on wheels anyway
[08:23:05] <malinoff_> hi, can i build a wheel with dynamic requirements? e.g. i want to install extra dependencies only on python 2.6
[08:23:39] <tomprince> malinoff_: That is supported, I think. I don't know the details, though.
[08:24:20] <malinoff_> tomprince, can you point me to the docs? Right now the list of requirements depends on the python i used to build the wheel
[08:27:11] <mgedmin> there's a (horrible) syntax using extras_require={':somecomplicatedexpression': [...]}, supported by pip starting with version 6
[08:27:18] <mgedmin> I can't find where it's documented
[08:27:20] <mgedmin> some pep probably
[08:27:51] <mgedmin> then there's the old and hacky method of install_requires=[...] if sys.version_info[:2] == (2, 6) else [...]
[08:28:06] <mgedmin> (can't use it with universal wheel obviously)
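The two approaches mgedmin describes can be sketched side by side in a setup.py. All package names here are illustrative, and the marker-keyed extras_require spelling is the pip 6+ form from the discussion above, not a verbatim quote of any project's setup:

```python
from setuptools import setup
import sys

setup(
    name="example",
    version="1.0",
    # Old, hacky way: evaluated at *build* time, so a wheel built on 2.7
    # bakes in the 2.7 answer -- exactly the problem malinoff_ ran into.
    # install_requires=["six"] + (["argparse"] if sys.version_info[:2] == (2, 6) else []),

    # Marker-keyed extras_require: the condition travels with the wheel's
    # metadata and is evaluated at *install* time by pip >= 6.
    install_requires=["six"],
    extras_require={
        ':python_version == "2.6"': ["argparse", "ordereddict"],
    },
)
```

The empty extra name before the colon is what makes the marker apply unconditionally rather than only when an extra is requested.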
[08:28:20] <malinoff_> mgedmin, yes, i use the latter, and that's the problem
[08:32:31] <malinoff_> So wheels can't dynamically change their requirements on installation phase?
[08:35:58] <tomprince> They can. I don't know where it is documented, unfortunately.
[08:38:50] <ronny> malinoff_: you use environment markers to do it
[08:39:07] <ronny> malinoff_: and there is one for python versions that can use comparisons
[08:40:18] <malinoff_> ronny, awesome! Thanks, it's quite difficult to google that
[08:40:48] <ronny> im not quite sure how to use them myself
[08:41:03] <malinoff_> https://wheel.readthedocs.org/en/latest/#defining-conditional-dependencies
[08:41:17] <malinoff_> i believe they should be installed automatically
[08:47:14] <malinoff_> i'm also curious how would this work with pypy
[08:51:03] <malinoff_> ronny, worked like a charm!
[08:51:12] <ronny> \o/
[08:51:27] <malinoff_> ronny, many thanks
[08:51:49] <malinoff_> it is even more powerful than i expected
[15:28:25] <DanielHolth> So pip still logs to ~/.pip by default but gets its config from ~/.config/pip/pip.conf and also looks in ~/.pip
[15:28:48] <dstufft> I don't think it logs to ~/.pip by default
[15:29:10] <apollo13> I don't have any .pip either
[15:29:23] <DanielHolth> hm, I'm getting ~/.pip/pip.log
[15:30:09] <DanielHolth> I don't mind it, but the ~/.config/pip seems a little redundant. Not wrong but a little confusing.
[15:30:42] <apollo13> that's standard layout for any up2date linux program
[15:31:01] <DanielHolth> Do we have a "pip wheel --dont-build-already-installed-dependencies" option yet?
[15:31:39] <dstufft> no, and it doesn't seem generally useful
[15:32:13] <DanielHolth> running into errors with setup.py depending on in-house packages. Now trying to just use devpi instead.
[15:32:31] <DanielHolth> "use pre-built wheel if available" would also work
[15:35:03] <DanielHolth> So when building all the dependent wheels for an in-house package pip errors out on my handful of installed-for-development in house packages that obviously can't be found on pypi.
[15:47:22] <dstufft> DanielHolth: pip wheel will download a wheel if one is available, including from a --find-links
[15:51:21] <DanielHolth> dstufft even if the named package does not exist on pypi?
[15:52:10] <dstufft> might need to include your local devpi in --extra-index-url, but yea
[15:52:40] <DanielHolth> I think that will wind up being the solution, just publishing my internal packages to my shiny new internal index.
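Put together, dstufft's suggestion looks roughly like the following. The devpi host and index path are placeholders for DanielHolth's internal setup; the flags themselves are real pip options:

```shell
# Build/collect wheels for a package and its dependencies, preferring
# prebuilt wheels from a local wheelhouse and an internal devpi index
# ("internal.example" and the index path are placeholders):
pip wheel --wheel-dir ./wheelhouse \
    --find-links ./wheelhouse \
    --extra-index-url https://internal.example/root/dev/+simple/ \
    mypackage
```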
[17:07:29] <ysionneau> Hi, how can I specify the optional requirements I want to install when doing "pip3 install url"? I can only get it to work with "edit/develop" mode with "pip3 install -e .[option]"
[17:07:54] <carljm> ysionneau: i'm not sure that pip supports extras syntax with a url.
[17:08:12] <ysionneau> I tried pip3 install git+https://....../repo.git[option] or with a space but it thinks option is part of the url :/
[17:08:22] <dstufft> it might work if you add #egg=name[option]
[17:08:29] <ysionneau> oh !
[17:08:59] <ysionneau> dstufft: it DOES :)
[17:09:39] <ysionneau> thank you very much
[17:09:46] <dstufft> no problem :)
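The trick dstufft suggests, spelled out (repo URL and extra name are placeholders; the #egg fragment names the project so pip can attach the extra instead of treating the brackets as part of the URL):

```shell
# Fails: the brackets are parsed as part of the URL
# pip3 install git+https://example.com/user/repo.git[gui]

# Works: name the project in the #egg fragment, then add the extra
pip3 install "git+https://example.com/user/repo.git#egg=repo[gui]"
```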
[17:11:32] <ysionneau> it's too bad there is no way to tell setup.py (setuptools) to use some extra requirement
[17:11:53] <ysionneau> like ./setup.py install --extras=something or whatever the syntax would be
[17:13:17] <ysionneau> (or maybe there is?)
[17:13:35] <carljm> dstufft: nice! good call :-)
[17:13:48] <ysionneau> I declared some requirements as optional behind the "GUI" keyname, but it seems setup.py only installs without any extra requirement
[17:14:14] <dstufft> ysionneau: I'm not aware of anything that lets setup.py install do that
[17:14:29] <ysionneau> ok :'
[17:14:32] <dstufft> I would prefer it if people just forgot setup.py was an executable script and just treated it as a packaging metadata file
[17:14:55] <ysionneau> humm I see the spirit
[17:16:01] <doismellburning> dstufft: <3
[17:56:16] <ronny> dstufft: ping?
[20:53:52] <agronholm> dstufft: have you seen the kinds of things people put in setup.py, especially in projects that have C extensions in them? treating it like a packaging metadata file is wishful thinking.
[21:00:22] <tomprince> agronholm: That shouldn't matter much to somebody installing the package, though?
[21:01:50] <agronholm> tomprince: with source tarballs, it matters a lot
[21:02:33] <agronholm> tomprince: because that code is crucial for compiling the C extensions
[21:03:23] <tomprince> agronholm: dstufft's point was that people shouldn't be running 'python setup.py <stuff>', but instead using pip.
[21:03:43] <tomprince> agronholm: Certainly it is critical, but that doesn't mean that people should be running it directly.
[21:03:45] <dstufft> yea what tomprince said
[21:03:46] <dstufft> ronny: pong
[21:04:02] <agronholm> I guess I missed the point then
[21:04:27] <tomprince> The fact that it can be run as a script is an implementation detail.
[21:04:55] <agronholm> that metadata thinking mistake was made with distutils2 though. good riddance :)
[21:05:16] <agronholm> it was thought that a static metadata file would be sufficient as a replacement for setup.py
[21:05:20] <dstufft> sdist 2.0 won't have a script like setup.py that people can run
[21:05:26] <ionelmc> distutils didn't change the semantics
[21:05:31] <ionelmc> distutils2
[21:05:41] <dstufft> it'll have the ability to specify whatever build system you want to use though
[21:05:41] <_habnabit> dstufft, is that going to be in setuptools, or what?
[21:05:50] <ionelmc> it was the same broken metadata, and you'd still have to hardcode the list of packages in there
[21:05:52] <ionelmc> and crap like that
[21:05:53] <dstufft> _habnabit: it's not even a thing yet, but probably yea
[21:06:12] <agronholm> what will replace it then?
[21:07:19] <dstufft> the idea is that setuptools (or any "package builder", setuptools won't be special in a sdist 2.0 sdist), you'll get some static metadata which defines things like dependencies, and which defines an entry point (not setuptools entrypoints, just an entrypoint, probably a callable) which can be invoked as the build system, and people can add arbitrary metadata as well for their build system entrypoint
[21:07:40] <_habnabit> neat
[21:07:49] <agronholm> sounds reasonable so far
[21:08:03] <dstufft> so instead of having people cargo cult random shit around in their setup.py, they'll be able to release a build system on PyPI (or just include it as part of their package if they want) and pip will just invoke that build system to build a wheel, and then install from the wheel
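The design dstufft sketches here later took shape as PEP 517. A purely hypothetical illustration of "static metadata naming a build entrypoint" might look like this; none of these names are a real pip or setuptools API:

```python
import importlib

# Hypothetical static metadata shipped inside the sdist: plain data
# naming the build dependencies and a single build entrypoint
# (spelled "module:callable").
metadata = {
    "name": "example",
    "build_dependencies": ["buildsys"],
    "build_entrypoint": "buildsys.backend:build_wheel",
}

def invoke_build(meta, wheel_dir):
    """Resolve the declared build entrypoint and call it to produce a wheel.

    An installer like pip would do this instead of executing setup.py:
    import the named module, look up the callable, invoke it.
    """
    module_name, func_name = meta["build_entrypoint"].split(":")
    backend = importlib.import_module(module_name)
    return getattr(backend, func_name)(wheel_dir)
```

The point of the indirection is that the installer never runs project-authored script code directly; it only calls a declared, importable hook.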
[21:08:31] <agronholm> I'm starting to like it
[21:08:50] <agronholm> speaking of wheels, I've been doing some digging
[21:08:51] <_habnabit> dstufft, how do you indicate what third-party packages you depend on during setup, then?
[21:08:56] <agronholm> regarding linux binary wheels
[21:09:06] <ionelmc> i wonder how many competing build systems we gonna have
[21:09:23] <dstufft> _habnabit: the static metadata will include all of the dependencies, with new categories of dependencies, like "build dependencies"
[21:09:59] <_habnabit> dstufft, i see
[21:10:08] <agronholm> I've come up with two compatibility factors: LSB version and GLIBC version
[21:10:09] <_habnabit> dstufft, this will be setup.cfg?
[21:10:15] <dstufft> the static metadata isn't something that's designed to be user facing though, it's just what happens to get put inside of the .tar.gz, it's expected that people will use some sort of author centric build tool to build their sdists (for example, setuptools, but someone could create their own that wasn't setuptools)
[21:11:18] <dstufft> we're basically designing a format, and what people use to create or consume that format doesn't matter to us, we just define the format (and of course, pip and setuptools will both be able to consume and produce said formats, so people will be able to get it for free with their existing setup.py's, but if they want something better they can create it)
[21:11:33] <_habnabit> dstufft, i hope this is going to work for python 2, haha
[21:12:01] <dstufft> agronholm: I'm afraid I'm probably the worst person to really judge stuff about ABI compatibility, I only kinda know what I'm talking about :(
[21:12:10] <_habnabit> agronholm, what about people who don't use glibc
[21:12:17] <agronholm> _habnabit: they're SOL :)
[21:12:28] <agronholm> but afaik all linux distros use GLIBC by default
[21:13:04] <dstufft> _habnabit: yes
[21:13:08] <agronholm> the basic rule of thumb seems to be this: you can compile everything else statically except for libc
[21:13:34] <agronholm> but even then you need to take into account the maximum required version of glibc
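One common way to check the "maximum required version of glibc" agronholm mentions is to list the versioned GLIBC symbols a compiled extension references; the highest version is the oldest glibc the binary can run on. The filename here is a placeholder for any compiled extension module:

```shell
# List the GLIBC symbol versions a binary depends on; the highest one
# sets the minimum glibc the wheel needs ("ext.so" is a placeholder):
objdump -T ext.so | grep -o 'GLIBC_[0-9.]*' | sort -uV
```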
[21:13:34] <dstufft> agronholm: static compiling is something that I think would work for generic wheels yea, from my limited understanding
[21:14:04] <agronholm> dstufft: the produced binaries would be fairly large though
[21:14:12] <agronholm> it would work, still
[21:14:36] <ionelmc> why can't you statically compile libc?
[21:14:49] <tomprince> Do most packages do that, though? And do people want binaries that do that?
[21:14:53] <agronholm> ionelmc: it has something to do with loading libraries dynamically
[21:15:16] <agronholm> tomprince: you need the same binaries to run on multiple linux distros, so yeah
[21:15:19] <ionelmc> agronholm: is there a more elaborate explanation?
[21:15:35] <agronholm> yes, hang on
[21:15:56] <agronholm> http://insanecoding.blogspot.in/2012/07/creating-portable-linux-binaries.html
[21:16:37] <agronholm> this relates to application binaries though -- I'm not sure what problems there will be when doing the same with dynamic libraries
[21:16:42] <dstufft> _habnabit: part of the reason why PEP 453 went to great lengths to make sure pip wasn't included in the stdlib and instead we just essentially made get-pip.py part of the stdlib (sort of kinda) was because we decided that tying improvements to the packaging system to the Python release schedule wasn't a workable solution
[21:16:59] <tomprince> agronholm: Most existing packages don't statically compile everything. And I (at least) don't want everything statically compiled. At least, not all the time.
[21:17:09] <_habnabit> dstufft, well yeah, i figured that some sensible people would know that, but
[21:17:22] <agronholm> tomprince: do you really prefer spending 30-40 minutes compiling something like pyside every time you install it?
[21:17:43] <agronholm> or would you like precompiled binaries, even if they're "fatter" than locally compiled ones?
[21:17:47] <tomprince> That isn't what I said. I'm certainly in favor of binary wheels.
[21:18:12] <dstufft> tomprince: agronholm honestly I think the answer is both things, I think we can do static compiled binaries for "generic linux" wheels, and then Distro + Distro Version specific binaries for dynamically linked wheels
[21:18:22] <dstufft> and authors can decide which ones they want to upload
[21:18:31] <dstufft> or they can upload both!
[21:18:50] <dstufft> and pip just needs the ability to pick the most specific wheel
[21:18:58] <agronholm> dstufft: sure, but the number of variations will be overwhelming if you want to support every distro, version and 32/64 bit...
[21:19:18] <dstufft> agronholm: so maybe you upload a statically compiled wheel and pick a few popular distros for dynamically compiled
[21:19:25] <dstufft> and pip picks the most specific one
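A toy sketch of "pip picks the most specific one". This is not pip's real tag-resolution logic, and the distro-specific platform tag is hypothetical (such tags did not exist in 2015's wheel spec); it only illustrates the preference ordering dstufft describes:

```python
# Platform tags the running interpreter could accept, ordered from most
# to least specific. "ubuntu_14_04_x86_64" is a made-up distro tag.
SUPPORTED_TAGS = ["ubuntu_14_04_x86_64", "linux_x86_64", "any"]

def pick_wheel(candidates):
    """Return the candidate wheel matching the most specific supported tag."""
    for tag in SUPPORTED_TAGS:          # most specific first
        for wheel in candidates:
            if wheel.endswith("-" + tag + ".whl"):
                return wheel
    return None                          # nothing compatible
```

So if an author uploads both a statically linked generic wheel and a dynamically linked distro-specific one, an installer using this rule takes the distro-specific build when it applies and falls back to the generic one otherwise.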
[21:19:54] <dstufft> gotta go though!
[21:20:03] <tomprince> Or, the PSF or your distro provides a service for building them.
[21:22:04] <tomprince> agronholm: I'm not saying that statically compiling things is categorically bad. I'm just saying there are trade-offs in doing that, and the some people may want to make different ones.
[21:22:35] <tomprince> Most distros, for distro packages, aren't going to accept statically linking. So the build process will need to support that, for example.
[21:24:13] <agronholm> tomprince: I'm not saying it shouldn't
[21:24:27] <agronholm> locally compiling is usually the best option if you can afford the time
[21:25:50] <agronholm> but that requires you to have the proper -dev packages installed system wide
[21:25:58] <agronholm> and figuring out the correct ones can be a PITA
[21:28:37] <_habnabit> agronholm, well, and presumably you don't want -dev packages installed on prod webservers
[21:29:31] <agronholm> _habnabit: that would be preferable, although the -dev packages don't really do any harm either
[21:33:02] <tomprince> Having them is a possible attack vector.
[21:33:44] <ionelmc> agronholm: it's a very interesting lib but it still doesn't explain what happens when you statically link libc
[21:33:52] <ionelmc> s/lib/read/
[21:34:54] <tomprince> ionelmc: nss depends on dynamically linked libc or so.
[21:35:06] <ionelmc> nss?
[21:35:31] <agronholm> ionelmc: bad things will happen, I had another article open sometime before this that explained it all
[21:36:25] <ionelmc> i don't say i don't believe you, i just wanna know what's going on
[21:36:42] <agronholm> "Now you might be thinking, hey what about statically linking (E)GLIBC? Let me warn you that doing so is a bad idea. Some features in (E)GLIBC will only work if the statically linked (E)GLIBC is the exact same version of (E)GLIBC installed on the system, making statically linking pointless, if not downright problematic. (E)GLIBC's libdl is quite notable in this regard, as well as several
[21:36:44] <ionelmc> i do wanna avoid finding out the hard way :)
[21:37:05] <agronholm> networking functions. (E)GLIBC is also licensed under LGPL. Which essentially means that if you give out the source to your application, then in most cases you can distribute statically linked binaries with it, but otherwise, not. Also, since 99% of the functions are marked as requiring extremely old versions of (E)GLIBC, statically linking is hardly necessary in most cases."
[21:37:28] <ionelmc> i get that but they have a linking exception don't they?
[21:37:50] <agronholm> I'm mostly concerned about the technical issues
[21:38:12] <agronholm> bbl ->
[21:38:15] <ionelmc> well yeah, but the article is vague :)