PMXBOT Log file Viewer


#pypa logs for Monday the 12th of January, 2015

[00:15:19] <prologic> so dstufft until the above issue is resolved somehow pinning to pip==6.0.2 and virtualenv==12.0.2 (uses pip==6.0.2) seems to work okay for me
[00:30:37] <prologic> https://gist.github.com/therealprologic/9a5446d1c472a5bd6819
[00:30:39] <prologic> so far so good :)
[00:30:56] <prologic> Although the time to install was ~9s pre pip 6.x
[00:30:57] <prologic> oh well :)
[00:41:24] <dstufft> prologic: is this hitting a devpi server?
[00:44:06] <prologic> no
[00:44:08] <prologic> --no-index
[00:44:16] <prologic> no devpi-server inside our VM(s)
[00:45:04] <dstufft> interesting
[00:45:26] <dstufft> prologic: you're using --find-links?
[00:45:44] <prologic> yes --find-links = /var/lib/pip/sources /var/lib/pip/wheels
[00:45:53] <prologic> other than that, nothing else
[00:46:51] <dstufft> I wonder if it's because we're not caching the listdir in memory anymore
[00:48:22] <prologic> ahh
[00:48:32] <prologic> yeah if the no. of entries in the dir(s) is significant
[00:48:34] <prologic> this could add up
[00:48:39] <prologic> over a number of dependencies
[00:48:45] <prologic> ~9s -> ~16s
[00:48:52] <prologic> almost a double hit in performance
[00:49:28] <dstufft> it used to be we had no persistent cache on the HTTP access (other than the download cache) but we did have an unconditional in memory cache
[00:49:47] <dstufft> I removed it when I added the new persistent cache
[00:49:55] <dstufft> might need to add something back
[00:50:08] <prologic> possibly
[00:50:14] <prologic> hitting the disk every time is going to cost :)
[00:50:29] <prologic> especially one as abstracted as a vmdk :)
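dstufft's hypothesis above — that pip 6.x re-reads each --find-links directory from disk for every lookup instead of caching it in memory — can be sketched as a process-level cache. This is only an illustration of the idea, not pip's code; the function name and the use of functools.lru_cache are assumptions:

```python
import functools
import os

@functools.lru_cache(maxsize=None)
def cached_listdir(path):
    """Read a --find-links directory once per process; later lookups
    for other dependencies reuse the in-memory result instead of
    hitting a slow disk (e.g. a vmdk) again."""
    return tuple(sorted(os.listdir(path)))
```

With a cache like this, resolving N dependencies costs one listdir per directory instead of N — roughly the difference behind the ~9s vs ~16s timings prologic measured.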
[01:00:38] <prologic> https://gist.github.com/therealprologic/94dbaa9285016917c4dd <-- trying to get freezegun==0.1.19 fails for me
[01:00:47] <prologic> that requirement on python-dateutil looks kind of weird too
[01:02:20] <dstufft> prologic: https://github.com/spulec/freezegun/commit/15c7cc3cf1599efa65896e7138f3015e68ae5998
[01:03:50] <prologic> ahhh huh!
[01:03:56] <prologic> I knew that requirement was stupid
[01:03:56] <prologic> :)
[01:05:46] <prologic> now I need to find the released version this was fixed in
[01:06:39] <prologic> https://github.com/spulec/freezegun/blob/0.2.3/setup.py
[01:06:42] <prologic> apparently 0.2.3
[01:27:18] <ronny> dstufft: are there any docs on when and how RECORD, installed-files.txt and entry point maps are generated? I'm trying to port pipsi to Windows and I keep running into surprises with paths
[01:27:54] <ronny> (for example shortened filenames in the installed-files list)
[01:28:33] <ronny> hmm, also gn8 (good night)
[12:23:09] <doismellburning> is there a way of running `pip search` that'll give me the latest package version?
[13:27:35] <vBm> Is this issue with pip or something else is going on -> https://dpaste.de/6O0k
[13:28:10] <vBm> even though the correct version is installed, it's not reported as such.
[13:29:41] <xafer> vBm, what does pip list return?
[13:30:16] <vBm> xafer, youtube-dl (2015.1.1)
[13:30:34] <vBm> means they packaged it wrongly ?
[13:32:07] <xafer> you don't seem to be in a virtualenv?
[13:32:29] <vBm> no, i'm very very new to python so i haven't used virtualenv yet
[13:33:19] <xafer> Because inside a virtualenv I didn't reproduce the issue
[13:34:50] <xafer> try "pip uninstall youtube-dl"
[13:35:20] <vBm> Uninstalling youtube-dl-2015.1.1:
[13:35:20] <vBm> c:\python27\scripts\youtube-dl.exe
[13:36:17] <xafer> and pip list again, I guess it wont be uninstalled
[13:36:31] <vBm> yeah, it's not uninstalled
[13:37:04] <vBm> guess it's an issue at my end then xD
[13:38:19] <xafer> nope I'd say it comes from pip (I have the same here)
[13:38:41] <vBm> oh, good then xD
[14:01:43] <ionelmc> vBm: maybe you had the previous version installed as an egg, and it's trampling over the new one
[14:02:13] <ionelmc> best to clean up manually if pip uninstall can't fix it (usually it does, if you run it a few times)
[14:06:32] <xafer> I'd say it's a bug in pip
[14:07:05] <xafer> which has issues removing installs with non-PEP 440 versions
[14:07:37] <xafer> youtube-dl is installed in youtube_dl-2015.01.11.dist-info but pip will only search for it in youtube_dl-2015.1.11.dist-info
[14:08:12] <xafer> it is somewhat similar to https://github.com/pypa/pip/issues/2293
[14:12:01] <xafer> I'll try to update my patch for dist also
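The mismatch xafer pins down comes from PEP 440 normalization: leading zeros are dropped from each numeric release segment, so the version recorded in youtube_dl-2015.01.11.dist-info normalizes to 2015.1.11 and the on-disk directory is no longer found. A minimal sketch of that single rule (real PEP 440 normalization covers many more cases — epochs, pre/post/dev releases, local versions):

```python
def normalize_release(version):
    """Apply one PEP 440 normalization rule: strip leading zeros from
    each dot-separated numeric segment of a release version."""
    return ".".join(str(int(part)) for part in version.split("."))
```

`normalize_release("2015.01.11")` yields "2015.1.11" — the name pip searches for, but not the dist-info directory actually on disk.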
[15:27:36] <vBm> xafer / ionelmc ... I've tried searching but can't find it ... is there any way to purge pip list? ... it still includes stuff I've removed via pip uninstall
[15:28:00] <ionelmc> vBm: run uninstall again
[15:29:07] <vBm> i did several times
[15:29:44] <ionelmc> vBm: so it reports it's not installed?
[15:29:54] <vBm> Can't uninstall 'subliminal'. No files were found to uninstall.
[15:30:05] <ionelmc> what do you get in pip list?
[15:30:05] <vBm> yet in pip list i still see subliminal (0.8.0.dev0)
[15:30:14] <ionelmc> grrrr
[15:30:17] <xafer> vBm, run import pkg_resources;[dist for dist in pkg_resources.working_set if dist.project_name=='youtube-dl'][0].egg_info
[15:30:18] <ionelmc> where is it?
[15:30:31] <vBm> xafer, i'm on winblows :D
[15:30:38] <xafer> launch python
[15:30:42] <ionelmc> vBm: search for the file :-)
[15:30:46] <xafer> and run: import pkg_resources; [dist for dist in pkg_resources.working_set if dist.project_name == "youtube-dl"][0].egg_info
[15:30:47] <ionelmc> and kill it manually
[15:31:07] <vBm> 'c:\\python27\\lib\\site-packages\\subliminal-0.8.0_dev-py2.7.egg-info'
[15:31:09] <vBm> heh
[15:31:12] <xafer> O_o
[15:31:30] <ionelmc> vBm: how are you running uninstall?
[15:31:56] <vBm> pip uninstall subliminal -y
[15:32:00] <xafer> well look inside c:\\python27\\lib\\site-packages\\ and search for something named like youtube_dl :o
[15:32:08] <vBm> yeah yeah ... found it
[15:32:15] <vBm> for youtube-dl i saw two dirs
[15:32:59] <xafer> youtube_dl and one like youtube_info-blabla.dist-info ? remove them
[15:33:13] <ionelmc> since you can have multiple eggs for same package installed, so uninstall only removes the first one it finds, peculiar behavior
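xafer's one-liner locates the metadata directory of a single matching distribution; a runnable version of the same diagnosis is sketched below, using importlib.metadata (the modern stdlib successor — the pkg_resources API in the log was the 2015-era equivalent):

```python
from importlib import metadata

def find_installed(project_name):
    """Return (name, version) for every installed distribution whose
    normalized name matches; more than one entry is the 'multiple
    eggs for the same package' situation ionelmc describes."""
    wanted = project_name.lower().replace("-", "_")
    return [
        (dist.metadata["Name"], dist.version)
        for dist in metadata.distributions()
        if (dist.metadata["Name"] or "").lower().replace("-", "_") == wanted
    ]
```

If this returns two entries for one project, `pip uninstall` run once will only remove one of them — hence the repeated runs and the manual cleanup above.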
[15:33:32] <vBm> cleaned it up and everything is back to as it should be
[15:33:37] <vBm> thank you very much guys
[15:36:12] <xafer> glad we could help
[15:36:44] <vBm> I'm very new to Python even though I poke around Sphinx a long time ago
[15:36:51] <vBm> *poked
[15:37:10] <vBm> thanks again
[16:09:31] <tos9> is there a decent library form of building wheels in `wheel`
[16:10:03] <ionelmc> tos9: can you reprase?
[16:10:07] <ionelmc> rephrase
[16:10:15] <tos9> Hm I guess I'd need to combine it with parse_requirements
[16:10:29] <tos9> ionelmc: "What function(s) do I call to create a bundle of wheels out of a requirements file"
[16:10:54] <ionelmc> tos9: aaah, you want the old pip bundle functionality right?
[16:11:33] <ionelmc> https://pip.pypa.io/en/latest/user_guide.html?highlight=bundle#create-an-installation-bundle-with-compiled-dependencies
[16:11:47] <tos9> ionelmc: Yeah, except I want to do it from python
[16:12:31] <ionelmc> tos9: not sure about pip's api
[16:12:32] <tos9> I mean, I'm going to have to subprocess out to run the pip install anyhow, but :/
[16:12:52] <ionelmc> i'm pretty sure pip does weird things with the logging config so you can't just run it in the same process
[16:13:35] <ionelmc> unless you use some of the more 'inner' api
[16:13:38] <tos9> ionelmc: Yeah, I know that I don't want to run anything pip related in-process... I guess since pip is doing the downloading I might as well just shut up and fork
[16:13:57] <ionelmc> maybe dstufft would know what's best
[16:15:19] <xafer> that's what pip does to build a wheel: https://github.com/pypa/pip/blob/develop/pip/wheel.py#L551-L570
[16:15:43] <xafer> so basically call setup.py bdist_wheel, via subprocess
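That recipe — run setup.py bdist_wheel in the unpacked sdist via a subprocess — can be sketched as below. The helper names are invented; pip's actual implementation at the linked wheel.py also handles logging, environment setup, and error reporting:

```python
import subprocess
import sys

def bdist_wheel_cmd(wheel_dir):
    """The argv that pip (circa 2015) effectively ran for each sdist."""
    return [sys.executable, "setup.py", "bdist_wheel", "-d", wheel_dir]

def build_wheel(sdist_dir, wheel_dir):
    # Run inside the unpacked sdist directory, as pip does.
    subprocess.check_call(bdist_wheel_cmd(wheel_dir), cwd=sdist_dir)
```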
[16:16:35] <pf_moore> tos9: it's actually a surprisingly ill-defined problem (says me, who keeps trying to do it :-))
[16:16:41] <pf_moore> the steps are basically
[16:16:51] <pf_moore> 1. from a requirement, find a sdist
[16:17:04] <pf_moore> 2. run setup.py bdist_wheel on that sdist
[16:17:26] <pf_moore> So far so good
[16:17:27] <DanielHolth> hi
[16:17:48] <tos9> pf_moore: If we're breaking down the steps and doing it ourselves, theoretically 1 and 2 have a 0, which is "check to see if a wheel exists already" yeah?
[16:18:06] <tos9> Well, does pip wheel do that part?
[16:18:08] <tos9> DanielHolth: Hi :)
[16:18:17] <pf_moore> tos9: well, yeah, but what if there's a wheel for 1.0 but a sdist for 1.1?
[16:18:27] <tos9> Right yeah one that satisfies the requirement
[16:18:42] <pf_moore> But yes, pip wheel does all that for you. It also downloads a wheel if there's one.
[16:19:10] <pf_moore> The problem is that a foo 1.0 wheel satisfies foo, but building from a foo 1.1 sdist is better
[16:19:22] <tos9> Ah I see you mean where there isn't one for latest
[16:19:25] <pf_moore> That version checking etc is part of what step 1 does.
[16:19:44] <pf_moore> And pulling it out without knowing what you're doing it for is *hard*
[16:20:16] <pf_moore> Also, if your requirement is (say) a git URL, you don't get a sdist, you get a checkout and do setup.py bdist_wheel in place
[16:20:49] <pf_moore> Honestly, you're probably better off doing "pip wheel -r requirements.txt" in a subprocess.
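pf_moore's recommendation — drive pip wheel from a subprocess rather than through pip's internal (and logging-hostile) API — as a minimal wrapper. The pip flags are real options; the helper names and defaults are invented:

```python
import subprocess
import sys

def wheel_cmd(requirements_file, wheel_dir="wheelhouse"):
    """Argv for building wheels for an entire requirements file."""
    return [sys.executable, "-m", "pip", "wheel",
            "-r", requirements_file, "--wheel-dir", wheel_dir]

def wheel_requirements(requirements_file, wheel_dir="wheelhouse"):
    # A nonzero return code means at least one wheel failed to build.
    return subprocess.run(wheel_cmd(requirements_file, wheel_dir)).returncode
```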
[16:21:08] <tos9> pf_moore: yeah, I'm going to just do that for my particular case
[16:21:14] <tos9> pf_moore: Are you working on implementing a wheelhouse? Or this is just tinkering?
[16:21:48] <pf_moore> tos9: I want to build something that maintains a local wheelhouse automatically.
[16:22:13] <tos9> I've written all of our builds assuming that at some point something like that will spring into being :P
[16:22:16] <pf_moore> So far, it's been hard to get anything better than "pip wheel -r req.txt" in a cron job :-)
[16:22:17] <ionelmc> pf_moore: the rumored "wheel caching"?
[16:22:38] <pf_moore> ionelmc: No, that's *far* better :-)
[16:22:43] <tos9> pf_moore: Well, that doesn't help does it? You still need to run it right before the build, and pip still won't know that it's there
[16:22:51] <tos9> Ah I guess I was talking about what ionelmc was
[16:22:59] <ionelmc> pf_moore: lol, what then
[16:23:33] <pf_moore> What that does is take "pip install foo" and convert it into "pip wheel foo" followed by "pip install <the-wheel>" and caches the wheel to avoid step 1 in the future
[16:23:48] <ionelmc> pf_moore: a wheelhouse!
[16:23:56] <ionelmc> do i get the lolly? :-)
[16:24:13] <pf_moore> heh
[16:24:21] <tos9> pf_moore: But how do you avoid step 1? If you run pip wheel again, it doesn't check to see that there's an existing wheel
[16:24:44] <xafer> tos9, it does if you add --find-links into your pip.conf
[16:24:47] <ionelmc> tos9: basically this http://blog.ionelmc.ro/2015/01/02/speedup-pip-install/
[16:24:57] <tos9> Oh right sorry, it only doesn't for VCS links
[16:25:06] <tos9> (I abandoned that approach because of ^)
[16:25:18] <pf_moore> tos9: internal magic. The wheel caching thing would be a transparent improvement to pip install, and isn't really the same as what we were talking about...
[16:26:08] <tos9> pf_moore: Yeah I'm on the same page now -- I did the same thing, but VCS links broke it, so now I am doing... other things
[16:27:02] <pf_moore> For me, the whole "maintain a wheelhouse" thing is to avoid ever installing from sdist, because I don't like setuptools' executable wrappers (on Windows) and I prefer the ones I get from a wheel install. Petty of me I know...
[16:27:58] <ionelmc> tos9: what i'm doing currently for some terrible debian package is something like this:
[16:28:03] <ionelmc> pip wheel -r "requirements.txt" --find-links=file://$build_cache/wheels --wheel-dir=$build_cache/wheels && pip wheel -r "requirements.txt" --find-links=file://$build_cache/wheels --wheel-dir=$final_bundle
[16:28:27] <tos9> ionelmc: Yeah, same
[16:28:28] <ionelmc> works quite well :-)
[16:28:44] <tos9> ionelmc: It works fine, but it won't work if requirements.txt has a VCS dep
[16:28:50] <tos9> or a tarball dep
[16:28:57] <tos9> where "won't work" means "pip will still try to clone it"
[16:29:00] <pf_moore> Why do you need to do it twice?
[16:29:10] <tos9> (even if you have #egg=)
[16:29:15] <ionelmc> tos9: well yeah i don't have so many of those
[16:29:30] <tos9> ionelmc: I don't either, but I run the pip install in a place that won't be able to do the clone
[16:29:31] <pf_moore> Isn't $build-cache/wheels the same as $final_bundle?
[16:29:33] <ionelmc> pf_moore: i need a "clean" bundle
[16:29:54] <pf_moore> hmm, OK, I see (I think)
[16:29:59] <ionelmc> only for what's now in req.txt, not all the crap that accumulated in $build_cache
[16:30:26] <pf_moore> Ah, ok - you use a common build cache for multiple requirements files. Makes sense.
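ionelmc's two-pass one-liner, restated as a sketch: pass one fills the shared build cache (cheap on re-runs, since --find-links lets pip reuse wheels already there), pass two resolves only the current requirements out of that cache into a clean bundle. The helper names are invented; the flags mirror the command in the log:

```python
import subprocess
import sys

def two_pass_cmds(req_file, build_cache, final_bundle):
    """The two pip wheel invocations from ionelmc's one-liner."""
    base = [sys.executable, "-m", "pip", "wheel", "-r", req_file,
            "--find-links", build_cache]
    return [base + ["--wheel-dir", build_cache],   # fill the shared cache
            base + ["--wheel-dir", final_bundle]]  # clean bundle, current reqs only

def two_pass_bundle(req_file, build_cache, final_bundle):
    for cmd in two_pass_cmds(req_file, build_cache, final_bundle):
        subprocess.check_call(cmd)
```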
[16:31:13] <ionelmc> ah yeah, cause i have a couple debs
[16:31:18] <pf_moore> And yeah, VCS deps are a PIT with this sort of thing (because they don't have a well-defined version, basically, so pip sees them as always newer than any built file)
[16:31:29] <pf_moore> s/PIT/PITA/
[16:31:30] <ionelmc> plus there's the garbage issue
[16:31:51] <ionelmc> pf_moore: what if you'd tag it with a "fake" version?
[16:31:55] <tos9> pf_moore: yeah, which is why I would have answered your earlier question (about what to do when a wheel satisfies a req but isn't latest) with "meh just use it" :P
[16:32:27] <xafer> I'd say fork the VCS and put a local tag on its version number ?
[16:32:43] <ionelmc> eg: pip install git+https://git.repo/some_repo.git#egg=subdir&version=1.2.3
[16:32:47] <tos9> xafer: it doesn't help, even if you specify a ref, it'll still get cloned.
[16:32:54] <pf_moore> ionelmc: AFAIK, pip ignores versions internally for VCS URLs, so basically you can't trick it that way
[16:33:02] <xafer> I meant with a private pypi
[16:33:15] <ionelmc> pf_moore: fake version tagging seems feasible to me
[16:33:34] <xafer> release a some_repo-1.23+tos9 version
[16:33:51] <ionelmc> sure, private pypi
[16:33:56] <pf_moore> ionelmc: Can't say I've ever tried it, it may work
[16:34:02] <ionelmc> but not everyone wants to maintain internal infrastructure
[16:35:01] <pf_moore> I could do loads of cool things with a private PyPI where I could serve generated stuff. devpi plugins basically, but I've never investigated...
[16:35:21] <ionelmc> tos9: as a workaround, you could manually do pip wheel for those source packages
[16:35:31] <ionelmc> and then match the versions in your req.txt
[16:35:36] <pf_moore> Serve VCS URLs as versioned sdists, automatically wheel convert wininsts and exes, etc.
[16:35:50] <ionelmc> you need two req files with that, one with only the source deps
[16:36:06] <ionelmc> and one with matching versions (so they hit the wheelhouse)
[16:38:11] <ionelmc> tos9: you can prolly automate generating the version reqs, by running pip freeze
[16:42:27] <tos9> ionelmc: it's easier I think to well, either send a PR to pip to "fix" it or add a flag :P, or to do what I'm doing now, which is to parse the file for VCS links and manually pip install them without the vcs link once the wheel is present
[16:43:35] <ionelmc> meh
[16:43:39] <ionelmc> it can be automated
[16:44:56] <tos9> er, yeah, "manually" meant "automanually"
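The "automanual" workaround tos9 describes — separating the VCS requirements from the plain ones so each group can be handled differently — might start with a split like this. The URL prefixes are the VCS schemes pip recognizes; everything else here is an assumption:

```python
VCS_PREFIXES = ("git+", "hg+", "bzr+", "svn+")

def split_requirements(lines):
    """Partition requirement lines into (vcs, plain), skipping blanks
    and comments, so VCS deps can be wheeled and pinned separately."""
    vcs, plain = [], []
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        (vcs if line.startswith(VCS_PREFIXES) else plain).append(line)
    return vcs, plain
```

The VCS group can then be wheeled once by hand, with pinned equivalents fed back into the plain group — roughly the two-requirements-files scheme ionelmc outlines above.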
[16:45:23] <xafer> or ask the VCS maintainer to produce a release ? :p
[16:45:36] <tos9> xafer: The VCS maintainer is me :)
[16:45:58] <xafer> it should be easier to convince him then ^^
[16:47:19] <ionelmc> xafer: he needs an internal pypi then
[16:47:33] <ionelmc> he needs to convince the sysadmin to run it
[16:47:38] <ionelmc> that's the hard part
[16:47:47] <tos9> Right, and also it's a useless extra step
[16:47:51] <tos9> we always depend on latest
[16:48:06] <tos9> (And that always should be true, otherwise things are out of sync)
[17:12:29] <tomprince> Put stuff in install_requires, and install that?
[17:12:41] <tomprince> Then have a requirements that points at the VCS repos.
[17:15:32] <aclark> 104.28.31.126
[17:15:39] <aclark> wtf?
[17:15:41] <aclark> sorry
[17:58:11] <ionelmc> :-)
[18:10:01] <aclark> heh
[19:11:45] <wsanchez> It appears that when pip encounters an error it doesn't exit with an error, but continues. That seems unfortunate, no?
[19:13:06] <Alex_Gaynor> What error?
[19:13:08] <wsanchez> I see "Installing collected packages" then python-ldap is stupid and has an error, then "cleaning up" and exit 0
[19:13:37] <wsanchez> It does emit "Complete output from command"... so it seem to know there's an error
[19:15:28] <wsanchez> Well dammit never mind my fault
[19:15:36] <wsanchez> I think
[19:18:16] <wsanchez> Note: all sh scripts must 'set -e', always.
[19:18:29] <wsanchez> Also 'set -u' but that's not why I'm dumb today
[19:33:07] <ionelmc> wsanchez: pastebin the whole output
[19:33:27] <ionelmc> hard to tell what the actual problem is otherwise
[19:37:45] <wsanchez> ionelmc: I think pip may be exiting with the correct status and my script is not catching it because the script is a Makefile and make sucks
[19:38:01] <wsanchez> So lemme make sure that's not the problem.
[19:39:07] <ionelmc> ah yes, the perils of makefiles :)
[19:39:29] <Alex_Gaynor> wsanchez: It may depend how the script is failing; I just did a trivial `pip install extension-that-wont-compile` and pip exited with $status => 1
[20:07:52] <wsanchez> For GNU Makefiles, this is a good thing to do: .SHELLFLAGS = -euc
[20:08:18] <wsanchez> Which will cause the shell to exit on errors or undefined variables
[20:10:50] <wsanchez> The python-ldap package is a crime against humanity.
[20:11:19] <Alex_Gaynor> wsanchez: https://github.com/twisted/ldaptor ?
[20:11:34] <wsanchez> Anyway there was a "Build" directory in it, and if I ask pip to build it from a tarball, it gets an error trying to create "build"
[20:11:48] <wsanchez> Alex_Gaynor: Yeah, I should look at that again
[20:23:31] <ronny> sup