#pypa-dev logs for Tuesday the 9th of September, 2014

[00:01:34] <jaraco> dstufft, excellent
[00:01:41] <dstufft> hm
[00:01:52] <dstufft> packaging.version.parse() or packaging.version.parse_version()
[00:02:03] <jaraco> former
[00:02:24] <dstufft> that was my thinking too!
[00:02:30] <jaraco> \o/
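
The API the two settle on above is the one the packaging library shipped. A minimal sketch of its use (the comparison example is ours, not from the log):

```python
# Parse and compare version strings with packaging.version.parse().
from packaging.version import parse

v1 = parse("1.0.post1")
v2 = parse("1.0")

# parse() returns a Version object that orders according to PEP 440.
print(v1 > v2)             # True: a post-release sorts after its base release
print(v1.is_postrelease)   # True
```
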
[12:10:18] <[Tritium]> I am looking at the setup.py for bandersnatch... install_requires has me raising a brow. it requires setuptools (which would already be installed if running setup.py didn't end in a violent traceback by the time setup() is called), mock, pytest, pdbpp... are these actually required to install and run bandersnatch? shouldn't mock and pytest* be in test_requires?
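
For readers outside the conversation, the split [Tritium] is asking for looks roughly like this hypothetical setup.py (not bandersnatch's actual file; package names are illustrative): runtime dependencies go in install_requires, test-only dependencies in tests_require:

```python
# Hypothetical setup.py sketch illustrating the suggested split.
from setuptools import setup

setup(
    name="example",
    version="1.0",
    install_requires=[
        "requests",   # needed to install and run the package
    ],
    tests_require=[
        "mock",       # test-only: not forced on every installer
        "pytest",
    ],
)
```
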
[14:17:55] <pmxbot> jaraco pushed 2 commits to setuptools (https://bitbucket.org/pypa/setuptools/) :
[14:17:55] <pmxbot> Move sister functions into proximity
[14:17:55] <pmxbot> Rename argument for consistency
[15:35:59] <dstufft> [Tritium]: installing via setup.py isn't the only way to install something; installing from a wheel, for example, doesn't require setuptools to be installed
[15:36:59] <[Tritium]> dstufft: and pytest?
[15:37:09] <dstufft> dunno about that part
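
dstufft's point about wheels: a wheel is just a zip archive with a fixed layout, so installing one is an unpack-plus-metadata step that never runs setup.py, which is why setuptools isn't needed on the target machine. A quick way to see that (the filename below is hypothetical):

```python
# List the contents of a wheel; note there is no setup.py to execute.
import zipfile

with zipfile.ZipFile("bandersnatch-1.5-py2.py3-none-any.whl") as whl:
    for name in whl.namelist():
        print(name)  # e.g. bandersnatch/..., *.dist-info/METADATA, RECORD
```
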
[17:31:10] <kevinburke> hello
[17:31:16] <kevinburke> has anyone seen this error before? https://travis-ci.org/kevinburke/pip/jobs/34834776
[17:31:46] <kevinburke> Don't think it's related to my code; the only difference between that and the previous build was what I believe was an unrelated change, and it passed
[19:41:43] <kevinburke> dstufft: ping
[19:41:50] <dstufft> kevinburke: pong
[19:42:08] <kevinburke> Ok. So, the same test failed on python 2.7 twice, but I'm having trouble seeing how it's related. I also can't reproduce it locally with CPython 2.7.8 (only on Travis)
[19:42:19] <kevinburke> The failing test is test_install_user_conflict_in_globalsite_and_usersite
[19:42:32] <kevinburke> Console output is here: https://travis-ci.org/kevinburke/pip/jobs/34834776
[19:42:47] <kevinburke> It also succeeds on pypa/develop
[19:43:33] <dstufft> normally the timeout errors are transient network things
[19:44:01] <dstufft> brb
[19:44:26] <kevinburke> Okay, hmm. In that case I'd expect failures to be distributed randomly across test cases, not to hit the same place every time
[19:44:32] <kevinburke> Let me try pushing a dummy change to re-run it.
[19:56:24] <Ivo> kevinburke: just ask one of the maintainers to restart the job in that case
[19:56:47] <kevinburke> hmm... it's consistently failing
[21:35:34] <kevinburke> is there a way to run pip or the tests in a debug mode?
[21:36:55] <kevinburke> it looks like a subprocess spawned by pip, "pip install --user INITools==0.1", is blocking, which is causing the tests in my branch to fail. It's predictable, so I can add commands and re-run the tests
[21:37:07] <kevinburke> but I'm not sure why it's failing; it sure would be nice to know what the subprocess is blocked on.
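
One generic way to get at "what is the subprocess blocked on" (a sketch, not pip's test harness; requires Python 3.7+ for these subprocess.run arguments) is to run the blocking command directly under a hard timeout, so a hang surfaces as an exception carrying whatever output the child managed to produce:

```python
# Reproduce the blocking command with a timeout instead of hanging forever.
import subprocess

try:
    result = subprocess.run(
        ["pip", "install", "--user", "INITools==0.1"],
        capture_output=True,
        text=True,
        timeout=60,  # seconds; fail fast rather than block
    )
    print(result.stdout)
except subprocess.TimeoutExpired as exc:
    # exc.output holds whatever the child printed before the timeout,
    # which hints at where it is stuck.
    print("blocked; partial output:", exc.output)
```
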
[22:24:33] <dstufft> kevinburke: our test suite is real bad at this
[22:24:47] <dstufft> kevinburke: can you reproduce outside of the subprocess?
[23:13:52] <kevinburke> dstufft: I tried running it locally on python 2.7.8 with no luck
[23:13:57] <kevinburke> but it *only* fails on python 2.7
[23:14:29] <dstufft> kevinburke: try adding debug=True to the script.pip() call
[23:14:32] <dstufft> I think it's debug=True
[23:14:34] <kevinburke> okay
[23:35:01] <kevinburke> ehhh... https://travis-ci.org/kevinburke/pip/jobs/34864589
[23:35:08] <kevinburke> still not too clear what's going on
[23:35:11] <kevinburke> the blocking command is
[23:35:17] <kevinburke> pip install --user INITools==0.1
[23:35:44] <kevinburke> the other ones fail with an AttributeError, presumably verbose mode trying to print something and failing, https://travis-ci.org/kevinburke/pip/jobs/34864588#L233
[23:44:58] <dstufft> kevinburke: your develop branch is up to date?
[23:45:04] <kevinburke> Yep
[23:45:06] <kevinburke> Er
[23:45:17] <kevinburke> I didn't branch off of the latest develop. One sec.
[23:48:24] <kevinburke> Just merged the latest develop into the branch and pushed again, now wait 45 minutes or whatever for it to acquire a worker and run :)
[23:48:45] <dstufft> there was a problem a little while back where things would deadlock
[23:49:44] <dstufft> I don't remember what fixed it or what it was exactly
[23:55:43] <kevinburke> Ok