#pypa-dev logs for Friday the 2nd of October, 2020

[00:15:36] <travis-ci> pypa/pip#18091 (master - d0f80a4 : Pradyun Gedam): The build passed.
[00:15:36] <travis-ci> Change view : https://github.com/pypa/pip/compare/2ef8040495711c7e7e695f80a35e208f7404f879...d0f80a44c9810a133d7258a72910844c3c3334f6
[00:15:36] <travis-ci> Build details : https://travis-ci.com/pypa/pip/builds/187726892
[05:36:20] <frickler> dstufft: on opendev we are seeing some weird pypi issues again since yesterday, similar to what we had a couple of weeks ago. Did you recently change something in your setup that could correlate?
[05:41:05] <frickler> looking at status.python.org, "PyPI CDN Miss Times" might indicate something happening about 3 days ago
[05:42:29] <ianw> last time it was https://mirror.dub1.pypi.io/ out of sync, but it seems to be reporting itself as ok
[06:00:03] <frickler> although the issue manifests itself in a slightly different way now, so it might be a different one altogether: we see pip2.7 trying to install the latest versions of things like setuptools, which no longer support py27. In a normal run, an older version would get installed without error
[06:00:25] <frickler> see e.g. https://zuul.opendev.org/t/zuul/build/87db92acdc874593935c7d8e6e60c559 and https://zuul.opendev.org/t/openstack/build/f8146de7ac674481af95762893587a6f
[08:32:54] <travis-ci> pypa/pip#18094 (master - 8aab76c : Pradyun Gedam): The build passed.
[08:32:54] <travis-ci> Change view : https://github.com/pypa/pip/compare/d0f80a44c981...8aab76c63f5f
[08:32:54] <travis-ci> Build details : https://travis-ci.com/pypa/pip/builds/187759233
[14:32:57] <fungi> frickler: yeah, if memory serves, dstufft said the bandersnatch fallback mirror lacks requires-python metadata in its indices, which would explain why latest pip on python2.7 would try to install the source dist for latest setuptools even though it doesn't support python2.7
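
[Editor's note: as a rough illustration of the behaviour fungi describes, here is a minimal Python sketch (not pip's actual resolver) of how the requires-python metadata in a PEP 503 index lets an installer on Python 2.7 skip releases that need Python 3, and why a missing attribute lets the newest release win. The versions and specifiers below are illustrative only.]

    # Minimal sketch, not pip's internals: requires-python filtering.
    from packaging.specifiers import SpecifierSet

    running_python = "2.7.18"

    # (version, Requires-Python) pairs as an index might advertise them;
    # None models the fallback mirror omitting the metadata entirely.
    candidates = [
        ("44.1.1", ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"),
        ("50.3.0", ">=3.5"),
    ]

    def usable(requires_python):
        if requires_python is None:
            return True  # no metadata: the installer cannot rule the release out
        return SpecifierSet(requires_python).contains(running_python, prereleases=True)

    best = max((v for v, rp in candidates if usable(rp)),
               key=lambda v: tuple(map(int, v.split("."))))
    print(best)  # 44.1.1 with metadata present; 50.3.0 would win if it were missing
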
[14:34:13] <fungi> also the correlation seems to be providers around montreal canada (we're seeing it happen at random for requests in two different cloud providers' facilities in that metro area)
[14:35:27] <dstufft> hmm
[14:35:29] <dstufft> guess we need to just drop the mirror completely right now
[14:42:25] <cooperlees> Or later versions do embed that metadata
[14:42:46] <cooperlees> Why do we fail over to it so much?
[14:43:11] <cooperlees> (Later versions of bandersnatch)
[14:46:03] <dstufft> I'm not sure tbh
[14:46:13] <dstufft> We shouldn't
[14:46:36] <dstufft> I suspect we've just never noticed it because failing over to it is largely a silent action
[14:49:41] <fungi> my completely uninformed guess would be cdn nodes having trouble reaching warehouse due to intermittent network trouble
[14:50:09] <fungi> when we've seen it in the past, it usually manifests in only one part of the world at a time
[14:51:54] <clarkb> also, we're caching those results and re-serving them, so we may notice it more than others: if we hit it once, that becomes many other occurrences behind our cache
[14:52:55] <fungi> that and we've tuned our continuous integration to make these sorts of failures readily apparent, unlike many folks who just prefer to paper over them and keep retrying until something works
[14:53:16] <cooperlees> dstufft: Can we quantify it and try to categorize why?
[15:30:03] <cooperlees> dstufft: Since it's not too hard, can we update to the latest bandersnatch, at least on the mirror?
[15:30:15] <cooperlees> Then I'll keep it up to date in case it lingers around (before we remove it)
[15:30:22] <dstufft> sure
[15:31:01] <cooperlees> I'd prob want to remove it from CDN, update, then force a full sync to rewrite the simple HTML
[15:31:11] <cooperlees> For every project to get requires-python, for example
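
[Editor's note: for context on what that rewrite produces, newer bandersnatch emits PEP 503 simple pages carrying a data-requires-python attribute on each file link so pip can filter before downloading. Below is a minimal Python sketch of rendering such a page; it is not bandersnatch's actual code, and the project name, filename, and URL are placeholders.]

    # Rough sketch of a PEP 503 simple page with data-requires-python attributes.
    from html import escape

    def render_simple_page(project, files):
        """files: iterable of (filename, url, requires_python_or_None)."""
        lines = [
            "<!DOCTYPE html>",
            "<html><head><title>Links for {}</title></head><body>".format(project),
            "<h1>Links for {}</h1>".format(project),
        ]
        for filename, url, requires_python in files:
            attr = ""
            if requires_python:
                # PEP 503: the attribute value must be HTML-escaped (e.g. &gt;=3.5)
                attr = ' data-requires-python="{}"'.format(escape(requires_python))
            lines.append('<a href="{}"{}>{}</a><br/>'.format(url, attr, filename))
        lines.append("</body></html>")
        return "\n".join(lines)

    # Placeholder file entry for illustration only.
    print(render_simple_page("setuptools", [
        ("setuptools-50.3.0.zip", "../../packages/example/setuptools-50.3.0.zip", ">=3.5"),
    ]))
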