#pypa-dev logs for Wednesday the 16th of September, 2020

[00:08:23] <travis-ci> pypa/pip#17966 (master - a13f201 : Pradyun Gedam): The build passed.
[00:08:23] <travis-ci> Change view : https://github.com/pypa/pip/compare/7fdf1634e083befe6334f10ddbe4b9ce3e7826bb...a13f2014f99ed3ca8fb2b3fa25d3631a50a0cace
[00:08:23] <travis-ci> Build details : https://travis-ci.com/pypa/pip/builds/184500101
[11:11:07] <graingert> is there a way to get pip to consume hashes from the CLI, or ignore hashes for file:/// urls? I'm currently trying to work around it in https://github.com/tox-dev/tox/issues/1672 and https://github.com/PyCQA/modernize/pull/228/files/#diff-256be86b218458267e29f38e19906417R72
[11:20:02] <graingert> I tried using direct references, `foo[extras] @ file:///...#hash=` but that's slower as I need to find out the egg name
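A minimal requirements-file sketch of the direct-reference form just mentioned; the project name, extras, wheel path, and digest are placeholders, and pip reads per-requirement hashes from requirements files (as a #sha256= URL fragment or a --hash option) rather than from the command line:

    # hypothetical requirements.txt entries; names, paths, and digests are placeholders
    foo[extras] @ file:///path/to/foo-1.0-py3-none-any.whl#sha256=<hex-digest>
    # equivalent --hash form for a pinned, named requirement
    foo==1.0 --hash=sha256:<hex-digest>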
[11:48:11] <graingert> McSinyx[m]: it does seem sensible to merge the channels - but every now and again you get "I can't pip install this because it's got require_python~=3.6 and I'm using jython 2.6"
[11:51:51] <graingert> McSinyx[m]: and this channel gets all the travis-ci noise
[12:38:37] <travis-ci> pypa/pip#17968 (master - 33890bf : Xavier Fernandez): The build passed.
[12:38:37] <travis-ci> Change view : https://github.com/pypa/pip/compare/a13f2014f99e...33890bf825fa
[12:38:37] <travis-ci> Build details : https://travis-ci.com/pypa/pip/builds/184664735
[13:43:10] <McSinyx[m]> IMHO we can remove the travis bot now since
[13:43:38] <McSinyx[m]> 1. the latency is incredibly terrible (as in 6h sometimes)
[13:44:15] <McSinyx[m]> 2. most (if not all, I'm not sure) are for pip, and pip uses over a dozen CI jobs
[13:45:44] <McSinyx[m]> as long as I'm here (hint: it's via matrix, no inconvenience at all since I have friends) I'm happy to assist users or redirect them to better places
[17:44:11] <graingert> McSinyx[m]: there's 2 ways you can tell someone on IRC uses matrix
[18:51:43] <abn> Does anyone have any idea why the mirror size dropped today according to https://p.datadoghq.com/sb/7dc8b3250-85dcf667bd?from_ts=1600273495941&to_ts=1600277095941&live=true
[18:58:13] <clarkb> abn: dstufft discovered yesterday that a (not sure if the same as the one you are looking at) mirror had filled its disk around August 2 and had gone stale. This was related to debugging https://github.com/pypa/warehouse/issues/8568. It is my understanding that the fix was to rebuild with a newer bandersnatch as it would use a lot less disk (I don't fully understand why that is the case)
[18:59:06] <cooperlees> we kept the same bandersnatch - But it's refilling from scratch cause bandersnatch delete has not worked in years
[18:59:17] <cooperlees> dstufft and I will fix when I add TUF support into bandersnatch
[18:59:39] <cooperlees> We might even be able to simplify the code as warehouse will pre-generate and store the simple HTML files too
[19:00:35] <clarkb> got it, so a rebuild would avoid writing things that a working delete would have pruned
[19:00:56] <cooperlees> Ya - fwiw - delete hasn't worked since I became maintainer of bandersnatch
[19:01:07] <cooperlees> (and that's over 3 years)
[19:01:14] <cooperlees> I wrote verify to be able to do it
[19:01:25] <cooperlees> But, the version on the PyPI mirror does not even have verify :P
[19:01:48] <cooperlees> I've got access and if we decide to keep the mirror for PyPI DR (we might be able to get rid of it) I'll go update and fix it all
[19:02:13] <cooperlees> 501 5700 5699 97 03:25 pts/4 15:14:20 /opt/bandersnatch/bin/python3.6 /opt/bandersnatch/bin/bandersnatch -c /etc/bandersnatch.conf mirror
[19:02:19] <cooperlees> ^^ It's been at it for 15 hours :P
[19:02:40] <cooperlees> - /dev/xvdb 12T 2.7T 9.3T 23% /data
[19:02:51] <cooperlees> Still got a ways to go
[19:03:12] <cooperlees> using 10 workers too :O
[19:47:09] <abn> clarkb: ah thanks that makes sense with regard to the drop.
[19:49:12] <abn> cooperlees: would be great to know how long it takes to get back in sync; I guess one could watch when the graph flatlines (ish).
[20:23:08] <cooperlees> If dstufft is writing to a log I can calculate it easily too