[11:00:21] <pf_moore> So what about users putting up a local index? Dumping some files in a directory and exposing them via Apache or twisted or something
[11:00:43] <pf_moore> Those won't have rel=internal, so they should use --find-links rather than --extra-index-url?
[11:00:52] <dstufft> it requires a <meta> tag to trigger that behavior
[11:01:13] <dstufft> item #3 under the stuff talking about PyPI
[11:03:03] <dstufft> I'm going over it again right now actually
[11:03:09] <dstufft> to update it, reword it, address comments people had
[11:03:49] <pf_moore> But it does mean that for little tools like the ones I'm forever writing, scraping the simple index (in a way that supports non-PyPI indexes) is an ugly beast.
[11:04:20] <pf_moore> distlib's locators cover it sort of, but I keep hitting issues with the API :-(
[11:04:48] <pf_moore> So I write my own and find all the reasons I shouldn't, but should just use existing code...
[11:05:17] <dstufft> You mean PEP 470 makes it harder?
[11:06:18] <pf_moore> sorry, no I meant that the existing stuff is hard, PEP 438 makes it easier if you're willing to ignore anything but rel=internal
[11:06:44] <pf_moore> but pep 438 doesn't help old-style indexes like local ones would be
[11:07:21] <pf_moore> PEP 470 might mean that more people end up with simple indexes that just throw up a load of links.
[11:07:44] <dstufft> PEP 470, assuming it gets accepted, should make things a lot easier, since it goes back to the older style of parsing links on PyPI
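[Editor's note: the "old style of parsing links" being discussed — a plain HTML page of `<a href>` entries, as served by PyPI's `/simple/` pages or a local Apache-exposed directory — can be sketched with just the stdlib. This is a hypothetical illustration, not pip's or distlib's actual code; the page fragment and filenames are made up.]

```python
from html.parser import HTMLParser

class SimpleIndexLinkParser(HTMLParser):
    """Collect every href on a simple-index-style project page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

# A made-up fragment of a simple-index page, e.g. a local directory
# listing exposed via Apache:
page = """
<html><body>
<a href="demo-1.0.tar.gz">demo-1.0.tar.gz</a>
<a href="demo-1.1-py3-none-any.whl">demo-1.1-py3-none-any.whl</a>
</body></html>
"""

parser = SimpleIndexLinkParser()
parser.feed(page)
print(parser.links)
```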
[13:34:48] <pf_moore> Sigh. Every time I write a program to query PyPI, it takes so long to set up the code to get the data that I forget why I wanted it in the first place
[13:34:53] <pf_moore> And every time I try to write a library to do the querying it takes so long I forget what my use cases were :-(
[14:47:25] <isomorphismes> buck1: do you mean I should make just a 3-line text file test.py: #!/usr/bin/python; import urllib; print urllib #and that's it?
[18:28:34] <nanonyme> pf_moore, how about next time writing documentation instead of code as the first thing you do? ;)
[19:22:31] <pf_moore> nanonyme: well yeah, but most of these things start as "hmm, I wonder how many packages on PyPI have wheels" or similar. Not something I'd document
[19:23:02] <pf_moore> Then after hours fighting with xmlrpc and scraping webpages, I realise I've lost track of the original question
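[Editor's note: once the release filenames have been obtained (by whatever means — XML-RPC, the JSON API, or scraped links), the "which packages have wheels" question itself is a one-liner; the plumbing is the hard part pf_moore describes. A minimal sketch with hypothetical data:]

```python
# Hypothetical helper: given release filenames per project, report
# which projects ship at least one wheel (.whl file).
def projects_with_wheels(releases):
    """releases: dict mapping project name -> list of filenames."""
    return sorted(
        name for name, files in releases.items()
        if any(f.endswith(".whl") for f in files)
    )

# Made-up example data, standing in for whatever the index query returned:
releases = {
    "demo": ["demo-1.0.tar.gz", "demo-1.1-py3-none-any.whl"],
    "olde": ["olde-0.3.zip"],
}
print(projects_with_wheels(releases))  # ['demo']
```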
[19:23:26] <pf_moore> Essentially you're making the mistake of assuming I'm organised :-)
[19:25:03] <dstufft> pf_moore: I have an advantage, I answer most of those questions with SQL queries :V
[19:25:09] <dstufft> unless I care about things that aren't hosted on PyPI
[19:25:27] <pf_moore> I know. And as a DBA/SQL developer, I'm insanely jealous :-)
[19:27:58] <dstufft> I can probably arrange for database dumps with the user accounts stripped out fwiw
[19:28:06] <dstufft> if you ever have questions like that and want one
[19:28:28] <dstufft> it uses postgresql though, I bet you're a MSSQL kind of guy ;P
[19:29:59] <pf_moore> dstufft: Actually Oracle... But I'm sort of trying to learn postgresql in my spare time, so if you ever did have the time, a dump would be really cool to play with.
[19:30:40] <dstufft> I actually just took a fresh pg_dump yesterday, so I can just clean out the personal data real quick
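[Editor's note: "cleaning out the personal data" from a dump can be done at dump time rather than by editing the dump afterwards. A command sketch only — the table names here are guesses, not PyPI's real schema:]

```shell
# Dump the whole database, but skip the row data for account-related
# tables; their schema is still included so a restore works cleanly.
pg_dump --exclude-table-data=users --exclude-table-data=sessions \
    -Fc pypi -f pypi-public.dump
```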
[19:31:38] <dstufft> pf_moore: oracle was going to be my second choice :D
[19:31:53] <pf_moore> lol - love being predictable ;-)
[20:02:01] <carlio> dstufft: oo, could i have a copy of the dump too?
[20:26:08] <xafer> any chance to also have it ? :p