PMXBOT Log file Viewer


#pypa logs for Thursday the 11th of June, 2015

[01:17:29] <tdsmith> During build_ext, it looks like a user-specified CFLAGS replaces the sysconfig CFLAGS but a user-specified LDFLAGS is inserted after the sysconfig LDFLAGS, which means the sysconfig include search path but not the sysconfig library search path can be overridden at extension build time: https://github.com/Homebrew/homebrew/issues/40516#issuecomment-110960510
[01:17:57] <tdsmith> Does that feel like a distutils bug?
[01:48:08] <dstufft> tdsmith: um, I don't know the compiling stuff very well, but different behaviors like that always make me feel like one or the other should be fixed
[01:55:56] <tdsmith> okay, i'll file if i can't find a bug, thanks!
[02:03:34] <tdsmith> hmm, is it the case that virtualenv ignores a global distutils.cfg but respects ~/.pydistutils.cfg?
[02:44:49] <tdsmith> ah boo, the LDFLAGS thing is because LDFLAGS gets embedded in LDSHARED
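
A minimal sketch of the behaviour being described, on a Unix CPython that still ships distutils; the exported paths are illustrative only, and the comments just restate what the report above claims:

    import os
    import sysconfig
    from distutils.ccompiler import new_compiler
    from distutils.sysconfig import customize_compiler

    # Per the report: CFLAGS exported at build_ext time replaces the sysconfig
    # defaults, while an exported LDFLAGS is only appended after the link command.
    os.environ["CFLAGS"] = "-I/opt/custom/include"   # illustrative path
    os.environ["LDFLAGS"] = "-L/opt/custom/lib"      # illustrative path

    # LDSHARED is recorded at interpreter build time with LDFLAGS already baked in.
    print(sysconfig.get_config_var("LDSHARED"))

    cc = new_compiler()
    customize_compiler(cc)   # merges the environment overrides into the compiler
    print(cc.compiler_so)    # compile command: the exported CFLAGS wins
    print(cc.linker_so)      # link command: sysconfig's -L path first, then the exported one
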
[07:58:59] <ronny> dstufft: i'd like to move setuptools_scm from bb to github, can someone add me to pypa on gh?
[09:41:18] <ionelmc> ronny: yay for github!
[14:55:14] <HenryG> I am seeing a caching issue with pip when building a tox py34 environment after a py27 env.
[14:57:04] <HenryG> In my particular example, the routes package. It gets installed in the py27 env just fine. Then when I build the py34 env, routes gets installed in its "python 2" form.
[14:58:19] <HenryG> Apparently routes has some hook that converts itself to py3k form if the interpreter is py3k. But with routes in the cache this conversion does not happen.
[14:59:13] <HenryG> If I run "pip --no-cache-dir install routes" in the py34 env then it installs correctly.
[15:02:02] <Wooble> ugh, people are still automatically running 2to3? :/
[15:03:14] <dstufft> 2to3 shouldn't cause problems unless routes claims to be universal
[15:03:44] <dstufft> which it does
[15:03:45] <dstufft> lol
[15:04:15] <Wooble> it is universal... you can run the same setup.py for both versions! What else could that mean? ;)
[15:52:16] <HenryG> You folks are talking over my head a bit :)
[15:52:41] <HenryG> So is routes not really py3k? What can I do about it?
[15:54:37] <a-tal> HenryG: can you link the project? I think I might know a way to solve this
[15:55:35] <HenryG> a-tal: openstack neutron (may affect other projects in openstack)
[15:55:57] <HenryG> a-tal: https://github.com/openstack/neutron/
[15:56:16] <Wooble> HenryG: I assume it's a wheel that's being cached? (caching the actual sdist download should work just fine...)
[15:57:10] <HenryG> Wooble: wooosh. sorry :(
[16:04:47] <HenryG> I see dhellmann is here, he might know
[16:05:14] <dstufft> the problem is in routes
[16:05:28] <dstufft> https://github.com/bbangert/routes/blob/master/setup.cfg#L1-L2
[16:05:35] <dstufft> HenryG: are you familiar with Linux packaging at all?
[16:06:43] <HenryG> dstufft: only as a user
[16:07:20] <a-tal> heh, yeah that's not actually universal when you have a bunch of py3 hooks in your setup.py. i bet if you reverse the order in tox it'll work too tho
[16:07:40] <dstufft> HenryG: are you familiar with the distinction between like a source deb and a binary deb? or a source rpm and a binary rpm?
[16:08:35] <Wooble> I'm sure "add a universal wheel" sounded like a good idea at the time. :/
[16:08:56] <HenryG> dstufft: somewhat, yes
[16:09:08] <dstufft> HenryG: in Python an sdist == source deb/rpm
[16:09:11] <dstufft> a wheel == binary
[16:09:33] <HenryG> dstufft: nice, makes sense
[16:09:34] <dstufft> the routes library has a build step that transforms the source differently for python2 or python3
[16:10:05] <dstufft> but that line I linked is saying "there is no difference between python2 and python3 for this library, so you can use the same binary wheel for both"
[16:10:33] <dstufft> so pip downloads the sdist from PyPI, and it turns it into a wheel, and because it has that line, it says "OK, this wheel is good for both Py2 and Py3"
[16:10:33] <HenryG> dstufft: nice explanation, thanks!
[16:10:38] <dstufft> and then just re-uses it
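
A small sketch of why the cached wheel then satisfies both environments; the filename follows the standard wheel naming scheme, though the exact version shown is only illustrative:

    # Hypothetical name of the wheel pip builds from the routes sdist and caches.
    wheel_name = "Routes-2.1-py2.py3-none-any.whl"

    name, version, python_tag, abi_tag, platform_tag = wheel_name[:-len(".whl")].split("-")
    print(python_tag)  # "py2.py3": the wheel claims to work on both major versions,
                       # so pip reuses the one built (2to3-converted or not) by
                       # whichever environment installed routes first.
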
[16:10:41] <a-tal> https://github.com/bbangert/routes/pull/49 :)
[16:10:56] <HenryG> Yup, I see that
[16:11:17] <HenryG> The explanation and the bug
[16:11:45] <Wooble> a-tal: well that makes my bug report with no patch rather useless. :)
[16:12:19] <a-tal> Wooble: :D lol sorry m8. feel free to link to it if you want to make even more noise!
[16:12:42] <Wooble> tox hasn't even finished running the py2 env tests to try to reproduce it on my machine :)
[16:13:02] <a-tal> yeah mine's still chugging away as well lol
[16:13:46] <a-tal> too many cooks^Wtests!
[16:22:16] <Wooble> Glad my test suite doesn't take that long to run; I'd never get anything done! :)
[16:26:55] <HenryG> dstufft: a-tal: Wooble: Thanks for the help here! When the next version of routes is released we can set that as the min version in openstack requirements.
[19:24:48] <tos9> Hi, today's fun -- why is pip lying to me
[19:25:12] <tos9> https://bpaste.net/show/a66a597509e7
[19:25:27] <tos9> https://pypi.python.org/simple/faf/ I see more links than you do, pip
[19:26:22] <tos9> (if I pass --no-cache everything works too -- does pip 6.0.8 have a bug where it won't check for packages newer than its cache?)
[19:26:51] <dstufft> tos9: No, but the /simple/ page is cached for 10 minutes
[19:27:04] <nanonyme> tos9, you mean more than those two links?
[19:27:15] <tos9> dstufft: Oh. So I just ran stuff before and after fastly updated basically?
[19:27:16] <tos9> Alright.
[19:27:49] <dstufft> tos9: well, fastly itself is purged instantly (though that can fail sometimes, we don't handle that case very well yet), but locally pip will cache /simple/faf/ for 10 minutes
[19:28:02] <dstufft> "instantly"
[19:28:04] <tos9> Oh *locally*... Hrm alright.
[19:28:05] <dstufft> meaning in < 1s
[19:28:16] <dstufft> passing --no-cache-dir makes me think it might be that
[19:28:22] <dstufft> since that would disable that
[19:29:14] <dstufft> I've considered setting it so we don't cache /simple/*/ pages locally for reasons similar to this (also because it adds churn in the cache which can matter on things like travis)
[19:29:32] <dstufft> and because we only really cache it for 10 minutes anyways, so the benefit isn't huge
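
A minimal sketch of that local layer, assuming the simple index answers with the roughly ten-minute max-age described above, using CacheControl (the library pip uses for its HTTP cache; see the cachecontrol issue linked later in the log). The cache directory here is made up:

    import requests
    from cachecontrol import CacheControl
    from cachecontrol.caches import FileCache

    # Illustrative cache directory, not pip's real cache location.
    session = CacheControl(requests.Session(), cache=FileCache("/tmp/demo-http-cache"))

    resp = session.get("https://pypi.python.org/simple/faf/")
    print(resp.from_cache)  # False: the first hit goes to the network

    resp = session.get("https://pypi.python.org/simple/faf/")
    print(resp.from_cache)  # True for roughly the next 10 minutes, per the max-age,
                            # so a release published in between won't be seen until
                            # the cached entry expires
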
[19:29:48] <tos9> dstufft: Yeah. I mean at the very least saying "Getting page https://pypi.python.org/simple/faf/" with nothing more when I passed -v certainly is confusing :/
[19:30:25] <dstufft> tos9: can you open an issue to add an indicator that we're serving from the cache on that
[19:30:36] <dstufft> we indicate when we're serving files from the cache, we should do it there too
[19:31:32] <tos9> yup will do
[19:34:12] <tos9> done
[19:34:15] <dstufft> thanks
[19:34:18] <dstufft> should be an easy fix
[19:46:32] <elarson> dstufft: fyi: https://github.com/ionrock/cachecontrol/issues/81
[19:47:02] <dstufft> elarson: oh right
[19:47:06] <dstufft> elarson: interesting
[19:47:35] <dstufft> elarson: I was meaning to open another issue... I think there's a problem with detecting success vs detecting an interrupted download
[19:47:50] <dstufft> see https://twitter.com/zzzeek/status/608633827909640192
[19:54:33] <elarson> dstufft: I can see how that would be difficult to sort out if the file handle just stops and no error is thrown
[19:55:15] <dstufft> elarson: I don't know if there's a better way to handle this than there is right now so ymmv
[19:55:33] <elarson> yeah, I agree
[19:56:03] <dstufft> One possible option, if it can't really be done, is to provide a way on the response to delete the cached response, so the caller can make the call to evict a response from the cache if we determine it wasn't satisfactory
[19:57:03] <elarson> like: if resp.from_cache and response_failed(resp): resp.evict()
[19:57:20] <dstufft> yea
[19:57:22] <elarson> well... resp.clear_cache()
[19:57:46] <dstufft> I'm not sure if that would be super specific to pip or not
[19:57:58] <dstufft> we have a checksum of the download, so we know if it was successful or not
[19:58:13] <dstufft> so we can evict from the cache if the hash doesn't match
[19:58:42] <elarson> it would work for any system that has a checksum it can verify, which I'd argue is a pretty reasonable use case
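
A sketch of that idea under the assumptions just discussed: the caller knows the expected checksum, as pip does, and holds a reference to the underlying cache object. The fetch_verified helper and the use of the bare URL as the cache key are assumptions for illustration, not an existing cachecontrol API:

    import hashlib

    import requests
    from cachecontrol import CacheControl
    from cachecontrol.caches import FileCache

    cache = FileCache("/tmp/demo-http-cache")  # illustrative location
    session = CacheControl(requests.Session(), cache=cache)

    def fetch_verified(url, expected_sha256):
        resp = session.get(url)
        digest = hashlib.sha256(resp.content).hexdigest()
        if digest != expected_sha256:
            # Truncated or corrupt body: drop the cached entry so the next
            # attempt goes back to the network instead of replaying the bad data.
            if resp.from_cache:
                cache.delete(url)  # assumes the bare URL is the cache key
            raise IOError("checksum mismatch for %s" % url)
        return resp
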
[19:59:15] <elarson> dstufft: thanks for adding the issue