

#pypa logs for Monday the 3rd of August, 2015

[07:27:57] <lansman> hello guys, tell me please, how do I extract dependencies from setup.py to requirements.txt?
[07:31:27] <mgedmin> add '-e .' to requirements.txt
[07:31:37] <mgedmin> see also https://caremad.io/2013/07/setup-vs-requirement/
[08:22:42] <doismellburning> lansman: I just install + `pip freeze`
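
A minimal sketch of the two approaches mentioned above (run from the project directory; file contents and commands are illustrative):

    # mgedmin's approach: point requirements.txt at the project itself so pip
    # resolves whatever setup.py declares in install_requires
    echo "-e ." > requirements.txt
    pip install -r requirements.txt

    # doismellburning's approach: install the project, then snapshot the
    # resulting environment into a fully pinned requirements.txt
    pip install .
    pip freeze > requirements.txt
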
[16:41:54] <doismellburning> is there any link on https://pypi.python.org/pypi/tornado I'm missing for "these are the versions"?
[16:42:15] <doismellburning> that page shows me 4.2.1, whereas https://pypi.python.org/pypi/Django gives me a version index
[16:43:36] <dstufft> Nope
[16:43:47] <dstufft> which kind of page you get depends on the settings of that particular project
[16:44:36] <doismellburning> ah right
[16:44:37] <doismellburning> thanks
[17:14:51] <_habnabit> urrrrrgh i didn't know about this new "This filename has previously been used, you should use a different version." thing. i wasn't trying to replace a version with a different tarball marked the same version, but i forgot to attach the .asc during the upload and went to delete and re-create the file and got that
[17:15:12] <_habnabit> it's the same exact file with the same exact hash, so i'm not sure why it would be rejected
[17:15:47] <_habnabit> guess i have to make a new version
[17:18:30] <dstufft> _habnabit: lol
[17:18:46] <ronny> dstufft: how is your warehouse work progressing?
[17:18:56] <_habnabit> AUGH FUCK
[17:19:06] <_habnabit> now i signed it with the wrong key and so now i have to make aNOTHER new version
[17:19:10] <_habnabit> thanks pypi
[17:19:16] <dstufft> _habnabit: yea, it needs to be modified so that it lets you reupload the exact same tarball again (or change it so that delete doesn't really delete, just hides it from everything and lets you restore)
[17:19:23] <dstufft> also lets you upload a signature after the fact
[17:19:30] <_habnabit> dstufft, or just the latter thing
[17:19:48] <_habnabit> i wouldn't care about anything else if i could just attach signature later
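
For reference, a sketch of signing at upload time so the .asc goes up together with the tarball, avoiding the delete-and-reupload problem described above (the key ID and filename are placeholders; twine's --sign option shells out to gpg):

    # sign and upload in one step, since PyPI has no way to attach a
    # signature to an already-uploaded file
    twine upload --sign --identity YOUR_KEY_ID dist/yourpackage-1.0.0.tar.gz
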
[17:19:52] <dstufft> ronny: it's coming, trying to finish up file uploading right now
[17:20:07] <dstufft> well "finish" in the "make it work" sense, there will still be some TODOs
[17:20:07] <ronny> dstufft: so potentially a within-a-month item?
[17:20:35] <dstufft> for warehouse itself? Probably not within a month, couple months hopefully
[17:20:45] <ronny> oh, ok
[17:21:50] <dstufft> We have a designer now who's started making up some wireframes
[17:21:51] <dstufft> so that's cool
[17:22:04] <ronny> wireframes?
[17:22:59] <ronny> dstufft: i grew healthy enough that i started to pick up on gumb elf, target is to be able to create wheels, eggs and sdists sometime soon
[17:24:02] <ronny> (eggs since setup_requires being wheel-ized properly is too damn hard)
[17:38:02] <benjaoming> dstufft: For security, I think it's pretty cool that you don't get to re-upload a release... but I get that it's quite human to err... maybe a ~30 minute window for re-uploading would be cool? Then after the window expires, the package would appear in feeds for mirrors etc?
[17:39:08] <dstufft> ronny: that's good that you're feeling better
[17:39:27] <dstufft> benjaoming: it's likely it'll change in Warehouse at some point, not sure what it will end up looking like yet though
[17:40:15] <dstufft> a lot of what's done in PyPI legacy is done in the interest of changing as little code as possible
[17:42:50] <ronny> dstufft: btw can warehouse do things like devpi as well?
[17:43:54] <dstufft> ronny: you'll need to be more specific about what you mean by things
[17:44:14] <ronny> dstufft: stuff like user/topic specific indexes
[17:45:38] <dstufft> Currently? No. Ever? IDK. Its only real purpose is to live at pypi.python.org, so whether it gets that or not is mostly a question of deciding if PyPI itself needs it
[17:48:27] <ronny> dstufft: i see
[17:48:36] <benjaoming> dstufft: actually it was possible to manually delete releases and then re-upload the same release... but the latter was disallowed, I don't remember when... but I think it's a recent decision.
[17:49:23] <dstufft> benjaoming: right, I removed the ability to do that, and the way I implemented it was done that way to reduce the amount of code I had to change in PyPI legacy, with the intention to revisit it in Warehouse with a better system
[17:50:25] <benjaoming> dstufft: cool, so YOU did it ;) ;) don't want to disturb you too much, just saying that in case you want to change the decision, I wouldn't see a big issue :)
[17:51:17] <dstufft> Most likely it'll just be changed so that instead of a hard delete, it's a soft delete that you can hit a button to restore it, and the ability to upload signatures after the fact
[17:51:54] <dstufft> I thought about the Window thing, but realistically it's going to have weird interactions for a lot of people if you change anything meaningful
[17:52:26] <dstufft> like, if someone does ``pip install foo`` between when you uploaded it and when it got downloaded it'll be cached locally for those people for basically forever
[17:52:36] <dstufft> er, when it got deleted and reuploaded*
[17:53:08] <dstufft> The fact that a particular file name will only ever be equal to one content (or be nothing) is used to provide caching throughout the system without having to mess with purging
[17:55:51] <dstufft> We push something like 5TB a day via Fastly, so things that let us leverage client caching for as long as humanly possible are super important
[17:56:16] <shader> so, I've run into what seems like a common issue - pip not working for https requests
[17:56:28] <dstufft> shader: can you provide more details?
[17:56:30] <shader> but the weird thing is that it works if I manually pip install all of the dependencies
[17:56:42] <shader> I'm trying to install mongoengine in an alpine docker
[17:57:30] <shader> so, you can repro if you do 'docker run -ti --rm frolvlad/alpine-python3 sh'
[17:57:36] <shader> not sure if you need that though
[17:58:00] <dstufft> what command are you running, and what does the traceback show (pastebin it)
[17:59:54] <shader> https://bpaste.net/show/28450e893b5b
[18:00:41] <shader> I tried the suggestions on stackoverflow for installing root certs, or setting the index to http://
[18:00:57] <shader> but it doesn't seem to work, because it's a dependency that's having the issue
[18:01:48] <shader> neither 'rednose' nor 'nose' (the two listed dependencies of mongoengine) will install automatically, but I can pip install either of them manually just fine
[18:01:52] <dstufft> ahh
[18:01:53] <dstufft> see
[18:02:14] <dstufft> the problem here is that mongoengine has those in its setup_requires
[18:02:38] <dstufft> setuptools/easy_install actually installs (to a temporary location) anything in setup_requires without any way for pip to inject in there and install it instead
[18:02:45] <dstufft> so the SSL error is coming from setuptools/easy_install
[18:02:49] <shader> hmm
[18:03:08] <shader> anything I can do on my end to fix it, besides installing everything manually?
[18:03:11] <dstufft> if you install rednose before you try to install mongoengine it should work
[18:03:18] <dstufft> in two different pip invocations
[18:03:26] <dstufft> assuming that's the only thing it has in its setup_requires
[18:03:48] <dstufft> alternatively, I think you can install certifi and easy_install will use that
[18:04:30] <shader> certifi?
[18:04:50] <shader> ok, I see it in pip; I'll try that
[18:05:38] <shader> oh, that worked
[18:05:39] <shader> thanks :)
[18:06:27] <shader> Thanks for the help dstufft, looks like that will prevent similar issues from happening in the future
[18:09:13] <dstufft> shader: no problem :)
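
A sketch of the two workarounds from the exchange above (mongoengine, rednose, and certifi are the packages named in the log; exact behavior depends on the pip/setuptools versions in the image):

    # Workaround 1: install certifi so easy_install can verify HTTPS when it
    # fetches setup_requires dependencies on its own
    pip install certifi
    pip install mongoengine

    # Workaround 2: pre-install the setup_requires dependency in a separate
    # pip invocation so easy_install never has to download anything
    pip install rednose
    pip install mongoengine
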
[18:23:17] <mitsuhiko> dstufft: so. you know this old python bug where too many subject alternative names break the ssl module?
[18:23:27] <mitsuhiko> seems like it affects pypi.python.org again
[18:23:34] <mitsuhiko> did someone update the cert?
[18:24:27] <mitsuhiko> we got some funky failures on osx system python because of it (our guess, need to verify)
[18:24:49] <dstufft> shouldn't have been updated for quite a while
[18:24:54] <dstufft> couple months?
[18:25:12] <dstufft> it currently has 15 SANs
[18:25:20] <dstufft> I don't recall that bug off the top of my head though
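
One way to count the subjectAltName entries on the certificate actually being served (assumes the openssl CLI; output formatting can vary between openssl versions):

    # fetch the cert PyPI's CDN serves and list its DNS SAN entries
    echo | openssl s_client -connect pypi.python.org:443 -servername pypi.python.org 2>/dev/null \
        | openssl x509 -noout -text \
        | grep -o 'DNS:[^, ]*'
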
[18:27:43] <mitsuhiko> wonder what changed
[18:27:48] <mitsuhiko> first time it happened was sometime today
[18:28:18] <dstufft> PyPI's TLS is controlled by Fastly, they might have changed a cipher or something
[18:28:37] <mitsuhiko> http://bugs.python.org/issue13034
[18:28:39] <mitsuhiko> that's the old bug
[18:28:43] <dstufft> the cert being returned by ssllabs looks like the right cert, we provide that to Fastly so they couldn't have changed that
[18:30:36] <dstufft> mitsuhiko: I have to run and pick up my daughter from ice skating, I'll be back in a bit but if you're poking at it, joining #fastly and asking them if they would have made any TLS config changes that would affect PyPI is what I would do if i wasn't leaving right now
[18:31:49] <mitsuhiko> thanks
[18:51:00] <bbatha> Hello, I'm attempting to set up an environment where I have many different scripts that should share one environment. Essentially I'd like to be able to deploy several virtualenvs and let the script pick what set of modules it gets. Is there a recommended way to do something like this? I'm on RHEL (6 primarily) and supporting python 3.4 and 2.7
[20:48:14] <aclark> bbatha: virtualenvwrapper?
[20:50:18] <bbatha> aclark: essentially that's the interface to consumers I want to expose. However, I need to install my venvs globally on a few hundred boxes. It seems that there isn't a good way to package the venvs up though
[20:52:29] <aclark> bbatha: you may need to give an example because I'm not really following what you are going for
[20:53:27] <bbatha> I have a.py and b.py; these two scripts share a venv, Foo
[20:54:01] <bbatha> I would like to package Foo up and distribute it to a few hundred machines
[20:54:14] <aclark> ooo
[20:54:39] <bbatha> such that any arbitrary python script could use Foo
[20:54:46] <aclark> right
[20:56:11] <aclark> hmmm
[20:56:38] <aclark> in that case I might make an rpm? Not sure
[20:57:21] <bbatha> Ya that's what I'd like to do but it seems like that's not really production ready: https://virtualenv.pypa.io/en/latest/userguide.html#making-environments-relocatable
[20:59:01] <aclark> bbatha: in that case I'd probably explore the automation of the creation of Foo on X number of hosts
[20:59:29] <aclark> Or, just roll your own Python and rpm that instead of introducing venv to the mix
[20:59:51] <aclark> e.g. /usr/local/my/redistributable/python/bin/python
[21:00:17] <bbatha> aclark: Ya I've been looking at the latter but it would be nice to avoid reinventing the wheel and use venv to do the environment management.
[21:00:25] <aclark> (i.e. Foo could be a real env)
[21:00:39] <aclark> Yeah I'm not sure this wheel is fully formed yet :-)
[21:00:58] <bbatha> aclark: Unfortunately :).
[21:01:05] <aclark> And I don't think the main goal of venv was ever distribution.
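
A minimal sketch of the "recreate Foo on every host" approach aclark suggests, assuming a pinned requirements file and the same Python available on each box (paths and names are illustrative):

    # run on each host (e.g. via your configuration management tool) rather
    # than copying a built virtualenv around, since virtualenvs embed
    # absolute paths
    virtualenv /opt/envs/foo
    /opt/envs/foo/bin/pip install -r /opt/envs/foo-requirements.txt

    # a.py, b.py, or any other script can then opt into Foo explicitly
    /opt/envs/foo/bin/python a.py
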
[23:09:05] <eluria> hello, I am trying to install a specific version of a package using pip on osx but it fails (either inside or outside virtualenv) while installing the newest version works
[23:10:15] <eluria> i.e., `pip install nltk` works fine, but `pip install nltk==2.0b9` fails with ' IOError: [Errno 2] No such file or directory: '/private/tmp/pip-build-Xke7Hl/nltk/setup.py''
[23:11:02] <eluria> I can't find out what's going wrong. passing -vvv does not help. I would be happy if anyone has a clue about this
[23:23:18] <aclark> eluria: nothing is going on, that release is "bad"
[23:27:12] <eluria> aclark: oh I see, so I should just try the next older version then
[23:27:18] <aclark> right
[23:27:58] <eluria> wow thanks, I could not have figured out even from the most verbose pip output that the package itself was causing this
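
One way to see which other releases exist before picking an older one; this relies on pip listing the available versions in its error output, which may differ between pip versions:

    # ask for a version that can't exist; pip's "could not find a version"
    # error lists the versions it did find on PyPI
    pip install nltk==999999
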
[23:31:22] <eluria> hmm more errors now. perhaps because their setup is just broken or something. is it normal that installing older packages causes so much trouble?
[23:32:15] <aclark> eluria: it's sadly not unusual to encounter broken packages
[23:37:20] <eluria> I see. yes I seem to have hit a chain of errors here already :S
[23:53:47] <eluria> aclark: anyway, thanks. you've helped me a few steps further. cheers