PMXBOT Log file Viewer


#pypa logs for Thursday the 5th of February, 2015

[02:46:00] <tdsmith> New tool to help package Python applications and their dependencies into Homebrew formulas: https://github.com/tdsmith/homebrew-pypi-poet
[02:59:17] <dstufft> tdsmith: neat, I thought homebrew didn't package stuff that could be pip installed, is that wrong?
[03:04:52] <dstufft> tdsmith: btw, pip 6.0.8 and virtualenv 12.0.7 released, just an fyi!
[03:17:28] <tdsmith> current thinking is we don't like packaging libraries unless they're difficult to pip-install but we don't discriminate against command-line apps that happen to be written in python, dstufft
[03:17:51] <dstufft|laptop> tdsmith: ah gotcha
[03:19:51] <tdsmith> we bundle apps and their python dependencies together and keep everything out of the global site-packages which is actually kinda nice
[03:25:36] <dstufft|laptop> tdsmith: are you using something like pipsi for that or doing it manually?
[03:31:55] <tos9> pipsi has some fairly annoying parts
[03:32:02] <tos9> probably I should send mitsuhiko a PR at some point to remove some of them
[03:49:01] <tdsmith> dstufft|laptop: manually
[03:50:01] <tdsmith> we just install packages to a private prefix and then wrap invocation points with a script that sets PYTHONPATH
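The private-prefix wrapper described above can be sketched as follows. This is a hypothetical illustration of the idea, not Homebrew's actual code; the prefix path and tool name are made up.

```python
import os

def wrapped_env(prefix, environ):
    """Environment for the wrapped tool: point PYTHONPATH at the app's
    private site-packages so imports resolve there, not globally."""
    env = dict(environ)
    env["PYTHONPATH"] = os.path.join(prefix, "lib", "python2.7", "site-packages")
    return env

# A real wrapper script would then exec the tool's true entry point
# (e.g. via os.execvpe) under this environment; the prefix is illustrative.
print(wrapped_env("/usr/local/opt/sometool/libexec", {})["PYTHONPATH"])
```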
[06:33:28] <mgedmin> hello, 503 backend read error, long time no see
[06:35:42] <mgedmin> dstufft, have you seen how https://pypi.python.org/pypi/virtualenv/12.0.7 looks?
[06:36:28] <mgedmin> all 12.0.x versions look unformatted actually
[08:30:04] <ronny> dstufft: ping?
[09:06:29] <ronny> hmm
[09:06:41] <ronny> who is the current lead behind pip?
[09:53:16] <kevc> anyone know about the error "For --editable= ... only svn+URL, git+URL, hg+URL, bzr+URL is currently supported" ?
[09:53:23] <kevc> Seems new and quite strange behaviour
[09:59:26] <kevc> actually it mostly seems that I can only trigger this when calling pip via xargs
[10:37:50] <mgedmin> seeing the command you're running and its full output could be helpful; pastebin?
[10:41:14] <kevc> http://pastebin.com/bz5iDtZ5
[10:41:44] <kevc> the pip install -e command works fine if entered directly into that shell
[10:43:01] <mgedmin> xargs splits on spaces by default, so you get 'pip install -e' and 'pip install git+...'
[10:43:05] <mgedmin> tell xargs to split on newlines
[10:43:12] <kevc> I think this was working fine on an older version of pip. Would need to check the versions
[10:43:15] <mgedmin> xargs -d '\n', I think
[10:44:12] <kevc> mgedmin: doesn't change anything. Also note I've used -t and it shows a single command being run, not multiple
[10:44:12] <mgedmin> but doesn't 'pip install -r reqtmp' already skip lines starting with #?
[10:44:22] <mgedmin> oh
[10:44:29] <kevc> the line doesn't start with #, it contains # for the egg=...
[10:44:52] <kevc> the xargs was workaround against pip not installing packages in order
[10:45:02] <mgedmin> I was trying to guess the reason for your sed '/%#/ d' bit
[10:45:10] <mgedmin> and now I know
[10:46:22] <mgedmin> ok, I think I know what the problem is: xargs runs ['pip', 'install', '-e git+https://...'] instead of ['pip', 'install', '-e', 'git+https://...']
[10:46:51] <mgedmin> fixup with sed 's/^-e /-e/' maybe?
[10:48:56] <kevc> that seems to work, although not quite clear why. Now it gets ['pip', 'install', '-egit+https://...'] and it works
[10:51:01] <mgedmin> the why is clear to me: pip thinks the URL you specify in '-e git+https://...' is ' git+https://...' with a leading space
[10:51:10] <ronny> oO
[10:51:14] <mgedmin> so it doesn't .startswith('git+')
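The diagnosis above is easy to reproduce: with xargs' default whitespace splitting, the editable URL keeps a leading space, which defeats pip's VCS-scheme check. A minimal sketch of that check (simplified, not pip's actual code):

```python
# What pip effectively receives in the two cases:
broken = " git+https://example.com/repo.git"  # '-e git+...' kept as one arg: leading space
fixed = "git+https://example.com/repo.git"    # after sed glues '-e' onto the URL

# A VCS-scheme check along the lines pip performs (simplified sketch):
def looks_like_vcs_url(arg):
    return arg.startswith(("svn+", "git+", "hg+", "bzr+"))

print(looks_like_vcs_url(broken))  # False: the space hides the 'git+' prefix
print(looks_like_vcs_url(fixed))   # True
```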
[10:51:20] <ronny> mgedmin: why not use pip install -r thatfile ?
[10:51:44] <mgedmin> ronny, "<kevc> the xargs was workaround against pip not installing packages in order"
[10:51:51] <ronny> oh
[10:51:55] <ronny> why is the in order needed?
[10:51:57] <mgedmin> I, too, am curious why it was important to install packages in a specific order
[10:52:00] <ronny> kevc: why do you need in order?
[10:52:35] <ronny> basically in order should never matter
[13:24:34] <theuni2> howdi
[13:25:58] <theuni2> hm
[13:26:07] <theuni2> Interesting edge case.
[13:26:48] <theuni2> I uploaded a file to pypi with setup.py upload. The upload failed (Upload failed (503): backend read error) but the file shows up.
[13:27:04] <theuni2> Also, the file is a valid zip file with no errors but it has a different md5sum than my local copy.
[13:27:04] <mgedmin> see also: the Byzantine generals problem
[13:27:12] <mgedmin> now that is interesting
[13:27:18] <theuni2> *exactly*
[13:27:25] <theuni2> And I'm not sure what triggered the 503
[13:27:32] <mgedmin> if you cmp it with your copy, what happens? is it an exact prefix?
[13:27:53] <theuni2> as i can't delete files any longer on pypi (yay) i wonder whether i will keep triggering errors on the server doing new releases ...
[13:27:56] <theuni2> thats a good question
[13:28:07] <mgedmin> deleting files should still work
[13:28:13] <mgedmin> you just can't re-upload them with the same filename again
[13:28:16] <theuni2> ah
[13:28:34] <mgedmin> you'll have to bump the version number and re-run the setup.py upload
[13:28:38] <theuni2> ah
[13:28:39] <theuni2> there it is
[13:28:55] <theuni2> ah ok, so i can remove but not reupload. got it.
[13:29:07] <mgedmin> or maybe it's enough to rename it and append a .build1 or .post1 after the version then upload via web or twine? I'm not sure
[13:29:21] <theuni2> well lets say it this way
[13:29:28] <theuni2> i have a correct tar.gz up
[13:29:30] <mgedmin> still, it worries me that pypi can serve a file different from the one I tried to upload
[13:29:33] <theuni2> i was just expecting to upload a zip file
[13:29:38] <theuni2> so i uploaded the zip file afterwards
[13:29:45] <theuni2> i can just delete the zip and roll with the tar.gz
[13:29:49] <theuni2> not picky here
[13:29:49] <mgedmin> sure
[13:30:06] <mgedmin> have you checked the prefix hypothesis?
[13:30:24] <theuni2> i took a quick look at the hexdump
[13:30:30] <theuni2> they're the same length and end in the same bytes
[13:31:38] <theuni2> individual bytes screwed in the middle
[13:31:42] <theuni2> w t f
[13:31:45] <theuni2> solar flare
[13:31:46] <theuni2> ?
[13:31:56] <theuni2> CIA injection?
[13:32:16] <mgedmin> how many bytes?
[13:32:29] <theuni2> wrong question. visual compare of the hexdump diff :)
[13:32:39] <mgedmin> /usr/bin/cmp ftw
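The "exact prefix" hypothesis from earlier can be checked with a small cmp-like helper. This is a sketch of the idea; the real `cmp` reports 1-based offsets and (by default) octal byte values.

```python
def first_difference(a, b):
    """Return (offset, byte_in_a, byte_in_b) for the first mismatch,
    or None if the inputs agree up to the shorter length (exact prefix
    or identical)."""
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return i, x, y
    return None  # equal up to the shorter length

print(first_difference(b"abcdef", b"abXdef"))  # (2, 99, 88)
print(first_difference(b"abc", b"abcdef"))     # None: exact prefix
```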
[13:32:54] <theuni2> hm. interesting. you got me there.
[13:33:38] <theuni2> very interesting
[13:33:39] <theuni2> http://dpaste.com/19P2739
[13:33:46] <mgedmin> http://mina.naguib.ca/blog/2012/10/22/the-little-ssh-that-sometimes-couldnt.html is an amazing story about in-flight data corruption
[13:33:50] <theuni2> it always got a byte value 216 changed to 132
[13:34:06] <mgedmin> 0xd8 replaced with 0x84
[13:34:38] <mgedmin> (or does cmp print octal?)
[13:34:56] <theuni2> i thought that's dec
[13:35:15] <mgedmin> I hope so
[13:35:24] <mgedmin> in any case the pattern is unclear
[13:35:37] <theuni2> '0b11011000'
[13:35:37] <theuni2> '0b10000100'
[13:36:02] <theuni2> it lost a few ones in both upper and lower nibbles
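The two byte values compared bit by bit, as in the binary strings above; XOR shows exactly which bits changed (216 is 0xd8 and 132 is 0x84, matching the earlier reading of cmp's output as decimal):

```python
good, bad = 216, 132        # 0xd8 and 0x84, the byte values cmp reported
print(format(good, "08b"))  # 11011000
print(format(bad, "08b"))   # 10000100
diff = good ^ bad           # bits that changed in flight
print(format(diff, "08b"))  # 01011100: bits 6, 4 and 3 cleared, bit 2 set
```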
[13:36:17] <theuni2> this is freaking scary
[13:36:32] <theuni2> now. lets check how that affected the content of the zip
[13:36:37] <mgedmin> were all 0xd8 bytes replaced or just some of them?
[13:36:46] <theuni2> will check that in a second
[13:37:28] <theuni2> diff doesn't show any change
[13:37:32] <theuni2> on the extracted trees
[13:38:10] <theuni2> hmm
[13:38:10] <theuni2> ah
[13:38:11] <theuni2> I think
[13:38:26] <theuni2> When I ran the command a second time it regenerated the zip.
[13:38:57] <theuni2> that could explain the difference: i don't really have the original file any longer and zip for some reason wasn't deterministic
[13:39:05] <mgedmin> timestamps!
[13:39:14] <theuni2> didn't touch any
[13:39:18] <theuni2> i think
[13:39:25] <mgedmin> I believe zip files may include zip creation timestamps
[13:39:48] <mgedmin> the only thing I can come up with
[13:40:00] <theuni2> that's what wikipedia says
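Zip archives do store a per-entry modification time, so two runs over identical content can produce different bytes and hence different md5sums. Pinning the timestamp is enough to make the output deterministic; a sketch using the stdlib zipfile module (the fixed epoch is arbitrary):

```python
import io
import zipfile

FIXED = (1980, 1, 1, 0, 0, 0)  # the earliest timestamp the zip format can store

def deterministic_zip(entries):
    """Build a zip from (name, data) pairs with pinned timestamps so that
    identical input always yields byte-identical output."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name, data in sorted(entries):
            zf.writestr(zipfile.ZipInfo(name, date_time=FIXED), data)
    return buf.getvalue()

a = deterministic_zip([("pkg/__init__.py", b"")])
b = deterministic_zip([("pkg/__init__.py", b"")])
print(a == b)  # True: same content, same bytes, same checksum
```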
[13:40:31] <theuni2> ok. so.
[13:40:44] <theuni2> i think i'll consider this solved then and trust that the new checksum that pypi sees is fine
[13:40:51] <mgedmin> I wonder if you can re-upload a deleted file to pypi if it has the same checksum
[13:41:16] <theuni2> that would be bad as they're using md5.
[13:42:11] <dstufft> mgedmin: no
[13:43:33] <theuni2> mgedmin: thanks for that entertaining digression :)
[13:44:03] <theuni2> dstufft: that change with the server signatures caused all mirrors listed in pypi-mirrors.org but mine to be stuck
[13:44:09] <theuni2> i wish i had a list of contacts for those mirror operators
[13:44:50] <dstufft> theuni2: might try whois on the domains?
[13:45:14] <mgedmin> whee, http://www.pypi-mirrors.org/ suggests accessing the mirrors over plaintext http://
[13:45:18] <theuni2> yeah, well, too little time and too lazy to start that rat-hole :)
[13:45:50] <mgedmin> petition to replace pypi-mirrors.org with <font size="96">Use the CDN</font>
[13:46:07] <dstufft> mgedmin: yea well, soon pip won't connect to a http:// index unless you pass --trusted-host too
[13:46:31] <theuni2> hmm.
[13:47:58] <dstufft> mgedmin: mirrors are useful in some situations, public mirrors are probably not generally useful to the average person, though in some cases, like China, they can make a lot of sense
[13:48:20] <dstufft> (I don't think it's a coincidence that most of the mirrors are in CN)
[13:49:07] <mgedmin> hmm
[13:49:42] <dstufft> CN is an interesting problem, because they have decent bandwidth inside of CN, but the pipes in and out are heavily congested
[13:50:16] <dstufft> and Fastly doesn't have a CDN POP in China because of the way China's laws are set up: Fastly the company can become liable for what all of their customers do on the CDN, as if Fastly itself were doing it
[13:50:26] <dstufft> (is my understanding of it)
[13:50:50] <dstufft> Fastly does offer a thing where they can set up a Fastly POP custom for the customer on customer-owned hardware inside China, but that's $$$
[13:52:48] <theuni2> also i like the control i get with a really local mirror in my datacenter :)
[13:53:01] <theuni2> i think fastly does a more than decent job
[13:53:08] <theuni2> but i really hate transparent middleboxes
[13:53:22] <theuni2> we had a couple of incidents in our data center where fastly problems looked like our problems
[13:53:59] <theuni2> also, the mirrors are a nice insurance that does not require a central authority
[13:54:12] <theuni2> fastly itself may be distributed, but then again it's a single commercial entity we all start relying upon
[13:54:25] <theuni2> adding a bit of self-sustainability is nice in itself
[13:55:01] <theuni2> i should have just done it and added a self-updater right into bandersnatch
[13:55:29] <theuni2> i mean. it is mirroring newer versions of itself anyway
[13:56:43] <dstufft> the mirroring protocol is an important feature, it's really just the public mirrors that for the average person aren't generally worth it (though it's also not hard to make a mirror public if you're running one in your DC)
[13:57:37] <dstufft> which is why PEP whatever just ditched the mirror discovery protocol and not mirroring altogether :D
[13:58:03] <theuni2> yuo
[13:58:05] <theuni2> yup
[13:58:28] <theuni2> i think the public mirrors might mostly be operated by people with good intentions but not so good follow-through :)
[13:58:33] <dstufft> Yea
[13:58:35] <dstufft> that's basically it
[13:58:37] <theuni2> it just feels nice to do something publicly good
[13:58:41] <theuni2> but then it's hard to keep it ut
[13:58:43] <theuni2> it up
[13:59:03] <theuni2> the weird thing is that bandersnatch was my way to respond to the previous version that was _really_ hard to keep running
[13:59:08] <dstufft> The current pypi-mirrors.org is a pretty good example of why Fastly is a better solution than mirroring for the common case
[13:59:12] <theuni2> now you still need to update the software every now and then
[13:59:34] <theuni2> yeah, a good commercial entity usually has better follow-through
[13:59:50] <theuni2> especially if they give you something for free that they depend upon with their core business, not a side-thing
[14:00:16] <dstufft> (I <3 Bandersnatch though, and pypi-mirrors.org, between the two of them we can pretty much detect whenever we have a broken purge somewhere)
[14:00:35] <dstufft> I know we discovered a bunch of bugs in Fastly in the beginning from it
[14:00:36] <theuni2> hehe
[14:00:40] <theuni2> indeed
[14:00:50] <theuni2> heterogeneity keeps everyone on their toes :)
[14:01:19] <dstufft> I know that Openstack loves their bandersnatch mirrors too
[14:01:26] <dstufft> they run an insane number of test jobs in a day
[14:01:37] <theuni2> yeah, got quite some good feedback and really exotic edge cases from them
[14:01:42] <theuni2> i was hoping we wouldn't have those
[14:01:53] <theuni2> but luckily we're better at dealing with them than the old client
[14:02:05] <dstufft> a non trivial amount of them would fail if they relied on the CDN, not because of the CDN being bad but because the internet doesn't actually work
[14:03:28] <theuni2> reminds me of the reasoning of one of the py core guys why google runs the python core test suite the way they do
[14:03:35] <theuni2> no tolerance for false positives
[14:03:45] <theuni2> (or rather false negatives)
[14:04:37] <mgedmin> "the way they do" being ... ?
[17:53:39] <famille> hi, can someone tell me where PIP keep information on installed packages ?
[17:53:48] <famille> (wheel packages)
[17:54:19] <tomprince> .egg-info or .dist-info directories in site-packages.
[17:57:04] <famille> I don't find them... Is it true on Windows also?
[17:58:31] <famille> ah, ok independent files you mean. thanks!
[17:59:50] <famille> I hoped it was centralised in a file, apparently not
[18:33:35] <ggherdov`> Hello. I made my virtualenv with "virtualenv --no-site-packages foo", but when I run "activate" and run some python program looks like my "imports" still go to global site-packages.
[18:33:35] <ggherdov`> Running "yolk -l" from inside the env confirms that I have duplicate packages, and the global ones are "active". How do I isolate my env from the external world?
[18:37:41] <ggherdov`> virtualenv 1.5.2
[18:41:43] <Wooble> ggherdov`: are you "running" activate, or sourcing it?
[18:42:54] <ggherdov`> Wooble: sourcing
[18:43:17] <Wooble> ggherdov`: do you have PYTHONPATH set?
[18:44:02] <ggherdov`> it is set before I source activate, and after that the $PYTHONPATH value doesn't change
[18:44:14] <Wooble> yeah. Don't use PYTHONPATH, ever.
[18:44:36] <ggherdov`> ok, so should I export it to empty ?
[18:44:45] <ggherdov`> like, unset
[18:44:57] <Wooble> ideally you should remove wherever it's getting set in the first place.
[18:45:52] <ggherdov`> Wooble: thanks, unsetting $PYTHONPATH made it work.
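The behaviour behind this exchange is easy to demonstrate: CPython prepends PYTHONPATH entries to sys.path unconditionally, inside or outside a virtualenv, which is why the global packages shadowed the env's own copies. A small demonstration (the directory name is purely illustrative):

```python
import os
import subprocess
import sys

# Run a child interpreter with PYTHONPATH set and see the entry appear
# on its sys.path; a virtualenv's python behaves the same way.
env = dict(os.environ, PYTHONPATH="/tmp/leaky")
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print('/tmp/leaky' in sys.path)"],
    env=env,
)
print(out.decode().strip())  # True
```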
[23:24:53] <buck1> how do i tell pip wheel that i need to separate 2.6 and 2.7 wheels?
[23:27:10] <buck1> bdist_wheel --python-tag py26 ?
[23:36:23] <buck1> uhm i see pip installing lots of whl marked py26 into my 2.7 venv
[23:36:28] <buck1> is that expected?
[23:38:21] <dstufft> yea, for better or worse Wheels treat the pure python "py26" as >= not ==
[23:38:50] <buck1> dstufft: i mean that i've built both a py26 and py27 wheel
[23:39:08] <buck1> >= to what?
[23:39:31] <dstufft> oh
[23:39:38] <dstufft> it should probably select a py27 wheel over a py26 wheel
[23:39:46] <dstufft> it might just select whichever one it sees first now
[23:39:49] <dstufft> since it'll consider them both successful
[23:40:13] <buck1> ok yea it's doing that
[23:40:17] <buck1> it's my hack that's not
[23:41:29] <buck1> thanks
[23:44:21] <buck1> dstufft: do you happen to recall where that priority is defined?
[23:44:35] <dstufft> code wise or standard wise
[23:44:40] <buck1> codewise
[23:44:50] <buck1> some kind of sort function =/
[23:45:04] <dstufft> pip/pep425tags.py or so
[23:45:06] <dstufft> i think
[23:51:34] <buck1> i dont see where that enters into find_requirement
[23:57:54] <buck1> ah it's _link_sort_key