[00:25:25] <Yasumoto> holy moly: Most of the current tests do string comparisons of these values which will not work properly with a two-digit version number ('10.10' < '10.9' --> True).
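The pitfall Yasumoto describes is easy to demonstrate: string comparison is lexicographic, so it breaks as soon as a version component reaches two digits. A minimal sketch; the `parse` helper is hypothetical, not taken from the test suite under discussion:

```python
# Lexicographic string comparison compares character by character,
# so '10.10' sorts before '10.9' ('1' < '9' at the fourth position):
assert ('10.10' < '10.9') is True   # the wrong answer

def parse(version):
    """Split a dotted version string into a tuple of ints for comparison."""
    return tuple(int(part) for part in version.split('.'))

# Tuples of ints compare numerically, component by component:
assert parse('10.10') > parse('10.9')   # the correct ordering
```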
[00:29:49] <Moult> now i'm not on osx, but upgrading doesn't hurt
[00:30:53] <Moult> Yasumoto: by the way, your variables actually decrease the verbosity
[03:37:31] <Moult> Yasumoto: now, according to pep425, and according to what my logs are spitting out, the actual values should be cp33 / cp34, i.e. no m suffix
[03:41:35] <Moult> Yasumoto: as an example the PEP does mention "cp33m", with the m suffix, but let me do a bit more reading to see exactly when the m comes into play
[03:42:31] <Moult> Yasumoto: if you import sysconfig and do sysconfig.get_config_var('SOABI') what do you get?
[03:52:33] <Moult> Yasumoto: aha https://www.python.org/dev/peps/pep-3149/
[03:52:47] <Moult> Yasumoto: "Python implementations MAY include additional flags in the file name tag as appropriate. For example, on POSIX systems these flags will also contribute to the file name:"
[03:53:27] <Moult> Yasumoto: --with-pymalloc (flag: m ), <-- so that's what the m is for, and also "By default in Python 3.2, configure enables --with-pymalloc so shared library file names would appear as foo.cpython-32m.so . "
[03:54:07] <Moult> Yasumoto: i suspect that the m there is incorrect as it assumes that it will always be compiled --with-pymalloc
[03:57:26] <_habnabit> i thought m was the only one, but d is important too
[03:57:35] <Moult> Yasumoto: none is valid too, which explains why yours works and mine doesn't, i suspect the way gentoo manages multiple pythons had a part to play in this
[03:59:56] <Yasumoto> _habnabit: ah, gotcha, good point
[04:01:59] <Moult> so (excuse my beginner python) - do you folks think this is sufficient as a fix? abis.extend(['cp%s' % version, 'cp%sd' % version, 'cp%sm' % version, 'cp%su' % version, 'abi3'])
[04:02:28] <Moult> whoops, forgot that combinations can occur (e.g. cp33dmu)
[04:06:01] <Moult> Yasumoto _habnabit any ideas if there is an alternative to adding all the permutations of dmu?
[10:24:37] <underyx> this will reinstall all dependencies, which makes deployment take around 100-1000x the time it should, which also isn't very good
[12:28:36] <The-Compiler> I have a virtualenv on Windows 7. Now when I start the virtualenv's pip and try to install something, it tells me it's already installed in my C:\Python34\Lib\site-packages. Why does that happen?
[12:29:12] <The-Compiler> Requirement already satisfied (use --upgrade to upgrade): pylint in c:\python34\lib\site-packages
[12:33:28] <The-Compiler> and it seems to work fine on Windows 8
[12:36:15] <apollo13> looks as if system-site-packages are enabled for that venv
[12:36:21] <apollo13> wasn't that the default on windows?
[12:38:43] <The-Compiler> oh, wait, *I* do that (or my script rather) if I'm on Windows. :D
[12:39:10] <The-Compiler> so the issue is probably pylint being installed system-wide on my Win7 machine, but not on my Win8 machine
[12:40:07] <apollo13> cause it can't remove pylint from /usr…
[12:40:33] <apollo13> not sure what happens in windows, but installing the same package in the venv and the system site-packages sounds like a bad idea
[12:41:36] <The-Compiler> Well, I'd like the venv to work no matter what is installed system-wide. And I have a library (PyQt) installed which I need in the venv as well, thus the --system-site-packages
[12:42:05] <The-Compiler> on Linux I just symlink the .so's - maybe I should just copy the relevant files on Windows as well, instead of using --system-site-packages
[12:43:31] <apollo13> no idea, I don't use windows :)
[12:44:19] <The-Compiler> I wish, this is my test bot (buildbot), so that'd mean "don't support windows" :P
[12:47:28] <The-Compiler> Not uninstalling pylint at c:\python34\lib\site-packages, outside environment C:\Users\florian\buildbot\slave\win7\build\.venv
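The copy-instead-of-symlink idea The-Compiler mentions for Windows could look roughly like this. All paths and the `PyQt4*` glob are hypothetical placeholders for illustration:

```python
import glob
import os
import shutil

# Hypothetical paths; point these at the real system Python and
# virtualenv site-packages directories.
system_site = r'C:\Python34\Lib\site-packages'
venv_site = r'.venv\Lib\site-packages'

# Copy just the one binary package (e.g. PyQt) into the venv instead of
# enabling --system-site-packages, so unrelated system-wide installs
# such as pylint cannot leak into the venv. Windows lacks the symlink
# trick used on Linux, so the files are copied.
matches = glob.glob(os.path.join(system_site, 'PyQt4*'))
for path in matches:
    dest = os.path.join(venv_site, os.path.basename(path))
    if os.path.isdir(path):
        shutil.copytree(path, dest)
    else:
        shutil.copy2(path, dest)
```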
[12:47:31] <mgedmin> /tmp/first/bin/pip list => requests (2.0.0)
[12:47:39] <apollo13> mgedmin: -p onto a venv will result in the site-package from your global python, not from the other venv python
[12:47:42] <mgedmin> IOW, it's safe to pip install -U even when you use --system-site-packages
[13:25:35] <ychaouche> as I said, it's installed with setup.py develop (not install) and I can see the infomaniak-link in the virtualenv I am currently in
[13:32:35] <ychaouche> So I installed with pip install -e, and pip freeze is a little confused it seems : https://gist.github.com/ychaouche/c70afdf4e35035fdb5c7
[13:33:12] <mgedmin> yes, pip freeze really likes to output -e git:...#commithash URLs for editable packages
[13:38:31] <calston> mgedmin: okay well I recreated the virtualenv and installed the stuff again and it works, so I guess whatever package required it has dropped that
[14:48:28] <ychaouche> So my script needs some non python files, like HTML files and one configuration file.
[14:49:28] <ychaouche> I don't know where they should belong
[14:49:40] <ychaouche> I was thinking of putting the configuration file in /etc/ ?
[14:49:46] <ychaouche> but where do the HTML files go ?
[14:51:15] <mgedmin> if you want pip install to work, keep them with the source code, maybe in a subdirectory
[14:51:26] <mgedmin> if you're building debian packages, feel free to move them to /usr/share, /etc etc.
[14:52:04] <mgedmin> config files? ~/.config/yourpackage.ini (well, $XDG_CONFIG_DIR if it's defined) or /etc/yourpackage.ini or (best) allow the user to specify the config file on the command line
[14:52:31] <mgedmin> don't require a config file in /etc, maybe users will want to pip install your stuff and use it when they don't have root on the machine
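mgedmin's lookup order (command line first, then the per-user XDG directory, then /etc) can be sketched as below. Note the XDG spec's variable is `XDG_CONFIG_HOME`; the file name `yourpackage.ini` is a placeholder:

```python
import argparse
import os

def config_path(name='yourpackage.ini', argv=None):
    """Resolve a config file: a --config flag wins, then the XDG user
    config directory, then a system-wide /etc fallback."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--config', help='path to the configuration file')
    args, _ = parser.parse_known_args(argv)
    if args.config:
        return args.config
    xdg = os.environ.get('XDG_CONFIG_HOME', os.path.expanduser('~/.config'))
    user_path = os.path.join(xdg, name)
    if os.path.exists(user_path):
        return user_path
    return os.path.join('/etc', name)
```

This keeps the package pip-installable for users without root, since /etc is only the last resort.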
[15:00:20] <ychaouche> actually, i don't intend to release the final "thing", it's only to write things the right way on my machine then deploy them on one or two servers, and also to make smooth updates.
[15:01:09] <ychaouche> Since the "thing" I want to write will be a daemon, I also need to put things in /etc/init.d/ etc., so I don't know what's the right packaging to choose (debian or python)
[15:01:35] <ychaouche> maybe also a script in /usr/bin/ to launch it
[15:02:26] <ychaouche> but since the daemon is written in python, I don't know where to put the python files, so I thought of creating a python archive that would go in site-packages (dist-packages in debian)
[15:04:08] <ychaouche> also the install script should issue some post-install commands, like update-rc.d and service infomaniak start.
[15:08:06] <DanielHolth> debian packaging will be better at installing things outside of site-packages
[15:13:47] <ychaouche> How bad should I feel if I write an install.sh script that does the pip install then continues with copying the files and calling other commands ?
[15:16:47] <tos9> Sigh. I'm trying to track down a bug, and I can't quite figure out what the correct combination to reproduce it is.
[15:17:25] <tos9> I *think* the combination is 1) pip install --download 2) a private git repo 3) with no git tags 4) with a specifier that specifies a branch name
[15:18:10] <tos9> Which produces a traceback in pip's rmtree_errorhandler that's trying to remove a file in the temporary repo that looks like .git/tags.12345, when by the time shutil.rmtree runs, that file doesn't exist.
[15:18:33] <tos9> (There is one called .git/tags though.) I don't suppose anyone has any insights or has seen this before?
[15:45:54] <pf_moore> tos9: Sounds like a race condition in rmtree_errorhandler. Maybe add a check if the file exists, and if not then just return. See if that fixes the issue.
[15:47:12] <pf_moore> As shutil.rmtree isn't (can't really be) atomic, that's probably the best we could do.
[15:50:05] <tos9> pf_moore: it does, but it's consistent -- it can't be atomic but pip could ignore the error presumably?
[15:50:53] <tos9> oh sorry, that's what you said :)
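pf_moore's suggested fix (bail out of the error handler if the file has already vanished) can be sketched like this. It is a sketch of the idea, not pip's actual handler:

```python
import os
import shutil

def rmtree_errorhandler(func, path, exc_info):
    """onerror hook for shutil.rmtree: if the path has already vanished
    (a race, as with git's transient .git/tags.NNNN files), ignore it;
    otherwise make it writable and retry the failed operation."""
    if not os.path.exists(path):
        return  # already gone: nothing left to remove
    os.chmod(path, 0o700)  # clear read-only bits, then retry
    func(path)

# usage:
# shutil.rmtree(tmpdir, onerror=rmtree_errorhandler)
```

Since shutil.rmtree cannot be atomic, tolerating an already-deleted entry is about the best the handler can do.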
[16:09:01] <twilder6629> Hey guys, seeing something strange with pip installation.
[16:10:17] <twilder6629> We deploy on closed networks so we cannot use a package index unless we bring one up ourselves. Right now the solution is to ship a bunch of wheels in an installer and use --find-links and --no-index to satisfy deps. I just made a custom package and strangely cannot get it picked up when I call pip install on the dir it resides in.
[16:10:30] <twilder6629> Wondering if my tagging scheme is off or if I'm missing something.
[16:10:47] <twilder6629> pip install --find-links ~/ansible_wheels --no-index --pre mass-install is the call
[16:12:14] <twilder6629> if i specify the package directly as the target of pip it installs correctly and has the mass-install name associated with it
[16:13:17] <twilder6629> the name of the package in setup.py is mass-install
[16:13:34] <twilder6629> curious about why --find-links can't pick up the wheel
[16:15:21] <twilder6629> The package is built with setuptools rather than distutils and has a version of 2.6.1.2 and a dev tag of .commit.<git_commit_hash>
[16:15:22] <bowlofgrapes> hey, so remember that pulp plugin i was talking about to sync from PyPI? we released the first version of it today! http://pulp-python.readthedocs.org/en/0.0-release/ however, it doesn't have the sync feature just yet
[16:15:44] <bowlofgrapes> but you can use it to upload and publish python modules, and it publishes in a way that is compatible with pip
[16:34:38] <ionelmc> twilder6629: shouldn't find-links be a URI ?
[16:42:03] <twilder6629> whereas pip install --find-links file:///root/ansible_wheels --no-index --pre mass-install is the failing call
[16:42:43] <twilder6629> interestingly it seems to be discovering the package
[16:43:43] <twilder6629> Could not find a version that satisfies the requirement mass-install (from versions: 2.6.1.2.commit.544f732dd4f3a6d8a65d9bc15476e05745f7b2c9)
[16:44:00] <twilder6629> maybe that commit tag can't be parsed even as a development tag
[16:46:49] <dstufft> twilder6629: what version of pip
[16:54:52] <twilder6629> the purpose of the commit tags is to implement parallel build caching for unchanged packages
[16:55:03] <twilder6629> we have something like 100 packages to build
[16:55:21] <twilder6629> and whenever we build one, we upload to a local package index and tag it with the git commit of the last change to that package's directory
[16:55:36] <twilder6629> then when we build, if a package matching the last commit to one of ours exists in the package index
[16:55:49] <twilder6629> we grab the prebuilt binary instead of rebuilding the package
[16:55:56] <twilder6629> which makes running parallel builds a LOT faster
[16:56:35] <twilder6629> so the requirement is to be able to communicate a git commit in the version identifier somehow that we can use when installing a package, but at the same time be able to wildcard with just the package name
[16:57:11] <twilder6629> it mostly matters for things with lots of C extensions that take a while to build
[16:57:13] <dstufft> You can do ``pip install 1.0+foo``, and it'll install _only_ 1.0+foo, there's no wild card if you specify the package version like that
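The `+` separator dstufft refers to is a PEP 440 local version label, which is what the dotted `.commit.<hash>` suffix was missing. A rough sketch of the distinction; the real PEP 440 grammar is far more complete than this regex:

```python
import re

# Simplified shape of a version with an optional local label:
# a dotted release segment, then "+" plus dot-separated alphanumerics.
VERSION_RE = re.compile(
    r'^\d+(\.\d+)*'                    # release segment, e.g. 2.6.1.2
    r'(\+[a-z0-9]+(\.[a-z0-9]+)*)?$'   # local label, e.g. +commit.544f732
)

assert VERSION_RE.match('2.6.1.2+commit.544f732')      # parses
assert not VERSION_RE.match('2.6.1.2.commit.544f732')  # the failing form
```

Swapping the `.` before `commit` for a `+` is the "one character swap" twilder6629 mentions below.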
[17:15:28] <dstufft> ionelmc: possibly, I'm not sure what i'm doing this weekend. We were supposed to go to a concert for my daughter's birthday present but that got canceled.. so idk yet if we're going to do something else or not
[17:17:24] <twilder6629> dstufft, the change works, is tested, and is checked in. Thanks again! Best one character swap ever.
[21:50:43] <charettes> I've been following https://packaging.python.org/en/latest/distributing.html#universal-wheels to create a universal wheel for https://github.com/charettes/django-sundial but `python setup.py bdist_wheel --universal` keeps building me a platform specific one.
[21:51:01] <charettes> I've got latest pip and setuptools installed
[21:51:58] <charettes> I'm on Ubuntu 14.04 LTS (linux-x86_64)
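For reference, the packaging guide charettes links also documents making the flag permanent in setup.cfg, which avoids relying on the per-invocation `--universal` option. This is the documented configuration, not a diagnosis of the problem above:

```ini
; setup.cfg, next to setup.py
[bdist_wheel]
universal = 1
```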