PMXBOT Log file Viewer


#pypa logs for Monday the 8th of February, 2016

[10:44:14] <AlecTaylor> hi
[10:45:36] <AlecTaylor> So I have a github repo package in my requirements.txt. To programmatically install its requirements.txt, is there a solution that doesn't involve cloning/curling and unwrapping its requirements.txt manually, for each line of my top-level requirements.txt?
[11:02:17] <mgedmin> that's what install_requires in setup.py is for!
[11:07:53] <AlecTaylor> mgedmin: Yes, but what about requirements.txt?
[11:08:27] <mgedmin> those are for extras and things that are nice-to-have but not required
[11:08:35] <mgedmin> and they're not intended to be nested
[11:08:54] <AlecTaylor> Because I can't imagine you'd do `install_requires=with open('requirements.txt') as f: f.readlines()`
[11:09:50] <AlecTaylor> mgedmin: Does install_requires support all the fancy #egg= and DVCS stuff that requirements.txt supports?
[11:10:01] <mgedmin> probably not
[11:10:08] <AlecTaylor> Yeah, that's what I thought
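[An editorial aside on the pattern AlecTaylor alludes to: his one-liner isn't valid Python, and a naive `f.readlines()` would also pass through comments, blank lines, and VCS/editable lines that `install_requires` can't handle. A minimal sketch of doing that read properly, assuming a plain requirements.txt; the helper name is hypothetical:]

```python
# Hypothetical helper: read a plain requirements.txt into a list
# suitable for setup()'s install_requires. Skips comments, blank
# lines, and VCS/editable/nested lines (git+..., -e, -r), which is
# part of why requirements files don't map cleanly onto setup.py.
from pathlib import Path

def parse_requirements(path="requirements.txt"):
    reqs = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank line or comment
        if line.startswith(("-e", "-r", "git+")):
            continue  # editable/nested/VCS lines: not valid in install_requires
        reqs.append(line)
    return reqs
```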
[18:49:06] <exploreshaifali> hello! I use virtualenvwrapper for django development. Today I observed a weird thing: when I run `python manage.py runserver` and check for python processes running in another terminal, I get two python processes running with the command `python manage.py`, one inside the virtualenvwrapper env and another outside it
[18:49:30] <exploreshaifali> I am sure I didn't run any other server for django
[18:49:39] <exploreshaifali> any clues why is this happening?
[18:53:45] <Ivoz> exploreshaifali, can you repeatably observe this behaviour
[18:54:08] <exploreshaifali> Ivoz, since today morning I am checking, it is there
[18:54:36] <exploreshaifali> I have tested it with fresh new virtualenvwrapper env
[18:54:52] <exploreshaifali> still it behaved in same manner
[18:55:12] <Ivoz> not sure what you mean 'inside virtualenvwrapper'
[18:55:22] <Ivoz> virtualenvwrapper is mostly, just some shell scripts
[18:55:40] <exploreshaifali> let me dpaste you result here
[18:57:06] <exploreshaifali> Ivoz, http://dpaste.com/1QN8TKN
[18:57:18] <exploreshaifali> see second and third process
[18:57:32] <exploreshaifali> both say ./manage.py runserver
[18:57:49] <exploreshaifali> but I used command only once inside new_zaya virtualenv
[18:59:16] <Ivoz> you can try it with a manual virtualenv
[19:01:13] <Wooble> exploreshaifali: you want --noreload (and probably #django)
[19:02:14] <exploreshaifali> Wooble, you mean to run `./manage.py runserver --noreload`?
[19:02:27] <exploreshaifali> Ivoz, yea, I was just trying manual virtualenv :)
[19:02:42] <Wooble> exploreshaifali: well, if you don't want another process, yes. But then obviously nothing will reload.
[19:02:55] <Ivoz> maybe as Wooble says it is an explicit behaviour of manage.py
[19:02:58] <exploreshaifali> hmm
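[For context: the two processes come from runserver's autoreloader, which runs a parent watcher that spawns a child to actually serve requests; `--noreload` skips the watcher. A minimal sketch of that parent/child pattern (not Django's actual code, though Django does use a `RUN_MAIN` environment marker and restart-on-exit-code-3 convention like this):]

```python
# Sketch of an autoreload watcher: the parent re-execs this script
# as a child and waits; while the child runs, `ps` shows two python
# processes with the same command line, which matches what
# exploreshaifali observed.
import os
import subprocess
import sys

RELOADER_ENV = "RUN_MAIN"

def is_reloader_child(environ=os.environ):
    """True in the child process that actually serves requests."""
    return environ.get(RELOADER_ENV) == "true"

def restart_with_reloader():
    """Parent loop: spawn the worker child until it exits cleanly.

    Conventionally the child exits with code 3 to request a restart
    (e.g. after a source file changed); any other code ends the loop.
    """
    while True:
        env = {**os.environ, RELOADER_ENV: "true"}
        code = subprocess.call([sys.executable] + sys.argv, env=env)
        if code != 3:
            return code
```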
[19:03:23] <exploreshaifali> so this causes a problem because I am implementing caching for my app
[19:03:55] <Ivoz> don't use `python manage.py runserver` as the way you run your app
[19:04:03] <Wooble> that too.
[19:04:23] <Ivoz> exploreshaifali, anyway, also as Wooble says for now you want #django, not #pypa
[19:04:40] <exploreshaifali> okay
[19:04:43] <exploreshaifali> Thanks!
[19:04:54] <Ivoz> they can probably tell you all about caching there
[19:05:05] <exploreshaifali> Ivoz, but what to use if not `python manage.py runserver`?
[19:05:22] <exploreshaifali> you want me to use it with --reload?
[19:05:31] <Ivoz> run a webserver that serves wsgi
[19:05:41] <Wooble> exploreshaifali: that's really only for development.
[19:05:57] <exploreshaifali> Wooble, yea that makes sense
[19:06:05] <exploreshaifali> all right
[19:06:09] <exploreshaifali> Thanks guys :)
[19:06:11] <Ivoz> https://docs.djangoproject.com/en/1.9/howto/deployment/wsgi/
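[For context on "run a webserver that serves wsgi": a WSGI application is just a callable following PEP 3333. Django generates one in its project's wsgi.py, and a production server such as uwsgi or gunicorn loads it. A self-contained sketch using only the stdlib's reference server, for illustration only:]

```python
# Minimal WSGI application (PEP 3333), served by the stdlib's
# wsgiref -- fine for experiments, but use uwsgi/gunicorn/etc.
# behind a real web server in production.
from wsgiref.simple_server import make_server

def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, WSGI\n"]

if __name__ == "__main__":
    # Serve on localhost:8000, like `manage.py runserver` but with
    # no autoreload watcher process.
    with make_server("", 8000, application) as httpd:
        httpd.serve_forever()
```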
[19:07:45] <marcoamorales> I've fallen into this pitfall, dropping link in case anyone has any good ideas https://github.com/pypa/pip/issues/231
[19:09:00] <tos9> marcoamorales: Personally I agree with carljm.
[19:09:03] <Wooble> "You have to use sudo with pip on a single-purpose server" seems like a misguided statement to start with.
[19:09:05] <Ivoz> marcoamorales, a properly configured ~/.ssh/config should work
[19:09:40] <tos9> marcoamorales: Every tool under the sun shouldn't really come with its own way of interacting with ssh
[19:10:06] <tos9> marcoamorales: If you want a solution for multiple private repos, you can use Host aliases.
[19:10:28] <tos9> marcoamorales: Beyond that I think it's GitHub's issue to fix, to allow re-using deploy keys across repos.
[19:10:49] <tos9> (At $WORK we use the former, host aliases)
[19:11:39] <Ivoz> or even a GIT_SSH_COMMAND specifying `ssh -F <configfile>` if you wanted
[19:11:56] <tos9> Ivoz: That won't work on its own.
[19:12:11] <Ivoz> why not
[19:12:23] <tos9> Ivoz: Deploy keys will successfully auth under SSH
[19:12:33] <tos9> they just will get denied the right to clone the repo after the connection
[19:13:03] <tos9> So if you want to sniff which repo is about to be connected to that would work
[19:13:09] <tos9> but you can't "only" use a config file
[19:13:49] <Ivoz> oic, all to the one domain
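[The host-alias approach tos9 describes can look like the following in `~/.ssh/config` (repo and key file names here are hypothetical): each alias resolves to github.com but presents its own deploy key, and the clone URL uses the alias in place of github.com:]

```
# ~/.ssh/config -- one alias per GitHub deploy key
Host github-repo-one
    HostName github.com
    User git
    IdentityFile ~/.ssh/deploy_key_repo_one
    IdentitiesOnly yes

Host github-repo-two
    HostName github.com
    User git
    IdentityFile ~/.ssh/deploy_key_repo_two
    IdentitiesOnly yes
```

[A clone then uses e.g. `git clone git@github-repo-one:someorg/repo-one.git`; `IdentitiesOnly yes` stops ssh from offering other loaded keys first, which matters because GitHub authenticates the key before checking repo access.]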
[19:15:17] <Ivoz> marcoamorales, just shallow clone them with git commands, then pip install from the cloned dirs
[19:16:30] <Ivoz> exploreshaifali, i would recommend uwsgi, should be simple enough starting out https://uwsgi.readthedocs.org/en/latest/tutorials/Django_and_nginx.html
[19:29:23] <exploreshaifali> Ivoz, okay, looking.... Thanks!