PMXBOT Log file Viewer

#pypa-dev logs for Wednesday the 30th of January, 2019

[01:09:25] <techalchemy> njs: I think I may not have understood the proposal
[01:11:05] <njs> techalchemy: wasn't even a proposal really, just me trying to figure out what a resolver could do with a bare uri vs a package-name+uri
[01:12:15] <techalchemy> njs: I mean the original one, i'm not totally clear on whether the best guess is derived from user input or whether that includes version info / how we are avoiding downloading anything
[01:12:28] <techalchemy> especially the last point
[01:13:38] <dstufft> The answer is roughly it depends on semantics
[01:13:57] <dstufft> when we wrote PEP 440 we weren't sure what the semantics were going to be for that part yet really
[01:14:05] <dstufft> so we hedged our bets
[01:14:14] <njs> techalchemy: I just meant, you have a resolver that's trying to satisfy some requirements, and (this is the main point) is trying to do this in a way where it does as much as it can before it actually starts downloading packages or building them
[01:14:43] <techalchemy> njs: in practice, we tried that and I actually don't bother anymore
[01:14:54] <techalchemy> the minute I have a url i am going to resolve against I just resolve it
[01:15:46] <njs> nod
[01:16:12] <techalchemy> it's probably not that great for people on slow internet where the contents aren't cached yet
[01:16:29] <njs> ...actually even if you were trying to be clever, probably it would make sense to download all the '@' urls first before starting to download sdists, and that would give the same effect anyway
[01:17:11] <njs> none of this really matters of course because even if the package name is technically redundant, no-one's going to bother going back and revising the PEP to remove it
[01:17:21] <techalchemy> njs: I've hacked up pipenv's implementation currently such that it actually does precisely that, any non-'named' requirement gets resolved first and separately
[01:17:21] <dstufft> It would save some small amount of time if you discover two @ urls for the same dependency pointing at different URLs!
[01:17:49] <njs> dstufft: ha, true
[01:17:58] <dstufft> All about dem edge cases
[01:18:21] <dstufft> although I'd probably argue for the redundancy just for the human side of things
[01:18:25] <techalchemy> dstufft: more importantly it exposes to the rest of the dependencies the _other_ direct dependencies and their respective names so we don't try to install from pip or whatever
[01:18:40] <techalchemy> pypi*
[01:19:26] <dstufft> it's a lot nicer to see foo @ <url> and know it's for foo, than to have to try to mentally parse that out of an URL, assuming it even exists in the URL and then human beings have to download the file and extract it to even tell
[01:19:36] <dstufft> even if it was 100% useless for the tooling
[01:19:58] <njs> fairy nuff
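The `foo @ <url>` form dstufft describes above is a PEP 508 "direct reference", and the `packaging` library parses it directly. A minimal sketch (the project name and URL here are made up for illustration):

```python
from packaging.requirements import Requirement

# PEP 508 direct reference: the name is stated up front, so a resolver
# (or a human reading a requirements file) can tell what the URL
# provides without downloading and inspecting the archive.
req = Requirement("foo @ https://example.com/foo-1.0.tar.gz")
print(req.name)  # "foo"
print(req.url)   # "https://example.com/foo-1.0.tar.gz"
```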
[01:20:21] <techalchemy> that I agree with, although it actually made my parser more annoying because I am just splitting stuff
[01:20:49] <dstufft> you're not using packaging?
[01:21:29] <dstufft> (I'm pretty sure all of these things are largely post hoc rationalizations though, and I think the real answer as to why PEP 440 did it that way is when I wrote it, dealing with unnamed dependencies was annoying in pip for some reason I can't recall, so I made sure it wasn't unnamed)
[01:21:29] <techalchemy> i have a full reimplementation of pip's logic + additional logic
[01:21:58] <dstufft> I'm sure curious why you rewrote version specifier parsing
[01:22:03] <dstufft> I'm just*
[01:22:08] <cooperlees> +1
[01:22:36] <cooperlees> Everywhere I've ever done version comparisons etc. I've always used packaging
[01:23:00] <dstufft> It's not _wrong_ to do that, I'm just curious the rationale :)
[01:23:11] <dstufft> e.g. did packaging fail in some way
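For reference, the kind of version comparison being discussed looks like this with `packaging` (a minimal sketch; the version numbers are arbitrary examples):

```python
from packaging.version import Version
from packaging.specifiers import SpecifierSet

# PEP 440-aware comparison: "1.10" is newer than "1.9",
# unlike a plain string comparison, which would sort it first.
assert Version("1.10") > Version("1.9")

# SpecifierSet checks a candidate version against a requirement's
# version constraints.
spec = SpecifierSet(">=1.4,<2.0")
print("1.4.2" in spec)  # True
print("2.0" in spec)    # False
```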
[01:23:14] <techalchemy> oh
[01:23:19] <techalchemy> I use packaging for all of that
[01:23:26] <dstufft> ah gotcha
[01:23:30] <dstufft> I misunderstood then
[01:24:04] <techalchemy> I have a specific api use case -> Requirement.from_line() and Requirement.as_pipfile()
[01:24:20] <techalchemy> (and the inverse)
[01:25:33] <techalchemy> i do use packaging extensively, even to represent the actual requirement objects internally, but packaging doesn't like representations that aren't pep508 compliant but which contain urls
[01:26:33] <techalchemy> and i needed my parser to handle vcs uris both in the shorthand syntax of git+git@host:user/repo.git#egg=name and the full syntax + all the extraneous stuff
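To illustrate the gap techalchemy is describing: `packaging`'s `Requirement` parser only accepts PEP 508 strings, so pip's VCS URL form is rejected outright, and the project name has to be dug out of the `#egg=` fragment by hand. A rough sketch, assuming a hypothetical repo URL (pipenv's real parser handles many more variations, including the scp-style `git+git@host:user/repo.git` shorthand):

```python
from urllib.parse import parse_qs, urlparse

from packaging.requirements import InvalidRequirement, Requirement

# A VCS requirement in pip's URL form (hypothetical repo) -- not valid
# PEP 508, so packaging's parser rejects it outright.
line = "git+https://github.com/user/repo.git#egg=name"
try:
    Requirement(line)
except InvalidRequirement:
    pass  # expected: no "name @" prefix, and "git+" is not a valid name

# Hand-rolled fallback: recover the project name from the #egg= fragment.
fragment = urlparse(line).fragment  # "egg=name"
name = parse_qs(fragment).get("egg", [None])[0]
print(name)  # "name"
```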
[01:26:39] <njs> ehashman: šŸ‘‹
[01:27:55] <dstufft> techalchemy: FWIW we can add "Legacy" stuff to packaging too
[01:28:03] <dstufft> if that makes things easier or make more sense
[01:28:08] <dstufft> see for example: the version module
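The "Legacy" support in the version module that dstufft points to is the fallback for version strings that don't conform to PEP 440. A sketch of the behavior; note that in `packaging` releases well after this conversation, `LegacyVersion` was removed and parsing an invalid version raises instead, which the example accounts for:

```python
from packaging import version

# "french toast" is not a valid PEP 440 version string.
try:
    parsed = version.parse("french toast")
    # Older packaging falls back to a LegacyVersion object here.
    is_legacy = type(parsed).__name__ == "LegacyVersion"
except version.InvalidVersion:
    # packaging >= 22 removed LegacyVersion and raises instead.
    is_legacy = False

# Valid PEP 440 strings parse to a Version either way.
print(version.parse("1.0.post1"))
```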
[01:28:34] <techalchemy> dstufft: I was about to say no it's fine I just use the `Link` object
[01:28:39] <techalchemy> but uh, yeah I guess that's a pip internal
[01:29:20] <dstufft> I'm hoping soon to have more time to start refactoring pip a lot more heavily to try and clean a lot of this stuff up
[01:39:09] <techalchemy> dstufft: `Link` objects are super useful and kind of independent, probably a good candidate to be in packaging
[01:39:27] <techalchemy> i rely on them pretty heavily
[20:10:42] <cooperlees> EWDurbin: Thanks for codecov - Do I now just do the tax updates asking for the env vars and I should be good?
[20:10:47] <cooperlees> + requirements?
[20:11:01] <cooperlees> *tox update even
[20:11:07] <cooperlees> Not looking forward to tax
[20:16:59] <EWDurbin> cooperlees: I’m not actually sure!
[20:17:59] <EWDurbin> cooperlees: here is their sample Python project: https://github.com/codecov/example-python
[20:18:21] <cooperlees> Yeah - cool - that's what I have in a tab - Will do.
[20:18:24] <EWDurbin> And I suppose you could reference warehouse, but dstufft set that up :)
[20:18:27] <cooperlees> Basically what I explained in the email
[20:18:39] <cooperlees> Cheers