PMXBOT Log file Viewer

#pypa logs for Thursday the 28th of March, 2019

[12:42:14] <palate> hello :)
[12:42:23] <palate> is it the right place to ask for information about manylinux?
[15:26:07] <di_codes> palate: Sure
[15:28:21] <palate> di_codes: when I ldd my binary built with manylinux1, I see linux-vdso.so.1 and /lib64/ld-linux-x86-64.so.2
[15:28:26] <palate> di_codes: is that normal?
[15:28:44] <palate> They are not listed as "allowed shared libraries" in the PEP 513 definition
[15:46:55] <ngoldbaum> linux-vdso.so.1 isn't actually in the binary, it's injected into the binary by the kernel
[15:47:08] <ngoldbaum> are you using ldd on the manylinux image?
[15:47:37] <ngoldbaum> i think same with ld-linux-x86-64
[16:30:41] <palate> ngoldbaum: not sure what you mean about ldd. I am building my binary in the manylinux container, and I link static libs there
[16:31:11] <palate> ngoldbaum: but ldd doesn't show anything that is in the binary, but the shared libs that it will try to link at runtime, right?
[17:25:25] <ngoldbaum> sorry, i mean the shared libs
[17:25:35] <ngoldbaum> i'm asking if you're running "ldd" inside the docker container
[17:25:45] <ngoldbaum> or are you running it outside the docker container using the built wheel?
[17:26:06] <ngoldbaum> for the latter those extra libraries make sense, they're injected by your linux kernel and are expected
[17:26:20] <ngoldbaum> basically if auditwheel passes on the manylinux image you should be good to go
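The distinction above can be sketched in Python. This is a hedged illustration, not what auditwheel actually runs: the whitelist below is a subset of the manylinux1 list from PEP 513, and `SAMPLE_LDD` is made-up `ldd` output. It separates the kernel-injected and loader entries from real shared-library dependencies:

```python
import re

# Subset of the manylinux1 whitelist from PEP 513 (illustrative, not exhaustive).
PEP513_WHITELIST = {
    "libc.so.6", "libm.so.6", "libdl.so.2", "libpthread.so.0",
    "librt.so.1", "libstdc++.so.6", "libgcc_s.so.1",
}

# Entries that ldd reports but that are not ordinary shared libraries:
# linux-vdso is injected by the kernel, ld-linux is the dynamic loader itself.
KERNEL_INJECTED = re.compile(r"linux-vdso\.so|ld-linux")

def check_ldd(ldd_output: str):
    """Return the dependencies that are neither kernel-injected
    nor on the (partial) PEP 513 whitelist."""
    unexpected = []
    for line in ldd_output.strip().splitlines():
        name = line.strip().split()[0]
        if KERNEL_INJECTED.search(name):
            continue  # normal and expected, per the discussion above
        if name not in PEP513_WHITELIST:
            unexpected.append(name)
    return unexpected

# Made-up ldd output for a manylinux1-style binary:
SAMPLE_LDD = """\
    linux-vdso.so.1 (0x00007ffd0b5fe000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f1a2c000000)
    libc.so.6 => /lib64/libc.so.6 (0x00007f1a2bc00000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f1a2c400000)
"""

print(check_ldd(SAMPLE_LDD))  # → [] : nothing outside the whitelist
```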
[17:33:32] <palate> ngoldbaum: no, I'm running ldd on the host, outside the docker container
[17:33:57] <palate> ngoldbaum: good to know, thank you :-)
[17:34:50] <palate> Then if I may: I built this c++ binary with manylinux, which runs a server. I would like to start it in background from python, in its own process, so that python can then communicate with it (it's a grpc server)
[17:35:02] <palate> Am I right to be looking into Cython for that?
[17:35:54] <palate> and am I right to try to start a c++ executable from python, or should I make a library instead and run that from python?
[17:36:44] <ngoldbaum> it depends on what you want to do
[17:36:52] <ngoldbaum> cython could be good, although IMO cython's C++ wrapping isn't great
[17:37:01] <ngoldbaum> pybind11 is what i see people going for these days for wrapping C++
[17:37:11] <ngoldbaum> although the approaches are very different so it depends on what you're doing
[17:37:40] <ngoldbaum> that's if you want to communicate at the C level
[17:37:51] <palate> so the way it works right now is that you have to start this server (i.e. run the binary), and then from python you can connect to it over the network
[17:37:56] <ngoldbaum> you could also communicate via IPC or some other mechanism
[17:38:01] <ngoldbaum> so yeah, you're using IPC now
[17:38:15] <ngoldbaum> with cython or pybind11 you'd make python wrappers for your C++ library
[17:38:20] <ngoldbaum> and call the library directly at the C level
[17:38:24] <palate> ngoldbaum: not IPC but over the network. Still, same principle
[17:38:25] <ngoldbaum> totally different approach
[17:38:34] <palate> right, but I need to run that server somehow
[17:38:44] <ngoldbaum> sure, just trying to explain what cython is
[17:38:48] <ngoldbaum> it might not be what you want
[17:38:53] <palate> right
[17:39:42] <palate> so would there be a way to just run the binary in its own process from python, and have all that in one wheel?
[17:40:03] <ngoldbaum> i've never done something like that and don't know of any examples offhand
[17:40:07] <palate> ideally people would `pip install myproject`, and then run `import myproject; myproject.startServer()` or something like that
[17:40:13] <palate> ngoldbaum: I see
[17:40:27] <ngoldbaum> startServer() would need to call another binary in e.g. a subprocess?
[17:40:33] <palate> because the alternative is to expose one function of the library, say "startServer()", and wrap it using pybind11
[17:41:11] <palate> ngoldbaum: yes, the python startServer() would run my c++ binary, which basically runs its own c++ start_server()
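The "run the binary in its own process" idea discussed above can be sketched with `subprocess`. The `start_server` name is hypothetical (it mirrors the `startServer()` in the conversation), and the sketch substitutes a Python one-liner for the real C++ gRPC binary so it runs anywhere:

```python
import atexit
import subprocess
import sys

def start_server(binary, *args):
    """Launch the bundled server binary in its own process and return the
    Popen handle, so Python can then talk to it over the network.
    (Hypothetical API; `binary` would normally point at the C++ executable
    shipped inside the wheel.)"""
    proc = subprocess.Popen([binary, *args],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    # Make sure the server doesn't outlive the Python process.
    atexit.register(proc.terminate)
    return proc

# Stand-in "server": a sleeping Python process instead of the C++ binary.
proc = start_server(sys.executable, "-c", "import time; time.sleep(60)")
print(proc.poll())  # → None: the server is running in the background
proc.terminate()
```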
[17:41:15] <ngoldbaum> another way to do it would be to make a wheel for the C++ thing and then depend on that
[17:41:51] <ngoldbaum> or a linux package or whatever
[17:42:04] <palate> but the linux package could not be installed by pip, right?
[17:42:14] <ngoldbaum> no, just like any other C++ thing
[17:42:41] <palate> I'd like to have it as a pip package, so that I don't need to explain to people that they need to run the server separately
[17:43:07] <ngoldbaum> sure, that makes sense
[17:43:45] <palate> but then I wonder if I need to wrap my c++ library with pybind11, or if I can "just run" the binary in its own process without any wrapping
[17:44:01] <palate> It's the first time I try to embed c++ inside a python package
[17:44:03] <ngoldbaum> if it's installed, sure
[17:44:32] <palate> can it be installed through pip? Say I add it to the "extensions", it should end up being installed somewhere on the system, right?
[17:45:14] <ngoldbaum> i think the wheel-building machinery will "statically" include your C++ stuff into the wheel if your package declares it has a wrapper for it
[17:45:47] <ngoldbaum> i think that's what auditwheel does?
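One hedged way to ship a prebuilt server binary inside the wheel is `package_data` in a setuptools build. The project and file names below are made up, and note that auditwheel's actual job is grafting shared-library dependencies into the wheel and rewriting RPATHs; a standalone executable has to be included explicitly, as sketched here:

```python
# setup.py (illustrative fragment; "myproject" and the binary name are made up)
from setuptools import setup

setup(
    name="myproject",
    version="0.1.0",
    packages=["myproject"],
    # Ship the prebuilt server binary alongside the Python code, so that
    # `pip install myproject` is enough and no separate install is needed.
    package_data={"myproject": ["bin/grpc_server"]},
    zip_safe=False,  # the binary must exist as a real file on disk
)
```

At runtime, the Python side would locate `bin/grpc_server` relative to the installed package (e.g. via `importlib.resources`) before launching it.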
[17:45:52] <palate> hmm, which means that I need to go for the wrapper around the library?
[17:46:05] <ngoldbaum> if you want to use that pip-only approach
[17:46:09] <ngoldbaum> you could also tell people to use conda
[17:46:26] <ngoldbaum> and make conda packages for your C++ thing and your python thing
[17:47:00] <ngoldbaum> pip is for installing python things, not C++ things, so if you'd rather the C++ server be a separate package that people can install and not bundle it in the wheel then you don't really want to use pip
[17:47:19] <ngoldbaum> you want to use the system package manager (since C/C++ don't really have language-specific package managers)
[17:48:58] <palate> the good thing about the pip-only approach is that it reaches a lot of different platforms at once. If I have to distribute my server through all the main package managers (pacman, yum, apt, ...), that sounds harder to maintain
[17:49:18] <palate> and I'm sure some people will complain that the python code doesn't work because they forgot to start the server manually xD
[18:02:38] <ngoldbaum> palate: i guess the pyqt wheel might be worth looking at? since it bundles qt5
[18:10:50] <palate> good point, thanks!
[20:44:40] <njs> palate: yeah, linux-vdso.so.1 isn't actually a library, it's a name for some code that the kernel will inject into the process. and ld-linux-x86-64.so.2 isn't a library, it's the code that loads your executable and its libraries into memory.
[20:44:59] <njs> palate: I guess it's a bit confusing that the PEP doesn't mention them, but in any case they are totally normal and expected :-)
[21:13:00] <palate> njs: great :)