If taken to either extreme it can become problematic. On one hand, if you dynamically link everything, you can end up with huge issues when you try to upgrade one of those common dependencies. On the other hand, if you statically link everything (or keep separate copies of dynamically linked libraries), you use a lot more disk and memory.
So the sensible thing is to take the middle road. Dynamically link all of your system packages, like your desktop environment, core utilities, etc., and containerize the rest of your apps. That way all of your riskier applications (closed source, or stuff with a big attack surface like a browser) get a layer of security between them and the rest of the OS, and also a separate set of libraries that the vendor ships. You’ll pay a small penalty for duplicate libraries, but you should only have a handful of them.
I think every containerized application should have duplicate libs. You want these exposed applications to have whatever the vendor has vetted, and you want to make sure they’re only interacting with other containerized libs.
This has basically been solved for decades with package management. Dependencies are in a database and things are rebuilt accordingly.
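That database is also queryable from code. A minimal sketch using the python3-apt bindings (assuming python3-apt is installed; the package name queried is just an example):

    # Minimal sketch: walk the declared dependencies of a package via python3-apt.
    # Assumes python3-apt is installed; "firefox-esr" is just an example package name.
    import apt

    cache = apt.Cache()
    candidate = cache["firefox-esr"].candidate   # the version apt would install

    for dep in candidate.dependencies:           # each entry may be an or-group
        print(" | ".join(
            "{} {} {}".format(base.name, base.relation, base.version).strip()
            for base in dep.or_dependencies
        ))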
Some things are already run in a container, without lib duplication. I agree that there is an argument for more network-facing things to be run like that.
This isn’t just static vs dynamic, but the whole app folder thing again.
Far too often this whole thing is just an excuse to avoid packaging properly. Instead they gift-wrap their environment of old libs. Closed stuff has no choice, so it always champions ways of shipping with old libs. It really pisses me off when open stuff does it so they can avoid the work of porting to the current version of, say, Python. When it’s Docker, it’s often most of some hacked old Debian/Ubuntu. It’s the exact opposite of “reproducible builds” and means that software will never make it into things like Buildroot or Yocto. Never mind Debian/etc. proper.
Closed source could document system dependencies; it just can’t be rebuilt on demand to target a different set of libraries. So it’s usually easier to give it a separate set of libraries than to expect the system to accommodate it.
porting to the current version of say, Python
It kinda goes both ways. To properly work with package management, an app needs to support the oldest and newest versions of a library in popular distros. So for Python, that may mean Python 2.x and 3.x around the launch of Python 3. Supporting both is possible; it’s just more work for the developers for what could be considered a pretty minor benefit.
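Supporting both typically meant carrying a small compatibility shim. A rough sketch of the pattern (hypothetical module, not taken from any particular project):

    # Hypothetical compatibility shim of the kind libraries used to straddle Python 2 and 3.
    from __future__ import print_function   # same print() behaviour on both interpreters

    import sys

    PY2 = sys.version_info[0] == 2

    if PY2:
        text_type = unicode                  # noqa: F821 - only exists on Python 2
        from StringIO import StringIO
    else:
        text_type = str
        from io import StringIO

    def to_text(value, encoding="utf-8"):
        """Return a text string on either interpreter."""
        if isinstance(value, bytes):
            return value.decode(encoding)
        return text_type(value)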
This isn’t just a Python or an interpreted-language thing; it also happens for compiled shared libraries. If you need a feature from the latest libc, Debian can’t ship your package until it ships a new enough libc, but maybe it’ll ship with Ubuntu or Arch.
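You can see which libc a given box pins you to straight from the standard library; on a stable distro it will sit well behind whatever a rolling release ships (the version in the comment is just an example):

    # Report which glibc the running interpreter is linked against (standard library only).
    import platform

    lib, version = platform.libc_ver()
    print(lib, version)    # e.g. "glibc 2.31" on Debian 11, newer on a rolling distro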
That said, most “system” packages are willing to go through this effort, so I expect things like KDE, git, GNU utils, etc. to all use the same set of shared libraries. I think browsers are special enough that they should be containerized; if only because of the large attack surface, you should use the exact libraries the vendor recommends instead of an older or newer one that happens to work.
Shipping your own dependencies should absolutely be the exception, not the rule, but I think it’s a good thing for some types of applications. Bloating an install from something like 30MB to 300MB is fine if it’s only for a handful of applications that tend to use a ton of resources anyway, like a browser, video game, or web service.
No, things support Python 3 or Python 2; I’ve not seen stuff packaged with support for both. The package dependency information contains a window of required versions, a 3.9 - 3.11 kind of thing.
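The same kind of window gets declared on the upstream side too. A hedged sketch of a setup.py expressing a 3.9 - 3.11 range (made-up project name):

    # Hypothetical setup.py declaring a supported-version window, roughly "3.9 - 3.11".
    from setuptools import setup

    setup(
        name="example-tool",             # made-up project name
        version="1.0.0",
        python_requires=">=3.9,<3.12",   # the window pip and distro packagers can read
    )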
If you need the new features in the latest libs (rarely really the case), get involved with getting the latest lib into the next stable release. Yes, in the meantime, maybe ship your own version, but it makes a right old mess if that is done all the time for all dependencies.
If it’s for a handful of apps, maybe that’s acceptable for a short period of time. Closed apps have no choice but to be messy.
Me, I’m in no great rush for the latest and greatest. Rolling with Debian Testing is plenty new enough for my desktop. For servers, I want old and stable anyway, so Debian Stable.
I meant that maybe one target doesn’t support Python 3 (e.g. older Debian) and another doesn’t support Python 2 (e.g. a bleeding-edge distro). There were tons of libraries that supported both at the time, though it wasn’t due to distro compatibility. I used it as an example because the transition was so rocky and most people were aware of the issue.
But the same idea can happen when using newer compiler features or something. If I need a newer version of something than a distro supports, my options are:
- maintain an older version of the tool that works with whatever the distro supports
- campaign to get that version upgraded
- not ship on that distro
- ship with my special dependencies (e.g. use a FlatPak)
Of those options, the last is a lot more attractive and requires the least work. Ideally these are rare exceptions, but it should absolutely be an option.
The Python 2 -> 3 transition was a mess. Same with GTK2 -> GTK3. Lots of young developers think they can just ditch legacy and do all new and shiny. Learning that it is a mistake to disregard legacy is part of maturing as a developer.
Debian has packaged libs as python-libname and python3-libname. There are a few python2-libname packages, but python2 is on the path to removal.
Right, but they didn’t support Python 3 for a while. And with 5-year release cycles, it’s entirely possible that you’ll have a situation where some distros don’t support Python 3 and some don’t support Python 2. Or any other platform (e.g. maybe you need a shiny new C++ feature, but certain distros don’t support that version yet).
So those distros can either not include those packages (seems to be the current approach), add a feature mid-cycle (pretty rare, though people do make repos that do so), or allow some method of bundling it with its unsupported dependencies.
Debian isn’t on 5-year releases… it’s more like 2, and that’s Stable. The whole point of Stable is to be boring but reliable. Testing is more fun, and Unstable more fun still, with some Experimental mixed in for even closer to the edge. There is also Backports to bring a more stable base closer to the edge.
Having grown up on RISC OS with app folders, then gone to Windows, then wandered Linux land until finding Debian was my home, I worry about the movement back to basically app folders. I love the order of Debian. All those packages, with their dependencies, their build dependencies and source, all in a database; I count it alongside Wikipedia as an achievement of mankind.
I meant the support timeline. Debian releases are supported for 5 years. You can basically skip an entire release and still be completely supported with security patches.