I’m doing a bunch of AI work that involves compiling various unrelated apps to try out, and I’m making a mess of config files and extras. I’ve been using distrobox and conda. How could I do this better? Chroot? Separate user logins for extra home directories? Groups? Most of the packages need access to CUDA and localhost, and I would like to keep them out of my main home directory.
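To make it concrete, here’s roughly the kind of per-project isolation I’m after — just a sketch, not something I’ve settled on; the box name, image, and home path are placeholders:

```
# One distrobox per project, each with its own home directory so nothing
# lands in my real ~/. The --nvidia flag shares the host NVIDIA driver and
# CUDA libraries into the box. Name, image, and path are placeholders.
distrobox create --name ai-sandbox \
    --image docker.io/library/ubuntu:22.04 \
    --home /srv/boxes/ai-sandbox \
    --nvidia

distrobox enter ai-sandbox
```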
Python, in these instances, is being used as the installer. As far as I can tell it involves all of the same packaging and directory issues that make does. Most of the packages ship a Python startup script that reads a text file and installs everything listed in it, usually including a pip git+ URL or two. So far, just getting my feet wet with AI has been enough reason to overlook what’s happening behind the curtain. The machine sits behind an external whitelist firewall all by itself. I’m just now getting to the point where I want to dial everything in so I know exactly what is happening.
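As far as I can tell, what these installer scripts boil down to is something like the following — file names and the git URL here are made up for illustration:

```
# Roughly what most of these project installers do under the hood
# (the requirements file name and the repo URL are made-up examples):
python -m venv .venv                 # or a conda env
. .venv/bin/activate
pip install -r requirements.txt      # plain PyPI packages
pip install "git+https://github.com/example/some-repo.git"   # the git+ installs I mentioned
```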
I’ve noticed a few oddball times during installations when pip said something like “package unavailable; reverting to base system.” This was while it was inside conda, which itself was inside a distrobox container. I’m not sure what “base system” it’s referring to, or whether this is normal. I’m probing for any potential gotchas around Python and containers. I imagine it’s still just a matter of reading a lot of code in the installation path.
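For now, the only way I’ve found to sanity-check which environment pip is actually touching is to interrogate it directly with the standard introspection commands:

```
# Run inside the distrobox + conda env to see which interpreter and
# site-packages pip is actually targeting:
which python
which pip
pip --version                                    # shows the python it is bound to
python -c "import sys; print(sys.prefix)"        # the active environment
python -c "import sys; print(sys.base_prefix)"   # the underlying installation
python -m site                                   # site-packages paths in use
```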
I hope someone who has more info comes along. It might be time for you to make a new post, though, since we’re getting to the heart of the problem now.
Also, it will be a lot easier for people to diagnose if you’re specific about which programs you’re failing to install.
I’ve only experimented with Python in Docker, and it gave me a lot of headaches.
That’s why I prefer to pip install things inside venvs: I can just tar them up myself and get decent portability.
But since you’re installing files across the system, I’m not sure what the best solution is.
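For what it’s worth, my venv-and-tar routine is roughly the following — a sketch with placeholder names, and the portability caveat is that venvs bake absolute paths into their activation scripts, so the archive really wants to be unpacked at the same location (or the env recreated):

```
# Placeholder project name and paths. Venvs hard-code their paths, so this
# only stays "portable" if it is unpacked at the same location elsewhere.
python -m venv ~/envs/myproject
. ~/envs/myproject/bin/activate
pip install -r requirements.txt

deactivate
tar czf myproject-env.tar.gz -C ~/envs myproject
```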