Are there any risks or disadvantages to building software from source, compared to installing a package? Can it mess with my system in any way?

I usually avoid it because I’ve found it to be a faff and often doesn’t work anyway but in a couple of cases it has been necessary.

  • Jay🚩@lemmy.ml · 20 hours ago

    If you are using a pkgsrc-like system it will be easier. Even NASA uses it with their OpenSUSE system for NAS. Compiling has its own advantages, but normal people don’t need it.
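
    For reference, here is a hedged sketch of what that looks like with pkgsrc (the package path is just an example; on most Linux systems the command is bmake rather than make):

    ```sh
    # Build and install a package from the pkgsrc tree (example package path).
    cd /usr/pkgsrc/editors/vim
    make install clean    # use "bmake" instead of "make" on most Linux systems
    ```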

  • Aiwendil@lemmy.ml · 1 day ago

    Gentoo user here…but I’m assuming this is about building on distributions that don’t automate it the way Gentoo does.

    Disadvantages:

    • No easy way to uninstall again. Some build systems generate lists of installed files or even provide uninstall rules, but that requires either keeping the build directory with the source code around or backing up the necessary build files for a proper uninstall. And some build systems have no uninstall helpers at all.
    • Similarly…updating doesn’t guarantee that all traces of the previous version are removed. If the new build overwrites every file of the previous version…fine. But if an updated version no longer needs files that earlier versions installed, those usually won’t get removed from your system and will stick around.
    • In general, a lack of automation for updates
    • Compiling takes time
    • You are responsible for dealing with ABI breakages in dependencies. In most cases the source code you compile will depend on other libraries. Either those come from your distro or you also build them from source…but in both cases you are responsible for rebuilding a package if an update to one of its dependencies breaks the ABI.
    • You need build-time dependencies and, depending on your distro, -devel packages installed. If the source code you install needs an assembler to build, you have to install that assembler, which wouldn’t be necessary if you installed a binary (you can of course remove those build dependencies again until you need to rebuild). Similarly for -devel packages of libraries from your distro…if the source code depends on a library coming from your distro, it also needs that library’s header files, pkgconfig files and other development-relevant files installed, which many distros split out into their own -devel packages and which aren’t necessary for binaries.
    • You have to deal with compiler flags and settings. It’s up to you to set the optimization level, target architecture and similar options for your compiler, usually via environment variables (a minimal sketch follows after this list). Not a big deal, but still something you have to look into at the start.
    • You have to deal with compile-time options and dependencies. The build systems might tell you what packages are missing, but you have to “translate” their errors into what to install with your package manager and do it yourself. Same for the build systems’ dependency detection…you have to read the logs and possibly reconfigure the source code after installing some dependencies, if the build system turned off features you want because of missing dependencies.
    • Source code and building need disk space, so make sure you have enough free. Same with RAM…Gentoo suggests 2 GB of RAM for each --job of make/ninja, but that’s for extreme cases; you can usually get away with less than 2 GB per job.
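
    To make the flags, configure and uninstall points above concrete, here is a minimal sketch of a manual build of a hypothetical autotools-based project (all names and flags are placeholders):

    ```sh
    # Hypothetical project "foo"; names and flags are placeholders.
    tar xf foo-1.2.3.tar.gz
    cd foo-1.2.3

    # Compiler flags and install prefix are now your responsibility.
    export CFLAGS="-O2 -march=native"
    ./configure --prefix=/usr/local    # check the summary: missing deps may silently disable features

    make -j"$(nproc)"                  # parallel build; each job needs CPU time and RAM
    sudo make install                  # copies files into /usr/local...

    # ...but there is often no reliable "make uninstall", and nothing
    # tracks these files for your package manager.
    ```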

    Of course you also gain a lot of advantages…but that wasn’t asked ;)

    You can “escape” most of the mentioned disadvantages by using a distro like Gentoo that automates much of this. It’s probably worth a look if you plan on doing this regularly.

    edit:typos

  • balsoft@lemmy.ml · 1 day ago

    > Are there any risks or disadvantages to building software from source, compared to installing a package?

    Well, compiling from source is the “installing dodgy freeware .exe” of the Linux world. You have to trust whoever is distributing that particular version of the source code, and ideally vet it yourself. When installing a binary package from your distro’s repositories, presumably someone else did that vetting for you already. Another slight risk is that you are running some extra build scripts before you can even run the application, which widens the attack surface a bit.

    > Can it mess with my system in any way?

    Yeah, unless you take precautions and compile in a container or at least a sandbox, the build scripts have complete unadulterated access to your user account, which is pretty much game over if they turn out to be malicious (see: https://xkcd.com/1200). Hopefully most FOSS software is not malicious, but it’s still a risk.
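
    One hedged example of such a precaution, assuming Podman (or Docker) is available, is to run the build in a throwaway container so the build scripts never see your home directory:

    ```sh
    # Disposable build container: only the current source directory is mounted.
    # Assumes the official gcc image, which also ships make.
    podman run --rm -it \
      -v "$PWD":/src -w /src \
      docker.io/library/gcc:14 \
      sh -c 'make -j"$(nproc)"'
    ```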

    If you “install” the software on your system, it also becomes difficult to uninstall or update, because those files aren’t tracked by any central package database.

    I recommend using a source-based package manager and packaging your software with it (typically no harder than just building from source) to mitigate all of those issues, as source-based PMs will typically sandbox the build and keep track of the installed files for you.
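
    As a hedged illustration (not part of the comment itself), packaging a simple autotools project for Arch’s makepkg looks roughly like this; every name, URL and checksum is a placeholder:

    ```sh
    # PKGBUILD sketch for a hypothetical "foo" project.
    pkgname=foo
    pkgver=1.2.3
    pkgrel=1
    pkgdesc="Example package"
    arch=('x86_64')
    source=("https://example.org/foo-$pkgver.tar.gz")
    sha256sums=('SKIP')    # a real package should pin a checksum

    build() {
      cd "foo-$pkgver"
      ./configure --prefix=/usr
      make
    }

    package() {
      cd "foo-$pkgver"
      make DESTDIR="$pkgdir" install    # files end up tracked by pacman
    }
    ```

    Running makepkg -si then builds the package and installs it through pacman, so it can later be removed cleanly with pacman -R foo.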

  • Shimitar@downonthestreet.eu · 1 day ago

    Gentoo user here.

    Of course I always build every package from source because that’s how Gentoo works.

    Well, you get well-optimized software for your specific CPU and architecture, which often will not run on a different CPU. At the cost of lots of time.

    For big ones like Firefox or Rust I always choose the prebuilt ones… But everything else is built from source.

    Also, another great advantage is being able to customize package features to your liking, like disabling one audio backend and enabling another, and such.
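
    In Gentoo terms that customization happens through USE flags; a hedged sketch (the package and flag names are only illustrative):

    ```sh
    # /etc/portage/package.use/mpv -- illustrative package and USE flags
    media-video/mpv -pulseaudio alsa    # drop the PulseAudio backend, keep ALSA

    # Rebuild the package with the new options:
    emerge --ask media-video/mpv
    ```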

    • lengau@midwest.social · 1 day ago

      The irony is that big things like Firefox can get the most advantages from building for your specific CPU variant, especially if you use them frequently.

    • Da Oeuf@slrpnk.net (OP) · 1 day ago

      > you get well-optimized software for your specific CPU and architecture

      That’s really cool. How does that work?

      • balsoft@lemmy.ml · 1 day ago

        All x86_64 CPUs support a certain “base” set of instructions. But most of them also support some additional instruction sets: SIMD (single instruction, multiple data; operations on vectors and matrices), crypto (encryption/hashing), virtualization (for running VMs), etc. Each of those instructions replaces dozens or hundreds of “base” instructions, speeding up certain specific operations dramatically.

        When compiling source code into binary form (which is basically a bunch of CPU instructions plus extra fluff), you have to choose which instructions to use for certain operations. E.g. if you want to multiply a vector by a matrix (which is a very common operation in like a dozen branches of computer science), you can either do the multiplication one operation at a time (almost as you would when doing it by hand), or just call a single instruction which “just does it” in hardware.

        The problem is “which instruction sets do I use”. If you use none, your resulting binary will be dogshit slow (by modern standards). If you use all, it will likely not work at all on most CPUs because very few will support some bizarre instruction set. There are also certain workarounds. The main one is shipping two versions of your code: one which uses the extensions, the other which doesn’t; and choosing between them at runtime by detecting whether the CPU supports the extension or not. This doubles your binary size and has other drawbacks too. So, in most cases, it falls on whoever is packaging the software for your distro to choose which instruction sets to use. Typically the packager will try to be conservative so that it runs on most CPUs, at the expense of some slowdown. But when you the user compile the source code yourself, you can just tell the compiler to use whatever instruction sets your CPU supports, to get the fastest possible binary (which might not run on other computers).
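
        For example, a hedged sketch of the usual GCC/Clang flags involved (the file names are placeholders):

        ```sh
        # Conservative build: baseline x86_64, runs on (almost) any 64-bit x86 CPU.
        gcc -O2 -march=x86-64 -o app-generic app.c

        # Native build: use every extension this CPU advertises. Faster here,
        # but it may die with "illegal instruction" on another machine.
        gcc -O2 -march=native -o app-native app.c

        # Show what -march=native actually resolved to on this machine:
        gcc -march=native -Q --help=target | grep -- "-march="
        ```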

        In the past this all was very important because many SIMD extensions weren’t as common as they are today, and most distros didn’t enable them when compiling. But nowadays the instruction sets on most CPUs are mostly similar with minor exceptions, and so distro packagers enable most of them, and the benefits you get when compiling yourself are minor. Expect a speed improvement in the range of 0%-5%, with 0% being the most common outcome for most software.

        TL;DR it used to matter a lot in the past, today it’s not worth bothering unless you are compiling everything anyways for other reasons.

  • thingsiplay@beehaw.org · 2 days ago

    The best would be to ask a Gentoo user. :D

    The disadvantage (besides the update procedure mentioned in the other answers here) is that it might take a lot of time, download a lot of dependencies and files, and need additional space on your drive to compile. It can be a hassle to install and set up the required tools and libraries too. Whether it’s worth it highly depends on the project itself. For example, nobody in their right mind wants to compile their web browser (Firefox, Chromium, whatever) themselves (sorry if I offended someone with that. :D). But a simple and short C program can be as easy as running the make command (given the dependencies are installed, which is likely for simple programs once you’ve compiled a few).
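
    A hedged sketch of that simple case, assuming a Debian-style distro and a hypothetical project that ships a plain Makefile:

    ```sh
    # One-time toolchain setup (Debian/Ubuntu example).
    sudo apt install build-essential

    # Hypothetical small project with a plain Makefile:
    git clone https://example.org/tiny-tool.git
    cd tiny-tool
    make
    sudo make install    # or just run the freshly built binary from here
    ```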

    Most of the time you don’t need to compile software, especially if you trust the source or it’s in the official repositories of your distribution.

    > Can it mess with my system in any way?

    Depends on what you mean by that.

  • just_another_person@lemmy.world · 2 days ago

    Just convenience; that’s what packages provide. In most cases there’s no hidden downside to packages under the hood. It’s also why stacks for specific projects use containers: you set the build steps to include the things you need in a pragmatic way, rather than having to mess with static files on a filesystem.

  • MyNameIsRichard@lemmy.ml · 2 days ago

    The only disadvantage is that you have to update manually, unless you’ve installed it from the AUR.

  • TMP_NKcYUEoM7kXg4qYe@lemmy.world · 1 day ago

    The only potential downside is that the software is not handled by your package manager, so uninstalling or upgrading can be a pain. But there are ways around that, like source-based package managers, or manually building binary packages and then installing them.

  • hades@feddit.uk · 1 day ago

    Think about it this way: you’re downloading someone else’s code and running it on your system. The OS doesn’t care: it will give it access to everything your user has access to, but won’t give access to anything else.

    So (under the caveat below) the software won’t be able to mess with your system, because your user generally can’t mess with your system. However, you still need to trust the software, since it will be able to access e.g. your saved passwords and SSH keys, install a keylogger, and so on. In comparison, binary packages can be seen as safer, because they have more “eyes” on them, and there is more time between the code being published and you running that code on your system.

    Caveat: if you run something like sudo make install then, of course, the risk is way higher, and the package will definitely be able to mess with your system, up to and including destroying it.
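
    One way to avoid that caveat entirely (a hedged sketch, not something the comment prescribes) is to install into your home directory, so no root step is needed:

    ```sh
    # Install into ~/.local instead of a system path; "make install" then
    # never needs sudo and can only touch files your user owns anyway.
    ./configure --prefix="$HOME/.local"
    make -j"$(nproc)"
    make install    # no sudo required

    # Make sure the install location is on PATH (bash example):
    export PATH="$HOME/.local/bin:$PATH"
    ```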

  • communism@lemmy.ml · 2 days ago

    The main disadvantage is that it’s less automated, and you don’t get automatic updates without some other package management system in place. If you’re using something like source packages from the AUR, that solves both of those problems and there are no downsides (beyond the extra computational power/time you spend waiting), so long as the package maintainer does their job correctly.

    > Can it mess with my system in any way?

    Not… really? I guess if you’re downloading random tarballs off the internet and running make install without checking the integrity or trustworthiness of what you’re downloading then you could get a virus. But if you’re certain the source you’re getting is legitimate, then I suppose the only way building from source could “mess up your system” is if you mess up your system libraries or something whilst trying to install dependencies.
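
    A hedged sketch of that integrity check, with placeholder file names; many projects publish a checksum file and/or a detached GPG signature next to the tarball:

    ```sh
    # Verify a published SHA-256 checksum (file names are placeholders):
    sha256sum -c foo-1.2.3.tar.gz.sha256

    # Or verify a detached GPG signature against the developer's published key:
    gpg --verify foo-1.2.3.tar.gz.asc foo-1.2.3.tar.gz
    ```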

  • bacon_pdp@lemmy.world · 2 days ago

    It has security advantages but it is slower and requires your computer to do more work.

      • bacon_pdp@lemmy.world · 1 day ago

        You can disable functionality that you don’t use or want (code that is not used cannot be exploited).

        You can enable hardware/kernel-specific security mitigations.

        You can know what source code corresponds to the generated binary.
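
        As a hedged illustration of the first two points (the feature switches below are placeholders, not from the comment):

        ```sh
        # Hypothetical project: drop unused functionality at configure time
        # and add common compiler/linker hardening flags.
        export CFLAGS="-O2 -fstack-protector-strong -D_FORTIFY_SOURCE=2"
        export LDFLAGS="-Wl,-z,relro,-z,now"
        ./configure --disable-gui --without-ldap    # placeholder feature switches
        make -j"$(nproc)"
        ```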