There is a similar question on the site which must not be named.

My question still has a slightly different spin:

It seems to me that one of the biggest selling points of Nix is basically infrastructure as code. (Of course being immutable etc. is nice by itself.)

I wonder now how big the delta is for people like me: all my desktops/servers are based on Debian stable with heavy customization, but 100% automated via Ansible. It seems to me that a lot of the vocal Nix users (fans) switched from a pet desktop and discovered IaC via Nix, and that they are in the end raving about IaC (which Nix might or might not be a good vehicle for).
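
For concreteness, my understanding of the Nix version of this is that the whole machine becomes one declarative file, roughly playing the role of my Ansible roles. A minimal sketch (with hypothetical package and service choices, not something I actually run):

```nix
# configuration.nix: the entire system described in one place.
{ config, pkgs, ... }: {
  # Packages that would otherwise be apt tasks in an Ansible role.
  environment.systemPackages = with pkgs; [ git vim htop ];

  # Services are declared, not imperatively enabled.
  services.openssh.enable = true;

  # Users too (hypothetical name).
  users.users.alice.isNormalUser = true;
}
```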

When I gave Silverblue a try, I totally loved it, but to configure it for my needs I basically would have needed to configure the host system, some containers, and overlays to replicate my Debian setup, so for me it seemed like too much effort to end up nearly where I started. (And of course I can use distrobox/podman and have containerized environments on Debian without trouble.)

Am I missing something?

  • skilltheamps@feddit.de

    Well, maybe you yourself are too new to recognize some of the appeals ;)

    One large advantage of Silverblue is that the whole composition of the OS does not take place on the target machine. That means all the issues that could arise will not hit the target machine and can be dealt with beforehand. In the simple case this could mean just enjoying vanilla Silverblue without having to think about possibly borking the machine. In an advanced use case it could mean, for example, building the OS images in a GitLab CI/CD pipeline (with the mature tooling that already exists for Docker etc.), then having automatic tests in the pipeline ensure that everything important works as expected. Only if the tests pass does the image get pushed to the repository's image registry, from which the target machines fetch it automatically and rebase to it.

    • hackeryarn@lemmy.world

      No, I fully understand it. But if you build the whole system where every package is isolated, none of the packages interfere with each other, and every package is tested across a wide array of architectures, you can just as safely put together your ideal OS setup, and you don't have to deal with being locked into a very simple and bare system.

      The right place for immutable OSes is as a server for container workloads, where you will never customize the base system, or as a system you never want to customize at all. Yes, you can customize the system image, but that breaks all the guarantees the image gives you: the packages themselves are not isolated, and by bumping a wrong dependency for a custom package you can still break the whole system.

      • skilltheamps@feddit.de

        Partly yes, but just installing a package without running into conflicts does not yet guarantee a working system. You have to cater for the right configuration too, for example in a corporate setting with all kinds of networking woes (shares, VPNs and such). I think you could get this to work with Nix somehow, but you want to test these things beforehand, and if you do so using images, you already have the artifact to ship to the machines in your hands; there's no need to compose the OS and configuration over and over again on every machine.
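
        Presumably something along these lines would express it (a sketch with hypothetical values, and exactly the kind of thing I'd want tested in an image before it reaches any machine):

        ```nix
        { config, ... }: {
          # Declarative SMB share mount (hypothetical server and path).
          fileSystems."/mnt/projects" = {
            device = "//fileserver.example.com/projects";
            fsType = "cifs";
            options = [ "credentials=/etc/nixos/smb-secrets" "x-systemd.automount" ];
          };

          # Declarative WireGuard VPN (hypothetical keys and endpoint).
          networking.wireguard.interfaces.wg0 = {
            ips = [ "10.0.0.2/24" ];
            privateKeyFile = "/etc/wireguard/wg0.key";
            peers = [{
              publicKey = "PEER-PUBLIC-KEY";
              endpoint = "vpn.example.com:51820";
              allowedIPs = [ "10.0.0.0/24" ];
            }];
          };
        }
        ```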

        Another aspect of non-atomic OS composition on the target is that you have to deal with the transient phase from one state to the next. In this phase all kinds of things can happen; for example, an update of the NVIDIA drivers leaves CUDA dysfunctional until the next reboot, because the userspace and kernelspace parts no longer fit together. With any of the Fedora Atomic variants, transient phases with essentially undefined behaviour do not exist, and the time the system is not guaranteed to be in working order is reduced to just the reboot.

        Nix is cool and definitely better than any traditional package manager. But it is not an ultimate solution; to be honest, so far it seems to me to live in a niche of enthusiasts who are smart enough to put up with its unique declaration language. Below that niche you have ordinary Linux users who may just be happy with Silverblue without any modifications, and above it you have corporations building their own images in CI/CD, CoreOS and all that jazz.
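
        For a taste of what I mean by that language: the whole configuration is an expression in a lazy functional language, which is powerful but unfamiliar to most users (illustrative snippet only):

        ```nix
        { config, pkgs, ... }:
        let
          # Ordinary functional programming, in the middle of system config.
          editors = [ pkgs.vim pkgs.neovim ];
        in {
          environment.systemPackages = editors ++ [ pkgs.git ];
        }
        ```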

        • hackeryarn@lemmy.world

          Your summary of the language is spot on. I still hope that more distros take inspiration from the declarative config and move in that direction, or that Nix supports a better language in the future. I think that is ultimately what the average Linux user would want: the ability to still customize, in a safe manner. Silverblue, and others, are and will remain a great option for the new or indifferent user.

          On your point about the transient phase: Nix actually handles that by default already. It installs everything at a separate path and then flips over in one go. You can even pick the mode, either a live switch as you describe, or applying the change on the next boot. I don't know that I see many benefits to images there.
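
          Concretely, the choice looks like this (a sketch; I believe the auto-upgrade option works this way, but treat the exact names as an assumption):

          ```nix
          # Interactive use:
          #   nixos-rebuild switch   # build the new generation, flip over live
          #   nixos-rebuild boot     # build it, but only activate on next boot
          # The same choice exists for unattended upgrades:
          {
            system.autoUpgrade = {
              enable = true;
              operation = "boot";  # "switch" would flip live instead
            };
          }
          ```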

          I am now at my second company that uses NixOS in a corporate setting, and it is much easier than maintaining CoreOS images or similar. I've had so many broken builds of CoreOS images, because something went wrong between the custom packages and the base CoreOS image, that I would rather just run an Ansible script at this point. Also, you end up running the exact same test suite against NixOS as against your other images, so the same guarantees end up being met.
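
          The kind of test that gets reused looks roughly like this (a sketch using the NixOS VM test framework; the service under test is a stand-in):

          ```nix
          # Boots a throwaway VM from the declared configuration and asserts
          # on it, before anything ships to a real machine.
          pkgs.testers.runNixOSTest {
            name = "smoke-test";
            nodes.machine = { ... }: {
              services.nginx.enable = true;
            };
            testScript = ''
              machine.wait_for_unit("nginx.service")
              machine.wait_for_open_port(80)
            '';
          }
          ```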