Like many self-hosters, I’ve watched the recent price hikes for storage in utter disbelief. Faced with paying double what I paid only last year for new hard drives, I dug through my hardware stash and came across about a dozen old 2.5" 320–500 GB drives I had once saved from the dumpster but never deployed. After all, they were too slow to serve as PC system drives and too small for any meaningful use in a server. Now seemed like the perfect time to put them to good use after all. And I found the way to do it in mergerFS.

For anyone not familiar with it: in spite of its name, mergerFS is not a filesystem in the traditional sense; to deploy it, you don’t need to reformat any drives (although that wouldn’t have been a problem for my use case). Instead, you can take a bunch of drives (JBOD) and string them together with no modification to their filesystems, keeping existing data intact. It is agnostic of the filesystems on the drives, meaning you can even combine volumes formatted with, say, ext4, btrfs, and xfs. All drives show up as a single volume and, depending on the policies you configure, some data is stored on this drive and some on that one. Since data isn’t striped, each drive remains individually readable, i.e. there’s no need to rebuild the whole pool after a drive fails.
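As a rough sketch of what that looks like in practice: a single fstab line can pool several mounted drives. The paths and option choices below are illustrative, not from my actual setup; check the mergerFS documentation for the options available in your version.

```shell
# /etc/fstab — pool every filesystem mounted under /mnt/disk* into /mnt/pool.
# category.create=mfs places each new file on the drive with the most free
# space; moveonenospc retries the write on another drive if one fills up.
/mnt/disk* /mnt/pool fuse.mergerfs allow_other,category.create=mfs,moveonenospc=true,minfreespace=10G 0 0
```

The existing data on each drive stays exactly where it is; mergerFS just presents the union of all of them at /mnt/pool.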

Speaking of drive failure: while mergerFS itself does not provide any redundancy, you can add SnapRAID to the mix for parity-based protection (although it’s not real-time RAID; parity is computed on a schedule, so it’s not for mission-critical data that is frequently updated and rewritten).
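For a sense of scale, a minimal SnapRAID setup is just a short config file plus a scheduled sync. Disk names and paths here are illustrative placeholders, not my real layout:

```shell
# /etc/snapraid.conf — one parity drive protects the data drives.
parity /mnt/parity1/snapraid.parity

# Content files hold checksums and metadata; keep copies on several drives.
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content

# The drives being protected.
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/

exclude *.tmp
```

After that, `snapraid sync` (e.g. from a nightly cron job) updates the parity, and an occasional `snapraid scrub` verifies the data against its checksums.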

Combined, these two technologies allow me to have my cake and eat it too:

  • I can put drives to use that would otherwise be rotting in a drawer.
  • I can avoid additional cost - both financial and ecological. (The energy bills won’t increase by much, either, because most of the energy comes from solar cells on the roof.)
  • I can always flexibly tack on more drives, regardless of size.
  • I can have the added data security of a RAID, with very few (if any) of its drawbacks (e.g. no need for drives of equal size).

If this was news to you, maybe you want to give it a shot too. (I don’t consider myself a very advanced user, and I found it dead simple to deploy.)
If you’re already running mergerFS and SnapRAID, feel free to showcase your use case and setup!
If you found any of the above incorrect or misleading, feel free to correct me.

  • adarza@lemmy.ca · 1 day ago

    i have three snapraids here. one with (what was at the time) new disks, and two made up of old salvaged disks like you’ve got, pulled from systems and laptops headed for the recycle bin.

    • yo_scottie_oh@lemmy.ml · 17 hours ago (edited)

      How do you connect your disks to your host machine? Are they in an external cage w/ SATA-to-USB adaptors or mounted internally to SATA ports?

    • irmadlad@lemmy.world · 1 day ago

      Was it hard to set up? Any field expedient modifications, adjustments, or fiddling? I’ve got a ton of old HDD from desktops, laptops, old servers sitting in one of my closets. Hmmmmmm

      • adarza@lemmy.ca · 1 day ago

        not difficult at all, snapraid’s online documentation is very good.

  • Overspark@piefed.social · 1 day ago

    SnapRAID offers an additional benefit over real-time RAID systems: it doubles as a short-term backup. If you sync it daily like I do, and you accidentally delete a bunch of files (old enough to have been synced, i.e. older than one day in my case), you can restore them from the SnapRAID parity.

    The reverse is also true of course: if you lose a disk, you also lose that day’s changes to its data. So it’s best suited to large collections of rarely changing stuff like photos, videos, and music IMHO.
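    As an illustration of the restore described above (the file path is hypothetical; see `snapraid fix --help` in your version for the exact filter flags):

    ```shell
    # Restore one accidentally deleted file to its state at the last sync.
    snapraid fix -f photos/2024/img_0042.jpg

    # Or restore only files that are missing compared to the last sync.
    snapraid fix -m
    ```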

    • eightys3v3n@lemmy.ca · 2 days ago

      SnapRAID seems to occupy an interesting middle ground between the least “proper” solution and the most “proper” one, for when more resources aren’t available or justified.

      On one end: a single drive, or dozens of drives, with data randomly duplicated around or lost when individual drives die. On the other: a huge ZFS volume with its large setup cost, lack of expandability (until AnyRaid is done), and potentially unneeded additional functionality.

      mergerfs is then a natural extension, offering a unified way to organize and access the data that SnapRAID is securing (instead of mounting all those drives separately).

      If someone merged these projects into one solution and added a couple of extra functions (like compression, deduplication, or caching), it could be a comparable offering to ZFS for different use cases. Imagine a NAS distribution shipping with this setup by default. Much more intuitive to users, I would argue.

      • IratePirate@feddit.org (OP) · 2 days ago

        a comparable offer to zfs

        Weeell, zfs does bring a lot more to the table than mergerFS + snapRAID, e.g. snapshotting and scrubs/bitrot protection. But then again, it does so at a much higher price.

        Imagine a NAS offering with this setup by default. Much more intuitive to users I would argue.

        Agreed. unRAID has something very similar and even (slightly) better (their RAID syncs automatically, not on command). But then again, unRAID isn’t FOSS.

  • plz1@sh.itjust.works · 1 day ago

    This is why I went with Unraid. Being able to slap in whatever drives I have on hand was the primary driver for getting away from btrfs (Synology). And that build was about 3 months before RAM prices started to explode last year, which I read as “all parts gonna skyrocket”, and they have.

    • IratePirate@feddit.org (OP) · 2 days ago

      Good call! I’m doing regular borgbackups to an off-site, self-hosted backup server. (I’d still prefer not to be bombed! :D)

  • Decronym@lemmy.decronym.xyz (bot) · 17 hours ago (edited)

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    NAS            Network-Attached Storage
    RAID           Redundant Array of Independent Disks for mass storage
    SATA           Serial AT Attachment interface for mass storage
    ZFS            Solaris/Linux filesystem focusing on data integrity

    4 acronyms in this thread; the most compressed thread commented on today has 10 acronyms.

    [Thread #269 for this comm, first seen 1st May 2026, 23:00]