I’ve been interested in building a DIY NAS out of an SBC for a while now. Not as my main NAS, but as a backup I can store offsite at a friend’s or relative’s house. I know any old x86 box would probably do better; this project is just for the fun of it.

The Orange Pi 5 looks pretty decent with its RK3588 chip and M.2 PCIe 3.0 x4 connector. I’ve seen some adapters that can turn that M.2 slot into a few SATA ports, or even a full x16 slot, which might let me use an HBA.

Anyway, my question is: assuming the CPU isn’t a bottleneck, how do I figure out what kind of throughput this setup could theoretically give me?

After a few Google searches:

  • PCIe Gen 3 x4 should give me 4 GB/s throughput
  • that M.2 to SATA adapter claims 6 GB/s throughput
  • a single 7200 rpm hard drive should give about 80-160 MB/s throughput

My guess is that ultimately I’m limited by that 4 GB/s throughput on the PCIe Gen 3 x4 slot, but since I’m using hard drives, I’d never get close to saturating that bandwidth. Even if I were using 4 hard drives in a RAID 0 config (which I wouldn’t do), I still wouldn’t come close. Am I understanding that correctly? Is it really that simple?
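
To check myself, here’s the back-of-envelope version in Python (all of the numbers are just the rough estimates from above, not measurements):

    # Rough bottleneck check using the estimates above (not measurements).
    pcie3_x4_gb_per_s = 4.0    # ~4 GB/s usable on a PCIe Gen 3 x4 link
    hdd_mb_per_s = 160         # optimistic 7200 rpm hard drive
    drives = 4                 # hypothetical 4-drive RAID 0

    combined_gb_per_s = drives * hdd_mb_per_s / 1000
    print(f"PCIe 3 x4 budget : {pcie3_x4_gb_per_s:.2f} GB/s")
    print(f"4 HDDs combined  : {combined_gb_per_s:.2f} GB/s")   # 0.64 GB/s, nowhere close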

  • cmnybo@discuss.tchncs.de

    No matter what adapter you use, you will be limited to the throughput of the PCIe 3 x4 port.

    SATA is 6 gigabits per second, not gigabytes. The SATA adapter is only PCIe 3 x2, which would limit the throughput if you used it with SSDs, but it still has plenty of bandwidth for hard drives.
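
    Rough math, assuming the adapter really is PCIe 3 x2 and has the five ports mentioned elsewhere in the thread:

        # PCIe 3 x2 uplink vs. SATA ports vs. realistic hard drives.
        # Assumes a 5-port adapter on a PCIe 3 x2 link (per the listing, unverified).
        per_lane_gb_per_s = 8 * 128 / 130          # ~7.88 Gb/s usable per Gen 3 lane
        uplink_gb_per_s = 2 * per_lane_gb_per_s    # x2 link, ~15.8 Gb/s

        ports = 5
        sata_ceiling = ports * 6       # 6 Gb/s per SATA III port = 30 Gb/s
        hdd_reality = ports * 1.6      # ~200 MB/s per drive is roughly 1.6 Gb/s each

        print(f"x2 uplink      : {uplink_gb_per_s:.1f} Gb/s")
        print(f"SATA ceiling   : {sata_ceiling} Gb/s   (SSDs could outrun the uplink)")
        print(f"5 HDDs, actual : {hdd_reality:.0f} Gb/s    (plenty of headroom)")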

    • a_fancy_kiwi@lemmy.worldOP

      SATA is 6 gigabits per second, not gigabytes.

      Oh shit, I misread the Amazon description. Thanks for catching that, and thanks for your response.

  • PuppyOSAndCoffee@lemmy.ml

    Actually…for a NAS, your network link is your limit.

    You could have four PCIe 5 M.2 drives in a full RAID setup, saturating your bus with 64 Gb/s of glory, but if you’re on 1 Gb/s WiFi, that’s what you’ll actually get.

    Still, it would be fun to SSH in and dupe 1 TB in seconds, just for the giggles. Do it for the fun!

    Remember, it is almost always cheaper, and fast enough, to use a Thunderbolt or high-speed USB4 (40 Gb/s) flash drive for a quick backup.
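
    Back-of-envelope transfer times for 1 TB, assuming ideal line rates with no protocol overhead (real numbers will be worse):

        # Time to move 1 TB at various ideal link speeds (no overhead counted).
        size_bits = 1e12 * 8                       # 1 TB in bits

        for name, gbps in [("1 Gb/s network", 1),
                           ("10 Gb/s network", 10),
                           ("40 Gb/s USB4 / Thunderbolt", 40)]:
            minutes = size_bits / (gbps * 1e9) / 60
            print(f"{name:27s}: {minutes:6.1f} min")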

  • Shdwdrgn@mander.xyz

    Just for some real-world comparison, I set up a new NAS earlier this year using a rack server, SAS cards, and eight 18 TB HDDs configured like RAID 6 (actually ZFS RAID-Z2). I played with a few different configurations, but ultimately my write speeds reached around 480 MB/s thanks to parallel access across so many drives. Single-drive access was of course quite a bit slower. Because of this testing, I knew I could use cheap SATA II backplanes without affecting the performance.

    So basically, do a lot of testing with your planned hardware to get the best throughput, but a single HDD is going to be the biggest bottleneck in anything you set up.
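
    If you want a quick-and-dirty sequential write test before settling on a layout, something like this works (fio is the better tool; the path and size here are just placeholders):

        # Crude sequential-write benchmark: write a big file, fsync, time it.
        # Placeholder path/size; fio gives far more control (direct I/O, queue depth, etc.).
        import os, time

        path = "/mnt/pool/write_test.bin"     # placeholder: somewhere on the array under test
        size_gib = 4
        block = b"\0" * (1024 * 1024)         # 1 MiB per write

        start = time.monotonic()
        with open(path, "wb") as f:
            for _ in range(size_gib * 1024):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())
        elapsed = time.monotonic() - start

        print(f"{size_gib} GiB in {elapsed:.1f} s = {size_gib * 1024 / elapsed:.0f} MiB/s")
        os.remove(path)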

  • Decronym@lemmy.decronym.xyz (bot)

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    NAS             Network-Attached Storage
    PCIe            Peripheral Component Interconnect Express
    RAID            Redundant Array of Independent Disks for mass storage
    SATA            Serial AT Attachment interface for mass storage
    SSD             Solid State Drive mass storage


  • Yote.zip@pawb.social

    Yeah, you’ve got it about right. Gen 3 x4 is roughly 8 Gb/s per lane × 4 lanes = 32 Gb/s, or about 4 GB/s, which is your bottleneck. Hard drives might be closer to 200-250 MB/s each depending on your specific model. That M.2-to-SATA adapter seems more geared towards SATA SSDs given how few ports it has - I wouldn’t be surprised if you could find something with more ports available if needed, or at least for a cheaper price.
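
    The slightly more pedantic version of that math, accounting for 128b/130b encoding on Gen 3:

        # PCIe Gen 3: 8 GT/s per lane with 128b/130b encoding.
        per_lane = 8 * 128 / 130        # ~7.88 Gb/s usable per lane
        x4_total = per_lane * 4         # ~31.5 Gb/s
        print(f"~{x4_total / 8:.2f} GB/s on a Gen 3 x4 link")   # ~3.94 GB/s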

    Also, as you note, RAID 0 would be the fastest config, but depending on your RAID layout or workload you’ll probably get less than max bandwidth out of each drive anyway.

    • a_fancy_kiwi@lemmy.worldOP

      I wouldn’t be surprised if you could find something with more ports available if needed, or at least for a cheaper price.

      Based on another comment I read, each SATA port would be 6 gigabits/s, which equates to 0.75 gigabytes/s. If I fully saturated all 5 ports, that puts the throughput at 3.75 gigabytes/s. Anything over 5 ports would be bottlenecked by the M.2 PCIe Gen 3 x4 slot, wouldn’t it?
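
      Writing that out to check myself (same assumptions: 6 Gb/s per port, ~4 GB/s on the x4 link):

          # When do fully saturated SATA III ports outrun a PCIe 3 x4 link?
          port_gb_per_s = 6 / 8          # 0.75 GB/s per SATA port
          link_gb_per_s = 4.0            # ~4 GB/s on Gen 3 x4

          print(f"5 ports flat out : {5 * port_gb_per_s:.2f} GB/s")                  # 3.75 GB/s
          print(f"Ports to exceed the link : {link_gb_per_s / port_gb_per_s:.1f}")   # ~5.3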

      • Yote.zip@pawb.social

        Yeah, but you’re not going to saturate each SATA port with your hard drives, which will be closer to 2 Gb/s max. The PCIe connector only needs to carry what actually goes across it. I imagine that card is built to spec with the situation you’re describing in mind, but for most practical purposes it shouldn’t be a problem to have even more ports.
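
        Rough numbers, taking ~2 Gb/s per spinning drive as the best case:

            # Traffic that actually hits the adapter's PCIe link with spinning drives.
            drives = 5
            per_hdd_gb_per_s = 2.0                  # sequential best case for a 7200 rpm drive
            x2_link_gb_per_s = 2 * 8 * 128 / 130    # ~15.8 Gb/s usable on PCIe 3 x2

            print(f"5 HDDs flat out : {drives * per_hdd_gb_per_s:.0f} Gb/s")
            print(f"PCIe 3 x2 link  : {x2_link_gb_per_s:.1f} Gb/s")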