I’ve been interested in self-hosting a small variety of services, but I’m confused about where to start. What would you guys recommend for a server machine?
My main uses (and some of the services I think are appropriate for the use case) are:
- 1tb photo, video storage, push/pull (immich)
- 512gb total shared between downloaded music storage (navidrome) and pdf/ebook storage (calibre)—all pull only
- 1tb movies/tv storage on a media server (jellyfin)
- 512gb storage for random junk or whatever, plus a file transfer push/pull (syncthing…? or nextcloud?)
- potential basic bio website hosting (near future)
- potential email hosting (distant future)
anyways with that all said i have a few questions:
- what server should i buy if i want to expand storage in the future? should i just build a pc with like 3x1tb storage, or 6x1tb storage w/ redundancy? totally confused about the concept of redundancy lol
- any thoughts on the services im suggesting? especially for file transfer
If you have any old hardware/laptop laying around then use that until you nail down what you actually want to do and need.
You’re going to want a NAS. Most consumer systems can only wire up four SATA/NVMe SSDs. If you want 6 TB of usable capacity with redundancy, that means RAID 1 at minimum, i.e. 12 TB of raw capacity. https://en.wikipedia.org/wiki/Standard_RAID_levels
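To make the redundancy trade-off concrete, here’s the arithmetic as a tiny script (the drive count and size are just examples, not a recommendation):

```shell
#!/bin/sh
# Back-of-the-envelope redundancy math (pure arithmetic, no hardware needed).
# Assumption for illustration: six 1 TB drives.
n=6          # number of drives
drive_tb=1   # size of each drive in TB

# RAID 1 / ZFS mirror pairs: half the raw space is usable
mirror_usable=$(( n * drive_tb / 2 ))

# raidz1 (roughly RAID 5): one drive's worth of space goes to parity
raidz1_usable=$(( (n - 1) * drive_tb ))

echo "6x1TB in mirrors: ${mirror_usable} TB usable, 1 drive per pair can fail"
echo "6x1TB in raidz1:  ${raidz1_usable} TB usable, any 1 drive can fail"
```

So “6x1tb w/ redundancy” gets you somewhere between 3 TB and 5 TB usable depending on the layout you pick.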
Used business desktop from eBay is what I run. With what you want to run, you’ll be fine with even 10-year-old hardware. I’m running a dozen services on 10-year-old basic business hardware with no issues. With regards to media, though: if you’re not getting a dedicated GPU, get an Intel 7th-gen (7xxx) or later CPU so you have Quick Sync for transcoding.
I run Ubuntu Server on one, proxmox on another. Both have their pros and cons. Depends on what you want to do. If your plan is just to run everything in containers (and it should be), Ubuntu with docker is plenty. If you plan on playing around with VMs, go proxmox.
As for what services, here’s a huge list of different self hostable services grouped by category/function: https://awesome-selfhosted.net/ Most have a demo site or a quick install guide for docker that makes it easy to try stuff out.
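To give a feel for how easy trying something is, a minimal Docker Compose file for, say, Navidrome might look roughly like this (the image and volume layout match the project’s docs as far as I know; the paths are placeholders):

```yaml
services:
  navidrome:
    image: deluan/navidrome:latest
    restart: unless-stopped
    ports:
      - "4533:4533"                 # web UI at http://your-server:4533
    volumes:
      - ./navidrome-data:/data      # app database/cache
      - /path/to/music:/music:ro    # your library, mounted read-only
```

Then `docker compose up -d` and browse to port 4533; tearing it down again is just as quick, which is why containers are great for trying things out.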
Avoid selfhosted email if you can… it’s a whole different animal.
I would still consider myself a noob, but I do feel accomplished enough to answer this properly.
Hardware depends on your budget. It does not need to be bleeding edge either; I would focus on a good server case that makes it easy to upgrade over time and maybe fits a few hard drives if you don’t plan on having a NAS.
Also make sure to check how many SATA connections your motherboard can handle; using an M.2 slot may disable some of the physical SATA ports.
I highly, highly recommend proxmox for an OS.
You can set up each service in its own LXC container; it’s wonderful to know you can experiment with whatever and everything else will be unaffected and just keep working. Within an LXC, things can just run using Docker (though this is officially not recommended, it works fine). The resource sharing between LXC containers is excellent, taking snapshots is a breeze, and when an LXC is not enough you can easily spin up a VM with whatever distro, or even Windows. Best server choice I ever made!
ZFS for your storage pool is also very good. And you definitely want redundancy: redundancy means x number of drives can fail and the system just keeps running like normal while you replace the broken drive; otherwise a single drive failing ruins all your data.
Unless you make every drive its own pool with specific items that you back up separately, but that’s honestly more troublesome than learning how to set up a pool.
How you arrange a pool and how much redundancy you want is a personal choice, but I can tell you how I arranged mine.
I have 5 identical drives, which is the max my system can handle. 4 of them are in a pool with a raidz1 configuration (equivalent to RAID 5). This setup gives me 1 drive of redundancy and leaves me 3 drives’ worth of usable space.
I could have added the fifth drive to the pool for more space, but I opted not to, to protect my Immich photos against complete critical failure. This fifth drive is unmounted when not in use.
Basically, my Immich storage is in a dataset, which you can think of as a directory on your pool that you can assign to different LXCs to keep things separate.
Every week a script mounts the fifth drive, rsyncs my Immich dataset from the pool onto it, and unmounts the drive again. It’s a backup of the most important stuff outside of the pool.
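That weekly job can be sketched as a small POSIX script. The device label, mountpoint, and dataset path here are all assumptions; adjust to your own layout. The demo at the bottom just exercises the copy step against throwaway directories:

```shell
#!/bin/sh
# Weekly cold-drive backup sketch: mount the spare drive, mirror the
# Immich dataset onto it, unmount again so it sits cold.
set -eu

backup_dataset() {
  dev="$1"; mnt="$2"; src="$3"
  mkdir -p "$mnt"
  if [ -b "$dev" ]; then mount "$dev" "$mnt"; fi    # attach the cold drive
  if command -v rsync >/dev/null 2>&1; then
    rsync -a --delete "$src" "$mnt/"                # mirror the dataset
  else
    cp -a "$src." "$mnt/"                           # crude fallback
  fi
  if [ -b "$dev" ]; then umount "$mnt"; fi          # back to cold storage
}

# Demo against throwaway directories (no real device, so mount is skipped):
demo="$(mktemp -d)"
mkdir -p "$demo/src"
echo "photo" > "$demo/src/img001.jpg"
backup_dataset "/dev/disk/by-label/backup5" "$demo/dst" "$demo/src/"
ls "$demo/dst"    # img001.jpg
```

On the real machine you’d run something like this from a weekly root cron job.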
This drive can also be removed from the case’s front in an emergency, which is part of why I recommend spending some time finding a case that fits your wants more than worrying about how much RAM to get.
Best of luck!
If you want to self-host email or websites, I’d use a VPS for those use cases. For websites, a $30/year VPS would be more than sufficient. You can try hosting at home, but hosting those things from a residential IP doesn’t always work well.
What’s your budget?
If you want to make it enterprise, buy a Dell or HP server (maybe Fujitsu if you’re not in the US, or Lenovo, Cisco, or IBM if you get a good deal and are okay with it being uncommon and weird, i.e. can be hard to work with and not much community support).
If you don’t care about that and want to DIY, get a case with room for expansion and start picking parts. If you want redundancy, get 2x6 TB or bigger (because you’ll immediately start filling it) and set them up in a RAID or ZFS mirror. That way, if one drive dies, you can limp along with the other until the replacement comes in.
I always recommend not hosting internet-facing services yourself if you can avoid it, because it presents an opportunity for compromise. I self-host a lot of things, but my personal site is on a Namecheap shared host and my email is still Gmail.
Do not go for server hardware; used consumer hardware is good enough for your use cases. Basically any machine from the last 5-10 years is powerful enough to handle the load.
The most difficult decision is the GPU or transcoding hardware for your Jellyfin. Do you want to be power efficient? Then go with a modern but low-end Intel CPU; there you get Quick Sync as the transcoding engine. If not, I would go for a low-end NVIDIA GPU like the 1050 Ti or newer, paired with, for example, an older AMD CPU like the 3600.
For storage, it also depends on budget. Having a backup of your data is much more important than having redundancy. You do not need to back up your media, but do back up everything that is important to you, like the photos in Immich etc.
I would go SSD since you do not need much storage: a separate 500 GB drive for your OS and a 4 TB one for the data. This is much more compact, reduces power consumption, and especially for read-heavy applications it’s more durable, faster in operation, less noisy, etc.
Of course, HDDs are good enough for your use case and cheaper (a factor of 2.5-3x cheaper here).
Probably 8-16 GB RAM would be more than enough.
For any local redundancy or RAID i would always go ZFS.
Quick Sync is more than sufficient for most users. It can handle several concurrent 4K transcodes. It’s also not that common to have to transcode, unless you stream your media when away from home a lot and have poor upload speed.
If going Intel, there are different models of Intel iGPU, so I’d go for the lowest-end CPU that has the higher-end iGPU. My home server is a few years old and has an Intel Core i5-13500. The difference between the 13400 and 13500 looks small on paper, but the 13400 only has UHD Graphics 730, while the 13500 has UHD Graphics 770, which can handle double the number of concurrent transcodes.
Intel iGPUs also support SR-IOV, which lets you share one iGPU across multiple VMs: useful if, for example, you have a Plex server on the host Linux system and Blue Iris in a Windows Server VM, and both need hardware transcoding.
I’ve heard AMD’s onboard graphics are pretty good these days, but I haven’t tried AMD CPUs on a server.
> I’ve heard AMD’s onboard graphics are pretty good these days, but I haven’t tried AMD CPUs on a server.
The main issue is AFAIK still the software support; NVIDIA and Intel are years ahead there.
The benefit of going with a dGPU is that in a few years, when for example AV1 takes off even more, you can just swap the GPU and you’re done; you don’t have to replace the whole system. That at least was my thinking on my setup. My CPU, a 3600X, is probably still good for another 10 years.
> for example maybe AV1 takes even more off
I know this was just an example, but Intel 11th gen and newer has hardware acceleration for AV1.
GPUs have their place, but they significantly increase power consumption, which is an issue in areas with high power prices.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- IP: Internet Protocol
- NAS: Network-Attached Storage
- NFS: Network File System, a Unix-based file-sharing protocol known for performance and efficiency
- Plex: Brand of media server package
- RAID: Redundant Array of Independent Disks for mass storage
- SMB: Server Message Block protocol for file and printer sharing; Windows-native
- SSD: Solid State Drive mass storage
- VPS: Virtual Private Server (opposed to shared hosting)
- ZFS: Solaris/Linux filesystem focusing on data integrity
[Thread #168 for this comm, first seen 15th Mar 2026, 22:00]
> What would you guys recommend for a server machine?
I would recommend buying fairly modern equipment, say from within the past 5 years or so. Desktops and workstations, with a few additions/adjustments, can make excellent, energy-efficient servers. As far as RAM goes, if your equipment takes DDR3 you will escape the current ridiculous price gouging; for RAM, I shop at MemoryStock. HDDs still make good storage units, though I go with an SSD for the OS and HDDs for everything else. I would stay far away from enterprise-type equipment, even though the prices may be tempting: the money you save buying cheap enterprise equipment will be spent on your power bill.
Redundancy covers a lot of ground. You can have a redundant server to fall back to should the wheels fall off of the main server. In the case of say a NAS, RAID gives you redundancy where if one drive fails, you can hot swap it for a fresh one and keep on rocking…pretty much. Redundancy can also apply to backups. I have a main, daily backup, and the same backed up to two different locations.
In addition to equipment selection, you will need to do some reading up on securely setting up a server, if you’ve never done so. Also start thinking about firewalls, WAFs, etc. I would recommend going through the Linux Upskill Challenge. Get your server set up and secured. Familiarize yourself with your server. Add a single service, and play around with that until things start to gel. Then you can think about slowly adding additional services.
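On the securing-the-server side, a couple of lines people usually set early in `/etc/ssh/sshd_config` (just a sketch, not a full hardening guide; make sure key-based login works first so you don’t lock yourself out):

```
# /etc/ssh/sshd_config
PasswordAuthentication no   # keys only; stops password brute-forcing
PermitRootLogin no          # log in as a normal user, then sudo
```

Reload sshd after editing, and keep your existing session open while you test a fresh login.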
Machine-wise, anything will work. Give yourself a chassis with room to add more disks down the road, or just build your storage setup in a way that gives you the flexibility you need (though that tends to come with sacrifices).
I use Nextcloud for general file syncing between devices and occasional small file sharing.
I’d say that a good starting point would be the smallest setup that serves a useful purpose. This is usually some sort of network storage, and it sounds like this might be a good starting point for you as well. Then you can add on and refine your setup however you see fit, provided your hardware is up to it.
Speaking of hardware: while it’s certainly possible to go all out with a rack-mounted, purpose-built 19" 4U server full of disks, the truth is that “any” machine will do. Servers generally don’t require much (depending on use case, of course), and you can get away with a 2nd-hand regular desktop machine. The only caveat here is that for your (perceived) use cases you might want the ability to add a bunch of disks, so for now just go for a simple setup with as many disks as you see fit; you can always expand with a JBOD cabinet later.
Tying this storage together depends on your tastes, but it generally comes down to two schools of thought, both of which are valid:
- Hardware RAID. I think I’m one of the few fans of this, as it does offer some advantages over software RAID. I suspect that the ones who are against hardware RAID and call it unreliable have not been using proper RAID controllers. Proper RAID controllers with write cache are expensive, though.
- Software RAID. As above, except it’s done via software instead (duh), hence the name. There are many ways to approach this, but personally I like ZFS - Set up multiple disks as a storage pool, and add more drives as needed. This works really well with JBOD cabinets. The downside to ZFS is that it can be quite hungry when it comes to RAM. Either way, keep in mind that RAID, software or hardware, is not a backup.
Source: Hardware RAID at work, software RAID at home.
Now that we’ve got storage addressed, let’s look at specific services. The most basic use case is something like an NFS/SMB share that you can mount remotely. This allows you to archive a lot of the stuff you don’t need live. Just keep in mind, an archive is not a backup!
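For illustration, a read-only NFS export plus the matching client mount could look like this (hostname, paths, and subnet are all placeholders):

```
# Server: /etc/exports — expose the archive read-only to the LAN
/tank/archive  192.168.1.0/24(ro,all_squash)

# Client: /etc/fstab — mount it at boot
nas:/tank/archive  /mnt/archive  nfs  ro,_netdev  0  0
```

Run `exportfs -ra` on the server after editing; SMB via Samba is the equivalent route if Windows clients matter more to you.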
And just to be clear: An archive is mainly a manner of offloading chunks of data you don’t need accessible 100% of the time. For example older/completed projects, etc. An archive is well suited for storing on a large NAS, as you’ll still have access to it if needed, but it’s not something you need to spend disk space on on your daily driver. But an archive is not a backup, I cannot state this enough!
So, backups… well, this depends on how valuable your data is. A rule of thumb in a perfect world involves three copies: one online, one offline, and one offsite. This should keep your data safe in any reasonable contingency scenario. Which of these you implement, and how, is entirely up to you; it all comes down to a cost/benefit equation. Sometimes keeping the rule of thumb active is simply not viable, if you have data in the petabytes. Ask me how I know.
But, to circle back on your immediate need, it sounds like you can start with something simple. Your storage requirement is pretty small, and adding some sort of hosting on top of that is pretty trivial. So I’d say that, as a starting point, any PC will do; just add a couple of hard drives to make sure you have enough for the foreseeable future.
If you are a real and total noob, try to get a Synology, UGREEN, or another reputable brand of NAS and start from there.
The point of having one of these is to avoid a big fuck-up resulting in data loss. And from there you will be able to build up what you need.
all the best in this journey
I would absolutely discourage the use of synology and probably any other brand in the NAS realm.
Synology has pulled off some really scummy things in the last few years with their certified SSDs, where only a whitelist of SSDs could be used in an array, and when they tried to push their own HDDs and showed warnings and messages to worry the user that something was wrong. They also retroactively removed transcoding capabilities from their systems.
Those systems are all quite limited for how expensive they are. They are great for simple things, but with the list OP posted you would be heavily limited and have to jump through hoops to have a well-functioning home lab/server.