Do you have any advice or suggestions about it?
- Hardware (what should be enough for a local PC, or VPS…)
- Software (OS [Debian, Yunohost, other…], “containerization” (Docker, virtual machines?), dashboard, management, backups, VPN tunneling…)
- “Utilities” to host (Lemmy, Peertube, Matrix, Mastodon, Actual Budget, Jellyfin, Forgejo, Invidious/Piped, local Pi-Hole, email, dedicated videogame servers like for Minecraft, SearXNG, personal file storage like Drive, AI [in the future, when I can afford a rig that can run a local model decently]…)
I’m aware it’s a lot of stuff to take on, so, do you have any advice on where to start? (how to find a cheap PC to experiment with, if not get a VPS, what to test on it, what “utilities” to try self-hosting first…)
I’ve been through this whole process and wanted to make the best choice and explore all options myself. In the end my conclusion ended up being what most people online recommended after all: keep NAS and compute separate and that Debian is best for a Linux server. Now I have a Synology NAS and a 12th gen Intel mini PC. I run most of what you mention above and it works great.
I spent ages going through so many sources to learn and get this set up. After I got it all done, I found one simple guide that basically covered the whole process, and I really wish I had found it earlier: https://thecybersecguru.com/tutorials/self-hosting-guide/#essential-infrastructure-networking-security-and-access
There are a lot of comments about setups, which is fine and all.
2 things if you’re completely green:
- Yunohost simplifies a lot of installs and gives you popular applications with ease. You can SSH into it later for more customizations too.
- ZimaOS is a great NAS platform for basically any hardware. It’ll run Docker containers and gives you a little more control, but it’s a little more complicated for someone new. I see a lot of Synology references; just think of this like a free/cheap Synology. (Cheap because they do have a “pro” version giving you unlimited disks, whereas free is up to 4 HDDs. I’m currently using 18 drives.)
I just feel this is the most automated and complete way to get set up quickly, both have forums and community support too.
I run Bitwarden and Forgejo on an old Raspberry Pi 3 B+. On my PC I run Fedora Kinoite and the following services (podman quadlets):
- *arr stack
- Jellyfin
- Seerr
- qBittorrent
- Shelfmark
- Grimmory
I use my PC for everything, including gaming, and the services running in the background aren’t even noticeable in terms of performance degradation (unless you’re for example transcoding 4k files on Jellyfin). You don’t necessarily need to buy new hardware, use what you have. When it comes to Lemmy, Mastodon, etc., I’d probably get a VPS. I recommend Anubis when you expose stuff to the internet, especially Forgejo.
Hardware
Anything with an x86 processor and some form of graphics (an iGPU is totally fine). You can use a Raspberry Pi, but it will give you headaches. The more RAM the better, but 8 GB is good enough for a few services. You definitely want an SSD.
Setup
You’ll need a domain, and you’ll need to point the root domain at your public IP with an A record. Then you can set up subdomains for each service with a CNAME record pointing to your root domain (most DNS panels use “@” as the host name to mean the root). So “example.com” points to “123.123.123.123” with an A record, and “nextcloud.example.com” points to “@” (i.e. “example.com”) with a CNAME record.
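In zone-file terms, those records look roughly like this (the domain and IP are the same placeholders as in the example, and your registrar’s panel may present them as form fields instead):

```text
; hypothetical zone for example.com
example.com.            A      123.123.123.123   ; root -> your public IP
nextcloud.example.com.  CNAME  example.com.      ; each service aliases the root
jellyfin.example.com.   CNAME  example.com.
```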
For your domains, I recommend Cloudflare. They’re relatively easy to set up, but more importantly, they don’t charge a markup on domains.
From your router, give your server a DHCP reservation to make sure its IP address doesn’t change, then forward ports 80 and 443 to your server.
Software
I prefer Kubuntu LTS, cause it’s super stable. When you’re installing, tell it to log you in automatically. Then once installed, in the power settings, turn off automatic sleep. You can leave on automatic lock, but it doesn’t really matter, since if someone has physical access to the machine, you’ve already lost.
Docker and Docker Compose for sure. When you set up a docker compose stack, put it in its own directory, to make life easier. So, you can have a directory “nextcloud”, with the docker-compose.yml for the Nextcloud stack (Nextcloud itself, Nextcloud again but running in cron mode, and MySQL/MariaDB).
NGINX Proxy Manager should be your first docker compose stack. Use “host” network mode, so it can talk to your services. Set up your SSL certificates with this, using the DNS option. Your certificate should have two domain entries, one wildcard and one for the root. So your entries would be like “*.example.com” and “example.com”. You can do that on the same cert. You’ll need an API key from your registrar that has access to your domain’s zone to get it working. On Cloudflare you can set that up in your profile. Just give it access to all zones, then jot down the secret key somewhere safe like a password manager. That key is what you’ll enter into NPM when setting up your cert.
Now you can set up some docker compose stacks with your services. Choose a port range for your services, like 8201, 8202, 8203, etc. Each service usually only needs one port mapped, the HTTP port. So use a port you haven’t used and forward it to the HTTP port (“8201:80”). Don’t forward any ports to your DB. Containers in the same stack can talk to each other without having ports forwarded. Use regular directories for your volume mounts, not Docker volumes (so like “./nextcloud:/path/to/nextcloud/data”).
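As a rough sketch, one of those stacks might look like this (assuming a “nextcloud” directory; the image tags, passwords, and paths are placeholders, so check each image’s documentation):

```yaml
# hypothetical nextcloud/docker-compose.yml
services:
  app:
    image: nextcloud
    ports:
      - "8201:80"                  # host port 8201 -> container HTTP port
    volumes:
      - ./nextcloud:/var/www/html  # plain directory bind mount, not a Docker volume
    depends_on:
      - db
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: change-me
    volumes:
      - ./db:/var/lib/mysql
    # note: no "ports:" here -- the DB is only reachable from inside the stack
```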
Set up the subdomain for each service to point to its port in NPM. The address is just “127.0.0.1”, and the port is whatever you set it up as in the Docker Compose stack.
Start with Nextcloud using the “Nextcloud” docker hub image. It says it’s for advanced users, but I’ve been using it for years. It’s super easy.
All of the stuff from linuxserver.io is great, except Nextcloud, cause you can’t run Nextcloud Office with the built in server.
Next, try Immich. It’s awesome.
Then Jellyfin, Nephele WebDAV, Wordpress, Home Assistant.
Remote Access
Install Flatpak and Flathub, then the RustDesk flatpak to access your server remotely. Set it up as a startup program in KDE settings so it launches on boot. Install Flatseal to give RustDesk full permission so it doesn’t always need to ask the local user to approve the screen share. You might need to get an HDMI dummy plug to make it work without a monitor. They’re super cheap.
Oooorrr, you can access it with SSH, but that’s a little more dangerous if you don’t set it up correctly.
Notes
Don’t try Podman, it’s very difficult to get working, and simply won’t work with NPM. Use the official Docker installation method, where you set up their repositories in Kubuntu.
Every once in a while (at least monthly), go through your docker stacks and update them. Usually that’s just a “docker compose pull” and “docker compose up -d”, but sometimes it needs manual intervention, like with Nextcloud’s upgrade script, “occ”. For that you’ll use “docker compose exec -it …”.
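That monthly pass can be sketched as a small loop, assuming one directory per stack under ~/stacks; the `update_stacks` function name and the `COMPOSE` override are my own naming for illustration, not a standard tool:

```shell
# Hypothetical helper for the monthly update pass. Assumes one directory
# per stack (e.g. ~/stacks/nextcloud/docker-compose.yml). COMPOSE can be
# overridden (e.g. COMPOSE="echo docker compose") for a dry run.
COMPOSE="${COMPOSE:-docker compose}"

update_stacks() {
  for dir in "${1:-$HOME/stacks}"/*/; do
    # skip anything that isn't a compose stack
    [ -f "${dir}docker-compose.yml" ] || continue
    echo "Updating $(basename "$dir")"
    ( cd "$dir" && $COMPOSE pull && $COMPOSE up -d )
  done
}

# usage: update_stacks ~/stacks
```

Anything that needs manual intervention (like Nextcloud’s occ) still has to be handled by hand after the loop runs.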
Every once in a while, run “docker system prune -a --volumes” to clean up old stuff. (This is one reason why you don’t want to use docker volumes for your data, they would get scrubbed too unless they were running.)
You’ll probably want to set up some backup solution. Just note that a lot of the files you want to back up are owned by root, so userland backup tools probably won’t work.
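A minimal sketch of that kind of backup, run as root so the root-owned files are readable (the paths and the `backup_dir` function name are just examples):

```shell
# Hypothetical backup helper: tars a source tree into a dated archive.
# Run via sudo or root's crontab so root-owned container data is readable.
backup_dir() {
  src="$1"
  dest="$2"
  mkdir -p "$dest"
  stamp=$(date +%Y-%m-%d)
  tar -czf "$dest/backup-$stamp.tar.gz" -C "$src" .
}

# usage (as root): backup_dir /home/me/stacks /mnt/backup-drive
```

For anything beyond a quick sketch like this you’d want rotation and off-site copies too.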
Don’t try to host your own email. You can probably do it, but it’s astoundingly complicated and difficult to maintain. I know because I run an email service, https://port87.com/. Most ISPs make you jump through hoops to open up outbound traffic on port 25, the email port.
Most Importantly
Have fun!
My first piece of advice: it’s always too small. You always realize you need more once you see how big you can go. As an example from me: I started with 3 TB of storage for data hoarding, quickly upgraded to 21 TB, and it’s still not enough. You may start with something small, but there is so much out there. Technically you could go all the way up to self-hosting AI. Especially if you go the route of image and video generation, that eats resources. I’ve heard the Mac Mini is getting used for local AI.
As for what to host, ask yourself what you actually need. Lemmy and PeerTube I’d count as not that useful for private usage. Cloud storage like Nextcloud is very useful, and so is Jellyfin. I would start by cutting third-party cloud services out of your personal usage: instead of Dropbox/Google Drive/iCloud and so on, use Nextcloud. Same for images. Make your local media (movies, music, audiobooks, books, and so on) accessible to all your devices, with the neat features we love from services like Netflix, Audible, and Kindle. You could also just start hosting your own game servers instead of renting, or make them available only when you’re playing.
But be aware of the risks. Something like a Minecraft server can be made accessible via VPN. If it’s open to the internet, the damage is rather small, provided you don’t value your Minecraft world that highly. I’d rather have my Minecraft world deleted than my personal pictures stolen. When you open a service only to your LAN, you have a lower risk of it getting compromised (the risk isn’t zero! It’s never zero. Air-gapped systems make it near zero, but not zero).
When you do open your stuff to the internet, you need to keep your software updated and configure it well. Stuff like email is a pain to configure; I looked for a long time for a management package that made it easier while still leaving me freedom. The next important thing is updates. YOU NEED TO UPDATE YOUR STUFF! I prefer everything with auto-updates. I use Watchtower for my Docker containers, even though it’s not recommended with some containers. What’s currently a big deal-breaker for me is PostgreSQL: I threw out containers requiring it and avoid new ones, as it needs manual interaction. For work I actually need to migrate from MariaDB to PostgreSQL for our chat system. At least they use the LTS version, so you aren’t constantly doing manual updates.
As for the hardware, it highly depends on what you need. A rented server (a VPS or a dedicated server) has the advantage of already being on the internet, and on the same note, that’s also a disadvantage. For email and websites this is good, but you need to be very careful. You can start with a Raspberry Pi: Home Assistant runs on it and offers add-ons like AdGuard Home, Bookstack, and Vaultwarden. You can also start with a NAS. I run my stuff on my Synology NAS, a DS920+, which has Docker. But you may want to look into a different company, as Synology did some bad stuff that makes me not recommend them anymore. I’ve heard Ugreen is supposed to be good. Obviously you could always go bigger and build your own NAS using TrueNAS or something else. You can also start with a mini PC and use Proxmox.
For the operating system, I think the best thing is whatever floats your boat. I use Ubuntu. Why? Because I like it. Using containers is a big recommendation from me. With Proxmox you have VMs and LXC containers, which lets you experiment inside a container and separate stuff. You can throw it away more easily, without disturbing other stuff that is working.
I really would recommend starting small and keeping an eye on risk. Start in your local network and with stuff that isn’t a big risk. If you start taking more risks, don’t go all in at the beginning. If, for example, you host your own file cloud or email, don’t abandon your previous provider; start small, with unimportant stuff.
Now to myself: I have a rented server, a Pi, a Synology NAS, and a mini PC running.
I did start with a VPS. It was very hard and I made quite a few mistakes; the authorities called me out twice for stupid ones. Those were the fun days when I actually thought running a Windows Server on the internet was a smart idea. I ran a web server and email from it for quite some time. I even had a Skype music bot running without issues. What the authorities didn’t like were my experiments with a DNS server and my MSSQL server. Now my rented server runs my mail and web server (with Nextcloud) and, if I feel fancy, a game server. I don’t utilize it as much as I could, and in the near future I want to switch things up, but I need to keep my mail running.
My Synology NAS is the big one. It has my data on it and runs most of my Docker stuff. There I run Audiobookshelf, Calibre-Web, Gitea, Jellyfin, and Paperless-ngx as my main services.
My Raspberry Pi 5 is running HomeAssistant to control my smart home stuff and a new addition is Music Assistant.
My mini PC is running Proxmox with Frigate. Frigate is an NVR for your CCTV cameras. Not that I have a big CCTV system.
Technically I started even earlier with a Minecraft server and a TeamSpeak server running from my own PC, but that has the big downside that you need to keep your PC running.
My first question would be: how familiar are you with the Linux CLI? How much experience do you have with Docker containers? You are right, your list is quite extensive, and there is nothing wrong with goals, but I would caution you to start small and slow. I would learn how to:
- Drive the Linux bus fairly well. I’m using Ubuntu Jammy for my servers, but there are other options. NixOS seems popular.
- Understand what reverse proxies are and how to deploy one. Caddy is pretty much dead simple with a small learning curve. There are many of them to choose from tho.
- Learn about various security implementations. Security is paramount. Fail2ban is a good start, but I would also explore Crowdsec, Wazuh, etc.
- Learn about Docker and how to set up Docker containers so that they are secure.
- Instead of mass deploying apps and containers, choose one. Get to know it on a personal basis. The installation process, the security aspect, etc. Things are easier when there is only one container to mother hen. Then as your knowledge base grows, add another…and so on.
- Document everything you do. Seriously. You can’t write too many notes. When you’ve successfully deployed your first app while documenting, go back and clean up your notes and make them part of your 3-2-1 backup policy. I can almost guarantee that 6 months down the road, you won’t remember every command you typed or everything you’ve done. Documentation makes troubleshooting much easier.
- Speaking of backups, you’ll need a reliable way to make backups of your server. Borg seems to be quite popular, but there are others.
- Have fun!
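On the Fail2ban point in the list above: a minimal jail.local that enables the stock sshd jail looks something like this (the time and retry values are illustrative defaults, not recommendations from this commenter, so check the Fail2ban docs):

```ini
; hypothetical /etc/fail2ban/jail.local -- local overrides; leave jail.conf untouched
[DEFAULT]
bantime  = 1h
findtime = 10m
maxretry = 5

[sshd]
enabled = true
```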
Copy/paste from another comment I made a while back:
Look into docker containers in general. If I was going to start from scratch in your position this is what I’d do:
Install a Linux distribution on the computer you plan to use for self hosting. This can be anything from a raspberry pi up to a custom build but I would recommend starting with something you have physical possession of. I found Debian with the KDE plasma desktop environment to be pretty familiar coming from Windows. You could technically do most of this on Windows but imo self hosting is pretty much the only thing that a casual user would find better supported through Linux than Windows. The tools are made for people who want to do things themselves and those kinds of people tend to use Linux.
Once you have a Linux distribution installed, get docker set up. Once docker is set up, install portainer as your first docker container. The steps above require some command line work, which may or may not be intimidating for you, but once you have portainer functional you will have a GUI for docker that is easier to use than CLI for most people.
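The Portainer step above boils down to two commands once Docker is running (this follows Portainer CE’s own install docs; double-check them for the current port and image tag):

```shell
# Create a named volume for Portainer's own data, then start the container.
docker volume create portainer_data
docker run -d \
  -p 9443:9443 \
  --name portainer \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

After that, the GUI is reachable at https://your-server-ip:9443.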
From this point you can find the Docker installation instructions for any service you want to run. Docker containers have all the required dependencies of a given service packaged together nicely, so deploying new services is super easy once you get the hang of it. You basically just have to define where the container should store its data and what web port you want to access the service on. The rest is preconfigured for you by the people who created the container.
There’s certainly more to be said on this topic, some of which you would likely want to look into before you deploy something your whole family will be using (storage setup and backup capability, virtual machines to segregate services, remote accessibility, security, etc). However, the above is really all you need to get to the point where you can deploy pretty much anything you’d like on your local network. The rest is more about best practices and saving yourself headaches when something breaks than it is about functionality.
+1 for docker. So much easier than managing dependencies for a ton of services
One of my first self-hosting projects was a Jellyfin server. Double check, but I think the main hardware requirements are just 4 GB of RAM and enough hard drive space for your videos/files!
I really like immich too. It’s like Google photos, but self hosted. It’s super fast for uploading and backing up your photos over your local network. Immich also needs at least 4GB of RAM I think
Immich is not a backup solution. You need to use a backup solution for the stuff in Immich. :)
Pi-hole could be something good to start with: it’s pretty simple to set up, doesn’t depend on other services, doesn’t require hefty hardware, and has a meaningful impact.
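A minimal Docker Compose sketch for Pi-hole might look like this (the ports, paths, and values are placeholders, and the environment variable names have changed between Pi-hole versions, WEBPASSWORD being the v5-era name, so check the image’s docs):

```yaml
# hypothetical docker-compose.yml for Pi-hole
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"    # DNS
      - "53:53/udp"
      - "8053:80/tcp"  # web admin UI
    environment:
      TZ: "Europe/Paris"
      WEBPASSWORD: "change-me"
    volumes:
      - ./etc-pihole:/etc/pihole
    restart: unless-stopped
```

Then point your router’s DHCP DNS setting at the machine running it.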
Depends if you’re hosting something public, or something private.
For public, a webserver is a simple start. It can be anything you want it to be, but as complexity increases, so does the number of potential attack vectors, so keep that in mind if you’re considering adding things like WordPress and the like.
For private, a NAS and/or a simple game server is a simple and useful start.
As for how, there’s a million ways to do it, and I’m an old stubborn BOFH that still cling to the old ways of doing it (as in, no VMs, no containers), so I’ll defer to others for that.
While purpose built server hardware is always nice since it comes with some useful additions, the truth is that “any” machine will do. Old discarded PC will do just fine.
Hardware (what should be enough for a local PC, or VPS…)
One of my “servers” I picked up for $15, saving from electronics “recycling”. Unless you’re transcoding video or hosting something with a hefty database that eats ram, whatever you can scrounge is generally good enough.
Software (OS [Debian, Yunohost, other…], “containerization” (Docker, virtual machines?), dashboard, management, backups, VPN tunneling…)
Debian and proxmox is pretty much my host for everything. I run a bunch of containers, usually lxc though a few docker containers here and there.
“Utilities” to host (Lemmy, Peertube, Matrix, Mastodon, Actual Budget, Jellyfin, Forgejo, Invidious/Piped, local Pi-Hole, email, dedicated videogame servers like for Minecraft, SearXNG, personal file storage like Drive, AI [in the future, when I can afford a rig that can run a local model decently]…)
Jellyfin doesn’t have much in the way of requirements if you’re not transcoding, and if you’ve got a relatively modern Intel iGPU, you’ve got plenty of power to transcode as well. Pi-hole is also pretty lightweight.
In terms of where to find something, I’d start with checking if there are local computer recycling companies, they will resell, and I’ve found they go cheap if you go direct. Otherwise, it depends on where you are. Craigslist occasionally has worthwhile stuff, sometimes ebay, sometimes (and I hate that its become so popular) facebook market. Or maybe just see when a business is getting rid of their off lease stuff and see if you can take something home.
At this point I’m almost exclusively tiny/mini/micro. When one dies (which happened recently), I gut the useful bits and move them somewhere else, or add them to the replacement; that’s how my most recent addition, a NUC, has 32 GB of RAM rather than 8 GB, and a 500 GB M.2 drive rather than a 128 GB one.
Have fun!
What are you trying to do?
In the business world, this would be your business requirements. Once you have those then you can spec the technical requirements.
Without having a target, you’ll just be all over the place.
Start with one thing, get that setup, get management for it in place, backup processes, etc.
Then do the next thing.
Iceberg made a great rec - start with Jellyfin. It’s pretty easy, but touches on all sorts of stuff like storage, backups (which media is worth backing up?), etc. Plus it has a high reward - watching what you want, when you want, from almost any device.
Every self hoster will say start with something, like… and another will disagree.
My suggestion is look at what you have and think about what you want to do, and go from there.
I personally did not do that, so take what I say with a grain of salt. I saw ads that were super targeted at me and started to get a whole lot annoyed. That annoyance got me to buy a Pi Zero and start hosting Pi-hole on my network. I did something and the SD card got fried, so I got a Pi 4 to replace it, not yet realizing I probably just needed a new SD card. I got grumpy that some ads were getting through, so I got another Pi 4 to act as a secondary Pi-hole.
I can now say that I have one Pi Zero 2 running WireGuard just for DNS, and two Pi 5s running Pi-hole; one of them also runs my Jellyfin server and sails the high seas for me, while the other has some other services doing other things. I also have a Pi 4 running HAOS, as I try so hard to get out of proprietary systems. I plan on getting another Pi 5 to be my firewall and another to act as my blog/email server.
Just know that running an email server is really hard and also requires your ISP to unblock outbound traffic on port 25.
Hardware: either
- use whatever you have lying around, e.g. an old laptop, or
- get a used thin client like e.g. a Dell Wyse. (passive cooling = no noise)
A Raspberry Pi is needlessly expensive for self-hosting, since it comes with GPIO pins etc. for controlling custom electronics.
Hardware is too wide a topic to say anything useful out of the blue; it depends on what you can get your hands on (as in what’s available locally) and what you actually want to run. A used corporate desktop might be fine, a Raspberry Pi might be good too, mini PCs are popular, and so on. All have their pros and cons.
For the OS, Proxmox is a solid choice. It has both containers and ‘full’ virtual machines as options. Debian is good too.
And for the utilities, build something you actually want to use. Pi-hole is pretty nice. Game servers are good to practice with if you’re into that stuff. But if you just build stuff for the sake of it, you’ll of course learn along the way, but it leaves very little to actually enjoy in what you’ve built.
I really like my Immich and Nextcloud servers and they’re well worth my time to keep up and running. But with those there’s the additional challenge of keeping them backed up. Losing a Pi-hole server wouldn’t be that bad, it’s easy enough to rebuild, but losing a terabyte of photos is another thing entirely.