Background
Hello fellow self-hosters and homelabbers,
A few weeks ago I was able to fill my new NAS with the proper hardware I needed to expand on my earlier setup.
Due to the new capabilities I also wanted a fresh restart. But the more I think about doing one thing, the more I hit other roadblocks and start thinking about doing Y instead.
So I wanted to ask how you would approach my goal.
My current (main) setup:
- Hardware: 11th-gen i5 NUC with an 8TB HDD attached via USB
- OS: Debian 11
- Software: OMV6 for management and Docker for a diverse set of containers
- Current containers: HortusFox + MongoDB, *arr stack, Jellyfin, Uptime Kuma, UniFi Network Application + MariaDB, Traefik, Wallos
Current available hardware for use:
1x 13th-gen i3 NUC running Proxmox 8.2
1x 11th-gen i5 NUC
1x uGreen DXP4800+ NAS with 4x 15TB HDDs in RAIDZ2. The OS is TrueNAS SCALE
My plans:
- NAS storage made accessible via NFS to the Proxmox VE host (see the sketch after this list).
- NAS storage mainly planned as mass-storage for Jellyfin.
- Reimage my 11th-gen NUC with a bare-metal Debian install for Docker.
(I will not virtualize on the 11th-gen NUC because I can’t pass the iGPU to a VM, and I’m not really interested in LXC containers.)
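For the NFS plan, I’m picturing something like this on the Proxmox host (just a sketch; the server IP, export path, and storage ID are made-up placeholders for my setup):

    # Register the TrueNAS NFS export as a Proxmox storage backend
    pvesm add nfs nas-media \
        --server 192.168.1.50 \
        --export /mnt/tank/media \
        --content images,backup
    # Check that the storage is listed and active
    pvesm status

The same thing can also be done in the web UI under Datacenter -> Storage.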
Problems and questions I have at this moment:
1: Should I do a media-storage VM used only for serving media and do the computing on another VM, or one general VM for both?
- Upside to an all-in-one VM: fewer problems with serving storage between many different nodes and keeping it organized.
- Upside to specialized VMs (storage & compute VM): better focus on resources like CPU and RAM.
2: Should I place my whole Docker stack on the 11th-gen NUC again, or split the stacks into their own VM(s)? Example:
- service stack in a service-focused VM
- media-focused stack in a media VM (which also serves the files for Jellyfin)
- Jellyfin bare-metal/dockerized on the 11th-gen NUC
I hope someone can help me untangle my organically grown mess of plans. My Linux skills are not very deep, still very much beginner level. If you are willing to help, please be patient with stupid questions.
If you have any better solutions, pointers to research, (blog) articles on architecting such solutions, examples how you solved storage/management or just willing to help me, I’d be very grateful :)
Are you going to be hosting things for public use? Does it feel like you’re trying to figure out how to emulate what a big company does when hosting services? If so, I’ve been struggling with the same thing. I was recently pointed at NIST 800-207 describing a Zero Trust Architecture. It’s around 50 pages and from August 2020.
Stuff like that, your security architecture, helps describe how you set everything up and what practices you make yourself follow.
Entirely for home use and entertainment but also a bit of learning.
I try to follow best practice from the get-go, even if it’s a bit steep to start like this. I believe that still doesn’t get me anywhere close to the “give everyone every permission recursively” scenario.
But I will expose it via a reverse proxy.
Right now I am experimenting with my VM, doing the all-in-one setup with NFS shares from my other 2 Linux devices. That was successful, apart from the issue that system1 now thinks UID 100 = user “pi” while system2 thinks UID 100 = user “appoxo”.
But yes: if you actually know what your goal/achievement is (e.g. reach a zero-trust permission state for the folder tree of department Y), then it’s easier to research what you need to achieve it.
And that’s where I am already stuck. What do I want to do, and how do I achieve that with the limited time, motivation and resources I have? I believe my current wish list is:
- Have a VM -> Reason: able to snapshot and easily roll back if needed
- Jellyfin and the *arr stack should be able to access and modify the files. Jellyfin via Docker + NFS, the *arr stack via mount points on the same host
- The permissions shouldn’t be overly complicated and shouldn’t involve juggling a hundred users -> maybe groups? (see the sketch after this list)
- Beginner friendly to use and administrate.
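To illustrate the groups idea from the list above (purely a sketch; the group name, GID and path are invented), the usual trick is one shared group with the same GID on every machine, which would also sidestep my UID mismatch from earlier:

    # On every machine that touches the share (pick one GID and stick to it)
    sudo groupadd -g 3000 mediashare
    sudo usermod -aG mediashare pi          # add each local user to it
    # On the machine that owns the files
    sudo chgrp -R mediashare /srv/media
    sudo chmod -R g+rwX /srv/media          # group rw, directories traversable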
All in all I think I will proceed with the all-in-one storage and compute VM and let Jellyfin access it via an NFS volume mounted through docker-compose (sketched below).
Why: I believe it’s easier to use as the bloody beginner I am ;)
BUT if you have a better idea or think I should do it a different way, I’m open to feedback and advice.
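For completeness, this is the kind of thing I mean (server address and export path are placeholders), shown with plain docker commands; in docker-compose the same options go under driver_opts of a top-level volumes: entry:

    # Named volume backed directly by the NAS export
    docker volume create \
        --driver local \
        --opt type=nfs \
        --opt o=addr=192.168.1.50,rw,nfsvers=4 \
        --opt device=:/mnt/tank/media \
        jellyfin-media
    # Mount it into the Jellyfin container
    docker run -d --name jellyfin \
        -v jellyfin-media:/media \
        jellyfin/jellyfin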
I have one VM for running Docker stuff (i.e. the *arr stack, Jellyfin, etc.). Unless your hypervisor supports Docker containers natively, separating them is just going to make things more difficult for you for no good reason.
I don’t run anything else in Docker right now, but if I did, I’d probably stick it in the same VM for now to save on overhead. If it was enough to be its own stack, I’d separate it.
You mean I should plug my stack directly into LXC containers in Proxmox?
What are my benefits over running the Docker stack in a media-storage VM, which I will spin up regardless?
I don’t think Proxmox LXC containers support Docker well, if at all, so no.
I run all my Docker containers in LXCs on Proxmox and there hasn’t been a single problem.
I didn’t mean that literally ;)
You said I should abandon the Docker platform in favor of the LXC container world? Can’t you use Docker inside an LXC container? But that sounds like more work vs. a proper VM.
No, I recommended Docker in a VM.
Assumed so and will probably continue. Thanks for your input :)
I’m afraid I do not follow. TrueNAS SCALE has support for Kubernetes: install containers on top, maybe different containers for different file shares/uses (one container for VM images, one for media, etc.).
Mount said network volumes on the compute boxes.
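On the compute boxes that part is just a standard NFS mount, roughly like this (IP and paths made up):

    sudo mkdir -p /mnt/media
    sudo mount -t nfs 192.168.1.50:/mnt/tank/media /mnt/media
    # To make it persistent, add a line like this to /etc/fstab:
    # 192.168.1.50:/mnt/tank/media  /mnt/media  nfs  defaults,_netdev  0 0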
Not interested in utilizing Kubernetes.
If I’m right, Kubernetes is a sort of HA system for containers? If it is, it would be way out of scope for my use case.
If I’d need to rewrite my whole compose stack it would be very annoying…
Also not sure if the Kubernetes functionality is the same as the TrueNAS SCALE apps, but the dev team deprecated it: https://truecharts.org/news/scale-deprecation/