Yeah, I know, that’s a huge advantage in this situation, but not one I can take advantage of 🙂
Switched to a qBittorrent + gluetun sidecar recently and it’s been pretty good compared to the poorly maintained combo torrent+OpenVPN images I was using. Being able to update my torrent client image/config independently from the VPN client is great. Unfortunately most of the docs are Docker focused, so it’s a bit of trial and error to get it set up in a non-Docker environment like Kubernetes. Here’s my deployment in case it’s useful for anyone. Be careful that you configure qBittorrent to use “tun0” as its network interface or you will be exposed (got pinged by AT&T before I realized that one). I’m sure there’s a more robust way to make use of gluetun’s DNS over TLS and iptables kill switch that doesn’t require messing with the qBittorrent config to stay secure, but that’s what I have so far and it works well enough for now.
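The gist of the pod spec is something like this (simplified sketch, not my exact manifest; the image tags, provider, and env values are illustrative, so check gluetun’s docs for your VPN):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: qbittorrent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qbittorrent
  template:
    metadata:
      labels:
        app: qbittorrent
    spec:
      containers:
      - name: gluetun
        image: qmcgaw/gluetun
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]   # may also need /dev/net/tun depending on your setup
        env:
        - name: VPN_SERVICE_PROVIDER
          value: mullvad          # whatever provider you use, plus credentials per gluetun's docs
      - name: qbittorrent
        image: lscr.io/linuxserver/qbittorrent
        ports:
        - containerPort: 8080     # web UI

Since containers in a pod share a network namespace, qBittorrent’s traffic rides gluetun’s tun0 once you point the client at that interface in its settings.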
Look for refurbished units, you can get enterprise grade gear for like half the retail price. I recently got a refurbished APC from refurbups.com. Comes with brand new batteries, and it’s mostly rack mountable stuff. Ended up being a little over half the price of a brand new one with shipping. Can’t tell at a glance if they ship to Canada, but if not I’d be surprised if there wasn’t a similar Canada-based site you could find.
Got a refurbished APC coming in today. Looking forward to not having to worry about my NAS drives or losing internet because of a split-second power blip.
Not really, it’s mostly a hobby/nerdy/because-I-can thing. I am a software engineer with a decade of experience. The job sometimes requires virtual sysadmin work (VMs/containers, cloud networking, etc.). Setting up my own bare metal cluster has given me more insight into how things work, especially on the network side. Most of my peers take for granted that traffic gets in or out of a cluster, but I can actually troubleshoot it or design with it in mind.
I considered it, but RAM is very limited on the NAS and the cluster nodes; it’s my primary bottleneck. It would also be more volatile. The two SSDs are RAID 1 redundant, just like the underlying HDDs, in addition to the built-in power loss protection on the drives. RAM disks are great if you can spare the memory and have a UPS though.
FYI, you will not be able to do live video transcoding with a Raspberry Pi. I overclocked my Pi 4’s CPU and GPU and it just can’t handle anything but direct play and maybe audio stream transcoding, though I’ve never had luck with any transcoding, period. I either download a format I know can direct play or, more recently, use tdarr (server on the Pi, node running on my desktop when I need it) to transcode into a direct play format before it hits my Jellyfin library. Even just using my AMD Ryzen 5 (no GPU), it transcodes something like 100x faster than a tdarr node given 2 of the Pi’s CPU cores. You could probably live transcode with a decent CPU (newer Intel CPUs are apparently very good at it) if you run Jellyfin on the NAS, but then you’re at odds with your low power consumption goals. Otherwise Jellyfin on a Pi is great.
Good luck, I’d like to build a NAS myself at some point to replace or supplement my Synology.
I’d start with trying to find aarch64 container images. Search “image name aarch64”. If the source is available you could also build the image yourself, but I’ve never found software I wanted to use badly enough to do that. If you’re lucky someone already did it for you, but these images often aren’t kept up to date. Do the community a favor and drop the owner an issue asking for aarch64 builds if nothing else.
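If you do end up building one yourself from an x86 machine, docker buildx can cross-build for ARM (the image name here is just a placeholder, and this assumes buildx plus QEMU emulation is already set up):

docker buildx build --platform linux/arm64 -t some-image:aarch64 .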
Nice! I have my Turing Pi 2 still sitting in the box until I can actually find some Pi CM4s. One of these days…
I do have a k3s cluster running on two regular Pis with PoE HATs and a little network switch. Works well. Here’s my homelab repo if you want food for thought (Ansible for bootstrapping the cluster, then Helm for apps).
Generally a hostname based reverse proxy routes requests based on the host header, which some tools let you set. For example, curl:
curl -H 'Host: my.local.service.com' http://192.168.1.100
Here, 192.168.1.100 is the LAN IP address of your reverse proxy and my.local.service.com is the service behind the proxy you’re trying to reach. This can be helpful for tracking down network routing problems.
If TLS (HTTPS) is in the mix and you care about it being fully secure even locally, it can get a little tricky depending on whether the route is passthrough (the application handles certs) or terminate-and-reencrypt (the reverse proxy handles certs). Most commonly you’ll run into problems with the client not trusting the server because the “hostname” (the LAN IP address when accessing directly) doesn’t match what the certificate says (the DNS name). Lots of ways around that as well, for example adding the service’s LAN IP address to the cert’s subject alternative names (SANs), which feels wrong but it works.
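For example, with a reasonably recent OpenSSL (1.1.1+ for -addext), a self-signed cert with the LAN IP in the SANs looks something like this, reusing the made-up hostname/IP from above:

openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout service.key -out service.crt \
  -subj "/CN=my.local.service.com" \
  -addext "subjectAltName=DNS:my.local.service.com,IP:192.168.1.100"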
Personally I just run a little DNS server so I can resolve the various services to their LAN IP addresses and TLS still works properly. You can use your /etc/hosts file for a quick and dirty “DNS server” for your dev machine.
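Sticking with the same made-up example, the /etc/hosts version is just one line:

192.168.1.100   my.local.service.com

That gets you the hostname match (and working TLS) without standing up any DNS.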
I host publicly accessible content on my network. There is a risk, yes, and it takes understanding and work to do it safely. My firewall is dropping packets from all over the world constantly from crawlers/bots. If you don’t know what you are doing, using something like tailscale is going to be way safer and easier.
Never used Inoreader, but I recently switched to NewsBlur, which is open source (app installable via F-Droid) and self-hostable. If you don’t want to self-host, they have a freemium model for their hosted service; couldn’t tell you what free vs. paid gets you, but I haven’t bumped into any limits yet. You can also log in to their site to browse via web browser.
So far the app looks better than other open source readers I’ve tried and thumbnails generally load so the lists are a bit livelier.
So I have Jellyfin deployed to my Kubernetes homelab, with the router port forwarded to the ingress controller (essentially a reverse proxy) on the cluster. So it is exposed to the internet. Everything on it has authentication, either built into the application or using an OAuth proxy. All applications also have valid SSL configurations thanks to the reverse proxy. I also use Cloudflare DNS with their proxy enabled to access it, and have firewall rules to drop traffic hitting port 80/443 that doesn’t originate from those Cloudflare proxy IPs (required some scripting to automate; roughly sketched below). It drops a lot of traffic every day. I have other security measures in place as well, but those are the big ones.
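The scripting is roughly this shape, simplified down (Cloudflare publishes its ranges at cloudflare.com/ips-v4 and ips-v6; in practice you also want to handle IPv6 and refresh periodically since the ranges can change):

for ip in $(curl -s https://www.cloudflare.com/ips-v4); do
  iptables -A INPUT -p tcp -s "$ip" -m multiport --dports 80,443 -j ACCEPT
done
iptables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP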
So yeah, if you expose your router to the internet, it’s gonna get pinged a lot by bots and someone might try to get in. Using a VPN is a very simple way to do this securely without exposing yourself, and I’d suggest going that route unless you know what you’re doing.
So a good exercise for threat modeling is to think through what would happen if your instance is compromised. Are there shared passwords on the machine? Other services? Private user data? Etc. Most likely your answer is there is nothing particularly sensitive on your Lemmy machine. If the instance is compromised they just have access to your compute resources at which point they might try to mine crypto with it or something.
So with that in mind, I might check on your billing model to make sure there isn’t any sort of scaling cost they might be able to run up if that happened. Perhaps put some resource usage alarms in place. I’m honestly not familiar with Linode, but I have a lot of experience with AWS and GCP from my job.
I also recently found a nice general guide to securing a Linux server on GitHub you might find useful or interesting.
My homelab is a 2-node Kubernetes cluster (k3s, Raspberry Pis); going to scale it up to 4 nodes some day when I want a weekend project.
Built it to learn Kubernetes while studying for CKA/CKAD certification for work, where I design, implement, and maintain service architectures running in Kubernetes/OpenShift environments every day. It’s relatively easy for me to manage Kubernetes for my homelab, but it’s a bit heavy and has a steep learning curve if you’re new to it, which (understandably) puts people off it, I think. Especially for homelab/selfhosting use cases. It’s a very valuable (literally $$$) skill if you’re in that enterprise space though.