I’m using Heimdall to access my self-hosted stuff ATM. I would like my family to use it too if they’re so inclined, but there’s no way they will be able to remember the IP addresses — I know I can’t!
Is a DNS server what I’m looking for? If so, I’m already hosting a couple of instances of AdGuard; can I just set it so that Plex is 192.xxx.x.47 and Snapdrop is 192.xxx.x.53, and use that to resolve the request, so my 13-year-old can just type “plex” into his browser and find it?
Or do I need something like Caddy or Nginx or something in between?
Thanks for any advice.
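For the name-to-IP half of this, AdGuard Home has a DNS rewrites feature (Filters → DNS rewrites in the web UI). A sketch of what the equivalent entries look like in AdGuardHome.yaml — the IPs and the `.home` names here are made-up examples, and the exact key path varies between AdGuard Home versions:

```yaml
# AdGuardHome.yaml fragment (illustrative values only)
filtering:
  rewrites:
    - domain: plex.home        # bare "plex" only works if clients append a search domain
      answer: 192.168.1.47
    - domain: snapdrop.home
      answer: 192.168.1.53
```

Note that DNS can only answer with an IP, never a port, which is why replies below bring in a reverse proxy for the port part.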
Not *.crypt. Say you buy mycooldomain.crypt. You then point that domain and *.mycooldomain.crypt at your public IP via the registrar’s DNS service.
You would then forward the ports on your router (like 80 and 443) to your proxy. This will get your external users working.
Internally you’ll need to set up your local DNS so that it knows it’s authoritative for mycooldomain.crypt. There are multiple ways to do this depending on what hardware and software you’re running, so do some googling. In my local DNS, *.mycooldomain.crypt points to my local proxy’s IP, so that it resolves inside the network.
So: external DNS points to your public IP, and your router/firewall forwards those ports to the proxy. Internally, your devices ask your local DNS (router/Pi-hole/whatever you use) what that domain is, and it responds with the local IP of the proxy.
This is how I do it. It works internally and externally, though it’s more than OP needs. :)
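If the local DNS box happens to run dnsmasq (OpenWrt, Pi-hole’s backend, etc.), the internal wildcard described above is a single line — the IP here is a placeholder for the proxy’s LAN address:

```
# /etc/dnsmasq.conf (or a drop-in under /etc/dnsmasq.d/)
# Resolves mycooldomain.crypt AND every subdomain to the proxy
address=/mycooldomain.crypt/192.168.1.10
```

Other resolvers (Unbound, AdGuard Home, Pi-hole’s UI) have their own equivalents, so check the docs for whatever you actually run.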
To add to what’s been said (in case it’s useful to others), it’s worth looking at SWAG and Authelia to do the proxying for services visible to the Internet. I run them in Docker containers: SWAG does all the proxying, takes care of the SSL certificate and auto-renews it, and Authelia adds MFA to the services you run that support it (all browsing, MFA-aware apps, etc.).
Another thing I like about SWAG’s setup is that you select which services/hostnames you want to expose, name them in the SUBDOMAINS environment variable in Docker (easy to remove one if you take a service down, for maintenance, etc), and then each has its own config file in Nginx’s proxy-confs directory that does the
https://name.domain -> http://IP:port
redirection for that service (e.g. wordpress.subdomain.conf), assuming the traffic has met whatever MFA and geo-whitelisting rules you have set up. I also have Cloudflare protecting the traffic (proxying the domain’s A record and the wildcard CNAME) to my public address, which adds another layer.
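For anyone who hasn’t seen one, a SWAG proxy-conf is roughly shaped like the abridged sketch below — the shipped sample configs include more hardening and auth includes than this, and `wordpress` here stands in for whatever your container is named:

```nginx
# /config/nginx/proxy-confs/wordpress.subdomain.conf (abridged sketch)
server {
    listen 443 ssl;
    server_name wordpress.*;

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;   # Docker's embedded DNS
        set $upstream_app wordpress;     # container name on the same Docker network
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
```

SWAG only activates the configs matching the names you list in SUBDOMAINS, which is what makes adding/removing a service so painless.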
Ok that makes sense. I’m not trying to tie any of this to my external domain though, I just want to proxy Lan names at the moment.
I have a domain set up for home assistant though Cloudflare and I don’t want anything else to be externally accessible.
So what I’m currently trying to do is have radarr.crypt, sonarr.crypt, plex.crypt, and openwrt.crypt hit the correct LAN IP:port.
I can’t do that with DNS alone, because DNS only maps names to LAN IPs, not ports.
So I’m trying to have *.crypt resolve to the Nginx IP and have Nginx proxy each name to the right IP and port.
I’ve been prodding it all day and even had ChatGPT have a go at it but I’m getting nowhere.
And this is why I don’t like Nginx.
I have managed to get Nginx working and even got a pretty UI for it, but just can’t get it to proxy the names to my IP addresses.
It ain’t a big deal, Heimdall does the job for now, just thought it would be a nice way to dip my toes in.
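For the record, the *.crypt scheme described above only needs one server block per name once local DNS points the names at the Nginx box. A minimal sketch — the LAN IPs are placeholders, though 7878 and 32400 are Radarr’s and Plex’s default ports:

```nginx
# Assumes local DNS (e.g. AdGuard rewrites) already resolves *.crypt
# to the machine running Nginx. LAN IPs below are examples only.
server {
    listen 80;
    server_name radarr.crypt;
    location / {
        proxy_pass http://192.168.1.50:7878;   # Radarr's default port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name plex.crypt;
    location / {
        proxy_pass http://192.168.1.47:32400;  # Plex's default port
        proxy_set_header Host $host;
    }
}
```

If you’re using Nginx Proxy Manager’s UI instead of raw configs, each of these corresponds to one “proxy host” entry (domain name → forward IP and port).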
Thanks, I’ve saved this comment for next time I try to do this
Trust me, getting an external domain makes it easier. You can get one for like $5. I tried to do the internal-only thing too, but SSL just gets so complicated because you have to make every device trust your internal certs. With a proxy you can have one cert source that your computers already trust, because it’s Let’s Encrypt. Just buy a cheap domain and you’ll thank yourself later.
I already have one, I treated myself to my own .com domain for Chrimbo just gone and own it for the next 5 years