So I have a webserver running nginx, and I want to use it as a reverse proxy to access web applications running elsewhere. I know this is a pretty standard use case, and that the traditional approach is to use virtual hosts to proxy the different apps.
Like, normally you would do something like:
- https://app1.webserver.something => app1.host:3000
- https://app2.webserver.something => app2.host:1234
- https://app3.webserver.something => app3.host:8080
I am familiar with this approach, and know how to set it up.
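i.e., one server block per app, roughly like this (a sketch only, using the names from the list above; TLS directives omitted):

```
# Traditional vhost approach: one server block per app, e.g. for app1
server {
    listen 443 ssl;
    server_name app1.webserver.something;

    # ssl_certificate / ssl_certificate_key omitted here

    location / {
        proxy_pass http://app1.host:3000;
    }
}
```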
In this case there is a catch, though. For reasons I can’t get into here, I can’t use virtual hosts, and everything has to be served from the same webserver.something domain. So I thought I would use a subpath to host each app.
What I want to do is this basically:
- https://webserver.something/app1 => app1.host:3000
- https://webserver.something/app2 => app2.host:1234
- https://webserver.something/app3 => app3.host:8080
In my nginx config file I have something like this:
```
upstream app1 {
    server app1.host:3000;
}

server {
    ...

    location /app1 {
        proxy_pass http://app1/;
    }

    ...
}
```
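As I understand it, because proxy_pass here carries a URI part (the trailing slash), nginx replaces the matched location prefix with that URI before forwarding. The usual idiom puts a trailing slash on both the location and the proxy_pass; a sketch with my own comments:

```
# With "location /app1" (no trailing slash) and "proxy_pass http://app1/":
#   GET /app1/foo  ->  GET //foo upstream   (prefix "/app1" replaced by "/")
#   GET /app1x     also matches the prefix and becomes GET /x
# The conventional pairing keeps the mapping clean:
location /app1/ {
    proxy_pass http://app1/;    # GET /app1/foo -> GET /foo upstream
}
```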
This works, to the extent that all requests going to /app1/* get forwarded to the correct application host. The issue, though, is that the application itself uses absolute paths to reference some resources. For example, app1 will try to reference a resource like /_app/something/something.js, which of course produces a 404 error.
I suppose that for this particular error I could map /_app/ to the app1 application host with another location block, but that seems dirty to me and I don’t like it. First, it could quickly become a game of whack-a-mole, trying to get all the absolute paths remapped; second, it could easily lead to conflicts if other applications use the same absolute paths.
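To be concrete, the workaround I mean would be something like this (only safe as long as /_app/ is unique to app1):

```
# Whack-a-mole fix: pin app1's absolute asset path to its upstream
location /_app/ {
    proxy_pass http://app1;    # no URI part, so the path is forwarded unchanged
}
```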
So I guess my question is: is there a way to do this cleanly, and dynamically rewrite those absolute paths per app?
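The closest thing I’ve found so far is the ngx_http_sub_module, which can rewrite response bodies on the fly, but it feels almost as brittle; a rough sketch (the patterns below are guesses, real apps embed paths in many more forms):

```
location /app1/ {
    proxy_pass http://app1/;
    proxy_set_header Accept-Encoding "";    # sub_filter can't rewrite compressed bodies
    sub_filter_types text/html text/css application/javascript;
    sub_filter_once off;                    # replace every occurrence, not just the first
    sub_filter 'href="/' 'href="/app1/';
    sub_filter 'src="/'  'src="/app1/';
}
```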
Apps control the URLs. They either provide configuration for that, in which case you have to set them up manually, or they don’t and you’re screwed. Use sub-domains. You can use a wildcard DNS record (*) to point everything to the same IP address and then let nginx deal with it.

Yeah, the problem was more that this machine is running on a network where I don’t really control the DNS. That is to say, there’s a shitty ISP router with DHCP and automatic dynamic DNS baked in, but no way to add manual entries for vhosts.
I thought about screwing with the /etc/hosts file to get around it, but what I ended up doing instead was installing a Pi-hole Docker container for DNS (something I had been contemplating anyway), pointing it at the router’s DNS so every local DNS name still resolves, and then adding manual entries for the vhosts.

Another issue I didn’t really want to deal with was regenerating the TLS certificate for the nginx server to make it valid for every vhost, but I just bit the bullet on that.
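The Pi-hole side boils down to a handful of local DNS records; in dnsmasq-style syntax it would look roughly like this (file name and addresses are placeholders, and the same thing can be done in the web UI under Local DNS):

```
# /etc/dnsmasq.d/99-vhosts.conf (placeholder path and addresses)
# Keep the ISP router as upstream so existing local names still resolve:
server=192.168.1.1
# Manual entries for the vhosts, all pointing at the nginx box:
address=/app1.webserver.something/192.168.1.50
address=/app2.webserver.something/192.168.1.50
address=/app3.webserver.something/192.168.1.50
```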
If you’re serious about self-hosting, you shouldn’t use an ISP router. Buy a proper router that lets you manage DNS records yourself; my recommendation would be top-end ASUS routers.
For SSL certificate management, use https://nginxproxymanager.com/. You just add a new domain there and it will fetch and manage a certificate for it.
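Setup is a single container; from memory, their quick-start compose file is roughly the following (double-check the docs on the site):

```
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'     # HTTP
      - '443:443'   # HTTPS
      - '81:81'     # admin web UI
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```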