I use Traefik as my main reverse proxy as well for the same reason—container niceties. But then I actually also use nginx… inside container images, for containers that just serve static files, for example.
Use the right tool for the job!
I develop a moderately popular open source project and self-host it on Gitea. But I also mirror it on GitHub and accept PRs there. And one PR submitter on GitHub said they preferred to contribute there because that’s where potential employers look for open source activity.
Could employers also look on Gitea/Forgejo? In theory, yes. But some of them literally ask for your GitHub profile on their application forms…
I use Ansible to meet this need. Whenever I want to deploy to one or more remote hosts, I run Ansible locally and it connects via SSH to the remote host(s). There, it can run Docker Compose, configure services, lay down files on the host, restart things, etc.
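In case a concrete example helps, here's the general shape of one of those plays (the host group, paths, and file names are all made up for illustration, and the Compose task assumes the community.docker collection is installed):

```yaml
# deploy.yaml: run locally with ansible-playbook; Ansible SSHes to each host.
- hosts: webservers
  become: true
  tasks:
    - name: Lay down the Compose file on the host
      ansible.builtin.copy:
        src: docker-compose.yaml
        dest: /srv/myapp/docker-compose.yaml

    - name: Start (or update) the stack
      community.docker.docker_compose_v2:
        project_src: /srv/myapp
        state: present
```

Then it's just `ansible-playbook -i inventory.ini deploy.yaml` from your local machine.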
The site links to a site that accepts payment data. So because the author’s site is http, a MITM attacker could change the payment links from lulu.com to site-that-actually-steals-your-credit-card.com.
That’s one huge thing https provides over http… assurance of unadulterated content, including links to sites that actually deal in sensitive data.
I haven’t used an out-of-the-box self-hosted solution for this, but I agree with others that blog or static site generator software could work. I think the main challenges you’ll find though are: 1. Formatting the content/site for long-form readability, and 2. Adding a table of contents and previous/next chapter links without a bunch of manual work.
Fortunately blog and static site software have plugins that can add missing functionality like this. Here’s one for WordPress (that I have no first-hand experience with): https://wordpress.org/plugins/book-press/
I also want to ask: What’s your plan for discovery/marketing? Because one of the benefits of the non-self-hosted web novel sites is that readers can theoretically discover your story there. But if you instead just post it on your own site, how will readers ever find it?
That’s unfortunate about NPM and Proxy Protocol, because plain ol’ nginx does support it.
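For reference, the nginx side of Proxy Protocol is just a listen flag plus the realip module directives, something like this (the trusted IP range here is a placeholder):

```nginx
server {
    # Accept Proxy Protocol on the TLS listener.
    listen 443 ssl proxy_protocol;

    # Recover the real client IP, but only trust the header from the upstream proxy.
    set_real_ip_from 10.0.0.0/8;
    real_ip_header proxy_protocol;
}
```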
I hear you about Traefik… I originally came from nginx-proxy (not to be confused with NPM), and it had pretty clunky configuration, especially with containers, which is how I ended up moving to Traefik… which is not without its own challenges.
Anyway, I hope you find a solution that works for your stack.
I struggled with this same problem for a long time before finding a solution. I really didn’t want to give up and run my reverse proxy (Traefik in my case) on the host, because then I’d lose out on all the automatic container discovery and routing. But I really needed true client IPs to get passed through for downstream service consumption.
So what I ended up doing was installing only HAProxy on the host and configuring it to pass all traffic through to my containerized reverse proxy as plain TCP with Proxy Protocol (which includes original client IPs!), rather than terminating HTTPS itself. Then I configured my reverse proxy to expect (and trust) Proxy Protocol traffic from the host. This allows the reverse proxy to receive original client IPs while still terminating HTTPS. And then it can pass everything to downstream containerized services as needed.
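In case the shape of that setup is useful to anyone, here's a minimal sketch of both halves (ports, IPs, and names are placeholders; the second half assumes Traefik v2+ static configuration):

```
# /etc/haproxy/haproxy.cfg on the host: pass raw TCP through with Proxy Protocol.
frontend https_in
    mode tcp
    bind :443
    default_backend containerized_proxy

backend containerized_proxy
    mode tcp
    # send-proxy-v2 prepends a Proxy Protocol header carrying the client's real IP.
    server traefik 127.0.0.1:8443 send-proxy-v2
```

```yaml
# Traefik static config: only trust Proxy Protocol headers coming from the host.
entryPoints:
  websecure:
    address: ":8443"
    proxyProtocol:
      trustedIPs:
        - "127.0.0.1/32"
```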
I tried several of the other options mentioned in this thread and never got them working. Proxy Protocol was the only thing that ever did. The main downside is that there’s another moving part (HAProxy) added to the mix, and it does need to be on the host. But in my case, that’s a small price to pay for working client IPs.
More at: https://www.haproxy.com/blog/use-the-proxy-protocol-to-preserve-a-clients-ip-address
I can’t comment on that, but actual Docker Compose (as distinct from Podman Compose) works great with Podman.
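For anyone wanting to try that combo, the gist (for rootless Podman) is to enable Podman's Docker-compatible API socket and point Docker Compose at it:

```sh
# Enable Podman's Docker-compatible API socket (rootless).
systemctl --user enable --now podman.socket

# Point Docker Compose (and the docker CLI) at that socket.
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

docker compose up -d
```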
Maybe…? I’m not familiar with that router software, but it looks plausible to me…
Since this is on a home network, have you also forwarded port 80 from your router to your machine running certbot?
This is one of the reasons I use the DNS challenge instead… Then you don’t have to route all these Let’s Encrypt challenges into your internal network.
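For example, with one of certbot's DNS plugins, issuance happens entirely via a DNS TXT record, so nothing has to reach your internal network (Cloudflare here purely as an example provider; the credentials path is made up):

```sh
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  -d example.com -d '*.example.com'
```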
I hope one (or both!) of them end up working out for you.
Separate configs is totally reasonable. It just sounds like you haven’t configured your Borg passphrase with borgmatic… Otherwise it wouldn’t prompt for your passphrase at all.
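For reference, that's a one-line option in borgmatic's configuration file (top-level in borgmatic 1.8+; under the storage: section in older versions):

```yaml
# /etc/borgmatic/config.yaml
encryption_passphrase: "your Borg repository passphrase"
```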
I’m not super familiar with Unraid, but yeah, the borgserver image sounds like it’d work for this… You don’t need borgmatic on the server side unless you want it there to make running Borg commands easier.
Nope! Borg always requires Borg on the remote side. It’s Borg’s biggest strength and weakness versus competing backup systems IMO. Strength, because it can do pretty smart stuff with its own code running on both sides. Weakness, because it means it doesn’t work natively with cloud object storage like S3. It’s a tradeoff like anything else.
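Concretely, a remote Borg repository is just an SSH URL, and Borg runs `borg serve` on the far end over that connection (user, host, and paths here are examples):

```sh
# Initialize a repository on a remote host that has Borg installed.
borg init --encryption=repokey ssh://backup@backup-host/./backups/myrepo

# Back up to it; "borg serve" handles the server side of the conversation.
borg create ssh://backup@backup-host/./backups/myrepo::{now} /home /etc
```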
Glad to hear it’s (mostly) working out for you! I know you came here looking for best practices with restores, but if you end up coming up with anything yourself, feel free to comment on that Docker borgmatic ticket with requests or ideas. I use the container myself on some systems for the same reasons you do, and I also wouldn’t mind smoother restores!
borgmatic dev here. First of all, if Vorta is working well for you to recover files, then by all means use Vorta! Right tool for the job and all. Having said that, a couple of thoughts on using borgmatic in Docker and recovering files:
borgmatic has a search feature (`borgmatic list --find`) that makes finding a particular file in an archive or across archives pretty easy. So that might be step one in restoring an accidentally deleted file.
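For instance (the pattern is just an example):

```sh
# Search every archive in the repository for matching paths.
borgmatic list --find '*important.txt'
```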
Once you’ve found the file and archive to restore, you can either use `borgmatic extract` or `borgmatic mount`. With `extract`, you copy one or more files out of a backup archive. The challenge though is that with borgmatic in a container, by default there’s not an easy way to copy those files into their original locations. However, I think the “fix” is to mount your source volumes as read-write instead of (the documented) read-only. That way you can easily copy extracted files back to where they belong.

As for `borgmatic mount`, you’ve got a similar challenge and fix. You can presumably mount backup archives (or a whole repository) within the container, but then you need to copy your recovered files out of that mount into their original source volumes. So that probably also means those volumes need to be mounted read-write.
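As a rough sketch, those two paths look something like this (archive name, file paths, and the mount point are examples):

```sh
# Copy one file out of the most recent archive into /restore.
borgmatic extract --archive latest --path home/user/important.txt --destination /restore

# Or browse an archive as a filesystem and copy files out by hand.
borgmatic mount --archive latest --mount-point /mnt/borg
borgmatic umount --mount-point /mnt/borg
```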
Let me know if you have any questions!
I have one Compose file per stack, which is an application and all of its containers, databases, etc. Pretty much the same way I organized things with just Docker.
Since I use Docker Compose with Podman, I just make a single systemd service to run Docker Compose on boot, thereby starting all my containers at once.
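For anyone curious, that's just a oneshot unit along these lines (the unit name and stack paths are mine as examples; this assumes the rootful podman.socket, so adjust for rootless):

```ini
# /etc/systemd/system/compose-stacks.service
[Unit]
Description=Bring up all Compose stacks
Wants=network-online.target podman.socket
After=network-online.target podman.socket

[Service]
Type=oneshot
RemainAfterExit=true
Environment=DOCKER_HOST=unix:///run/podman/podman.sock
# Type=oneshot allows multiple ExecStart lines: one per stack.
ExecStart=/usr/bin/docker compose -f /srv/stacks/myapp/docker-compose.yaml up -d
ExecStart=/usr/bin/docker compose -f /srv/stacks/otherapp/docker-compose.yaml up -d

[Install]
WantedBy=multi-user.target
```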
It deduplicates aggressively at the chunk level (content-defined chunking). So if your files don’t change much, each additional backup takes very little space. And if a file changes a little, Borg only backs up the changed chunks instead of the whole file again.
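You can watch this happen with `--stats` (the repository path is an example):

```sh
# The "Deduplicated size" column in the stats output shows how little
# new space a mostly-unchanged backup actually consumes.
borg create --stats /path/to/repo::{now} /home
```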
Borg also has a rich ecosystem of wrappers and tools (borgmatic, Vorta, etc.) that extend its functionality and make it easier to use.
You don’t even need a star cert… The DNS challenge works for that use case as well.