I think the main benefit of federation is discoverability. I can spin up my own Gitea or Forgejo (or something else!) instance, and when people search for code from their instances, they can still discover my public repositories; if they want to contribute, they can fork them and open PRs from their own instances.
So yeah, it mostly means you can self-host and provide space to others, while keeping the main benefit that GitHub offers right now (i.e., everything is in one place).
Oh, it looks like it! Something went wrong with the Zola build and I must not have noticed. Thanks a lot for pinging me about it, I will fix it today!
EDIT: Fixed! That’s what you get when you upgrade the Zola version locally, with a breaking change in the config, and forget to bump the Docker image version! :) Thanks for letting me know; it would have taken me a long time to notice it was broken!
Convenience is the main reason, I suppose. If I mess up the WireGuard config, I need to go through the KVM console even just to debug the tunnel. It also means that if I am somewhere else and have to SSH into the box, I can’t just plug in my YubiKey and SSH from there. It’s a rare occurrence, but still…
Ultimately I do understand both points of view. The thing is, SSH bots pose no threat once the bare minimum SSH hardening has been done, and their resource consumption is negligible, so they have no real impact.
To me the tradeoff is a slight inconvenience vs. a slightly bigger attack surface (in case of CVEs). Ultimately everyone can decide which compromise is acceptable for them, but I would say it’s not a particularly consequential choice.
Hey, the short answer is yes, you can.
Let me elaborate a little more:
In practice, I personally would choose a simple setup where the interesting logs are just forwarded (in syslog format, for example) to a single crowdsec instance. If you have ingress from a single node, I’d go for running it on the host and banning via the firewall; if you have multiple ingress nodes, then I would run it inside the cluster and ban via a load balancer/cloud firewall/whatever you have in front.
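For reference, a minimal sketch of what the receiving side could look like on the central crowdsec instance, using its syslog datasource (exact file paths and option names may vary between versions, so check the crowdsec docs):

```yaml
# acquis.yaml on the central crowdsec instance (sketch)
# Listens for logs forwarded by the other nodes in syslog format.
source: syslog
listen_addr: 0.0.0.0
listen_port: 4242
labels:
  type: syslog
```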
In essence, I would spend some time thinking about your preferences. It might take a little while to make the setup clean, but I think you have plenty of flexibility to do what you prefer. Let me know if you want to bounce some more ideas around!
Yeah, I know (I mentioned it myself in the post), but realistically there is not much you can do besides upgrading. Unattended upgrades kick in once a day, so you get the security patches ASAP. There are also virtual patches (crowdsec ships one for that CVE), but they might not be very effective.
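For anyone reading along: on Debian-based systems, enabling them boils down to installing the unattended-upgrades package and having something like this in place:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```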
I argue that VPN software is a smaller attack surface, but the problem (CVEs) still exists for everything you expose.
Nice! I didn’t know this. Thanks!
AFAIK SSH has MaxAuthTries and LoginGraceTime, but all they do is terminate the SSH session (i.e., slow bots down at most); they won’t block the IP via the firewall or configuration.
Not sure if there is a recent feature that does the same.
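For reference, those options live in sshd_config and look like this (the values here are just examples; the defaults are 6 tries and 120 seconds):

```
# /etc/ssh/sshd_config
MaxAuthTries 3      # kill the session after 3 failed attempts
LoginGraceTime 30   # drop connections that don't authenticate within 30s
```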
Yes, I have used it in the past and it was annoying…
You can get TLS certs with Let’s Encrypt, but you need to use the HTTP verification method (HTTP-01).
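With certbot, for example, that would be something along these lines (--standalone briefly binds port 80 for the challenge; use --webroot instead if a web server is already running; the domain is a placeholder):

```sh
certbot certonly --standalone --preferred-challenges http -d example.com
```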
Yeah, what I mean is that it’s useless to use ports like 2222; that’s practically the unofficial SSH port! Bots are generally harmless (once you move to key auth), and you get functionally the same result with the automatic IP ban on failed auth, minus the hassle of changing client configurations to your custom port. Anyway, if someone does want cleaner logs, changing the port works :)
Hypervisors also get escape vulnerabilities every now and then. I would say that on a realistic scale of escape difficulty, a good container (whether it uses Docker or something else) is a good security boundary.
If this is not the case, I wonder what the extremes of your scale are.
A good container has very little attack surface: it can have almost no code or tools available, a read-only fs, no user privileges or capabilities whatsoever, and possibly even a syscall filter. Sure, the kernel is shared, but the only alternative is to split things per application (VM-like), and then you move the problem to hypervisors.
In the context of the question asked, I think the gains from reducing the attack surface are completely outweighed by the loss of functionality and the waste of resources.
Completely agree, which is why I do the same.
Additional bonus: proxies that interact with the Docker API directly (I think Caddy can do it too) save you from exposing the services on any host port at all (they are reachable only on the Docker network). So you are far less likely to expose a service’s port by mistake, and there is no need for arbitrary, unique localhost ports.
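As a sketch of the idea using Traefik’s label convention (Caddy’s Docker proxy plugin works similarly; hostnames and network names here are placeholders):

```yaml
# docker-compose.yml (sketch): note there is no `ports:` section at all
services:
  whoami:
    image: traefik/whoami
    networks: [proxy]
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`app.example.com`)
      # the proxy reads these labels via the Docker API and routes
      # to the container over the shared `proxy` network

networks:
  proxy:
    external: true
```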
Since you already run OpenWrt, you can check out https://openwrt.org/docs/guide-user/services/ddns/client
That page has a list of compatible services. If you don’t want to use yet another service (for DNS), you can use a domain registrar with an API (like Porkbun) and find tools online that work with it.
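To give an idea, a DDNS entry on OpenWrt (via the ddns-scripts package) lives in /etc/config/ddns and looks roughly like this; the exact options depend on your provider, so double-check against the wiki page above (all values here are placeholders):

```
config service 'myddns_ipv4'
  option enabled '1'
  option service_name 'example.org'    # one of the supported providers
  option lookup_host 'home.example.org'
  option domain 'home.example.org'
  option username 'your_user'
  option password 'your_token'
  option interface 'wan'
  option ip_source 'network'
  option ip_network 'wan'
```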
Be aware of the risks of hosting your websites publicly from home, and make sure to run them in very isolated environments. Having your VPS compromised is bad, but having your home network compromised is much worse!
Fair question. What I meant is that suggesting that would have made the whole post 10 lines long and not worth writing, so I avoided suggestions that completely change the threat model.
It’s not that it’s useless or that it means avoiding a good security posture (although you might have concerns about a monopoly gatekeeping the internet, privacy concerns about TLS traffic inspection, etc.); on the contrary, it makes everything I have written about here redundant (and provides more, like DDoS protection), as you are outsourcing the security controls.
Yep, I agree, especially looking at all the usernames that get tried. I do the same, and the only risk comes from SSH vulnerabilities. Since nobody would burn an SSH 0-day (priceless) on my server, unattended upgrades solve this problem too, for the most part.
That is basically the essence of this post too! Except crowdsec is used to do what fail2ban does, plus some light form of WAF (without spinning up another machine, which is not strictly needed for a WAF anyway; you can use OWASP ModSecurity-ready proxies).
Thanks! I did mention this briefly, although I belong to the school of “since I am banning IPs that fail authentication a few times anyway, it’s not worth changing the port”. I think it’s a valid move, especially if you ingest logs somewhere, but if you do it, don’t choose 2222! I have added a link to Shodan in the post, which shows that almost everybody who changes the port changes it to 2222!
Yes, pretty much that. Plus, some configuration might be easier with a dedicated DNS host. But the main benefit is decoupling the domain from the DNS, so that changing either one is easier.
Been there…
I thought my API keys had expired, so I regenerated them, changed a couple of things, and checked all the API calls to see if they had changed the API itself… then I searched for the exact error and found out.
For such a breaking change to the API, would it have been that hard to drop an email to every account not meeting the damn “requirements” that had performed an API call in the last x months, to alert them of the change?
Yep, in fact I like Bunny. It didn’t have all the features I needed back then, but it’s a very good product; I have heard very good things.
I also agree about the pricing. I ended up not using desec.io, but if I had, I would probably have set up a recurring donation of 1-2 euros, as I feel that’s a totally acceptable price.
As for why people use GoDaddy, well… I feel personally attacked, as that’s exactly how I ended up there, back when I didn’t know better.
The biggest items on the graph are all out-of-bounds accesses, use-after-frees, and overflows. It is undeniable that memory-safe languages help reduce vulnerabilities; we have known for decades that memory corruption vulnerabilities are both the most common and the most severe ones in programs written in memory-unsafe languages.
Unsafe Rust also doesn’t turn off every safety feature, and it’s much better to have clearly highlighted, isolated sections of unsafe code, which can be reviewed and tested more easily, than to have everything suffer from those problems.
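A toy example of what I mean, with the unsafe part confined behind a safe wrapper (just an illustration I made up, not from the article):

```rust
/// Safe wrapper: callers cannot trigger UB through this function,
/// and an audit only has to look at the one `unsafe` block below.
fn get_or_zero(data: &[u32], idx: usize) -> u32 {
    if idx < data.len() {
        // SAFETY: `idx` was bounds-checked just above.
        unsafe { *data.get_unchecked(idx) }
    } else {
        0
    }
}

fn main() {
    let v = vec![10, 20, 30];
    assert_eq!(get_or_zero(&v, 1), 20);
    assert_eq!(get_or_zero(&v, 99), 0); // out of bounds, but still safe
}
```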
I don’t think there is a debate here. Rewriting is a huge effort, but it is simply a fact that C is prone to memory corruption vulnerabilities and that memory-safe languages are better in that regard.