Most likely, a Hetzner storage box is going to be so slow you will regret it. I would just bite the bullet and upgrade the storage on Contabo.
Storage in the cloud is expensive, there’s just no way around it.
There was a good blog post about the real cost of storage, but I can’t find it now.
The gist was that to store 1TB of data somewhat reliably, you probably need at least the live copy plus a local backup and an off-site backup, each on mirrored disks.
Which amounts to something like 6TB of disk for 1TB of actual data. In real life you’d probably use some other RAID level, at least for larger amounts, so it’s perhaps not as harsh, and compression can reduce the required backup space too.
I have around 130G of data in Nextcloud, and the off-site borg repo for it is about 180G. Then there are local backups on a mirrored HDD; with the ZFS snapshots that haven’t been pruned yet, that’s maybe 200G of raw disk space. So 130G becomes 510G in my setup.
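As a back-of-the-envelope check (all figures are rough, and reading the 6TB as 2 mirrored disks per copy is my own assumption):

```
# rough tally of the numbers above (approximations only)
echo $(( 1 * 2 * 3 ))          # 1 TB x 2 mirrored disks x 3 copies (live, local backup, off-site) = 6 TB
echo $(( 130 + 180 + 200 ))    # my setup: live data + off-site borg repo + local mirror/snapshots = 510 GB
```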
Imagine if all the people who prefer systemd wrote posts like this as often as the opposition does. Just use what you like; there are plenty of distros to choose from.
At this stage I’ll probably just mirror my stuff from GH. I have a feeling they’ll be doing something stupid soon, forcing people to look for alternatives.
Would be nice to collaborate with others, but getting started is hard when you don’t have enough free time.
It seems Gitea has basic CI + package registries now; that will be plenty for my needs.
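Mirroring out of GitHub into something self-hosted is simple enough to script; a rough sketch where the URLs are placeholders:

```
# one-time mirror of a GitHub repo to a self-hosted remote (URLs are placeholders)
git clone --mirror https://github.com/someuser/somerepo.git
cd somerepo.git
git push --mirror https://gitea.example.com/someuser/somerepo.git
# to keep it in sync, re-run periodically (e.g. from cron):
# git remote update --prune && git push --mirror https://gitea.example.com/someuser/somerepo.git
```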
Nextcloud, Synapse + bridges, Adguard Home, Uptime Kuma, Home Assistant. Thinking about spinning up Gitea, Forgejo or Gitlab again.
I have a feeling you are overthinking the Matrix key system.
Basically it’s just another password, only one you probably can’t remember.
Most of the client apps support verifying a new session by scanning a QR code or by comparing emoji. The UX of these could be better (I can never find the emoji option on Element, but it’s there…). So if you have your phone signed in, just verify the sessions with that. And it’s not like most people sign in on new devices all the time.
I’d give Matrix a new look if I were you.
Wireguard runs over UDP, and the port is indistinguishable from closed ports for most common port scanning bots. Changing the port will obfuscate the traffic a bit. Even if someone manages to guess the port, they’ll still need to use the right key; otherwise they get the same response as from a closed port - none at all. Your ISP can still see that it’s Wireguard traffic if they happen to be looking, but they can’t decipher the contents.
I would drop containers from the equation and just run Wireguard on the host. When issues arise, you’ll have a hard time identifying the problem when container networking is in the mix.
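A minimal sketch of Wireguard straight on the host (Debian-style package name; keys, addresses and the port are placeholders):

```
# WireGuard on the host, no container networking involved
# (package name is Debian-style; keys, addresses and port are placeholders)
apt install wireguard
wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub

cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.0.0.1/24
ListenPort = 51821            # any UDP port works; a non-default one just draws less scanner noise
PrivateKey = <contents of server.key>

[Peer]
PublicKey = <peer public key>
AllowedIPs = 10.0.0.2/32
EOF

systemctl enable --now wg-quick@wg0
```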
You install the Google services and Play store from the gOS Apps application, then use them like normal.
Behind the scenes they run in the sandboxed environment, but to the user it makes no difference.
Run resolvectl flush-caches just in case. Look at resolvectl dns to check that there are no DHCP-acquired DNS servers set anymore.
If you use a VPN, those often set their own DNS servers too; remember to check that as well.
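A quick way to check all of that, assuming systemd-resolved is in use (the link name is just an example):

```
# quick sanity check with systemd-resolved (link name is an example)
resolvectl flush-caches        # drop cached answers so stale records don't confuse testing
resolvectl dns                 # lists DNS servers per link; no old DHCP-provided entries should remain
resolvectl status wg0          # VPN links often push their own DNS, check those separately
```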
I run GrapheneOS too. Fortunately there are so few issues that I can just focus on using it, no need to engage the community around it.
Protonmail, but not really because of encryption. I just liked their Android client and webmail the most. I’ve had sensitive backups on Proton Drive for a long time, so that also played a role in the choice.
I hosted my own server for quite a few years, but the mail clients (Thunderbird, Evolution, K9 mail) all doing things slightly differently made me give up. The biggest push was that K9 mail didn’t really move deleted mail to trash. These were probably dovecot configuration issues, but I got tired of searching for solutions. Never had any deliverability issues.
I used to run everything with Pis, but then got an x86 USFF to improve Nextcloud performance.
With the energy price madness last year in Europe, I moved most things to cloud VPSs.
One Pi is still running Home Assistant, hooked to my heating/ventilation unit via RS485/modbus.
I had a ZFS backup server with 2 HDDs hooked up over USB to a Pi 8GB. That is just way too unreliable for anything serious; I think I now have a lot of corrupted files in the backups. Looking into getting some Synology unit for that.
For anything serious that requires file storage, I’d steer clear of USB or SD cards. After getting used to SATA performance, it’s hard to go back anyway. I’d really like to use the Pis, but family photo backups turning gray due to bitflips is unacceptable.
They are a great entrypoint to self-hosting and the Linux world though!
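If you suspect that kind of silent corruption on a ZFS pool, a scrub will surface it; a minimal sketch, assuming the pool is named backup:

```
# verify a ZFS pool against its checksums (pool name "backup" is an assumption)
zpool scrub backup             # reads every block and validates it against its checksum
zpool status -v backup         # after the scrub: error counters plus any files ZFS knows are damaged
```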
Perhaps I misunderstand the words “overlapping” and “hot-swappable” in this case; I’m not a native English speaker. To my knowledge they’re not the same thing.
In my opinion wanting to run an extra service as root to be able to e.g. serve a webapp on an unprivileged port is just strange. But I’ve been using Podman for quite some time. Using Docker after Podman is a real pain, I’ll give you that.
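For comparison, the rootless version of that everyday case; a sketch where the image and ports are just examples:

```
# rootless Podman serving a web app on an unprivileged (>1024) port; image and ports are examples
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine
# and if you really want a port below 1024 without root, one option is lowering the threshold:
# sudo sysctl net.ipv4.ip_unprivileged_port_start=80
```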
on surface they may look like they are overlapping solutions to the untrained eye.
You’ll need to elaborate on this, since AFAIK Podman is literally meant as a replacement for Docker. My untrained eye can’t see what your trained eye can see under the surface.
In my limited experience, when Podman seems more complicated than Docker, it’s because the Docker daemon runs as root and can by default do stuff Podman can’t without explicitly giving it permission to do so.
99% of the stuff self-hosters run on regular rootful Docker can run with no issues using rootless Podman.
Rootless Docker is an option, but my understanding is most people don’t bother with it. Whereas with Podman it’s the default.
Docker is good, Podman is good. It’s like comparing distros, different tools for roughly the same job.
Pods are a really powerful feature though.
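A small illustration of why: containers in the same pod share a network namespace, so they talk to each other over localhost. The names and images below are just examples.

```
# containers in a pod share a network namespace, so the app reaches the DB on localhost
# (pod name, images and ports are examples; Nextcloud would still need its DB env vars set)
podman pod create --name cloud -p 8080:80
podman run -d --pod cloud --name db  -e MARIADB_ROOT_PASSWORD=change-me docker.io/library/mariadb:10
podman run -d --pod cloud --name app docker.io/library/nextcloud:apache
```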
The article is old, yes; it was the first result from a search engine. If you have a source saying it’s not in the works anymore, I’d be glad to see it. Not saying you’re wrong.
Just this month there was a statement from FiCom (a Finnish organization advancing IT businesses’ interests) urging our government not to accept the bill, so to me it seems it’s still very much in the works.
Coming soon to EU, probably.
This is true: with a couple of gigs of RAM and SATA storage, Nextcloud is not at all bad, assuming an instance without that many simultaneous users.
It feels slow sometimes; then after an hour with M365 at work it doesn’t feel slow at all.
Even though you said “isn’t Nextcloud”, I’d still say it’s perhaps the simplest solution.
You can disable most of the other apps and set the calendar as the landing page. If you don’t use the other features, the resource usage is very low, just a cron job that does basically nothing. I don’t think disabling the default apps has much effect on the footprint, by the way.
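A sketch of how that looks with occ (the app names are examples, and the defaultapp config value is the older way of setting the landing page; newer Nextcloud releases expose this in the settings UI instead):

```
# trim Nextcloud down to mostly-calendar duty
# (run from the Nextcloud directory as the web server user; app names are examples)
sudo -u www-data php occ app:disable photos dashboard activity
sudo -u www-data php occ app:enable calendar
sudo -u www-data php occ config:system:set defaultapp --value "calendar"
```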
Calendar, contacts and notes are why I still self host nextcloud. Just remember to pay/donate to Davx5, they’re one of the projects that need to keep running!
Portability is the key for me, because I tend to switch things around a lot. Containers generally isolate the persistent data from the runtime really well.
Docker is not the only, or even the best, way IMO to run containers. If I was providing services for customers, I would definitely build most container images daily in some automated way. Well, I do it already for quite a few.
The mess is only a mess if you don’t really understand what you’re doing, same goes for traditional services.
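For the daily automated builds mentioned above, a minimal flavor of it; the registry, image name and paths are placeholders:

```
#!/bin/sh
# /etc/cron.daily/rebuild-images - daily automated image rebuild
# (registry, image name and build context path are placeholders)
set -eu
podman build --pull=always -t registry.example.com/myapp:latest /srv/build/myapp
podman push registry.example.com/myapp:latest
```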