I know, I know, clickbaity title, but in a way it did. It also brought about the situation in the first place, but I'm just going to deliberately ignore that. Quick recap:
- I came home from the city at 3pm and my internet at home didn't work.
- Checked multiple devices; the phones worked off wifi, so I figured I needed to restart the router.
- I log in to the router and it responds totally normally, but my local network doesn't. (It's always DNS, I know.)
- I check the router log and see 100s of login attempts over the past couple of days.
- I panic and pull the plug, then try to get into my server by hooking up an old monitor. That works, but there are lots of errors about DNS.
- My wife googles on her phone; it seems I had HTTPS login from outside enabled and someone found the correct port. It's disabled now.
- Obviously the local network is still down, so I plug everything back in and SSH into the server, which runs Pi-hole as DNS.
- Pi-hole won't start DNS, whatever I do.
- I check my shell history and find I had "chmod 700"ed the dnsmasq directory instead of putting it in a docker volume…
- I check pihole.log: nothing.
- I check the FTL log: there is the issue.
- I return the directory to 777 and everything is hunky-dory again (rough commands below).
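Roughly what that looked like, from memory. The container name and paths here are placeholders for my setup, and log locations differ between Pi-hole versions (and between host and container):
```sh
# placeholders: container "pihole", config bind-mounted at ~/pihole/etc-dnsmasq.d
history | grep chmod                                   # how I found the earlier chmod 700
docker logs pihole 2>&1 | tail -n 50                   # pihole.log showed nothing useful
docker exec pihole tail -n 50 /var/log/pihole-FTL.log  # the FTL log had the actual error
chmod 777 ~/pihole/etc-dnsmasq.d                       # got DNS running again (not a good permanent fix)
```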
Now I feel very stupid, but I found a very dangerous mistake because my LAN failed due to a less dangerous one, so I'll take this as a win.
Thanks for reading and have a good day! I hope this helps someone someday.
Did you expose your router login page to the open internet? How’d they get access? Why are you chmoding anything to be 777?
There was an option that I had enabled years before and forgotten, so yes: I didn't know, but it was exposed, on some obscure port.
And yes, Pi-hole in docker makes its files 777, which is pretty disgusting, I know. That's why I tried to make it 700 and broke my whole network.
Doubt. You probably need to set the file owners in your volume to the same user running in the container.
You can doubt all you want. I changed it from 777 to 700 and back again because it broke. I couldn't find the user in the container immediately. I will probably just migrate it to a volume and be done with it.
So we’ve poked a hole in your knowledge here unless this super popular open source software really requires 777 on those files and everyone has collectively just been ok with it.
I think you are still learning… What you say doesn’t make sense, so I think you may have misunderstood what happened.
Imo we are all constantly learning; otherwise we stagnate. What I say makes perfect sense, you just don't get it. So let me explain it again, in more detail:
I was going through my docker compose files to sanitize them and upload them to my private forgejo instance.
While doing that I found a directory in my filesystem, a remnant of the early days of my server when my knowledge was far more limited: a docker volume mapped to a regular directory (i.e. a bind mount), something I wouldn't do today for something like this.
It was owned by root:root and had 777 permissions, which is a bad idea imo. So I changed it to 700, since I don't think I had any other users in the root group, and as for "others", well.
Nothing bad happened until today, when my unattended backups triggered a restart at noon and the tragedy started. For now I have put it back to 777, but I'll try to migrate it into a real docker volume that lives in docker's own directories.
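Roughly what I have in mind, as a sketch; the volume name and the old bind-mount path are placeholders:
```sh
docker compose down                           # stop the stack first
docker volume create pihole_dnsmasq
# one-off helper container to copy the old config into the named volume
docker run --rm \
  -v ~/pihole/etc-dnsmasq.d:/old:ro \
  -v pihole_dnsmasq:/new \
  alpine cp -a /old/. /new/
# then point the compose file at "pihole_dnsmasq:/etc/dnsmasq.d" and bring it back up
docker compose up -d
```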
Well I’m running Pihole in docker and don’t have 777 on anything.
Good for you. What permissions do you have on /etc/dnsmasq.d, if I may ask?
I don't run Pi-hole, but quickly peeking into the container (`docker run -it --rm --entrypoint /bin/sh pihole/pihole:latest`), the folder and files belong to root, with the permissions being 755 for the folder and 644 for the files. `chmod 700` most likely killed Pi-hole because a service that is not running as root will be accessing those config files and you removed their read access.
Also, I'm with the guys above. Never `chmod 777` anything, period. In 99.9% of cases there's a better way.
Thanks for checking that. I will change the permissions accordingly and restart Pi-hole to check if it works. Probably later today.
Wipe and start fresh
Are you joking? Why would I start fresh?
Because you don't have a way to know what's been compromised. Take only your data, and make sure to verify nothing's been tampered with.
Trust me, it will be better in the long run.
Yeah, I don't feel like setting up a whole cloud infrastructure on a hunch. I'm running like 15 different services and they are all compartmentalized. It would take weeks to reset all this. So far nobody got anywhere, from what I can see.
One word of advice: document the steps you take to deploy things. If your hardware fails or you make a simple mistake, it will cost you weeks of work to recover. This is a bit extreme, but I take my time when setting things up and automate as much as possible using ansible. You don't have to do this, but the ability to just scrap things and redeploy gives great peace of mind.
And right now you are reluctant to do this because it’s gonna cost you too much time. This should not be the case. I mean, just imagine things going wrong in a year or two and you can’t remember most things you know now. Document your setup and write a few scripts. It’s a good start.
I get your point. Ansible is quite interesting too. I do document most of the things I do, but I have to admit I have been slacking a bit recently. There is just so much stuff that needs doing, and so many interesting projects to learn about, that sometimes stuff gets forgotten.
My personal impression of the Linux space is still that folks get dumped on by the community for not being immersed in the nitty-gritty, though.
That's neither fun nor a way to get more people interested in Linux. People make mistakes; learn to help without judging.
Have a good one.
I know what you mean. Most people mean well, some are a bit too aggressive, but probably also mean well. I honestly sometimes roll my eyes when I start reading about tailscale, cloudflare tunnels etc. The main thing is not to expose anything you don’t absolutely need to expose.
For access from the outside, the most you should need is a random high port forwarded for SSH into a dedicated host (which can be a VM or container if you don't have a spare Raspberry Pi), and WireGuard on a host that updates the server package regularly; so probably not on your router, unless the vendor is on top of things.
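One quick way to sanity-check what's actually reachable is to scan yourself from a connection that isn't your LAN; the hostname here is a placeholder for your public address:
```sh
nmap -Pn -p- your.dyndns.example.org   # full TCP sweep from outside; slow but thorough
ss -tlnpu                              # and on each host, see what is actually listening
```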
Regarding ansible and documenting, I totally get your point. Ten years ago I was an absolute Linux noob and my flatmate had to set up an IRC bouncer on my RPi. It ran like that for a few years and I dared not touch anything. Then the SD card died and took down the bouncer, dynDNS and a few other things running on it.
It takes me a lot of time to write and test my ansible playbooks and custom roles, but every now and then I have to move services between hosts. And this is an absolute life saver. Whenever I’m really low on time and need to get something up and running, I write down things in a readme in my infra repository and occasionally I would go through my backlog when I have nothing better to do.
Thanks for elaborating! This is very helpful and I appreciate it. Will definitely check out ansible.
I think I'm probably on my way there anyway, as I'm setting up my own git forge and starting to use proper versioning.
Then I'll probably try out ansible on some VM or new device. Have a good one!
Just be careful
Wow, a lot of people would set up a new server because of intrusion attempts in a log, I guess. If I did that in a job I'd get fired for doing nothing but resetting everything every week.
As an admin, you have to keep the CTO from using "master" or "admin" as the SSH password on a production server. Just so you know what level of stupidity makes the big bucks out there.
As an admin I’d question why the CTO has a login on a production server.
You would do well to listen more when you ask for advice.
For Chiefly reasons of course. Now whether or not that server is active in the cluster is another matter entirely, but hey if it makes him/her feel important /shrug
You would do well to keep your condescension to yourself. Blocked.
You’re saying you see a bunch of login attempts on your router, but you don’t think they actually got into it?
If you have everything on docker compose, migrating to another host is pretty easy. I could probably migrate my 11 stacks of 36 containers in 2 to 3 hrs.
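Roughly along these lines; host names, paths and volume names here are just placeholders:
```sh
rsync -a ~/stacks/ user@newhost:~/stacks/      # compose files plus any bind-mounted data
# named volumes have to be exported separately, e.g. through a helper container:
docker run --rm -v pihole_dnsmasq:/data -v "$PWD":/backup alpine \
  tar czf /backup/pihole_dnsmasq.tgz -C /data .
# copy the tarball over, untar it into a freshly created volume on the new host,
# then run `docker compose up -d` per stack
```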
Why would it take 2 to 3 hrs? Download time of container images?
Figure ~45 minutes to run to the liquor store for a decent single malt, another ~25 minutes for the pizza rolls, quick power nap, wake up and redeploy. That’s about 2 hours.
Pretty much this. There's a lot of padding in those numbers, or waiting for some manual things to install, etc.
If everything goes well, I could probably do that too. But I've had too many obscure little things happen that 10x the amount of time needed, so I always plan for the worst case.
Also, my point was that people are massively overreacting because my logs showed signs of attacks, not of intrusion.
I run many servers, and with the commercial ones I am much slower and more careful. Every public-facing service has attacks in its logs, and I deal with them. Whatever experience you guys have, it's not with hosting public services.
The arrogance with which people suggest someone is incompetent is baffling. Not talking about you, but quite a number of comments were condescending af.
Thanks for the advice with ansible. I might actually give this a go.
Why would I start fresh?
If they had access to a machine, the first thing an attacker does is install some kind of rootkit so they can get access again later. This could be as small as modifying an existing binary to do things it isn't supposed to do.
If they didn't access any machine, you're fine.
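If you want at least a rough check for that kind of tampering on a Debian-based host, something like this works as a first pass (with the caveat that a competent rootkit can hide from package checksum checks too):
```sh
sudo apt install debsums
sudo debsums -c     # lists installed files whose checksums no longer match their package
# RPM-based equivalent: sudo rpm -Va
```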
Thanks for elaborating. I appreciate it. To my knowledge, nobody had access to the network or the machine.
777? Bruh just set the owner?
The owner was root and still is. I changed it from 777 to 700, which broke everything. Sorry if that wasn't clear. I will switch to a docker volume to avoid having this crap in my home folder in the future.
I’m referring you to my quick “self-hosting guide”: https://lemmy.world/comment/7126969
That's awesome! Thanks! Bookmarked!
Saved
One of my fears about starting up my homelab.
All you have to do to avoid this is not open any ports except one for something like WireGuard, only access your network through it externally, and you will never have this problem.
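On the server side that boils down to something like this sketch; the interface name, subnet, port and keys are all placeholders:
```sh
umask 077
wg genkey | tee server.key | wg pubkey > server.pub

cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <contents of server.key>

[Peer]
PublicKey = <the client's public key>
AllowedIPs = 10.8.0.2/32
EOF

wg-quick up wg0   # then forward UDP 51820 on the router to this host, and nothing else
```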
Exactly. It wasn't on purpose either. I thought there was an additional layer of security, gullible as I was 5 years ago. They made it seem like there was.
Gotta have a firewall that closely resembles swiss cheese.
One of my home servers was popped once; they stuck a new MOTD on there to let me know how foolish I was, and I haven't made that mistake since. So… yay, greyhat?
I've adopted a policy of always entering my password wrong the first time.
It started by accident.
Trying to work out why this is a good idea. Please could you explain why?
They can’t swipe your password if it’s wrong
They could of course enter it on the target website and see it’s wrong though, so this only works against the crappiest phishing attempts
Except how are they swiping your password if it's HTTPS? Unless you're being phished, but I don't see how that would help, because they could just get your second password.
This is very smart 😃 never thought about that
That's why I love Tailscale: nothing is open to the internet, all my shit is local LAN inside Tailscale. Even better, I don't have to bother with certificates and a reverse proxy.
A reverse proxy isn't that hard tbh. Btw, I have a VPN and my LAN isn't open to the web. The router vendor made it look like there was an additional layer of security.
Not sure how reverse proxy is avoided this way — do you enter port numbers for your services when you access them, or have one service per machine?
I have a few publicly accessible services, and a bunch of private services, but everything is reverse proxy’d — I find it very convenient, as for example I can go to https://wap.mydomain.net for my access point admin page, or photos.mydomain.net for my Immich instance. I have a reverse proxy on my VPS for public services, and another one on my lan for private services; WireGuard between VPS, LAN, and my personal devices. Possibly have huge security holes of course…
Yep, correct: http://hostname:port for each application, all running on the same host in docker. The only thing is that any device that wants to connect to an app needs the Tailscale client, and it takes over the VPN slot. That's why they offer exit nodes with Mullvad and also DNS privacy resolvers like NextDNS.