If you can fuck up a database in prod you have a systems problem caused by your boss. Getting fired for that shit would be a blessing because that company sucks ass.
What if you’re the one who was in charge of adding safeguards?
Never fire someone who fucked up (again: it isn’t their fault anyway). They know more about the system than anyone. They can help fix it.
This is the way usually but some people just don’t learn from their mistakes…
If you are adding guardrails to production… it’s the same story.
Boss should purchase enough equipment to have a staging environment. Don’t touch prod: redeploy everything on a secondary with the new guardrails, do a read-only export from prod, and cut over services to the secondary when complete.
Sorry, not in budget for this year. Do it in prod and write up the cap-ex proposal for next year.
Yeah, right? Offset by the cortisol of developers.
Small companies often allow devs access to prod DBs. It doesn’t change the fact that it’s a catastrophically stupid decision, but you often can’t do anything about it.
And of course, when they inevitably fuck up the blame will be on the IT team for not implementing necessary restrictions.
Frequent snapshots ftmfw.
I always run my queries in a script that will automatically roll back if the number of rows changed isn’t one. If I have to change multiple rows, I should probably ask myself what I’m doing.
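The pattern above can be sketched in a few lines. This is a minimal illustration using an in-memory SQLite database; the table, column names, and helper function are all invented for the example.

```python
import sqlite3

# Hypothetical setup: a tiny users table to run the guarded update against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "a@example.com"), (2, "b@example.com")])
conn.commit()

def guarded_update(conn, sql, params):
    """Run an UPDATE/DELETE in a transaction; roll back unless exactly 1 row changed."""
    cur = conn.execute(sql, params)
    if cur.rowcount != 1:
        conn.rollback()
        raise RuntimeError(
            f"expected 1 row, would have changed {cur.rowcount}; rolled back")
    conn.commit()

# A WHERE clause that's too broad (or missing) trips the guard:
try:
    guarded_update(conn, "UPDATE users SET email = ?", ("oops@example.com",))
except RuntimeError as e:
    print(e)

# Nothing was committed, the original data is intact:
print(conn.execute("SELECT email FROM users WHERE id = 1").fetchone()[0])
```

Python’s `sqlite3` opens an implicit transaction on DML by default, so the `rollback()` genuinely undoes the statement; on other drivers you may need to disable autocommit first.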
Damn, that’s a good idea. Going to write that down, put it in the to-do list, and regret not doing it.
I always start a session by disabling autocommit. (Note: I could add it to my settings, but that would backfire the one time my settings don’t execute, so I make it a habit to type it out first thing every time I connect.)
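Here’s a sketch of why the autocommit-off habit saves you: with explicit transactions, a mistaken statement can still be taken back before anyone else sees it. The table and values below are made up; SQLite’s Python driver already behaves this way by default, while on a real server you’d type something like `SET autocommit = 0;` (MySQL) or `\set AUTOCOMMIT off` (psql) first.

```python
import sqlite3

# Invented example table for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()

# The "oh no" statement: wipes every balance, but nothing is committed yet.
conn.execute("UPDATE accounts SET balance = 0")

# The onosecond arrives; because autocommit is off, we can simply take it back.
conn.rollback()
print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])
```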
BTW: what kind of genius decides that autocommit should be enabled by default?
That’s a good idea too. I’ll have to look into that.
Or at least run it in the test database first.
Or run your updates/deletes as select first.
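The select-first habit amounts to writing the predicate once and reusing it verbatim. A minimal sketch, with an invented table and WHERE clause:

```python
import sqlite3

# Hypothetical table of sessions, some expired.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id INTEGER PRIMARY KEY, expired INTEGER)")
conn.executemany("INSERT INTO sessions VALUES (?, ?)", [(1, 1), (2, 0), (3, 1)])
conn.commit()

where = "expired = 1"  # write the predicate once, reuse it verbatim

# Step 1: SELECT with the exact WHERE you intend to DELETE with.
hit = conn.execute(f"SELECT COUNT(*) FROM sessions WHERE {where}").fetchone()[0]
print(f"would delete {hit} rows")

# Step 2: only if the count matches what you expected, swap in the DELETE.
if hit == 2:
    conn.execute(f"DELETE FROM sessions WHERE {where}")
    conn.commit()
```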
Don’t you people have a development environment?
The P in Prod stands for “It’ll be Pfine”
The letter you want after the P is an H.
Phrod?
Phear
There’s that old saying ‘everyone has a development environment. Some people are lucky enough to have a separate production environment, too’
I get you’re making a meme, but I’ve never worked anywhere that only has one environment in the last 10 years.
If he had recognized his typo with the space after the D:\ in his restore command, he could have been saved at the bargaining stage. I’m so glad I don’t work with this stuff anymore.
The famous onosecond
A few months back I crashed a db in prod. I detached it and when I tried to reattach it simply refused, saying it was corrupted or some shit.
Lucky me we have a backup solution.
Unfortunately it was being upgraded, with difficulties.
That was a long day.
Yeah, don’t hire that guy.
Makes me think of what happened to gitlab
I have several times insisted that a migration be done via an ad hoc endpoint, because I’m a jerk, but also because it’s much easier to test that way, and no one has to yolo connect directly to prod.
Endpoint? Why the fuck is a migration using an endpoint, if you want testability a script will do just fine
Because I didn’t want someone to yolo connect to production, and we don’t have infrastructure in place for running arbitrary scripts against production. An HTTP endpoint takes very little time to write, and lets you take advantage of ci/cd/test infrastructure that’s already in place.
This was for a larger more complicated change. Smaller ones can go in as regular data migrations in source control, but those still go through code review and get deployed to dev before going out.
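A sketch of the idea, using only the standard library. Everything here is invented for illustration (the endpoint path, the table, the migration itself); in practice this would sit behind your normal deploy pipeline and auth middleware, and the migration function is a plain function your CI can unit-test.

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "prod" database; check_same_thread=False because the
# HTTP server handles requests on its own thread.
DB = sqlite3.connect(":memory:", check_same_thread=False)
DB.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
DB.execute("INSERT INTO users VALUES (1, 'A@Example.COM')")
DB.commit()

def migrate(conn):
    """The actual migration (normalize emails). Testable in isolation."""
    cur = conn.execute("UPDATE users SET email = lower(email)")
    return cur.rowcount

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/admin/migrations/lowercase-emails":
            self.send_error(404)
            return
        try:
            changed = migrate(DB)
            DB.commit()
            body = json.dumps({"rows_changed": changed}).encode()
            self.send_response(200)
        except Exception:
            DB.rollback()  # nothing half-applied on failure
            body = b'{"error": "rolled back"}'
            self.send_response(500)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep request logging quiet
        pass
```

The endpoint gets hit once through whatever access control already fronts the service, and can be deleted in the next release.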
You’re definitely overcomplicating it and adding unnecessary bloat/tech debt. But if it works for you, it works.