Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch… more like the moment you realize the water has been rising around you and is now at your chest.
I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.


I am a heavy user of AI tools; I have a Claude Code Max x20 subscription, and I basically do not write any code myself anymore, I only direct CC to do so. This article is BS. Claude Code is a nice tool, and it makes tedious work enjoyable (refactoring, searching for files, understanding legacy code, etc.). But quite often it is incredibly incompetent and needs adjustment and guidance. It does stuff in some way, and it might even work, but the code is a mess; the architecture might be alright, or it might be complete chaos. I have never been able to let it implement a feature on its own, and it sometimes fucks up single-method implementations.
Yes, it is quite a bit better than a year ago (Opus, that is; Sonnet is meh, and how people use Codex is a mystery to me, that thing is terrible). I do not deny the progress, but articles like this one are fearmongering at best. These are tools that can help you quite a bit, but they are not, in any way, at the level described in this blog post.