AI-generated code is shipping to production without security review. The tools that generate the code don’t audit it. The developers using the tools often lack the security knowledge to catch what the models miss. This is a growing blind spot in the software supply chain.

This really shouldn’t be news to anyone. It’s been like this since day one of vibe coding. It’s all exploitable, none of it scales, and the “vibe coders” have zero clue how any of it works once it comes out the other end of the AI. None of them. And anyone who tells you otherwise is lying.
It’s not a “growing blind spot”; it’s a blind spot that has always been there. And it happens at every company, even huge ones like Amazon. Look what happened with the AWS outages. Hell, you can go on YouTube, watch people who work at Amazon, and quickly realize these kids have no idea what they’re doing. I’ve followed one guy for the past year who documents his on-call shifts at Amazon, and this kid hasn’t learned a single thing. He doesn’t know what he’s doing, but he’ll proudly tout how Amazon “helps” those who are laid off. He still gets paged at 1 a.m., has no clue how to fix anything, and just hands it off to another team in the morning. He’s been doing this for over a year!
So of course this stuff is going to go unchecked, because the people who are supposed to monitor it don’t know what they’re doing either.