AI-generated code is shipping to production without security review. The tools that generate the code don’t audit it. The developers using the tools often lack the security knowledge to catch what the models miss. This is a growing blind spot in the software supply chain.



The article says:
but I’ve seen exactly this. After years of not seeing any SQL injection vulnerabilities (thanks to the large increase in ORM usage, plus the fact that pretty much every query library now supports or defaults to prepared statements), I caught one while reviewing vibe-coded code written by someone else.

Forget SQL injection and XSS; LLMs are bringing back unsanitised inputs as a whole, including reintroducing previously removed vulnerabilities. You can casually browse GitHub for submissions by Claude bot and find …/… vulns all over.
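The injection class the commenters describe is the classic string-built query. A minimal sketch of both the vulnerable and the safe pattern, using Python's sqlite3 with a hypothetical users table and payload (not taken from the code under review):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# Attacker-controlled input crafted to escape the string literal.
payload = "' OR '1'='1"

# Vulnerable pattern: interpolating input into SQL lets the payload
# rewrite the WHERE clause, so every row matches.
query = f"SELECT * FROM users WHERE name = '{payload}'"
print(conn.execute(query).fetchall())  # returns all rows

# Safe pattern: a prepared statement binds the input as data only,
# so the payload is just an odd-looking name that matches nothing.
print(conn.execute("SELECT * FROM users WHERE name = ?", (payload,)).fetchall())  # []
```

This is exactly the distinction prepared statements and ORMs made routine, and which string-concatenated LLM output quietly undoes.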
“Someone else” “writes” vibe-coded code in the same way that someone buying a meal at a restaurant cooks said meal.
Haha good point - maybe “generated by” is a better description?