AI-generated code is shipping to production without security review. The tools that generate the code don’t audit it. The developers using the tools often lack the security knowledge to catch what the models miss. This is a growing blind spot in the software supply chain.

  • xylogx@lemmy.world · 3 points · 2 hours ago

    As a security professional, it amuses me that you think non-AI-generated code is manually reviewed for security. Either you are committed to code quality or you are not. If you are, you have automated testing, standard architectural patterns, and vulnerability scanning. Peer reviews are great, but they do not scale and are far from comprehensive.

  • rozodru@piefed.world · 3 points · 3 hours ago

    This really shouldn’t be recent news to anyone. It’s been like this since day one of vibe coding. It’s all exploitable, none of it scales, and the “vibe coders” have zero clue how any of it works when it comes out the other end of the AI. None of them. And anyone who tells you otherwise is lying.

    It’s not a “growing blind spot”; it’s a blind spot that has always been there. And it happens at all companies, even large ones like Amazon. Look what happened with the AWS outages. Hell, you can even go on YouTube and watch people who work at Amazon, and you’ll quickly realize these kids have no idea what the hell they’re doing. I’ve followed one guy for the past year who documents his on-call shifts at Amazon, and this kid hasn’t learned a single thing. He doesn’t know what he’s doing but will proudly tout how Amazon “helps” those who are laid off. The kid still gets tickets at 1am, has no clue how to fix them, and just hands them off to another team in the morning. He’s been doing this for over a year!

    So of course this stuff is going to go unchecked because the ones who are supposed to monitor it don’t know what they’re doing.

  • Blackmist@feddit.uk · 14 points · 7 hours ago

    It’s not that nobody wants to talk about it.

    It’s that nobody wants to listen.

  • CombatWombat@feddit.online · 32 points · 10 hours ago

    Read the diffs. Not all of them.

    How do you write this whole article and come to the conclusion you can merge unread diffs?

    • Not a newt@piefed.ca · 6 points · 6 hours ago

      There are LLM apologists who actually believe that by reading code you become a liability because you get in the way of “shipping faster.” And there’s a whole class of C-levels and their sycophants who just eat that up.

    • dan@upvote.au · 13 points · 10 hours ago

      I hate to say it, but there’s a lot of “vibe coders” that use AI to write their code, then they (or someone else) use AI to review it. No human brains involved.

  • dan@upvote.au · 22 points · edited · 7 hours ago

    The article says:

    None of the tools produced exploitable SQL injection or cross-site scripting

    but I’ve seen exactly this. After years of not seeing any SQL injection vulnerabilities (thanks to the large increase in ORM usage, plus the fact that pretty much every query library now supports or defaults to prepared statements), I caught one while reviewing vibe-coded code written by someone else.
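The difference between the two query styles the comment relies on is easy to sketch. A minimal illustration using Python’s built-in sqlite3, with a hypothetical users table (table and column names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"

# Injectable: user input is concatenated straight into the SQL text,
# so the OR clause becomes part of the query and matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Parameterized: the driver binds the input as data, never as SQL,
# so the literal string "' OR '1'='1" matches no username.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('alice',)] -- injection succeeded
print(safe)    # [] -- input treated as a plain string
```

This is also why ORMs and prepared statements made the bug rare: the safe form is the default, and the injectable form has to be written by hand (or generated).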

    • Not a newt@piefed.ca · 4 points · 6 hours ago

      Forget SQL injection and XSS, LLMs are bringing back unsanitised inputs as a whole, including reintroducing previously removed vulnerabilities. You can casually browse GitHub for submissions by Claude bot and find ../.. path traversal vulns all over.
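The ../.. pattern mentioned here is classic path traversal. A minimal sketch in Python, with a hypothetical uploads directory (real code should rely on a vetted framework helper rather than hand-rolled checks):

```python
import os

BASE_DIR = "/var/app/uploads"  # hypothetical directory being served

def unsafe_path(filename: str) -> str:
    # Vulnerable: "../" sequences in filename walk out of BASE_DIR,
    # e.g. "../../etc/passwd" resolves to a file outside the directory.
    return os.path.join(BASE_DIR, filename)

def safe_path(filename: str) -> str:
    # Resolve the candidate path, then verify it still lives under
    # the resolved BASE_DIR before touching the filesystem.
    path = os.path.realpath(os.path.join(BASE_DIR, filename))
    if not path.startswith(os.path.realpath(BASE_DIR) + os.sep):
        raise ValueError("path traversal attempt: " + filename)
    return path

print(unsafe_path("../../etc/passwd"))  # escapes the uploads directory

try:
    safe_path("../../etc/passwd")
except ValueError as err:
    print(err)  # rejected before any file access
```

Resolving both sides with realpath matters: comparing unresolved strings can be bypassed with symlinks or redundant separators.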

    • ugo@feddit.it · 8 points · 8 hours ago

      vibe-coded code written by someone else

      “Someone else” “writes” vibe-coded code in the same way that someone buying a meal at a restaurant cooks said meal.

      • dan@upvote.au · 2 points · 7 hours ago

        Haha good point - maybe “generated by” is a better description?

  • floofloof@lemmy.ca · 12 points · edited · 10 hours ago

    Nobody who’s into vibe coding wants to talk about it. The sane people, on the other hand, are already well aware.

  • CameronDev@programming.dev · 9 points · edited · 10 hours ago

    While human developers bring intuitive understanding

    Well… some do.

    Jokes aside, I don’t think this is an undiscussed topic, and ultimately the solution is the same as it has always been: project culture. Project leaders need to insist that code is responsibly written and reviewed, and to make that part of the team culture. AI doesn’t change that.

  • rizzothesmall@sh.itjust.works · 1 point · 7 hours ago

    HITL (human in the loop).

    AI augmented > AI generated.

    Human review with AI co-review > AI-generated review.

    Human-arranged, AI-augmented documentation > AI documentation, which always seems to believe that the most innocuous comment spelling correction is the most important change…

    If you completely remove humans from the development cycle then you don’t know what’s in your codebase anymore.