Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream Python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.

(Since this is a personal blog, I’ll clarify that I am not the author.)

  • leftzero@lemmy.dbzer0.com · 2 days ago

    From what I read it was closed because it was tagged as a “good first issue”, which in that project is specifically stated to be a means of testing new contributors on non-urgent issues that the existing contributors could easily solve, and which specifically prohibits generated code from being used (as that would defeat the whole point).

    The agent completely ignored that; it’s set up to push pull requests and doesn’t have the capability to comprehend context, or anything for that matter. So the pull request was legitimately closed the instant the repository’s administrators realised it was generated code.

    The quality (or lack thereof) of the code never even entered into the question until the bot brought it up. The bot broke the rules, its pull request was closed because of that, and it then went on to attempt a character assassination of the main developer.

    It remains an open question whether it was set up to do that, or, more probably, did it by itself because the Markov chain came up with the wrong token.

    And that’s the main point: unsupervised LLM-driven agents are dangerous, and we should be doing something about that danger.

      • leftzero@lemmy.dbzer0.com · 14 hours ago

        Probably a lot of that in the data the model was trained on.

        Garbage in, garbage out, as they say, especially when the machine is a rather inefficient garbage compactor.

    • Lvxferre [he/him]@mander.xyz · 2 days ago

      Oh fuck. Then it gets even worse (and funnier). Because even if that had been a human contributor, Shambaugh acted 100% correctly, and that defeats the core lie output by the bot.

      If you’ve got a serious collaborative project, you don’t want to enable the participation of people who act based on assumptions, because those people ruin everything they touch with their “but I thought that…” unless you actively fix their mistakes, i.e. more work for you.

      And yet once you construe that bloody bot’s output as human actions, that’s exactly what you get: a human who assumes. A dead weight and a burden.

      > It remains an open question whether it was set up to do that, or, more probably, did it by itself because the Markov chain came up with the wrong token.

      A lot of people would disagree with me here, but IMO they’re the same picture. In either case, the human enabling the bot’s actions should be blamed as if those were their own actions, regardless of their “intentions”.

      • leftzero@lemmy.dbzer0.com · 2 days ago

        > IMO they’re the same picture. In either case, the human enabling the bot’s actions should be blamed as if those were their own actions, regardless of their “intentions”.

        Oh, definitely. It’s 100% the responsibility of the human behind the bot in either case.

        But the second option is scarier, because there are a lot more ignorant idiots than malicious bastards.

        If these unsupervised agents can be dangerous regardless of the intentions of the humans behind them, we should make the idiots using them aware that they’re playing with fire, and that they can get burnt and burn other people in the process.