LLM-generated passwords (produced directly by the LLM, rather than by an agent calling a tool) appear strong but are fundamentally insecure: LLMs are designed to predict likely tokens, the opposite of securely and uniformly sampling random characters.
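For contrast, here is a minimal sketch of what uniform sampling actually looks like, using Python's standard-library `secrets` module (the function name and length are illustrative choices, not from the article):

```python
import secrets
import string

# Every character is drawn independently from the full alphabet using a
# CSPRNG, so each one contributes log2(len(ALPHABET)) bits of entropy.
# An LLM instead predicts the most *likely* next token, concentrating
# probability mass on a small, guessable set of outputs.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

password = generate_password()
```

With 94 printable-ASCII symbols, a 20-character password drawn this way has roughly 20 × log2(94) ≈ 131 bits of entropy, regardless of whether it "looks" strong.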

    • chicken@lemmy.dbzer0.com · 7 hours ago

      Even where they aren’t, I bet this could end up happening when they’re used as open-ended agents that might try making their own accounts. The article also mentions this:

      Furthermore, with the recent surge in popularity of coding agents and vibe-coding tools, people are increasingly developing software without looking at the code. We’ve seen that these coding agents are prone to using LLM-generated passwords without the developer’s knowledge or choice. When users don’t review the agent actions or the resulting source code, this “vibe-password-generation” is easy to miss.

    • Kogasa@programming.dev · 13 hours ago

      People are using LLMs to diagnose disease, write prescriptions, deny health care claims, deny loans and grants, write scientific papers, review scientific papers, draft engineering and architectural documents, and talk to their loved ones.

      Despair

    • Steve@communick.news · 12 hours ago

      Very well. If you don’t want me to tell you the truth about people using LLMs to make passwords, I won’t.

    • KazuyaDarklight@lemmy.world · 12 hours ago

      Getting away from more direct requests, I can absolutely imagine AI offering passwords/suggestions as part of a coding session, including “temp” passwords that look secure enough that you think, “why bother changing it?”

    • Ephera@lemmy.ml · 11 hours ago

      I imagine it’s a matter of asking it to generate some configuration where one of the fields is a password, so the LLM just auto-completes what a password is most likely to look like.
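The failure mode described above is avoidable: treat any password the LLM scaffolds as a placeholder and overwrite it locally before the config is ever used. A minimal sketch (the config structure and placeholder value are hypothetical, chosen only to illustrate the pattern):

```python
import secrets

# Hypothetical config as an LLM might scaffold it; the "password" value
# here stands in for whatever plausible-looking string the model emitted.
config = {"db": {"user": "app", "password": "Tr0ub4dor&3"}}

# Replace the placeholder with a locally generated secret before use.
# token_urlsafe(24) draws 24 random bytes from a CSPRNG and encodes
# them as a 32-character URL-safe string.
config["db"]["password"] = secrets.token_urlsafe(24)
```

The key design point is that the secret never passes through the model at all, so it cannot be biased toward likely tokens or leak into a chat transcript.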