LLM-generated passwords appear strong, but are fundamentally insecure. Testing across GPT, Claude, and Gemini revealed highly predictable patterns: repeated passwords across runs, skewed character distributions, and dramatically lower entropy than expected. Coding agents compound the problem by sometimes generating and using such passwords without the user’s knowledge. We recommend avoiding LLM-generated passwords and directing both models and coding agents to use secure password generation methods instead.
LLM-generated passwords (generated directly by the LLM, rather than by an agent using a tool) appear strong, but are fundamentally insecure, because LLMs are designed to predict tokens – the opposite of securely and uniformly sampling random characters.
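The secure alternative the article recommends amounts to uniform sampling from a cryptographically secure random source rather than token prediction. A minimal sketch of that idea in Python, using the standard library's `secrets` module (the function name and parameters here are illustrative, not from the article):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Uniformly sample password characters from a CSPRNG.

    Unlike an LLM's token prediction, secrets.choice draws each
    character independently and uniformly, so entropy is simply
    length * log2(len(alphabet)) -- about 131 bits for 20 chars
    over this 94-character alphabet.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

An agent asked to fill a password field in a config should be directed to run something like this (or a tool such as `openssl rand`) instead of emitting a plausible-looking string from its weights.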
Even where they aren’t, I bet this is something that could end up happening when using them as open-ended agents that might try making their own accounts. The article also mentions this:
Furthermore, with the recent surge in popularity of coding agents and vibe-coding tools, people are increasingly developing software without looking at the code. We’ve seen that these coding agents are prone to using LLM-generated passwords without the developer’s knowledge or choice. When users don’t review the agent actions or the resulting source code, this “vibe-password-generation” is easy to miss.
People are using LLMs to diagnose disease, write prescriptions, deny health care claims, deny loans and grants, write scientific papers, review scientific papers, draft engineering and architectural documents, and talk to their loved ones.
Getting away from more direct requests, I can absolutely imagine AI offering passwords/suggestions as part of a coding session. Including “temp” passwords that look secure, so “why bother changing it?”.
I imagine it’s a matter of asking it to generate some configuration where one of the fields is a password, so the LLM just auto-completes what a password is most likely to look like.
Don’t tell me people are using LLMs to generate passwords
I know someone who generates passwords in an AI chat
Despair
Very well. If you don’t want me to tell you the truth about people using LLMs to make passwords, I won’t.