LLM-generated passwords appear strong, but are fundamentally insecure. Testing across GPT, Claude, and Gemini revealed highly predictable patterns: repeated passwords across runs, skewed character distributions, and dramatically lower entropy than expected. Coding agents compound the problem by sometimes preferring and using LLM-generated passwords without the user’s knowledge. We recommend avoiding LLM-generated passwords and directing both models and coding agents to use secure password generation methods instead.
LLM-generated passwords (generated directly by the LLM, rather than by an agent using a tool) appear strong, but are fundamentally insecure, because LLMs are designed to predict tokens – the opposite of securely and uniformly sampling random characters.
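The two properties the article measured (repeats across runs and a skewed character distribution, i.e. low entropy) can be checked with a small audit, and the fix is to sample uniformly from the OS CSPRNG instead. This is an added example, not code from the article; the `LLM_LIKE` sample is constructed to mimic the reported patterns, not real model output.

```python
import math
import secrets
import string
from collections import Counter

ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 symbols


def generate_password(length: int = 16, alphabet: str = ALPHABET) -> str:
    """Uniformly sample each character with the OS CSPRNG (secrets), never an LLM."""
    return "".join(secrets.choice(alphabet) for _ in range(length))


def shannon_entropy_bits_per_char(passwords: list[str]) -> float:
    """Empirical Shannon entropy of the character distribution over a sample set."""
    counts = Counter("".join(passwords))
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())


def duplicate_fraction(passwords: list[str]) -> float:
    """Fraction of passwords that are repeats of an earlier one (0.0 = all unique)."""
    return 1.0 - len(set(passwords)) / len(passwords)


def audit(passwords: list[str], alphabet: str = ALPHABET) -> dict:
    """Compare a batch of passwords against the ideal of uniform sampling from alphabet."""
    return {
        "samples": len(passwords),
        "duplicate_fraction": duplicate_fraction(passwords),
        "entropy_bits_per_char": shannon_entropy_bits_per_char(passwords),
        "ideal_bits_per_char": math.log2(len(alphabet)),  # 6.55 for 94 symbols
        "distinct_chars": len(set("".join(passwords))),
    }


# Constructed (hypothetical) sample: 8 "runs" that repeat passwords and lean on a
# narrow set of characters, the pattern the article reports for LLM output.
LLM_LIKE = [
    "Xk7#pQ2m!Lw9zR4t", "Xk7#pQ2m!Lw9zR4t", "Sun$hine2024!Ab", "Blu3Sky!2024$Qz",
    "Xk7#pQ2m!Lw9zR4t", "Sun$hine2024!Ab", "M0untain!2024$K", "Blu3Sky!2024$Qz",
]

if __name__ == "__main__":
    print("llm-like:", audit(LLM_LIKE))
    print("secrets :", audit([generate_password() for _ in range(8)]))
```

Shannon entropy of a small batch is only an estimate; it catches the gross skew and repetition the article describes, not subtle bias, so it complements rather than replaces a proper source of randomness.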
This is akin to asking Karen from accounting to generate a password for you, and trusting that it will be a true random and secure password and that she won’t yap about it to everyone.
That statement is one of the painfully dumbest things I’ve read in my life, and I’ve read the bible.