LLM-generated passwords appear strong, but are fundamentally insecure. Testing across GPT, Claude, and Gemini revealed highly predictable patterns: repeated passwords across runs, skewed character distributions, and dramatically lower entropy than expected. Coding agents compound the problem by sometimes preferring and using LLM-generated passwords without the user’s knowledge. We recommend avoiding LLM-generated passwords and directing both models and coding agents to use secure password generation methods instead.
LLM-generated passwords (generated directly by the LLM, rather than by an agent using a tool) appear strong, but are fundamentally insecure, because LLMs are designed to predict tokens – the opposite of securely and uniformly sampling random characters.
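As an added illustration (not code from the research quoted above), this Python sketch generates passwords with the standard-library `secrets` module and measures the three symptoms the summary names: repeats across runs, skewed character distribution, and lower-than-expected entropy. `CONSTRUCTED_SAMPLE` and all function names are assumptions made up for this example; the sample is not real model output.

```python
import math
import secrets
import string
from collections import Counter

ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 symbols

# Constructed data (NOT real model output): mimics passwords repeating across runs.
CONSTRUCTED_SAMPLE = [
    "K9#mP2$vL7@xQ4!n", "K9#mP2$vL7@xQ4!n", "K9#mP2$vL7@xQ4!n", "K9#mP2$vL7@xQ4!n",
    "T7$kM3#pR8@wZ5!q", "T7$kM3#pR8@wZ5!q", "K9#mR4$vL2@xQ7!p", "X4@nK9#mP2$vL7!q",
]

def generate_password(length=20):
    """Draw every character independently and uniformly via the OS CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def ideal_bits(length, alphabet_size=len(ALPHABET)):
    """Entropy of a uniformly sampled password: length * log2(alphabet size)."""
    return length * math.log2(alphabet_size)

def duplicate_rate(passwords):
    """Fraction of the sample that repeats an earlier password."""
    return 1 - len(set(passwords)) / len(passwords)

def min_entropy_bits(passwords):
    """-log2 of the most common password's share: what a guesser's first try faces."""
    top_count = Counter(passwords).most_common(1)[0][1]
    return -math.log2(top_count / len(passwords))

def shannon_bits_per_char(passwords):
    """Shannon entropy of the pooled character frequencies (skew lowers it)."""
    counts = Counter("".join(passwords))
    total = sum(counts.values())
    return -sum(n / total * math.log2(n / total) for n in counts.values())

def report(name, passwords):
    print(f"{name}: dup={duplicate_rate(passwords):.2f} "
          f"min-entropy={min_entropy_bits(passwords):.2f} bits "
          f"chars={shannon_bits_per_char(passwords):.2f} bits/char")

if __name__ == "__main__":
    print(f"ideal for 16 chars: {ideal_bits(16):.1f} bits")
    report("constructed sample", CONSTRUCTED_SAMPLE)
    report("secrets (2000 draws)", [generate_password(16) for _ in range(2000)])
```

The min-entropy estimate can never exceed log2 of the sample size, so it is useful for exposing repeats in a small sample, not for certifying a generator as strong.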
People are using LLMs to diagnose disease, write prescriptions, deny health care claims, deny loans and grants, write scientific papers, review scientific papers, draft engineering and architectural documents, and talk to their loved ones.
Despair