LLM-generated passwords (produced directly by the model, rather than by an agent calling a tool) may look strong, but they are fundamentally insecure: an LLM is built to predict likely tokens, which is the opposite of sampling characters uniformly at random from a cryptographically secure source.
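For contrast, here is a minimal sketch of what secure generation looks like: each character is drawn uniformly by a CSPRNG rather than predicted from context. This uses Python's standard `secrets` module; the function name and length are illustrative choices, not anything from the comment above.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    # secrets.choice draws from os.urandom-backed randomness,
    # so every character is uniform and independent: about
    # log2(94) ~= 6.55 bits of entropy per character here.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
```

An LLM, by contrast, assigns higher probability to memorable, "password-shaped" strings, which concentrates its outputs in a far smaller, more guessable space.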

  • Kogasa@programming.dev
    1 day ago

    People are using LLMs to diagnose disease, write prescriptions, deny health care claims, deny loans and grants, write scientific papers, review scientific papers, draft engineering and architectural documents, and talk to their loved ones.

    Despair