• ptu@sopuli.xyz · 16 hours ago

    My critique wasn’t about the outcome of the results, but about how they were achieved. LLMs hallucinating make computers commit “human errors”, which makes them less deterministic, and determinism is the key reason I prefer doing some things on a computer.