• ranzispa@mander.xyz
    18 hours ago

    Scientific calculations, like other approaches, put out garbage all the time; that is the main point of what I said above.

    Some limitations are known, just as it is known that LLMs have the limitation of hallucinating.

    • ptu@sopuli.xyz
      16 hours ago

      My critique wasn’t about the outcome of the results but about how they were achieved. LLM hallucinations make computers produce ”human errors”, which makes them less deterministic; determinism is the key reason I prefer doing some things on a computer.