Look, I don’t believe AGI is possible, at least not within the next few decades. But it got me thinking: if one did come to be, how could we differentiate it from a Large Language Model (LLM) that has read every book ever written by humans?

Such an LLM would have “knowledge” of almost every human emotion and moral framework, and could even extrapolate from past situations when the details are slightly changed. It would also be backed by pretty powerful infrastructure, so hallucinations might be largely eliminated and it could handle multiple contexts at once.

One might say it also has to have emotions to be considered an AGI, and that’s a valid point. But an LLM is capable of putting on a facade, at least in a conversation. So we might have a hard time telling whether the emotions are genuine or just text churned out by rules and algorithms.

In a purely TEXTUAL context, I feel it would be hard to tell them apart. What are your thoughts on this? BTW this is a shower thought, so I might be wrong.

  • Tracaine@lemmy.world · 15 hours ago

    We don’t. Period. I could be looking at you dead in the eye right now and have no objective way of knowing you are sentient in the same way I am.

    • TheJesusaurus@piefed.ca · 15 hours ago

      Didn’t ask how you know. But how you understand.

      Sure, you don’t know someone else is sapient. But you treat them as if they are.