A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar, or does its token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.
That’s a silly semantic point to quibble over. Would you tell a robot hunting you down, “You’re only acting intelligent, you’re not actually intelligent!”?
People need to get over themselves as a species. Meat isn’t anything special; it turns out silicon can think too. Not in quite the same way, but it still thinks in ways that are useful to us.