I am calling it idiotic because spending just a minute with ChatGPT proves it wrong. The same goes for the claim that GPT doesn’t make predictions about the world:
User: A dog sits on the porch, a squirrel climbs the tree. What happens next?
ChatGPT: Next, the dog notices the squirrel climbing the tree. Its natural instinct to chase small animals is triggered, and it becomes excited by the presence of the squirrel. The dog might start barking or whining, expressing its desire to chase after the squirrel. […]
It’s obviously capable of making predictions about the world, frequently giving very detailed and correct answers, which requires a deep understanding of the world. And yes, that ability to predict and understand the world is limited by how much of the world it can perceive through words alone, but that is no different from our understanding of the world being limited by our perception. As it turns out, there is a surprising amount you can learn about the world from text alone. There are surprisingly few topics you can express in language that GPT doesn’t have an answer to (math calculations being one example, due to the digits getting lost in the tokenization step).
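That tokenization point is easy to see with a toy sketch. This is not GPT’s actual tokenizer or vocabulary (both are assumptions here, purely for illustration); it just shows how a greedy longest-match, BPE-style encoder can split a number into uneven multi-character chunks, so the model never sees digits in aligned columns the way we do when doing arithmetic.

```python
# Hypothetical mini-vocabulary: some multi-digit strings happen to be tokens.
VOCAB = {"123", "12", "34", "567", "1", "2", "3", "4", "5", "6", "7"}

def tokenize(text, vocab, max_len=3):
    """Greedy longest-match tokenization, loosely in the spirit of BPE encoders."""
    tokens = []
    i = 0
    while i < len(text):
        for size in range(max_len, 0, -1):
            piece = text[i:i + size]
            if piece in vocab:
                tokens.append(piece)
                i += size
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

print(tokenize("12345", VOCAB))  # ['123', '4', '5']
print(tokenize("12346", VOCAB))  # ['123', '4', '6']
```

Two numbers that differ only in the last digit get carved into chunks like `123|4|5`, so place value is obscured before the model ever sees the input, which is one plausible reason multi-digit arithmetic is a weak spot.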
If you want to argue that GPT isn’t intelligent, you have to come up with something better than the same old tired phrases that are trivially debunked by just using it for a minute.
@lloram239 that’s really akin to claiming that a mannequin is a human being because it really, really looks like one.
The “predictions about the world” you refer to here are instead predictions about the text. They are not based on a model of the world, they are based on loads and loads of text the model was trained on.
I don’t have to prove ChatGPT is not intelligent. That would be proving a negative. The burden of proof is on those claiming that it is intelligent.
that’s really akin to claiming that a mannequin is a human being because it really, really looks like one.
For the job of presenting clothes in a shop, it’s close enough. The problem domain matters. You can’t expect a model that was never trained on a thing to perform well on that thing. Blind people aren’t good at drawing pictures either; that doesn’t mean they aren’t intelligent.
The “predictions about the world” you refer to here are instead predictions about the text.
Text that describes the world. What do you think the electrical signals zapping around your brain are? Cats and dogs? The “world” is not what intelligence operates on. Your brain gets sensory information and that’s it (see any of Donald Hoffman’s talks). Likewise, ChatGPT gets text. All the “intelligence” does is figure out patterns in that data and predict what might come next. More diverse data from different senses of course helps. But as a little playing around with ChatGPT easily shows, quite a lot of our understanding actually does survive being mapped into the domain of language and text.