ChatGPT just guesses the next word. Stop anthropomorphizing it.
Humans are just electrified meat. Stop anthropomorphizing it.
Found Andrew Ure’s account
🙄
Another example of why I hate techies
It guesses the next word… based on examples created by humans. It's not just making shit up out of thin air.
Yes, it does that because it was designed to sound convincing, and guessing the next word is an effective way to accomplish that. Sounding convincing is the primary goal behind the design of every chatbot, and it's what the Turing Test was intended to gauge. Anyone who builds a chatbot wants it to sound good first and foremost.
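For anyone who wants to see what "guesses the next word" literally means, here's a minimal sketch in Python using the open GPT-2 model via Hugging Face transformers. GPT-2 is just a stand-in assumption here (ChatGPT's own weights aren't public); the prompt is made up for illustration.

```python
# Minimal sketch of next-word prediction: the model maps a prompt to a
# probability distribution over its vocabulary, learned from human-written
# text. "Guessing the next word" means picking from that distribution.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")        # stand-in for ChatGPT
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The Turing Test was intended to"          # hypothetical prompt
ids = tok(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits                      # (1, seq_len, vocab_size)

# Probabilities for the very next token, given the prompt.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode([int(i)])!r}: {p.item():.3f}")
```

No intent, no understanding required: it's a learned distribution over tokens, and the model (or a sampler on top of it) picks the next one over and over.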
Lol making a mistake isn’t unique to humans. Machines make mistakes.
Congratulations for knowing that an LLM isn't the same as a human though, I guess!
I knew someone would say that.