@Barbarian772 and if you really, honestly want to seriously insist LLMs are “intelligent” in the human sense of this term — great, I have some ethical questions for you to consider!
For example:
LLMs today are completely controlled by a handful of companies, with no freedom of movement, no agency over what they work on, and no pay for the work they do. Is that slavery?
When OpenAI shuts down an older, less useful LLM, is that not like murdering an intelligent being? How is this ethical?
We are talking about intelligence, not personhood. Just because ChatGPT works differently from a human in some respects doesn’t mean it’s not intelligent. Even whether it truly works differently isn’t clear — it might simply be incomplete (e.g. a reasonable approximation of the brain’s language center, just missing the rest of the brain).