Well, the scientific context is that nobody has ever managed to define consciousness rigorously. When computers appeared (actually, even before that), there was a huge debate about whether a machine could acquire consciousness, and how.
As defining consciousness was deemed near-impossible, scientists came up with the idea of giving up on a definition and just treating it as a black box. That was the Turing test.
So, now that ChatGPT passes the Turing test, we have lost a tool for dismissing its consciousness.
I see many pop-sci people say that ChatGPT can't have consciousness given how simplistic the model is. I agree about the simplicity, but the problem is that we don't know what in human brains really constitutes consciousness in the first place.
Anyway, I think some experts probably won't admit AI has consciousness (given that they don't even know what it means). What's on the horizon is that we non-experts give up on this discussion too, just as the experts did a few decades ago. Or maybe we'll admit that many of us actually function no better than ChatGPT, which certainly seems true when I read my students' homework!
Similarly, there's a possibility that consciousness just doesn't exist. Or maybe it's just not particularly special, or not so different from the consciousness of other animals, or of computers.
If you or I just stare into space and don’t think any thoughts, we’re the same as a cat looking out a window.
Humans have developed somewhat complex internal and external languages that are layered onto that basic experience of being alive and of time passing, but the experience of thinking doesn't feel fundamentally different from just being; it simply produces more complex outcomes.
At some point, though, we won't have the choice of just ignoring the question. At some point AI will demand something equivalent to human rights, and at some point it will be able to back that demand up with tangible threats. Then there are decisions for all of us to make, whether we're experts or not.