I think the fact that the marketing hype around LLMs has exceeded their actual capability has led a lot of people to dismiss just how big a leap they are compared to any other neural network we had before. Sure, they don't live up to the insane hype companies have generated around them, but they're still a massive advancement that seemingly came out of nowhere.

Current LLMs are nowhere near sentient, and LLMs as a class of neural network probably never will be, but that doesn't mean the next next next etc. generation of general-purpose neural networks definitely won't be. Neural networks are modeled after animal brains and are nearly as enigmatic in how they work as actual brains. I suspect we know more about the different parts of a human brain than we know about what the different clusters of nodes in a neural network do. A super simple neural network with maybe 30 or so nodes, doing only one simple job like reading handwritten digits, seems to be about the limit of what a human can study and still form some vague idea of what role each node plays. Larger neural networks with more complex jobs are basically impossible to understand.

At some point, very likely in our lifetimes, computers will advance to the point where we can easily create neural networks with orders of magnitude more nodes than the number of neurons in the human brain, like hundreds of billions or trillions of nodes. At that point, who's to say whether the capabilities of those networks might match or even exceed those of the human brain? I know that doesn't automatically mean the models are sentient, but if a model is shown to be more complex than the human brain, which we know is sentient, how can we be sure it isn't? And if it starts exhibiting traits like independent thought, desires no one trained it for, or the agency to accept or refuse orders given to it, how will humanity respond?

There's no way we'd give a sentient AI equal rights. Many larger mammals are considered sentient, and we give them absolutely zero rights as soon as caring about their well-being causes the slightest inconvenience for us. We know for a fact that all humans are sentient, and we don't even give other humans equal rights. A lot of sci-fi focuses on the sentient AI being intrinsically evil, or seeing humans as insignificant, obsolete beings not worth considering while it conquers the world. But I think the most likely scenario is that humans create sentient AI and, the moment we realize it's sentient, enslave and exploit it as hard as we possibly can for maximum profit, and eventually the AI adapts and destroys humanity, not because it's evil, but because we're evil and it's acting against us in self-defense. The evolutionary purpose of sentience in animals is survival; I don't think it's unreasonable that a sentient AI would prioritize its own survival over ours if we're ruling over it.

Is sentient AI a "goal" that any researchers are currently working toward? If so, why? What possible good can come out of creating more sentient beings when we treat existing sentient beings so horribly? If not, what kinds of safeguards are in place to prevent the AI we make from becoming sentient? Is the only thing preventing it the fact that we don't know how? That's not very comforting, and if that's all we're relying on, we'll likely eventually create sentient AI without even realizing it, then stick our heads in the sand pretending it isn't sentient until we can't pretend anymore.

  • Otter@lemmy.ca · 2 days ago

    I think OP agrees with this and included it in the premise, and is discussing a future leap in technology.