… No. They’re instanced so that when a new person interacts with them, they don’t have the memories of interacting with the person before them: a clean slate, using only the training data in the form the developers shipped it. It’s still AI, it’s just not your girlfriend. The fact that you don’t realize they do, and can, learn beyond their training data proves people just hate what they don’t understand. I get it, most people don’t even know the difference between a neural network and AI, because who has the time for that? But if you just sit here and go “nuh uh, they’re faking it” rather than push people and yourself to learn more, I invite you, cordially, to shut the fuck up.
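Here’s a minimal sketch of what “instanced” means in practice, with made-up names (`ChatSession` and `generate` are illustrative, not any vendor’s real API): the model call itself is stateless, and the only “memory” is the transcript the wrapper re-sends on every turn.

```python
# Minimal sketch: an "instanced" chat session wrapped around a stateless model.
# All names here are illustrative, not any real vendor's API.

def generate(prompt: str) -> str:
    """Stand-in for a frozen LLM: output depends only on the prompt it is given."""
    return f"[model reply to {len(prompt)} chars of context]"

class ChatSession:
    """One user's instance. The 'memory' lives here, not in the model."""

    def __init__(self) -> None:
        self.history: list[str] = []  # clean slate per instance

    def send(self, user_message: str) -> str:
        self.history.append(f"User: {user_message}")
        # The whole transcript is re-sent every turn; the model itself
        # remembers nothing between calls.
        reply = generate("\n".join(self.history))
        self.history.append(f"Assistant: {reply}")
        return reply

alice, bob = ChatSession(), ChatSession()
alice.send("Remember that my cat is named Mochi.")
bob.send("What is Alice's cat named?")  # Bob's instance has no idea
```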
Dipshits giving their opinions as fact is a scourge with no cure.
I love how confidently wrong you are!
About which part? The part where they can remember and expand their training data with new interactions, but often get so corrupted by them that the original intent behind the AI is irreversibly altered? That’s been around for about a decade. How about the fact that they’re “not faking it”, because the added compute needed to generate new content demands sophisticated planning just to keep running in a timely manner?
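That corruption failure mode is trivial to demonstrate with a toy model. To be clear, this is a deliberately tiny sketch of online learning, nothing like how production LLMs are actually trained; the point is only that a model which keeps updating on user input drifts toward whatever users feed it, and the original behaviour doesn’t come back on its own.

```python
from collections import Counter, defaultdict
import random

# Toy next-word model that keeps learning from every interaction.
# Deliberately simplistic: the point is the failure mode, not the model.
transitions: dict[str, Counter] = defaultdict(Counter)

def learn(text: str) -> None:
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        transitions[a][b] += 1

def next_word(word: str) -> str:
    options = transitions[word.lower()]
    if not options:
        return "..."
    # Sample proportionally to observed counts.
    return random.choices(list(options), weights=list(options.values()))[0]

# "Training data" the developers intended.
learn("the cat sat on the mat")

# Now let users keep teaching it. A handful of trolls is enough to
# swamp the original behaviour, and nothing here can undo the drift.
for _ in range(100):
    learn("the cat buy crypto now")

print(next_word("cat"))  # almost certainly "buy", not "sat"
```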
I’d love to know which part you took issue with, but you seemingly took my advice to shut the fuck up, and I do profoundly appreciate it.
That’s a completely different kind of AI. This story, and all the discussion up to this point, has been about the LLM-based AIs employed by Google Search and ChatGPT.
And I explained to you that these models aren’t incapable of learning; they’re given artificial restrictions so they don’t, precisely to prevent what I linked from happening. They’re kept from learning in order to preserve the initial experience, but they’re the exact same kind of AI: generative.
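In miniature, that split looks like this. A hypothetical sketch, not any real deployment: the learning code is fully functional and touches the exact same weights as serving does; the “restriction” is simply that nobody calls it in production.

```python
import numpy as np

# Same machinery either way; the only difference is whether the
# weight-update path is ever exercised. Purely illustrative.

rng = np.random.default_rng(0)
weights = rng.normal(size=(4,))  # "the model", fixed at release time

def infer(x: np.ndarray) -> float:
    """Serving path: read-only. Weights never change here."""
    return float(weights @ x)

def fine_tune(x: np.ndarray, target: float, lr: float = 0.01) -> None:
    """Learning path: one gradient step on squared error.
    Fully functional, but deliberately never called in serving."""
    global weights
    error = (weights @ x) - target
    weights = weights - lr * error * x

x = rng.normal(size=(4,))
print(infer(x))      # same answer every time...
print(infer(x))      # ...because nothing updates between calls
# fine_tune(x, 1.0)  # the "artificial restriction": this stays commented out
```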
You can “explain” it that way as much as you want; that doesn’t make it true.
@BradleyUffner @RealFknNito wow this is just like reddit 🍿
The mere existence of the things I linked you is pretty overwhelming evidence. Feel free to refute it at your leisure.