Can we stop calling these glorified chat bots “AI” now?
Corps sure can’t
These chatbots are AI - They tailor responses over time so long as previous messages are in memory, showing a limited level of learning
The issue is these chatbots either:
A) Get so little memory that they effectively don’t even have short term memory, or
B) Are put in situations where that chat memory learning feature is moot
They are AI, they are just stupidly simple and inept AI that barely qualify
They have no memory actually. They are completely static. When you chat with them, every single previous prompt and response from that session is fed back through as if it were one large single prompt. They are just faking it behind a chat-like user interface. They most definitely do not learn anything after training is complete.
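The statelessness described in that comment can be sketched in a few lines of Python. This is a hypothetical toy, with `fake_llm` standing in for a real model API; the point is only that the "memory" lives in the transcript the chat UI resends, not in the model:

```python
def fake_llm(prompt: str) -> str:
    """Stateless stand-in for a model call: output depends only on this prompt."""
    return f"(reply to a {len(prompt)}-char prompt)"

class ChatSession:
    """The 'memory' lives entirely in this transcript list, not in the model."""

    def __init__(self):
        self.transcript = []  # (role, text) pairs

    def send(self, user_message: str) -> str:
        self.transcript.append(("user", user_message))
        # Every turn, the WHOLE history is flattened into one big prompt
        # and fed through as if it were a single request.
        prompt = "\n".join(f"{role}: {text}" for role, text in self.transcript)
        reply = fake_llm(prompt)
        self.transcript.append(("assistant", reply))
        return reply

session = ChatSession()
session.send("Hello")
session.send("What did I just say?")  # it only "remembers" via the resent prompt
```

Delete the transcript and the "memory" is gone; the model function itself carries nothing between calls.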
Their brain is a neural-net processor, a learning computer, but Skynet sets the switch to read-only when they’re sent out alone.
… No. They’re instanced so that when a new person interacts with them, they don’t have the memories of interacting with the person before them. A clean slate, using only the training data in the form the developers intend. It’s still AI, it’s just not your girlfriend. The fact you don’t realize that they can and do learn after their initial training proves people just hate what they don’t understand. I get it, most people don’t even know the difference between a neural network and AI because who has the time for that? But if you just sit here and go “nuh uh they’re faking it” rather than push people and yourself to learn more, I invite you, cordially, to shut the fuck up.
Dipshits giving their opinions as fact is a scourge with no cure.
I love how confidently wrong you are!
About which part? The part that they can remember and expand their training data with new interactions, but often become so corrupted by them that the original intent behind the AI is irreversibly altered? That’s been around for about a decade. How about the fact they’re “not faking it”, because the added capacity to compute and generate new content requires sophisticated planning just to continue running in a timely manner?
I’d love to know which part you took issue with but you seemingly took my advice to shut the fuck up and I do profoundly appreciate it.
That’s a completely different kind of AI. This story, and all the discussion up to this point, has been about the LLM-based AIs employed by Google search and ChatGPT.
And I explained to you that these models aren’t incapable of learning; they’re given artificial restrictions so they don’t, in order to prevent what I linked from happening. They don’t learn, to preserve the initial experience, but they are the exact same kind of AI: generative.
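The distinction being argued here, between a model whose weights *could* be updated and one deployed with updates switched off, can be sketched in plain Python. Everything below is illustrative (the class, method names, and the `frozen` flag are invented for the sketch, not any real API):

```python
class TinyModel:
    """Hypothetical toy: the update machinery exists, but deployment never uses it."""

    def __init__(self):
        self.weights = [0.5, -0.25, 0.1]  # stand-in for billions of parameters
        self.frozen = True                # deployment-time restriction

    def generate(self, prompt: str) -> str:
        # Inference reads the weights but never writes them.
        score = sum(self.weights) + len(prompt) * 0.01
        return f"(output with score {score:.2f})"

    def update(self, gradient):
        # The learning path exists in the code...
        if self.frozen:
            raise RuntimeError("online learning disabled in deployment")
        # ...but this line is unreachable while frozen.
        self.weights = [w - 0.01 * g for w, g in zip(self.weights, gradient)]

model = TinyModel()
snapshot = list(model.weights)
model.generate("hello")           # serving traffic...
model.generate("hello again")
assert model.weights == snapshot  # weights untouched, however much traffic it serves
```

Under this framing, both sides of the thread are describing the same artifact: the architecture is trainable, but the deployed instance serves requests with its weights held fixed.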
You can “explain” it that way as much as you want, that doesn’t make it true.