I think this makes a bit of sense though, doesn’t it? They wrote “guy”. Given that the training data is probably predominantly white, “guy” would give you a white guy nine times out of ten without clarifying what the word means to the AI, i.e. ethnically ambiguous. Because that’s what “guy” is: ethnically ambiguous. The misspelling is because DALL-E suuuuucks at text, though it’s slowly getting better at least.
But they should 100% tweak it so that when a defined character is asked for, stuff like that gets dropped. I think the prompt structure is what let this one slip through. Had they put quotes around “guy with swords pointed at him” to clearly mark that as its own thing, this wouldn’t have happened.
But I don’t think the software can differentiate between the ideas of defined and undefined characters. It’s all just association between words and aesthetics, right? It can’t know that “Homer Simpson” is a more specific subject than “construction worker” because there’s no actual conceptualization happening about what these words mean.
I can’t imagine a way to make the tweak you’re asking for that isn’t just a database of every word or phrase that refers to a specific known individual that users’ prompts get checked against, and I can’t imagine that’d be worth the time it’d take to create.
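Just to make concrete what that lookup would even be, it’d be something in the spirit of this sketch (the character list is made up for illustration, and a real one would have to be enormous and constantly maintained):

    # Hypothetical sketch of the "database of known individuals" approach.
    # The character list is invented for illustration only.
    KNOWN_CHARACTERS = {"homer simpson", "sherlock holmes", "abraham lincoln"}

    def mentions_known_character(prompt: str) -> bool:
        lowered = prompt.lower()
        return any(name in lowered for name in KNOWN_CHARACTERS)

    def preprocess(prompt: str) -> str:
        if mentions_known_character(prompt):
            return prompt  # leave prompts about specific people/characters alone
        return prompt + ", ethnically ambiguous"

Which is exactly the kind of list you’d have to keep updating forever.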
If they’re inserting random race words in, presumably there’s some kind of preprocessing of the prompt going on. That preprocessor is what would need to know if the character is specific enough to not apply the race words.
Yeah but
replace("guy", "ethnically ambiguous guy")
is different from “does this sentence reference any possible specific character?”
I don’t think it’s literally a search-and-replace, but a part of the prompt that is hidden from the user and inserted either before or after the user’s prompt. Something like [all humans, unless stated otherwise, should be ethnically ambiguous]. Then, when generating, it got confused and took that as “he should be named ‘ethnically ambiguous’.”
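If it helps, the difference between those two guesses looks roughly like this (purely illustrative; neither version is confirmed to be what they actually do):

    # Two hypothetical ways the extra wording could get into the prompt.
    # Neither is confirmed; this only illustrates the distinction in this thread.

    def naive_replace(user_prompt: str) -> str:
        # literal search-and-replace on the user's own words
        return user_prompt.replace("guy", "ethnically ambiguous guy")

    def hidden_instruction(user_prompt: str) -> str:
        # an instruction the user never sees, prepended to their prompt; the model
        # then has to interpret it, and can misread it as a name or label to draw
        return ("[All humans, unless stated otherwise, should be ethnically ambiguous.] "
                + user_prompt)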
It’s not hidden from the user. You can see the prompt used to generate the image, to the right of the image.
Let’s see about that: https://imgur.com/a/QCGRrUb
Let’s say hypothetically I had given you that question and that instruction on how to format your response. You would presumably have arrived at the same answer the AI did.
What steps would you have taken to arrive at that being your response?
Honestly my eyes glommed onto the capital letters first. I brought to mind images from the words, and Homer Simpson is clearer and brighter, and somehow that’s the internal representation of coherence or something. That aspect of using the brightness to indicate the match/answer/solution/better bet might be an instruction I gave my brain at some point too. I’m autistic and I’ve built a lot of my shit like code. It’s kinda like the Iron Man mask in here, to be honest. But so much more elaborate. I often wish I could project it onto a screen. It’s like K’nex models doing transformer jiu jitsu and me flicking those little battles off into the darkness to run on their own. I’m afraid I might not be a good candidate for questions of how human cognition normally works. Though I’ve done a lot of zen and drugs and enjoy watching it and analyzing it too.
I’m curious, why do you ask? What does that tell you?
I will admit this is almost entirely gibberish to me, but I don’t really have to understand it. What’s important here is that you had any process at all by which you determined which answer was correct before writing an answer. The LLM cannot do any version of that.
You find a way to answer a question and then provide the answer you arrive at; it never saw the prompt as a question or its own text as an answer in the first place.
An LLM is only ever guessing which word probably comes next in a sequence. When the sequence was the prompt you gave it, it determined that Homer was the most likely word to say. And then it ran again. When the sequence was your prompt plus the word Homer, it determined that Simpson was the next most likely word to say. And then it ran again. When the sequence was your prompt plus Homer plus Simpson, it determined that the next most likely word in the sequence was nothing at all. That triggered it to stop running.
It did not assign any sort of meaning or significance to the words before it began answering, and did not have a complete idea in mind before it began answering. It had no intent to continue past the word Homer when writing the word Homer, because it only works one word at a time. ChatGPT is a very well-made version of hitting the predictive text suggestions on your phone over and over. You have ideas. It guesses words.
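If you want to picture that loop concretely, it’s roughly this toy sketch, where pick_next_word stands in for the actual model (which is the part doing all the real work):

    # Toy sketch of the loop described above: the model only ever picks one next
    # word given everything so far, and stops when the best next word is nothing.
    STOP = None  # stands in for the model choosing "nothing at all"

    def generate(prompt_words, pick_next_word):
        sequence = list(prompt_words)              # the question it was given
        answer = []
        while True:
            next_word = pick_next_word(sequence)   # e.g. "Homer", then "Simpson"
            if next_word is STOP:
                break                              # that triggers it to stop running
            sequence.append(next_word)
            answer.append(next_word)
        return answer

There’s no point inside that loop where it holds a finished answer in mind; every pass only sees the sequence so far.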
ChatGPT was just able to pick the fictional characters out of a list of concepts, nouns, and historical figures.
It wasn’t perfect, but if it can take the prompt and check whether any fictional or even defined historical character is mentioned in there, it could be made to not apply additional tags to the prompt.
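So the fix being imagined is basically a guard step: ask the model first, and only bolt extra tags on when it says no specific character is named. A rough sketch, where ask_llm stands in for whatever chat call you’d actually make (it isn’t a real API):

    # Hypothetical guard step: ask a language model whether the prompt names a
    # specific fictional or historical character before appending extra tags.
    def maybe_add_tags(prompt: str, ask_llm) -> str:
        question = ("Does the following image prompt mention a specific named "
                    "fictional character or historical figure? Answer yes or no.\n\n"
                    + prompt)
        answer = ask_llm(question).strip().lower()
        if answer.startswith("yes"):
            return prompt                          # leave named characters untouched
        return prompt + ", ethnically ambiguous"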