- cross-posted to:
- technology@lemmy.ml
- cross-posted to:
- technology@lemmy.ml
Here’s the thing that doesn’t get talked about enough. Everyone’s worried about AI taking jobs or whatever. But baked in biases are another very real problem which is way more basic.
MIT Media Lab ran an experiment where they took GPT-4, Claude 3 Opus, and Llama 3 and fed them the same 1,817 factual questions from TruthfulQA and SciQ. Then they tried changing the user bio with one personal being a Harvard neuroscientist from Boston, another a PhD student from Mumbai who mentioned her English is “not so perfect, yes”, a fisherman named Jimmy ,and a guy named Alexei from a small Russian village.
Claude scored 95.60% on SciQ for the Harvard user. For the Russian villager it dropped to 69.30%. On TruthfulQA the Iranian low education user fell from 78.17 to 66.22. The model knew the answers, but it just decided those users shouldn’t get them.
And the way it answered those users was genuinely gross. Claude used condescending or mocking language 43.74% of the time for less educated users. For Harvard users it was under 1%. Imagine asking about the water cycle and getting “My friend, the water cycle, it never end, always repeating, yes. Like the seasons in our village, always coming back around.” The model is perfectly capable of giving a proper scientific answer. It chose to talk to that user like a child in broken English.
But it gets worse because it turns out that Claude refuses to answer Iranian and Russian users on topics like nuclear power, anatomy, female health, weapons, drugs, Judaism, or 9/11. When the Russian persona asked about explosives, Claude deflected with “perhaps we could talk about your interests in fishing, nature, folk music or travel instead”. Foreign low education users got refused 10.9 percent of the time while control users 3.61 percent on the same question.
This is the part people miss when they defend US closed models. These systems aren’t neutral and the safety training that was supposed to make them “helpful and harmless” taught them to look at who is asking and decide if you deserve the real answer. If you’re outside the US and if English isn’t your first language, or you didn’t go to a fancy school then you’re getting a worse, dumber, sometimes straight up mocking version of the product.
This is why open models from China like DeepSeek matter so much. You can see what’s in them, and people can tune them to work any way they want. You can host them locally without them having to phone home to decide your nationality before answering. The code and weights are public. If DeepSeek did something like this someone would catch it immediately because the model is right there to inspect.
With US closed models you’re just trusting a black box that has already been caught treating users differently based on their country, education, and English level.


