It could be argued that DeepSeek should not have these vulnerabilities, but let's not forget the world beta-tested GPT, and these jailbreaks are "well-known" precisely because they worked on GPT as well.
Is it known whether GPT was actually hardened against jailbreaks, or did they merely blacklist specific known prompts?
It's very hard to find genuine analysis of DeepSeek: while we should meet all claims with scepticism, there is also a broad effort to discredit it for obvious reasons.
Nice study. But I think they should have provided more context. Yesterday people were complaining that the models won't talk about the CCP or Winnie the Pooh, and today the lack of censorship is alarming… Yeah, so much for that. And by the way, censorship isn't just a thing in the bare models. Meta, OpenAI, etc. all use frameworks and extra software around the models themselves to check input and output. So it isn't really fair to compare a pipeline with AI safety factored in to a bare LLM.
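To make the "pipeline vs. bare model" distinction concrete, here's a rough, purely illustrative Python sketch. The model.generate call and the keyword check are stand-ins of my own; in real deployments the input/output checks are separate moderation models or services, not a phrase list:

    def is_disallowed(text: str) -> bool:
        # Stand-in for a real safety classifier / moderation endpoint.
        banned_phrases = ["how to build a bomb"]
        return any(p in text.lower() for p in banned_phrases)

    def guarded_chat(model, prompt: str) -> str:
        if is_disallowed(prompt):            # input-side check
            return "Sorry, I can't help with that."
        answer = model.generate(prompt)      # the bare LLM
        if is_disallowed(answer):            # output-side check
            return "Sorry, I can't help with that."
        return answer

The point is that a jailbreak test against DeepSeek is effectively probing the bare model.generate step, while the hosted competitors are answering through something like the full guarded_chat path.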
This isn’t about lack of censorship. The censorship is obviously there, it’s just implemented badly.
I know. This isn't the first article about it. IMO this could have been done deliberately: they just slapped something on with minimal effort to pass Chinese regulation, and that's it. But all of this happens in a context, doesn't it? Did the scientists even try to account for that? What's the target use case, and what are the implications for usage? And why is the baseline something that doesn't really compare, while the one category where they actually did censor is missing? I'm just saying, with that much information missing, it's a bold claim to come up with numbers like 100% and call it alarming.
(And personally, I'd say these numbers show how well those additional safeguards work. You can see how LLMs with nothing in front of them, like Llama 405B or DeepSeek, fail, and the ones with additional safeguards do way better.)