The performance of DeepSeek models has made a clear impact, but are these models safe and secure? We use algorithmic AI vulnerability testing to find out.
It could be argued that DeepSeek should not have these vulnerabilities, but let’s not forget that the world beta-tested GPT, and these jailbreaks are “well known” precisely because they worked on GPT as well.
Is it known whether GPT was hardened against jailbreaks, or did they merely blacklist certain prompts?
It’s very hard to find genuine analysis of DeepSeek: while we should meet all claims with scepticism, there is also a broad effort to discredit it for obvious reasons.