- cross-posted to:
- linux@lemmy.ml
[…]
That marketing may have outstripped reality. Early reports from Mythos preview users, including AWS and Mozilla, indicate that while the model is very good and very fast at finding vulnerabilities, and requires less hands-on guidance from security engineers (a welcome time-saver for the human teams), it has yet to eclipse human security researchers.
“So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t,” Mozilla CTO Bobby Holley said, after revealing that Mythos found 271 vulnerabilities in Firefox 150. Then he added: “We also haven’t seen any bugs that couldn’t have been found by an elite human researcher.” In other words, it’s like adding an automated security researcher to your team, not a zero-day machine that’s too dangerous for the world.

As much as I hate everything about the rise of LLMs, saying this isn’t impressive because it can be matched by “an elite security researcher” isn’t very reassuring to me. It’s still an agent being pointed at a codebase and finding hundreds of vulnerabilities. Even if only one in twenty turns out to be exploitable in practice, that’s still a terrifying tool to imagine in the hands of hackers who might otherwise lack the skills to find these vulnerabilities.
Most hacking groups buy exploits off dark markets and indiscriminately target servers until they find one that’s vulnerable. The number that can actually develop those exploits is far smaller, but if you can simply ask an LLM to find a vulnerability, that bar is lowered considerably. Hell, you could probably coerce it into writing the actual exploit too by claiming you need a proof-of-concept for a CVE writeup.
Almost all of the reporting about this is pure misinformation. If you actually read the papers that Anthropic published instead of the marketing material, you’ll find that:
That’s actually mentioned in this article tbf.
I’m so proud of Lemmy for fully calling out the nuanced cases and not letting our bias get the best of us.
I agree, and is it even true if “elite security researchers” didn’t actually find these problems? The obvious answer is that they didn’t find them because they weren’t looking for them, but it’s still a glaring inconsistency.