A design flaw – or expected behavior based on a bad design choice, depending on who is telling the story – baked into Anthropic’s official Model Context Protocol (MCP) puts as many as 200,000 servers at risk of complete takeover, according to security researchers.
For some reason I missed the sentence explaining what "GPT Researcher" is, my bad.
I totally agree with what you said, and it confirms this isn't a vulnerability. Handing access to others inherently comes with risks, and individual tools aren't responsible for enforcing security boundaries. That's the job of virtualisation or mechanisms like LSMs (Linux Security Modules).
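To illustrate the point about pushing isolation down to the runtime layer, here's a hedged sketch: an untrusted MCP-style server could be confined in a locked-down container so the sandbox, not the tool itself, enforces the boundary. The image name is a placeholder, and which restrictions are viable depends on what the server actually needs.

```shell
# Hypothetical sketch: isolate an untrusted server at the container layer
# instead of trusting the tool to police itself.
# "some-mcp-server" is a placeholder image name.
docker run --rm \
  --read-only \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --network none \
  some-mcp-server
```

An LSM profile (e.g. AppArmor or SELinux) on the host would tighten this further; the container flags above only cover the kernel-capability and filesystem side.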