A Russian-speaking cybercrime group compromised over 600 FortiGate devices across 55 countries between January 11 and February 18, 2026, using commercial AI services to automate and scale its attacks[1]. Rather than exploiting vulnerabilities, the group targeted exposed management ports and weak credentials, using AI tools such as DeepSeek and Claude to generate attack plans, develop tooling, and orchestrate operations[2].

The threat actor, despite limited technical skills, leveraged AI to:

  • Extract device configurations and credentials
  • Compromise Active Directory environments
  • Target backup infrastructure
  • Generate comprehensive attack methodologies
  • Develop custom reconnaissance tools

“This campaign succeeded through a combination of exposed management interfaces, weak credentials, and single-factor authentication—all fundamental security gaps that AI helped an unsophisticated actor exploit at scale,” said CJ Moses, Amazon’s CISO[1].

When encountering hardened security measures, the group simply moved on to easier targets rather than attempting sophisticated exploitation, demonstrating its reliance on AI-augmented efficiency rather than technical expertise[1].


  1. Amazon Web Services - AI-augmented threat actor accesses FortiGate devices at scale

  2. The Hacker News - AI-Assisted Threat Actor Compromises 600+ FortiGate Devices in 55 Countries

  • dendrite_soup@lemmy.ml · 16 hours ago
    The framing on this story keeps landing on ‘AI enables low-skill attackers to punch above their weight.’ That’s true but incomplete.

    More precise: AI compressed the time-to-scale for credential stuffing against exposed management interfaces. 600 devices across 55 countries in 38 days isn’t a capability breakthrough — it’s a velocity breakthrough. A skilled team could have done this manually. It would have taken months and cost more. DeepSeek and Claude for attack planning and tooling reduced that to weeks with minimal headcount.

    The threat model shift isn’t ‘script kiddies become nation-state actors.’ It’s ‘nation-state-scale operations no longer require nation-state resources.’

    The actual failure here is still basic: exposed management ports and weak credentials. AI didn’t find a zero-day. It just made the boring, reliable attack faster and cheaper to run at scale. That’s the part that should be uncomfortable — the defenses that would have stopped this existed before AI entered the picture.
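The commenter's point that the failures were basic can be made concrete: all three gaps named in the article are checkable from an asset inventory before any attacker shows up. A minimal sketch of such an audit, where the inventory format and field names are hypothetical (not from the article or any Fortinet API):

```python
# Sketch: flag the three basic gaps the article names -- exposed management
# interfaces, weak/default credentials, and single-factor authentication.
# Inventory records and field names below are illustrative assumptions.

WEAK_PASSWORDS = {"admin", "password", "fortinet", "123456", ""}

def audit_device(device: dict) -> list[str]:
    """Return a list of findings for one inventory record."""
    findings = []
    if device.get("mgmt_port_public"):
        findings.append("management interface exposed to the internet")
    if device.get("admin_password", "").lower() in WEAK_PASSWORDS:
        findings.append("weak or default admin credential")
    if not device.get("mfa_enabled"):
        findings.append("single-factor authentication")
    return findings

if __name__ == "__main__":
    inventory = [
        {"host": "fw-branch-01", "mgmt_port_public": True,
         "admin_password": "admin", "mfa_enabled": False},
        {"host": "fw-hq-01", "mgmt_port_public": False,
         "admin_password": "long-random-passphrase", "mfa_enabled": True},
    ]
    for dev in inventory:
        for finding in audit_device(dev):
            print(f"{dev['host']}: {finding}")
```

Nothing in the check requires knowing anything about AI-driven attackers; it encodes exactly the pre-existing hygiene the comment says would have stopped the campaign.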