The Real AI Threat and the Blurred Lines Between Actual Risk and Marketing Hype

AI-driven attacks are real, and they’re happening now. But to paraphrase cyberpunk pioneer William Gibson, “The future is already here – it’s just not evenly distributed.”

The line between legitimate emerging threats and manufactured panic has long been blurred in cybersecurity. Rarely, however, has the contrast been starker than in the events of the past several weeks, in which actual AI security threats have collided with the spread of AI hype.

Recently, MIT Sloan School of Management published, and then quickly pulled, a paper claiming that 80.83 percent of ransomware attacks in 2024 utilized AI, a figure experts criticized as disconnected from verifiable evidence. On Mastodon, security researcher Kevin Beaumont called the study “absolutely ridiculous” and “almost complete nonsense.”

Late yesterday, Anthropic revealed that it had detected, in mid-September, threat actors manipulating its Claude Code tool in attempts to break into about thirty global targets, succeeding in a small number of cases. “The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention,” the company posted in its overview, which links to the detailed report.

Earlier, on November 5, Google's Threat Intelligence Group documented concrete evidence of nation-state actors deploying AI-powered malware in live operations against real targets. Google's GTIG tracked extensive abuse of its Gemini API by state-sponsored threat actors from China, Russia, Iran, and North Korea. Russia's APT28 deployed PROMPTSTEAL malware against Ukrainian government targets in June 2025—the first confirmed instance of malware querying a language model in live operations. This represents a real, verifiable escalation in attack sophistication. 

Ukrainian CERT-UA independently confirmed the threat under the designation LAMEHUG after receiving reports of targeted emails against executive government authorities.

Additionally, an analysis by cybersecurity firm CrowdStrike found that 80 percent of ransomware-as-a-service groups now incorporate automation or AI capabilities—not into 80 percent of their attacks, but into the feature sets of their platforms. The distinction matters enormously. These same researchers documented that average breakout time (the window from initial access to lateral movement) compressed from 48 minutes in 2024 to 18 minutes in mid-2025, a genuine metric reflecting increased automation and sophistication.

The fact is that threat actors have deployed malware families that self-modify through LLM queries. And the behavioral analysis challenges are real: polymorphic and metamorphic malware enhanced through AI creates real-world detection problems for defenders. The economics of ransomware-as-a-service have shifted as well, with groups offering AI-powered automation commanding premium affiliate participation and larger extortion payments.
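One practical upshot for defenders is that malware which rewrites itself through hosted LLM queries still has to reach those APIs over the network, which gives behavioral monitoring something concrete to key on. The sketch below is a simplified illustration rather than a production detection: it scans a hypothetical proxy log for connections to well-known LLM API domains from hosts with no business making them. The log format, domain list, and host allowlist are all assumptions for the example.

```python
# Minimal sketch: flag proxy-log entries where internal hosts reach known
# hosted-LLM API endpoints. The CSV log layout (timestamp,src_host,dest_domain)
# and the allowlist below are illustrative assumptions, not any vendor's schema.
import csv

# Public API domains for major hosted LLMs (illustrative, not exhaustive).
LLM_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "api.anthropic.com",
}

# Hosts where LLM API traffic is expected (e.g., developer workstations).
EXPECTED_HOSTS = {"dev-workstation-01", "ml-build-server"}

def flag_unexpected_llm_traffic(log_path: str) -> list[dict]:
    """Return log rows where a non-allowlisted host contacted an LLM API."""
    findings = []
    with open(log_path, newline="") as f:
        reader = csv.DictReader(f, fieldnames=["timestamp", "src_host", "dest_domain"])
        for row in reader:
            if row["dest_domain"] in LLM_API_DOMAINS and row["src_host"] not in EXPECTED_HOSTS:
                findings.append(row)
    return findings

if __name__ == "__main__":
    for hit in flag_unexpected_llm_traffic("proxy.log"):
        print(f"{hit['timestamp']} {hit['src_host']} -> {hit['dest_domain']}")
```

A check like this is only a starting point; attackers can proxy or self-host models, but it reflects the kind of behavioral signal available when malware leans on commercial LLM APIs.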

Security experts' reactions to the latest research and events are mixed, but most agree that AI is being used to varying degrees within malware and that, over time, it will pose an increasingly difficult challenge for defenders.

“The MIT paper claiming '80% of ransomware uses AI' was likely rightfully criticized. That said, dismissing one bad research paper doesn't mean we should dismiss AI's real impact on the threat landscape,” said Andrew Storms, VP of Security at Replicated. “We aren't yet seeing AI creating fundamentally new attack methods. What we're seeing is AI making existing attacks faster and more efficient—essentially adding a team of coders to every adversarial group,” Storms added. “AI coding assistants are highly effective at taking existing code examples and patterns, adapting them, and linking them together. The same naturally holds for exploits and attack chains,” he said.

Still, many security professionals are not seeing AI-driven malware in their day-to-day defense. “There is no doubt that adversaries are looking into benefits from large language models, just as everyone else is,” added Wim Remes, principal consultant at cybersecurity consultancy Toreon.

“But at this point, based on the available evidence, I'm not seeing a major impact immediately, partially because of the limited use of malware in attacks nowadays — most adversaries focus on credentials and ‘living off the land’ techniques,” he said. “I do not expect it to become a major thing in malware behavior over the next 12 months.”

Justin Hutchens, author of the 2024 book The Language of Deception: Weaponizing Next Generation AI, agreed. “We are starting to see some evidence of experimentation in the wild, but actual use is minimal,” he said regarding recent adversarial use of AI for malware. “A lot of the recent discussions related to emerging use of AI in malware were because of a recent threat intel report released by Google called ‘Advances in Threat Actor Usage of AI Tools.’

“If you look past the fear-inspiring language, such as phrases like ‘a significant step toward more autonomous and adaptive malware’ and ‘marks a new operational phase of AI abuse,’ the technical details paint a very different picture.”

Hutchens said the picture that emerges from the technical details includes:

  • Most “AI-powered” malware examples are still prototypes; GTIG’s own samples contain disabled features, commented-out AI code, or incomplete functionality, indicating they are not yet mature threats.
  • Real-world usage is minimal: Google’s “first observed” cases indicate rarity, not widespread adoption, and many capabilities remain unproven in actual operations.
  • AI is primarily being used as a convenience tool, rather than a game-changer. Most cited attacker uses, such as phishing text, translation, and debugging, are incremental rather than transformational.
  • AI adds little beyond what attackers already achieve with traditional techniques; obfuscation and polymorphism existed long before LLMs and are often implemented more reliably, for attack purposes, by conventional means.

Hutchens did add that, in contrast to AI used within malware, the broad use of AI in cyberattacks to inform, orchestrate, and accelerate attack campaigns is a “much more real threat.” These AI-driven capabilities empower otherwise unskilled attackers to become “exponentially more capable because of the ability to lean on AI for those capabilities.”

He cited HexStrike AI as a publicly available and open-source Model Context Protocol (MCP) tool suite that enables general-use agents to be easily transformed into autonomous hacking systems. “While this will not substantially increase the sophistication of attacks, the scale and volume are already rapidly increasing. Now, inexperienced activists with an agenda, but lacking technical skills, can execute with the same capabilities as a mid-tier hacker,” he warned.

AI is also poised to shorten the window between when a new vulnerability is disclosed and when attackers are actively exploiting it. “Check Point’s threat intel team has observed dark-web conversations of cybercriminals actively using HexStrike to exploit critical Citrix vulnerabilities in the real world just days after they were disclosed,” he said, citing Check Point’s research.

“This, to me, is the real risk that we should be concerned about,” he said.

Still, the large-scale attack leveraging Anthropic’s Claude Code and the latest research put defenders on notice: AI-driven attacks are here to stay. “We need to shorten our patching times to production and utilize AI tools to help our teams patch better, faster, and with less risk of breaking things,” explained Storms. “Focus on using AI to automate patching, secure software development, and release cycles. If attackers are leveraging AI to work faster, defenders need to be doing the same,” he said.
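Storms’s point about compressing patch timelines lends itself to automation. As one hedged illustration, and not an endorsement of any particular tool, the sketch below queries the public OSV.dev vulnerability database for known advisories against a pinned list of Python dependencies; the dependency list is a placeholder, and a real pipeline would read an actual lockfile and open tickets or trigger rebuilds automatically.

```python
# Minimal sketch: check pinned Python dependencies against the public OSV.dev
# vulnerability database as one building block of an automated exposure check.
# The DEPENDENCIES list is a placeholder for a real lockfile or SBOM.
import requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

# Example pinned dependencies to audit (name, version) — illustrative only.
DEPENDENCIES = [
    ("requests", "2.19.0"),
    ("django", "3.2.0"),
]

def known_vulns(package: str, version: str) -> list[dict]:
    """Return OSV advisories recorded for this PyPI package version."""
    payload = {
        "version": version,
        "package": {"name": package, "ecosystem": "PyPI"},
    }
    resp = requests.post(OSV_QUERY_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json().get("vulns", [])

if __name__ == "__main__":
    for name, version in DEPENDENCIES:
        for vuln in known_vulns(name, version):
            print(f"{name}=={version}: {vuln['id']} {vuln.get('summary', '')}")
```

The design choice here is simply to make exposure visible on every build rather than on a quarterly audit cycle, which is the kind of compression Storms is describing.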

“The bottom line,” concluded Storms, “is that the fundamentals of good security haven't changed, but the timeline for everything has just compressed. Organizations that don't adapt their response times will find themselves increasingly vulnerable—not to science fiction AI threats, but to very real, very human attackers who are simply working more efficiently.”
