
ESMA Tells Financial Firms to Brace for Mythos-Era AI Cyberattacks

Michael Ouroumis · 3 min read

Europe's securities watchdog has put financial firms on notice that the speed and sophistication of cyberattacks are climbing, and that frontier AI models — Anthropic's Mythos in particular — are part of the reason. Speaking to reporters in Paris this week, European Securities and Markets Authority Chair Verena Ross said ESMA has been reaching out directly to supervised entities to test how prepared they are for an AI-accelerated threat landscape.

A Regulator-Led Stress Test

"We are closely watching how bringing AI models into this could increase the potential speed with which such attacks could happen," Ross said, framing the issue as a joint problem for national authorities and the EU. She added that supervisors "collectively between the national and the EU level need to up our game to try to ensure that we have the capability to properly look at what financial entities are doing in this space."

The warning landed at a delicate moment for European markets. Ross flagged that equity valuations remain "very, very high," driven heavily by large technology names, while geopolitical shocks — most recently oil-price volatility — continue to expose the financial system to abrupt repricing events. ESMA has also opened insider-trading reviews tied to recent volatile sessions, and crypto firms operating in the bloc face a 1 July deadline to secure MiCA licensing or wind down.

Why Mythos Has Supervisors Worried

The sharpest edge of the warning was the explicit reference to Anthropic's Mythos model. Anthropic has said Mythos can autonomously discover previously unknown software vulnerabilities, generate working exploits and chain them into complex cyber operations with minimal human guidance. Reporting from Fortune, Euronews and CBC over the past two weeks has detailed how former cyber officials and bank security teams view the system as a step-change in offensive capability — one that resets assumptions about how quickly an attacker can move from zero-day discovery to large-scale exploitation.

For financial regulators, that compresses the windows they have relied on to coordinate disclosure, patching and incident response. ESMA's contact campaign is pushing firms to demonstrate, in concrete terms, that their detection, segmentation and recovery playbooks can absorb a faster adversary.

Building on the Critical Third-Party Regime

ESMA's move builds on regulatory groundwork already laid. In November, the agency, alongside the European Banking Authority and EIOPA, designated 19 technology companies as critical third-party providers to the EU finance industry — the first set under a new oversight regime aimed at tech resilience. The 2026 work programme of the European Supervisory Authorities' Joint Committee scales up coordinated supervision of those providers, with cybersecurity and AI named as cross-cutting priorities.

The pressure will only intensify on 2 August, when the central enforcement provisions of the EU AI Act come into force, introducing the bloc's strict risk hierarchy and compliance obligations for high-risk systems. Ross herself will not be at ESMA to see that next chapter through — she is set to step down on 31 October — but the supervisory posture she is setting now is likely to define how European banks, asset managers and market infrastructure providers approach AI-era cyber risk for years.

Implications

The message to boards is direct: assume an attacker with Mythos-class tooling, and prove your controls hold. For AI labs selling into financial services, it is a reminder that Brussels intends to supervise model deployments alongside the firms that adopt them — not merely the products themselves.

