Palo Alto Networks said on Wednesday that frontier AI models from Anthropic and OpenAI helped its security teams uncover 75 legitimate software vulnerabilities across more than 130 of its products in roughly one month, about a seven-fold jump over the company's typical monthly haul. The company warned customers they have only a "narrow three-to-five-month window" before AI-driven exploit development becomes the baseline for attackers.
The advisory, disclosed by Lee Klarich, Chief Product and Technology Officer at Palo Alto Networks, covers 26 CVEs and represents the first time the majority of the company's internally found bugs came from AI scanning rather than human researchers. In a typical month, Palo Alto publishes fewer than five CVEs from internal review.
Which models, and what they did
Palo Alto used Anthropic's Mythos preview and Claude Opus 4.7, alongside OpenAI's GPT-5.5-Cyber, the latter accessed through OpenAI's Trusted Access for Cyber program. Anthropic has limited Mythos to a small group of defenders — including Palo Alto Networks, CrowdStrike, Amazon, Apple and JPMorgan — so they can find and patch issues before the model reaches broader use.
During internal testing, the models produced working exploits more than 70 percent of the time when given a candidate vulnerability, Klarich said, with a false-positive rate around 30 percent that varied by configuration. The standout capability, he noted, was chaining: stitching minor flaws together into high-severity exploit paths by reasoning across application logic.
"These models are much better at writing working exploits than what we had seen before," Klarich said.
He cautioned that the results were not effortless. "These models aren't magic. We spent a tremendous amount of time building an AI-scanning harness," he said. Palo Alto built dedicated tooling around the models to direct them at its codebases productively.
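Palo Alto has not published details of its harness, but the general shape Klarich describes — feed code to a model, collect candidate findings, and triage them to keep the roughly 30 percent false-positive rate manageable — can be sketched in miniature. Everything below is hypothetical: the names, the confidence threshold, and the stubbed-out model call are illustrative assumptions, not Palo Alto's tooling.

```python
# Hypothetical sketch of an AI-scanning harness loop. The model call is a
# stub returning canned findings; a real harness would call a frontier-model
# API here and then attempt to validate each finding (e.g. exploit replay).
from dataclasses import dataclass


@dataclass
class Finding:
    file: str
    description: str
    confidence: float  # model's self-reported confidence, 0..1 (assumed)


def stub_model_scan(source: str) -> list[Finding]:
    """Stand-in for a frontier-model scan of one source file."""
    findings = []
    if "strcpy(" in source:
        findings.append(Finding("demo.c", "unbounded strcpy", 0.9))
    if "rand()" in source:
        findings.append(Finding("demo.c", "weak RNG for token", 0.4))
    return findings


def triage(findings: list[Finding], threshold: float = 0.5) -> list[Finding]:
    """Drop low-confidence findings to curb the false-positive rate."""
    return [f for f in findings if f.confidence >= threshold]


source = 'strcpy(buf, user_input); token = rand();'
confirmed = triage(stub_model_scan(source))
```

In this toy run, the harness surfaces two candidate findings and the triage step keeps only the high-confidence one; the real engineering effort Klarich alludes to sits in the parts stubbed out here, such as carving large codebases into model-sized chunks and validating exploits automatically.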
The three-to-five-month clock
Underneath the headline numbers is a deadline. Klarich estimated that organizations have three to five months before adversaries either gain access to comparable frontier models or replicate the harness engineering Palo Alto invested in. After that point, he argued, AI-assisted vulnerability discovery and exploit generation move from frontier capability to standard tradecraft.
All 75 vulnerabilities have been patched, and none were observed being exploited in the wild before disclosure, according to the company.
What Palo Alto wants defenders to do
The company is urging customers to run frontier models against their own application code, extend the same scrutiny to open-source dependencies in their software supply chain, and tighten patching cycles by pairing security response more closely with product and development teams.
The broader implication is uncomfortable for the defender side of the industry: the value of frontier models in offensive security is now demonstrable in concrete CVE counts, not theoretical benchmarks. Defenders who delay integrating these systems risk facing attackers who already have. With Mythos still gated to a handful of partners and GPT-5.5-Cyber restricted to a vetted access program, the window Klarich described is, in effect, a window of unequal access — and that window is closing.