Policy

White House Issues Executive Order on AI Safety Standards

Michael Ouroumis · 2 min read

The White House has issued a new executive order establishing mandatory safety testing requirements for AI models that exceed certain capability thresholds. The order represents the most significant federal action on AI safety to date.

Key Provisions

Mandatory Safety Testing

AI developers must conduct and report results from a standardized battery of safety evaluations before deploying models that meet or exceed defined capability thresholds. These evaluations cover risk areas specified in the order.

Reporting Requirements

Companies developing frontier AI models must notify the government when beginning training runs that exceed certain compute thresholds. They must also share safety evaluation results within 30 days of completing testing.
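To make the notification trigger concrete, here is a minimal sketch of a threshold check. The order's actual compute threshold is not stated in this article, so the 1e26 FLOPs figure below is purely illustrative, and the 6·N·D training-compute approximation is a common rule of thumb, not language from the order.

```python
# Hypothetical reporting threshold, in floating-point operations.
# The real value set by the order is not given in the article.
REPORTING_THRESHOLD_FLOPS = 1e26


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training compute using the common 6 * N * D approximation,
    where N is parameter count and D is training tokens."""
    return 6.0 * n_params * n_tokens


def must_notify(n_params: float, n_tokens: float) -> bool:
    """True if a planned training run would meet or exceed the
    (assumed) government reporting threshold."""
    return estimated_training_flops(n_params, n_tokens) >= REPORTING_THRESHOLD_FLOPS
```

For example, under these assumptions a 70B-parameter model trained on 15T tokens (about 6.3e24 FLOPs) would fall below the illustrative threshold, while a 2T-parameter model trained on 10T tokens (about 1.2e26 FLOPs) would trigger notification.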

Red-Teaming Standards

The order establishes standardized red-teaming protocols that must be followed before deployment. These include both automated testing and human evaluation by independent third parties.

Industry Response

Major AI labs have generally responded positively, noting that many of the requirements align with voluntary commitments they had previously made. Some smaller companies, however, have expressed concern about the compliance burden.

Implementation Timeline

The executive order takes effect in phases:

  1. Immediate — Reporting requirements for training runs exceeding compute thresholds
  2. 90 days — Publication of detailed safety testing protocols by NIST
  3. 180 days — Full compliance with safety testing requirements
  4. 1 year — First annual review and potential updates to capability thresholds

International Coordination

The order includes provisions for coordinating with allies on AI safety standards, building on the Bletchley Declaration and subsequent international agreements. The UK AI Safety Institute's Alignment Project, which now includes OpenAI and Microsoft, represents one concrete example of this coordination in action. The goal is to prevent a race to the bottom where companies relocate to jurisdictions with weaker oversight.

What It Means

The executive order signals that AI regulation in the United States is moving from voluntary commitments to enforceable requirements. While the scope is currently limited to frontier models, the framework could be expanded as AI capabilities continue to advance. Globally, the EU AI Act takes a broader approach with its risk-based classification system, while China mandates government review for all models before public release.
