Policy

White House Issues Executive Order on AI Safety Standards

Michael Ouroumis · 2 min read

The White House has issued a new executive order establishing mandatory safety testing requirements for AI models that exceed certain capability thresholds. The order represents the most significant federal action on AI safety to date.

Key Provisions

Mandatory Safety Testing

AI developers must conduct and report results from a standardized battery of safety evaluations before deploying models that meet or exceed defined capability thresholds. These evaluations cover a set of risk areas specified in the order.

Reporting Requirements

Companies developing frontier AI models must notify the government when beginning training runs that exceed certain compute thresholds. They must also share safety evaluation results within 30 days of completing testing.

Red-Teaming Standards

The order establishes standardized red-teaming protocols that must be followed before deployment. These include both automated testing and human evaluation by independent third parties.

Industry Response

The major AI labs have generally responded positively, noting that many of the requirements align with voluntary commitments they made previously. However, some smaller companies have expressed concern about the compliance burden.

Implementation Timeline

The executive order takes effect in phases:

  1. Immediate — Reporting requirements for training runs exceeding compute thresholds
  2. 90 days — Publication of detailed safety testing protocols by NIST
  3. 180 days — Full compliance with safety testing requirements
  4. 1 year — First annual review and potential updates to capability thresholds

International Coordination

The order includes provisions for coordinating with allies on AI safety standards, building on the Bletchley Declaration and subsequent international agreements. The UK AI Safety Institute's Alignment Project, which now includes OpenAI and Microsoft, represents one concrete example of this coordination in action. The goal is to prevent a race to the bottom where companies relocate to jurisdictions with weaker oversight.

What It Means

The executive order signals that AI regulation in the United States is moving from voluntary commitments to enforceable requirements. While the scope is currently limited to frontier models, the framework could be expanded as AI capabilities continue to advance. Globally, the EU AI Act takes a broader approach with its risk-based classification system, while China mandates government review for all models before public release.
