Policy

NHTSA Escalates Tesla FSD Investigation to Engineering Analysis Covering 2.4 Million Vehicles

Michael Ouroumis · 2 min read

The National Highway Traffic Safety Administration has escalated its investigation into Tesla's Full Self-Driving software to a formal Engineering Analysis — the most serious level of federal auto safety probe — covering approximately 2.4 million vehicles.

What Triggered the Upgrade

The new Engineering Analysis (EA26002) upgrades a preliminary evaluation (PE24031) that NHTSA originally launched in October 2024 after identifying four crashes in reduced-visibility conditions, including one in which a vehicle fatally struck a pedestrian. The scope has since expanded to nine total incidents involving one fatality and one injury.

NHTSA investigators found that FSD's camera degradation detection system — the software designed to recognize when sensors cannot see properly and alert the driver — fails under common roadway conditions. According to the agency, the system "did not detect common roadway conditions that impaired camera visibility and/or provide alerts when camera performance had deteriorated until immediately before the crash occurred."

A Growing Pattern of Violations

The escalation comes amid a broader pattern of FSD-related safety concerns. NHTSA has identified 80 Tesla FSD traffic violations to date, up from 50 in October, spanning incidents that include running red lights and crossing into opposing lanes. These findings are drawn from 62 driver complaints, 14 Tesla-filed reports, and 4 media accounts.

Tesla has also faced scrutiny over delays in providing FSD traffic violation data to the agency. Earlier this year, NHTSA granted the company a five-week extension to review over 8,000 potential traffic violations.

What an Engineering Analysis Means

NHTSA typically completes an Engineering Analysis within 18 months. The process involves deeper technical testing, additional information requests, and potential real-world evaluations. Historically, an EA is the final investigative step before the agency either closes a case or pushes for a recall.

For Tesla, the stakes are significant. A recall order could force a software update, or even a suspension of FSD capabilities, across millions of vehicles, directly affecting the company's autonomous driving ambitions.

Broader Implications

The probe intensifies at a pivotal moment for Tesla's self-driving program. CEO Elon Musk recently announced plans for robotaxi service expansion and confirmed that the company is nearing a tapeout of its next-generation AI6 chip. However, regulatory pressure from NHTSA could complicate the timeline for broader autonomous deployment.

The investigation also raises wider questions for the autonomous vehicle industry about how camera-based systems handle adverse weather and visibility conditions — a challenge that competitors using lidar-based approaches have long cited as a limitation of vision-only architectures.

