Policy

OpenAI Staff Begged Altman to Call Police Before Tumbler Ridge Shooting, Lawsuits Allege

Michael Ouroumis · 2 min read

Newly filed lawsuits and a Wall Street Journal investigation are forcing OpenAI to answer one of the most damaging questions ever asked of a frontier AI lab: did the company's leadership ignore its own safety team's warnings about a user who later killed eight people?

The cases, filed Wednesday in U.S. federal court in San Francisco by attorney Jay Edelson, allege that ChatGPT's automated abuse detection system flagged 18-year-old Jesse Van Rootselaar in June 2025 after she described scenarios involving gun violence over multiple days. According to the Wall Street Journal, roughly a dozen OpenAI staff members debated the flagged conversations, with some arguing the messages indicated a credible risk of real-world violence and urging leadership to contact the Royal Canadian Mounted Police. Eight months later, on February 10, 2026, Van Rootselaar walked into Tumbler Ridge Secondary School in British Columbia and opened fire before dying by suicide.

What the lawsuits allege

The complaints accuse OpenAI of negligence and seek both unspecified damages and a court-ordered overhaul of the company's threat-escalation procedures. Plaintiffs include the families of five students killed in the attack — Zoey Benoit, Abel Mwansa Jr., Ticaria "Tiki" Lampert, Kylie Smith and Ezekiel Schofield — along with educational assistant Shannda Aviugana-Durand and 12-year-old Maya Gebala, who was critically injured. Edelson described the wave of cases as "an entire community stepping forward to hold OpenAI accountable" and, according to multiple outlets, said the decision to overrule the safety team was "pretty close to the definition of evil."

The lawsuits go further, alleging that OpenAI leaders avoided alerting authorities partly because doing so would have exposed the volume of violence-related conversations occurring on ChatGPT and complicated the company's path to a public listing. OpenAI has said the flagged conversations did not meet its internal threshold of a "credible and imminent" risk of physical harm.

Altman's apology and the political fallout

In an April 23 letter to the residents of Tumbler Ridge, OpenAI CEO Sam Altman wrote: "I am deeply sorry that we did not alert law enforcement to the account that was banned in June." British Columbia Premier David Eby called the apology "necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge."

Why this matters for the industry

The Tumbler Ridge complaints arrive as OpenAI sprints toward a potential IPO and as Anthropic, Google and others build out enterprise safety pipelines of their own. Until now, frontier labs have largely set their own thresholds for when flagged content gets escalated to police. A federal court in San Francisco may now decide whether that discretion is enough — or whether AI providers carry an affirmative duty to warn when their automated systems flag a user as a credible danger.

The broader question for every lab: what does responsible disclosure look like when the user behind a chat could be the next mass shooter?

