Newly filed lawsuits and a Wall Street Journal investigation are forcing OpenAI to answer one of the most damaging questions ever asked of a frontier AI lab: did the company's leadership ignore its own safety team's warnings about a user who later killed eight people?
The cases, filed Wednesday in U.S. federal court in San Francisco by attorney Jay Edelson, allege that ChatGPT's automated abuse-detection system flagged 18-year-old Jesse Van Rootselaar in June 2025 after she spent multiple days describing scenarios involving gun violence. According to the Wall Street Journal, roughly a dozen OpenAI staff members debated how to handle the flagged conversations; some argued the messages indicated a credible risk of real-world violence and urged leadership to contact the Royal Canadian Mounted Police. Eight months later, on February 10, 2026, Van Rootselaar walked into Tumbler Ridge Secondary School in British Columbia and opened fire, killing eight people before dying by suicide.
What the lawsuits allege
The complaints accuse OpenAI of negligence and seek both unspecified damages and a court-ordered overhaul of the company's threat-escalation procedures. Plaintiffs include the families of five students killed in the attack — Zoey Benoit, Abel Mwansa Jr., Ticaria "Tiki" Lampert, Kylie Smith and Ezekiel Schofield — along with educational assistant Shannda Aviugana-Durand and 12-year-old Maya Gebala, who was critically injured. Edelson described the wave of cases as "an entire community stepping forward to hold OpenAI accountable" and, according to multiple outlets, said the decision to overrule the safety team was "pretty close to the definition of evil."
The lawsuits go further, alleging that OpenAI leaders avoided alerting authorities partly because doing so would have exposed the volume of violence-related conversations occurring on ChatGPT and complicated the company's path to a public listing. OpenAI has said the flagged conversations did not meet its internal threshold of a "credible and imminent" risk of physical harm.
Altman's apology and the political fallout
In an April 23 letter to the residents of Tumbler Ridge, OpenAI chief executive Sam Altman wrote: "I am deeply sorry that we did not alert law enforcement to the account that was banned in June." British Columbia Premier David Eby called the apology "necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge."
Why this matters for the industry
The Tumbler Ridge complaints arrive as OpenAI sprints toward a potential IPO and as Anthropic, Google and others build out enterprise safety pipelines of their own. Until now, frontier labs have largely set their own thresholds for when flagged content gets escalated to police. A federal court in San Francisco may now decide whether that discretion is enough — or whether AI providers carry an affirmative duty to warn when their automated systems flag a user as a credible danger.
The broader question for every lab: what does responsible disclosure look like when the user behind a chat could be the next mass shooter?