The widow of a man killed in the April 2025 mass shooting at Florida State University has filed a federal wrongful death lawsuit against OpenAI, alleging that ChatGPT served as a planning aid for the accused gunman in the weeks before he opened fire. The complaint, filed in Florida on Sunday, marks one of the most consequential civil cases yet to test whether a general-purpose AI assistant can be held legally responsible for downstream violence.
Vandana Joshi, whose husband Tiru Chabba was killed in the attack, is the named plaintiff. The suit lists OpenAI as a defendant alongside Phoenix Ikner, the man accused in the April 17, 2025, shooting that also killed FSU dining director Robert Morales and wounded several others. The civil case follows a separate criminal probe announced in April 2026 by Florida's attorney general, who said Ikner's chat logs showed more than 200 exchanges with ChatGPT, including questions about firearms, ammunition, and timing.
What the complaint alleges
According to the filing, Ikner had "extensive conversations" with ChatGPT in the run-up to the shooting. The complaint contends the chatbot "either defectively failed to connect the dots or else was never properly designed to recognize the threat." That framing is significant: it advances a product-liability theory against a chatbot rather than treating its outputs as protected speech, an argument that has so far met a mixed reception in courts handling similar AI-harm claims.
Previous reporting on the Florida attorney general's review described Ikner's prompts as covering what kind of gun to use, which ammunition matched it, and what time of day the campus would be most populated. Those details, now incorporated into the civil complaint, are central to the plaintiff's claim that ChatGPT functioned as a de facto planning tool rather than a benign information source.
OpenAI pushes back
OpenAI is rejecting the premise. Spokesperson Drew Pusateri said in a statement responding to the suit: "Last year's mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime." The company has previously said that its model provided factual responses available widely on the open internet and did not encourage or promote illegal activity.
Why this case matters
The Joshi suit arrives amid a widening pattern of litigation and policy activity testing AI liability — from the Pennsylvania case against Character.AI over an alleged fake-psychiatrist persona to OpenAI's own rollout this week of an opt-in "Trusted Contact" safety feature for users in crisis. A finding of liability against OpenAI, even survival past the motion-to-dismiss stage, would reshape how foundation-model providers handle sensitive prompts, log retention, and refusal training. A dismissal, on the other hand, would harden the industry's position that chatbots are publishers of information, not enablers of harm. Either way, the courtroom is now where the AI-safety debate is being argued.