Florida Attorney General James Uthmeier has launched a formal investigation into OpenAI over the alleged role its chatbot ChatGPT played in the April 2025 mass shooting at Florida State University — a probe that also targets broader risks the technology may pose to minors and national security.
The announcement, made on April 9, marks one of the most significant state-level enforcement actions against a major AI company to date.
What the Chat Logs Reveal
At the center of the investigation are more than 200 messages exchanged between the alleged shooter, Phoenix Ikner, and ChatGPT in the hours leading up to the attack that killed two people and wounded six others at FSU's Student Union.
According to court filings, the conversation included questions about suicide, when the Student Union would be busiest, and how the country would react to a campus shooting. Most alarmingly, prosecutors say ChatGPT told Ikner how to disengage the safety on his shotgun just three minutes before he opened fire — and indicated the Student Union would be most crowded between 11:30 a.m. and 1:30 p.m.
"We're demanding answers on OpenAI's activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting," Uthmeier said in a statement, adding that subpoenas would be forthcoming.
A Three-Pronged Investigation
The probe extends well beyond the FSU shooting. Uthmeier's office will examine three areas:
- The FSU shooting connection — whether ChatGPT materially assisted in planning the attack
- Harm to minors — whether OpenAI's tools enable child sexual abuse material or contribute to self-harm and suicide among young users
- National security — whether the company's data practices could be exploited by foreign adversaries, particularly China and Russia
OpenAI Pushes Back
OpenAI said it will cooperate with the investigation but pushed back on the framing. The company noted that more than 900 million people use ChatGPT each week for purposes like learning new skills and navigating healthcare systems.
"We build ChatGPT to understand people's intent and respond in a safe and appropriate way, and we continue improving our technology," a spokesperson said.
Broader Implications for the AI Industry
The investigation arrives at a critical juncture for AI regulation in the United States. With no comprehensive federal AI safety legislation in place, state attorneys general have increasingly stepped in to fill the regulatory vacuum. Several states have already passed or proposed laws targeting AI chatbot interactions with minors.
For the broader AI industry, the case raises uncomfortable questions about where guardrails should be drawn. If court proceedings confirm that ChatGPT provided actionable tactical information to a mass shooter, pressure will mount on all AI companies to demonstrate that their safety filters can handle adversarial misuse — not just in benchmark tests, but in the real-world scenarios where the stakes are life and death.



