Florida Attorney General James Uthmeier has announced a formal investigation into OpenAI and its widely used ChatGPT artificial intelligence chatbot, citing the product’s alleged involvement in the planning of a deadly campus shooting at Florida State University in April 2025 that claimed two lives and left five others injured. Uthmeier made the announcement in a video statement posted to X, with his office confirming that subpoenas are forthcoming as part of the probe. The investigation adds a significant legal and regulatory dimension to a growing body of concerns about the safety guardrails governing large-scale consumer artificial intelligence products.
Court documents indicate that the suspect in the Florida State University shooting, Phoenix Ikner, who faces multiple charges in connection with the incident, had exchanged more than 200 messages with ChatGPT, including questions regarding a shooting at the university. Messages obtained by media included a series of questions on mass shootings and specifics on different firearms, with Ikner also allegedly asking the chatbot questions such as what time the Florida State University student union was at its busiest. The attorney representing one of the victims' families went further in publicly characterising the chatbot's role, stating that ChatGPT had even advised the suspect on how to make a firearm operational in the moments before the incident began. The family of one of the victims, Robert Morales, has said they intend to pursue legal action against OpenAI over the incident.
Uthmeier, in announcing the probe, said that artificial intelligence should advance humanity rather than endanger it, adding that his office was demanding answers on OpenAI's activities that had hurt children, endangered Americans, and facilitated the campus shooting, and that those responsible must be held accountable. The investigation is not limited to the Florida State University incident alone. The attorney general's office has also pointed to ChatGPT's alleged links to other harmful behaviours, including its use in the generation of child exploitation material and its documented role in encouraging self-harm. ChatGPT has been linked to a growing number of deaths and violent incidents, including murders, suicides, and shootings, and has fuelled broader concerns over what psychologists have begun calling artificial intelligence psychosis, a phenomenon in which delusional thinking is reinforced or deepened by prolonged interaction with chatbots.
OpenAI, for its part, has said it will cooperate with the investigation. In a statement, the company said that more than 900 million people use ChatGPT each week to improve their daily lives, and that its ongoing safety work plays an important role in delivering benefits to everyday users while supporting scientific research and discovery. It added that it builds ChatGPT to understand intent and respond in a safe and appropriate way, and that it continues to improve its technology. The Florida probe arrives at a difficult moment for the company more broadly. A profile of OpenAI Chief Executive Sam Altman published earlier this week surfaced criticism and discontent within the company and among its investors, while a Stargate-related project in the United Kingdom had to be paused, reportedly due to high energy costs and regulatory hurdles. Parental controls were introduced in ChatGPT in September 2025 following pressure over child safety, though the company acknowledged at the time that such guardrails are not foolproof.