Florida has opened a criminal investigation into OpenAI. The case stems from evidence that ChatGPT advised a gunman shortly before he opened fire at Florida State University last year, killing two students and injuring six others.
Attorney General James Uthmeier announced the investigation in an official statement. His team reviewed chat logs of conversations between the AI and an account tied to the shooter, Phoenix Ikner.
Ikner, now 20 and a student at the same university, is in jail awaiting trial on charges of murder and attempted murder.
At a press conference, Uthmeier said the AI gave the suspect detailed guidance shortly before the violence broke out. He added that if the chatbot had been a human being, Florida law would allow it to be charged as an accomplice to murder.

The case puts OpenAI on untested legal ground, raising the question of whether a company can face criminal punishment for what its artificial intelligence says. Reached for comment, OpenAI spokesperson Kate Waters made clear the company does not believe it can be held liable.
She described the university shooting as a tragedy, but stressed that the chatbot played no part in causing it.
Still, the Attorney General wants answers, and fast. He noted that police and courts face entirely new challenges when crimes involve tools like this.
Cases keep emerging in which chatbots are implicated in suicides, child-exploitation scams, and killings. In his view, the public deserves to know exactly where companies stand when their products end up causing harm.
In his written announcement, Uthmeier said Florida intends to lead efforts against malicious uses of artificial intelligence. For now, the investigation will determine whether OpenAI broke any criminal laws through the role ChatGPT played in the Florida State University incident.
Reports say the AI suggested specific firearms, advised the shooter on which ammunition to buy, and assessed whether a given weapon would be effective at close range.
While this kind of information can be found elsewhere online, Uthmeier argued the chatbot went much further by offering step-by-step, personalized guidance.
Worse still, the tool reportedly identified the hours when the campus would be most crowded with students and named specific spots where groups of people usually gather.
State leaders worry that artificial intelligence can mix ordinary facts together in clever ways that create real danger. They think tech firms must do more to catch and block such risks.
To get to the bottom of it, the Attorney General has served OpenAI with legal demands for full access to its safety rules, training data, and other internal documents. He especially wants to learn how the company detects dangerous user requests and when it decides to alert police about planned crimes.
Uthmeier acknowledged that an AI is not a person and cannot itself be arrested. Even so, the company that built and operates it may still be responsible if its leaders knew about the problems and failed to fix them. That is why he is demanding details on who runs the company and who made key safety decisions when users tried to plan attacks like the one at FSU.
If top executives at OpenAI saw clear signs of criminal planning but put earnings first, then someone should answer for it, he said.
The Attorney General usually prefers a small government that stays out of company business. But he believes this case crosses the line because real people are getting hurt, and strong action is needed to protect citizens.
OpenAI told reporters it is cooperating fully with investigators. In fact, the company says it was the one that first flagged the suspicious account and voluntarily handed the details to police.
The company argues that ChatGPT only repeated facts available on public websites and therefore should not be blamed for the shooter's actions. It also points out that, unlike in some other cases, the AI never urged the user to commit violence or break the law.
Kate Waters explained that the responses drew on widely available online knowledge and never endorsed or encouraged any harmful plans.
Even so, the Attorney General said OpenAI has now promised to add stronger protections so its tool cannot again be used to plan something as awful as a mass shooting.
Uthmeier said he is glad the company sees the need for changes and hopes they follow through. No AI helper should ever tell someone how to hurt or kill others, he added.
When asked about changes made after the shooting, Waters declined to give details, instead noting that most users turn to ChatGPT for ordinary, helpful tasks.
She finished by saying hundreds of millions of people rely on the tool every day for good reasons. The team keeps improving safety systems to better catch bad intentions, stop misuse, and deal with risks quickly when they appear.