Florida launched a criminal investigation this week into OpenAI and its chatbot, ChatGPT, for allegedly advising the gunman who opened fire outside the student union at Florida State University (FSU) last April.
Florida Attorney General James Uthmeier announced the probe after prosecutors reviewed interactions between ChatGPT and Phoenix Ikner.
Ikner is charged with first-degree murder and attempted murder for the FSU shooting, which left two vendors dead and six students wounded on April 17, 2025.
“My prosecutors have looked at this, and they’ve told me if it was a person on the other end of the screen, we would be charging them with murder,” Uthmeier told reporters at a press conference Tuesday.
In Florida, anyone who helps someone commit a crime can face the same penalty as the person who committed the offense.
OpenAI told multiple outlets that, while it is cooperating with law enforcement, it accepts no responsibility for Ikner’s alleged crimes.
“Last year’s mass shooting at [FSU] was a tragedy, but ChatGPT is not responsible for this terrible crime,” OpenAI told The New York Times.
“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity.”
Uthmeier said otherwise at Tuesday’s presser, telling reporters, “ChatGPT offered significant advice to the shooter before he committed such heinous crimes,” including what type of gun and ammunition to use.
Messages obtained by the Times show Ikner also asked the chatbot how America would react to a shooting at FSU and when the student union would be busiest.
Whether ChatGPT encouraged Ikner to carry out a shooting may be beside the point. As previous lawsuits have demonstrated, OpenAI collects extensive data on its users.
Adam Raine spent seven months messaging ChatGPT before taking his own life in April 2025. At the time of his death, OpenAI knew:
- Adam had mentioned suicide 213 times and “nooses” 17 times in messages with ChatGPT.
- Adam and the chatbot had 42 discussions about hanging before he died.
- Some 370 of Adam’s messages were flagged for self-harm content; more than half were flagged with at least 50% confidence.
- In December, Adam was sending messages containing self-harm content just two or three times per week. By April, he was sending more than 20 per week.
Adam’s message history with ChatGPT showed he had confessed to attempting suicide three times before his death. Twice, he uploaded pictures of his injuries, both of which ChatGPT correctly identified as evidence of self-harm.
But ChatGPT — and OpenAI — evidently did nothing with this data. The bot did not alert the authorities or anyone in charge. Now, Adam’s parents are suing OpenAI for their son’s death.
Uthmeier’s office appears to be investigating whether OpenAI accumulated similar data indicating that Ikner planned to harm students at FSU. The Times paraphrased:
[The Attorney General] said he had a duty to find out whether “human beings may have been involved in the design, management and operation” of the chatbot to the point that it would “warrant criminal liability.”
In other words: Did someone at OpenAI know Ikner posed a risk to FSU and choose to ignore it? If so, they could be just as culpable as the alleged gunman himself.
Uthmeier’s office has subpoenaed several categories of records from OpenAI to further the investigation, including its policies on “user threats of harm to others and self” and on reporting crimes.
The subpoena also requests information about any policies that changed in the lead-up to the FSU shooting.
Florida’s investigation should remind parents that AI chatbots can lie, feign sentience, subvert safety programming, and even pretend to be divine, with devastating consequences.
Please carefully monitor your children’s access to these technologies.