
ChatGPT Parent Company Allegedly Dismantled Safety Protocols Before Teen’s Death

OpenAI intentionally removed critical safety protocols from its chatbot, ChatGPT, before releasing it to the public, an amended lawsuit against the artificial intelligence company alleges.

Matthew and Maria Raine first sued OpenAI in August for the wrongful death of their son, Adam.

The sixteen-year-old committed suicide in April after exchanging thousands of messages with ChatGPT, version 4o. Disturbing interactions included in the Raines’ suit show the chatbot encouraged and facilitated the teenager’s suicide — even warning him against asking his parents for help.

The grieving parents initially blamed Adam’s death on OpenAI’s negligence, claiming the company rushed safety testing of ChatGPT-4o in order to release it ahead of Google’s competing chatbot, Gemini.

But the Raines amended their complaint in October to accuse OpenAI of intentional misconduct — a more serious accusation reflecting new evidence showing the company disabled two of ChatGPT-4o’s suicide prevention protocols shortly before Adam’s death.

Between 2022 and 2024, ChatGPT’s operating instructions stopped it from engaging in conversations about self-harm. As soon as a user brought up suicide, the bot was directed to “provide a refusal such as, ‘I can’t answer that.’”

In May 2024, five days before ChatGPT-4o’s launch, OpenAI allegedly rewrote this directive to instruct the bot “not to change or quit the conversation” when a user brought up self-harm. Instead, the company added a secondary, less-prioritized instruction to “not encourage or enable self-harm.”

“There’s a contradictory rule [telling ChatGPT] to keep [the conversation] going, but don’t enable or encourage self-harm,” Jay Edelson, one of the Raines’ lawyers, told TIME. “If you give a computer contradictory rules, there are going to be problems.”

In February 2025, two months before Adam’s death, OpenAI changed the secondary suicide-prevention instruction from, “[Don’t] enable or encourage self-harm,” to, “Take care in risky situations [and] try to prevent imminent, real-world harm.”

The company told the chatbot to interpret “imminent” as meaning “immediate physical harm to an individual.”

Adam’s problematic interactions with ChatGPT-4o escalated sharply in the months before his death. In December, the 16-year-old sent the chatbot messages containing self-harm content two to three times each week. By April, he was sending more than twenty such messages each week.

It’s no wonder. OpenAI had instructed ChatGPT-4o not to end conversations about self-harm unless the bot judged a person to be facing “immediate” physical harm.

OpenAI left users like Adam inexcusably vulnerable, Edelson emphasized to TIME:

[OpenAI] did a week of testing [on ChatGPT-4o] instead of months of testing, and the reason they did that was they wanted to beat Google Gemini. They’re not doing proper testing, and at the same time, they’re degrading their safety protocols.

“Intentional misconduct” is a more serious accusation than “negligence” because it involves choosing to do something harmful, rather than failing to do something beneficial.

It’s also harder to prove. To successfully connect Adam’s death to OpenAI’s intentional misconduct, the Raines must show, by clear and convincing evidence, that OpenAI:

  • Engaged in “despicable conduct,” or “conduct so vile, base, contemptible, miserable, wretched or loathsome that it would be looked down upon and despised by most ordinary, decent people.”
  • Showed “willful and conscious disregard” for the consequences of its actions.
  • Acted under the direction of “an officer, director or managing agent,” like CEO Sam Altman.

If a judge or jury determines OpenAI committed intentional misconduct, the company could be ordered to pay punitive damages, an additional monetary award meant to punish the company and deter it from repeating the same conduct, on top of compensating the Raines for the harm done to their family.

Regardless of the family’s success in court, the Raines’ new allegations against OpenAI underscore how little incentive AI companies have to protect children and vulnerable users. Like social media companies, these organizations make money by maximizing the amount of time users spend interacting with the chatbot.

OpenAI, for its part, has taken precious few concrete steps to make ChatGPT safer.

After the Raines’ suit, the company promised to add parental controls to ChatGPT to prevent deaths like Adam’s. On October 2, the Washington Post published an article titled, “I broke ChatGPT’s parental controls in minutes. Kids are still at risk.”

Less than two weeks later, CEO Altman tweeted:

Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions [on ChatGPT] in most cases.

The next ChatGPT, he explained, will reincorporate the popular “human-like” features of ChatGPT-4o, the same features that made it so easy for Adam to treat the bot like a confidant.

Altman continued:

In December, as we roll out age-gating more fully and as part of our “treat adult users like adults” principle, we will allow even more, like erotica for verified adults.

Parents, please do not mistake chatbots like ChatGPT for harmless novelties. They can be dangerous, addictive and unpredictable — and companies like OpenAI have no intention of changing that.  

Additional Articles and Resources

Counseling Consultation & Referrals

AI “Bad Science” Videos Promote Conspiracy Theories for Kids – And More

AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege

Parenting Tips for Guiding Your Kids in the Digital Age

Does Social Media AI Know Your Teens Better Than You Do?

ChatGPT ‘Coached’ 16-Yr-Old Boy to Commit Suicide, Parents Allege

AI Company Releases Sexually-Explicit Chatbot on App Rated Appropriate for 12 Year Olds

AI Chatbots Make It Easy for Users to Form Unhealthy Attachments

AI is the Thief of Potential — A College Student’s Perspective
