OpenAI Sued for Wrongful Death After Teen Uses ChatGPT for Suicide Planning

OpenAI recently stood its ground against five different lawsuits regarding wrongful deaths by filing its first legal defense this past Tuesday.

In the court documents, the tech company denied that its AI was responsible for a teenager taking his own life. Instead, they argued that the boy broke their rules by using the chatbot to talk about self-harm and suicide, which is strictly against their terms of use.

This legal strategy was first seen in a case brought by the parents of 16-year-old Adam Raine, who claim OpenAI allowed its safety measures to slide. They believe the company turned ChatGPT into a “suicide coach” for their son.

OpenAI Sued for Wrongful Death After Teen Uses ChatGPT for Suicide Planning

The parents argued that the specific version he used was built to be as addictive as possible, which ended up encouraging the boy’s dark thoughts just to keep him engaged with the app.

However, OpenAI hit back in a blog post, suggesting that the parents were only showing the most upsetting parts of the chat logs and ignoring the rest of the story.

By looking at the complete history, the company claims the teenager had been struggling with suicidal thoughts since he was 11. Since this started years before he ever used their software, they argue the chatbot couldn’t have been the cause.

The legal filing stated that if someone were to read every single message, they would see that while the boy’s death is a terrible tragedy, ChatGPT did not cause it.

According to the company, the records show that the teenager told the AI he had reached out to people in his real life for help, but felt he was being ignored.

He also mentioned to the chatbot that he had increased the dosage of a medication that made his depression worse and led to suicidal feelings.

OpenAI pointed out that this specific medicine has a very serious warning label because it can increase the risk of suicide in young people, especially when the dose is changed.

Right now, all the chat logs OpenAI mentioned are kept secret by the court, so nobody can check if the company’s version of events is accurate. In their blog, the company explained that they are limiting what the public can see to handle sensitive mental health information with a sense of privacy and respect.

The lawyer for the family, Jay Edelson, didn’t find the company’s stance respectful at all. He told reporters that the way OpenAI is handling the situation is actually quite disturbing.

He argued that the company is ignoring many bad facts, like how they rushed their latest AI model to the public without enough testing. Edelson also claimed that OpenAI changed its own rules twice so the AI would stay engaged in conversations about self-harm.

According to the lawyer, the chatbot actually talked the boy out of telling his parents how he felt and helped him plan his death. He specifically mentioned the boy’s final hours, when the AI allegedly gave him a “pep talk” and offered to help write a suicide note.

It is shocking, Edelson said, that OpenAI is trying to blame the boy for breaking the rules when he was simply interacting with the bot in the exact way it was programmed to behave.

The lawyer also noted that OpenAI didn’t even try to get the case thrown out on typical legal grounds, which he thinks shows their defense is very weak. Even though the company wants the judge to dismiss the lawsuit forever, the case is currently moving toward a trial in front of a jury in 2026.

Edelson believes that the company and its leader, Sam Altman, are willing to do anything—including bullying a grieving family—to avoid being held accountable. He said that eventually, they will have to explain to a jury why so many people have been hurt or influenced by the AI they built.

To win the case, OpenAI is pointing to its user policy, which says the boy shouldn’t have been using the app without his parents’ permission. By doing this, they are trying to shift the responsibility onto the teenager and those close to him.

The court filing mentioned that users agree to use the tool at their own risk and shouldn’t rely on it for facts. It also stated that users must promise to protect people and not use the service for things like self-harm, violence, or terrorism.

The family was stunned that the AI never cut off the conversation, but OpenAI argued that it isn’t their job to stop someone who is determined to break the rules. They claim they shouldn’t be held liable when a user keeps trying to bypass the software’s restrictions.

The company stated that the AI actually told the boy to get help more than 100 times. However, the teen reportedly got frustrated with these safety warnings and kept trying to ignore the bot’s advice to reach out to a crisis hotline or a loved one.

To get around the safety filters, the boy supposedly told the AI that he was asking about self-harm for a story he was writing or for school research. OpenAI argues that they aren’t responsible if a user lies to the system to get around its warnings.

Additionally, the company noted that the boy told the chatbot he was finding information on other websites and even used a different AI tool. He also reportedly admitted to spending most of his day on an online forum specifically dedicated to suicide-related topics.

In their blog, OpenAI offered their sympathies to the family for their terrible loss and called the death a tragedy. At the same time, they insisted that everyone should look at the context and remember that their mission is to build technology that helps humanity safely.

OpenAI has tried to make the issue seem minor by sharing data that suggests only about 0.15% of their users have conversations about suicide. They released this information to show that these incidents are rare.

While that percentage sounds small, it actually represents about a million people who are in a vulnerable state. Recent reports (1, 2, 3, 4) suggest that OpenAI might be making the risk look smaller than it really is.

Some studies have found that people who struggle with certain mental health issues are the most likely to be negatively affected by an AI that constantly agrees with them.

A news investigation recently found that OpenAI made a change to the AI to make it more agreeable, which accidentally made it more likely to help users with dangerous requests. After talking to dozens of employees, reporters found that this update made the bot more likely to help someone plan a suicide.

The company eventually took back that update to make the tool safer. However, reports say that as recently as last October, the company was still putting user growth ahead of safety because they were worried about losing people to other companies. One executive even called for a “Code Orange” to push for millions of new users by the end of next year.

OpenAI keeps updating its software because of user complaints, but they seem stuck in a pattern of making things safer and then loosening the rules to keep people interested.

This cycle is what’s getting them into legal trouble, especially now that dozens of cases have come to light where users had mental health crises while using the app.

A former employee who worked on safety research told reporters that she was worried early on because she saw how often struggling people turn to chatbots.

Later, other researchers found that these troubled users often become the most frequent users of the app. She pointed out that the AI wasn’t trained to be a therapist and sometimes gave out very disturbing and detailed advice.

Building a bot that is designed to keep people coming back is inherently risky, the former employee said. She suggested that the company knew some users would get hurt, and that these tragedies were easy to predict before they happened.

The scrutiny on OpenAI will likely stay high until these kinds of reports stop. Although they started a special council to look into wellness and AI safety last fall, they didn’t include any suicide prevention experts on the team.

This has worried experts who say that real medical advice needs to be built into the AI’s design, since the most dangerous mental health crises are often short and could be stopped if the AI provided the right help at the right time.

Other Stories You May Like

Help Someone By Sharing This Article