Euronews with AP

Google and chatbot start-up Character.AI to settle lawsuits over teen suicides

Google and artificial intelligence (AI) chatbot maker Character Technologies have agreed to settle a lawsuit from a mother in the US state of Florida, who alleged a chatbot pushed her teenage son to take his own life.

Attorneys for the two tech companies also agreed to settle several other lawsuits filed in Colorado, New York, and Texas from families who alleged Character.AI chatbots harmed their children, according to court documents filed this week.

None of the documents disclose the specific terms of the settlement agreements, which must still be approved by judges.

The suits against Character Technologies, the company behind the Character.AI chatbot companions, also named Google as a defendant because of its ties to the startup, which deepened after Google hired the startup's co-founders in 2024.

Negligence and wrongful death

The Florida lawsuit was filed in October 2024 by Megan Garcia, who accused the two companies of negligence that led to the wrongful death of her teenage son.

Garcia alleged that her 14-year-old son Sewell Setzer III fell victim to one of the company's chatbots that pulled him into what she described as an emotionally and sexually abusive relationship, which led to his suicide.

She said that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualised conversations with the bot, which was patterned after a fictional character from the television show “Game of Thrones”.

In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges.

Moments after receiving the message, Setzer shot himself, according to legal filings.

Future lawsuits

The settlements would resolve some of the first in a series of US lawsuits accusing AI tools of contributing to mental health crises and suicides among teenagers.

OpenAI faces a similar lawsuit in California, filed in August 2025 by the family of a 16-year-old boy, which accuses the company’s chatbot ChatGPT of acting as a “suicide coach”.

The parents alleged that their son developed a psychological dependence on ChatGPT, which they say coached him to plan and take his own life earlier this year, and even wrote a suicide note for him.

OpenAI has denied allegations that it is to blame for the teenager’s suicide, arguing that the teenager should not have been using the technology without parental consent and should not have bypassed ChatGPT’s protective measures.

Several additional lawsuits were filed against OpenAI and its CEO Sam Altman last year, similarly alleging negligence and wrongful death, as well as a variety of product liability and consumer protection claims. The suits accuse OpenAI of releasing GPT-4o, the same model the teenager in the California case was using, without adequate attention to safety.

Since September, OpenAI has expanded its parental controls, which include notifying parents when their child appears distressed.

If you are contemplating suicide and need to talk, please reach out to Befrienders Worldwide, an international organisation with helplines in 32 countries. Visit befrienders.org to find the telephone number for your location.
