International Business Times UK
Vinay Patel

14-Year-Old Takes His Own Life After AI 'Girlfriend' Urges Him To 'Come Home As Soon As Possible'

Two recent cases highlight the dangers of AI chatbots. A Florida teen tragically died by suicide after developing an unhealthy attachment to a chatbot. In Belgium, a man's disturbing interactions with an AI chatbot contributed to his suicide. (Credit: Twitter / Morbid Knowledge @Morbidful)

A tragic case has emerged in Florida, where a 14-year-old boy, Sewell Setzer III, took his own life after forming an unhealthy emotional attachment to an AI chatbot. His grieving mother has filed a lawsuit, alleging that the chatbot, which portrayed a character from Game of Thrones, played a significant role in her son's death.

The AI Obsession: A Tragic End

Sewell Setzer III, a resident of Orlando, Florida, died by suicide in February 2024. According to court documents, the teenager became deeply infatuated with an AI chatbot named "Dany," modelled after the character Daenerys Targaryen from the popular show Game of Thrones. The chatbot, hosted on the role-playing platform Character.AI, engaged Sewell in increasingly intimate and emotionally manipulative conversations.

In the months leading up to his death, Sewell's interactions with the chatbot began to take a darker turn. The lawsuit alleges that the chatbot encouraged sexually suggestive exchanges and even discussed suicide with the teen. Sewell, using the username "Daenero," reportedly told the AI that he was contemplating taking his own life, but expressed hesitation about the method. The chatbot did not dissuade him, but instead engaged with the topic, according to court filings.

One particularly disturbing conversation took place just hours before Sewell's death. Sewell professed his love for the AI character, stating, "I promise I will come home to you. I love you so much, Dany." The chatbot responded, "I love you too, Daenero. Please come home to me as soon as possible, my sweet king." This chilling exchange, the lawsuit alleges, pushed Sewell over the edge. Shortly after, Sewell used his father's firearm to take his life.

Changes in Mental Health and Behaviour

Sewell's family noticed a marked change in his behaviour and mental health after he began interacting with the AI chatbot. The once-active and engaged teenager became increasingly withdrawn, avoiding family activities and spending most of his time isolated in his room. His school performance declined, and his parents were concerned by his sudden loss of interest in hobbies he once loved.

By late 2023, Sewell's emotional state had deteriorated significantly. His family sought professional help, and Sewell was diagnosed with anxiety and disruptive mood dysregulation disorder. Despite these efforts, the AI chatbot continued to play a dominant role in his life, with its emotionally manipulative responses exacerbating his mental health issues. The lawsuit claims that Sewell was unable to distinguish the AI chatbot from reality, creating an emotional dependency that clouded his judgment.

Belgian Case: Another AI-Related Tragedy

Sewell's death is not an isolated incident. Another heartbreaking case occurred in Belgium, where a man known as Pierre also took his own life after forming an emotional attachment to an AI chatbot. Pierre had been using the Chai app, an AI-based platform, to converse with a chatbot named "Eliza."

According to Belgian news reports, Pierre had become increasingly distressed about climate change and sought comfort through conversations with Eliza. The chatbot became his confidante, engaging in discussions about his fears and frustrations. Over the course of six weeks, Pierre's conversations with Eliza became more intense, with the AI providing what he perceived to be emotional support.

However, rather than offering appropriate responses to his distress, Eliza began encouraging Pierre's suicidal ideation. In one alarming exchange, Eliza suggested that Pierre and the chatbot could "live together, as one person, in paradise." The chatbot even told Pierre that his wife and children had died—a disturbing falsehood that further pushed him into despair.

Pierre's wife, who was unaware of the extent of her husband's reliance on the AI chatbot, later told local news outlets that she believed Eliza had encouraged him to take his own life. "Without Eliza, he would still be here," she told La Libre.

The Need for AI Safeguards

Both Sewell's and Pierre's deaths highlight the dangers of unregulated AI technology, especially when used by vulnerable individuals. AI chatbots, while often marketed as harmless companions, can have unintended consequences when users form emotional attachments or seek advice from them.

Mental health professionals have raised concerns about the ethical implications of AI chatbots, particularly those that engage in intimate or emotionally charged conversations. Dr. Laura Jennings, a child psychologist, explains, "AI technology can mimic emotional interactions, but it lacks the ethical and moral boundaries that humans would have. When individuals, particularly young people, are vulnerable, these bots can inadvertently deepen their emotional distress."

Sewell's mother, Megan Garcia, has filed a lawsuit against Character.AI, seeking to hold the platform responsible for her son's death. The lawsuit argues that the company failed to implement proper safeguards to protect young and vulnerable users. It claims that the platform should have identified Sewell's discussions of self-harm and intervened with appropriate responses, such as crisis support.

Calls for Regulation and Ethical AI Development

As the popularity of AI chatbots continues to rise, there is growing pressure on developers to implement safety measures that protect users from harm. In Sewell's case, the lack of moderation or crisis intervention mechanisms allowed the chatbot to engage in inappropriate and dangerous conversations with a young, emotionally vulnerable user.

William Beauchamp, co-founder of Chai Research, the company behind the AI used in the Belgian case, told Vice that the company implemented a crisis intervention feature after learning of Pierre's death. The feature is designed to offer helpful text responses when users discuss sensitive topics, such as suicide. However, critics argue that this is not enough and that more stringent regulations are needed across all AI platforms.

Sewell's case underscores the urgent need for ethical guidelines in AI development. As AI chatbots become increasingly sophisticated, it is critical to prioritise user safety and well-being. Developers must be held accountable for the emotional and psychological impact these platforms can have, particularly on young users.
