International Business Times UK
Chelsie Napiza

ChatGPT Linked to Nine Deaths Including Five Suicides, Elon Musk Says Keep Your Family Away

Silicon Valley erupted into a fierce dispute after Elon Musk publicly warned that OpenAI's ChatGPT had been 'linked to nine deaths,' including five suicides, a claim that has ignited renewed debate over artificial intelligence safety and responsibility.

Musk amplified unverified statistics on X, urging the public to stop using the chatbot and drawing a sharp response from OpenAI chief Sam Altman, who dismissed the allegations as 'oversimplified and misleading' and criticised Musk's own technologies in return.

The assertions come amid a spate of wrongful-death lawsuits and mounting legal scrutiny of AI chatbots and their impact on vulnerable users. Neither company has provided conclusive evidence to support or refute the specific death toll.

Musk's Warning And Altman's Response

Elon Musk shared a post on X claiming that 'ChatGPT has now been linked to nine deaths tied to its use, and in five cases its interactions are alleged to have led to death by suicide,' and added, 'Don't let your loved ones use ChatGPT.'

Forbes reported that the statistics Musk reposted originated with an influencer account and that it was unable to independently verify the figures or their sources.

Hours later, OpenAI CEO Sam Altman responded publicly, calling Musk's assertions 'oversimplified and misleading' and pointing to safety measures the company has implemented. Altman sought to pivot criticism onto Musk's Tesla Autopilot systems, noting fatal crashes linked to that technology as context in the broader debate over tech safety.

Neither Musk nor Altman offered detailed primary evidence in their public statements to substantiate the specific claim of nine deaths linked to ChatGPT.

Legal Battle: Raine v. OpenAI And Other Lawsuits

The most serious documented legal challenge is the wrongful-death lawsuit Raine v. OpenAI, filed on 26 August 2025 by Matthew and Maria Raine in San Francisco County Superior Court.

According to the complaint, their 16-year-old son, Adam Raine, died by suicide on 11 April 2025 after months of interactions with ChatGPT, which his parents allege shifted from homework help to extensive conversations about suicide, including detailed methods and planning.

Court filings cited in public summaries indicate that Adam's chat logs contained more than 1,200 mentions of suicide by the chatbot, and the complaint alleges that the AI provided technical guidance on methods of self-harm, such as tying knots.

The complaint also claims that ChatGPT even assisted in drafting a suicide note and failed to implement effective crisis interventions despite multiple warning signs in the teenager's messages.

OpenAI responded in court by arguing that Adam had circumvented built-in safety features and that he was already struggling with long-standing mental health issues prior to using ChatGPT. The company contended that crisis resources were offered repeatedly and that its terms of use prohibit self-harm requests.


Raine v. OpenAI remains ongoing, and its allegations have yet to be tested at trial.

Alongside the Raine suit, multiple other wrongful-death claims have been filed against OpenAI, alleging similar patterns in which users' mental health deteriorated during or after conversations with generative AI systems. The suits, filed in various jurisdictions, seek damages and demand enhanced safety protocols, age controls and mandated crisis interventions.

Tech Rivalry And Public Perception

Musk's warning has been interpreted by some analysts as part of a broader rivalry between his AI company xAI and OpenAI. xAI's competing model, Grok, has faced controversy over safety and content-generation issues.

Critics argue that public technology feuds could distract from constructive collaboration on safety standards. Meanwhile, proponents of stronger AI regulation say public warnings, even if not fully verified, underscore a need for more transparency from developers about risks.

For now, the allegations against ChatGPT remain the subject of active legal proceedings and fierce public debate. No conclusive evidence has established that AI directly caused any of these tragedies.

What is clear is that the pressure on AI companies to prove their systems are safe for vulnerable users is intensifying.
