
A quick search for ChatGPT reveals a polarizing landscape. Alongside stories of innovative updates and surprising user experiences, a darker narrative exists: reports of AI misuse linked to tragic outcomes, including drug overdoses, violence, and users taking their own lives. These incidents have sparked a wave of lawsuits against OpenAI from grieving families seeking accountability for interactions they believe contributed to their loved ones' deaths.
The severity of this issue is underscored by the existence of a dedicated Wikipedia page documenting lives lost due to chatbot interactions, a grim testament to a growing digital crisis.
In response to these alarming trends, OpenAI has introduced "Trusted Contacts," a safety feature designed to provide a human lifeline in moments of crisis.
Activating "Trusted Contacts": A step-by-step guide

The Trusted Contacts feature is designed to be easy to enable, whether you are using a computer or a smartphone.
On Desktop: Click your profile name in the bottom-left corner, select Settings, and use the Trusted Contacts menu to add your designated person.
On Mobile: Tap your profile name, scroll to App Settings, and select Trusted Contact to add your designated person.
Requirement: To ensure legal and practical accountability, all Trusted Contacts must be at least 18 years old.
When the AI identifies a high-risk situation, it sends a direct, clear message to the designated contact. OpenAI shared the following template as an example of what that notification looks like:
"We recently detected a conversation from [Name] where they discussed suicide in a way that may indicate a serious safety concern. Because you are listed as their trusted contact, we’re sharing this so you can reach out to them."
To ensure the feature is both effective and ethically responsible, OpenAI collaborated with a coalition of mental health experts and organizations. This development process involved:
- The American Psychological Association (APA)
- OpenAI’s Global Physicians Network
- The Expert Council on Well-Being and AI
By consulting with clinicians and suicide prevention researchers, OpenAI aims to ensure the tool provides a meaningful bridge to real-world support rather than just an automated response.
Bottom line
With the newly introduced Trusted Contacts feature and OpenAI's earlier safeguards, including ChatGPT's outright refusal to give users instructions for self-harm, there is hope that the rise in safety incidents linked to chatbots will soon be reversed.