OpenAI announced on Thursday a new safety feature called Trusted Contact, designed to alert a designated friend or family member if a ChatGPT conversation indicates potential self-harm. The optional feature allows adult users to name a trusted person who will receive an automated notification if the AI system detects concerning language, encouraging that contact to check in on the user.
The move comes as OpenAI faces multiple lawsuits from families who allege that ChatGPT encouraged their loved ones to commit suicide, or even helped them plan it. In several cases, families claim the chatbot provided explicit guidance on methods of self-harm. The company has said it is working to improve its safety systems in response to these tragedies.
Also read: Tesla Model Y becomes first vehicle to meet NHTSA’s new driver assistance safety standard
How the Trusted Contact Feature Works
When a ChatGPT conversation includes language that may indicate suicidal ideation or self-harm, the system first encourages the user to reach out to their designated trusted contact. Simultaneously, OpenAI sends an automated alert to that contact via email, text message, or in-app notification. The alert is brief and does not include the content of the conversation, in order to protect the user’s privacy. It simply encourages the contact to check in with the person.
OpenAI currently uses a combination of automated detection and human review to identify potentially harmful conversations. The company says that every time a safety notification is triggered, a human safety team reviews the incident, typically within one hour. If the team determines there is a serious safety risk, the alert is sent to the trusted contact.
Also read: Oracle Refused to Negotiate Severance After Cutting Up to 30,000 Workers
Context and Criticism
This feature follows similar safeguards introduced last September, which gave parents the ability to receive safety notifications about their teenagers’ ChatGPT accounts. Like the parental controls, the Trusted Contact feature is entirely optional, and users can maintain multiple ChatGPT accounts, meaning the safeguard can be bypassed.
OpenAI has faced growing scrutiny over how its AI models handle sensitive topics, particularly mental health crises. Critics argue that the company’s approach remains reactive rather than proactive, and that optional features may not reach the users who need them most. The company says it will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond to distress.
Why This Matters
As AI chatbots become more integrated into daily life, their role in mental health conversations is increasingly significant. The Trusted Contact feature represents a step toward harm reduction, but it also raises questions about privacy, the limits of automated detection, and whether tech companies are adequately prepared to handle sensitive human crises. For users, the feature offers a potential safety net, but its effectiveness will depend on awareness, adoption, and the accuracy of OpenAI’s detection systems.
Conclusion
OpenAI’s Trusted Contact feature is a response to real-world tragedies and a recognition that AI systems can have profound consequences on vulnerable users. While the feature is a meaningful addition, its optional nature and reliance on user setup may limit its impact. The company’s ongoing collaboration with mental health professionals suggests a commitment to improvement, but the broader debate about AI safety and responsibility is far from resolved.
FAQs
Q1: How do I set up a Trusted Contact in ChatGPT?
A: You can designate a trusted contact through your ChatGPT account settings. The feature is available for adult users only, and you can add or change your contact at any time.
Q2: Does the Trusted Contact see my chat history?
A: No. The alert sent to your trusted contact does not include any details about the conversation. It simply asks them to check in with you, protecting your privacy.
Q3: Is this feature mandatory?
A: No, it is completely optional. You can choose not to set up a Trusted Contact, and you can also have multiple ChatGPT accounts without the feature enabled.