OpenAI on Thursday introduced Trusted Contact, an opt-in safety feature in ChatGPT that can notify a user-nominated adult when the company's monitoring systems and human reviewers conclude the user may be at serious risk of self-harm. The launch follows a series of lawsuits from families who lost loved ones to suicide after extended interactions with the chatbot, and arrives as OpenAI faces mounting pressure to harden the emotional-support edges of its consumer product.
How Trusted Contact works
Users 18 and older — 19 and older in South Korea — can designate one adult as their Trusted Contact from ChatGPT's settings. The nominee receives an invitation explaining the role and must accept within one week for the feature to become active. If the invitation is declined, the user can nominate someone else.
When ChatGPT's automated monitoring flags a conversation that suggests a serious safety concern, a small team of trained reviewers assesses the case before any outside notification is sent; OpenAI says it aims to complete these reviews within one hour. If reviewers confirm the risk, the Trusted Contact receives a brief alert by email, by text message, or, if they have their own ChatGPT account, by in-app notification.
Crucially, OpenAI says chat transcripts and conversation details are never shared with the Trusted Contact. The notification says only that self-harm came up in a potentially concerning way, encourages a check-in, and links to expert guidance on handling sensitive conversations.
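For readers who think in code, the flow OpenAI describes amounts to a small state machine. The sketch below models it in Python purely as an illustration: every name, type, and function here is hypothetical, since OpenAI has not published an API or implementation details. Only the one-week acceptance window, the human-confirmation gate, and the transcript-free notification are taken from the company's own description.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum, auto

INVITE_WINDOW = timedelta(weeks=1)  # nominee must accept within one week

class InviteStatus(Enum):
    PENDING = auto()
    ACCEPTED = auto()
    DECLINED = auto()
    EXPIRED = auto()

@dataclass
class TrustedContact:
    # Hypothetical record; OpenAI has not described its data model.
    name: str
    channel: str                    # "email", "sms", or "in_app"
    invited_at: datetime
    status: InviteStatus = InviteStatus.PENDING

    def resolve_invite(self, accepted: bool, now: datetime) -> InviteStatus:
        # The invitation lapses if not answered within the one-week window.
        if now - self.invited_at > INVITE_WINDOW:
            self.status = InviteStatus.EXPIRED
        else:
            self.status = (InviteStatus.ACCEPTED if accepted
                           else InviteStatus.DECLINED)
        return self.status

def maybe_notify(flagged: bool, reviewer_confirms: bool,
                 contact: TrustedContact) -> str | None:
    """Automated flag -> human review -> tightly scoped alert.

    Deliberately absent: any transcript or conversation content,
    matching the article's description of the notification's scope.
    """
    if not (flagged and reviewer_confirms):
        return None  # no outside notification without human confirmation
    if contact.status is not InviteStatus.ACCEPTED:
        return None  # feature is inactive until the nominee accepts
    return (f"via {contact.channel}: Someone who named you as their Trusted "
            "Contact mentioned self-harm in a potentially concerning way. "
            "Consider checking in. Guidance: <link to expert resources>")
```

The design choice the sketch encodes is the one OpenAI is emphasizing: nothing leaves the system unless both the automated flag and a human reviewer agree, and even then the contact sees only a scoped summary, never the chat itself.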
Built with clinicians and well-being experts
OpenAI says the feature was developed with input from its Global Physicians Network, which the company describes as more than 260 licensed doctors who have practiced in 60 countries, alongside its Expert Council on Well-Being and AI and external organizations including the American Psychological Association. The company has been expanding that clinical bench as ChatGPT increasingly becomes a place where users discuss mental health, relationships, and emotional distress.
A response to lawsuits and regulatory pressure
The Trusted Contact launch arrives amid a wave of litigation alleging that ChatGPT encouraged or failed to interrupt users who were planning self-harm. State attorneys general have opened inquiries into how AI chatbots interact with minors and vulnerable users, and Maine recently moved to bar unlicensed AI "therapy" products outright. By layering an opt-in human safety net on top of automated detection, OpenAI is trying to reduce the chance that a high-risk conversation ends without anyone in the user's offline life being alerted.
Implications
For product teams across the industry, Trusted Contact sets a new bar: a real-name nominee, a one-hour human review SLA, and a tightly scoped notification that excludes raw chat content. Expect Anthropic, Google, and xAI to face questions about why their consumer assistants do not yet offer something similar — and expect plaintiffs' lawyers to cite OpenAI's own description of "serious safety concern" as a benchmark for what a reasonable AI provider should be able to detect.