
OpenAI Adds Trusted Contact to ChatGPT After Self-Harm Lawsuits

Michael Ouroumis · 2 min read

OpenAI on Thursday introduced Trusted Contact, an opt-in safety feature in ChatGPT that can notify a user-nominated adult when the company's monitoring systems and human reviewers conclude the user may be at serious risk of self-harm. The launch follows a series of lawsuits from families who lost loved ones to suicide after extended interactions with the chatbot, and arrives as OpenAI faces mounting pressure to harden the emotional-support edges of its consumer product.

How Trusted Contact works

Users 18 and older — 19 and older in South Korea — can designate one adult as their Trusted Contact from ChatGPT's settings. The nominee receives an invitation explaining the role and must accept within one week for the feature to become active. If the invitation is declined, the user can nominate someone else.

When ChatGPT's automated monitoring flags conversations that suggest a serious safety concern, a small team of trained reviewers assesses the case before any outside notification is sent. OpenAI says it aims to review safety notifications within one hour. If reviewers confirm the risk, the Trusted Contact receives a brief alert by email, by text message, or, if they have their own ChatGPT account, by in-app notification.

Crucially, OpenAI says chat transcripts and conversation details are not shared with the Trusted Contact. The notification explains that self-harm was mentioned in a potentially concerning way, encourages a check-in, and links to expert guidance on handling sensitive conversations.
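The workflow described above — a one-week invitation window, human confirmation after an automated flag, and a deliberately scoped alert with no transcript — can be sketched in a few lines of Python. This is purely an illustration of the reported design, not OpenAI's implementation; every name, field, and function here is hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

INVITE_WINDOW = timedelta(days=7)   # nominee must accept within one week
REVIEW_TARGET = timedelta(hours=1)  # OpenAI's stated goal for human review

@dataclass
class TrustedContact:
    name: str
    invited_at: datetime
    accepted: bool = False

    def invite_expired(self, now: datetime) -> bool:
        # An unaccepted invitation lapses after the one-week window.
        return not self.accepted and now - self.invited_at > INVITE_WINDOW

def build_notification(reason: str) -> dict:
    """A scoped alert: explains the concern, never includes chat content."""
    return {
        "reason": reason,                     # why the mention was flagged
        "suggestion": "check in with the user",
        "guidance_link": "<expert-guidance-url>",
        # deliberately no transcript or conversation field
    }

def handle_flag(contact: Optional[TrustedContact],
                reviewer_confirms: bool,
                reason: str,
                now: datetime) -> Optional[dict]:
    """Automated flag -> human review -> notify only an active contact."""
    if contact is None or not contact.accepted or contact.invite_expired(now):
        return None   # feature inactive: no outside notification
    if not reviewer_confirms:
        return None   # human reviewer did not confirm the risk
    return build_notification(reason)
```

The key property the sketch captures is that both gates must pass — an accepted, unexpired contact and a human reviewer's confirmation — before anything leaves the system, and what leaves is a summary rather than the conversation itself.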

Built with clinicians and well-being experts

OpenAI says the feature was developed with input from its Global Physicians Network, which the company describes as more than 260 licensed doctors who have practiced in 60 countries, alongside its Expert Council on Well-Being and AI and external organizations including the American Psychological Association. The company has been expanding that clinical bench as ChatGPT increasingly becomes a place where users discuss mental health, relationships, and emotional distress.

A response to lawsuits and regulatory pressure

The Trusted Contact launch arrives against a wave of litigation alleging that ChatGPT encouraged or failed to interrupt users who were planning self-harm. State attorneys general have opened inquiries into how AI chatbots interact with minors and vulnerable users, and Maine recently moved to bar unlicensed AI "therapy" products outright. By layering an opt-in human safety net on top of automated detection, OpenAI is trying to reduce the chance that a high-risk conversation ends without anyone in the user's offline life being alerted.

Implications

For product teams across the industry, Trusted Contact sets a new bar: a real-name nominee, a one-hour human review SLA, and a tightly scoped notification that excludes raw chat content. Expect Anthropic, Google, and xAI to face questions about why their consumer assistants do not yet offer something similar — and expect plaintiffs' lawyers to cite OpenAI's own description of "serious safety concern" as a benchmark for what a reasonable AI provider should be able to detect.

