OpenAI Introduces Trusted Contact Feature for Serious Self-Harm Concerns


OpenAI is addressing the double-edged sword of AI chatbots' openness by introducing a new feature called Trusted Contact, which allows users to nominate a trusted person who will be alerted if ChatGPT detects a serious self-harm concern.

AI chatbots have made it surprisingly easy to talk about anything, and that includes some of the heaviest topics imaginable. That openness has always been a double-edged sword.

OpenAI is now taking a step to address that with a new feature that brings a trusted person into the picture when things get serious. The company is rolling out Trusted Contact, which is starting to appear in ChatGPT settings for adult users. It lets users name one person who can be alerted if ChatGPT detects a serious self-harm concern.

How does Trusted Contact work?

Setting up a Trusted Contact is optional, but if you decide to use it, the person you nominate must be at least 18 years old (19 in South Korea). Once you name someone, they receive an invitation explaining what the role actually entails, and they have one week to accept before the feature goes live. If they decline, you can pick someone else.

The alert system itself is not automatic. If ChatGPT’s systems flag a conversation as potentially concerning, the chatbot first tells the user that their Trusted Contact may be notified, and it also nudges the user to reach out directly, offering suggested conversation starters. A small team of specially trained human reviewers then steps in to assess the situation. Only if they confirm a serious risk is the contact actually notified, via email, text, or in-app notification.

The alert does not share chat transcripts or conversation details. It simply says that self-harm came up in a potentially concerning way and asks the contact to check in. OpenAI says it aims to complete that human review in under one hour.

Why is OpenAI adding this now?

Trusted Contact is part of a broader set of safety features on the platform. OpenAI previously added parental controls that alert parents when a linked teen account shows signs of distress; Trusted Contact is the adult-facing extension of that work. It was reportedly developed with input from clinicians, researchers, and mental health organizations, including the American Psychological Association.

All that said, Trusted Contact does not replace crisis hotlines, emergency services, or professional mental health care. ChatGPT will still direct users toward those resources when needed. Users can remove or change their Trusted Contact at any time, and contacts can remove themselves whenever they want. The reality is that ChatGPT is being used for some deeply personal conversations, whether OpenAI planned for that or not.

Adding a feature like Trusted Contact is a move in the right direction, and also an admission that a chatbot can only do so much.


Source: DigitalTrends
