OpenAI
Connecting you to someone you trust when it matters most.
People use ChatGPT to learn, explore ideas, solve problems, and reflect on personal questions. Sometimes those conversations can involve moments when someone may be struggling or looking for support. Our goal is to design systems that respond thoughtfully to sensitive conversations and encourage people to connect with real-world help when needed.
Today we are starting to roll out Trusted Contact, an optional safety feature in ChatGPT that lets adults nominate someone they trust, such as a friend, family member, or caregiver. That contact may be notified if our automated systems and trained reviewers determine that the enrolled person may have discussed harming themselves in a way that indicates a serious safety concern. Trusted Contact is designed to offer another layer of support alongside the localized helplines already available in ChatGPT, helping users connect with a person they trust when they are in crisis.
Trusted Contact builds on the safety notifications in parental controls, which allow parents or guardians to receive alerts when there are signs of acute distress on a linked teen account. Now we are extending these safety alert options so anyone over 18 can choose to add someone they trust as their Trusted Contact.
Expert guidance identifies social connection as one of the most important protective factors in reducing suicide risk. Trusted Contact is designed to encourage connection with someone the user already trusts. It does not replace professional care or crisis services; it is one of several layers of safeguards to support people in distress. ChatGPT will still encourage users to contact crisis hotlines or emergency services when appropriate.
“Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress. Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most.”
—Dr. Arthur Evans, Chief Executive Officer of the American Psychological Association

These serious safety situations are rare, but when they arise, our systems are designed to support timely review and response. No system is perfect, and a notification to a Trusted Contact may not always reflect exactly what someone is experiencing, so every notification undergoes trained human review before it is sent, and we strive to complete these reviews in under one hour.
“One of AI's biggest promises is how it can foster authentic human-to-human connection and psychological safety. I am encouraged by ChatGPT's Trusted Contact feature, which offers a step forward to human empowerment, especially during moments of vulnerability.”
—Dr. Munmun De Choudhury, Ph.D., J. Z. Liang Professor of Interactive Computing at Georgia Tech and member of the Expert Council on Well-Being and AI

In addition to Trusted Contact, ChatGPT has safeguards to help guide sensitive conversations at every stage, and we have continued improving how the system responds to different levels of risk expressed in a conversation.

Trusted Contact is part of OpenAI’s broader effort to build AI systems that help people during difficult moments. We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress. Our goal is to ensure that AI systems do not exist in isolation; instead, they should help connect people to the real-world care, relationships, and resources that matter most.