OpenAI has introduced a feature called Trusted Contact for ChatGPT, which lets users nominate a friend or other trusted person whom the company can contact if its systems detect a serious risk of the user harming themselves. The feature arrives as AI chatbots have become an increasingly common outlet for people navigating mental health challenges; OpenAI previously disclosed to the BBC that more than one million of ChatGPT's 800 million weekly users express suicidal thoughts in their conversations with the chatbot.
The Trusted Contact system is available to users aged 18 and above and can be set up in ChatGPT's settings, where a user nominates a single adult as their contact. That person receives an invitation, which must be accepted within one week; once accepted, they become the designated contact for any safety notification. During setup, ChatGPT clearly informs the user that the company may notify their contact if it detects a serious possibility of self-harm. When a concerning conversation is detected, the system first encourages the user to reach out to their trusted contact directly, even suggesting conversation starters to make that step easier.
Critically, the process is not fully automated. OpenAI has confirmed that a small team of specially trained reviewers assesses each situation before any notification is sent; only after that human review determines a genuinely serious risk exists does the contact receive a message by email, text, or in-app notification. The message tells the recipient that the user may be going through a difficult time and encourages them to check in, and the recipient can access additional context indicating that OpenAI detected a conversation involving discussion of suicide. Conversation transcripts, however, are not shared with the contact, in order to preserve the user's privacy. OpenAI has stated that it strives to complete each safety review within one hour of detection.

The feature builds on ChatGPT's existing parental controls framework and reflects the company's continued effort to address the serious and growing use of its platform as an informal mental health resource. That pattern has attracted significant scrutiny following legal action and investigative reporting that raised questions about how previous iterations of ChatGPT handled conversations involving distress and self-harm.