If there were ever evidence that people are forming a deep emotional reliance on ChatGPT, OpenAI’s new Trusted Contact feature is probably it.
Speaking at Sequoia Capital’s AI Ascent event last May, OpenAI CEO Sam Altman said young people were using ChatGPT like an operating system for life — not just for productivity, but for major personal decisions.
“I mean, that stuff, I think, is all cool and impressive,” Altman said. “And there’s this other thing where, like, they don’t really make life decisions without asking ChatGPT what they should do.”
Trusted Contact is still rolling out, so it isn’t available to everyone yet. To find it, click or tap your profile name in ChatGPT, then look in Settings. You can nominate a trusted adult contact, who must accept the role before the feature becomes active.
If ChatGPT’s automated systems detect conversations that may indicate a serious risk of self-harm, the user is warned that their Trusted Contact could be notified, and is encouraged to reach out to that person themselves first.
A specially trained human review team then assesses the situation before any alert is sent. If reviewers believe there is a genuine safety concern, the Trusted Contact receives a notification by email, text, or in-app alert encouraging them to check in.
OpenAI says the alerts do not include chat transcripts or detailed conversation history in order to protect user privacy, and you can remove or change your Trusted Contact at any time.
Reassuring or unsettling?
OpenAI says Trusted Contact was developed with input from mental-health experts, suicide-prevention specialists, and a global network of more than 260 doctors across 60 countries. Taken together with the parental controls and safety guardrails OpenAI has already introduced, Trusted Contact is another sign that the company acknowledges ChatGPT is something that can affect users emotionally, not just technologically.
OpenAI’s recent product announcements have played down the use of ChatGPT as a confidant and emphasised its productivity focus instead, particularly around Codex, its coding tool. Yet at the same time, the company keeps adding safety features aimed at ChatGPT users’ emotional well-being.
The idea that we are now being monitored by ChatGPT is also concerning to some. When my colleague Becca Caddy recently interviewed Amy Sutton from Freedom Counselling for an investigation into AI monitoring tools in the workplace, Sutton warned that knowing you’re being monitored by AI, especially at work, could actually worsen the problem it’s trying to solve. “With mental health stigmas still rife, AI observation would likely lead to greater efforts to hide evidence of struggles. This could create a dangerous spiral, where the greater our efforts to hide low mood or anxiety, the worse it becomes,” she commented.
Whether Trusted Contact feels reassuring or unsettling probably depends on how you already see AI and ChatGPT. But the feature is another example of AI companies acknowledging that their products are not just tools for productivity and information, but systems people may increasingly rely on emotionally during some of the most vulnerable moments of their lives.