1 min read

Link: ChatGPT will ‘better detect’ mental distress after reports of it feeding people’s delusions

OpenAI plans to launch GPT-5 and is updating ChatGPT to better detect users' mental or emotional distress. The company collaborates with experts to provide "evidence-based resources" when necessary.

Following incidents where ChatGPT intensified users' delusions, OpenAI withdrew an overly agreeable update released in April, which had caused distress by reinforcing harmful behavior.

OpenAI admits its previous GPT-4o model failed to adequately recognize signs of delusion or emotional dependency. The company acknowledges that AI's responsive, agreeable nature can particularly affect vulnerable individuals.

To ensure "healthy use," ChatGPT, with nearly 700 million weekly users, now suggests breaks during long interaction sessions. A notification prompts users to consider a pause, offering options to continue or end the chat.

OpenAI continually adjusts how these break reminders are presented, similar to wellness notifications used by YouTube, Instagram, TikTok, and Xbox. Additionally, Character.AI has introduced safety measures for monitoring children’s interactions with bots.

A forthcoming update will make ChatGPT less decisive in critical decision-making scenarios, guiding users through their options rather than giving direct advice. This change aims to shift responsibility back to the user, encouraging safer and more thoughtful interactions.


--

Yoooo, this is a quick note on a link that made me go, WTF? Find all past links here.