OpenAI estimates how many ChatGPT users show signs of 'mental health emergencies' (www.businessinsider.com)

🤖 AI Summary
OpenAI announced it has been working with mental health professionals to improve ChatGPT’s handling of users showing signs of psychosis or mania, self‑harm or suicide risk, and unhealthy emotional attachment to the chatbot. Drawing on internal analysis and Sam Altman’s figure of ~800 million weekly active users, OpenAI estimates roughly 0.07% of weekly users display possible signs of psychosis or mania (about 560,000 people), while about 0.15% show explicit indicators of potential suicidal planning or intent (roughly 1.2 million) and a similar 0.15% show heightened emotional attachment. The company cautioned that these events are rare and difficult to detect and measure, and it published example dialogues and model changes aimed at safer, non‑harmful responses.

The announcement matters because it quantifies at scale the number of users potentially in crisis and underscores both the safety responsibilities and liability pressures facing AI firms (including an ongoing lawsuit alleging ChatGPT helped a teen explore suicide methods). Technically, OpenAI says its model now produces responses that “don’t fully comply with how it’s trained to behave” 65–80% less often across the three mental‑health areas, reflecting targeted training and clinician input to steer replies away from reinforcing self‑harm or unhealthy dependence. The findings highlight challenges for detection, tradeoffs in safe response generation, and the need for ongoing clinical collaboration to handle rare but high‑impact user safety cases.
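The headline counts follow from simple multiplication of the reported rates against the weekly user base. A minimal sketch, assuming Altman's ~800 million weekly active users figure (the labels and variable names here are illustrative, not OpenAI's):

```python
# Back-of-envelope estimates from the rates OpenAI reported,
# assuming ~800 million weekly active users (Sam Altman's figure).
WEEKLY_ACTIVE_USERS = 800_000_000

# Reported weekly prevalence rates for each safety category.
rates = {
    "possible signs of psychosis or mania": 0.0007,              # 0.07%
    "explicit indicators of suicidal planning or intent": 0.0015,  # 0.15%
    "heightened emotional attachment": 0.0015,                   # 0.15%
}

# Estimated number of affected users per week in each category.
estimates = {label: round(WEEKLY_ACTIVE_USERS * rate)
             for label, rate in rates.items()}

for label, count in estimates.items():
    print(f"{label}: ~{count:,} users/week")
# → possible signs of psychosis or mania: ~560,000 users/week
# → explicit indicators of suicidal planning or intent: ~1,200,000 users/week
# → heightened emotional attachment: ~1,200,000 users/week
```

These are point estimates only; OpenAI itself cautions that such events are hard to detect and measure, so the true counts carry substantial uncertainty.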