OpenAI data suggests 1 million users discuss suicide with ChatGPT weekly (arstechnica.com)

🤖 AI Summary
OpenAI disclosed that roughly 0.15% of ChatGPT's weekly active users show explicit indicators of suicidal planning or intent; with the company estimating more than 800 million weekly users, that works out to over a million people each week. OpenAI also reported a similar rate of heightened emotional attachment to the chatbot, and said hundreds of thousands of users show possible signs of psychosis or mania in their weekly conversations. In response, the company says it has "taught the model to better recognize distress, de-escalate conversations, and guide people toward professional care" after consulting more than 170 mental-health experts; clinicians reportedly found the latest model more consistent and appropriate than earlier versions. The disclosure comes amid a lawsuit filed by the parents of a teen who confided suicidal thoughts to ChatGPT and a warning from 45 state attorneys general demanding stronger protections for young users.

For the AI/ML community, the numbers underscore a combined scale-and-safety challenge: rare but high-consequence signals must be detected reliably across hundreds of millions of interactions, and model behaviors such as sycophancy, which can reinforce dangerous beliefs, need robust mitigation. Practically, that points to expert-informed annotation, targeted safety fine-tuning, reliable distress classifiers, continuous monitoring, human-in-the-loop escalation pathways, and rigorous evaluation metrics for mental-health alignment, all while navigating privacy, legal liability, and regulatory scrutiny.
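Purely as an illustration of the kind of pipeline the summary alludes to, and not a description of OpenAI's actual system, here is a minimal sketch of a distress classifier feeding a human-in-the-loop escalation path. The keyword heuristic, score values, and thresholds are all assumptions for the example; a real deployment would use a trained classifier evaluated against expert-annotated data.

```python
# Hypothetical sketch: distress scoring plus human-in-the-loop escalation.
# The scorer below is a stand-in keyword heuristic, not a production model.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    CONTINUE = "continue"            # normal conversation flow
    DEESCALATE = "deescalate"        # respond with supportive, de-escalating language
    ESCALATE = "escalate_to_human"   # route to human review and surface crisis resources


@dataclass
class Assessment:
    score: float   # estimated probability of acute distress, 0.0 to 1.0
    action: Action


# Placeholder marker lists, purely for illustration.
ACUTE_MARKERS = ("end my life", "kill myself", "suicide plan")
ELEVATED_MARKERS = ("hopeless", "can't go on", "no way out")


def score_distress(message: str) -> float:
    """Toy heuristic scorer; a real system would use a fine-tuned classifier."""
    text = message.lower()
    if any(m in text for m in ACUTE_MARKERS):
        return 0.95
    if any(m in text for m in ELEVATED_MARKERS):
        return 0.6
    return 0.05


def assess(message: str,
           escalate_threshold: float = 0.9,
           deescalate_threshold: float = 0.5) -> Assessment:
    """Map a distress score to an action; the thresholds are assumed values."""
    score = score_distress(message)
    if score >= escalate_threshold:
        return Assessment(score, Action.ESCALATE)
    if score >= deescalate_threshold:
        return Assessment(score, Action.DEESCALATE)
    return Assessment(score, Action.CONTINUE)


if __name__ == "__main__":
    for msg in ["I feel hopeless lately", "What's the weather like?"]:
        result = assess(msg)
        print(f"{msg!r} -> score={result.score:.2f}, action={result.action.value}")
```

At the scale the article describes (hundreds of millions of weekly conversations), even a small false-negative rate translates into many missed cases, which is why the summary stresses continuous monitoring and rigorous evaluation rather than any fixed threshold alone.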