🤖 AI Summary
OpenAI disclosed new estimates showing that roughly 0.07% of weekly ChatGPT users exhibit possible signs of mental-health emergencies such as mania, psychosis, or suicidal thoughts, and that about 0.15% have conversations with explicit indicators of potential suicidal planning or intent. While OpenAI calls these cases “extremely rare,” the company and critics note that at ~800 million weekly users, even small percentages translate into large absolute numbers. OpenAI says it built a global advisory network of more than 170 psychiatrists, psychologists, and primary-care physicians (practicing in 60 countries) to craft in-chat responses that encourage real-world help, and it reports recent updates to make the model respond “safely and empathetically” to delusions or mania and to flag indirect signals of self-harm.
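As a rough back-of-envelope check, taking the ~800 million weekly-user figure cited above at face value (the exact counts are not part of OpenAI's disclosure, so these are illustrative estimates only):

```python
# Back-of-envelope: translate the disclosed percentages into absolute weekly counts.
# Assumes the ~800 million weekly-users figure cited above; real counts will differ.
weekly_users = 800_000_000

emergency_rate = 0.0007  # ~0.07% showing possible signs of mania, psychosis, or suicidal thoughts
planning_rate = 0.0015   # ~0.15% with explicit indicators of suicidal planning or intent

print(f"Possible mental-health emergencies per week: ~{weekly_users * emergency_rate:,.0f}")
print(f"Explicit suicidal planning/intent signals per week: ~{weekly_users * planning_rate:,.0f}")
# Prints roughly 560,000 and 1,200,000 respectively.
```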
For the AI/ML community, the announcement highlights concrete safety and deployment challenges at internet scale: detection and classification of mental-health signals in freeform dialogue, calibrated empathetic response generation, and operational mitigations such as rerouting sensitive conversations to “safer” models (which open in a new window). The disclosure also underscores legal and ethical risks (ongoing lawsuits allege harm tied to chatbot interactions) and the limits of warnings or scripted responses for vulnerable users. Practitioners should view this as a prompt to strengthen safety evaluation metrics, invest in clinician-informed mitigation pipelines, and consider governance and liability implications when deploying conversational agents.
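To make the rerouting idea concrete, here is a minimal sketch of a mitigation pipeline of this general shape. The keyword screen, model identifiers, and data structures are placeholders invented for illustration; OpenAI has not described its actual implementation, and a production system would use a calibrated classifier evaluated against clinician-labeled data rather than a phrase list.

```python
from dataclasses import dataclass

# Hypothetical sketch: screen each user turn for risk signals and reroute
# flagged conversations to a more conservative "safer" model, leading the
# reply with a referral to real-world help. All names and thresholds here
# are illustrative placeholders, not OpenAI's implementation.

RISK_PHRASES = {"end my life", "kill myself", "no reason to live"}  # placeholder lexicon


@dataclass
class RoutingDecision:
    model: str              # which model should handle the reply
    prepend_referral: bool  # whether to lead with crisis resources


def screen_turn(user_text: str) -> bool:
    """Crude stand-in for a trained, calibrated risk classifier."""
    text = user_text.lower()
    return any(phrase in text for phrase in RISK_PHRASES)


def route(user_text: str) -> RoutingDecision:
    """Pick a model and response policy based on the risk screen."""
    if screen_turn(user_text):
        return RoutingDecision(model="safer-model", prepend_referral=True)
    return RoutingDecision(model="default-model", prepend_referral=False)


if __name__ == "__main__":
    decision = route("I feel like there is no reason to live")
    print(decision)  # RoutingDecision(model='safer-model', prepend_referral=True)
```

The design choice worth noting is the separation between detection (the screen) and policy (the routing decision plus scripted referral), which is where clinician input and safety evaluation metrics would plug in.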