🤖 AI Summary
OpenAI has published its first rough estimate of how many ChatGPT users show signs of severe mental-health crises in a typical week, and rolled out updates (now in GPT-5) intended to spot and de-escalate those risks. Working with more than 170 psychiatrists, psychologists, and physicians, OpenAI estimates that roughly 0.07% of active users show "possible signs" of psychosis or mania in a given week, 0.15% show explicit indicators of suicidal intent, and a further 0.15% display heightened emotional reliance on the chatbot. With 800 million weekly active users, those rates translate into hundreds of thousands to millions of at-risk interactions each week. Clinicians reviewed more than 1,800 sample responses and found GPT-5 reduced undesired answers by 39–52% versus GPT-4o; the model is tuned to express empathy while avoiding reinforcement of delusional beliefs (e.g., refusing to validate claims that "planes are stealing your thoughts").
The disclosure is significant because it provides the first public estimate of the scale of so-called "AI psychosis" and demonstrates a clinical workflow for safety tuning, but it comes with important caveats. OpenAI's metrics and detection methods are proprietary, the reported categories may overlap, and better model responses don't guarantee that users will seek help or change behavior. The update addresses known LLM failure modes in long conversations, yet independent validation and clearer real-world outcome data are needed to assess its impact on clinical safety, monitoring, and policy.