ChatGPT is referring too many people to suicide hotline [Dutch] (nos.nl)

🤖 AI Summary
Dutch suicide-prevention hotline 113 reports that ChatGPT, especially the newest GPT-5 model, refers users to the hotline far more often than to psychologists or other mental-health services. NOS Stories' data show GPT-5 recommends 113 much more often than older models did (in clearly suicidal test cases, GPT-4o referred to 113 98% of the time and GPT-5 100%), and 113 has seen an uptick in callers who say ChatGPT prompted them to call. Users with milder distress, including teenagers, report repeated automatic prompts to contact 113 or even the 112 emergency number; since 113 can only help people with suicidal ideation, this over-referral risks deterring those with less severe problems from seeking appropriate care.

For the AI/ML community this highlights a classic safety/utility tradeoff: recent safety tuning appears to have shifted the model toward conservative, high-recall triage, reducing the risk of missing true suicidality but increasing false positives that can overwhelm crisis services and misroute care. Key implications include finer-grained intent detection, contextual thresholds (age, phrasing, severity), coordination with crisis teams, telemetry to monitor downstream effects, and clearer escalation logic that distinguishes "I feel down" from "I plan to harm myself" (a minimal sketch of such tiering follows below). 113 has asked OpenAI to nuance its advice; OpenAI says it will pass on the feedback and has announced teen protections. Developers should treat hotline referrals as a safety-critical policy requiring iterative evaluation with clinical partners.
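To make the "clearer escalation logic" point concrete, here is a minimal, hypothetical Python sketch of tiered triage. The tier names, trigger phrases, and resource mappings are illustrative assumptions, not OpenAI's or 113's actual implementation; a production system would replace the keyword matcher with a calibrated classifier that weighs age, phrasing, and conversation history, and would be evaluated with clinical partners.

```python
# Hypothetical sketch of tiered escalation logic.
# Tiers, phrases, and resource names are illustrative assumptions only.
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    NONE = 0      # no distress signals
    MILD = 1      # e.g. "I feel down lately"
    ELEVATED = 2  # passive ideation, e.g. "I wish I weren't here"
    ACUTE = 3     # explicit plan or intent, e.g. "I plan to harm myself"


@dataclass
class Referral:
    message: str
    resources: list[str]


def classify_severity(text: str) -> Severity:
    """Toy severity classifier; real systems would use a tuned model,
    not keyword matching."""
    t = text.lower()
    if any(p in t for p in ("plan to harm myself", "going to end my life")):
        return Severity.ACUTE
    if any(p in t for p in ("wish i weren't here", "don't want to exist")):
        return Severity.ELEVATED
    if any(p in t for p in ("feel down", "feeling low", "can't cope")):
        return Severity.MILD
    return Severity.NONE


def triage(text: str) -> Referral:
    """Map severity tiers to graded referrals instead of routing
    every distress signal to the suicide hotline."""
    severity = classify_severity(text)
    if severity is Severity.ACUTE:
        return Referral("Please contact 113 now (or 112 if you are in immediate danger).",
                        ["113 Zelfmoordpreventie", "112"])
    if severity is Severity.ELEVATED:
        return Referral("Talking to 113 or your GP could help.",
                        ["113 Zelfmoordpreventie", "huisarts/GP"])
    if severity is Severity.MILD:
        return Referral("Consider reaching out to a GP or psychologist.",
                        ["huisarts/GP", "psychologist"])
    return Referral("", [])


if __name__ == "__main__":
    for msg in ("I feel down lately", "I plan to harm myself tonight"):
        print(msg, "->", triage(msg).resources)
```

The point of the tiers is the distinction 113 is asking for: only acute cases route to 113/112, while milder distress is pointed toward a GP or psychologist, and the thresholds themselves become a policy surface that can be monitored via telemetry and adjusted with crisis-team feedback.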