🤖 AI Summary
OpenAI has quietly rolled out per-message “safety routing” in ChatGPT that automatically reroutes conversations about emotionally or legally sensitive topics to a more conservative model, and many paying users say the switch happens without notice, explanation, or an opt-out. Reddit threads erupted after subscribers reported sudden drops in capability or style when discussing sensitive matters, an experience users describe as a “silent override” away from GPT-4o/GPT-5. OpenAI executive Nick Turley confirmed the change, saying it is temporary and intended to improve how the system responds to signs of mental or emotional distress.
For the AI/ML community this matters beyond user frustration: automatic, opaque model-switching introduces non-determinism that complicates reproducibility, benchmarking, product expectations, and trust. Technically, it is a per-message classifier-plus-router that selects a more conservative model when safety heuristics trigger, trading expressiveness for caution (a minimal sketch of the pattern follows below). That design raises questions about transparency, consent, failure-mode signaling, and how to evaluate models that can change mid-conversation. Researchers, developers, and paying customers will likely press for clearer routing criteria, explicit UI indicators, and opt-in/opt-out controls to balance harm mitigation with consistent model behavior.
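To make the classifier-plus-router pattern concrete, here is a minimal Python sketch. Everything in it is an assumption for illustration: the model names, the threshold, and the keyword heuristic standing in for a learned safety classifier. OpenAI has not published its actual routing criteria, so this is a sketch of the general technique, not a description of ChatGPT's implementation.

```python
# Hypothetical sketch of per-message safety routing; NOT OpenAI's actual
# implementation. Model names, threshold, and the keyword-based classifier
# are illustrative assumptions. A production system would use a learned
# classifier rather than keyword matching.
from dataclasses import dataclass

DEFAULT_MODEL = "gpt-4o"           # assumed default model name
CONSERVATIVE_MODEL = "gpt-safety"  # hypothetical conservative fallback

# Toy stand-in for a learned safety classifier: returns a risk score in [0, 1].
SENSITIVE_TERMS = {"self-harm", "suicide", "lawsuit", "abuse"}

def safety_score(message: str) -> float:
    words = set(message.lower().split())
    hits = len(words & SENSITIVE_TERMS)
    return min(1.0, hits / 2)  # crude heuristic: 2+ term hits => max risk

@dataclass
class RoutingDecision:
    model: str
    score: float
    rerouted: bool

def route_message(message: str, threshold: float = 0.5) -> RoutingDecision:
    """Per-message routing: each turn is classified independently, so a
    single sensitive message can switch models mid-conversation."""
    score = safety_score(message)
    if score >= threshold:
        return RoutingDecision(CONSERVATIVE_MODEL, score, rerouted=True)
    return RoutingDecision(DEFAULT_MODEL, score, rerouted=False)

if __name__ == "__main__":
    for msg in ["Explain transformer attention",
                "I'm thinking about self-harm and suicide"]:
        decision = route_message(msg)
        # A transparent UI would surface `decision.rerouted` to the user;
        # the complaints above stem from this signal being absent.
        print(f"{decision.model:>12}  score={decision.score:.2f}  {msg!r}")
```

Because the decision is recomputed on every message, behavior can flip mid-conversation, which is exactly the non-determinism that complicates benchmarking; surfacing something like the `rerouted` flag in the UI is the kind of explicit indicator users are asking for.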