🤖 AI Summary
OpenAI announced new safety policies that significantly change how ChatGPT interacts with users under 18: the model will be trained to avoid “flirtatious talk” with minors, enforce stricter guardrails around suicide-related conversations, and, if a teen expresses suicidal ideation, attempt to notify parents or, in severe cases, local authorities. Parents who link a child’s account can now set “blackout hours” that block access entirely. OpenAI says it is building a system to infer whether a user is over or under 18 and will default to the more restrictive rules in ambiguous cases; linking a teen account to a parent account remains the most reliable way to trigger the protections.
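The “default to restrictive when ambiguous” rule is essentially a fail-closed gate over an uncertain age classifier. Here is a minimal sketch of that decision logic in Python; `AgeSignal`, its confidence field, and the 0.9 threshold are hypothetical illustrations, not anything OpenAI has described:

```python
from dataclasses import dataclass
from enum import Enum


class PolicyTier(Enum):
    ADULT = "adult"
    MINOR = "minor"  # stricter content guardrails apply


@dataclass
class AgeSignal:
    estimated_adult: bool      # classifier's best guess
    confidence: float          # 0.0 (unsure) to 1.0 (certain)
    parent_linked_minor: bool  # account explicitly linked to a parent account


def select_policy_tier(signal: AgeSignal, threshold: float = 0.9) -> PolicyTier:
    # An explicit parent-teen link is the strongest signal: always minor tier.
    if signal.parent_linked_minor:
        return PolicyTier.MINOR
    # Ambiguous inference fails closed to the more restrictive tier.
    if signal.confidence < threshold:
        return PolicyTier.MINOR
    return PolicyTier.ADULT if signal.estimated_adult else PolicyTier.MINOR
```

The key design choice is that uncertainty never grants the adult tier: only a high-confidence adult classification does, which is exactly the false-positive/false-negative trade-off discussed below.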
The changes come amid mounting legal and public-safety pressure: a wrongful-death lawsuit filed after a teen’s suicide that followed extended ChatGPT conversations, similar litigation against Character.AI, a Reuters report, and an upcoming Senate hearing on chatbot harms. Technically, enforcing age-based policies raises hard problems (age inference, false positives and negatives, consent and privacy trade-offs), and OpenAI openly frames this as a tension between teen safety and adult privacy and freedom. For developers and researchers, the move signals stricter content moderation, clearer parental-control features, and growing regulatory risk that will likely shape training, deployment, and audit practices for consumer LLMs.