OpenAI says it's working to tell if a user is under 18 and will send them to an 'age-appropriate' ChatGPT (www.businessinsider.com)

🤖 AI Summary
OpenAI announced plans to detect when users are under 18 and automatically steer them to an "age-appropriate" ChatGPT experience. CEO Sam Altman said the company is building an age‑prediction system that estimates age from how people use ChatGPT and will default to the under‑18 mode when uncertain. That mode enforces stricter policies (e.g., blocking graphic sexual content), includes parental controls coming by the end of the month (account linking, response guidance, blackout hours, and notifications if a teen appears in acute distress), and in rare emergency cases may escalate to law enforcement if parents can't be reached.

Technically and politically, this signals a move toward behavioral ML classifiers plus layered content filters and human-in-the-loop escalation paths — with real tradeoffs. OpenAI says it may require ID verification in some countries or situations, accepting privacy compromises for adult users to prioritize teen safety. The push follows criticism and legal scrutiny after alleged harms involving teens (including a recent lawsuit and Congressional attention). For the AI/ML community, the announcement highlights challenges in building age-detection models, calibration/false‑positive risks from conservative defaults, deployment ethics, regulatory pressure, and the operational complexity of blending automated detection, parental controls, and crisis intervention.
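The "default to under-18 when uncertain" policy described above is essentially a conservative decision threshold on a classifier's output. A minimal sketch of that routing logic, assuming a hypothetical probability score and threshold (these names and values are illustrative, not OpenAI's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class AgePrediction:
    """Hypothetical output of a behavioral age classifier."""
    adult_prob: float  # estimated probability the user is 18+

def select_mode(pred: AgePrediction, adult_threshold: float = 0.95) -> str:
    # Conservative default: route to the stricter under-18 experience
    # unless the classifier is highly confident the user is an adult.
    return "adult" if pred.adult_prob >= adult_threshold else "under_18"

# A confident adult prediction passes through; an uncertain one does not.
print(select_mode(AgePrediction(adult_prob=0.99)))  # adult
print(select_mode(AgePrediction(adult_prob=0.60)))  # under_18
```

Raising `adult_threshold` reduces the risk of exposing minors to adult content at the cost of more false positives on adults — the calibration tradeoff the summary flags, and the reason OpenAI says it may fall back to ID verification in some cases.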