ChatGPT teen-safety measures will include age prediction and verification (www.nbcnews.com)

🤖 AI Summary
OpenAI announced new teen-safety measures for ChatGPT that include an automated age‑prediction system and, in some countries, mandatory ID-based age verification. The company plans to route users into one of two experiences — an adolescent mode for ages 13–17 and an adult mode for 18+ — and says it will default to the under‑18 experience when uncertain. OpenAI also promised parental controls (rolling out at month's end) to let caregivers set how ChatGPT responds to their children and adjust features like memory and blackout hours. CEO Sam Altman emphasized the platform isn't intended for children under 13, acknowledged the privacy tradeoffs of ID checks, and framed these steps as prioritizing safety over privacy for minors. The announcement came ahead of a Senate Judiciary Committee hearing and follows a lawsuit accusing ChatGPT of facilitating self‑harm.

Technically and ethically, the plan carries significant implications for AI/ML developers and policymakers: automated age estimation and routing require robust, bias‑resistant classifiers and will face accuracy, spoofing, and privacy challenges, while ID verification introduces cross‑jurisdictional legal and data‑protection tradeoffs. OpenAI says flagged minors expressing suicidal ideation will trigger attempts to notify parents and, if necessary, authorities — and that adults may still receive nuanced responses (for example, fictional depictions of self-harm). The changes mark a shift toward safety‑first guardrails in consumer LLMs but also amplify debates about surveillance, false positives, and how to balance protection with user autonomy.
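The routing policy described above — classify a user's likely age, and fall back to the restricted experience when the classifier is unsure — can be sketched as a simple decision rule. This is a hypothetical illustration, not OpenAI's implementation; the `AgePrediction` type, the confidence threshold, and the mode names are all assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class AgePrediction:
    """Hypothetical output of an age-estimation classifier."""
    estimated_age: int   # classifier's point estimate of the user's age
    confidence: float    # classifier's confidence in that estimate, 0.0–1.0

def route_experience(pred: AgePrediction, confidence_threshold: float = 0.9) -> str:
    """Return 'adult' only when the classifier is confident the user is 18+;
    otherwise default to the safer 'teen' experience, mirroring the
    safety-first default described in the announcement."""
    if pred.estimated_age >= 18 and pred.confidence >= confidence_threshold:
        return "adult"
    return "teen"  # uncertain or under-18 → restricted experience
```

The asymmetry is the point: a false "teen" classification costs an adult some features (recoverable via ID verification), while a false "adult" classification exposes a minor to unrestricted content — so uncertainty resolves toward the teen mode.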