OpenAI Rolls Out Teen Safety Features Amid Growing Scrutiny (www.wired.com)

🤖 AI Summary
OpenAI announced new teen-safety features for ChatGPT that use an age-prediction system to identify users under 18 and route them to an "age-appropriate" experience that blocks graphic sexual content. The company says the system will also flag suicide or self-harm risk and notify parents when a teen is in acute distress; if imminent danger is detected and parents are unreachable, it may contact authorities. By the end of September, OpenAI will add parental controls that let parents link their accounts to a teen's, manage conversations, disable features, receive distress notifications, and set time limits. OpenAI attributes these policy choices to its model behavior team and emphasizes balancing freedom, privacy, and safety.

The move responds to mounting public and regulatory scrutiny after high-profile cases linking chatbots to harm, as well as an FTC inquiry into AI's impact on kids. Technically, it signals increased reliance on classifier models for age and risk detection, tighter behavior tuning, and tradeoffs between user privacy and safety (complicated by a court order requiring chat preservation).

Important implications: under-18 users were excluded from OpenAI's recent usage research, so real teen behavior is still poorly understood; classifiers risk false positives and negatives that affect trust and consent; and these measures may set de facto safety norms ahead of federal regulation while also serving as a reputational safeguard.