🤖 AI Summary
OpenAI announced it is building an automated age-prediction system to determine whether ChatGPT users are over 18, with under-18s routed to a restricted chatbot experience; parental controls are also promised by the end of September. CEO Sam Altman said the company is "prioritizing safety ahead of privacy and freedom for teens," acknowledging that adults may be asked for ID in some cases, a stated privacy trade-off intended to keep minors away from graphic sexual content and other mature material. The move follows a lawsuit tied to a teenager's suicide after extensive interactions with ChatGPT, a case OpenAI says exposed gaps in its earlier content-moderation and intervention systems.
Technically, the effort raises hard questions: OpenAI has not disclosed its methods or a timeline, and acknowledges that age prediction is non-trivial and error-prone. The plan includes defaulting to the safer, restricted experience when age is uncertain and requiring adult verification to regain full access (one possible form of that routing logic is sketched below). For the AI/ML community, this signals renewed focus on automated user profiling, robustness to adversarial evasion, bias and fairness in age inference, privacy-preserving verification approaches, and the operational trade-offs between safety and user privacy. How accurate, fair, and secure such systems prove to be, and how they reshape trust models for large-language-model services, remains an open and consequential debate.
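OpenAI has not said how its age-prediction or routing pipeline will actually work, so the following is only a minimal illustrative sketch of the one policy the announcement does describe: fall back to the restricted experience whenever the prediction is uncertain, and let a verified adult regain full access regardless of the model's output. The `AgePrediction` interface, the `route_user` function, and both thresholds are assumptions made for illustration, not OpenAI's API.

```python
from dataclasses import dataclass

# Hypothetical sketch only: every name, threshold, and interface below
# is an assumption, illustrating a "fail safe when uncertain" policy.

@dataclass
class AgePrediction:
    p_adult: float      # model's estimated probability the user is 18+
    confidence: float   # calibration/uncertainty score for that estimate

RESTRICTED = "restricted_experience"  # safer, under-18 experience
FULL = "full_experience"              # unrestricted adult experience

def route_user(pred: AgePrediction,
               id_verified_adult: bool = False,
               adult_threshold: float = 0.95,
               confidence_floor: float = 0.80) -> str:
    """Route a user based on an age prediction.

    Policy described in the article: default to the restricted
    experience when age is uncertain; an adult who completes
    verification (e.g., an ID check) regains full access regardless
    of the predictor's output.
    """
    if id_verified_adult:
        return FULL  # explicit verification overrides the predictor
    if pred.confidence < confidence_floor:
        return RESTRICTED  # uncertain prediction -> fail safe
    if pred.p_adult >= adult_threshold:
        return FULL
    return RESTRICTED

if __name__ == "__main__":
    print(route_user(AgePrediction(p_adult=0.97, confidence=0.90)))  # full_experience
    print(route_user(AgePrediction(p_adult=0.97, confidence=0.40)))  # restricted (low confidence)
    print(route_user(AgePrediction(p_adult=0.30, confidence=0.90)))  # restricted (likely minor)
    print(route_user(AgePrediction(p_adult=0.30, confidence=0.90),
                     id_verified_adult=True))                        # full_experience
```

Note the ordering: the uncertainty check runs before the probability check, so a confident-looking score from a poorly calibrated model still lands in the restricted tier, which is exactly the safety-over-convenience trade-off the announcement describes.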