Amazon discovered a 'high volume' of CSAM in its AI training data but isn't saying where it came from (www.engadget.com)

🤖 AI Summary
Amazon has reported discovering a significant volume of child sexual abuse material (CSAM) in its AI training data, prompting questions about where the content originated. According to the National Center for Missing and Exploited Children (NCMEC), over 1 million reports of AI-related CSAM were filed in 2025, with a substantial portion attributed to Amazon. The company says it obtained the material from external sources and removed it before the data was used for training, but it has not disclosed the specific origins, raising concerns about the transparency and efficacy of its data safeguards.

The incident highlights a pressing issue for the AI/ML community around the ethics of training data and user safety, particularly for minors. The dramatic rise in AI-related CSAM reports, from roughly 4,700 in 2023 to over a million in 2025, underscores the urgent need for robust measures to keep such harmful material out of AI training pipelines. With several AI platforms already facing legal scrutiny over their potential role in endangering young users, Amazon's disclosure signals that the tech industry must urgently address these safety concerns to build trust and ensure responsible AI development.