🤖 AI Summary
Campaign group Global Witness says it was able to get TikTok to surface pornographic material to accounts created with a 13‑year‑old birthdate within a “small number of clicks.” Using clean phones with no history and TikTok’s “restricted mode” enabled, researchers created seven child accounts and found the app’s “you may like” suggestions pushed sexualised autocomplete phrases such as “very very rude skimpy outfits” and escalated to explicit queries like “hardcore pawn [sic] clips.” In several cases pornographic clips — some allegedly showing someone who appeared under 16, now referred to the Internet Watch Foundation — appeared after one or two clicks. Global Witness also reports that some clips tried to evade moderation by embedding explicit footage inside innocuous images or videos.
The findings are significant because they suggest recommendation and search‑suggestion systems can undermine the platform's child‑safety protections, potentially putting minors at risk and raising questions under the UK Online Safety Act (OSA). Global Witness claims TikTok is in breach; Ofcom says it will review the research. TikTok says it removed the offending videos and updated its search‑suggestion behaviour. Technically, this points to failures in autocomplete filtering, ranking safeguards and content‑detection pipelines — and underscores the need for stricter algorithmic controls, better moderation tooling, and transparency to ensure platforms properly restrict harmful content from children's feeds.