🤖 AI Summary
Two adult-film companies, Strike 3 Holdings and Counterlife Media, have sued Meta for $359 million, alleging the company downloaded and seeded torrents of nearly 2,400 copyrighted porn videos to train AI models. The plaintiffs point to 47 IP addresses they link to Meta and highlight one residential IP, belonging to the father of a Meta contractor, that allegedly downloaded 97 titles. They also suggest the activity could relate to a not-safe-for-work variant of Meta’s upcoming video generator, Movie Gen. Meta has moved to dismiss, calling the torrent-tracking “guesswork and innuendo,” arguing that downloads averaging roughly 22 files per year are far too sparse to plausibly power model training, and saying the pattern looks like private personal use rather than corporate scraping. Meta insists it bans sexually explicit material from its training datasets.
The case matters for AI/ML because it tests how copyright, dataset provenance and IP attribution intersect with modern model building. If corporations are found to have ingested or seeded copyrighted content via distributed networks, it would heighten legal and compliance risk, push firms to tighten data-ingestion controls, and raise questions about employee device use, VPNs and contractor oversight. Technically, the dispute highlights key evidentiary challenges: linking downloads to enterprise activity, assessing whether the volume and distribution of downloads could meaningfully affect training, and distinguishing deliberate seeding from incidental personal access, all issues that could shape future industry practices around dataset curation and content-policy enforcement.
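As a rough illustration of the attribution problem, the sketch below matches observed torrent-peer addresses against an organization's published network blocks. The ranges and peer IPs are hypothetical placeholders (RFC 5737 documentation addresses), not figures from the case, and real attribution would also have to account for VPNs, carrier-grade NAT and contractor home connections.

```python
import ipaddress

# Hypothetical sketch: the network ranges and peer addresses below are
# illustrative placeholders (RFC 5737 documentation blocks), not data
# from the court filings.
CORPORATE_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/26"),
]

OBSERVED_PEERS = ["203.0.113.45", "192.0.2.10", "198.51.100.7"]

def classify_peer(ip_str: str) -> str:
    """Label a torrent peer 'corporate' if it falls inside a known
    corporate block, else 'external' (e.g. a residential connection)."""
    addr = ipaddress.ip_address(ip_str)
    return "corporate" if any(addr in net for net in CORPORATE_RANGES) else "external"

if __name__ == "__main__":
    for ip in OBSERVED_PEERS:
        print(f"{ip} -> {classify_peer(ip)}")
```

In practice such a lookup would be driven by WHOIS/ASN registrations rather than a hard-coded list, which is part of why IP-based attribution of this kind is contested evidence.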