Authors, Time to Get That (Anthropic) Bag (whatever.scalzi.com)

🤖 AI Summary
Anthropic, accused of using thousands of authors' copyrighted works to train its large language models, has quietly reached a settlement, and the plaintiffs' law firm has published a searchable database of works covered by the deal. The author here found 17 qualifying titles and filed claims; the settlement baseline is roughly $3,000 per title, though net payouts will shrink after attorney fees. Writers can opt out and sue individually, but most lack the resources for costly litigation, so this deal is the realistic outcome for many. The case matters because it puts a dollar figure on the value of scraped training data and confirms that major LLMs were built in part from commercial creative content, which helps explain why model outputs can sound like specific authors. The settlement is a minimal but significant legal reckoning: it caps Anthropic's liability exposure without reversing the benefits the company already captured (it sits on a multibillion-dollar valuation). Practically, authors should check the database and submit claims if their work is listed; more broadly, the episode increases pressure on AI builders to document data provenance, rethink sourcing and compensation practices, and prepare for further copyright-driven legal and policy change.