AI and Copyright: Expanding Copyright Hurts Everyone–Here's What to Do Instead (www.eff.org)

🤖 AI Summary
A new EFF position piece warns that expanding copyright to require licensing for AI training data would cripple socially valuable ML research, entrench tech monopolies, and curtail free expression. It argues that fair use has enabled a decade of text-and-data-mining (TDM) research, powering everything from telescope data analysis to protein-structure breakthroughs like AlphaFold, and that licensing the billions or trillions of works used to train foundation models is practically impossible and cost-prohibitive. Empirical evidence shows that countries with legal protections for TDM produce more ML research, while legal battles such as Thomson Reuters v. Ross Intelligence and Getty Images' litigation over Stable Diffusion illustrate how copyright claims can shut down competitors and push gatekeepers toward closed, proprietary models.

Technically and economically, mandatory licensing would concentrate control of training data with incumbents that own massive content libraries, raising barriers to entry, reducing model diversity, and worsening downstream harms such as higher costs, security risks, and less creative remixing. The piece stresses that copyright expansion won't address the real harms of generative AI (job displacement, privacy violations, nonconsensual imagery, misinformation, environmental costs) and recommends targeted fixes instead: stronger antitrust enforcement, privacy and labor protections, environmental rules, and media literacy. In short: preserve fair use and address root problems with issue-specific policy rather than broad copyright creep that would stifle innovation and expression.