Worries about Open Source in the age of LLMs (www.jvt.me)

🤖 AI Summary
A thoughtful blog post questions whether the rise of large language models (LLMs) and AI agents will erode the open source ecosystem. The author, a pro-open-source developer who favors AGPL-3.0 or Apache-2.0, argues that instead of reusing small composable libraries, developers may increasingly ask LLMs to regenerate the same snippets inline. That shift can balkanize code (hundreds of teams recreating the same logic), hide license provenance, and discourage upstream contribution and collaboration. Maintainers are already reacting by restricting scraping, moving projects off major hosts, or making repositories private to keep their work out of training data.

Technically, the trend risks practical harms: inlined or generated code evades the package tooling that tracks updates and security fixes; license compliance becomes harder when copying replaces declared dependencies; and "copyright laundering" creates legal uncertainty over LLM-derived output. There is also a feedback risk that LLMs trained on increasingly synthetic or proprietary code degrade over time.

The author recommends transparency, such as marking LLM-generated code so it can easily be reviewed or removed, and laments the potential loss of the social and educational benefits of contributory open source. The post is a call to preserve reuse, proper licensing, and community norms as AI changes how code is produced.
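The "mark LLM-generated code" recommendation could be as simple as a team-wide comment convention plus a script that surfaces marked blocks for review or removal. Below is a minimal sketch assuming hypothetical `LLM-GENERATED-BEGIN`/`LLM-GENERATED-END` markers; the post does not prescribe a specific format, so both the marker strings and the script are illustrative only:

```python
#!/usr/bin/env python3
"""Scan a source tree for blocks marked as LLM-generated.

The BEGIN/END marker strings are an assumption for illustration;
the post itself does not prescribe a marking convention.
"""
import pathlib
import re
import sys

# Hypothetical marker pair a team might standardise on in comments.
BEGIN = re.compile(r"LLM-GENERATED-BEGIN\b")
END = re.compile(r"LLM-GENERATED-END\b")


def find_marked_blocks(path: pathlib.Path):
    """Yield (start_line, end_line) for each marked block in a file."""
    start = None
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if BEGIN.search(line):
            start = lineno
        elif END.search(line) and start is not None:
            yield (start, lineno)
            start = None


def main(root: str) -> None:
    # Only Python files are scanned here; extend the glob for other languages.
    for path in pathlib.Path(root).rglob("*.py"):
        for start, end in find_marked_blocks(path):
            print(f"{path}:{start}-{end} marked as LLM-generated")


if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else ".")
```

A convention like this keeps generated code greppable, so it can later be audited for license provenance, reviewed, or stripped wholesale, which is the kind of easy removal/review the post argues for.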