🤖 AI Summary
Anthropic and Meta have loudly claimed that a large share of their code is now written by AI agents: Anthropic's leadership (including Claude Code head Boris Cherny) recently put the figure around 80%, while Meta reported roughly 30% and expects 50% by 2026. Those statements are plausible. These labs run far more powerful, experimental models internally than public subscriptions expose, and if you count lines with tools like GitHub Blame, trivial files, docs, prototypes, and boilerplate will naturally be attributed to AI. The announcements matter because they signal that frontier players are already deploying generative agents at scale, but the headline percentage masks what really drives product value.
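How such a percentage gets counted matters. Below is a minimal sketch of blame-based attribution, assuming AI-assisted commits carry the "Co-Authored-By: Claude" trailer that Claude Code adds by default; the script is illustrative only, not how either company actually measures.

```python
import subprocess
import sys

# Assumed marker: Claude Code appends a "Co-Authored-By: Claude ..." trailer
# to commits by default. Other agents would need a different marker.
AI_MARKER = "co-authored-by: claude"

def commits_per_line(path: str) -> list[str]:
    """Map each line of `path` to the commit that last touched it."""
    out = subprocess.run(
        ["git", "blame", "--line-porcelain", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    hashes = []
    for raw in out.splitlines():
        if raw.startswith("\t"):  # actual file content, not metadata
            continue
        parts = raw.split()
        # Header lines start with a 40-char commit hash, then line numbers.
        if (len(parts) >= 3 and len(parts[0]) == 40
                and parts[1].isdigit()
                and set(parts[0]) <= set("0123456789abcdef")):
            hashes.append(parts[0])
    return hashes

def is_ai_commit(commit: str, cache: dict) -> bool:
    """True if the commit message carries the AI co-author trailer."""
    if commit not in cache:
        msg = subprocess.run(
            ["git", "show", "-s", "--format=%B", commit],
            capture_output=True, text=True, check=True,
        ).stdout
        cache[commit] = AI_MARKER in msg.lower()
    return cache[commit]

def ai_line_share(path: str) -> float:
    cache: dict = {}
    # Skip the all-zeros hash git blame assigns to uncommitted lines.
    hashes = [h for h in commits_per_line(path) if set(h) != {"0"}]
    if not hashes:
        return 0.0
    return sum(is_ai_commit(h, cache) for h in hashes) / len(hashes)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(f"{path}: {ai_line_share(path):.0%} of lines from AI-assisted commits")
```

Note what this metric rewards: every generated test, doc, and scaffold line counts the same as a line of core logic, which is exactly why headline figures computed this way skew high.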
Technically, most AI-generated lines are low-risk (landing pages, tests, boilerplate) rather than core business logic. A rough decomposition cited by Andriy Burkov puts a typical codebase at ~10% boilerplate, 40% reuse of libraries/APIs, 20% rework, and 30% new business logic. The hard engineering work remains in design, reviews, integration, testing, maintenance, and legacy cost. Existing metrics (PR throughput, cycle time, deployment frequency, lead time) still matter; there is no single "Bill James" score for engineering productivity. The practical implication for teams: adopt process- and quality-focused metrics, invest in reviewers and architects, and treat AI as a force multiplier for producing code quickly, not as a replacement for the judgment and long-term stewardship that determine software value.
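A toy calculation over Burkov's decomposition shows how the headline share and the value-bearing share can diverge. The category weights come from the article; the per-category AI fractions are invented for this illustration, not reported figures.

```python
# Weights: rough codebase decomposition cited by Andriy Burkov.
# AI fractions: assumed for illustration only (AI dominates low-risk
# categories, contributes less to new business logic).
categories = {
    # name: (share of all lines, assumed fraction AI writes)
    "boilerplate":        (0.10, 0.95),
    "library/API reuse":  (0.40, 0.90),
    "rework":             (0.20, 0.80),
    "new business logic": (0.30, 0.50),
}

headline = sum(weight * ai for weight, ai in categories.values())
print(f"headline AI line share: {headline:.1%}")  # ~76.5%
print(f"AI share of new business logic: {categories['new business logic'][1]:.0%}")  # 50%
```

Under these assumptions a lab can truthfully report that roughly three quarters of its lines are AI-written while AI still produces only half of the logic that carries the product's value; that gap is what the headline number hides.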