GenAI Predictions (www.tbray.org)

🤖 AI Summary
A candid forecast argues that many headline promises of GenAI won't pan out. Hallucinations are likely intrinsic to current LLM training approaches: it is hard to connect model internals to ground truth, a view supported by research such as "Why Language Models Hallucinate". Don't expect a reliable fix soon.

The "reverse centaur" model (models do the work, humans tidy up) creates lots of low-quality "workslop" whose cleanup erodes any productivity gains, and even the gentler "centaur" mode (humans using AI as tools) probably won't eliminate tens of millions of knowledge jobs, because the net output/quality gain won't cover the enormous operating costs of AI.

Meanwhile, the money flowing into GenAI looks unsustainably large; a bubble could burst (perhaps around 2026), doing real financial damage without collapsing the broader economy.

Where GenAI will matter is more concrete: code generation is already practical because "reality" for code can be verified by compilation and tests, and agent-based systems can iteratively generate and validate snippets. Expect routine adoption for application logic, API glue (Android, AWS), SQL, and Stack Overflow-style lookups, while interaction design, low-level infrastructure, and concurrency-sensitive systems remain hard. Open questions include automated test generation and the safety of infrastructure-as-code.

Finally, beware the incentives: much of the push comes from vendors and investors, with environmental and labor-cost externalities. When the hype recedes, we'll likely end up with a more modest, mixed set of tools rather than the automated apocalypse some are selling.
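The generate-and-validate loop described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual agent: `generate_snippet` is a hypothetical stand-in for an LLM call (here it returns canned candidates so the loop runs), and the point is that `validate` checks the candidate against "reality" by compiling and executing it against tests, feeding failures back to the generator.

```python
from typing import Optional

def generate_snippet(task: str, feedback: Optional[str]) -> str:
    """Hypothetical stand-in for an LLM call: returns candidate code.
    The first draft is deliberately buggy; the 'model' fixes it when
    given test feedback, mimicking an agent's retry behavior."""
    if feedback is None:
        return "def add(a, b):\n    return a - b\n"  # buggy first draft
    return "def add(a, b):\n    return a + b\n"      # corrected retry

def validate(code: str) -> Optional[str]:
    """Ground truth for code: does it compile and pass the tests?
    Returns None on success, or an error message as feedback."""
    ns: dict = {}
    try:
        exec(compile(code, "<candidate>", "exec"), ns)
        assert ns["add"](2, 3) == 5
        assert ns["add"](-1, 1) == 0
    except Exception as e:
        return f"{type(e).__name__}: {e}"
    return None

def agent_loop(task: str, max_iters: int = 3) -> str:
    """Iteratively generate a snippet and check it against reality."""
    feedback = None
    for _ in range(max_iters):
        code = generate_snippet(task, feedback)
        feedback = validate(code)
        if feedback is None:
            return code
    raise RuntimeError("no passing snippet within iteration budget")

print(agent_loop("write add(a, b)"))
```

The key design point is that the verifier, not the model, is the arbiter: the loop terminates only when the code actually compiles and passes tests, which is exactly why code is a friendlier domain for GenAI than prose.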