Why AI won't work as a software development abstraction (blog.robbowley.net)

🤖 AI Summary
The piece argues that treating large language models as a new "abstraction layer" for software, where prompts are the source of truth and AI regenerates systems the way a compiler regenerates binaries, fails even under generous assumptions (e.g., hallucination and non-determinism solved). The core issue is complexity: software requirements change continually and entropy accumulates. Human engineering disciplines (refactoring, architecture, composition) exist precisely to manage that complexity, and an LLM without superintelligent reasoning will inherit the same accumulation of technical debt, likely faster rather than slower.

Beyond these conceptual limits, there are hard technical and physical constraints. Rebuilding large systems from prompts would require enormous compute, time, and energy; by the author's back-of-envelope estimate, regenerating a mid-size 500k-LOC codebase with today's LLMs would take days and cost thousands of dollars, breaking the tight human feedback loops (seconds to minutes) that development depends on. That shift moves effort from human cognition to machine compute, but is currently far less efficient, which the author frames as the second law of thermodynamics playing out in software engineering.

For the AI/ML community this is a caution: LLMs are powerful assistants, but replacing software engineering's abstractions demands breakthroughs in scalability, incremental generation, and energy efficiency, or a rethink toward hybrid human-AI workflows.
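The feasibility claim above reduces to simple arithmetic: tokens needed to regenerate the codebase, divided by generation throughput, multiplied by token price. The Python sketch below works through that calculation. Every parameter (tokens per line, throughput, price per million tokens, rebuild count) is an assumed placeholder rather than a figure from the post, so the printed numbers only illustrate the order of magnitude being argued.

```python
# Back-of-envelope estimate for regenerating a mid-size codebase from prompts,
# in the spirit of the post's argument. All parameters are illustrative
# assumptions, not figures from the original article.

LINES_OF_CODE = 500_000
TOKENS_PER_LINE = 10            # assumed average tokens per line of source
TOKENS_PER_SECOND = 50          # assumed single-stream generation throughput
USD_PER_MILLION_TOKENS = 15.0   # assumed output-token price
REGENERATIONS = 50              # assumed full rebuilds over a dev cycle

tokens_per_build = LINES_OF_CODE * TOKENS_PER_LINE
hours_per_build = tokens_per_build / TOKENS_PER_SECOND / 3600
cost_per_build = tokens_per_build / 1e6 * USD_PER_MILLION_TOKENS

print(f"Tokens per full rebuild: {tokens_per_build:,}")
print(f"Time per full rebuild:   {hours_per_build:.0f} hours "
      f"(~{hours_per_build / 24:.1f} days)")
print(f"Cost per full rebuild:   ${cost_per_build:,.0f}")
print(f"Cost over {REGENERATIONS} rebuilds: ${cost_per_build * REGENERATIONS:,.0f}")
```

Even under these charitable assumptions, a single full rebuild lands in the hours-to-days range and repeated rebuilds run into the thousands of dollars, far from the seconds-to-minutes feedback loop the post identifies as essential.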