🤖 AI Summary
After a decade of watching documentation get written and then ignored, the author argues that LLMs change the question from “how much should we document?” to “what should remain documented?” Because large models make writing cheap, the instinct to produce more static docs is misguided: most code documentation is a temporal snapshot of intent, and production code changes faster than those snapshots can be kept accurate. Instead of bloating repositories with stale explanations, developers can ask LLMs for on-demand explanations tailored to their needs, dialing the complexity up or down, which the author frames as an “AI proficiency” skill. Conversely, piling prose on top of code to help coding agents is counterproductive; agents perform better when they work directly from the codebase rather than through an extra layer of text.
The practical takeaway for the AI/ML community is to shift from quantity to precision: keep a minimal set of enduring artifacts (API specs, operational runbooks, and architecture decision records) that capture interfaces, operations, and irreversible choices, and rely on agents to generate ephemeral explanations in real time. That implies tooling and workflows that verify and surface up‑to‑date code understanding, integrate LLMs into code reading and change processes, and treat documentation as a living, on‑demand layer rather than a static product. This reduces maintenance overhead while preserving the contractual and operational docs teams must keep.
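To make the “living, on-demand layer” idea concrete, here is a minimal sketch of the kind of tooling the takeaway points at: a small script that regenerates an explanation of a source file from the code as it exists right now, tailored to a chosen audience, instead of storing a description that will drift. The use of the OpenAI Python client, the model name, the prompt wording, and the CLI shape are illustrative assumptions, not anything prescribed by the original article.

```python
"""Minimal sketch: ephemeral, on-demand code explanation instead of a static doc.

Assumes the OpenAI Python client (`pip install openai`) and an OPENAI_API_KEY
in the environment; model, prompt, and CLI are hypothetical choices.
"""
import argparse
import pathlib

from openai import OpenAI


def explain(path: str, audience: str) -> str:
    # Read the file as it exists today; nothing about it is cached or stored.
    source = pathlib.Path(path).read_text(encoding="utf-8")
    client = OpenAI()  # picks up OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable code model works
        messages=[
            {
                "role": "system",
                "content": (
                    f"Explain the following source file for {audience}. "
                    "Focus on intent and interfaces, not a line-by-line paraphrase."
                ),
            },
            {"role": "user", "content": source},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="On-demand code explanation")
    parser.add_argument("path", help="source file to explain")
    parser.add_argument(
        "--audience",
        default="a developer new to this codebase",
        help="who the explanation is tailored to (dials the complexity)",
    )
    args = parser.parse_args()
    print(explain(args.path, args.audience))
```

Because the output is regenerated from current code on each run, there is nothing to keep in sync: the enduring artifacts (API specs, runbooks, ADRs) stay in the repository, and everything else is produced when someone actually needs it.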