Faking a Rational Design Process in the AI Era: Why Documentation Matters (albertsikkema.com)

🤖 AI Summary
In 1986, Parnas and Clements argued that since a perfectly rational design process is impossible, teams should nonetheless document their work as if they had followed one. That idea is freshly urgent in 2025: AI programming assistants (the author’s example is Claude Code, alongside tools like GitHub Copilot and Cursor) don’t retain memory between sessions, so documentation becomes the single source of truth guiding every stateless AI interaction.

The piece maps the original “fake it” prescription onto an AI-assisted, PDCA-style workflow of research, plan, execute, and review/rationalize, in which documentation is both the input and the output of each cycle rather than an afterthought. Technically, this means maintaining machine- and human-readable artifacts that AI agents reference before acting: CLAUDE.md as the canonical project guide, a README for implementation flows, and ADRs recording decisions and the alternatives considered. During execution the AI implements the plan while humans handle surprises; afterward, the team updates the docs to present the cleaned-up, rationalized design alongside the rejected alternatives.

The implications for AI-assisted development are concrete: consistent architecture and coding patterns across sessions, fewer divergent styles, faster onboarding for humans and agents alike, and a feed-forward loop in which each session inherits refined context, reducing technical debt and improving validation. Ultimately, “faking” rationality becomes a discipline that constrains the solution space, amplifies human-in-the-loop judgment, and makes AI collaboration reliable.
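
The article doesn’t reproduce these artifacts, but a minimal CLAUDE.md in that spirit might look like the sketch below. Its headings, paths, and conventions are illustrative assumptions, not details from the post; the point is that an agent reading this file at the start of every session re-inherits decisions it cannot remember.

```markdown
<!-- CLAUDE.md: illustrative sketch; structure and contents assumed, not from the article -->
# Project Guide (read before acting)

## Architecture
- Domain logic lives in `core/`, adapters in `adapters/`.
- All persistence goes through the repository interfaces in `core/ports/`.

## Conventions
- TypeScript strict mode; no `any` without a referenced ADR.
- Tests live next to the code they cover (`*.test.ts`).

## Workflow
1. Research: read the relevant ADRs in `docs/adr/` first.
2. Plan: propose changes as a checklist before editing code.
3. Execute: implement the plan; flag surprises for a human.
4. Rationalize: update this file and the ADRs after the change lands.

## Pointers
- Implementation flows: see README.md
- Decision history: see docs/adr/
```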
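
The review/rationalize step pairs naturally with a lightweight ADR. The sketch below follows the widely used Nygard ADR format with an added “alternatives considered” section; the specific decision, reasons, and file path are hypothetical, chosen only to show how a cleaned-up decision and its rejected alternatives get recorded for future sessions.

```markdown
<!-- docs/adr/0007-use-postgres-for-persistence.md: hypothetical example -->
# 7. Use Postgres for persistence

Status: Accepted
Date: 2025-01-15

## Context
The service needs durable, queryable storage. The messy exploration that
led here is summarized, not transcribed.

## Decision
Use Postgres, accessed via the repository interfaces in `core/ports/`.

## Alternatives considered
- SQLite: rejected because production requires concurrent writers.
- A managed NoSQL store: rejected to avoid vendor lock-in at this stage.

## Consequences
Future sessions (human or AI) should treat this as settled unless this
ADR is explicitly superseded.
```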