🤖 AI Summary
A veteran developer argues that two enduring truths of software—"we don't know what to build" and "it's never done"—remain true in the AI era, and that current AI coding tools don't change those fundamentals. The piece traces prior waves of "no-code" optimism (4GLs, RUP/model-driven generation) that promised to eliminate programming but failed, and warns that today's AI-assisted scaffolding can produce working artifacts without the tacit understanding born of iterative discovery, experimentation, and modeling. When that embedded knowledge resides only in prompts, reviewer decisions, or a single expert's head, it creates reproducibility, maintainability, and long-term risks: the resulting artifacts become hollow, hard-to-evolve systems.
For the AI/ML community this is a cautionary note about misplaced hype and incentives. AI code generation improves efficiency and accessibility (helping non-coders and consultants), but it doesn't replace the epistemic work of defining requirements, evolving mental models, or maintaining software across its lifecycle. Technical implications include brittle knowledge provenance when prompts are the primary source of record, increased dependence on human experts for system understanding, and persistent maintenance burdens illustrated by legacy-system anecdotes (e.g., single-expert COBOL/Wang survivors). The author concedes future AI might narrow these gaps, but until then engineering practices—experimentation, documentation, architecture, and upkeep—remain essential.