🤖 AI Summary
An experienced practitioner argues that current hype around AI agents misses a crucial gap: LLM-based agents are rich repositories of theoretical knowledge but lack the practical wisdom that comes from hands‑on experience. Treating them like deterministic tools (vending machines or scripts) and relying on zero‑ or few‑shot prompts leads to inconsistent, brittle results. Instead, the author proposes a new mental model—treat agents as novices who have read thousands of books but never shipped real systems—and "mentor" them through iterative, conversational collaboration so their strengths (pattern knowledge, design principles, jargon) complement human strengths (debugging experience, trade‑off wisdom).
Practically, the author lays out a workflow: decompose problems jointly, have the agent propose holistic solutions, iteratively refine solutions and plans, save snapshots, then implement step‑by‑step while pausing for human approval. A sample back‑and‑forth shows how this loop surfaces needed constraints and alternative designs. The piece closes with a tooling implication: code should be structured to be intelligible to both humans and agents—hence the author’s open‑source C# library Flow, designed to make business logic clearer for automated analysis and generation. The takeaways: rethink prompting from single-shot to mentored collaboration, design workflows and code to expose real‑world constraints, and build tools that bridge knowledge and wisdom.
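The workflow above (decompose jointly, propose a holistic plan, refine until approved, snapshot, then implement step by step with human sign-off) can be sketched as a simple control loop. This is a hypothetical illustration, not code from the post: `agent` and `approve` are stand-in callables (an LLM call and a human-review gate), and all prompts and names are assumptions.

```python
def mentor_agent(problem, agent, approve):
    """Drive an LLM agent through a mentored loop: decompose the problem,
    propose a plan, refine it with human feedback, snapshot, then
    implement each step while pausing for human approval.

    `agent(prompt) -> str` and `approve(text) -> bool` are hypothetical
    stand-ins for the LLM call and the human reviewer."""
    # Decompose the problem jointly; assume one subtask per line.
    subtasks = agent(f"Decompose this problem into subtasks: {problem}").splitlines()

    # Have the agent propose a holistic solution, then refine iteratively.
    plan = agent(f"Propose a holistic solution for: {subtasks}")
    while not approve(plan):
        plan = agent(f"Refine this plan using my feedback: {plan}")

    snapshots = [plan]  # save a snapshot of the approved plan
    results = []
    for step in subtasks:
        # Implement step by step, pausing for human approval each time.
        draft = agent(f"Implement '{step}' following the plan: {plan}")
        while not approve(draft):
            draft = agent(f"Revise this draft: {draft}")
        results.append(draft)
        snapshots.append(draft)  # snapshot each approved increment
    return results
```

The human approval gate is the point of the mental model: the agent supplies pattern knowledge, while the reviewer injects the trade-off wisdom the agent lacks at each pause.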