🤖 AI Summary
LLMs aren’t broken tools; they’re probabilistic coworkers. The piece argues that frustration with models often stems from expecting deterministic, compiler-like behavior when models actually behave like human teammates: context-hungry, improvisational, and sometimes confidently wrong. A short, vague prompt returns the model’s best guess from its training data, not your private intent, and every session starts without shared history. That mismatch is why an “it didn’t do what I meant” reaction is as common with LLMs as it is with human colleagues.
For practitioners this implies concrete changes to process and engineering: treat LLMs as components in a non-deterministic system and design around that. Write clearer specs (prompt engineering = better specs), plan before generating, keep feedback loops short, and curate context explicitly, using project-wide docs such as the README and docs/system-design.md, agent instruction files like .github/copilot-instructions.md and AGENTS.md, and direct pointers to the exact files the model should read. Rely on tests, CI, automation, version control, and code review to catch and roll back model-induced errors. Remember the model’s limits (finite context windows, no persistent memory) and surface tribal knowledge in durable docs rather than leaving it in people’s heads. Crucially, agents amplify whatever process you already have: good onboarding, tests, and automation make LLMs productive; sloppy processes make them noisy and risky.
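The mention of AGENTS.md and .github/copilot-instructions.md points at a concrete practice: keep the context an agent needs in a small, durable file checked into the repo. Below is a minimal sketch of what such a file might contain; the section names, tools, and conventions are illustrative assumptions, not taken from the original article.

```markdown
<!-- AGENTS.md: hypothetical example; file names and conventions are placeholders -->
# Agent instructions (illustrative sketch)

## Project layout
- src/        application code
- tests/      test suite; run it before proposing changes
- docs/system-design.md  architecture overview; read before structural changes

## Conventions
- Follow the existing formatting and linting setup; do not introduce new tools
- Never commit directly to main; open a PR so CI and code review can catch errors

## Tribal knowledge worth surfacing
- Note known-legacy modules here so the agent extends the supported path instead
```

Checked into version control, a file like this persists across sessions, which is one way to compensate for the model’s finite context window and lack of persistent memory.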