🤖 AI Summary
Practical lessons from working with coding agents in 2025: success starts with the prompt, which is how you transmit intent, state, and constraints. Whether an agent has tool access (terminal, CI, browser) dramatically changes what you should ask of it: without tools you must frontload test results; with tools you can instruct the agent to run, iterate, and verify on its own. Always onboard agents with precise context (code patterns, edge cases, expected lint and build steps) so they behave like a well-briefed engineer. Make verification explicit (compile, run unit and integration tests, lint) and formalize recurring rules or commands (e.g., /add-new-service) so the system learns reusable flows and reduces back-and-forth.
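As a concrete instance of an explicit verification step, here is a minimal sketch of a gate script an agent can be told to run after every change. It assumes a Node project with npm scripts named lint, build, and test; those names are assumptions, not something from the source.

```typescript
// verify.ts: a minimal verification gate for an agent to run after each change.
// Assumes npm scripts named "lint", "build", and "test" exist (hypothetical).
import { execSync } from "node:child_process";

const steps: [string, string][] = [
  ["lint", "npm run lint"],
  ["build", "npm run build"],
  ["tests", "npm test"],
];

for (const [name, cmd] of steps) {
  try {
    // Inherit stdio so the agent sees full compiler/test output on failure.
    execSync(cmd, { stdio: "inherit" });
    console.log(`PASS: ${name}`);
  } catch {
    console.error(`FAIL: ${name}. Fix this before continuing.`);
    process.exit(1);
  }
}
```

Encoding the gate as a script (rather than prose instructions) makes the verification loop reusable across tasks and unambiguous to the agent.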
Model choice and workflow matter: faster models keep you in flow for rapid iteration, while slower models can be left to run autonomously if given a detailed plan and validation steps. Treat mature agents as background workers by giving them a clearly scoped objective, verification criteria, and a defined level of autonomy; examples include implementing cursor-based pagination with limit/cursor parameters and a nextCursor response field, or an autonomous task list that runs integration tests and updates docs. Parallel agents (ideally 1–3) multiply throughput but require non-overlapping scopes to avoid coordination overhead: for example, one agent adds express-rate-limit backed by Redis (100 requests per 15 minutes for authenticated users, 20 for anonymous), another implements SendGrid email templates, and a third updates a socket.io dashboard hook. Together these practices shift AI coding from ad hoc assistance to scalable, verifiable engineering workflows.
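A sketch of the cursor-based pagination task described above, assuming an Express endpoint; the /items route and the in-memory stand-in for a database table are hypothetical.

```typescript
import express from "express";

const app = express();

// In-memory stand-in for a database table ordered by id (hypothetical data).
const items = Array.from({ length: 250 }, (_, i) => ({
  id: i + 1,
  name: `item-${i + 1}`,
}));

app.get("/items", (req, res) => {
  // Cap the page size so a client cannot request the whole table at once.
  const limit = Math.min(Number(req.query.limit) || 20, 100);
  const cursor = Number(req.query.cursor) || 0;

  // Fetch one extra row beyond the limit to detect whether a next page exists.
  const page = items.filter((it) => it.id > cursor).slice(0, limit + 1);
  const hasMore = page.length > limit;
  const results = hasMore ? page.slice(0, limit) : page;

  res.json({
    items: results,
    // nextCursor is the last returned id, or null when the result set is exhausted.
    nextCursor: hasMore ? results[results.length - 1].id : null,
  });
});

app.listen(3000);
```

A task scoped like this (one endpoint, a fixed request/response contract, a clear done condition) is exactly the kind of objective an agent can verify on its own by calling the endpoint and checking the nextCursor chain.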
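And a sketch of the first parallel scope, assuming express-rate-limit v7 with the rate-limit-redis store and the node-redis client; treating the presence of an Authorization header as "authenticated" is a stand-in for real auth, and the module must run as ESM for top-level await.

```typescript
import express from "express";
import { rateLimit } from "express-rate-limit";
import { RedisStore } from "rate-limit-redis";
import { createClient } from "redis";

const app = express();

const client = createClient();
await client.connect();

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15-minute window
  // 100 requests for authenticated callers, 20 for anonymous ones.
  // Checking the Authorization header is a placeholder for real auth.
  limit: (req) => (req.headers.authorization ? 100 : 20),
  standardHeaders: true,
  legacyHeaders: false,
  store: new RedisStore({
    // Delegate counter storage to Redis so limits hold across app instances.
    sendCommand: (...args: string[]) => client.sendCommand(args),
  }),
});

app.use(limiter);
app.get("/", (_req, res) => {
  res.send("ok");
});
app.listen(3000);
```

Because this scope touches only middleware and a Redis connection, it does not overlap with the SendGrid or socket.io tasks, which is what lets the three agents run in parallel without coordination overhead.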