🤖 AI Summary
A practical playbook for getting reliable results from Codex, Cursor and Claude Code: create persistent memory/context files (CLAUDE.md for common commands and project norms, AGENTS.md for rules) and use Cursor’s @-mentions (@codebase, @docs) to surface real‑time code context. Prefer “boring” stable libraries that predate model training cutoffs so LLMs have good examples in their data, and feed recent docs/examples when using newer libs. Prime agents by dumping relevant code, then follow a Read → Plan → Code → Commit loop: ask the model to read files, make a plan, implement, run tests and open a PR. Use exact function signatures and detailed specs (e.g., async def download_db(...)) to constrain output, request multiple approaches with pros/cons, and employ prototypes and screenshots to iterate UI and behavior.
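For instance, a minimal sketch of the "exact function signature" technique (the parameter names, types, and requirements below are illustrative assumptions, not details from the source post): handing the model a fully typed stub with the behavior spelled out leaves it little room to improvise.

```python
# Hypothetical stub pasted into the prompt: the exact signature, types, and
# behavioural requirements constrain what the assistant may produce.
from pathlib import Path


async def download_db(url: str, dest: Path, *, timeout_s: float = 30.0) -> Path:
    """Download the database snapshot at `url` to `dest` and return the local path.

    Requirements for the assistant:
    - stream the response to disk; never hold the whole file in memory
    - raise on non-200 responses instead of returning a partial file
    - write to a temporary path first, then rename atomically to `dest`
    """
    ...
```

The assistant then fills in the body against a contract you have already fixed, which keeps the resulting diff small and easy to review.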
Operational best practices: use escalating keywords such as "think", "think hard", or "ultrathink" to force extended planning; treat the assistant as a precise intern (give exact signatures); offload tedious refactors while keeping code simple and explicit; log extensively so agents can self-diagnose; and let agents run tests and attempt fixes, but always run manual QA. Use subagents for verification, ask the assistant to review its own diffs, and always inspect full diffs and run tests yourself: LLMs speed development but do not replace human responsibility for correctness and design.
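As a rough illustration of "log extensively so agents can self-diagnose" (the logger name, format, and `sync_records` function are assumptions made for this example, not taken from the source), explicit, greppable log lines give an agent concrete output to reason about when a test fails.

```python
# Sketch of verbose, explicit logging so an agent can trace failures from output alone.
import logging
import sys

logging.basicConfig(
    stream=sys.stderr,
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s %(funcName)s: %(message)s",
)
log = logging.getLogger("sync")


def sync_records(records: list[dict]) -> int:
    """Upsert records, logging every decision so failures are easy to localize."""
    written = 0
    for rec in records:
        if "id" not in rec:
            log.warning("skipping record without id: %r", rec)
            continue
        log.debug("upserting id=%s fields=%s", rec["id"], sorted(rec))
        written += 1
    log.info("sync complete: %d/%d records written", written, len(records))
    return written


if __name__ == "__main__":
    sync_records([{"id": 1, "name": "a"}, {"name": "missing id"}])
```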