🤖 AI Summary
Repo Prompt argues that careful planning and context curation beat “convenient” coding agents for reliable code changes. The author highlights an “agent orientation” problem: agents waste precious context window searching for and ingesting unrelated files, system prompts, tool specs, and failed calls, which degrades reasoning as the model approaches its effective context limit. Models also show a gap between advertised and effective context (historically a sharp drop past ~32k tokens; many current models stay robust through 64–128k but degrade badly beyond that), smaller models suffer more, and reasoning models (e.g., GPT-5 Pro) work best when allowed to “think then act” rather than interleave tool-driven edits mid-chain-of-thought.
Repo Prompt’s workflow addresses this with a two-stage discovery-plus-plan approach. A Context Builder (orchestrating Claude Code/Codex and MCP tools) collects only the most relevant files and generates codemaps and handoff prompts while enforcing a token budget (60k by default for GPT-5 Pro; 24–32k suggested when pasting into other agents). Techniques include file slicing, compact codemaps, and an XML Pro Edit mode in which a reasoning model receives full context, outputs a minimal XML list of edits, and Repo Prompt applies the per-file changes in sandboxes. For larger scopes it generates architectural plans via GPT-5 Pro; for multi-step projects it supports staged, pair-programming workflows. The result: fewer context losses, clearer verifiable specs, and higher-quality code edits from LLMs.
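The token-budget idea above can be sketched as a greedy packing loop: rank candidate files by relevance and include them in the handoff prompt until the budget is exhausted. This is a minimal illustration, not Repo Prompt's actual implementation; the function names, the relevance scores, and the ~4-characters-per-token heuristic are all assumptions.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token (a real tool would use a tokenizer).
    return len(text) // 4

def build_context(files: list[tuple[str, str, float]], budget: int = 60_000) -> str:
    """files: (path, contents, relevance score). Returns a packed handoff prompt.

    Greedily adds the highest-relevance files first, skipping any file that
    would push the total past the token budget (e.g. 60k for GPT-5 Pro,
    24-32k when pasting into another agent).
    """
    chunks, used = [], 0
    for path, contents, _score in sorted(files, key=lambda f: -f[2]):
        cost = estimate_tokens(contents)
        if used + cost > budget:
            continue  # over budget: leave this file for a codemap summary instead
        chunks.append(f'<file path="{path}">\n{contents}\n</file>')
        used += cost
    return "\n".join(chunks)

# Usage: the small, highly relevant file fits; the huge doc is skipped.
files = [
    ("src/core.py", "def apply_edit(): ...", 0.9),
    ("README.md", "docs " * 50_000, 0.2),  # ~62k tokens, exceeds the budget
]
prompt = build_context(files, budget=1_000)
print("src/core.py" in prompt, "README.md" in prompt)  # → True False
```

A production version would use a real tokenizer and fall back to compact codemaps or file slices for files that don't fit whole, as the summary describes.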