Context Management in Amp (ampcode.com)

🤖 AI Summary
The context window is the complete input a model uses to generate output — messages, model replies, tool calls, file contents, and even system prompts — and everything in it influences the result. Amp treats a Thread as that context window, but because Amp is an agent (model + system prompt + tools), the window also contains tool definitions, AGENTS.md from your repo, and environmental metadata (OS, current files, open selection).

Two core limits matter: context windows are finite (models have hard token limits), and because inference effectively multiplies token representations against one another, every token can affect output — more context often means worse or drift-prone results.

Amp gives concrete controls to manage this:

- Include files with @-mentions (text files are truncated to 500 lines and ~2KB per line).
- Run shell commands with $ (both the command and its output go into the window).
- Edit or restore past messages to remove content (edits reset the thread and re-run inference).
- Fork threads to branch contexts.
- Use Handoff to distill and move only the relevant information into a new, focused thread.

You can also reference other threads and let Amp's read_thread tool (a second-model extraction) pull targeted items without copying entire histories. These features make agent behavior more predictable, help avoid token bloat and unwanted influence, and enable modular, multi-thread workflows for complex engineering tasks.