🤖 AI Summary
Researchers and builders are pushing a simple but underused idea in context engineering for LLM agents: use hyperlinks (URIs) as the primary mechanism for surfacing on-demand context. Instead of stuffing everything into a long system prompt or building dozens of bespoke get_* tools, agents should carry a small "read resources" capability that accepts URIs and fetches only the documents the model actually needs. This approach preserves an append-only conversation history (which improves prompt cacheability), supplies fresh context close to the model's decision point, and avoids overwhelming the model with irrelevant tokens — a practical, token- and tool-efficient answer to relevance and recency problems.
Technically, the pattern needs only a lightweight scaffold: an entrypoint URI, a tool that accepts lists of URIs, and support for recursively following links. It works across web pages, HTTP APIs, local files, and MCP Resources (server-registered URIs for dynamic or static content). The author demonstrates it in ~30 lines of JavaScript (using Genkit) and describes adding read_resources support to a Firebase MCP server; agents like Gemini CLI, Claude Code, and Cursor already follow links when given this tooling. For broader adoption, MCP-enabled agents should expose a read_resources tool and aggregate reads across MCP servers and the web; indexing and RAG-layer search over MCP resources would be natural next steps. In short: hyperlinks let agents discover, fetch, and apply just-in-time context cheaply and cleanly — a small change with outsized impact.
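The scaffold described above can be sketched in a few dozen lines of JavaScript. This is a hypothetical illustration, not the author's Genkit demo: resource fetching is stubbed with an in-memory map keyed by made-up `app://` URIs, where a real agent would dispatch on scheme (`https://`, `file://`, MCP resource URIs) and use `fetch()`, the filesystem, or an MCP client. The crawl loop stands in for the model deciding which links to follow.

```javascript
// In-memory stand-in for real resources (web pages, files, MCP resources).
const resources = new Map([
  ["app://docs/index", "Start here. See also app://docs/setup and app://docs/deploy."],
  ["app://docs/setup", "Install the CLI, then run init."],
  ["app://docs/deploy", "Run deploy after setup."],
]);

// Extract URIs mentioned in a document so the agent can follow links.
function extractLinks(text) {
  return text.match(/\b[a-z][a-z0-9+.-]*:\/\/[^\s.,)]+/g) ?? [];
}

// The single generic tool: given a list of URIs, return { uri, content }
// records. This replaces dozens of bespoke get_* tools.
function readResources(uris) {
  return uris.map((uri) => ({
    uri,
    content: resources.get(uri) ?? `error: resource not found: ${uri}`,
  }));
}

// An agent starts from one entrypoint URI and recursively follows links.
// A real agent would let the model choose which links are relevant; this
// sketch follows all of them, breadth-first, deduplicated.
function crawl(entrypoint) {
  const seen = new Set();
  const queue = [entrypoint];
  const docs = [];
  while (queue.length > 0) {
    const uri = queue.shift();
    if (seen.has(uri)) continue;
    seen.add(uri);
    const [doc] = readResources([uri]);
    docs.push(doc);
    queue.push(...extractLinks(doc.content));
  }
  return docs;
}

const fetched = crawl("app://docs/index");
console.log(fetched.map((d) => d.uri));
// → ["app://docs/index", "app://docs/setup", "app://docs/deploy"]
```

Note how the conversation stays append-only: each followed link adds a fresh tool result near the decision point rather than rewriting earlier context.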