🤖 AI Summary
MCP servers expose many ready-made tools to agents. That is convenient but often harmful: a large toolset confuses the LLM (which must pick from dozens of tool definitions), inflates token usage and latency (and risks hitting context limits), and can let agents perform destructive write operations unintentionally. The post explains why trimming MCP toolsets matters for reliability, cost, and safety, and recommends least-privilege tokens plus optional human-in-the-loop confirmation whenever write operations are possible.
Practically, the author shows how to filter tools in three environments; a short, hedged sketch of each follows below.

- GitHub Copilot (VS Code agent mode): globally enable or disable tools via the gear icon in the chat view, or create custom chat modes (modename.mode.md) that declare an explicit "tools" allowlist, for example a fixer mode listing only issue- and test-related tools.
- Langchain v1: use langchain_mcp_adapters and a MultiServerMCPClient to fetch the server's tools, filter them by name (e.g., keep only "list_issues", "search_code", "search_issues", and "search_pull_requests"), then create the agent from the filtered list. Langchain middleware can additionally require human confirmation before sensitive tool calls.
- Pydantic AI: connect with MCPServerStreamableHTTP and call server.filtered(lambda ctx, tool: tool.name in allowed_tool_names), passing the result to Agent as a toolset.

The technique is the same across frameworks: fetch the server's tools, apply an allowlist predicate, and instantiate the agent with the filtered set to improve accuracy, cost-efficiency, and safety.
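A minimal custom chat mode file might look like the sketch below. The filename, description, and exact tool identifiers are illustrative assumptions; the real names depend on which MCP servers and built-in tools your VS Code install exposes.

```markdown
---
description: Fix bugs reported in GitHub issues
tools: ['list_issues', 'search_issues', 'search_code', 'runTests']
---
You are a fixer agent. Investigate the referenced issue, reproduce it
with the test tools, and propose a fix. Do not use any other tools.
```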
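For Langchain v1, the fetch-then-filter flow could look like the following sketch. The GitHub MCP endpoint URL, model name, and prompt are assumptions, not taken from the post; in practice you would authenticate with a least-privilege token.

```python
import asyncio

from langchain.agents import create_agent
from langchain_mcp_adapters.client import MultiServerMCPClient

# Read-only GitHub tools the agent is allowed to use.
ALLOWED_TOOLS = {"list_issues", "search_code", "search_issues", "search_pull_requests"}

async def main():
    client = MultiServerMCPClient(
        {
            "github": {
                "transport": "streamable_http",
                # Assumed endpoint; point this at your GitHub MCP server.
                "url": "https://api.githubcopilot.com/mcp/",
            }
        }
    )
    tools = await client.get_tools()  # fetch every tool the server exposes
    tools = [t for t in tools if t.name in ALLOWED_TOOLS]  # apply the allowlist

    agent = create_agent("openai:gpt-4o-mini", tools=tools)
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "Find open issues about flaky tests."}]}
    )
    print(result["messages"][-1].content)

asyncio.run(main())
```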
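In Pydantic AI, server.filtered() wraps the MCP server in a toolset that hides every tool outside the allowlist; again, the endpoint URL and model name below are assumptions.

```python
import asyncio

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

allowed_tool_names = {"list_issues", "search_code", "search_issues", "search_pull_requests"}

# Assumed endpoint; use your GitHub MCP server's URL.
server = MCPServerStreamableHTTP("https://api.githubcopilot.com/mcp/")

# Keep only the allowlisted tools; everything else is invisible to the agent.
filtered_server = server.filtered(lambda ctx, tool: tool.name in allowed_tool_names)

agent = Agent("openai:gpt-4o-mini", toolsets=[filtered_server])

async def main():
    async with agent:  # opens the MCP connection for the duration of the run
        result = await agent.run("Find open issues about flaky tests.")
        print(result.output)

asyncio.run(main())
```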