Improving MCP tool call performance through LLM code generation (github.com)

🤖 AI Summary
mcpcodeserver is a new Model Context Protocol (MCP) proxy that rewrites multi-tool orchestration as TypeScript code generation: instead of LLMs issuing many sequential JSON-formatted tool calls (which burn tokens, complicate data passing, and limit control flow), parent models ask the proxy to generate and execute TypeScript that calls multiple child MCP tools in sequence or in parallel. The approach leverages LLMs' strong grounding in real-world code (see CodeAct and Cloudflare Code Mode) and yields more compact prompts, natural use of variables/loops/try-catch, easier data flow between tools, and measurable improvements in agent success rates and token efficiency.

Technically, mcpcodeserver acts as an MCP client to one or more child servers, discovers their tools, and exposes three parent-facing tools: list_servers, get_tool_definitions (returns TypeScript typings, filterable by server), and generate_and_execute_code (generates sandboxed TypeScript that invokes discovered tools). It auto-refreshes tool catalogs (checks every ~30s and notifies parents on changes), namespaces generated functions by server (e.g., filesystem_read_file), supports pass-through MCP features like elicitation/roots/sampling when available, and runs via npx/bun with stdio or HTTP transport (Node >=18 recommended).

The result: LLMs can orchestrate complex, multi-tool workflows more reliably, more efficiently, and with richer error handling by generating executable code rather than making repeated tool calls.
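To make the pattern concrete, here is a hedged sketch of what generated orchestration code might look like under this approach. The function signature for `filesystem_read_file` and the `summarizeConfigs` workflow are illustrative assumptions, not the project's actual generated output; the real typings come from whatever schemas the child servers expose via get_tool_definitions.

```typescript
// Assumed typing for a server-namespaced tool function; in the real proxy
// this signature is derived from the child server's tool schema, and the
// body is a sandboxed bridge back to the child MCP server (stubbed here).
async function filesystem_read_file(args: { path: string }): Promise<string> {
  return `contents of ${args.path}`; // stub for illustration only
}

// Hypothetical generated workflow: parallel tool calls via Promise.all,
// ordinary variables for data flow, and try/catch for error handling --
// all of which would otherwise take multiple JSON tool-call round trips.
async function summarizeConfigs(paths: string[]): Promise<string[]> {
  try {
    return await Promise.all(
      paths.map((p) => filesystem_read_file({ path: p }))
    );
  } catch (err) {
    // Failures are handled in code instead of surfacing as a failed
    // tool call the parent model must reason about and retry.
    console.error("config read failed:", err);
    return [];
  }
}
```

The point of the pattern is that fan-out, intermediate state, and recovery logic live in one executable block, so the parent model pays tokens for one generate_and_execute_code call instead of N sequential tool calls.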