Show HN: Give LLMs TypeScript tools without writing MCP servers (github.com)

🤖 AI Summary
A new open-source pattern called MCP-RPC (inspired by Cloudflare's "Code Mode") lets LLMs use ordinary TypeScript functions as first-class tools instead of issuing many synthetic MCP tool calls. The idea: LLMs are much better at writing real code than at emitting special tool-call tokens, so let them generate TypeScript that runs in a secure sandbox. That yields big wins for the AI/ML community — parallel execution (Promise.all), complex map/filter/reduce workflows, far fewer tokens and round trips, stronger type safety via TypeScript checks, and fewer brittle protocol-specific prompts. Practically, this can turn multi-step tool chains into single script executions, improving latency and reliability for tasks like fetching multiple APIs, batch status checks, or data transformations.

Technically, MCP-RPC exposes exported TypeScript functions as RPC endpoints, automatically extracting type signatures for the LLM. A bridge translates MCP calls from a client (e.g., Claude) to a WebSocket RPC runtime that executes user-written functions inside a Deno sandbox with controlled permissions. Setup is lightweight: install the scripts, configure the MCP client to call the bridge, and run mcp-rpc-runtime -r ./test-rpc -p 8080. The typical flow: discover functions (get_available_rpc_tools), have the LLM write TypeScript using the typed rpc object, then run_script to execute it.

The result: familiar programming primitives, parallelism, and type safety make LLM-driven integrations faster, more flexible, and more robust than conventional sequential tool-calling MCP servers.
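A minimal sketch of the pattern described above: a typed `rpc` object exposing tool functions, and the kind of script an LLM might write against it, collapsing what would be several sequential MCP tool calls into one execution with `Promise.all`. The function names (`listServices`, `checkStatus`) and the shape of the `rpc` object are illustrative assumptions, not MCP-RPC's actual API; in a real setup the functions would live in the runtime's RPC directory and be invoked over WebSocket rather than mocked locally.

```typescript
// Assumed shape of a tool's result; illustrative only.
type ServiceStatus = { service: string; healthy: boolean };

// Hypothetical stand-in for the typed `rpc` object the runtime would expose.
// Here the functions are mocked locally so the sketch is self-contained.
const rpc = {
  async listServices(): Promise<string[]> {
    return ["auth", "billing", "search"];
  },
  async checkStatus(service: string): Promise<ServiceStatus> {
    return { service, healthy: service !== "billing" };
  },
};

// The kind of script an LLM might generate: one script execution instead of
// one MCP round trip per service.
async function findUnhealthy(): Promise<ServiceStatus[]> {
  const services = await rpc.listServices();
  // Parallel fan-out with Promise.all instead of sequential tool calls.
  const statuses = await Promise.all(services.map((s) => rpc.checkStatus(s)));
  // Ordinary filter instead of an extra tool-call hop for post-processing.
  return statuses.filter((st) => !st.healthy);
}

findUnhealthy().then((unhealthy) => console.log(unhealthy));
```

The point is not the mock itself but the shape of the script: control flow, parallelism, and data transformation all happen in plain TypeScript inside the sandbox, so the LLM spends tokens on one program rather than on many protocol round trips.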