MCP Is Anthropic's Biggest Mistake (medium.com)

🤖 AI Summary
Anthropic's Model Context Protocol (MCP) has come under fire: the company celebrated cutting token usage from ~150,000 to ~2,000, but community critics argue that reduction is achieved by bypassing MCP's core design. MCP currently preloads every tool definition into the model context (so connecting to many servers can burn hundreds of thousands of tokens before a single query), and tool calls often route data through the model twice (once on read, again on write). Anthropic's pragmatic "fix" is to have Claude generate code that runs in a sandboxed environment to call tools, then filter and return only the relevant outputs. This avoids massive context bloat but effectively sidesteps MCP's direct tool-calling feature.

The significance is twofold: technical and economic. Technically, the code-execution workaround exposes architecture and security trade-offs (sandboxing, filtering, round-trip complexity) and amounts to an admission that the original protocol doesn't scale as intended. Economically, the expanding MCP ecosystem of SDKs, servers, and startups may be building on shaky assumptions; if the protocol needs workarounds to function, products and investments built on it risk being brittle. The critique urges the community to reconsider whether a universal tool-calling protocol is the right abstraction, or whether better model APIs, AI-aware API design, and conventional libraries would be safer, simpler paths forward.
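The code-execution pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Anthropic's actual implementation: the tool name, its payload, and the sandbox wrapper are all invented for the example. The point is that model-generated code filters the tool's output inside the sandbox, so only a small, relevant slice re-enters the context window instead of the full payload.

```python
# Hypothetical sketch of the sandboxed code-execution workaround.
# In a real system, fetch_orders would be a proxied MCP tool call;
# here it is a local stub returning a large-ish payload.

def fetch_orders(customer_id: str) -> list[dict]:
    """Stand-in for an MCP tool call (illustrative data only)."""
    return [
        {"id": 1, "status": "shipped", "total": 42.0, "items": ["widget-a"]},
        {"id": 2, "status": "pending", "total": 13.5, "items": ["widget-b"]},
        {"id": 3, "status": "pending", "total": 99.0, "items": ["widget-c"]},
    ]

def sandboxed_query(customer_id: str) -> list[dict]:
    """Model-generated code runs here: call the tool, then filter so
    only the needed fields return to the model's context, rather than
    the whole response flowing through the model twice."""
    orders = fetch_orders(customer_id)
    return [
        {"id": o["id"], "total": o["total"]}
        for o in orders
        if o["status"] == "pending"
    ]

print(sandboxed_query("c-123"))
```

Under this pattern the model's context only ever sees the two-field summaries of pending orders, which is where the claimed ~150,000 to ~2,000 token reduction comes from; the trade-off is that correctness now depends on the generated filtering code and the sandbox that runs it.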