🤖 AI Summary
A new MCP (Model Context Protocol) helper called consult-llm lets Claude Code offload hard debugging and code-review tasks to stronger or specialized LLMs (o3, Gemini 2.5/3 Pro, DeepSeek Reasoner, GPT‑5.1 Codex) when Claude "gets stuck." The tool exposes a single MCP action that can be installed into Claude Code (claude mcp add consult-llm …), accepts file lists or git diffs as context, and supports multiple modes: direct API calls, CLI wrappers for Gemini/Codex (to use free or app-tier keys), and a web mode that copies formatted prompts to the clipboard. It also includes comprehensive logging, cost estimation, an optional simple mode to avoid context clutter, and support for multiple API keys and scoped installs in developer workflows.
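The "file lists or git diffs as context" behavior can be sketched in a few lines. This is an illustrative reconstruction, not consult-llm's actual code: the function name `build_prompt` and the section-delimiter format are assumptions, and the real tool's prompt layout may differ.

```python
import subprocess
from pathlib import Path

def build_prompt(question: str, files=(), use_git_diff=False) -> str:
    """Assemble a file-aware prompt the way a consult-style tool might:
    inline each listed file's contents, optionally append the working-tree
    diff. Hypothetical sketch; consult-llm's real format may differ."""
    parts = [question]
    for path in files:
        p = Path(path)
        # Delimit each file so the downstream model can attribute context.
        parts.append(f"--- {p} ---\n{p.read_text()}")
    if use_git_diff:
        # Capture uncommitted changes as extra provenance for the model.
        diff = subprocess.run(
            ["git", "diff"], capture_output=True, text=True
        ).stdout
        if diff:
            parts.append("--- git diff ---\n" + diff)
    return "\n\n".join(parts)
```

The resulting string is what gets shipped to the stronger model (or, in web mode, copied to the clipboard).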
For AI/ML and developer-tooling communities this matters because it demonstrates practical multi-model orchestration: delegating tough cases to more capable models (or specialty reasoners) while preserving provenance (file context, git changes, logs) and cost control. The README examples are concrete: fixing a Neovim treesitter breakage by switching iter_matches() → iter_captures() and child() → named_child(), diagnosing race conditions in hint generation, and recommending design changes for dynamic shell completions in Rust/clap. Technically, it is an MCP tool that pipelines file-aware prompts, returns patch suggestions, and can toggle between API, CLI, and web modes: a lightweight pattern for model chaining and fallbacks in real developer workflows.
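The cost-estimation feature mentioned above likely boils down to simple per-token pricing arithmetic. A minimal sketch, assuming a rough 4-characters-per-token heuristic and caller-supplied per-million-token prices (real tools would use the model's actual tokenizer and price table):

```python
def estimate_cost(prompt: str, completion_tokens: int,
                  price_in_per_mtok: float,
                  price_out_per_mtok: float) -> float:
    """Pre-call cost estimate in dollars, of the kind consult-llm reports.
    Assumes ~4 characters per token (a common heuristic, not exact).
    Prices are quoted per million tokens, as most providers list them."""
    prompt_tokens = len(prompt) / 4
    return (prompt_tokens * price_in_per_mtok
            + completion_tokens * price_out_per_mtok) / 1_000_000
```

For example, a 4,000-character prompt (~1,000 tokens) at $2/Mtok in and an expected 1,000 completion tokens at $8/Mtok out estimates to $0.01, which is the kind of figure a tool would log before deciding whether to delegate.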