🤖 AI Summary
MCP Shark (alpha) is a local monitoring and aggregation tool for the Model Context Protocol (MCP) that captures and analyzes all MCP traffic between your IDE and multiple MCP servers (HTTP and stdio). It exposes a real-time web UI (default http://localhost:9853) backed by an aggregation server (default port 9851), auto-detects IDE MCP config files (e.g., Cursor's ~/.cursor/mcp.json or Windsurf's config), and can spawn and manage multiple MCP servers, aggregating them into one unified interface. Key features include live traffic capture with Wireshark-like detail, view modes (flat, or grouped by session or server), advanced filtering and search (method, status, session, protocol), full request/response inspection (headers, bodies, timings), export to JSON/CSV/TXT, and SQLite audit logging (~/.mcp-shark/db/mcp-shark.sqlite) with correlation IDs, latency metrics, error stack traces, and session tracking. Common MCP methods (tools/list, tools/call, prompts/list, prompts/get, resources/list, resources/read) are surfaced for fast forensic inspection.
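As a rough illustration of the traffic MCP Shark captures, the TypeScript sketch below shows the shape of a JSON-RPC tools/call exchange between an IDE and an MCP server. Only the envelope (jsonrpc, id, method, params/result) follows the MCP specification; the tool name (search_docs) and its arguments are hypothetical placeholders, not anything defined by MCP Shark.

```typescript
// Shape of a captured MCP tools/call round trip (tool name and arguments are hypothetical).
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string; // e.g. "tools/list", "tools/call", "resources/read"
  params?: Record<string, unknown>;
};

type JsonRpcResponse = {
  jsonrpc: "2.0";
  id: number;
  result?: Record<string, unknown>;
  error?: { code: number; message: string };
};

// IDE -> MCP server: invoke a tool by name with arguments.
const request: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 42,
  method: "tools/call",
  params: { name: "search_docs", arguments: { query: "rate limiting" } },
};

// MCP server -> IDE: tool output returned as content blocks.
const response: JsonRpcResponse = {
  jsonrpc: "2.0",
  id: 42,
  result: { content: [{ type: "text", text: "3 matching documents found." }] },
};

console.log(JSON.stringify({ request, response }, null, 2));
```

In the UI, each such request/response pair is correlated by its JSON-RPC id and tagged with the session and server it belongs to, which is what makes the grouped views and filters useful.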
For AI/ML developers and platform engineers, MCP Shark provides critical observability for model-driven workflows: it makes prompt and tool invocation flows auditable and debuggable, surfaces performance bottlenecks and error causes, and aids reproducibility and security reviews. Because it logs full payloads and can modify IDE configs (with automatic backups and restores), treat this alpha as a powerful but sensitive tool: run it locally and handle its logs carefully. An Electron wrapper is available for native desktop packaging; the project is actively developed and labeled alpha, so expect evolving features and possible bugs.
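For audit or security reviews, the SQLite log can also be queried directly. The sketch below assumes the database path given above (~/.mcp-shark/db/mcp-shark.sqlite); the table and column names (requests, correlation_id, method, latency_ms) are assumptions for illustration, since the actual schema is not described here, and better-sqlite3 is used only as a convenient reader.

```typescript
// Hypothetical sketch: inspect the MCP Shark audit log with better-sqlite3.
// Table/column names are assumed; check the real schema with `.schema` in the sqlite3 CLI.
import Database from "better-sqlite3";
import { homedir } from "node:os";
import { join } from "node:path";

const dbPath = join(homedir(), ".mcp-shark", "db", "mcp-shark.sqlite");
const db = new Database(dbPath, { readonly: true });

// Slowest tool calls in the log (assumed columns: correlation_id, method, latency_ms).
const slowest = db
  .prepare(
    `SELECT correlation_id, method, latency_ms
     FROM requests
     WHERE method = 'tools/call'
     ORDER BY latency_ms DESC
     LIMIT 10`
  )
  .all();

console.table(slowest);
db.close();
```

A query like this is one way to turn the raw audit trail into the kind of latency and error analysis the summary describes, without going through the web UI.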