🤖 AI Summary
The author rejected a heavyweight "MCP server" install mess (daemons, port collisions, platform-specific instructions) in favor of a pragmatic pattern: put AI-runnable Markdown guides and tiny curl + jq scripts in a repo, so an AI can reliably query your data and humans can audit every command. The approach bundles simple service wrappers (scripts that call APIs with curl and emit raw JSON), small jq formatters that shape the results, and runnable guides with tenant- and time-scoped recipes. No discovery layer, no background services: just copy, paste, run.
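A minimal sketch of that wrapper-plus-formatter pattern, with illustrative names not taken from the post (`EXAMPLE_API_URL`, `EXAMPLE_API_TOKEN`, and the endpoint are assumptions); the demo pipes canned JSON through the formatter so it runs without a live API:

```shell
#!/usr/bin/env sh
# Hypothetical service wrapper: call an API with curl, emit raw JSON.
# Auth comes from env vars, so no secrets live in the repo.
get_users() {
  curl -sS --fail \
    -H "Authorization: Bearer ${EXAMPLE_API_TOKEN:?set EXAMPLE_API_TOKEN}" \
    "${EXAMPLE_API_URL:?set EXAMPLE_API_URL}/users?since=${1:?usage: get_users <iso-date>}"
}

# Tiny jq formatter: shape raw JSON into a tab-separated table for review.
format_users() {
  jq -r '.[] | [.id, .name] | @tsv'
}

# Offline demo: canned JSON stands in for a live API response.
echo '[{"id":1,"name":"ada"},{"id":2,"name":"bob"}]' | format_users
```

Because each piece is a deterministic one-liner, a human can audit exactly what ran, and an AI can compose the same calls in sequence or in parallel.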
Why it matters: this minimizes friction for both teammates and LLMs. Tiny, deterministic commands let the model execute repeatable data-collection workflows (the writer shows a real run that discovers guides, prepares env vars, runs parallel queries, and anonymizes and summarizes results in ~22 minutes). The repo shape (guides/, scripts/, formatters/, playbooks/) standardizes onboarding and keeps outputs human-auditable and secure-ish via env-var-based auth. It's a practical interim standard: MCPs are powerful but often brittle to install and operate across runtimes, and until vendor installs and discovery are consistent, small HTTP-first building blocks let AI compose complex analyses reliably.
Technical implication: treat the AI as a tool-runner. Give it deterministic primitives (curl wrappers + jq) and machine-readable guides, and it will assemble multi-step investigations while humans retain control and review. When MCPs mature, the same HTTP calls can be re-wrapped behind them.
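A machine-readable guide in this pattern might look like the following sketch; the file name, script path, and recipe are illustrative assumptions, not details from the post:

```markdown
# Guide: recently active users (hypothetical example)

Prereqs: export EXAMPLE_API_URL and EXAMPLE_API_TOKEN.

1. Fetch raw JSON, scoped to a time window:
   `./scripts/get-users.sh 2024-01-01 > /tmp/users.json`
2. Shape the result for human review:
   `jq -r '.[] | [.id, .name] | @tsv' /tmp/users.json`
```

Each step is a single copy/paste-able command, so the same file serves as onboarding docs for teammates and as an executable recipe for the model.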