🤖 AI Summary
An engineer used an LLM-driven agent (Anthropic via the Avante interface) to extract the “Spring”/Recoil protocol specification from an HTML page into a JSON protocol spec usable for implementations (think LSP-style metadata for codegen). The workflow converted HTML→Markdown, fed command lists to the agent, and iteratively expanded parsing requirements (command, source, arguments, and later richer fields like response types embedded in mixed HTML/Markdown). The agent produced usable artifacts (spring-protocol-1.json → spring-protocol-2.json) that covered ~80% of the target in a 30–60 minute iterative session; total experiment cost was about $32 across attempts.
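To make the workflow concrete, here is a minimal sketch of the kind of extraction the agent was asked to do: turning one Markdown command section into an LSP-style JSON record with command, source, arguments, and response type. The command name, section layout, and field names below are hypothetical illustrations, not the actual Spring/Recoil docs or the schema of spring-protocol-1.json.

```python
import json
import re

# Hypothetical Markdown in the style of the converted protocol docs
# (the real Spring/Recoil layout and commands may differ).
DOC = """
### EXAMPLECMD
Source: client
Arguments: none
Response: EXAMPLERESPONSE
"""

def parse_command(md: str) -> dict:
    """Extract command name and key/value fields from one
    Markdown command section into a machine-readable record."""
    name = re.search(r"^### (\S+)", md, re.M).group(1)
    fields = dict(re.findall(r"^(\w+): (.+)$", md, re.M))
    return {
        "command": name,
        "source": fields.get("Source"),
        "arguments": fields.get("Arguments"),
        "response": fields.get("Response"),
    }

print(json.dumps(parse_command(DOC), indent=2))
```

In the experiment this mapping was done by the LLM itself rather than by hand-written regexes; the point of the sketch is only the shape of the input and output, and why "richer fields like response types embedded in mixed HTML/Markdown" make a purely mechanical parser brittle.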
Technically, the experiment highlights practical strengths and pitfalls of LLM-assisted engineering: the model often succeeded even without an execution MCP server, but adding an mcp-run-python tool enabled more automated parsing runs (with caveats about workspace mounting, sandboxing, and tool provisioning). Integration quirks (Avante's per-call approvals, Neovim interactions, lost prompts and logs) and agent "tunnel vision" that consumed time and resources argue for careful prompt engineering, early accuracy targets, and iterative rather than one-shot approaches. For the AI/ML community this is a concrete example of LLMs delivering rapid, pragmatic value by parsing semi-structured docs into machine-readable schemas, provided tooling, security, and prompting are thoughtfully configured.