🤖 AI Summary
The author describes an upgrade to xmlui-mcp — a Model Context Protocol (MCP) server used with chat agents such as Claude, Cursor, and ChatGPT — that reframes "prompt engineering" as practical context engineering. The server now enforces explicit agent guidance (e.g., "do not invent xmlui syntax," always cite documentation URLs, and open responses with an admission when no examples exist), and returns faceted, structured search results instead of undifferentiated snippets. Search is staged (exact → relaxed → partial) to balance precision and recall, and results are bucketed into components, howtos, examples, and source, with confidence scores and a JSON query_plan. A concrete win: the query "width 100% equal" found docs/public/pages/howto/make-a-set-of-equal-width-cards.md via the partial stage, letting agents cite the exact how-to rather than guess.
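The staged search and query_plan described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the function name, the toy document corpus, and the result shape are all invented here, not xmlui-mcp's actual API.

```python
# Hypothetical sketch of staged search: try an exact-phrase match first,
# then a relaxed (all terms) match, then a partial (any term) match,
# and report which stage produced the hits in a JSON-style query_plan.
# DOCS and search_staged are illustrative, not part of xmlui-mcp.

DOCS = {
    "docs/public/pages/howto/make-a-set-of-equal-width-cards.md":
        "how to make a set of equal-width cards with percentage widths",
    "docs/public/pages/components/Stack.md":
        "Stack arranges children vertically or horizontally",
}

def search_staged(query: str) -> dict:
    terms = query.lower().split()
    hits: list[str] = []
    stage_used = None
    for stage in ("exact", "relaxed", "partial"):
        for path, text in DOCS.items():
            t = text.lower()
            if stage == "exact" and query.lower() in t:
                hits.append(path)
            elif stage == "relaxed" and all(w in t for w in terms):
                hits.append(path)
            elif stage == "partial" and any(w in t for w in terms):
                hits.append(path)
        if hits:  # stop at the first stage that yields results
            stage_used = stage
            break
    return {"query_plan": {"query": query, "stage": stage_used},
            "hits": hits}
```

With this toy corpus, `search_staged("width 100% equal")` falls through the exact and relaxed stages (no document contains "100%") and matches the equal-width-cards how-to at the partial stage, mirroring the behavior the post reports.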
This matters because it reduces hallucinations, forces evidence-backed answers, and makes documentation "testable": failing searches reveal documentation gaps that can be patched with working examples and then re-tested. Technically, the MCP server acts as an agent-friendly document indexer and policy enforcer, guiding tool selection and search strategy, while agents' base prompts require them to follow xmlui-mcp guidance. The result is a tighter feedback loop: better-structured docs help agents give correct, citable answers, and agents help surface where the docs need improvement.