🤖 AI Summary
Bun and others have started serving documentation as raw Markdown instead of rendered HTML to LLM user agents, a change that produced roughly a 10x reduction in token usage when tested with Claude Code. That matters to the AI/ML community because many LLM providers bill by tokens: sending concise Markdown cuts token costs and latency for retrieval-augmented generation, search, and indexing workflows without changing authoring pipelines. For doc-heavy sites, the savings can be substantial.
The pattern is simple and framework-agnostic: detect LLM clients via User-Agent substrings and return the Markdown file with Content-Type: text/markdown; otherwise render HTML for human users. The post gives a Laravel example that uses league/commonmark to convert Markdown to HTML for browsers, plus an isLLMRequest() check that looks for substrings like "axios", "Claude-User", and "node". You can verify the behavior with curl -H "User-Agent: Example" against /docs/example and see the raw Markdown come back.

Caveats: UA substring checks are broad and can match non-LLM tools (any axios or Node client, for instance), and LLM clients lose navigation, CSS, and interactive context, which is acceptable for many documentation use cases but not all. Check your access logs and adapt the UA patterns to what actually hits your site; the approach applies to any web stack that can branch on request headers.
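The branching logic above can be sketched in a framework-agnostic way. This is a minimal TypeScript translation of the idea, not the post's actual Laravel code: isLLMRequest and docsResponse are hypothetical names, and the substring list is the one the post mentions, which you would tune against your own access logs.

```typescript
// Substrings the post mentions; broaden or narrow based on your logs.
const LLM_UA_SUBSTRINGS = ["axios", "claude-user", "node"];

// Heuristic check: does the User-Agent look like an LLM / programmatic client?
function isLLMRequest(userAgent: string): boolean {
  const ua = userAgent.toLowerCase();
  return LLM_UA_SUBSTRINGS.some((s) => ua.includes(s));
}

// Branch on the User-Agent header: raw Markdown for LLM clients,
// rendered HTML for human browsers. renderHtml stands in for whatever
// Markdown-to-HTML converter your stack uses (league/commonmark in the post).
function docsResponse(
  userAgent: string,
  markdown: string,
  renderHtml: (md: string) => string,
): { contentType: string; body: string } {
  if (isLLMRequest(userAgent)) {
    // LLM clients get the source Markdown: far fewer tokens than full HTML.
    return { contentType: "text/markdown", body: markdown };
  }
  return { contentType: "text/html", body: renderHtml(markdown) };
}
```

Note the trade-off visible even in this sketch: "node" and "axios" match any script using those clients, so the check deliberately errs toward serving Markdown to all programmatic traffic, which is usually harmless for docs.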