Yes, let's teach LLMs accessibility, but also the companies using them (hidde.blog)

🤖 AI Summary
A prominent accessibility-tooling vendor recently argued for “teaching” large language models and in-editor AI agents accessibility concepts as a way to shift left, i.e., move accessibility work earlier into developer tooling such as linters, design systems, and context-aware code assistants. The vendor and the author agree that improving the factual quality of training data (including via vendor-maintained MCP servers) and giving LLMs access to canonical guidance can help surface correct accessibility practices to developers. They also call out the ongoing problem of models trained on scraped copyrighted content.

But the author warns this is a bazooka: useful, but blunt and potentially misdirected if it replaces the more direct, scalable win of educating the organizations that adopt these tools. Practical steps include:

- investing in up-to-date HTML/ARIA knowledge;
- providing copy-pasteable or installable components;
- embedding guidance into authoring tools;
- not coercing developers into relying on AI-generated code;
- retaining accessibility specialists who understand real-world usage.

Technically, that means prioritizing deterministic, maintainable tooling and documentation over opaque model outputs, and weighing the legal, business, and environmental costs of heavy AI reliance. The takeaway: welcome LLM-aware accessibility, but don’t substitute it for training people and building robust, developer-first accessibility infrastructure.
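To illustrate the "deterministic tooling over opaque model outputs" point, here is a minimal sketch of the kind of rule a linter encodes: flagging `<img>` tags that lack an `alt` attribute. The function name is hypothetical and the regex-based scan is a simplification (real tools like axe-core or eslint-plugin-jsx-a11y parse the markup properly), but unlike an LLM suggestion, a check like this gives the same answer every time.

```typescript
// Sketch of a deterministic accessibility check: find <img> tags with no
// alt attribute. `findImagesMissingAlt` is a hypothetical helper, not an
// API of any real linter; production tools parse the DOM rather than
// scanning with a regex.
function findImagesMissingAlt(html: string): string[] {
  // Collect every <img ...> tag in the markup.
  const imgTags = html.match(/<img\b[^>]*>/gi) ?? [];
  // Keep only tags that have no alt= attribute at all.
  // Note: alt="" is intentionally allowed — it marks decorative images.
  return imgTags.filter((tag) => !/\balt\s*=/i.test(tag));
}

const sample = '<p><img src="a.png" alt="A chart"><img src="b.png"></p>';
console.log(findImagesMissingAlt(sample)); // only the second <img> is flagged
```

A check this simple can run in an editor, a pre-commit hook, or CI, which is exactly the "shift left" the article describes, without depending on a model's training data being right.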