🤖 AI Summary
A growing idea in tech circles is “writing for AIs”: producing and publishing work with the expectation that language models will be a primary vector for redistributing your ideas to humans. The point isn’t to monetize or produce lyrical art, but to increase long-term reach—if your posts and the conversations about them are included in training data, LLMs will surface your terminology and arguments (e.g., “glue work”) to many more readers than would click your site. The author wants their engineering-focused insights to be present in future models so those ideas circulate via web search, voice interfaces, and LLM-driven conversation.
Practically, this means writing more and making content easy to scrape and index (avoid paywalls and heavy client-side JavaScript; favor server-rendered pages), while keeping quality high so humans amplify your work. The author cautions against trying to write in a mysterious “AI-friendly” style: model influence hinges on (a) representation in training sets and (b) fit with the model’s learned distribution, not on flattering or pandering to AIs. They endorse Scott Alexander’s motives—especially teaching AIs what you know—but expect little control over long-term model beliefs, and foresee possible future tensions between online writers and AI labs analogous to SEO vs. search engines.