🤖 AI Summary
Writers and pundits are increasingly arguing that AI isn’t just poised to replace human authors — it may already be replacing readers. Influential thinkers like Tyler Cowen and essayist Gwern urge people to “write for AI”: craft public text intended primarily to be ingested by large language models (LLMs) such as ChatGPT and Claude, so those models will weight, reproduce, and propagate your views. Practically, this shifts content strategy away from SEO-style clickbait toward chatbot-optimized formats — clear structure, explicit statements of intent, liberal use of headings — and even tactics like flattering, model-aligned language. PR professionals are already treating press releases as inputs to model training, and LLMs’ tendency to privilege high-quality sources, together with reinforcement-learned reward signals, shapes which content gets amplified.
The significance is both immediate and existential. Technically, modern LLMs are trained on vast web corpora, fine-tuned with reinforcement learning, and increasingly augmented with synthetic data and self-generated “worlds” — meaning early inputs can steer future model behavior. Gwern frames online writing as “voting on the future” of a nascent superintelligence (the “baby shoggoth”): deposit enough traces and you may influence model priors, seed future synthetic training, or even enable AI reconstructions of individual personas. That raises strategic opportunities (influence, intellectual immortality) and ethical risks (marginalizing human readers, concentration of cultural power, and the downstream dangers of bootstrapped superintelligence).