🤖 AI Summary
The piece argues that a decisive effect of recent LLM advances is their extraordinary aptitude for code: models are both naturally good at programming and easily fine-tuned to get better at it. That creates a powerful incentive to recast more and more human activities as code, because code yields “answers you can check” (formal-ish verification) and scales across applications. The author uses an olive-harvester analogy: when automation favors a particular configuration (super-high-density groves), the world reshapes itself to fit the machine, producing cheaper, more uniform output while reducing diversity. Similarly, code-friendly tasks will expand, and domains that tolerate codification will be planted en masse, changing how work is organized.
For the AI/ML community this has concrete technical and social implications: tooling, datasets, and benchmarks will bias toward symbolic, checkable outputs; models and training pipelines will prioritize program synthesis, formal constraints, and verification; and the availability of open-source code will accelerate the shift. At the same time, many valuable tasks resist neat codification (physical constraints, messy human contexts), so automation will always involve negotiation and unintended homogenization. The author warns of pressure on workers to “translate your work into code,” urges attention to alternative paths (using AI to escape, rather than deepen, digital lock-in), and highlights the need for remedies that preserve diversity and human-centered practices as AI-driven codification advances.