Boring Is Good (jenson.org)

🤖 AI Summary
The piece argues that the feverish LLM hype is giving way to a more pragmatic "boring" phase, driven by evidence that deployment hasn't paid off for most firms: an MIT report finds that 95% of companies implementing LLM technology haven't seen positive outcomes. The author warns that fluency has been mistaken for intelligence, leading people to push models into unrealistic "assistant" roles. Instead, they urge a shift away from top-down automation toward smaller, task-focused uses: invisible query rewrites, proofreading, and other low-level syntactic jobs that reduce hallucinations and user friction.

Technically, this means a move from massive centralized models toward SLMs (Small Language Models) trained on far smaller datasets, with far fewer parameters and quantized weights; Microsoft's Phi-3, for example, can run on an eight-year-old PC using less than 10% of the CPU. SLMs are cheaper, easier to train ethically, and better suited to localized, bespoke tasks. The broader implication: after the "Trough of Disillusionment," the field is likely to mature into boring infrastructure (distributed, reliable, and embedded into workflows) rather than keep chasing humanlike intelligence.
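The "invisible query rewrite" idea above can be sketched as a small wrapper: the model's only job is a narrow syntactic fix-up, and any empty, suspicious, or failed output silently falls back to the user's original query. This is a minimal illustration, not code from the article; `call_slm` is a hypothetical stand-in for a local small-model call (e.g. a quantized model running on the user's own machine), stubbed here so the example is self-contained.

```python
def call_slm(prompt: str) -> str:
    """Hypothetical stand-in for a local small-language-model call.
    Here it just fixes one hard-coded typo so the sketch runs offline."""
    query = prompt.rsplit("Query:", 1)[-1].strip()
    return query.replace("recipies", "recipes")

def rewrite_query(user_query: str) -> str:
    """Rewrite a search query, but never let the model make things worse:
    on an empty, oversized, or failed rewrite, keep the original query."""
    prompt = f"Fix spelling and grammar only. Query: {user_query}"
    try:
        rewritten = call_slm(prompt).strip()
    except Exception:
        return user_query  # model unavailable: degrade gracefully
    if not rewritten or len(rewritten) > 2 * len(user_query):
        return user_query  # suspicious output: keep the user's own words
    return rewritten

print(rewrite_query("best pasta recipies"))  # -> best pasta recipes
```

The fallback logic is the point: because the rewrite is invisible and conservative, a bad model output can never block the search, which is exactly the low-friction, low-stakes role the piece recommends for these models.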