Against the Protection of Stocking Frames (ethanmarcotte.com)

🤖 AI Summary
A provocative critique argues that large language models (LLMs) should be treated as a failed technology: despite intense hype and investment, they have not delivered consistent product value, often produce poor or mistrusted user experiences, and impose severe social, cultural, and ecological costs, including a heavy energy footprint, copyright infringement, mistreatment of contractors, and even links to real-world harms. The author cites evidence that corporate pilots largely fail (per an MIT report) and contends that LLMs' ubiquity is sustained more by venture capital and government contracts than by technical merit. This reframing urges the AI/ML community to distinguish occasional, useful niche applications from the broader systemic shortcomings of current generative-model deployments.

A concrete symptom is top-down "AI" mandates at companies like Zapier, which now requires AI fluency in hiring and performance reviews, graded on a four-tier rubric ranging from "Unacceptable" (resistant) to "Transformative" (reimagines work with AI). The author warns that such mandates convert an employee's attitude toward a tool into a test of cultural compliance, accelerating deskilling and weakening worker power.

The piece's technical and social implication: adoption metrics (usage) are not a proxy for beneficial impact, and governance, labor protections, and collective organizing are necessary to shape how LLMs are introduced. The takeaway for AI/ML practitioners: evaluate models by their real-world value and harms, resist managerial coercion that treats AI as inevitable, and support workplace safeguards and democratic oversight.