I don't care how well your "AI" works (fokus.cool)

🤖 AI Summary
A provocative essay argues that debates about AI quality miss the point: the real danger is how large language models (LLMs) become cognitive prostheses that reshape what we think, who controls knowledge, and whose labor has value. The author, who personally avoids LLMs, describes seeing skilled coders become dependent on "chat assistants," losing craft and agency as tools reassign them to the messy work of fixing machine-produced output. Beyond convenience or accuracy, the concern is social and structural: UI patterns, workplace pressure, and "knowledge pollution" force people to use these systems, advantaging those who control them.

Technically and politically, the piece highlights two linked implications: first, LLM outputs are plausibly authoritative yet prone to hallucination, bias, and context loss, subtly rewiring users' expectations and cognition; second, the computational and data infrastructure behind modern AI is deliberately resource-intensive and centralized, reinforcing surveillance-capitalist power. These are not merely engineering bugs but features that concentrate control and erode craft. The author urges collective responses — mutual care, unionizing, mental-health safeguards, and the deliberate creation of things machines don't produce — and frames thriving and preserving human practices as the most effective form of disobedience.