Dissent (exple.tive.org)

🤖 AI Summary
This dissenting essay argues that large AI companies have become resource-extraction industries that externalize massive physical, social, and informational costs: they scrape the knowledge commons without consent or reciprocity, inflate usage metrics to claim legitimacy, and coerce adoption through employment and measurement practices. Because "want" is often manufactured or coerced, the author warns, popularity claims are not proof of social benefit. Drawing on historical analogies (Google+ metrics, Facebook's Threads), the piece frames current generative models as "stochastic pattern recognition" tools that can produce "averacitous" (plausibly false) media, undermining reliable information and democratic discourse.

For the AI/ML community, this is a call to reorient priorities: start problem-first rather than tool-first; design for reliability, consent, and reparative reciprocity; and recognize that "ethical AI" risks becoming greenwashing if it does not change underlying business models. Technically and operationally, that implies stronger provenance and consent controls for training data, accountability for systemic externalities, and joint engineering/communications planning so that trust and choice are intentionally built and maintained.

The piece stresses that delight from tools must rest on consistent reliability, since surprise is harmful when people depend on systems, and it urges regulators, researchers, and companies to address coercion, misinformation risk, and the ethical consequences of model deployment.