Tech Capitalists Don't Care About Humans (jacobin.com)

🤖 AI Summary
Philosopher Émile Torres (with Timnit Gebru) warns that a cluster of ideas summarized by the acronym TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism) has become influential in Silicon Valley and shapes how many tech leaders think about AI. TESCREAL combines a totalist utilitarian moral frame (maximize an impersonal “value” across the cosmos) with a transhumanist ambition to build AGI/superintelligence, upload minds, engineer bodies, colonize space, and run vast simulations on “planet‑sized” computers. In that view, humans are instrumental “bootloaders” for superior digital beings; superintelligence is both the engineer of utopia and a population that could replace biological humanity.

This matters technically and culturally for the AI/ML community because it informs research priorities (AGI, decision theory, cognitive‑bias mitigation), funding, and governance debates, and it carries fraught ethical baggage: rationalist thought experiments that trade individual suffering for aggregate value, IQ‑realist beliefs tied to eugenic ideas, and anxieties about dysgenics. Torres flags continuity with historical eugenics and warns that these impersonal value calculations can devalue human life and justify harmful policies. For practitioners and policymakers, the implication is clear: scrutinize the philosophical assumptions embedded in AGI agendas, center human‑centric ethics in design and governance, and be wary of incentive structures that prioritize abstract cosmic value over present human well‑being.