TESCREAL (en.wikipedia.org)

🤖 AI Summary
TESCREAL is an acronym coined by Timnit Gebru and Émile P. Torres to group seven overlapping ideologies into a single "bundle": Transhumanism, Extropianism, Singularitarianism, modern Cosmism, the Rationalist internet community, Effective Altruism, and Longtermism. First argued at length in a 2024 paper, the term describes a cluster of Silicon Valley beliefs that treat catastrophic future scenarios, especially existential risk from artificial general intelligence, as justification for prioritizing large-scale, often expensive technological projects such as AGI, life extension, and space colonization. Gebru and Torres contend that this bundle has roots in 20th-century eugenics and lets actors sidestep present harms (racial bias, environmental damage, weakened regulatory safeguards) by foregrounding speculative future benefits.

For the AI/ML community, the label crystallizes an ongoing debate about priorities: proponents see AGI-focused work and longtermist funding as necessary to avert extinction, while critics warn that invoking existential risk legitimizes "unscoped" systems and deregulation, and distracts from immediate safety, fairness, and equity problems. The concept touches questions of technical governance (alignment research versus deployment speed, funding allocation, and standards for safe system scope) and fuels political and ethical pushback against a perceived techno-utopian consolidation of power. Some rationalist and effective-altruist commentators have contested TESCREAL as an unfair conflation of diverse philosophies, but it has nonetheless reframed public discussion about who shapes AI priorities and why.