🤖 AI Summary
A thoughtful essay warns that the biggest AI risk isn't the machines themselves but the human structures that arise to govern them: a new “theocracy” in which a few dominant AI providers (OpenAI being a prime example) impose guardrails and norms that shape how millions reason, decide, and communicate. Technically, the author emphasizes that modern transformer LLMs, when augmented with external memory, can be Turing-complete (see “Memory Augmented Large Language Models are Computationally Universal”), meaning prompts function as a new, conversational programming language. The medium is shifting too: multi-modal LMMs let symbolic instructions be graphical as well as textual, moving collaboration away from syntax and bug-fixing toward reasoning about object relationships (illustrated by maps vs. code and copilot-style pair programming). Tools will become contextual rather than fixed and one-size-fits-all, because the digital “sledgehammer” can be reconfigured by symbolic inputs.
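The Turing-completeness claim rests on a simple architecture: the model only ever sees a bounded context (current state plus the symbol under a head), while an external memory supplies the unbounded tape. A minimal sketch of that loop, with a lookup table standing in for the LLM (a hypothetical stand-in, not a real model call, and a toy machine of my own choosing rather than one from the cited paper):

```python
# Sketch of the memory-augmented control loop behind the Turing-completeness
# claim: bounded "prompt" in, action out, with external memory as the tape.
from collections import defaultdict

# Toy Turing machine: flip every 0 to 1, moving right, halt on blank ("_").
# (state, symbol) -> (new_state, symbol_to_write, head_move)
RULES = {
    ("flip", "0"): ("flip", "1", +1),
    ("flip", "1"): ("flip", "1", +1),
    ("flip", "_"): ("halt", "_", 0),
}

def model(state, symbol):
    """Stand-in for the LLM: maps a bounded context to the next action."""
    return RULES[(state, symbol)]

def run(tape_input, state="flip", max_steps=100):
    # External memory: an unbounded tape, blank by default.
    tape = defaultdict(lambda: "_", enumerate(tape_input))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, write, move = model(state, tape[head])
        tape[head] = write
        head += move
    cells = [tape[i] for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip("_")

print(run("0010"))  # -> "1111"
```

The point of the sketch is that the "program" lives entirely in the symbolic transition rules fed to the model, which is why the essay can treat prompts as a programming language: swap the rule table and the same loop computes something else.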
The significance for AI/ML is institutional and epistemic: centralized platforms that control conversational interfaces can standardize thought, suppress diversity of perspectives, and harden rules via ethics committees and regulations—especially problematic given the lack of truly open‑source major models and limited governmental support for openness. The author argues for defensive measures: promote diversity, critical thinking, transparency and genuine open approaches to model development to prevent concentrated, quasi‑religious authority over how people reason with AI.