How to Use AI Without Becoming Stupid (commoncog.com)

🤖 AI Summary
AI users should follow the Vaughn Tan Rule: do not outsource your subjective value judgments to an AI unless you have a good reason to, and if you do, state that reason explicitly. The essay argues this is a robust, near-invariant policy for today's AI era because it rests on a philosophical distinction: humans make "meaning" (decisions about subjective value) and current LLMs cannot. Meaning-making covers judgments such as whether something is good or bad, worth doing, better than its alternatives, or morally wrong. The author argues these capabilities will not be acquired incrementally by models; they would only change in a phase shift, if AGI arrives.

Practically, the rule reframes safe AI use as "tooling, not meaning-makers." Good uses include indexing or summarizing documents, speech-to-text followed by LLM rewriting, or code generation for prototypes, so long as humans retain final evaluative control and accept the tradeoffs. Bad uses include delegating career, romantic, or moral choices to models. The implication for AI/ML teams and businesses is concrete: design workflows and governance that preserve human judgment, document an explicit reason whenever you delegate a decision, and don't absolve humans of liability by blaming models. This makes AI adoption safer, auditable, and resilient until (and unless) a genuine AGI-level transition arrives.
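As a minimal sketch of what "document explicit reasons when you delegate decisions" could look like in a team workflow (not from the essay; the function names, categories, and `DelegationRecord` structure are all hypothetical): meaning-making tasks are refused unless an explicit reason is supplied, and every delegation produces an audit record.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical judgment categories treated as "meaning-making" and reserved for humans.
MEANING_MAKING = {"career_choice", "romantic_choice", "moral_judgment", "is_it_worth_doing"}

@dataclass
class DelegationRecord:
    """Audit entry created whenever a task is handed to a model."""
    task: str
    category: str
    delegated_to_model: bool
    explicit_reason: str | None = None
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def delegate(task: str, category: str, explicit_reason: str | None = None) -> DelegationRecord:
    """Apply the Vaughn Tan Rule before delegating a task to an LLM.

    Non-meaning-making work (summarizing, transcribing, prototyping) passes through.
    Meaning-making work is refused unless an explicit reason is documented, and the
    reason is recorded so the decision stays auditable and human-owned.
    """
    if category in MEANING_MAKING and not explicit_reason:
        raise ValueError(
            f"Refusing to delegate {category!r}: subjective value judgments stay with "
            "a human unless an explicit reason is documented."
        )
    return DelegationRecord(task, category, delegated_to_model=True, explicit_reason=explicit_reason)

# Fine: tooling use, the human keeps final evaluative control.
ok = delegate("Summarize the Q3 incident reports", category="summarization")

# Refused: a moral judgment with no documented reason raises ValueError.
try:
    delegate("Decide whether to fire this employee", category="moral_judgment")
except ValueError as err:
    print(err)
```

The point of the sketch is governance, not enforcement: the record itself is what makes the delegation auditable and keeps liability with the human who supplied the reason.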