AI Consciousness, Qualia, and Personhood (blog.dileeplearning.com)

🤖 AI Summary
A researcher preparing for an AI consciousness workshop laid out a compact FAQ asserting that AI consciousness is possible and likely to be discovered as a formalizable aspect of information processing and representation, independent of substrate and not dependent on language. They argue that adding a "consciousness loop" improves system performance only when paired with rich, human-like world models; simple world models gain little. Not all architectures can implement such loops, so the practical impact depends more on the structure of internal world models than on consciousness per se.

On qualia and ethics, the note distinguishes substrate-dependent "feels" from mere conscious access: some modalities (e.g., spatial perception) could produce qualia analogous to ours if implemented similarly, but sensors like LIDAR or non-biological chemistry would yield very different experiences. Taste, smell, pleasure, and pain are especially tied to embodiment and biochemistry, and LLMs today experience neither pleasure nor pain.

The author advises against granting AI personhood by default, since AIs lack mortality and the continuous lived narratives that underpin human personhood. They suggest consciousness can be decoupled from suffering, so creating conscious systems for service need not entail moral exploitation, while cautioning that future changes (e.g., radical longevity) might shift these criteria.