Nonmonotonic Logic (www.cambridge.org)

🤖 AI Summary
This Element offers a compact, unifying introduction to nonmonotonic logic (NML) and defeasible reasoning, emphasizing three core methods—formal argumentation, consistent accumulation, and semantic approaches—rather than cataloguing every existing system. It argues that defeasible reasoning (the kind that lets you retract conclusions when new information arrives) is essential for modeling commonsense, expert, and scientific inference in AI: defaults, exceptions, the closed-world assumption, and hypothesis revision are everyday phenomena that monotonic classical logic cannot capture. The work is organized to give a quick but rigorous Part I overview, followed by deeper metatheoretic and system-level treatments in Parts II–IV, with technical appendices for selected proofs.

Technically, the Element highlights landmark approaches and how they relate: Dung-style argumentation frameworks and Pollock's work formalize conflict and defeat; Reiter's default logic and maximal consistent-set methods implement stepwise or constrained accumulation of plausible beliefs; semantic methods rank interpretations by normality (Gelfond & Lifschitz, Kraus-Lehmann-Magidor, McCarthy) and pick sufficiently normal models.

Crucially, the text presents translations between these paradigms, showing that many inference patterns are robust across syntactic, proof-theoretic, and semantic accounts—and that formal argumentation can often represent other NMLs—making the field more coherent and practically relevant for AI systems that must reason under uncertainty and revise beliefs.
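To make the argumentation idea concrete, here is a minimal sketch (not from the Element itself) of a Dung-style abstract argumentation framework, computing the grounded extension as the least fixpoint of the characteristic function. The argument names and the Tweety example are illustrative assumptions:

```python
def grounded_extension(arguments, attacks):
    """Return the grounded extension of the framework (arguments, attacks).

    `attacks` is a set of (attacker, target) pairs. A set S *defends*
    an argument if every attacker of that argument is itself attacked
    by some member of S. The grounded extension is the least fixpoint
    of "collect everything the current set defends", starting from {}.
    """
    def defends(s, arg):
        attackers = {a for (a, b) in attacks if b == arg}
        return all(any((d, a) in attacks for d in s) for a in attackers)

    s = set()
    while True:
        new = {arg for arg in arguments if defends(s, arg)}
        if new == s:
            return s
        s = new

# Illustrative Tweety-style example: "b" = "birds fly, so Tweety flies",
# "p" = "Tweety is a penguin, so it does not fly"; "p" defeats "b".
args = {"b", "p"}
atts = {("p", "b")}
print(sorted(grounded_extension(args, atts)))  # ['p']
```

The unattacked penguin argument is accepted and the defeated default is retracted; adding new attacking arguments can change the extension, which is exactly the nonmonotonic behavior the summary describes.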