🤖 AI Summary
Dan Shipper argues that recent LLMs force a rethinking of "intelligence": the old Western, symbolic ideal treats intelligence as explicit, rule-based reasoning you can write down and verify. Using a scheduling example, he shows how even a simple task explodes into endless, interdependent rules once you try to codify concepts like "urgency" or client importance, so much so that "to schedule a meeting from scratch, you must first define the universe." Early AI (McCarthy's symbolic program) pursued that explicit-rule path and collapsed under combinatorial complexity. LLMs (post-GPT-3) reveal a complementary picture: much of intelligent behavior rests on intuition, tacit, pattern-based knowledge that can't be fully enumerated but can be learned from large amounts of data.
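To make the rule-explosion point concrete, here is a minimal, hypothetical sketch of what codifying "urgency" as explicit rules starts to look like. Every name and threshold below is invented for illustration (nothing here comes from Shipper's essay); the point is that each rule immediately demands further rules for exceptions, interactions, and context that never make it into the code.

```python
from dataclasses import dataclass

@dataclass
class Meeting:
    requester: str          # who asked for the meeting
    client_revenue: float   # annual revenue attributed to this client
    days_until_deadline: int
    topic: str

def is_urgent(m: Meeting) -> bool:
    """Hypothetical explicit-rule definition of 'urgency'.

    Each rule raises questions the code cannot answer: is a small
    client with a huge pending deal 'important'? Does a deadline
    matter if it is self-imposed? The rule set only grows.
    """
    # Rule 1: big clients are urgent... but 'big' needs a threshold.
    if m.client_revenue > 100_000:
        return True
    # Rule 2: near deadlines are urgent... unless the deadline is soft,
    # which would require modeling who set it and why.
    if m.days_until_deadline <= 2:
        return True
    # Rule 3: some topics are always urgent... so now we need an
    # ever-growing, hand-maintained list of such topics.
    if m.topic in {"outage", "legal", "security incident"}:
        return True
    # ...plus rules for requester seniority, time zones, past no-shows,
    # relationship history: the combinatorial explosion Shipper describes.
    return False

print(is_urgent(Meeting("alice", 250_000.0, 10, "roadmap")))  # True (Rule 1)
print(is_urgent(Meeting("bob", 5_000.0, 1, "check-in")))      # True (Rule 2)
print(is_urgent(Meeting("carol", 5_000.0, 30, "check-in")))   # False
```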
For AI/ML practitioners this matters because it reframes how we build systems: instead of exhaustively programming domain rules, we should leverage data-driven models that internalize interlocking, fuzzy heuristics. That shift explains both LLMs' practical power and their limits: emergent, useful "intuition," but weaker traceability and values that resist formalization. The takeaway: system design will increasingly combine learned, pattern-based components (to capture tacit knowledge) with explicit logic and human oversight (for accountability, alignment, and rare-case correctness), and progress depends as much on modeling implicit knowledge as on formal reasoning.
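As one hedged illustration of that hybrid design (a sketch, not anything Shipper specifies), here is a pattern in which a learned component proposes and explicit rules dispose. `model_propose_slot` is a hypothetical stand-in for any LLM call; the hard constraints and the escalation path are the auditable, human-overseen part.

```python
from datetime import datetime, time

def model_propose_slot(request: str) -> datetime:
    """Hypothetical stand-in for a learned component (e.g. an LLM call)
    that turns a fuzzy natural-language request into a concrete slot."""
    return datetime(2025, 6, 2, 15, 0)  # canned output for this sketch

def within_business_hours(slot: datetime) -> bool:
    # Explicit, auditable rule: the part we can write down and verify.
    return time(9, 0) <= slot.time() <= time(17, 0)

def schedule(request: str) -> datetime:
    slot = model_propose_slot(request)   # learned intuition proposes
    if not within_business_hours(slot):  # explicit logic disposes
        raise ValueError(
            f"Proposed slot {slot} violates hard constraints; escalate to a human"
        )
    return slot

print(schedule("quick sync with the design team early next week"))
```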