🤖 AI Summary
A recent anecdote illustrates why modern LLMs feel so “magical”: ChatGPT correctly answered a multi-step historical and geographical question (identifying the Sisserou parrot, the animal on the flag of Dominica, which Britain briefly colonized in 1805) in seconds, while Google’s AI widget failed. The piece argues this capability isn’t proof of elegant engineering but a pragmatic workaround: large language models infer and stitch together facts from the chaotic, unstructured web, surfacing linked answers that traditional full-text search or brittle site metadata often cannot.
That success exposes a systemic failure: we never built the Semantic Web or truly personal, semantically linked knowledge bases (think HyperCard-style personal computing), and major platforms favored dumping data plus search over careful structure (e.g., Google Drive’s flat, search-first model). Technically, LLMs create ephemeral semantic maps by mapping tokens across high-dimensional model weights—effectively doing costly inference that structured, richly linked data would permit with far simpler algorithms and far less compute. The trade-offs are clear: accessibility and recall have improved, but knowledge becomes opaque, resource-intensive, and dependent on noisy web texts. The piece concludes provocatively: LLMs aren’t a triumph of design so much as a brute-force fix—and perhaps that’s a new, if messy, form of “knowledge.”
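To make the contrast concrete, here is a minimal sketch of what the piece calls "structured, richly linked data": a toy in-memory triple store that answers the Dominica question with a couple of set intersections instead of billions of weight multiplications. The predicate names, helper functions, and stored facts are illustrative assumptions for this sketch, not any real Semantic Web dataset or API.

```python
# Toy triple store: each fact is a (subject, predicate, object) tuple.
# Facts below are illustrative, taken from the article's example.
TRIPLES = {
    ("Dominica", "colonized_by", "Britain"),
    ("Dominica", "colonization_year", "1805"),
    ("Dominica", "flag_animal", "Sisserou parrot"),
    ("Jamaica", "colonized_by", "Britain"),
    ("Jamaica", "colonization_year", "1655"),
}

def objects(subject, predicate):
    """All objects linked to `subject` via `predicate`."""
    return {o for s, p, o in TRIPLES if s == subject and p == predicate}

def subjects(predicate, obj):
    """All subjects linked to `obj` via `predicate`."""
    return {s for s, p, o in TRIPLES if p == predicate and o == obj}

# Multi-hop query: which country did Britain colonize in 1805,
# and what animal is on its flag?
candidates = subjects("colonized_by", "Britain") & subjects("colonization_year", "1805")
for country in candidates:
    print(country, "->", objects(country, "flag_animal"))
# Dominica -> {'Sisserou parrot'}
```

With explicit links, the "inference" is two lookups and an intersection; the LLM achieves the same answer only by approximating those links statistically at far greater cost, which is the trade-off the summary describes.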