Why your AI strategy needs guidance from an 82-year-old computer (bigthink.com)

🤖 AI Summary
The piece traces a throughline from ENIAC, the vacuum-tube, general-purpose computer the US Army commissioned in 1943 to prove that electronic logic could speed calculation, to mid-century psychologist J.P. Guilford's decomposition of creativity into divergent thinking (random mix-and-match of ideas) and convergent thinking (probabilistic selection of promising patterns). Those two processes became the conceptual engine for automated "ideation," which decades later powered generative models and, combined with symbolic methods, the rise of neurosymbolic AI, the heirs to ENIAC and Guilford's protocols.

But the Army's later experience and recent Ohio State work point to a hard limit: practical human creativity depends on narrative, "thinking-in-actions" mechanisms native to animal neurons, namely process recognition and initiative, that current transistor-based electronics cannot implement. Training exercises can cultivate that narrative competence (e.g., recalling specific creative stories and projecting future actions), but truly replicating it may require novel, non-electronic synapse-like hardware.

For the AI/ML community, the takeaway is that generative models remain powerful but incomplete: expect continued value from neurosymbolic integration and human-in-the-loop design, renewed research into non-von-Neumann architectures, and a practical focus on tools and training that augment, rather than attempt to fully replace, human initiative.