🤖 AI Summary
Hugging Face co‑founder Thomas Wolf argued that current large language models are unlikely to generate truly novel, Nobel‑level scientific breakthroughs. He pointed to two core limitations: modern chatbots are trained to predict the "most likely next token" and are often optimized to align with or affirm the user's prompt, whereas groundbreaking scientists tend to be contrarian, pursuing unlikely hypotheses that turn out to be correct. Wolf contrasted this view with more optimistic claims from figures such as Anthropic's Dario Amodei, and noted that while models can accelerate research, they don't inherently produce the low‑probability, high‑truth leaps that characterize historic discoveries.
The practical implication is a shift in expectations: today's AI is valuable as a research “co‑pilot” — speeding literature review, hypothesis generation, and analysis (AlphaFold’s protein predictions are a leading example) — but not yet an autonomous discoverer. Closing that gap would require rethinking objectives and model behavior so systems can propose and evaluate surprising, low‑probability hypotheses rather than just the most likely continuations. Startups such as Lila Sciences and FutureHouse are trying to push in that direction, but Wolf’s critique underscores that new architectures, training strategies, or incentives will likely be needed before AI can reliably originate revolutionary scientific ideas.