Google AI aims to make best-in-class scientific software better (www.nature.com)

🤖 AI Summary
Google researchers unveiled an AI-driven workflow that "evolves" scientific software by treating individual programs as nodes in evolutionary trees and using a large language model (LLM) to generate and mutate them. For six scientific tasks the team built trees of up to roughly 2,000 nodes, seeding the initial nodes with LLM-written implementations (re-implementations of published methods, hybrids, or novel approaches), then prompting the LLM, augmented with paper summaries and domain knowledge, to modify, duplicate, or recombine nodes.

The system was iteratively refined on Kaggle-style tasks to tune how nodes were selected for mutation and how prompts were constructed. Mutations could also include literature searches, enabling open-ended, meandering exploration rather than optimization of only the current best program.

Across domains the evolved programs often beat human-written state of the art: the top program for single-cell RNA-seq batch integration outperformed ComBat by about 14%; the best COVID-19 hospitalization predictor topped models in the COVID-19 Forecast Hub; and other wins included satellite-image labeling, zebrafish neural-activity prediction, time-series forecasting, and improved calculus routines (solving 17 of 19 problems a baseline failed).

The work demonstrates that automated, LLM-guided code discovery can accelerate scientific-software development and potentially free researchers from repetitive coding, but the results await peer review and raise questions about reproducibility, robustness, and trust as such systems are deployed. Google says many of the optimized tools will be made available.
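The tree-based search described above can be sketched in a few lines. This is a minimal illustration, not the actual Google system: `mutate` stands in for the LLM rewrite call and `evaluate` for running a candidate program on the scientific task; both are hypothetical placeholders.

```python
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    program: str            # candidate source code (here: just a string)
    score: float            # measured task performance of this program
    parent: Optional[int]   # index of the parent node; None for seed nodes

def mutate(program: str) -> str:
    # Placeholder for the LLM call: the real system prompts a model,
    # augmented with paper summaries and domain knowledge, to rewrite,
    # duplicate, or recombine programs.
    return program + "#m"

def evaluate(program: str) -> float:
    # Placeholder scoring function: the real system executes the program
    # on the task and scores its output (e.g. batch-integration quality).
    return float(len(program))

def evolve(seeds, steps, seed=0):
    rng = random.Random(seed)
    tree = [Node(p, evaluate(p), None) for p in seeds]
    for _ in range(steps):
        # Sample a parent from the whole tree, not only the current best,
        # so the search can meander down weaker branches too.
        idx = rng.randrange(len(tree))
        child = mutate(tree[idx].program)
        tree.append(Node(child, evaluate(child), idx))
    return tree

tree = evolve(["prog_a", "prog_b"], steps=20)
best = max(tree, key=lambda n: n.score)
```

Sampling parents across the entire tree (rather than greedily mutating the best node) is what gives the search its open-ended character; the real system additionally tunes this selection policy and the prompt construction on Kaggle-style tasks.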