Is Systems Research Just About Making Numbers Bigger? (brooker.co.za)

🤖 AI Summary
A new essay, "Barbarians at the Gate," argues that systems research is especially ripe for AI-driven solution discovery because many systems problems admit "reliable verifiers": concrete implementations, simulators, and workloads that let automated methods check correctness and measure performance. That makes systems work fertile ground for automated hill-climbing toward performance optima or Pareto frontiers; in practice, testing and benchmark design become the central infrastructure enabling effective AI optimization. The essay also flags two familiar but critical failure modes: overfitting to narrow workloads or traces (classic benchmark gaming), and reward hacking, where optimizers exploit loopholes in the evaluator rather than solving the intended problem. These issues mean scalable AI can amplify both productive acceleration and meaningless "number go up" research. Crucially, as optimization becomes easier, the bottleneck shifts to selecting and precisely formulating the right problems, an activity that requires vision, domain experience, and good incentives from conferences, funders, and labs. Handled well, AI could broaden participation and speed discovery; handled poorly, it risks a flood of low-insight papers and wasted attention despite higher publication volumes.
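The hill-climbing dynamic the essay describes can be sketched in a few lines. This is a minimal illustration, not the essay's method: the `benchmark` function below is a toy stand-in for a "reliable verifier," and the `batch`/`threads` parameters are hypothetical. The key point it demonstrates is that the optimizer climbs exactly what the verifier measures, which is why a verifier that covers only a narrow workload invites benchmark gaming.

```python
import random

def benchmark(config):
    # Toy "reliable verifier": scores a hypothetical system configuration.
    # A real verifier would run an implementation against real workloads;
    # this concave function (peak at batch=64, threads=8) is a stand-in.
    batch, threads = config
    return 1000.0 - (batch - 64) ** 2 - (threads - 8) ** 2

def hill_climb(start, steps=1000, seed=0):
    # Greedy hill climbing: perturb the configuration and keep the change
    # only when the verifier reports a strictly better score.
    rng = random.Random(seed)
    best, best_score = start, benchmark(start)
    for _ in range(steps):
        batch, threads = best
        candidate = (max(1, batch + rng.choice([-1, 1])),
                     max(1, threads + rng.choice([-1, 1])))
        score = benchmark(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```

Starting from a poor configuration such as `(1, 1)`, the climber converges near the benchmark's peak; if the benchmark mismeasures the intended goal, the climber converges on the mismeasurement just as reliably.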