Laude Institute announces first batch of ‘Slingshots’ AI grants (techcrunch.com)

🤖 AI Summary
The Laude Institute unveiled the first cohort of its Slingshots accelerator grants—15 projects selected to "advance the science and practice of artificial intelligence" by supplying resources typically scarce in academia: funding, large-scale compute, and product/engineering support. Recipients commit to delivering tangible artifacts—startups, open-source codebases, or other outputs—so the program both funds research and fast-tracks deployment pathways.

The slate places a clear emphasis on the hard, under-resourced problem of AI evaluation, with familiar efforts like Terminal Bench and a new ARC-AGI iteration sitting alongside fresh proposals. Technically, the cohort spans benchmark design and core model infrastructure: Formula Code (Caltech/UT Austin) will evaluate agents' ability to optimize existing code; Columbia's BizBench aims to benchmark "white-collar" AI agents across business workflows; others target new reinforcement-learning architectures and model-compression techniques. John Boda Yang's CodeClash brings a dynamic, competition-based evaluation inspired by SWE-Bench, intended to keep metrics third-party and broadly relevant.

By coupling deep resources with a mandate for reusable deliverables, Slingshots could shape community standards for evaluation, improve reproducibility, and push open benchmarks that resist fragmentation into company-specific tests.