Air Force AI Targeting Tests Show Promise, Despite Hallucinations (www.twz.com)

🤖 AI Summary
The Air Force’s DASH-2 sprint at the Shadow Operations Center-Nellis (SHOC-N) put AI-assisted decision tools into a live weapon-to-target matching exercise and found striking gains in speed and breadth alongside persistent quality and trust problems. Six industry teams plus a SHOC-N innovation team built AI-enabled microservices by observing human battle-management crews, then competed against human-only teams. The algorithms generated courses of action (COAs) in roughly eight seconds versus 16 minutes for the humans, and produced about 10 COAs to the human team’s three. However, many AI-suggested COAs were nonviable; some, for example, failed to account for sensor and weapon weather constraints (such as IR seekers against targets under cloud cover), illustrating “hallucinations” and the need for human oversight.

Technically and operationally, the sprint highlights both the potential and the immediate limits of AI in high-stakes command-and-control (ABMS/JADC2) settings: microservice architectures can massively accelerate candidate generation and multi-kill-chain execution, but automated checks, domain-aware constraints, and human-in-the-loop validation remain essential. The two-week development cycle left little room for embedded safeguards, and cultural adoption is an open issue: fewer than 2% of DoD personnel currently use AI tools, and past tests showed operators may disable systems they don’t trust. The program will iterate in DASH-3 and beyond while the Air Force works toward enterprise-wide convergence of battle networks and standards in 2026.
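To make the call for “checks, domain-aware constraints, and human-in-the-loop validation” concrete, here is a minimal, purely illustrative sketch of the kind of rule-based viability filter the summary argues for: a pre-screen that rejects machine-generated COAs violating a known weather constraint before a human battle manager reviews the rest. None of this is drawn from the actual SHOC-N microservices; every class, field, and threshold is an assumption invented for the example.

```python
from __future__ import annotations
from dataclasses import dataclass

# Hypothetical illustration only: a domain-aware viability check that screens
# machine-generated courses of action (COAs) before human review. All names,
# fields, and thresholds are invented; they do not reflect the real systems
# tested at SHOC-N.

@dataclass
class CourseOfAction:
    weapon: str
    seeker: str           # e.g., "IR", "radar", "GPS/INS"
    target_id: str
    cloud_cover_pct: int  # forecast cloud cover over the target (hypothetical field)

def violates_weather_constraint(coa: CourseOfAction) -> bool:
    """Flag the kind of pairing the article calls nonviable, e.g. an
    IR seeker tasked against a target under heavy cloud cover."""
    return coa.seeker == "IR" and coa.cloud_cover_pct > 70  # illustrative threshold

def screen_for_human_review(candidates: list[CourseOfAction]) -> list[CourseOfAction]:
    """Automated pre-filter; surviving COAs still go to a human
    battle manager for final approval (human-in-the-loop)."""
    return [c for c in candidates if not violates_weather_constraint(c)]

if __name__ == "__main__":
    candidates = [
        CourseOfAction("missile_A", "IR", "TGT-01", cloud_cover_pct=90),
        CourseOfAction("missile_B", "radar", "TGT-01", cloud_cover_pct=90),
    ]
    viable = screen_for_human_review(candidates)
    print(f"{len(viable)} of {len(candidates)} COAs pass the weather check")
```

In practice a filter like this would be only one validation layer among many, and anything that passes would still require human approval, which is the human-in-the-loop point the summary makes.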