US general uses AI for military decisions and is "really close" with ChatGPT (www.dexerto.com)

🤖 AI Summary
Major General William “Hank” Taylor of the US Army told reporters he has been using ChatGPT to help inform command decisions and has become “really close” with the tool, saying he asks it to build models that give commanders better, timelier advantages. Taylor’s comments signal a concrete, high-level embrace of commercial large language models (LLMs) as decision-support tools in operational settings, and they underscore a belief shared by some military leaders that future battles will be decided at “machine speed,” not human speed. The admission matters because it throws into relief both the capabilities and the risks of inserting consumer-grade, cloud-based AI into military workflows. Technically, leaders are adopting LLMs for rapid inference, situational summarization, and model-assisted decision-making, which requires low-latency deployment, robust prompt engineering, and careful integration with sensor and command systems. But the security and governance implications are serious: LLMs can leak sensitive data, are vulnerable to adversarial inputs, and complicate the auditability and provenance of decisions. Reports that companies like BrainCo may have shared neurodata with foreign entities highlight related supply-chain and data-control concerns. The takeaway for AI/ML practitioners is clear: operational gains from LLMs are real, but safe military use demands hardened, audited models, strict data controls, human-in-the-loop safeguards, and clear policy frameworks.