The Guernica of AI (zigguratmag.substack.com)

🤖 AI Summary
A former Palantir employee warns that modern AI and data-integration systems, sold widely to militaries, corporations, and governments, are enabling a new, dehumanized form of targeting that played a role in recent high-civilian-casualty strikes in Gaza. The insider recounts marketing work at Palantir (Gotham, Foundry, Apollo), involvement in Project Maven–era doctrine, and links between an error-prone, AI-driven "kill list" platform called Lavender and Palantir-style big-data processes. The allegation: semi-autonomous kill chains (find, fix, track, target, engage, assess) borrow directly from military targeting doctrine and have been exported into civilian infrastructure, with reportedly accepted thresholds of collateral harm and failure modes that produce mass civilian casualties.

Technically, the piece connects the kill chain to ubiquitous practices in industry: constructing "digital twins" from surveillance sensors (drones, satellites, vehicle and location data, social media, third-party brokers), training models to classify and continuously track subjects, and closing iterative feedback loops that optimize engagement, whether for weapons or consumer manipulation.

The author highlights systemic risks (data quality, bias, privacy, proprietary monocultures) documented by MIT and scholars like Kate Crawford, and warns that marketing and policy rollbacks are normalizing deployment before adequate oversight. The upshot for AI/ML practitioners: architectures and tooling that improve situational awareness also scale harms when used without accountability, robust evaluation, and governance across both military and commercial domains.
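To make the shared architecture concrete, here is a minimal Python sketch of the loop the summary describes: records from many sources are fused into per-subject profiles ("digital twins"), a model scores each profile, and scores above a threshold trigger an action whose outcome feeds back into the model. Every name here (Profile, fuse, run_loop, the feature keys, the scoring rule) is a hypothetical illustration of the pattern, not any vendor's actual system or API; the consumer-engagement framing follows the piece's own point that the same structure serves ads and airstrikes alike.

```python
"""Hypothetical sketch of a find-fix-track-target-engage-assess loop
applied to fused multi-source profiles. Illustration only."""

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Profile:
    """A 'digital twin': everything the system has fused about one subject."""
    subject_id: str
    features: dict[str, float] = field(default_factory=dict)

def fuse(records: list[dict]) -> dict[str, Profile]:
    """FIND/FIX: merge records from many sensors/brokers, keyed by subject."""
    profiles: dict[str, Profile] = {}
    for rec in records:
        p = profiles.setdefault(rec["subject_id"], Profile(rec["subject_id"]))
        p.features.update(rec["features"])  # later sources silently overwrite earlier ones
    return profiles

def run_loop(profiles: dict[str, Profile],
             score: Callable[[Profile], float],
             threshold: float,
             engage: Callable[[Profile], bool],
             update: Callable[[Profile, bool], None]) -> None:
    """TRACK/TARGET/ENGAGE/ASSESS: score each twin, act above a fixed
    threshold, and feed the observed outcome back into the model."""
    for p in profiles.values():       # TRACK: iterate over live twins
        if score(p) >= threshold:     # TARGET: a hard-coded threshold decides
            outcome = engage(p)       # ENGAGE: the action, weapon or ad
            update(p, outcome)        # ASSESS: close the feedback loop

# Toy demo with an invented linear scorer and a crude reinforcement update.
weights = {"dwell_time": 0.7, "clicks": 0.3}

def score(p: Profile) -> float:
    return sum(weights.get(k, 0.0) * v for k, v in p.features.items())

def engage(p: Profile) -> bool:
    return p.features.get("clicks", 0.0) > 1.0   # stand-in for a real outcome

def update(p: Profile, outcome: bool) -> None:
    for k in p.features:                          # nudge weights toward what "worked"
        weights[k] = weights.get(k, 0.0) + (0.1 if outcome else -0.1)

profiles = fuse([
    {"subject_id": "u1", "features": {"dwell_time": 4.2}},
    {"subject_id": "u1", "features": {"clicks": 3.0}},   # second source, same subject
    {"subject_id": "u2", "features": {"dwell_time": 0.5, "clicks": 0.0}},
])
run_loop(profiles, score, threshold=1.0, engage=engage, update=update)
print(weights)
```

Note how the systemic risks the author cites live in exactly these few lines: fuse() lets the last source win on conflicting data (data quality), the fixed threshold encodes an accepted error rate (the "accepted thresholds of collateral harm" the summary mentions), and the update step optimizes for whatever outcome signal it is given, regardless of what that outcome costs.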