Anthropic lost a Pentagon contract after refusing to remove 'mass surveillance' language from its usage terms. Michael Horowitz, former deputy assistant secretary of defense for force development with direct experience in AI procurement, explains why that specific word choice became a breaking point, and what it reveals about the structural mismatch between how frontier AI labs think about model deployment and how the military actually needs to use these tools in live workflows.
The conversation goes deeper than the contract dispute. Horowitz draws a hard line between autonomous weapon systems, which already exist and are widely fielded, and fully autonomous weapons, which remain legally and operationally contested. He maps what current models like Claude can and cannot do in real battlefield decision chains, and argues that the gap between demo capability and operational trust, not the technology itself, is the actual obstacle.
The section on supply chain risk starting at 30:26 is worth the full listen. Horowitz frames Anthropic not just as a vendor but as a dependency, and walks through what a trust breakdown between a frontier lab and the DoD would mean for long-term AI procurement strategy. The reshaping-warfare section at 41:01 closes with specifics on timelines and force structure that most coverage skips entirely.
[WATCH ON YOUTUBE →]