The Pentagon signed a contract with Anthropic to use Claude for intelligence analysis. Then Anthropic's own usage policies became the story, raising questions about whether a company built on safety principles can serve a military client without contradiction.
The real tension here is not about one contract. It is structural: AI labs publish acceptable use policies restricting harmful applications, while defense agencies exist in part to make lethal decisions. Anthropic's policy explicitly prohibits uses that cause physical harm; the DoD's job description includes exactly that.
The Equity podcast episode digs into what this means for the pipeline of startups now courting federal contracts. The question at its core: if Anthropic, one of the most credentialed and well-funded AI companies in the world, cannot navigate this cleanly, what does that mean for a Series A company that needs the revenue and lacks the leverage to set terms?