Anthropic walked away from a $200 million Pentagon contract rather than drop two conditions: no mass domestic surveillance, no fully autonomous weapons without human sign-off. The US government responded by designating Anthropic a national security risk, cutting the company off from all Pentagon contractors and putting billions in future revenue in jeopardy. A contract for a replacement model was signed within hours. Anthropic did not move. That is the founding tension this piece is built around, and it is worth understanding before you pick your tools.

The article draws a clear structural map most users never see: the company (Anthropic, OpenAI, Google, DeepSeek), the model (Claude, GPT, Gemini), and the tool (the chat interface or app you actually open). These are not interchangeable layers. Claude's character is governed by a 30,000-word document called the Constitution, written by Amanda Askell, a philosopher who came to Anthropic from OpenAI. The model tiers are named for creative forms of increasing scale: Haiku for speed, Sonnet for daily use, Opus for complexity. Claude is named after Claude Shannon, the mathematician who founded information theory. One model powers many products you already use, from Notion AI to Cursor, often invisibly.
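To make the company/model/tool split concrete, here is a minimal sketch of what a third-party product does behind its own interface: it calls the company's API and picks a model tier by name. It assumes Anthropic's Python SDK (`pip install anthropic`) and an `ANTHROPIC_API_KEY` in the environment; the model identifiers are illustrative and change over time, so check the current model list before running.

```python
# A minimal sketch, assuming Anthropic's Python SDK ("pip install anthropic")
# and an ANTHROPIC_API_KEY set in the environment. Model identifiers are
# illustrative; check Anthropic's current model list for valid names.
import anthropic

client = anthropic.Anthropic()  # talks to the company's API, not a chat app

# The "tool" layer (a notes app, a code editor) wraps a call like this behind
# its own UI; the "model" layer is just the identifier passed in.
for model in ("claude-haiku-4-5", "claude-sonnet-4-5", "claude-opus-4-1"):
    reply = client.messages.create(
        model=model,        # tier: Haiku (speed), Sonnet (daily), Opus (depth)
        max_tokens=100,
        messages=[{"role": "user", "content": "In one sentence, what are you?"}],
    )
    print(f"{model}: {reply.content[0].text}")
```

Nothing but the model string changes across the loop, which is the point of the map: the company, the model, and the tool wrapping them are separate layers, even when the tool never mentions the other two.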

The piece is the first in a series on how AI actually works, who builds it, and what it is doing to human thinking. The Pentagon contract story is documented in detail, with a pointer to Lawfare's legal breakdown for readers who want the full detail of the dispute. What makes this worth reading in full is not the conclusion but the structural argument underneath it: when a model quietly runs beneath almost everything you touch, the company that trained it and the principles encoded into it are not background noise. They are the product.

[READ ORIGINAL →]