Anthropic refused a Pentagon contract. OpenAI took it within hours. The US government then designated Anthropic, an American company, a supply chain risk and directed every federal agency to stop using its technology; it was the first time that classification had ever been applied domestically. The sticking points were two hard limits baked into Claude's design: no mass domestic surveillance and no integration into autonomous weapons systems. The Pentagon wanted access for 'all lawful purposes.' Anthropic said no. OpenAI CEO Sam Altman later admitted the deal was 'definitely rushed' and that 'the optics don't look good.'

By the weekend, users had launched the QuitGPT campaign and Claude had overtaken ChatGPT in the Apple App Store. This is consistent with existing data: a 2025 Givsly study of 2,100 US adults found that 88% buy from brands aligned with their values, and that 79% of Gen Z consumers will pay a premium for such brands. The article's core argument is worth reading in full because it draws a clean line between two product philosophies: values as architecture, meaning constraints that are structurally non-removable, versus values as policy, meaning contract language that can be renegotiated or quietly updated. Anthropic's red lines were the former. OpenAI's deal, which relies on deployment configuration and existing law to enforce the same limits, is the latter. One surveillance loophole the article identifies is specific and unresolved: US agencies can legally buy location data, financial records, and social media activity from commercial brokers and analyze it all at scale. AI does not create that loophole. It scales it. Whether OpenAI's contract closes that door remains an open question.

The article is not a hit piece on OpenAI; it takes the company's architectural argument seriously rather than dismissing it as frivolous. What it does instead is force a harder question every product team is now being asked to answer publicly: where do your values actually live? In the code, in the contract, in a blog post, or nowhere in particular? The answer, as this week demonstrated, now has measurable commercial consequences.

[READ ORIGINAL →]