DeepSeek V3 costs $0.14 per million input tokens. GPT-5.2 costs $1.75 per million. Benchmarks show comparable performance. The US frontier labs, OpenAI and Anthropic, generated $22 billion in revenue in 2025. Chinese labs generated $1.8 billion. The 12:1 revenue gap is not a usage gap. It is a price gap. Chinese API prices collapsed by 90% in 2024, averaging $0.48 per million input tokens versus $3.38 for US models.
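
A back-of-the-envelope sketch of what those figures imply. The dollar amounts are the article's; the simplifying assumption that all revenue comes from input-token API sales at the quoted average prices is mine, so treat the implied usage numbers as illustrative only.

```python
# Illustrative only: assumes all revenue is API revenue billed at the quoted
# average input-token prices, which is a deliberate oversimplification.

us_revenue_2025 = 22e9    # OpenAI + Anthropic revenue, 2025 (USD)
cn_revenue_2025 = 1.8e9   # Chinese labs' revenue, 2025 (USD)

us_avg_price = 3.38       # avg US price, USD per million input tokens
cn_avg_price = 0.48       # avg Chinese price, USD per million input tokens

revenue_gap = us_revenue_2025 / cn_revenue_2025   # ~12.2x -> the "12:1" gap
price_gap = us_avg_price / cn_avg_price           # ~7.0x

# Dividing revenue by price gives an implied token volume; under this crude
# assumption the usage gap is under 2x, so most of the 12:1 revenue gap is
# explained by price, not usage.
us_implied_tokens = us_revenue_2025 / us_avg_price   # ~6.5e9 million input tokens
cn_implied_tokens = cn_revenue_2025 / cn_avg_price   # ~3.8e9 million input tokens

print(f"revenue gap: {revenue_gap:.1f}:1")
print(f"price gap:   {price_gap:.1f}:1")
print(f"implied usage gap: {us_implied_tokens / cn_implied_tokens:.1f}:1")
```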

Three forces explain the discount. First, distillation: Anthropic accused DeepSeek, Minimax, and Moonshot AI of running industrial-scale extraction campaigns against Claude, and OpenAI made similar accusations before Congress. Second, hyperscaler subsidies: Alibaba cut LLM pricing by 97%, and Baidu, ByteDance, and Tencent spent $1.1 billion on AI subsidies during Chinese New Year 2026 alone. Third, DeepSeek set the cost floor by training V3 for $6 million versus OpenAI's $100 million-plus for GPT-4, then hitting $220 million in ARR with 122 employees.

The pharma analogy is the piece's sharpest argument and worth reading in full. Pharma gets 20 years of patent protection before generics arrive. In AI, the generic window opens in weeks. The original asks the question that has no clean answer yet: how do you protect an asset that costs hundreds of millions to build when it can be copied in a month?

[READ ORIGINAL →]