World Models are attracting serious capital and serious researchers. Fei-Fei Li's World Labs raised $1 billion. Yann LeCun's AMI raised $1.03 billion. General Intuition closed a $133.7 million seed round. These are not bets on incremental improvements to existing architectures. They are bets that a fundamentally different class of model, one that learns to predict the near future from action-labeled data, will produce machines capable of operating in the physical world in ways LLMs cannot.

This piece is co-written by Not Boring's Packy McCormick and General Intuition co-founder Pim De Witte. That combination matters: De Witte is building in this space, not observing it. The essay covers the field's history, theory, current progress, and competing architectural approaches, including an honest accounting of General Intuition's own trade-offs. The authors claim it is the most comprehensive public guide to World Models available right now. The argument worth reading in full: World Models trained on gaming clips may be the on-ramp to embodied AI agents that direct physical machines, not because LLMs failed, but because they were never designed for this.

The field is moving fast enough that this essay is a navigation tool, not a final answer. NVIDIA's GTC featured World Models prominently this week. The architectures are still competing, and the winners are not yet determined. Read this to understand what the bets are actually on before the next billion-dollar announcement lands.

[READ ORIGINAL →]