NVIDIA is a $4 trillion company, and Jensen Huang tells Lex Fridman exactly how it got there. The conversation opens on rack-scale engineering and extreme co-design: the physical and architectural choices that made NVIDIA's hardware the default substrate for AI compute. Huang explains scaling laws, names the blockers, and walks through the supply chain, memory, and power constraints that determine how fast AI can actually grow.

The operational details are what make this worth more than a headline skim. Huang describes how he runs NVIDIA, including his management structure and decision-making under pressure, and addresses China exposure and TSMC dependence head-on. He also discusses Elon Musk's Colossus cluster and what it signals about where compute density is heading, including the speculative but seriously considered idea of AI data centers in space.

The back half moves fast: AGI timelines, the future of programming, and whether NVIDIA hits $10 trillion. Huang does not dodge the $10 trillion question. The episode closes on consciousness and mortality, which sounds indulgent but lands differently coming from someone who rebuilt a company from near-bankruptcy into the infrastructure layer of a technological era. The full transcript is linked. Read it.

[WATCH ON YOUTUBE →]