Google's TurboQuant algorithm is the story here. The technique compresses AI model weights during inference, cutting memory bandwidth requirements sharply enough to send memory-chip stocks into a visible correction; Nvidia-adjacent memory suppliers felt it first. This is not a minor optimization: it challenges the assumption that scaling AI performance requires proportionally scaling expensive HBM, which has been a core thesis for investors in companies like SK Hynix and Micron.
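The episode doesn't detail TurboQuant's internals, so here is a generic weight-quantization sketch, not TurboQuant's actual method, showing why this class of technique moves the memory math: storing weights as int8 instead of float32 quarters the bytes that must cross the memory bus per token, and that memory traffic is the bandwidth-bound step in LLM serving. All names below are illustrative.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: weights take 1 byte
    instead of 4 (float32), cutting weight memory traffic ~4x."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequant_matmul(x: np.ndarray, q: np.ndarray, scale: float) -> np.ndarray:
    """Dequantize on the fly at inference time; only the compact int8
    weights are read from memory, which is the bandwidth-bound step."""
    return x @ (q.astype(np.float32) * scale)

# Hypothetical layer: a 4096x4096 float32 weight matrix (64 MiB)
# shrinks to 16 MiB when stored as int8.
w = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize_int8(w)
x = np.random.randn(1, 4096).astype(np.float32)
y = dequant_matmul(x, q, scale)
print(f"fp32: {w.nbytes / 2**20:.0f} MiB -> int8: {q.nbytes / 2**20:.0f} MiB")
```

Scale that 4x reduction across every layer of a frontier model and the investor logic follows: the same tokens-per-second can be served with far less HBM capacity and bandwidth than current demand models assume.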
The episode covers four other live situations worth tracking. OpenAI has quietly dialed back Sora's public push and is realigning resources toward AGI deployment timelines. SpaceX IPO speculation has attached a $2 trillion valuation figure to the conversation. Apple's latest App Store policy updates are creating friction for AI coding tools and vibe-coding workflows. And Meta is publicly naming a market-cap target while simultaneously running layoffs, a combination the hosts read as a strategic signal rather than a contradiction.
The TurboQuant section, from 1:14 to 6:35, is the reason to watch this in full. The hosts work through exactly which memory market segments absorb the most risk, and why the compression breakthrough could paradoxically accelerate AI deployment by lowering infrastructure costs. The bubble framing in the title is earned: if the memory-demand assumptions built into current valuations are wrong, the repricing has only started.
[WATCH ON YOUTUBE →]