In January 2026, Sequoia Capital partners Pat Grady and Sonya Huang declared that AGI is here and that coding agents are the proof. Their definition is functional, not philosophical: AGI is the ability to figure things out. Three components make it real: baseline knowledge from pre-training, reasoning via inference-time compute, and iteration through long-horizon agents. The third piece arrived when Claude Code and similar tools crossed a capability threshold in the weeks before publication.

The piece is worth reading for the concrete 31-minute recruiter scenario alone. An agent receives a single vague prompt, conducts layered research across LinkedIn, YouTube, and Twitter, filters for behavioral signals over credentials, identifies one high-probability candidate at a Series D company that just did marketing layoffs, and drafts a targeted outreach email. No script. No hand-holding. That is the loop, and the authors argue it maps directly onto how any competent human researcher actually thinks.
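To make the shape of that loop concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the function names (`gather_candidates`, `score`, `draft_outreach`), the signal sets, and the canned data are assumptions, not anything from the essay, and the research step is stubbed where a real agent would make live tool calls.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    company: str
    signals: list[str]  # behavioral evidence gathered from public sources

# Research step: a real agent would make tool calls here (profile search,
# scraping posts and videos across LinkedIn, YouTube, Twitter); this stub
# returns canned data so the loop is runnable end to end.
def gather_candidates(prompt: str) -> list[Candidate]:
    return [
        Candidate("A. Rivera", "Series D company with recent marketing layoffs",
                  ["ships side projects", "posts teardown threads"]),
        Candidate("B. Chen", "late-stage enterprise vendor",
                  ["impressive title", "no public work"]),
    ]

# Filter step: weight behavioral signals over credentials, per the scenario.
BEHAVIORAL = {"ships side projects", "posts teardown threads"}

def score(candidate: Candidate) -> int:
    return sum(1 for s in candidate.signals if s in BEHAVIORAL)

# Draft step: turn the top-scoring lead into a targeted outreach message.
def draft_outreach(candidate: Candidate) -> str:
    evidence = ", ".join(candidate.signals)
    return (f"Hi {candidate.name}, your public work ({evidence}) stood out. "
            f"Given the changes at {candidate.company}, open to a chat?")

if __name__ == "__main__":
    pool = gather_candidates("find me a strong growth marketer")
    best = max(pool, key=score)  # pick the highest-probability lead
    print(draft_outreach(best))
```

The point of the sketch is structural, not literal: each stubbed function marks a place where the agent iterates against the world, which is the long-horizon capability the authors argue just arrived.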

Grady and Huang are explicit that agents still hallucinate, lose context, and confidently pursue wrong paths. They are equally explicit that these failure modes are increasingly fixable and that the trajectory is irreversible. The essay sets up a larger argument about what long-horizon agents mean for markets and labor in 2026, and the framing they build in the first half is what makes the second half land.

[READ ORIGINAL →]