
Andrej Karpathy — AGI is still a decade away
The Andrej Karpathy episode.
During this interview, Andrej explains why reinforcement learning is terrible (but everything else is much worse), why AGI will just blend into the previous ~2.5 centuries of 2% GDP growth, why self-driving took so long to crack, and what he sees as the future of education.
It was a pleasure chatting with him.
Watch on YouTube; read the transcript.
Sponsors
* Labelbox helps you get data that is more detailed, more accurate, and higher signal than you could get by default, no matter your domain or training paradigm. Reach out today at labelbox.com/dwarkesh
* Mercury helps you run your business better. It’s the banking platform we use for the podcast — we love that we can see our accounts, cash flows, AR, and AP all in one place. Apply online in minutes at mercury.com
* Google’s Veo 3.1 update is a notable improvement to an already great model. Veo 3.1’s generations are more coherent and the audio is even higher-quality. If you have a Google AI Pro or Ultra plan, you can try it in Gemini today by visiting https://gemini.google
Timestamps
(00:00:00) – AGI is still a decade away
(00:29:45) – LLM cognitive deficits
(00:40:05) – RL is terrible
(00:49:38) – How do humans learn?
(01:06:25) – AGI will blend into 2% GDP growth
(01:17:36) – ASI
(01:32:50) – Evolution of intelligence & culture
(01:42:55) – Why self-driving took so long
(01:56:20) – Future of education
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe


Satya Nadella — How Microsoft is preparing for AGI
As part of this interview, Satya Nadella gave Dylan Patel (founder of SemiAnalysis) and me an exclusive first look at their brand-new Fairwater 2 datacenter.
Microsoft is building multiple Fairwaters, each of which has hundreds of thousands of GB200s & GB300s. Between all these interconnected buildings, they’ll have over 2 GW of total capacity. Just to give a frame of reference, even a single one of these Fairwater buildings is more powerful than any other AI datacenter that currently exists.
Satya then answered a bunch of questions about how Microsoft is preparing for AGI across all layers of the stack.
Watch on YouTube; read the transcript.
Sponsors
* Labelbox produces high-quality data at massive scale, powering any capability you want your model to have. Whether you’re building a voice agent, a coding assistant, or a robotics model, Labelbox gets you the exact data you need, fast. Reach out at labelbox.com/dwarkesh
* CodeRabbit automatically reviews and summarizes PRs so you can understand changes and catch bugs in half the time. This is helpful whether you’re coding solo, collaborating with agents, or leading a full team. To learn how CodeRabbit integrates directly into your workflow, go to coderabbit.ai
To sponsor a future episode, visit dwarkesh.com/advertise.
Timestamps
(00:00:00) - Fairwater 2
(00:03:20) - Business models for AGI
(00:12:48) - Copilot
(00:20:02) - Whose margins will expand most?
(00:36:17) - MAI
(00:47:47) - The hyperscale business
(01:02:44) - In-house chip & OpenAI partnership
(01:09:35) - The CAPEX explosion
(01:15:07) - Will the world trust US companies to lead AI?

Sarah Paine — How Russia sabotaged China's rise
In this lecture, military historian Sarah Paine explains how Russia—and specifically Stalin—completely derailed China’s rise, setting the country back by over a century.
This lecture was particularly interesting to me because, in my opinion, the Chinese Civil War is one of the top three most important events of the 20th century. And to understand why it transpired as it did, you need to understand Stalin’s role in the whole thing.
Watch on YouTube; read the transcript.
Sponsors
Mercury helps you run your business better. It’s the banking platform we use for the podcast — we love that we can see our cash balance, AR, and AP all in one place. Join us (and over 200,000 other entrepreneurs) at mercury.com
Labelbox scrutinizes public benchmarks at the single data-row level to probe what’s really being evaluated. Using this knowledge, they can generate custom training data for hill climbing existing benchmarks, or design new benchmarks from scratch. Learn more at labelbox.com/dwarkesh
To sponsor a future episode, visit dwarkesh.com/advertise.
Timestamps
(00:00:00) – How Russia took advantage of China’s weakness
(00:22:58) – After Stalin, China’s rise
(00:33:52) – Russian imperialism
(00:45:23) – China’s and Russia’s existential problems
(01:04:55) – Q&A: Sino-Soviet Split
(01:22:44) – Stalin’s lessons from WW2


Terence Tao – Kepler, Newton, and the true nature of mathematical discovery
We begin the episode with the absolutely ingenious and surprising way in which Kepler discovered the laws of planetary motion.
People sometimes say that AI will make especially fast progress at scientific discovery because of tight verification loops.
But the story of how we discovered the shape of our solar system shows how the verification loop for correct ideas can be decades (or even millennia) long.
During this time, what we know today as the better theory can actually make worse predictions.
And the reason it survives this epistemic hell is some mixture of judgment and heuristics that we don’t even understand well enough to articulate, much less codify into an RL loop. Hope you enjoy!
Watch on YouTube; read the transcript.
Sponsors
- Jane Street loves challenging my audience with different creative puzzles. One of my listeners, Shawn, solved Jane Street’s ResNet challenge and posted a great walk-through on X. If you want to try one of these puzzles yourself, there’s one live now at janestreet.com/dwarkesh.
- Labelbox can get you rubric-based evals, no matter your domain. These rubrics allow you to give your model feedback on all the dimensions you care about, so you can train how it thinks, not just what it thinks. Whatever you’re focused on—math, physics, finance, psychology or something else—Labelbox can help. Learn more at labelbox.com/dwarkesh.
- Mercury just released a new feature called Insights. Insights summarizes your money in and out, showing you your biggest transactions and calling out anything worth paying attention to. It’s a super low-friction way to stay on top of your business. Learn more at mercury.com/insights.
Timestamps
(00:00:00) – Kepler was a high-temperature LLM
(00:11:44) – How would we know if there’s a new unifying concept within heaps of AI slop?
(00:26:10) – The deductive overhang
(00:30:31) – Selection bias in reported AI discoveries
(00:46:43) – AI makes papers richer and broader, but not deeper
(00:53:00) – If AI solves a problem, can humans get understanding out of it?
(00:59:20) – We need a semi-formal language for the way that scientists actually talk to each other
(01:09:48) – How Terry uses his time
(01:17:05) – Human-AI hybrids will dominate math for a lot longer