Dwarkesh Podcast

Science

Deeply researched interviews

All Episodes (122)


Michael Nielsen – How science actually progresses

Really enjoyed chatting with Michael Nielsen about how we recognize scientific progress.

It's especially relevant for closing the RL verification loop for scientific discovery.

But it's also a surprisingly mysterious and elusive question when you look at the history of human science.

We approach this question through stories like Einstein (who claimed that he hadn't even heard of the famous Michelson-Morley experiment, which is supposed to have motivated special relativity, until after he had come up with the theory), Darwin (why did it take till 1859 to lay out an idea whose essence every farmer since antiquity must have observed?), Prout (how do you recognize that isotopes exist if you cannot chemically separate them?), and many others.

The verification loop on scientific ideas is often extremely long and weirdly hostile. Ancient Athenians dismissed Aristarchus's heliocentrism in the 3rd century BC because it would imply that the stars should shift in the sky as the Earth orbits the sun. The first successful measurement of stellar parallax was in 1838. That's a 2,000-year verification loop.

But clearly human science is able to make progress faster than raw experimental falsification/verification would imply, and in cases where experiments are very ambiguous. How?

Michael has some very deep and provocative hypotheses about the nature of progress. One I found especially thought-provoking is that aliens will likely have a VERY different science + tech stack than us. This contradicts the common-sense picture of a linear tech tree that I had been assuming, and it has some interesting implications about how future civilizations might trade and cooperate with each other.

Watch on YouTube; read the transcript.

Sponsors

* Labelbox researchers built a new safety benchmark. Why? Well, current safety benchmarks claim that attacks on top models are successful only a few percent of the time, but the prompts in those benchmarks don’t reflect how real bad actors actually write. You can read Labelbox’s research here. If this could be useful for your work, reach out at labelbox.com/dwarkesh

* Mercury has an MCP that lets you give an LLM access to your full transaction history, including things like attached receipts and internal notes. I just used it to categorize my 2025 transactions, and it worked shockingly well. Modern functionality like this is exactly why I use Mercury. Learn more at mercury.com

* Jane Street’s ML engineers presented some of their GPU optimization workflows at GTC, showing how they use CUDA graphs, streams, and custom kernels to shave real time off their training runs. You can watch the full talk here. And they open-sourced all the relevant code here. If this kind of stuff excites you, Jane Street is hiring — learn more at janestreet.com/dwarkesh

Timestamps

(00:00:00) – How scientific progress outpaces its verification loops

(00:17:51) – Newton was the last of the magicians

(00:23:26) – Why wasn’t natural selection obvious much earlier?

(00:29:52) – Could gradient descent have discovered general relativity?

(00:50:54) – Why aliens will have a different tech stack than us

(01:15:26) – Are there infinitely many deep scientific principles left to discover?

(01:26:25) – What drew Michael to quantum computing so early?

(01:35:29) – Does science need a new way to assign credit?

(01:43:57) – Prolificness versus depth

(01:49:17) – What it takes to actually internalize what you learn



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
02:03:03 • April 7, 2026

Terence Tao – Kepler, Newton, and the true nature of mathematical discovery

We begin the episode with the absolutely ingenious and surprising way in which Kepler discovered the laws of planetary motion.

People sometimes say that AI will make especially fast progress at scientific discovery because of tight verification loops.

But the story of how we discovered the shape of our solar system shows how the verification loop for correct ideas can be decades (or even millennia) long.

During this time, what we know today as the better theory can actually make worse predictions.

And the reason it survives this epistemic hell is some mixture of judgment and heuristics that we don’t even understand well enough to actually articulate, much less codify into an RL loop. Hope you enjoy!

Watch on YouTube; read the transcript.

Sponsors

- Jane Street loves challenging my audience with different creative puzzles. One of my listeners, Shawn, solved Jane Street’s ResNet challenge and posted a great walk-through on X. If you want to try one of these puzzles yourself, there’s one live now at janestreet.com/dwarkesh.

- Labelbox can get you rubric-based evals, no matter your domain. These rubrics allow you to give your model feedback on all the dimensions you care about, so you can train how it thinks, not just what it thinks. Whatever you’re focused on—math, physics, finance, psychology or something else—Labelbox can help. Learn more at labelbox.com/dwarkesh.

- Mercury just released a new feature called Insights. Insights summarizes your money in and out, showing you your biggest transactions and calling out anything worth paying attention to. It’s a super low-friction way to stay on top of your business. Learn more at mercury.com/insights.

Timestamps

(00:00:00) – Kepler was a high temperature LLM

(00:11:44) – How would we know if there’s a new unifying concept within heaps of AI slop?

(00:26:10) – The deductive overhang

(00:30:31) – Selection bias in reported AI discoveries

(00:46:43) – AI makes papers richer and broader, but not deeper

(00:53:00) – If AI solves a problem, can humans get understanding out of it?

(00:59:20) – We need a semi-formal language for the way that scientists actually talk to each other

(01:09:48) – How Terry uses his time

(01:17:05) – Human-AI hybrids will dominate math for a lot longer



01:23:44 • March 20, 2026

Dylan Patel — Deep dive on the 3 big bottlenecks to scaling AI compute

Dylan Patel, founder of SemiAnalysis, provides a deep dive into the 3 big bottlenecks to scaling AI compute: logic, memory, and power.

And walks through the economics of labs, hyperscalers, foundries, and fab equipment manufacturers.

Learned a ton about every single level of the stack. Enjoy!

Watch on YouTube; read the transcript.

Sponsors

* Mercury has already saved me a bunch of time this tax season. Last year, I used Mercury to request W-9s from all the contractors I worked with. Then, when it came time to issue 1099s this year, I literally just clicked a button and Mercury sent them out. Learn more at mercury.com.

* Labelbox noticed that even when voice models appear to take interruptions in stride, their performance degrades. To figure out why, they built a new evaluation pipeline called EchoChain. EchoChain diagnoses voice models’ specific failure modes, letting you understand what your model needs to truly handle interruptions. Check it out at labelbox.com/dwarkesh.

* Jane Street is basically a research lab with a trading desk attached – and their infrastructure backs this up. They’ve got tens of thousands of GPUs, hundreds of thousands of CPU cores, and exabytes of storage. This is what it takes to find subtle signals hidden deep within noisy market data. If this sounds interesting, you can explore open positions at janestreet.com/dwarkesh.

Timestamps

(00:00:00) – Why an H100 is worth more today than 3 years ago

(00:24:52) – Nvidia secured TSMC allocation early; Google is getting squeezed

(00:34:34) – ASML will be the #1 constraint for AI compute scaling by 2030

(00:55:47) – Can't we just use TSMC's older fabs?

(01:05:37) – When will China outscale the West in semis?

(01:16:01) – The enormous incoming memory crunch

(01:42:34) – Scaling power in the US will not be a problem

(01:54:44) – Space GPUs aren't happening this decade

(02:14:07) – Why aren't more hedge funds making the AGI trade?

(02:18:30) – Will TSMC kick Apple out from N2?

(02:24:16) – Robots and Taiwan risk



02:31:03 • March 13, 2026

The most important question nobody's asking about AI

Read the full essay here: https://www.dwarkesh.com/p/dow-anthropic

Timestamps

(00:00:00) - Anthropic vs The Pentagon

(00:04:16) - The overhangs of tyranny

(00:05:54) - AI structurally favors mass surveillance

(00:08:25) - Alignment...to whom?

(00:13:55) - Coordination not worth the costs



00:24:38 • March 11, 2026

Why Leonardo was a saboteur, Gutenberg went broke, and Florence was weird – Ada Palmer

Renaissance history is so much wilder and weirder than you would have expected. Very fun chatting with Ada Palmer (historian, novelist, and composer based at the University of Chicago).

Some especially fascinating things I learned from the conversation and her excellent book, Inventing the Renaissance:

Not only did Gutenberg go bankrupt in the 1450s (after inventing the printing press), but so did the bank that foreclosed on him, and so did his apprentices. This is because paper was still very expensive, and so you had to make this big upfront CAPEX decision to print a batch of 300 copies of a book - say the Bible. But he’s in a small landlocked German town where only priests are allowed to read the Bible - so he sells maybe 7 copies. It’s only when this technology ends up in Venice, where you can hand 10 copies to each of 30 ship captains going to 30 different cities, that it starts taking off.

Speaking of which, the printing revolution wasn’t just one single discrete event, just as the computer revolution has been this whole century of going from mainframes -> personal computers -> phones -> social media, each with different and accelerating social impact. Books came first, but they’re slow to print, and made in small batches. The real revolution is pamphlets - much faster, much harder to censor. Pamphlet runners are how you can have Luther’s 95 Theses go from Wittenberg to London in 17 days.

So much other wild stuff from this episode. For example, did you know that the largest and best-funded experimental laboratory in 17th century Europe was very likely the Roman one run by inquisitors? Ada jokes that the Inquisition accidentally invented peer review. The focus of the Inquisition is really misunderstood - it was obsessed with catching dangerous new heretics like Lutherans and Calvinists - it only executed one person for doing science.

And this leads Ada to make an observation that I think is really wise: the authorities and censors are always worried about the exact wrong things, given 20/20 hindsight. When the Inquisition raids an underground bookshop during the French Enlightenment, they don’t mind the Rousseau, Voltaire, and Encyclopédie, but they lose their minds about some Jansenist treatises on the technical nature of the Trinity.

More broadly, a lesson for me from this episode is that it’s just really hard to shape history in the specific way that you want to impact things. One of the most famous medieval scholars is this guy Petrarch. He survives the Black Death in the 1340s, watches his friends die to plague and bandits, and says: our leaders are selfish and terrible, we need to raise them on the Roman classics so they’ll act like Cicero. So Europe pours money into finding ancient manuscripts, building libraries, and educating princes on classical virtues. Those princes grow up and fight bigger, nastier wars than ever before with new deadlier technology. And this, combined with greater urbanization and endemic plague, results in European life expectancy decreasing from 35 in the medieval period to 18 during the Renaissance (the period which we in retrospect think of as a golden age but which many people living through it thought of as the continuation of the dark ages that had persisted since the fall of Rome).

Anyways, the libraries Petrarch inspires stick around, the printing press makes them accessible to everyone, and 200 years later a generation of medical students is reading Lucretius and asking “what if there are atoms and that’s how diseases work?” which eventually leads to germ theory, vaccines, and a cure for the Black Death (Ada has a longer, more involved explanation of how cosplaying the Romans leads, through a long series of steps, to the scientific revolution). Petrarch wanted to produce philosopher-kings who shared his values. Instead he created a world that doesn’t share his values at all but can cure the disease that destroyed his.

Watch on YouTube; read the transcript.

Sponsors

* Jane Street is still waiting on someone to solve their backdoor puzzle… They’re accepting submissions until April 1st and have set aside $50,000 for the best attempts. Separately, applications are live for Jane Street’s summer ML internships in NY, London, and Hong Kong. Go check all of this out at janestreet.com/dwarkesh.

* Labelbox can help ensure your agents don’t need to rely on overspecified prompts. They tailor real-world scenarios to whatever domain you’re focused on, and they make sure the data you train on rewards real understanding, not just instruction-following. Learn more at labelbox.com/dwarkesh

* Mercury’s personal accounts let you add users, issue cards, and customize permissions. This is super useful for sharing finances with a partner, a roommate… or even an OpenClaw agent. And, if you’re already a Mercury Business user, your personal account is free! See terms and conditions below, and learn more at mercury.com/personal-banking

Eligible Mercury Business users who apply for and maintain a Mercury Personal account may have their Mercury Personal subscription fee waived provided they remain a user on an active Mercury Business account in good standing. Standard Mercury Platform Subscription fees will apply if they no longer meet eligibility requirements, including but not limited to no longer being associated with an eligible Mercury Business account, or if the program is modified or terminated. Mercury may modify or discontinue this offering at any time and will provide notice as required by law. See Subscription Terms for full details.

* To sponsor a future episode, visit dwarkesh.com/advertise.

Timestamps

(00:00:00) - How cosplaying Ancient Rome led to the Renaissance

(00:28:49) - How Florence’s weird republic worked

(00:38:13) - How the Medicis took over Florence

(00:58:12) - Why it was so hard for Gutenberg to make any money off the printing press

(01:17:34) - Why the industrial revolution didn’t happen in Italy

(01:23:02) - The Library of Alexandria isn’t where most ancient books were lost

(01:41:21) - The Inquisition accidentally invented peer review



02:02:19 • March 6, 2026

Dario Amodei — "We are near the end of the exponential"

Dario Amodei thinks we are just a few years away from AGI — or as he puts it, from having “a country of geniuses in a data center”. In this episode, we discuss what to make of the scaling hypothesis in the current RL regime, why task-specific RL might lead to generalization, and how AI will diffuse throughout the economy. We also dive into Anthropic’s revenue projections, compute commitments, path to profitability, and more.

Watch on YouTube; read the transcript.

Sponsors

* Labelbox can get you the RL tasks and environments you need. Their massive network of subject-matter experts ensures realism across domains, and their in-house tooling lets them continuously tweak task difficulty to optimize learning. Reach out at labelbox.com/dwarkesh.

* Jane Street sent me another puzzle… this time, they’ve trained backdoors into 3 different language models — they want you to find the triggers. Jane Street isn’t even sure this is possible, but they’ve set aside $50,000 for the best attempts and write-ups. They’re accepting submissions until April 1st at janestreet.com/dwarkesh.

* Mercury’s personal accounts make it easy to share finances with a partner, a roommate… or OpenClaw. Last week, I wanted to try OpenClaw for myself, so I used Mercury to spin up a virtual debit card with a small spend limit, and then I let my agent loose. No matter your use case, apply at mercury.com/personal-banking.

Timestamps

(00:00:00) - What exactly are we scaling?

(00:12:36) - Is diffusion cope?

(00:29:42) - Is continual learning necessary?

(00:46:20) - If AGI is imminent, why not buy more compute?

(00:58:49) - How will AI labs actually make profit?

(01:31:19) - Will regulations destroy the boons of AGI?

(01:47:41) - Why can’t China and America both have a country of geniuses in a datacenter?



02:22:20 • February 13, 2026

Elon Musk — “In 36 months, the cheapest place to put AI will be space”

In this episode, John and I got to do a real deep-dive with Elon. We discuss the economics of orbital data centers, the difficulties of scaling power on Earth, what it would take to manufacture humanoids at high-volume in America, xAI’s business and alignment plans, DOGE, and much more.

Watch on YouTube; read the transcript.

Sponsors

* Mercury just started offering personal banking! I’m already banking with Mercury for business purposes, so getting to bank with them for my personal life makes everything so much simpler. Apply now at mercury.com/personal-banking

* Jane Street sent me a new puzzle last week: they trained a neural net, shuffled all 96 layers, and asked me to put them back in order. I tried but… I didn’t quite nail it. If you’re curious, or if you think you can do better, you should take a stab at janestreet.com/dwarkesh

* Labelbox can get you robotics and RL data at scale. Labelbox starts by helping you define your ideal data distribution, and then their massive Alignerr network collects frontier-grade data that you can use to train your models. Learn more at labelbox.com/dwarkesh

Timestamps

(00:00:00) - Orbital data centers

(00:36:46) - Grok and alignment

(00:59:56) - xAI’s business plan

(01:17:21) - Optimus and humanoid manufacturing

(01:30:22) - Does China win by default?

(01:44:16) - Lessons from running SpaceX

(02:20:08) - DOGE

(02:38:28) - TeraFab



02:49:45 • February 5, 2026

Adam Marblestone — AI is missing something fundamental about the brain

Adam Marblestone is CEO of Convergent Research. He’s had a very interesting past life: he was a research scientist at Google Deepmind on their neuroscience team and has worked on everything from brain-computer interfaces to quantum computing to nanotech and even formal mathematics.

In this episode, we discuss how the brain learns so much from so little, what the AI field can learn from neuroscience, and the answer to Ilya’s question: how does the genome encode abstract reward functions? Turns out, they’re all the same question.

Watch on YouTube; read the transcript.

Sponsors

* Gemini 3 Pro recently helped me run an experiment to test multi-agent scaling: basically, if you have a fixed budget of compute, what is the optimal way to split it up across agents? Gemini was my colleague throughout the process — honestly, I couldn’t have investigated this question without it. Try Gemini 3 Pro today gemini.google.com

* Labelbox helps you train agents to do economically-valuable, real-world tasks. Labelbox’s network of subject-matter experts ensures you get hyper-realistic RL environments, and their custom tooling lets you generate the highest-quality training data possible from those environments. Learn more at labelbox.com/dwarkesh

To sponsor a future episode, visit dwarkesh.com/advertise.

Timestamps

(00:00:00) – The brain’s secret sauce is the reward functions, not the architecture

(00:22:20) – Amortized inference and what the genome actually stores

(00:42:42) – Model-based vs model-free RL in the brain

(00:50:31) – Is biological hardware a limitation or an advantage?

(01:03:59) – Why a map of the human brain is important

(01:23:28) – What value will automating math have?

(01:38:18) – Architecture of the brain

Further reading

Intro to Brain-Like-AGI Safety - Steven Byrnes’s theory of the learning vs steering subsystem; referenced throughout the episode.

A Brief History of Intelligence - Great book by Max Bennett on connections between neuroscience and AI

Adam’s blog, and Convergent Research’s blog on essential technologies.

A Tutorial on Energy-Based Learning by Yann LeCun

What Does It Mean to Understand a Neural Network? - Kording & Lillicrap

E11 Bio and their brain connectomics approach

Sam Gershman on what dopamine is doing in the brain

Gwern’s proposal on training models on the brain’s hidden states



01:49:53 • December 30, 2025

Thoughts on AI progress (Dec 2025)

Read the essay here.

Timestamps

00:00:00 What are we scaling?

00:03:11 The value of human labor

00:05:04 Economic diffusion lag is cope

00:06:34 Goal-post shifting is justified

00:08:23 RL scaling

00:09:18 Broadly deployed intelligence explosion



00:12:28 • December 23, 2025

Sarah Paine — Why Russia Lost the Cold War

This is the final episode of the Sarah Paine lecture series, and it’s probably my favorite one. Sarah gives a “tour of the arguments” on what ultimately led to the Soviet Union’s collapse, diving into the role of the US, the Sino-Soviet border conflict, the oil bust, ethnic rebellions and even the Roman Catholic Church. As she points out, this is all particularly interesting as we find ourselves potentially at the beginning of another Cold War.

As we wrap up this lecture series, I want to take a moment to thank Sarah for doing this with me. It has been such a pleasure.

If you want more of her scholarship, I highly recommend checking out the books she’s written. You can find them here.

Watch on YouTube; read the transcript.

Sponsors

* Labelbox can get you the training data you need, no matter the domain. Their Alignerr network includes the STEM PhDs and coding experts you’d expect, but it also has experienced cinematographers and talented voice actors to help train frontier video and audio models. Learn more at labelbox.com/dwarkesh.

* Sardine doesn’t just assess customer risk for banking & retail. Their AI risk management platform is also extremely good at detecting fraudulent job applications, which I’ve found useful for my own hiring process. If you need help with hiring risk—or any other type of fraud prevention—go to sardine.ai/dwarkesh.

* Gemini’s Nano Banana Pro helped us make many of the visuals in this episode. For example, we used it to turn dense tables into clear charts so that it’d be easier to quickly understand the trends that Sarah discusses. You can try Nano Banana Pro now in the Gemini app. Go to gemini.google.com.

Timestamps

(00:00:00) – Did Reagan single-handedly win the Cold War?

(00:15:53) – Eastern Bloc uprisings & oil crisis

(00:30:37) – Gorbachev’s mistakes

(00:37:33) – German unification and NATO expansion

(00:48:31) – The Gulf War and the Cold War endgame

(00:56:10) – How central planning survived so long

(01:14:46) – Sarah’s life in the USSR in 1988



01:54:55 • December 19, 2025

Ilya Sutskever — We're moving from the age of scaling to the age of research

Ilya & I discuss SSI’s strategy, the problems with pre-training, how to improve the generalization of AI models, and how to ensure AGI goes well.

Watch on YouTube; read the transcript.

Sponsors

* Gemini 3 is the first model I’ve used that can find connections I haven’t anticipated. I recently wrote a blog post on RL’s information efficiency, and Gemini 3 helped me think it all through. It also generated the relevant charts and ran toy ML experiments for me with zero bugs. Try Gemini 3 today at gemini.google

* Labelbox helped me create a tool to transcribe our episodes! I’ve struggled with transcription in the past because I don’t just want verbatim transcripts, I want transcripts reworded to read like essays. Labelbox helped me generate the exact data I needed for this. If you want to learn how Labelbox can help you (or if you want to try out the transcriber tool yourself), go to labelbox.com/dwarkesh

* Sardine is an AI risk management platform that brings together thousands of device, behavior, and identity signals to help you assess a user’s risk of fraud & abuse. Sardine also offers a suite of agents to automate investigations so that as fraudsters use AI to scale their attacks, you can use AI to scale your defenses. Learn more at sardine.ai/dwarkesh

To sponsor a future episode, visit dwarkesh.com/advertise.

Timestamps

(00:00:00) – Explaining model jaggedness

(00:09:39) - Emotions and value functions

(00:18:49) – What are we scaling?

(00:25:13) – Why humans generalize better than models

(00:35:45) – SSI’s plan to straight-shot superintelligence

(00:46:47) – SSI’s model will learn from deployment

(00:55:07) – How to think about powerful AGIs

(01:18:13) – “We are squarely an age of research company”

(01:20:23) – Self-play and multi-agent

(01:32:42) – Research taste



01:36:03 • November 25, 2025

Satya Nadella — How Microsoft is preparing for AGI

As part of this interview, Satya Nadella gave Dylan Patel (founder of SemiAnalysis) and me an exclusive first-look at their brand-new Fairwater 2 datacenter.

Microsoft is building multiple Fairwaters, each of which has hundreds of thousands of GB200s & GB300s. Between all these interconnected buildings, they’ll have over 2 GW of total capacity. Just to give a frame of reference, even a single one of these Fairwater buildings is more powerful than any other AI datacenter that currently exists.

Satya then answered a bunch of questions about how Microsoft is preparing for AGI across all layers of the stack.

Watch on YouTube; read the transcript.

Sponsors

* Labelbox produces high-quality data at massive scale, powering any capability you want your model to have. Whether you’re building a voice agent, a coding assistant, or a robotics model, Labelbox gets you the exact data you need, fast. Reach out at labelbox.com/dwarkesh

* CodeRabbit automatically reviews and summarizes PRs so you can understand changes and catch bugs in half the time. This is helpful whether you’re coding solo, collaborating with agents, or leading a full team. To learn how CodeRabbit integrates directly into your workflow, go to coderabbit.ai

To sponsor a future episode, visit dwarkesh.com/advertise.

Timestamps

(00:00:00) - Fairwater 2

(00:03:20) - Business models for AGI

(00:12:48) - Copilot

(00:20:02) - Whose margins will expand most?

(00:36:17) - MAI

(00:47:47) - The hyperscale business

(01:02:44) - In-house chip & OpenAI partnership

(01:09:35) - The CAPEX explosion

(01:15:07) - Will the world trust US companies to lead AI?



01:27:47 • November 12, 2025

Sarah Paine — How Russia sabotaged China's rise

In this lecture, military historian Sarah Paine explains how Russia—and specifically Stalin—completely derailed China’s rise, slowing them down for over a century.

This lecture was particularly interesting to me because, in my opinion, the Chinese Civil War is 1 of the top 3 most important events of the 20th century. And to understand why it transpired as it did, you need to understand Stalin’s role in the whole thing.

Watch on YouTube; read the transcript.

Sponsors

Mercury helps you run your business better. It’s the banking platform we use for the podcast — we love that we can see our cash balance, AR, and AP all in one place. Join us (and over 200,000 other entrepreneurs) at mercury.com

Labelbox scrutinizes public benchmarks at the single data-row level to probe what’s really being evaluated. Using this knowledge, they can generate custom training data for hill climbing existing benchmarks, or design new benchmarks from scratch. Learn more at labelbox.com/dwarkesh

To sponsor a future episode, visit dwarkesh.com/advertise.

Timestamps

(00:00:00) – How Russia took advantage of China’s weakness

(00:22:58) – After Stalin, China’s rise

(00:33:52) – Russian imperialism

(00:45:23) – China’s and Russia’s existential problems

(01:04:55) – Q&A: Sino-Soviet Split

(01:22:44) – Stalin’s lessons from WW2



01:30:36 • October 31, 2025

Andrej Karpathy — AGI is still a decade away

The Andrej Karpathy episode.

During this interview, Andrej explains why reinforcement learning is terrible (but everything else is much worse), why AGI will just blend into the previous ~2.5 centuries of 2% GDP growth, why self driving took so long to crack, and what he sees as the future of education.

It was a pleasure chatting with him.

Watch on YouTube; read the transcript.

Sponsors

* Labelbox helps you get data that is more detailed, more accurate, and higher signal than you could get by default, no matter your domain or training paradigm. Reach out today at labelbox.com/dwarkesh

* Mercury helps you run your business better. It’s the banking platform we use for the podcast — we love that we can see our accounts, cash flows, AR, and AP all in one place. Apply online in minutes at mercury.com

* Google’s Veo 3.1 update is a notable improvement to an already great model. Veo 3.1’s generations are more coherent and the audio is even higher-quality. If you have a Google AI Pro or Ultra plan, you can try it in Gemini today by visiting https://gemini.google

Timestamps

(00:00:00) – AGI is still a decade away

(00:29:45) – LLM cognitive deficits

(00:40:05) – RL is terrible

(00:49:38) – How do humans learn?

(01:06:25) – AGI will blend into 2% GDP growth

(01:17:36) – ASI

(01:32:50) – Evolution of intelligence & culture

(01:42:55) - Why self driving took so long

(01:56:20) - Future of education



02:25:19 • October 17, 2025

Nick Lane – Life as we know it is chemically inevitable

Nick Lane has some pretty wild ideas about the evolution of life.

He thinks early life was continuous with the spontaneous chemistry of undersea hydrothermal vents.

Nick’s story may be wrong, but I find it remarkable that with just that starting point, you can explain so much about why life is the way that it is — the things you’re supposed to just take as givens in biology class:

* Why are there two sexes? Why sex at all?

* Why are bacteria so simple despite being around for 4 billion years? Why is there so much shared structure between all eukaryotic cells despite the enormous morphological variety between animals, plants, fungi, and protists?

* Why did the endosymbiosis event that led to eukaryotes happen only once, and in the particular way that it did?

* Why is all life powered by proton gradients? Why does all life on Earth share not only the Krebs Cycle, but even the intermediate molecules like Acetyl-CoA?

His theory implies that early life is almost chemically inevitable (potentially blooming on hundreds of millions of planets in the Milky Way alone), and that the real bottleneck is the complex eukaryotic cell.

Watch on YouTube; listen on Apple Podcasts or Spotify.

Sponsors

* Gemini in Sheets lets you turn messy text into structured data. We used it to classify all our episodes by type and topic, no manual tagging required. If you’re a Google Workspace user, you can get started today at docs.google.com/spreadsheets/

* Labelbox has a massive network of domain experts (called Alignerrs) who help train AI models in a way that ensures they understand the world deeply, not superficially. These Alignerrs are true experts — one even tutored me in chemistry as I prepped for this episode. Learn more at labelbox.com/dwarkesh

* Lighthouse helps frontier technology companies like Cursor and Physical Intelligence navigate the U.S. immigration system and hire top talent from around the world. Lighthouse handles everything, maximizing the probability of visa approval while minimizing the work you have to do. Learn more at lighthousehq.com/employers

To sponsor a future episode, visit dwarkesh.com/advertise.

Timestamps

(00:00:00) – The singularity that unlocked complex life

(00:08:26) – Early life continuous with Earth's geochemistry

(00:23:36) – Eukaryotes are the great filter for intelligent life

(00:42:16) – Mitochondria are the reason we have sex

(01:08:12) – Are bioelectric fields linked to consciousness?



01:20:08 • October 10, 2025