Jensen Huang – TPU competition, why we should sell chips to China, & Nvidia’s supply chain moat
I asked Jensen about TPU competition, Nvidia’s lock on the ever more bottlenecked supply chain needed to make advanced chips, whether we should be selling AI chips to China, why Nvidia doesn’t just become a hyperscaler, how it makes its investments, and much more. Enjoy!
Watch on YouTube; read the transcript.
Sponsors
* Crusoe’s cloud runs on state-of-the-art Blackwell GPUs, with Vera Rubin deployment scheduled for later this year. But hardware is only part of the story: for inference, Crusoe’s MemoryAlloy tech implements a cluster-wide KV cache, delivering up to 10x faster time-to-first-token (TTFT) and 5x better throughput than vLLM. Learn more at crusoe.ai/dwarkesh
* Cursor helped me build an AI co-researcher over the course of a weekend. Now I have an AI agent that I can collaborate with in Google Docs via inline comment threads! And while other agentic coding tools feel like a total black box, Cursor let me stay on top of the full implementation. You can try my co-researcher out at github.com/dwarkeshsp/ai_coworker, or get started on your own Cursor project today at cursor.com/dwarkesh
* Jane Street spent ~20,000 GPU hours training backdoors into 3 different language models, then challenged my audience to find the triggers. They received some clever solutions, like comparing the base and fine-tuned versions and extrapolating the differences to reveal the hidden backdoor, but no one was able to solve all 3. So if open problems like this excite you, Jane Street is hiring. Learn more at janestreet.com/dwarkesh
Timestamps
(00:00:00) – Is Nvidia’s biggest moat its grip on scarce supply chains?
(00:16:25) – Will TPUs break Nvidia’s hold on AI compute?
(00:41:06) – Why doesn’t Nvidia become a hyperscaler?
(00:57:36) – Should we be selling AI chips to China?
(01:35:06) – Why doesn’t Nvidia make multiple different chip architectures?
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe