Lex Fridman Podcast
Nathan Lambert and Sebastian Raschka are machine learning researchers, engineers, and educators. Nathan is the post-training lead at the Allen Institute for AI (Ai2) and the author of The RLHF Book. Sebastian Raschka is the author of Build a Large Language Model (From Scratch) and Build a Reasoning Model (From Scratch).
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep490-sc
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
Transcript:
https://lexfridman.com/ai-sota-2026-transcript
CONTACT LEX:
Feedback – give feedback to Lex: https://lexfridman.com/survey
AMA – submit questions, videos or call-in: https://lexfridman.com/ama
Hiring – join our team: https://lexfridman.com/hiring
Other – other ways to get in touch: https://lexfridman.com/contact
SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Box: Intelligent content management platform.
Go to https://box.com/ai
Quo: Phone system (calls, texts, contacts) for businesses.
Go to https://quo.com/lex
UPLIFT Desk: Standing desks and office ergonomics.
Go to https://upliftdesk.com/lex
Fin: AI agent for customer service.
Go to https://fin.ai/lex
Shopify: Sell stuff online.
Go to https://shopify.com/lex
CodeRabbit: AI-powered code reviews.
Go to https://coderabbit.ai/lex
LMNT: Zero-sugar electrolyte drink mix.
Go to https://drinkLMNT.com/lex
Perplexity: AI-powered answer engine.
Go to https://perplexity.ai/
OUTLINE:
(00:00) – Introduction
(01:39) – Sponsors, Comments, and Reflections
(16:29) – China vs US: Who wins the AI race?
(25:11) – ChatGPT vs Claude vs Gemini vs Grok: Who is winning?
(36:11) – Best AI for coding
(43:02) – Open Source vs Closed Source LLMs
(54:41) – Transformers: Evolution of LLMs since 2019
(1:02:38) – AI Scaling Laws: Are they dead or still holding?
(1:18:45) – How AI is trained: Pre-training, Mid-training, and Post-training
(1:51:51) – Post-training explained: Exciting new research directions in LLMs
(2:12:43) – Advice for beginners on how to get into AI development & research
(2:35:36) – Work culture in AI (72+ hour weeks)
(2:39:22) – Silicon Valley bubble
(2:43:19) – Text diffusion models and other new research directions
(2:49:01) – Tool use
(2:53:17) – Continual learning
(2:58:39) – Long context
(3:04:54) – Robotics
(3:14:04) – Timeline to AGI
(3:21:20) – Will AI replace programmers?
(3:39:51) – Is the dream of AGI dying?
(3:46:40) – How will AI make money?
(3:51:02) – Big acquisitions in 2026
(3:55:34) – Future of OpenAI, Anthropic, Google DeepMind, xAI, Meta
(4:08:08) – Manhattan Project for AI
(4:14:42) – Future of NVIDIA, GPUs, and AI compute clusters
(4:22:48) – Future of human civilization