Lex Fridman Podcast
Nathan Lambert and Sebastian Raschka are machine learning researchers, engineers, and educators. Nathan is the post-training lead at the Allen Institute for AI (Ai2) and the author of The RLHF Book. Sebastian Raschka is the author of Build a Large Language Model (From Scratch) and Build a Reasoning Model (From Scratch).
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep490-sc
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
Transcript:
https://lexfridman.com/ai-sota-2026-transcript
CONTACT LEX:
Feedback – give feedback to Lex: https://lexfridman.com/survey
AMA – submit questions, videos or call-in: https://lexfridman.com/ama
Hiring – join our team: https://lexfridman.com/hiring
Other – other ways to get in touch: https://lexfridman.com/contact
SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Box: Intelligent content management platform.
Go to https://box.com/ai
Quo: Phone system (calls, texts, contacts) for businesses.
Go to https://quo.com/lex
UPLIFT Desk: Standing desks and office ergonomics.
Go to https://upliftdesk.com/lex
Fin: AI agent for customer service.
Go to https://fin.ai/lex
Shopify: Sell stuff online.
Go to https://shopify.com/lex
CodeRabbit: AI-powered code reviews.
Go to https://coderabbit.ai/lex
LMNT: Zero-sugar electrolyte drink mix.
Go to https://drinkLMNT.com/lex
Perplexity: AI-powered answer engine.
Go to https://perplexity.ai/
OUTLINE:
(00:00) – Introduction
(01:39) – Sponsors, Comments, and Reflections
(16:29) – China vs US: Who wins the AI race?
(25:11) – ChatGPT vs Claude vs Gemini vs Grok: Who is winning?
(36:11) – Best AI for coding
(43:02) – Open Source vs Closed Source LLMs
(54:41) – Transformers: Evolution of LLMs since 2019
(1:02:38) – AI Scaling Laws: Are they dead or still holding?
(1:18:45) – How AI is trained: Pre-training, Mid-training, and Post-training
(1:51:51) – Post-training explained: Exciting new research directions in LLMs
(2:12:43) – Advice for beginners on how to get into AI development & research
(2:35:36) – Work culture in AI (72+ hour weeks)
(2:39:22) – Silicon Valley bubble
(2:43:19) – Text diffusion models and other new research directions
(2:49:01) – Tool use
(2:53:17) – Continual learning
(2:58:39) – Long context
(3:04:54) – Robotics
(3:14:04) – Timeline to AGI
(3:21:20) – Will AI replace programmers?
(3:39:51) – Is the dream of AGI dying?
(3:46:40) – How will AI make money?
(3:51:02) – Big acquisitions in 2026
(3:55:34) – Future of OpenAI, Anthropic, Google DeepMind, xAI, Meta
(4:08:08) – Manhattan Project for AI
(4:14:42) – Future of NVIDIA, GPUs, and AI compute clusters
(4:22:48) – Future of human civilization
