Author: Lex Fridman Podcast

  • #72 – Scott Aaronson: Quantum Computing

    Scott Aaronson is a professor at UT Austin, director of its Quantum Information Center, and previously a professor at MIT. His research interests center around the capabilities and limits of quantum computers and computational complexity theory more generally.

    This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”. 

    This episode is also supported by the Techmeme Ride Home podcast. Get it on Apple Podcasts, on its website, or find it by searching “Ride Home” in your podcast app.

    Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    00:00 – Introduction
    05:07 – Role of philosophy in science
    29:27 – What is a quantum computer?
    41:12 – Quantum decoherence (noise in quantum information)
    49:22 – Quantum computer engineering challenges
    51:00 – Moore’s Law
    56:33 – Quantum supremacy
    1:12:18 – Using quantum computers to break cryptography
    1:17:11 – Practical application of quantum computers
    1:22:18 – Quantum machine learning, questionable claims, and cautious optimism
    1:30:53 – Meaning of life

  • Vladimir Vapnik: Predicates, Invariants, and the Essence of Intelligence

    Vladimir Vapnik is the co-inventor of support vector machines, support vector clustering, VC theory, and many other foundational ideas in statistical learning. He was born in the Soviet Union and worked at the Institute of Control Sciences in Moscow, then moved to the US, where he worked at AT&T, NEC Labs, and Facebook AI Research. He is now a professor at Columbia University. His work has been cited over 200,000 times.

    00:00 – Introduction
    02:55 – Alan Turing: science and engineering of intelligence
    09:09 – What is a predicate?
    14:22 – Plato’s world of ideas and world of things
    21:06 – Strong and weak convergence
    28:37 – Deep learning and the essence of intelligence
    50:36 – Symbolic AI and logic-based systems
    54:31 – How hard is 2D image understanding?
    1:00:23 – Data
    1:06:39 – Language
    1:14:54 – Beautiful idea in statistical theory of learning
    1:19:28 – Intelligence and heuristics
    1:22:23 – Reasoning
    1:25:11 – Role of philosophy in learning theory
    1:31:40 – Music (speaking in Russian)
    1:35:08 – Mortality

  • Jim Keller: Moore’s Law, Microprocessors, Abstractions, and First Principles

    Jim Keller is a legendary microprocessor engineer, having worked at AMD, Apple, Tesla, and now Intel. He is known for his work on the AMD K7, K8, K12, and Zen microarchitectures and the Apple A4 and A5 processors, and he co-authored the specifications for the x86-64 instruction set and the HyperTransport interconnect.

    00:00 – Introduction
    02:12 – Difference between a computer and a human brain
    03:43 – Computer abstraction layers and parallelism
    17:53 – If you run a program multiple times, do you always get the same answer?
    20:43 – Building computers and teams of people
    22:41 – Start from scratch every 5 years
    30:05 – Moore’s law is not dead
    55:47 – Is superintelligence the next layer of abstraction?
    1:00:02 – Is the universe a computer?
    1:03:00 – Ray Kurzweil and exponential improvement in technology
    1:04:33 – Elon Musk and Tesla Autopilot
    1:20:51 – Lessons from working with Elon Musk
    1:28:33 – Existential threats from AI
    1:32:38 – Happiness and the meaning of life

  • David Chalmers: The Hard Problem of Consciousness

    David Chalmers is a philosopher and cognitive scientist specializing in philosophy of mind, philosophy of language, and consciousness. He is perhaps best known for formulating the hard problem of consciousness, which could be stated as “why does the feeling which accompanies awareness of sensory information exist at all?”

    00:00 – Introduction
    02:23 – Nature of reality: Are we living in a simulation?
    19:19 – Consciousness in virtual reality
    27:46 – Music-color synesthesia
    31:40 – What is consciousness?
    51:25 – Consciousness and the meaning of life
    57:33 – Philosophical zombies
    1:01:38 – Creating the illusion of consciousness
    1:07:03 – Conversation with a clone
    1:11:35 – Free will
    1:16:35 – Meta-problem of consciousness
    1:18:40 – Is reality an illusion?
    1:20:53 – Descartes’ evil demon
    1:23:20 – Does AGI need consciousness?
    1:33:47 – Exciting future
    1:35:32 – Immortality

  • Cristos Goodrow: YouTube Algorithm

    Cristos Goodrow is VP of Engineering at Google and head of Search and Discovery at YouTube (aka YouTube Algorithm).

    00:00 – Introduction
    03:26 – Life-long trajectory through YouTube
    07:30 – Discovering new ideas on YouTube
    13:33 – Managing healthy conversation
    23:02 – YouTube Algorithm
    38:00 – Analyzing the content of video itself
    44:38 – Clickbait thumbnails and titles
    47:50 – Feeling like I’m helping the YouTube algorithm get smarter
    50:14 – Personalization
    51:44 – What does success look like for the algorithm?
    54:32 – Effect of YouTube on society
    57:24 – Creators
    59:33 – Burnout
    1:03:27 – YouTube algorithm: heuristics, machine learning, human behavior
    1:08:36 – How to make a viral video?
    1:10:27 – Veritasium: Why Are 96,000,000 Black Balls on This Reservoir?
    1:13:20 – Making clips from long-form podcasts
    1:18:07 – Moment-by-moment signal of viewer interest
    1:20:04 – Why is video understanding such a difficult AI problem?
    1:21:54 – Self-supervised learning on video
    1:25:44 – What does YouTube look like 10, 20, 30 years from now?

  • Paul Krugman: Economics of Innovation, Automation, Safety Nets & Universal Basic Income

    Paul Krugman is a Nobel Prize winner in economics, professor at CUNY, and columnist at the New York Times. His academic work centers around international economics, economic geography, liquidity traps, and currency crises.

    00:00 – Introduction
    03:44 – Utopia from an economics perspective
    04:51 – Competition
    06:33 – Well-informed citizen
    07:52 – Disagreements in economics
    09:57 – Metrics of outcomes
    13:00 – Safety nets
    15:54 – Invisible hand of the market
    21:43 – Regulation of tech sector
    22:48 – Automation
    25:51 – Metric of productivity
    30:35 – Interaction of the economy and politics
    33:48 – Universal basic income
    36:40 – Divisiveness of political discourse
    42:53 – Economic theories
    52:25 – Starting a system on Mars from scratch
    55:11 – International trade
    59:08 – Writing in a time of radicalization and Twitter mobs

  • Ayanna Howard: Human-Robot Interaction and Ethics of Safety-Critical Systems

    Ayanna Howard is a roboticist and professor at Georgia Tech and director of the Human-Automation Systems Lab, with research interests in human-robot interaction, assistive robots in the home, therapy gaming apps, and remote robotic exploration of extreme environments.

    00:00 – Introduction
    02:09 – Favorite robot
    05:05 – Autonomous vehicles
    08:43 – Tesla Autopilot
    20:03 – Ethical responsibility of safety-critical algorithms
    28:11 – Bias in robotics
    38:20 – AI in politics and law
    40:35 – Solutions to bias in algorithms
    47:44 – HAL 9000
    49:57 – Memories from working at NASA
    51:53 – SpotMini and Bionic Woman
    54:27 – Future of robots in space
    57:11 – Human-robot interaction
    1:02:38 – Trust
    1:09:26 – AI in education
    1:15:06 – Andrew Yang, automation, and job loss
    1:17:17 – Love, AI, and the movie Her
    1:25:01 – Why do so many robotics companies fail?
    1:32:22 – Fear of robots
    1:34:17 – Existential threats of AI
    1:35:57 – Matrix
    1:37:37 – Hang out for a day with a robot

  • Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI

    Daniel Kahneman is a winner of the Nobel Prize in economics for his integration of economic science with the psychology of human behavior, judgment, and decision-making. He is the author of the popular book “Thinking, Fast and Slow,” which summarizes in an accessible way his research of several decades, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is a dichotomy between two modes of thought: “System 1” is fast, instinctive, and emotional; “System 2” is slower, more deliberative, and more logical. The book delineates cognitive biases associated with each type of thinking.

    00:00 – Introduction
    02:36 – Lessons about human behavior from WWII
    08:19 – System 1 and system 2: thinking fast and slow
    15:17 – Deep learning
    30:01 – How hard is autonomous driving?
    35:59 – Explainability in AI and humans
    40:08 – Experiencing self and the remembering self
    51:58 – Man’s Search for Meaning by Viktor Frankl
    54:46 – How much of human behavior can we study in the lab?
    57:57 – Collaboration
    1:01:09 – Replication crisis in psychology
    1:09:28 – Disagreements and controversies in psychology
    1:13:01 – Test for AGI
    1:16:17 – Meaning of life

  • Grant Sanderson: 3Blue1Brown and the Beauty of Mathematics

    Grant Sanderson is a math educator and creator of 3Blue1Brown, a popular YouTube channel that uses programmatically animated visualizations to explain concepts in linear algebra, calculus, and other fields of mathematics.

    00:00 – Introduction
    01:56 – What kind of math would aliens have?
    03:48 – Euler’s identity and the least favorite piece of notation
    10:31 – Is math discovered or invented?
    14:30 – Difference between physics and math
    17:24 – Why is reality compressible into simple equations?
    21:44 – Are we living in a simulation?
    26:27 – Infinity and abstractions
    35:48 – Most beautiful idea in mathematics
    41:32 – Favorite video to create
    45:04 – Video creation process
    50:04 – Euler identity
    51:47 – Mortality and meaning
    55:16 – How do you know when a video is done?
    56:18 – What is the best way to learn math for beginners?
    59:17 – Happy moment

  • Stephen Kotkin: Stalin, Putin, and the Nature of Power

    Stephen Kotkin is a professor of history at Princeton University and one of the great historians of our time, specializing in Russian and Soviet history. He has written many books on Stalin and the Soviet Union, including the first two volumes of a three-volume work on Stalin, and he is currently working on volume three.

    Episode Links:
    Stalin (book, vol 1): https://amzn.to/2FjdLF2
    Stalin (book, vol 2): https://amzn.to/2tqyjc3

    00:00 – Introduction
    03:10 – Do all human beings crave power?
    11:29 – Russian people and authoritarian power
    15:06 – Putin and the Russian people
    23:23 – Corruption in Russia
    31:30 – Russia’s future
    41:07 – Individuals and institutions
    44:42 – Stalin’s rise to power
    1:05:20 – What is the ideal political system?
    1:21:10 – Questions for Putin
    1:29:41 – Questions for Stalin
    1:33:25 – Will there always be evil in the world?