Author: Lex Fridman Podcast

  • #105 – Robert Langer: Edison of Medicine

Robert Langer is a professor at MIT and one of the most cited researchers in history, specializing in the biotechnology fields of drug delivery and tissue engineering. He has bridged theory and practice as a key member and driving force in launching many successful biotech companies out of MIT.

    Support this podcast by supporting these sponsors:
    – MasterClass: https://masterclass.com/lex
    – Cash App – use code “LexPodcast” and download:
    – Cash App (App Store): https://apple.co/2sPrUHe
    – Cash App (Google Play): https://bit.ly/2MlvP5w

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 – Introduction
    03:07 – Magic and science
    05:34 – Memorable rejection
    08:35 – How to come up with big ideas in science
    13:27 – How to make a new drug
    22:38 – Drug delivery
    28:22 – Tissue engineering
    35:22 – Beautiful idea in bioengineering
    38:16 – Patenting process
    42:21 – What does it take to build a successful startup?
    46:18 – Mentoring students
    50:54 – Funding
    58:08 – Cookies
    59:41 – What are you most proud of?

  • #104 – David Patterson: Computer Architecture and Data Storage

David Patterson is a Turing Award winner and professor of computer science at Berkeley. He is known for pioneering contributions to the RISC processor architecture used by 99% of new chips today and for co-creating RAID storage. The impact that these two lines of research and development have had on our world is immeasurable. He is also one of the world’s great computer science educators. His book with John Hennessy, “Computer Architecture: A Quantitative Approach”, is how I first learned about and was humbled by the inner workings of machines at the lowest level.

    Support this podcast by supporting these sponsors:
    – Jordan Harbinger Show: https://jordanharbinger.com/lex/
    – Cash App – use code “LexPodcast” and download:
    – Cash App (App Store): https://apple.co/2sPrUHe
    – Cash App (Google Play): https://bit.ly/2MlvP5w

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 – Introduction
    03:28 – How have computers changed?
    04:22 – What’s inside a computer?
    10:02 – Layers of abstraction
    13:05 – RISC vs CISC computer architectures
    28:18 – Designing a good instruction set is an art
    31:46 – Measures of performance
    36:02 – RISC instruction set
    39:39 – RISC-V open standard instruction set architecture
    51:12 – Why do ARM implementations vary?
    52:57 – Simple is beautiful in instruction set design
    58:09 – How machine learning changed computers
    1:08:18 – Machine learning benchmarks
    1:16:30 – Quantum computing
    1:19:41 – Moore’s law
    1:28:22 – RAID data storage
    1:36:53 – Teaching
    1:40:59 – Wrestling
    1:45:26 – Meaning of life

  • #103 – Ben Goertzel: Artificial General Intelligence

Ben Goertzel is one of the most interesting minds in the artificial intelligence community. He is the founder of SingularityNET, designer of the OpenCog AI framework, a former director of the Machine Intelligence Research Institute, and Chief Scientist of Hanson Robotics, the company that created the Sophia robot. He has been a central figure in the AGI community for many years, including through the Conference on Artificial General Intelligence.

    Support this podcast by supporting these sponsors:
    – Jordan Harbinger Show: https://jordanharbinger.com/lex/
    – MasterClass: https://masterclass.com/lex

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 – Introduction
    03:20 – Books that inspired you
    06:38 – Are there intelligent beings all around us?
    13:13 – Dostoevsky
    15:56 – Russian roots
    20:19 – When did you fall in love with AI?
    31:30 – Are humans good or evil?
42:04 – Colonizing Mars
    46:53 – Origin of the term AGI
    55:56 – AGI community
    1:12:36 – How to build AGI?
    1:36:47 – OpenCog
    2:25:32 – SingularityNET
    2:49:33 – Sophia
    3:16:02 – Coronavirus
    3:24:14 – Decentralized mechanisms of power
    3:40:16 – Life and death
    3:42:44 – Would you live forever?
    3:50:26 – Meaning of life
    3:58:03 – Hat
    3:58:46 – Question for AGI

  • #102 – Steven Pressfield: The War of Art

Steven Pressfield is a historian and author of The War of Art, a book that had a big impact on my life and on the lives of millions whose passion is to create in art, science, business, sport, and everywhere else. I highly recommend it and his other books on this topic, including Turning Pro, Do the Work, Nobody Wants to Read Your Shit, and The Warrior Ethos. His books Gates of Fire, about the Spartans and the Battle of Thermopylae, The Lion’s Gate, Tides of War, and others are also some of the best historical fiction novels ever written.

    Support this podcast by supporting these sponsors:
    – Jordan Harbinger Show: https://jordanharbinger.com/lex/
    – Cash App – use code “LexPodcast” and download:
    – Cash App (App Store): https://apple.co/2sPrUHe
    – Cash App (Google Play): https://bit.ly/2MlvP5w

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 – Introduction
    05:00 – Nature of war
    11:43 – The struggle within
    17:11 – Love and hate in a time of war
    25:17 – Future of warfare
    28:31 – Technology in war
    30:10 – What it takes to kill a person
    32:22 – Mortality
    37:30 – The muse
    46:09 – Editing
    52:19 – Resistance
    1:10:41 – Loneliness
    1:12:24 – Is a warrior born or trained?
    1:13:53 – Hard work and health
    1:18:41 – Daily ritual

  • #101 – Joscha Bach: Artificial Consciousness and the Nature of Reality

Joscha Bach is the VP of Research at the AI Foundation and previously did research at MIT and Harvard. Joscha’s work explores the workings of the human mind, intelligence, consciousness, life on Earth, and the possibly-simulated fabric of our universe.

    Support this podcast by supporting these sponsors:
– ExpressVPN: https://www.expressvpn.com/lexpod
    – Cash App – use code “LexPodcast” and download:
    – Cash App (App Store): https://apple.co/2sPrUHe
    – Cash App (Google Play): https://bit.ly/2MlvP5w

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 – Introduction
    03:14 – Reverse engineering Joscha Bach
    10:38 – Nature of truth
    18:47 – Original thinking
    23:14 – Sentience vs intelligence
    31:45 – Mind vs Reality
    46:51 – Hard problem of consciousness
    51:09 – Connection between the mind and the universe
56:29 – What is consciousness?
    1:02:32 – Language and concepts
    1:09:02 – Meta-learning
    1:16:35 – Spirit
    1:18:10 – Our civilization may not exist for long
    1:37:48 – Twitter and social media
    1:44:52 – What systems of government might work well?
    1:47:12 – The way out of self-destruction with AI
    1:55:18 – AI simulating humans to understand its own nature
    2:04:32 – Reinforcement learning
    2:09:12 – Commonsense reasoning
    2:15:47 – Would AGI need to have a body?
    2:22:34 – Neuralink
    2:27:01 – Reasoning at the scale of neurons and societies
    2:37:16 – Role of emotion
    2:48:03 – Happiness is a cookie that your brain bakes for itself

  • #99 – Karl Friston: Neuroscience and the Free Energy Principle

Karl Friston is one of the greatest neuroscientists in history, cited over 245,000 times, and known for many influential ideas in brain imaging, neuroscience, and theoretical neurobiology, including the fascinating free-energy principle for action and perception.

    Support this podcast by signing up with these sponsors:
    – Cash App – use code “LexPodcast” and download:
    – Cash App (App Store): https://apple.co/2sPrUHe
    – Cash App (Google Play): https://bit.ly/2MlvP5w

    EPISODE LINKS:
    Karl’s Website: https://www.fil.ion.ucl.ac.uk/~karl/
    Karl’s Wiki: https://en.wikipedia.org/wiki/Karl_J._Friston

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 – Introduction
    01:50 – How much of the human brain do we understand?
    05:53 – Most beautiful characteristic of the human brain
    10:43 – Brain imaging
    20:38 – Deep structure
    21:23 – History of brain imaging
    32:31 – Neuralink and brain-computer interfaces
    43:05 – Free energy principle
    1:24:29 – Meaning of life

  • #97 – Sertac Karaman: Robots That Fly and Robots That Drive

Sertac Karaman is a professor at MIT, co-founder of the autonomous vehicle company Optimus Ride, and one of the top roboticists in the world, working on robots that drive and robots that fly.

    Support this podcast by signing up with these sponsors:
    – Cash App – use code “LexPodcast” and download:
    – Cash App (App Store): https://apple.co/2sPrUHe
    – Cash App (Google Play): https://bit.ly/2MlvP5w

    EPISODE LINKS:
    Sertac’s Website: http://sertac.scripts.mit.edu/web/
    Sertac’s Twitter: https://twitter.com/sertackaraman
    Optimus Ride: https://www.optimusride.com/

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 – Introduction
    01:44 – Autonomous flying vs autonomous driving
    06:37 – Flying cars
    10:27 – Role of simulation in robotics
    17:35 – Game theory and robotics
    24:30 – Autonomous vehicle company strategies
    29:46 – Optimus Ride
    47:08 – Waymo, Tesla, Optimus Ride timelines
    53:22 – Achieving the impossible
    53:50 – Iterative learning
58:39 – Is Lidar a crutch?
    1:03:21 – Fast autonomous flight
    1:18:06 – Most beautiful idea in robotics

  • #96 – Stephen Schwarzman: Going Big in Business, Investing, and AI

Stephen Schwarzman is the CEO and co-founder of Blackstone, one of the world’s leading investment firms with over 530 billion dollars of assets under management. He is one of the most successful business leaders in history, all from humble beginnings back in Philly. I recommend his recent book, What It Takes, which shares stories and lessons from his personal journey.

    Support this podcast by signing up with these sponsors:
– ExpressVPN: https://www.expressvpn.com/lexpod
    – MasterClass: https://masterclass.com/lex

    EPISODE LINKS:
    What It Takes (book): https://amzn.to/2WX9cZu

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 – Introduction
    04:17 – Going big in business
    07:34 – How to recognize an opportunity
    16:00 – Solving problems that people have
    25:26 – Philanthropy
    32:51 – Hope for the new College of Computing at MIT
    37:32 – Unintended consequences of technological innovation
42:24 – Education systems in China and the United States
    50:22 – American AI Initiative
    59:53 – Starting a business is a rough ride
    1:04:26 – Love and family

  • #95 – Dawn Song: Adversarial Machine Learning and Computer Security

Dawn Song is a professor of computer science at UC Berkeley with research interests in security, most recently focusing on the intersection of computer security and machine learning.

    Support this podcast by signing up with these sponsors:
    – Cash App – use code “LexPodcast” and download:
    – Cash App (App Store): https://apple.co/2sPrUHe
    – Cash App (Google Play): https://bit.ly/2MlvP5w

    EPISODE LINKS:
    Dawn’s Twitter: https://twitter.com/dawnsongtweets
    Dawn’s Website: https://people.eecs.berkeley.edu/~dawnsong/
    Oasis Labs: https://www.oasislabs.com

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 – Introduction
    01:53 – Will software always have security vulnerabilities?
09:06 – Humans are the weakest link in security
    16:50 – Adversarial machine learning
    51:27 – Adversarial attacks on Tesla Autopilot and self-driving cars
    57:33 – Privacy attacks
    1:05:47 – Ownership of data
    1:22:13 – Blockchain and cryptocurrency
    1:32:13 – Program synthesis
    1:44:57 – A journey from physics to computer science
    1:56:03 – US and China
    1:58:19 – Transformative moment
    2:00:02 – Meaning of life

  • #94 – Ilya Sutskever: Deep Learning

Ilya Sutskever is the co-founder of OpenAI and one of the most cited computer scientists in history, with over 165,000 citations. To me, he is one of the most brilliant and insightful minds ever in the field of deep learning. There are very few people in this world who I would rather talk to and brainstorm with about deep learning, intelligence, and life than Ilya, on and off the mic.

    Support this podcast by signing up with these sponsors:
    – Cash App – use code “LexPodcast” and download:
    – Cash App (App Store): https://apple.co/2sPrUHe
    – Cash App (Google Play): https://bit.ly/2MlvP5w

    EPISODE LINKS:
    Ilya’s Twitter: https://twitter.com/ilyasut
    Ilya’s Website: https://www.cs.toronto.edu/~ilya/

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 – Introduction
    02:23 – AlexNet paper and the ImageNet moment
    08:33 – Cost functions
    13:39 – Recurrent neural networks
    16:19 – Key ideas that led to success of deep learning
    19:57 – What’s harder to solve: language or vision?
    29:35 – We’re massively underestimating deep learning
    36:04 – Deep double descent
    41:20 – Backpropagation
    42:42 – Can neural networks be made to reason?
    50:35 – Long-term memory
    56:37 – Language models
    1:00:35 – GPT-2
    1:07:14 – Active learning
    1:08:52 – Staged release of AI systems
    1:13:41 – How to build AGI?
    1:25:00 – Question to AGI
    1:32:07 – Meaning of life