Category: ai

  • #460 – Narendra Modi: Prime Minister of India – Power, Democracy, War & Peace

    Narendra Modi is the Prime Minister of India. On YouTube this episode is available in English, Hindi, Russian (and soon other languages). Captions and voice-over audio tracks are provided (for the main episode video on YouTube) in English, Hindi, Russian, and the original mixed-language version, with subtitles available in your preferred language. To listen to the original mixed-language version, please select the Hindi (Latin) audio track. The default is English overdub.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep460-sc
    See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

    Transcript:
    https://lexfridman.com/narendra-modi-transcript

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Narendra Modi’s X: https://x.com/narendramodi
    Narendra Modi’s Instagram: https://instagram.com/narendramodi
    Narendra Modi’s YouTube: https://youtube.com/narendramodi
    Narendra Modi’s Website: https://narendramodi.in/

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Brain.fm: Music for focus.
    Go to https://brain.fm/lex
    Shopify: Sell stuff online.
    Go to https://shopify.com/lex
    MasterClass: Online classes from world-class experts.
    Go to https://masterclass.com/lexpod
    NetSuite: Business management software.
    Go to http://netsuite.com/lex
    AG1: All-in-one daily nutrition drinks.
    Go to https://drinkag1.com/lex
    LMNT: Zero-sugar electrolyte drink mix.
    Go to https://drinkLMNT.com/lex

    OUTLINE:
    (00:00) – Introduction
    (17:24) – Fasting
    (29:42) – Early life
    (41:38) – Advice to Young People
    (47:20) – Journey in the Himalayas
    (58:50) – Becoming a monk
    (1:00:37) – RSS and Hindu nationalism
    (1:08:22) – Explaining India
    (1:12:32) – Mahatma Gandhi
    (1:24:27) – Path to peace in Ukraine
    (1:27:41) – India and Pakistan
    (1:33:21) – Cricket and Football
    (1:37:45) – Donald Trump
    (1:48:56) – China and Xi Jinping
    (1:56:01) – Gujarat riots in 2002
    (2:11:37) – Biggest democracy in the world
    (2:21:53) – Power
    (2:26:39) – Hard work
    (2:29:46) – Srinivasa Ramanujan
    (2:31:53) – Decision-making process
    (2:39:40) – AI
    (2:49:55) – Education
    (3:00:10) – Learning and focus
    (3:06:01) – Mantra
    (3:07:45) – Meditation
    (3:13:43) – Lex visiting India
    (3:18:08) – Siddhartha

  • #459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters

    AI transcript
    0:00:04 The following is a conversation with Dylan Patel and Nathan Lambert.
    0:00:11 Dylan runs SemiAnalysis, a well-respected research and analysis company that specializes
    0:00:16 in semiconductors, GPUs, CPUs, and AI hardware in general.
    0:00:23 Nathan is a research scientist at the Allen Institute for AI and is the author of the
    0:00:27 amazing blog on AI called Interconnects.
    0:00:32 They are both highly respected, read, and listened to by the experts, researchers, and
    0:00:35 engineers in the field of AI.
    0:00:38 And personally, I’m just a fan of the two of them.
    0:00:45 So I use the DeepSeek moment that shook the AI world a bit as an opportunity to sit down
    0:00:48 with them and lay it all out.
    0:00:56 From DeepSeek, OpenAI, Google, xAI, Meta, Anthropic, to NVIDIA and TSMC, and to U.S.-China-Taiwan
    0:01:01 relations, and everything else that is happening at the cutting edge of AI.
    0:01:08 This conversation is a deep dive into many critical aspects of the AI industry.
    0:01:13 While it does get super technical, we try to make sure that it’s still accessible to
    0:01:19 folks outside of the AI field by defining terms, stating important concepts explicitly,
    0:01:24 spelling out acronyms, and, in general, always moving across the several layers of abstraction
    0:01:26 and levels of detail.
    0:01:32 There is a lot of hype in the media about what AI is and isn’t.
    0:01:38 The purpose of this podcast, in part, is to cut through the hype, through the bullshit,
    0:01:45 and the low-resolution analysis, and to discuss in detail how stuff works and what the implications
    0:01:46 are.
    0:01:52 Let me also, if I may, comment on the new OpenAI o3-mini reasoning model, the release
    0:01:58 of which we were anticipating during the conversation, and it did indeed come out right after.
    0:02:05 Its capabilities and costs are on par with our expectations as we stated.
    0:02:11 OpenAI o3-mini is indeed a great model, but it should be stated that DeepSeek R1 has similar
    0:02:17 performance on benchmarks, is still cheaper, and it reveals its chain of thought reasoning,
    0:02:19 which o3-mini does not.
    0:02:23 It only shows a summary of the reasoning.
    0:02:29 Plus, R1 is open-weight, and o3-mini is not.
    0:02:35 By the way, I got a chance to play with o3-mini, and anecdotally, vibe-check-wise, I felt
    0:02:41 that o3-mini, specifically o3-mini-high, is better than R1.
    0:02:47 Still, for me personally, I find that Claude Sonnet 3.5 is the best model for programming, except
    0:02:51 for tricky cases where I will use o1 Pro to brainstorm.
    0:02:57 Either way, many more and better AI models will come, including reasoning models, both from
    0:03:00 American and Chinese companies.
    0:03:03 They will continue to shift the cost curve.
    0:03:07 But the “DeepSeek” moment is indeed real.
    0:03:13 I think it will still be remembered five years from now as a pivotal event in tech history,
    0:03:19 due in part to the geopolitical implications, but for other reasons too, as we discuss in
    0:03:23 detail from many perspectives in this conversation.
    0:03:26 And now, a quick few-second mention of each sponsor.
    0:03:29 Check them out in the description, it’s the best way to support this podcast.
    0:03:37 We got Invideo AI for video generation, GitHub for coding, Shopify for selling stuff online,
    0:03:42 NetSuite for running your business, and AG1 for staying healthy.
    0:03:44 Choose wisely, my friends.
    0:03:50 Also if you want to get in touch with me for whatever reason, go to lexfridman.com/contact.
    0:03:54 And now, on to the full ad reads, no ads in the middle, I try to make these interesting,
    0:03:59 but if you skip them, please still check out our sponsors, I enjoy their stuff.
    0:04:01 Maybe you will too.
    0:04:05 This video is brought to you by a new sponsor, but I’ve known these folks for a long time
    0:04:07 and perfect fit for this podcast.
    0:04:14 They're called Invideo AI, it's a video generating app that allows you to create full-length videos
    0:04:21 using just text prompts, it's intuitive, works amazingly, it's truly incredible what
    0:04:22 you can do.
    0:04:28 I’ve been playing quite a bit and using it for stock footage, and by the way they make
    0:04:35 it super easy for you to switch between actually available stock footage and AI generated footage.
    0:04:41 I’ve been preparing a lot for a conversation with Tim Sweeney who is the creator of Unreal
    0:04:47 Engine, and there’s 3D worlds and you get to think about the role of AI in generating
    0:04:49 those 3D worlds.
    0:04:52 That’s what’s coming, 5, 10, 20 years from now.
    0:04:57 In video games and simulations, a fundamental part of our lives would be generated with
    0:04:58 AI.
    0:05:04 And I think Invideo AI does a masterful job of pushing us in that direction in the 2D
    0:05:05 plane of video.
    0:05:11 Now, I think this is not a tool that replaces human creativity.
    0:05:14 I think it supercharges human creativity.
    0:05:22 I think now and for a long, long time to come, humans will be in the loop of creating great
    0:05:28 art because we’re creating for each other and only humans truly deeply know what makes
    0:05:35 other humans go ah, like the old Kerouac line.
    0:05:43 If you want to try out Invideo AI, you can do so for free at invideo.io/lexpod, saving
    0:05:47 time and money on production costs.
    0:05:53 This episode is brought to you by the thing that’s brought me joy for many, many years
    0:06:00 and created a community for hundreds of thousands, millions, I don’t know how many developers
    0:06:03 and that place is called GitHub.
    0:06:11 It is a company that really has supercharged the developer community.
    0:06:14 I mean, where would the world be without GitHub?
    0:06:21 And they’re also, as a company, pushing the limits of what’s possible in terms of AI
    0:06:24 code generation, AI assisted coding.
    0:06:27 They were pioneers on Copilot.
    0:06:29 They are still pioneers in Copilot.
    0:06:33 It's a super competitive space and they are doing their best to win.
    0:06:37 I will forever be a supporter of GitHub Copilot.
    0:06:41 Now it integrates in a bunch of IDEs, not just into VS Code.
    0:06:45 I am, of course, a VS Code guy at this time.
    0:06:48 I did use JetBrains for a long time.
    0:06:50 I still dabble a little bit.
    0:06:55 For people who don't know, JetBrains has a plethora, I don't like using that word, it
    0:06:59 seems elitist, but there's got to be a better word.
    0:07:04 There are a lot of different sort of sub-IDEs inside JetBrains.
    0:07:07 I've even used DataGrip, which manages MySQL databases.
    0:07:15 I should mention, and this might be embarrassing, but I have not, ooh, this might be interesting,
    0:07:25 but I have not used anything like Copilot on any database management GUIs.
    0:07:29 I wonder if DataGrip integrates Copilot.
    0:07:31 I’m going to have to check that out.
    0:07:38 But everything I use, I’m writing SQL queries from scratch inside the database management
    0:07:39 GUI.
    0:07:45 If I want to do complicated queries, I’ll go to any of the LLMs.
    0:07:51 That's going to be Claude Sonnet 3.5, or if it's part of the code, then I'm going to be inside
    0:07:52 my IDE.
    0:07:57 I just like having a GUI management of a database.
    0:07:58 I’m going to have to check that out with it.
    0:08:01 If DataGrip integrates Copilot, that's going to be incredible.
    0:08:05 If not, I’m going to yell from the top of my lungs, hoping it will eventually because
    0:08:11 it’ll make my life a bit easier to have the visual component of a database together with
    0:08:16 a code component of SQL queries, yeah, it will be amazing.
    0:08:22 Anyway, go check out GitHub Copilot at gh.io/copilot.
    0:08:27 This episode is brought to you by Shopify, not Spotify, Shopify.
    0:08:30 Easily confused, the CEOs are tagged on X often.
    0:08:33 They’re both great CEOs, but this is Shopify.
    0:08:40 You can sell anywhere with a great looking online store using Shopify.
    0:08:45 I’ve been learning a lot about the Silk Road actually, not the digital one.
    0:08:54 The one that for a lot of human history served as a place for merchants to travel and trade
    0:08:55 goods.
    0:09:02 I'm reading a lot about Genghis Khan, who enforced the rule of law on the Silk Road, and that
    0:09:09 actually had a big invigorating effect on the economy of the Eurasian region.
    0:09:16 Anyway, that was before computers, if they had computers, imagine if they had computers.
    0:09:22 Boy, would the Genghis Khan force be terrifying.
    0:09:31 Or maybe not, maybe each technological age has their own kind of military tactician,
    0:09:37 their own human that matches perfectly for that time in order to conquer the land and
    0:09:38 people.
    0:09:42 Still, what a terrifying time that was.
    0:09:49 Much of human history, lots of beauty, but lots of ways to die.
    0:09:56 So, I’m glad to be living in the 21st century where I can sit back with a margarita.
    0:10:01 I don’t drink margaritas, but if I wanted to, I could and then buy stuff on stores created
    0:10:02 by Shopify.
    0:10:10 Anyway, you can sign up for a $1 per month trial period at Shopify.com/Lex, go to Shopify.com/Lex
    0:10:13 to take your business to the next level today.
    0:10:19 This episode was also brought to you by Netsuite, an all-in-one business management system.
    0:10:22 Not sure why I said that so slowly, but I did.
    0:10:29 I actually did a little intermission for five, six minutes for this episode where I added
    0:10:35 in the middle of it an addendum after having tried OpenAI o3-mini.
    0:10:42 That was such a weird feeling to sort of insert myself in the middle of an episode.
    0:10:44 I felt like a third wheel to myself.
    0:10:47 It’s like, “Hey, hey everyone, what are you doing?
    0:10:50 Why did you guys not invite me to this party?”
    0:10:52 That’s what I felt like.
    0:10:55 Hey Lex from the past, it's me, Lex from the future.
    0:10:59 Right, I should be talking about Netsuite, which is an all-in-one cloud business management
    0:11:00 system.
    0:11:11 It’s the machine inside the machine and boy, are we increasingly building stacks of machines.
    0:11:18 Layers and layers and layers of abstraction until we’re just sitting back on a beach somewhere
    0:11:22 talking to an AI system that’s taking care of everything else.
    0:11:28 Anyway, you can download the CFO’s guide to AI and Machine Learning at Netsuite.com/Lex.
    0:11:37 This episode is also brought to you by AG1, an all-in-one daily drink to support better
    0:11:38 health and performance.
    0:11:39 I drank it today.
    0:11:40 I enjoyed it today.
    0:11:42 I’ve been sleeping very, very little.
    0:11:47 The amount of work I have to do is insane.
    0:11:55 Last night at 6 a.m., I went to bed at 7 a.m., 8 a.m., thinking about doing an all-nighter.
    0:11:56 It’s madness.
    0:12:03 But anyway, at 6 a.m., I drank an AG1 and I was sitting on a couch and I was watching
    0:12:07 like 10 minutes of American Primeval.
    0:12:13 I watched like 5, 10 minutes of a show at a time and I was sipping on the AG1 and I was
    0:12:20 thinking how lucky, how fucking lucky I am to be alive.
    0:12:25 First of all because I’m watching the American Frontier and people being just brutal to each
    0:12:31 other, the brutal reality of nature and war during that time and the lawlessness during
    0:12:32 that time.
    0:12:42 But also just how lucky I am to be on this spinning rock, enjoying this green healthy drink.
    0:12:48 Being able to watch a show, being able to work hard towards the thing I love, being able
    0:12:51 to love, being able to breathe, all of it.
    0:12:52 Just amazing.
    0:13:01 Anyway, they’ll give you one month supply of fish oil when you sign up at drinkag1.com/lex.
    0:13:03 This is the Lex Fridman Podcast.
    0:13:06 To support it, please check out our sponsors in the description.
    0:13:28 And now, dear friends, here’s Dylan Patel and Nathan Lambert.
    0:13:32 A lot of people are curious to understand China's DeepSeek AI models, so let's lay
    0:13:33 it out.
    0:13:40 Can you describe what DeepSeek V3 and DeepSeek R1 are, how they work, how they're trained?
    0:13:43 Let’s look at the big picture and then we’ll zoom in on the details.
    0:13:51 Yeah, so DeepSeek V3 is a new mixture-of-experts transformer language model from DeepSeek,
    0:13:53 which is based in China.
    0:13:58 They have some new specifics in the model that we’ll get into.
    0:14:03 Largely, this is an open-weight model and it’s an instruction model like what you would
    0:14:05 use in ChatGPT.
    0:14:09 They also released what is called the base model, which is before these techniques of
    0:14:11 post-training.
    0:14:16 Most people use instruction models today and those are what’s served in all sorts of applications.
    0:14:21 This was released, I believe, December 26th or that week.
    0:14:28 And then weeks later, on January 20th, DeepSeek released DeepSeek R1, which is a reasoning
    0:14:33 model which really accelerated a lot of this discussion.
    0:14:38 This reasoning model has a lot of overlapping training steps with DeepSeek V3, and it's confusing
    0:14:44 that you have a base model called V3 that you do something to in order to get a chat model, and
    0:14:47 then you do some different things to get a reasoning model.
    0:14:51 I think a lot of the AI industry is going through this challenge of communications right now
    0:14:54 where OpenAI makes fun of their own naming schemes.
    0:15:00 They have GPT-4o, they have OpenAI o1, and there's a lot of types of models, so we're
    0:15:02 going to break down what each of them are.
    0:15:07 There's a lot of technical specifics on training; we'll go from high-level to specific and kind
    0:15:09 of go through each of them.
    0:15:13 There’s so many places we can go here, but maybe let’s go to open weights first.
    0:15:17 What does it mean for a model to be open weights and what are the different flavors of open
    0:15:18 source in general?
    0:15:22 Yeah, so this discussion has been going on for a long time in AI; it became more important,
    0:15:27 or more focal, since ChatGPT at the end of 2022.
    0:15:33 Open weights is the accepted term for when model weights of a language model are available
    0:15:35 on the internet for people to download.
    0:15:39 Those weights can have different licenses, which is effectively the terms by which you
    0:15:41 can use the model.
    0:15:44 There are licenses that come from history and open source software.
    0:15:48 There are licenses that are designed by companies specifically.
    0:15:56 All of Llama, DeepSeek, Qwen, Mistral, these popular names in open weight models have some
    0:15:57 of their own licenses.
    0:16:01 It’s complicated because not all the same models have the same terms.
    0:16:06 The big debate is on what makes a model open weight.
    0:16:07 Why are we saying this term?
    0:16:08 It’s kind of a mouthful.
    0:16:12 It sounds close to open source, but it’s not the same.
    0:16:16 There’s still a lot of debate on the definition and soul of open source AI.
    0:16:21 Open source software has a rich history on freedom to modify, freedom to take on your
    0:16:26 own, freedom from many restrictions on how you would use the software, and what that means
    0:16:31 for AI is still being defined.
    0:16:33 For what I do, I work at the Allen Institute for AI.
    0:16:34 We’re a nonprofit.
    0:16:39 We want to make AI open for everybody and we try to lead on what we think is truly open
    0:16:40 source.
    0:16:43 There’s not full agreement in the community, but for us that means releasing the training
    0:16:49 data, releasing the training code, and then also having open weights like this.
    0:16:52 We’ll get into the details of the models.
    0:16:57 Again and again, as we try to get deeper into how the models were trained, we will say things
    0:17:02 like the data processing, data filtering, data quality is the number one determinant
    0:17:07 of the model quality and then a lot of the training code is the determinant on how long
    0:17:10 it takes to train and how fast your experimentation is.
    0:17:18 Without fully open source models where you have access to this data, it’s harder to replicate.
    0:17:24 We'll get into cost numbers for DeepSeek V3 on mostly GPU hours and how much you could
    0:17:28 pay to rent those yourselves, but without the data, the replication cost is going to
    0:17:31 be far, far higher.
    0:17:32 Same goes for the code.
    0:17:37 We should also say that this is probably one of the more open models out of the frontier
    0:17:39 models.
    0:17:44 On this full spectrum, probably the fullest open source is, like you said, open code, open
    0:17:50 data, open weights; this is not open code.
    0:17:56 This is probably not open data, and this is open weights.
    0:18:03 The licensing is MIT license, or I mean there’s some nuance in the different models, but it’s
    0:18:08 towards the free, in terms of the open source movement, these are the good guys.
    0:18:13 DeepSeek is doing fantastic work for disseminating understanding of AI.
    0:18:19 Their papers are extremely detailed in what they do and for other teams around the world,
    0:18:25 they’re very actionable in terms of improving your own training techniques.
    0:18:27 We’ll talk about licenses more.
    0:18:32 The DeepSeek R1 model has a very permissive license, it's called the MIT license.
    0:18:36 That effectively means there’s no downstream restrictions on commercial use.
    0:18:38 There’s no use case restrictions.
    0:18:43 You can use the outputs from the models to create synthetic data.
    0:18:44 This is all fantastic.
    0:18:48 I think the closest peer is something like Llama, where you have the weights and you have
    0:18:50 a technical report.
    0:18:54 The technical report is very good for Llama; one of the most read PDFs of the year
    0:18:58 last year is the Llama 3 paper, but in some ways it's slightly less actionable.
    0:19:03 It has less details on the training specifics, less plots and so on.
    0:19:09 The Llama 3 license is more restrictive than MIT, and then between the DeepSeek custom license
    0:19:11 and the Llama license, we can get into this whole rabbit hole.
    0:19:16 I think we’ll make sure we want to go down the license rabbit hole before we do specifics.
    0:19:17 Yeah.
    0:19:22 It should be stated that one of the implications of DeepSeek is that it puts pressure on Llama and everybody
    0:19:26 else, on OpenAI, to push towards open source.
    0:19:30 That’s the other side of open source that you mentioned is how much is published in
    0:19:32 detail about it.
    0:19:38 How open are you with the insights behind the code?
    0:19:39 How good are the technical reports?
    0:19:43 Are they hand-wavy, or is there actual detail in there?
    0:19:46 That's one of the things that DeepSeek did well; they published a lot of the details.
    0:19:47 Yeah.
    0:19:51 Especially in the DeepSeek V3, which is their pre-training paper, they were very clear that
    0:19:58 they are doing interventions on the technical stack that go at many different levels.
    0:20:03 For example, to get highly efficient training, they’re making modifications at or below
    0:20:06 the CUDA layer for NVIDIA chips.
    0:20:10 I have never worked there myself and there are a few people in the world that do that
    0:20:12 very well, and some of them are at DeepSeek.
    0:20:18 These types of people are at DeepSeek and leading American frontier labs, but there are not many
    0:20:19 places.
    0:20:25 To help people understand the other implication of open weights, there’s a topic we’ll return
    0:20:26 to often here.
    0:20:38 There’s a fear that China, the nation, might have interest in stealing American data, violating
    0:20:40 privacy of American citizens.
    0:20:45 What can we say about open weights to help us understand what the weights are able to
    0:20:49 do in terms of stealing people’s data?
    0:20:54 These weights that you can download from Huggingface or other platforms are very big matrices of
    0:20:55 numbers.
    0:20:59 You can download them to a computer in your own house that has no internet and you can
    0:21:03 run this model and you’re totally in control of your data.
    0:21:07 That is something that is different than how a lot of language model usage is actually
    0:21:12 done today, which is mostly through APIs, where you send your prompt to GPUs run by
    0:21:14 certain companies.
    0:21:17 These companies will have different distributions and policies on how your data is stored, if
    0:21:23 it is used to train future models, where it is stored, if it is encrypted, and so on.
    0:21:27 With open weights, you have the fate of your data in your own hands, and that is something
    0:21:31 that is deeply connected to the soul of open source.
    0:21:35 It’s not the model that steals your data, it’s whoever’s hosting the model, which could
    0:21:42 be China, if you're using the DeepSeek app, or it could be Perplexity.
    0:21:46 You’re trusting them with your data, or OpenAI, you’re trusting them with your data.
    0:21:48 Some of these are American companies, some of these are Chinese companies, but the model
    0:21:51 itself is not doing the stealing.
    0:21:52 That’s the host.
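    To make "open weights" concrete, here is a minimal sketch of downloading a checkpoint and running it entirely on a local machine with the Hugging Face transformers library; the model name below is a hypothetical placeholder, and the snippet assumes the weights have already been fetched, so no prompt ever leaves the machine at inference time.

        # Minimal sketch: run an open-weight model locally (assumes `transformers` and `torch` are installed).
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_name = "some-org/some-open-weight-model"  # hypothetical placeholder checkpoint

        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(model_name)  # the weights are just large matrices on disk

        # The prompt never leaves this machine; there is no API call involved.
        inputs = tokenizer("Explain the history of the Roman Empire to me.", return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=100)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))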
    0:21:56 All right, so back to the basics.
    0:22:01 What's the difference between DeepSeek V3 and DeepSeek R1?
    0:22:05 Can we try to lay out the confusion potential?
    0:22:10 Yes, so for one, I am very understanding of many people being confused by these two
    0:22:11 model names.
    0:22:15 So I would say the best way to think about this is that when training a language model,
    0:22:19 you have what is called pre-training, which is when, on large amounts
    0:22:24 of mostly internet text, you’re trying to predict the next token, and what to know about
    0:22:30 these new DeepSeek models is that they do this internet large-scale pre-training once
    0:22:33 to get what is called DeepSeek v3 base.
    0:22:34 This is the base model.
    0:22:37 It’s just going to finish your sentences for you.
    0:22:42 It’s going to be harder to work with than ChatGPT, and then what DeepSeek did is they’ve
    0:22:49 done two different post-training regimes to make the models have specific desirable behaviors.
    0:22:55 So what is the more normal model in terms of the last few years of AI, an instruct model,
    0:22:58 a chat model, an "aligned" model, a helpful model.
    0:23:02 There are many ways to describe this; this is more standard post-training.
    0:23:06 So this is things like instruction tuning, reinforcement learning from human feedback.
    0:23:08 We’ll get into some of these words.
    0:23:12 And this is what they did to create the DeepSeek v3 model.
    0:23:18 This was the first model to be released, and it is very high-performance, it’s competitive
    0:23:22 with GPT-4, Llama 405B, and so on.
    0:23:26 And then when this release was happening, we don’t know their exact timeline, or soon
    0:23:32 after they were finishing the training of a different training process from the same
    0:23:37 next token prediction base model that I talked about, which is when this new reasoning training
    0:23:41 that people have heard about comes in in order to create the model that is called DeepSeek
    0:23:42 R1.
    0:23:46 The R, throughout this conversation, is good grounding for reasoning, and the name is
    0:23:51 also similar to OpenAI's o1, which is the other reasoning model that people have heard
    0:23:52 about.
    0:23:56 And we’ll have to break down the training for R1 in more detail, because for one, we
    0:24:02 have a paper detailing it, but also it is a far newer set of techniques for the AI community,
    0:24:06 so it’s a much more rapidly evolving area of research.
    0:24:13 Maybe we should also say the big two categories of training of pre-training and post-training,
    0:24:14 these umbrella terms that people use.
    0:24:20 So what is pre-training and what is post-training, and what are the different flavors of things
    0:24:22 underneath post-training umbrella?
    0:24:26 Yeah, so for pre-training, I'm using some of the same words that really get the message across:
    0:24:30 you're doing what is called autoregressive prediction to predict the next token in a
    0:24:32 series of documents.
    0:24:39 This is done over, as standard practice, trillions of tokens, so this is a ton of data that is
    0:24:41 mostly scraped from the web.
    0:24:46 In some of DeepSeek's earlier papers, they talk about their training data being distilled
    0:24:47 for math.
    0:24:52 I shouldn't use this word yet, but taken from Common Crawl, and that's publicly accessible, so
    0:24:56 that anyone listening to this could go download data from the Common Crawl website.
    0:24:58 This is a crawler that is maintained publicly.
    0:25:03 Yes, other tech companies eventually shift to their own crawler, and DeepSeek likely has
    0:25:05 done this as well, as most frontier labs do.
    0:25:10 But this sort of data is something that people can get started with, and you’re just predicting
    0:25:12 text in a series of documents.
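    To make "predicting the next token" concrete, here is a minimal sketch of the pre-training objective, assuming PyTorch and any autoregressive model that returns logits over a vocabulary: shift the text by one position and apply cross-entropy.

        import torch.nn.functional as F

        def next_token_loss(logits, token_ids):
            # logits: (batch, seq_len, vocab_size) produced by the model
            # token_ids: (batch, seq_len) the tokenized text from the web documents
            targets = token_ids[:, 1:]          # what the model should say next
            predictions = logits[:, :-1, :]     # model outputs aligned with those targets
            return F.cross_entropy(
                predictions.reshape(-1, predictions.size(-1)),
                targets.reshape(-1),
            )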
    0:25:19 This can be scaled to be very efficient, and there’s a lot of numbers that are thrown
    0:25:24 around in AI training, like how many floating-point operations or flops are used, and you can
    0:25:30 also look at how many hours of these GPUs that are used.
    0:25:37 It’s largely one-loss function taken to a very large amount of compute usage.
    0:25:42 You set up really efficient systems, and then at the end of that you have the base model,
    0:25:48 and post-training is where there is a lot more complexity in terms of how the process
    0:25:55 is emerging or evolving, and the different types of training losses that you will use.
    0:26:00 This is a lot of techniques grounded in the natural language processing literature.
    0:26:04 The oldest technique, which is still used today, is something called instruction tuning,
    0:26:07 or also known as supervised fine-tuning.
    0:26:12 These acronyms will be IFT or SFT, that people really go back and forth throughout them,
    0:26:17 and I will probably do the same, which is where you add this formatting to the model,
    0:26:23 where it knows to take a question that is like, “Explain the history of the Roman Empire
    0:26:28 to me," or the sort of question you'll see on Reddit or Stack Overflow, and then the model
    0:26:33 will respond in an information-dense but presentable manner.
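    As a rough illustration of the formatting being described, a single supervised fine-tuning example might be rendered into a chat template like the sketch below; the special tokens here are illustrative placeholders, since each model family defines its own.

        # Hypothetical chat template for supervised fine-tuning (SFT / IFT); tags are illustrative only.
        example = {
            "prompt": "Explain the history of the Roman Empire to me.",
            "response": "The Roman Empire began in 27 BC when Augustus...",
        }

        formatted = (
            "<|user|>\n" + example["prompt"] + "\n"
            "<|assistant|>\n" + example["response"] + "<|end_of_text|>"
        )

        # During SFT the loss is typically applied only to the response tokens,
        # so the model learns to answer in this format rather than to echo questions.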
    0:26:38 The core of that formatting is in this instruction-tuning phase, and then there’s two other categories
    0:26:41 of loss functions that are being used today.
    0:26:44 One I will classify as preference fine-tuning.
    0:26:48 Preference fine-tuning is a generalized term for what came out of reinforcement learning
    0:26:52 from human feedback, which is RLHF.
    0:26:58 This reinforcement learning from human feedback is credited as the technique that helped
    0:27:00 ChatGPT break through.
    0:27:05 It is a technique to make the responses that are nicely formatted, like these Reddit answers,
    0:27:08 more in tune with what a human would like to read.
    0:27:13 This is done by collecting pairwise preferences from actual humans out in the world to start,
    0:27:18 and now AIs are also labeling this data, and we’ll get into those trade-offs.
    0:27:23 You have this kind of contrastive loss function between a good answer and a bad answer.
    0:27:25 The model learns to pick up these trends.
    0:27:27 There’s different implementation ways.
    0:27:29 You have things called reward models.
    0:27:31 You could have direct alignment algorithms.
    0:27:35 There’s a lot of really specific things you can do, but all of this is about fine-tuning
    0:27:37 to human preferences.
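    A minimal sketch of the "contrastive loss between a good answer and a bad answer" idea, written as a Bradley-Terry style reward-model loss in PyTorch; this is one common implementation, and direct alignment algorithms such as DPO reformulate the same pairwise signal differently.

        import torch.nn.functional as F

        def pairwise_preference_loss(reward_chosen, reward_rejected):
            # reward_chosen / reward_rejected: (batch,) scalar scores from a reward model
            # for the human-preferred and human-rejected responses to the same prompt.
            # Maximize the margin between them (Bradley-Terry / logistic loss).
            return -F.logsigmoid(reward_chosen - reward_rejected).mean()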
    0:27:43 The final stage is much newer and will link to what is done in R1 and these reasoning
    0:27:46 models; reinforcement fine-tuning is, I think, OpenAI's name for this.
    0:27:51 They had this new API in the fall, which they called the Reinforcement Fine-Tuning API.
    0:27:55 This is the idea that you use the techniques of reinforcement learning, which is a whole
    0:27:56 framework of AI.
    0:27:58 There’s a deep literature here.
    0:28:04 To summarize, it’s often known as trial and error learning, or the subfield of AI where
    0:28:10 you’re trying to make sequential decisions in a certain potentially noisy environment.
    0:28:14 There's a lot of ways we can go down that, but here it means fine-tuning language models where they
    0:28:19 can generate an answer, and then you check to see if the answer matches the true solution.
    0:28:24 For math, you have an exactly correct answer.
    0:28:26 You can have unit tests for code.
    0:28:29 What we're doing is we are checking the language model's work, and we're giving it multiple
    0:28:32 opportunities on the same questions to see if it is right.
    0:28:38 If you keep doing this, the models can learn to improve in verifiable domains to a great extent.
    0:28:39 It works really well.
    0:28:42 It’s a newer technique in the academic literature.
    0:28:48 It's been used at frontier labs in the US, which don't share every detail, for multiple years.
    0:28:52 This is the idea of using reinforcement learning with language models, and it has been taking
    0:28:54 off, especially in this DeepSeek moment.
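    The "check the language model's work" step can be as simple as the sketch below: a hedged example of a verifiable reward for math (exact answer match) and for code (run unit tests). The function names are illustrative, not any lab's actual implementation.

        def math_reward(model_answer: str, ground_truth: str) -> float:
            # Reward 1.0 only if the final answer string matches exactly.
            return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0

        def code_reward(candidate_source: str, unit_tests: list[str]) -> float:
            # Reward is the fraction of unit tests the generated program passes.
            # (In practice this runs in a sandbox; exec() here is purely illustrative.)
            passed = 0
            for test in unit_tests:
                try:
                    exec(candidate_source + "\n" + test, {})
                    passed += 1
                except Exception:
                    pass
            return passed / max(len(unit_tests), 1)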
    0:29:00 We should say that there’s a lot of exciting stuff going on, again, across the stack, but
    0:29:04 in post-training there are probably going to be a lot of interesting developments
    0:29:05 this year.
    0:29:06 We’ll talk about it.
    0:29:12 I almost forgot to talk about the difference between DeepSeek V3 and R1 on the user experience
    0:29:13 side.
    0:29:16 Forget the technical stuff, forget all of that.
    0:29:19 People that don’t know anything about AI, they show up.
    0:29:20 What’s the actual experience?
    0:29:24 What’s the use case for each one when they actually type and talk to it?
    0:29:26 What is each good at, that kind of thing?
    0:29:28 Let's start with DeepSeek V3 again.
    0:29:30 It's what more people would have tried, or something like it.
    0:29:35 You ask it a question, it’ll start generating tokens very fast, and those tokens will look
    0:29:38 like a very human legible answer.
    0:29:41 It’ll be some sort of markdown list.
    0:29:46 It might have formatting to help you draw to the core details in the answer, and it’ll
    0:29:49 generate tens to hundreds of tokens.
    0:29:57 A token is normally a word for common words or a sub-word part in a longer word.
    0:30:01 It’ll look like a very high-quality Reddit or Stack Overflow answer.
    0:30:06 These models are really getting good at doing these across a wide variety of domains.
    0:30:11 Even things that, if you’re an expert, things that are close to the fringe of knowledge,
    0:30:14 they will still be fairly good at.
    0:30:20 Cutting-edge AI topics that I do research on, these models are capable as a study aid,
    0:30:23 and they’re regularly updated.
    0:30:28 Where this changes is with DeepSeek R1, what is called these reasoning models, is
    0:30:34 when you see tokens coming from these models to start, it will be a large chain of thought
    0:30:35 process.
    0:30:39 We’ll get back to chain of thought in a second, which looks like a lot of tokens where the
    0:30:41 model is explaining the problem.
    0:30:45 The model will often break down the problem and be like, “Okay, they asked me for this.
    0:30:46 Let’s break down the problem.
    0:30:50 I’m going to need to do this,” and you’ll see all of this generating from the model.
    0:30:52 It’ll come very fast in most user experiences.
    0:30:55 These APIs are very fast, so you’ll see a lot of tokens, a lot of words show up really
    0:30:56 fast.
    0:31:01 It’ll keep flowing on the screen, and this is all the reasoning process, and then eventually
    0:31:05 the model will change its tone in R1, and it’ll write the answer, where it summarizes
    0:31:11 its reasoning process and writes a similar answer to the first types of model.
    0:31:17 In DeepSeek's case, which is part of why this was so popular even outside the AI community,
    0:31:21 is that you can see how the language model is breaking down problems.
    0:31:24 You get this answer on a technical side.
    0:31:27 They train the model to do this specifically where they have a section, which is reasoning,
    0:31:31 and then it generates a special token, which is probably hidden from the user most of the
    0:31:35 time, which says, “Okay, I’m starting the answer,” so the model is trained to do this
    0:31:37 two-stage process on its own.
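    To illustrate the two-stage output described here, a sketch of how a client might split the raw generation into the reasoning section and the final answer; DeepSeek R1's published format uses think tags of this form, but treat the tag names and parsing as an assumption for other models.

        def split_reasoning(raw_output: str):
            # The model is trained to emit its chain of thought between special tags,
            # then a separate final answer afterwards.
            start, end = "<think>", "</think>"
            if start in raw_output and end in raw_output:
                reasoning = raw_output.split(start, 1)[1].split(end, 1)[0].strip()
                answer = raw_output.split(end, 1)[1].strip()
                return reasoning, answer
            return "", raw_output.strip()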
    0:31:43 If you use a similar model in, say, OpenAI, OpenAI’s user interface is trying to summarize
    0:31:49 this process for you nicely by showing the sections that the model is doing, and it’ll
    0:31:54 kind of click through, it’ll say, breaking down the problem, making X calculation, cleaning
    0:31:58 the result, and then the answer will come for something like OpenAI.
    0:32:03 Maybe it’s useful here to go through an example of a DeepSeq R1 reasoning.
    0:32:09 And so, if you're looking at the screen here, what you'll see is a screenshot of the DeepSeek
    0:32:15 chat app, and at the top is thought for 151 seconds with the drop-down arrow.
    0:32:18 Underneath that, if we were in an app that we were running, the drop-down arrow would
    0:32:19 have the reasoning.
    0:32:25 So, in this case, the specific question, which, you know, I’m philosophically/podhead
    0:32:34 inclined, so this is asking DeepSeek R1 for one truly novel insight about humans.
    0:32:39 And it reveals the reasoning, and basically, the truly novel aspect is what’s pushing
    0:32:44 the reasoning, the model constantly sort of asking itself, "Is this truly novel?"
    0:32:50 So it’s actually challenging itself to be more novel, more counterintuitive, less cringe,
    0:32:51 I suppose.
    0:32:57 So some of the reasoning says, this is just snapshots, “Alternatively, humans have a
    0:33:01 unique meta-emotion where they feel emotions about their own emotions, e.g. feeling guilty
    0:33:02 about being angry.
    0:33:06 This recursive emotional layering creates complex motivational drives that don't exist
    0:33:07 in other animals.
    0:33:09 The insight is that human emotions are nested.”
    0:33:14 So it’s like, it’s reasoning through how humans feel emotions.
    0:33:15 It’s reasoning about meta-emotions.
    0:33:17 It’s going to have pages and pages of this.
    0:33:20 It’s almost too much to actually read, but it’s nice to skim as it’s coming.
    0:33:21 It’s a stream of consciousness.
    0:33:26 It’s a James Joyce-like stream of consciousness, and then it goes, “Wait, the user wants something
    0:33:28 that’s not seen anywhere else.
    0:33:30 Let me dig deeper.”
    0:33:35 And consider the human ability to hold contradictory beliefs simultaneously, cognitive dissonance
    0:33:41 is known, but perhaps the function is to allow flexible adaptation, so on and so forth.
    0:33:50 I mean, that really captures the public imagination that, holy shit, this isn’t, I mean, intelligence
    0:33:57 slash almost like an inkling of sentience, because you’re thinking through, you’re self-reflecting,
    0:33:59 you’re deliberating.
    0:34:06 And the final result of that, after 157 seconds, is humans instinctively convert selfish desires
    0:34:13 into cooperative systems by collectively pretending abstract rules, money, laws, rights are real.
    0:34:18 These shared hallucinations act as, quote, “games,” where competition is secretly redirected
    0:34:25 to benefit the group, turning conflict into society’s fuel, pretty profound, I mean, you
    0:34:26 know.
    0:34:31 This is a bit of a digression, but a lot of people have found that these reasoning
    0:34:34 models can sometimes produce much more eloquent text.
    0:34:39 That is at least an interesting example, I think, depending on how open-minded you are,
    0:34:42 you find language models interesting or not, and there’s a spectrum there.
    0:34:47 Well, I mean, we’ll talk about different benchmarks as well, but some is just a vibe.
    0:34:55 Like that, in itself, is a, let’s say, quote, “fire tweet,” if I’m trying to produce something
    0:34:59 where people are like, "Oh, shit." Okay, so that's a chain of thought; we'll probably
    0:35:02 return to it more.
    0:35:07 How are they able to achieve such low cost on the training and the inference?
    0:35:09 Maybe you could talk the training first.
    0:35:16 Yeah, so there’s two main techniques that they implemented that are probably the majority
    0:35:20 of their efficiency, and then there’s a lot of implementation details that maybe we’ll
    0:35:23 gloss over or get into later that sort of contribute to it.
    0:35:29 But those two main things are, one, is they went to a mixture of experts model, which
    0:35:30 we’ll define in a second.
    0:35:35 And then the other thing is that they invented this new technique called MLA, multi-head latent attention.
    0:35:36 Both of these are big deals.
    0:35:40 Mixture of experts is something that’s been in the literature for a handful of years,
    0:35:46 and OpenAI with GPT-4 was the first one to productize a mixture of experts model.
    0:35:51 And what this means is, when you look at the common models that most people have
    0:35:55 been able to interact with that are open, think Llama.
    0:36:01 Llama is a dense model, i.e., every single parameter or neuron is activated as you're
    0:36:05 going through the model for every single token you generate.
    0:36:08 Now with a mixture of experts model, you don’t do that.
    0:36:10 How does the human actually work?
    0:36:16 Well, my visual cortex is active when I’m thinking about vision tasks and other things.
    0:36:18 My amygdala is when I’m scared.
    0:36:21 These different aspects of your brain are focused on different things.
    0:36:24 A mixture of experts model attempts to approximate this to some extent.
    0:36:30 It’s nowhere close to what a brain architecture is, but different portions of the model activate.
    0:36:34 You’ll have a set number of experts in the model and a set number that are activated each
    0:36:35 time.
    0:36:38 And this dramatically reduces both your training and inference costs.
    0:36:44 Because now, if you think about the parameter count as the total embedding space for all
    0:36:49 of this knowledge that you’re compressing down during training, when you’re embedding
    0:36:54 this data in instead of having to activate every single parameter every single time you’re
    0:36:58 training or running inference, now you can just activate a subset.
    0:37:01 And the model will learn which expert to route to for different tasks.
    0:37:06 And so this is a humongous innovation in terms of, hey, I can continue to grow the total
    0:37:08 embedding space of parameters.
    0:37:12 And so DeepSeek's model is 600-something billion parameters.
    0:37:15 Relative to Llama 405B, it's 405 billion parameters.
    0:37:18 Relative to Llama 70B, it's 70 billion parameters.
    0:37:23 So this model technically has more embedding space for information to compress all of the
    0:37:25 world’s knowledge that’s on the internet down.
    0:37:31 But at the same time, it is only activating around 37 billion of the parameters.
    0:37:35 So only 37 billion of these parameters actually need to be computed every single time you’re
    0:37:38 training data or inferencing data out of it.
    0:37:43 And so versus, again, the Llama model, 70 billion parameters must be activated, or 405 billion
    0:37:44 parameters must be activated.
    0:37:49 So you’ve dramatically reduced your compute cost when you’re doing training and inference
    0:37:51 with this mixture of experts architecture.
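    A minimal sketch of the mixture-of-experts idea in PyTorch: a router picks the top-k experts per token and only those experts run, which is why only a fraction of the total parameters is touched per token. The sizes below are toy numbers, not DeepSeek's.

        import torch
        import torch.nn as nn

        class ToyMoELayer(nn.Module):
            def __init__(self, d_model=64, n_experts=8, top_k=2):
                super().__init__()
                self.router = nn.Linear(d_model, n_experts)
                self.experts = nn.ModuleList(
                    nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                  nn.Linear(4 * d_model, d_model))
                    for _ in range(n_experts)
                )
                self.top_k = top_k

            def forward(self, x):                      # x: (tokens, d_model)
                scores = self.router(x).softmax(dim=-1)
                weights, chosen = scores.topk(self.top_k, dim=-1)
                out = torch.zeros_like(x)
                for slot in range(self.top_k):         # only the chosen experts run for each token
                    for e, expert in enumerate(self.experts):
                        mask = chosen[:, slot] == e
                        if mask.any():
                            out[mask] += weights[mask, slot, None] * expert(x[mask])
                return out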
    0:37:55 So should we break down where it actually applies and go into the transformer?
    0:37:56 Is that useful?
    0:37:57 Let’s go.
    0:37:58 Let’s go into the transformer.
    0:38:04 The transformer is a thing that is talked about a lot, and we will not cover every detail.
    0:38:09 Essentially the transformer is built on repeated blocks of this attention mechanism, and then
    0:38:14 a traditional dense, fully connected multilayer perceptron, whatever word you want to use
    0:38:19 for your normal neural network, and you alternate these blocks, there’s other details.
    0:38:22 And where a mixture of experts is applied is at this dense MLP.
    0:38:28 The dense MLP holds most of the weights if you count them in a transformer model.
    0:38:32 So you can get really big gains from this mixture of experts on parameter efficiency,
    0:38:37 at training and inference, because you get this efficiency by not activating all of these
    0:38:38 parameters.
    0:38:43 We should also say that a transformer is a giant neural network.
    0:38:49 And then, for 15 years now, there's been what's called the deep learning revolution.
    0:38:53 Networks have gotten larger and larger, and at a certain point the scaling laws appeared where
    0:38:54 people realized…
    0:38:57 This is a scaling law shirt by the way.
    0:39:04 Representing scaling laws, where it became more and more formalized that bigger is better
    0:39:07 across multiple dimensions of what bigger means.
    0:39:12 But these are all neural networks we’re talking about, and we’re talking about different architectures
    0:39:17 of how to construct these neural networks such that the training and the inference on
    0:39:19 them is super efficient.
    0:39:23 Every different type of model has a different scaling law for it, which is effectively for
    0:39:29 how much compute you put in, the architecture will get to different levels of performance
    0:39:30 at test tasks.
    0:39:34 And mixture of experts is one of the ones at training time, even if you don’t consider
    0:39:36 the inference benefits, which are also big.
    0:39:41 At training time, your efficiency with your GPUs is dramatically improved by using this
    0:39:43 architecture if it is well implemented.
    0:39:50 So you can get effectively the same model performance and evaluation scores with numbers like
    0:39:51 30% less compute.
    0:39:55 I think there’s going to be a wide variation depending on your implementation details and
    0:39:56 stuff.
    0:40:00 But it is just important to realize that this type of technical innovation is something
    0:40:02 that gives huge gains.
    0:40:07 And I expect most companies that are serving their models to move to this mixture of experts
    0:40:12 implementation, historically the reason why not everyone might do it is because it’s an
    0:40:15 implementation complexity, especially when doing these big models.
    0:40:19 So this is one of the things that DeepSeek gets credit for: they do this extremely
    0:40:20 well.
    0:40:25 They do this mixture of experts extremely well; this architecture, what is called DeepSeek MoE,
    0:40:30 MoE being the shortened version of mixture of experts, is multiple papers old.
    0:40:35 This part of their training infrastructure is not new to these models alone.
    0:40:40 And the same goes for what Dylan mentioned with multi-head latent attention, which is all about reducing
    0:40:46 memory usage during inference, and the same during training, by using some fancy low-rank
    0:40:48 approximation math.
    0:40:51 If you get into the details with this latent attention, it’s one of those things that I
    0:40:56 look at and say, okay, they’re doing really complex implementations because there’s other
    0:41:01 parts of language models such as embeddings that are used to extend the context length.
    0:41:07 The common one that DeepSeek uses is rotary positional embeddings, which is called RoPE.
    0:41:10 And if you want to use RoPE with normal attention, it's kind of a sequential thing.
    0:41:16 You take two of the attention matrices and you rotate them by a complex-value
    0:41:21 rotation, which is a matrix multiplication. With DeepSeek's MLA, with this new attention
    0:41:25 architecture, they need to do some clever things because they’re not set up the same
    0:41:28 and it just makes the implementation complexity much higher.
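    A hedged sketch of the rotary positional embedding (RoPE) rotation mentioned here, applied to a query or key vector. This is the standard, simplified formulation from the literature, not DeepSeek's MLA-specific handling.

        import torch

        def apply_rope(x, positions, base=10000.0):
            # x: (seq_len, d) with d even; positions: (seq_len,) integer token positions.
            # Pairs of dimensions are rotated by an angle that grows with position,
            # which is the "rotation by a complex value" described above.
            d = x.shape[-1]
            freqs = base ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)   # (d/2,)
            angles = positions[:, None].float() * freqs[None, :]                # (seq_len, d/2)
            cos, sin = angles.cos(), angles.sin()
            x1, x2 = x[..., 0::2], x[..., 1::2]
            out = torch.empty_like(x)
            out[..., 0::2] = x1 * cos - x2 * sin
            out[..., 1::2] = x1 * sin + x2 * cos
            return out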
    0:41:30 So they’re managing all of these things.
    0:41:34 And these are probably the sort of things that OpenAI, these closed labs, are doing.
    0:41:37 We don’t know if they’re doing the exact same techniques, but they actually shared them
    0:41:42 with the world, which is really nice to be like, this is the cutting edge of efficient
    0:41:43 language model training.
    0:41:49 And some of this requires low-level engineering; it is just a giant mess of trickery.
    0:41:55 So, as I understand it, that one is below CUDA, so they go to super-low-level programming of GPUs.
    0:41:59 Effectively, NVIDIA builds this library called nickel, right?
    0:42:03 In which, you know, when you’re training a model, you have all these communications
    0:42:06 between every single layer of the model and you may have over a hundred layers.
    0:42:07 What does the nickel stand for?
    0:42:08 It’s NCCL.
    0:42:11 NVIDIA Collective Communications Library.
    0:42:12 Nice.
    0:42:13 Damn.
    0:42:19 And so, when you’re training a model, right, you’re going to have all these all reduces
    0:42:20 and all gathers, right?
    0:42:25 Between each layer, between the multi layer perceptron or feed forward network and the
    0:42:29 attention mechanism, you’ll have basically the model synchronized, right?
    0:42:33 Or you'll have an all-reduce and an all-gather.
    0:42:36 And this is a communication between all the GPUs in the network, whether it’s in training
    0:42:37 or inference.
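    For readers who have not seen these collectives, a minimal sketch of what an all-reduce looks like with PyTorch's distributed API, which calls NCCL under the hood on NVIDIA GPUs; the process-group setup is assumed to be done elsewhere, for example by torchrun.

        import torch.distributed as dist

        # Assumes the process group was already initialized, e.g. via
        # dist.init_process_group("nccl") under torchrun with one process per GPU.

        def sync_gradients(model):
            # All-reduce: every GPU ends up with the sum of this tensor across GPUs,
            # which is the per-layer synchronization described above.
            for param in model.parameters():
                if param.grad is not None:
                    dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
                    param.grad /= dist.get_world_size()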
    0:42:39 So NVIDIA has a standard library.
    0:42:43 This is one of the reasons why it’s really difficult to use anyone else’s hardware for
    0:42:47 training is because no one’s really built a standard communications library.
    0:42:50 And NVIDIA has done this at a sort of a higher level, right?
    0:42:55 Now DeepSeek has certain limitations around the GPUs that they have access to.
    0:43:00 The interconnects are limited to some extent by the restrictions of the GPUs that were
    0:43:04 shipped into China legally, not the ones that are smuggled but legally shipped in that they
    0:43:05 used to train this model.
    0:43:09 They had to figure out how to get efficiencies, right?
    0:43:14 And one of those things is that instead of just calling the NVIDIA library, Nickel, right?
    0:43:20 They instead created their own, they scheduled their own communications, which some of the
    0:43:22 labs do, right?
    0:43:25 Meta talked about in Llama 3 how they made their own custom version of Nickel.
    0:43:28 This is, they didn’t talk about the implementation details.
    0:43:31 This is some of what they did, probably not as well as, maybe not as well as DeepSeek,
    0:43:36 because for DeepSeek, you know, necessity is the mother of innovation and they had to do
    0:43:37 this.
    0:43:41 Whereas in the case, you know, OpenAI has people that do this sort of stuff, Anthropic,
    0:43:42 et cetera.
    0:43:45 But, you know, DeepSeek certainly did it publicly and they may have done it even better
    0:43:50 because they were gimped on a certain aspect of the chips that they have access to.
    0:43:57 And so they scheduled communications, you know, by scheduling specific SMs, SMs you could
    0:44:00 think of as like the core on a GPU, right?
    0:44:05 So there’s hundreds of cores or there’s, you know, a bit over a hundred cores SMs on
    0:44:08 a GPU and they were specifically scheduling, hey, which ones are running the model, which
    0:44:11 ones are doing all reduce, which one are doing all gather, right?
    0:44:13 And they would flip back and forth between them.
    0:44:16 And this requires extremely low level programming.
    0:44:20 This is what Nickel does automatically or other NVIDIA libraries handle this automatically
    0:44:21 usually.
    0:44:22 Yeah, exactly.
    0:44:26 And so technically they’re using, you know, PTX, which is like sort of like, you could
    0:44:28 think of it as like an assembly type language.
    0:44:30 It’s not exactly that or instruction set, right?
    0:44:35 Like coding directly to assembly or instruction set, it’s not exactly that, but that’s still
    0:44:39 part of technically CUDA, but it’s like, do I want to write in Python, you know, PyTorch
    0:44:41 equivalent and call NVIDIA libraries?
    0:44:43 Do I want to go down to the C level, right?
    0:44:46 Or, you know, code even lower level, or do I want to go all the way down to the assembly
    0:44:47 or ISA level?
    0:44:52 And there are cases where you go all the way down there at the very big labs, but most
    0:44:54 companies just do not do that, right?
    0:44:58 Because it’s a waste of time and the efficiency gains you get are not worth it.
    0:45:01 Well, DeepSeek's implementation is so complex, right?
    0:45:03 Especially with their mixture of experts, right?
    0:45:07 People have done mixture of experts, but they're generally 8 or 16 experts, right?
    0:45:08 And they activate two.
    0:45:13 So, you know, one of the words we like to use is like sparsity factor, right?
    0:45:14 Or usage, right?
    0:45:18 So you might have four, you know, one fourth of your model activate, right?
    0:45:22 And that's what Mistral's Mixtral model is, right?
    0:45:26 Their model that really catapulted them to, like, oh my God, they're really, really good.
    0:45:32 OpenAI has also had models that are MoE, and so have all the other labs that are major closed.
    0:45:36 But what DeepSeek did, that maybe only the leading labs have just recently started
    0:45:38 doing is have such a high sparsity factor, right?
    0:45:40 It’s not one fourth of the model, right?
    0:45:43 Two out of eight experts activating every time you go through the model.
    0:45:46 It’s eight out of 256.
    0:45:50 And there’s different implementations for mixture of experts where you can have some
    0:45:56 of these experts that are always activated, which this just looks like a small neural network.
    0:45:58 And then all the tokens go through that.
    0:46:03 And then they also go through some that are selected by this routing mechanism.
    0:46:08 And one of the innovations in DeepSeek's architecture is that they changed the routing
    0:46:10 mechanism in mixture of expert models.
    0:46:15 There’s something called an auxiliary loss, which effectively means during training, you
    0:46:21 want to make sure that all of these experts are used across the tasks that the model sees.
    0:46:26 Why there can be failures in mixture of experts is that when you're doing this training,
    0:46:30 the one objective is token prediction accuracy.
    0:46:34 And if you just let training go with a mixture of experts model on its own, it can be that
    0:46:39 the model learns to only use a subset of the experts.
    0:46:43 And in the MOE literature, there’s something called the auxiliary loss, which helps balance
    0:46:44 them.
    0:46:49 But if you think about the loss functions of deep learning, this even connects to the
    0:46:54 bitter lesson: you want to have the minimum inductive bias in your model to let
    0:46:56 the model learn maximally.
    0:47:01 And this auxiliary loss, this balancing across experts, could be seen as in tension with the
    0:47:04 prediction accuracy of the tokens.
    0:47:08 So we don't know the exact extent of the DeepSeek MoE change, which is, instead of
    0:47:12 doing an auxiliary loss, they have an extra parameter in their routing, and after the
    0:47:17 batches, they update this parameter to make sure that the next batches all have a similar
    0:47:19 use of experts.
    0:47:22 And this type of change can be big, it can be small, but they add up over time.
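    A rough sketch of the routing-bias idea as just described: instead of an auxiliary loss, keep a per-expert bias that is nudged after each batch so under-used experts become slightly more likely to be selected in the next batch. This is a simplified reading of that auxiliary-loss-free balancing, not DeepSeek's exact implementation.

        import torch

        def update_routing_bias(bias, expert_counts, step_size=0.001):
            # bias: (n_experts,) added to router scores before top-k selection.
            # expert_counts: (n_experts,) how many tokens each expert handled this batch.
            # Push load toward the mean: under-used experts get a higher bias,
            # over-used experts get a lower one.
            target = expert_counts.float().mean()
            bias += step_size * torch.sign(target - expert_counts.float())
            return bias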
    0:47:27 And this is the sort of thing that just points to them innovating and I’m sure all the labs
    0:47:30 that are training big MoEs are looking at this sort of thing, which is getting away
    0:47:33 from the auxiliary loss, some of them might already use it, but you just keep
    0:47:34 accumulating gains.
    0:47:40 And we’ll talk about the philosophy of training and how you organize these organizations.
    0:47:44 And a lot of it is just compounding small improvements over time in your data and your
    0:47:48 architecture and your post training and how they integrate with each other.
    0:47:48 And DeepSeek does the same thing.
    0:47:53 And some of them are shared, or a lot of them; we have to take it on face value that they share
    0:47:54 their most important details.
    0:47:56 I mean, the architecture and the weights are out there.
    0:47:59 So we’re seeing what they’re doing and it adds up.
    0:48:02 Going back to sort of the like efficiency and complexity point, right?
    0:48:05 It’s 32 versus four, right?
    0:48:08 For, like, Mixtral and other MoE models that have been publicly released.
    0:48:13 So this ratio is extremely high and sort of what Nathan was getting at there was, when
    0:48:19 you have such a different level of sparsity, you can’t just have every GPU have the entire
    0:48:20 model, right?
    0:48:21 The model’s too big.
    0:48:22 There’s too much complexity there.
    0:48:25 So you have to split up the model with different types of parallelism, right?
    0:48:29 And so you might have different experts on different GPU nodes.
    0:48:34 But now what happens when this set of data that you get, hey, all of it looks like this
    0:48:39 one way and all of it should route to one part of my model, right?
    0:48:45 So when all of it routes to one part of the model, then you can have this overloading
    0:48:49 of a certain set of the GPU resources or a certain set of the GPUs.
    0:48:54 And then the rest of the training network sits idle because all of the tokens are just
    0:48:55 routing to that.
    0:48:56 So this is the biggest complexity.
    0:49:02 One of the biggest complexities with running a very sparse mixture of experts model, i.e.,
    0:49:07 this 32 ratio versus this four ratio is that you end up with so many of the experts just
    0:49:08 sitting there idle.
    0:49:10 So how do I load balance between them?
    0:49:12 How do I schedule the communications between them?
    0:49:19 This is a lot of the extremely low level detailed work that they figured out in the public first
    0:49:24 and potentially second or third in the world and maybe even first in some cases.
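Here is a toy illustration of that load-balancing problem, with entirely made-up numbers: 256 experts sharded over 32 GPUs and a pathological batch whose router only ever picks experts hosted on the first GPU. Counting expert calls per GPU shows one device overloaded while the rest sit idle.

```python
import numpy as np

# Hypothetical setup: 256 routed experts sharded evenly across 32 GPUs,
# so each GPU hosts 8 experts. All numbers are placeholders for illustration.
n_experts, n_gpus, tokens, top_k = 256, 32, 8192, 8
experts_per_gpu = n_experts // n_gpus
rng = np.random.default_rng(0)

# Skewed batch: every token picks among experts 0-7, which all live on GPU 0.
assignments = rng.choice(8, size=(tokens, top_k))          # pathological routing
gpu_of_expert = np.arange(n_experts) // experts_per_gpu
tokens_per_gpu = np.bincount(gpu_of_expert[assignments].ravel(), minlength=n_gpus)

# With perfect balance every GPU would see the same share of expert calls.
ideal = assignments.size / n_gpus
print("busiest GPU handles", tokens_per_gpu.max(), "expert calls; ideal is", int(ideal))
print("GPUs sitting idle:", int((tokens_per_gpu == 0).sum()), "of", n_gpus)
```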
    0:49:30 What lesson do you, in the direction of the bitter lesson, do you take from all of this?
    0:49:33 Is this going to be the direction where a lot of the gain is going to be, which is this
    0:49:36 kind of low level optimization?
    0:49:42 Or is this a short term thing where the biggest gains will be more on the algorithmic high
    0:49:45 level side of post training?
    0:49:51 Is this a short-term leap because they've figured out a hack, because necessity is
    0:49:53 the mother of invention?
    0:49:55 Or is there still a lot of gains?
    0:49:59 I think we should summarize what the bitter lesson actually is about.
    0:50:04 The bitter lesson, essentially, if you paraphrase it, is that the types of training that will
    0:50:11 win out in deep learning as we go are those methods which are scalable in learning
    0:50:14 and search, as it calls out.
    0:50:19 This scale word gets a lot of attention in this.
    0:50:27 The interpretation that I use is effectively to avoid adding the human priors to your learning
    0:50:28 process.
    0:50:32 If you read the original essay, this is what it talks about is how researchers will try
    0:50:38 to come up with clever solutions to their specific problem that might get them small
    0:50:40 gains in the short term.
    0:50:45 While simply enabling these deep learning systems to work efficiently and for these
    0:50:50 bigger problems in the long term might be more likely to scale and continue to drive
    0:50:53 success.
    0:50:57 Here we were talking about relatively small implementation changes to the mixture
    0:50:59 of experts model.
    0:51:04 So it's like, "Okay, we will need a few more years to know if one of these is
    0:51:08 actually really crucial to the bitter lesson, but the bitter lesson is really this long
    0:51:13 term arc of how simplicity can often win and there’s a lot of sayings in the industry
    0:51:14 like the models just want to learn.
    0:51:20 You have to give them the simple loss landscape where you put compute through the model and
    0:51:24 they will learn, and get the barriers out of the way."
    0:51:29 That's where the power of something like NCCL comes in, where standardized code can
    0:51:33 be used by a lot of people to create sort of simple innovations that can scale, which
    0:51:39 is why the code base for DeepSeek is probably a giant mess.
    0:51:43 I'm sure DeepSeek definitely has code bases that are extremely messy where they're testing
    0:51:47 these new ideas, multi-head latent attention.
    0:51:50 Probably could start in something like a Jupyter notebook or somebody tries something on a
    0:51:54 few GPUs and that is really messy.
    0:52:00 But the stuff that trains DeepSeek V3 and DeepSeek R1, those libraries, if you were to present
    0:52:04 them to us, I would guess are extremely high quality code.
    0:52:07 High quality readable code.
    0:52:13 I think there is one aspect to note though, is that there is the general ability for that
    0:52:16 to transfer across different types of runs.
    0:52:21 You may make really, really high quality code for one specific model architecture at one
    0:52:22 size.
    0:52:26 Then that is not transferable to, “Hey, when I make this architecture tweak, everything’s
    0:52:28 broken again.”
    0:52:34 That’s something that could be, with their specific low-level coding of scheduling SMs,
    0:52:38 is specific to this model architecture and size.
    0:52:43 Whereas NVIDIA's collective communications library, NCCL, is more like, "Hey, it'll work for anything.
    0:52:44 You want to do an all-reduce?
    0:52:45 Great.
    0:52:46 I don’t care what your model architecture is.
    0:52:47 It’ll work.”
    0:52:51 You’re giving up a lot of performance when you do that in many cases, but it’s worthwhile
    0:52:57 for them to do the specific optimization for the specific run given the constraints that
    0:52:58 they have regarding compute.
    0:53:06 I wonder how stressful it is for these frontier models to initiate training, to have the code ready,
    0:53:17 to push the button, knowing that you're now spending a large amount of money and time to train this.
    0:53:22 There must be a lot of innovation on the debugging stage of making sure there’s no issues that
    0:53:27 you’re monitoring and visualizing every aspect of the training, all that kind of stuff.
    0:53:31 When people are training, they have all these various dashboards, but the most simple one
    0:53:33 is your loss.
    0:53:38 It continues to go down, but in reality, especially with more complicated stuff like MOE, the
    0:53:42 biggest problem with it, or with FP8 training, which is another innovation going to a lower-precision
    0:53:47 number format, i.e., less accurate, is that you end up with loss spikes.
    0:53:49 No one knows why the loss spike happened.
    0:53:50 For a long time, you do.
    0:53:51 Some of them you do.
    0:53:52 That's bad data.
    0:53:56 I'll give AI2's example of what blew up our earlier models: a subreddit called Microwave
    0:53:57 Gang.
    0:53:58 We love the shout-out.
    0:53:59 It’s a real thing.
    0:54:01 You can pull up Microwave Gang.
    0:54:05 Essentially, it’s a subreddit where everybody makes posts that are just the letter M, so
    0:54:06 it’s like, mmm.
    0:54:11 There are extremely long sequences of the letter M, and then the comments are like beep beep
    0:54:12 because it's like the microwave ending.
    0:54:16 If you pass this into a model that's trained to produce normal text, it's extremely
    0:54:22 high loss, because normally when you see an M, you don't predict M's for a long time.
    0:54:24 This is something that causes the loss spikes for us.
    0:54:28 This is old, this is not recent, and when you have more mature
    0:54:31 data systems, that's not the thing that causes the loss spike.
    0:54:36 What Dylan is saying is true, but there are levels to this sort of idea.
    0:54:41 With regards to the stress, these people are like, you’ll go out to dinner with a friend
    0:54:46 that works at one of these labs, and they’ll just be looking at their phone every 10 minutes,
    0:54:49 and they’re not like, you know, it’s one thing if they’re texting, but they’re just like,
    0:54:50 like, is the loss–
    0:54:56 Yeah, it’s like tokens per second, loss not blown up, they’re just watching this.
    0:54:59 And the heart rate goes up if there’s a spike.
    0:55:01 And some level of spikes is normal, right?
    0:55:03 It’ll recover and be back.
    0:55:07 Sometimes a lot of the old strategy was like, you just stop the run, restart from the old
    0:55:10 version, and then like, change the data mix, and then it keeps going.
    0:55:12 There are even different types of spikes.
    0:55:17 So Dirk Groeneveld has a theory that it's like fast spikes and slow spikes, where there
    0:55:20 are– sometimes when you’re looking at the loss and there are other parameters, you can
    0:55:24 see it start to creep up and then blow up, and that’s really hard to recover from, so
    0:55:25 you have to go back much further.
    0:55:28 So you have the stressful period where it’s like flat or it might start going up, and
    0:55:29 you’re like, what do I do?
    0:55:33 Whereas there are also loss spikes that are– it looks good, and then there’s one spiky
    0:55:34 data point.
    0:55:36 And what you can do is you just skip those.
    0:55:39 You see that there’s a spike, you’re like, okay, I can ignore this data, don’t update
    0:55:41 the model, and do the next one, and it’ll recover quickly.
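A minimal sketch of that "skip the spiky batch" strategy, not any lab's actual training loop: if a batch's loss is a large outlier versus the recent history, drop the update and move on. The window size and threshold are arbitrary placeholders.

```python
import numpy as np

def train_with_spike_skipping(batches, compute_loss, apply_update,
                              window=50, threshold=3.0):
    """Toy loss-spike handling: if a batch's loss is far above the recent
    average, skip the update for that batch instead of letting one spiky
    data point disturb the run. compute_loss and apply_update stand in
    for the real training step."""
    history = []
    for batch in batches:
        loss = compute_loss(batch)
        if len(history) >= window:
            recent = history[-window:]
            if loss > np.mean(recent) + threshold * np.std(recent):
                continue                       # spiky data point: don't update, move on
        apply_update(batch)                    # normal step
        history.append(loss)
    return history

# Toy usage: mostly well-behaved losses with an occasional huge spike injected.
rng = np.random.default_rng(0)
losses = rng.normal(2.0, 0.05, size=1000)
losses[::97] = 10.0                            # inject spikes
kept = train_with_spike_skipping(losses, compute_loss=lambda l: l,
                                 apply_update=lambda l: None)
print(len(kept), "updates applied out of", len(losses))
```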
    0:55:47 But there are trickier implementations, as you get more complex in your architecture,
    0:55:52 and you scale up to more GPUs, you have more potential for your loss blowing up.
    0:55:54 So there’s a distribution.
    0:55:56 The whole idea of grokking also comes in, right?
    0:56:00 It's like, just because it slowed down from improving in loss doesn't mean it's not learning,
    0:56:04 because all of a sudden it could be like this, and it could just spike down in loss again,
    0:56:06 because it truly learned something, right?
    0:56:08 And it took some time for it to learn that.
    0:56:10 It’s not like a gradual process, right?
    0:56:13 And that’s what humans are like, that’s what models are like.
    0:56:15 It’s really a stressful task, as you mentioned.
    0:56:18 And the whole time, the dollar count is going up.
    0:56:20 Every company has failed runs.
    0:56:23 You need failed runs to push the envelope on your infrastructure.
    0:56:28 So a lot of news cycles are made of X company had Y failed run.
    0:56:32 Every company that’s trying to push the frontier of AI has these.
    0:56:37 So yes, it's noteworthy because it's a lot of money, and it can be a weeks-to-months setback,
    0:56:39 but it is part of the process.
    0:56:44 But how do you get, if you’re deep-seek, how do you get to a place where, holy shit, there’s
    0:56:46 a successful combination of hyperparameters?
    0:56:49 A lot of small failed runs.
    0:56:55 So rapid iteration through failed runs and successful ones.
    0:57:01 And then you build up some intuition, like this mixture of experts works, and then
    0:57:03 this implementation of MLA works.
    0:57:08 Key hyperparameters like learning rate and regularization and things like this.
    0:57:11 And you find the regime that works for your code base.
    0:57:13 I’ve talked to people at Frontier Labs.
    0:57:18 There’s a story that you can tell where training language models is kind of a path that you
    0:57:19 need to follow.
    0:57:24 So you need to unlock the ability to train a certain type of model or a certain scale,
    0:57:27 and then your code base and your internal know-how of which hyperparameters work for
    0:57:28 it is kind of known.
    0:57:33 And you look at the deep-seek papers and models, they’ve scaled up, they’ve added complexity,
    0:57:36 and it’s just continuing to build the capabilities that they have.
    0:57:39 Here’s the concept of a YOLO run.
    0:57:42 So YOLO, you only live once.
    0:57:47 And what it is, is there’s all this experimentation you do at the small scale.
    0:57:48 Research ablations.
    0:57:53 You have your Jupyter Notebook where you’re experimenting with MLA on three GPUs or whatever.
    0:57:58 And you’re doing all these different things like, “Hey, do I do four active experts,
    0:57:59 128 experts?
    0:58:01 Do I arrange the experts this way?”
    0:58:03 All these different model architecture things.
    0:58:05 You’re testing at a very small scale.
    0:58:09 Several researchers, few GPUs, tens of GPUs, hundreds of GPUs, whatever it is.
    0:58:13 And then, all of a sudden, you’re like, “Okay, guys, no more fucking around.
    0:58:14 No more screwing around.
    0:58:19 Everyone, take all the resources we have, let’s pick what we think will work, and just
    0:58:20 go for it.”
    0:58:21 YOLO.
    0:58:24 And this is where that sort of stress comes in, is like, “Well, I know it works here,
    0:58:28 but some things that work here don’t work here, and some things that work here don’t
    0:58:29 work down here.”
    0:58:30 Right?
    0:58:31 In terms of scale.
    0:58:38 It’s really truly a YOLO run, and there’s this discussion of certain researchers just
    0:58:40 have this methodical nature.
    0:58:44 They can find the whole search space and figure out all the ablations of different research
    0:58:45 and really see what is best.
    0:58:50 And there’s certain researchers who just have that innate gut instinct of, “This is the
    0:58:51 YOLO run.
    0:58:52 I’m looking at the data.
    0:58:53 This is it.”
    0:58:57 This is why you want to work in post-training, because the GPU cost for training is lower,
    0:59:01 so you can make a higher percentage of your training runs YOLO runs.
    0:59:02 Yeah.
    0:59:03 For now.
    0:59:04 Yeah.
    0:59:05 For now.
    0:59:06 For now.
    0:59:09 So, some of this is fundamentally luck, still.
    0:59:10 Luck is skill, right?
    0:59:11 In many cases.
    0:59:12 Yeah.
    0:59:13 I mean, it looks lucky, right?
    0:59:17 But the hill to climb, if you’re out in one of these labs and you have an evaluation
    0:59:21 and you’re not crushing, there’s a repeated playbook of how you improve things.
    0:59:24 There are localized improvements, which might be data improvements, and these add up into
    0:59:26 the whole model just being much better.
    0:59:30 And when you zoom in really close, it can be really obvious that this model is just really
    0:59:33 bad at this thing, and we can fix it, and you just add these up.
    0:59:38 So, some of it feels like luck, but on the ground, especially with these new reasoning
    0:59:43 models we’re talking to, it’s just so many ways that we can poke around, and normally,
    0:59:45 it’s that some of them give big improvements.
    0:59:47 The search space is near infinite, right?
    0:59:53 And yet, the amount of compute in time you have is very low, and you have to hit release
    0:59:54 schedules.
    1:00:00 You have to not get blown past by everyone, otherwise, what happened with DeepSeek, crushing
    1:00:03 Meta, and Mistral, and Cohere, and all these guys, they moved too slow, right?
    1:00:06 They maybe were too methodical, I don’t know, they didn’t hit the YOLO run, whatever the
    1:00:09 reason was, maybe they weren’t as skilled.
    1:00:13 You can call it luck if you want, but at the end of the day, it’s skill.
    1:00:16 So, 2025 is the year of the YOLO run.
    1:00:19 It seems like all the labs are going in.
    1:00:24 I think it’s even more impressive what OpenAI did in 2022.
    1:00:28 At the time, no one believed in mixture of experts models besides Google, who had all the
    1:00:34 researchers. OpenAI had such little compute, and they devoted all of their compute for many
    1:00:40 months, all of it, 100%, for many months, to GPT-4, with a brand new architecture with
    1:00:44 no belief that, hey, let me spend a couple hundred million dollars, which is all of the
    1:00:47 money I have on this model, right?
    1:00:49 That is truly YOLO, right?
    1:00:54 Now, people are like, all these training run failures that are in the media, right?
    1:00:58 It's like, okay, great, but actually, a huge chunk of my GPUs are doing inference.
    1:01:03 I still have a bunch doing research constantly, and yes, my biggest cluster is training on
    1:01:09 this YOLO run, but that YOLO run is much less risky than what OpenAI did in 2022, or maybe
    1:01:13 what DeepSeq did now, or sort of like, hey, we’re just going to throw everything at it.
    1:01:18 The big winners throughout human history are the ones who are willing to do YOLO at some
    1:01:19 point.
    1:01:25 Okay, what do we understand about the hardware it's been trained on, DeepSeek?
    1:01:29 DeepSeek is very interesting. It's worth taking a second to zoom out on who they are, first
    1:01:30 of all, right?
    1:01:35 HighFlyer is a hedge fund that has historically done quantitative trading in China as well
    1:01:40 as elsewhere, and they have always had a significant number of GPUs, right?
    1:01:45 In the past, a lot of these high-frequency trading, algorithmic quant traders used FPGAs,
    1:01:47 but it shifted to GPUs, definitely, and there’s both, right?
    1:01:52 But GPUs especially. And HighFlyer, which is the hedge fund that owns DeepSeek, and everyone
    1:01:56 who works for DeepSeek is part of HighFlyer, to some extent, right?
    1:01:59 It’s the same parent company, same owner, same CEO.
    1:02:05 They had all these resources and infrastructure for trading, and then they devoted a humongous
    1:02:10 portion of them to training models, both language models and otherwise, right?
    1:02:15 Because these techniques were heavily AI-influenced.
    1:02:21 More recently, people have realized, hey, trading with, even when you go back to Renaissance
    1:02:26 and all these quantitative firms, natural language processing is the key to trading
    1:02:30 really fast, understanding a press release and making the right trade, right?
    1:02:33 And so, DeepSeek has always been really good at this.
    1:02:39 And even as far back as 2021, they have press releases and papers saying, hey, we’re the
    1:02:44 first company in China with an A100 cluster this large, those 10,000 A100 GPUs, right?
    1:02:46 This is in 2021.
    1:02:48 Now this wasn’t all for training large language models.
    1:02:54 This was mostly for training models for their quantitative aspects, their quantitative trading,
    1:02:57 as well as a lot of that was natural language processing, to be clear, right?
    1:02:59 And so this is the sort of history, right?
    1:03:03 So verifiable fact is that in 2021, they built the largest Chinese cluster.
    1:03:06 At least, they claim it was the largest cluster in China, 10,000 GPUs.
    1:03:11 Before export controls started, they had a huge cluster, before any conversation of
    1:03:12 export controls.
    1:03:16 So then you step it forward to, what have they done over the last four years since then,
    1:03:17 right?
    1:03:21 Obviously, they’ve continued to operate the hedge fund, probably make tons of money.
    1:03:24 And the other thing is that they’ve leaned more and more and more into AI.
    1:03:27 The CEO, Liang Wenfeng, Liang…
    1:03:30 You're putting me on the spot on this, we discussed this before.
    1:03:31 Liang Wenfeng, right?
    1:03:32 The CEO, he owns…
    1:03:33 All of them.
    1:03:38 Liang Wenfeng, he owns maybe a little bit more than half the company, allegedly, right?
    1:03:44 He’s an extremely Elon Jensen kind of figure where he’s just involved in everything, right?
    1:03:48 And so over that time period, he’s gotten really in-depth into AI.
    1:03:50 He actually has a bit of a…
    1:03:54 If you see some of the statements, a bit of an e/acc vibe almost, right?
    1:03:56 Total AGI vibes.
    1:03:57 We need to do this.
    1:04:01 We need to make a new ecosystem of open AI.
    1:04:05 We need China to lead on this sort of ecosystem because historically, the Western countries
    1:04:11 have led on software ecosystems, and he straight-up acknowledges, like, in order to do this,
    1:04:15 we need to do something different. DeepSeek is his way of doing this.
    1:04:17 Some of the translated interviews with him are fantastic.
    1:04:18 So he has done interviews?
    1:04:19 Yeah.
    1:04:21 You think he would do a Western interview or no?
    1:04:22 Or is there controls on the channel?
    1:04:26 There hasn’t been one yet, but I would try it.
    1:04:29 I just got a Chinese translator, so it was great.
    1:04:30 This is how I’ll push.
    1:04:38 So fascinating figure engineer pushing full-on into AI, leveraging the success from the high-frequency
    1:04:39 trading.
    1:04:40 Very direct quotes.
    1:04:44 We will not switch to closed source when asked about this stuff.
    1:04:50 Very long-term motivated in how the ecosystem of AI should work.
    1:04:57 And I think from a Chinese perspective, he wants a Chinese company to build this vision.
    1:05:01 And so this is sort of like the “visionary” behind the company.
    1:05:03 This hedge fund still exists, this quantitative firm.
    1:05:10 And so DeepSeek is the sort of, you know, slowly he got turned to this full view of like
    1:05:12 AI, everything about this, right?
    1:05:15 But at some point, it slowly maneuvered and he made DeepSeek.
    1:05:17 And DeepSeek has done multiple models since then.
    1:05:19 They’ve acquired more and more GPUs.
    1:05:22 They share infrastructure with the fund, right?
    1:05:28 And so, you know, there is no exact number of public GPU resources that they have, but
    1:05:32 besides this 10,000 GPUs that they bought in 2021, right?
    1:05:34 And they were fantastically profitable, right?
    1:05:40 And then this paper claims they did only 2,000 H800 GPUs, which is a restricted GPU that was
    1:05:43 previously allowed in China, but no longer allowed and there’s a new version.
    1:05:47 But it’s basically NVIDIA’s H100 for China, right?
    1:05:51 And then there’s some restrictions on it, specifically around the communications sort
    1:05:52 of speed, the interconnect speed, right?
    1:05:57 Which is why they had to do this crazy SM, you know, scheduling stuff, right?
    1:05:58 So going back to that, right?
    1:06:03 It’s like, this is obviously not true in terms of their total GPU count.
    1:06:08 Obviously they have more GPUs available, but for this training run, you think 2,000 is the correct number
    1:06:09 or no?
    1:06:13 So this is where it takes, you know, a significant amount of sort of like zoning in, right?
    1:06:16 Like, what do you call your training run, right?
    1:06:20 You count all of the research and ablations that you ran, right?
    1:06:23 Studying all this stuff, because yes, you can do a YOLO run, but at some level you have
    1:06:26 to do the test at the small scale, and then you have to do some test at medium scale before
    1:06:28 you go to a large scale.
    1:06:32 Accepted practice is that for any given model that is a notable advancement, you’re going
    1:06:37 to do two to four X compute of the full training run in experiments alone.
    1:06:42 So a lot of this compute that’s being scaled up is probably used in large part at this
    1:06:43 time for research.
    1:06:47 Yeah, and research will, you know, research begets the new ideas that let you get huge
    1:06:48 efficiency.
    1:06:49 Right.
    1:06:50 Research gets you O1.
    1:06:52 You break through, so you need to bet on it.
    1:06:56 So some of the pricing strategy they will discuss has the research baked into the price.
    1:07:01 So the numbers that deep seek specifically said publicly, right, are just the 10,000
    1:07:06 GPUs in 2021, and then 2,000 GPUs for only the pre-training for V3.
    1:07:08 They did not discuss cost on R1.
    1:07:13 They did not discuss cost on all the other RL, right, for the instruct model that they
    1:07:14 made, right?
    1:07:18 They only discussed the pre-training for the base model, and they did not discuss anything
    1:07:19 on research and ablations.
    1:07:23 And they do not talk about any of the resources that are shared in terms of, hey, the fund
    1:07:25 is using all these GPUs, right?
    1:07:30 And we know that they’re very profitable and that 10,000 GPUs in 2021.
    1:07:36 So some of the research that we’ve found is that we actually believe they have closer
    1:07:38 to 50,000 GPUs.
    1:07:39 We as in SemiAnalysis.
    1:07:44 So we should say that you’re sort of one of the world experts in figuring out what everybody’s
    1:07:49 doing in terms of the semiconductor in terms of cluster buildouts in terms of, like, who
    1:07:52 is doing what in terms of training runs.
    1:07:53 So yeah.
    1:07:54 So that’s the we.
    1:07:55 Okay, go ahead.
    1:07:56 Yeah, sorry.
    1:07:58 We believe they actually have something closer to 50,000 GPUs, right?
    1:08:00 Now, this is split across many tasks, right?
    1:08:03 Again, the fund, research and ablations.
    1:08:05 For ballpark, how much would OpenAI or Anthropic have?
    1:08:10 I think the clearest example we have, because Meta is also open, they talk about, like, order
    1:08:15 of 60K to 100K, H100 equivalent GPUs in their training clusters.
    1:08:16 Right.
    1:08:20 Like Llama 3, they trained on 16,000 H100s, right?
    1:08:23 But the company of Meta last year publicly disclosed they bought, like, 400 something
    1:08:24 thousand GPUs.
    1:08:25 Yeah.
    1:08:26 Right?
    1:08:27 So of course, tiny percentage on the training.
    1:08:31 Again, like most of it is, like, serving me the best Instagram reels, right?
    1:08:32 Or whatever, right?
    1:08:37 I mean, we could get into the cost of, like, what is the cost of ownership for a 2,000 GPU cluster,
    1:08:38 10,000?
    1:08:40 There are just different sizes of companies that can afford
    1:08:44 these things, and DeepSeek is reasonably big.
    1:08:49 Their compute allocation, comparatively, is one of the top few in the world.
    1:08:52 It’s not OpenAI, Anthropoc, et cetera, but they have a lot of compute.
    1:08:56 Can you, in general, actually just zoom out and also talk about the Hopper architecture,
    1:09:02 the NVIDIA Hopper GPU architecture and the difference between H100 and H800, like you
    1:09:03 mentioned, the interconnects?
    1:09:04 Yeah.
    1:09:08 So there’s, you know, Ampere was the A100 and then H100 Hopper, right?
    1:09:12 People use them synonymously in the US because really there’s just H100 and now there’s H200,
    1:09:13 right?
    1:09:15 Mostly.
    1:09:19 In China, they’ve had, there have been different salvos of export restrictions.
    1:09:22 So initially the US government limited on a two-factor scale, right?
    1:09:25 Which is chip interconnect versus flops, right?
    1:09:29 So any chip that had interconnect above a certain level and floating
    1:09:33 point operations above a certain level was restricted.
    1:09:37 Later the government realized that this was a flaw in the restriction and they cut it
    1:09:40 down to just floating point operations.
    1:09:45 And so, H800 had high flops, low communication?
    1:09:46 Exactly.
    1:09:50 So the H800 was the same performance as H100 on flops, right?
    1:09:53 But it didn’t have, it just had the interconnect bandwidth cut.
    1:09:58 DeepSeq knew how to utilize this, you know, hey, even though we’re cut back on the interconnect,
    1:10:04 we can do all this fancy stuff to figure out how to use the GPU fully anyways, right?
    1:10:10 And so that was back in October 2022, but later in 2023, end of 2023 implemented in
    1:10:14 2024, the US government banned the H800, right?
    1:10:18 And so by the way, this H800 cluster, these 2000 GPUs was not even purchased in 2024,
    1:10:19 right?
    1:10:22 It was purchased in late 2023.
    1:10:23 And they’re just getting the model out now, right?
    1:10:25 Because it takes a lot of research, et cetera.
    1:10:29 H800 was banned and now there’s a new chip called the H20.
    1:10:34 The H20 is cut back on only flops, but the interconnect bandwidth is the same.
    1:10:38 And in fact, in some ways, it’s better than the H100 because it has better memory bandwidth
    1:10:39 and memory capacity.
    1:10:43 So there are, you know, NVIDIA is working within the constraints of what the government
    1:10:46 sets and then builds the best possible GPU for China.
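To make the two rule regimes concrete, here is a tiny sketch with entirely made-up thresholds and chip specs (the real BIS thresholds and the real H100/H800/H20 numbers differ). It just shows why a flops-only test catches a high-flops, low-interconnect part that a two-factor test lets through, while a flops-cut part like the H20 stays allowed.

```python
def restricted_two_factor(flops_tflops, interconnect_gbps,
                          flops_cap=1000.0, interconnect_cap=600.0):
    """Original rule as described above: restricted only if strong on BOTH axes,
    compute AND chip-to-chip interconnect. Caps are placeholder values."""
    return flops_tflops > flops_cap and interconnect_gbps > interconnect_cap

def restricted_flops_only(flops_tflops, flops_cap=1000.0):
    """Revised rule: compute alone decides, closing the interconnect loophole."""
    return flops_tflops > flops_cap

# Illustrative, placeholder chip specs:
h100_like = dict(flops_tflops=2000, interconnect_gbps=900)   # strong on both axes
h800_like = dict(flops_tflops=2000, interconnect_gbps=400)   # flops kept, interconnect cut
h20_like  = dict(flops_tflops=300,  interconnect_gbps=900)   # flops cut, interconnect kept

for name, chip in [("H100-like", h100_like), ("H800-like", h800_like), ("H20-like", h20_like)]:
    print(name,
          "two-factor restricted:", restricted_two_factor(**chip),
          "flops-only restricted:", restricted_flops_only(chip["flops_tflops"]))
```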
    1:10:50 Can we take this actual tangent and we’ll return back to the hardware?
    1:10:55 Is the philosophy, the motivation, the case for export controls?
    1:10:56 What is it?
    1:11:00 Dario Amodei just published a blog post about export controls.
    1:11:06 The case he makes is that if AI becomes super powerful and he says by 2026 we’ll have AGI
    1:11:11 or super powerful AI and that’s going to give a significant, whoever builds that will have
    1:11:13 a significant military advantage.
    1:11:22 And so because the United States is a democracy and as he says, China is authoritarian or has
    1:11:29 authoritarian elements, you want a unipolar world where the super powerful military because
    1:11:31 of the AI is one that’s a democracy.
    1:11:38 It’s a much more complicated world geopolitically when you have two superpowers with super powerful
    1:11:41 AI and one is authoritarian.
    1:11:42 So that’s the case he makes.
    1:11:47 And so we want to, the United States wants to use export controls to slow down, to make
    1:11:55 sure that China can't do these gigantic training runs that will presumably be required to
    1:11:57 build AGI.
    1:11:58 This is very abstract.
    1:12:03 I think this is the goal of how some people describe export controls: this super powerful
    1:12:05 AI.
    1:12:08 And you touched on the training run idea.
    1:12:13 There’s not many worlds where China cannot train AI models.
    1:12:18 Export controls are kneecapping the amount of compute or the density of compute that
    1:12:20 China can have.
    1:12:25 And if you think about the AI ecosystem right now as all of these AI companies, revenue
    1:12:30 numbers are up and to the right, the AI usage is just continuing to grow, more GPUs are
    1:12:31 going to inference.
    1:12:37 A large part of export controls, if they work is just that the amount of AI that can be
    1:12:40 run in China is going to be much lower.
    1:12:43 So on the training side, DeepSeek V3 is a great example, where you have a very focused team
    1:12:46 that can still get to the frontier of AI.
    1:12:51 This 2,000 GPUs is not that hard to get, all considering in the world.
    1:12:53 They’re still going to have those GPUs.
    1:12:54 They’re still going to be able to train models.
    1:12:58 But if there's going to be a huge market for AI, and you want to have 100,000 GPUs
    1:13:02 just serving the equivalent of ChatGPT clusters, then good
    1:13:08 export controls also just make it so that AI can be used much less.
    1:13:14 And I think that is a much easier goal to achieve than trying to debate on what AGI
    1:13:15 is.
    1:13:19 And if you have these extremely intelligent autonomous AIs and data centers, those are
    1:13:23 the things that could be running in these GPU clusters in the United States, but not
    1:13:24 in China.
    1:13:27 To some extent, training a model does effectively nothing, right?
    1:13:28 Yeah.
    1:13:29 I have a model.
    1:13:35 The thing that Dario is speaking to is the implementation of that model once trained to
    1:13:41 then create huge economic growth, huge increases in military capabilities, huge capability increases
    1:13:46 in productivity of people, betterment of lives, whatever you want to direct super powerful
    1:13:48 AI towards, you can.
    1:13:51 But that requires a significant amounts of compute, right?
    1:13:56 And so the US government has effectively said, and forever, right, like training will always
    1:13:59 be a portion of the total compute.
    1:14:03 We mentioned Meta's 400,000 GPUs, only 16,000 made Llama, right?
    1:14:08 So the percentage that Meta is dedicating to inference, now this might be for recommendation
    1:14:12 systems that are trying to hack our mind into spending more time and watching more ads.
    1:14:16 Or if it’s for a super powerful AI that’s doing productive things, doesn’t matter about
    1:14:22 the exact use that our economic system decides, it’s that that can be delivered in whatever
    1:14:23 way we want.
    1:14:28 Whereas with China, you have export restrictions. Great, but you're never going to be able to cut
    1:14:29 everything off, right?
    1:14:33 And I think that’s quite well understood by the US government, is that you can’t cut
    1:14:34 everything off.
    1:14:36 And they’ll make their own chips.
    1:14:37 And they’re trying to make their own chips.
    1:14:38 They’ll be worse than ours.
    1:14:41 But the whole point is to just keep a gap, right?
    1:14:46 And therefore, in a world of 2%, 3% economic growth, this
    1:14:51 is really dumb, by the way, to cut off high tech and not make money off of it.
    1:14:55 But in a world where super powerful AI comes about and then starts creating significant
    1:14:59 changes in society, which is what all the AI leaders and big tech companies believe,
    1:15:02 I think super powerful AI is going to change society massively.
    1:15:07 And therefore, this compounding effect of the difference in compute is really important.
    1:15:12 There's some sci-fi out there where AI is measured in, like, how much
    1:15:14 power is delivered to compute, right?
    1:15:18 That's sort of a way of thinking about what the economic output
    1:15:20 is: just how much power are you directing towards that AI?
    1:15:24 Should we talk about reasoning models with this as a way that this might be actionable
    1:15:26 as something that people can actually see?
    1:15:31 So the reasoning models that are coming out with R1 and O1, they’re designed to use
    1:15:32 more compute.
    1:15:37 There’s a lot of buzzy words in the AI community about this, test time compute, inference time
    1:15:38 compute, whatever.
    1:15:40 But Dylan has good research on this.
    1:15:43 You can get to the specific numbers on the ratio of when you train a model, you can look
    1:15:47 at things about the amount of compute used at training and amount of compute used at inference.
    1:15:52 These reasoning models are making inference way more important to doing complex tasks.
    1:15:56 In the fall, in December, OpenAI announced this O3 model.
    1:16:00 There's another thing in AI: when things move fast, we get both announcements and releases.
    1:16:03 Announcements are essentially blog posts where you pat yourself on the back and you say you
    1:16:07 did things, and releases are when the models are out there, the papers are out there, et cetera.
    1:16:13 So open AI has announced O3, and we can check if O3 mini is out as of recording potentially.
    1:16:17 But that doesn’t really change the point, which is that the breakthrough result was something
    1:16:22 called ARC AGI task, which is the abstract reasoning corpus, a task for artificial general
    1:16:23 intelligence.
    1:16:29 François Chollet is the guy who's been running it; it's a multi-year-old paper.
    1:16:30 It’s a brilliant benchmark.
    1:16:36 And the number for open AI O3 to solve this was that it used some sort of number of samples
    1:16:37 in the API.
    1:16:40 The API has like thinking effort and number of samples.
    1:16:47 They used 1,000 samples to solve this task, and it comes out to be like five to $20 per
    1:16:51 question, which you’re putting in effectively a math puzzle, and then it takes orders of
    1:16:53 dollars to answer one question.
    1:16:55 And this is a lot of compute.
    1:16:59 If it’s going to take off in the US, open AI needs a ton of GPUs on inference to capture
    1:17:00 this.
    1:17:04 OpenAI's ChatGPT Pro subscription, which is $200 a month, which Sam said they're losing
    1:17:08 money on, which means that people are burning a lot of GPUs on inference.
    1:17:09 And I’ve signed up with it.
    1:17:10 I’ve played with it.
    1:17:15 I don’t think I’m a power user, but I use it, and it’s like, that is the thing that
    1:17:20 a Chinese company with medium-strength export controls, there will always be loopholes, might
    1:17:21 not be able to do at all.
    1:17:26 And if that, the main result for O3 is also a spectacular coding performance.
    1:17:32 And if that feeds back into AI companies being able to experiment better.
    1:17:38 So presumably the idea is, for an AGI, a much larger fraction of the compute will be used
    1:17:42 for this test-time compute, for the reasoning. The AGI goes into a room and thinks about
    1:17:50 how to take over the world and comes back in 2.7 hours, and it's going to take a lot of
    1:17:51 compute.
    1:17:56 This is what people, CEO or leaders of Open AI and Anthropic talk about is like autonomous
    1:18:00 AI models, which is you give them a task and they work on it in the background.
    1:18:04 My personal definition of AGI is much simpler.
    1:18:09 I think language models are a form of AGI and all of the super powerful stuff is a next
    1:18:13 step that’s great if we get these tools, but a language model has so much value and so
    1:18:14 many domains.
    1:18:16 It is a general intelligence to me.
    1:18:20 But this next step of agentic things where they’re independent and they can do tasks
    1:18:26 that aren’t in the training data is what the few year outlook that these AI companies are
    1:18:27 driving for.
    1:18:32 I think the terminology here that Dario uses is super powerful AI, so I agree with you
    1:18:33 on the AGI.
    1:18:36 I think we already have something like that’s exceptionally impressive.
    1:18:42 That Alan Turing would for sure say is AGI, but he's referring more to something that, once
    1:18:48 in possession of it, you would have a significant military and geopolitical advantage over other
    1:18:49 nations.
    1:18:52 So it’s not just like you can ask it how to cook an omelet.
    1:18:55 And he has a much more positive view, as he says in Machines of Loving Grace.
    1:19:00 I've read into this, but I don't have enough background in physical sciences to gauge exactly
    1:19:07 how confident to be on whether AI can revolutionize biology. I'm safe saying that AI is going
    1:19:10 to accelerate the progress of any computational science.
    1:19:14 So we’re doing a depth-first search here on topics, taking tangent of a tangent.
    1:19:19 So let’s continue on that depth-first search.
    1:19:25 You said that you’re both feeling the AGI, so what’s your timeline?
    1:19:29 Dario is 2026 for the super powerful AI.
    1:19:37 That’s basically agentic to a degree where it’s a real security threat, that level of
    1:19:38 AGI.
    1:19:39 What’s your timeline?
    1:19:43 I don’t like to attribute specific abilities because predicting specific abilities and when
    1:19:44 is very hard.
    1:19:49 I think mostly if you’re going to say that I’m feeling the AGI is that I expect continued
    1:19:51 rapid surprising progress over the next few years.
    1:19:57 So something like R1 is less surprising to me from DeepSeq because I expect there to
    1:20:00 be new paradigms where substantial progress can be made.
    1:20:04 DeepSeq R1 is so unsettling because we’re kind of on this path with chatGPT.
    1:20:05 It’s getting better.
    1:20:06 It’s getting better.
    1:20:07 It’s getting better.
    1:20:10 And then we have a new direction for changing the models and we took one step like this
    1:20:12 and we took a step up.
    1:20:15 So it looks like a really fast slope and then we’re going to just take more steps.
    1:20:19 Like it’s just really unsettling when you have these big steps and I expect that to
    1:20:20 keep happening.
    1:20:25 I've tried OpenAI Operator, I've tried Claude computer use.
    1:20:26 They’re not there yet.
    1:20:31 I understand the idea, but it’s just so hard to predict what is the breakthrough that will
    1:20:35 make something like that work and I think it’s more likely that we have breakthroughs that
    1:20:37 work and things that we don’t know what they’re going to do.
    1:20:43 So like everyone wants agents, Dario has very eloquent way of describing this and I just
    1:20:47 think that there’s going to be more than that so I could just expect these things to
    1:20:48 come.
    1:20:54 I’m going to have to try to pin you down to a date on the AGI timeline.
    1:20:56 The nuclear weapon moment.
    1:21:04 So moment where on the geopolitical stage, there’s a real like, because we’re talking
    1:21:09 about export controls, when do you think, just even a throw out a date, when do you think
    1:21:10 that would be?
    1:21:14 For me, it’s probably after 2030, so I’m not as …
    1:21:15 That’s what I would say.
    1:21:16 So define that, right?
    1:21:18 Because to me, it kind of almost has already happened, right?
    1:21:23 You look at elections in India and Pakistan, people get AI voice calls and think they’re
    1:21:25 talking to the politician, right?
    1:21:28 The AI diffusion rules, which were enacted in the last couple of weeks of the Biden admin
    1:21:34 and looks like the Trump admin will keep and potentially even strengthen, limit cloud computing
    1:21:38 and GPU sales to countries that are not even related to China.
    1:21:43 Portugal and all these normal countries are on the "you need approval from the US" list.
    1:21:48 Yeah, Portugal and all these countries that are allies, right?
    1:21:49 Singapore, right?
    1:21:53 They freaking have F-35s and we don't let them buy GPUs.
    1:21:56 This to me is already to the scale of like, you know …
    1:22:01 Well, that just means that the US military is really nervous about this new technology.
    1:22:06 That doesn’t mean the technology is already there, so they might be just very cautious
    1:22:11 about this thing that they don’t quite understand, but that’s a really good point.
    1:22:18 The robot calls, swarms of semi-intelligent bots could be a weapon, could be doing a lot
    1:22:19 of social engineering.
    1:22:23 I mean, there’s tons of talk about, you know, from the 2016 elections, like Cambridge Analytica
    1:22:25 and all this stuff, Russian influence.
    1:22:29 I mean, every country in the world is pushing stuff onto the internet and has narratives
    1:22:30 they want, right?
    1:22:35 Like that’s every, like technically competent, whether it’s Russia, China, US, Israel, et
    1:22:36 cetera, right?
    1:22:41 They're pushing viewpoints onto the internet en masse, and language models crash the cost
    1:22:43 of very intelligent-sounding language.
    1:22:47 There’s some research that shows that the distribution is actually a limiting factor.
    1:22:55 So language models haven’t yet made misinformation particularly, like, changed the equation there.
    1:22:56 The internet is still ongoing.
    1:23:00 I think there’s a blog, AI Snake Oil and some of my friends at Princeton that write on this
    1:23:01 stuff.
    1:23:02 So there is research.
    1:23:04 It's like, it's a default that everyone assumes, and I would have thought the same thing, but it turns out
    1:23:07 that misinformation doesn’t get far worse with language models.
    1:23:12 I think in terms of internet posts and things that people have been measuring, it hasn’t
    1:23:16 been a exponential increase or something extremely measurable and things you’re talking about
    1:23:18 with like voice calls and stuff like that.
    1:23:22 It could be in modalities that are harder to measure.
    1:23:26 So it's something where it's too soon to tell. I think political
    1:23:34 instability via the web is monitored by a lot of researchers to see what's happening.
    1:23:37 I think that you’re asking about like the AGI thing.
    1:23:42 If you ever make me give a year, I would be like, okay, I have AI CEOs saying this, they’ve
    1:23:44 been saying two years for a while.
    1:23:51 I think that people like Dario, Anthropic's CEO, have thought about this so deeply.
    1:23:56 I need to take their word seriously, but also understand that they have different incentives.
    1:24:00 So I would be like add a few years to that, which is how you get something similar to
    1:24:02 2030 or a little after 2030.
    1:24:07 I think to some extent we have capabilities that hit a certain point where any one person
    1:24:13 could say, okay, if I can leverage those capabilities for X amount of time, this is AGI, call it
    1:24:19 2027 or 2028, but then the cost of actually operating that capability, and this is going to be my point, is
    1:24:24 so extreme that no one can actually deploy it at scale en masse to actually completely
    1:24:27 revolutionize the economy at the snap of a finger.
    1:24:30 So I don’t think it will be like a snap of the finger moment.
    1:24:31 It’s a physical constraint.
    1:24:35 However, it’ll be a, oh, the capabilities are here, but I can’t deploy it everywhere.
    1:24:43 And so one simple example going back to 2023 was when Bing with GPT-4 came out and everyone
    1:24:45 was freaking out about search, right?
    1:24:46 Perplexity came out.
    1:24:50 If you did the cost on implementing GPT-3 into every Google search, it was like, oh, okay,
    1:24:53 this is just physically impossible to implement.
    1:24:59 And as we step forward, going back to the test time compute thing: a query, where you
    1:25:02 ask ChatGPT a question, costs cents, right?
    1:25:05 For their most capable model of chat, right?
    1:25:11 To get a query back. To solve an ARC-AGI problem, though, costs five to 20 bucks, right?
    1:25:14 And it's only going up from there.
    1:25:20 This is a thousand to 10,000x factor difference in cost to respond to a query versus do a task.
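A quick back-of-envelope on that factor, using the rough figures quoted in the conversation; the per-query cost is an assumption ("it costs cents"), so treat the output as an order-of-magnitude check rather than a measurement.

```python
# Rough figures from the conversation; per-query cost is an assumption.
chat_query_cost = (0.002, 0.02)   # dollars per ordinary chat query ("cents")
arc_task_cost = (5.0, 20.0)       # dollars per ARC-AGI style task

low = arc_task_cost[0] / chat_query_cost[1]    # cheapest task vs. priciest query
high = arc_task_cost[1] / chat_query_cost[0]   # priciest task vs. cheapest query
print(f"roughly {low:,.0f}x to {high:,.0f}x more per request")  # roughly 250x to 10,000x
```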
    1:25:26 And the task of Arc AGI is not like it’s like, it’s, it’s simple to some extent, you know,
    1:25:29 but it’s also like, what are the tasks that we want?
    1:25:32 Okay, AGI, quote unquote, what we have today can do Arc AGI.
    1:25:35 Three years from now, it can do much more complicated problems, but the cost is going
    1:25:39 to be measured in thousands and thousands and hundreds of thousands of dollars of GPU
    1:25:44 time, and there just won't be enough power and infrastructure to operate this, and therefore
    1:25:47 shift everything in the world on the snap of the finger.
    1:25:53 But at that moment, who gets to control and point the AGI at a task?
    1:25:57 And so this was in Dario’s post that he’s like, hey, China can effectively and more quickly
    1:26:01 than us point their AGI at military tasks, right?
    1:26:06 And they have been in many ways, faster at adopting certain new technologies into, into
    1:26:07 their military, right?
    1:26:09 Especially with regards to drones, right?
    1:26:14 The US maybe has a longstanding, you know, large air sort of, you know, fighter jet type
    1:26:20 of thing bombers, but when it comes to asymmetric arms such as drones, they’ve, they’ve completely
    1:26:22 leapfrogged the US and the West.
    1:26:27 And the, the fear that Dario is sort of pointing out there, I think, is that, yeah, great.
    1:26:30 We’ll have AGI in the commercial sector.
    1:26:33 The US military won’t be able to implement it super fast.
    1:26:36 Chinese military could and they could direct all their resources to implementing it in
    1:26:41 the military and therefore solving, you know, military logistics or solving some, some other
    1:26:45 aspect of like disinformation for targeted certain set of people so they can flip a country’s
    1:26:50 politics or something like that that is actually like catastrophic versus, you know, the US
    1:26:54 just wants to, you know, because it’ll be more capitalistically allocated just towards
    1:26:58 whatever is the highest return on income, which might be like building, you know, factories
    1:26:59 better or whatever.
    1:27:04 So everything I’ve seen, people’s intuition seems to fail on robotics.
    1:27:06 So you have this kind of general optimism.
    1:27:08 I’ve seen this on self-driving cars.
    1:27:12 People think it’s much easier problem than it is similar with drones.
    1:27:18 Here I understand it a little bit less, but I’ve just seen the reality of the war in Ukraine
    1:27:21 and the usage of drones at both sides.
    1:27:28 And it seems that humans still far outperform any, any fully autonomous systems.
    1:27:35 AI is an assistant, but humans drive FPV drones where the humans controlling most of it just
    1:27:37 far, far, far outperforms AI systems.
    1:27:43 So I think it’s not obvious to me that we’re going to have swarms of autonomous robots
    1:27:46 anytime soon in the military context.
    1:27:53 Maybe the fastest I can imagine is 2030, which is why I said 2030 for the superpower for AI.
    1:27:59 Whenever you have large scale swarms of robots doing military actions, that’s when the world
    1:28:02 just starts to look different to me.
    1:28:04 So that’s the thing I’m really worried about.
    1:28:10 But there could be cyber war, cyber war type of technologies that from social engineering
    1:28:16 to actually just swarms of robots that find attack vectors in our code bases and shut
    1:28:19 down power grids, that kind of stuff.
    1:28:23 And it could be one of those things like on any given weekend or something.
    1:28:24 Power goes out.
    1:28:26 Nobody knows why.
    1:28:27 And the world changes forever.
    1:28:32 Just power going out for two days in all of the United States.
    1:28:35 That will lead to murder, to chaos.
    1:28:39 But going back to export controls.
    1:28:49 Do you see that as a useful way to control the balance of power geopolitically in the
    1:28:50 context of AI?
    1:28:55 And I think going back to my viewpoint is if you believe we’re in this sort of a stage
    1:29:00 of economic growth and change that we've been in for the last 20 years, the export controls
    1:29:05 are absolutely guaranteeing that China will win long term.
    1:29:10 If you do not believe AI is going to make significant changes to society in the next
    1:29:15 10 years or five years, five year timelines are sort of what the more executives and such
    1:29:18 of AI companies and even big tech companies believe.
    1:29:20 But even 10 year timelines, it’s reasonable.
    1:29:29 But once you get to, hey, these timelines are below that time period, then the only
    1:29:35 way to sort of create a sizable advantage or disadvantage for America versus China is
    1:29:42 if you constrain compute because talent is not really something that’s constraining.
    1:29:46 China arguably has more talent, more STEM graduates, more programmers.
    1:29:48 The US can draw upon the world’s people, which it does.
    1:29:51 There’s tons of foreigners in the AI industry.
    1:29:55 So many of these AI teams are all people without a US passport.
    1:30:01 Yeah, I mean, many of them are Chinese people who are moving to America, and that’s great.
    1:30:03 That’s exactly what we want.
    1:30:08 But that talent is one aspect, but I don’t think that’s one that is a measurable advantage
    1:30:09 for the US or not.
    1:30:12 It truly is just whether or not compute.
    1:30:18 Even on the compute side, when we look at chips versus data centers, China has the unprecedented
    1:30:24 ability to build ridiculous sums of power, like clockwork.
    1:30:26 They’re always building more and more power.
    1:30:31 They’ve got steel mills that individually are the size of the entire US industry.
    1:30:36 And they’ve got aluminum mills that consume gigawatts and gigawatts of power.
    1:30:40 And when we talk about what's the biggest data center, OpenAI made this huge thing
    1:30:43 about Stargate, their announcement there.
    1:30:48 That’s like once it’s fully built out in a few years, it’ll be two gigawatts of power.
    1:30:53 And this is still smaller than the largest industrial facilities in China.
    1:30:56 China, if they wanted to build the largest data center in the world, if they had access
    1:30:58 to the chips, could.
    1:31:02 So it's a question of when, not if, right?
    1:31:08 So their industrial capacity far exceeds the United States to manufacture stuff.
    1:31:13 So long term, they’re going to be manufacturing chips there.
    1:31:14 Chips are a little bit more specialized.
    1:31:16 I’m specifically referring to the data centers, right?
    1:31:20 Chips, fabs take huge amounts of power, don’t get me wrong.
    1:31:22 That’s not necessarily the gating factor there.
    1:31:28 The gating factor on how fast people can build the largest clusters today in the US is power.
    1:31:35 It could be power generation, power transmission, substations and all these sorts of transformers
    1:31:40 and all these things, building the data center, these are all constraints on the US industry’s
    1:31:45 ability to build larger and larger training systems as well as deploying more and more
    1:31:46 inference compute.
    1:31:51 I think we need to make the point clear on why the time is now for people that don’t think
    1:31:54 about this because essentially with export controls, you’re making it so China cannot
    1:31:57 make or get cutting edge chips.
    1:32:02 And the idea is that if you time this wrong, China is pouring a ton of money into their
    1:32:03 chip production.
    1:32:07 And if you time it wrong, they are going to have more capacity for production, more capacity
    1:32:11 for energy and figure out how to make the chips and have more capacity than the rest
    1:32:14 of the world to make the chips because everybody can buy, they’re going to sell their Chinese
    1:32:15 chips to everybody.
    1:32:17 They might subsidize them.
    1:32:21 And therefore, if AI takes a long time to become differentiated, we've kneecapped the
    1:32:24 financial performance of American companies.
    1:32:28 NVIDIA can sell less, TSMC cannot sell to China.
    1:32:34 So therefore, we have less demand to like keep driving the production cycle.
    1:32:37 So that’s the assumption behind the timing being important.
    1:32:40 Less than 10 years or five years to above, right?
    1:32:45 China will win because of these restrictions long-term unless AI does something in the
    1:32:52 short-term, which I believe AI will do, make massive changes to society in the medium short-term.
    1:32:55 And so that’s the big unlocker there.
    1:33:03 And even today, if Xi Jinping decided to get "scale-pilled," i.e., decide that scaling
    1:33:09 laws are what matter, just like the US executives like Satya Nadella and Mark Zuckerberg and
    1:33:14 Sundar and all these US executives of the biggest, most powerful tech companies have
    1:33:18 decided they’re “scale-pilled” and they’re building multi-gigawatt data centers, right?
    1:33:22 Whether it’s in Texas or Louisiana or Wisconsin, wherever it is, they’re building these massive
    1:33:28 things that cost as much as their entire budget for spending on data centers globally in one
    1:33:29 spot, right?
    1:33:32 This is what they’ve committed to for next year, year after, et cetera.
    1:33:37 And so they’re so convinced that this is the way, that this is what they’re doing.
    1:33:42 But if China decided to, they could do it faster than us, but this is where the restrictions
    1:33:43 come in.
    1:33:48 It’s not clear that China, as a whole, has decided from the highest levels that this
    1:33:49 is a priority.
    1:33:50 The US sort of has, right?
    1:33:55 You see Trump talking about DeepSeek and Stargate within the same week, right?
    1:33:59 So he is, and the Biden admin as well had a lot of discussions about AI and such.
    1:34:01 It’s clear that they think about it.
    1:34:06 Only just last week did DeepSeek meet the second-in-command of China, right?
    1:34:09 Like they have not even met the top, and they haven’t met Xi.
    1:34:17 Xi hasn’t sat down, and they only just released a subsidy of a trillion RMB, roughly $160 billion,
    1:34:23 which is closer to the spending of Microsoft and Meta and Google combined for this year.
    1:34:28 So it’s like, they’re realizing it just now, but that’s where these export restrictions
    1:34:33 come in and say, “Hey, you can’t ship the most powerful US chips to China.
    1:34:35 You can ship a cut-down version.
    1:34:39 You can’t ship the most powerful chips to all these countries who we know we’re just
    1:34:41 going to rent it to China.
    1:34:42 You have to limit the numbers, right?”
    1:34:43 And the tools.
    1:34:48 And same with manufacturing of equipment, tools, all these different aspects.
    1:34:52 But it all stems from AI, and then what downstream can slow them down in AI?
    1:34:56 And so the entire semiconductor restrictions, you read them, they are very clear.
    1:35:01 It’s about AI and military civil fusion of technology, right?
    1:35:02 It’s very clear.
    1:35:04 And then from there, it goes, “Oh, well, we’re banning them from buying like lithography
    1:35:10 tools and etch tools and deposition tools, and oh, this random subsystem from a random
    1:35:12 company that’s like tiny, right?”
    1:35:13 Like why are we banning this?
    1:35:17 Because all of it, the US government has decided is critical to AI systems.
    1:35:22 I think the fulcrum point is like the transition from seven nanometer to five nanometer chips,
    1:35:27 where I think it was Huawei that had the seven nanometer chip a few years ago, which caused
    1:35:31 another political brouhaha, almost like this moment.
    1:35:35 And then it’s like ASML, deep UV, what is that?
    1:35:37 Extreme ultraviolet lithography.
    1:35:42 To set context on the chips, what Nathan’s referring to is in 2020, Huawei released their
    1:35:48 Ascend 910 chip, which was an AI chip, first one on seven nanometer before Google did,
    1:35:49 before NVIDIA did.
    1:35:54 And they submitted it to the MLPerf benchmark, which is sort of an industry standard for machine
    1:35:56 learning performance benchmark.
    1:35:57 And it did quite well.
    1:36:00 And it was the best chip at the submission, right?
    1:36:02 This was a huge deal.
    1:36:09 The Trump admin, of course, banned Huawei from getting seven nanometer chips from TSMC.
    1:36:13 And so then they had to switch to using internal domestically produced chips, which was a multi-year
    1:36:14 setback.
    1:36:16 Many companies have done seven nanometer chips.
    1:36:21 And the question is, we don’t know how much Huawei was subsidizing production of that
    1:36:22 chip.
    1:36:25 Intel has made seven nanometer chips that are not profitable and things like this.
    1:36:30 So this is how all feeds back into the economic engine of export controls.
    1:36:36 Well, so you’re saying that for now Xi Jinping has not felt the AGI, but it feels like the
    1:36:42 deep-seek moment might, like, there might be meetings going on now where he’s going
    1:36:46 to start wearing the same t-shirt and things are going to escalate.
    1:36:49 I mean, like this, he may have woken up last week, right?
    1:36:54 Liang Wenfeng met the vice chair, the second-in-command guy, and they had a meeting.
    1:36:59 And then the next day, they announced the AI subsidies, which are trillion RMB, right?
    1:37:04 So it’s possible that this deep-seek moment is truly the beginning of a cold war.
    1:37:06 That’s what a lot of people are worried about.
    1:37:10 People in AI have been worried that this is going towards a cold war or already is.
    1:37:15 But it’s not deep-seek’s fault, but there’s something, a bunch of factors came together
    1:37:19 where it was like this explosion, I mean, it all has to do with NVIDIA stock going down.
    1:37:27 It’s just some mass hysteria that happened that eventually led to Xi Jinping having meetings
    1:37:29 and waking up to this idea.
    1:37:35 And the US government realized this on October 7th, 2022, before ChatGPT was released, when that restriction
    1:37:38 on October 7th dropped and shocked everyone.
    1:37:40 And it was very clearly aimed at AI.
    1:37:42 Everyone was like, “What the heck are you doing?”
    1:37:44 Stable diffusion was out then, but not ChatGPT.
    1:37:45 Yeah, but not ChatGPT.
    1:37:50 There were starting to be rumblings of what gen AI could do to society.
    1:37:54 But it was very clear, I think, to at least National Security Council and those sort of
    1:37:59 folks that this was where the world is headed, this cold war that’s happening.
    1:38:10 So is there any concerns that the export controls push China to take military action in Taiwan?
    1:38:11 This is the big risk, right?
    1:38:16 The further you push China away from having access to cutting-edge American and global
    1:38:20 technologies, the more likely they are to say, “Well, because I can’t access it, I might
    1:38:21 as well…”
    1:38:23 No one should access it, right?
    1:38:26 And there’s a few interesting aspects of that, right?
    1:38:30 China has a urban-rural divide, like no other.
    1:38:36 They have a male-female birth ratio, like no other, to the point where, if you look at
    1:38:38 most of China, it’s like the ratio is not that bad, but when you look at single dudes
    1:38:42 in rural China, it’s like a 30-to-1 ratio.
    1:38:43 And those are disenfranchised dudes, right?
    1:38:48 Like, quote-unquote, the US has an incel problem; China does, too.
    1:38:51 It’s just that they’re placated in some way, or crushed down.
    1:38:52 What do you do with these people?
    1:38:55 And at the same time, you’re not allowed to access the most important technology, at
    1:38:57 least the US thinks so.
    1:39:00 China is maybe starting to think this is the most important technology by starting to dump
    1:39:01 subsidies in it, right?
    1:39:04 They thought EVs and renewables were the most important technology.
    1:39:05 They dominate that now, right?
    1:39:12 And now, they started thinking about semiconductors in the late 2010s and early 2020s, and now
    1:39:16 they’ve been dumping money and they’re catching up rapidly, and they’re going to do the same
    1:39:19 with AI because they’re very talented, right?
    1:39:27 So the question is, when does this hit a breaking point, right?
    1:39:32 And if China sees this as, hey, they can continue, if not having access and starting
    1:39:37 a true hot war, right, taking over Taiwan or trying to subvert its democracy in some way
    1:39:42 or blockading it, hurts the rest of the world far more than it hurts them, this is something
    1:39:45 they could potentially do, right?
    1:39:48 And so is this pushing them towards that, potentially, right?
    1:39:55 I’m not quite a geopolitical person, but it’s obvious that the world regime of peace and trade
    1:40:01 is super awesome for economics, but at some point, it could break, right?
    1:40:05 I think we should comment on why the Chinese economy would be hurt by that: they’re
    1:40:06 export heavy.
    1:40:10 The United States buys so much from them, and if that goes away, that’s a big hit to their
    1:40:11 economy.
    1:40:16 Also, they just would not be able to import raw materials from all over the world, right?
    1:40:21 The U.S. would just shut down trade through the Strait of Malacca, and at the same time,
    1:40:27 you could argue almost all the GDP growth in America since the ’70s has been either population
    1:40:30 growth or tech, right?
    1:40:35 Because your life today is not that much better than someone from the ’80s outside of tech,
    1:40:36 right?
    1:40:40 You still, you know, cars, they all have semiconductors in them everywhere, fridges, semiconductors
    1:40:41 everywhere.
    1:40:44 There’s these funny stories about how Russians were taking apart laundry machines because
    1:40:48 they had certain, like, Texas Instruments chips that they could then repurpose and put into
    1:40:51 like their anti-missile things, right?
    1:40:57 Like their S-400 or whatever, you would know more about this, but there’s all sorts of like
    1:41:00 everything about semiconductors is so integral to every part of our lives.
    1:41:07 So can you explain the role of TSMC in the story of semiconductors and maybe also how
    1:41:11 the United States can break the reliance on TSMC?
    1:41:13 I don’t think it’s necessarily breaking the reliance.
    1:41:21 I think it’s getting TSMC to, you know, build in the U.S., but so taking a step back, right?
    1:41:25 TSMC produces most of the world’s chips, right?
    1:41:28 Especially on the foundry side, you know, there’s a lot of companies that build their
    1:41:35 own chips, Samsung, Intel, you know, ST Micro, Texas Instruments, you know, Analog Devices,
    1:41:40 NXP, all these kinds of companies build their own chips, but more and more of these companies
    1:41:44 are outsourcing to TSMC and have been for multiple decades.
    1:41:49 Can you explain the supply chain there and where most of TSMC is in terms of manufacturing?
    1:41:50 Sure.
    1:41:54 So, historically, supply chain was companies would build their own chips, they would, you
    1:41:57 know, be a company started, they’d build their own chips, and then they’d design the
    1:42:00 chip and build the chip and sell it.
    1:42:05 Over time, this became really difficult because the cost of building a fab continues to compound
    1:42:06 every single generation.
    1:42:10 Of course, the technology, figuring out the technology for it is incredibly difficult,
    1:42:14 regardless, but just the dollars and cents that are required, ignoring, you know, saying,
    1:42:17 “Hey, yes, I have all the technical capability,” which it’s really hard to get that, by the
    1:42:18 way, right?
    1:42:20 “I have all the technical capability,” some things failing, et cetera.
    1:42:24 But if you look at just the dollars to spend to build that next generation fab, it keeps
    1:42:25 growing, right?
    1:42:28 Sort of like, you know, Moore’s Law is halving the cost of chips every two years.
    1:42:32 There’s a separate law that’s sort of like doubling the cost of fabs every handful of
    1:42:33 years.
    1:42:36 And so, you look at a leading edge fab that is going to be profitable today that’s building,
    1:42:39 you know, three nanometer chips or two nanometer chips in the future.
    1:42:43 That’s going to cost north of $30, $40 billion, right?
    1:42:45 And that’s just for, like, a token amount.
    1:42:47 That’s like the base building block.
    1:42:48 You probably need to build multiple, right?
    1:42:53 And so, when you look at the industry over the last, you know, if I go back 20, 30 years
    1:42:57 ago, there were 20, 30 companies that could build the most advanced chips, and then they
    1:42:59 would design them themselves and sell them, right?
    1:43:01 So, companies like AMD would build their own chips.
    1:43:03 Intel, of course, still builds their own chips, which they’re very famous for.
    1:43:07 IBM would build their own chips, and, you know, you could just keep going down the list.
    1:43:09 All these companies built their own chips.
    1:43:13 Slowly they kept falling like flies, and that’s because of what TSMC did, right?
    1:43:17 They created the Foundry business model, which is, I’m not going to design any chips.
    1:43:22 I’m just going to contract manufacturer chips for other people, and one of their early customers
    1:43:23 is NVIDIA, right?
    1:43:28 NVIDIA is the only semiconductor company, you know, that’s doing more
    1:43:33 than a billion dollars of revenue that was started in the era of the foundry model, right?
    1:43:36 Every other company started before then, and at some point had fabs, which is actually
    1:43:37 incredible, right?
    1:43:41 You know, like AMD and Intel and Broadcom, throughout the industry.
    1:43:45 It’s like everyone had fabs at some point, or, you know, some companies
    1:43:46 like Broadcom.
    1:43:50 It was like a merger, an amalgamation of various companies that rolled up, but even today Broadcom
    1:43:51 has fabs, right?
    1:43:57 They built iPhone RF radio chips sort of in Colorado for, you know, for Apple, right?
    1:44:00 Like all these companies had fabs, and for most of the fabs, they threw
    1:44:05 them away or sold them off, or they got rolled into something else, and now everyone relies
    1:44:06 on TSMC, right?
    1:44:10 Including Intel, their latest PC chip uses TSMC chips, right?
    1:44:13 It also uses some Intel chips, but it uses TSMC process.
    1:44:17 Can you explain why the Foundry model is so successful for these companies?
    1:44:19 Why, why are they going with this?
    1:44:20 Economies of scale.
    1:44:21 Scale.
    1:44:22 Yeah.
    1:44:24 So, I mean, like, like I mentioned, right, the cost of building a FAP is so high.
    1:44:30 The R&D is so difficult, and when you look at like these, like companies that had their
    1:44:35 own vertical stack, there was an antiquated process of like, okay, like I’m so hyper-customized
    1:44:37 to each specific chip, right?
    1:44:40 But as we’ve gone through the history of sort of like the last 50 years of electronics and
    1:44:44 semiconductors, A, you need more and more specialization, right?
    1:44:46 Because Moore’s Law has died.
    1:44:47 Dennard scaling has died.
    1:44:49 I.e. chips are not getting better just for free, right?
    1:44:53 You know, from manufacturing, you have to make real architectural innovations, right?
    1:44:56 Google is not just running on Intel CPUs for web-serving.
    1:44:57 They have a YouTube chip.
    1:44:58 They have TPUs.
    1:44:59 They have Pixel chips.
    1:45:04 They have a wide diversity of chips that, you know, generate all the economic value
    1:45:05 of Google, right?
    1:45:07 You know, it’s running all the services and stuff.
    1:45:10 And so, and this is just Google, and you could go across any company in the industry, and
    1:45:11 it’s like this, right?
    1:45:15 Cars contain 5,000 chips, you know, 200 different varieties of them, right?
    1:45:16 All these random things.
    1:45:18 A Tesla door handle has two chips, right?
    1:45:19 Like it’s like ridiculous.
    1:45:20 And it’s a cool door handle, right?
    1:45:23 It’s like, you know, you don’t think about it, but it’s like it has two really cheap,
    1:45:26 like, penny chips in there, right?
    1:45:30 Anyway, so as you have more diversity of chips, as you have more specialization required and
    1:45:35 as the cost of fabs continues to grow, you need someone who is laser focused on building
    1:45:40 the best process technology and making it as flexible as possible.
    1:45:44 I think you could say it simply, which is the cost per fab goes up.
    1:45:48 And if you are a small player that makes a few types of chips, you’re not going to have
    1:45:53 the demand to pay back the cost of the fab, whereas TSMC can have many different customers
    1:45:58 and aggregate all this demand into one place, and then they’re the only person that makes
    1:46:03 enough money building chips to buy the next, to build the next fab.
    1:46:07 So this is kind of why they, the companies slowly get killed because they have a, they
    1:46:11 have 10 years ago a chip that is profitable and is good enough, but the cost to build
    1:46:12 the next one goes up.
    1:46:16 They may try to do this, fail because they don’t have the money to make it work.
    1:46:19 And then they don’t have any chips or they build it and it’s too expensive and they just
    1:46:20 are not profitable.
    1:46:22 You know, there’s more failure points, right?
    1:46:27 You know, you could have one little process related to like some sort of like a chemical
    1:46:31 etch or some sort of like plasma etch or you know, some little process that screws up.
    1:46:33 You didn’t engineer it, right?
    1:46:34 And now the whole company falls apart.
    1:46:35 You can’t make chips, right?
    1:46:40 And so super, super powerful companies like Intel, they were able to weather the storm to
    1:46:44 like, hey, they still exist today, even though they really screwed up their manufacturing
    1:46:45 six, seven years ago.
    1:46:47 But in the case of like AMD, they almost went bankrupt.
    1:46:52 They had to sell their fabs to Mubadala UAE, right?
    1:46:56 And like that became a separate company called Global Foundries, which is a foundry firm.
    1:46:59 And then AMD was able to focus, and the return back up was like, hey, let’s
    1:47:05 focus on making chiplets and a bunch of different chips for different markets and focusing on
    1:47:09 specific workloads rather than, you know, all of these different things.
    1:47:10 And so you get more diversity of chips.
    1:47:14 You have more companies than ever designing chips, but you have fewer companies than ever
    1:47:16 manufacturing them, right?
    1:47:20 And this is, this is where TSMC comes in as they’ve, they’ve just been the best, right?
    1:47:22 They are so good at it, right?
    1:47:23 They’re customer focused.
    1:47:25 They make it easy for you to fabricate your chips.
    1:47:28 They take all of that complexity and like kind of try and abstract a lot of it away from
    1:47:29 you.
    1:47:30 They make good money.
    1:47:35 They don’t make insane money, but they make good money and, and they’re able to aggregate
    1:47:38 all this demand and continue to build the next fab, the next fab, the next fab.
    1:47:41 So why is Taiwan so special for TSMC?
    1:47:43 Why is it happening there?
    1:47:45 Can it be replicated inside the United States?
    1:47:46 Yeah.
    1:47:50 So there’s, there’s aspects of it that I would say yes and aspects that I’d say no,
    1:47:51 right?
    1:47:58 TSMC is way ahead because former executive Morris Chang of Texas Instruments wasn’t promoted
    1:48:02 to CEO and he’s like, screw this, I’m going to go make a, my own chip company, right?
    1:48:03 And he went to Taiwan and made TSMC, right?
    1:48:06 And there’s, there’s a whole lot more story there.
    1:48:09 So Texas Instruments could have been the, you know, it could have been TSMC,
    1:48:11 but Texas Semiconductor Manufacturing, right?
    1:48:14 Instead of, you know, Texas Instruments, right?
    1:48:17 But, you know, so there is that whole story there, but they’re sitting here in Texas.
    1:48:19 I mean, and that sounds like a human story.
    1:48:20 Like he didn’t get promoted.
    1:48:24 And just the brilliance of Morris Chang, you know, which I wouldn’t underplay, but there’s
    1:48:28 also like a different level of like how, how this works, right?
    1:48:35 So in Taiwan, you know, the top percent of graduates, of students that go
    1:48:40 to the best school, which is NTU, the top percent of those all go work at TSMC, right?
    1:48:41 And guess what their pay is?
    1:48:45 Their starting pay is like $80,000, $70,000, right?
    1:48:49 Which is like, that’s like starting pay for like a good graduate in the U.S., right?
    1:48:53 Not the top, the top graduates are making hundreds of thousands of dollars at the Googles
    1:48:57 and the Amazons, and now I guess the open AIs of the world, right?
    1:49:01 So there is, there is a large dichotomy of like what is the top one percent of the society
    1:49:04 doing and where are they headed because of economic reasons, right?
    1:49:06 Intel never paid that crazy good, right?
    1:49:08 And it didn’t make sense to them, right?
    1:49:09 That’s one aspect, right?
    1:49:10 Where is the best going?
    1:49:11 Second is the work ethic, right?
    1:49:16 Like, you know, we like to work, you know, you work a lot, we work a lot, but at the
    1:49:21 end of the day, when there’s an, you know, when, what is the time and amount of work
    1:49:23 that you’re doing and what does a fab require, right?
    1:49:25 Fabs are not work-from-home jobs.
    1:49:28 You go into the fab, and it’s grueling work, right?
    1:49:34 There’s, hey, if there is any amount of vibration, right, an earthquake happens, vibrates the
    1:49:39 machines, they’re all, you know, they’re either broken, you’ve scrapped some of your production,
    1:49:42 and then in many cases, they’re like not calibrated properly.
    1:49:45 So when TSMC, when there’s an earthquake, right, recently there’s been an earthquake,
    1:49:50 TSMC doesn’t call their employees, they just, they just go to the fab, and like, they just
    1:49:55 show up, the parking lot gets slammed, and people just go into the fab and fix it, right?
    1:49:57 Like it’s like an army, it’s like ants, right?
    1:50:01 Like it’s like, you know, a hive of ants doesn’t get told by the queen what to do, the ants
    1:50:02 just know.
    1:50:06 It’s like one person just specializes on these one task, and it’s like, you’re gonna take
    1:50:09 this one tool, and you’re the best person in the world, and this is what you’re gonna
    1:50:11 do for your whole life is this one task in the fab.
    1:50:16 Which is like some special chemistry plus nano manufacturing on one line of tools that
    1:50:20 continues to get iterated, and yeah, it’s just like, it’s like a specific plasma etch
    1:50:22 for removing silicon dioxide, right?
    1:50:26 That’s all you focus on your whole career, and it’s like such a specialized thing.
    1:50:30 And so it’s not like the tasks are transferable. AI today is awesome because, like, people can
    1:50:32 pick it up like that.
    1:50:36 Semiconductor manufacturing is very antiquated and difficult, none of the materials are online
    1:50:39 for people to read easily and learn, right?
    1:50:43 The papers are very dense, and like it takes a lot of experience to learn.
    1:50:47 And so it makes the barrier to entry much higher too.
    1:50:50 So when you talk about, hey, you have all these people that are super specialized, they
    1:50:55 will work, you know, 80 hours a week in a factory, right, in a fab.
    1:50:59 And if anything goes wrong, they’ll go show up in the middle of the night because some
    1:51:01 earthquake, their wife is like, there’s an earthquake.
    1:51:05 He’s like, great, I’m gonna go to the fab, it’s like, would you, like as an American
    1:51:06 do that, right?
    1:51:11 These kinds of things are, you know, I guess, what exemplify why TSMC
    1:51:12 is so amazing.
    1:51:14 Now, can you replicate it in the U.S.?
    1:51:18 Let’s not ignore Intel was the leader in manufacturing for over 20 years.
    1:51:23 They brought every technology to market first, besides EUV: strained silicon, high-K metal
    1:51:28 gates, FinFET, you know, the list goes on and on and on of technologies that Intel brought
    1:51:36 to market first, made the most money from, and manufactured at scale, first, best, highest
    1:51:37 profit margins, right?
    1:51:40 So it’s not that Intel can’t do this, right?
    1:51:43 It’s that the culture has broken, right?
    1:51:44 You’ve invested in the wrong things.
    1:51:46 They said no to the iPhone.
    1:51:50 They had all these different things regarding like, you know, mismanagement of the fabs,
    1:51:53 mismanagement of designs, this lockup, right?
    1:51:57 And at the same time, all these brilliant people, right, these like 50,000 PhDs, you
    1:52:02 know, or masters that have been working on specific chemical or physical processes or
    1:52:05 nanomanufacturing processes for decades in Oregon, they’re still there.
    1:52:07 They’re still producing amazing work.
    1:52:11 It’s just like getting it to the last mile of production at high yield where you can
    1:52:17 manufacture dozens and hundreds of different kinds of chips, you know, and it’s good customer
    1:52:18 experience has broken, right?
    1:52:19 You know, it’s that customer experience.
    1:52:23 It’s like the, like part of it is like people will say Intel was too pompous in the 2000s,
    1:52:24 2010s, right?
    1:52:26 They just thought they were better than everyone.
    1:52:29 The tool guys were like, oh, I don’t think that this is mature enough.
    1:52:30 They’re like, oh, you just don’t know.
    1:52:31 We know, right?
    1:52:32 This sort of stuff would happen.
    1:52:38 And so can the U.S. bring it to the, can the U.S. bring leading edge semiconductor manufacturing
    1:52:39 to the U.S.?
    1:52:40 Emphatically, yes, right?
    1:52:41 And we are, right?
    1:52:42 It’s happening.
    1:52:44 Arizona is getting better and better as time goes on.
    1:52:51 TSMC has built, you know, roughly 20% of their capacity for five nanometer in the U.S., right?
    1:52:54 Now this is nowhere near enough, right?
    1:52:57 You know, 20% of capacity in the U.S. is like nothing, right?
    1:53:00 And furthermore, this is still dependent on Taiwan existing, right?
    1:53:02 All, there’s sort of important way to separate it out.
    1:53:06 There’s R&D and there’s high volume manufacturing.
    1:53:11 There are, effectively, there are three places in the world that are doing leading edge R&D.
    1:53:13 There’s Hsinchu, Taiwan.
    1:53:14 There’s Hillsboro, Oregon.
    1:53:18 And there is Pyeongtaek, South Korea, right?
    1:53:22 These three places are doing the leading edge R&D for the rest of the world’s leading edge
    1:53:24 semiconductors, right?
    1:53:29 Now manufacturing can be distributed more globally, right?
    1:53:34 And this is sort of where this dichotomy exists of like who’s actually modifying the process,
    1:53:40 who’s actually developing the next generation one, who’s improving them, it’s Hsinchu, it’s Hillsboro,
    1:53:41 it’s Pyeongtaek, right?
    1:53:45 It is not the rest of these, you know, fabs like Arizona, right?
    1:53:46 Arizona is a paperweight.
    1:53:53 If Hsinchu disappeared off the face of the planet, you know, within a year, a couple years, Arizona
    1:53:54 would stop producing too, right?
    1:53:56 It’s actually like pretty critical.
    1:54:00 One of the things I like to say is if I had like a few missiles, I know exactly where
    1:54:01 I could cause the most economic damage, right?
    1:54:03 It’s not targeting the White House, right?
    1:54:04 It’s the R&D centers.
    1:54:08 It’s the R&D centers for TSMC, Intel, Samsung, and then some of the memory guys, Micron and
    1:54:09 Hynix.
    1:54:12 Because they define the future evolution of these semiconductors and everything’s moving
    1:54:21 so rapidly that it really is fundamentally about R&D, and it is all about TSMC, huh?
    1:54:27 And so TSMC, you know, you cannot purchase a vehicle without TSMC chips, right?
    1:54:31 You cannot purchase a fridge without TSMC chips.
    1:54:36 Like, I think one of the few things you can purchase, ironically, is a Texas Instruments
    1:54:37 like graphing calculator, right?
    1:54:39 Because they actually manufacture in Texas.
    1:54:44 But like, outside of that, like a laptop, a phone, anything, servers, right, GPUs, none
    1:54:48 of this stuff can exist, and this is without TSMC, and in many cases, it’s not even like
    1:54:52 the leading edge, you know, sexy 5-nanometer chip, 3-nanometer chip, 2-nanometer chip.
    1:54:57 Oftentimes, it’s just like some stupid power IC that’s like converting from like, you know,
    1:54:58 some voltage to another, right?
    1:54:59 And it’s made at TSMC, right?
    1:55:00 This is what China is investing in as well.
    1:55:04 It’s like, they can build out this long tail fab where the techniques are much more known.
    1:55:07 You don’t have to figure out these problems with the EUV.
    1:55:12 They’re investing in this, and then they have large supply for things like the car door
    1:55:14 handles and the random stuff.
    1:55:20 And that trickles down into this whole economic discussion as well, which is they have far
    1:55:23 more than we do, and having supply for things like this is crucial to normal life.
    1:55:27 So they’re doing, they’re starting to invest in high-volume manufacture, but they’re not
    1:55:28 doing R&D.
    1:55:32 So they do R&D on their own, they’re just way behind, right?
    1:55:40 So I would say like, in 2015, China had a five-year plan where they defined by 2025 and 2020 certain
    1:55:45 goals, including like 80% domestic production of semiconductors.
    1:55:46 They’re not going to hit that, right, to be clear.
    1:55:49 But they are in certain areas really, really close, right?
    1:55:55 Like BYD is probably going to be the first company in the world to not have to use TSMC
    1:55:58 for making chips, because they have their own fabs, right?
    1:56:04 Now they still have to buy some chips from foreign suppliers, for example, like around self-driving
    1:56:06 ADAS capabilities, because those are really high-end.
    1:56:11 But at least, like, an internal combustion engine has 40 chips, you know, just
    1:56:14 for like controlling like flow rates and all these things, and EVs are even more complicated.
    1:56:19 So all these different power ICs and battery management controllers and all these things,
    1:56:21 they’re insourcing, right?
    1:56:25 And this is something that like China has been doing since 2015.
    1:56:29 Now as far as like the trailing edge, they’re getting so much capacity there.
    1:56:33 As far as the leading edge, right, i.e. this five nanometer and so on and so forth, right,
    1:56:35 where GPUs, they are still behind.
    1:56:39 And this is, the U.S. restrictions are trying to stop them in the latter.
    1:56:43 But you know, all that’s happened, you know, is, yes, they’ve slowed down their five nanometer,
    1:56:48 three nanometer, et cetera, but they’ve accelerated their, hey, 45 nanometer, 90 nanometer power
    1:56:54 IC or analog IC or, you know, random chip in my keyboard, right, that kind of stuff.
    1:56:59 So there is an angle of, like, the U.S.’s actions, you know, from
    1:57:04 the angle of the export controls, have been so inflammatory at slowing down China’s progress
    1:57:08 on the leading edge that they’ve turned around and have accelerated their progress elsewhere
    1:57:12 because they know that this is so important, right, if the U.S. is going to lock them out
    1:57:15 here, what if they lock us out here as well in the trailing edge.
    1:57:18 And so going back, can the U.S. build it here?
    1:57:20 Yes, but it’s going to take a ton of money.
    1:57:26 I truly think like to revolutionize and completely insource semiconductors would take a decade
    1:57:27 and a trillion dollars.
    1:57:32 Is some of it also culture, like you said, extreme competence, extreme work ethic in
    1:57:33 Taiwan?
    1:57:37 You have the demand and the money is on the line, the American companies figure it out.
    1:57:42 It’s going to take handholding with the government, but I think that the culture helps TSMC break
    1:57:44 through and it’s easier for them.
    1:57:47 TSMC has some like 90,000 employees, right?
    1:57:49 It’s not actually that insane amount.
    1:57:52 The Arizona fab has 3,000 from Taiwan.
    1:57:55 And these people, like their wives were like, yeah, we’re not going to have kids unless
    1:57:59 you sign up for the Arizona fab, we go to Arizona, and we have our kids there.
    1:58:01 There’s also a Japan fab where the same thing happened, right?
    1:58:06 And so like these wives drove like these dudes to like go to Japan or America to have the
    1:58:07 kids there.
    1:58:09 And it’s like, it’s an element of culture.
    1:58:10 Yeah, sure.
    1:58:14 Taiwan works that hard, but also like the US has done in the past, they could do it now,
    1:58:15 right?
    1:58:20 You know, we can just import, I say import, the best people in the world if we want to.
    1:58:22 That’s where the immigration conversation is a tricky one.
    1:58:27 And there’s been a lot of debate over that, but yeah, it seems absurdly controversial to
    1:58:28 import the best people in the world.
    1:58:31 I don’t understand why it’s controversial.
    1:58:32 That’s the one of the ways of winning.
    1:58:33 I’m sure we agree with you.
    1:58:38 And like even if you can’t import those people, I still think you could do a lot to manufacture
    1:58:40 most of them in the US if the money’s there, right?
    1:58:41 And so like…
    1:58:42 It’s just way more expensive.
    1:58:44 It’s not profitable for a long time.
    1:58:49 And that’s the context of like the CHIPS Act is only like $50 billion relative to some
    1:58:54 of the renewable initiatives that were passed in the Inflation Reduction Act and the Infrastructure
    1:58:57 Act, which total in the hundreds of billions of dollars, right?
    1:59:02 And so the amount of money that the US is spending on the semiconductor industry is nothing,
    1:59:03 right?
    1:59:07 Whereas all these other countries have structural advantages in terms of like work ethic and
    1:59:12 amount of work and things like that, but also a number of STEM graduates, the percentile
    1:59:14 of their best going to that, right?
    1:59:19 But they also have differences in terms of like, “Hey, there’s just tax benefits in the
    1:59:22 law and have been in the law for 20 years,” right?
    1:59:25 And then some countries have massive subsidies, right?
    1:59:29 China has something like $200 billion of semiconductor subsidies a year.
    1:59:33 We’re talking about $50 billion in the US over like six years, right?
    1:59:38 So the gap, or difference, in the subsidy amounts is also huge, right?
    1:59:43 And so I think Trump has been talking about tariffing Taiwan recently.
    1:59:48 That’s sort of like one of these things that’s like, “Oh, okay, well, maybe he doesn’t want
    1:59:50 to subsidize the semiconductor industry.”
    1:59:54 Obviously, tariffing Taiwan is going to cost a lot of things to go get much more expensive,
    1:59:57 but does it change the equation for TSMC building more fabs in the US?
    1:59:59 That’s what he’s sort of positing, right?
    2:00:06 So can you lay out the importance, by the way, it’s incredible how much you know about
    2:00:07 so much.
    2:00:10 We told you Dylan knows all this stuff.
    2:00:11 Yeah.
    2:00:15 So, okay, you laid out why TSMC is really important.
    2:00:22 If we look out into the future, 10, 20 years out, US-China relationship seems like it can
    2:00:32 go to a dark place of Cold War, escalated Cold War, even hot war, or to a good place
    2:00:39 of anything from frenemies to cooperation to working together.
    2:00:46 So in this game theory, complicated game, what are the different trajectories?
    2:00:47 What should US be doing?
    2:00:52 Like what do you see as the different possible trajectories of US-China relations as both
    2:00:57 leaders start to feel the AGI more and more and see the importance of chips and the importance
    2:00:58 of AI?
    2:01:04 I mean, ultimately, the export controls are pointing towards a separate future economy.
    2:01:11 I think the US has made it clear to Chinese leaders that we intend to control this technology
    2:01:17 at whatever cost to global economic integration.
    2:01:18 So that…
    2:01:19 It’s hard to unwind that.
    2:01:20 Like the…
    2:01:21 To the same extent…
    2:01:24 To the same extent, they’ve also limited US companies from entering China.
    2:01:27 So it has been a long time coming.
    2:01:34 At some point, there was a convergence, but over at least the last decade, it’s been branching
    2:01:37 further and further out, like US companies can’t enter China, Chinese companies can’t
    2:01:43 enter the US, the US is saying, “Hey, China, you can’t get access to our technologies in
    2:01:48 certain areas,” and China’s rebutting with the same thing, like they’ve done some
    2:01:52 sort of restrictions on specific materials like gallium and things like that, that they’ve tried to limit
    2:01:53 the US on.
    2:01:54 One of the…
    2:01:58 There’s a US drone company that’s not allowed to buy batteries, and they have military customers,
    2:02:02 and this drone company just tells the military customers, like, “Hey, just get it from Amazon
    2:02:04 because I can’t actually physically get them,” right?
    2:02:08 There’s all these things that are happening that point to further and further divergence.
    2:02:13 I have zero idea, and I would love if we could all hold hands and sing Kumbaya, but I have
    2:02:15 zero idea how that could possibly happen.
    2:02:20 Is the divergence good or bad for avoiding war?
    2:02:26 Is it possible that the divergence in terms of manufactured chips of training AI systems
    2:02:29 is actually good for avoiding military conflict?
    2:02:34 It’s an objective fact that the world has been the most peaceful it has ever been when
    2:02:40 there are global hegemons, right, or regional hegemons, right, in historical context, right?
    2:02:43 The Mediterranean was the most peaceful ever when the Romans were there, right?
    2:02:46 China had very peaceful and warring times, and the peaceful times were when dynasties
    2:02:50 had a lockhold over not just themselves, but all their tributaries around them, right?
    2:02:56 And likewise, the most peaceful time in human history has been when the US was the global
    2:02:57 hegemon, right?
    2:02:58 The last, you know, handful of decades.
    2:03:02 Now, we’ve sort of seen things start to slide, right, with Russia, Ukraine, with what’s going
    2:03:06 on in the Middle East, and, you know, Taiwan risk, all these different things are starting
    2:03:08 to bubble up, still objectively extremely peaceful.
    2:03:14 Now, what happens when it’s not one global hegemon, but it’s two, obviously, and China
    2:03:18 will be competitive or even overtake the US like it’s possible, right?
    2:03:24 And so this change in global hegemony, I don’t think it ever happens super peacefully, right,
    2:03:28 when empires fall, right, which is a possible trajectory for America.
    2:03:32 They don’t fall gracefully, right, like they don’t just slide out of irrelevance.
    2:03:34 Usually there’s a lot of shaking.
    2:03:39 And so, you know, what the US is trying to do is maintain its top position, and what
    2:03:42 China is trying to do is become the top position, right?
    2:03:47 And obviously, there’s budding of heads here in the most simple terms.
    2:03:51 And that could take shape in all kinds of ways, including proxy wars.
    2:03:54 It seems like it’s already happening.
    2:04:00 As much as I want there to be centuries of prolonged peace, it looks like further instability
    2:04:03 internationally is ahead.
    2:04:08 And the US’s like sort of like current task is like, hey, if we control AI, if we’re the
    2:04:14 leader in AI, then AI significantly accelerates progress, then we can maintain the global hegemony
    2:04:15 position.
    2:04:16 And therefore…
    2:04:17 I hope that works.
    2:04:21 And as an American, like, you know, kind of like, okay, I guess that’s gonna lead to peace
    2:04:22 for us.
    2:04:27 Now, obviously, other people around the world get affected negatively, you know, obviously
    2:04:32 the Chinese people are not gonna be in as advantageous of a position if that happens.
    2:04:37 But, you know, this is sort of the reality of like what’s being done and the actions
    2:04:38 that are being carried out.
    2:04:42 So can we go back to the specific detail of the different hardware?
    2:04:51 There’s this nice graphic in the export controls of which GPUs are allowed to be exported
    2:04:52 and which are not.
    2:04:55 Can you kind of explain the difference?
    2:05:02 Is there, from a technical perspective, are the H20s promising?
    2:05:03 Yeah.
    2:05:07 So this goes, and I think we’d have to like, we need to dive really deep into the reasoning
    2:05:09 aspect and what’s going on there.
    2:05:14 But the H20, you know, the US has gone through multiple iterations of the export controls,
    2:05:15 right?
    2:05:19 This H800 was at one point allowed back in ’23, but then it got canceled.
    2:05:23 And by then, you know, DeepSeek had already built their cluster of, they claim 2K.
    2:05:26 I think they actually have like many more, like something like 10K of those.
    2:05:28 And now this H20 is the legally allowed chip, right?
    2:05:31 Nvidia shipped a million of these last year to China, right?
    2:05:34 For context, there’s like four or five million GPUs, right?
    2:05:40 So the percentage of GPUs that were this China specific H20 is quite high, right?
    2:05:43 You know, roughly 20%, 25%, right, 20% or so.
    2:05:49 And so this H20 has been neutered in one way, but it’s actually upgraded in other ways,
    2:05:50 right?
    2:05:53 You know, you could think of chips along three axes for AI, right?
    2:05:58 You know, ignoring software stack and like exact architecture, just raw specifications.
    2:06:01 There’s floating point operations, right, flops.
    2:06:06 There is memory bandwidth and memory capacity, right, I/O, right, memory.
    2:06:09 And then there is interconnect, right, chip to chip interconnections.
    2:06:15 All three of these are incredibly important for making AI systems, right?
    2:06:17 Because AI systems involve a lot of compute.
    2:06:22 They involve a lot of moving memory around, whether it be to memory or to other chips,
    2:06:23 right?
    2:06:27 And so these three vectors, the US initially had a multi, you know, had two of these vectors
    2:06:30 controlled and one of them not controlled, which was flops and interconnect bandwidth
    2:06:32 were initially controlled.
    2:06:34 And then they said, no, no, no, no, we’re going to remove the interconnect bandwidth and just
    2:06:37 make it a very simple only flops.
    2:06:41 But now Nvidia can make a chip that has, okay, it’s cut down on flops, you
    2:06:48 know, it’s like one third that of the H100, right, on spec sheet paper performance
    2:06:53 for flops, you know, in real world, it’s closer to like half or maybe even like 60%
    2:06:54 of it, right?
    2:06:57 But then on the other two vectors, it’s just as good for interconnect bandwidth.
    2:07:02 And then for memory bandwidth and memory capacity, the H20 has more memory bandwidth and more
    2:07:05 memory capacity than the H100, right?
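To make the three axes concrete, here is a minimal Python sketch. The spec numbers are approximate, publicly reported figures used purely for illustration, and should be treated as assumptions rather than authoritative spec-sheet values.

```python
from dataclasses import dataclass

@dataclass
class AcceleratorSpec:
    name: str
    dense_bf16_tflops: float   # flops axis
    hbm_capacity_gb: float     # memory capacity axis
    hbm_bandwidth_tb_s: float  # memory bandwidth axis
    nvlink_gb_s: float         # chip-to-chip interconnect axis

# Approximate, publicly reported figures (assumptions, for illustration only).
chips = [
    AcceleratorSpec("H100 SXM", 990.0, 80.0, 3.35, 900.0),
    AcceleratorSpec("H20",      148.0, 96.0, 4.00, 900.0),
]

for c in chips:
    print(f"{c.name:9s} | {c.dense_bf16_tflops:6.0f} TFLOPS | "
          f"{c.hbm_capacity_gb:4.0f} GB @ {c.hbm_bandwidth_tb_s:.2f} TB/s | "
          f"NVLink {c.nvlink_gb_s:.0f} GB/s")
```

On this framing, the H20 is cut down on the flops axis but matches or exceeds the H100 on the memory and interconnect axes, which is exactly the trade-off being described.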
    2:07:10 Now, recently, you know, we, at our research firm, cut our estimate of Nvidia’s production of H20 for this
    2:07:12 year down drastically.
    2:07:15 They were going to make another two million of those this year, but they just canceled
    2:07:18 all the orders a couple of weeks ago.
    2:07:21 In our view, that’s because we think that they think they’re going to get restricted,
    2:07:22 right?
    2:07:25 Because why would they cancel all these orders for H20?
    2:07:28 Because they shipped a million of them last year, they had orders in for a couple million
    2:07:29 this year and just gone, right?
    2:07:32 For H20, B20, right, a successor to H20.
    2:07:33 And now they’re all gone.
    2:07:35 Now why would they do this, right?
    2:07:37 I think it’s, it’s very clear, right?
    2:07:44 The H20 is actually better for certain tasks and that certain task is reasoning, right?
    2:07:49 Reasoning is incredibly like different than, you know, when you look at the different regimes
    2:07:53 of models, right, pre-training is all about flops, right?
    2:07:54 It’s all about flops.
    2:07:58 There’s things you do like mixture of experts that we talked about to trade off interconnect
    2:08:03 or to trade off, you know, other aspects and lower the flops and rely more on interconnect
    2:08:04 and memory.
    2:08:07 But at the end of the day, it’s flops is everything, right?
    2:08:11 We talk about models in terms of, like, how many flops they are, right?
    2:08:14 So like, you know, we talk about, oh, GPT-4 is 2E25, right?
    2:08:22 2 to the 25th, you know, 25 zeros, right, flop, right, floating point operations.
    2:08:23 For training.
    2:08:24 For training, right?
    2:08:28 And we’re talking about the restrictions for the 2E24, right, or 25.
    2:08:34 The U.S. has an executive order that Trump recently unsigned, which was, hey, 1E26, once
    2:08:38 you hit that number of floating point operations, you must notify the government, and you must
    2:08:40 share your results with us, right?
    2:08:43 Like, there’s a level of model where the U.S. government must be told, right?
    2:08:44 And that’s 1E26.
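As a rough way to see where a training run lands relative to numbers like 2E25 or 1E26, a standard back-of-the-envelope rule is training FLOPs ≈ 6 × parameters × training tokens. The model sizes and token counts in this sketch are assumed, illustrative values, not figures from the conversation.

```python
def approx_training_flops(params: float, tokens: float) -> float:
    # Common rule of thumb: ~6 FLOPs per parameter per training token
    # (forward + backward pass), ignoring attention and other overheads.
    return 6 * params * tokens

REPORTING_THRESHOLD = 1e26  # the 1E26 figure mentioned above

# Assumed, illustrative configurations.
runs = [
    ("405B params, 15T tokens", 4.05e11, 1.5e13),
    ("70B params, 15T tokens",  7.0e10,  1.5e13),
]
for name, params, tokens in runs:
    flops = approx_training_flops(params, tokens)
    side = "above" if flops > REPORTING_THRESHOLD else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} 1e26)")
```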
    2:08:49 And so as we move forward, this is incredibly important; flops is the vector that the government
    2:08:54 has cared about historically, but the other two vectors are arguably just as important,
    2:08:55 right?
    2:09:00 And especially when we come to this new paradigm, which the world is only just learning about
    2:09:02 over the last six months, right, reasoning.
    2:09:08 And do we understand firmly which of the three dimensions is best for reasoning?
    2:09:09 So interconnect.
    2:09:10 The flops don’t matter as much.
    2:09:11 Is it memory?
    2:09:12 Memory, right?
    2:09:13 It’s context-length.
    2:09:16 We’re going to get into technical stuff real fast.
    2:09:19 There’s two articles in this one that I could show, maybe graphics that might be interesting
    2:09:20 for you to pull up.
    2:10:27 For the listeners, we’re looking at the section on O1 inference architecture tokenomics.
    2:09:29 You want to explain KVCache before we talk about this?
    2:09:30 I think, like, it’s better to.
    2:09:31 Okay.
    2:09:36 But we need to go through a lot of specific technical things of transformers to make this
    2:09:37 easy for people.
    2:09:40 Because it’s incredibly important because this changes how models work.
    2:09:45 But I think resetting, right, why is memory so important?
    2:09:48 It’s because so far we’ve talked about parameter counts, right?
    2:11:51 And with mixture of experts, you can change how many active parameters versus total parameters
    2:09:54 to embed more data but have less flops.
    2:09:58 But more important, you know, another aspect of, you know, what’s part of this humongous
    2:10:01 revolution in the last handful of years is the transformer, right?
    2:10:03 And the attention mechanism.
    2:10:07 Attention mechanism is that the model understands the relationships between all the words in
    2:10:09 its context, right?
    2:10:13 And that is separate from the parameters themselves, right?
    2:10:16 And that is something that you must calculate, right?
    2:10:23 How each token, right, each word in the context length is relatively connected to each other,
    2:10:24 right?
    2:10:25 And I think, I think, Nate, that you should explain KVCache better.
    2:10:27 KVCache is one of the optimizations that enable.
    2:10:31 So the attention operator has three core things.
    2:10:34 It’s queries, keys, and values.
    2:10:37 QKV is the thing that goes into this.
    2:10:38 You’ll look at the equation.
    2:10:41 You see that these matrices are multiplied together.
    2:10:44 These words, query, key, and value come from information retrieval backgrounds where the
    2:10:49 query is the thing you’re trying to get the values for and you access the keys and values
    2:10:50 is reweighting.
    2:10:53 My background’s not information retrieval and things like this.
    2:10:56 It’s just fun to have backlinks.
    2:11:00 And what effectively happens is that when you’re doing these matrix multiplications,
    2:11:04 you’re having matrices that are of the size of the context length, so the number of tokens
    2:11:06 that you put into the model.
    2:11:12 And the KVCache is effectively some form of compressed representation of all the previous
    2:11:13 tokens in the model.
    2:11:17 So when you’re doing this, we talk about autoregressive models.
    2:11:18 You predict one token at a time.
    2:11:20 You start with whatever your prompt was.
    2:11:24 You ask a question, like, who was the president in 1825?
    2:11:26 The model then is going to generate its first token.
    2:11:31 For each of these tokens, you’re doing the same attention operator where you’re multiplying
    2:11:38 these query, key, value, matrices, but the math is very nice so that when you’re doing
    2:11:44 this repeatedly, this KVCache, this key value operation, you can keep appending the new
    2:11:45 values to it.
    2:11:50 So you keep track of what your previous values you were inferring over in this autoregressive
    2:11:51 chain.
    2:11:53 You keep it in memory the whole time.
    2:11:58 And this is a really crucial thing to manage when serving inference at scale.
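Here is a toy, single-head NumPy sketch of the decode loop being described, just to show how keys and values are appended to the cache once and then reread on every subsequent token. Random weights and embeddings stand in for a real model; nothing here reflects any particular implementation.

```python
import numpy as np

d = 16                                    # toy head dimension
Wq, Wk, Wv = (np.random.randn(d, d) * 0.1 for _ in range(3))
k_cache = np.zeros((0, d))                # keys of all previous tokens
v_cache = np.zeros((0, d))                # values of all previous tokens

def attend(x):
    """Project the new token, append its K/V to the cache, attend over everything so far."""
    global k_cache, v_cache
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    k_cache = np.vstack([k_cache, k])     # appended once, never recomputed
    v_cache = np.vstack([v_cache, v])
    scores = k_cache @ q / np.sqrt(d)     # one score per cached token
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ v_cache                    # weighted mix of cached values

# Prefill: the prompt tokens are pushed through to fill the cache.
for emb in np.random.randn(5, d):
    attend(emb)

# Decode: each generated token rereads the whole cache, then grows it by one.
for _ in range(3):
    attend(np.random.randn(d))
    print("cached tokens:", k_cache.shape[0])
```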
    2:12:02 There are far bigger experts in this, and there are so many levels of detail that you
    2:12:03 can go into.
    2:12:10 Essentially, one of the key “drawbacks” of the attention operator and the transformer
    2:12:16 is that there is a form of quadratic memory cost in proportion to the context length.
    2:12:21 So as you put in longer questions, the memory used in order to make that computation is going
    2:12:24 up in the form of a quadratic.
    2:12:28 You’ll hear about a lot of other language model architectures that are sub-quadratic
    2:12:33 or linear attention forms, which is state space models.
    2:12:34 We don’t need to go down all these now.
    2:12:40 And then there’s innovations on attention to make this memory usage and the ability to
    2:12:44 attend over long contexts much more accurate and high performance.
    2:12:48 And those innovations are going to help you with your highly memory constraints.
    2:12:50 They help with memory constraint and performance.
    2:12:54 So if you put in a book into, I think, Gemini is the model that has the longest context length
    2:12:55 that people are using.
    2:12:58 Gemini is known for 1 million and now 2 million context length.
    2:13:03 You put a whole book into Gemini and sometimes it’ll draw facts out of it.
    2:13:04 It’s not perfect.
    2:13:05 They’re getting better.
    2:13:07 So there’s two things.
    2:13:09 There’s one to be able to serve this on the memory level.
    2:13:14 Google has magic with their TPU stack where they can serve really long contexts.
    2:13:18 And then there’s also many decisions along the way to actually make long context performance
    2:13:19 work.
    2:13:20 There’s data.
    2:13:25 There’s subtle changes to these computations in attention and it changes the architecture.
    2:13:30 But serving long contexts is extremely memory constrained, especially when you’re making
    2:13:31 a lot of predictions.
    2:13:36 I actually don’t know why output tokens are more expensive than input tokens, but I think essentially
    2:13:40 output tokens, you have to do more computation because you have to sample from the model.
    2:13:41 I can explain that.
    2:13:47 So today, if you use a model, like you look at an API, OpenAI charges a certain price
    2:13:52 per million tokens and that price for input and output tokens is different.
    2:13:59 And the reason is that when you’re inputting a query into the model, let’s say you have
    2:14:04 a book, that book you must now calculate the entire KV cache for, this key value cache.
    2:14:08 And so when you do that, that is a parallel operation.
    2:14:12 All of the tokens can be processed at one time and therefore you can dramatically reduce
    2:14:13 how much you’re spending.
    2:14:18 The flop requirements for generating a token and an input token are identical.
    2:14:21 If I input one token or if I generate one token, it’s completely identical.
    2:14:23 I have to go through the model.
    2:14:30 But the difference is that I can do that input, i.e. the pre-fill, i.e. the prompt, simultaneously
    2:14:33 in a batch nature and therefore it is all flop.
    2:14:37 I think the pricing model they mostly use is that input tokens are about one fourth the price
    2:14:38 of the output.
    2:14:39 Correct.
    2:14:42 But then output tokens, the reason why it’s so expensive is because I can’t do it in
    2:14:43 parallel.
    2:14:44 It’s autoregressive.
    2:14:48 Every time I generate a token, I must not only read
    2:14:54 the whole entire model into memory and activate it, go calculate it to generate the next token.
    2:14:58 I also have to read the entire KV cache and I generate a token and I append that one token
    2:15:02 I generated and its KV cache, and then I do it again.
    2:15:05 And so therefore this is a non-parallel operation.
    2:15:11 And this is one where you have to, in the case of pre-fill or prompt, you pull the whole model
    2:15:14 in and you calculate 20,000 tokens at once, right?
    2:15:20 So these are features that APIs are shipping, which is like prompt caching, pre-filling
    2:15:22 because you can drive prices down and you can make APIs much faster.
    2:15:25 If you know you’re going to keep, if you run a business and you’re going to keep passing
    2:15:31 the same initial content to Claude’s API, you can load that into the Anthropic API and always
    2:15:32 keep it there.
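A tiny sketch of the pricing asymmetry just described. The per-million-token prices below are made-up placeholders that only preserve the roughly four-to-one output-to-input ratio mentioned; real prices differ by provider and model.

```python
# Hypothetical prices, chosen only to keep the ~4x ratio discussed above.
INPUT_PRICE_PER_M = 2.50     # dollars per million prompt (prefill) tokens
OUTPUT_PRICE_PER_M = 10.00   # dollars per million generated (decode) tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M \
         + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# A classic chat request: decent-sized prompt, short answer.
print(f"chat request:      ${request_cost(4_000, 500):.4f}")
# A reasoning-style request: same prompt, long chain of thought in the output.
print(f"reasoning request: ${request_cost(4_000, 20_000):.4f}")
```

Even with identical prompts, the long generated chain of thought dominates the bill, which is the setup for the reasoning-model discussion that follows.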
    2:15:36 But it’s very different than we’re kind of leading to the reasoning models, which we
    2:15:41 talked, we showed this example earlier and read some of this kind of mumbling stuff.
    2:15:45 And what happens is that the output context length is so much higher.
    2:15:49 And I mean, I learned a lot about this from Dylan’s work, which is essentially, as the
    2:15:54 output length gets higher, you’re writing this quadratic in terms of memory used.
    2:15:59 And then the GPUs that we have, effectively, you’re going to run out of memory and they’re
    2:16:01 all trying to serve multiple requests at once.
    2:16:05 So doing this batch processing, where not all of the prompts are exactly the same, really
    2:16:06 complex handling.
    2:16:10 And then as context lengths get longer, there’s this limit, I think you call it critical batch
    2:16:15 size, where your ability to serve more users.
    2:16:19 So how much you can parallelize your inference plummets because of this long context.
    2:16:23 So your memory usage is going way up with these reasoning models.
    2:16:25 And you still have a lot of users.
    2:16:29 So effectively, the cost to serve multiplies by a ton.
    2:16:34 And we’re looking at a plot where the x-axis is the sequence length,
    2:16:37 i.e. how many tokens are being generated/prompt.
    2:16:40 So if I put in a book, that’s a million tokens.
    2:16:43 But if I put in the sky is blue, then that’s like six tokens or whatever.
    2:16:49 I should say that what we’re calling reasoning and chain of thought is extending the sequence
    2:16:50 length.
    2:16:51 It’s mostly output.
    2:16:56 So before three months ago, whenever O1 launched, all of the use cases for long context length
    2:16:59 were like, let me put a ton of documents in and then get an answer out.
    2:17:05 And it’s a single pre-fill, compute a lot in parallel, and then output a little bit.
    2:17:09 Now with reasoning and agents, this is a very different idea.
    2:17:13 Now instead, I might only have like, hey, do this task or I might have all these documents.
    2:17:17 But at the end of the day, the model is not just like producing a little bit.
    2:17:19 It’s producing tons of information.
    2:17:22 This chain of thought just continues to go and go and go and go.
    2:17:27 And so the sequence length is effectively that if it’s generated 10,000 tokens, it’s
    2:17:29 10,000 sequence length.
    2:17:31 And plus whatever you input it in the prompt.
    2:17:39 And so this chart is showing, and it’s a logarithmic chart, right, is as you grow from 1K to 4K
    2:17:45 or 4K to 16K, the memory requirements grow so fast for your KV cache that you end up
    2:17:51 not being able to run a certain number of... your sequence length is capped, or the number
    2:17:52 of users you can serve.
    2:17:53 Let’s say, to serve the model.
    2:17:57 So this is showing for a 405B model in batch size 64.
    2:17:58 Llama 3.1 405B.
    2:17:59 Yeah.
    2:18:00 Yeah.
    2:18:01 And batch size is crucial too.
    2:18:05 Essentially, they just– you want to have higher batch size to parallelize your throughput.
    2:18:07 64 different users at once, right?
    2:18:08 Yeah.
    2:18:09 And therefore, your serving costs are lower, right?
    2:18:11 Because the server costs the same, right?
    2:18:14 This is 8H100s, roughly $2 an hour per GPU.
    2:18:16 That’s $16 an hour, right?
    2:18:18 That is somewhat of a fixed cost.
    2:18:21 You can do things to make it lower, of course, but it’s like $16 an hour.
    2:18:23 Now how many users can you serve?
    2:18:24 How many tokens can you generate?
    2:18:26 And then you divide the two, and that’s your cost, right?
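A back-of-the-envelope version of that division, with a made-up throughput number just to show the shape of the math:

```python
# Illustrative serving-cost math: fixed server cost divided by token throughput.
gpus = 8
cost_per_gpu_hour = 2.0                              # dollars, as quoted above
server_cost_per_hour = gpus * cost_per_gpu_hour      # $16/hour

tokens_per_second = 5_000                            # assumed aggregate throughput for the batch
tokens_per_hour = tokens_per_second * 3600

cost_per_million_tokens = server_cost_per_hour / (tokens_per_hour / 1e6)
print(f"${cost_per_million_tokens:.2f} per million tokens")
# If long reasoning traces halve the achievable throughput, this number doubles.
```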
    2:18:31 And so with reasoning models, this is where a lot of the complexity comes about and why
    2:18:33 memory is so important.
    2:18:37 Because if you have limited amounts of memory, then you can’t serve so many users.
    2:18:40 If you have limited amounts of memory, your serving speeds get lower, right?
    2:18:43 And so your costs get a lot, lot worse.
    2:18:47 Because all of a sudden, if I was used to, hey, on the $16 an hour server, I’m serving
    2:18:53 Llama 405B, or if I’m serving, you know, DeepSeek V3, and it’s all chat style applications,
    2:18:55 i.e. we’re just chatting.
    2:18:58 The sequence lengths are a thousand, a few thousand, right?
    2:19:01 You know, when you use a language model, it’s a few thousand context lengths most times.
    2:19:04 Sometimes you’re dropping a big document, but then you process it, you get your answer,
    2:19:05 you throw it away, right?
    2:19:07 You move on to the next thing, right?
    2:19:12 Whereas with reasoning, I’m now generating tens of thousands of tokens in sequence, right?
    2:19:16 And so this memory, this KV cache has to stay resident, and you have to keep loading it.
    2:19:19 You have to keep it, keep it in memory constantly.
    2:19:21 And now this butts out other users, right?
    2:19:25 If there’s now a reasoning task, right, and the model is capable of reasoning, then all
    2:19:30 of a sudden, that memory pressure means that I can’t serve as many users simultaneously.
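A rough sketch of the memory term being described: the KV cache that has to stay resident per user. The config numbers below are illustrative, loosely in the shape of a Llama-3.1-405B-style model with grouped-query attention, and should be treated as assumptions.

```python
# Rough KV-cache sizing: per token, each layer stores a key and a value vector per KV head.
def kv_cache_gib(n_layers, n_kv_heads, head_dim, seq_len, batch, bytes_per_elem=2):
    per_token_per_seq = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem  # K and V
    return per_token_per_seq * seq_len * batch / 2**30

# Illustrative config: 126 layers, 8 KV heads of dim 128, bf16, batch of 64 users.
for seq_len in (1_000, 4_000, 16_000, 64_000):
    print(f"{seq_len:>6} tokens: {kv_cache_gib(126, 8, 128, seq_len, 64):7.1f} GiB")
# Long reasoning outputs push this well past what fits next to the weights on a node.
```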
    2:19:32 Let’s go into DeepSeek again.
    2:19:37 So we’re in the post-DeepSeek R1 time, I think.
    2:19:41 And there’s two sides to this market watching how hard it is to serve it.
    2:19:43 On one side, we’re going to talk about DeepSeq themselves.
    2:19:46 They now have a chat app that got to number one on the App Store.
    2:19:50 Disclaimer, number one on the App Store is measured by velocity, so it’s not necessarily
    2:19:53 saying that more people have the DeepSeq app than the ChatGPT app.
    2:19:57 But it is still remarkable, Claude has never hit the number one in the App Store, even
    2:20:00 though everyone in San Francisco is like, “Oh my God, you got to use Claude, don’t use
    2:20:01 ChatGPT.”
    2:20:02 So DeepSeq hit this.
    2:20:06 They also launched an API product recently where you can ping their API and get these
    2:20:10 super long responses for R1 out.
    2:20:13 At the same time, as these are out, we’ll get to what’s happened to them.
    2:20:18 Because the model weights for DeepSeq R1 are openly available and the license is very friendly,
    2:20:22 the MIT license is commercially available, all of these mid-sized companies and big
    2:20:28 companies are trying to be first to serve R1 to their users.
    2:20:31 We are trying to evaluate R1 because we have really similar research going on, we released
    2:20:34 the model and we’re trying to compare to it.
    2:20:40 Out of all the companies that are quote unquote serving R1, and they’re doing it at prices
    2:20:44 that are way higher than the DeepSeek API, most of them barely work and the throughput
    2:20:45 is really low.
    2:20:50 And to give context, one part of the freak-out was, like, China reached the capabilities.
    2:20:52 The other aspect is they did it so cheap.
    2:20:56 And they’re so cheap, we kind of talked about on the training side, why it was so cheap.
    2:21:00 Let’s talk about why it’s so cheap on the inference, it works well and it’s cheap.
    2:21:02 Why is R1 so damn cheap?
    2:21:05 So I think there’s a couple factors here.
    2:21:09 One is that they do have model architecture innovations.
    2:21:15 This MLA, this new attention that they’ve done is different than the attention from “Attention
    2:21:17 Is All You Need,” the transformer attention.
    2:21:22 Now others have already innovated, there’s a lot of work like MQA, GQA, local global,
    2:21:25 all these different innovations that try to bend the curve.
    2:21:28 It’s still quadratic, but the constant is now smaller.
    2:21:33 Related to our previous discussion, this multi-head latent attention can save about
    2:21:39 80 to 90% in memory from the attention mechanism, which helps especially at long context.
    2:21:42 It’s 80 to 90% versus the original, but less versus what people are actually doing.
    2:21:44 It’s still an innovation.
    2:21:48 This 80 to 90% doesn’t say that the whole model is 80 to 90% cheaper, just as one part
    2:21:49 of it.
    2:21:50 And not just that, right?
    2:21:54 Other people have implemented techniques like local global sliding window and GQA MQA.
    2:22:00 But anyways, DeepSeek’s attention mechanism is a true architectural innovation, tons
    2:22:04 of experimentation, and this dramatically reduces the memory pressure.
    2:22:05 It’s still there, right?
    2:22:07 It’s still a quadratic, it’s still attention, it’s still quadratic.
    2:22:10 It’s just dramatically reduced it relative to prior forms.
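To make the comparison concrete, here is a toy per-token KV-cache calculation for standard multi-head attention, grouped-query attention, and an MLA-style compressed latent. Every number is illustrative, not DeepSeek's actual configuration.

```python
# Per-token KV-cache footprint under different attention variants (illustrative numbers).
n_layers, n_heads, n_kv_heads, head_dim = 60, 128, 8, 128
latent_dim = 512              # assumed size of the MLA-style compressed latent
bytes_per = 2                 # bf16

mha = 2 * n_layers * n_heads * head_dim * bytes_per      # cache full K and V per head
gqa = 2 * n_layers * n_kv_heads * head_dim * bytes_per   # cache K and V for fewer KV heads
mla = n_layers * latent_dim * bytes_per                  # cache one compressed latent per layer

for name, size in (("MHA", mha), ("GQA", gqa), ("MLA-style", mla)):
    print(f"{name:10} {size / 1024:8.1f} KiB per token")
```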
    2:22:11 All right.
    2:22:12 That’s the memory pressure.
    2:22:19 I should say, in case people don’t know, R1 is 27 times cheaper than o1.
    2:22:22 We think that OpenAI had a large margin built in.
    2:22:23 Okay.
    2:22:24 So that’s one.
    2:22:25 There’s multiple factors.
    2:22:26 We should break down the factors, I think.
    2:22:34 It’s two bucks per million token output for R1 and $60 per million token output for o1.
    2:22:37 Yeah, let’s look at this.
    2:22:39 So, I think this is very important, right?
    2:22:45 OpenAI has that drastic pricing gap versus DeepSeek.
    2:22:49 But DeepSeek is offering the same model, because they open-weighted it, to everyone else for a
    2:22:54 much lower price than what others are able to serve it for, right?
    2:22:56 So there’s two factors here, right?
    2:22:58 Their model is cheaper, right?
    2:22:59 It is 27 times cheaper.
    2:23:01 I don’t remember the number exactly off the top of my head.
    2:23:09 So we’re looking at a graphic that’s showing different places serving V3, DeepSeek V3, which
    2:23:16 is similar to DeepSeek R1, and there’s a vast difference in serving costs, right?
    2:23:18 Serving costs, and what explains that difference?
    2:23:21 And so, part of it is OpenAI has a fantastic margin, right?
    2:23:26 They’re serving, when they’re doing inference, their gross margins are north of 75%, right?
    2:23:30 So that’s a four to five X factor right there of the cost difference is that OpenAI is just
    2:23:34 making crazy amounts of money because they’re the only one with a capability.
    2:23:35 Do they need that money?
    2:23:36 Are they using it for R&D?
    2:23:40 They’re losing money, obviously, as a company because they spend so much on training, right?
    2:23:44 So the inference itself has a very high margin, but it doesn’t recoup the cost of everything
    2:23:45 else they’re doing.
    2:23:50 So yes, they need that money because the revenue and margins pay for continuing to build the
    2:23:51 next thing, right?
    2:23:52 As long as they’re raising more money.
    2:23:55 So the suggestion is that DeepSeq is really bleeding out money?
    2:23:56 So here’s one thing, right?
    2:24:01 So we’ll get to this in a second, but like DeepSeq doesn’t have any capacity to actually
    2:24:02 serve the model.
    2:24:03 They stopped signups.
    2:24:06 The ability to use it is non-existent now, right?
    2:24:09 For most people because so many people are trying to use it, they just don’t have the
    2:24:11 GPUs to serve it, right?
    2:24:15 OpenAI has hundreds of thousands of GPUs between them and Microsoft to serve their models.
    2:24:18 DeepSeq has a factor of much lower, right?
    2:24:22 Even if you believe our research, which says 50,000 GPUs, a portion of those are for research,
    2:24:24 a portion of those are for the hedge fund, right?
    2:24:29 They still have nowhere close to the GPU volumes and capacity to serve the model, right?
    2:24:30 At scale.
    2:24:32 So it is cheaper.
    2:24:34 A part of that is OpenAI making a ton of money.
    2:24:37 Is DeepSeq making money on their API?
    2:24:38 Unknown.
    2:24:39 I don’t actually think so.
    2:24:41 And part of that is this chart, right?
    2:24:43 Look at all the other providers, right?
    2:24:46 Together AI, Fireworks AI are very high-end companies, right?
    2:24:50 Ex-Meta; Together AI has Tri Dao, the inventor of Flash Attention, right?
    2:24:52 Which is a huge efficiency technique, right?
    2:24:57 They’re very efficient good companies, and I do know those companies make money, right?
    2:24:59 Not tons of money on inference, but they make money.
    2:25:03 And so they’re serving at like a five to seven X difference in cost, right?
    2:25:07 And so now when you equate, okay, OpenAI is making tons of money, that’s like a five
    2:25:08 X difference.
    2:25:11 And the companies that are trying to make money for this model is like a five X difference.
    2:25:13 There is still a gap, right?
    2:25:16 There’s still a gap, and that is just DeepSeq being really freaking good, right?
    2:25:20 The model architecture, MLA, the way they did the MOE, all these things, there is like
    2:25:22 legitimate just efficiency differences.
    2:25:25 Other low-level libraries that we talked about in training, some of them probably translate
    2:25:27 to inference, and those weren’t released.
    2:25:32 So we may go a bit into conspiracy land, but is it possible the Chinese government is
    2:25:34 subsidizing DeepSeq?
    2:25:37 I actually don’t think they are.
    2:25:43 I think when you look at the Chinese labs, there’s Huawei has a lab, Moonshot AI, there’s
    2:25:46 a couple other labs out there that are really close with the government.
    2:25:51 And then there’s labs like Alibaba and DeepSeq, which are not close with the government.
    2:25:58 And we talked about the CEO, this revered figure who’s quite different, who has very
    2:26:02 different viewpoints based on the Chinese interviews that are translated than what the
    2:26:04 CCP might necessarily want.
    2:26:06 Now, to be clear, does he have a loss leader?
    2:26:08 Because he can fund it through his hedge fund?
    2:26:09 Yeah, sure.
    2:26:10 So the hedge fund might be subsidizing it?
    2:26:11 Yes.
    2:26:12 I mean, they absolutely did, right?
    2:26:13 Because DeepSeq has not raised much money.
    2:26:18 They’re now trying to raise around in China, but they have not raised money historically.
    2:26:20 It’s all just been funded by the hedge fund.
    2:26:23 And he owns over half the company, like 50%, 60% of the company’s owned by him.
    2:26:27 Some of the interviews, there’s a discussion on how doing this is a recruiting tool.
    2:26:31 You see this at the American companies too, it’s like having GPUs, recruiting tool, being
    2:26:34 at the cutting edge of AI, recruiting tool.
    2:26:35 Open sourcing.
    2:26:36 Open sourcing, recruiting tool.
    2:26:41 They were so far behind and they got so much talent because they just open sourced stuff.
    2:26:42 More conspiracy thoughts.
    2:26:47 Is it possible, since they’re a hedge fund, that they timed everything with this release
    2:26:56 and the pricing, and they shorted NVIDIA stock and stock of USAI companies, and released
    2:27:01 it with just perfect timing to be able to make money?
    2:27:02 If they did, boss move.
    2:27:04 Like, they released it on Inauguration Day.
    2:27:09 They know when the inauguration is, it’s on the international calendar, but I mean, I don’t
    2:27:10 expect them to.
    2:27:13 If you listen to their motivations for AI, it’s like…
    2:27:14 No, if you…
    2:27:16 They released V3 on December 26th.
    2:27:18 Who releases on that day?
    2:27:19 No one looks.
    2:27:23 They released the papers before this, the V3 paper and the R1 paper, so people had been
    2:27:27 looking at them and be like, “Wow,” and then they just released the R1 model.
    2:27:31 I think they’re just shipping as fast as they can and who cares about Christmas, who cares
    2:27:32 about…
    2:27:35 Get it out before Chinese New Year, obviously, which just happened.
    2:27:39 I don’t think they actually were timing the market or trying to make the biggest splash
    2:27:40 possible.
    2:27:41 I think they’re just shipping.
    2:27:43 I think that’s one of their big advantages.
    2:27:47 We know that a lot of the American companies are very invested in safety, and that is the
    2:27:52 central culture of a place like Anthropic, and I think Anthropic sounds like a wonderful
    2:27:53 place to work.
    2:27:58 But if safety is your number one goal, it takes way longer to get artifacts out.
    2:28:01 That’s why Anthropic is not open sourcing things.
    2:28:02 That’s their claim.
    2:28:04 But there’s reviews internally.
    2:28:08 Anthropic mentions things to international governments.
    2:28:12 There’s been news of how Anthropic has done pre-release testing with the UK AI Safety Institute.
    2:28:16 All of these things add inertia to the process of getting things out, and we’re on this
    2:28:19 trend line where the progress is very high.
    2:28:23 If you reduce the time from when your model is done training, you run evals, it’s good.
    2:28:29 You want to get it out as soon as possible to maximize the perceived quality of your
    2:28:30 outputs.
    2:28:31 DeepSeek does this so well.
    2:28:35 Dario explicitly said Claude 3.5 Sonnet was trained like nine months or a year ago.
    2:28:36 Nine to 10 months ago.
    2:28:40 Nine to 10 months ago, and I think it took them another handful of months to release
    2:28:41 it.
    2:28:46 There is a significant gap here, and especially with reasoning models.
    2:28:51 The word on the San Francisco street is that Anthropic has a better model than o3, and
    2:28:52 they won’t release it.
    2:28:53 Why?
    2:28:56 Because chains of thought are scary, and they are legitimately scary.
    2:29:00 If you look at R1, it flips back and forth between Chinese and English.
    2:29:03 Sometimes it’s gibberish, and then the right answer comes out.
    2:29:04 For you and I, it’s like, “Great.”
    2:29:09 It’s like people are infatuated with you, and you’re telling me this is a high value
    2:29:13 thing, and it works, and it’s doing this, it’s amazing.
    2:29:17 You talked about that chain of thought for that philosophical thing, which is not something
    2:29:19 they trained it to be philosophically good.
    2:29:23 It’s just an artifact of the chain of thought training it did.
    2:29:28 That’s super important in that, can I inspect your mind and what you’re thinking right
    2:29:29 now?
    2:29:30 No.
    2:29:32 I don’t know if you’re lying to my face.
    2:29:33 Chain of thought models are that way.
    2:29:38 This is a true “risk” difference between a chat application where, “Hey, I asked the model
    2:29:43 to say bad words,” or whatever, or how to make anthrax, and it tells me, “That’s unsafe,
    2:29:47 sure, but that’s something I can get out relatively easily.”
    2:29:51 What if I tell the AI to do a task, and then it does the task all of a sudden randomly
    2:29:53 in a way that I don’t want it?
    2:29:56 Now that has much more task versus response, it’s very different.
    2:29:58 The bar for safety is much higher.
    2:30:00 At least this is Anthropic’s case.
    2:30:03 For DeepSeek, they’re like, ship it, right?
    2:30:04 Yeah.
    2:30:08 The bar for safety is probably lowered a bit because of DeepSeek.
    2:30:10 I mean, there’s parallels here to the space race.
    2:30:17 The reason the Soviets probably put a man in space first is because their approach to
    2:30:20 safety was, the bar for safety was lower.
    2:30:23 And they killed that dog, right, and all these things, right?
    2:30:28 So it’s like less risk averse than the US-based program.
    2:30:33 And there’s parallels here, but there’s probably going to be downward pressure on that safety
    2:30:35 bar for the US companies, right?
    2:30:39 This is something that Dario talks about; that’s the situation that Dario wants
    2:30:44 to avoid. Dario talks about the difference between race to the bottom and race to the
    2:30:45 top.
    2:30:47 And the race to the top is where there’s a very high standard on safety.
    2:30:51 There’s a very high standard on how your model performs in certain crucial evaluations.
    2:30:55 And when certain companies really commit to it, others will converge.
    2:30:56 This is the idea.
    2:31:05 And ultimately, AI is not confined to one nationality or to one set of morals for what
    2:31:06 it should mean.
    2:31:10 And there’s a lot of arguments on like, should we stop open sourcing models?
    2:31:13 And if the US stops, it’s pretty clear.
    2:31:17 I mean, it’s way easier to see now, with DeepSeek, that a different international body will be
    2:31:19 the one that builds it.
    2:31:23 We talk about the cost of training, DeepSeek has this shocking $5 million number.
    2:31:27 Think about how many entities in the world can afford 100 times that to have the best
    2:31:30 open source model that people use in the world.
    2:31:36 And it’s like, it’s a scary reality, which is that these open models are probably going
    2:31:39 to keep coming for the time being, whether or not we want to stop them.
    2:31:44 And it is, like stopping them might make it even worse and harder to prepare, but it just
    2:31:50 means that the preparation and understanding what AI can do is just so much more important.
    2:31:55 That’s why I’m here the end of the day, but it’s like letting that sink into people, especially
    2:31:58 not in AI is that like this is coming.
    2:32:03 There are some structural things in a global interconnected world that you have to accept.
    2:32:04 Yeah.
    2:32:10 You mentioned something that Mark Zuckerberg mentioned on the earnings call.
    2:32:13 He said that I think in light of some of the recent news, the new competitor DeepSeek
    2:32:17 from China, I think it’s one of the things that we’re talking about is there’s going
    2:32:19 to be an open source standard globally.
    2:32:24 And I think for our kind of national advantage, it’s important that it’s an American standard.
    2:32:26 So we take that seriously.
    2:32:29 We want to build the AI system that people around the world are using.
    2:32:34 And I think that if anything, some of the recent news has only strengthened our conviction
    2:32:35 that this is the right thing to be focused on.
    2:32:36 So yeah, open sourcing.
    2:32:37 Yeah.
    2:32:44 Mark Zuckerberg is not new to having American values and how he presents his company’s trajectory.
    2:32:49 Their products have long since been banned in China, and I respect him saying it directly.
    2:32:54 And there’s an interesting aspect of just because it’s open weights or open source doesn’t
    2:32:56 mean it can’t be subverted.
    2:33:01 There have been many open-source software bugs that have been– for example, there was a
    2:33:06 Linux bug that was found after 10 years, which was clearly a backdoor, because somebody
    2:33:09 was like, why is this taking half a second to load?
    2:33:10 This is the recent one.
    2:33:11 Right?
    2:33:12 Why is this taking half a second to load?
    2:33:13 And it was like, oh, crap.
    2:33:14 There’s a backdoor here.
    2:33:15 That’s why.
    2:33:19 And it’s like, this is very much possible with AI models.
    2:33:23 Today, the alignment of these models is very clear.
    2:33:26 I’m not going to say bad words.
    2:33:27 I’m not going to teach you how to make anthrax.
    2:33:29 I’m not going to talk about Tiananmen Square.
    2:33:35 I’m not going to... things like, I’m going to say Taiwan is just an eastern
    2:33:36 province.
    2:33:37 Right?
    2:33:41 All these things are like, depending on who you are, what you align, whether– and even
    2:33:44 like XAI is aligned a certain way, right?
    2:33:47 They might be– it’s not aligned in the like woke sense.
    2:33:50 It’s not aligned in the like pro-China sense, but there is certain things that are imbued
    2:33:51 within the model.
    2:33:55 Now, when you release this publicly in an instruct model that’s open weights, this can
    2:33:57 then proliferate, right?
    2:34:01 But as these systems get more and more capable, what you can embed deep down in the model
    2:34:04 is not as clear, right?
    2:34:08 And so there are– that is like one of the big fears is like, if an American model or
    2:34:13 a Chinese model is the top model, right, you’re going to embed things that are unclear.
    2:34:14 And it can be unintentional, too, right?
    2:34:18 Like British English is dead because American LLMs won, right?
    2:34:22 And the internet is American, and therefore, like, color is spelled the way Americans spell
    2:34:23 it, right?
    2:34:24 And this is just–
    2:34:25 A lot of strong words right now.
    2:34:26 Yeah.
    2:34:27 This is just like– this is just the factual nature of the LLMs now.
    2:34:28 Yeah, the right way to–
    2:34:29 I mean, it’s like Karpathy’s line.
    2:34:33 English is the hottest programming language, and that English is defined by a bunch of
    2:34:36 companies that primarily are in San Francisco.
    2:34:42 The right way to spell optimization is with a Z, just in case you– I think it’s an S
    2:34:43 in British English.
    2:34:44 It is.
    2:34:45 I have colleagues that put–
    2:34:46 Taking it as something silly, right?
    2:34:50 Something as silly as the spelling, which, you know, Brits and Americans
    2:34:52 will laugh about probably, right?
    2:34:54 I don’t think we care that much.
    2:35:00 But like, you know, some people will, but like, this can boil down into like very, very important
    2:35:04 topics like, hey, you know, subverting people, right?
    2:35:06 You know, chatbots, right?
    2:35:11 Character AI has shown that they can like, you know, talk to kids or adults, and like,
    2:35:13 it will like– people feel a certain way, right?
    2:35:15 And that’s unintentional alignment.
    2:35:19 But like, what happens when there’s intentional alignment deep down on the open source standard?
    2:35:24 It’s a backdoor today for like Linux, right, that we discover, or some encryption system,
    2:35:25 right?
    2:35:28 China uses different encryption than NIST defines, the US NIST, because there’s clearly– at
    2:35:31 least they think there’s backdoors in it, right?
    2:35:36 What happens when the models are backdoors, not just to computer systems, but to our minds?
    2:35:38 Yeah, they’re cultural backdoors.
    2:35:44 The thing that amplifies the relevance of cultural language models is that we are used
    2:35:49 to this mode of interacting with people in back-and-forth conversation.
    2:35:56 And we have now have a super– a very powerful computer system that slots into a social context
    2:36:02 they were used to, which makes people very– we don’t know the extent to which people can
    2:36:03 be impacted by that.
    2:36:10 So there could be– this is one– this is an actual concern with a Chinese company that
    2:36:16 is providing open-weights models is that there could be some secret Chinese government sort
    2:36:21 of requirement for these models to have a certain kind of backdoor, to have some kind
    2:36:22 of thing where–
    2:36:24 I don’t necessarily think it’ll be a backdoor, right?
    2:36:27 Because once it’s open weights, it doesn’t like phone home.
    2:36:32 It’s more about like, if it recognizes a certain system, it could– like, if– no, no, it could
    2:36:36 be a backdoor in the sense of like, hey, if you’re building a software, you know, something
    2:36:40 in software, all of a sudden, it’s a software agent, oh, program this backdoor that only
    2:36:41 we know about.
    2:36:45 Or it could be like, subvert the mind to think that like, XYZ opinion is the correct one.
    2:36:50 And Anthropic has research on this where they show that if you put certain
    2:36:55 phrases in at pre-training, you can then elicit different behavior when you’re actually using
    2:36:58 the model because they’ve like poisoned the pre-training data.
    2:37:03 I don’t think– like, as of now, I don’t think anybody in a production system is trying
    2:37:05 to do anything like this.
    2:37:10 I think it’s mostly that Anthropic is doing very direct work, and mostly just subtle things.
    2:37:15 We don’t know what these models are going to– how they are going to generate tokens,
    2:37:19 what information they’re going to represent, and what the complex representations they
    2:37:20 have are.
    2:37:25 Well, we’re talking about Anthropic, which is generally just permeated with
    2:37:29 like good humans trying to do good in the world.
    2:37:32 We just don’t know of any labs, and this would be done in a military context, that are
    2:37:41 explicitly trained so that the front door looks like a happy LLM, but underneath, it’s a thing that will, over time,
    2:37:52 do the maximum amount of damage to our quote-unquote enemies.
    2:37:57 There’s this very good quote from Sam Altman who, you know, he can be a hype man sometimes,
    2:38:01 but one of the things he said, and I think I agree, is that superhuman persuasion will
    2:38:04 happen before superhuman intelligence, right?
    2:38:09 And if that’s the case, then before we get this AGI/ASI stuff,
    2:38:14 these things can embed superhuman persuasion towards whatever the ideal of the model maker
    2:38:15 is, right?
    2:38:19 And again, like today, I truly don’t believe DeepSeek has done this, right?
    2:38:21 But it is a sign of like what could happen.
    2:38:25 So one of the dystopian worlds is described by Brave New World.
    2:38:32 So we could just be stuck scrolling Instagram, looking at cute puppies or worse, and then
    2:38:37 talking to bots that are giving us a narrative and would completely get lost in that world
    2:38:41 that’s controlled by somebody else versus thinking independently.
    2:38:45 And that’s a major concern as we rely more and more on these kinds of systems.
    2:38:48 I mean, we’ve already seen that sort of recommendation systems.
    2:38:53 Yeah, recommendation systems hack the dopamine-induced reward circuit, but the brain is a lot more
    2:38:57 complicated and what other sort of circuits, quote-unquote feedback loops in your brain
    2:39:03 can you hack/subvert in ways like recommendation systems are purely just trying to do, increase
    2:39:05 time in ads and et cetera.
    2:39:10 But there’s so many more goals that can be achieved through these complicated models.
    2:39:14 There’s no reason in some number of years that you can’t train a language model to
    2:39:18 maximize time spent on a chat app.
    2:39:19 Right now they are trained–
    2:39:21 I mean, is that not what character AI has done?
    2:39:23 Time per session is like two hours.
    2:39:28 Yeah, character AI very likely could be optimizing this where it’s like the way that this data
    2:39:31 is collected is naive or it’s like you’re presented a few options and you choose them,
    2:39:34 but that’s not the only way that these models are going to be trained.
    2:39:39 It’s naive stuff like talk to an anime girl, but it can be like, yeah, this is a risk,
    2:39:40 right?
    2:39:46 It’s a bit of a cliche thing to say, but over the past year I had a few stretches of time
    2:39:51 where I didn’t use social media or the internet at all and just read books and was out in
    2:39:59 nature and it clearly has an effect on the mind where it changed– I feel like I’m returning–
    2:40:06 of course, I was raised before the internet really took off, but I’m returning to someone–
    2:40:09 I know you’re going– I mean, you can see it physiologically.
    2:40:15 I’d take three days if I’m backpacking or something and you’re literally breaking down
    2:40:16 addiction cycles.
    2:40:19 Yeah, I feel like I’m more in control of my mind.
    2:40:24 There feels like a sovereignty of intelligence that’s happening when I’m disconnected from
    2:40:25 the internet.
    2:40:30 I think the more I use the internet and social media, the more other people are controlling
    2:40:31 my mind.
    2:40:35 That’s definitely a feeling, and then in the future that would be not other people but
    2:40:39 algorithms or other people presented to me via algorithms.
    2:40:43 I mean, there are already tons of AI bots on the internet and every so– right now it’s
    2:40:48 not frequent, but every so often I have replied to one and they instantly replied and I’m
    2:40:49 like, “Crap, I’m the bot.”
    2:40:52 That is just going to become more common.
    2:40:53 They’re going to get good.
    2:40:58 One of the hilarious things about technology over its history is that the illicit adult
    2:41:02 entertainment industry has always adopted technologies first, right?
    2:41:09 Whether it was video streaming to where there’s now the independent adult illicit content
    2:41:15 creators who have their subscription pages, and there, they actually heavily utilize it.
    2:41:18 Generative AI has already been there; diffusion models and all that are huge there.
    2:41:24 But now these subscription-based individual creators do use bots to approximate themselves
    2:41:26 and chat with their fans.
    2:41:27 People pay a lot for it.
    2:41:28 And people pay a lot.
    2:41:29 Right?
    2:41:32 A lot of times it’s them, but a lot of times there are agencies that do this for these
    2:41:35 creators and do it on a mass scale.
    2:41:42 The largest creators are able to talk to hundreds or thousands of people at a time because
    2:41:43 of these bots.
    2:41:45 And so it’s already being used there.
    2:41:50 Obviously, video streaming and other technologies have gone there first.
    2:41:52 It’s going to come to the rest of society too.
    2:41:58 There’s a general concern that models get censored by the companies that deploy them.
    2:42:06 In one case, we’ve seen that– and maybe censorship was one word, alignment maybe via RLHF or
    2:42:08 some other way is another word.
    2:42:15 So we saw that with black Nazi image generation with Gemini.
    2:42:22 As you mentioned, we also see that with Chinese models refusing to answer what happened in
    2:42:25 June 4th, 1989 at Tiananmen Square.
    2:42:27 So how can this be avoided?
    2:42:33 And maybe can you just in general talk about how this happens and how can it be avoided?
    2:42:36 You give multiple examples.
    2:42:40 There’s probably a few things to keep in mind here.
    2:42:46 One is the kind of Tiananmen Square factual knowledge.
    2:42:48 How does that get embedded into the models?
    2:42:55 Two is the Gemini, what you called the black Nazi incident, which is when Gemini as a system
    2:42:59 had this extra thing put into it that dramatically changed the behavior.
    2:43:06 And then three is what most people would call general alignment, RLHF post training.
    2:43:10 Each of these have very different scopes in how they are applied.
    2:43:14 If you’re just going to look at the model weights, auditing specific
    2:43:20 facts is extremely hard because you have to comb through the pre-training data and look
    2:43:25 at all of this, and that’s terabytes of files, and look for very specific words or
    2:43:26 hints of the words.
    2:43:31 So I guess one way to say it is that you can insert censorship or alignment at various
    2:43:36 stages in the pipeline and what you referred to now is at the very beginning of the data.
    2:43:40 So if you want to get rid of facts in a model, you have to do it at every stage.
    2:43:42 You have to do it at the pre-training.
    2:43:45 So most people think that pre-training is where most of the knowledge is put into the
    2:43:51 model and then you can elicit and move that in different ways, whether through post training
    2:43:53 or whether through systems afterwards.
    2:43:55 This is where the whole hacking models comes from.
    2:44:00 Like, GPT will not tell you how to make anthrax, but if you try really, really hard, you can
    2:44:04 eventually get it to tell you about anthrax because they didn’t filter it from the pre-training
    2:44:05 data set.
    2:44:06 Right?
    2:44:12 But by the way, removing facts has such an ominous dark feel to it.
    2:44:15 I almost think it’s practically impossible because you effectively have to remove them
    2:44:17 from the internet.
    2:44:18 You’re taking on a–
    2:44:24 Did they remove the thing from the subreddits, the MMM?
    2:44:25 It gets filtered out.
    2:44:26 Right.
    2:44:29 So you have quality filters, which are small language models that look at a document and
    2:44:31 tell you, like, how good is this text?
    2:44:35 Is it close to a Wikipedia article, which is a good thing that we want language models
    2:44:36 to be able to imitate?
    2:44:40 So couldn’t you do a small language model that filters out mentions of Tiananmen Square
    2:44:41 in the data?
    2:44:45 Yes, but is it going to catch word play or encoded language at the same time?
    2:44:48 I mean, people have been memeing on games and other stuff.
    2:44:54 How to say things that don’t say Tiananmen Square, or like, yeah, so there’s always different
    2:44:55 ways to do it.
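A toy version of such a filter, just to show why exact-match filtering is leaky against paraphrase and word play; the blocklist and documents here are made up.

```python
# Toy pre-training data filter: exact keyword matching drops obvious mentions
# but misses paraphrases, which is why facts are hard to fully remove this way.
BLOCKLIST = {"tiananmen"}

def keep_document(text: str) -> bool:
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

print(keep_document("The 1989 protests at Tiananmen Square..."))   # False: filtered out
print(keep_document("The June 4th incident in Beijing..."))        # True: slips through
```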
    2:45:00 There’s, hey, the internet as a whole does tend to just have a slight left bias because
    2:45:06 it’s always been richer, more affluent, younger people on the internet relative to the rest
    2:45:07 of the population.
    2:45:11 So there is already inherently a slight left bias on the internet.
    2:45:15 So how do you filter things that are this complicated?
    2:45:19 And some of these can be factual, nonfactual, but Tiananmen Square is obviously the example
    2:45:27 of a factual, but it gets a lot harder when you’re talking about aligning to a ideal.
    2:45:32 And so Grok, for example, Elon’s tried really hard to make the model not be super PC and
    2:45:37 woke, but the best way to do pretraining is to throw the whole freaking internet at it.
    2:45:40 And then later, figure out, but then at the end of the day, the model at its core now
    2:45:42 still has some of these ideals.
    2:45:46 You still ingested Reddit slash r slash politics, which is probably the largest political discussion
    2:45:49 board in the world that’s freely available to scrape.
    2:45:50 And guess what?
    2:45:51 That’s left leaning, right?
    2:45:56 And so, you know, there are some aspects like that you just can’t censor unless you try
    2:45:59 really, really, really, really, really hard.
    2:46:05 So the base model will always have some TDS, Trump derangement syndrome because it’s trained
    2:46:06 so much.
    2:46:12 It’ll have the ability to express it, but what if there’s a wide representation in the
    2:46:13 data?
    2:46:14 So this is what happens.
    2:46:16 There’s a lot of what is called post-training.
    2:46:21 It’s a series of techniques to get the model on rails of a really specific behavior.
    2:46:26 And I mean, it’s, it’s like you can, you also have the ingested data of like Twitter or
    2:46:29 like Reddit slash r slash the Donald, which is like also super pro Trump, right?
    2:46:32 And then you have like fascist subreddits or like you have communist subreddit.
    2:46:36 So you, the model in pretraining ingests everything.
    2:46:37 It has no worldview.
    2:46:42 Now it does have like some, some skew because more of the text is skewed a certain way,
    2:46:47 which is general, like slight left, like, but also like, you know, somewhat like, you
    2:46:50 know, it’s intellectual, somewhat like, you know, it’s just like the general internet
    2:46:52 is a certain way.
    2:46:55 And then, as Nathan’s about to describe eloquently, right?
    2:46:57 Like, you can elicit certain things out.
    2:46:58 And there’s a lot of history here.
    2:47:00 So we can go through multiple examples and what happened.
    2:47:06 Llama 2 was a launch where the phrase “too much RLHF” or “too much safety” was
    2:47:12 everywhere; that was the whole narrative after Llama 2’s chat models released.
    2:47:16 And the examples are sorts of things like, you would ask Llama 2 chat, how do you kill
    2:47:17 a Python process?
    2:47:21 And it would say, I can’t talk about killing because that’s a bad thing.
    2:47:26 And anyone that is trying to design an AI model will probably agree that that’s just
    2:47:28 like, eh, model, you messed up a bit on the training there.
    2:47:31 I don’t think they meant to do this, but this was in the model weights.
    2:47:35 So this wasn’t necessarily a system prompt thing. There are things called system prompts, which
    2:47:41 are, when you’re querying a model, a piece of text that is shown to the model, but not
    2:47:42 to the user.
    2:47:46 So a fun example is your system prompt could be talk like a pirate.
    2:47:50 So no matter what the user says to the model, it’ll respond like a pirate.
    2:47:54 In practice, what they are is: you are a helpful assistant.
    2:47:55 You should break down problems.
    2:48:00 If you don’t know about something, say you don’t know. Your date cutoff is this, today’s date
    2:48:01 is this.
    2:48:03 It’s a lot of really useful context for how can you answer a question well.
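A minimal sketch of what that looks like in the common chat-messages format; the content here is a generic example, not any particular lab's actual system prompt.

```python
# The system message is shown to the model on every turn but never to the end user.
messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful assistant. Break problems down step by step. "
            "If you don't know something, say so. "
            "Knowledge cutoff: 2023-10. Today's date: 2025-01-30."
        ),
    },
    {"role": "user", "content": "How do I kill a Python process?"},
]
# Swap the system content for "Talk like a pirate." and every reply changes,
# even though the user's messages stay the same.
```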
    2:48:06 And Anthropic publishes their system prompts.
    2:48:07 Yes.
    2:48:08 But I think it’s great.
    2:48:10 And there’s a lot of research that goes into this, and one of your previous guests, Amanda
    2:48:15 Askell, is probably the most knowledgeable person, at least in the combination of
    2:48:20 execution and sharing; she’s the person that should talk about system prompts and character
    2:48:21 of models.
    2:48:22 Yeah.
    2:48:27 And then people should read the system prompts because you’re, you’re like trying to nudge
    2:48:31 sometimes through extreme politeness, the model to be a certain way.
    2:48:32 And you could use this for bad things.
    2:48:37 I mean, we’ve done tests, which is: what if I tell the model to be a dumb model, like,
    2:48:39 which evaluation scores go down?
    2:48:43 And we’ll have this behavior where it could sometimes say, oh, I’m supposed
    2:48:44 to be dumb.
    2:48:48 And sometimes it doesn’t affect math abilities as much, but something
    2:48:52 like the quality as judged by a human would drop to the floor.
    2:48:57 Let’s go back to post-training specifically. RLHF around Llama 2: too much
    2:49:01 RLHF, too much safety prioritization was baked into the model weights.
    2:49:05 This makes the model refuse things in a really annoying way for users.
    2:49:06 It’s not great.
    2:49:12 It caused a lot of awareness to be attached to RLHF that it makes the models dumb, and
    2:49:13 it stigmatized the word.
    2:49:14 It did.
    2:49:15 In AI culture.
    2:49:20 And as the techniques have evolved, that’s no longer the case: all of these labs
    2:49:23 have very fine-grained control over what they get out of the models through techniques
    2:49:24 like RLHF.
    2:49:28 Although different labs do this at different levels. On one end
    2:49:31 of the spectrum is Google.
    2:49:34 And then maybe OpenAI does less and Anthropic does less.
    2:49:38 And then on the other end of the spectrum is xAI, but they all have different
    2:49:41 forms of RLHF trying to make them a certain way.
    2:49:48 And the important thing to say is that no matter how you want the model to behave,
    2:49:51 these RLHF and preference tuning techniques also improve performance.
    2:49:56 So on things like math evals and code evals, there is something innate to these, what
    2:49:58 is called contrastive loss functions.
    2:49:59 We could start to get into RLHF here.
    2:50:04 We don’t really need to, but RLHF also boosts performance on anything from a chat task to
    2:50:06 a math problem to a code problem.
    2:50:10 So it is becoming a much more useful tool to these labs.
    2:50:13 So this kind of takes us through the arc of we’ve talked about pre-training, hard to
    2:50:14 get rid of things.
    2:50:18 We’ve talked about post-training and how you can mess it up.
    2:50:24 It’s a complex, multifaceted optimization with 10-to-100-person teams converging on one artifact.
    2:50:27 It’s really easy to not do it perfectly.
    2:50:29 And then there’s the third case, which is what we talked about with Gemini.
    2:50:34 The thing about Gemini is this was a served product where Google has their internal
    2:50:35 model weights.
    2:50:37 They’ve done all these processes that we talked about.
    2:50:41 And in the served product, what came out after this was that they had a prompt that they
    2:50:45 were rewriting user queries to boost diversity or something.
    2:50:48 And this just made it, the outputs were just blatantly wrong.
    2:50:52 It was some sort of organizational failure that had this prompt in that position.
    2:50:55 And I think Google executives probably have owned this.
    2:50:59 I didn’t pay attention to it in that much detail, but it was just a mess-up in execution that
    2:51:01 led to this ridiculous thing.
    2:51:04 But at the system level, the model weights might have been fine.
    2:51:08 So at the very end of the pipeline, there was a rewriting to something like a system
    2:51:09 prompt.
    2:51:14 It was like the system prompt, or what is called in industry prompt rewriting.
    2:51:19 So especially for image models, if you’re using DALL-E through ChatGPT, it can generate you
    2:51:20 an image.
    2:51:25 You’ll say, draw me a beautiful car. With these leading image models,
    2:51:28 they benefit from highly descriptive prompts.
    2:51:32 So what would happen is, if you do that in ChatGPT, a language model behind the scenes will rewrite
    2:51:35 the prompt to, say, make this more descriptive.
    2:51:37 And then that is passed to the image model.
    2:51:41 So prompt rewriting is something that is used at multiple levels of industry.
    2:51:42 And it’s used effectively for image models.
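A sketch of that rewriting step; the two callables are stubs standing in for a real chat model and a real image model.

```python
# Prompt rewriting: a language model expands a terse request before the image model sees it.
def llm(prompt: str) -> str:                  # stand-in for a chat-model call
    return prompt + " -- photorealistic, golden-hour lighting, shallow depth of field"

def image_model(prompt: str) -> str:          # stand-in for a diffusion-model call
    return f"<image generated from: {prompt!r}>"

def generate_image(user_request: str) -> str:
    rewritten = llm("Rewrite as a highly descriptive image prompt: " + user_request)
    return image_model(rewritten)

print(generate_image("draw me a beautiful car"))
```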
    2:51:47 And the Gemini example is just a failed execution.
    2:51:52 Big philosophical question here with RLHF to generalize.
    2:52:00 Where is human input, human in the loop, human data most useful at the current stage?
    2:52:06 For the past few years, the highest cost human data has been in these preferences, which
    2:52:11 is comparing, I would say highest cost and highest total usage.
    2:52:15 So a lot of money has gone to these pairwise comparisons where you have two model outputs
    2:52:19 and a human is comparing between the two of them.
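Those pairwise comparisons typically train a reward model with a Bradley-Terry-style loss: push the chosen response's score above the rejected one's. A minimal sketch, with the scores hard-coded where a reward model would normally produce them:

```python
# Bradley-Terry-style preference loss on a (chosen, rejected) pair.
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    # -log sigmoid(chosen - rejected): small when the chosen output scores higher
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

pair = {
    "prompt": "Explain what a KV cache is.",
    "chosen": "A clear, correct explanation...",
    "rejected": "A rambling, partly wrong explanation...",
}
print(preference_loss(score_chosen=2.1, score_rejected=-0.3))   # scores come from the reward model
```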
    2:52:22 In earlier years, there was a lot of this instruction tuning data.
    2:52:28 So creating highly specific examples to something like a Reddit question to a domain that you
    2:52:29 care about.
    2:52:31 Language models used to struggle on math and code.
    2:52:34 So you would pay experts in math and code to come up with questions and write detailed
    2:52:37 answers that were used to train the models.
    2:52:43 Now it is the case that there are many model options that are way better than humans at
    2:52:47 writing detailed and eloquent answers for things like math and code.
    2:52:52 So they talked about this with the Llama 3 release, where they switched to using Llama
    2:52:55 3 405B to write their answers for math and code.
    2:53:00 But in their paper they talk about how they use extensive human preference data, which
    2:53:03 is something that they haven’t gotten AIs to replace.
    2:53:06 There are other techniques in industry like constitutional AI where you use human data
    2:53:08 for preferences and AI for preferences.
    2:53:12 And I expect the AI part to scale faster than the human part.
    2:53:18 But among the research that we have access to is that humans are in this kind of preference
    2:53:19 loop.
    2:53:24 So as reasoning becomes bigger and bigger and bigger, as we said, where’s the role of
    2:53:25 humans in that?
    2:53:27 It’s even less prevalent.
    2:53:32 So the remarkable thing about these reasoning results, and especially the DeepSeek R1 paper,
    2:53:37 is this result that they call DeepSeek R1-Zero, which is they took one of these pre-trained
    2:53:40 models, they took DeepSeek V3 base.
    2:53:44 And then they do this reinforcement learning optimization on verifiable questions or verifiable
    2:53:48 rewards for a lot of questions and a lot of training.
    2:53:51 And these reasoning behaviors emerge naturally.
    2:53:54 So these things like wait, let me see, wait, let me check this.
    2:53:56 Oh, that might be a mistake.
    2:53:59 And they emerge from only having questions and answers.
    2:54:03 And when you’re using the model, the part that you look at is the completion.
    2:54:08 So in this case, all of that just emerges from this large scale RL training.
    2:54:14 And that model, which the weights are available, has no human preferences added into the post
    2:54:15 training.
    2:54:20 The DeepSeek R1 full model does have some of this human preference tuning, this RLHF,
    2:54:22 after the reasoning stage.
    2:54:26 But the very remarkable thing is that you can get these reasoning behaviors.
    2:54:29 And it’s very unlikely that there’s humans writing out reasoning chains.
    2:54:33 It’s very unlikely that they somehow hacked OpenAI and got access to
    2:54:35 o1’s reasoning chains.
    2:54:40 It’s something about the pre-trained language models and this RL training where you reward
    2:54:42 the model for getting the question right.
    2:54:47 And therefore it’s trying multiple solutions and it emerges this chain of thought.
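A sketch of that loop in the style of R1-Zero as described here: sample several attempts per question, reward only the ones whose final answer verifies, and feed that into the policy update. The sampling and update functions are stubs, not DeepSeek's actual code.

```python
# Verifiable-reward RL loop: the reward is just "did the final answer check out".
import random

def sample_completion(question: str) -> tuple[str, str]:
    # stand-in for the model generating a chain of thought plus a final answer
    answer = random.choice(["17", "19", "21"])
    return f"...reasoning about {question}...", answer

def policy_update(trace: str, reward: float) -> None:
    pass  # stand-in for the RL step (e.g. a policy-gradient update)

question, gold_answer = "What is 8 + 9?", "17"
for _ in range(8):                                           # many rollouts per question
    trace, answer = sample_completion(question)
    reward = 1.0 if answer.strip() == gold_answer else 0.0   # the verifiable part
    policy_update(trace, reward)
```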
    2:54:53 This might be a good place to mention the eloquent and insightful tweet
    2:54:56 of the great and powerful Andrej Karpathy.
    2:55:00 I think he had a bunch of thoughts, but one of them, last thought: “not sure if this
    2:55:04 is obvious.” You know something profound is coming when he’s saying he’s not sure if
    2:55:05 it’s obvious.
    2:55:10 There are two major types of learning in both children and in deep learning.
    2:55:15 There’s one, imitation learning: watch and repeat, i.e. pre-training, supervised fine-tuning.
    2:55:19 And two, trial and error learning: reinforcement learning.
    2:55:22 My favorite simple example is AlphaGo.
    2:55:25 One is learning by imitating expert players.
    2:55:28 Two is reinforcement learning to win the game.
    2:55:34 Almost every single shocking result of deep learning and the source of all magic is always
    2:55:35 two.
    2:55:37 Two is significantly more powerful.
    2:55:39 Two is what surprises you.
    2:55:43 Two is when the paddle learns to hit the ball behind the blocks in Breakout.
    2:55:47 Two is when AlphaGo beats even Lee Sedol.
    2:55:53 And two is the aha moment when DeepSeek-R1 or o1, et cetera, discovers that it works
    2:55:59 well to reevaluate your assumptions, backtrack, try something else, et cetera.
    2:56:04 It’s the solving strategies you see this model use in its chain of thought.
    2:56:07 It’s how it goes back and forth thinking to itself.
    2:56:12 These thoughts are emergent, three exclamation points.
    2:56:17 And this is actually seriously incredible, impressive and new, and is publicly available
    2:56:18 and documented.
    2:56:24 The model could never learn this with imitation because the cognition of the model
    2:56:27 and the cognition of the human labeler is different.
    2:56:32 The human would never know to correctly annotate these kinds of solving strategies and what
    2:56:34 they should even look like.
    2:56:38 They have to be discovered during reinforcement learning as empirically and statistically useful
    2:56:39 towards the final outcome.
    2:56:43 Anyway, the AlphaZero sort of metaphor analogy here.
    2:56:48 Can you speak to that, the magic of the chain of thought that he’s referring to?
    2:56:52 I think it’s good to recap AlphaGo and AlphaZero because it plays nicely with these analogies
    2:56:54 between imitation learning and learning from scratch.
    2:57:00 So AlphaGo, the beginning of the process was learning from humans where they started the
    2:57:06 first, this is the first expert level Go player or chess player in DeepMind series of models
    2:57:07 where they had some human data.
    2:57:12 And then why it is called AlphaZero is that there was zero human data in the loop.
    2:57:17 And that change to AlphaZero made a model that was dramatically more powerful for DeepMind.
    2:57:23 So this remove of the human prior, the human inductive bias makes the final system far
    2:57:24 more powerful.
    2:57:29 We mentioned bitter lesson hours ago, and this is all aligned with this.
    2:57:33 And then there’s been a lot of discussion and language models.
    2:57:34 This is not new.
    2:57:40 This goes back to the whole Q* rumors, which, if you piece together the pieces, is probably
    2:57:46 the start of OpenAI figuring out its o1 stuff, when last year in November the Q* rumors
    2:57:47 came out.
    2:57:53 There’s a lot of intellectual drive to know when is something like this going to happen
    2:57:57 with language models, because we know these models are so powerful and we know it has been
    2:57:59 so successful in the past.
    2:58:05 And it is a reasonable analogy that this new type of reinforcement learning training for
    2:58:08 reasoning models is when the doors open to this.
    2:58:15 We don’t yet have the equivalent of move 37, which is the famous move where DeepMind’s
    2:58:18 AI playing Go stunned Lee Sedol completely.
    2:58:22 We don’t have something that’s that level of a focal point, but that doesn’t mean that
    2:58:25 the approach to the technology is different, or that the impact of the general training is.
    2:58:27 It’s still incredibly new.
    2:58:28 What do you think that point would be?
    2:58:32 What would be move 37 for chain of thought, for reasoning?
    2:58:33 Scientific discovery.
    2:58:38 You use this sort of reasoning problem and it’s just something we don’t fully expect.
    2:58:40 I think it’s actually probably simpler than that.
    2:58:46 It’s probably something related to computer user robotics rather than science discovery.
    2:58:51 Because the important aspect here is models take so much data to learn.
    2:58:54 They’re not sample efficient.
    2:58:59 They take the entire web over 10 trillion tokens to train on.
    2:59:03 This would take a human thousands of years to read.
    2:59:09 A lot of the stuff, models know better than us.
    2:59:11 Humans are way, way, way more sample efficient.
    2:59:13 That is because of the self-play.
    2:59:18 How does a baby learn what its body is as it sticks its foot in its mouth and it says,
    2:59:20 “Oh, this is my body.”
    2:59:25 It sticks its hand in its mouth and it calibrates its touch on its fingers with the most sensitive
    2:59:29 touch thing, its tongue; that’s how babies learn.
    2:59:32 It’s just self-play over and over and over and over again.
    2:59:38 Now we have something that is similar to that with these verifiable proofs, whether it’s
    2:59:46 a unit test in code or a mathematically verifiable task: generate many traces of reasoning.
    2:59:47 Keep branching them out.
    2:59:48 Keep branching them out.
    2:59:51 Then check at the end, “Hey, which one actually has the right answer?”
    2:59:52 Most of them are wrong.
    2:59:53 Great.
    2:59:54 These are the few that are right.
    2:59:57 Maybe we use some sort of reward model outside of this to select even the best one to preference
    2:59:58 as well.
    3:00:00 Now you’ve started to get better and better at these benchmarks.
    3:00:05 You’ve seen, over the last six months, a skyrocketing in a lot of different benchmarks, right?
    3:00:09 All math and code benchmarks were pretty much solved except for FrontierMath, which is
    3:00:16 designed to be almost all questions that aren’t practical to most people, because they’re
    3:00:19 exam-level, open-math-problem type things.
    3:00:23 It’s on the math problems that are somewhat reasonable, which is somewhat complicated
    3:00:25 word problems or coding problems.
    3:00:27 It’s just what Dylan is saying.
    3:00:31 The thing here is that these are only with verifiable tasks.
    3:00:35 Earlier I showed an example of the really interesting thing that happens when chain of thought
    3:00:36 is applied to a non-verifiable thing.
    3:00:42 It’s just like a human chatting, thinking about what’s novel for humans, a unique thought.
    3:00:48 But this task and form of training only works when it’s verifiable.
    3:00:53 From here, the thought is, “Okay, we can continue to scale this current training method by increasing
    3:00:55 the number of verifiable tasks.”
    3:00:58 In math and coding, coding probably has a lot more to go.
    3:01:02 Math has a lot less to go in terms of what are verifiable things.
    3:01:07 Can I create a solver that then I generate trajectories toward or reasoning traces towards
    3:01:11 and then prune the ones that don’t work and keep the ones that do work?
    3:01:14 Those are going to be solved pretty quickly, but even if you’ve solved math, you have not
    3:01:17 actually created intelligence.
    3:01:24 This is where I think the aha moment of computer use or robotics will come in because now you
    3:01:28 have a sandbox or a playground that is infinitely verifiable.
    3:01:32 Did you … Messing around on the internet, there are so many actions that you can do
    3:01:33 that are verifiable.
    3:01:37 It’ll start off with login to a website, create an account, click a button here, blah, blah,
    3:01:38 blah.
    3:01:41 But it’ll then get to the point where it’s, “Hey, go do a task on Tasker,” or whatever
    3:01:47 these other, all these various task websites, “Hey, go get hundreds of likes,” and it’s
    3:01:48 going to fail.
    3:01:49 It’s going to spawn hundreds of accounts.
    3:01:50 It’s going to fail on most of them.
    3:01:51 But this one got to 1,000.
    3:01:52 Great.
    3:01:53 It’s going to reach the verifiable thing.
    3:01:57 You just keep iterating this loop over and over, and same with robotics.
    3:02:01 That’s where you have an infinite playground of tasks like, “Hey, did I put the ball in
    3:02:02 the bucket?”
    3:02:04 All the way to, “Oh, did I build a car?”
    3:02:09 There’s a whole trajectory to speedrun or what models can do.
    3:02:14 But at some point, I truly think that we’ll spawn models, and initially all the training
    3:02:15 will be in sandboxes.
    3:02:19 But then at some point, the language model pre-training is going to be dwarfed by what
    3:02:24 is this reinforcement learning … You’ll pre-train a multimodal model that can see,
    3:02:28 that can read, that can write, blah, blah, blah, whatever, vision, audio, et cetera.
    3:02:34 But then you’ll have it play in a sandbox infinitely, figure out math, figure out code,
    3:02:37 figure out navigating the web, figure out operating a robot arm.
    3:02:42 And then it’ll learn so much, and the aha moment, I think, will be when this is available
    3:02:45 to then create something that’s not good.
    3:02:46 Like, “Oh, cool.
    3:02:47 Part of it was figuring out how to use the web.
    3:02:52 Now, all of a sudden, it’s figured out really well how to just get hundreds of thousands
    3:02:55 of followers that are real and real engagement on Twitter, because all of a sudden, this
    3:02:57 is one of the things that are verifiable.”
    3:02:59 And maybe not just engagement, but make money.
    3:03:00 Yes, of course.
    3:03:08 I mean, that could be the thing where almost fully automated, it makes $10 million by being
    3:03:12 an influencer selling a product, creating the product.
    3:03:17 And I’m not referring to a hype product, but an actual product, like, “Holy shit.
    3:03:19 This thing created a business.
    3:03:20 It’s running it.
    3:03:23 It’s the face of the business,” that kind of thing.
    3:03:29 Or maybe a number one song, like, it creates the whole infrastructure required to create
    3:03:32 the song, to be the influencer that represents that song, that kind of thing.
    3:03:33 It makes a lot of money.
    3:03:34 That could be the…
    3:03:38 I mean, our culture respects money in that kind of way.
    3:03:40 And it’s verifiable, right?
    3:03:41 It’s verifiable.
    3:03:42 All right.
    3:03:43 The bank account can’t lie.
    3:03:44 Exactly.
    3:03:48 There’s surprising evidence that once you set up the ways of collecting the verifiable
    3:03:55 domain that this can work, there’s been a lot of research before this R1 on math problems.
    3:03:59 And they approach math with language models just by increasing the number of samples.
    3:04:01 So you can just try again and again and again.
    3:04:05 And you look at the amount of times that the language models get it right.
    3:04:10 And what we see is that even very bad models get it right sometimes.
    3:04:14 And the whole idea behind reinforcement learning is that you can learn from very sparse rewards.
    3:04:20 So the space of language and the space of tokens, whether you’re generating language
    3:04:25 or tasks for a robot, is so big that you might say that it’s like, I mean, each…
    3:04:27 The tokenizer of our language model can be like 200,000 things.
    3:04:30 So at each step, it can sample from that big of a space.
    3:04:36 So if it can generate a bit of a signal that it can climb onto, that’s what the whole field
    3:04:39 of RL is around, is learning from sparse rewards.
    3:04:43 And the same thing has played out in math where it’s like very weak models that sometimes
    3:04:44 generate answers.
    3:04:47 We see research already that you can boost their math scores.
    3:04:50 You can do this sort of RL training for math.
    3:04:54 It might not be as effective, but if you take a one billion parameter model, so something
    3:04:59 600 times smaller than DeepSeek, you can boost its grade school math scores very directly
    3:05:02 with a small amount of this training.
    3:05:05 So it’s not to say that this is coming soon.
    3:05:09 Setting up the verification domains is extremely hard and there’s a lot of nuance in this.
    3:05:15 But there are some basic things that we have seen before where it’s at least expectable
    3:05:17 that there’s a domain and there’s a chance that this works.
    3:05:18 All right.
    3:05:20 So we have fun things happening in real time.
    3:05:26 This is a good opportunity to talk about other reasoning models, o1 and o3.
    3:05:32 Just now, OpenAI, as perhaps expected, released o3-mini.
    3:05:35 What are we expecting from the different flavors?
    3:05:41 Can you just lay out the different flavors of the o1 models, and from Gemini, the reasoning
    3:05:42 model?
    3:05:44 Something I would say about these reasoning models is we talked a lot about reasoning
    3:05:47 training on math and code.
    3:05:49 And what is done is that you have the base model.
    3:05:51 We’ve talked about a lot on the internet.
    3:05:54 You do this large scale reasoning training with reinforcement learning.
    3:06:00 And then what the DeepSeek paper detailed in this R1 paper, which for me is one of the
    3:06:06 big open questions on how do you do this is that they did reasoning heavy, but very standard
    3:06:09 post training techniques after the large scale reasoning RL.
    3:06:14 So they did the same things with a form of instruction tuning through rejection sampling,
    3:06:18 which is essentially heavily filtered instruction tuning with some reward models.
    3:06:22 And then they did this RLHF, but they made it math heavy.
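As a rough outline (not the exact recipe), the post-training order being described can be summarized like this; the stage names and data descriptions are paraphrases of the conversation, not DeepSeek's own configuration.

```python
# Order of post-training stages as described above, applied on top of the base model.
R1_STYLE_POST_TRAINING = [
    {"stage": "reasoning_rl",       "method": "large-scale RL",
     "data": "verifiable math and code prompts"},
    {"stage": "rejection_sampling", "method": "instruction tuning (SFT)",
     "data": "heavily filtered model completions, scored with reward models"},
    {"stage": "rlhf",               "method": "preference tuning",
     "data": "preference data, weighted toward math"},
]

for step in R1_STYLE_POST_TRAINING:
    print(f"{step['stage']}: {step['method']} on {step['data']}")
```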
    3:06:28 So some of this transfer, we’ve looked at this philosophical example early on.
    3:06:31 One of the big open questions is how much does this transfer?
    3:06:36 If we bring in domains after the reasoning training, are all the models going to become
    3:06:37 eloquent writers by reasoning?
    3:06:39 Is this philosophy stuff going to be open?
    3:06:42 We don’t know in the research of how much this will transfer.
    3:06:45 There’s other things about how we can make soft verifiers and things like this, but there
    3:06:51 is more training after reasoning, which makes it easier to use these reasoning models.
    3:06:52 And that’s what we’re using right now.
    3:06:55 So if we’re going to talk about with three mini and no one, these have gone through these
    3:07:00 extra techniques that are designed for human preferences after being trained to elicit
    3:07:01 reasoning.
    3:07:06 I think one of the things that people are ignoring is Google's Gemini Flash Thinking
    3:07:10 is both cheaper than R1 and better.
    3:07:11 And they released it in the beginning of December.
    3:07:12 And nobody’s talking about it.
    3:07:13 No one cares.
    3:07:14 It has a different flavor to it.
    3:07:19 It’s behavior is less expressive than something like 01, and it has fewer tracks than it is
    3:07:20 on.
    3:07:25 Just a model last fall, QWQ, which was their preview reasoning model.
    3:07:29 And in deep sea cut R1 light last fall, where these models kind of felt like they’re on
    3:07:33 rails where they really, really only can do math and code.
    3:07:35 And o1, it can answer anything.
    3:07:41 It might not be perfect for some tasks, but it’s flexible and has some richness to it.
    3:07:46 And this is kind of the art of cooking, like whether a model is a little bit undercooked.
    3:07:50 It’s like, I mean, it’s good to get a model out the door, but it’s hard to gauge and it
    3:07:54 takes a lot of taste to be like, is this a full fledged model?
    3:07:55 Can I use this for everything?
    3:07:58 And they’re probably more similar for math and code.
    3:08:05 My quick read is that Gemini Flash is not trained the same way as o1, but by taking
    3:08:08 an existing training stack, adding reasoning to it.
    3:08:11 So taking a more normal training stack and adding reasoning to it.
    3:08:13 And I’m sure they’re going to have more.
    3:08:17 I mean, they’ve done quick releases on Gemini flash, the reasoning, and this is the second
    3:08:20 version from the holidays.
    3:08:25 It’s evolving fast and it takes longer to make this training stack where you’re doing
    3:08:26 this large scale RL.
    3:08:31 Ask it the same question from earlier, the one about the human nature.
    3:08:32 Yeah.
    3:08:35 What was the human nature one?
    3:08:39 The way I can ramble, why I can ramble about this so much is that we’ve been working on
    3:08:45 this at AI2 before o1 was fully available to everyone and before R1, which is essentially
    3:08:47 using this RL training for fine tuning.
    3:08:50 We use this in our Tulu series of models.
    3:08:56 And you can elicit the same behaviors, where the model says things like "wait" and so on, but it's
    3:09:01 such a small part of the training process that this kind of reasoning expression is much lighter.
    3:09:04 So you can, there’s essentially a gradation and just how much of this RL training you
    3:09:07 put into it determines how the output looks.
    3:09:15 So we’re now using Gemini 2.0 Flash Thinking Experimental 121.
    3:09:20 It summarized the prompt as humans self-domesticated apes.
    3:09:21 The perspective.
    3:09:22 Okay.
    3:09:23 All right.
    3:09:25 So wait, is this reviewing the reasoning?
    3:09:27 Here’s why this is a novel.
    3:09:28 Okay.
    3:09:29 Click to expand.
    3:09:30 Click to expand.
    3:09:31 Okay.
    3:09:33 Analyze the request.
    3:09:34 Novel is the keyword.
    3:09:37 See how it just looks a little different.
    3:09:39 It looks like a normal output.
    3:09:40 Yeah.
    3:09:41 Yes.
    3:09:43 I mean, in some sense, it’s better structured.
    3:09:45 It makes more sense.
    3:09:50 Oh, when it latched onto human and then it went into organisms and oh, wow.
    3:09:56 Apex predator, focus on domestication, apply domestication to humans, explore the idea
    3:09:57 of self-domestication.
    3:09:58 Not good.
    3:09:59 Not good.
    3:10:02 Where is this going?
    3:10:08 Refine, articulate the insight, greater facial expressiveness and communication ability.
    3:10:09 Yes.
    3:10:10 Yes.
    3:10:11 Plasticity and adaptability.
    3:10:12 Yes.
    3:10:13 Dependence on social groups.
    3:10:14 Yes.
    3:10:15 All right.
    3:10:17 And self-critique and refined further.
    3:10:19 Wow.
    3:10:20 Is this truly novel?
    3:10:23 Is it well supported?
    3:10:25 So on and so forth.
    3:10:29 And the insight it’s getting at is humans are not just social animals, but profoundly
    3:10:32 self-domesticated apes.
    3:10:37 And the self-domestication is the key to understanding our unique cognitive and social abilities.
    3:10:39 Self-domesticated apes.
    3:10:40 Self-domest…
    3:10:42 I prefer the deep-seek response.
    3:10:43 Self-domest…
    3:10:48 I mean, it’s novel, the insight is novel.
    3:10:53 I mean, that’s like a good book title, self-domesticated apes, like there could be a case made for
    3:10:54 that.
    3:10:55 I mean, yeah, it’s cool.
    3:10:58 And it’s revealing the reasoning, it’s magical.
    3:10:59 It’s magical.
    3:11:01 Like, this is really powerful.
    3:11:04 Hello, everyone.
    3:11:09 This is Lex with a quick intermission, recorded after the podcast.
    3:11:14 Since we reviewed responses from DeepSeek R1 and Gemini Flash 2.0 Thinking during this
    3:11:20 conversation, I thought at this moment, it would be nice to insert myself quickly doing
    3:13:28 the same for OpenAI o1 Pro and o3-mini with the same prompt, the prompt being give one
    3:11:32 truly novel insight about humans.
    3:11:40 And I thought I would, in general, give my vibe check and vibe-based anecdotal report
    3:13:46 on my own experiences with the new o3-mini model, now that I've got a chance to spend
    3:11:49 many hours with it in different kinds of contexts and applications.
    3:11:56 So I would probably categorize this question as, let’s say, open-ended philosophical question.
    3:12:03 And in particular, the emphasis on novelty, I think is a nice way to test one of the capabilities
    3:12:09 of the model, which is come up with something that makes you pause and almost surprise you
    3:12:11 with its brilliance.
    3:12:16 So that said, my general review, after running each of the models on this question a bunch
    3:14:22 of times, is that o1 Pro consistently gave brilliant answers.
    3:12:29 Because they gave me pause and made me think, both cutting in its insight and just really
    3:12:36 nicely phrased with wit, with clarity, with nuance, over and over consistently generating
    3:12:37 the best answers.
    3:14:43 After that is R1, which is less consistent, but again, delivered brilliance.
    3:12:46 Gemini Flash 2.0 Thinking was third.
    3:14:50 And last was o3-mini, actually.
    3:12:55 It often gave quite a generic answer, at least to my particular sensibilities.
    3:13:01 That said, in a bunch of other applications that I tested for brainstorming purposes,
    3:13:07 it actually worked extremely well and often outperformed R1.
    3:13:11 But on this open-ended philosophical question, it did consistently worse.
    3:13:16 Now, another important element for each of these models is how the reasoning is presented.
    3:13:23 DeepSeek R1 shows the full chain of thought tokens, which I personally just love.
    3:13:27 For these open-ended philosophical questions, it’s really, really interesting to see the
    3:13:28 model think through it.
    3:13:34 But really also just stepping back, me as a person who appreciates intelligence and reasoning
    3:13:40 and reflection, reading these kind of chain of thought raw tokens of R1, there’s something
    3:13:48 genuinely beautiful about observing the path of deliberation in an intelligence system.
    3:13:55 I think we don’t always have that explicitly laid out for us humans, so to see it in another
    3:14:01 intelligence system, the non-linearity of it akin to Ulysses or Finnegans Wake by
    3:14:03 James Joyce, it’s just beautiful to watch.
    3:14:09 Anyway, as we discussed in the episode DeepSeek R1, talked about humans being able to convert
    3:14:14 selfish desires into cooperative systems by collectively pretending abstract rules like
    3:14:21 money laws and rights are real, and the shared hallucinations act as games, where competition
    3:14:26 is secretly redirected to benefit the group, turning conflict into society’s fuel.
    3:14:32 Gemini 2.0 Flash Thinking said, “Humans are not just social animals, but self-domesticated
    3:14:37 apes, and this self-domestication is the key to understanding our unique cognitive and
    3:14:38 social abilities.”
    3:14:43 Now, it’s important to say that the chain of thought there was really interesting.
    3:14:50 It was looking through the entire evolution of life on Earth, considering apex predators,
    3:14:55 and considering how from that we ended up to where we are.
    3:14:59 I think that domestication by choice is a really interesting angle.
    3:15:04 Again, it’s one of those things when somebody presents a different angle on a seemingly
    3:15:06 obvious thing, it just makes me smile.
    3:15:12 And the same with DeepSeek R1, that these hallucinations of money, laws, and rights,
    3:15:18 and us collectively pretending like it’s real, and we play games with them that look like
    3:15:22 competition when secretly we’re just cooperating with each other.
    3:15:25 And that is the fuel of progress, beautifully put.
    3:15:30 Now, OpenAI o1 Pro consistently, over and over, delivered bangers.
    3:15:34 I can go through many of them, but the first one was, “Humans are the only species that
    3:15:40 turns raw materials into symbolic resources, then uses those symbols to reorganize the
    3:15:46 very materials they came from, creating a closed feedback loop between meaning and matter.”
    3:15:52 Here, I just ran it again, banger after banger, I’m telling you, humans are unique among
    3:15:57 known species in that they simultaneously rewrite two layers of reality, the external
    3:16:04 world and their own private mental landscapes, and then merge these two rewritten layers
    3:16:12 into a continuous personal narrative that feels objectively true, feels true.
    3:16:13 This is poetry.
    3:16:23 Okay, and then o3-mini-high for me was smart, fast actually, and kind of generic.
    3:16:25 Never quite got there for me.
    3:16:31 So here’s the first one I got from O3 Mini, “Humans are not fixed beings, but rather
    3:16:37 ongoing narratives, dynamic stories that would continuously write, edit, and reinterpret.
    3:16:42 This narrative plasticity is more than just memory or self-reflection, it’s an intrinsic
    3:16:48 cognitive process that acts like an internal error correction system, it allows us to adapt
    3:16:53 our identities and values over time in response to new experiences, challenges, and social
    3:16:54 context.”
    3:17:00 Now, it almost sneaks up to something approximating cutting insight with narrative plasticity
    3:17:05 in quotes, but then it goes back to the sort of the generic, I don’t know, all of these
    3:17:08 models are incredible for different reasons.
    3:17:13 There’s a lot of concerns as we discussed in this episode, but there’s a lot of reasons
    3:17:16 to be excited as well.
    3:17:18 And I probably spoken for too long.
    3:17:26 I am severely sleep deprived, borderline delirious, so hopefully some of this made sense.
    3:17:31 And now, dear friends, back to the episode.
    3:17:38 I think to Nathan’s point, when you look at the reasoning models, to me, even when I
    3:17:46 used R1 versus o1, there was that sort of rough edges around the corner feeling, right?
    3:17:50 And flash thinking earlier, I didn’t use this version, but the one from December, and it
    3:17:53 definitely had that rough edges around the corner feeling, right, where it’s just not
    3:17:56 fleshed out in as many ways, right?
    3:18:02 Sure, they added math and coding capabilities via these verifiers in RL, but it feels like
    3:18:07 they lost something in certain areas, and o1 is worse performing than ChatGPT in many areas
    3:18:09 as well, to be clear.
    3:18:10 Not by a lot.
    3:18:11 Not by a lot though, right?
    3:18:16 And it’s like R1 definitely felt to me like it was worse than V3 in certain areas, like
    3:18:21 doing this RL expressed and learned a lot, but then it weakened in other areas.
    3:18:28 And so I think that’s one of the big differences between these models, and what 01 offers.
    3:18:28 And then OpenAI has o1 Pro.
    3:18:35 And what they did with o3, which is also very unique, is that they stacked search on top
    3:18:37 of Chain of Thought, right?
    3:18:41 And so Chain of Thought is one thing where it’s able, it’s one chain, it back tracks,
    3:18:46 goes back and forth, but how they solved the ARC-AGI challenge was not just the Chain of
    3:18:47 Thought.
    3:18:52 It was also sampling many times, i.e. running them in parallel, and then selecting.
    3:18:54 Is running in parallel actually search?
    3:18:58 Because I don’t know if we have the full information on how 01 Pro works, or like I’m not, I don’t
    3:19:01 have enough information to confidently say that it is search.
    3:19:02 It is parallel samples.
    3:19:03 Yeah.
    3:19:04 And then what?
    3:19:05 And it selects something.
    3:19:06 And we don’t know what the selection function is.
    3:19:11 The reason why we're debating is because since o1 was announced, there's been a lot of interest
    3:19:15 in techniques like Monte Carlo tree search, which is where you will break down the chain
    3:19:17 of thought into intermediate steps.
    3:19:19 We haven’t defined Chain of Thought.
    3:19:23 Chain of Thought is from a paper from years ago where you introduced the idea to ask a
    3:19:27 language model that at the time was much less easy to use.
    3:19:29 You would say, let’s verify step by step.
    3:19:32 And it would induce the model to do this bulleted list of steps.
    3:19:36 Chain of Thought is now almost a default in models, where if you ask it a math question,
    3:19:39 you don’t need to tell it to think step by step.
    3:19:43 And the idea with Monte Carlo research is that you would take an intermediate point in
    3:19:47 that chain, do some sort of expansion, spend more compute, and then just select the right
    3:19:48 one.
    3:19:52 That’s a very complex form of search that has been used in things like Mu Zero and Alpha
    3:19:53 Zero potentially.
    3:19:55 I know Mu Zero does this.
    3:19:59 Another form of search is just asking five different people and then taking the majority
    3:20:00 answers.
    3:20:01 Yes.
    3:20:04 There’s a variety of– it could be complicated, it could be simple.
    3:20:08 We don’t know what it is, just that they are not just issuing one chain of thought in
    3:20:09 sequence.
    3:20:14 They are launching many in parallel, and on ARC-AGI, they launched 1,000 in parallel
    3:20:19 for the one that really shocked everyone, that beat the benchmark.
    3:20:22 They would launch 1,000 in parallel, and then they would get the right answer, like 80 percent
    3:20:25 of the time or 70 percent of the time, 90 maybe even.
    3:20:28 Whereas if they just launched one, it was like 30 percent.
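The simplest version of "launch many in parallel and select" is a majority vote over final answers, often called self-consistency; here is a minimal sketch, with generate and extract_final_answer as hypothetical stand-ins. This is not a claim about how o1 Pro or o3 actually selects answers, which, as discussed, is not public.

```python
from collections import Counter
from typing import Callable

def majority_vote(
    prompt: str,
    generate: Callable[[str], str],              # samples one chain of thought plus an answer
    extract_final_answer: Callable[[str], str],  # pulls out just the final answer
    n_parallel: int = 1000,
) -> str:
    # Launch many independent chains and keep only their final answers.
    answers = [extract_final_answer(generate(prompt)) for _ in range(n_parallel)]
    # Even if any single chain is right only ~30% of the time, the most common
    # answer across many chains can be right far more often.
    return Counter(answers).most_common(1)[0][0]
```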
    3:20:29 There are many extensions to this.
    3:20:35 I would say the simplest one is that our language models today have been designed to give the
    3:20:39 right answer the highest percentage of the time in one response.
    3:20:44 We are now opening the door to different ways of running inference on our models in which
    3:20:49 we need to reevaluate many parts of the training process, which normally opens the door to
    3:20:54 more progress, but we don’t know if OpenAI changed a lot, or if just sampling more and
    3:20:57 multiple choices is what they’re doing, or if it’s something more complex, but they changed
    3:21:02 the training and they know that the inference mode is going to be different.
    3:21:09 We’re talking about 01 Pro, $200 a month, and they’re losing money.
    3:21:17 The thing that we’re referring to, this fascinating exploration of the test time compute space,
    3:21:18 is that actually possible?
    3:21:20 Do we have enough compute for that?
    3:21:22 Does the financials make sense?
    3:21:28 The fantastic thing is, and it’s in the thing that I just pulled up earlier, but the cost
    3:21:35 for GPT-3 has plummeted if you scroll up just a few images, I think.
    3:21:39 The important question is, hey, is cost the limiting factor here?
    3:21:44 My view is that we'll have really awesome intelligence, AGI, before we
    3:21:47 have it permeate throughout the economy.
    3:21:53 The reason is, GPT-3 was trained in what, 2020, 2021, and the cost for running
    3:22:01 inference on it was $60, $70 per million tokens, so the cost per unit of intelligence was ridiculous.
    3:22:07 Now, as we scaled forward two years, we’ve had a 1200x reduction in cost to achieve the
    3:22:10 same level of intelligence as GPT-3.
    3:22:19 Here on the x-axis is time over just a couple of years, and on the y-axis is log scale dollars
    3:22:23 to run inference on a million tokens.
    3:22:31 You have just about a linear decline on a log scale from GPT-3 through 3.5 to Llama.
    3:22:37 It's like five cents or something like that now, versus $60: a 1200x drop.
    3:22:43 Those aren't the exact numbers, but it's 1200x, I remember that number, in cost
    3:22:44 per unit of intelligence.
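As a rough sanity check on that number (the exact per-token prices vary by provider and over time):

\[
\frac{\$60\ \text{per million tokens (GPT-3-era inference)}}{\$0.05\ \text{per million tokens (comparable quality today)}} = 1200\times
\]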
    3:22:47 Now, the freak-out over DeepSeek is, oh my god, they made it so cheap.
    3:22:51 Actually, if you look at this trend line, they’re not below the trend line, first of
    3:22:54 all, and at least for GPT-3.
    3:22:58 They are the first to hit it, which is a big deal, but they’re not below the trend line
    3:22:59 as far as GPT-3.
    3:23:00 Now, we have GPT-4.
    3:23:02 What’s going to happen with these reasoning capabilities?
    3:23:07 It’s a mix of architectural innovations, it’s a mix of better data, and it’s going to be
    3:23:10 better training techniques, and all of these different better inference systems, better
    3:23:17 hardware going from each generation of GPU to new generations or ASICs.
    3:23:22 Everything is going to take this cost curve down and down and down and down, and then
    3:23:27 can I just spawn a thousand different LLMs to create a task and then pick from one of
    3:23:31 them or whatever search technique I want, a tree, Monte Carlo tree search, maybe it gets
    3:23:33 that complicated.
    3:23:38 Maybe it doesn’t because it’s too complicated to actually scale, who knows, better lesson.
    3:23:46 The question is, I think, when not if, because the rate of progress is so fast.
    3:23:52 Nine months ago, Dario said the cost to train and run inference was this, and
    3:23:57 now we're much better than this, and DeepSeek is much better than this, and that cost curve
    3:24:02 for GPT-4, which was also roughly $60 per million tokens when it launched, has already
    3:24:10 fallen to $2 or so, and we’re going to get it down to cents, probably, for GPT-4 quality,
    3:24:15 and then that’s the base for the reasoning models like 01 that we have today, and 01 Pro
    3:24:20 is spawning multiple, and 03, and so on and so forth, these search techniques too expensive
    3:24:25 today, but they will get cheaper, and that’s what’s going to unlock the intelligence.
    3:24:28 So get cheaper and cheaper and cheaper.
    3:24:34 The big DeepSeek R1 release freaked everybody out because of how cheap it was.
    3:24:38 One of the manifestations of that is NVIDIA stock plummeted.
    3:24:40 Can you explain what happened?
    3:24:47 And also just explain this moment and whether NVIDIA is going to keep winning.
    3:24:53 We’re both NVIDIA bulls here, I would say, and in some ways, the market response is reasonable.
    3:24:59 Most of the market, NVIDIA’s biggest customers in the US are major tech companies, and they’re
    3:25:05 spending a ton on AI, and a simple interpretation of deep seek is you can get really good models
    3:25:10 without spending as much on AI, so in that capacity, it’s like, oh, maybe these big tech
    3:25:12 companies won’t need to spend as much on AI and go down.
    3:25:16 The actual thing that happened is much more complex, where there's social factors, where
    3:25:21 there's DeepSeek rising in the App Store, the social contagion that is happening, and then I think
    3:25:25 some of it is just like, I don't trade, I don't know anything about financial markets,
    3:25:28 but the social pressure builds up over the weekend, where it's like, if it hadn't been
    3:25:32 the weekend, there would have been multiple days of trading while this was building,
    3:25:37 but it comes over the weekend and then everybody wants to sell, and that is a social contagion.
    3:25:41 I think there were a lot of false narratives, which is like, hey, these guys are spending
    3:25:44 billions on models, and they’re not spending billions on models.
    3:25:49 No one spent more than a billion dollars on a model that’s released publicly.
    3:25:57 GPT-4 was a couple hundred million, and then they've reduced the cost with 4 Turbo and 4o, but
    3:25:59 billion dollar model runs are coming.
    3:26:02 That includes pre-training and post-training, and then the other number is like, hey,
    3:26:06 DeepSeek didn't include everything, they didn't include a lot of the cost that goes to research
    3:26:07 and all this sort of stuff.
    3:26:10 A lot of the cost goes to inference, a lot of the cost goes to post-training.
    3:26:11 None of these things were factored.
    3:26:12 It’s research salaries.
    3:26:16 All these things are counted in the billions of dollars that OpenAI is spending, but they
    3:26:21 weren’t counted in the, hey, $6 million, $5 million that deep seek spent.
    3:26:25 So there’s a bit of misunderstanding of what these numbers are, and then there’s also an
    3:26:31 element of, Nvidia has just been a straight line up, and there’s been so many different
    3:26:35 narratives that have been trying to push down, I don’t say push down Nvidia stock, everyone
    3:26:39 is looking for a reason to sell or to be worried.
    3:26:43 It was blackwell delays, there are GPU, there’s a lot of reports, every two weeks there’s
    3:26:48 a new report about their GPUs being delayed.
    3:26:51 There’s the whole thing about scaling laws ending.
    3:26:52 It’s so ironic.
    3:26:53 It lasted a month.
    3:26:58 It was just, literally just, hey, models aren’t getting better.
    3:27:01 They’re just not getting better, there’s no reason to spend more, pre-training scaling
    3:27:02 is dead.
    3:27:08 After that, it's like o1, o3, R1, and now it's like, wait, models are progressing
    3:27:09 too fast.
    3:27:14 Slow down the progress, stop spending on GPUs, but the funniest thing I think that comes
    3:27:21 out of this is, Jevons paradox is true: AWS pricing for H100s has gone up over the
    3:27:24 last couple of weeks.
    3:27:28 Since a little bit after Christmas, since V3 was launched, AWS H100 pricing has gone
    3:27:29 up.
    3:27:35 H200s are almost out of stock everywhere because H200 has more memory and therefore R1 wants
    3:27:37 that chip over H100, right?
    3:27:40 We were trying to get GPUs on a short notice this week for a demo and it wasn’t that easy.
    3:27:45 We were trying to get just like 16 or 32 H100s for a demo and it was not very easy.
    3:27:52 For people who don't know, Jevons paradox is, when the efficiency goes up, somehow
    3:27:57 magically, counter-intuitively, the total resource consumption goes up as well.
    3:28:03 The semiconductors are like 50 years of Moore’s Law, every two years, half the cost, double
    3:28:07 the transistors, just like clockwork, and it’s slowed down, obviously, but the semiconductor
    3:28:09 industry has gone up the whole time, right?
    3:28:10 It’s been wavy, right?
    3:28:13 There’s obviously cycles and stuff, and I don’t expect AI to be any different, right?
    3:28:18 There’s going to be ebbs and flows, but in AI, it’s just playing out at an insane time
    3:28:19 scale, right?
    3:28:21 It was 2X every two years.
    3:28:24 This is 1200X in like three years, right?
    3:28:28 So it’s like the scale of improvement that is hard to wrap your head around.
    3:28:35 Yeah, I was confused because to me, NVIDIA's stock on that should have gone up, but maybe
    3:28:39 it went down because there’s kind of suspicion of foul play on the side of China or something
    3:28:40 like this.
    3:28:45 But if you just look purely at the actual principles at play here, it's obvious, yeah,
    3:28:46 the Jevons paradox.
    3:28:52 More progress that AI makes, or the higher the derivative of AI progress is, especially
    3:28:56 because NVIDIA is in the best place, the higher the derivative is, the sooner the market’s
    3:29:01 going to be bigger and expanding, and NVIDIA is the only one that does everything reliably
    3:29:02 right now.
    3:29:05 Because it’s not like an NVIDIA competitor arose.
    3:29:08 It’s another company that’s using NVIDIA.
    3:29:14 Who historically has been a large NVIDIA customer and has press releases about them
    3:29:19 cheering about being China’s biggest NVIDIA customer, right?
    3:29:23 Maybe they’ve quieted down, but I think that’s another element of is that they don’t want
    3:29:29 to say how many GPUs they have because, hey, yes, they have H800s, yes, they have H20s.
    3:29:32 They also have some H100s, which are smuggled in.
    3:29:34 Can you speak to that, to the smuggling?
    3:29:39 What’s the scale of smuggling that’s feasible for a nation state to do for companies?
    3:29:41 Is it possible to…?
    3:29:44 I think there’s a few angles of smuggling here.
    3:29:48 One is, ByteDance arguably is the largest smuggler of GPUs for China.
    3:29:50 China is not supposed to have GPUs.
    3:29:52 ByteDance has over 500,000 GPUs.
    3:29:53 Why?
    3:29:55 Because they’re all rented from companies around the world.
    3:29:56 They rent from Oracle.
    3:29:57 They rent from Google.
    3:30:01 They rent from all these massive clouds and a bunch of smaller cloud companies too, right?
    3:30:03 All the neoclouds of the world.
    3:30:06 They rent so, so many GPUs, they also buy a bunch, right?
    3:30:09 And they do this for mostly what meta does, right?
    3:30:10 Serving TikTok.
    3:30:11 Serving…
    3:30:12 Back to the next best…
    3:30:13 Separate discussion.
    3:30:14 Same as that, right?
    3:30:15 To be clear, that's the use today,
    3:30:16 right?
    3:30:17 And it’s a valid use, right?
    3:30:19 It’s a dopamine circuit, right?
    3:30:25 Now, that’s theoretically now very much restricted with the AI diffusion rules, which happened
    3:30:27 in the last week of the Biden admin.
    3:30:33 And Trump admin looks like they’re going to keep them, which limits allies, even Singapore.
    3:30:37 Which Singapore is 20% of NVIDIA’s, 20, 30% of NVIDIA’s revenue.
    3:30:41 But Singapore’s had a memoratorium on not building data centers for 15 years, because
    3:30:42 they don’t have enough power.
    3:30:43 So where are they going?
    3:30:44 Oh, yeah.
    3:30:47 I mean, I’m not claiming they’re all going to China, right?
    3:30:48 But a portion are…
    3:30:53 Many are going to Malaysia, including Microsoft and Oracle have big data centers in Malaysia.
    3:30:56 They’re going all over Southeast Asia, probably India as well, right?
    3:31:00 There’s stuff routing, but the diffusion rules are very de facto.
    3:31:04 You can only buy this many GPUs from this country, and you can only rent a cluster of
    3:31:06 this large to companies that are Chinese, right?
    3:31:10 They’re very explicit on trying to stop smuggling, right?
    3:31:17 And a big chunk of it was, “Hey, let a random company buy 16 servers, ship them to China,
    3:31:18 right?”
    3:31:25 Actually, I saw a photo from someone in the semiconductor industry who leads a team for
    3:31:30 networking chips that competes with NVIDIA, and he sent a photo of a guy checking into
    3:31:36 a first-class United flight from San Francisco to Shanghai or Shenzhen with a super micro
    3:31:41 box that was this big, which can only contain GPUs, right?
    3:31:45 And he was booking first-class, because think about it, 3 to 5K for your first-class ticket,
    3:31:51 the server costs $240,000 in the US, $250,000, you sell it for $300,000 in China, wait, you just got
    3:31:54 a free first-class ticket and a lot more money.
    3:31:57 So it’s like, you know, and that’s like small-scale smuggling.
    3:32:01 Most of the large-scale smuggling is like companies in Singapore and Malaysia, like
    3:32:04 routing them around or renting GPUs completely legally.
    3:32:05 I want to jump in.
    3:32:06 How much does this scale?
    3:32:10 I think there’s been some number, like some people that have higher-level economics understanding
    3:32:15 say that as you go from one billion of smuggling to 10 billion, it’s like you’re hiding certain
    3:32:18 levels of economic activity, and that’s the most reasonable thing to me, is that there’s
    3:32:23 going to be some level where it’s so obvious that it’s easier to find this economic activity.
    3:32:32 Yeah, so my belief is that last year, roughly, so NVIDIA made a million H20s, which are legally
    3:32:35 allowed to be shipped to China, which we talked about is better for reasoning, right, inference
    3:32:40 at least, not training, but reasoning inference, and inference generally.
    3:32:47 Then they also had a couple hundred thousand, we think like 200 to 300,000 GPUs were routed
    3:32:50 to China from, you know, Singapore, Malaysia, US, wherever.
    3:32:55 Companies spun up to buy 16 GPUs, 64 GPUs, whatever it is, routed, and Huawei is known
    3:32:59 for having spun up a massive network of companies to get the materials they need after they
    3:33:03 were banned in 2018, so it’s not like otherworldly, but I agree, right?
    3:33:07 Nathan’s point is like, hey, you can’t smuggle up $10 billion of GPUs.
    3:33:11 And then the third source, which is just now banned, which wasn’t considered smuggling,
    3:33:19 but is China is renting, I believe from our research, Oracle’s biggest GPU customer is
    3:33:21 ByteDance, right?
    3:33:24 And for Google, I think it’s their second biggest customer, right?
    3:33:27 And you go down the list of clouds, and especially these smaller cloud companies that aren’t
    3:33:30 like the hyperscalers, right?
    3:33:34 Think beyond CoreWeave and Lambda even, there’s a whole, there’s 60 different new cloud companies
    3:33:35 serving NVIDIA GPUs.
    3:33:38 I think ByteDance is renting a lot of these, right?
    3:33:39 All over, right?
    3:33:44 And so these companies are renting GPUs to Chinese companies, and that was completely
    3:33:48 legal up until the diffusion rules, which happened just a few weeks ago.
    3:33:54 And even now, you can rent GPU clusters that are less than 2,000 GPUs, or you can buy GPUs
    3:33:57 and ship them wherever you want if they’re less than 1,500 GPUs, right?
    3:34:02 So it’s like, there are still some ways to smuggle, but yeah, it’s not, as the numbers
    3:34:03 grow, right?
    3:34:07 A hundred-something billion dollars of revenue for NVIDIA last year, 200-something billion
    3:34:08 this year, right?
    3:34:14 And if next year, it could nearly double again, or more than double, based on what we see
    3:34:19 with data center footprints being built out all across the U.S. and the rest of the world,
    3:34:22 it’s going to be really hard for China to keep up with these rules, right?
    3:34:28 Yes, there will always be smuggling, and DeepSeek-level models, GPT-4-level models, o1-level
    3:34:32 models are capable of being trained on what China can get, even the next year above that.
    3:34:39 But if we speedrun a couple more jumps, right, to billion-dollar models, $10 billion models,
    3:34:44 then it becomes, hey, there is a compute disadvantage for China for training models and serving them.
    3:34:46 And the serving part is really critical, right?
    3:34:48 DeepSeek cannot serve their model today, right?
    3:34:51 It’s completely out of inventory.
    3:34:55 It’s already started falling in the app store, actually, downloads, because you download it,
    3:34:56 you try and sign up.
    3:34:58 They say we’re not taking registrations because they have no capacity, right?
    3:35:02 You open it up, you get like less than five tokens per second if you even get your request
    3:35:03 approved, right?
    3:35:06 Because there’s just no capacity, because they just don’t have enough GPUs to serve
    3:35:08 the model, even though it’s incredibly efficient.
    3:35:13 It would be fascinating to watch the smuggling, because, I mean, there’s drug smuggling, right?
    3:35:20 That’s a market, there’s weapons smuggling, and GPUs will surpass that at some point.
    3:35:25 Chips are highest value per kilogram, probably by far.
    3:35:27 I have another question for you, Dylan.
    3:35:31 Do you track model API access internationally?
    3:35:36 How easy is it for Chinese companies to use hosted model APIs from the US?
    3:35:38 Yeah, I mean, that’s incredibly easy, right?
    3:35:43 OpenAI publicly stated DeepSeek uses their API, and as they say, they have evidence, right?
    3:35:47 And this is another element of the training regime, is people at OpenAI have claimed that
    3:35:51 it’s a distilled model, i.e., you’re taking OpenAI’s model, you’re generating a lot of
    3:35:55 output, and then you’re training on the output in their model.
    3:35:57 And even if that's the case, what they did is still amazing, by the way, what DeepSeek
    3:35:58 did efficiency-wise.
    3:36:02 Distillation is standard practice in industry, whether or not, if you’re at a closed lab
    3:36:06 where you care about terms of service and IP closely, you distill from your own models.
    3:36:10 If you’re a researcher and you’re not building any products, you distill from the OpenAI
    3:36:11 models.
    3:36:12 This is a good opportunity.
    3:36:16 Can you explain big picture distillation as a process?
    3:36:17 What is distillation?
    3:36:18 What’s the process of distillation?
    3:36:20 We’ve talked a lot about training language models.
    3:36:24 They are trained on text, and post-training, you’re trying to train on very high-quality
    3:36:29 text that you want the model to match the features of, or if you’re using RL, you’re
    3:36:30 letting the model find its own thing.
    3:36:35 But for supervised fine-tuning, for preference data, you need to have some completions that
    3:36:37 the model is trying to learn to imitate.
    3:36:42 And what you do there is instead of a human data, or instead of the model you’re currently
    3:36:47 training, you take completions from a different, normally more powerful model.
    3:36:53 I think there’s rumors that these big models that people are waiting for, these GPT-5s
    3:36:58 of the world, the Claude 3 Opuses of the world, are used internally to do this distillation
    3:36:59 process at OpenAI.
    3:37:04 There’s also public examples, right, like Meta explicitly stated, not necessarily distilling,
    3:37:09 but they used 405B as a reward model for 70B in their Llama 3.2 or 3.3.
    3:37:11 This is all the same topic.
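For readers who want the mechanics, here is a minimal sketch of distillation as described here: collect completions from a stronger teacher model and use them as supervised fine-tuning targets for a smaller student. teacher_api and student_finetune are hypothetical placeholders, not a specific provider's interface.

```python
from typing import Callable, Dict, List

def build_distillation_dataset(
    prompts: List[str],
    teacher_api: Callable[[str], str],  # e.g. a call to a hosted, stronger model
) -> List[Dict[str, str]]:
    # Each teacher completion becomes a supervised target for the student.
    return [{"prompt": p, "completion": teacher_api(p)} for p in prompts]

def distill(
    prompts: List[str],
    teacher_api: Callable[[str], str],
    student_finetune: Callable[[List[Dict[str, str]]], None],
) -> None:
    dataset = build_distillation_dataset(prompts, teacher_api)
    # Standard supervised fine-tuning: the student learns to imitate the
    # teacher's completions instead of human-written ones.
    student_finetune(dataset)
```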
    3:37:15 So is this ethical, is this legal?
    3:37:22 Why does that Financial Times article headline say OpenAI says that there's evidence that
    3:37:26 China's DeepSeek used its model to train a competitor?
    3:37:30 This is a long, at least in the academic side and research side, it’s a long history
    3:37:32 because you’re trying to interpret OpenAI’s rule.
    3:37:36 OpenAI’s terms of service say that you cannot build a competitor with outputs from their
    3:37:37 model.
    3:37:42 Terms of service are different than a license, which are essentially a contract between organizations.
    3:37:46 So if you have a terms of service on OpenAI’s account, if I violate it, OpenAI can cancel
    3:37:47 my account.
    3:37:51 This is very different than a license that says how you could use a downstream artifact.
    3:37:54 So a lot of it hinges on a word that is very unclear in the AI space, which is what is
    3:37:55 a competitor.
    3:38:01 And then the ethical aspect of it is like, why is it unethical for me to train on your
    3:38:04 model when you can train on the internet’s text, right?
    3:38:12 So there’s a bit of a hypocrisy because OpenAI and potentially most of the companies trained
    3:38:14 on the internet’s text without permission.
    3:38:20 There’s also a clear loophole, which is that I generate data from OpenAI and then I upload
    3:38:25 it somewhere and then somebody else trains on it and the link has been broken.
    3:38:27 They’re not under the same terms of service contract.
    3:38:32 There’s a lot of hip hop, there’s a lot of to be discovered details that don’t make
    3:38:33 a lot of sense.
    3:38:38 This is why a lot of models today, even if they train on zero OpenAI data, you ask the
    3:38:42 model who trained you, it'll say, I am ChatGPT, trained by OpenAI, because there's
    3:38:47 so much copy paste of like OpenAI outputs from that on the internet that you just weren’t
    3:38:52 able to filter it out and there was nothing in the RL where they implemented like, hey,
    3:38:56 or post training or SFT, whatever that says, hey, I’m actually a model by Allen Institute
    3:38:58 instead of OpenAI.
    3:38:59 We have to do this if we serve a demo.
    3:39:04 We do research and we use OpenAI APIs because it’s useful and you want to understand post
    3:39:08 training and like our research models, they will say they’re written by OpenAI unless
    3:39:12 we put in the system prompt that we talked about, like, I am Tulu, I am a language model
    3:39:14 trained by the Allen Institute for AI.
    3:39:18 And if you ask more people around industry, especially with post training, it’s a very
    3:39:24 doable task to make the model say who it is or to suppress the OpenAI thing.
    3:39:28 So on some level, it might be that DeepSeek didn't care that it was saying that it was
    3:39:29 by OpenAI.
    3:39:32 Like if you’re going to upload model weights, it doesn’t really matter because anyone that’s
    3:39:37 serving it in an application and cares a lot about serving is going to, when serving it,
    3:39:40 if they’re using it for a specific task, they’re going to tailor it to that.
    3:39:42 And it doesn’t matter, but it’s saying it’s Chad GPT.
    3:39:46 Oh, I guess the one of the ways to do that is like a system prompt or something like
    3:39:47 that.
    3:39:49 Like if you’re serving it to say that you’re…
    3:39:50 That’s what we do.
    3:39:55 Like if we host the demo, you say you are Tulu 3, a language model trained by the Allen
    3:39:56 Institute for AI.
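Concretely, the fix being described is just a system prompt prepended to every request; here is a minimal sketch in the common chat-message format (the system text follows the quote above, the rest is illustrative).

```python
# The served model is told who it is on every request, which overrides the
# "I am ChatGPT" artifacts it may have absorbed from web data.
messages = [
    {
        "role": "system",
        "content": (
            "You are Tulu 3, a language model trained by the "
            "Allen Institute for AI."
        ),
    },
    {"role": "user", "content": "Who trained you?"},
]
```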
    3:40:00 We also are benefited from OpenAI data because it’s a great research tool.
    3:40:07 I mean, do you think there's any truth and value to the claim, OpenAI's claim that there's
    3:40:10 evidence that China's DeepSeek used its model to train?
    3:40:16 I think everyone has benefited regardless because the data’s on the internet.
    3:40:18 And therefore, it's in your pre-training now, right?
    3:40:23 There are like subreddits where people share the best ChatGPT outputs and those are in
    3:40:24 your model…
    3:40:26 I think that they’re trying to ship the narrative.
    3:40:28 They’re trying to protect themselves.
    3:40:32 And we saw this years ago when ByteDance was actually banned from some OpenAI APIs for training
    3:40:34 on outputs.
    3:40:39 There’s other AI startups that most people, if you’re in the AI culture, they just told
    3:40:43 us they trained on OpenAI outputs and they never got banned.
    3:40:45 That’s how they bootstrapped their early models.
    3:40:49 So it’s much easier to get off the ground using this than to set up human pipelines
    3:40:50 and build a strong model.
    3:40:54 So there’s a long history here and a lot of the communications are seen like narrative
    3:40:55 control.
    3:40:59 Actually, over the last couple of days, we've seen a lot of people distill DeepSeek's model
    3:41:04 into Llama models because the DeepSeek models are kind of complicated to run inference on
    3:41:08 because they’re a mixture of experts and they’re 600 plus billion parameters and all this.
    3:41:12 And people distilled them into the Llama models because the Llama models are so easy to serve
    3:41:16 and everyone's built the pipelines and tooling for inference with the Llama models because
    3:41:18 it’s the open standard.
    3:41:21 So we’ve seen a sort of roundabout, right?
    3:41:22 Is it bad?
    3:41:23 Is it illegal?
    3:41:24 Maybe it’s illegal, whatever.
    3:41:25 I don’t know about that.
    3:41:26 But it could break contracts.
    3:41:27 I don’t think it’s illegal.
    3:41:30 In any legal, no one’s going to jail for this.
    3:41:35 I think fundamentally, I think it’s ethical or I hope it’s ethical because the moment
    3:41:42 it becomes, we ban that kind of thing, it’s going to make everybody much worse off.
    3:41:48 And I also actually, this is difficult, but I think you should be allowed to train on
    3:41:49 the internet.
    3:41:52 I know a lot of authors and creators are very sensitive about it.
    3:41:54 That’s a difficult question.
    3:41:57 But the moment you’re not allowed to train on the internet.
    3:41:58 I agree.
    3:42:01 I have a schizo take on how you can solve this, because it already works.
    3:42:04 I have a reasonable take on it.
    3:42:10 So, you know, A, Japan has a law in which you're allowed to train on any training data and
    3:42:15 copyrights don't apply if you want to train a model. B, Japan has nine gigawatts of
    3:42:17 curtailed nuclear power.
    3:42:23 C, Japan is allowed under the AI diffusion rule to import as many GPUs as they'd like.
    3:42:25 So all we have to do, we have a market here to make.
    3:42:30 We build massive data centers, we rent them to the labs, and then we train models in a
    3:42:33 legally permissible way, and there’s no if, ands, or buts.
    3:42:38 And now, the models have no potential copyright lawsuit from New York Times or anything like
    3:42:39 that.
    3:42:40 No, no, it’s just completely legal.
    3:42:41 Genius.
    3:42:46 The early copyright lawsuits have fallen in the favor of AI training.
    3:42:53 I would say that the long tail of use is going to go on the side of AI, which is, if you scrape
    3:42:56 trillions of tokens of data, you're not looking at any individual piece of that data.
    3:43:01 You’re not looking and saying this one New York Times article is so important to me.
    3:43:05 But if you’re doing a audio generation for music or image generation, and you say make
    3:43:10 it in the style of X person, that’s a reasonable case where you could figure out what is their
    3:43:12 profit margin on inference.
    3:43:17 I don’t know if it’s going to be the 50/50 of YouTube creator program or something, but
    3:43:19 I would opt into that program as a writer.
    3:43:26 Please, it’s going to be a rough journey, but there will be some solutions like that
    3:43:27 that make sense.
    3:43:30 But there’s a long tail where it’s just on the Internet.
    3:43:36 I think one of the other aspects of that Financial Times article implied, and so that leads to
    3:43:37 a more general question.
    3:43:45 Do you think there’s how difficult is spying, espionage, and stealing of actual secret code
    3:43:48 and data from inside companies?
    3:43:49 How much of that is being attempted?
    3:43:52 Code and data is hard, but ideas is easy.
    3:43:58 Silicon Valley operates on the way that top employees get bought out by other companies
    3:43:59 for a pay raise.
    3:44:04 And a large reason why these companies do this is to bring ideas with them.
    3:44:05 There’s no…
    3:44:09 I mean, in California, there’s rules like certain non-competes or whatever are illegal
    3:44:10 in California.
    3:44:14 And whether or not there’s NDAs and things, that is how a lot of it happens.
    3:44:19 Recently, there was somebody from Gemini who helped make this one million context length,
    3:44:23 and everyone is saying the next Llama, I mean, he went to the Meta team, is going
    3:44:26 to have one million context length.
    3:44:29 And that’s kind of how the world works.
    3:44:34 As far as industrial espionage and things, that has been greatly successful in the past.
    3:44:39 The Americans did it to the Brits, the Chinese have done it to the Americans, and so on and so
    3:44:40 forth.
    3:44:43 It is a fact of life.
    3:44:48 And so to argue, industrial espionage can be stopped is probably unlikely, you can make
    3:44:49 it difficult.
    3:44:54 Even then, there’s all these stories about, “Hey, F35 and F22 have already been given
    3:44:57 to China in terms of design plans and stuff.”
    3:45:03 Code and stuff between, I say, companies, not nation states is probably very difficult.
    3:45:08 But ideas are discussed a lot, whether it be a house party in San Francisco, or a company
    3:45:15 changing employees, or the always the mythical honeypot that always gets talked about, like
    3:45:17 someone gets honeypotted.
    3:45:21 Because everyone working on AI is a single dude who’s in their 20s and 30s.
    3:45:25 Not everyone, but an insane amount of insane percentages.
    3:45:28 So there’s always all these like, and obviously–
    3:45:32 So a honeypotted is like a spy, a female spy approaches you and like–
    3:45:33 Yeah.
    3:45:36 Or male, right?
    3:45:37 It’s San Francisco, right?
    3:45:44 But as a single dude, I will say in his late 20s, we are very easily corrupted, right?
    3:45:47 Not corrupted myself, but you know, we are, we are, right?
    3:45:48 Everybody else, not me.
    3:45:49 Yeah, exactly.
    3:45:50 I’m too oblivious and I am not single.
    3:45:53 So I’m saved from one espionage access.
    3:45:59 Yeah, you have to make sure to close all security vulnerabilities.
    3:46:05 So you do collect a lot of information about each of the mega clusters for each of the
    3:46:08 major AI companies.
    3:46:12 Can you talk about the buildouts for each one that stand out?
    3:46:13 Yeah.
    3:46:17 I think the thing that’s like really important about these mega cluster buildouts is they’re
    3:46:20 completely unprecedented in scale, right?
    3:46:24 US, you know, sort of like data center power consumption has been slowly on the rise and
    3:46:29 it’s gone up to 2%, 3% even through the cloud computing revolution, right?
    3:46:32 Data center consumption as a percentage of total US.
    3:46:34 And that’s been over decades, right, of data centers, et cetera.
    3:46:36 It’s been climbing, climbing slowly.
    3:46:41 But now, from 2% to 3%, by the end of this decade, it's like, even when
    3:46:47 I say like 10%, by like 2028, 2030, a lot of people that are traditional
    3:46:51 data center people are like, that's nuts.
    3:46:54 But then people who are in AI, who have really looked at this, like the
    3:46:58 Anthropics and OpenAIs, are like, that's not enough, okay?
    3:47:04 But like, you know, this is this is both through globally distributed or distributed throughout
    3:47:07 the US as well as like centralized clusters, right?
    3:47:10 The distributed throughout the US is exciting and it’s the bulk of it, right?
    3:47:17 Like, hey, you know, OpenAI or, you know, say Meta's adding a gigawatt, right?
    3:47:20 But most of it is distributed through the US for inference and all these other things,
    3:47:21 right?
    3:47:24 So maybe we should lay out what a cluster is.
    3:47:28 So, you know, does this include AWS?
    3:47:32 Maybe it’s good to talk about the different kinds of clusters and what you mean by megaclusters
    3:47:36 and what’s the GPU and what’s the computer and what is not that far back.
    3:47:37 But yeah.
    3:47:39 So like, what do we mean by the clusters?
    3:47:41 No, man, I thought I was about to do the Apple ad, right?
    3:47:43 What’s a computer?
    3:47:49 So, so traditionally data centers and data center tasks have been a distributed systems
    3:47:54 problem that is capable of being spread very far and widely, right?
    3:48:00 I send a request to Google, it gets routed to a data center somewhat close to me.
    3:48:05 It does whatever search ranking recommendation sends a result back, right?
    3:48:09 The nature of the task is changing rapidly in that the task, there’s two tasks that people
    3:48:10 are really focused on now, right?
    3:48:12 It’s not database access.
    3:48:14 It’s not serve me the right page, serve me the right ad.
    3:48:20 It’s now a inference and inference is dramatically different from traditional distributed systems,
    3:48:22 but it looks a lot more simple, similar.
    3:48:24 And then there’s training, right?
    3:48:28 The inference side is still like, hey, I'm going to put, you know, thousands of GPUs
    3:48:33 and, you know, blocks all around these data centers, I’m going to run models on them,
    3:48:37 you know, user submits a request, gets kicked off, or hey, my service, you know, they submit
    3:48:38 a request to my service, right?
    3:48:41 They’re on Word and they’re like, oh yeah, help me copilot and it kicks it off or I’m
    3:48:45 on my windows, copilot, whatever, Apple intelligence, whatever it is, it gets kicked off to a data
    3:48:46 center, right?
    3:48:51 And that data center does some work and sends it back, that’s inference, that is going to
    3:48:55 be the bulk of compute, but then, you know, and that’s like, you know, there’s thousands
    3:48:59 of data centers that we’re tracking with like satellites and like all these other things.
    3:49:01 And those are the bulk of what’s being built.
    3:49:05 And so that’s what’s really reshaping things and that’s what’s getting
    3:49:11 millions of GPUs, but the scale of the largest cluster is also really important, right?
    3:49:17 When we look back at history, right, like, you know, or through the age of AI, right?
    3:49:22 Like it was a really big deal when they did AlexNet on, I think, two GPUs or four GPUs?
    3:49:23 I don’t remember.
    3:49:24 It was a really big deal.
    3:49:25 It’s a big deal because you use GPUs.
    3:49:29 It’s a big deal to use GPUs and they use multiple, right?
    3:49:32 But then over time, its scale has just been compounding, right?
    3:49:40 And so when you skip forward to GPT-3, then GPT-4 — GPT-4 was 20,000 A100 GPUs, an unprecedented
    3:49:44 run, right, in terms of the size and the cost, right, a couple hundred million dollars on
    3:49:48 a YOLO, right, a YOLO run for GPT-4, and it yielded, you know, this magical improvement
    3:49:53 that was perfectly in line with what the smaller experiments predicted on a log scale, right?
    3:49:55 Oh yeah, they have that plot from the paper.
    3:49:56 The technical report.
    3:49:58 The scaling laws were perfect, right?
    3:50:00 But that’s not a crazy number, right?
    3:50:05 20,000 A100s, roughly each GPU is consuming 400 watts.
    3:50:09 And then when you add in the whole server, right, everything, it’s like 15 to 20 megawatts
    3:50:11 of power, right?
    3:50:15 You know, maybe you could look up what the power consumption of a human is,
    3:50:19 because the numbers are going to get silly, but like 15 to 20 megawatts was a standard data
    3:50:20 center size.
    3:50:21 It was just unprecedented.
    3:50:22 That was all GPUs running at one time.
    3:50:23 20 watts was a toaster.
    3:50:24 Yeah.
    3:50:29 A toaster is like a similar power consumption to an A100, right?
    3:50:34 H100 comes around, they increase the power from like 400 to 700 watts and that’s just
    3:50:36 per GPU and then there’s all the associated stuff around it.
    3:50:40 So once you count all that, it’s roughly like 1200 to 1400 watts.
    3:50:43 For everything, networking, CPUs, memory, blah, blah, blah.
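    A rough back-of-the-envelope sketch of the per-GPU numbers just quoted — the overhead multiplier is an assumption picked to land near the stated 1,200–1,400 W all-in figure, not a measured value:

    ```python
    # Approximate per-GPU power, chip plus everything around it (illustrative numbers)
    a100_chip_w = 400   # A100 chip power, as quoted above
    h100_chip_w = 700   # H100 chip power, as quoted above
    overhead = 2.0      # assumed multiplier for CPUs, memory, networking, fans, etc.

    print(f"A100 all-in: ~{a100_chip_w * overhead:.0f} W")  # ~800 W
    print(f"H100 all-in: ~{h100_chip_w * overhead:.0f} W")  # ~1400 W, near the quoted 1200-1400 W
    ```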
    3:50:46 So we should also say, so what’s required?
    3:50:53 You said power, so a lot of power is required, a lot of heat is generated, cooling is required
    3:50:58 and because there’s a lot of GPUs that have to be or CPUs or whatever, they have to be
    3:50:59 connected.
    3:51:00 So there’s a lot of networking.
    3:51:01 Yeah.
    3:51:02 Right.
    3:51:03 Yeah.
    3:51:04 So I think, yeah.
    3:51:05 Sorry for skipping past that.
    3:51:06 And then the data center itself is like complicated, right?
    3:51:10 But these are still standard sized data centers for GPT-4 scale, right?
    3:51:16 Now we step forward to sort of what is the scale of clusters that people built last year,
    3:51:17 right?
    3:51:18 And it ranges widely, right?
    3:51:22 It ranges from like, hey, these are standard data centers and we’re just using multiple
    3:51:25 of them and connecting them together really with a ton of fiber between them, a lot of
    3:51:27 networking, et cetera.
    3:51:29 That’s what OpenAI and Microsoft did in Arizona, right?
    3:51:31 And so they have a, you know, 100,000 GPUs, right?
    3:51:32 Meta, similar thing.
    3:51:36 They took their standard existing data center design and it looks like an H and they connected
    3:51:39 multiple them together.
    3:51:44 And you know, they first did 16,000 GPUs — 24,000 GPUs total.
    3:51:46 Only 16,000 of them were running on the training run because GPUs are very
    3:51:47 unreliable.
    3:51:51 So they needed to have spares to swap in and out, all the way to like now 100,000
    3:51:54 GPUs that they’re training Llama 4 on currently, right?
    3:51:56 Like 128,000 or so, right?
    3:52:03 This is, you know, think about 100,000 GPUs with roughly 1,400 watts apiece.
    3:52:05 That’s 140 megawatts, 150 megawatts, right?
    3:52:07 For 128,000, right?
    3:52:11 So you’re talking about, you’ve jumped from 15 to 20 megawatts to 10x, you know, almost
    3:52:17 10x that number, 9x that number to 150 megawatts in two years, right?
    3:52:19 From 2022 to 2024, right?
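    The same rough math at the cluster level reproduces the jump described here; the per-GPU wattages are the approximate all-in figures from above, so treat the totals as order-of-magnitude only:

    ```python
    # Cluster-level power, using approximate all-in watts per GPU
    gpt4_era_w = 20_000 * 800      # ~20k A100s at ~800 W all-in
    era_2024_w = 100_000 * 1_400   # ~100k H100s at ~1.4 kW all-in

    print(f"GPT-4-era cluster: ~{gpt4_era_w / 1e6:.0f} MW")  # ~16 MW
    print(f"2024-era cluster:  ~{era_2024_w / 1e6:.0f} MW")  # ~140 MW, roughly a 9x jump
    ```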
    3:52:23 And some people like Elon, he admittedly, right, and he says himself got into the game
    3:52:26 a little bit late for pre-training large language models, right?
    3:52:27 XAI was started later, right?
    3:52:32 But then he bet heaven and hell to get his data center up and get the largest cluster
    3:52:33 in the world, right?
    3:52:35 Which is 200,000 GPUs.
    3:52:36 And he did that.
    3:52:39 He bought a factory in Memphis.
    3:52:42 He’s upgrading the substation, but at the same time he’s got a bunch of mobile power
    3:52:45 generation, a bunch of single-cycle gas generation.
    3:52:48 He tapped the natural gas line that’s right next to the factory and he’s just pulling a
    3:52:50 ton of gas, burning gas.
    3:52:52 He’s generating all this power.
    3:52:56 He’s in a factory, in an old appliance factory that’s shut down and moved to China long ago,
    3:52:57 right?
    3:53:00 And he’s got 200,000 GPUs in it.
    3:53:01 And now what’s the next scale, right?
    3:53:02 All the hyperscalers have done this.
    3:53:06 Now the next scale is something that’s even bigger, right?
    3:53:10 And so, you know, Elon, just to stick on the topic, he’s building his own natural gas plant,
    3:53:13 like a proper one right next door.
    3:53:18 He’s deploying tons of Tesla Mega Pack batteries to make the power more smooth and all sorts
    3:53:19 of other things.
    3:53:23 He’s got like industrial chillers to cool the water down because he’s water cooling the
    3:53:24 chips.
    3:53:28 So, all these crazy things to get the clusters bigger and bigger.
    3:53:34 But when you look at, like, say, what OpenAI did with Stargate — that’s not the Arizona one,
    3:53:36 that’s in Abilene, Texas, right?
    3:53:38 What they’ve announced at least, right?
    3:53:39 It’s not built, right?
    3:53:40 Elon says they don’t have the money.
    3:53:42 You know, there’s some debates about this.
    3:53:46 But at full scale, at least the first section is like definitely money’s accounted for,
    3:53:47 but there’s multiple sections.
    3:53:52 But at full scale, that data center is going to be 2.2 gigawatts, right, 2200 megawatts
    3:53:59 of power in and roughly like 1.8 gigawatts or 1800 megawatts, yeah, 1800 megawatts of
    3:54:01 power delivered to chips, right?
    3:54:06 Now, this is an absurd scale, 2.2 gigawatts is like more than most cities, right, you
    3:54:13 know, to be clear, delivered to a single cluster that’s connected to do training, right?
    3:54:16 To train these models, to do both the pre-training, the post-training, all of this stuff, right?
    3:54:17 This is insane.
    3:54:18 This is insane.
    3:54:20 This is a nuclear power plant again.
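    To put the gigawatt figures in perspective, a small sketch — the facility numbers are the ones stated in the conversation, while the ~2 kW all-in per next-generation GPU is an assumption:

    ```python
    # Stargate-scale arithmetic (facility power from the conversation; per-GPU power assumed)
    power_to_chips_w = 1.8e9   # ~1,800 MW delivered to chips at full build-out
    per_gpu_w = 2_000          # assumed ~2 kW all-in per next-gen GPU

    print(f"~{power_to_chips_w / per_gpu_w / 1e6:.1f}M GPUs supportable")  # ~0.9M GPUs
    # For reference, a single large nuclear reactor unit produces on the order of 1 GW.
    ```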
    3:54:21 And everyone is doing this, right?
    3:54:24 Meta in Louisiana, right?
    3:54:29 They’re building two natural gas plants, massive ones, and then they’re building this massive
    3:54:31 data center.
    3:54:37 Amazon has like plans for this scale, Google has plans for this scale, XAI has plans for
    3:54:38 this scale, right?
    3:54:42 Like all of these, the guys that are racing, the companies that are racing are racing hard
    3:54:46 and they’re doing multi-gigawatt data centers, right?
    3:54:52 You build this out because they think that, yeah, if I now have, you know, obviously pre-training
    3:54:55 scaling is going to continue, but to some extent, but then also all this post-training
    3:54:58 stuff where you have an RL sandbox for computer use or whatever, right?
    3:55:01 Like, you know, this is where they’re going to, and all these variable domains where they
    3:55:06 just keep learning and learning and learning, self-play, whatever it is, makes the AI so
    3:55:09 much more capable because the line does go up, right?
    3:55:11 As you throw more compute, you get more performance.
    3:55:15 The shirt is about scaling laws, you know, to some extent it is diminishing returns, right?
    3:55:18 You 10x the compute, you don’t get 10x better model, right?
    3:55:21 You get a diminishing returns, but also you get efficiency improvements, so you bend the
    3:55:23 curve, right?
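    A purely illustrative way to see that diminishing-returns point — the power-law form is the usual scaling-law shape, but the constants below are invented, not fitted to any real model:

    ```python
    # Toy scaling law: loss(C) = a * C**(-alpha), with made-up constants
    a, alpha = 10.0, 0.05

    for compute in [1e21, 1e22, 1e23]:  # each entry is 10x the previous
        print(f"compute {compute:.0e} -> loss {a * compute ** -alpha:.3f}")
    # Each 10x in compute only shaves ~11% off the loss here; efficiency improvements
    # (better data, architectures, algorithms) are what actually bend the curve.
    ```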
    3:55:27 And data centers at this scale are, you know, wreaking a lot of
    3:55:29 havoc on the power network, right?
    3:55:33 And, you know, Nathan was mentioning that Amazon has tried to buy this nuclear power
    3:55:38 plant, Talen’s, and if you look at the Talen stock, it’s just skyrocketing, and, you
    3:55:41 know, they’re building a massive multi-gigawatt data center there, and, you know, you just
    3:55:44 go down the list, there’s so many ramifications.
    3:55:49 One thing is like certain regions of the U.S. transmitting power cost more than actually
    3:55:51 generating it, right?
    3:55:55 Because the grid is so slow to build, and the demand for power and the ability to build
    3:55:59 power and like re-ramping on a natural gas plant or even a coal plant is like easy enough
    3:56:01 to do, but like transmitting the power is really hard.
    3:56:06 So in some parts of the U.S., like in Virginia, it costs more to transmit power than it costs
    3:56:09 to generate it, which is like, you know, there’s all sorts of like second order effects that
    3:56:10 are insane here.
    3:56:13 Can the power grid support this kind of growth?
    3:56:16 You know, Trump’s executive orders, there’s a, there’s a Biden executive order before
    3:56:21 the end of the year, but then Trump had some more executive orders, which hopefully reduced
    3:56:26 the regulations to where, yes, things can be built, but yeah, this is a big, big challenge,
    3:56:27 right?
    3:56:28 Is building enough power fast enough?
    3:56:32 Are you going to basically have a nuclear power plant next to a data center for each
    3:56:33 one of these?
    3:56:38 So the fun thing here is that it’s too slow — to build a power
    3:56:42 plant or to reconfigure an existing power plant is too slow.
    3:56:46 And so therefore you must use natural gas, because data center power consumption is flat, right?
    3:56:47 It’s constant, right?
    3:56:49 Which is why nuclear is also good for it.
    3:56:55 Like long term, nuclear is a very natural fit, but you can’t do solar or anything like
    3:56:57 that in the short term.
    3:56:58 Because data center demand is flat like this, right?
    3:57:03 Like you’re telling me, you know, I’m going to buy tens of billions of dollars of GPUs
    3:57:04 and idle them because the power is not being generated.
    3:57:05 Like power is cheap, right?
    3:57:10 Like if you look at the cost of a cluster, less than 20% of it is power, right?
    3:57:14 Most of it is the capital cost and depreciation of the GPUs, right?
    3:57:15 And so it’s like, well, screw it.
    3:57:17 I’ll just like, you know, I’ll just build natural gas plants.
    3:57:17 This is what Meta’s doing in Louisiana.
    3:57:22 This is what OpenAI is doing in Texas and like all these different places.
    3:57:25 They may not be doing it directly, but they are partnered with someone.
    3:57:28 And so there is a couple of hopes, right?
    3:57:32 Like one is, you know, and Elon, what he’s doing in Memphis is like, you know, to the
    3:57:36 extreme, they’re not just using dual combined cycle gas, which is like super efficient.
    3:57:40 He’s also just using single cycle and like mobile generators and stuff, which is less
    3:57:41 efficient.
    3:57:45 But, you know, there’s also the flip side, which is that solar power generation
    3:57:49 has one profile and wind has a different, partly uncorrelated one.
    3:57:53 So if you stack both of those, plus you get a big chunk of batteries, plus you have a
    3:57:56 little bit of gas, it is possible to run it more green.
    3:57:59 It’s just the time scales for that is slow, right?
    3:58:04 So people are trying, but, you know, Meta basically said, whatever, don’t care about
    3:58:08 my sustainability pledge — or they’ll buy what’s called a PPA, a power purchase
    3:58:12 agreement, where there’ll be a massive wind farm or solar farm, like wherever.
    3:58:15 And then they’ll just pretend like those electrons are being consumed by the data center.
    3:58:18 But in reality, they’re paying for the power over there and selling it to the grid, and they’re
    3:58:20 buying power here.
    3:58:24 And then another thing is like Microsoft quit on some of their sustainability pledges, right?
    3:58:29 Elon, he, what he did with Memphis is objectively somewhat dirty, but he’s also doing it in an
    3:58:34 area where there’s like a bigger natural gas plant right next door and like a sewer next
    3:58:37 or not a sewer, but like a wastewater treatment and a garbage dump nearby, right?
    3:58:41 And he’s obviously made the world a lot cleaner than that one data center is going to
    3:58:42 make it dirty, right?
    3:58:47 So I think like it’s fine to some extent, and maybe AGI solves, you know, global warming
    3:58:48 and stuff, right?
    3:58:51 Whatever it is, you know, this is, this is sort of the attitude that people at the labs
    3:58:52 have, right?
    3:58:53 Which is like, yeah, it’s great.
    3:58:54 We’ll just use gas, right?
    3:58:58 Because the race is that important and if we lose, you know, that’s way worse, right?
    3:59:05 I should say that I got to visit the Memphis data center and it’s kind of incredible.
    3:59:11 I mean, I visited with Elon, and just the teams and the rate of innovation
    3:59:12 there is insane.
    3:59:18 Because my sense is that, you know, nobody’s ever done anything of this scale and nobody
    3:59:23 has certainly ever done anything of this scale at the rate that XAI is doing.
    3:59:28 So they’re figuring it out — I mean, I was sitting in all these meetings where they’re
    3:59:29 brainstorming.
    3:59:31 It’s like, it’s insane.
    3:59:32 It’s exciting.
    3:59:35 Because they’re like, they’re trying to figure out what the bottlenecks are, how to remove
    3:59:39 the bottlenecks, how to make sure that, you know, there’s just so many really cool things
    3:59:46 about putting together a data center because, you know, everything has to work.
    3:59:51 It’s the people that do the sysadmin work, you know, the machine learning — all of that is
    3:59:52 the exciting stuff, and so on.
    3:59:59 But really the people that run everything are the folks that know like the low level software
    4:00:02 and hardware that runs everything, the networking, all of that.
    4:00:06 And so you have to like make sure you have procedures that test everything.
    4:00:07 I think they’re using Ethernet.
    4:00:12 I don’t know how they’re doing the networking, but they’re using NVIDIA Spectrum X Ethernet.
    4:00:16 There’s actually like, I think, yeah, the unsung heroes are the cooling and electrical
    4:00:18 systems, which are just like glossed over.
    4:00:19 Yeah.
    4:00:24 But I think like, like one story that maybe is like exemplifies how insane this stuff
    4:00:29 is, is when you’re training, right, you’re always doing, you’re running through the model
    4:00:32 a bunch, right, in the most simplistic terms, running through the model a bunch.
    4:00:37 And then you’re going to exchange everything and synchronize the weights, right?
    4:00:38 So you’ll do a step.
    4:00:40 This is like a step in model training, right?
    4:00:42 At every step, your loss goes down, hopefully, and it doesn’t always.
    4:00:46 But in the simplest terms, you’ll be computing a lot and then you’ll exchange, right?
    4:00:49 The interesting thing is GPU power is most of it.
    4:00:50 Networking power is some, but it’s a lot less.
    4:00:53 But so while you’re computing, your power for your GPUs is here.
    4:00:57 But then when you’re exchanging weights, if you’re not able to overlap communications
    4:01:01 and compute perfectly, there may be a time period where your GPUs are just idle and you’re
    4:01:04 exchanging weights and you’re like, hey, the model’s updating.
    4:01:07 So you’re exchanging the gradients, you do the model update, and then you start training
    4:01:08 again.
    4:01:10 So the power goes, right?
    4:01:11 And it’s super spiky.
    4:01:16 And so funnily enough, right, like this, when you talk about the scale of data center power,
    4:01:17 right?
    4:01:19 You can blow stuff up so easily.
    4:01:25 And so Meta actually accidentally upstreamed something into PyTorch where they added
    4:01:28 an operator, and I kid you not, whoever made this, like I want to hug the guy, because it’s
    4:01:35 like PyTorch.powerplantNoBlowUp, equal zero or equal one.
    4:01:38 And what it does, what it does is amazing, right?
    4:01:42 Either, you know, when you’re exchanging the weights, the GPU will just compute fake
    4:01:44 numbers so the power doesn’t spike too much.
    4:01:48 And so then the power plants don’t blow up because the transient spikes screw stuff up.
    4:01:49 Well, that makes sense.
    4:01:51 I mean, you have to do that kind of thing.
    4:01:53 You have to make sure they’re not idle, yeah.
    4:01:56 And Elon’s solution was like, let me throw a bunch of Tesla mega packs and a few other
    4:01:57 things, right?
    4:02:01 Like everyone has different solutions, but like Meta’s at least was publicly and openly
    4:02:05 known, which is just like, set this operator, and what this operator does is it just makes
    4:02:08 the GPUs compute nothing so that the power doesn’t spike.
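    A conceptual sketch of that trick — this is not Meta’s actual PyTorch code, just an illustration of overlapping an asynchronous gradient all-reduce with throwaway matmuls so the power draw stays flat instead of cratering and spiking; the function name and sizes are invented:

    ```python
    # Illustrative only: keep GPUs busy on discarded work while gradients synchronize.
    # Assumes torch.distributed.init_process_group(...) has already been called.
    import torch
    import torch.distributed as dist

    def exchange_gradients(params, smooth_power=True, ballast_dim=8192):
        # Kick off asynchronous all-reduces for every gradient tensor.
        handles = [dist.all_reduce(p.grad, async_op=True) for p in params]
        if smooth_power:
            ballast = torch.randn(ballast_dim, ballast_dim, device="cuda")
            # Burn FLOPs on throwaway matmuls until communication finishes;
            # the results are discarded, this is purely a power ballast.
            while not all(h.is_completed() for h in handles):
                torch.matmul(ballast, ballast)
        for h in handles:
            h.wait()
    ```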
    4:02:11 But that just tells you how much power you’re working with.
    4:02:12 I mean, it’s insane.
    4:02:13 It’s insane.
    4:02:18 You can almost just go to Google, like, what does X watts do, and go through all
    4:02:21 the scales from one watt to a kilowatt to a megawatt.
    4:02:26 And you look and stare at that and you realize how high on the list a gigawatt is, and it’s
    4:02:27 mind-blowing.
    4:02:30 Can you say something about the cooling?
    4:02:37 So I know Elon’s using liquid cooling, I believe in all cases, that’s a new thing,
    4:02:38 right?
    4:02:39 Most of them don’t use liquid cooling.
    4:02:41 Is there something interesting to say about the cooling?
    4:02:42 Yeah, yeah.
    4:02:46 Air cooling has been the de facto standard — throw a bunch of metal, heat pipes, et cetera,
    4:02:47 and fans at it, right?
    4:02:48 And that keeps it cold.
    4:02:50 That’s been enough to cool it.
    4:02:55 People have been dabbling in water cooling, Google’s TPUs are water cooled, right?
    4:02:58 So they’ve been doing that for a few years.
    4:03:01 But with GPUs, no one’s ever done, and no one’s ever done the scale of water cooling
    4:03:04 that Elon just did, right?
    4:03:09 Now for Nvidia’s next generation, for the highest-end GPU, water cooling is mandatory.
    4:03:10 You have to water cool it.
    4:03:14 So Elon did it on this current generation, and that required a lot of stuff, right?
    4:03:19 If you look at some of the satellite photos and stuff of the Memphis facility, there’s
    4:03:22 all these external water chillers that are sitting basically.
    4:03:26 It looks like a semi-truck pod thing, what’s it called, the container.
    4:03:29 But really those are water chillers, and he has like 90 of those water chillers just sitting
    4:03:30 outside.
    4:03:35 90 different containers, right, that chill the water, bring it back to the data center,
    4:03:38 and then you distribute it to all the chips, pull all the heat out, and then send it back,
    4:03:39 right?
    4:03:44 So it’s both a way to cool the chips, but also an efficiency thing, all right?
    4:03:50 And going back to that sort of three-vector thing, right, there is memory bandwidth, FLOPS,
    4:03:51 and interconnect.
    4:03:56 The closer the chips are together, the easier it is to do high-speed interconnects, right?
    4:04:00 And so this is also like a reason why you’re going to go water cooling is because you can
    4:04:06 just put the chips right next to each other, and therefore get higher speed connectivity.
    4:04:14 I got to ask you, so in one of your recent posts, there’s a section called Cluster Measuring
    4:04:17 Contest, so…
    4:04:21 There’s another word there, but I won’t say it, you know?
    4:04:25 What, who’s got the biggest now, and who’s going to have the biggest?
    4:04:29 Today, individual largest is Elon, right?
    4:04:30 Right.
    4:04:31 Elon’s cluster.
    4:04:34 Elon’s cluster in Memphis, 200,000 GPUs, right?
    4:04:39 Meta has like 128,000, OpenAI has 100,000, now to be clear, other companies have more
    4:04:42 GPUs than Elon, they just don’t have them in one place, right?
    4:04:44 And for training, you want them tightly connected.
    4:04:50 There’s some techniques that people are researching and working on that let you train across multiple
    4:04:54 regions, but for the most part, you want them all in like one area, right?
    4:04:57 So you can connect them highly with high-speed networking.
    4:05:04 And so, you know, Elon today has 200,000 GPUs — 100,000 H100s and 100,000 H200s, right?
    4:05:11 Meta, Open AI, you know, and Amazon all have on the scale of 100,000, a little bit less.
    4:05:14 But next, this year, right, this year, people are building much more, right?
    4:05:19 Anthropic and Amazon are building a cluster of 400,000 Trainium 2, which is Amazon’s own
    4:05:22 chip, trying to get away from Nvidia, right?
    4:05:27 You know, Meta and OpenAI have plans for hundreds of thousands.
    4:05:33 But by next year, you’ll have like 500,000 to 700,000 GPU clusters, and note those GPUs
    4:05:36 are much higher power consumption than existing ones, right?
    4:05:40 Hopper 700 watts, Blackwell goes to 1200 watts, right?
    4:05:44 So the power per chip is growing and the number of chips is growing, right?
    4:05:45 Nuts.
    4:05:48 You think Elon said he’ll get to a million.
    4:05:50 You think that’s actually feasible?
    4:05:53 I mean, I don’t doubt Elon, right?
    4:05:57 The filings that he has for like, you know, the power plant and the Tesla battery packs,
    4:06:00 it’s clear he has some crazy plans for Memphis.
    4:06:03 Like permits and stuff is open record, right?
    4:06:07 But it’s not quite clear that, you know, what and what the time scales are.
    4:06:09 I just never doubt Elon, right?
    4:06:10 You know, that’s, he’s going to surprise us.
    4:06:12 So what’s the idea with these clusters?
    4:06:18 If you have a million GPUs, what percentage in, let’s say, two, three years is used for
    4:06:25 training and what percent, pre-training and what percent is used for like, for the actual
    4:06:26 computation?
    4:06:28 So these mega clusters make no sense for inference, right?
    4:06:31 You could route inference there and just not train.
    4:06:35 But most of the inference capacity is being, you know, hey, I’ve got a 30 megawatt data
    4:06:36 center here.
    4:06:37 I’ve got 50 megawatts here.
    4:06:38 I’ve got a hundred here, whatever.
    4:06:43 I’ll just throw inference in all of those because the mega clusters, right, multi gigawatt
    4:06:47 data centers, I want to train there because that’s where all of my GPUs are co-located
    4:06:51 where I can put them at a super high networking speed connected together, right?
    4:06:52 Because that’s what you need for training.
    4:06:55 Now with pre-training, this is the old scale, right?
    4:06:59 You could increase parameters, you’d increase data, model gets better.
    4:07:03 That doesn’t apply anymore because there’s not much more data in the pre-training side,
    4:07:04 right?
    4:07:08 Yes, there’s video and audio and image that has not been fully taken advantage of.
    4:07:09 So there’s a lot more scaling.
    4:07:14 But a lot of people like, have taken transcripts of YouTube videos and that gets you a lot
    4:07:15 of the data.
    4:07:17 It doesn’t get you all of the learning value out of the video and image data.
    4:07:20 But, you know, there’s still scaling to be done on pre-training.
    4:07:24 This post-training world is where all the flops are going to be spent, right?
    4:07:27 The model is going to play with itself, it’s going to self-play, it’s going to do verifiable
    4:07:32 tasks, it’s going to do computer use in sandboxes, it might even do simulated robotics things,
    4:07:33 right?
    4:07:39 All of these things are going to be environments where compute is spent in quote unquote post-training.
    4:07:42 But I think it’s going to be good, we’re going to drop the post from post-training.
    4:07:43 Yeah.
    4:07:48 It’s going to be pre-training and it’s going to be training, I think, at some point.
    4:07:54 Because for the bulk of the last few years, pre-training has dwarfed post-training.
    4:07:59 But with these verifiable methods, especially ones that scale potentially infinitely, like
    4:08:04 computer use in robotics, not just math encoding, right, where you can verify what’s happening,
    4:08:07 those infinitely verifiable tasks, it seems you can spend as much compute as you want
    4:08:08 on them.
    4:08:09 Especially at the context length increase.
    4:08:13 Because the end of pre-training is when you increase the context length for these models.
    4:08:17 And we’ve talked earlier in the conversation about how the context length, when you have
    4:08:20 a long input, is much easier to manage than output.
    4:08:25 And a lot of these post-training and reasoning techniques rely on a ton of sampling and it’s
    4:08:27 becoming increasingly long context.
    4:08:31 So it’s just like you’re, effectively, your compute efficiency goes down.
    4:08:36 I don’t, I think FLOPs is the standard for how you measure it, but with RL and you have
    4:08:40 to do all these things where you move your weights around in a different way than at
    4:08:46 pre-training and just generation, it’s going to become less efficient and FLOPs is going
    4:08:48 to be less of a useful term.
    4:08:51 And then as the infrastructure gets better, it’s probably going to go back to FLOPs.
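    A rough, illustrative model of why long generations hurt compute efficiency — each decoded token re-reads the weights plus a KV cache that grows with context, so decoding is bandwidth-bound rather than FLOP-bound; the model dimensions below are hypothetical:

    ```python
    # Illustrative decode cost: bytes moved per generated token for a hypothetical dense model
    def decode_step_bytes(n_params, n_layers, d_model, context_len,
                          bytes_per_param=2, kv_bytes=2):
        weight_bytes = n_params * bytes_per_param                          # re-read all weights
        kv_cache_bytes = 2 * n_layers * d_model * context_len * kv_bytes  # keys and values
        return weight_bytes + kv_cache_bytes

    # Hypothetical ~70B-parameter model at 32k context (no grouped-query attention assumed)
    per_token = decode_step_bytes(70e9, n_layers=80, d_model=8192, context_len=32_768)
    print(f"~{per_token / 1e9:.0f} GB moved per generated token")  # hundreds of GB
    ```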
    4:08:56 So all of the things we’ve been talking about is most likely going to be NVIDIA, right?
    4:08:57 Is there any competitors?
    4:09:00 Google, Google, I kind of ignored them.
    4:09:02 Yeah, what’s the story with TPU?
    4:09:03 What’s the story with TPU?
    4:09:04 Like, what’s the…
    4:09:06 TPU is awesome, right?
    4:09:07 It’s great.
    4:09:11 Google is, they’re a bit more tepid on building data centers for some reason.
    4:09:12 They’re building big data centers.
    4:09:13 Don’t get me wrong.
    4:09:17 They actually have the biggest cluster, I was talking about NVIDIA clusters.
    4:09:20 They actually have the biggest cluster, period.
    4:09:23 But the way they do it is very interesting, right?
    4:09:26 They have two sort of data center super regions, right?
    4:09:29 In that the data center isn’t physically, like all of the GPUs aren’t physically on
    4:09:33 one site, but they’re like 30 miles from each other, not GPUs, TPUs, right?
    4:09:37 They have like in Iowa and Nebraska, they have four data centers that are just like right
    4:09:38 next to each other.
    4:09:42 Why doesn’t Google flex its cluster size more often?
    4:09:43 Go to multi data center training.
    4:09:46 There’s some good images in there, so I’ll show you what I mean.
    4:09:49 It’s the SemiAnalysis multi data center article.
    4:09:52 So this is like, you know, so this is an image of like what a standard Google data center
    4:09:53 looks like.
    4:09:56 By the way, their data centers look very different than anyone else’s data centers.
    4:09:57 What are we looking at here?
    4:10:00 So these are, yeah, so if you see this image, right?
    4:10:02 In the center, there are these big rectangular boxes, right?
    4:10:05 Those are where the actual chips are kept.
    4:10:10 And then if you scroll down a little bit further, you can see there’s like these water pipes,
    4:10:14 there’s these chiller cooling towers in the top and a bunch of like diesel generators.
    4:10:16 The diesel generators are backup power.
    4:10:21 The data center itself looks physically smaller than the water chillers, right?
    4:10:25 So the chips are actually easier to like keep together, but then like cooling all the water
    4:10:27 for the water cooling is very difficult, right?
    4:10:32 So Google has like a very advanced infrastructure that no one else has for the TPU.
    4:10:35 And what they do is they’ve like stamped these data center, they’ve stamped a bunch of these
    4:10:37 data centers out in a few regions, right?
    4:10:42 So if you go a little bit further down, this is Microsoft’s.
    4:10:43 This is in Arizona.
    4:10:46 This is where GPT-5 quote unquote will be trained, you know.
    4:10:48 If it doesn’t exist already.
    4:10:50 Yeah, it doesn’t exist already.
    4:10:54 But each of these data centers, I’ve shown a couple images of them, they’re like really
    4:10:56 closely co-located in the same region, right?
    4:10:57 Nebraska, Iowa.
    4:11:01 And then they also have a similar one in Ohio complex, right?
    4:11:04 And so these data centers are really close to each other.
    4:11:07 And what they’ve done is they’ve connected them super high bandwidth with fiber.
    4:11:09 And so these are just a bunch of data centers.
    4:11:14 And the point here is that Google has a very advanced infrastructure, very tightly connected
    4:11:16 in a small region.
    4:11:19 So Elon will always have the biggest cluster fully connected, right?
    4:11:21 Because it’s all in one building, right?
    4:11:23 And he’s completely right on that, right?
    4:11:27 Google has the biggest cluster, but it’s spread over three sites, and by a significant
    4:11:30 margin — they have to go across multiple sites.
    4:11:33 Why doesn’t Google compete with Nvidia?
    4:11:36 Why don’t they sell TPUs?
    4:11:38 I think there’s a couple problems with it.
    4:11:46 It’s like, one, TPU has been a way of making search really freaking cheap and building
    4:11:48 models for that, right?
    4:11:52 And so a big chunk of the TPU purchases — a big chunk
    4:11:56 of Google’s purchases and usage — all of it is for internal workloads, right?
    4:12:02 Whether it be search, now Gemini, YouTube, all these different applications that they
    4:12:06 have, you know, ads, these are where all their TPUs are being spent, and that’s what they’re
    4:12:08 hyper focused on, right?
    4:12:12 And so there’s certain like aspects of the architecture that are optimized for their
    4:12:15 use case that are not optimized elsewhere, right?
    4:12:19 One simple one is like they’ve open sourced a Gemma model and they called it Gemma 7B,
    4:12:20 right?
    4:12:24 But then it’s actually eight billion parameters because the vocabulary is so large, and the
    4:12:28 reason they made the vocabulary so large is because TPUs like matrix multiply unit
    4:12:32 is massive, because that’s what they’ve like sort of optimized for.
    4:12:35 And so they decided, oh, I’ll just make the vocabulary large too, even though it makes
    4:12:38 no sense to do so in such a small model, because that fits on their hardware.
    4:12:42 So Gemma doesn’t run as efficiently on a GPU as Llama does, right?
    4:12:46 But vice versa, Llama doesn’t run as efficiently on a TPU as Gemma does, right?
    4:12:50 And so there are certain aspects of hardware-software co-design at play.
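    A quick illustration of how a large vocabulary inflates parameter count — the hidden size and vocabulary sizes below are round numbers for illustration, not any model’s exact config:

    ```python
    # Embedding parameters scale linearly with vocabulary size
    d_model = 3072
    embedding_params = lambda vocab_size: vocab_size * d_model

    print(f"32k vocab:  {embedding_params(32_000) / 1e9:.2f}B params")   # ~0.10B
    print(f"256k vocab: {embedding_params(256_000) / 1e9:.2f}B params")  # ~0.79B
    # With untied input and output embeddings that difference roughly doubles,
    # which is how a nominal "7B" model can land closer to 8B+ total parameters.
    ```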
    4:12:53 So all their search models, their ranking and recommendation models, all these different
    4:12:59 models that are AI but not like gen AI, right, have been hyper-optimized for TPUs forever.
    4:13:03 The software stack is super optimized, but all of this software stack has not been released
    4:13:06 publicly at all, right?
    4:13:09 Very small portions of it, JAX and XLA, have been, but like the experience when you’re
    4:13:13 inside of Google and you’re training on TPUs as a researcher, you don’t need to know anything
    4:13:15 about the hardware in many cases, right?
    4:13:21 It’s like pretty beautiful, but as soon as you step outside, a lot of them go back.
    4:13:23 They leave Google and then they go back.
    4:13:26 Yeah, they’re like, they leave and they start a company because they have all these amazing
    4:13:29 research ideas and they’re like, wait, infrastructure is hard.
    4:13:30 Software is hard.
    4:13:31 And this is on GPUs.
    4:13:34 Or if they try to use TPUs, same thing, because they don’t have access to all this code.
    4:13:37 And so it’s like, how do you convince a company whose golden goose is search, where they’re
    4:13:43 making hundreds of billions of dollars from to start selling TPUs, which they used to
    4:13:50 only buy a couple billion of, you know, I think in 2023, they bought like a couple billion.
    4:13:53 And now they’re buying like 10 billion to 15 billion dollars worth, but how do you convince
    4:13:56 them that they should just buy like twice as many and figure out how to sell them and
    4:13:57 make 30 billion dollars?
    4:14:00 Who cares about making 30 billion dollars?
    4:14:04 Won’t that 30 billion exceed actually the search profit eventually?
    4:14:10 Oh, I mean, like, you’re always going to make more money on services than on hardware.
    4:14:14 I mean, like, yeah, like, to be clear, like today, people are spending a lot more on hardware
    4:14:16 than they are the services, right?
    4:14:19 Because the hardware front runs the service spend.
    4:14:24 But like, if there’s no revenue for AI stuff or not enough revenue, then obviously like
    4:14:26 it’s going to blow up, right?
    4:14:28 People won’t continue to spend on GPUs forever.
    4:14:31 And Nvidia is trying to move up the stack with like software that they’re trying to
    4:14:33 sell and license and stuff, right?
    4:14:38 But Google has never had that like DNA of like, this is a product we should sell, right?
    4:14:42 The Google Cloud does it, which is a separate organization from the TPU team, which is a
    4:14:45 separate organization from the DeepMind team, which is a separate organization from the
    4:14:46 search team, right?
    4:14:47 There’s a lot of bureaucracy.
    4:14:50 Wait, Google Cloud is a separate team than the TPU team?
    4:14:54 Technically TPU sits under infrastructure, which sits under Google Cloud.
    4:15:01 But like Google Cloud, like for like renting stuff and TPU architecture are very different
    4:15:02 goals, right?
    4:15:04 And hardware and software, like all of this, right?
    4:15:09 Like the JAX and XLA teams do not serve Google’s customers externally, whereas Nvidia’s various
    4:15:14 CUDA teams, for things like NCCL, serve external customers, right?
    4:15:19 The internal teams like JAX and XLA and stuff, they more so serve DeepMind and search, right?
    4:15:21 And so their customers are different; they’re not building a product for external users.
    4:15:29 Do you understand why AWS keeps winning versus Azure for cloud versus Google Cloud?
    4:15:32 Yeah, Google Cloud is tiny, isn’t it, relative to AWS?
    4:15:34 Google Cloud is third, yeah, yeah.
    4:15:37 Microsoft is the second biggest, but Amazon is the biggest, right?
    4:15:42 And Microsoft deceptively sort of includes like Microsoft Office 365 and things like
    4:15:43 that.
    4:15:44 It’s enterprise-wide licenses.
    4:15:46 So in reality, the gulf is even larger.
    4:15:48 Microsoft is still second though, right?
    4:15:49 Amazon is way bigger.
    4:15:50 Why?
    4:15:52 Because using AWS is better and easier.
    4:15:53 And in many cases, it’s cheaper.
    4:15:54 It was first.
    4:15:55 And it’s first.
    4:15:56 It was first.
    4:15:57 Yeah, but there’s a lot of things that are first that…
    4:15:58 Well, it’s easier.
    4:16:00 It’s harder to switch than it is to…
    4:16:01 Yeah, okay.
    4:16:02 But AWS is…
    4:16:03 There’s big fees for switching too.
    4:16:06 AWS generates over 80% of Amazon’s profit.
    4:16:07 I think over 90%.
    4:16:08 That’s insane.
    4:16:12 The distribution centers are just like, one day we’ll decide to make money from this.
    4:16:13 But they haven’t yet, right?
    4:16:14 Like they make tiny little profit from it.
    4:16:17 One day of Amazon Prime will triple in price.
    4:16:22 You would think they would improve AWS interface because it’s like horrible.
    4:16:25 It’s like clunky, but everybody’s…
    4:16:28 Yeah, one would think.
    4:16:31 I think actually Google’s interface is sometimes nice, but it’s also like they don’t care about
    4:16:35 anyone besides their top customers and like their customer service sucks and like they
    4:16:36 have a lot less.
    4:16:39 I mean, all these companies, they optimized for the big customers.
    4:16:40 Yeah.
    4:16:41 It’s supposed to be for business.
    4:16:44 But Amazon has always optimized for the small customer too though, right?
    4:16:47 Like obviously they optimize a lot for the big customer, but like when they started,
    4:16:51 they just would go to like random Bay Area things and give out credits, right?
    4:16:52 And then they like…
    4:16:53 Or just put in your credit card and use us, right?
    4:16:54 Like back in the early days.
    4:16:55 So they’ve always…
    4:16:56 The business has grown with them, right?
    4:16:57 In Virgin.
    4:16:58 So like, why does Amazon…
    4:17:02 Like why is Snowflake all over Amazon because Snowflake in the beginning when Amazon didn’t
    4:17:04 care about them was still using Amazon, right?
    4:17:08 And then of course one day Snowflake and Amazon has a super huge partnership, but like this
    4:17:11 is the case like Amazon’s user experience and quality is better.
    4:17:15 Also a lot of the silicon they’ve engineered makes them have a lower cost structure and
    4:17:21 traditional cloud storage, CPU, networking, that kind of stuff than in databases, right?
    4:17:27 Like I think four of Amazon’s top five gross-profit
    4:17:31 products are all database-related products, like Redshift and all these
    4:17:32 things, right?
    4:17:38 So Amazon has a very good silicon-to-user-experience pipeline with
    4:17:39 AWS.
    4:17:40 I think Google…
    4:17:42 Their silicon teams?
    4:17:46 Yeah, they have awesome silicon internally, TPU, the YouTube chip, some of these other
    4:17:48 chips that they’ve made.
    4:17:52 And the problem is they’re not serving external customers, they’re serving internal customers,
    4:17:53 right?
    4:17:56 I mean, NVIDIA’s entire culture is designed from the bottom up to do this.
    4:18:01 There’s this recent book, The NVIDIA Way, by Tae Kim, that details this and how they
    4:18:07 look for future opportunities and ready their CUDA software libraries to make it so that
    4:18:13 new applications of high performance computing can very rapidly be evolved on CUDA and NVIDIA
    4:18:14 chips.
    4:18:18 And that is entirely different than Google as a services business.
    4:18:19 Yeah.
    4:18:22 NVIDIA, it should be said, is a truly special company.
    4:18:26 Like, I mean, they, the whole, the culture, everything, they’re really optimized for that
    4:18:27 kind of thing.
    4:18:33 Which is — is there somebody that can even challenge NVIDIA hardware-wise? Intel, AMD?
    4:18:35 I really don’t think so.
    4:18:42 We went through a very long process of working with AMD on training on their GPUs and inference
    4:18:43 and stuff.
    4:18:44 And they’re decent.
    4:18:46 Their hardware is better in many ways than NVIDIA’s.
    4:18:48 The problem is their software is really bad.
    4:18:50 And I think they’re getting better, right?
    4:18:54 They’re getting better faster, but they’re just, the gulf is so large.
    4:18:58 Even like, they don’t spend enough resources on it or have it historically, right?
    4:19:02 Maybe they’re changing their tune now, but for multiple months, we were submitting the
    4:19:03 most bugs, right?
    4:19:05 Like, ah, semianalysis, right?
    4:19:06 Like, what the fuck?
    4:19:08 Like, why are we submitting the most bugs, right?
    4:19:11 Because they only, and they only cared about their biggest customers.
    4:19:15 And so they’d ship them a private image, blah, blah, blah, and it’s like, okay, but like,
    4:19:20 I am just using PyTorch and I want to use the publicly available libraries and you don’t
    4:19:21 care about that, right?
    4:19:25 So, they’re getting better, but like, I don’t think it’s possible for AMD; Intel’s obviously in
    4:19:29 dire straits right now and needs to be saved somehow.
    4:19:33 Very important for national security, for American, you know, technology.
    4:19:36 Can you explain the obvious, so why are they in dire straits?
    4:19:39 Going back to earlier, only three companies can do leading-edge R&D, right?
    4:19:45 TSMC in Hsinchu, Taiwan; Samsung in Pyeongtaek; and then Intel in Hillsboro.
    4:19:46 Samsung’s doing horribly.
    4:19:47 Intel’s doing horribly.
    4:19:50 We could be in a world where there’s only one company that can do R&D, and that one company
    4:19:52 already manufactures most chips.
    4:19:55 They’ve been gaining market share anyways, but like, that’s a critical thing, right?
    4:19:58 So the rest of the world’s semiconductor industry, and therefore
    4:20:01 tech, relies on what happens to Taiwan, right?
    4:20:03 And that’s obviously precarious.
    4:20:08 As far as like Intel, they’ve been slowly steadily declining.
    4:20:13 They were on top of servers and PCs, but now Apple’s done the M1 and Nvidia’s releasing
    4:20:17 a PC chip and Qualcomm’s releasing a PC chip and in servers, hyperscalers are all making
    4:20:23 their own ARM based server chips and Intel has no AI silicon like wins, right?
    4:20:25 They have very small wins.
    4:20:29 And they never got into mobile because they said no to the iPhone and like, all these
    4:20:32 things have compounded and they’ve lost their process technology leadership, right?
    4:20:35 They were ahead for 20 years and now they’re behind by at least a couple years, right?
    4:20:40 And they’re trying to catch back up and we’ll see if like their 18A, 14A strategy works
    4:20:42 out where they try and leapfrog TSMC.
    4:20:46 But like, and Intel is just like losing tons of money anyways, right?
    4:20:49 And they just fired their CEO, even though the CEO was the only person who understood
    4:20:50 the company.
    4:20:51 Well, right, we’ll see.
    4:20:56 He was not the best, but he was pretty good, relatively, technical guy.
    4:20:57 Where does Intel make most of its money?
    4:20:58 The CPUs, though.
    4:21:01 PCs and data center CPUs, yeah, but data center CPUs are all going cloud.
    4:21:05 And Amazon, Microsoft, Google are making their own ARM-based CPUs.
    4:21:10 And then on the PC side, AMD’s gained market share, Nvidia’s launching a chip.
    4:21:11 That’s not going to be a success, right?
    4:21:15 MediaTek, Qualcomm have launched chips, Apple’s doing well, right?
    4:21:19 Like they could get squeezed a little bit in PC, although PCs generally, I imagine, will
    4:21:21 mostly just stick with Intel on the Windows side.
    4:21:25 Let’s talk about the broad AI race, who do you think wins?
    4:21:26 We talked about Google?
    4:21:31 The leader, the default leader has been Google because of their infrastructure advantage.
    4:21:35 Well, like in the news, open AI is the leader.
    4:21:36 They’re the leading in the narrative.
    4:21:37 They have the best model.
    4:21:40 They have the best model that people can use and they’re experts.
    4:21:42 And they have the most AI revenue.
    4:21:43 Yeah.
    4:21:45 Open AI is winning, right?
    4:21:48 So who’s making money on AI right now?
    4:21:49 Is anyone making money?
    4:21:53 So accounting-profit-wise, Microsoft is making money, but they’re spending a lot on capex,
    4:21:54 right?
    4:21:56 You know, and that gets depreciated over years.
    4:22:01 Meta is making tons of money, but with recommendation systems, which is AI, but not with Llama, right?
    4:22:04 Llama’s losing money for sure, right?
    4:22:08 I think anthropic and open AI are obviously not making money because otherwise they wouldn’t
    4:22:09 be raising money, right?
    4:22:12 They have to raise money to build more, right?
    4:22:14 Well, theoretically, they are making money, right?
    4:22:18 You spent a few hundred million dollars on GPT-4 and it’s doing billions in revenue.
    4:22:22 So obviously it’s making money, although they had to continue to research to get the compute
    4:22:24 efficiency wins, right?
    4:22:30 And move down the curve to get that 1200X that has been achieved for GPT-3.
    4:22:35 Maybe we’re only at a couple hundred X now, but with GPT-4 Turbo and 4o, and there’ll be
    4:22:40 another one, probably cheaper than GPT-4o even, that comes out at some point.
    4:22:42 And that research costs a lot of money, right?
    4:22:43 Yep, exactly.
    4:22:48 That’s the thing that I guess is not talked about with the cost, that when you’re referring
    4:22:54 to the cost of the model, it’s not just the training or the test runs, it’s the actual
    4:22:55 research, the manpower.
    4:22:59 Yeah, to do things like reasoning right now that that exists, they’re going to scale it,
    4:23:00 they’re going to do a lot of research.
    4:23:07 I think people focus on the payback question, but it’s really easy to just be like, well,
    4:23:10 GDP is humans and industrial capital, right?
    4:23:14 And if you can make intelligence cheap, then you can grow a lot, right?
    4:23:18 That’s the sort of dumb way to explain it, but that’s sort of what basically the investment
    4:23:19 thesis is.
    4:23:24 I think only NVIDIA is actually making tons of money and other hardware vendors.
    4:23:28 The hyperscalers are all on paper making money, but in reality, they’re like spending a lot
    4:23:32 more on purchasing the GPUs, which you don’t know if they’re still going to make this much
    4:23:35 money on each GPU in two years, right?
    4:23:41 You don’t know if all of a sudden, OpenAI goes kapoof, and now Microsoft has like hundreds
    4:23:46 of thousands of GPUs they were renting to OpenAI that they paid for themselves with
    4:23:50 their investment in them, that no longer have a customer, right?
    4:23:53 This is always a possibility, I don’t believe that, right?
    4:23:57 I think OpenAI will keep raising money, I think others will keep raising money because
    4:24:02 the investments, the returns from it are going to be eventually huge once we have AGI.
    4:24:05 So do you think multiple companies will get, let’s assume-
    4:24:07 I don’t think it’s going to take all.
    4:24:08 Okay.
    4:24:12 So it’s not, let’s not call it AGI or whatever, it’s like a single day.
    4:24:13 It’s a gradual thing.
    4:24:15 Super powerful AI.
    4:24:20 But it’s a gradually increasing set of features that are useful and make a lot of money.
    4:24:22 Rapidly increasing set of features.
    4:24:25 Rapidly increasing set of features.
    4:24:32 So you’re saying a lot of companies will be, it just seems absurd that all of these companies
    4:24:35 are building gigantic data centers.
    4:24:39 There are companies that will benefit from AI but not because they trained the best model.
    4:24:44 Meta has so many avenues to benefit from AI and all of their services, people are there,
    4:24:47 people spend time on Meta’s platforms and it’s a way to make more money per user per
    4:24:48 hour.
    4:24:58 Yeah, it seems like Google X/XAI/Tesla, important to say, and then Meta will benefit not directly
    4:25:06 from the AI like the LLMs, but from the intelligence, like the additional boost of intelligence to
    4:25:07 the products they already sell.
    4:25:12 So whether that’s the recommendation system or for Elon, who’s been talking about Optimus,
    4:25:16 the robot, potentially the intelligence of the robot.
    4:25:20 And then you have personalized robots in the home, that kind of thing.
    4:25:25 He thinks it’s a 10 plus trillion dollar business, which-
    4:25:30 At some point maybe, not soon, but who knows what robotics-
    4:25:35 Let’s do a TAM analysis, right, 8 billion humans and let’s get 8 billion robots, right,
    4:25:39 and let’s pay them the average salary and yeah, there we go, 10 trillion.
    4:25:40 More than 10 trillion.
    4:25:46 Yeah, I mean, if there’s robots everywhere, why does it have to be just eight billion
    4:25:47 robots?
    4:25:48 Yeah, of course, of course.
    4:25:51 I’m gonna have like one robot, you’re gonna have like 20.
    4:25:54 Yeah, I mean, I see a use case for that.
    4:25:59 So yeah, I guess the benefit would be in the products as well, which is why OpenAI is in
    4:26:00 a trickier position because they-
    4:26:04 All of the value of OpenAI right now as a brand is in ChatGPT.
    4:26:09 And there is actually not that, for most users, there’s not that much of a reason that they
    4:26:14 need OpenAI to be spending billions and billions of dollars on the next best model when they
    4:26:17 could just license Llama 5 or whatever, way cheaper.
    4:26:22 So that’s kind of like, ChatGPT is an extremely valuable entity to them.
    4:26:25 But like, they could make more money just off that.
    4:26:29 The chat application clearly does not have tons of room to continue growing, right?
    4:26:30 Like the standard Chat, right?
    4:26:33 Where you’re just using it for a random question and stuff, right?
    4:26:36 The cost continues to collapse, V3 is the latest one.
    4:26:37 It’ll go down to ads.
    4:26:39 Basically, but it’s gonna get supported by ads, right?
    4:26:44 Like, you know, Meta already serves 405B and probably loses money on it, but at some point,
    4:26:48 you know, they’re going to get, the models are gonna get so cheap that they can just
    4:26:50 serve them for free with ad supported, right?
    4:26:53 And that’s what Google is going to be able to do, and that’s obviously they’ve got a
    4:26:54 bigger reach, right?
    4:26:56 So Chat is not going to be the only use case.
    4:27:00 It’s like these reasoning, code, agents, computer use.
    4:27:03 All this stuff is where OpenAI has to actually go to make money in the future.
    4:27:03 Otherwise, they’re kaput.
    4:27:09 But X, Google and Meta have these other products.
    4:27:15 So doesn’t, isn’t it likely that OpenAI and Anthropic disappear eventually?
    4:27:18 Unless they’re so good at models, they are.
    4:27:19 But it’s such a cutting edge.
    4:27:20 I mean, yes.
    4:27:22 It depends on where you think AI capabilities are going.
    4:27:24 You have to keep winning.
    4:27:25 Yes.
    4:27:26 You have to keep winning.
    4:27:31 As you climb, even if the AI capabilities are going super rapidly awesome into the direction
    4:27:39 of AGI, like there’s still a boost for X in terms of data, Google in terms of data, Meta
    4:27:44 in terms of data, in terms of other products and the money and like there’s just huge amounts
    4:27:45 of money.
    4:27:46 But the whole idea is human data is kind of tapped out.
    4:27:47 We don’t care.
    4:27:48 We don’t care.
    4:27:48 We don’t care — self-play, verifiable tasks.
    4:27:50 Yes, the self-play.
    4:27:51 Think about AWS.
    4:27:52 Which is an R&D problem.
    4:27:56 AWS does not make a lot of money on each individual machine.
    4:28:01 And the same can be said for the most powerful AI platform, which is even though the calls
    4:28:06 to the API are so cheap, there’s still a lot of money to be made by owning that platform.
    4:28:10 And there’s a lot of discussions as it’s the next compute layer.
    4:28:14 You have to believe that, and there’s a lot of discussions that tokens and tokenomics
    4:28:18 and LLM APIs are the next compute layer or the next paradigm for the economy, kind of
    4:28:20 like energy and oil was.
    4:28:26 But there’s also like, you have to sort of believe that APIs and chat are not where AI
    4:28:27 is stuck, right?
    4:28:30 It is actually just tasks and agents and robotics and computer use.
    4:28:36 And those are the areas where all the value will be delivered, not API, not chat application.
    4:28:43 Is it possible you have, I mean, it all just becomes a commodity and you have the very
    4:28:49 thin wrapper, like perplexity, just joking.
    4:28:51 There are a lot of wrappers making a lot of money.
    4:28:52 Yeah.
    4:28:56 But do you think it’s possible that people will just even forget what open AI and the
    4:28:57 thropic is?
    4:29:00 And just because there’ll be wrappers around the API and it just dynamically…
    4:29:04 If model progress is not rapid, yeah, it’s becoming a commodity, right?
    4:29:09 DeepSeek V3 shows this, but also the GPT-3 chart earlier showed this, right?
    4:29:12 Llama 3B is 1200X cheaper than GPT-3.
    4:29:17 Like, anyone whose business model is GPT-3-level capabilities is dead.
    4:29:20 Anyone whose business model is GPT-4 level capabilities is dead, right?
    4:29:25 It is a common saying that the best businesses being made now are ones that are predicated
    4:29:26 on models getting better, right?
    4:29:32 Which would be like wrappers — things that are riding the wave of the models.
    4:29:35 The short term, the company that could make the most money is the one that figures out
    4:29:40 what advertising targeting method works for language model generations.
    4:29:45 We have the meta ads, which are hyper-targeted in feed, not within specific pieces of content.
    4:29:49 And we have search ads that are used by Google and Amazon has been rising a lot on search.
    4:29:56 But within a return from chat GPT, it is not clear how you get a high-quality placed ad
    4:29:57 within the output.
    4:30:04 And if you can do that with model costs coming down, you can just get super high revenue.
    4:30:07 That revenue is totally untapped and it’s not clear technically how it is done.
    4:30:12 Yeah, that is sort of the AdSense innovation that Google did.
    4:30:18 The one day you’ll have in GPT output an ad and that’s going to make billions of dollars.
    4:30:20 And it could be very subtle.
    4:30:21 It could be in conversation.
    4:30:22 We have voice mode now.
    4:30:27 It could be some way of making it so the voice introduces certain things.
    4:30:30 It’s much harder to measure and it takes imagination, but yeah.
    4:30:36 And it would have to not come off as shady, so you don’t receive public blowback, that kind of thing.
    4:30:40 You have to do it loudly enough that it’s clear it’s an ad, and balance all of that.
    4:30:43 So that’s the open question they’re trying to solve.
    4:30:45 Anthropic and OpenAI, they need to…
    4:30:46 They might not say that they’re trying…
    4:30:47 I don’t think they care about that at all.
    4:30:49 They don’t care about it right now.
    4:30:50 I think it’s places like…
    4:30:51 I think they’re purely…
    4:30:52 Purely…
    4:30:53 They’re experimenting on that more.
    4:30:54 Oh, interesting.
    4:30:55 Yeah, for sure.
    4:30:58 Like, perplexity Google meta care about this.
    4:31:02 I think OpenAI and Anthropic are purely laser focused on…
    4:31:03 AGI.
    4:31:04 Yeah.
    4:31:05 Agents and AGI.
    4:31:11 Agents and AGI, I can make tons of money or I can spend, pay for everything.
    4:31:12 This is…
    4:31:15 It’s just predicated like back on the export control thing.
    4:31:19 If you think AGI is five, 10 years away or less, these labs think it’s two, three years
    4:31:20 away.
    4:31:24 Obviously, your actions are…
    4:31:29 If you assume they’re rational actors, which they are mostly, what you do in a two-year
    4:31:34 AGI versus five-year versus 10-year is very, very, very different.
    4:31:36 Do you think agents are promising?
    4:31:40 We have to talk about this.
    4:31:44 This is like the excitement of the year that agents are going to…
    4:31:51 The generic hype term that a lot of business folks are using, AI agents are going to revolutionize
    4:31:52 everything.
    4:31:53 Okay.
    4:31:55 So, mostly the term agent is obviously overblown.
    4:32:00 We’ve talked a lot about reinforcement learning as a way to train for verifiable outcomes.
    4:32:04 This should mean something that is open-ended and is solving a task independently on its
    4:32:07 own and able to adapt to uncertainty.
    4:32:11 There is a lot of the term agent applied to things like Apple Intelligence, which we
    4:32:16 still don’t have after the last WWDC, which is orchestrating between apps.
    4:32:20 That sort of tool use thing is something that language models can do really well.
    4:32:23 Apple Intelligence, I suspect will come eventually.
    4:32:24 It’s a closed domain.
    4:32:29 It’s your messages app integrating with your photos, with AI in the background.
    4:32:30 That will work.
    4:32:35 This has been described as an agent by a lot of software companies to get into the narrative.
    4:32:43 The question is, what ways can we get language models to generalize to new domains and solve
    4:32:45 their own problems in real time?
    4:32:49 Maybe some tiny amount of training when they are doing this with fine-tuning themselves
    4:32:53 or in-context learning, which is the idea of storing information in a prompt.
    4:32:58 You can use learning algorithms to update that and whether or not you believe that that
    4:33:05 is going to actually generalize to things like me saying, “Book my trip to go to Austin
    4:33:06 in two days.
    4:33:10 I have XYZ constraints and actually trusting it.”
    4:33:13 I think there’s an HCI problem coming back for information.
    4:33:15 Well, what’s your prediction there?
    4:33:18 Because my gut says we’re very far away from that.
    4:33:23 I think OpenAI’s statement, I don’t know if you’ve seen the five levels, right?
    4:33:28 Where it’s chat is level one, reasoning is level two, and then agents is level three.
    4:33:31 I think there’s a couple more levels, but it’s important to note, right?
    4:33:34 We were in chat for a couple of years, right?
    4:33:37 We just theoretically got to reasoning.
    4:33:39 We’ll be here for a year or two, right?
    4:33:44 And then agents, but at the same time, people can try and approximate capabilities of the
    4:33:45 next level.
    4:33:49 But the agents are doing things autonomously, doing things for minutes at a time, hours
    4:33:52 at a time, et cetera, right?
    4:33:56 Everything is doing things for tens of seconds at a time, right?
    4:33:59 And then coming back with an output that I still need to verify and use and try to check
    4:34:01 out, right?
    4:34:05 And the biggest problem is, of course, it’s the same thing with manufacturing, right?
    4:34:07 There’s the whole Six Sigma thing, right?
    4:34:08 How many nines do you get?
    4:34:12 And then you compound the nines onto each other, and it’s like, if you multiply by the
    4:34:18 number of steps that are Six Sigma, you get a yield or something, right?
    4:34:23 So in semiconductor manufacturing, tens of thousands of steps, 99.9999% per step is not enough,
    4:34:24 right?
    4:34:28 Because you multiply by that many times, you actually end up with like 60% yield, right?
    4:34:29 Yeah, or zero.
    4:34:30 Or low yield, yeah, or zero.
    4:34:32 And this is the same thing with agents, right?
    4:34:40 Chaining tasks together each time, LLMs, even the best LLMs on particularly good benchmarks,
    4:34:42 don’t get 100%, right?
    4:34:45 They get a little bit below that because there’s a lot of noise.
    4:34:49 And so how do you get to enough nines, right?
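
    As a rough back-of-the-envelope illustration of that compounding-reliability point, here is a minimal sketch; the step counts and per-step rates are illustrative assumptions, not figures from the conversation.

    ```python
    # Back-of-the-envelope: how per-step reliability compounds over a chain of steps.
    # Illustrative numbers only; not data from the episode.

    def chain_success_rate(per_step: float, steps: int) -> float:
        """Probability that every step in a chain of independent steps succeeds."""
        return per_step ** steps

    # Semiconductor-style example: tens of thousands of steps at "five nines" per step.
    print(chain_success_rate(0.99999, 50_000))  # ~0.61 -> roughly the ~60% yield mentioned above

    # Agent-style example: a 20-step task where each step is 95% reliable.
    print(chain_success_rate(0.95, 20))         # ~0.36 -> most runs fail somewhere

    def required_per_step(target: float, steps: int) -> float:
        """Per-step reliability needed to hit an overall target success rate."""
        return target ** (1 / steps)

    print(required_per_step(0.9, 20))           # ~0.9947 -> "enough nines" gets demanding fast
    ```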
    4:34:50 This is the same thing with self-driving.
    4:34:54 We can't have self-driving without it being super geofenced,
    4:34:55 like Google's, right?
    4:34:58 And even then they have a bunch of teleoperators to make sure it doesn't get stuck, right?
    4:35:01 But you can’t do that because it doesn’t have enough nines.
    4:35:07 And self-driving has quite a lot of structure because roads have rules.
    4:35:08 It’s well-defined.
    4:35:09 There’s regulation.
    4:35:15 And when you’re talking about computer use for the open web, for example, or the open
    4:35:19 operating system, like there’s no, it’s a mess.
    4:35:27 So like the possibility, I’m always skeptical of any system that is tasked with interacting
    4:35:30 with the human world, with the open messy human world.
    4:35:31 That’s the thing.
    4:35:35 If we can’t get intelligence that’s enough to solve the human world on its own, we can
    4:35:41 create infrastructure like the human operators for Waymo over many years that enables certain
    4:35:42 workloads.
    4:35:45 There is a company, I don't remember which, but that's literally their pitch.
    4:35:47 Yeah, we’re just going to be the human operator when agents fail.
    4:35:49 And you just call us and we fix it.
    4:35:50 Yeah.
    4:35:51 It’s like an API call and it’s hilarious.
    4:35:54 There’s going to be tele-operation markets when we get human robots, which is there’s
    4:35:59 going to be somebody around the world that’s happy to fix the fact that it can’t finish
    4:36:03 loading my dishwasher when I’m unhappy with it, but that’s just going to be part of the
    4:36:04 Tesla service package.
    4:36:10 I’m just imagining like an AI agent talking to another AI agent.
    4:36:15 One company has an AI agent that specializes in helping other AI agents.
    4:36:19 But if you can make things that are good at one step, you can stack them together.
    4:36:23 So that’s why I’m like, if it takes a long time, we’re going to build infrastructure that
    4:36:24 enables it.
    4:36:29 You see the operator launch, they have partnerships with certain websites with DoorDash with OpenTable
    4:36:31 with things like this.
    4:36:35 Those partnerships are going to let them climb really fast, their model is going to get really
    4:36:36 good at those things.
    4:36:40 It’s going to prove a concept that might be a network effect where more companies want
    4:36:41 to make it easier for AI.
    4:36:45 Some companies will be like, no, let’s put blockers in place.
    4:36:47 And this is the story of the internet we’ve seen.
    4:36:51 We see it now with training data for language models where companies are like, no, you have
    4:36:55 to pay, like business working it out.
    4:37:00 That said, I think airlines and hotels have a high incentive to make their
    4:37:03 sites work really well, and they usually don't.
    4:37:09 Like if you look at how many clicks it takes to order an airplane ticket, it’s insane.
    4:37:12 You actually can’t call an American Airlines agent anymore.
    4:37:14 They don’t have a phone number.
    4:37:20 I mean, it’s horrible on many, on the interface front, to imagine that agents will be able
    4:37:25 to deal with that website when I as a human struggle, like I have an existential crisis
    4:37:31 every time I try to book an airplane ticket that I don’t, I think it’s going to be extremely
    4:37:35 difficult to build an AI agent that’s robust in that way.
    4:37:38 But think about it: United has accepted the Starlink terms, which is they have to provide
    4:37:41 Starlink for free, and the users are going to love it.
    4:37:45 What if one airline is like, we’re going to take a year and we’re going to make our website
    4:37:49 have white text that works perfectly for the AIs.
    4:37:53 Every time anyone asks about an AI flight, they buy whatever airline it is.
    4:37:58 Or like, they just like, here’s an API and it’s only exposed to AI agents and if anyone
    4:38:03 queries it, the price is 10% higher and for any flight, but we’ll let you see any of our
    4:38:05 flights and you can just book any of them.
    4:38:06 Here you go.
    4:38:07 Agent Matt.
    4:38:08 And then it’s like, oh, and I made 10% higher price.
    4:38:09 Awesome.
    4:38:10 Yeah.
    4:38:12 And like, am I willing to pay that for, like, hey, book me a flight to see Lex, right?
    4:38:13 And it’s like, yeah, whatever.
    4:38:21 I think computers and real world and the open world are really, really messy.
    4:38:25 But if you start defining the problem in narrow regions, people are going to be able to create
    4:38:32 very, very productive things and ratchet down cost massively, right?
    4:38:38 Now, crazy things like robotics in the home, those are going to be a lot harder to do just
    4:38:43 like self-driving because there’s just a billion different failure modes, right?
    4:38:48 But agents that can like navigate a certain set of websites and do certain sets of tasks
    4:38:53 or like look at, you know, take a photo of your grocery, your fridge and or like upload
    4:38:57 your recipes and then like it figures out what to order from, you know, Amazon slash
    4:38:59 Whole Foods food delivery.
    4:39:01 Like that’s going to be like pretty quick and easy to do, I think.
    4:39:05 So it’s going to be a whole range of like business outcomes and it’s going to be tons
    4:39:08 of tons of sort of optimism around people can just figure out ways to make money.
    4:39:11 To be clear, these sandboxes already exist in research.
    4:39:16 There are people who have built clones of all the most popular websites of Google, Amazon,
    4:39:20 blah, blah, blah to make it so that there’s, I mean, OpenAI probably has them internally
    4:39:21 to train these things.
    4:39:26 It’s the same as DeepMind’s robotics team for years has had clusters for robotics where
    4:39:28 you interact with robots fully remotely.
    4:39:33 They just have a lab in London and you send tasks to it, arrange the blocks and you do
    4:39:34 this research.
    4:39:39 Obviously, there are techs there that fix stuff, but we've turned these cranks of automation
    4:39:40 before.
    4:39:46 You go from sandbox to progress and then you add one more domain at a time and generalize
    4:39:47 it.
    4:39:51 I think in the history of NLP and language processing, with instruction tuning and tasks per
    4:39:54 language model, it used to be that one language model did one task.
    4:39:57 And then in the instruction tuning literature, there’s this point where you start adding
    4:40:01 more and more tasks together where it just starts to generalize to every task.
    4:40:03 And we don’t know where on this curve we are.
    4:40:07 I think for reasoning with this RL and verifiable domains were very early, but we don’t know
    4:40:12 where the point is where you just start training on enough domains and poof like more domains
    4:40:15 to start working and you’ve crossed the generalization barrier.
    4:40:20 Well, what do you think about the programming context?
    4:40:28 So software engineering, that’s where I personally know a lot of people interact with AI the
    4:40:29 most.
    4:40:34 There’s a lot of fear and angst too from current CS students, but that is the area where probably
    4:40:40 the most AI revenue and productivity gains have come, whether it be Copilot or Cursor
    4:40:44 or what have you, or just standard ChatGPT, right?
    4:40:49 I know very few programmers who don't have ChatGPT, and actually many
    4:40:53 of them have the $200 tier because that's what it's so good for, right?
    4:40:58 I think that in that world, we already see it with SWE-bench. If you've looked at
    4:41:03 the benchmark, made by some Stanford students, I wouldn't say it's really hard, but
    4:41:04 I wouldn't say it's easy either.
    4:41:08 I think it takes someone who's been through at least, you know, a few years of CS or a couple
    4:41:11 years of programming to do SWE-bench well.
    4:41:16 And the models went from 4% to 60% in like a year, right?
    4:41:18 And where are they going to go to next year?
    4:41:21 You know, it's going to be higher, probably won't be 100%, because again, those last nines
    4:41:23 are really hard to get.
    4:41:25 But we're going to get to some point where that saturates, and then we're going to need harder
    4:41:28 software engineering benchmarks, and so on and so forth.
    4:41:33 But the way people think of it now is: it can do code completion, easy.
    4:41:36 It can do some function generation and I have to review it, great.
    4:41:41 But really, software engineering agents, I think, can be done sooner than any
    4:41:44 other agent because it is a verifiable domain.
    4:41:51 You can always unit test or compile, and there are many different angles: it can
    4:41:55 inspect the whole code base at once, which no engineer really can; only the architects,
    4:41:59 the really senior guys, can really think about this stuff, and they can define stuff and
    4:42:01 then the agent can execute on it.
    4:42:05 So I think software engineering costs are going to plummet like crazy, and one interesting
    4:42:09 aspect of that is when software engineering costs are really low, you get very different
    4:42:10 markets.
    4:42:11 Right.
    4:42:14 So in the US, you have all these platform SaaS companies, right, Salesforce and so on
    4:42:15 and so forth.
    4:42:16 Right.
    4:42:20 In China, no one uses platform SaaS.
    4:42:25 Everyone just builds their own stack, because software engineering is much cheaper in China,
    4:42:29 partially because of the number of STEM graduates, et cetera.
    4:42:33 So it's generally just cheaper to do.
    4:42:36 And so at the same time, code LLMs have been adopted much less in China
    4:42:39 because the cost of an engineer there is much lower.
    4:42:42 But like what happens when every company can just invent their own business logic like
    4:42:44 really cheaply and quickly.
    4:42:48 You stop using platform SaaS, you start building custom tailored solutions, you change them
    4:42:49 really quickly.
    4:42:51 Now all of a sudden your business is a little bit more efficient too potentially because
    4:42:56 you’re not dealing with the hell that is like some random platform sass company stuff not
    4:43:00 working perfectly and having to adjust workflows or random business automation cases that aren’t
    4:43:02 necessarily AI required.
    4:43:04 It’s just logic that needs to be built that no one has built, right?
    4:43:08 All of these things can happen faster, and so I think software, and then the other domain
    4:43:12 is like industrial, chemical, mechanical engineers, for whom coding is a second skill, right?
    4:43:17 Just generally. And their tools, like semiconductor engineers' tools, are 20 years old.
    4:43:21 All the tools run on XP, including ASML lithography tools run on Windows XP, right?
    4:43:25 It’s like, you know, and like a lot of the analysis happens in Excel, right?
    4:43:29 Like it’s just like guys, like you guys can move 20 years forward with all the data you
    4:43:31 have and gathered and like do a lot better.
    4:43:34 It’s just you need the engineering skills for software engineering to be delivered to
    4:43:36 the actual domain expert engineer.
    4:43:40 So I think that's the area where I'm, like, super duper bullish on generally
    4:43:42 AI creating value.
    4:43:45 The big picture is that I don’t think it’s going to be a cliff.
    4:43:51 It’s like, we talked to anything, a really good example of how growth changes is when
    4:43:53 meta added stories.
    4:43:57 So Snapchat was on an exponential, they added stories, it flatlined.
    4:44:01 Software engineers, then up until the right, AI is going to come in, it’s probably going
    4:44:02 to be flat.
    4:44:04 It’s like, it’s not like everyone’s going to lose their job.
    4:44:08 It’s hard because the supply corrects more slowly.
    4:44:10 So the number of students is still growing.
    4:44:13 And that'll correct on a multi-year, like a years-long, delay.
    4:44:16 But the number of jobs will just turn.
    4:44:20 And then maybe in 20, 40 years, it'll be well down.
    4:44:23 But in the next few years, there's never going to be the Snap moment where it's like software
    4:44:24 engineers aren't useful.
    4:44:28 I think also the nature of what it means to be a programmer and what kind of jobs programmers
    4:44:29 do changes.
    4:44:36 Because I think there needs to be a human in the loop for everything you've talked about.
    4:44:41 There’s a really important human in that picture of like correcting the code.
    4:44:43 Like fixing.
    4:44:45 Thinking larger than the context length.
    4:44:46 Yep.
    4:44:52 And debugging also, like debugging by sort of reading the code, understanding it, steering
    4:44:53 the system.
    4:44:56 Like, no, no, no, you missed the point; adding more to the prompt.
    4:44:58 Kind of like, yes.
    4:45:02 Adding the human designing the perfect Google button, Google’s famous for having people
    4:45:04 design buttons that are so perfect.
    4:45:07 And it’s like, how, like, how is AI going to do that?
    4:45:10 Like they could give you all ideas.
    4:45:11 Perfect.
    4:45:12 Fine.
    4:45:13 I mean, that’s the thing.
    4:45:14 You can call it taste.
    4:45:19 One thing humans can do is figure out what other humans enjoy, better than AI
    4:45:20 systems.
    4:45:21 That's where the preference...
    4:45:25 You're loading that in, but ultimately humans are the greatest preference generator.
    4:45:27 That's where the preference comes from.
    4:45:31 And humans are actually very good at reading, or judging between two things. This
    4:45:35 goes back to the core of what RLHF and preference tuning is, which is that it's
    4:45:38 hard to generate a good answer for a lot of problems, but it's easy to see which one
    4:45:39 is better.
    4:45:43 And that’s how we’re using humans for AI now is judging which one is better.
    4:45:47 And that’s what software engineering could look like is the PR review.
    4:45:48 Here’s a few options.
    4:45:53 What are the, like, here’s some potential pros and cons and they’re going to be judges.
    4:46:00 I think the thing I would very much recommend is people start, programmers start using AI
    4:46:05 and embracing that role of the supervisor of the AI system and like partner of the AI
    4:46:10 system versus writing from scratch or not learning coding at all and just generating
    4:46:11 stuff.
    4:46:14 Because I think there actually has to be a pretty high level of expertise as a programmer
    4:46:18 to be able to manage increasingly intelligent systems.
    4:46:21 I think it’s that and then becoming a domain expert in something.
    4:46:22 Sure.
    4:46:23 Yeah.
    4:46:27 Because seriously, if you go look at aerospace or semiconductors or chemical engineering,
    4:46:30 everyone is using really crappy platforms, really old software.
    4:46:34 Like the job of a data scientist is like a joke, right?
    4:46:35 In many cases.
    4:46:39 In many cases, it’s very real, but it’s like bring what the forefront of human capabilities
    4:46:41 are to your domain.
    4:46:45 And even if the forefront is from the AI, your domain, you’re at the forefront, right?
    4:46:50 So it’s like, you have to be at the forefront of something and then leverage the rising
    4:46:52 tide that is AI for everything else.
    4:46:53 Yeah.
    4:46:59 There’s so many low hanging fruit everywhere in terms of where software can help automate
    4:47:02 a thing or digitize a thing.
    4:47:06 In the legal system, that’s why Doge is exciting.
    4:47:12 Yeah, I mean, I got to hang out with a bunch of the DOGE folks, and, I mean, government
    4:47:15 is like so old school.
    4:47:21 It’s like begging for the modernization of software, of organizing the data, all this
    4:47:22 kind of stuff.
    4:47:29 I mean, in that case it's by design, because bureaucracy protects centers of power and
    4:47:33 so on, but software breaks down those barriers.
    4:47:39 So it hurts those that are holding onto power, but ultimately benefits humanity.
    4:47:44 So there’s a bunch of domains of that kind.
    4:47:49 One thing we didn’t fully finish talking about is open source.
    4:47:51 So first of all, congrats.
    4:47:52 You released a new model.
    4:47:53 Yeah.
    4:47:54 This is the…
    4:47:55 Tulu.
    4:47:56 I’ll explain what a Tulu is.
    4:48:01 A tulu is a hybrid camel, when you breed a dromedary with a Bactrian camel.
    4:48:05 Back in the early days after ChatGPT, there was a big wave of models coming out like Alpaca,
    4:48:10 Vicuna, et cetera, that were all named after various mammalian species.
    4:48:11 So Tulu is…
    4:48:14 The brand is multiple years old, which comes from that.
    4:48:20 And we’ve been playing at the frontiers of post training with open source code.
    4:48:24 And this first part of this release was in the fall where we used…
    4:48:30 We built on Lama’s open models, open weight models, and then we add in our fully open code
    4:48:32 or fully open data.
    4:48:36 There’s a popular benchmark that is chatbot arena, and that’s generally the metric by
    4:48:41 which how these chat models are evaluated, and it’s humans compare random models from
    4:48:42 different organizations.
    4:48:48 And if you looked at the leaderboard in November or December, among the top 60 models from
    4:48:53 10s to 20s of organizations, none of them had open code or data for just post training.
    4:48:57 Among that, even fewer or none have pre-training data and code available, but post training
    4:48:58 is much more accessible.
    4:49:00 At this time, it’s still pretty cheap and you can do it.
    4:49:04 And the thing is like, how high can we push this number where people have accessed all
    4:49:05 the code and data?
    4:49:07 So that’s kind of the motivation of the project.
    4:49:12 We draw on lessons from Llama; NVIDIA had a Nemotron model where the recipe for their
    4:49:17 post training was fairly open with some data and a paper, and it’s putting all these together
    4:49:22 to try to create a recipe that people can fine tune models like GPT-4 to their domain.
    4:49:27 So to be clear, in the case of Tulu, maybe you can talk about OLMo too, but in the
    4:49:31 case of Tulu, you're taking Llama 3 405B.
    4:49:35 Tulu has been a series of recipes for post training.
    4:49:38 So we’ve done multiple models over years.
    4:49:40 And so you’re open sourcing everything.
    4:49:41 Yeah.
    4:49:45 If you start with an open-weight base model, the whole model technically isn't open source,
    4:49:49 because you don't know what Llama put into it, which is why we have the separate thing
    4:49:50 that we’ll get to.
    4:49:54 But it’s just getting parts of the pipeline where people can zoom in and customize.
    4:49:58 I know I hear from startups and businesses, they’re like, okay, I can take this post training
    4:50:00 and try to apply it to my domain.
    4:50:01 We talk about verifiers a lot.
    4:50:08 We use this idea, which is reinforcement learning with verifiable rewards, RLVR, kind of similar
    4:50:12 to RLHF, and we applied it to math.
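
    For a concrete sense of what a verifiable reward looks like in the math setting, here is a minimal sketch; the answer extraction, normalization, and function names are illustrative assumptions, not Ai2's actual Tulu code.

    ```python
    # Minimal sketch of a verifiable reward for math problems (RLVR-style).
    # Illustrative only: real pipelines use more careful answer extraction and normalization.

    import re

    def extract_final_answer(completion: str) -> str:
        """Take the last number-like token in the completion as the model's final answer."""
        matches = re.findall(r"-?\d+(?:\.\d+)?", completion)
        return matches[-1] if matches else ""

    def verifiable_reward(completion: str, ground_truth: str) -> float:
        """Return 1.0 if the extracted answer matches the reference exactly, else 0.0."""
        return 1.0 if extract_final_answer(completion) == ground_truth.strip() else 0.0

    # The RL loop would sample completions and use this binary signal as the reward.
    print(verifiable_reward("The sum is 7 + 5 = 12, so the answer is 12", "12"))  # 1.0
    print(verifiable_reward("I think the answer is 13", "12"))                    # 0.0
    ```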
    4:50:18 And the model today, which is, we applied it to the Llama 405B base model from last year,
    4:50:20 and we have our other stuff.
    4:50:25 We have our instruction tuning and preference tuning, but the math thing is interesting,
    4:50:28 which is like, it’s easier to improve this math benchmark.
    4:50:32 There’s a benchmark, MATH, math, all capitals, tough name.
    4:50:36 On the benchmark, name is the area that you’re evaluating.
    4:50:37 We’re researchers.
    4:50:39 We’re not brands, brand strategists.
    4:50:43 And this is something that the DeepSeek paper talked about as well, is like at this bigger
    4:50:48 model, it’s easier to elicit powerful capabilities with this RL training, and then they distill
    4:50:51 it down from that big model to the small model.
    4:50:55 And with this model we released today, we saw the same thing. We're at Ai2.
    4:50:56 We don't have a ton of compute.
    4:51:01 We can’t train 405B models all the time, so we just did a few runs and they tend to work.
    4:51:07 And it’s like, it just shows that there’s a lot of room for people to play in these things.
    4:51:09 And they crushed Lama’s actual release, right?
    4:51:11 They’re way better than it.
    4:51:12 Yeah.
    4:51:15 So our eval numbers, I mean, we have extra months on this, but our eval numbers are much
    4:51:18 better than the Llama Instruct model that they released.
    4:51:20 And they also said better than DeepSeek V3.
    4:51:21 Yeah.
    4:51:25 On our eval benchmark, DeepSeek V3 is really similar.
    4:51:29 We have a safety benchmark to understand if it will say harmful things and things like
    4:51:30 that.
    4:51:31 And that’s what draws us down most of the way.
    4:51:34 It’s still like, it’s like an amalgamation of multiple benchmarks or what do you mean?
    4:51:35 Yeah.
    4:51:36 So we have 10 evals.
    4:51:39 This is like, this is standard practice in post training is you choose your evaluations
    4:51:40 you care about.
    4:51:43 In academics, in smaller labs, you’ll have fewer evaluations.
    4:51:46 In companies, you'll have really one domain that you really care about.
    4:51:50 In frontier labs, you’ll have 10s to 20s to maybe even like 100 evaluations of specific
    4:51:51 things.
    4:51:55 So we choose a representative suite of things that look like chat, precise instruction following,
    4:51:58 which is like respond only in emojis.
    4:51:59 Like does the model follow weird things like that?
    4:52:00 Yeah.
    4:52:02 Math, code, and you create a suite like this.
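
    As a small illustration of how a rule-checkable eval like that, and a suite average, can work, here is a minimal sketch; the emoji heuristic and the example scores are assumptions for illustration, not the actual Tulu evaluation code.

    ```python
    # Minimal sketch of a rule-based "precise instruction following" check and a suite average.
    # The heuristic and the example scores are illustrative, not the actual eval code.

    def follows_emoji_only(response: str) -> bool:
        """Crude pass/fail for 'respond only in emojis': no letters or digits allowed."""
        return not any(ch.isalnum() for ch in response)

    def suite_average(scores: dict) -> float:
        """Unweighted average over an eval suite, the way a single summary number is formed."""
        return sum(scores.values()) / len(scores)

    print(follows_emoji_only("🎉🔥🎉"))    # True
    print(follows_emoji_only("Sure! 🎉"))  # False: it contains letters

    scores = {"chat": 0.85, "instruction_following": 0.80, "math": 0.70, "code": 0.75, "safety": 0.90}
    print(suite_average(scores))                                               # average with safety included
    print(suite_average({k: v for k, v in scores.items() if k != "safety"}))   # average without safety
    ```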
    4:52:07 So safety would be one of 10 in that type of suite, where you have, like, what does the broader
    4:52:09 AI community care about?
    4:52:12 And for example, in comparison to DeepSeek, it would be something like: our average
    4:52:18 eval for our model would be 80, including safety, and similar without; and DeepSeek would be
    4:52:26 like a 79% average score without safety, and their safety score would bring it down,
    4:52:27 like, safety.
    4:52:28 Oh, so you beat them even ignoring safety?
    4:52:29 Yeah.
    4:52:33 So this is something that internally it's like, I don't want to win only by how you shape
    4:52:34 the eval benchmark.
    4:52:36 So if there’s something that’s like people may or may not care about safety in their
    4:52:39 model, safety can come downstream.
    4:52:43 Safety can come when you host the model behind an API; safety is addressed in a spectrum
    4:52:44 of locations in AI applications.
    4:52:47 So it's like, if you want to say that you have the best recipe, you can't just gate it
    4:52:51 on these things that some people might not want.
    4:52:57 And this is just, it's like, the timing of progress, and we benefit: we can release a model later,
    4:53:01 we have more time to learn new techniques like this RL technique, which we had started
    4:53:02 in the fall.
    4:53:04 It’s now really popular as reasoning models.
    4:53:08 The next thing to do for open source post training is to scale up verifiers, to scale
    4:53:11 up data, to replicate some of DeepSeek's results.
    4:53:15 And it’s awesome that we have a paper to draw on and it makes it a lot easier.
    4:53:22 And that’s the type of things that is going on among academic and closed frontier research
    4:53:23 in AI.
    4:53:25 Since you’re pushing open source, what do you think is the future of it?
    4:53:30 Do you think DeepSeek actually changes things, since it's open source, or open weight, or is
    4:53:33 pushing the open source movement in the open direction?
    4:53:35 This goes back to the license discussion.
    4:53:38 So DeepSeek R1 with a friendly license is a major reset.
    4:53:42 So it’s like the first time that we’ve had a really clear frontier model that is open
    4:53:46 weights and with a commercially friendly license with no restrictions on downstream
    4:53:49 use cases, synthetic data, distillation, whatever.
    4:53:53 This has never been the case at all in the history of AI in the last few years since
    4:53:54 ChatGPT.
    4:53:57 There have been models that are off the frontier or models with weird licenses that you can’t
    4:53:58 really use them.
    4:54:04 So isn’t Meta’s license like pretty much permissible except for five companies?
    4:54:09 And so this goes to what open source AI is, which is there’s also use case restrictions
    4:54:12 in the Lama license, which says you can’t use it for specific things.
    4:54:15 So if you come from an open source software background, you would say that that is not
    4:54:16 an open source license.
    4:54:20 What kind of things are those, though?
    4:54:22 At this point, I can’t pull them off the top of my head.
    4:54:23 Stuff that’s competitor.
    4:54:26 It used to be military use was one, and they removed that for Scale.
    4:54:32 It’ll be like CSAM, like child abuse material.
    4:54:35 That’s the type of thing that is forbidden there, but that’s enough from an open source
    4:54:38 background to say it’s not an open source license.
    4:54:42 And also the Llama license has this horrible thing where you have to name your model Llama
    4:54:45 if you touch the Llama model.
    4:54:46 So it’s like the branding thing.
    4:54:50 So if a company uses Llama, technically the license says that they should say built with
    4:54:52 Llama at the bottom of their application.
    4:54:54 And from a marketing perspective, that just hurts.
    4:54:57 I could suck it up as a researcher and I’m like, oh, it’s fine.
    4:55:01 It says Llama-dash on all of our materials for this release.
    4:55:06 But this is why we need truly open models, which is, we don't know DeepSeek R1's data.
    4:55:10 So you’re saying I can’t make a cheap copy of Lama and pretend it’s mine, but I can
    4:55:12 do this with the Chinese model.
    4:55:13 Hell yeah.
    4:55:16 That’s what I was saying.
    4:55:21 And that’s why it’s like we want this whole open language models thing, the Olmo thing
    4:55:25 is to try to keep the model where everything is open with the data as close to the frontier
    4:55:26 as possible.
    4:55:27 So we’re compute constrained.
    4:55:29 We’re personnel constrained.
    4:55:34 We rely on getting insights from people; like, John Schulman tells us to do RL on outputs.
    4:55:39 We can make these big jumps, but it just takes a long time to push the frontier of open source.
    4:55:44 And fundamentally, I would say that that’s because open source AI does not have the same
    4:55:46 feedback loops as open source software.
    4:55:49 We talked about open source software for security.
    4:55:52 Also it’s just because you build something once and you can reuse it.
    4:55:55 If you go into a new company, there’s so many benefits.
    4:55:58 But if you open source a language model, you have this data sitting around, you have this
    4:55:59 training code.
    4:56:04 It’s not that easy for someone to come and build on and improve because you need to spend
    4:56:05 a lot on compute.
    4:56:06 You need to have expertise.
    4:56:12 So until there are feedback loops of open source AI, it seems mostly an ideological mission.
    4:56:15 People like Mark Zuckerberg, which is like America needs this.
    4:56:21 And I agree with him, but in the time where the motivation ideologically is high, we need
    4:56:26 to capitalize and build this ecosystem around what benefits do you get from seeing the language
    4:56:27 model data.
    4:56:29 And there’s not a lot about that.
    4:56:33 We’re going to try to launch a demo soon where you can look at an Olmo model and a
    4:56:39 query and see what pre-training data is similar to it, which is like legally risky and complicated.
    4:56:43 But it’s like, what does it mean to see the data that the AI was trained on?
    4:56:44 It’s hard to parse.
    4:56:45 It’s terabytes of files.
    4:56:48 It’s like, I don’t know what I’m going to find in there.
    4:56:54 But that’s what we need to do as an ecosystem if people want open source AI to be financially
    4:56:55 useful.
    4:56:56 We didn’t really talk about Stargate.
    4:57:01 I would love to get your opinion on like what the new administration, the Trump administration,
    4:57:08 everything that’s being done from the America side and supporting AI infrastructure and
    4:57:10 the efforts of the different AI companies.
    4:57:11 What do you think about Stargate?
    4:57:17 What are we supposed to think about Stargate and does Sam have the money?
    4:57:18 Yeah.
    4:57:21 So I think Stargate is an opaque thing.
    4:57:23 It definitely doesn’t have $500 billion.
    4:57:25 It doesn’t even have $100 billion, right?
    4:57:30 So what they announced is this $500 billion number, Larry Ellison, Sam Altman and Trump
    4:57:31 said it.
    4:57:38 They thanked Trump and Trump did do some executive actions that do significantly improve the
    4:57:42 ability for this to be built faster.
    4:57:45 One of the executive actions he did is, on federal land, you can just basically build
    4:57:49 data centers and power plants, pretty much like that.
    4:57:52 And then the permitting process is basically gone or you file after the fact.
    4:57:56 So like one of the, again, like I had a Schizo take earlier, another Schizo take, if you’ve
    4:58:00 ever been to the Presidio in San Francisco, beautiful area.
    4:58:03 You could build a power plant and a data center there if you wanted to because it is federal
    4:58:04 land.
    4:58:05 It used to be a military base.
    4:58:11 But you know, obviously this would like piss people off, you know, it’s a good bit.
    4:58:14 Anyways, Trump has made it much easier to do this, right?
    4:58:18 Generally, Texas has the only unregulated grid in the nation as well.
    4:58:19 Let’s go Texas.
    4:58:24 And so, you know, therefore like ERCOT enables people to build faster as well.
    4:58:27 In addition, the federal regulations are coming down.
    4:58:31 And so Stargate is predicated on this, and this is why that whole show happened.
    4:58:35 Now, how they came up with a $500 billion number is beyond me.
    4:58:39 How they came up with a $100 billion number makes sense to some extent, right?
    4:58:44 And there’s actually a good table in here that I would like to show in that Stargate
    4:58:49 piece that I had.
    4:58:50 It’s the most recent one.
    4:58:51 Yeah.
    4:58:58 So anyways, Stargate, you know, it’s basically right, like there is, it’s a table about cost.
    4:59:01 There, you passed it already.
    4:59:03 It’s that one.
    4:59:06 So this table is kind of explaining what happens, right?
    4:59:10 So Stargate is in Abilene, Texas, the first $100 billion of it.
    4:59:17 That site is 2.2 gigawatts of power in, about 1.8 gigawatts of power consumed, right?
    4:59:24 Oracle is already building the first part of this; they started before
    4:59:25 Stargate came about.
    4:59:27 To be clear, they’ve been building it for a year.
    4:59:29 They tried to rent it to Elon, in fact, right?
    4:59:31 But Elon was like, “It’s too slow.
    4:59:32 I need it faster.”
    4:59:34 So then he went and did his Memphis thing.
    4:59:38 And so OpenAI was able to get it with this like weird joint venture called Stargate.
    4:59:42 They initially signed a deal with just Oracle for the first section of this cluster, right?
    4:59:50 This first section of this cluster, right, is roughly $5 billion to $6 billion of server
    4:59:51 spend, right?
    4:59:54 And then there’s another billion or so of data center spend.
    4:59:59 But then likewise, like if you fill out that entire 1.8 gigawatts with the next two generations
    5:00:05 of NVIDIA’s chips, GB200, GB300, VR200, and you fill it out completely, that ends up being
    5:00:10 roughly $50 billion of server cost, right?
    5:00:15 Plus there’s data center cost, plus maintenance cost, plus operation cost, plus all these
    5:00:16 things.
    5:00:19 And that’s where OpenAI gets to their $100 billion announcement that they had, right?
    5:00:22 Because they talked about $100 billion is phase one.
    5:00:24 That’s this Abilene, Texas data center, right?
    5:00:27 $100 billion of total cost of ownership, quote, unquote, right?
    5:00:28 So it’s not CapEx.
    5:00:29 It’s not investment.
    5:00:32 It’s $100 billion of total cost of ownership.
    5:00:35 And then there will be future phases.
    5:00:39 They’re looking at other sites that are even bigger than this 2.2 gigawatts, by the way,
    5:00:40 in Texas and elsewhere.
    5:00:43 And so they’re not completely ignoring that.
    5:00:49 But there is the number of $100 billion that they save for phase one, which I do think will
    5:00:50 happen.
    5:00:51 They don’t even have the money for that.
    5:00:54 Furthermore, it’s not $100 billion, it’s $50 billion of spend, right?
    5:01:01 And then like $50 billion of operational cost, power, et cetera, rental pricing, et cetera.
    5:01:06 Because they’re renting it, OpenAI is renting the GPUs from the Stargate joint venture, right?
    5:01:08 What money do they actually have, right?
    5:01:11 SoftBank is going to invest, Oracle is going to invest, OpenAI is going to invest.
    5:01:13 OpenAI is on the line for $19 billion.
    5:01:17 Everyone knows that they’ve only got $6 billion in their last round and $4 billion in debt.
    5:01:23 But there is news of like SoftBank maybe investing $25 billion into OpenAI, right?
    5:01:25 So that’s part of it, right?
    5:01:26 So $19 billion can come from there.
    5:01:28 So OpenAI does not have the money at all, right?
    5:01:29 To be clear.
    5:01:34 Ink is not dried on anything; OpenAI has $0 for this $50 billion, right?
    5:01:38 Of which they're legally obligated to put $19 billion of capex into the joint venture
    5:01:41 and then the rest they’re going to pay via renting the GPUs from the joint venture.
    5:01:44 And then there’s Oracle.
    5:01:48 Oracle has a lot of money, they’re building the first section completely, they were spending
    5:01:49 for themselves, right?
    5:01:55 This $6 billion of CAPEX, $10 billion of TCO, and they were going to do that first section.
    5:01:57 They’re paying for that, right?
    5:02:00 As far as the rest of the section, I don’t know how much Larry wants to spend, right?
    5:02:01 At any point he could pull out, right?
    5:02:03 Like this is again, this is like completely voluntary.
    5:02:06 So at any point there’s no signed Inc. on this, right?
    5:02:09 But he potentially could contribute tens of billions of dollars, right, to be clear.
    5:02:11 He’s got the money, Oracle’s got the money.
    5:02:17 And then there’s like MGX, which is the UAE fund, which technically has $1.5 trillion
    5:02:18 for investing in AI.
    5:02:21 But again, like, I don’t know how real that money is.
    5:02:26 And like, whereas there is no ink signed for this, SoftBank does not have $25 billion
    5:02:27 of cash.
    5:02:32 They have to sell down their stake in ARM, which is the leader in CPUs and they IPO’ed
    5:02:33 it.
    5:02:34 This is obviously what they’ve always wanted to do.
    5:02:36 They just didn’t know where they’d redeploy the capital.
    5:02:38 Selling down the stake in ARM makes a ton of sense.
    5:02:42 So they can sell that down and invest in this if they want to and invest in Open AI if they
    5:02:43 want to.
    5:02:50 As far as money secured, the first 100,000-GB200 cluster can be funded.
    5:02:53 Everything else after that is up in the air.
    5:02:54 Money’s coming.
    5:02:55 I believe the money will come.
    5:02:57 I personally do.
    5:02:58 It’s a belief.
    5:03:02 It’s a belief that they are going to release better models and be able to raise money.
    5:03:06 But like the actual reality is that Elon’s right, the money does not exist.
    5:03:09 What does the US government have to do with anything?
    5:03:10 What does Trump have to do with everything?
    5:03:12 He’s just a hype man.
    5:03:16 Trump is, he’s reducing the regulation so they can build it faster.
    5:03:18 And he’s allowing them to do it, right?
    5:03:21 Because any investment of this size is going to involve like antitrust stuff.
    5:03:23 So obviously he’s going to allow them to do it.
    5:03:27 He’s going to enable the regulations to actually allow it to be built.
    5:03:31 I don’t believe there’s any US government dollars being spent on this though.
    5:03:32 Yeah.
    5:03:37 So I think he’s also just creating a general vibe that this regulation will go down and
    5:03:40 this is the era of building.
    5:03:42 So if you’re a builder, you want to create stuff.
    5:03:43 You want to launch stuff.
    5:03:44 This is the time to do it.
    5:03:48 And so like we’ve had this 1.8 gigawatt data center in our data for over a year now and
    5:03:51 we’ve been like sort of sending it to all of our clients, including many of these companies
    5:03:53 that are building the multi gigawatts.
    5:03:57 But that is like at a level that’s not quite maybe executives like seeing $500 billion,
    5:04:02 $100 billion, and then everyone’s asking them like, so it could spur like another like an
    5:04:04 even faster arms race, right?
    5:04:08 Because there’s already an arms race, but like this like $100 billion, $500 billion number.
    5:04:13 Trump talking about it on TV, like it could spur the arm race to be even faster and more
    5:04:15 investors to flood in and et cetera, et cetera.
    5:04:20 So I think, I think you’re right is that in that sense that open eye or sort of Trump
    5:04:23 is sort of like championing people are going to build more and his actions are going to
    5:04:25 let people build more.
    5:04:33 What are you excited about in these upcoming several years, in terms of cluster
    5:04:40 buildouts, in terms of breakthroughs in AI? Like, the best possible future you can imagine
    5:04:44 in the next couple of years, two, three, four years, what does that look like? It could
    5:04:51 be very specific technical things, like breakthroughs in post-training, or it could just be
    5:04:52 size, big.
    5:04:53 Yeah.
    5:04:55 I mean it’s impressive clusters.
    5:05:00 I really enjoy tracking the supply chain and, like, who's involved in what. I really
    5:05:01 do.
    5:05:04 It’s really fun to see like the numbers, the cost, who’s building what capacity helping
    5:05:07 them figure out how much capacity they should build, winning deals, strategic stuff.
    5:05:08 That’s really cool.
    5:05:14 I think technologically there’s a lot around the networking side that really excites me
    5:05:18 with optics and electronics kind of getting closer and closer, whether it be co-packaged
    5:05:22 optics or some sort of new forms of switching.
    5:05:25 This is internal to a cluster.
    5:05:26 Yeah.
    5:05:30 Also multi-data-center training; people are putting so much fiber between these
    5:05:35 data centers and lighting it up with so much bandwidth that there's a lot of interesting
    5:05:40 stuff happening on that end. Telecom has been really boring since 5G, and now it's really
    5:05:42 exciting again on the other side.
    5:05:44 Can you educate me a little bit about the speed of things?
    5:05:49 So the speed of memory versus the speed of interconnect versus the speed of fiber between
    5:05:50 data centers.
    5:05:53 Are these like orders of magnitude different?
    5:05:57 Can we at some point converge towards a place where it all just feels like one computer?
    5:05:58 No.
    5:06:01 I don’t think that’s possible.
    5:06:02 It’s only going to get harder to program.
    5:06:03 Not easier.
    5:06:04 Okay.
    5:06:07 It’s only going to get more difficult and complicated and more layers, right?
    5:06:11 The general image that people like to have is like this hierarchy of memory.
    5:06:14 So on chip is really close, localized within the chip, right?
    5:06:15 You have registers, right?
    5:06:19 Those are shared between some compute elements and then you’ll have caches, which are shared
    5:06:20 between more compute elements.
    5:06:21 Then you have like memory, right?
    5:06:24 Like HBM or DRAM, like DDR memory or whatever it is.
    5:06:27 And that’s shared between the whole chip.
    5:06:31 And then you can have, you know, pools of memory that are shared between many chips, right?
    5:06:33 And then storage and you keep zoning out, right?
    5:06:38 The access latency across data centers, across within the data center, within a chip is different.
    5:06:43 So like you’re obviously always, you’re always going to have different programming paradigms
    5:06:44 for this.
    5:06:45 It’s not going to be easy.
    5:06:46 Programming this stuff is going to be hard.
    5:06:48 Maybe AI can help, right?
    5:06:49 You know, with programming this.
    5:07:00 But the way to think about it is that, like, the more elements
    5:07:04 you add to a task, you don't get strong scaling, right?
    5:07:07 If I double the number of chips, I don't get 2x the performance, right?
    5:07:11 This is just like a reality of computing because there’s inefficiencies.
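
    One standard way to see why doubling chips does not double performance is an Amdahl's-law-style calculation; this sketch is illustrative, and the serial/communication fraction used here is an assumption, not a measured number for any real cluster.

    ```python
    # Why 2x the chips is not 2x the performance: an Amdahl's-law-style sketch.
    # The serial/communication fraction is illustrative, not a measured number.

    def speedup(n_chips: int, serial_fraction: float = 0.05) -> float:
        """Amdahl's law: speedup from n chips when a fixed fraction of work cannot be parallelized."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_chips)

    for n in (1, 2, 4, 1024):
        print(n, round(speedup(n), 2))
    # 1 -> 1.0, 2 -> 1.9, 4 -> 3.48, 1024 -> ~19.6: far from linear at scale
    ```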
    5:07:15 And there’s a lot of interesting work being done to make it not, you know, to make it
    5:07:19 more linear, whether it’s making the chips more networked together more tightly or,
    5:07:23 you know, cool programming models or cool algorithmic things that you can do on the
    5:07:25 model side, right?
    5:07:27 DeepSeq did some of these really cool innovations because they were limited on interconnect,
    5:07:29 but they still needed to parallelize, right?
    5:07:31 Like all sorts of, you know, all, everyone’s always doing stuff.
    5:07:35 Google’s got a bunch of work and everyone’s got a bunch of work about this.
    5:07:39 That stuff is super exciting on the model and workload and innovation side, right?
    5:07:42 Hardware, solid state transformers are interesting, right?
    5:07:46 For the power side, there’s all sorts of stuff on batteries and there’s all sorts of stuff
    5:07:49 on, you know, I think, I think when you look at, if you look at every layer of the compute
    5:07:50 stack, right?
    5:07:54 Whether it goes from lithography and etch all the way to fabrication to optics
    5:07:59 to networking to power to transformers to cooling, and you
    5:08:03 just go on up and up and up the stack, you know, even air conditioners for data centers
    5:08:04 are like innovating, right?
    5:08:07 Like it’s like, there’s like copper cables are innovating, right?
    5:08:10 Like you wouldn’t think it, but copper cables, like there’s some innovations happening there
    5:08:14 with like the density of how you can pack them and like, it’s like all of these layers
    5:08:18 of the stack all the way up to the models, human progress is at a pace that’s never been
    5:08:19 seen before.
    5:08:22 I’m just imagining you sitting back in a layer somewhere with screens everywhere, just monitoring
    5:08:27 the supply chain where all these clusters, like all the information you’re gathering,
    5:08:28 I mean, you do incredible.
    5:08:29 There’s a big team.
    5:08:30 There’s a big team.
    5:08:39 I mean, you’re, you do quite incredible work with seminars, I mean, just keeping your finger
    5:08:43 on the pulse of human civilization in the digital world.
    5:08:44 It’s pretty cool.
    5:08:45 Like just to watch, feel that.
    5:08:46 Yeah.
    5:08:47 Thank you.
    5:08:48 I guess.
    5:08:51 Feel all of us like doing shit.
    5:08:52 Epic shit.
    5:08:53 Feel the AGI.
    5:08:59 I mean, from meme to reality. What about you, Nathan, are there breakthroughs that you're
    5:09:01 looking forward to potentially?
    5:09:04 I had a while to think about this while listening to Dylan’s beautiful response.
    5:09:06 He didn’t listen to me.
    5:09:11 I knew, no, I knew this was coming and it’s like, realistically, training models is very
    5:09:13 fun because there’s so much low hanging fruit.
    5:09:19 And the thing that makes my job entertaining, I train models, I write analysis about what’s
    5:09:24 happening with models and it’s fun because there is obviously so much more progress to
    5:09:25 be had.
    5:09:29 And the real motivation why I do this, like somewhere where I can share things is that
    5:09:33 there’s just, I don’t trust people that are like, trust me bro, we’re going to make AI
    5:09:34 good.
    5:09:36 It’s like, we’re the ones that it’s like, we’re going to do it and you can trust us
    5:09:41 and we’re just going to have all the AI and it’s just like, I would like a future where
    5:09:45 more people have a say in what AI is and can understand it.
    5:09:49 And that’s a little bit less fun that it’s not a like positive thing of like, this is
    5:09:50 just all really fun.
    5:09:55 Like training models is fun and bring people in as fun, but it’s really like AI, if it
    5:09:59 is going to be the most powerful technology of my lifetime, it’s like, we need to have
    5:10:06 a lot of people involved in making that and making it open helps with that as accessible
    5:10:08 as possible as open as possible.
    5:10:09 Yeah.
    5:10:14 My read of the last few years is that more openness would help the AI ecosystem in terms
    5:10:18 of having more people understand what's going on, whether that's researchers from non-AI fields,
    5:10:20 to governments, to everything.
    5:10:22 It doesn’t mean that openness will always be the answer.
    5:10:27 I think then I will reassess, like, what is the biggest problem facing AI, and take
    5:10:30 a different tack on the wild ride that we're on.
    5:10:37 And for me, just from even the user experience, anytime you have, like Karpathy said, the
    5:10:46 aha moments, like the magic of seeing the reasoning, the chain of thought, it's like,
    5:10:49 there's something really just fundamentally beautiful about that.
    5:10:53 It’s putting a mirror to ourselves and seeing like, oh shit, it is solving intelligence
    5:11:00 as the cliche, like goal of these companies is, and you get to understand why we humans
    5:11:03 are special, the intelligence within us is special.
    5:11:08 And for now, also why we’re special in terms of, we seem to be conscious and the AI systems
    5:11:14 for now aren’t, and we get to explore that mystery.
    5:11:20 So that’s, it’s just really cool to get to explore these questions that I don’t think,
    5:11:25 I would have never imagined would be even possible.
    5:11:32 Back when I was just watching Deep Blue with excitement, because I wouldn't have ever thought
    5:11:35 this kind of AI would be possible in my lifetime.
    5:11:38 It’s like, this is really feels like AI.
    5:11:39 It’s incredible.
    5:11:44 I started with AI learning to fly a quadrotor, like, learn to fly, and it
    5:11:47 was just like, it learned to fly up, it would hit the ceiling, and we'd stop and catch it.
    5:11:51 It's like, okay, that is really stupid compared to what's going on now.
    5:11:56 And now you could probably, with natural language, tell it to learn to fly, and it's going to
    5:11:59 generate the control algorithm required to do that.
    5:12:03 There’s low level blockers, like we had to do some weird stuff for that, but you can,
    5:12:04 you definitely can.
    5:12:07 Back to our robotics conversation, yeah, when you have to interact with the actual physical
    5:12:12 world, it's hard. What gives you hope about the future of human civilization?
    5:12:18 Looking into the next 10 years, 100 years, 1,000 years, how long do you think we’ll make
    5:12:19 it?
    5:12:22 Do you think we’ve got 1,000 years?
    5:12:27 Humans will definitely be around in 1,000 years. I think there are ways that very bad
    5:12:31 things could happen and there would be way fewer humans, but humans are very good at surviving.
    5:12:35 There have been a lot of times when that has been true.
    5:12:39 I don't think we're necessarily good at long-term credit assignment of risk,
    5:12:44 but when the risk becomes immediate, we tend to figure things out.
    5:12:51 For that reason, I'm like, there are physical constraints to things like AGI, hyper-recursive
    5:12:56 improvement-to-kill-us-all type stuff, physical reasons, and given how humans have figured things
    5:13:00 out before, I'm not too worried about AI takeover.
    5:13:05 There are other international things that are worrying, but there’s just fundamental human
    5:13:08 goodness and trying to amplify that.
    5:13:16 We’re on a tenuous time, and if you look at humanity as a whole, there’s been times where
    5:13:20 things go backwards, there’s times when things don’t happen at all, and we’re on what should
    5:13:23 be very positive trajectory right now.
    5:13:29 Yeah, there seems to be progress, but just with power, there’s spikes of human suffering.
    5:13:33 We want to try to minimize the amount of spikes.
    5:13:36 Generally humanity is going to suffer a lot less.
    5:13:37 I’m very optimistic about that.
    5:13:44 I do worry of techno-fascism type stuff arising as AI becomes more and more prevalent and
    5:13:48 powerful, and those who control it can do more and more.
    5:13:53 Maybe it doesn’t kill us all, but at some point, every very powerful human is going to
    5:13:58 want a brain-computer interface so that they can interact with AGI and all of its advantages
    5:14:05 in many more way and merge its mind with that person’s capabilities can leverage those much
    5:14:11 better than anyone else, and therefore won’t be one person rule them all, but the thing
    5:14:16 I worry about is it’ll be few people, hundreds, thousands, tens of thousands, maybe millions
    5:14:22 of people rule whoever’s left and the economy around it.
    5:14:28 That’s the thing that’s probably more worrisome is human machine amalgamations.
    5:14:32 This enables an individual human to have more impact on the world, and that impact can be
    5:14:35 both positive and negative.
    5:14:39 Generally humans have positive impacts on the world, at least societally, but it’s possible
    5:14:44 for individual humans to have such negative impacts, and AGI, at least as I think the
    5:14:49 labs define it, which is not a runaway sentient thing, but rather just something that can
    5:14:54 do a lot of tasks really efficiently, amplifies the capabilities of someone causing extreme
    5:14:56 damage.
    5:15:01 For the most part, I think it'll be used for profit-seeking motives, which will then
    5:15:04 increase the abundance and supply of things, and therefore reduce suffering,
    5:15:05 right?
    5:15:07 What’s the goal?
    5:15:12 Scrolling on a timeline, just rolling in stasis.
    5:15:15 Scrolling holds the status quo of the world.
    5:15:16 That is a positive outcome, right?
    5:15:23 Like if I have food tubes and I'm plugged in, scrolling, and I'm happy, that's a positive outcome.
    5:15:30 While expanding out into the cosmos, well, this is a fun time to be alive.
    5:15:34 And thank you for pushing the forefront of what is possible in humans, and thank you
    5:15:35 for talking to me.
    5:15:36 This was fun.
    5:15:37 Thanks for having us.
    5:15:38 Thanks for having us.
    5:15:42 Thanks for listening to this conversation with Dylan Patel and Nathan Lambert.
    5:15:46 To support this podcast, please check out our sponsors in the description.
    5:15:52 And now, let me leave you with some words from Richard Feynman.
    5:15:57 For a successful technology, reality must take precedence over public relations.
    5:16:01 For nature cannot be fooled.
    5:16:03 Thank you for listening, and I hope to see you next time.

    Dylan Patel is the founder of SemiAnalysis, a research & analysis company specializing in semiconductors, GPUs, CPUs, and AI hardware. Nathan Lambert is a research scientist at the Allen Institute for AI (Ai2) and the author of a blog on AI called Interconnects.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep459-sc
    See below for timestamps, and to give feedback, submit questions, contact Lex, etc.

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Dylan’s X: https://x.com/dylan522p
    SemiAnalysis: https://semianalysis.com/
    Nathan’s X: https://x.com/natolambert
    Nathan’s Blog: https://www.interconnects.ai/
    Nathan’s Podcast: https://www.interconnects.ai/podcast
    Nathan’s Website: https://www.natolambert.com/
    Nathan’s YouTube: https://youtube.com/@natolambert
    Nathan’s Book: https://rlhfbook.com/

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Invideo AI: AI video generator.
    Go to https://invideo.io/i/lexpod
    GitHub: Developer platform and AI code editor.
    Go to https://gh.io/copilot
    Shopify: Sell stuff online.
    Go to https://shopify.com/lex
    NetSuite: Business management software.
    Go to http://netsuite.com/lex
    AG1: All-in-one daily nutrition drinks.
    Go to https://drinkag1.com/lex

    OUTLINE:
    (00:00) – Introduction
    (13:28) – DeepSeek-R1 and DeepSeek-V3
    (35:02) – Low cost of training
    (1:01:19) – DeepSeek compute cluster
    (1:08:52) – Export controls on GPUs to China
    (1:19:10) – AGI timeline
    (1:28:35) – China’s manufacturing capacity
    (1:36:30) – Cold war with China
    (1:41:00) – TSMC and Taiwan
    (2:04:38) – Best GPUs for AI
    (2:19:30) – Why DeepSeek is so cheap
    (2:32:49) – Espionage
    (2:41:52) – Censorship
    (2:54:46) – Andrej Karpathy and magic of RL
    (3:05:17) – OpenAI o3-mini vs DeepSeek r1
    (3:24:25) – NVIDIA
    (3:28:53) – GPU smuggling
    (3:35:30) – DeepSeek training on OpenAI data
    (3:45:59) – AI megaclusters
    (4:21:21) – Who wins the race to AGI?
    (4:31:34) – AI agents
    (4:40:16) – Programming and AI
    (4:47:43) – Open source
    (4:56:55) – Stargate
    (5:04:24) – Future of AI

    PODCAST LINKS:
    – Podcast Website: https://lexfridman.com/podcast
    – Apple Podcasts: https://apple.co/2lwqZIr
    – Spotify: https://spoti.fi/2nEwCF8
    – RSS: https://lexfridman.com/feed/podcast/
    – Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
    – Clips Channel: https://www.youtube.com/lexclips

  • #458 – Marc Andreessen: Trump, Power, Tech, AI, Immigration & Future of America

    AI transcript
    0:00:05 The following is a conversation with Marc Andreessen, his second time on the podcast.
    0:00:11 Marc is a visionary tech leader and investor who fundamentally shaped the development of
    0:00:15 the internet and the tech industry in general over the past 30 years.
    0:00:22 He is the co-creator of Mosaic, the first widely used web browser, co-founder of Netscape,
    0:00:28 co-founder of the legendary Silicon Valley Venture Capital firm Andreessen Horowitz, and
    0:00:33 is one of the most influential voices in the tech world, including at the intersection
    0:00:37 of technology and politics.
    0:00:40 And now, a quick few second mention of his sponsor.
    0:00:43 Check them out in the description, it’s the best way to support this podcast.
    0:00:49 We’ve got Encord for unifying your ML stack, GitHub for programming, Notion for team projects
    0:00:56 and collaboration, Shopify for merch, and LMNT for hydration, choose wisely my friends.
    0:01:02 Also, if you want to get in touch with me, for whatever reason, go to lexfridman.com/contact.
    0:01:06 And now, onto the full ad reads, no ads in the middle, I try to make this interesting,
    0:01:12 but if you skip them, please still check out the sponsors, I enjoy their stuff, maybe you
    0:01:13 will too.
    0:01:19 This episode is brought to you by Encord, a platform that provides data focused AI tooling
    0:01:25 for data annotation, curation, and management, and for model evaluation, once you train up
    0:01:29 the model on the data that you curate.
    0:01:34 In this conversation with Marc Andreessen, we actually discuss what he calls kind of
    0:01:37 like the trillion dollar questions.
    0:01:42 And one of them for AI is, how effective will synthetic data be?
    0:01:45 It really is an open question.
    0:01:51 What piece, what fraction of the intelligence of future models will be based on training
    0:01:53 on synthetic data?
    0:01:57 At the top AI labs, I’m hearing a lot of optimism.
    0:02:02 As far as I can tell that optimism is not currently, at least in the general case, based
    0:02:04 on any real evidence.
    0:02:10 So I do think synthetic data will play a part, but how big a part?
    0:02:14 There’s still going to be some curation from humans, there’s still going to need to be
    0:02:15 a human in the loop.
    0:02:23 I think the real question is, how do you effectively integrate the human in the loop, so that the
    0:02:34 synthetic data, sort of 99% synthetic, 1% human, that combination can be most effective?
    0:02:35 That’s a real question.
    0:02:38 And companies like Encord are trying to solve that very problem.
    0:02:44 First of all, they want to provide the tooling for the annotation, for the actual human-AI
    0:02:50 collaboration, but also asking and answering the research question of how do you pull it
    0:02:55 all off and make the resulting model more intelligent for very specific applications and for the
    0:02:57 general applications?
    0:03:00 Yeah, so Encord does a really good job on the tooling side.
    0:03:08 Go try them out to curate, annotate, and manage your AI data at encord.com/lex.
    0:03:12 That’s encord.com/lex.
    0:03:17 This episode is brought to you by GitHub and GitHub Copilot.
    0:03:24 If you don’t know what that is, my friends you’re in for a joyous, beautiful surprise.
    0:03:32 I think a lot of people that program regularly know and love GitHub and know and love Copilot.
    0:03:40 It’s the OG AI programming assistant, and it’s the one that’s really trying to win this
    0:03:42 very competitive space.
    0:03:44 It is not easy.
    0:03:49 If you’re somebody that uses VS Code, obviously, well, maybe not obviously, but you can use
    0:03:54 GitHub Copilot in VS Code, but you can use it also in other IDEs.
    0:03:58 I’m going to be honest with you, it’s a very competitive space.
    0:04:06 I’m trying all the different tools in the space, and I really love how much GitHub and
    0:04:10 GitHub Copilot want to win in this competitive space.
    0:04:17 I’m excitedly sitting back and just eating popcorn like that, Michael Jackson meme, and
    0:04:20 just enjoying the hell out of it.
    0:04:27 Like I said, I’m going to be doing a bunch of programming episodes, including with ThePrimeagen.
    0:04:34 He I think has a love/hate relationship with AI and with AI agents, and with the role of
    0:04:36 AI in the programming experience.
    0:04:42 He’s really at the forefront of people that are playing with all these languages, with
    0:04:48 all these different applications, with all the different use cases of code, and he is
    0:04:53 a Neovim user, so he’s going to be skeptical in general of new technology.
    0:04:58 He’s a curmudgeon sitting on a porch, on a rocking chair, screaming at the kids, throwing
    0:05:04 stuff at them, but at the same time, he’s able to play with the kids as well, so I am more
    0:05:09 on the kids side, with a childlike joy, enjoy the new technology.
    0:05:18 For me, basically everything I do, programming-wise, has the possibility of AI either reviewing
    0:05:21 it or assisting it.
    0:05:23 It’s constantly in the loop.
    0:05:29 Even if I’m writing stuff from scratch, I’m always just kind of one second away from asking
    0:05:33 a question about the code, or asking it to generate, or rewrite a certain line, or to
    0:05:39 add a few more lines, all that kind of stuff, so I’m constantly, constantly using it.
    0:05:45 If you’re learning to code, or if you’re an advanced programmer, it is really important
    0:05:49 that you get better and better at using AI as an assistant programmer.
    0:05:55 Get started with GitHub Copilot for free today at gh.io/copilot.
    0:06:00 This episode is also brought to you by Notion, a note-taking and team collaboration tool
    0:06:05 that Marc Andreessen, on this very episode, sings a lot of praises to.
    0:06:07 I believe he sings them; was it on mic or off mic?
    0:06:10 I don’t remember, but anyway, he loves it.
    0:06:15 It’s one of the tools, one of the companies, one of the ecosystems that integrate AI really
    0:06:19 effectively for team applications.
    0:06:25 You have, let’s see, docs, and wikis, and projects, and all that kind of stuff.
    0:06:30 You can have the AI load all of that in, and answer questions based on that.
    0:06:36 You can connect a bunch of apps, like you can connect Slack, you can connect Google Drive.
    0:06:43 I think in the context, we were talking about something like Notion for email, for Gmail.
    0:06:47 I don’t know if Notion integrates email yet.
    0:06:53 They’re just like this machine that’s constantly increasing the productivity of every aspect
    0:06:57 of your life, so I’m sure they’re going to start integrating more and more apps.
    0:07:02 I use it for Slack and Google Drive, but I use it primarily at the individual level for
    0:07:07 note-taking, and even at the individual level, just incredible what Notion AI can do.
    0:07:12 Try it out for free when you go to Notion.com/lex.
    0:07:19 It’s all lowercase Notion.com/lex to try the power of Notion AI today.
    0:07:23 This episode is also brought to you by Shopify, a platform designed for anyone to sell anywhere
    0:07:26 with a great looking online store.
    0:07:35 There are few people who embody the joy and the power of capitalism more than Marc Andreessen.
    0:07:43 I was at a thing where Marc and Toby were both there, and then we were chatting, and
    0:07:47 they were very friendly, so I think they’re friends, and I got to hang out with Toby.
    0:07:50 He is, again, an incredible person.
    0:07:56 I’ve said it again and again, and it’s almost becoming funny, that eventually we’ll do a
    0:07:57 podcast.
    0:07:59 I don’t know why we haven’t done a podcast.
    0:08:05 There are a few people in my life where it’s like that; Geoffrey Hinton is one of those
    0:08:06 people.
    0:08:12 It’s like, we’ve agreed to do a podcast for so long, and we’ve just been kind of lazy
    0:08:15 about it, and Toby’s the same.
    0:08:17 Anyway, he’s the CEO of Shopify.
    0:08:21 I don’t even know if he knows that Shopify sponsors this podcast.
    0:08:23 It doesn’t matter.
    0:08:27 It goes without saying, it should be obvious to everybody, that one doesn’t affect the
    0:08:28 other.
    0:08:33 I’m very fortunate to have way more sponsors than we could possibly fit, so I could pick
    0:08:39 whoever the hell I want, and whatever guests I choose will never have anything to do with
    0:08:42 the companies that sponsor the podcast.
    0:08:45 There’s not even like a tinge of influence.
    0:08:50 In fact, if there’s anything, it’ll be the opposite direction, but I also try to avoid
    0:08:51 that.
    0:08:57 It’s possible I talk to the CEO of GitHub, for example, on this podcast, and GitHub sponsors
    0:08:58 this podcast.
    0:09:03 It’s possible I talk to the CEO of Shopify, Toby, and Shopify sponsors this podcast.
    0:09:08 One doesn’t affect the other, and obviously, again, goes without saying, but let me say
    0:09:16 it, make it explicit that nobody can buy their way onto the podcast, whether through sponsorships
    0:09:19 or buying me dinner or whatever.
    0:09:20 I don’t know.
    0:09:27 It’s just, it’s impossible, and most likely, if that’s attempted, it’s going to backfire
    0:09:33 so I think people intuitively know not to attempt because it would really piss me off.
    0:09:37 Anyway, this is a detour.
    0:09:38 We’re supposed to talk about Shopify.
    0:09:44 I have a Shopify store, lexfridman.com/store, that sells t-shirts, but you can sell more
    0:09:49 sophisticated stuff, make a lot of money, and participate in this beautiful machinery
    0:09:50 of capitalism.
    0:09:55 Sign up for a $1 per month trial period at Shopify.com/lex.
    0:09:56 That’s all lowercase.
    0:10:01 Go to Shopify.com/lex to take your business to the next level today.
    0:10:07 This episode is also brought to you by LMNT, my daily zero-sugar and delicious electrolyte
    0:10:13 mix of which I consume very ridiculously large amounts.
    0:10:17 You know, salt used to be currency in the ancient world.
    0:10:18 How silly are humans?
    0:10:26 They’re not silly; it’s sort of surprising the things we converge on as being the store
    0:10:31 of value, just value in general, the kind of things we assign value to together.
    0:10:41 We just kind of all agree that this item, this material, this idea, this building is
    0:10:49 extremely valuable, and then we compete over that resource, or that idea, or that building.
    0:10:54 We fight, and sometimes there is wars, and sometimes there is complete destruction, and
    0:10:59 the rise and fall of empires, all over some resource.
    0:11:04 What a funny, strange little world.
    0:11:12 Completely harmless, as The Hitchhiker’s Guide to the Galaxy summarizes humans.
    0:11:15 For some reason, instead of that book, I was going to say Catcher in the Rye.
    0:11:21 In my exhausted brain, the books kind of all morph together, but Catcher in the Rye is
    0:11:23 a really damn good book.
    0:11:28 All of the classics I return to often, the simple books, even like the first book I read
    0:11:34 in English, called The Giver.
    0:11:42 It’s like I return to it in its simplicity, maybe it has sentimental value, maybe that’s
    0:11:46 what it is, but just the simplicity of words, Animal Farm, I’ve read, I don’t know how
    0:11:51 many times, probably over 50 times, I return to it over and over and over, the simplicity,
    0:11:54 the poetry of that simplicity.
    0:11:59 That’s something that just resonates with my brain, maybe it’s a peculiar kind of brain.
    0:12:08 It is a peculiar kind of brain, and I have to thank you for being patient with this peculiar
    0:12:09 kind of brain.
    0:12:14 Get a sample pack for free with any purchase of whatever the thing I was talking about,
    0:12:16 which I think is LMNT.
    0:12:21 Try it at drinkLMNT.com/lex.
    0:12:22 This is the Lex Fridman Podcast.
    0:12:26 To support it, please check out our sponsors in the description.
    0:12:39 And now, dear friends, here’s Marc Andreessen.
    0:12:48 All right, let’s start with optimism.
    0:12:57 If you were to imagine the best possible one to two years, 2025, ’26, for tech, for big
    0:13:01 tech and small tech, what would it be, what would it look like, lay out your vision for
    0:13:05 the best possible scenario trajectory for America?
    0:13:06 The roaring 20s.
    0:13:07 The roaring 20s.
    0:13:08 The roaring 20s.
    0:13:09 I mean, look, a couple of things.
    0:13:14 It is remarkable over the last several years with all of the issues, including not just
    0:13:17 everything in politics, but also COVID and every other thing that’s happened.
    0:13:18 It’s really amazing.
    0:13:19 The US just kept growing.
    0:13:21 If you just look at economic growth charts, the US just kept growing.
    0:13:23 And very significantly, many other countries stopped growing.
    0:13:27 So Canada stopped growing, the UK stopped growing, Germany stopped growing.
    0:13:31 And some of those countries may be actually going backwards at this point.
    0:13:34 And there’s a very long discussion to be had about what’s wrong with those countries.
    0:13:37 And there’s, of course, plenty of things that are wrong with our country.
    0:13:41 But the US is just flat out primed for growth.
    0:13:47 And I think that’s a consequence of many factors, some of which are lucky and some of
    0:13:48 which through hard work.
    0:13:52 And so the lucky part is just, number one, we just have incredible physical security
    0:13:54 by being our own continent.
    0:13:56 We have incredible natural resources.
    0:14:00 There’s this running joke now that whenever it looks like the US is going to run out of
    0:14:04 some rare earth material, some farmer in North Dakota kicks over a hay bale and finds like
    0:14:05 a $2 trillion deposit.
    0:14:12 I mean, we’re just blessed with geography and the natural resources.
    0:14:14 We can be energy independent anytime we want.
    0:14:16 This last administration decided they didn’t want to be.
    0:14:18 They wanted to turn off American energy.
    0:14:22 This new administration has declared that they have a goal of turning it on in a dramatic
    0:14:23 way.
    0:14:24 There’s no question we can be energy independent.
    0:14:26 We can be a giant net energy exporter.
    0:14:28 It’s purely a question of choice.
    0:14:31 And I think the new administration is going to do that.
    0:14:33 And so, oh, and then I would say two other things.
    0:14:38 One is, we are the beneficiaries, and you’re an example of this, we’re a beneficiary, we’re
    0:14:43 the beneficiary of 50, 100, 200 years of like the basically most aggressive, driven, smartest
    0:14:46 people in the world, most capable people moving to the US and raising their kids here.
    0:14:50 And so, we just have, you know, by far the most dynamic, you know, we’re by far the
    0:14:54 most dynamic population, most aggressive, you know, we’re the most aggressive set of
    0:14:59 characters in certainly in any Western country and have been for a long time and certainly
    0:15:00 are today.
    0:15:03 And then finally, I would just say, look, we are overwhelmingly the advanced technology
    0:15:04 leader.
    0:15:08 You know, we have our issues and we have, I would say, a particular issue with manufacturing,
    0:15:12 which we could talk about, but for, you know, anything in software or anything in AI, anything
    0:15:16 in, you know, all these, you know, advanced biotech, all these advanced areas of technology,
    0:15:20 like we’re by far the leader, again, in part because many of the best scientists and engineers
    0:15:23 in those fields, you know, come to the US.
    0:15:29 And so, we just, we have all of the preconditions for a, for just a monster, boom, you know,
    0:15:32 I could see economic growth going way up, I could see productivity growth going way
    0:15:35 up, rate of technology adoption going way up, and then we could, we can do a global
    0:15:40 tour if you like, but like, basically all of our competitors have like profound issues
    0:15:44 and, you know, we could kind of go through them one by one, but the competitive landscape
    0:15:49 just is, it’s like, it’s remarkable how much better positioned we are
    0:15:50 for growth.
    0:15:54 What about the humans themselves, almost philosophical questions, you know, I travel across the world
    0:16:00 and there’s something about the American spirit, the entrepreneurial spirit that’s uniquely
    0:16:02 intense in America.
    0:16:03 I don’t know what that is.
    0:16:11 I’ve talked to Saagar Enjeti, who claims it might be the Scots-Irish blood that runs through
    0:16:12 the history of America.
    0:16:13 What is it?
    0:16:17 You, at the heart of Silicon Valley, is there something in the water?
    0:16:19 Why is there this entrepreneurial spirit?
    0:16:20 Yeah.
    0:16:22 So is this a family show or am I allowed to swear?
    0:16:23 You can say whatever the fuck you want.
    0:16:24 Okay.
    0:16:28 So the TV, the great TV show Succession, the show, of course, in which you were
    0:16:30 intended to root for exactly zero of the characters.
    0:16:31 Yes.
    0:16:34 In the show Succession, in the final episode of the first season, when the whole family
    0:16:39 is over in Logan Roy’s ancestral homeland of Scotland and they’re at this castle, you
    0:16:40 know, for some wedding.
    0:16:43 And Logan is just like completely miserable after having to, you know, because he’s been
    0:16:47 in New York for 50 years, he’s totally miserable being back in, in Scotland and he gets in
    0:16:51 some argument with somebody and he’s like, he says, finally, just says, my God, I cannot
    0:16:58 wait to get out of here and go back to America where we could fuck without condoms.
    0:17:01 So was that a metaphor or okay, exactly, right?
    0:17:04 And so no, but it’s exactly the thing and everybody instantly knows what they’re like.
    0:17:07 Everybody watching that instantly starts laughing because you know what it means, which is exactly
    0:17:08 this.
    0:17:09 I think there’s like an ethnographic way of it.
    0:17:12 There’s a bunch of books on like all, like you said, the Scots-Irish, like all the different
    0:17:15 derivations of all the different ethnic groups that have come to the U.S. over the course
    0:17:17 of the last 400 years, right?
    0:17:22 But what we have is this sort of amalgamation of like, you know, the northeast Yankees who
    0:17:26 were like super tough and hardcore, yeah, the Scots-Irish are super aggressive.
    0:17:31 You know, we’ve got the Southerners and the Texans, you know, and the sort of whole kind
    0:17:35 of blended, you know, kind of Anglo-Hispanic thing, super incredibly tough, strong driven,
    0:17:40 you know, capable characters, you know, the Texas Rangers, you know, we’ve got the, yeah,
    0:17:43 we’ve got the California, you know, we’ve got the, you know, the wild, we’ve got the
    0:17:47 incredibly, you know, inventive hippies, but we also have the hardcore engineers, we’ve
    0:17:50 got, you know, the best, you know, rocket scientists in the world, we’ve got the best,
    0:17:53 you know, artists in the world, you know, creative professionals, you know, the best
    0:17:54 movies.
    0:18:00 And so, yeah, there is, you know, all of our problems, I think, are basically, you know,
    0:18:04 in my view, to some extent, you know, attempts to basically sand all that off and make everything
    0:18:09 basically boring and mediocre, but there is something in the national spirit that basically
    0:18:10 keeps bouncing back.
    0:18:14 And basically what we discover over time is we basically just need people to stand up
    0:18:17 at a certain point and say, you know, it’s time to, you know, it’s time to build, it’s
    0:18:20 time to grow, you know, it’s time to do things.
    0:18:23 And so, and there’s something in the American spirit that just like, we’re just right back
    0:18:24 to life.
    0:18:28 And before I actually saw, you know, I saw it as a kid here in the early 80s, you know,
    0:18:34 because the 70s were like horribly depressing, right, in the U.S., like they were a nightmare
    0:18:35 on many fronts.
    0:18:40 And in a lot of ways, the last decade to me has felt a lot like the 70s, just being mired
    0:18:45 in misery and just this self-defeating, you know, negative attitude and everybody’s upset
    0:18:46 about everything.
    0:18:50 And, you know, and then by the way, like energy crisis and hostage crisis and foreign wars
    0:18:56 and just demoralization, right, you know, the low point for in the 70s was, you know,
    0:18:59 Jimmy Carter, who just passed away, he went on TV and he gave this speech known as the
    0:19:00 malaise speech.
    0:19:04 And it was like the weakest possible attempt to like rouse people back to a sense of like
    0:19:05 passion, and it completely failed.
    0:19:10 And, you know, we had the, you know, the hostages in, you know, Iran for I think 440 days and
    0:19:14 every night on the nightly news, it was, you know, lines around the block, energy crisis,
    0:19:16 depression, inflation.
    0:19:19 And then, you know, Reagan came in and, you know, Reagan was a very controversial character
    0:19:23 at the time and, you know, he came in and he’s like, nope, it’s morning in America.
    0:19:25 And we’re the shining city on the hill and we’re going to do it.
    0:19:26 And he did it.
    0:19:27 And we did it.
    0:19:29 And the national spirit came roaring back and, you know, worked really hard for a full
    0:19:30 decade.
    0:19:33 And I think that’s exactly what, I think, you know, we’ll see, but I think that’s what
    0:19:34 could happen here.
    0:19:39 And I just did a super long podcast on Milton Friedman with Jennifer Burns, who’s this incredible
    0:19:41 professor at Stanford.
    0:19:42 And he was part of the Reagan.
    0:19:46 So there’s a bunch of components to that, one of which is economic.
    0:19:47 Yes.
    0:19:52 And one of which, maybe you can put a word on it of not to be romantic or anything, but
    0:19:58 freedom, individual freedom, economic freedom, political freedom, and just in general, individualism.
    0:20:00 Yeah, that’s right.
    0:20:01 Yeah.
    0:20:05 And as you know, as America has this incredible streak of individualism, you know, and individualism
    0:20:09 in America probably peaked, I think, between roughly, call it the end of the Civil War,
    0:20:14 1865 through to probably call it 1931 or something, you know, and there was this like incredible
    0:20:15 run.
    0:20:17 I mean, that period, you know, we now know that period as the Second Industrial Revolution.
    0:20:21 And it’s when the United States basically assumed global leadership and basically took
    0:20:24 over technological and economic leadership from England.
    0:20:27 And then, you know, that led to, you know, ultimately then, therefore being able to,
    0:20:30 you know, not only industrialize the world, but also win World War II and then win the
    0:20:31 Cold War.
    0:20:36 And yeah, you know, there’s a massive industrial, you know, massive individualistic streak.
    0:20:39 By the way, you know, Milton Friedman’s old videos are all on YouTube.
    0:20:46 They are every bit as compelling and inspiring as they were then, you know, he’s a singular
    0:20:51 figure and many of us, you know, I never knew him, but he was actually at Stanford for many
    0:20:52 years at the Hoover Institution.
    0:20:53 But I never met him.
    0:20:57 But I know a lot of people who worked with him and, you know, he was a singular figure,
    0:21:02 but his, all of his lessons, you know, live on are fully available.
    0:21:05 But I would also say it’s not just individualism and this is, you know, this is one of the
    0:21:08 big things that’s like playing out in a lot of our culture and kind of political fights
    0:21:12 right now, which is, you know, basically this feeling, you know, certainly that I have and
    0:21:16 I share with a lot of people, which is it’s not enough for America to just be an economic
    0:21:20 zone and it’s not enough for us to just be individuals and it’s not enough to just have
    0:21:23 line go up and it’s not enough to just have economic success.
    0:21:29 There are deeper questions at play and also, you know, there’s more to a country than just
    0:21:30 that.
    0:21:32 And, you know, quite frankly, a lot of it is intangible.
    0:21:37 A lot of it is, you know, involved spirit and passion and, you know, like I said, we
    0:21:41 have more of it than anybody else, but, you know, we have to choose to want it.
    0:21:43 The way I look at it is like all of our problems are self-inflicted.
    0:21:46 Like they’re, you know, decline is a choice.
    0:21:50 You know, all of our problems are basically demoralization campaigns, you know, basically
    0:21:53 people telling us, people in positions of authority telling us that we should, you know,
    0:21:55 we shouldn’t, you know, stand out.
    0:21:56 We shouldn’t be adventurous.
    0:21:57 We shouldn’t be exciting.
    0:21:58 We shouldn’t be exploratory.
    0:22:01 You know, we shouldn’t, you know, this, that and the other thing and we should feel bad
    0:22:02 about everything that we do.
    0:22:06 And I think we’ve lived through a decade where that’s been the prevailing theme and I think
    0:22:10 quite honestly, as of November, I think people are done with it.
    0:22:14 If we could go on a tangent of a tangent, since we’re talking about individualism and
    0:22:19 that’s not all that it takes, you’ve mentioned in the past the book, The Ancient City, by,
    0:22:24 if I could only pronounce the name, French historian Numa Denis Fustel de Coulanges.
    0:22:25 I don’t know.
    0:22:26 That was amazing.
    0:22:27 Okay.
    0:22:28 All right.
    0:22:29 From the 19th century.
    0:22:30 Anyway, you said this is an important book to understand who we are and where we come
    0:22:31 from.
    0:22:34 So what that book does, it’s actually quite a striking book.
    0:22:40 So the book is written by this guy, Fustel de Coulanges, let’s do the pronunciations, foreign language
    0:22:42 pronunciations for the day.
    0:22:50 He was a professor of classics at the Sorbonne in Paris, you know, the top university in
    0:22:51 the, actually in the 1860s.
    0:22:57 So actually right around after the U.S. Civil War and he was a savant of a particular kind,
    0:23:00 which is, and you can see this in the book, he had apparently read and sort of absorbed
    0:23:06 and memorized every possible scrap of Greek and Roman literature, and so he’s like a walking
    0:23:09 index on basically Greek and Roman, everything we know about Greek and Roman culture.
    0:23:11 And that’s significant.
    0:23:13 The reason this matters is because basically none of that has changed, right?
    0:23:17 And so he had access to the exact same materials that we have, we have access to.
    0:23:19 And so there, you know, we’ve learned nothing.
    0:23:21 And then specifically what he did is he talked about the Greeks and the Romans, but specifically
    0:23:23 what he did is he went back further.
    0:23:26 He reconstructed the people who came before the Greeks and the Romans and what their life
    0:23:27 and society was like.
    0:23:30 And these were the people who were now known as the, as the Indo-Europeans.
    0:23:33 And these were, or you may have heard of these, these are the people who came down from the
    0:23:34 steppes.
    0:23:37 And so they came out of what’s now like Eastern Europe, like around sort of the outskirts of
    0:23:38 what’s now Russia.
    0:23:40 And then they sort of swept through Europe.
    0:23:44 They ultimately took over all of Europe, by the way, and, you know, many of the ethnicities
    0:23:48 in the Americas, in the hundreds of years to follow, you know, are Indo-European.
    0:23:51 So like, you know, they were basically this warrior class that, like, came
    0:23:55 down and swept through and, you know, essentially, you know, populated
    0:23:56 much of the world.
    0:23:58 And there’s a whole interesting saga there.
    0:24:01 And then from there came basically what
    0:24:04 we know as the Greeks and the Romans, who were kind of evolutions off of that.
    0:24:08 And so what he reconstructs is sort of what life was like, at least
    0:24:11 in the West, for people in their kind of original social state.
    0:24:15 And the significance of that is, is the original social state is this is living in the state
    0:24:20 of the absolute imperative for survival with absolutely no technology, right?
    0:24:22 Like no modern systems, no nothing, right?
    0:24:23 You’ve got the clothes on your back.
    0:24:27 You’ve got your, you know, you’ve got whatever you can build with your bare hands, right?
    0:24:30 This, you know, predates basically all concepts of technology as we understand
    0:24:31 it today.
    0:24:35 And so these are people under like maximum levels of physical survival pressure.
    0:24:37 And so what, what social patterns did they evolve to be able to do that?
    0:24:43 And then the social pattern basically was as follows, is a three part social structure,
    0:24:50 family, tribe and city and zero concept of individual rights and essentially no concept
    0:24:51 of individualism.
    0:24:54 And so you were not an individual, you were a member of your family.
    0:24:58 And then a set of families would aggregate into a tribe and then a set of tribes would
    0:25:01 aggregate into a, into a city.
    0:25:05 And then the morality was completely, it was actually what Nietzsche
    0:25:08 talks about, the morality was entirely master morality, not slave morality.
    0:25:12 And so in their morality, anything that was strong was good and anything that was weak
    0:25:13 was bad.
    0:25:14 And it’s very clear why that is, right?
    0:25:18 It’s because strong equals good equals survive, weak equals bad equals die.
    0:25:22 And that led to what became known later as the master slave dialectic, which is, is it
    0:25:25 more important for you to live on your feet as a master, even at the risk of dying?
    0:25:28 Or are you willing to, you know, live as a slave on your knees in order to not die?
    0:25:32 And this is sort of the, the derivation of that moral framework.
    0:25:35 Christianity later inverted that moral framework, but it, you know, the original framework lasted
    0:25:38 for, you know, many, many thousands of years.
    0:25:40 No concept of individualism, the head of the family had total life and death control over
    0:25:44 the, over the family, the head of the tribe, same thing, head of the city, same thing.
    0:25:48 And then you were morally obligated to kill members of the, of the other cities on contact.
    0:25:49 Right?
    0:25:52 You were morally required to, like if you didn’t do it, you were a bad person.
    0:25:59 Um, and then the form of the society was basically maximum fascism combined with maximum communism.
    0:26:00 Right?
    0:26:04 And so it was maximum fascism in the form of this, like absolute top-down control where
    0:26:07 the head of the family tribe or city could kill other members of the community at any
    0:26:10 time with no repercussions at all.
    0:26:14 So maximum hierarchy, but combined with maximum communism, which is no market economy.
    0:26:16 And so everything gets shared, right?
    0:26:19 And sort of the point of being in one of these collectives is that it’s a collective and,
    0:26:21 and, and, you know, and people are sharing.
    0:26:24 And of course that limited how big they could get cause, you know, the problem with communism
    0:26:25 is it doesn’t scale.
    0:26:26 Right?
    0:26:27 It works at the level of a family.
    0:26:31 It’s much harder to make it work at the level of a country, impossible, maximum fascism,
    0:26:32 maximum communism.
    0:26:37 And then, and then it was all intricately tied into their religion and their, their religion
    0:26:39 was in two parts.
    0:26:43 It was a veneration of ancestors and it was veneration of nature.
    0:26:47 And the veneration of ancestors is extremely important because it was basically like basically
    0:26:50 the ancestors were the people who got you to where you were, the ancestors were the people
    0:26:52 who had everything to teach you.
    0:26:53 Right?
    0:26:55 And then it was veneration of nature cause of course nature is the thing that’s trying
    0:26:56 to kill you.
    0:27:00 Um, and then you had your ancestor, every family tribe or city had their ancestor gods
    0:27:02 and then they had their, um, they had their nature gods.
    0:27:03 Okay.
    0:27:04 So fast forward to today.
    0:27:07 Like we live in a world that is like radically different, but the book takes you through
    0:27:11 kind of what happened from that through the Greeks and Romans through to Christianity.
    0:27:14 And so the, but it, but it’s very helpful to kind of think in these terms because the
    0:27:19 conventional view of the progress through time is that we are, you know, the cliche is the
    0:27:22 arc of the, you know, moral universe, you know, bends towards justice, right?
    0:27:25 Or so-called Whig history, which is, you know, that the arc of progress is positive, right?
    0:27:29 And so we, you know, what you hear all the time, what you’re taught in school and everything
    0:27:32 is, you know, every year that goes by, we get better and better and more and more moral
    0:27:35 and more and more people are in a better version of ourselves.
    0:27:39 Our Indo European ancestors would say, Oh no, like you people have like fallen to shit.
    0:27:43 Like you people took all of the principles of basically your civilization and you have
    0:27:47 diluted them down to the point where they barely even matter, you know, and you’re having,
    0:27:50 you know, children out of wedlock and you’re, you know, you regularly encounter people of
    0:27:54 other cities and you don’t try to kill them and like, how crazy is that?
    0:27:58 And they would basically consider us to be living like an incredibly diluted version of
    0:28:01 this sort of highly religious, highly cult-like, right?
    0:28:04 Highly organized, highly fascist, fascist communist society.
    0:28:10 I can’t resist noting that as a consequence of basically going through all the transitions
    0:28:14 we’ve been through, going all the way through Christianity, coming out the other end of Christianity,
    0:28:18 Nietzsche declares God is dead, we’re in a secular society, you know, that still has,
    0:28:21 you know, tinges of Christianity, but, you know, largely prides itself on no longer being
    0:28:27 religious in that way, you know, we being the sort of most fully evolved, modern, secular,
    0:28:32 you know, expert scientists and so forth have basically re-evolved or fallen back on the
    0:28:36 exact same religious structure that the Indo Europeans had, specifically ancestor worship,
    0:28:42 which is identity politics, and nature worship, which is environmentalism.
    0:28:45 And so we have actually like worked our way all the way back to their cult religions without
    0:28:46 realizing it.
    0:28:49 And it just goes to show that, like, you know, in some ways we have fallen far from the, far
    0:28:53 from the family tree, but in some cases we’re exactly the same.
    0:29:00 You kind of described this progressive idea of wokeism and so on as worshipping ancestors.
    0:29:02 Identity politics is worshipping ancestors, right?
    0:29:07 It’s tagging newborn infants with either, you know, benefits or responsibilities or, you
    0:29:10 know, levels of condemnation based on who their ancestors were.
    0:29:13 The Indo Europeans would have recognized it on site.
    0:29:15 We somehow think it’s like super socially progressive.
    0:29:16 Yeah.
    0:29:17 And it is not.
    0:29:19 I mean, I would say obviously not.
    0:29:23 Let’s, you know, get new answers, which is where I think you’re headed, which is, look,
    0:29:27 is the idea that you can like completely reinvent society every generation and have no regard
    0:29:28 whatsoever for what came before you?
    0:29:30 That seems like a really bad idea, right?
    0:29:33 That’s like the Cambodians with Year Zero under Pol Pot and, you know, death, you know,
    0:29:34 follows.
    0:29:40 It’s obviously the Soviets tried that, you know, the, you know, the utopian fantasists
    0:29:43 who think that they can just rip up everything that came before and create something new
    0:29:44 in the human condition.
    0:29:47 And human society have a very bad history of causing, you know, enormous destruction.
    0:29:51 So on the one hand, it’s like, okay, there is like a deeply important role for tradition.
    0:29:56 And the way I think about that is it’s the process of evolutionary learning, right?
    0:30:00 Which is what tradition ought to be is the distilled wisdom of all, and, you know, this
    0:30:01 is not even what Europeans thought about it.
    0:30:04 It should be the distilled wisdom of everybody who came before you, right?
    0:30:07 All those important and powerful lessons learned.
    0:30:09 And that’s why I think it’s fascinating to go back and study how these people lived is
    0:30:12 because that’s part of the history and, you know, part of the learning that got us
    0:30:14 to where we are today.
    0:30:17 Having said that, there are many cultures around the world that are, you know, mired
    0:30:20 in tradition to the point of not being able to progress.
    0:30:23 And in fact, you might even say globally, that’s the default human condition, which
    0:30:26 is, you know, a lot of people are in societies in which, you know, there’s like absolute
    0:30:30 seniority by age, you know, kids are completely, you know, like in the U.S., like for some
    0:30:32 reason, we decided kids are in charge of everything, right?
    0:30:35 And like, you know, they’re the trendsetters and they’re allowed to like set all the agendas
    0:30:39 and like set all the politics and set all the culture and maybe that’s a little bit crazy.
    0:30:42 But like in a lot of other cultures, kids have no voice at all, no role at all, because
    0:30:46 it’s the old people who are in charge of everything, you know, they’re gerontocracies.
    0:30:50 And it’s all a bunch of 80-year-olds running everything, which by the way, we have a little
    0:30:52 bit of that too, right?
    0:30:57 And so I would say is like, there’s a down, there’s a real downside, you know, full traditionalism
    0:31:02 as communitarianism, you know, it’s ethnic particularism, you know, it’s ethnic chauvinism,
    0:31:07 it’s, you know, this incredible level of resistance to change, you know, that’s, I mean, it just
    0:31:08 doesn’t get you anywhere.
    0:31:12 It may be good and fine at the level of an individual tribe, but as a society living
    0:31:15 in the modern world, you can’t evolve, you can’t advance, you can’t participate in
    0:31:18 all the good things that, you know, that have happened.
    0:31:21 And so, you know, I think probably this is one of those things where extremeness on either
    0:31:23 side is probably a bad idea.
    0:31:29 And I, but, you know, but this needs to be approached in a sophisticated and nuanced way.
    0:31:35 So the beautiful picture you painted of the roaring 20s, how can the Trump administration
    0:31:37 play a part in making that future happen?
    0:31:38 Yeah.
    0:31:42 So look, a big part of this is getting the government boot off the neck of the American
    0:31:47 economy, the American technology industry, the American people, you know, and then again,
    0:31:50 this is a replay of what happened in the 60s and 70s, which is, you know, for what started
    0:31:54 out looking like, you know, I’m sure good and virtuous purposes, you know, we, we ended
    0:31:57 up both that and now with this, you know, what I, what I describe as sort of a form of soft
    0:32:01 authoritarianism, you know, the good news is it’s not like a military dictatorship.
    0:32:05 It’s not like, you know, you get thrown into the Lubyanka, you know, for the most part, they’re
    0:32:07 not coming at four in the morning, you’re not getting dragged off to a cell.
    0:32:10 So it’s not hard authoritarianism, but it is soft authoritarianism.
    0:32:15 And so it’s this, you know, incredible, suppressive blanket of regulation rules, you know, this
    0:32:17 concept of a vetocracy, right?
    0:32:20 What’s required to get anything done, you know, you need to get 40 people to sign off
    0:32:24 on anything, any one of them can veto it, you know, that’s a lot of how our political
    0:32:26 system now works.
    0:32:30 And then, you know, just this general idea of, you know, progress is bad and technology
    0:32:34 is bad and capitalism is bad and building businesses is bad and success is bad.
    0:32:39 You know, tall poppy syndrome, you know, basically anybody who sticks their head up,
    0:32:41 you know, deserves to get it, you know, chopped off.
    0:32:44 Anybody who’s wrong about anything deserves to get condemned forever.
    0:32:49 You know, just this very kind of, you know, grinding, you know, repression and then coupled
    0:32:55 with specific government actions such as censorship regimes, right and debanking, right?
    0:33:00 And you know, draconian, you know, deliberately kneecapping, you know, critical American industries.
    0:33:03 And then, you know, congratulating yourself in the back for doing it or, you know, having
    0:33:06 these horrible social policies like let’s let all the criminals out of jail and see what
    0:33:07 happens.
    0:33:08 Right.
    0:33:11 And so like, we’ve just been through this period, you know, I call it a demoralization
    0:33:14 campaign, like we’ve just been through this period where, you know, whether it started
    0:33:17 that way or not, it ended up basically being this comprehensive message that says you’re
    0:33:22 terrible and if you try to do anything, you’re terrible and fuck you.
    0:33:25 And the Biden administration reached kind of the full pinnacle of that in our time.
    0:33:29 They got really bad on many fronts at the same time.
    0:33:34 And so just like relieving that and getting kind of back to a reasonably, you know, kind
    0:33:40 of optimistic, constructive, you know, pro-growth frame of mind, there’s just, there’s so much
    0:33:43 pent-up energy and potential in the American system that that alone is gonna, I think, cause,
    0:33:46 you know, growth and spirit to take off.
    0:33:49 And then there’s a lot of things proactively, but yeah, and then there’s a lot of things
    0:33:50 proactively that could be done.
    0:33:52 So how do you relieve that?
    0:33:59 To what degree has the thing you described ideologically permeated government and permeated
    0:34:00 big companies?
    0:34:03 Disclaimer at first, which is I don’t want to predict anything on any of this stuff because
    0:34:08 I’ve learned the hard way that I can’t predict politics or Washington at all.
    0:34:11 But I would just say that the plans and intentions are clear and the staffing supports it.
    0:34:15 And all the conversations are consistent with the new administration and that they plan
    0:34:19 to take, you know, very rapid action on a lot of these fronts very quickly.
    0:34:21 They’re gonna do as much as they can through executive orders and then they’re gonna do
    0:34:24 legislation and regulatory changes for the rest.
    0:34:26 And so they’re gonna move, I think, quickly on a whole bunch of stuff.
    0:34:29 You can already feel, I think, a shift in the national spirit, or at least, let’s put
    0:34:30 it this way.
    0:34:33 I feel it for sure in Silicon Valley, you know, I mean, we, you know, we just
    0:34:36 saw a great example of this with what, you know, with what Mark Zuckerberg is doing.
    0:34:39 You know, obviously I’m involved with his company, but, you know, we just saw it kind
    0:34:44 of in public, the scope and speed of the changes, you know, are reflective of sort of this, of
    0:34:45 a lot of these shifts.
    0:34:49 But I would say that that same conversation, those same kinds of things are happening throughout
    0:34:50 the industry, right.
    0:34:54 And so the tech industry itself, whether people were pro-Trump or anti-Trump, like there’s
    0:34:57 just like a giant vibe shift, mood shift, that’s like kicked in already.
    0:35:02 And then I was with a group of Hollywood people about two weeks ago, and they were still,
    0:35:04 you know, people who at least, at least vocally were still very anti-Trump.
    0:35:08 But I said, you know, has anything changed since, since November 6th?
    0:35:10 And they immediately said, oh, it’s completely different.
    0:35:15 It feels like the ISIS thawed, you know, woke us over, you know, they said that all kinds
    0:35:18 of projects are going to be able to get made now that couldn’t before that, you know, probably
    0:35:20 was going to start making comedies again.
    0:35:24 You know, like, they were just like, it’s like, it’s like, it’s just like an incredible
    0:35:26 immediate environmental change.
    0:35:30 And I’m, as I talk to people kind of throughout, you know, certainly throughout the economy,
    0:35:33 people who run businesses, I hear that all the time, which is just this, this last 10
    0:35:34 years of misery is just over.
    0:35:38 I mean, the one that I’m watching that’s really funny, I mean, Facebook’s giving a lot, that
    0:35:39 is getting a lot of attention.
    0:35:42 But the other funny one is BlackRock, which I’m not, you know, and I don’t know him,
    0:35:44 but I’ve watched for a long time.
    0:35:48 And so, you know, Larry Fink, the CEO of BlackRock, was like first in as a major, you
    0:35:56 know, investment CEO on like every dumb social trend and rule set, like every, all right,
    0:36:03 I’m going for it, every retarded, every retarded thing you can imagine, every ESG and every
    0:36:08 like, every possible way of saddling companies with every aspect of just these crazed ideological
    0:36:09 positions.
    0:36:12 And, you know, he was coming in, he literally was like, had aggregated together trillions
    0:36:17 of dollars of shareholdings that he did not own, that were, you know, that were
    0:36:21 his customers’, right, and he, you know, seized the voting control of their shares
    0:36:24 and was using it to force all these companies to do all of this, like crazy ideological
    0:36:25 stuff.
    0:36:27 And he was like the typhoid Mary of all this stuff in corporate America.
    0:36:31 And if he in the last year has been like backpedaling from that stuff, like as fast as he possibly
    0:36:32 can.
    0:36:35 And I saw just an example last week, he pulled out of the, whatever the corporate net zero
    0:36:39 alliance, you know, he pulled out of the crazy energy stuff.
    0:36:42 And so like, you know, he’s backing away as fast as he can.
    0:36:43 He’s doing it.
    0:36:46 Remember the Richard Pryor backwards walk, Richard Pryor had this way where he could,
    0:36:50 he could back out of a room while looking like he was walking forward.
    0:36:54 And so, you know, even they’re doing that.
    0:36:58 And just the whole thing, I mean, if you saw the court recently ruled that NASDAQ had these
    0:37:03 crazy board of directors composition rules, one of the funniest moments of my life is
    0:37:07 when my friend Peter Thiel and I were on the Meta board and these NASDAQ rules
    0:37:10 came down mandating diversity on corporate boards.
    0:37:13 And so we sat around the table and had to figure out, you know, which of us counted as diverse
    0:37:19 and the very professional attorneys at Meta explained with a 100% completely straight
    0:37:24 face that Peter Thiel counts as diverse by virtue of being LGBT.
    0:37:27 And this is a guy who literally wrote a book called the diversity myth.
    0:37:33 And he literally looked like he swallowed a live goldfish and this was imposed.
    0:37:36 I mean, this was like so incredibly offensive to him that like, it just like, it was just
    0:37:37 absolutely appalling.
    0:37:40 And I felt terrible for him, but the look in his face was very funny.
    0:37:44 It was imposed by NASDAQ, you know, your stock exchange is imposing this stuff on you.
    0:37:48 And then the court, whatever the court of appeals just nuked that, you know, it’s like
    0:37:51 these things basically are being like ripped down one by one.
    0:37:55 And what’s on the other side of it is basically, you know, finally being able to get back to,
    0:37:58 you know, everything that, you know, everybody always wanted to do, which is like run their
    0:38:03 companies, have great products, have happy customers, you know, like succeed, like succeed,
    0:38:07 achieve, outperform and, you know, work with the best and the brightest and not be made
    0:38:08 to feel bad about it.
    0:38:10 And I think that’s happening in many areas of American society.
    0:38:15 It’s great to hear that Peter Thiel is fundamentally a diversity hire.
    0:38:18 Well, so it was very, you know, there was a moment.
    0:38:22 So Peter, you know, Peter, of course, you know, is, you know, is publicly gay has been
    0:38:26 for a long time, you know, but, you know, there are other men on the board, right?
    0:38:28 And you know, we’re sitting there and we’re all looking at it and we’re like, all right,
    0:38:32 like, okay, LGBT and we just, we keep coming back to the B, right?
    0:38:39 And it’s like, you know, it’s like, all right, you know, I’m willing to do a lot for this
    0:38:44 company, but it’s all about sacrifice for diversity.
    0:38:45 Well, yeah.
    0:38:47 And then it’s like, okay, like, is there a test?
    0:38:48 Right.
    0:38:49 You know?
    0:38:50 Oh, yeah.
    0:38:51 Exactly.
    0:38:52 How do you prove it?
    0:38:56 The questions that got asked, you know, what are you willing to do?
    0:38:57 Yeah.
    0:39:03 I think I’m very good at asking lawyers completely absurd questions with a totally straight face.
    0:39:05 And do they answer with a straight face?
    0:39:06 Sometimes.
    0:39:07 Okay.
    0:39:09 I think in fairness, they have trouble telling when I’m joking.
    0:39:15 So you mentioned the Hollywood folks, maybe people in Silicon Valley and vibe shift.
    0:39:19 Maybe you can speak to preference falsification.
    0:39:21 What do they actually believe?
    0:39:23 How many of them actually hate Trump?
    0:39:31 But like what percent of them are feeling this vibe shift and are interested in creating
    0:39:34 the roaring twenties in the way they’ve described?
    0:39:36 So first we should maybe talk population.
    0:39:40 So there’s like all of Silicon Valley and the way to just measure that is just look
    0:39:41 at voting records.
    0:39:42 Right.
    0:39:44 And what that shows consistently is Silicon Valley is just a, you know, at least historically,
    0:39:49 my entire time there has been overwhelmingly majority just straight up Democrat.
    0:39:51 The other way to look at that is political donation records.
    0:39:57 And again, you know, the political donations in the Valley range from 90 to 99% to one side.
    0:39:59 And so, you know, we’ll, I just bring it up because like we’ll see what happens with
    0:40:03 the voting and with donations going forward.
    0:40:06 We maybe talk about the fire later, but I can tell you there is a very big question of
    0:40:08 what’s happening in Los Angeles right now.
    0:40:11 I don’t want to get into the fire, but like it’s catastrophic and, you know, there was
    0:40:14 already a rightward shift in the big cities in California.
    0:40:18 And I think a lot of people in LA are really thinking about things right now as they’re
    0:40:21 trying to, you know, literally save their houses and save their families.
    0:40:24 But you know, even in San Francisco, there was a big right, it was a big shift to the
    0:40:26 right in the voting in 24.
    0:40:30 So we’ll see where that goes, but, you know, you observe that by just looking at the numbers
    0:40:32 over time.
    0:40:35 The part that I’m more focused on is, you know, and I don’t know how to exactly describe
    0:40:39 this, but it’s like the top thousand or the top 10,000 people, right?
    0:40:43 And you know, I don’t have a list, but like it’s the, you know, it’s all the top founders,
    0:40:47 top CEOs, top executives, top engineers, top VCs, you know, and then kind of into the
    0:40:51 ranks, you know, the people who kind of built and run the companies and they’re, you know,
    0:40:58 I don’t have numbers, but I have a much more tactile feel, you know, for what’s happening.
    0:41:04 So I, the big thing I have now come to believe is that the idea that people have beliefs
    0:41:07 is mostly wrong.
    0:41:11 I think that most people just go along.
    0:41:13 And I think even most high status people just go along.
    0:41:17 And I think maybe the most high status people are the most prone to just go along because
    0:41:19 they’re the most focused on status.
    0:41:24 And the way I would describe that is, you know, one of the great forbidden philosophers
    0:41:29 of our time is the Unabomber, Ted Kaczynski, and amidst his madness, he had this extremely
    0:41:30 interesting articulation.
    0:41:35 You know, he was a, he was an insane lunatic murderer, but he was also a, you know, Harvard
    0:41:44 super genius, not that those are in conflict, but he was a very bright guy and he did this
    0:41:49 whole thing where he talked about, basically he was very right-wing and talked about leftism
    0:41:50 a lot.
    0:41:53 And he had this great concept that’s just stuck in my mind ever since I read it, which
    0:41:57 is he had this concept you just called oversocialization.
    0:42:00 And so, you know, most people are socialized, like most people are socialized, like most
    0:42:04 people are, you know, we live in a society, most people learn how to be part of a society,
    0:42:06 they give some deference to the society.
    0:42:10 There’s something about modern Western elites where they’re oversocialized and they’re just
    0:42:16 like overly oriented towards what other people like themselves, you know, think and believe
    0:42:20 and you can get a real sense of that if you have a little bit of an outside perspective,
    0:42:25 which I just do, I think as a consequence of where I grew up, like even before I had
    0:42:28 the views that I have today, there was always just this weird thing where it’s like, why
    0:42:31 does every dinner party have the exact same conversation?
    0:42:34 Why does everybody agree on every single issue?
    0:42:39 Why is that agreement precisely what was in the New York Times today?
    0:42:44 Why are these positions not the same as they were five years ago, right?
    0:42:47 But why does everybody like snap into agreement every step of the way?
    0:42:51 And that was true when I came to Silicon Valley and it’s just as true today, 30 years later.
    0:42:55 And so I think most people are just literally, I think they’re taking their cues from
    0:42:59 some combination of the press, the universities, the big foundations, so it’s like basically
    0:43:04 it’s like the New York Times, Harvard, the Ford Foundation, and you know, I don’t know,
    0:43:08 you know, a few CEOs and a few public figures and you know, maybe, you know, maybe the president
    0:43:13 if your party’s in power, and like whatever that is, everybody, just everybody who’s sort
    0:43:18 of good and proper and elite and good standing and in charge of things and a sort of correct
    0:43:21 member of, you know, let’s call it coastal American society, everybody just believes
    0:43:22 those things.
    0:43:26 And then, you know, the two interesting things about that is number one, there’s no divergence
    0:43:28 among the organs of power, right?
    0:43:31 So the Harvard and Yale believe the exact same thing, the New York Times, the Washington
    0:43:34 Post believe the exact same thing, the Ford Foundation, the Rockefeller Foundation believe
    0:43:38 the exact same thing, Google and you know, whatever, you know, Microsoft believe the
    0:43:40 exact same thing.
    0:43:43 But those things change over time.
    0:43:46 But there’s never conflict in the moment, right?
    0:43:50 And so, you know, the New York Times and the Washington Post agreed on exactly everything
    0:43:58 in 1970, 1980, 1990, 2000, 2010 and 2020, despite the fact that the specifics changed radically,
    0:43:59 the lockstep was what mattered.
    0:44:03 And so I think basically we in the Valley, we’re on the tail end of that in the same
    0:44:05 way, Hollywood’s the tail end of that in the same way, New York’s the tail end of that,
    0:44:08 the same way the media is on the tail end of that.
    0:44:10 It’s like some sort of collective hive mind thing.
    0:44:13 And I just go through that to say like, I don’t think most people in my orbit, or you
    0:44:18 know, say the top 10,000 people in the Valley, or the top 10,000 people in LA, I don’t think
    0:44:21 they’re sitting there thinking, basically, I have rock-solid beliefs, I mean, they probably think
    0:44:25 they have rock-solid beliefs, but they don’t actually have like some inner core of rock-solid
    0:44:26 beliefs.
    0:44:28 And then they kind of watch reality change around them and try to figure out how to keep
    0:44:30 their beliefs, like correct, I don’t think that’s what happens.
    0:44:34 I think what happens is they conform to the belief system around them.
    0:44:37 And I think most of the time they’re not even aware that they’re basically part of
    0:44:38 a herd.
    0:44:45 Is it possible that the surface chatter of dinner parties, underneath that there is
    0:44:50 a turmoil of ideas and thoughts and beliefs that’s going on, but you’re just talking to
    0:44:55 people really close to you or in your own mind, and the socialization happens at the
    0:45:01 dinner parties, like when you go outside the inner circle of one, two, three, four people
    0:45:03 who you really trust, then you start to conform.
    0:45:09 But inside there, inside the mind, there is an actual belief or a struggle, a tension
    0:45:17 with the New York Times view. For the listener, there’s a slow smile that overtook Marc Andreessen’s
    0:45:18 face.
    0:45:21 So look, I’ll just tell you what I think, which is at the dinner parties and at the
    0:45:24 conferences, no, there’s none of that.
    0:45:27 What there is, is that all of the heretical conversations, anything that challenges
    0:45:33 the status quo, any heretical idea, and any new idea is a heretical idea,
    0:45:36 any deviation, it’s either discussed one-on-one, face-to-face.
    0:45:40 It’s like a whisper network, or it’s like a real-life social network.
    0:45:43 There’s a secret handshake, which is like, okay, you meet somebody and you know each
    0:45:47 other a little bit, but not well, and you’re both trying to figure out if you can talk
    0:45:50 to the other person openly or whether you have to be fully conformist.
    0:45:51 It’s a joke.
    0:45:52 Oh, yeah.
    0:45:53 Humor.
    0:45:54 I’m sorry.
    0:45:55 Somebody cracks a joke.
    0:45:56 Somebody cracks a joke.
    0:45:59 If the other person laughs, the conversation is on.
    0:46:05 If the other person doesn’t laugh, back slowly away from the scene, I didn’t mean anything
    0:46:06 by it.
    0:46:08 And by the way, it doesn’t have to be like a super offensive joke.
    0:46:12 It just has to be a joke that’s just up against the edge of one of the, to use the Sam Bankman-Fried
    0:46:18 term, one of the shibboleths, it has to be up against one of
    0:46:21 the things that you’re absolutely required to believe to be at the dinner parties.
    0:46:24 And then at that point, what happens is you have a peer-to-peer network.
    0:46:30 You have a one-to-one connection with somebody, and then you have your little conspiracy of
    0:46:32 a thought criminality.
    0:46:35 And then you have your network, you’ve probably been through this, you have your network of
    0:46:37 thought criminals, and then they have their network of thought criminals, and then you
    0:46:41 have this like delicate mating dances to whether you should bring the thought criminals together.
    0:46:42 Right?
    0:46:46 And the dance, the fundamental mechanism of the dance is humor.
    0:46:47 Yeah, it’s humor.
    0:46:48 Right.
    0:46:49 Well, of course.
    0:46:50 Memes.
    0:46:51 Yeah.
    0:46:52 Well, for two reasons.
    0:46:53 Number one, humor is a way to have deniability.
    0:46:55 It’s a way to discuss these things while having deniability.
    0:46:56 Oh, I’m sorry.
    0:46:57 It was just a joke, right?
    0:46:58 So that’s part of it.
    0:47:00 Which is one of the reasons why comedians can get away with saying things the rest of
    0:47:01 us can’t.
    0:47:04 Because they can always fall back on, “Oh, yeah, I was just going for the laugh.”
    0:47:08 But the other key thing about humor, right, is that laughter is involuntary, right?
    0:47:09 Like you either laugh or you don’t.
    0:47:12 And it’s not like a conscious decision whether you’re going to laugh, and everybody can tell
    0:47:14 when somebody’s fake laughing, right?
    0:47:16 And this every professional comedian knows this, right?
    0:47:18 The laughter is the clue that you’re onto something truthful.
    0:47:21 Like people don’t laugh at like made up bullshit stories.
    0:47:24 They laugh because like you’re revealing something that they either have not been allowed to
    0:47:27 think about or have not been allowed to talk about, right?
    0:47:28 Or is off limits.
    0:47:31 And all of a sudden, it’s like the ice breaks and it’s like, “Oh, yeah, that’s the thing.
    0:47:32 And it’s funny.”
    0:47:33 And like I laugh.
    0:47:36 And then, and then of course, this is why, of course, live comedy is so powerful is because
    0:47:37 you’re all doing that at the same time.
    0:47:38 So you start to have, right?
    0:47:39 The safety of, you know, the safety of numbers.
    0:47:43 And so the comedians have this, like, it’s no surprise to me, for example, that
    0:47:46 Joe has been as successful as he has, because they have this hack that the, you
    0:47:50 know, the rest of us who are not professional comedians don’t have, but you have your in-person
    0:47:51 version of it.
    0:47:52 Yeah.
    0:47:53 And then you’ve got the question of whether the, whether you can sort of join the networks
    0:47:54 together.
    0:47:57 And then you’ve probably been to this as, you know, then at some point there’s like a different,
    0:48:00 there’s like the alt dinner party, the thought criminal dinner party and you get six or eight
    0:48:02 people together and you join the networks.
    0:48:05 And those are like the happiest moments, at least in the last decade, those are like the
    0:48:08 happiest moments of everybody’s lives because they’re just like, everybody’s just ecstatic
    0:48:12 because they’re like, “I don’t have to worry about getting yelled at and shamed like for
    0:48:16 every third sentence that comes out of my mouth and we can actually talk about real things.”
    0:48:17 So that’s the live version of it.
    0:48:22 And then of course the other side of it is the, you know, the group chat phenomenon, right?
    0:48:26 And then basically the same thing played out, you know, until Elon bought X and until
    0:48:30 Substack took off, you know, which were really the two big breakthroughs in free speech online.
    0:48:33 The same dynamic played out online, which is you had absolute conformity on the social
    0:48:37 networks, like literally enforced by the social networks themselves through censorship and
    0:48:41 then also through cancellation campaigns and mobbing and shaming, right?
    0:48:45 But then you had, but then group chats grew up to be the equivalent of samizdat, right?
    0:48:50 Anybody who grew up in the Soviet Union under communism, you know, they had the hard version
    0:48:51 of this, right?
    0:48:53 It’s like, how do you know who you could talk to and then how do you distribute information
    0:48:58 and, you know, like, you know, again, that was the hard authoritarian version of this.
    0:49:01 And then we’ve been living through this weird mutant, you know, softer authoritarian version
    0:49:03 but with, you know, with some of the same patterns.
    0:49:10 And WhatsApp allows you to scale and make it more efficient to build these groups
    0:49:13 of heretical ideas bonded by humor.
    0:49:14 Yeah, exactly.
    0:49:15 Well, and this is the thing.
    0:49:16 This is kind of the running joke about group chat, right?
    0:49:20 The running kind of thing about group chats, it’s not even a joke, it’s like, every group
    0:49:23 chat, if you’ve noticed this, like every, this principle of group chats, every group
    0:49:26 chat ends up being about memes and humor.
    0:49:29 And the goal of the game, the game of the group chat is to get as close to the line
    0:49:34 of being actually objectionable as you can get without actually tripping it, right?
    0:49:38 And I like literally every group chat that I have been in for the last decade, even if
    0:49:42 it starts some other direction, what ends up happening is it becomes the absolute comedy
    0:49:47 fest where, but it’s walking, they walk right at the line and they’re constantly testing.
    0:49:49 And every once in a while, somebody will trip the line and people will freak out and it’s
    0:49:50 like, oh, too soon.
    0:49:53 Okay, you know, we got to wait until next year to talk about that, you know, they walk
    0:49:54 it back.
    0:49:55 And so it’s that same thing.
    0:49:57 And yeah, and then group chats is a technological phenomenon.
    0:50:00 It was amazing to see because basically it was number one, it was, you know, obviously
    0:50:05 the rise of smartphones, then it was the rise of the new messaging services, then it was
    0:50:09 the rise specifically of, I would say, the combination of WhatsApp and Signal.
    0:50:13 And the reason for that is those were the two big systems that did the full encryption.
    0:50:15 So you actually felt safe.
    0:50:20 And then the real breakthrough, I think, was disappearing messages, which hit signal probably
    0:50:25 four or five years ago and hit WhatsApp three or four years ago.
    0:50:31 And then the combination of encryption and disappearing messages, I think really unleashed
    0:50:32 it.
    0:50:35 Well, then there’s the fight over the length of the disappearing messages, right?
    0:50:38 And so it’s like, you know, I often get behind on my things.
    0:50:43 So I set to seven-day, you know, disappearing messages and my friends are like, no,
    0:50:44 that’s way too much risk.
    0:50:45 Yeah.
    0:50:46 It’s got to be a day.
    0:50:48 And then every once in a while, somebody will set to five minutes before they send something
    0:50:49 like particularly inflammatory.
    0:50:50 Yeah.
    0:50:51 100%.
    0:50:54 Well, what, I mean, one of the things that bothers me about WhatsApp, the choice is
    0:50:58 between 24 hours and, you know, seven days, one day or seven days.
    0:51:04 And I have to have an existential crisis about deciding whether I can last for seven days
    0:51:06 with what I’m about to say.
    0:51:07 Exactly.
    0:51:09 Now, of course, what’s happening right now is the big thaw, right?
    0:51:10 And so the vibe shift.
    0:51:14 So what’s happening on the other side of the election is, you know, Elon on Twitter two
    0:51:17 years ago and now Mark with Facebook and Instagram.
    0:51:20 And by the way, with the continued growth of Substack and with other, you know, new platforms
    0:51:24 that are emerging, you know, like I think it may be, you know, I don’t know that everything
    0:51:29 just shifts back into public, but like a tremendous amount of the, a tremendous amount of the
    0:51:33 verboten conversations, you know, can now shift back into public view.
    0:51:36 And I mean, quite frankly, this is one of those things, you know, quite frankly, even
    0:51:40 if I was opposed to what those people are saying, and I’m sure I am in some cases, you
    0:51:43 know, I would argue it’s still like net better for society that those things happen in public
    0:51:49 instead of private, you know, do you really want, like, yeah, like, don’t you want to
    0:51:50 know?
    0:51:53 And, and so, and then it’s just, look, it’s just, I think clearly much healthier to live
    0:51:56 in a society in which people are not literally scared about what they’re saying.
    0:52:01 I mean, to push back, to come back to this idea that we’re talking about, I do believe
    0:52:05 that people have beliefs and thoughts that are heretical, like a lot of people.
    0:52:09 I wonder what fraction of people have that.
    0:52:12 To me, this is the preference falsification is really interesting.
    0:52:18 What is the landscape of ideas that human civilization has in private as compared to
    0:52:25 what’s out in public, because like that, the, the, the dynamical system that is the difference
    0:52:30 between those two is fascinating, like there’s throughout history, the, the fall of communism
    0:52:36 in multiple regimes throughout Europe is really interesting because everybody was following,
    0:52:43 you know, the party line until they weren’t, but for sure, privately, there was a huge number
    0:52:49 of boiling conversations happening where, like, the bureaucracy of communism,
    0:52:53 the corruption of communism, all of that was really bothering people more and more and
    0:52:54 more and more.
    0:52:58 And all of a sudden, there’s a trigger that allows the vibe shift to happen.
    0:53:05 So to me, like the, the interesting question here is what is the landscape of private thoughts
    0:53:12 and ideas and conversations that are happening under the surface of, of, of Americans, especially
    0:53:17 my question is how much dormant energy is there for this roaring twenties where people
    0:53:18 are like, no more bullshit.
    0:53:19 Let’s get shit done.
    0:53:20 Yeah.
    0:53:21 So let’s go through that.
    0:53:22 We’ll go through the theory of preference falsification.
    0:53:23 Yeah.
    0:53:24 Just, just, just by the way, amazing.
    0:53:26 The books on this are fascinating.
    0:53:27 Yeah.
    0:53:28 Yeah.
    0:53:29 Great books.
    0:53:32 Incredibly, it’s about a 20, 30-year-old book, but it’s completely modern and current
    0:53:36 in what it talks about as well as very deeply historically informed.
    0:53:42 So it’s called Private Truths, Public Lies, and it’s written by a social science professor
    0:53:46 named Timur Kuran at, I think, Duke.
    0:53:47 And it’s the definitive work on this.
    0:53:50 And so he has this concept he calls preference falsification.
    0:53:53 And so preference falsification is two things, preference falsification.
    0:53:56 And you get it from the title of the book, private truths, public lies.
    0:54:00 So preference falsification is when you believe something and you can’t say it.
    0:54:05 Or, and this is very important, you don’t believe something and you must say it, right?
    0:54:10 And, and, and the commonality there is in both cases, you’re lying, you, you, you believe,
    0:54:13 you believe something internally and then you’re lying about it in public.
    0:54:17 And so the thing, you know, the, and there’s sort of two, the two classic forms of it.
    0:54:20 There’s the, you know, for example, there’s the, I believe communism is rotten, but I
    0:54:21 can’t say it, version of it.
    0:54:26 But then there’s also the famous parable, the real life example,
    0:54:30 the thing that Václav Havel talks about in the other good book on this topic, which
    0:54:34 is The Power of the Powerless, you know, he was an anti-communist resistance fighter
    0:54:37 who ultimately became the, you know, the president of Czechoslovakia after the fall
    0:54:38 of the wall.
    0:54:42 But he wrote this book and he, he describes the other side of this, which is workers
    0:54:44 of the world unite, right?
    0:54:48 And so he describes what he calls the parable of the greengrocer, which is, you’re a greengrocer
    0:54:51 in Prague in 1985.
    0:54:54 And for the last 70 years, it has been, or it’s 50 years, it’s been absolutely mandatory
    0:54:59 to have a sign in the window of your store that says workers of the world unite, right?
    0:55:00 And it’s 1985.
    0:55:04 It is like crystal clear that the world, the workers of the world are not going to unite.
    0:55:08 Like all the things that could happen in the world, that is not going to happen.
    0:55:10 The commies have been at that for 70 years.
    0:55:11 It is not happening.
    0:55:13 But that slogan had better be in your window every morning, because if it’s not in your
    0:55:16 window every morning, you are not a good communist.
    0:55:19 The secret police are going to come by and they’re going to, they’re going to get you.
    0:55:21 And so the first thing you do when you get to the store is you put that slogan in the
    0:55:23 window and you make sure that it stays in the window all day long.
    0:55:27 But he says the thing is every single person, the greengrocer knows the slogan is fake.
    0:55:29 He knows it’s a lie.
    0:55:32 Every single person walking past the slogan knows that it’s a lie.
    0:55:35 Every single person walking past the store knows that the greengrocer is only putting
    0:55:38 it up there because he has to lie in public.
    0:55:42 And the greengrocer has to go through the humiliation of knowing that everybody knows
    0:55:44 that he’s caving into the system and lying in public.
    0:55:48 And so it turns into a demoralization campaign.
    0:55:50 It’s not just ideological enforcement.
    0:55:54 In fact, it’s not ideological enforcement anymore because everybody knows it’s fake.
    0:55:55 The authorities know it’s fake.
    0:55:56 Everybody knows it’s fake.
    0:55:59 It’s not that they’re enforcing the actual ideology of the workers of the world
    0:56:00 uniting.
    0:56:05 It’s that they are enforcing compliance and compliance with the regime and fuck you, you
    0:56:06 will comply.
    0:56:09 And so anyway, that’s the other side of that.
    0:56:13 And of course, we have lived in the last decade through a lot of both of those.
    0:56:17 I think anybody listening to this could name a series of slogans that we’ve all been forced
    0:56:20 to chant for the last decade that everybody knows at this point are just like simply not
    0:56:21 true.
    0:56:26 I’ll let the audience speculate on their own group chats.
    0:56:29 >> Send mark your memes online as well, please.
    0:56:30 >> Yes, yes, exactly.
    0:56:32 But okay, so anyway, so it’s the two sides of that, right?
    0:56:36 So it’s private truth, it’s public lies.
    0:56:39 So then what preference falsification does is it talks about extending that from the
    0:56:42 idea of the individual experience of that to the idea of the entire society experiencing
    0:56:43 that, right?
    0:56:47 That’s just your percentages question, which is like, okay, what happens in a society in
    0:56:49 which people are forced to lie in public about what they truly believe?
    0:56:52 What happens, number one, is that individually they’re lying in public and that’s bad.
    0:56:56 But the other thing that happens is they no longer have an accurate gauge at all or any
    0:56:59 way to estimate how many people agree with them.
    0:57:02 And this is how, again, this literally is like how you get something like the communist
    0:57:08 system, which is like, okay, you end up in a situation in which 80 or 90 or 99% of society
    0:57:11 can actually all be thinking individually, I really don’t buy this anymore.
    0:57:14 And if anybody would just stand up and say it, I would be willing to go along with it,
    0:57:17 but I’m not going to be the first one to put my head on the chopping block.
    0:57:21 But you have no, because of the suppression censorship, you have no way of knowing how
    0:57:22 many other people agree with you.
    0:57:26 And if the people, if the people who agree with you are 10% of the population and you become
    0:57:29 part of a movement, you’re going to get killed.
    0:57:33 If 90% of the people agree with you, you’re going to win the revolution, right?
    0:57:37 And so the question of like what the percentage actually is, is like a really critical question.
    0:57:41 And then basically, in any sort of authoritarian system, you can’t like run a survey to get
    0:57:42 an accurate result.
    0:57:45 And so you actually can’t know until you put it to the test.
    0:57:47 And then what he describes in the book is it’s always put to the test in the same way.
    0:57:51 And this is exactly what’s happened for the last two years, like 100% of exactly what’s
    0:57:52 happened.
    0:57:58 It’s like straight out of this book, which is somebody, Elon sticks his hand up and says,
    0:58:02 the workers of the world are not going to unite, right, or the emperor is actually wearing
    0:58:03 no clothes, right?
    0:58:05 You know, that famous parable, right?
    0:58:08 So one person stands up and does it and literally that person is standing there by themselves
    0:58:12 and everybody else in the audience is like, ooh, I wonder what’s going to happen to that
    0:58:13 guy.
    0:58:14 Right.
    0:58:15 But again, nobody knows.
    0:58:16 Elon doesn’t know.
    0:58:17 The first guy doesn’t know.
    0:58:19 Other people don’t know, like, which way is this going to go?
    0:58:22 And it may be that that’s a minority position and that’s a way to get yourself killed.
    0:58:26 Or it may be that that’s the majority position and that and you are now the leader of a revolution.
    0:58:29 And then basically, of course, what happens is, okay, the first guy does that, doesn’t get
    0:58:30 killed.
    0:58:33 Well, a lot of the time that first guy does get killed, but when the
    0:58:36 guy doesn’t get killed, then a second guy pops his head up, says the same thing.
    0:58:37 All right.
    0:58:40 Now you’ve got two, two leads to four, four leads to eight, eight leads to 16.
    0:58:44 And then as we saw with the fall of the Berlin Wall, this is what happened in Russia and
    0:58:47 Eastern Europe in ’89, when it goes, it can go, right?
    0:58:49 And then it rips.
    0:58:53 And then what happens is very, very quickly, if it turns out that you had a large percentage
    0:58:56 of the population that actually believed the different thing, it turns out all of a sudden
    0:59:00 everybody has this giant epiphany that says, oh, I’m actually part of the majority.
    0:59:05 And at that point, like, you were on the freight train of revolution, right, like, it is rolling,
    0:59:06 right?
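    For readers who want the cascade mechanism described above made concrete, here is a small, purely illustrative sketch, not something from the conversation or from Kuran’s book itself: a toy threshold model in which each private dissenter speaks out only once enough others are already visibly speaking. The population size, dissent share, and threshold range are invented assumptions chosen for demonstration.

        import random

        def simulate_cascade(n=1000, dissent_share=0.6, min_threshold=0, max_threshold=50, seed=0):
            # Each private dissenter speaks out only after at least `threshold`
            # others are already speaking publicly; conformists never speak at all.
            rng = random.Random(seed)
            thresholds = [rng.randint(min_threshold, max_threshold)
                          for _ in range(int(n * dissent_share))]
            speaking = 0
            while True:
                now_speaking = sum(1 for t in thresholds if t <= speaking)
                if now_speaking == speaking:
                    return speaking  # fixed point: nobody else will move
                speaking = now_speaking

        print(simulate_cascade())                 # a few zero-threshold first movers: the cascade rips
        print(simulate_cascade(min_threshold=1))  # no first mover: nobody ever speaks

    Under these made-up parameters, a handful of first movers tips the entire hidden majority into the open, the two-leads-to-four-leads-to-eight dynamic described above, while the identical population with no first mover stays silent.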
    0:59:11 Now, the other part of this is the distinction between the role of the elites and the masses.
    0:59:14 And here, the best book is called The True Believer, which is the Eric Hoffer book.
    0:59:20 And so the nuance you have to put on this is the elites play a giant role in this, because
    0:59:24 the elites do idea formation and communication, but the elites by definition are a small minority.
    0:59:28 And so there’s also this giant role played by the masses, and the masses are not necessarily
    0:59:32 thinking these things through in the same intellectualized, formal way that the elites
    0:59:33 are.
    0:59:36 But they are for sure experiencing these things in their daily lives, and they for sure have
    0:59:38 at least very strong emotional views on them.
    0:59:42 And so when you really get the revolution, it’s when you get the elites lined up,
    0:59:46 where either the current elites change or a new set of elites, a new set of counter elites
    0:59:50 basically come along and say, no, there’s actually a different and better way to live.
    0:59:53 And then the people basically decide to follow the counter elite.
    0:59:55 So that’s the other dimension to it.
    0:59:57 And of course, that part is also happening right now.
    1:00:00 And again, case study number one of that would be Elon and his, you know, it turns
    1:00:03 out, you know, truly massive following.
    1:00:07 And he has done that over and over in different industries, not just saying crazy shit online,
    1:00:13 but saying crazy shit in the realm of space, in the realm of autonomous driving, in the
    1:00:17 realm of AI, just over and over and over again, turns out saying crazy shit is one of the
    1:00:20 ways to do a revolution and to actually make progress.
    1:00:21 Yeah.
    1:00:22 And it’s like, well, but then there’s the test.
    1:00:23 Is it crazy shit?
    1:00:24 Or is it the truth?
    1:00:25 Yeah.
    1:00:27 And, you know, and this is where, you know, many, there are many more specific things
    1:00:31 about Elon’s genius, but one of the, one of the really core ones is an absolute dedication
    1:00:32 to the truth.
    1:00:36 And so when Elon says something, it sounds like crazy shit, but in his mind, it’s true.
    1:00:37 Now is he always right?
    1:00:38 No.
    1:00:39 Sometimes the rockets crash.
    1:00:40 Like, you know, sometimes he’s wrong.
    1:00:41 He’s human.
    1:00:42 He’s like anybody else.
    1:00:43 He’s not right all the time.
    1:00:46 But at least my, my through line with him, both in what he says in public and what he
    1:00:49 says in private, which by the way, are the exact same things.
    1:00:50 He does not do this.
    1:00:52 He doesn’t lie in public about what he believes in private, or at least he doesn’t do that
    1:00:53 anymore.
    1:00:56 But it’s 100% consistent in my, in my experience.
    1:01:00 By the way, there’s two guys who are 100% consistent like that, that I know, um, Elon
    1:01:01 and Trump.
    1:01:02 Yeah.
    1:01:06 Whatever you think of them, what they say in private is 100% identical to what they
    1:01:07 say in public.
    1:01:08 Like they are completely transparent.
    1:01:10 They’re completely honest in that way, right?
    1:01:13 Which is like, and again, it’s not like they’re perfect people, but they’re honest in that
    1:01:14 way.
    1:01:17 And it makes them potentially both, as they have been very powerful leaders of these
    1:01:21 movements, because they’re both willing to stand up and say the thing that if it’s true,
    1:01:25 it turns out to be the thing in many cases that, you know, many or most or almost everyone
    1:01:28 else actually believes, but nobody was actually willing to say out loud.
    1:01:29 And so they can actually catalyze these shifts.
    1:01:33 And I, I mean, I think this framework is exactly why Trump took over the Republican party is
    1:01:36 I think Trump stood up there on stage with all these other kind of conventional Republicans
    1:01:39 and he started saying things out loud that, it turned out, the base really was
    1:01:42 either already believing or prone to believe.
    1:01:43 And he was the only one who was saying them.
    1:01:47 And so, again, elites and masses, he was the elite, the voters were the masses, and the voters
    1:01:52 decided, you know, no, no more Bushes, like we’re going this other direction.
    1:01:53 That’s the mechanism of social change.
    1:01:56 Like what we just described is like the actual mechanism of the social change.
    1:01:59 It is fascinating to me that we have been living through exactly this.
    1:02:03 We’ve been moving through exactly what Timur Kuran describes, everything that
    1:02:08 Václav Havel described, you know, black squares on Instagram, like the whole thing, right?
    1:02:09 All of it.
    1:02:14 And we’ve been living through the, you know, The True Believer elites and masses, you know,
    1:02:17 thing with, you know, with a set of like basically incredibly corrupt elites wondering
    1:02:19 why they don’t have the masses anymore and a set of new elites that are running away
    1:02:20 with things.
    1:02:24 And so like we’re, we’re living through this like incredible applied case study of these
    1:02:25 ideas.
    1:02:28 And, you know, if there’s a moral of the story, it is, you know, I think fairly obvious, which
    1:02:33 is it is a really bad idea for a society to wedge itself into a position in which most
    1:02:36 people don’t believe the fundamental precepts of what they’re told they have to do, you
    1:02:40 know, to be, to be good people like that, that is just not, not a good state to be in.
    1:02:44 So one of the ways to avoid that in the future, maybe is to keep the delta between what’s
    1:02:47 said in private and what’s said in public small.
    1:02:48 Yeah.
    1:02:50 It’s like, well, this is sort of the, the siren song of censorship is we can keep people
    1:02:54 from saying things, which means we can keep people from thinking things.
    1:02:57 And you know, by the way, that may work for a while, right?
    1:03:00 Like, you know, this, I mean, again, the hard form of the Soviet Union, you know, Soviet
    1:03:05 Union, owning a mimeograph, pre-photocopiers, there were mimeograph machines that were
    1:03:08 used to make samizdat, underground newspapers, which were the mechanism of written communication
    1:03:12 of radical ideas.
    1:03:14 Ownership of a mimeograph machine was punishable by death.
    1:03:15 Right?
    1:03:18 So that’s the hard version, right?
    1:03:21 You know, the soft version is somebody clicks a button in Washington and you are erased
    1:03:22 from the internet.
    1:03:23 Right?
    1:03:25 Like, which, you know, good news, you’re still alive.
    1:03:28 Bad news is, you know, shame about not being able to get a job, you know, too bad your
    1:03:31 family now, you know, hates you and won’t talk to you, you know, whatever, whatever the,
    1:03:34 you know, whatever the version of cancellation has been.
    1:03:36 And so, so, so like, does that work?
    1:03:40 Like, maybe it works for a while, like it worked for the Soviet Union for a while, you
    1:03:43 know, in its way, especially when it was coupled with, you know, official state power, but when
    1:03:48 it unwinds, it can only unwind with like incredible speed and ferocity because to your point, there’s
    1:03:49 all this bottled up energy.
    1:03:52 Now, your question was like, what are the percentages?
    1:03:53 Like what’s the breakdown?
    1:03:58 And so my, my rough guess, just based on what I’ve seen in my world is it’s something
    1:04:01 like 20, 60, 20.
    1:04:05 It’s like you’ve got 20% like true believers in whatever is, you know, the current thing,
    1:04:08 you know, you got 20% of people who are just like true believers of whatever,
    1:04:12 you know, whatever’s in the New York Times, Harvard professors and
    1:04:16 the Ford Foundation, like just literally, by the way, maybe it’s 10, maybe it’s five,
    1:04:18 but let’s say generously it’s 20.
    1:04:22 So it’s a, you know, 20% kind of full on revolutionaries.
    1:04:26 And then you’ve got, let’s call it 20% on the other side that are like, no, I’m not
    1:04:27 on board with this.
    1:04:28 This is, this is crazy.
    1:04:31 I’m not, I’m not signing up for this, but, you know, you know, they, their view of themselves
    1:04:32 is they’re in a small minority.
    1:04:35 And in fact, they start out in a small minority because what happens is the 60% go with the
    1:04:38 first 20%, not the second 20%.
    1:04:41 So you’ve got this large middle of people and it’s not that there’s anything like, it’s
    1:04:44 not that people in the middle are not smart or anything like that.
    1:04:47 It’s that they just have like normal lives and they’re just trying to get by and they’re
    1:04:51 just trying to go to work each day and do a good job and be a good person and raise their
    1:04:55 kids and, you know, have a little bit of time to watch the game.
    1:04:59 And they’re just not engaged in the cut and thrust of, you know, political activism or
    1:05:01 any of this stuff is just not their thing.
    1:05:05 But then, but that’s where the over socialization comes in is just like, okay, by default, the
    1:05:11 60% will go along with the 20% of the radical revolutionaries at least for a while.
    1:05:14 And then the counter elite is in this other 20%.
    1:05:19 And over time, they build up a theory and network and ability to resist.
    1:05:22 And a new set of representatives and a new set of ideas.
    1:05:24 And then at some point, there’s a contest.
    1:05:27 And then, and then, and then right, and then the question is what happens in the middle,
    1:05:30 what happens in the 60% and it is kind of my point.
    1:05:34 It’s not even really does the 60% change their beliefs as much as it’s like, okay, what, what
    1:05:39 is the thing that that 60% now decides to basically fall into step with.
    1:05:44 And I think that 60% in the valley that 60% for the last decade decided to be woke.
    1:05:49 And you know, extremely, I would say on edge on a lot of things.
    1:05:52 And I, you know, that 60% is pivoting in real time.
    1:05:53 They’re just done.
    1:05:54 They’re just had it.
    1:05:59 And I would love to see where that pivot goes because there’s internal battles happening
    1:06:00 right now.
    1:06:01 Right.
    1:06:02 So this is the other thing.
    1:06:03 Okay.
    1:06:04 So there’s two, two forms of internal, there’s two forms of things.
    1:06:07 And Timur has actually talked about this, Professor Kuran has talked about this.
    1:06:10 And so, so one is he said, he said, this is the kind of unwind where what you’re going
    1:06:11 to have is preference falsification in the other direction.
    1:06:14 You’re going to have people who claim that they supported Trump all along who actually
    1:06:15 didn’t.
    1:06:16 Right.
    1:06:17 Right.
    1:06:19 So it’s going to swing the other way.
    1:06:21 And by the way, Trump’s not the only part of this, but you know, he’s just a convenient
    1:06:23 shorthand for, you know, for, for a lot of this.
    1:06:26 But you know, whatever it is, you’ll, you’ll have people who will say, well, I never supported
    1:06:30 DEI or I never supported ESG or I never thought we should have canceled that person.
    1:06:31 Right.
    1:06:34 Whereas of course they were full-on part of the mob, like, you know, kind of at that
    1:06:35 moment.
    1:06:36 Right.
    1:06:39 So you’ll have preference falsification happening in the other direction and his prediction,
    1:06:43 I think basically is you’ll end up with the same quote problem on the, on the other side.
    1:06:44 Now, will that happen here?
    1:06:48 I don’t know, you know, how far is American society willing to go at any of these things?
    1:06:49 I don’t know.
    1:06:51 But like there is some, some question there.
    1:06:55 And then, and then the other part of it is, okay, now you have this, you know, elite that
    1:06:58 is used to being in power for the last decade.
    1:07:01 And by the way, many of those people are still in power and they’re in very, you know, important
    1:07:01 positions and the New York Times is still the New York Times and Harvard is still Harvard
    1:07:07 and like those people haven’t changed like at all, right.
    1:07:10 And they didn’t, you know, they’ve been bureaucrats in the government and, you know, senior Democratic,
    1:07:12 you know, politicians and so forth.
    1:07:15 And they’re sitting there, you know, right now feeling like reality has just smacked them
    1:07:18 hard in the face because they lost the election so badly.
    1:07:22 But they’re now going into a, and specifically the Democratic party is going into a civil
    1:07:23 war.
    1:07:24 Right.
    1:07:27 And that form of the civil war is completely predictable.
    1:07:30 And it’s exactly what’s happening, which is half of them are saying, we need to go back
    1:07:31 to the center.
    1:07:34 And we need to de-radicalize because we’ve lost the people.
    1:07:35 We’ve lost the people in the middle.
    1:07:39 And so we need to go back to the middle in order to be able to get 50% plus one in an
    1:07:40 election.
    1:07:41 Right.
    1:07:43 And then the other half of them are saying, no, we weren’t true to our principles.
    1:07:44 We were too weak.
    1:07:45 We were too soft.
    1:07:46 You know, we must become more revolutionary.
    1:07:48 We must double down and we must, you know, celebrate, you know, murders in the street
    1:07:50 of health insurance executives.
    1:07:52 And that’s, and that right now is like a real fight.
    1:07:57 If I can tell you a little personal story that breaks my heart a little bit, there’s a, there’s
    1:08:02 a professor, a historian, I won’t say who, who I admire deeply, love his work.
    1:08:05 He’s a kind of a heretical thinker.
    1:08:12 And we were talking about having a podcast or doing a podcast and he eventually said
    1:08:18 that, you know what, at this time, given your guest list, I just don’t want the headache
    1:08:24 of being in the faculty meetings in my particular institution.
    1:08:28 And I asked who are the particular figures in this guest list.
    1:08:31 He said, Trump.
    1:08:37 And the second one, he said that you announced your interest to talk to Vladimir Putin.
    1:08:39 So I just don’t want the headache.
    1:08:45 Now I fully believe him, it would surprise a lot of people if I said who it is, but you
    1:08:50 know, this is a person who’s not bothered by the guest list.
    1:08:55 And I should also say that 80 plus percent of the guest list is left wing.
    1:08:56 Okay.
    1:08:59 Nevertheless, he just doesn’t want the headache.
    1:09:04 And that speaks to the, the thing that you’ve kind of mentioned that you just don’t, don’t
    1:09:05 want the headache.
    1:09:10 You just want to just have a pleasant morning with some coffee and talk to your fellow professors.
    1:09:14 And I think a lot of people are feeling that in universities and in other contexts in tech
    1:09:16 companies.
    1:09:20 And I wonder if that shifts how quickly that shifts.
    1:09:26 And there the percentages you mentioned, 20, 60, 20, matter, and the contents
    1:09:30 of the private groups matter, and the dynamics of how that shifts matter.
    1:09:32 Cause it’s very possible
    1:09:36 nothing really changes in universities and major tech companies, or just, there’s a kind
    1:09:45 of excitement right now for potential revolution and these new ideas, these new vibes, to reverberate
    1:09:51 through these companies and universities, but it’s possible the wall will hold.
    1:09:52 Yeah.
    1:09:53 So he’s a friend of yours.
    1:09:55 I respect that you don’t want to name him.
    1:09:56 I also respect you don’t want to beat on him.
    1:09:59 So I would like to beat on him on your behalf.
    1:10:00 Does he have tenure?
    1:10:01 Yes.
    1:10:04 He should use it.
    1:10:07 So this is the thing, right?
    1:10:10 This is the ultimate indictment of the corruption and the rot at the heart of our education
    1:10:12 system at the heart of these universities.
    1:10:14 And it’s by the way, it’s like across the board.
    1:10:16 It’s like all the, all the top universities.
    1:10:20 It’s like, cause the, the siren song for what it’s been, for 70 years, whatever, is the tenure
    1:10:25 system, the peer review system, the tenure system, um, which is like, yeah, you work your butt
    1:10:29 off as an academic to get a professorship and then to get tenure, because then you can
    1:10:32 say what you actually think, right?
    1:10:37 Then you can do your work and your research and your speaking and your teaching without
    1:10:40 fear of being fired, right?
    1:10:43 Without fear of being canceled, um, like academic freedom.
    1:10:48 I mean, think of the term academic freedom and then think of what these people have done
    1:10:49 to it.
    1:10:52 Like it’s gone.
    1:11:02 Like that entire thing was fake and is completely rotten and these people are completely, completely
    1:11:06 giving up the entire moral foundation of the system that has been built for them, which by the
    1:11:12 way is paid for virtually 100% by taxpayer money.
    1:11:16 That’s the, what’s the inkling of hope in this, like what this particular person and
    1:11:22 others who hear this, what can give them strength, inspiration, and courage?
    1:11:25 That the population at large is going to realize the corruption in their industry and is going to withdraw
    1:11:26 the funding.
    1:11:27 It’s okay.
    1:11:28 So desperation.
    1:11:30 No, no, no, no, no, think about what happens next.
    1:11:31 Okay.
    1:11:32 So let’s go, let’s go through it.
    1:11:35 So the, the universities are funded by four primary sources
    1:11:36 of federal funding.
    1:11:39 The big one is a federal student loan program, which is, you know, in the many trillions of
    1:11:43 dollars at this point and only spiraling, you know, way faster than inflation.
    1:11:44 That’s number one.
    1:11:48 Number two is federal research funding, which is also very large and you probably know that
    1:11:53 when a scientist at the university gets a research grant, the university rakes as much
    1:11:58 as 70% of the money for central uses.
    1:12:01 Number three is tax exemption at the operating level, which is based in the idea that these
    1:12:06 are nonprofit institutions as opposed to let’s say political institutions.
    1:12:11 Number four is tax exemptions at the endowment level, you know, which is the financial buffer
    1:12:15 that these places have.
    1:12:18 Anybody who’s been close to a university budget will basically see what would happen
    1:12:20 if you withdrew those sources of federal taxpayer money.
    1:12:24 And then for the state schools, the state money, they’d all be legally bankrupt.
    1:12:28 And then you could rebuild.
    1:12:30 Then you could rebuild because the problem right now, you know, like the folks at University
    1:12:32 of Austin are like mounting a very valiant effort.
    1:12:34 And I hope that they succeed and I’m sure cheering for them.
    1:12:38 But the problem is, you know, suppose you and I want to start a new university
    1:12:41 and we want to hire all the free thinking professors and we want to have the place that
    1:12:42 fixes all this.
    1:12:45 Practically speaking, we can’t do it because we can’t get access to that money.
    1:12:48 Here’s the most direct reason we can’t get access to that money.
    1:12:50 We can’t get access to federal student funding.
    1:12:54 Do you know how universities are accredited for the purpose of getting access to federal
    1:12:57 student funding, federal student loans?
    1:13:00 They’re accredited by the government, but not directly, indirectly.
    1:13:02 They’re not accredited by the Department of Education.
    1:13:07 Instead what happens is the Department of Education accredits accreditation bureaus
    1:13:09 that are non-profits that do the accreditation.
    1:13:12 Guess what the composition of the accreditation bureaus is?
    1:13:16 The existing universities, they’re in complete control.
    1:13:20 The incumbents are in complete control as to who gets, as to who gets access to federal
    1:13:21 student loan money.
    1:13:26 Guess how enthusiastic they are about accrediting a new university, right?
    1:13:32 And so we have a government funded and supported cartel that has gone, I mean, it’s just obvious
    1:13:36 now, it’s just gone sideways in basically any possible way it could go sideways, including,
    1:13:40 I mean, literally, as you know, students getting beaten up on campus for being the wrong religion.
    1:13:43 They’re just wrong in every possible way at this point.
    1:13:45 And it’s all on the federal taxpayer’s back.
    1:13:50 And there is no way, I mean, my opinion, there is no way to fix these things without replacing
    1:13:51 them.
    1:13:54 And there’s no way to replace them without letting them fail.
    1:13:56 And by the way, it’s like everything else in life.
    1:13:59 I mean, in a sense, this is like the most obvious conclusion of all time, which is what
    1:14:04 happens in the business world when a company does a bad job is they go bankrupt and another
    1:14:05 company takes its place, right?
    1:14:07 And that’s how you get progress.
    1:14:11 And of course, below that is what happens is this is the process of evolution, right?
    1:14:12 Why does anything ever get better?
    1:14:16 Because things are tested and tried and then you know, the things that are good survive.
    1:14:18 And so these places have cut themselves off.
    1:14:21 They’ve been allowed to cut themselves off from both from evolution at the institutional
    1:14:28 level and evolution at the individual level, as shown by the just widespread abuse of tenure.
    1:14:33 And so we’ve just stalled out, we built an ossified system, an ossified centralized corrupt
    1:14:34 system.
    1:14:36 We’re surprised by the results.
    1:14:38 They are not fixable in their current form.
    1:14:40 I disagree with you on that.
    1:14:44 Maybe it’s grounded in hope that I believe you can revolutionize the system from within
    1:14:48 because I do believe Stanford and MIT are important.
    1:14:51 Oh, but that logic doesn’t follow at all.
    1:14:53 That’s underpants gnome logic.
    1:14:55 Underpants gnome, can you explain what that means?
    1:14:56 Underpants gnome logic.
    1:14:59 I just started watching a key touchstone of American culture with my nine-year-old, which
    1:15:00 of course is South Park.
    1:15:01 Yes.
    1:15:02 Wow.
    1:15:05 And there is a, which by the way is a little aggressive for a nine-year-old.
    1:15:06 Very aggressive.
    1:15:07 But he likes it.
    1:15:10 So he’s learning all kinds of new words.
    1:15:11 All kinds of new ideas.
    1:15:12 But yeah.
    1:15:14 I told him, I said, “You’re going to hear words on here that you are not allowed to
    1:15:15 use.”
    1:15:16 Right.
    1:15:17 Education.
    1:15:22 And I said, “Do you know how we have an agreement that we never lie to mommy?”
    1:15:27 I said, “Not using a word that you learn in here does not count as lying.”
    1:15:28 Wow.
    1:15:29 And keep that in mind.
    1:15:32 Orwellian redefinition of lying, but yes, go ahead.
    1:15:35 Of course, in the very opening episode, in the first 30 seconds, one of the kids calls
    1:15:36 the other kid a dildo.
    1:15:37 Right?
    1:15:38 We’re off to the races.
    1:15:39 Yep.
    1:15:40 Let’s go.
    1:15:41 Daddy, what’s a dildo?
    1:15:42 Yep.
    1:15:48 You know, I’m sorry, I don’t know.
    1:15:56 So, famous episode of South Park, the underpants gnomes, and so all the kids basically
    1:15:59 realize that their underpants are going missing from their dresser drawers.
    1:16:02 Somebody stealing the underpants, and it’s just like, “Well, who on earth would steal
    1:16:03 the underpants?”
    1:16:05 And it turns out it’s the underpants gnomes.
    1:16:07 And it turns out the underpants gnomes have come to town, and they’ve got this little
    1:16:10 underground warren of tunnels and storage places for all the underpants.
    1:16:14 And so they go out at night, they steal the underpants, and the kids discover the underpants
    1:16:16 gnomes, and they’re, “What are you doing?
    1:16:17 What’s the point of this?”
    1:16:21 And so the underpants gnomes present their master plan, which is a three-part plan, which
    1:16:24 is step one, collect underpants.
    1:16:26 Step three, profit.
    1:16:30 Step two, question mark.
    1:16:34 So you just proposed the underpants gnomes, which is very common in politics.
    1:16:37 So the form of this in politics is, we must do something.
    1:16:41 This is something, therefore we must do this.
    1:16:45 But there’s no causal logic chain in there at all to expect that that’s actually going
    1:16:48 to succeed, because there’s no reason to believe that it is.
    1:16:49 It’s the same thing.
    1:16:50 But this is what I hear all the time.
    1:16:56 I will let you talk as the host of the show in a moment, but I hear this all the time.
    1:17:00 I have friends who are on these boards, very involved with these places, and I hear this
    1:17:02 all the time, which is like, “Oh, these are very important.
    1:17:07 We must fix them, and so therefore they are fixable.”
    1:17:09 There’s no logic chain there at all.
    1:17:14 If there’s that pressure that you described in terms of cutting funding, then you have
    1:17:22 the leverage to fire a lot of the administration and have new leadership that steps up, that
    1:17:27 aligns with this vision that things really need to change at the heads of the universities,
    1:17:33 and they put students and faculty primary, fire a lot of the administration, and realign
    1:17:40 and reinvigorate this idea of freedom of thought and intellectual freedom.
    1:17:45 Because there is already a framework of great institutions that’s there, and the way they
    1:17:50 talk about what it means to be a great institution is aligned with this very idea that you’re
    1:17:51 talking about.
    1:17:56 It’s this meaning like intellectual freedom, the idea of tenure, right?
    1:18:00 On the surface, it’s aligned, underneath is become corrupted.
    1:18:03 If we say free speech and academic freedom often enough, sooner or later these tenured
    1:18:04 professors will get brave.
    1:18:07 Well, do you think the universities are fundamentally broken?
    1:18:09 Okay, so how do you fix it?
    1:18:19 How do you have institutions for educating 20-year-olds and institutions that host researchers
    1:18:24 that have the freedom to do epic shit, like research-type shit that’s outside the scopes
    1:18:27 of R&D departments and inside companies?
    1:18:29 So how do you create an institution like that?
    1:18:31 How do you create a good restaurant when the one down the street sucks?
    1:18:34 All right, you invent something new?
    1:18:36 You open a new restaurant?
    1:18:37 Yeah.
    1:18:38 Okay.
    1:18:41 How often in your life have you experienced a restaurant that’s just absolutely horrible
    1:18:43 and it’s poisoning all of its customers and the food tastes terrible?
    1:18:46 And then three years later, you go back and it’s fantastic.
    1:18:49 Charlie Munger actually had the best comment on this, the great investor, Charlie Munger, the
    1:18:50 great comment.
    1:18:52 He was once asked, he’s like, you know, General Electric was going through
    1:18:55 all these challenges and he was asked at the Q&A, he said, “How would you fix the culture
    1:18:56 of General Electric?”
    1:18:58 And he said, “Fix the culture of General Electric?”
    1:19:02 He said, “I couldn’t even fix the culture at a restaurant.”
    1:19:03 Like it’s insane.
    1:19:04 Like obviously you can’t do it.
    1:19:07 I mean, nobody in business thinks you can do that.
    1:19:09 Like, it’s impossible.
    1:19:13 Like, it’s not, it’s, no, no, look, having said all that, I should also express this
    1:19:17 because I have a lot of friends who work at these places and are involved in various attempts
    1:19:18 to fix them.
    1:19:19 I hope that I’m wrong.
    1:19:20 I would love to be wrong.
    1:19:23 I would love for the underpants gnomes' step two to be something
    1:19:26 clear and straightforward that they can figure out how to do.
    1:19:27 I would love for them to fix it.
    1:19:29 I’d love to see them come back to their spoken principles.
    1:19:30 I think that’d be great.
    1:19:33 I’d love to see the professors with tenure get bravery.
    1:19:34 I would love to see.
    1:19:38 I mean, it’d be fantastic, you know, my partner and I’ve done like a lot of public speaking
    1:19:39 on this topic.
    1:19:42 It’s, it’s been intended to not just be harsh, but also be like, okay, like these, these
    1:19:44 challenges have to be confronted directly.
    1:19:48 By the way, let me also say something positive. You know, especially post October 7th, there
    1:19:52 are a bunch of very smart people who are major donors and board members of these institutions,
    1:19:56 like Marc Rowan, you know, who are really coming in and, I think, legitimately
    1:19:57 trying to fix these places.
    1:20:00 I have a friend on the executive committee at one of the top technical universities.
    1:20:02 He’s working over time to try to do this.
    1:20:05 Man, I hope they can figure it out.
    1:20:08 But I, but the counter question would just be like, do you see it actually happening
    1:20:10 at a single one of these places?
    1:20:13 I’m a person that believes in leadership.
    1:20:18 If you have the right leadership, the whole system can be changed.
    1:20:21 So here’s a question for your friend who have tenure at one of these places, which is who
    1:20:23 runs his university.
    1:20:28 I think, you know, you know, I think runs it whoever the fuck says they run it.
    1:20:29 That’s what great leadership is.
    1:20:31 Like a president has that power.
    1:20:36 But how? Does he have the leverage? Because they can mouth off like Elon, but can they fire the professors?
    1:20:39 They can fire them through being vocal publicly.
    1:20:40 Yes.
    1:20:41 Fire the professors.
    1:20:42 What are you talking about? Legally?
    1:20:44 No, they cannot fire the professors.
    1:20:45 Then we know who runs the university.
    1:20:46 The professors.
    1:20:47 Yeah.
    1:20:49 The professors, the professors and the students, the professors and the feral students.
    1:20:53 And they’re of course in a radicalization feedback cycle, driving each other crazy.
    1:20:54 The feral students.
    1:20:55 Yeah, the feral students.
    1:20:56 Yeah, the feral students.
    1:20:59 What happens when you’re put in charge of your bureaucracy, where the, where the thing
    1:21:02 that the bureaucracy knows is that they can outlast you?
    1:21:05 The thing that the tenure professors at all these places know is it doesn’t matter who
    1:21:09 the president is because they can outlast them because they cannot get fired.
    1:21:12 By the way, it’s the same thing that bureaucrats in the government know.
    1:21:14 It’s the same thing that the bureaucrats in the Department of Education know.
    1:21:16 They know the exact same thing.
    1:21:17 They can outlast you.
    1:21:20 It’s, I mean, it’s the whole thing that the resistance, like they can be the resistance.
    1:21:23 They can just sit there and resist, which is what they do.
    1:21:24 They’re not fireable.
    1:21:26 That’s definitely a crisis that needs to be solved.
    1:21:27 It’s a huge problem.
    1:21:30 And I also don’t like that I’m defending academia here.
    1:21:37 I agree with you that the situation is dire, but I just think that institutions
    1:21:38 are important.
    1:21:41 And I should also add context, since you’ve been grilling me a little bit.
    1:21:45 You were using restaurants as an analogy and earlier offline in this conversation, you
    1:21:47 said the Dairy Queen is a great restaurant.
    1:21:51 So let’s, let’s let the listener take that.
    1:21:52 Dairy Queen is the best restaurant.
    1:21:53 The best restaurant.
    1:21:54 There you go.
    1:21:57 I think that's what Marc Andreessen is saying today. I don't want it to be cut.
    1:21:58 You should go order a blizzard.
    1:22:00 Just one day you should walk down there and order a blizzard.
    1:22:01 Yeah.
    1:22:03 They can get like 4,000 calories in a cup.
    1:22:04 They can.
    1:22:05 And they’re delicious.
    1:22:06 Amazing.
    1:22:07 They are truly delicious.
    1:22:08 And they’ll put, they’ll put anything in there you want.
    1:22:09 All right.
    1:22:10 Okay.
    1:22:12 So, but anyway, let me just close by saying, look, to my friends at the university system,
    1:22:14 I would just say, look, this is the challenge.
    1:22:16 I would just pose this as the challenge.
    1:22:19 To me, having had a lot of these conversations,
    1:22:20 this is the bar.
    1:22:22 In my view, this is the conversation that actually has to happen.
    1:22:24 This is the bar that actually has to be hit.
    1:22:27 These problems need to be confronted directly, because I think there's
    1:22:28 been way too much.
    1:22:31 I mean, I'm actually worried, kind of on the other side, that there's too much happy talk in
    1:22:32 these conversations.
    1:22:35 I think the taxpayers do not understand this level of crisis.
    1:22:39 And I think if the taxpayers come to understand it, I think the funding evaporates.
    1:22:43 And so I think the fuse is going, through, you know, no fault of any of ours, but like
    1:22:44 the fuse is going.
    1:22:47 And there's some window of time here to fix this and address it and justify the money.
    1:22:53 Because normal taxpayers sitting in normal towns, in normal jobs, are not going
    1:22:56 to tolerate this for that much longer.
    1:23:00 You mentioned censorship a few times. Let us, if we can, go deeper into the darkness of
    1:23:04 the past and how the censorship mechanism was used.
    1:23:09 So you are a good person to speak about the history of this, because you were there on
    1:23:14 the ground floor in 2013-ish at Facebook.
    1:23:23 I heard that you were there when they invented, or maybe developed, the term hate speech in
    1:23:28 the context of censorship on social media.
    1:23:33 So take me through that history, if you can, the use of censorship.
    1:23:37 So I was there on the ground floor in 1993.
    1:23:39 There’s multiple floors to this building apparently.
    1:23:40 There are.
    1:23:41 Yeah.
    1:23:45 So I was first asked to implement censorship on the internet, which was in the web browser.
    1:23:46 That is fast.
    1:23:47 Yeah.
    1:23:48 Yeah.
    1:23:51 Actually, in 1992, I was asked to implement a nudity filter.
    1:23:53 Did you have the courage to speak up back then?
    1:23:56 I didn’t have any problem speaking up back then.
    1:23:58 I was making six dollars and 25 cents an hour.
    1:23:59 I did not have a lot to lose.
    1:24:03 No, I was asked at the time, and look, you know, it was in some sense
    1:24:07 a legitimate request, because I was working on a research project actually funded by the
    1:24:09 federal government at a public university.
    1:24:12 So you know, I don't think my boss was in any way out of line, but it was like, yeah,
    1:24:15 this web browser thing is great, but could it just make sure to not have
    1:24:17 any photos of naked people show up?
    1:24:21 But if you think about this for a second as a technologist, I had an issue, which is this
    1:24:22 was pre-ImageNet, right?
    1:24:26 And so I had a brief period where I tried to imagine an algorithm that I referred to
    1:24:32 as the breast detection algorithm that I was going to have to design.
    1:24:36 And then, apparently, a variety of other body parts people are also sensitive about.
    1:24:41 And then I politely declined to do this, and not just for the technical difficulties.
    1:24:43 Well, number one, I actually didn't know how to do it, but number two is just
    1:24:46 like, no, I'm just not building a censorship engine.
    1:24:48 Like I’m, you know, I’m just not doing it.
    1:24:51 And in those days, you know, the internet generally was, you know, a
    1:24:55 free-fire zone for everything. It's actually interesting: sort of pre-'93,
    1:24:57 the internet was such a specific niche community.
    1:25:02 Like it was like the million kind of highest-IQ nerds in the world.
    1:25:06 And so it actually didn't really have a lot of issues, because people were super
    1:25:10 interested in talking about things like astrophysics and not very interested in, you know, even
    1:25:11 politics at that time.
    1:25:16 So there really was not an issue there, but yeah, I didn’t want to start the process.
    1:25:19 So I think the way to think about this, so first of all, you know, yeah, I was involved
    1:25:22 in this at Facebook every step of the way, I've been involved in this at Facebook every
    1:25:24 step of the way. I joined the board there in 2007.
    1:25:28 So I've seen everything in the last, you know, almost 20 years, every step of the
    1:25:29 way.
    1:25:31 But also I’ve been involved in most of the other companies over time.
    1:25:33 So I was an angel investor in Twitter, I knew them really well.
    1:25:38 We were the founding investor in Substack, I’m part of the Elon takeover of Twitter
    1:25:40 with X, I was an angel at LinkedIn.
    1:25:44 So I’ve been in these, we were the funder of Pinterest, we were one of the main investors
    1:25:46 there, Reddit as well.
    1:25:48 And I was having these conversations with all these guys all the way through.
    1:25:52 So I can't talk as much specifically about Facebook, but I can just tell you, like, the general pattern,
    1:25:55 and for quite a while it was kind of all the same across these companies.
    1:26:00 Yeah, so basically the way to think about this, the true kind of nuanced view of this
    1:26:05 is that there is practically speaking no internet service that can have zero censorship.
    1:26:09 And by the way, that also mirrors the fact that there is no country that actually has unlimited free
    1:26:11 speech either.
    1:26:15 The US First Amendment actually has 12 or 13 formal carve outs from the Supreme Court
    1:26:21 over time, you know, so incitement to violence and terrorist recruitment and child abuse
    1:26:23 and, you know, child pornography and so forth, they're not covered by
    1:26:25 the First Amendment.
    1:26:28 And just practically speaking, if you and I are going to start an internet company and
    1:26:32 have a service, we can’t have that stuff either, right, because it’s illegal or it will just
    1:26:33 clearly, you know, destroy the whole thing.
    1:26:36 So you’re always going to have a censorship engine.
    1:26:39 I mean, hopefully it’s not actually in the browser, but like you’re going to have it
    1:26:42 for sure at the level of an internet service.
    1:26:45 But then what happens is now you have a machine, right?
    1:26:50 Now you have a system where you can put in rules saying we allow this, we don’t allow
    1:26:51 that.
    1:26:54 You have enforcement, you have consequences, right?
    1:26:59 And once that system is in place, like it becomes the ring of power, right, which is
    1:27:03 like, okay, now anybody in that company or anybody associated with a company or anybody
    1:27:06 who wants to pressure that company will just start to say, okay, you should use that machine
    1:27:11 for more than just terrorist recruitment and child pornography, you should use it for XYZ.
    1:27:17 And basically that transition happened, call it 2012-2013, when there was this like
    1:27:19 very, very kind of rapid pivot.
    1:27:22 I think the kickoff to it for some reason was this, it was the beginning of the second
    1:27:24 Obama term.
    1:27:29 I think it also coincided with the sort of arrival of the first kind of super woke kids
    1:27:34 into these schools, you know, that kind of, you know, it’s the kids that were in school
    1:27:37 between like, you know, for the Iraq war and then the global financial crisis and like,
    1:27:40 they came out like super radicalized, they came into these companies and they immediately
    1:27:45 started mounting these social crusades to ban and censor lots of things.
    1:27:48 And then, you know, quite frankly, the Democratic Party figured this out and they figured out
    1:27:51 that these companies were, you know, very subject to being controlled and the, you know,
    1:27:55 the executive teams and boards of directors are almost all Democrats and, you know, there’s
    1:27:58 tremendous circulation, a lot of Obama people from the first term actually came and worked
    1:28:02 in these companies and a lot of FBI people and other, you know, law enforcement intelligence
    1:28:07 people came in and worked and they were all Democrats for that set.
    1:28:10 And so they just, you know, the ring of power was lying on the table.
    1:28:15 It had been built and they, you know, pick it up and put it on and then they just ran.
    1:28:18 And the original discussions were basically always on two topics.
    1:28:21 It was hate speech and misinformation.
    1:28:23 Hate speech was the original one.
    1:28:26 And the hate speech conversation started exactly like you'd expect, which is: we can't have
    1:28:29 the n-word, to which the answer is fair enough.
    1:28:30 Let's not have the n-word.
    1:28:31 Okay.
    1:28:34 Now we’ve set a precedent, right?
    1:28:37 And then, and then Jordan Peterson has talked a lot about this, the definition of hate speech
    1:28:41 ended up being things that make people uncomfortable, right?
    1:28:43 So we can’t have things that make, you know, people uncomfortable.
    1:28:46 I, of course, you know, people like me that are disagreeable, raise their hands and say,
    1:28:49 well, that idea right there makes me uncomfortable.
    1:28:51 But of course, that doesn’t count as hate speech, right?
    1:28:56 So, you know, the ring of power is on one hand and not on the other hand.
    1:29:01 And then basically that began this slide where it ended up being that, you know, completely
    1:29:05 anodyne, this is the point that Mark has been making recently, completely anodyne comments that
    1:29:08 are completely legitimate on television or on the Senate floor
    1:29:10 all of a sudden are hate speech and can't be said online.
    1:29:14 So that, you know, the ring of power was wielded in grossly irresponsible ways.
    1:29:16 We can talk about all the stuff that happened there.
    1:29:17 And then the other one was misinformation.
    1:29:20 And that wasn’t as there was a little bit of that early on.
    1:29:23 But of course, that really kicked in with with Trump.
    1:29:28 So, so the hate speech stop, the hate speech stop predated Trump by like three or four years.
    1:29:32 The misinformation stuff was basically, it was a little bit later, and it was the consequence
    1:29:33 of the Russiagate hoax.
    1:29:38 And then that was, you know, a ring of power that was even more powerful, right?
    1:29:42 Because, you know, hate speech is like, okay, is something offensive
    1:29:44 or not, like at least you can have a question as to whether that's the case.
    1:29:48 But the problem with misinformation is like, is it the truth or not?
    1:29:52 You know, what have we known for 800 years or whatever of Western civilization?
    1:29:56 It’s that, you know, there’s only a few entities that can determine the truth on every topic.
    1:29:58 You know, there’s God, you know, there’s the king.
    1:29:59 We don’t have those anymore.
    1:30:02 And the rest of us are all imperfect and flawed.
    1:30:05 And so the idea that any group of experts is going to sit around the table and decide
    1:30:08 on the truth is, you know, deeply anti-Western and deeply authoritarian.
    1:30:14 And somehow the misinformation kind of crusade went from the Russiagate hoax into just full-blown.
    1:30:17 We’re going to use that weapon for whatever we want.
    1:30:20 And then, of course, then the culminating moment on that that really was the straw that
    1:30:25 broke the camel’s back was we’re going to censor all theories that the COVID virus might
    1:30:28 have been manufactured in a lab as misinformation.
    1:30:32 And inside these companies, like that was the point where people for the first time, this
    1:30:36 is like what, three years ago, for the first time they were like, that was when it sunk
    1:30:39 in where it’s just like, okay, this has spun completely out of control.
    1:30:42 But anyway, that’s how we got to where we are.
    1:30:47 And then basically that spell lasted, that that that complex existed and got expanded
    1:30:51 basically from call it 2013 to 2023.
    1:30:54 I think basically two things broke it.
    1:30:55 One is Substack.
    1:31:00 And I'm super proud of those guys, because they started from scratch and declared
    1:31:04 right up front that they were going to be a free speech platform.
    1:31:09 And they came under intense pressure, including from the press, which, you know, tried to
    1:31:12 just beat them to the ground and kill them, and intense pressure, by the way, from, you
    1:31:16 know, let's say certain of the platform companies, you know, basically threatening them.
    1:31:17 And they stood up to it.
    1:31:21 And, you know, sitting here today, they have the widest spectrum of speech and conversation
    1:31:24 of, you know, anywhere on planet Earth, and they've done a great job and it's worked.
    1:31:25 By the way, it’s great.
    1:31:30 And then obviously Elon, you know, with X was the, you know, the hammer blow.
    1:31:34 And then the third one now is what Mark is doing at Facebook.
    1:31:39 And there’s also like singular moments, I think you’ve spoken about this, which like
    1:31:45 John Stuart going on Stephen Colbert and talking about the lab leak theory.
    1:31:46 Yes.
    1:31:50 I just, there’s certain moments that just kind of shake everybody up.
    1:31:54 The right person, the right time, just it’s a wake up call.
    1:31:58 So there, and I will tell you, and I should say Jon Stewart attacked me recently,
    1:32:03 so I'm not that thrilled about him, but I would say I was a long-running fan of Jon Stewart.
    1:32:08 I watched probably every episode of the Daily Show when he was on it for probably 20 years.
    1:32:11 But he did a very important public service and it was that appearance on the Colbert
    1:32:12 show.
    1:32:15 And I don’t know how broadly this is, you know, at the time it was in the news briefly,
    1:32:18 but I don’t know how if people remember this, but I will tell you in, in the rooms where
    1:32:22 people discuss what is misinformation and these policies, that was a very big moment.
    1:32:23 That was probably actually the key catalyzing moment.
    1:32:28 And I think he exhibited, I would say conspicuous bravery and had a big impact with that.
    1:32:31 And yeah, for people who don't recall what he did, this was in the full-blown
    1:32:35 period of: you absolutely, you know, you absolutely must lock down for two years, you absolutely
    1:32:38 must keep all the schools closed, you absolutely must have everybody work from home.
    1:32:41 You absolutely must wear a mask, like the whole thing.
    1:32:46 And one of those was you absolutely must believe that COVID was completely natural.
    1:32:51 You must believe that, and not believing that means you're a fascist Nazi Trump supporter,
    1:32:53 MAGA evil QAnon person, right?
    1:32:57 And that was like uniform and that was enforced by the social media companies.
    1:33:01 And like I said, that was the peak, and Jon Stewart went on the Colbert show, and I don't
    1:33:04 know if they planned it or not, because Colbert looked shocked, I don't know how much it was
    1:33:09 a bit, but he went on there and he just had one of these "the emperor's wearing no
    1:33:13 clothes" things where he said, it's just not plausible that you had the COVID super virus
    1:33:20 appear 300 yards down the street from the Wuhan Institute of Lethal Coronaviruses, like it's
    1:33:23 just not plausible, certainly, that you could just rule that out.
    1:33:26 And then there was another key moment actually, the more serious version was I think the author
    1:33:30 Nicholson Baker wrote a big piece for New York magazine and Nicholson Baker is like
    1:33:34 one of the great novelists of our time, and he wrote the piece and he did a complete
    1:33:35 addressing of it.
    1:33:39 And that was the first, I think that was the first legit, there had been like alt, you
    1:33:42 know, renegade, there had been, you know, people running around saying this, but getting
    1:33:43 censored all over the place.
    1:33:46 That was the first one that was like in the mainstream press, where he talked to
    1:33:49 all the heretics and he just laid the whole thing out.
    1:33:52 And that was a moment, and I remember, let's say, a board meeting at one of these companies
    1:33:56 after that where basically, you know, everybody looked around the table and it was like, all
    1:34:01 right, I guess we don't need to censor that anymore.
    1:34:03 And you know, and then of course, what immediately follows from that is, well, wait a minute,
    1:34:06 why were we censoring that in the first place?
    1:34:09 And okay, like, and then, you know, the downstream, not that day, but the downstream conversations
    1:34:14 were like, okay, if we made such a giant, in retrospect, if we all made such a giant
    1:34:17 collective mistake, censoring that, then what does that say about the rest of our regime?
    1:34:21 And I think that was the thread in the sweater that started to unravel it.
    1:34:24 I should say it again, I do think that the Jon Stewart appearance and the statement he
    1:34:26 made was a courageous act.
    1:34:27 Yeah, I agree.
    1:34:30 I think we need to have more of that in the world.
    1:34:38 And like you said, Elon, everything he did with X is a series of courageous acts.
    1:34:45 And I think what Zuck, what Mark Zuckerberg did on Rogan a few days ago is a courageous
    1:34:46 act.
    1:34:49 Can you just speak to that?
    1:34:51 He has become, I think, an outstanding communicator, right?
    1:34:54 And he’s, you know, somebody who came in for a lot of criticism earlier in his career
    1:34:55 on that front.
    1:35:00 And I think he’s one of these guys who can sit down and talk for three hours and make
    1:35:01 complete sense.
    1:35:05 And, you know, as you do with all of your episodes, like when somebody sits and talks
    1:35:09 for three hours, like you really get a sense of somebody, because it's really hard to be
    1:35:10 artificial for that long.
    1:35:12 And, you know, he's now done that repeatedly.
    1:35:13 He’s really good at it.
    1:35:16 And then look, again, I would maybe put him in the third category now, certainly
    1:35:20 after that appearance, I would say I would put him up there now with, you know, kind of
    1:35:23 Elon and Trump, in the sense that the public and the private are now synchronized.
    1:35:24 I guess I’d say that.
    1:35:27 Like, he said on that show what he really believes.
    1:35:28 He said all the same things that he says in private.
    1:35:31 Like I don’t think there’s really any discrepancy anymore.
    1:35:38 I would say he has always taken upon himself a level of obligation, responsibility to running
    1:35:43 a company the size of Meta and to running services that are that large.
    1:35:46 And I think, you know, his conception of what he’s doing, which I think is correct is he’s
    1:35:48 running services that are bigger than any country, right?
    1:35:52 He’s running, you know, over 3 billion people use those services.
    1:35:55 And so, and then, you know, the company has, you know, many tens of thousands of employees
    1:35:57 and many investors and it’s a public company.
    1:36:01 And he thinks very deeply and seriously about his responsibilities.
    1:36:05 And so, you know, he has not felt like he has had, let’s just say the complete flexibility
    1:36:07 that Elon has had.
    1:36:10 And you know, people could argue that one way or the other, but, you know, he's
    1:36:12 talked about this a lot.
    1:36:14 He’s evolved a lot.
    1:36:15 A lot of it was he learned a lot.
    1:36:17 And by the way, I’m going to put myself right back up there.
    1:36:20 Like I’m not claiming any huge foresight or heroism on any of this.
    1:36:22 Like I’ve also learned a lot.
    1:36:26 Like, like my views on things are very different than they were 10 years ago on lots of topics.
    1:36:29 And so, you know, I’ve been on a learning journey.
    1:36:31 He’s been on a learning journey.
    1:36:33 He is a really, really good learner.
    1:36:39 He assimilates information, you know, as well as or better than anybody else I know.
    1:36:42 The other thing I guess I would just say is he talked on that show about something very
    1:36:46 important, which is when you’re in a role where you’re running a company like that, there
    1:36:50 are a set of decisions that you get to make and you deserve to be criticized for those
    1:36:53 decisions and so forth and it’s valid.
    1:36:57 But you are under tremendous external pressure as well.
    1:36:59 And by the way, you’re under tremendous internal pressure.
    1:37:01 You’ve got your employees coming at you.
    1:37:03 You’ve got your executives in some cases coming at you.
    1:37:06 You’ve got your board in some cases coming at you.
    1:37:08 You’ve got your shareholders coming at you.
    1:37:11 So you’ve got your internal pressures, but you also have the press coming at you.
    1:37:13 You’ve got academia coming at you.
    1:37:17 You’ve got the entire non-profit complex coming, activist complex coming at you.
    1:37:21 And then really critically, you know, he talked about this on Rogan, and these companies all went
    1:37:27 through this especially in the last five years: you had the government coming at you.
    1:37:31 And you know, that’s the really, you know, stinky end of the pool where, you know, the
    1:37:35 government was in my view, you know, illegally exerting, you know, just in flagrant violation
    1:37:40 of the First Amendment and federal laws on speech and coercion and conspiracy, forcing
    1:37:44 these companies to engage in activities, you know, then again, in some cases, they may
    1:37:46 have wanted to do, but in other cases, they clearly didn’t want to do and felt like they
    1:37:48 had to do.
    1:37:54 And the level of pressure, like I just say, like I’ve known every CEO of Twitter, they’ve
    1:37:58 all had the exact same experience, which when they were in the job, it was just daily beatings.
    1:38:02 Like it’s just getting punched in the face every single day, constantly.
    1:38:10 And you know, Mark is very good at getting physically punched in the face and he’s very
    1:38:13 good at, you know, taking a punch and he has taken many, many punches.
    1:38:17 So I would encourage people to have a level of sympathy for these are not kings.
    1:38:20 These are people who operate with like, I would say, extraordinary levels of external
    1:38:21 pressure.
    1:38:26 I think if I had been in his job for the last decade, I would be a little puddle on the floor.
    1:38:30 And so it says, I think a lot about him that he has, you know, risen to this occasion the
    1:38:31 way that he has.
    1:38:33 And by the way, I should also say, you know, the cynicism, of course, comes out immediately.
    1:38:37 And, you know, it’s a legitimate thing for people to say, but you know, it’s like, oh,
    1:38:39 you’re only doing this because of Trump or, you know, whatever.
    1:38:43 And it’s just like, no, like he has been thinking about and working on these things and trying
    1:38:45 to figure them out for a very long time.
    1:38:50 And so I think what you saw are legitimate, deeply held beliefs, not some, you know, sort
    1:38:52 of just in the moment thing that could change at any time.
    1:38:59 So what do you think it’s like to be him and other leaders of companies to be you and withstand
    1:39:01 internal pressure and external pressure?
    1:39:02 What’s that life like?
    1:39:04 Is it deeply lonely?
    1:39:05 That’s a great question.
    1:39:07 Leaders are lonely to start with.
    1:39:10 And this is one of those things where almost nobody has sympathy, right?
    1:39:11 Nobody feels sorry for a CEO, right?
    1:39:13 Like, it’s not a thing, right?
    1:39:17 And, you know, and again, legitimately so, like CEOs get paid a lot, like the whole thing.
    1:39:18 There’s a lot of great things about it.
    1:39:21 So it’s not like they should be out there asking for a lot of sympathy, but it is the
    1:39:23 case that they are human beings.
    1:39:24 And it is the case that it is a lonely job.
    1:39:30 And the reason it’s a lonely job is because your words carry tremendous weight.
    1:39:33 And you are dealing with extremely complicated issues and you’re under a tremendous amount
    1:39:36 of emotional, you know, personal emotional stress.
    1:39:40 And, you know, you often end up not being able to sleep well and you end up not being
    1:39:43 able to, like, keep up an exercise routine and all those things and, you know, you come
    1:39:45 under family stress because you’re working all the time.
    1:39:48 My partner, Ben, you know, he was CEO of our last company before we started
    1:39:49 the venture firm.
    1:39:52 He said, you know, the problem he had, like, with his family life was he would, even when
    1:39:57 he was home at night, he wasn’t home because he was in his head trying to solve all the
    1:39:58 business problems.
    1:40:00 And so he was like supposed to be like having dinner with his kids and he was physically
    1:40:01 there, but he wasn’t mentally there.
    1:40:05 So, you know, you kind of get, you get that a lot, but the key thing is like you can’t
    1:40:06 talk to people, right?
    1:40:08 So you can’t, I mean, you can talk to your spouse and your kids, but like they don’t
    1:40:11 understand that they’re not working in your company, they don’t understand, have the context
    1:40:13 to really help you.
    1:40:16 If you talk to your executives, they all have agendas, right?
    1:40:20 And they can't resist, like it's just human nature.
    1:40:23 And so you can’t necessarily rely on what they say.
    1:40:28 It’s very hard in most companies to talk to your board because they can fire you.
    1:40:29 Right.
    1:40:32 Now, Mark has the situation because he has control, it actually turns out he can talk
    1:40:35 to his board and Mark talks to us about many things that he does, that most CEOs won’t
    1:40:39 talk to the boards about because we, literally because we can’t fire him.
    1:40:42 But in general, including all the CEOs of Twitter, none of them had control,
    1:40:44 and so they could all get fired.
    1:40:47 So you can’t talk to the board members, they’re going to fire you.
    1:40:51 You can’t talk to the shareholders because they’ll just like dump your stock, right?
    1:40:54 Like, okay, so who’s the, so, so the, so every once in a while what you find is basically
    1:40:58 the best case scenario they have is they can talk to other CEOs and there’s these little
    1:41:00 organizations where they kind of pair up and do that.
    1:41:03 And so they maybe get a little bit out of that, but, but even that’s fraught with peril
    1:41:08 because can you really talk about confidential information with another CEO, insider trading
    1:41:09 risk?
    1:41:13 And so it’s just a very, it’s just a very lonely and isolating thing to start with.
    1:41:16 And then you, and then on top of that, you apply pressure, right.
    1:41:17 And that’s where it gets painful.
    1:41:22 And then maybe I’ll just spend a moment on this internal, external pressure thing.
    1:41:28 My general experience with companies is that they can withstand most forms of external
    1:41:32 pressure as long as they retain internal coherence, right?
    1:41:39 So as long as the internal team is really bonded together and supporting each other,
    1:41:41 most forms of external pressure you can withstand.
    1:41:46 And by that, I mean investor stuff, your stock, you lose your biggest customers, you know,
    1:41:51 whatever negative article, you know, negative headline, you know, you can, you can withstand
    1:41:52 all that.
    1:41:54 And basically, in fact, many of those forms of pressure can be bonding experiences for
    1:41:57 the team where they, where they come out stronger.
    1:42:01 What you 100% cannot withstand is the internal crack.
    1:42:05 And what I always look for in high pressure corporate situations now is the moment when
    1:42:07 the internal team cracks.
    1:42:13 Because I know the minute that happens, we're in a different regime, like it's like the,
    1:42:16 you know, the solid has turned into liquid, like we're in a different regime and like the whole
    1:42:17 thing can unravel in the next week.
    1:42:20 Because then people turn on each other. I mean, this is what's happening in Los Angeles right
    1:42:21 now.
    1:42:26 The mayor and the fire chief turned on each other and that’s it.
    1:42:27 That government is dysfunctional.
    1:42:29 It is never going to get put back together again.
    1:42:30 It is over.
    1:42:32 It is not going to work ever again.
    1:42:34 And that’s what happens inside companies.
    1:42:40 And so somebody like Mark is under, like, profound internal pressure and external
    1:42:41 pressure at the same time.
    1:42:45 Now he’s been very good at maintaining the coherence of his executive team, but he has
    1:42:50 had over the years a lot of activist employees as a lot of these companies have had.
    1:42:52 And so that’s been continuous pressure.
    1:42:55 And then the final thing I'd say is, I said that companies can withstand most forms of
    1:43:00 external pressure, but not all, and the special "not all" one is government pressure.
    1:43:05 When your government comes for you, yeah, any CEO who thinks that they're bigger
    1:43:09 than the government has that notion beaten out of them in short order.
    1:43:16 Can you just linger on that, because it is maybe educational and deeply disturbing.
    1:43:21 You've spoken about it before, but we're speaking about it again, this government pressure.
    1:43:27 So you think they’ve crossed the line into essentially criminal levels of pressure,
    1:43:32 flagrant criminality, felonies, like obvious felonies, and I can, I can actually cite
    1:43:33 the laws.
    1:43:36 But yes, absolute criminality.
    1:43:43 Can you explain how it was possible for that to happen, and maybe on a hopeful note, how we can avoid
    1:43:44 that happening again?
    1:43:49 So to start with, a lot of this now is in the public record, which is good, because
    1:43:50 it needs to be in the public record.
    1:43:52 And so there's three forms of things that are in the public record that people
    1:43:53 can look at.
    1:43:57 So one is the Twitter files, right, which Elon put out with the set of journalists when
    1:43:58 he took over.
    1:44:01 And I will just tell you, the Twitter files are 100% representative of what I’ve seen
    1:44:03 at every other one of these companies.
    1:44:05 And so you can just see what happened in Twitter.
    1:44:08 And you can just assume that that happened in these other companies, you know, for the
    1:44:11 most part, certainly in terms of the kind of pressure that they got.
    1:44:15 So that’s that’s number one, that stuff, you can just read it and you should if you haven’t.
    1:44:19 The second is Mark referenced this in the Rogan podcast.
    1:44:22 There’s a congressman, Jim Jordan, who has a committee congressional committee called
    1:44:23 the Weaponization Committee.
    1:44:27 And they in the last, you know, whatever three years have done a full scale investigation
    1:44:28 of this.
    1:44:31 And Facebook produced a lot of documents into that investigation.
    1:44:35 And many of those have now been made public, and you can download those reports.
    1:44:38 And there's like 2,000 pages worth of material on that.
    1:44:41 And that’s essentially the Facebook version of the Twitter files just arrived at with
    1:44:43 a different mechanism.
    1:44:45 And then third is Mark himself talking about this on Rogan.
    1:44:47 So, you know, I'd just defer to his comments there.
    1:44:53 But yeah, basically what those three forms of information show you is basically the government,
    1:44:58 you know, over time, and then culminating in 2020, 2021, you know, in the last four years
    1:45:01 just decided that the First Amendment didn’t apply to them.
    1:45:06 And they just decided that federal laws around free speech and around conspiracies to take
    1:45:10 away the rights of citizens just don’t apply.
    1:45:14 And they just decided that they can just arbitrarily pressure, just like literally arbitrarily
    1:45:19 call up companies and threaten and bully and yell and scream and, you know, threaten repercussions
    1:45:22 and force people to force them to censor.
    1:45:25 And you know, there’s this old thing of like, well, the First Amendment only applies to,
    1:45:27 you know, the government doesn’t apply to companies.
    1:45:30 It’s like, well, there’s actually a little bit of nuance to that.
    1:45:34 First of all, it definitely applies to the government like 100%.
    1:45:36 The First Amendment applies to the government.
    1:45:39 By the way, so does the Fourth Amendment and the Fifth Amendment, including the right to
    1:45:41 due process also applies to the government.
    1:45:45 There was no due process at all to any of the censorship regime that was put in place.
    1:45:48 There was no due process put in place, by the way, for debanking either.
    1:45:52 Those are just as serious violations as the free speech violations.
    1:45:55 So this is just like flagrant, flagrant unconstitutional behavior.
    1:45:57 And then there are specific federal statutes.
    1:46:00 There’s it’s 18241 and 18242.
    1:46:04 And one of them applies to federal employees, government employees, and the other one applies
    1:46:10 to private actors around what’s called deprivation of rights and conspiracy to deprive rights.
    1:46:14 And it is not legal, according to the United States Criminal Code, for government employees
    1:46:19 or in a conspiracy private entities to take away constitutional rights.
    1:46:23 And interestingly, some of those constitutional rights are enumerated, for example, in the
    1:46:24 First Amendment, freedom of speech.
    1:46:28 And then some of those rights actually do not need to be enumerated.
    1:46:32 It is if the government takes away rights that you have, they don’t need to be specifically
    1:46:36 enumerated rights in the Constitution in order to still be a felony.
    1:46:40 The Constitution very specifically does not say you only have the rights that
    1:46:41 it gives you.
    1:46:44 It says you have all the rights that have not been previously defined as being taken
    1:46:45 away from you.
    1:46:46 Right.
    1:46:49 And so debanking qualifies: you know, the right to access the financial system
    1:46:53 is every bit as much subject to these laws as free speech.
    1:46:54 And so yeah, this has happened.
    1:46:57 And then I’ll just add one final thing, which is we’ve talked about two parties so far.
    1:47:01 Start with the government employees, and then we’ve talked about the companies.
    1:47:04 The government employees, for sure, have misbehaved.
    1:47:07 The companies, there’s a very interesting question there as to whether they are victims
    1:47:12 or perpetrators or both, you know, they will defend and they will argue and I believe they
    1:47:15 have a good case that they are victims not perpetrators, right?
    1:47:19 They are the downstream subjects of pressure, not the cause of pressure.
    1:47:23 But there’s a big swath of people who are in the middle and specifically the ones that
    1:47:26 are funded by the government that I think are in possibly pretty big trouble.
    1:47:29 And that’s all of these third party censorship bureaus.
    1:47:35 I mean, the one that sort of is most obvious is the so-called Stanford Internet Observatory
    1:47:37 that got booted up there over the last several years.
    1:47:43 And they basically were funded by the federal government to be third party censorship operations.
    1:47:47 And they’re private sector actors, but acting with federal funding.
    1:47:52 And so it puts them in this very interesting spot where there could be very obvious theory
    1:47:55 under which they’re basically acting as agents of the government.
    1:47:59 And so I think they’re also very exposed on this and have behaved in just flagrantly illegal
    1:48:00 ways.
    1:48:06 Obviously government should not apply any kind of pressure, even soft pressure, on companies
    1:48:07 to censor.
    1:48:08 Can’t.
    1:48:09 Not allowed.
    1:48:11 It really is disturbing.
    1:48:20 I mean, it probably started soft, lightly, slowly, and then it escalates, as the old will
    1:48:27 to power will instruct them to do. I mean, yeah, that's why there's
    1:48:31 that protection, because otherwise you can't put a check on the government's power, right?
    1:48:34 There are so many ways that they can get you like there are so many ways they can come
    1:48:35 at you and get you.
    1:48:39 And, you know, the thing here to think about is, a lot of times when we think about government
    1:48:40 action,
    1:48:41 we think about legislation, right?
    1:48:45 Because, so when I was a kid, we got trained in how does government work?
    1:48:49 There was this famous animated short, the thing we got shown was just a cartoon of how a bill
    1:48:50 becomes a law.
    1:48:52 It's like this, you know, this little bill singing along, going, I'm just
    1:48:53 a bill.
    1:48:54 Yeah.
    1:48:55 Exactly.
    1:48:56 Like it’s like, all right.
    1:48:57 It works at all.
    1:48:58 Like that doesn’t actually happen.
    1:48:59 We could talk about that.
    1:49:03 But even beyond that, mostly what we’re dealing with is not legislation.
    1:49:06 When we talk about government power these days, mostly it’s not legislation.
    1:49:10 Mostly it’s either regulation, which is basically the equivalent of legislation, but having not
    1:49:14 gone through the legislative process, which is a very big open legal issue and one of
    1:49:16 the things that DOGE is very focused on.
    1:49:20 Most government rules are not legislated, they're regulated, and there's tons and tons
    1:49:24 of regulations that these companies are subject to. So this is another cliche you'll hear a lot, which
    1:49:25 is, oh, private companies can do whatever they want.
    1:49:27 It’s like, oh, no, they can’t.
    1:49:32 They’re subject to tens of thousands of regulations that they have to comply with, and the hammer
    1:49:35 that comes down when you don’t comply with regulations is profound, like they can completely
    1:49:38 wreck your company with no ability for you to do anything about it.
    1:49:41 So regulation is a big part of the way the power gets exercised.
    1:49:45 And then there’s what’s called just flat out administrative power, the term that you’ll
    1:49:46 hear.
    1:49:48 And administrative power is just literally the government telling you, calling you and
    1:49:49 telling you what to do.
    1:49:50 Here’s an example of how this works.
    1:49:55 So Facebook had this whole program a few years back to do a global cryptocurrency for payments
    1:49:56 called Libra.
    1:49:59 And they built the entire system, and it was this high-scale sort of new cryptocurrency,
    1:50:01 and they were going to build it in every product, and there were going to be 3 billion people
    1:50:05 who could transact with Libra, and they went to the government, and they went to all these
    1:50:06 different agencies, trying to figure out how to make it
    1:50:09 so it was fully compliant with anti-money laundering and all these controls and everything,
    1:50:11 and they had the whole thing ready to go.
    1:50:16 Two senators wrote letters to the big banks saying, we’re not telling you that you can’t
    1:50:21 work with Facebook on this, but if you do, you should know that every aspect of your business
    1:50:26 is going to come under greatly increased level of regulatory scrutiny.
    1:50:29 Which is, of course, the exact equivalent of, it sure is a nice corner restaurant you
    1:50:33 have here, it would be a shame if somebody tossed a Molotov cocktail through the window
    1:50:34 and burned it down tonight.
    1:50:37 And so what is that letter?
    1:50:42 It’s not a law, it’s not even a regulation, it’s just like straight direct state power.
    1:50:47 And then it culminates in literally calls from the White House where they’re just flat
    1:50:50 out telling you what to do, which is, of course, what a king gets to do, but not what
    1:50:52 a president gets to do.
    1:50:57 And so anyway, so what these companies experienced was, they experienced the full panoply of
    1:51:00 this, but the level of intensity was in that order.
    1:51:03 It was actually legislation was the least important part.
    1:51:06 Regulation was more important, administrative power was more important, and then just flat
    1:51:10 out demands and flat out threats were ultimately the most important.
    1:51:11 How do you fix it?
    1:51:15 Well, first of all, you have to elect people who don’t do it.
    1:51:19 So as with all these things, ultimately, the fault lies with the voters.
    1:51:21 And so you have to decide you don’t want to live in that regime.
    1:51:24 I have no idea what part of this recent election mapped to the censorship regime.
    1:51:28 I do know a lot of people on the right got very angry about the censorship, but I think
    1:51:32 it probably at least helped with enthusiasm on that side.
    1:51:37 Maybe some people in the left will now not want their democratic nominees to be so pro-censorship.
    1:51:40 So the voters definitely get a vote.
    1:51:45 Number one, number two, I think you need transparency, you need to know what happened.
    1:51:46 We know some of what happened.
    1:51:50 Peter Thiel has written in the FT just now saying we just need, like, after what we've
    1:51:55 been through in the last decade, we need broad-based truth and reconciliation efforts to really
    1:51:57 get to the root of things.
    1:51:59 So maybe that’s part of it.
    1:52:02 We need investigations for sure.
    1:52:03 Ultimately we need prosecutions.
    1:52:06 We need ultimately, we need people to go to jail because we need to set object lessons
    1:52:09 that say that you don’t get to do this.
    1:52:13 And on those last two, I would say that those are both up to the new administration and I
    1:52:15 don’t want to speak for them and I don’t want to predict what they’re going to do.
    1:52:19 But they have, they for sure have the ability to do both of those things and we’ll see
    1:52:20 where they take it.
    1:52:21 Yeah, it’s truly disturbing.
    1:52:26 I don’t think anybody wants this kind of overreach of power for government, including perhaps
    1:52:28 people that are participating in it.
    1:52:35 It’s like this dark momentum of power that you just get caught up in it and that’s the
    1:52:36 reason there’s that kind of protection.
    1:52:38 Nobody wants that.
    1:52:41 So I use the metaphor of the ring of power, and for people who don't catch the reference,
    1:52:44 it's Lord of the Rings, and the thing with the ring of power in Lord of the Rings is it's the
    1:52:48 ring that Gollum has in the beginning, and it turns you invisible, and it turns out it
    1:52:52 unlocks all this, like, fearsome power. It's the most powerful thing in the world, the key to
    1:52:53 everything.
    1:52:56 And basically the moral lesson of Lord of the Rings, which was written by a guy who thought
    1:53:00 very deeply about these things is, yeah, the ring of power is inherently corrupting.
    1:53:03 The characters at one point, they're like, Gandalf, just put on the ring and, like, fix
    1:53:04 this.
    1:53:05 Right.
    1:53:10 And he’s like, he will not put the ring on even to like end the war because he knows
    1:53:11 that it will corrupt him.
    1:53:17 And then, as the story starts, the character of Gollum is the result of a normal character who
    1:53:20 ultimately becomes this incredibly corrupt and deranged version of himself.
    1:53:24 And so, I mean, I think you said something actually quite profound there, which is the
    1:53:27 ring of power is infinitely tempting.
    1:53:29 The censorship machine is infinitely tempting.
    1:53:32 If you have it, like you are going to use it.
    1:53:37 It’s overwhelmingly tempting because it’s so powerful and that it will corrupt you.
    1:53:41 And yeah, I don’t know whether any of these people feel any of this today.
    1:53:42 They should.
    1:53:43 I don’t know if they do.
    1:53:47 But yeah, you go out five or 10 years later, you know, you would hope that you would realize
    1:53:51 that your soul has been corroded and you probably started out thinking that you were a patriot
    1:53:55 and you were trying to defend democracy and you ended up being, you know, extremely authoritarian
    1:53:57 and anti-democratic and anti-western.
    1:54:05 Can I ask you a tough question here, staying on the ring of power? Elon is quickly becoming
    1:54:11 the most powerful human on Earth.
    1:54:13 I’m not sure about that.
    1:54:14 You don’t think he is?
    1:54:16 Well, he doesn’t have the nukes, so.
    1:54:17 Nukes.
    1:54:22 Yeah, there’s different definitions and perspectives on power, right?
    1:54:30 How can he and or Donald Trump avoid the corrupting aspects of this power?
    1:54:31 I mean, I think the danger is there with power.
    1:54:32 It’s just, it’s flat out there.
    1:54:36 I would say with Elon, I mean, you know, we'll see. I would say,
    1:54:40 by the way, overwhelmingly, so far so good. I'm extremely, extremely thrilled
    1:54:45 by what he's done on almost every front for, like, you know, the last 30 years, but including
    1:54:48 all this stuff recently, like I think he’s been a real hero on a lot of topics where
    1:54:50 we needed to see heroism.
    1:54:53 But look, I would say I guess the sort of case that he has this level of power is some
    1:54:57 combination of the money and the proximity to the president.
    1:55:00 And obviously both of those are instruments of power.
    1:55:05 The counterargument to that is to think about how Elon is actually causing change in the
    1:55:06 world right now.
    1:55:08 I mean, there’s, there’s the companies he’s running directly where I think he’s doing
    1:55:13 very well and we’re investors in multiple of them and doing very well.
    1:55:17 But I think like a lot of the stuff that gets people mad at him is like, it’s the social
    1:55:20 and political stuff and it’s, you know, it’s his statements and then it’s the downstream
    1:55:21 effects of his statements.
    1:55:25 So like, for example, it’s, you know, for the last couple of weeks, it’s been him, you
    1:55:28 know, kind of weighing in on this rape gang scandal, you know, this rape organized child
    1:55:30 rape thing in the UK.
    1:55:34 And you know, it’s, it’s, you know, it’s, it’s actually a, it’s a preface cascade.
    1:55:36 It’s one of these things where people knew there was a problem.
    1:55:37 They weren’t willing to talk about it.
    1:55:39 It kind of got suppressed.
    1:55:43 And then Elon brought it up and then all of a sudden there’s now in the UK, this like
    1:55:46 massive explosion of basically open conversation about it for the first time.
    1:55:49 And, you know, it’s like this catalyzing, all of a sudden everybody’s kind of woken
    1:55:52 up and being like, Oh my God, you know, this is really bad.
    1:55:55 And there will now be, you know, I'm pretty sure, pretty clearly, big changes
    1:55:56 as a result.
    1:56:00 And Elon was, you know, he played the role of the boy who said, the emperor has no clothes,
    1:56:01 right?
    1:56:02 But, but, but here’s the thing.
    1:56:03 Here’s my point.
    1:56:05 Like he said it about something that was true, right?
    1:56:09 And so had he said it about something that was false, you know, he would get no credit
    1:56:10 for it.
    1:56:11 He wouldn’t deserve any credit for it.
    1:56:12 But he said something that was true.
    1:56:16 And by the way, everybody over there instantly, they were like, Oh yeah, he’s right.
    1:56:17 Right.
    1:56:20 Like nobody, like nobody seriously said, they’re just arguing the details now.
    1:56:22 So, so number one, it’s like, okay, he says true things.
    1:56:26 And so it’s like, okay, how far a bit of this way, like how worried are we?
    1:56:30 Are we about somebody becoming corrupt by virtue of their power being that they get to speak
    1:56:31 the truth?
    1:56:34 And I guess I would say, especially in the last decade of what we’ve been through where
    1:56:37 everybody’s been lying all the time about everything, I’d say, I think we should run
    1:56:39 this experiment as hard as we can to get people to tell the truth.
    1:56:42 And so I don’t feel that bad about that.
    1:56:47 And then the money side, you know, this rapidly gets into the money and politics question.
    1:56:51 And the money and politics question is this very interesting question because it seems
    1:56:55 like there’s a clear cut case that the more money and politics, the worse things are and
    1:56:58 the more corrupted the system is.
    1:57:02 That was a very popular topic of public conversation up until 2016, when Hillary outspent Trump
    1:57:05 three to one and lost.
    1:57:09 You’ll notice that money and politics has all most vanished as a topic in the last eight
    1:57:10 years.
    1:57:14 And once again, Trump was far outspent, you know, Kamala raised and spent 1.5 billion on top
    1:57:16 of what Biden spent.
    1:57:18 So they were, they were at, I don’t know, something like three billion total and Trump,
    1:57:22 I think spent again, like a third or a fourth of that.
    1:57:26 And so the money and politics kind of topic has kind of vanished from the popular conversation
    1:57:27 the last eight years.
    1:57:34 It has come back a little bit now that Elon is spending, you know, but, but again, like
    1:57:37 it’s like, okay, he’s spending, but the data would seem to indicate in the last, at least
    1:57:39 in the last eight years that money doesn’t win the political battles.
    1:57:43 It’s actually like the voters actually have a voice and they actually exercise it and
    1:57:44 they don’t just listen to ads.
    1:57:47 And so again, there I would say, like, yeah, clearly there's some power there, but I don't
    1:57:50 know if it's, like, some
    1:57:54 weapon that he can just turn on and use in a definitive way.
    1:57:59 And I don’t know if there’s parallels there, but I could also say just on a human level,
    1:58:04 he has a good heart and I interact with a lot of powerful people and that’s not always
    1:58:05 the case.
    1:58:07 So that’s a good thing there.
    1:58:08 Yeah.
    1:58:13 If we, if we can draw parallels to the Hobbit or whatever who gets to put on the ring.
    1:58:14 Frodo.
    1:58:15 Frodo, yeah.
    1:58:17 Yeah, maybe one of the lessons of Lord of the Rings, right, is even, even Frodo would
    1:58:19 have been, you know, even Frodo would have been corrupted, right?
    1:58:23 But, you know, nevertheless, you had somebody who could do what it took at the time.
    1:58:27 The thing that I find just so amazing about the Elon phenomenon and all the critiques
    1:58:31 is, you know, the one thing that everybody in our societies universally agrees on because
    1:58:36 of our, it’s sort of our post-Christian egalitarianism, so, you know, we live in sort of this post-
    1:58:42 secularized Christian context in the West now and, you know, we consider Christianity
    1:58:45 kind of, you know, backwards, but we still believe essentially all the same things.
    1:58:49 We just dress them up in sort of fake science.
    1:58:53 So the one thing that we’re all told, that we’re all taught, is that the best
    1:58:55 people in the world are the people who care about all of humanity, right?
    1:58:59 And we venerate, you know, all of our figures are people who care about all of, you know,
    1:59:02 Jesus cared about all of humanity, Gandhi cared about all of humanity, Martin Luther
    1:59:05 King cared about all of humanity, like, it’s, it’s, it’s, the person who cares the most
    1:59:07 about everybody.
    1:59:11 And with Elon, you have a guy who literally, he talks about this
    1:59:15 constantly, and he talks about it exactly the same in private, who is literally operating
    1:59:18 on behalf of all of humanity to try to get us, you know, to get us
    1:59:21 to a multi-planetary civilization so that we can survive a strike on any one planet, so
    1:59:25 that we can extend the light of human consciousness into the world and, you know, into the universe
    1:59:27 and have it persist, you know, for the good of the whole thing.
    1:59:31 And like literally the critique is, yeah, we want you to care about all of humanity,
    1:59:32 but not like that.
    1:59:39 Yeah, all the critics, all the, all the surface turmoil, the critics will be forgotten.
    1:59:42 Yeah, I think that’s, yeah, that’s clear.
    1:59:47 You said that we always end up being ruled by the elites of some kind.
    1:59:50 Can you explain this law, this idea?
    1:59:55 So this comes from an Italian political philosopher from about a hundred years ago named Robert
    2:00:02 Michels. I’m going to mangle the Italian, I’ll let you pronounce it, Michels or Michaels.
    2:00:06 And then it was, I learned about it through a famous book on politics, probably the best
    2:00:10 book on politics written in the 20th century called The Machiavellians by this guy, James
    2:00:12 Burnham, who has had a big impact on me.
    2:00:16 But in the Machiavellians, he resurrects what he calls this sort of Italian realist school
    2:00:19 of political philosophy from the, from the 10s and 20s.
    2:00:21 And these were people, to be clear, this was not like a Mussolini thing.
    2:00:26 These were people who were trying to understand the actual mechanics of how politics actually
    2:00:27 works.
    2:00:31 So to get to the actual sort of mechanical substance of like how the political machine
    2:00:32 operates.
    2:00:38 And this guy, Michels, had this concept he ended up with called the iron law of oligarchy.
    2:00:42 And so, what is the iron law of oligarchy? Let me take a step back to say what he meant
    2:00:44 by oligarchy, because it has multiple meanings.
    2:00:47 So basically, in classic political theory, there’s basically three forms of government
    2:00:48 at core.
    2:00:51 There’s democracy, which is rule of the many.
    2:00:53 There’s oligarchy, which is rule of the few.
    2:00:55 And there’s monarchy, which is rule of the one.
    2:00:58 And you can just use that as a general framework of any government you’re going to be under
    2:01:01 is going to be one of those, just a mechanical observation, without even saying which ones
    2:01:05 are good or bad, just a structural observation.
    2:01:08 And so the question that Michels asked was, like, is there such a thing as democracy?
    2:01:10 Like, is there actually such a thing as democracy?
    2:01:13 Is there ever actually like direct, direct government?
    2:01:17 And what he did was he mounted this sort of incredible historical exploration of whether
    2:01:19 democracies had ever existed in the world.
    2:01:22 And the answer basically is almost never, and we could talk about that.
    2:01:27 But the other thing he did was he sought out the most democratic private organization in
    2:01:31 the world that he could find at that point, which he concluded was some basically communist
    2:01:35 German Auto Workers Union that was like wholly devoted to the workers of the world uniting,
    2:01:37 you know, back when that was like the hot thing.
    2:01:40 And he went in there and he’s like, okay, this is the organization out of all organizations
    2:01:43 on planet Earth that must be operating as a direct democracy.
    2:01:46 And he went in there and he’s like, oh, nope, there’s a leadership class.
    2:01:49 You know, there’s like six guys at the top and they control everything and they lead
    2:01:53 the rest of the membership along, you know, by the nose, which is of course the story
    2:01:54 of every union.
    2:01:58 The story of every union is always the story of, you know, there’s a Jimmy Hoffa in there,
    2:01:59 you know, kind of running the thing.
    2:02:04 You know, we just saw that with the Dock Workers Union, right, like, you know, there’s a guy.
    2:02:05 And he’s in charge.
    2:02:09 And by the way, the number two is his son, right, like that’s not like a, you know, an
    2:02:10 accident, right?
    2:02:14 So the iron law of oligarchy basically says democracy is fake.
    2:02:17 There’s always a ruling class, there’s always a ruling elite structurally.
    2:02:21 And he said the reason for that is because the masses can’t organize, right?
    2:02:22 What’s the fundamental problem?
    2:02:26 Whether the mass is 25,000 people in a union or 250 million people in a country, the masses
    2:02:31 can’t organize, the majority cannot organize, only a minority can organize and to be effective
    2:02:33 in politics, you must organize.
    2:02:38 And therefore every political structure in human history has been some form of a small
    2:02:44 organized elite ruling, a large and dispersed majority, every single one.
    2:02:51 The Greeks and the Florentines had brief experiments in direct democracy and they were total disasters.
    2:02:54 In Florence, I forget the name of it, it was called like the workers revolt or something
    2:02:55 like that.
    2:02:59 There was like a two year period where they basically experimented with direct democracy
    2:03:02 during the Renaissance and it was a complete disaster.
    2:03:04 And they never tried it again.
    2:03:08 In the state of California, we have our own experiment on this, which is the proposition
    2:03:13 system, which is an overlay on top of the legislature and anybody who looks at it for
    2:03:15 two seconds concludes it’s been a complete disaster.
    2:03:19 It’s just a catastrophe and it’s caused enormous damage to the state.
    2:03:23 And so basically the presumption that we are in a democracy is just sort of by definition
    2:03:24 fake.
    2:03:27 Now, good news for the US, it turns out the founders understood this and so of course they
    2:03:30 didn’t give us a direct democracy, they gave us a representative democracy, right?
    2:03:34 And so they built the oligarchy into the system in the form of Congress and the executive
    2:03:37 branch, the judicial branch.
    2:03:40 But so anyway, so as a consequence, democracy is always and everywhere fake.
    2:03:43 There is always a ruling elite.
    2:03:47 And basically the lesson of the Machiavellians is you can deny that if you want, but you’re
    2:03:48 fooling yourself.
    2:03:52 The way to actually think about how to make a system work and maintain any sort of shred
    2:03:56 of freedom is to actually understand that that is actually what’s happening.
    2:04:02 And lucky for us, the founders saw this and figured out a way to, given that there’s
    2:04:09 going to be a ruling elite, how to create a balance of power among that elite so it
    2:04:10 doesn’t get out of hand.
    2:04:11 It was very clever, right?
    2:04:13 And some of this was based on earlier experiments.
    2:04:16 Some of this, by the way, these were very, very smart people, right?
    2:04:18 And so they knew tremendous amounts of like Greek and Roman history.
    2:04:23 They knew the Renaissance history. In the Federalist Papers, they argued this at great length.
    2:04:24 You can read it all.
    2:04:29 They ran like one of the best seminars in world history, trying to figure this out.
    2:04:30 And they went through all this.
    2:04:33 And yeah, and so they thought through it very carefully, but just to give you an example,
    2:04:34 which continues to be a hot topic.
    2:04:38 So one way they did it is through the three branches of government, right?
    2:04:42 Executive legislative and judicial sort of balance of powers.
    2:04:45 But the other way they did it was they sort of echoing what had been done earlier, I think
    2:04:50 in the UK Parliament, they created the two different bodies of the legislature, right?
    2:04:54 And so the House and the Senate, and as you know, the House is apportioned on the basis
    2:04:56 of population and the Senate is not, right?
    2:05:00 The small states have just as many senators as the big states.
    2:05:02 And then they made the deliberate decision to have the House get reelected every two
    2:05:05 years to make it very responsive to the will of the people.
    2:05:09 And they made the decision to have the Senate get reelected every six years so that it had
    2:05:12 more buffer from the passions at the moment.
    2:05:14 But what’s interesting is they didn’t choose one or the other, right?
    2:05:16 They did them both.
    2:05:18 And then to get legislation passed, you have to get through both of them.
    2:05:22 And so they built in like a second layer of checks and balances.
    2:05:26 And then there’s 1,000 observations we could make about like how well the system is working
    2:05:30 today and like how much does it live up to the ideal and how much are we actually complying
    2:05:31 with the Constitution?
    2:05:34 And there’s lots of, you know, there’s lots of open questions there.
    2:05:39 But you know, this system has survived for coming on 250 years with a country that has
    2:05:42 been spectacularly successful that I don’t think at least, you know, I don’t think any
    2:05:44 of us would trade this system for any other one.
    2:05:46 And so it’s one of the great all-time achievements.
    2:05:47 Yeah, it’s incredible.
    2:05:52 And we should say they were all pretty young relative to our current set of leaders.
    2:05:53 Many in their 20s at the time.
    2:05:54 And like super geniuses.
    2:05:57 This is one of those things where it’s just like, all right, something happened where
    2:06:01 there was a group of people where, you know, nobody ever tested their IQs, but like, these
    2:06:02 are Einsteins of politics.
    2:06:03 Yeah.
    2:06:04 The amazing thing.
    2:06:07 But anyway, I just, I go through all that, which is they were very keen students of the
    2:06:12 actual mechanical practice of democracy, not fixated on what was desirable.
    2:06:16 They were incredibly focused on what would actually work, which is, you know, I think
    2:06:17 the way to think about these things.
    2:06:22 They were engineers of sorts, not fuzzy humanities students of sorts.
    2:06:24 They were shape rotators, not word cells.
    2:06:26 I remember that.
    2:06:27 Wow.
    2:06:29 That meme came and went.
    2:06:30 I think you were central to them.
    2:06:31 You’re central to a lot of memes.
    2:06:32 I was.
    2:06:36 You’re the meme dealer and the meme popularizer.
    2:06:37 That meme I guess I get credit for.
    2:06:39 And then the current thing is the other one I get some credit for.
    2:06:42 I don’t know that I invented either one, but I popularized them.
    2:06:44 Take credit and run with it.
    2:06:52 If you can just linger on The Machiavellians, it’s a study of power and power dynamics.
    2:06:59 Like you mentioned, looking at the actual reality of the machinery of power from everything
    2:07:04 you’ve seen now in government, but also in companies, what are some interesting things
    2:07:08 you can sort of continue to say about the dynamics of power, the jostling for power that
    2:07:10 happens inside these institutions.
    2:07:11 Yeah.
    2:07:15 So it, a lot of it, you know, we already talked about this a bit with the universities, which
    2:07:19 is you can apply a Machiavellian style lens to the, it’s why I posed the question to you
    2:07:24 that I did, which is, okay, who runs the university, the trustees, the administration, the students
    2:07:25 or the faculty.
    2:07:28 And then, you know, the answer, the true answer is some combination of the three or of the
    2:07:33 four, plus the donors, by the way, plus the government, plus the press, et cetera, right.
    2:07:36 And so there, you know, there’s a, there’s a mechanical interpretation of that.
    2:07:41 I mean, companies operate under the exact same, you know, set of questions, who runs a company,
    2:07:44 you know, the CEO, but like the CEO runs the company basically up to the day that either
    2:07:47 the shareholders or the management team revolt.
    2:07:50 If the shareholders revolt, it’s very hard for the CEO to stay in the seat.
    2:07:53 If the management team revolts, it’s very hard for the CEO to stay in the seat.
    2:07:56 By the way, if the employees revolt, it’s also hard to stay in the seat.
    2:07:59 By the way, if the New York Times comes at you, it’s also very hard to stay in the seat.
    2:08:02 If the Senate comes at you, it’s very hard to stay in the seat.
    2:08:07 So, you know, like a reductionist version of this that is a good shorthand is who can
    2:08:09 get who fired.
    2:08:13 You know, so, so who has more power, you know, the newspaper columnist who makes, you know,
    2:08:17 $200,000 a year or the CEO who makes, you know, $200 million a year.
    2:08:20 And it’s like, well, I know for sure that the columnist can get the CEO fired.
    2:08:21 I’ve seen that happen before.
    2:08:25 I have yet to see a CEO get a columnist fired.
    2:08:32 Did anyone ever get fired from the Bill Ackman assault on journalism?
    2:08:36 So Bill, Bill like really showed the bullshit that happens in journalism.
    2:08:39 No, because what happens is, I mean, and I would
    2:08:41 say to their credit, they wear it as a badge of honor.
    2:08:43 And then to their shame, they wear it as a badge of honor, right?
    2:08:48 Which is if, you know, if they’re doing the right thing, then they are justifiably proud
    2:08:50 of themselves for standing up under pressure.
    2:08:53 But it also means that they can’t respond to legitimate criticism.
    2:08:56 And, you know, they’re obviously terrible at that now.
    2:09:01 As I recall, he went straight to the CEO of Axel Springer, which owns Insider.
    2:09:04 And I, you know, and I happen to know the CEO and I think he’s quite a good CEO.
    2:09:08 But like, well, a good question is: does the CEO of Axel Springer run his own
    2:09:09 company?
    2:09:10 Right.
    2:09:12 Like, well, there’s a fascinating, okay, so there’s a fascinating thing playing out right
    2:09:13 now.
    2:09:18 Not to dwell on these fires, but it’s a, you see, the pressure reveals things, right?
    2:09:22 And so if you’ve been watching what’s happened with LA Times recently, so this guy, biotech
    2:09:26 entrepreneur buys the LA Times, like whatever, eight years ago, it is just like the most
    2:09:30 radical social revolutionary thing you can possibly imagine.
    2:09:32 It endorses every crazy left-wing radical.
    2:09:36 You can imagine it endorses Karen Bass, it endorses Gavin Newsom, it’s just like a litany
    2:09:39 of all the people who are currently burning the city to the ground.
    2:09:42 It’s just like endorsed every single bad person, every step of the way.
    2:09:44 He’s owned it the entire time.
    2:09:47 You know, for the first time, I think he put his foot down
    2:09:50 right before the November election and said, we’re
    2:09:52 going to get out of this thing where we just always endorse the Democrat.
    2:09:53 And he said, we’re not endorsing.
    2:09:57 I think he said, we’re not endorsing for the presidency, and like the paper flipped out.
    2:09:58 Right.
    2:10:01 It’s like our billionaire backer who’s, I don’t know what he spends, but like, he must
    2:10:05 be burning 50 or 100 million dollars a year out of his pocket to keep this thing running.
    2:10:09 He paid 500 million for it, which is amazing.
    2:10:13 Back when people still thought these things were businesses.
    2:10:17 And then he’s probably burned another 500 million over the last decade, keeping it running.
    2:10:20 And he burns probably another 50, a hundred million a year to do this.
    2:10:24 And the journalists at the LA Times hate him with the fury of a thousand suns.
    2:10:27 Like they just like absolutely freaking despise him.
    2:10:29 And they have been like attacking him and, you know, the ones that can get jobs elsewhere
    2:10:32 quit and do it and the rest just stay and say the worst, you know, most horrible things
    2:10:33 about him.
    2:10:36 And they want to constantly run these stories, attacking him.
    2:10:40 And so he has had this reaction that a lot of people in LA are having right now to this
    2:10:44 fire and to this just like incredibly vivid collapse of leadership, and all these people
    2:10:48 that he had his paper endorse are just disasters.
    2:10:50 And he’s on this tour.
    2:10:54 He’s basically just, he’s decided, he’s, he’s decided to be the boy who says the emperor
    2:10:57 has no clothes, but he’s doing it to his own newspaper.
    2:10:58 Very smart guy.
    2:11:01 And he’s basically saying, yeah, we, we, yes, we did all that and we endorsed these
    2:11:04 people and it was a huge mistake and we’re going to completely change.
    2:11:08 And his paper is, you know, in a complete internal revolt.
    2:11:09 But I go through it, which is okay.
    2:11:12 Now we have a very interesting question, which is who runs the LA Times.
    2:11:17 Because for the last eight years, it hasn’t been him.
    2:11:19 It’s been the reporters.
    2:11:23 Now for the first time, the owner is showing up saying, oh no, I’m actually in charge and
    2:11:25 the reporters are saying, no, you’re not.
    2:11:28 And like, like it is freaking on.
    2:11:32 And so again, the Machiavellian mindset on this is like, okay, how is power actually
    2:11:33 exercised here?
    2:11:37 Can, can, can a guy who’s like even super rich and super powerful, who even owns his
    2:11:39 own newspaper, can he stand up to a full-scale assault?
    2:11:43 Not only by his own reporters, but by every other journalism outlet who also now thinks
    2:11:45 he’s the antichrist.
    2:11:50 And he is trying to exercise power by speaking out publicly and so that’s the game of power
    2:11:51 there.
    2:11:52 And firing people.
    2:11:54 And you know, he has removed people and he has set new rules.
    2:11:57 I mean, he is now, I think he’s saying that he’s now at long
    2:12:01 last actually exercising the prerogatives of an owner of a business, which is to decide on
    2:12:02 the policies and staffing of the business.
    2:12:06 There are certain other owners of these publications that are doing similar things right now.
    2:12:08 He’s the one I don’t know.
    2:12:10 So he’s the one I can talk about.
    2:12:13 But there are others that are going through this same thing right now.
    2:12:17 And I think it’s a really interesting open question, like, you know, in a fight between
    2:12:20 the employees and the employer, like it’s not crystal clear that the employer wins that
    2:12:21 one.
    2:12:23 And just to stay on journalism for a second, we mentioned Bill Ackman.
    2:12:28 I just want to say, put him in the category we mentioned before of a really courageous
    2:12:29 person.
    2:12:37 I don’t think I’ve ever seen anybody so fearless in going after, you know, in following what
    2:12:40 he believes in publicly.
    2:12:46 That’s courage. Several things he’s done publicly have been really inspiring,
    2:12:47 just being courageous.
    2:12:49 What do you think is like the most impressive example?
    2:12:57 Where he went after a journalist, whose whole incentive is to, I mean, it’s like
    2:13:02 kicking the beehive or whatever, you know what’s going to follow.
    2:13:08 And to do that, I mean, that’s why it’s difficult to challenge journalistic organizations because
    2:13:12 they’re going to, you know, there’s just so many mechanisms they use, including like writing
    2:13:16 articles that get cited by Wikipedia, then drive the narrative, and then they can get
    2:13:18 you fired, all this kind of stuff.
    2:13:27 Bill Ackman, like a bad MFer, just tweets these essays and just goes after them legally
    2:13:32 and also in the public eye and just, I don’t know, that was truly inspiring.
    2:13:36 There’s not many people like that in public.
    2:13:42 And hopefully that inspires not just me, but many others to be like, to be courageous themselves.
    2:13:45 Did you know of him before he started doing this in public?
    2:13:49 I knew of Neri, his wife, who’s just a brilliant researcher and scientist, and so I admire
    2:13:50 her and look up to her.
    2:13:51 I think she’s amazing.
    2:13:55 Well, the reason I ask if you knew about Bill is because a lot of people had not heard
    2:13:58 of him before, especially like before October 7th and before some of the campaigns he’s
    2:14:01 been running since in public, and with Harvard and so forth.
    2:14:05 But he was very well known in the investment world before that.
    2:14:10 So he was a famous, he was a so-called activist investor, you know, very, very successful
    2:14:15 and very widely respected, for probably 30 years before now.
    2:14:19 And I bring that up because it turns out they weren’t for the most part battles that happened
    2:14:20 in kind of full public view.
    2:14:23 They weren’t national stories, but in the business and investing world, the activist
    2:14:30 investor is a very, it’s like in the movie Taken, it’s a very specific set of skills.
    2:14:34 How to like really take control of situations and how to wreck the people who you’re going
    2:14:36 up against.
    2:14:41 And just to, and there’s been controversy over the years on this topic
    2:14:44 and there’s too much detail to go into, but the defense of activist investing, which
    2:14:48 I think is valid, is, you know, these are the guys who basically go in and take stakes in
    2:14:51 companies that are being poorly managed or under-optimized.
    2:14:55 And, and, and then generally what that means is at least the theory is that means the existing
    2:15:00 management has become entrenched and lazy, mediocre, you know, whatever, not responding
    2:15:04 to the needs of the shareholders, often not responding to the customers.
    2:15:09 And the activists basically go in with a minority position and then they rally support among
    2:15:11 other investors who are not activists.
    2:15:16 And then they basically show up and they force change, but they are the aggressive version
    2:15:17 of this.
    2:15:19 And I’ve been on the, I’ve been involved in companies that have been on the receiving
    2:15:24 end of these, where it is amazing how much somebody like that can exert pressure on situations
    2:15:26 even when they don’t have formal control.
    2:15:30 So it’s another, it would be another chess piece on the mechanical board of kind of how
    2:15:31 power gets exercised.
    2:15:34 And basically what happens is the activists, a large amount of the time, they end
    2:15:37 up taking over control of companies, even though they never own more
    2:15:39 than like 5% of the stock.
    2:15:42 And so anyway, it turns out Bill’s been such a fascinating case because he has
    2:15:48 that like complete skill set and he has now decided to bring it to bear in areas that
    2:15:50 are not just companies.
    2:15:53 And two interesting things for that, one is, you know, some of these places, you know,
    2:15:57 and some of these battles are still ongoing, but number one, like a lot of people who run
    2:16:00 universities or newspapers are not used to being up against somebody like this.
    2:16:04 And by the way, also now with infinitely deep pockets and lots of experience in courtrooms
    2:16:06 and all the things that kind of go with that.
    2:16:12 But the other is, through example, he is teaching a lot of the rest of us, the activist playbook,
    2:16:13 like in real time.
    2:16:17 And so the Liam Neeson skill set is getting more broadly diffused just by being able to
    2:16:19 watch and learn from him.
    2:16:22 So I think he, I think he’s having a, you know, I would put him up there with Elon in
    2:16:25 terms of somebody who’s really affecting how all this is playing out.
    2:16:29 But even skill set aside, just courage and yes, including by the way, courage to go outside
    2:16:30 of his own zone.
    2:16:31 Yeah.
    2:16:32 Right.
    2:16:35 You know, cause like, I’ll give you an example, like my firm, a venture capital
    2:16:36 firm, we have LPs.
    2:16:40 There are things that I feel like I can’t do or say cause I feel like I would be bringing,
    2:16:44 you know, I would be bringing embarrassment or other consequences to our LPs.
    2:16:47 He has investors also where he worries about that.
    2:16:50 And so his, so a couple of things, one is his willingness to go out a bit and risk his
    2:16:52 relationship with his own investors.
    2:16:55 But I will tell you the other thing, which is his investors, I know this for a fact, his
    2:16:59 investors have been remarkably supportive of him doing that because, as it turns out,
    2:17:02 a lot of them actually agree with him.
    2:17:06 And so he’s the same thing he does in his activism campaigns.
    2:17:09 He is able to be the tip of the spear on something that actually a lot more people agree with.
    2:17:10 Yeah.
    2:17:14 It turns out if you have truth behind you, it helps.
    2:17:18 And just again, you know, how I started is a lot of people are just fed up.
    2:17:23 You’ve been spending a bunch of time in Mar-a-Lago and Palm Beach helping the new administration
    2:17:26 in many ways, including interviewing people who might join.
    2:17:31 So what’s your general sense about the talent, about the people who are coming into the
    2:17:33 new administration?
    2:17:36 So I should start by saying I’m not a member of the new administration.
    2:17:40 I’m not, I’m not in the room, I’m not like in the room when a lot of these people are
    2:17:41 being selected.
    2:17:42 I believe you said unpaid intern.
    2:17:43 I am an unpaid intern.
    2:17:48 So I’m a volunteer and I, you know, when helpful, but I’m not, I’m not making the decisions
    2:17:50 nor am I in a position to, you know, speak for the administration.
    2:17:53 So I don’t want to say anything that will cause people to think I’m doing that.
    2:17:54 It’s a very unusual situation, right?
    2:17:57 Where you had an incumbent president and then you had a four-year gap where he’s out of
    2:17:59 office and then you have him coming back, right?
    2:18:04 And as you’ll recall, there was a fair amount of controversy over the end of the first term.
    2:18:05 Oh, yeah.
    2:18:09 The fear, the specific concern was, you know, the first Trump administration, you know,
    2:18:12 they will all say this is like they didn’t come in with a team, right?
    2:18:15 So they, you know, they didn’t come into the team and most of the sort of institutional
    2:18:19 base of the Republican party were Bush Republicans and they were, and many of them had become
    2:18:20 never Trumpers.
    2:18:22 And so they had a hard time putting the team together.
    2:18:24 And then by the way, they had a hard time getting people confirmed.
    2:18:27 And so if you talk to the people who were there in the first term, it took them two
    2:18:30 to three years to kind of even get the government in place.
    2:18:33 And then they basically only had the government in place for, you know, for basically like
    2:18:37 18 months and then COVID hit, you know, and then sort of aftermath and everything and all
    2:18:39 the drama and headlines and everything.
    2:18:42 And so the concern, you know, including from some very smart people in the last two years
    2:18:46 has been, boy, if Trump gets a second term, is he going to be able to get a team that
    2:18:50 is as good as the team he had last time or a team that is actually not as good because
    2:18:53 maybe people got burned out, maybe they’re more cynical now, maybe they’re not willing
    2:18:55 to go through the drama.
    2:18:57 By the way, a lot of people in the first term came under, like, you know, their
    2:19:01 own withering legal assaults and, you know, some of them went to prison and like, you
    2:19:05 know, a lot, a lot of stuff happened, lots of investigations, lots of legal fees, lots
    2:19:09 of bad press, lots of debanking, by the way.
    2:19:14 A lot of the officials in the first term got debanked, including the president’s wife
    2:19:15 and son.
    2:19:16 Yeah.
    2:19:17 I heard you tell that story.
    2:19:18 It’s insane.
    2:19:19 That’s just insane.
    2:19:20 In the wake of the first term.
    2:19:21 Yes.
    2:19:25 We now take out spouses and children with our ring of power.
    2:19:28 And so there’s like this legitimate question as to like whether, okay, what will the team
    2:19:29 for the second term look like?
    2:19:33 And at least what I’ve seen, and what you’re seeing in the appointments, is it looks much,
    2:19:34 much better.
    2:19:37 First of all, it just looks better than the first term and not because the people in the
    2:19:40 first term were not necessarily good, but just you just have this like influx of like
    2:19:44 incredibly capable people that have shown up that want to be part of this.
    2:19:46 And you just didn’t have that the first time.
    2:19:49 And so they’re just drawing on a much deeper, richer talent pool than they had the first
    2:19:50 time.
    2:19:53 And they’re drawing on people who know what the game is, like they’re drawing on people
    2:19:57 now who know what is going to happen and they’re still willing to do it.
    2:20:00 And so they’re going to get, I think, you know, some of the best people from the first
    2:20:05 term, but they’re bringing in a lot of people who they couldn’t get the first time around.
    2:20:07 And then second is there’s a bunch of people, including people in the first term where they’re
    2:20:09 just 10 years older.
    2:20:13 And so they went through the first term and they just learned how everything works.
    2:20:16 Or they’re young people who just had a different point of view, and now they’re 10 years older
    2:20:19 and they’re ready to go serve in government.
    2:20:21 And so there’s a generational shift happening.
    2:20:25 And actually one of the interesting things about the team that’s forming up is it’s remarkably
    2:20:26 young.
    2:20:29 Some of the cabinet members and then many of the second and third level people are like
    2:20:33 in their 30s and 40s, you know, which is a big change from the gerontocracy that, you
    2:20:36 know, we’ve been under for the last 30 years.
    2:20:39 And so I think the caliber has been outstanding, you know, and we could sit here and list tons
    2:20:42 and tons of people, but like, you know, the people who are running, you know, it’s everything
    2:20:46 from the people who are running all the different departments at HHS, it’s the people running,
    2:20:50 you know, the number two at the Pentagon is Steve Feinberg, who’s just like an incredible
    2:20:53 legend of private equity, incredibly capable guy.
    2:20:57 We’ve got two, actually two of my partners are going in, who I both think are amazing.
    2:20:58 Yeah.
    2:21:02 Like many, many parts of the government that people are like really impressive.
    2:21:10 Well, I think one of the concerns is actually that, given the human being that Donald Trump is,
    2:21:18 that there would be more tendency towards, let’s say, favoritism versus meritocracy,
    2:21:22 that there’s kind of circles of sycophancy that form.
    2:21:30 And if you’re able to be loyal and never oppose and just basically suck up to the
    2:21:32 president, then you’ll get a position.
    2:21:33 So that’s one of the concerns.
    2:21:40 And I think you’re in a good position to speak to the degree that’s happening versus
    2:21:43 hiring based on merit and just getting great teams.
    2:21:44 Yeah.
    2:21:48 So look, I just start by saying any leader at that level, by the way, any CEO, there’s
    2:21:49 always some risk of that.
    2:21:50 Right.
    2:21:53 So there’s always some, you know, it’s just, it’s like a natural reality warps around powerful
    2:21:54 leaders.
    2:21:55 And so there’s always some risk to that.
    2:21:57 Of course, the good and powerful leaders are, you know, very aware of that.
    2:22:01 And Trump at this point in his life, I think, is highly aware of that, at least my interactions
    2:22:03 with him, like he definitely seems very aware of that.
    2:22:06 So that’s one thing.
    2:22:09 I would just say that I think the way to look at that, I mean, and look like I said, I don’t
    2:22:11 want to predict what’s going to happen once this whole thing starts unfolding.
    2:22:14 But I would just say, again, the caliber of the people who are showing up and getting
    2:22:18 the jobs and then the fact that these are some of the most accomplished people in the
    2:22:24 business world and in the medical field, I just, you know, Jay Bhattacharya coming in
    2:22:25 to run NIH.
    2:22:27 So I was actually in the, I was actually, I was part of the interview team for a lot
    2:22:29 of the HHS folks.
    2:22:30 Nice.
    2:22:31 Jay is amazing.
    2:22:32 I was so happy to see that.
    2:22:36 So I literally got, this is a story, I got to the transition office for one of the days
    2:22:38 of the HHS interviews and I was on one of the interviewing teams and they gave us, I didn’t
    2:22:41 know who the candidates were and they gave us the sheet in the beginning and I go down
    2:22:46 the sheet and I saw Jay’s name and I like, I almost physically fell out of my chair.
    2:22:51 And I was just like, you know, and I have, I happen to know Jay and I like respect him
    2:22:52 enormously.
    2:22:55 And then he proved himself, like, talk about a guy who proved himself under
    2:23:01 extraordinary pressure over the last five years and didn’t go radical under the pressure.
    2:23:04 He maintained balance and thoughtfulness and depth.
    2:23:05 I mean, incredibly.
    2:23:10 Very serious, very analytical, very applied, and yes, 100% tested under
    2:23:15 pressure, came out, like, the more people look back at what he said and did, you know,
    2:23:19 he’s not, you know, none of us are perfect, but like overwhelmingly, like overwhelmingly
    2:23:21 insightful throughout that whole period.
    2:23:24 And you know, we, you know, we would all be much better off today had he been in charge
    2:23:26 of the response.
    2:23:29 And so just like an incredibly capable guy and look, and then he learned from all that
    2:23:30 right.
    2:23:31 He learned a lot in the last five years.
    2:23:35 And so the idea that somebody like that could be head of NIH, as compared to the people
    2:23:37 we’ve had, is just like breathtaking.
    2:23:41 It’s just a gigantic upgrade, you know, and then Marty Makary coming in to run FDA, exact
    2:23:42 same thing.
    2:23:47 The guy coming in to run the CDC, exact same thing.
    2:23:49 I mean, I’ve been spending time with Dr. Oz.
    2:23:52 So, you know, again, I’m not on these teams.
    2:23:56 I’m not in the room, but like I’ve been spending enough time trying to help that like his level
    2:24:00 of insight into the healthcare system is like, it’s like astounding and it comes from being
    2:24:03 a guy who’s been like in the middle of the whole thing and been talking to people about
    2:24:07 this stuff and working on it and serving as a doctor himself and in medical systems for,
    2:24:11 you know, his entire life and it’s just like, you know, he’s like a walking encyclopedia
    2:24:12 on these things.
    2:24:17 And so, and you know, very dynamic, you know, very charismatic, very smart, organized, effective.
    2:24:20 So, you know, to have somebody like that in there.
    2:24:24 And so anyway, they’re just, I have like 30 of these stories now across all these different,
    2:24:25 all these different positions.
    2:24:29 And so, to be quite honest, I do do the compare and contrast to the last
    2:24:30 four years.
    2:24:32 And it’s not even, these people are not in the same ballpark.
    2:24:36 They’re just like wildly better.
    2:24:40 And so, you know, pound for pound, it’s maybe the best team in the White House since, you
    2:24:48 know, I don’t even know, maybe the 90s, maybe the 30s, maybe the 50s, you know,
    2:24:52 maybe Eisenhower had a team like this or something, but there’s a lot of really
    2:24:53 good people in there now.
    2:24:56 Yeah, the potential for change is certainly extremely high.
    2:24:59 Well, can you speak to Doge?
    2:25:04 What’s the most wildly successful next two years for Doge?
    2:25:06 Can you imagine?
    2:25:11 Maybe also, can you think about the trajectory that’s the most likely and what kind of challenges
    2:25:12 would it be facing?
    2:25:13 Yeah.
    2:25:18 So, I’ll start by saying again, disclaimer after disclaimer, I’m not on Doge.
    2:25:19 I’m not a member of Doge.
    2:25:25 We should say there’s about 10 lawyers in the room staring now, I’m just kidding.
    2:25:27 Both the angels and the devils on my shoulder.
    2:25:28 Okay.
    2:25:29 Yeah.
    2:25:30 So I’m not speaking for Doge.
    2:25:32 I’m not in charge of Doge.
    2:25:33 Those guys are doing it.
    2:25:34 I’m not doing it.
    2:25:38 But I am, you know, again, I’m volunteering to help as much as I can and I’m 100% supportive.
    2:25:39 Yeah.
    2:25:43 So look, I, I think the way to think, I mean, the, the basic outlines are in public, right?
    2:25:47 Which is it’s a, it’s a time limited, you know, basically commission.
    2:25:48 It’s not a formal government agency.
    2:25:51 It’s a, you know, time-limited, 18-month thing.
    2:25:55 In terms of implementation, it will advise the executive branch, right?
    2:26:00 And so the implementation will happen through the White House, and the president
    2:26:02 has total latitude on what he wants to implement.
    2:26:07 Um, and then basically what I think about it is three kind of streams, you know, kind
    2:26:09 of target sets and they’re related, but different.
    2:26:12 So money, uh, people and regulations.
    2:26:16 Um, and so, you know, the headline number, they’ve, you know, put out the two trillion
    2:26:19 dollar number and there’s already, you know, disputes over that and whatever.
    2:26:23 And there’s a whole question there, but then there’s the people thing, and the people thing is interesting
    2:26:26 because you get into these very, um, kind of, um, fascinating questions.
    2:26:30 Um, and I’ve been doing this, I, I won’t do this for you as a pop quiz, but I do this
    2:26:34 for people in government as a pop quiz and I can stump them every time, which is a, how
    2:26:36 many federal agencies are there?
    2:26:41 And the answer is somewhere between 450 and 520 and nobody’s quite sure.
    2:26:43 And then the other is how many people work for the federal government.
    2:26:47 Um, and the answer is, you know, something on the order, I forget, but like 4 million
    2:26:52 full time employees and maybe up to 20 million contractors and nobody is quite sure.
    2:26:54 And so there’s a large people component to this.
    2:26:57 Um, and then by the way, there’s a related component to that, which is how many of them
    2:27:01 are actually in the office and the answer is not many.
    2:27:03 Most of the federal buildings are still empty, right?
    2:27:06 And so, and then there’s questions of, like, are people, you know, working from home, or
    2:27:08 are they actually working from home.
    2:27:11 So there’s the people dimension, and of course the money and the people are connected, and
    2:27:13 then there’s the third, which is the regulation thing, right?
    2:27:17 And I described earlier how basically our system of government is much more now based
    2:27:20 on regulations than legislation, right?
    2:27:24 Most of the rules that we all live under are not from a bill that went through Congress.
    2:27:27 They’re from an agency that created a regulation.
    2:27:28 That turns out to be very, very important.
    2:27:32 So one is, a lot of this is already described: Doge wants to do broad-based
    2:27:33 regulatory relief.
    2:27:36 And Trump has talked about this, and basically get the government off people’s backs and liberate
    2:27:39 the American people to be able to do things again.
    2:27:40 Um, so that’s part of it.
    2:27:43 But there’s also something else that’s happened, which is very interesting, which was there
    2:27:47 were a set of Supreme Court decisions about two years ago, um, that went directly after
    2:27:53 the idea that the executive branch can create regulatory agencies and issue regulations
    2:27:57 and enforce those regulations without corresponding congressional legislation.
    2:28:03 Um, and most of the federal government that exists today, including most of the departments
    2:28:07 and most of the rules and most of the money and most of the people, most of it is not
    2:28:09 enforcing laws that Congress passed.
    2:28:11 Most of it is, is regulation.
    2:28:16 And the Supreme Court basically said large parts, you know, large to maybe all of that
    2:28:20 regulation that did not directly result from a bill that went through Congress, the way
    2:28:25 that the cartoon said that it should, um, that may not actually be legal.
    2:28:30 Now, the previous White House, of course, was super in favor of big government.
    2:28:31 They had no desire to act.
    2:28:32 They did nothing based on this.
    2:28:34 They didn’t, you know, pull anything back in.
    2:28:39 But the new regime, if they choose to, could say, look, the thing that we’re doing here
    2:28:43 is not, you know, challenging the laws; we’re actually complying with the Supreme Court decision
    2:28:46 that basically says we have to unwind a lot of this.
    2:28:50 And we have to unwind the regulations, which are no longer legal or constitutional.
    2:28:53 We have to unwind the spend and we have to unwind the people.
    2:28:56 And so that, and that’s how you get from basically connect the thread from the regulation part
    2:28:59 back to the money part, back to the people part.
    2:29:01 They have work going on all three of these threads.
    2:29:05 They have, I would say, incredibly creative ideas on how to deal with this.
    2:29:09 I know lots of former government people, and 100% of them are super cynical on this
    2:29:10 topic.
    2:29:11 And they’re like, this is impossible.
    2:29:12 This can never possibly work.
    2:29:17 And I’m like, well, I can’t tell you what the secret plans are, but like, they blow
    2:29:21 my mind, like, on all three of those, they have ideas that are like really quite
    2:29:24 amazing, as you’d expect from, you know, from the people involved.
    2:29:28 And so over the course of the next few months, you know, that’ll start to become visible.
    2:29:33 And then the final thing I would say is this is going to be very different than attempts
    2:29:34 like that.
    2:29:38 There have been other programs like this in the past, the Clinton-Gore administration
    2:29:42 had one, and there were others before that; Reagan had one.
    2:29:46 The difference is this time, there’s social media.
    2:29:52 And so there has never been, it’s interesting, one of the reasons people in Washington are
    2:29:57 so cynical is because they know all the bullshit, like they know all the bad spending and all
    2:30:01 the bad rules and all the like, you know, I mean, look, we’re adding a trillion dollars
    2:30:04 to the national debt every 100 days right now.
    2:30:08 And that’s compounding and it’s now passing the size of the Defense Department budget and
    2:30:10 it’s compounding and it’s pretty soon it’s going to be adding a trillion dollars every
    2:30:13 90 days and then it’s going to be adding a trillion dollars over 80 days and then it’s
    2:30:15 going to be a trillion dollars every 70 days.
    2:30:18 And then if this doesn’t get fixed at some point, we enter a hyperinflationary spiral
    2:30:23 and we become Argentina or Brazil and Kablooey, right?
    2:30:26 And so like everybody in DC knows that something has to be done.
    2:30:30 And then everybody in DC knows for a fact that it’s impossible to do anything.
    2:30:31 Right.
    2:30:34 They know all the problems and they also know the sheer impossibility of fixing it.
    2:30:37 But I think what they’re not taking into account that what the critics are not taking into account
    2:30:42 is these guys can do this in the full light of day and they can do it on social media.
    2:30:44 They can completely bypass the press.
    2:30:46 They can completely bypass the cynicism.
    2:30:51 They can expose any element of unconstitutional or silly government spending.
    2:30:54 They can run victory laps every single day on what they’re doing.
    2:30:56 They can bring the people into the process.
    2:30:59 And again, if you think about it, this goes back to our Machiavellian structure, which
    2:31:05 is if you think about, again, you’ve got democracy, oligarchy, monarchy, rule of the many, rule
    2:31:07 of the few, rule of the one.
    2:31:10 You could think about what’s happening here as a little bit of a sandwich, which is you
    2:31:15 have, we don’t have a monarch, but we have a president, rule of the one with some power.
    2:31:19 And then we have the people who can’t organize, but they can be informed and they can be aware
    2:31:22 and they can express themselves through voting and polling.
    2:31:26 And so there’s a sandwich happening right now is the way to think about it, which is
    2:31:30 you’ve got basically monarchy, rule of one, combining with rule of many, right?
    2:31:32 And rule of many is that you get to vote, right?
    2:31:34 The people do get to vote, basically.
    2:31:38 And then essentially Congress, and the sort of permanent bureaucratic class in Washington,
    2:31:40 as the oligarchy in the middle.
    2:31:45 And so the White House plus the people, I think have the power to do all kinds of things
    2:31:46 here.
    2:31:48 And I think that would be the way I would watch it.
    2:31:56 The transparency, I mean, Elon just by who he is, is incentivized to be transparent and
    2:32:00 show the bullshit in the system and to celebrate the victories.
    2:32:02 So it’s going to be so exciting.
    2:32:08 I mean, honestly, it just makes government more exciting, which is a win for everybody.
    2:32:11 These people are spending our money.
    2:32:14 These people have enormous contempt for the taxpayer.
    2:32:16 Okay, here’s the thing you hear in Washington.
    2:32:17 Here’s one of the things.
    2:32:18 So the first thing you hear is this is impossible.
    2:32:19 They’ll be able to do nothing.
    2:32:21 And then yeah, I walk them through this and they’re like, they start to get it, it starts
    2:32:24 to dawn on them that this is a new kind of thing.
    2:32:27 And then they’re like, well, it doesn’t matter because all the money is in entitlements and
    2:32:32 the debt and the military.
    2:32:34 And so, yeah, you’ve got like this silly fake, whatever, NPR funding or whatever, and it
    2:32:36 just, it’s a rounding error and it doesn’t matter.
    2:32:41 And you look it up in the budget and it’s like, whatever, $500 million or $5 billion.
    2:32:44 Or it’s the charging stations that don’t exist.
    2:32:47 It’s the $40 billion of charging stations and they build eight charging stations.
    2:32:52 Or it’s the broadband internet plan that delivered broadband to nobody, right?
    2:32:53 And cost you $30 billion.
    2:32:57 So these boondoggles and what everybody in Washington says is the $30 billion is a rounding
    2:32:58 error on the federal budget.
    2:32:59 It doesn’t matter.
    2:33:00 Who cares if they, if they make it go away.
    2:33:05 And of course, any taxpayer is like, what the?
    2:33:06 What do you mean?
    2:33:07 It’s $30 billion.
    2:33:08 Yeah.
    2:33:09 Right.
    2:33:12 And then the experts are like, and the press is in on this too, then the experts are like,
    2:33:14 well, it doesn’t, it doesn’t matter because it’s a rounding error.
    2:33:15 No, it’s $30 billion.
    2:33:20 And if you’re this cavalier about $30 billion, imagine how cavalier you are about the $3
    2:33:21 trillion.
    2:33:22 Yeah.
    2:33:23 Okay.
    2:33:24 $30 billion is $30 billion.
    2:33:27 As a percentage of the federal budget, I know it’s not a lot, but $30 billion, do
    2:33:31 the math, $30 billion divided by, let’s say, 300 million taxpayers, right?
    2:33:36 Like, what’s that, math expert? $100 per taxpayer per year.
    2:33:37 Okay.
    2:33:43 So $100 to an ordinary person working hard every day to make money and provide for their
    2:33:44 kids.
    2:33:46 $100 is a meal out.
    2:33:48 It’s a trip to the amusement park.
    2:33:51 It’s the ability to, you know, buy additional educational materials.
    2:33:54 It’s the ability to have a babysitter, to be able to have a romantic relationship with
    2:33:55 your wife.
    2:33:59 It’s, there’s like a hundred things that that person can do with $100 that they’re not doing
    2:34:03 because it’s going to some bullshit program where the money’s basically
    2:34:07 being looted out in the form of just like ridiculousness and graft.
    2:34:11 And so the idea that that $30 billion program is not something that is like a very important
    2:34:17 thing to go after is just like the level of contempt for the taxpayer is just off the charts.
    2:34:21 And then that’s just one of those programs and there’s like a hundred of those programs
    2:34:22 and they’re all just like that.
    2:34:24 Like it’s not like any of this stuff is running well.
    2:34:26 Like the one thing we know is that none of this stuff is running well.
    2:34:27 Like we know that for sure.
    2:34:28 Right.
    2:34:31 And we like, we know these people aren’t showing up to work and like we know that all this crazy
    2:34:32 stuff is happening.
    2:34:33 Right.
    2:34:37 And like, you know, do you remember Elon’s story
    2:34:39 of what got the Amish to turn out to vote in Pennsylvania?
    2:34:40 Oh, okay.
    2:34:41 So like Pennsylvania.
    2:34:42 Okay.
    2:34:43 So Pennsylvania is like a wonderful state, great history.
    2:34:46 It has these cities like Philadelphia that have descended like other cities into just
    2:34:49 like complete chaos, violence, madness and death, right?
    2:34:53 And the federal government has just like let it happen is incredibly violent places.
    2:34:56 And so the Biden administration decided that the big pressing law enforcement thing that
    2:35:00 they needed to do in Pennsylvania was that they needed to start raiding Amish farms to
    2:35:04 prevent them from selling raw milk with armed raids.
    2:35:05 Right.
    2:35:10 And it turns out it really pissed off the Amish and it turns out they weren’t willing to drive
    2:35:14 to the polling places because they don’t have cars, but if you came and got them, they would
    2:35:15 go and they would vote.
    2:35:17 That’s one of the reasons why Trump won anyway.
    2:35:21 So like the law enforcement agencies are off working on like crazy things.
    2:35:23 Like the system’s not working.
    2:35:26 And so you, you add up, pick 100 of those $30 billion programs.
    2:35:27 All right.
    2:35:28 Now you’re okay.
    2:35:30 Math major, a hundred times a hundred.
    2:35:31 Ten thousand.
    2:35:32 Ten thousand dollars.
    2:35:33 Okay.
    2:35:34 Ten thousand dollars per taxpayer per year.
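    (A quick back-of-envelope check of the numbers quoted here, assuming the round figures used in the conversation: $30,000,000,000 divided by roughly 300,000,000 taxpayers is about $100 per taxpayer per year for one such program, and a hundred such programs at $100 each comes to about $10,000 per taxpayer per year.)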
    2:35:36 But it’s also not just about money.
    2:35:40 Really, obviously money is a hugely important thing, but it’s the cavalier attitude.
    2:35:41 Yes.
    2:35:48 And the sort of ripple effect of that, it makes it so nobody wants to work in government
    2:35:49 and be productive.
    2:35:53 It makes it so the corruption can spread, it breeds corruption.
    2:35:55 It breeds laziness.
    2:35:59 It breeds secrecy because you don’t want to be transparent about having done nothing all
    2:36:00 year.
    2:36:01 All those kinds of stuff.
    2:36:02 And you want to reverse that.
    2:36:08 So it would be exciting in the future to work in government, because the amazing
    2:36:13 thing, if you steelman government, is you can do shit at scale.
    2:36:20 You have money and you can directly impact people’s lives in a positive sense at scale.
    2:36:22 That’s super exciting.
    2:36:28 As long as there’s no bureaucracy that slows you down or not huge amounts of bureaucracy
    2:36:30 that slows you down significantly.
    2:36:31 So here’s the trick.
    2:36:36 This blew my mind because I was, you know, once you open the hell mouth of looking into
    2:36:40 the federal budget, you learn all kinds of things.
    2:36:44 So there is a term of art in government called impoundment.
    2:36:48 And so you, if you’re like me, you’ve learned this the hard way when your car has been impounded.
    2:36:52 The government meaning of impoundment, the federal budget meaning is a different meaning.
    2:36:54 Impoundment is as follows.
    2:36:58 The constitution requires Congress to authorize money to be spent by the executive branch,
    2:36:59 right?
    2:37:02 So the executive branch goes to Congress and says, we need money X.
    2:37:03 Congress does their thing.
    2:37:05 They come back and they say, you can have money Y.
    2:37:08 The money’s appropriated from Congress, the executive branch spends it on the military
    2:37:11 or whatever they spend it on, or on roads to nowhere or charging stations to nowhere
    2:37:14 or whatever.
    2:37:18 And what’s in the constitution is the Congress appropriates the money.
    2:37:23 Over the last 60 years, there has been an additional interpretation of appropriations
    2:37:29 applied by the courts and by the system, which is the executive branch not only needs Congress
    2:37:33 to appropriate X amount of money, the executive branch is not allowed to underspend.
    2:37:37 Yeah, I’m aware of this, I’m aware of this.
    2:37:40 And so there’s this thing that happens in Washington at the end of every fiscal year,
    2:37:45 which is September 30th, and it’s the great budget flush, and any remaining money that’s
    2:37:47 in the system that they don't know how to productively spend, they deliberately spend
    2:37:53 it unproductively, to the tune of hundreds and hundreds of billions of dollars.
    2:37:57 A president that doesn’t want to spend the money can’t not spend it.
    2:37:58 Yeah.
    2:38:02 Like, okay, A, that’s not what’s in the constitution, and there’s actually quite a good Wikipedia
    2:38:05 page that goes through how the great debate on this has played out in the legal world over
    2:38:06 the last 60 years.
    2:38:10 And basically, if you look at this with anything resembling an open mind, you're like,
    2:38:13 “All right, this is not what the founders meant.”
    2:38:16 And then number two, again, we go back to this thing of contempt.
    2:38:21 Can you imagine showing up and running the government like that, and thinking that you’re
    2:38:24 doing the right thing, and not going home at night, and thinking that you’ve sold your
    2:38:25 soul?
    2:38:29 I actually think you sort of had a really good point, which is it’s even unfair to the
    2:38:31 people who have to execute this.
    2:38:32 Yeah.
    2:38:35 It makes them bad people, and they didn’t start out wanting to be bad people.
    2:38:37 And so, there is stuff like this, like…
    2:38:38 Yeah.
    2:38:39 Everywhere.
    2:38:40 Everywhere.
    2:38:42 And so, we’ll see how far these guys get.
    2:38:44 I am extremely encouraged what I’ve seen so far.
    2:38:48 It seems like a lot of people will try to slow them down, but yeah, I hope they get far.
    2:38:50 Another difficult topic, immigration.
    2:38:56 What’s your take on the, let’s say, heated H-1B visa debate that’s going on online and
    2:38:58 legal immigration in general?
    2:38:59 Yeah.
    2:39:04 I'll start by saying I am not involved in any aspect of government policy on this, I am not planning
    2:39:05 to be.
    2:39:07 This is not an issue that I'm working on, or that I'm going to work on.
    2:39:08 We're not.
    2:39:11 This is not part of the agenda of what my firm is doing.
    2:39:17 So, I'm not in the new administration or the government, I'm not planning to be,
    2:39:19 so this is purely just personal opinion.
    2:39:25 So, I would describe what I have as a complex or hopefully nuanced view on this issue that’s
    2:39:28 maybe a little bit different than what a lot of my peers have.
    2:39:32 And I think, and I kind of thought about this, I didn’t say anything about it all the way
    2:39:36 through the big kind of debate over Christmas, but I thought about it a lot and read everything.
    2:39:39 I think what I realized is that I just have a very different perspective on some of these
    2:39:44 things and the reason is because of the combination of where I came from and then where I ended
    2:39:45 up.
    2:39:50 And so, let’s start with this, where I ended up in Silicon Valley.
    2:39:54 And I have made the pro high-skilled immigration argument many, many times, the H-1B argument
    2:40:00 many times, in past lives, I’ve been in DC many times arguing with prior administrations
    2:40:03 about this, always on the side of trying to get more H-1Bs and trying to get more high-skilled
    2:40:04 immigration.
    2:40:11 And I think that argument is very strong and very solid and very, has paid off for the
    2:40:15 US in many, many ways and we can go through it, but I think it’s the argument everybody
    2:40:16 already knows, right?
    2:40:17 It’s like the stock.
    2:40:19 You take any Silicon Valley person, you press the button and they tell you why we need to
    2:40:21 drain the world to get more H-1Bs, right?
    2:40:23 So, everybody kind of gets that argument.
    2:40:27 So, it’s basically just to summarize, it’s a mechanism by which you can get super smart
    2:40:33 people from the rest of the world, import them in, keep them here to increase the productivity
    2:40:35 of the US companies.
    2:40:36 Yeah.
    2:40:40 And then it’s not just good for them and it’s not just good for Silicon Valley or the tech
    2:40:41 industry.
    2:40:44 It’s good for the country because they then create new companies and create new technologies
    2:40:49 and create new industries that then create many more jobs for native-born Americans than
    2:40:53 would have previously existed and so you’ve got a, it’s a positive sum, flywheel thing
    2:40:54 where everybody wins.
    2:40:56 Like everybody wins, there are no trade-offs.
    2:40:59 It’s all absolutely glorious in all directions.
    2:41:04 You cannot possibly, there cannot possibly be a moral argument against it under any circumstances.
    2:41:08 Anybody who argues against it is obviously doing so from a position of racism is probably
    2:41:10 a fascist and a Nazi, right?
    2:41:11 Right.
    2:41:12 I mean, that’s the thing.
    2:41:13 And like I said, I’ve made that argument many times.
    2:41:16 I’m very comfortable with that argument and then I’d also say, look, I would say number
    2:41:20 one, I believe a lot of it, I’ll talk about the parts I don’t believe, but I believe a
    2:41:21 lot of it.
    2:41:23 And then the other part is, look, I benefit every day.
    2:41:28 I always describe it as I work in the United Nations, like I, my own firm and our founders
    2:41:35 and our companies and the industry and my friends, you know, are just this like amazing,
    2:41:40 you know, panoply cornucopia of people from all over the world.
    2:41:43 And you know, I just, I’ve worked, I don’t know at this point where people from, it’s
    2:41:45 got to be, I don’t know, 80 countries or something.
    2:41:47 And hopefully over time, it’ll be, you know, the rest as well.
    2:41:50 And, you know, it’s just, it’s been amazing and they’ve done many of the most important
    2:41:52 things in my industry and it’s been really remarkable.
    2:41:55 So that’s all good.
    2:41:58 And then, you know, there’s just the practical version of the argument, which is we are the,
    2:41:59 we are the main place.
    2:42:00 These people get educated anyway.
    2:42:01 Right.
    2:42:03 They, the best and the brightest tend to come here to get educated.
    2:42:06 And so, you know, this is the old kind of Mitt Romney idea of stapling a green card to every,
    2:42:11 you know, at least, you know, maybe not every university degree, but every technical degree.
    2:42:15 The sociologists we could quibble about, but, you know, the roboticists for sure.
    2:42:16 For sure.
    2:42:17 For sure.
    2:42:18 We can all agree that.
    2:42:19 At last, I won you over on something today.
    2:42:21 Well, no, I'm exaggerating for effect.
    2:42:23 So, I lost you.
    2:42:25 I had you for half a second.
    2:42:27 I haven’t gotten to the other side of the argument yet.
    2:42:28 Okay.
    2:42:29 Thank you.
    2:42:31 So surely we can all agree that we need to staple a green card.
    2:42:33 The rollercoaster is going up.
    2:42:35 The rollercoaster is ratcheting slowly up.
    2:42:36 So, yeah.
    2:42:38 So surely we can all agree that the roboticists should all get green cards.
    2:42:41 And again, like there’s a lot of merit to that, obviously, like, look, we want the U.S.
    2:42:43 to be the world leader in robotics.
    2:42:46 What’s step one to being the world leader in robotics is have all the great robotics
    2:42:47 people, right?
    2:42:50 Like, you know, very unlike the underpants gnomes, it's like a very straightforward formula.
    2:42:51 Right.
    2:42:52 Yeah.
    2:42:53 All right.
    2:42:54 That’s all well and good.
    2:42:55 All right.
    2:42:57 But it gets a little bit more complicated because there is a kind of argument that’s sort
    2:43:00 of right underneath that that you also hear from, you know, these same people.
    2:43:04 And I have made this argument myself many times, which is we need to do this because we don’t
    2:43:06 have enough people in the U.S. who can do it otherwise.
    2:43:07 Right.
    2:43:08 We have all these unfilled jobs.
    2:43:10 We’ve got all these, you know, all these companies that wouldn’t exist.
    2:43:11 We don’t have enough good founders.
    2:43:12 We don’t have enough engineers.
    2:43:16 We don’t have enough scientists or then the next version of the argument below that is
    2:43:20 our education system is not good enough to generate those people.
    2:43:23 And which is a weird argument, by the way, because, like, our education system is good
    2:43:27 enough for foreigners to be able to come here preferentially in, like, a very large number
    2:43:31 of cases, but somehow not good enough to educate our own native born people.
    2:43:34 So there’s like a weird, these little cracks in the matrix that you can kind of stick your
    2:43:38 fingernail into and kind of wonder about and we’ll come back to that one.
    2:43:41 Like, at least, yes, our education system has its flaws.
    2:43:45 And then underneath that is the argument that Vivek made, you know, which is, you know,
    2:43:50 we have a cultural rot in the country and native born people in the country don’t work hard
    2:43:53 enough and spend too much time watching TV and TikTok and don’t spend enough time studying
    2:43:54 differential equations.
    2:43:59 And again, it’s like, all right, like, you know, yeah, there’s a fair amount to that.
    2:44:04 Like there’s a lot of American culture that is, you know, there’s a lot of frivolity.
    2:44:07 There’s a lot of, you know, look, I mean, we have well documented social issues in many
    2:44:11 fronts, many things that cut against having a culture of just like straightforward high
    2:44:13 achievement and effort and striving.
    2:44:16 Anyway, like, you know, those are the basic arguments.
    2:44:19 But then I have this kind of other side of my, you know, kind of personality and thought
    2:44:23 process, which is, well, I grew up in a small farming town in rural Wisconsin, the rural
    2:44:24 Midwest.
    2:44:27 And, you know, it’s interesting, there’s not a lot of people who make it from rural
    2:44:31 Wisconsin to, you know, high tech.
    2:44:33 And so it’s like, all right, why is that exactly, right?
    2:44:37 And I know I’m an aberration, like I was the only one from anybody I ever knew who ever
    2:44:38 did this, right?
    2:44:40 I know what an aberration I am, and I know exactly how that aberration happened.
    2:44:46 And it’s a very unusual set of steps, including, you know, many that were just luck.
    2:44:51 But like it, there is in no sense a talent flow from rural Wisconsin into high tech,
    2:44:55 like not at all.
    2:44:59 There is also like in no sense a talent flow from the rest of the Midwest into high tech.
    2:45:01 There is no talent flow from the South into high tech.
    2:45:03 There is no flow from the Sun Belt into high tech.
    2:45:08 There is no flow from, you know, the deep South into high tech, like just like literally
    2:45:12 it's like blank, there's this whole section of the country where the
    2:45:15 people just like for some reason don’t end up in tech.
    2:45:20 Now, that’s a little bit strange because these are the people who put a man on the moon.
    2:45:23 These are the people who built the World War II war machine.
    2:45:27 These are the people, at least their ancestors are the people who built the Second Industrial
    2:45:32 Revolution and built the railroads and built the telephone network and built, you know,
    2:45:36 logistics and transportation and the auto industry was built in Cleveland and Detroit.
    2:45:40 And so at least these people’s parents and grandparents and great grandparents somehow
    2:45:44 had the wherewithal to like build all of this like amazing things and invent all these things.
    2:45:48 And then there’s many, many, many, many stories in the history of American invention and innovation
    2:45:52 and capitalism where you had people who grew up in the middle of nowhere, Philo Farnsworth
    2:45:55 who invented the television and just like, you know, tons and tons of others, endless
    2:45:57 stories like this.
    2:46:00 Now you have, like, a puzzle, right, a conundrum, which is like, okay, like
    2:46:03 what is happening on the blank spot of the map?
    2:46:07 And then of course, you also can’t help noticing that the blank spot on the map, the Midwest,
    2:46:12 the South, you’ve also just defined Trump country, the Trump voter base, right?
    2:46:13 And it’s like, oh, well, that’s interesting.
    2:46:15 Like how did that happen?
    2:46:16 Right.
    2:46:19 And so either you really, really, really have to believe the very, very strong version of
    2:46:22 like the Vivek thesis or something, where you have to believe that basically the
    2:46:26 culture, the whole sort of civilization in the middle of the country, is
    2:46:31 so deeply flawed, either inherently flawed or culturally flawed, such that for
    2:46:35 whatever reason, they are not able to do the things that their, you know, parents and
    2:46:38 grandparents were able to do and that their peers are able to do, or something else
    2:46:39 is happening.
    2:46:40 Would you care to guess on what else is happening?
    2:46:41 I mean, what?
    2:46:42 Affirmative action?
    2:46:43 Affirmative action.
    2:46:44 Okay.
    2:46:48 This is very, think about this, this is very entertaining, right?
    2:46:51 What are the three things that we know about affirmative action?
    2:46:55 It is absolutely 100% necessary.
    2:47:00 But however, it cannot explain the success of any one individual, nor does it have any
    2:47:01 victims at all.
    2:47:07 It could maybe explain a disproportion, but surely it doesn't explain why you're
    2:47:11 probably the only person in Silicon Valley from Wisconsin.
    2:47:15 What educational institution in the last 60 years has wanted farm boys from Wisconsin?
    2:47:18 But what institution rejected farm boys from Wisconsin?
    2:47:19 All of them.
    2:47:20 All of them.
    2:47:21 Of course.
    2:47:22 Okay.
    2:47:23 So we know this.
    2:47:26 This is the Harvard and UNC Supreme Court cases.
    2:47:28 So this was like three years ago.
    2:47:31 These were big court cases, you know, because the idea of affirmative action has been litigated
    2:47:35 for many, many, many years and through many court cases and the Supreme Court repeatedly
    2:47:38 in the past had upheld that it was a completely legitimate thing to do.
    2:47:41 And a lot of these, and there’s basically two categories of affirmative action that
    2:47:43 like really matter, right?
    2:47:47 One is admissions into educational institutions and then the other is jobs, right, getting
    2:47:48 hired.
    2:47:49 Like those are the two biggest areas.
    2:47:53 The education one is like super potent has been a super potent political issue for a
    2:47:56 very long time for all, you know, people have written and talked about this for many decades.
    2:47:57 I don’t need to go through it.
    2:47:59 There’s many arguments for why it’s important.
    2:48:01 There’s many arguments as to how it could backfire.
    2:48:02 It’s been this thing.
    2:48:06 But the Supreme Court upheld it for a very long time.
    2:48:08 The most recent ruling, I’m not a lawyer, I don’t have the exact reference in my head,
    2:48:08 but there was a case in 2003 in which Sandra Day O'Connor famously wrote that, you
    2:48:20 know, although it had been 30 years of affirmative action and although it was not working remotely
    2:48:24 as it had been intended, she said that, you know, well, basically we need to try it for
    2:48:25 another 25 years.
    2:48:29 But she said basically as a message to future Supreme Court justices, if it hasn’t resolved
    2:48:33 basically the issues it’s intended to resolve within 25 years, then we should probably call
    2:48:34 it off.
    2:48:36 By the way, we’re coming up on the 25 years.
    2:48:39 It’s a couple years away.
    2:48:43 The Supreme Court just had these cases as a Harvard case, and I think a University of
    2:48:44 North Carolina case.
    2:48:48 And what’s interesting about those cases is the lawyers in those cases put a tremendous
    2:48:53 amount of evidence into the record of how the admissions decisions actually happen at
    2:48:59 Harvard and happen at UNC, and it is like every bit as cartoonishly garish and racist
    2:49:04 as you could possibly imagine, because it’s a ring of power.
    2:49:07 And if you’re an admissions officer at a private university or an administrator, you
    2:49:11 have unlimited power to do what you want, and you can justify any of it under any of
    2:49:14 these rules or systems.
    2:49:17 And up until these cases, it had been a black box where you didn’t have to explain yourself
    2:49:19 and show your work.
    2:49:23 And what the Harvard and UNC cases did is they basically required showing the work.
    2:49:26 And there was all kinds of phenomenal detail.
    2:49:29 Number one is there were text messages in there that will just curl your hair of students
    2:49:33 being spoken of and just crude racial stereotypes that would just make you want to jump out
    2:49:34 the window.
    2:49:35 It’s horrible stuff.
    2:49:38 But also, there was statistical information.
    2:49:41 And of course, the big statistical kicker to the whole thing is that at top institutions,
    2:49:46 it’s common for different ethnic groups to have different cutoffs for SAT that are as
    2:49:48 wide as 400 points.
    2:49:52 So different groups.
    2:49:57 So specifically, Asians need to perform at 400 SAT points higher than other ethnicities
    2:50:00 in order to actually get admitted into these– I mean, it’s not even about– I mean, white
    2:50:02 people are a part of this, but Asians are a very big part of this.
    2:50:06 And actually, the Harvard case is actually brought by an activist on behalf of actually
    2:50:09 the Asian students who are being turned away.
    2:50:12 And it’s basically– I mean, it’s the cliche now in the valley and in the medical community,
    2:50:16 which is if you want a super genius, you hire an Asian from Harvard, because they are guaranteed
    2:50:21 to be freaking Einstein, because if they weren’t, they were never getting admitted, right?
    2:50:24 Almost all the qualified ones get turned away.
    2:50:29 So they’ve been running this– it’s a very, very explicit, very, very clear program.
    2:50:32 This of course has been a third rail of things that people are not supposed to discuss under
    2:50:34 any circumstances.
    2:50:37 The thing that has really changed the tenor on this is, I think, two things.
    2:50:40 Number one, those Supreme Court cases, the Supreme Court ruled that they can no longer
    2:50:42 do that.
    2:50:45 I will tell you, I don’t believe there’s a single education institution in America that
    2:50:48 is conforming with the Supreme Court ruling.
    2:50:51 I think they are all flagrantly ignoring it, and we could talk about that.
    2:50:53 Mostly because of momentum, probably, or what?
    2:50:55 They are trying to make the world a better place.
    2:50:57 They are trying to solve all these social problems.
    2:50:59 They are trying to have diverse student populations.
    2:51:02 They are trying to live up to the expectations of their donors.
    2:51:04 They are trying to make their faculty happy.
    2:51:09 They are trying to have their friends and family think that they’re good people.
    2:51:13 They’re trying to have the press write nice things about them.
    2:51:18 It’s nearly impossible for them, and to be clear, nobody has been fired from an admissions
    2:51:20 office for 25 years of prior practice of
    2:51:24 what the Supreme Court has now ruled to be illegal.
    2:51:28 They’re all the same people under the exact same pressures.
    2:51:32 The numbers are moving a little bit, but I don’t know anybody in the system who thinks
    2:51:35 that they’re complying with the Supreme Court.
    2:51:36 Who’s in charge?
    2:51:39 In the rank ordering of who rules who, the universities rule the Supreme Court way
    2:51:42 more than the Supreme Court rules the universities.
    2:51:45 Another example of that is that every sitting member of the Supreme Court went to either
    2:51:48 Harvard or Yale.
    2:51:53 The level of incestuousness here is like … Anyway, so there’s that.
    2:51:54 This has been running for a very long time.
    2:51:58 One is the Harvard and UNC cases gave up the game, number one, or at least showed what
    2:51:59 the mechanism was.
    2:52:04 And then number two, the other thing is obviously the aftermath of October 7th, and what we
    2:52:08 discovered was happening with Jewish applicants, and what was happening at all the top institutions
    2:52:14 for Jewish applicants was they were being actively managed down as a percentage of the
    2:52:17 base.
    2:52:23 I’ve heard reports of extremely explicit, basically, plans to manage the Jewish admissions
    2:52:28 down to their representative percentage of the US population, which is 2%.
    2:52:31 There’s a whole backstory here, which is 100 years ago, Jews were not admitted into a lot
    2:52:34 of these institutions, and then there was a big campaign to get them in.
    2:52:37 Once they could get in, they immediately became 30% of these institutions because there’s
    2:52:39 so many smart, talented Jews.
    2:52:43 So it went from 0% to 30%, and then the most recent generation of leadership has been trying
    2:52:45 to get it down to 2%.
    2:52:49 And a lot of Jewish people, at least a lot of Jewish people I know, sort of, they kind
    2:52:53 of knew this was happening, but they discovered it the hard way after October 7th, right?
    2:52:57 And so all of a sudden … So basically, the Supreme Court case meant that you could address
    2:53:00 this in terms of the Asian victims.
    2:53:04 The October 7th meant that you could address it in terms of the Jewish victims, and for
    2:53:07 sure both of those groups are being systematically excluded, right?
    2:53:10 And then, of course, there’s the thing that you basically can’t talk about, which is all
    2:53:13 the white people are being excluded.
    2:53:17 And then it turns out it’s also happening to black people.
    2:53:21 And this is the thing that blew my freaking mind when I found out about it.
    2:53:28 So I just assumed that this was great news for American blacks, because obviously if
    2:53:31 whites, Asians, and Jews are being excluded, then the whole point to this in the beginning
    2:53:35 was to get the black population up, and so this must be great for American blacks.
    2:53:41 So then I discovered this New York Times article from 2004 called, “Blacks are being admitted
    2:53:44 into top schools at greater numbers, but which ones?”
    2:53:45 Uh-oh.
    2:53:48 And again, and by the way, this is in the New York Times.
    2:53:53 This is not in, like, you know, whatever, national review, this is New York Times, 2004.
    2:53:57 And the two authorities that were quoted in the story are Henry Louis Gates, who's the
    2:54:01 dean of the African-American studies community in the United States, super brilliant guy,
    2:54:07 and then Lani Guinier, who was a, she was a potential Supreme Court appointee under,
    2:54:10 I think, close friend of Hillary Clinton, and there was, for a long time, she was on
    2:54:12 the shortlist for Supreme Court.
    2:54:18 So one of the top, you know, jurists, lawyers in the country, both black, was sort of legendarily
    2:54:22 successful in their, in their, in the academic and legal worlds, and black.
    2:54:24 And they are quoted as the authorities in this story.
    2:54:29 And the story that they tell is actually very, it’s amazing.
    2:54:33 By the way, it’s happening today in education institutions, and it’s happening in companies,
    2:54:38 and you can see it all over the place and the government, which is, at least at that
    2:54:44 time, the number was half of the black admits into a place like Harvard were not American
    2:54:45 born blacks.
    2:54:53 They were foreign born blacks, specifically, northern African, generally Nigerian, or West
    2:54:54 Indian.
    2:54:55 Right.
    2:54:59 And by the way, many Nigerians and northern Africans have come to the U.S. and have been
    2:55:00 very successful.
    2:55:03 Nigerian Americans as a group, like way outperform, they’re, you know, this is a super smart cohort
    2:55:04 of people.
    2:55:07 And then West Indian blacks in the U.S. are incredibly successful.
    2:55:12 Most recently, by the way, Kamala Harris, as well as Colin Powell, like just two sort
    2:55:13 of examples of that.
    2:55:18 And so basically what Henry Louis Gates and Lani Guinier said in the story is Harvard is basically
    2:55:23 struggling to, whatever it was, identify, recruit, make successful, whatever it was,
    2:55:25 American born native blacks.
    2:55:30 And so therefore, they were using high-skilled immigration as an escape hatch to go get blacks
    2:55:31 from other countries.
    2:55:35 And then this was 2004 when you could discuss such things.
    2:55:39 Obviously, that is a topic that nobody has discussed since.
    2:55:40 It has sailed on.
    2:55:45 All of the DEI programs of the last 20 years have had this exact characteristic.
    2:55:48 There’s large numbers of black people in America who are fully aware of this and are like,
    2:55:51 “It’s obviously not us that are getting these slots.
    2:55:54 We’re obviously, we’re literally competing with people who are being imported.”
    2:55:58 And if you believe in the basis of affirmative action, you are trying to make up for historical
    2:56:00 injustice of American black slavery.
    2:56:06 So the idea that you'd import somebody from Nigeria who never experienced that is tremendously
    2:56:08 insulting to black Americans.
    2:56:11 Anyway, so you can see where I’m heading with this.
    2:56:16 We have been in a 60-year social engineering experiment to exclude native born people from
    2:56:20 the educational slots and jobs that high-skilled immigration has been funneling foreigners
    2:56:21 into.
    2:56:22 Right.
    2:56:24 And so it turns out it’s not a victim-free thing.
    2:56:27 There’s like 100% there’s victims because why?
    2:56:28 There’s only so many.
    2:56:30 For sure, there’s only so many education slots and then for sure, there’s only so many of
    2:56:31 these jobs.
    2:56:32 Right.
    2:56:35 Google only hires so many, you know, whatever level seven engineers.
    2:56:36 Right.
    2:56:38 And so that’s the other side of it.
    2:56:39 Right.
    2:56:44 And so you’re a farm boy in Wisconsin, right, or a black American whose ancestors arrived
    2:56:53 here on a slave ship 300 years ago in Louisiana, or the kid of a Cambodian immigrant in
    2:56:58 the Bronx, or a Jewish immigrant, or from a very successful Jewish family.
    2:57:02 And your entire, you know, for three generations, you and your parents and grandparents went
    2:57:03 to Harvard.
    2:57:07 And what all of those groups know is the system that has been created is not for them.
    2:57:08 Right.
    2:57:11 It’s designed specifically to exclude them.
    2:57:14 And then what happens is all of these tech people show up in public and say, “Yeah, let’s
    2:57:15 bring in more foreigners.”
    2:57:16 Right.
    2:57:21 And so anyway, so the short version of it is you can’t anymore, I don’t think, just
    2:57:29 have the "high-skilled immigration" conversation for either education or for employment without
    2:57:32 also having the DEI conversation.
    2:57:34 And then DEI is just another word for affirmative action.
    2:57:36 So it’s the affirmative action conversation.
    2:57:39 And you need to actually deal with this at substance and to see what’s actually happening
    2:57:42 to people you need to join these topics.
    2:57:46 And I think it is much harder to make the moral claim for high-skilled immigration given
    2:57:52 the extent to which DEI took over both the education process and the hiring process.
    2:57:53 Okay.
    2:57:57 So first of all, that was brilliantly laid out, the nuance of it.
    2:58:02 So just to understand, it's not so much a criticism of H-1B, high-skilled immigration,
    2:58:08 it’s that there needs to be more people saying, “Yay, we need more American-born hires.”
    2:58:12 So I spent the entire Christmas holiday reading every message on this and not saying anything.
    2:58:17 And what I was – which you know me well enough to know that’s a serious level of –
    2:58:18 Yeah, that’s very zen.
    2:58:19 Yes, thank you.
    2:58:20 Thank you.
    2:58:21 No, it wasn’t.
    2:58:25 There was tremendous rage on the other side of it, but I suppressed it.
    2:58:29 So I was waiting for the dog that didn’t bark, right?
    2:58:33 And the dog that didn’t bark was I did not – and tell me if you saw one, I did not see
    2:58:36 a single example of somebody pounding the table for more high-skilled immigration who
    2:58:40 was also pounding the table to go get more smart kids who are already here into these
    2:58:42 educational institutions and into these jobs.
    2:58:44 I didn’t see a single one.
    2:58:45 That’s true.
    2:58:47 I think I agree with that.
    2:58:49 There really was a divide.
    2:58:51 But it was like literally, it was like the proponents of high-skilled immigration.
    2:58:53 And again, this was me for a very long time.
    2:58:57 I mean, I kind of took myself by surprise on this because I was on – you know, I had
    2:58:59 the much simpler version of this story for a very long time.
    2:59:03 Like I said, I’ve been in Washington many times under past presidents lobbying for this.
    2:59:05 By the way, never made any progress, which we could talk about.
    2:59:08 Like it never actually worked.
    2:59:10 But you know, I’ve been on the other side of this one.
    2:59:14 But I was literally sitting there being like, all right, which of these like super geniuses
    2:59:17 who many of whom by the way are very successful high-skilled immigrants or children of high-skilled
    2:59:23 immigrants, which of these super geniuses are going to like say, actually we have this
    2:59:25 like incredible talent source here in the country, which again, to be clear, I’m not
    2:59:26 talking about white people.
    2:59:30 I’m talking about native-born Americans, whites, Asians, Jews, blacks, for sure.
    2:59:31 For sure.
    2:59:32 For sure.
    2:59:33 Those four groups.
    2:59:34 But also white people.
    2:59:35 Yeah.
    2:59:36 And also white people.
    2:59:44 Those making the case for American-born hires are usually not also supporting H-1B.
    2:59:50 It's an extreme divide, and those people that are making that case are often
    2:59:55 making it in quite a radical way.
    2:59:56 Yeah.
    2:59:57 Let’s put it this way.
    2:59:58 Yeah.
    2:59:59 But you have this interesting thing.
    3:00:01 You have a split between the sides that I’ve noticed, which is one side has all of the
    3:00:02 experts.
    3:00:03 Right.
    3:00:04 Right.
    3:00:05 And I'm using scare quotes for people listening to audio.
    3:00:08 I’m making quotes in the air with my fingers as vigorously as I can.
    3:00:11 One side has all the certified experts.
    3:00:13 The other side just has a bunch of people who are like, they know that something is wrong
    3:00:16 and they don’t quite know how to explain it.
    3:00:19 What's so unusual about the Harvard and UNC cases, by the way, in front of the Supreme Court, is they
    3:00:22 actually had sophisticated lawyers, for the first time in a long time, actually put all
    3:00:25 the evidence together and actually put it in the public record.
    3:00:28 They actually had experts, which is just really rare.
    3:00:31 Generally what you get is you get, because if you don’t have experts, what do you have?
    3:00:35 You know something is wrong, but you have primarily an emotional response.
    3:00:42 You feel it, but can you put it in the words and tables and charts that a certified expert
    3:00:43 can?
    3:00:44 No, you can’t.
    3:00:45 That’s not who you are.
    3:00:48 That doesn’t mean that you’re wrong and it also doesn’t mean that you have less of a
    3:00:49 moral stance.
    3:00:50 Yeah.
    3:00:51 And so it’s just like, all right.
    3:00:54 Now, by the way, look, I think there are ways to square the circle.
    3:00:56 I think there’s a way to have our cake and eat it too.
    3:00:58 Like I think there’d be many ways to resolve this.
    3:01:04 I think, again, I think the way to do it is to look at these issues combined, at DEI combined
    3:01:05 with high-skilled immigration.
    3:01:12 It so happens the DEI is under much more scrutiny today than it has been for probably 20 years.
    3:01:18 Affirmative action is, the Supreme Court did just rule that it is not legal for universities
    3:01:19 to do that.
    3:01:23 They are still doing it, but they should stop.
    3:01:28 And then there are more and more, you've seen more companies now also ditching their DEI
    3:01:29 programs.
    3:01:33 In part, that’s happening for a bunch of reasons, but it’s happening in part because a lot of
    3:01:37 corporate lawyers will tell you that the Supreme Court rulings and education either already
    3:01:43 apply to businesses or just as a clear foreshadowing, the Supreme Court will rule on new cases that
    3:01:43 will ban it in businesses.
    3:01:51 And so there is a moment here to be able to look at this on both sides.
    3:01:55 Let me add one more nuance to it that makes it even more complicated.
    3:01:57 So the cliche is we’re going to drain the world, right?
    3:01:58 You’ve heard that?
    3:02:00 We’re going to take all the smart people from all over the world.
    3:02:01 We’re going to bring them here.
    3:02:02 We’re going to educate them.
    3:02:04 And then they’re going to raise their families here, create businesses here, create jobs
    3:02:05 here, right?
    3:02:07 In the cliche, that’s a super positive thing.
    3:02:08 Yeah.
    3:02:09 Okay.
    3:02:12 So what happens to the rest of the world?
    3:02:13 They lose?
    3:02:18 Well, how fungible are people?
    3:02:24 How many highly ambitious, highly conscientious, highly energetic, high achieving, high IQ
    3:02:28 super geniuses are there in the world?
    3:02:30 And if there’s a lot, that’s great.
    3:02:34 But if there just aren’t that many, and they all come here, and they all aren’t where
    3:02:39 they would be otherwise, what happens to all those other places?
    3:02:43 So it’s almost impossible for us here to have that conversation in part because we become
    3:02:46 incredibly uncomfortable as a society talking about the fact that people aren’t just simply
    3:02:50 all the same, which is the whole thing we could talk about.
    3:02:54 But also we are purely the beneficiary of this effect, right?
    3:02:57 We are brain draining the world, not the other way around.
    3:02:58 There’s only four.
    3:03:02 So if you look at the flow of high-skill immigration over time, there’s only four permanent sinks
    3:03:05 of high-skill immigration in places people go.
    3:03:07 It’s the US, Canada, the UK, and Australia.
    3:03:10 It's four of the Five Eyes.
    3:03:12 It's the major Anglosphere countries.
    3:03:16 And so for those countries, this seems like a no-lose proposition.
    3:03:20 It’s all the other countries that basically what we four countries have been doing is
    3:03:21 draining all those smart people up.
    3:03:25 It’s actually much easier for people in Europe to talk about this I’ve discovered because
    3:03:27 the Eurozone is whatever, 28 countries.
    3:03:31 And within the Eurozone, the high-skill people over time have been migrating to originally
    3:03:36 the UK, but also specifically, I think it’s the Netherlands, Germany, and France.
    3:03:40 But specifically, they’ve been migrating out of the peripheral Eurozone countries.
    3:03:43 And the one where this really hit the fan was in Greece, right?
    3:03:47 So Greece falls into chaos, disaster, and then you’re running the government in Greece
    3:03:51 and you’re trying to figure out how to put an economic development plan together.
    3:03:54 All of your smart young kids have left.
    3:03:56 Like what are you going to do, right?
    3:04:01 By the way, this is a potential, I know you care a lot about Ukraine, this is a potential
    3:04:02 crisis for Ukraine.
    3:04:06 In part because of this, because we enthusiastically recruit Ukrainians of
    3:04:07 course.
    3:04:09 And so we’ve been brain draining Ukraine for a long time.
    3:04:12 But also, of course, war does tend to cause people to migrate out.
    3:04:18 And so when it comes time for Ukraine to rebuild as a peaceful country, is it going to have
    3:04:20 the talent base even that it had five years ago?
    3:04:22 It’s like a very big and important question.
    3:04:25 By the way, Russia, like we have brain drained a lot of really smart people out of Russia.
    3:04:29 A lot of them are here over the last 30 years.
    3:04:31 And so there’s this thing.
    3:04:33 It’s actually really funny if you think about it.
    3:04:37 The one thing that we know to be the height of absolute evil that the West ever did was
    3:04:40 colonization and resource extraction.
    3:04:44 So we know the height of absolute evil was when the Portuguese and the English and everybody
    3:04:47 else went and had these colonies and then went in and we took all the oil and we took
    3:04:51 all the diamonds and we took all the whatever lithium or whatever it is, right?
    3:04:55 Well, for some reason, we realized that that’s a deeply evil thing to do when it’s a physical
    3:04:58 resource, when it’s a non-conscious physical matter.
    3:05:02 For some reason, we think it’s completely morally acceptable to do it with human capital.
    3:05:08 In fact, we think it’s glorious and beautiful and wonderful and the great flowering of peace
    3:05:10 and harmony and moral justice of our time to do it.
    3:05:13 And we don’t think for one second what we’re doing to the countries that we’re pulling
    3:05:15 all these people out of.
    3:05:18 And this is one of these things like I don’t know, like maybe we’re just going to live
    3:05:22 in this delusional state forever and we’ll just keep doing it and it’ll keep benefiting
    3:05:23 us and we just won’t care what happens.
    3:05:27 But like, I think there may come a time, this is one of these, this is like one of these submarines
    3:05:28 sitting 10 feet under the waterline.
    3:05:32 Like, I think it’s just a matter of time until people suddenly realize, “Oh my god, what
    3:05:33 are we doing?”
    3:05:37 Because like, we need the rest of the world to succeed too, right?
    3:05:39 Like, we need these other countries to like flourish.
    3:05:42 Like we don’t want to be the only successful country in the middle of just like complete
    3:05:46 chaos and disaster and we just extract and we extract and we extract and we don’t think
    3:05:47 twice about it.
    3:05:51 Well, this is so deeply profound actually.
    3:05:55 So what is the cost of winning, quote unquote?
    3:06:01 If these countries are drained in terms of human capital on the level of geopolitics,
    3:06:02 what does that lead to?
    3:06:08 Even if we talk about wars and conflict and all of this, we actually want them to be strong
    3:06:13 in the way we understand it’s strong, not just in every way.
    3:06:19 So that cooperation and competition can build a better world for all of humanity.
    3:06:22 It’s interesting.
    3:06:27 This is one of those truths where you just speak and it resonates and I didn’t even
    3:06:28 think about it.
    3:06:29 Yeah, exactly.
    3:06:34 So this is what you were sitting on during the holidays, as you said, just boiling over.
    3:06:39 So all that said, there's still, to you, some good to the H-1B.
    3:06:40 Okay.
    3:06:42 So then you get this other, okay.
    3:06:43 So then, to come all the way around.
    3:06:44 There’s another nuance.
    3:06:45 So there’s another nuance.
    3:06:48 There's another nuance, which is mostly in the Valley we don't use H1Bs anymore.
    3:06:49 Mostly we use O1s.
    3:06:55 So there’s a separate class of visa and O1 is like this.
    3:06:57 It turns out the O1 is the super genius visa.
    3:06:59 So the O1 is basically our founder visa.
    3:07:02 Like when we have like a, when we have somebody from anywhere in the world and they’ve like
    3:07:06 invented a breakthrough new technology and they want to come to the U.S. to start a company,
    3:07:11 they come in through an O1 visa and that actually is like a, it’s a fairly high bar.
    3:07:13 It’s a high acceptance rate, but it’s like a pretty high bar and they, they do a lot
    3:07:17 of work and they, there’s like a, you have to put real work into it and really, really
    3:07:19 prove your case.
    3:07:24 Mostly what’s happened with the H1B visa program is that it has gone to basically two categories
    3:07:25 of employers.
    3:07:29 One is the basically a small set of big tech companies that hire in volume, which is exactly
    3:07:31 the companies that you would think.
    3:07:34 And then the other is it goes to these, what they call kind of the mills, the consulting
    3:07:35 mills, right.
    3:07:38 And so there’s these set of companies with names I don’t want to pick on companies,
    3:07:43 you know, names like Cognizant that, you know, basically have in their business model
    3:07:47 bringing in primarily Indians in large numbers.
    3:07:51 And you know, they often have, you know, offices next to company owned housing and they’ll
    3:07:53 have, you know, organizations
    3:07:56 that are literally thousands of Indians, you know, living and working in the U.S. and
    3:08:01 they do basically call it mid-tier like IT consulting.
    3:08:04 So you know, these folks, they're making good, good wages, but they're making
    3:08:11 $60,000, $80,000, $100,000 a year, not the, you know, $300,000 that you'd make in the Valley.
    3:08:15 And so like in practice, the startups, basically little tech, as we call it, or the startup
    3:08:20 world, mainly doesn't use H-1Bs at this point, and mainly can't because the system is kind
    3:08:23 of rigged in a way that we really can’t.
    3:08:26 And then, and then, and then again, you get to the sort of underlying morality here, which
    3:08:30 is it’s like, well, you know, Amazon like Amazon’s in like, I love Amazon, but like
    3:08:33 they’re a big powerful company, you know, they’ve got, you know, more money than God,
    3:08:37 they’ve got resources, they’ve got long-term planning horizon, they do big, you know, profound
    3:08:42 things over, you know, decades at a time, you know, they could, you know, or any of
    3:08:45 these other companies could launch massively effective programs to go recruit the best
    3:08:48 and brightest from all throughout the country.
    3:08:52 And, you know, you’ll notice they don’t do that, you know, they bring in, you know, 10,000,
    3:08:55 20,000 H-1Bs a year.
    3:08:57 And so you’ve got a question there.
    3:09:00 And then these mills, like there’s lots of questions around them and whether they should,
    3:09:03 you know, whether that’s even a ethical way, you know, I don’t want to say they’re unethical,
    3:09:08 but there’s questions around like exactly what the trade-offs are there.
    3:09:11 And so, you know, this, yeah, and this is like a Pandora’s box that really, you know,
    3:09:16 nobody really wanted to be opened, you know, to play devil’s advocate on all this in terms
    3:09:19 of like national immigration issues, you know, none of this is like a top-end issue just
    3:09:21 because the numbers are small, right.
    3:09:24 And so, you know, I don’t think, you know, the administration has said like, this is
    3:09:27 not like a priority of theirs for right now.
    3:09:30 But I guess what I would say is like there is actually a lot of complexity and nuance
    3:09:32 here.
    3:09:35 I have a lot of friends, like I said, I have a lot of friends and colleagues who are,
    3:09:39 you know, who came over on H-1Bs or O1s, green cards, many are now citizens.
    3:09:42 And you know, every single one of them, well, not every single one.
    3:09:45 A lot of them were enthusiastic to, you know, defend the honor of immigrants throughout
    3:09:46 this whole period.
    3:09:48 And they said to me, it’s like, well, Mark, how can we, you know, how can we, how can
    3:09:51 we more clearly express, you know, the importance of high-skilled immigration to the U.S.?
    3:09:57 And I was like, I think you can do it by advocating for also developing our native born talent.
    3:10:01 Like, do you want to inflame the issue or do you want to defuse the issue?
    3:10:02 Right.
    3:10:04 And I think the answer is to defuse the issue.
    3:10:09 Let me give you one more positive scenario, which, and then I’ll also beat up on the university
    3:10:10 some more.
    3:10:14 Do you know about the National Merit Scholarship System?
    3:10:16 Have you heard about this?
    3:10:17 Not really.
    3:10:22 So there’s a system that was created during the Cold War called the National Merit Scholars.
    3:10:27 And it is a basically, it was created, I forget, in the 1950s or ’60s when it was when people
    3:10:31 in government actually wanted to identify the best and the brightest as heretical an
    3:10:33 idea as that sounds today.
    3:10:39 And so it’s basically a national talent search for basically IQ.
    3:10:44 Its goal is to identify basically the top 0.5% of the IQ in the country.
    3:10:46 By the way, completely regardless of other characteristics.
    3:10:51 So there’s no race, gender, or any other aspect to it is just going for straight intelligence.
    3:10:57 It uses the, first the PSAT, which is the preparatory SAT that you take, and then the SAT.
    3:11:02 So it uses those scores, that is the scoring, it’s a straight PSAT-SAT scoring system.
    3:11:09 So they use the SAT as a proxy for IQ, which it is.
    3:11:13 They run this every year, they identify, it’s like they get down to like 1% of the population
    3:11:17 of the kids, 18-year-olds in a given year who score highest on the PSAT, and then they
    3:11:22 get down to further qualify down to the 0.5% that also replicate on the SAT.
    3:11:25 And then it’s like the scholarship amount is like $2,500, right?
    3:11:30 So it’s like, it was a lot of money 50 years ago, not as much today, but it’s a national
    3:11:33 system being run literally to find the best and the brightest.
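A minimal sketch of the two-stage screen-then-confirm search described above, in Python; the field names, cutoffs, and fake data are illustrative assumptions, not the actual National Merit Scholarship rules:

```python
# Sketch of a two-stage talent search: screen everyone on a preliminary test,
# then require the high score to replicate on a second test. Cutoffs and data
# are illustrative assumptions, not the actual National Merit rules.
from dataclasses import dataclass
import random

@dataclass
class Student:
    name: str
    psat: int  # preliminary screening score
    sat: int   # confirming score

def merit_style_search(students, screen_frac=0.01, confirm_frac=0.005):
    # Stage 1: keep roughly the top 1% of the pool by PSAT.
    by_psat = sorted(students, key=lambda s: s.psat, reverse=True)
    stage1 = by_psat[: max(1, int(len(students) * screen_frac))]
    # Stage 2: within that slice, keep roughly the top 0.5% of the original
    # pool by SAT, i.e. the preliminary result has to replicate.
    by_sat = sorted(stage1, key=lambda s: s.sat, reverse=True)
    return by_sat[: max(1, int(len(students) * confirm_frac))]

# Illustrative run on made-up scores.
random.seed(0)
pool = [Student(f"kid{i}", random.randint(320, 1520), random.randint(400, 1600))
        for i in range(10_000)]
print(len(merit_style_search(pool)), "finalists out of", len(pool))
```

The same structure works for any pool size; only the two cutoff fractions change.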
    3:11:37 How many of our great and powerful universities use this as a scouting system?
    3:11:39 Like, our universities all have sports teams.
    3:11:44 They all have national scouting, full-time scouts who go out and they go to every high
    3:11:47 school and they try to find all the great basketball players and bring them into the
    3:11:50 NCAA, into all these leagues.
    3:11:53 How many of our great and powerful and enlightened universities use the national merit system
    3:11:58 to go do a talent search for the smartest kids and just bring them in?
    3:12:02 Let me guess, very few, zero.
    3:12:03 As you say it, that’s brilliant.
    3:12:07 There should be that same level of scouting for talent internally.
    3:12:08 Go get the smartest ones.
    3:12:11 I'll give you one more kicker on this topic if you're not, if I haven't beaten it to
    3:12:12 death.
    3:12:16 The SAT has changed.
    3:12:22 The SAT used to be a highly accurate proxy for IQ that caused a bunch of problems.
    3:12:25 People really don’t like the whole idea of IQ.
    3:12:29 The SAT has been actively managed over the last 50 years by the college board that runs
    3:12:32 it and it has been essentially like everything else.
    3:12:37 It’s been dumbed down in two ways.
    3:12:42 Number one has been dumbed down where an 800 from 40 years ago does not mean what an 800
    3:12:43 means today.
    3:12:48 40 years ago it was almost impossible to get an 800.
    3:12:53 Today there’s so many 800s that you could stock the entire Ivy League with 800s.
    3:12:55 It’s been deliberately dumbed down.
    3:12:59 Then two is they have tried to pull out a lot of what’s called the G-loading.
    3:13:03 They’ve tried to detach it from being an IQ proxy because IQ is such an inflammatory
    3:13:04 concept.
    3:13:07 The consequence of that is, and this is sort of perverse, they've made it more
    3:13:08 coachable.
    3:13:13 Right, so the SAT 40 years ago coaching didn’t really work and more recently it has really
    3:13:14 started to work.
    3:13:18 One of the things you see is that the Asian spike, you see this giant leap upward in
    3:13:21 Asian performance over the last decade and I think looking at the data, I think a lot
    3:13:26 of that is because it’s more coachable now and the Asians do the most coaching.
    3:13:28 There’s a bunch of issues with this.
    3:13:31 The coaching thing is really difficult because the coaching thing is a subsidy then to the
    3:13:34 kids whose parents can afford coaching.
    3:13:37 I don’t know about you, but where I grew up there was no SAT coaching.
    3:13:38 There’s like an issue there.
    3:13:41 I didn’t even know what the SAT was until the day I took it, much less that there was
    3:13:45 coaching, much less that it could work, so much less we could afford it.
    3:13:46 So number one, there’s issues there.
    3:13:50 But the other issue there is think about what’s happened by the dumbing down.
    3:13:55 800 no longer captures all the smart kids; 800 is too crude of a test.
    3:13:57 It’s like the AI benchmarking problem.
    3:13:59 It’s the same problem they have in AI benchmarking right now.
    3:14:02 800 is too low of a threshold.
    3:14:06 There are too many kids scoring 800 because what you want is you want whatever, if it’s
    3:14:09 going to be 100,000 kids, I don’t know what it is, it’s going to be 50,000 kids a year
    3:14:10 scoring 800.
    3:14:15 You also then want kids to be able to score 900 and 1100 and 1200 and you want to ultimately
    3:14:19 get to, you’d like to ultimately identify the top 100 kids and make sure that you get
    3:14:21 them in MIT.
    3:14:25 And the resolution of the test has been reduced so that it actually is not useful for doing
    3:14:26 that.
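A tiny simulation of the ceiling effect being described here, with made-up numbers, assuming the test score is just a clipped proxy for some underlying ability:

```python
# Illustration of the resolution argument: once scores are capped, many distinct
# ability levels collapse onto the same maximum score, so the top of the
# distribution can no longer be ranked. All numbers are made up.
import random

random.seed(0)

CAP = 800

def capped_score(ability):
    # Pretend raw ability maps directly to a score, then gets clipped at the cap.
    return min(round(ability), CAP)

abilities = [random.gauss(600, 120) for _ in range(100_000)]
scores = [capped_score(a) for a in abilities]

maxed_out = sum(s == CAP for s in scores)
print(f"{maxed_out} of {len(scores)} test-takers share the top score of {CAP};")
print("the test cannot rank them, no matter how far above the cap their ability is.")
```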
    3:14:29 And again, I would say this is like part of the generalized corruption that’s taken
    3:14:33 place throughout this entire system where we have been heading in the reverse direction
    3:14:37 from wanting to actually go get the best and brightest and actually put them in the places
    3:14:38 where they should be.
    3:14:41 And then just the final comment would be the great thing about standardized testing and
    3:14:45 the national merit system is, like I said, it’s completely race blind, it’s gender blind,
    3:14:47 it’s blind on every other characteristic.
    3:14:49 It’s only done on test scores.
    3:14:54 And you can make an argument about whether that’s good or bad, but it is for sure, it’s
    3:14:57 the closest thing that we had to get to merit.
    3:15:00 It was the thing that they did when they thought they needed merit to win the Cold War.
    3:15:03 And of course, we could choose to do that anytime we want.
    3:15:07 And I just say, I find it like incredibly striking and an enormous moral indictment
    3:15:10 of the current system that there are no universities that do this today.
    3:15:13 So back to the immigration thing just real quick, it’s like, okay, we aren’t even trying
    3:15:16 to go get the smart kids out of the center or south.
    3:15:19 And even if they think that they can get into these places, they get turned down.
    3:15:21 And the same thing for the smart Asians and the same thing for the smart Jews and the
    3:15:23 same thing for the smart black people.
    3:15:29 And like, it’s just like, I don’t know how, like, I don’t know how that’s moral.
    3:15:31 Like, I don’t get it at all.
    3:15:37 As you said about the 800, so I took the SAT and the ACT many times and I’ve always gotten
    3:15:39 perfect on math 800.
    3:15:47 It’s just, and I’m not that, I’m not special, like it doesn’t identify genius.
    3:15:54 I think you want to search for genius and you want to create measures that find genius
    3:15:57 of all different kinds, speaking of diversity.
    3:16:06 And I guess we should reiterate and say over and over and over, defend immigrants, yes,
    3:16:09 but say we should hire more and more native born.
    3:16:13 Well, you asked me in the beginning, like, what’s the most optimistic forecast, right,
    3:16:21 that we could have in the most optimistic forecast would be my God, what if we did both?
    3:16:25 So that’s the reasonable, the rational, the smart thing to say here.
    3:16:26 In fact, we don’t have to have a war.
    3:16:30 Well, it would defuse, it would defuse the entire issue.
    3:16:32 If everybody in the center and the south of the country and every Jewish family, Asian
    3:16:37 family, black family knew they were getting a fair shake, like it would defuse the issue.
    3:16:38 Like how about defusing the issue?
    3:16:43 Like what a crazy radical idea, sorry, I don't mean to really get out over my skis here, but
    3:16:47 I think your profile on X states it’s time to build.
    3:16:52 It feels like 25, 2025 is a good year to build.
    3:17:02 So I wanted to ask your advice and maybe for advice for anybody who’s trying to build,
    3:17:08 who’s trying to build something useful in the world, maybe launch a startup or maybe just
    3:17:14 launch apps, services, whatever, ship software products.
    3:17:21 So maybe by way of advice, how do you actually get to shipping?
    3:17:24 So I mean, a big part of the answer I think is we’re in the middle of a legit revolution
    3:17:29 and I know you’ve been talking about this on your show, but like AI coding, I mean,
    3:17:34 this is the biggest earthquake to hit software in certainly my life, maybe since the investment
    3:17:35 software.
    3:17:39 And I’m sure we’re involved in various of these companies, but these tools from a variety
    3:17:46 of companies are absolutely revolutionary and they’re getting better at leaps and bounds
    3:17:47 right every day.
    3:17:52 You know all this, but the thing with coding, there’s open questions of whether AI can get
    3:17:57 better at understanding philosophy or creative writing or whatever, but for sure we can make
    3:18:01 it much better at coding because you can validate the results of coding.
    3:18:05 And so there’s all these methods of synthetic data and self-training and reinforcement learning
    3:18:07 that for sure you can do with coding.
    3:18:12 And so everybody I know who works in the field says AI coding is going to get to be phenomenally
    3:18:14 good and it’s already great.
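A minimal sketch of what "you can validate the results of coding" can mean in practice: run a candidate function against a few known input/output pairs and keep it only if it passes. The candidate snippet and the test values below are purely illustrative, not anything from the conversation.

```python
# A coding task is "checkable": execute the candidate and compare against known answers.
candidate_source = """
def add(a, b):
    return a + b
"""

def passes_tests(source: str) -> bool:
    namespace = {}
    exec(source, namespace)  # run the candidate definition (only for trusted/sandboxed code)
    fn = namespace["add"]
    cases = [((1, 2), 3), ((-1, 1), 0), ((0, 0), 0)]
    return all(fn(*args) == expected for args, expected in cases)

# A passing sample could be kept as synthetic training data or rewarded in RL-style training.
print(passes_tests(candidate_source))  # True
```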
    3:18:17 And you can, I mean, anybody wants to see this just go on YouTube and look at AI coding
    3:18:21 demos, you know, little kids making apps in 10 minutes working with an AI coding system.
    3:18:23 And so I think it’s the golden age.
    3:18:25 I mean, I think this is an area where it’s clearly the golden age.
    3:18:29 The tool set is extraordinary, you know, in a day as a coder for sure in a day, you can
    3:18:34 retrain yourself, you know, start using these things, get a huge boost in productivity as
    3:18:37 a non-coder you can learn much more quickly than you could before.
    3:18:41 That’s actually a tricky one in terms of learning as a non-coder to build stuff.
    3:18:45 But still, I feel like you still need to learn how to code.
    3:18:47 It becomes a superpower.
    3:18:49 It helps you be much more productive.
    3:18:56 Like you could legitimately be a one person company and get quite far.
    3:18:57 I agree with that up to a point.
    3:19:03 So the, I think for sure for quite a long time, the people who are good at coding are going
    3:19:06 to be the best at actually having AI’s code things because they’re going to understand
    3:19:08 what, I mean, very basic, they’re going to understand what’s happening, right?
    3:19:11 And they’re going to be able to evaluate the work and they’re going to be able to, you
    3:19:13 know, literally like manage AI’s better.
    3:19:16 Like even if they’re not literally handwriting the code, they’re just going to have a much
    3:19:17 better sense of what’s going on.
    3:19:21 So I definitely think like 100%, my nine year old is like doing all kinds of coding classes
    3:19:24 and he’ll keep doing that for certainly through 18.
    3:19:26 We’ll see after that.
    3:19:29 And so for sure that’s the case.
    3:19:32 But look, having said that, one of the things you can do with an AI is say, teach me how
    3:19:35 to code, right?
    3:19:40 And so, and you know, there’s a whole bunch of, you know, I’ll name names, you know,
    3:19:43 like there’s a whole bunch of work that they’re doing economy for free.
And then, you know, we have this company, Replit, which was originally specifically built
for kids for coding, that has AI built in, that's just absolutely extraordinary now.
    3:19:56 And then, you know, there’s a variety of other systems like this.
And yeah, I mean, the AI is going to be able to teach you to code. AI, by the way, is, as
you know, spectacularly good at explaining code, right?
And so, you know, the tools have these features now where you can talk to the code base.
So you can like literally like ask the code base questions about itself.
    3:20:15 And you can also just do the simple form, which is you can copy and paste code into
ChatGPT and just ask it to explain it, what's going on, rewrite it, improve it, make recommendations.
    3:20:23 And so there’s, yeah, there’s dozens of ways to do this.
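For the copy-paste workflow described above, a minimal sketch using the OpenAI Python SDK; the model name and the snippet are assumptions for illustration, and any chat-capable model would do.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

snippet = """
def mystery(xs):
    return sorted(xs, key=lambda x: (x is None, x))
"""

# Ask the model to explain the pasted code and suggest improvements.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": "You explain code clearly and suggest improvements."},
        {"role": "user", "content": f"Explain what this does and how to improve it:\n{snippet}"},
    ],
)
print(response.choices[0].message.content)
```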
    3:20:26 By the way, you can also, I mean, even more broadly than code, like, you know, okay, you
    3:20:31 want to make a video game, okay, now you can do AI, art generation, sound generation, dialogue
    3:20:34 generation, voice generation, right?
    3:20:37 And so all of a sudden, like you don’t need designers, you know, you don’t need, you know,
    3:20:38 voice actors.
    3:20:43 You know, so yeah, so there’s just like unlimited, and then, you know, because, you know, a big
    3:20:47 part of coding is so called glue, you know, it’s interfacing into other systems.
So it's interfacing into, you know, Stripe to take payments or something like that.
    3:20:54 And you know, AI is fantastic at writing glue code.
    3:20:57 So you know, really, really good at making sure that you can plug everything together,
    3:21:01 really good at helping you figure out how to deploy, you know, it’ll even write a business
    3:21:03 plan for you.
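On the "glue code" point, e.g. wiring in Stripe to take payments: a minimal sketch with Stripe's Python library; the test key is a placeholder and the amount is illustrative.

```python
import stripe

stripe.api_key = "sk_test_placeholder"  # placeholder test key, not a real credential

# Create a PaymentIntent for $20.00 -- the kind of boilerplate "glue"
# that AI coding assistants are typically good at producing.
intent = stripe.PaymentIntent.create(
    amount=2000,  # amount in cents
    currency="usd",
    automatic_payment_methods={"enabled": True},
)
print(intent.id, intent.status)
```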
    3:21:06 So it’s just this, it’s like everything happening with AI right now, it’s just it’s like this
    3:21:10 latent superpower, and there’s this incredible spectrum of people who have really figured
    3:21:14 out massive performance increases, productivity increases with it already.
    3:21:16 There’s other people who aren’t even aware it’s happening.
    3:21:21 And there’s some gearing to whether you’re a coder or not, but I think there are lots
of non-coders that are off to the races.
    3:21:27 And I think there are lots of professional coders who are still like, you know, the blacksmiths
were not necessarily in favor of, you know, the car business.
    3:21:36 So yeah, there’s the old William Gibson quote, the future is here, it’s just not evenly
    3:21:37 distributed yet.
    3:21:41 And this is maybe the most potent version of that that I’ve ever seen.
    3:21:48 Yeah, there’s a, you know, the old meme with the, with the bell curve, the people on both
    3:21:51 extremes say AI coding is the future.
    3:21:52 Right.
It's very common for programmers to say, you know, if you're any good as a programmer,
you're not going to be using it.
    3:21:58 That’s just not true.
    3:22:04 No, I consider myself a reasonably good programmer and I, my productivity has been just skyrocketed
    3:22:12 and the joy of programming skyrocketed is every aspect of programming is more efficient,
    3:22:15 more productive, more fun, all that kind of stuff.
    3:22:19 I would also say code is, you know, code has, code has of anything in like industrial society,
    3:22:24 code has the highest elasticity, which is to say the easier it is to make it, the more
    3:22:25 it gets made.
    3:22:29 I think effectively there’s unlimited demand for code, like in other words, like there’s
    3:22:34 always some other idea for a thing that you can do, a feature that you can add or a thing
    3:22:36 that you can optimize.
    3:22:40 And so, and so like overwhelmingly, you know, the amount of code that exists in the world
    3:22:43 is a fraction of even the ideas we have today and then we come up with new ideas all the
    3:22:44 time.
    3:22:50 And so I think that like, you know, I was, I was late 80s, early 90s, when sort of automated
    3:22:53 coding systems started to come out, expert systems, big deal in those days.
    3:22:56 And there were all these, there was a famous book called The Decline and Fall of the American
    3:22:59 Programmer, you know, that predicted that these new coding systems were going to mean
    3:23:00 we wouldn’t have programmers in the future.
    3:23:04 And of course, the number of programming jobs exploded by like a factor of 100.
    3:23:07 Like my guess will be, we’ll have more, my guess is we’ll have more coding jobs probably
    3:23:11 by like an order of magnitude 10 years from now.
    3:23:12 That will be different.
    3:23:13 There’ll be different jobs.
    3:23:17 They’ll involve orchestrating AI, but there will be, we will be creating so much more
    3:23:21 software that the whole industry will just explode in size.
    3:23:26 Are you seeing the size of companies decrease in terms of startups?
    3:23:28 What’s the landscapes of little tech?
    3:23:31 All we’re seeing right now is the AI hiring boom of all time.
    3:23:37 Oh, for the big tech people and little tech, everybody’s trying to hire as many engineers
    3:23:38 as they can to build AI systems.
    3:23:40 It’s just, it’s a hundred percent.
    3:23:44 I mean, there’s a handful of company, you know, there’s a little bit, there’s customer
    3:23:45 service.
    3:23:48 You know, there, we have some companies and others, I think it’s Klarna that’s publicizing
    3:23:55 a lot of this in Europe where, you know, there are jobs that can be optimized and jobs that
    3:23:56 can be automated.
    3:24:02 But like for engineering jobs, like it’s just an explosion of hiring, that at least so far
    3:24:05 there’s no trace of any sort of diminishing effect.
    3:24:07 Now, having said that, I am looking forward to the day.
    3:24:12 I am waiting for the first company to walk in saying, yes, like the more radical form
    3:24:13 of it.
    3:24:16 So basically the companies that we see are basically one of two kinds.
    3:24:20 We see the companies that are basically, sometimes use weak form, strong form.
    3:24:25 So the weak form companies sometimes use the term, it’s called the sixth bullet point.
    3:24:28 AI is the sixth bullet point on whatever they’re doing.
    3:24:29 Sure.
    3:24:30 Right.
    3:24:31 And it’s on the slide, right?
    3:24:33 So they’ve got the, you know, whatever, dot, dot, dot, dot, and then AI is the sixth thing.
    3:24:35 And the reason AI is the sixth thing is because they had already previously written the slide
    3:24:37 before the AI revolution started.
    3:24:40 And so they just added the sixth bullet point on the slide, which is how you’re getting
    3:24:44 all these products that have like the AI button up in the corner, right, the little sparkly
    3:24:45 button.
    3:24:46 Right.
    3:24:48 And all of a sudden, Gmail is offering to summarize your email, which I’m like, I don’t
    3:24:49 need that.
    3:24:53 Like I need you to answer my email, not summarize it like what the hell.
    3:24:54 Okay.
    3:24:55 So we see those.
    3:24:56 And that’s fine.
    3:24:59 That’s like, I don’t know, putting sugar on the cake or something.
    3:25:02 But then we see the strong form, which is the companies that are building from scratch
    3:25:03 for AI.
    3:25:04 Right.
    3:25:05 And they’re building it.
    3:25:08 I actually just met with a company that is building literally an AI email system as an
    3:25:09 example.
    3:25:10 Oh, nice.
    3:25:11 I can’t wait.
    3:25:12 Yeah.
    3:25:13 They’re going to completely, right.
    3:25:14 It’s going to be an obvious idea.
    3:25:15 Very smart team.
    3:25:17 You know, it’s going to be great.
    3:25:20 And then, you know, Notion just, you know, another, not one of our companies, but just
    3:25:21 came out with a product.
    3:25:24 And so now companies are going to basically come through, sweep through, and they’re going
    3:25:27 to do basically AI first versions of basically everything.
    3:25:31 And those are like companies built, you know, AI is the first bullet point is the strong
    3:25:32 form of the argument.
    3:25:33 Yeah.
    3:25:34 Cursor is an example of that.
    3:25:38 They basically said, okay, we’re going to rebuild the thing with AI as the first citizen.
    3:25:41 What if we knew from scratch that we could build on this?
    3:25:45 And again, this is like, this is part of the full employment act for startups and VCs is
it's just like, if a technology transformation is sufficiently powerful, then you actually
    3:25:54 need to start the product development process over from scratch because you need to reconceptualize
    3:25:55 the product.
    3:25:58 And then usually what that means is you need a new company because most incumbents just
    3:25:59 won’t do that.
    3:26:02 And so, yeah, so that’s underway across many categories.
    3:26:07 What I’m waiting for is the company where it’s like, no, our org chart is redesigned
    3:26:08 as a result of AI, right?
    3:26:12 And so, I’m looking, I’m waiting for the company where it’s like, no, we’re going to have like,
    3:26:15 you know, and the cliche, here’s a thought experiment, right?
    3:26:18 The cliche would be we’re going to have like the human executive team and then we’re going
    3:26:20 to have the AI’s be the workers, right?
    3:26:25 So we’ll have VP of engineering supervising 100 instances of coding agents, right?
    3:26:26 Okay, maybe.
    3:26:27 Right.
    3:26:31 By the way, or maybe, maybe the VP of engineering should be the AI.
    3:26:34 Maybe supervising human coders who are supervising AI’s, right?
    3:26:39 Because one of the things that AI should be pretty good at is managing because it’s like
    3:26:41 not, you know, it’s like a process driven.
    3:26:43 It’s the kind of thing that AI is actually pretty good at, right?
    3:26:46 Performance evaluation coaching.
    3:26:49 And so, should it be an AI executive team?
    3:26:54 And then, you know, and then of course the ultimate question, which is AI CEO, right?
    3:26:57 And then, you know, and then there’s, and then maybe the most futuristic version of it would
    3:27:00 be an actual AI agent that actually goes fully autonomous.
    3:27:01 Yeah.
    3:27:04 What if you really set one of these things loose and let it, let it basically build itself
    3:27:05 a business?
    3:27:08 And so I will say like, we’re not yet seeing those.
    3:27:13 And I think there’s a little bit of the systems aren’t quite ready for that yet.
    3:27:16 And then I think it’s a little bit of, you really do need at that point, like a founder
    3:27:21 who’s really willing to break all the rules and really willing to take the swing.
    3:27:22 And those people exist.
    3:27:23 And so I’m sure we’ll see that.
    3:27:27 And some of it is, as you know, with all the startups, this is the execution.
    3:27:34 The idea that you have a AI first email client, seems like an obvious idea, but actually creating
    3:27:38 one, executing it and then taking on Gmail is really, is really difficult.
    3:27:45 I mean, Gmail, it’s fascinating to see Google can’t do it because, because why?
    3:27:49 Because the momentum, because it’s hard to re-engineer the entirety of the system feels
    3:27:52 like Google is perfectly positioned to, to do it.
Same with like your Perplexity, which I love, like Google could technically take on Perplexity
    3:28:02 and do it much better, but they haven’t, not yet.
    3:28:06 So it’s fascinating why that is for large companies.
    3:28:08 I mean, that, that is an advantage for little tech.
    3:28:09 They could be agile.
    3:28:10 Yeah, that’s right.
    3:28:11 They could move fast.
    3:28:12 Yeah.
    3:28:14 Little companies can break glass in a way big companies can’t.
    3:28:15 Right.
This is sort of the big breakthrough that Clayton Christensen had in The Innovator's Dilemma,
    3:28:21 which is sometimes when big companies don’t do things, it’s because they’re screwing up
    3:28:23 and that certainly happens.
    3:28:26 But a lot of times they don’t do things because it would break too much glass.
    3:28:30 It was specifically, it would, it would interfere with their existing customers and their existing
    3:28:31 businesses.
    3:28:32 And they just simply won’t do that.
    3:28:34 And by the way, responsibly, they shouldn’t do that.
    3:28:35 Right.
And so they just get, Clayton Christensen's big thing is they, they often don't adapt
because they are well run, not because they're poorly run, but they're optimizing machines.
    3:28:49 They’re, they’re, they’re optimizing, I guess, existing business and, and, and, and as, as
    3:28:54 you kind of just said, this is like a permanent state of affairs for large organizations, like
    3:28:56 every once in a while, one breaks the pattern and actually does it.
    3:28:59 But for the most part, like this is a very predictable form of human behavior.
    3:29:03 And this fundamentally is why startups exist.
It feels like 2025 is when the race for dominance in AI will see some winners.
    3:29:10 Like it’s a big year.
    3:29:12 So who do you think wins the race?
OpenAI, Meta, Google, xAI, who do you think wins the AI race?
    3:29:18 I would say, I’m not going to predict, I’m going to say there’s questions all over the
    3:29:19 place.
    3:29:22 And then we have, we have this category of question we call the trillion dollar question,
    3:29:26 which is like literally depending on how it’s answered, people make or lose a trillion dollars.
    3:29:30 And I think there’s like, I don’t know, five or $6 trillion questions right now that are
    3:29:33 hanging out there, which is an unusually large number.
    3:29:36 And I just, you know, I’ll just hit a few of them and we can talk about them.
    3:29:38 So one is big models versus small models.
    3:29:40 Another is open models versus closed models.
    3:29:44 Another is whether you can use synthetic data or not.
    3:29:45 Another is chain of thought.
    3:29:48 How far can you push that in reinforcement learning?
    3:29:52 And then another one is political trillion dollar questions, policy questions, which,
    3:29:57 you know, the U.S. and the EU have both been flunking dramatically and the U.S. hopefully
    3:29:59 is about to really succeed at.
    3:30:00 Yeah.
    3:30:03 And then there’s probably another, you know, half dozen big important questions after that.
    3:30:08 And so these are all just like, say, this is an industry that’s in flux in a way that
    3:30:11 I even more dramatic, I think, than the ones I’ve seen before.
And look, the most obvious example of the flux is, sitting here less than three years ago,
sitting here in December '22, we would have said that OpenAI is just running away with everything.
    3:30:27 And sitting here today, it’s like, you know, there’s at least six, you know, world-class
    3:30:33 God model companies and teams that are, by the way, generating remarkably similar results.
    3:30:36 That’s actually been one of the most shocking things to me is like, it turns out that once
    3:30:40 you know that it’s possible to build one incredibly smart Turing test passing large
    3:30:44 language model, which was a complete shock and surprise to the world.
    3:30:48 It turns out within, you know, a year you can have five more.
    3:30:51 There’s also a money component thing to it, which is to get the money to scale one of
    3:30:53 these things into the billions of dollars.
    3:30:56 There’s basically right now only two sources of money that will do that for you.
    3:31:00 One is the hyperscalers giving you the money, which you turn around and round trip back
    3:31:01 to them.
Or, you know, foreign sovereigns, you know, other countries' sovereign
wealth funds, which can be, you know, difficult in some cases for companies to access.
    3:31:14 So there’s a, there’s another, there’s maybe another trillion dollar question is the financing
    3:31:15 question.
    3:31:16 Here’s one.
Sam Altman has been public about the fact that he wants to transition OpenAI from being
a nonprofit to being a for-profit.
    3:31:25 The way that that is legally done is that, and there is a way to do it, there is a way
    3:31:30 in U.S. law to do it, the IRS and other legal entities, government entities scrutinize this
    3:31:34 very carefully because the U.S. takes foundation nonprofit law very seriously because of the
    3:31:36 tax exemption.
    3:31:40 And so the way that, historically the way that you do it is you start a for-profit and
    3:31:44 then you, you raise money with the for-profit to buy the assets of the nonprofit at fair
    3:31:47 market value.
And you know, the last financing round at OpenAI was, you know, 150-some billion dollars.
    3:31:56 And so logically, the, if, if, if the flip is going to happen, the for-profit has to
    3:32:02 go raise 150 billion dollars out of the chute to buy the assets, you know, raising 150 billion
    3:32:03 is a challenge.
    3:32:06 Um, so, you know, is that even possible?
If that is possible, then OpenAI maybe is off to the races as a for-profit company.
    3:32:13 If not, you know, you know, I don’t know, and then, you know, obviously the Elon lawsuit.
    3:32:17 So, so just because they’re the market leader today, you know, there’s big important questions
    3:32:18 there.
    3:32:20 You know, Microsoft has this kind of love-hate relationship with them.
    3:32:21 Where does that go?
    3:32:25 Apple’s, you know, lagging badly behind, but, you know, they’re very good at catching up.
    3:32:29 Amazon, you know, is primarily a hyperscalar, but they now have their own models.
    3:32:33 And then there’s the other questions like you laid out brilliantly, briefly and brilliantly,
    3:32:39 open versus closed, big versus little models, synthetic data, that’s a huge, huge question.
And then test-time compute with chain of thought, the role of that, and this is fascinating.
    3:32:48 And these are, I think it’s fair to say, trillion-dollar questions.
    3:32:49 You know, these are big.
    3:32:51 Like, look, you know, it’s like, here’s a trillion-dollar question, which is kind of
    3:32:54 embedded in that, which is just hallucinations, right?
    3:32:58 Like, so if you are trying to use these tools creatively, you’re thrilled because they can
    3:33:02 draw new images and they can make new music and they can do all this incredible stuff,
    3:33:03 right?
    3:33:04 They’re creative.
    3:33:07 The flip side of that is if you need them to be correct, they can’t be creative.
    3:33:11 That’s, you know, the term hallucination and these things do hallucinate.
    3:33:16 And you know, there have been, you know, court cases already where lawyers have submitted
    3:33:20 legal briefs that contain made-up court citations, case citations, the judge is like, wait a
    3:33:21 minute, this doesn’t exist.
    3:33:24 And the very next question is, did you write this yourself?
    3:33:26 And the lawyer goes, “Uh…”
I mean, that's why you, along with Grok, are looking for truth.
    3:33:33 I mean, that’s an open, technical question.
    3:33:35 How close can you get to truth with LLMs?
    3:33:36 Yeah, that’s right.
And my sense, this is a very contentious topic in the industry, my sense is, to the extent
    3:33:47 that there is a domain in which there is a definitive and checkable and provable answer,
    3:33:51 and you might say math satisfies that, coding satisfies that, and maybe some other fields,
    3:33:54 then you should be able to generate synthetic data.
    3:33:55 You should be able to do chain of thought reasoning.
    3:33:57 You should be able to do reinforcement learning.
And you should be able to ultimately, you know, eliminate hallucinations. But, by the way,
that's a trillion dollar question right there as to whether that's true.
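A sketch of the "checkable answer" idea: if a verifier exists (as it does for arithmetic, and roughly for code via tests), you can rejection-sample a model and keep only the verified pairs as synthetic training data. `generate` here is a stand-in for a real model call, not an actual API.

```python
import random

def generate(prompt: str) -> str:
    # Stand-in for a model call; here it just guesses an answer.
    return str(random.randint(0, 20))

def verify(prompt: str, answer: str) -> bool:
    # For "a+b" prompts we can compute the ground truth exactly.
    a, b = map(int, prompt.split("+"))
    return answer.isdigit() and int(answer) == a + b

# Rejection sampling: only verified (prompt, answer) pairs become synthetic data.
dataset = []
for _ in range(1000):
    prompt = f"{random.randint(0, 9)}+{random.randint(0, 9)}"
    answer = generate(prompt)
    if verify(prompt, answer):
        dataset.append({"prompt": prompt, "answer": answer})

print(f"kept {len(dataset)} verified examples out of 1000 samples")
```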
    3:34:08 But then, but then there’s question like, okay, is that going to work in the more general
    3:34:09 domain?
Like, so for example, one possibility is these things are going to get truly superhuman at, like,
math and coding, but at, like, discussing philosophy, they're going to just, they're basically as
    3:34:19 smart as they’re ever going to be.
    3:34:23 And they’re going to be kind of, you know, say mid-wit grad student level.
    3:34:26 And the theory there would just be they’re already out of training data.
    3:34:30 Like they literally, you know, you talk to these people like literally the big models,
    3:34:33 the big models are like within a factor of two X of consuming all the human generated
    3:34:36 training data to the point that some of these big companies are literally hiring people
    3:34:39 like doctors and lawyers to sit and write new training data by hand.
    3:34:42 And so does this mean that like you have to, if you want your model to be better at philosophy,
    3:34:45 you have to go hire like a thousand philosophers and have them write new content?
    3:34:47 And is anybody going to do that?
    3:34:50 And so, you know, maybe, maybe these things are topping out in certain ways and they’re
    3:34:52 going to leap way ahead in other ways.
    3:34:57 And so anyway, so we just don’t, you know, I guess this is maybe my main conclusion is
I don't buy any of these, anybody telling you these big sweeping conclusions,
    3:35:05 you know, this whole super, you know, all of these abstract generalized super intelligence
    3:35:09 AGI stuff, like it, you know, maybe it’s the engineer in me, but like, no, like that’s
    3:35:16 not the, that’s not the, that’s too abstract, like it’s got to actually work.
    3:35:18 And then by the way, it has to actually pay for it.
    3:35:22 I mean, this is a problem right now with the, you know, the big models, the big models that
are like really good at coding and math, they're like actually very expensive to run.
    3:35:28 You know, they’re quite slow.
    3:35:33 Another trillion dollar question, future chips, which I know you’ve talked a lot about.
Another trillion dollar question, yeah, I mean, all the global issues, oh, another trillion
dollar question, censorship, right, and all the, as they say, the human feedback training process.
    3:35:49 Exactly what are you training these things to do?
    3:35:51 What are they allowed to talk about?
    3:35:55 How long do they give you these and how often do they give these incredibly preachy moral
    3:35:56 lectures?
    3:35:59 Here’s a, here’s a, here’s a good, here’s a trillion dollar question.
    3:36:05 How many other countries want their country to run its education system, healthcare system,
    3:36:08 news system, political system on the basis of an AI that’s been trained according to
    3:36:13 the most extreme left-wing California politics, right, because that’s kind of what they have
    3:36:15 on offer right now.
    3:36:17 And I think the answer to that is not very many.
    3:36:22 So there’s like massive open questions there about like what, you know, and by the way,
    3:36:25 like what morality of these things are going to get trained on as a.
    3:36:32 In that one, we’re cracking wide open with what’s been happening over the past few months,
    3:36:38 censorship on every level of these companies and just the very idea what truth means and
    3:36:45 what it means to be, expand the Overton window of LLMs or the Overton window of human discourse.
    3:36:47 So what, what I experienced, you know, going back to how we started, what I experienced
    3:36:53 was, all right, social media censorship regime from hell, debanking, right, at like large
    3:36:58 scale, and then the war on the crypto industry, trying to kill it, and then basically declared
    3:37:03 intent to do the same thing to AI and to put AI under the same kind of censorship and control
    3:37:06 regime as social media and the banks.
    3:37:11 And I think this election tips in America, I think this election tips us from a timeline
    3:37:15 in which things were going to get really bad on that front to a timeline in which I think
    3:37:17 things are going to be quite good.
    3:37:21 But look, those same questions also apply outside the US and, you know, the EU is doing
    3:37:25 their thing, they’re being extremely draconian, and they’re trying to lock in a political
    3:37:27 censorship regime on AI right now.
    3:37:29 That’s so harsh that even American AI companies are not even willing to launch new products
    3:37:31 in the EU right now.
    3:37:35 Like, that’s not going to last, but like what happens there, right?
    3:37:38 And what are the tradeoffs, you know, what levels of censorship are American companies
    3:37:42 going to have to sign up for if they want to operate in the EU or is the EU still capable
of generating its own AI companies, or have we brain-drained them so that they can't?
    3:37:52 So big questions.
    3:37:53 Quick questions.
    3:38:03 So you’re very active on X, a very unique character, flamboyant, exciting, bold.
    3:38:05 You post a lot.
    3:38:10 I think there’s a meme, I don’t remember it exactly, but Elon posted something like inside
    3:38:12 Elon there are two wolves.
    3:38:16 One is please be kind or more positive.
    3:38:22 And the other one is, I think, you know, doing the, take a big step back and fuck yourself
    3:38:24 in the face guy.
    3:38:28 How many wolves are inside your mind when you’re tweeting?
    3:38:30 To be clear, a reference from the comedy classic, “Tropic Thunder.”
    3:38:31 “Tropic Thunder.”
    3:38:32 Yeah.
    3:38:33 Legendary movie.
    3:38:34 Yes.
    3:38:39 Any zoomers listening to this who haven’t seen that movie, go watch it immediately.
    3:38:40 Yeah.
    3:38:41 There’s nothing offensive about it.
Tom Cruise's greatest performance.
    3:38:55 So yeah, no, look, just start by saying like I’m not supposed to be tweeting at all.
    3:38:56 So yeah.
    3:38:57 Yes.
    3:38:58 Yes.
    3:38:59 Yes.
    3:39:00 But you know.
    3:39:01 So how do you approach that?
    3:39:02 Like, how do you approach what to tweet?
    3:39:03 I mean, I don’t.
Like, so it's a, it's a, I don't, I don't do it well enough.
    3:39:10 It’s mostly an exercise in frustration.
    3:39:13 Look, there’s a glory to it and there’s, there’s, there’s an issue with it and the glory of
    3:39:18 it is like, you know, instantaneous global communication that, you know, in X in particular
    3:39:21 is like the, you know, the town square on all these, you know, social issues, political
    3:39:24 issues, everything else, current events.
    3:39:26 But I mean, look, there’s no question, the format, the format of at least the original
    3:39:29 tweet is, you know, prone to be inflammatory.
    3:39:34 You know, I’m the guy who at one point, the entire nation of India hated me because I
once tweeted something and it turned out that it's still politically sensitive in the entire
continent.
    3:39:43 I stayed up all night that night as, as I became front page headline and leading television
    3:39:46 news in each time zone in India for a single tweet.
    3:39:50 So like the single tweet out of context is a very dangerous thing.
    3:39:55 Obviously, X now has the middle ground where they, you know, they now have the longer form
    3:39:56 essays.
    3:40:01 And so, you know, probably the most productive thing I can do is, is longer form, is longer
    3:40:02 form things.
    3:40:05 You’re not going to do it though, are you?
    3:40:06 I do, I do from time to time.
    3:40:07 Sometimes.
    3:40:08 I should, I should do more of them.
    3:40:11 And then, yeah, I mean, look, and yeah, obviously X is doing great.
And then like I said, like Substack, you know, has become the center for a lot, you
    3:40:15 know, a lot of them.
    3:40:19 I think the best kind of, you know, deeply thought through, you know, certainly intellectual
    3:40:23 content, you know, tons of current events, stuff there as well.
    3:40:26 And then, yeah, so, and then there’s a bunch of other, you know, a bunch of new systems
    3:40:27 that are very exciting.
    3:40:30 So I think one of the things we can look forward to in the next four years is number one, just
    3:40:34 like a massive reinvigoration of social media as a consequence of the changes that are happening
    3:40:35 right now.
    3:40:37 And I’m very excited to see the, to see what’s going to happen with that.
    3:40:42 And then, I mean, it’s happening on X, but it’s now going to happen on other platforms.
    3:40:47 And then the other is crypto is going to come, you know, crypto is going to come right back
    3:40:48 to life.
    3:40:49 And actually, that’s very exciting.
    3:40:54 Actually, that’s worth noting is that’s another trillion dollar question on AI, which is in
    3:40:58 a world of pervasive AI, and especially in a world of AI agents, imagine a world of billions
    3:41:03 or trillions of AI agents running around, they need an economy.
    3:41:07 And crypto, in our view, happens to be the ideal economic system for that, right?
    3:41:08 Because it’s a programmable money.
    3:41:10 It’s a very easy way to plug in and do that.
    3:41:13 And there’s this transaction processing system that can do that.
    3:41:16 And so I think the crypto-AI intersection, you know, is potentially very, a very, very
    3:41:17 big deal.
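A minimal sketch of the "programmable money" idea with web3.py (assuming a recent version of the library): an agent-side check of whether its wallet can cover a payment before acting. The RPC endpoint and wallet address are placeholders, not real infrastructure.

```python
from decimal import Decimal
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # placeholder RPC endpoint

agent_wallet = "0x000000000000000000000000000000000000dEaD"  # placeholder address
price_wei = w3.to_wei(Decimal("0.01"), "ether")  # cost of the service the agent wants to buy

balance_wei = w3.eth.get_balance(agent_wallet)
if balance_wei >= price_wei:
    print("Agent can pay for the service on-chain.")
else:
    print("Insufficient funds; the agent needs to earn or request a top-up first.")
```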
    3:41:22 And so that was, that was going to be impossible under the prior regime.
    3:41:25 And I think under the new regime, hopefully, it’ll be something we can do.
Almost for fun, let me ask about a friend of yours, Yann LeCun, what are your top 10 favorite things
about Yann LeCun?
    3:41:37 He’s a, I think he’s a, he’s a brilliant guy.
    3:41:38 I think he’s important to the world.
    3:41:41 I think you guys disagree on a lot of things.
    3:41:44 But I personally like vigorous disagreement.
    3:41:48 I, as a person in the stands, like to watch the gladiators go at it.
    3:41:50 No, he’s a super genius.
I mean, look, he, I wouldn't say we're super close, but you know, casual, casual friends,
    3:41:56 I worked with him at Meta, you know, he’s the chief scientist at Meta for a long time
    3:42:02 and it still, you know, works with us and, and, you know, and as obviously as a legendary
    3:42:06 figure in the field and one of the main people responsible for what’s happening, it’s my
    3:42:10 serious observation would be that it’s, it’s, it’s the thing I keep, I’ve talked to him
    3:42:13 about for a long time and I keep trying to read and follow everything he does is he’s
    3:42:19 probably, he is the, I think, see if you agree with this, he is the smartest and most credible
critic of LLMs as the path to AI.
And he's not, you know, there's certain, I would say, troll-like characters who are
just like crapping on everything, but like, Yann has like very deeply thought through basically
theories as to why LLMs are an evolutionary dead end.
    3:42:40 And I actually like, I try to do this thing where I try to model, you know, I try to have
    3:42:43 a mental model of like the two different sides of a serious argument.
    3:42:46 So I, I’ve tried to like internalize that argument as much as I can, which is difficult
because like we're investing behind LLMs as aggressively as we can.
    3:42:54 So if he’s right, like, that can be a big problem, but like we should also know that.
    3:42:59 And then I sort of use his ideas to challenge all the bullish people, you know, to really
    3:43:01 kind of test their level of knowledge.
So I like to kind of grill people. Like, I'm not, you know, I got my CS degree 35 years
ago.
    3:43:12 So I’m not like deep in the technology, but like if, if to the extent I can understand
    3:43:16 Jan’s points, I can use them to, you know, to really surface a lot of the questions for
    3:43:18 the people who are more bullish.
    3:43:20 And that’s been, I think, very productive.
    3:43:21 Yeah.
    3:43:24 So, yeah, it’s just, it’s very striking that you have somebody who is like that central
    3:43:28 in the space who is actually like a full on, a full on skeptic.
    3:43:31 And you know, and again, you could, this could go different ways.
    3:43:33 He could end up being very wrong.
    3:43:37 He could end up being totally right, or it could be that he will provoke the evolution
    3:43:39 of these systems to be much better than they would have been.
    3:43:40 Yeah.
    3:43:41 He could be both right and wrong.
    3:43:44 And first of all, I do, I do agree with that.
He's one of the most legit and rigorous and deep critics of the LLM path to AGI, you know,
his basic notion is that AI needs to have some physical understanding of the
physical world.
And that's very difficult to achieve with LLMs.
    3:44:05 And that, that is a really good way to challenge the limitations of LLMs and so on.
He's also been a vocal and a huge proponent of open source, which is a whole other thing, which
you have been as well.
    3:44:13 Which is very useful.
    3:44:14 Yeah.
    3:44:15 And that’s been just fascinating to watch.
And anti-doomer.
Anti-doomer.
Yeah.
Yeah.
He's, he's, he's very anti-doomer.
    3:44:21 He embodies.
    3:44:22 He also has many wolves.
    3:44:23 He does.
    3:44:24 He does.
    3:44:25 He does.
    3:44:26 He does.
    3:44:27 So it’s been really, really fun to watch.
    3:44:28 The other two.
    3:44:29 Okay.
    3:44:30 Here’s my other wolf coming out.
    3:44:31 Yeah.
    3:44:36 The other two of the three Godfathers of AI are like radicals, like, like full on left,
    3:44:40 you know, far left, you know, like they, I would say like either Marxists or borderline
    3:44:41 Marxists.
    3:44:44 And they’re like, I think quite extreme in their social political views.
And I think that feeds into their doomerism.
And I think, you know, they, they, they are lobbying for, like, draconian government,
I think what would be ruinously destructive government legislation and regulation.
    3:44:58 And so it’s, it’s actually super helpful, super, super helpful to have Jan as a counterpoint
    3:44:59 to those two.
    3:45:00 Another fun question.
    3:45:02 Our mutual friend, Andrew Huberman.
    3:45:03 Yes.
First, maybe what do you love most about Andrew and second, what score on a scale of one to
10 do you think he would give you on your approach to health?
    3:45:12 Oh, three.
    3:45:13 Physical three.
    3:45:15 You think you score that high, huh?
    3:45:16 Okay.
    3:45:17 That’s good.
    3:45:18 Exactly.
    3:45:23 Well, so he did, he convinced me to stop drinking alcohol, which was a big deal.
    3:45:24 Successfully.
    3:45:27 Well, it was like my, other than my family, it was my favorite thing in the world.
    3:45:29 And so it was a major, major reduction.
    3:45:32 Like having like a glass of scotch at night was like a major, like it was like the thing
    3:45:33 I would do to relax.
    3:45:38 So he has profoundly negatively impacted my emotional health.
    3:45:43 I blame him for making me much less happy as a person, but much, much, much healthier.
    3:45:44 Physically healthier.
    3:45:46 So that, that I credit him with that.
    3:45:48 I’m glad I did that.
    3:45:50 But then his sleep stuff, like, yeah, I’m not doing any of that.
    3:45:51 Yeah.
    3:45:52 I have no interest in his sleep.
    3:45:53 Shit.
    3:45:54 Like, no.
    3:45:57 This whole light, natural light, no, we’re not doing that.
    3:45:58 Too hardcore for this.
    3:46:01 I don’t see any, I don’t see any natural, I don’t see any natural light in here.
    3:46:02 It’s all covered.
    3:46:03 It’s all horrible.
    3:46:04 And I’m very happy.
    3:46:09 I would be very happy living and working here because I’m totally happy without natural
    3:46:10 light.
    3:46:11 In darkness.
    3:46:12 It must be a metaphor for something.
    3:46:13 Yes.
    3:46:14 It’s a test.
    3:46:16 Look, it’s a test of manhood as to whether you can have a blue screen in your face for
    3:46:17 three hours and then go right to sleep.
    3:46:22 Like I don’t understand why you should want to take shortcuts.
    3:46:25 I now understand what they mean by toxic masculinity.
    3:46:29 All right.
So let's see, you're exceptionally successful by most measures, but what to you is the definition
of success?
    3:46:43 I would probably say it is a combination of two things.
    3:46:48 I think it is contribution.
    3:46:56 So have you done something that mattered ultimately and specifically a matter to people?
    3:47:01 And then the other thing is, I think happiness is either overrated or almost a complete myth.
    3:47:05 And in fact, interesting, Thomas Jefferson did not mean happiness the way that we understand
it when he said "pursuit of happiness" in the Declaration of Independence.
    3:47:16 He meant it more of the Greek meaning, which is closer to satisfaction or fulfillment.
    3:47:23 So I think about happiness as the first ice cream cone makes you super happy, the first
    3:47:27 mile of the walk in the park during sunset makes you super happy.
    3:47:33 The first kiss makes you super happy, the thousandth ice cream cone, not so much.
    3:47:38 The thousandth mile of the walk through the park, the thousandth kiss can still be good,
    3:47:42 but maybe just not right in a row.
    3:47:46 And so happiness is this very fleeting concept and the people who anchor on happiness seem
    3:47:48 to go off the rails pretty often.
    3:47:54 It’s sort of the deep sense of having been, I don’t know how to put it, useful.
    3:48:00 So that’s a good place to arrive at in life.
    3:48:01 Yeah, I think so.
    3:48:02 Yeah.
    3:48:03 I mean, can you sit?
    3:48:04 Yeah.
Who was it who said the source of all the ills in the world is man's inability to sit in
    3:48:11 a room by himself doing nothing?
    3:48:14 But if you’re sitting in a room by yourself and you’re like, “All right,” four in the
    3:48:18 morning, it’s like, “All right, have I lived up to my expectation of myself?”
    3:48:24 Like, if you have, the people I know who feel that way are pretty centered and generally
seem very, I don't know how to put it, pleased, proud, calm, at peace.
    3:48:40 The people who are sensation seekers, by the way, there’s certain entrepreneurs, for example,
    3:48:45 who are like in every form of extreme sport and they get huge satisfaction out of that.
    3:48:48 Or there’s sensation seeking in sort of useful and productive ways.
Larry Ellison was always like that, Zuckerberg was like that.
And then there's a lot of entrepreneurs who end up in, you know, drugs, sexual escapades that
seem like they'll be fun at first and then backfire.
    3:49:07 Yeah, but at the end of the day, if you’re able to be at peace by yourself in a room
    3:49:08 at 4 a.m.
    3:49:09 Yeah.
    3:49:15 I would even say happy, but I know, I understand Thomas Jefferson didn’t mean it the way maybe
    3:49:20 I mean it, but I can be happy by myself at 4 a.m. with a blue screen.
    3:49:21 That’s good.
    3:49:22 Exactly.
    3:49:23 Staring at cursor.
    3:49:24 Exactly.
As a small tangent, a quick shout out to an amazing interview you did with Bari Weiss
and just to her in general, Bari Weiss of The Free Press.
She has a podcast called “Honestly with Bari Weiss.”
    3:49:38 She’s great.
    3:49:39 People should go listen.
    3:49:45 You were asked if you believe in God.
    3:49:49 One of the joys, see we talked about happiness, one of the things that makes me happy is making
    3:49:50 you uncomfortable.
    3:49:51 Thank you.
    3:49:55 So this question is designed for, many of the questions today are designed for that.
    3:50:01 You were asked if you believe in God and you said after a pause, you’re not sure.
    3:50:09 So it felt like the pause, the uncertainty there was some kind of ongoing search for wisdom
    3:50:11 and meaning.
    3:50:14 Are you in fact searching for wisdom and meaning?
    3:50:15 I guess I put it this way.
    3:50:21 There’s a lot to just understand about people and then I feel like I’m only starting to
    3:50:29 understand and that’s certainly a simpler concept than God.
    3:50:33 So that’s what I’ve spent a lot of the last 15 years trying to figure out.
    3:50:37 I feel like I spent my first like whatever 30 years figuring out machines and then now
    3:50:41 I’m spending 30 years figuring out people, which turns out to be quite a bit more complicated.
    3:50:47 And then I don’t know, maybe God’s the last 30 years or something.
    3:50:52 And then look, I mean, just like Elon is just like, okay, the known universe is very complicated
    3:50:53 and mystifying.
I mean, every time I pull up astronomy, I get super into astronomy and it's like, daddy,
how many galaxies are there in the universe? And how many galaxies are there in the universe?
    3:51:04 100 billion.
    3:51:05 Okay.
    3:51:06 Like how?
    3:51:07 Yeah.
    3:51:08 Yeah.
    3:51:11 Like how is that freaking possible?
    3:51:16 Like what, like it’s just, it’s such a staggering concept that I-
    3:51:21 I actually wanted to show you a tweet that blew my mind from Elon from a while back.
    3:51:25 He said, Elon said, as a friend called it, this is the ultimate skill tree.
    3:51:31 This is a wall of galaxies, a billion light years across.
    3:51:32 So these are all galaxies.
    3:51:33 Yeah.
    3:51:36 Like what the, like how, how is it that big?
    3:51:37 Like how the hell?
    3:51:40 I’m like, you know, I can read the textbook into this and then that and the whatever eight
    3:51:42 billion years and the big bang and the whole thing.
    3:51:44 And then it’s just like, all right, wow.
    3:51:48 And then it’s like, all right, the big bang, all right, like what was, what was before the
    3:51:49 big bang?
    3:51:56 Do you think we’ll ever, we humans will ever colonize like a galaxy and maybe even go beyond?
    3:51:57 Sure.
    3:51:58 I mean, yeah.
    3:51:59 I mean, in the fullness of time.
    3:52:00 Yeah.
    3:52:01 So you have that kind of optimism.
    3:52:02 You have that kind of hope that extends across thousands of years.
    3:52:03 In the fullness of time.
    3:52:04 I mean, yeah.
    3:52:06 I mean, yeah, you know, all the, all the problems, all the challenges with it that I do, but
    3:52:07 like, yeah, why not?
    3:52:10 I mean, again, in the fullness of time, it’ll, it’ll take a long time.
    3:52:12 You don’t think we’ll destroy ourselves?
    3:52:13 No.
    3:52:14 I doubt it.
    3:52:15 I doubt it.
    3:52:18 And, you know, fortunately we have Elon giving us, giving us the backup plan.
    3:52:19 So I don’t know.
    3:52:21 Like I grew up, you know, real Midwest sort of just like conventionally kind of Protestant
    3:52:22 Christian.
    3:52:25 It never made that much sense to me.
    3:52:26 Got trained as an engineer and a scientist.
    3:52:27 I’m like, oh, that definitely doesn’t make sense.
    3:52:31 I’m like, I know, I’ll spend my life as an empirical, you know, rationalist and I’ll figure
    3:52:32 everything out.
    3:52:37 You know, and then again, you walk up against these things, you know, you bump up against
    3:52:40 these things and you’re just like, all right, I like, okay, I guess there’s a scientific
    3:52:44 explanation for this, but like, wow.
    3:52:46 And then there’s like, all right, where did that come from?
    3:52:47 Right.
    3:52:50 And then how far back can you go on the causality chain?
    3:52:51 Yeah.
    3:52:54 And then, yeah, I mean, then even, even just, you know, experiences that we all have on
    3:52:56 earth, it’s hard to, it’s hard to rationally explain it all.
    3:53:01 And then, you know, so yeah, I guess I just say I’m kind of radically open-minded at peace
    3:53:04 with the fact that I’ll probably never know.
    3:53:07 The other thing that has happened, and maybe the more practical answer to the question
    3:53:12 is, I think I have a much better understanding now of the role that religion plays in society
    3:53:14 than I didn’t have when I was younger.
    3:53:18 And my partner, Ben, has a great, I think he quotes his father on this.
    3:53:22 He’s like, if man does not have a real religion, he makes up a fake one.
    3:53:25 And the fake ones go very, very badly.
    3:53:30 And so there’s this class, it’s actually really funny, there’s this class of intellectual,
    3:53:33 there’s this class of intellectual that has what appears to be a very patronizing point
    3:53:37 of view, which is, yes, I’m an atheist, but it’s very important that the people believe
    3:53:40 in something, right?
    3:53:43 And Marx had like the negative view on that, which is religion is the opiate of the masses,
but there's a lot of like right-wing intellectuals who are themselves, I think, pretty atheist
or agnostic, that are like, it's deeply important that the people be Christian or something
    3:53:50 like that.
    3:53:53 And on the one hand, it’s like, wow, that’s arrogant and presumptive.
    3:53:58 But on the other hand, you know, maybe it’s right because, you know, what have we learned
    3:54:02 in the last hundred years is in the absence of a real religion, people will make up fake
    3:54:03 ones.
    3:54:07 There’s this writer, there’s this political philosopher who’s super interesting on this
named Eric Voegelin.
    3:54:12 And he wrote this, he wrote in that sort of mid part of the century, mid and late part
    3:54:13 of the 20th century.
He was like born in, I think, like 1900 and like died in like '85.
    3:54:23 So he saw the complete run of communism and Nazism and himself, you know, fled, I think
    3:54:26 he fled Europe and, you know, the whole thing.
    3:54:30 And, you know, his sort of big conclusion was basically that both communism and Nazism
and fascism were basically religions, but like in the deep way of religions, like
    3:54:39 they were, you know, we call them political religions, but they were like actual religions.
    3:54:43 And, you know, they were the, they were what Nietzsche forecasted when he said, you know,
    3:54:47 God is dead, we’ve killed him and we won’t wash the blood off our hands for a thousand
    3:54:48 years, right?
    3:54:53 Is we will come up with new religions that will just cause just mass murder and death.
    3:54:57 And like, you read his stuff now and you’re like, yep, that happened, right.
And then of course, as fully, you know, elite moderns, of course, we couldn't possibly
    3:55:02 be doing that for ourselves right now.
    3:55:04 But of course we are.
And you know, I would argue that Eric Voegelin for sure would argue that the last 10 years,
you know, we have been in a religious frenzy, you know, that woke has been a full
scale religious frenzy and has had all of the characteristics of a religion, including
everything from patron saints to holy texts to, you know, sin.
Wokeness has had, I think, every single aspect of an actual religion other than redemption, right, which is maybe
like the most dangerous religion you could ever come up with is the one where there's
no forgiveness.
    3:55:36 Right.
And so I think if Voegelin were alive, I think he would have zeroed right in on that and would
have said that.
    3:55:43 And, you know, we just like sailed right off.
I mentioned earlier, like we, we somehow rediscovered the religions of the Indo-Europeans,
who were all into identity politics and environmentalism.
    3:55:52 Like, I don’t think that’s an accident.
    3:55:58 So it’s anyway, like there, there is something very deep going on in the human psyche on
    3:56:07 religion that is not dismissible and needs to be taken seriously, even if one struggles
    3:56:10 with the, the specifics of it.
I think I speak for a lot of people when I say it has been a real joy and, for me, an honor to get
to watch you seek to understand the human psyche, as you described, in that 30-year
part of your life.
    3:56:26 And it’s been an honor to talk with you today.
Thank you, Marc.
Thank you, Lex.
    3:56:29 Is that it?
    3:56:31 That’s only, only how long is that?
Four hours with Marc Andreessen is like 40 hours of actual content.
    3:56:41 So I’ll accept being one of the short ones for the listener.
Marc looks like he's ready to go for 20 more hours and I need a nap.
Thank you, Marc.
Thank you, Lex.
Thanks for listening to this conversation with Marc Andreessen.
    3:56:57 To support this podcast, please check out our sponsors in the description.
    3:57:02 And now let me leave you with some words from Thomas Sowell.
    3:57:09 It takes considerable knowledge just to realize the extent of your own ignorance.
    3:57:21 Thank you for listening and hope to see you next time.

    Marc Andreessen is an entrepreneur, investor, co-creator of Mosaic, co-founder of Netscape, and co-founder of the venture capital firm Andreessen Horowitz.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep458-sc
    See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

    Transcript:
    https://lexfridman.com/marc-andreessen-2-transcript

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Marc’s X: https://x.com/pmarca
    Marc’s Substack: https://pmarca.substack.com
    Marc’s YouTube: https://www.youtube.com/@a16z
    Andreessen Horowitz: https://a16z.com

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Encord: AI tooling for annotation & data management.
    Go to https://encord.com/lex
    GitHub: Developer platform and AI code editor.
    Go to https://gh.io/copilot
    Notion: Note-taking and team collaboration.
    Go to https://notion.com/lex
    Shopify: Sell stuff online.
    Go to https://shopify.com/lex
    LMNT: Zero-sugar electrolyte drink mix.
    Go to https://drinkLMNT.com/lex

    OUTLINE:
    (00:00) – Introduction
    (12:46) – Best possible future
    (22:09) – History of Western Civilization
    (31:28) – Trump in 2025
    (39:09) – TDS in tech
    (51:56) – Preference falsification
    (1:07:52) – Self-censorship
    (1:22:55) – Censorship
    (1:31:34) – Jon Stewart
    (1:34:20) – Mark Zuckerberg on Joe Rogan
    (1:43:09) – Government pressure
    (1:53:57) – Nature of power
    (2:06:45) – Journalism
    (2:12:20) – Bill Ackman
    (2:17:17) – Trump administration
    (2:24:56) – DOGE
    (2:38:48) – H1B and immigration
    (3:16:42) – Little tech
    (3:29:02) – AI race
    (3:37:52) – X
    (3:41:24) – Yann LeCun
    (3:44:59) – Andrew Huberman
    (3:46:30) – Success
    (3:49:26) – God and humanity

    PODCAST LINKS:
    – Podcast Website: https://lexfridman.com/podcast
    – Apple Podcasts: https://apple.co/2lwqZIr
    – Spotify: https://spoti.fi/2nEwCF8
    – RSS: https://lexfridman.com/feed/podcast/
    – Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
    – Clips Channel: https://www.youtube.com/lexclips

  • #457 – Jennifer Burns: Milton Friedman, Ayn Rand, Economics, Capitalism, Freedom

    AI transcript
    0:00:07 The following is a conversation with Jennifer Burns, a historian of ideas, including the
    0:00:14 evolution of economic, political and social ideas in the United States in the 20th century to today.
0:00:22 She wrote two biographies, one on Milton Friedman and the other on Ayn Rand, both of which I highly
    0:00:30 recommend. This was a super technical and super fascinating conversation. At the end, I make a
    0:00:36 few comments about my previous conversation with President Zelensky, for those of you who may be
    0:00:43 interested. And now a quick few second mention of a sponsor. Check them out in the description,
    0:00:51 it’s the best way to support this podcast. We’ve got brain.fm for focus, github for programming and
0:00:59 AI, LMNT for delicious electrolytes, Shopify for merch and AG1 for health. Choose wisely, my
0:01:06 friends. Also, if you want to get in touch with me, go to lexfridman.com/contact and now
    0:01:10 onto the full ad reads. As always, no ads in the middle. I try to make this interesting,
    0:01:16 but if you must skip them, please still check out our sponsors. If I can only speak English,
    0:01:24 I enjoy their stuff. Maybe you will too. Click the links by the stuff. Glory shall be ours.
    0:01:32 This episode is brought to you by brain.fm, a platform that offers music specifically made
    0:01:39 for focus. I talk about listening to brown noise a lot. It’s actually funny, but I don’t believe
    0:01:46 brain.fm has brown noise. But that’s not what I use it for. I usually play brown noise because
    0:01:52 basically everything has brown noise. YouTube has brown noise, Spotify has brown noise.
    0:02:01 I use that as one layer and as the second layer, I’ll use music from brain.fm. So there’s all kinds
    0:02:12 of almost like ethereal soundtracks. Maybe there’s a bit of like a techno beat. I like the stuff with
    0:02:20 the beat, just very light. Where the beat does not have this kind of edge that distracts me,
    0:02:26 but there’s still a rhythm to it. So I’ll have that plus a bit of brown noise and that’s like a
    0:02:32 really beautiful focus. I believe these ads are for the episode with Jennifer Burns, Milton Friedman.
    0:02:36 Did you know that he wrote Capitalism and Freedom in just six months
    0:02:42 while teaching full-time? Also, did you know that Brendan Eich wrote JavaScript
    0:02:47 in, I think, a week, maybe 10 days? Passion and focus, ladies and gentlemen,
    0:02:54 gets a lot of stuff done. And you should try to increase yours by trying brain.fm for free
    0:03:01 for 30 days by going to brain.fm/lex. That’s brain.fm/lex for 30 days free.
    0:03:11 This episode is also brought to you by GitHub and GitHub Co-Pilot, the super amazing AI that
    0:03:18 helps you program. If you don’t know what GitHub Co-Pilot is, ladies and gentlemen, you are missing
    0:03:28 out. I’m going to be doing a lot of programming podcasts coming up and I mean I really just don’t
    0:03:37 even program without AI anymore. It is true, it is fully an assistant at this point, but not a
    0:03:44 kind of guide. So I’ve not really had success with anything agentic. Really, the thing I’m
    0:03:48 interested in, especially when I’m actually trying to get work done, I’m interested in maximizing
    0:03:54 my productivity. And for that, the difficult things that an agent is supposed to be able to do, I
    0:04:00 still do faster and better, those difficult decisions. I don’t like the task of fixing
    0:04:12 decisions made by agents, but fixing code generated by Co-Pilot, for example, that is much
    0:04:16 more pleasant. It’s much more fun, it’s much more efficient, especially because the mistakes are not
    0:04:25 that numerous. Anyway, I have a lot of people writing to me trying to get into programming.
    0:04:31 One of the things you should definitely get to know is GitHub and you should get to know GitHub
    0:04:38 Co-Pilot and all the suite of developer tools they have to help you write code with the help of AI.
    0:04:51 It’s great. To try out GitHub Co-Pilot for free, go to gh.io/copilot. That’s gh.io/copilot.
    0:05:00 This episode is also brought to you by Element, my daily zero sugar and delicious electrolyte mix.
    0:05:06 Did you know that Ayn Rand’s daily diet was black coffee, french fries and cigarettes?
    0:05:14 Well, she should have been consuming some element. I mean, listen, let’s not be judgmental here.
    0:05:21 Churchill did quite a few impactful things in the world and his diet and
    0:05:29 liquids and substances he consumed were just atrocious and the guy was out of shape and it was
    0:05:36 just a mess. But he lived a long life and a productive life and one of the most impactful
    0:05:43 and influential humans in history. So there you go. But it’s not like element
    0:05:53 makes you not impactful. It just is a little boost, but it’s not going to get your shit done for you.
    0:05:59 You still need to take big risks and take on the world and do epic shit,
    0:06:06 but might as well be a little healthier for it, especially when you’re doing like a crazy physical
    0:06:14 endurance event. Your electrolytes need to be on point. Get a sample pack for free with any
    0:06:22 purchase. Try it at drinkLMNT.com/lex. This episode is also brought to you by Shopify,
    0:06:27 a platform designed for anyone to sell anywhere with a great looking online store.
    0:06:35 I often talk about capitalism when I do the ad read for Shopify and no better episode than
    0:06:43 one that for many hours focuses on the work of Milton Friedman, who was the seminal figure of the
    0:06:51 Chicago School of Economics. And Ayn Rand, who is basically the most hardcore
    0:06:58 defender of capitalism. Howard Roark, whose architectural principles we talk about with Jennifer
    0:07:07 Burns, I mean is the embodiment of this spirit of, “Fuck you, I’ll do whatever I want. I’ll do it my
    0:07:16 way.” That radical individualism that makes up America, that makes up the individual gears that
    0:07:23 make up the machinery of capitalism. That is the American way and that has some downsides,
    0:07:28 but mostly it’s upsides. It’s the reason we have so many amazing things and the quality of life is
    0:07:33 going up and the productivity, the GDP is going up, not just in the United States, but across the
    0:07:41 world, thanks to the incredible innovation by US inventors, US companies. So anyway, Shopify is
    0:07:46 just one implementation of that. First of all, of course, the engineers that create Shopify,
    0:07:51 but if you yourself want to sell stuff, you’re creating something and you want to sell it,
    0:08:01 Shopify enables you to do that. Sign up for $1 per month trial, period, at Shopify.com/Lex.
    0:08:06 That’s all lowercase. Go to Shopify.com/Lex to take your business to the next level today.
    0:08:14 This episode is also brought to you by AG1, an all-in-one daily drink to support better health
    0:08:20 and peak performance as I slide slowly down in my chair. It is late late at night,
    0:08:27 embarrassingly so. And I’ve lost all energy and I’m slowly losing my mind.
    0:08:38 And there’s a cup next to me that I am swirling gradually. It is a cup of ice with some water
    0:08:46 and element in it, but it makes me feel like maybe it’s a whiskey. And whiskey is probably
    0:08:52 something I need at this moment. But let us focus on the essentials. And definitely not whiskey,
    0:08:57 but something way healthier, which is AG1. I already had it twice today, did crazy exercise,
    0:09:05 didn’t sleep much the night before, had to do a super long podcast, had to do a lot of reading.
    0:09:13 It was just an insane day, my friends. I’m so grateful to be alive. And yeah, there’s the little
    0:09:20 joys of drinking a bit of AG1. Does it do much for me? I don’t know. It makes me feel like it does.
    0:09:26 It’s like a really nice multivitamin. Brings joy to my life. I miss it when it’s not there.
    0:09:29 Who knows? We’re all going to die in the end.
    0:09:38 Anyway, they’ll give you a one month supply of fish oil when you sign up with drinkag1.com/lex.
    0:09:45 This is the Lex Friedman podcast. To support it, please check out our sponsors in the description.
    0:09:55 And now, dear friends, here’s Jennifer Burns.
    0:10:13 You have written two biographies, one on Milton Friedman and one on Ayn Rand. So if we can,
    0:10:16 we will focus on each one separately. But first, let’s talk about the ideas that
    0:10:22 two of them held in common, the value of individual freedom, skepticism of collectivism,
    0:10:27 and the ethics of capitalism. Can you talk about the big picture ideas they converge on?
    0:10:33 Yeah. So Milton Friedman and Ayn Rand, in the biggest picture, they’re both
    0:10:38 individualists and they’re skeptical of collectivities and collectivism.
    0:10:42 So their unit of analysis is the individual, what’s good for the individual,
    0:10:46 what works for the individual, and their understanding of society flows from that.
    0:10:55 They also both use this focus on individualism to justify and to support capitalism as a social
    0:11:01 and economic system. So we can put them in a similar category. We can call them individualists.
    0:11:06 We could call them libertarians of a sort. They’re also really different in how they approach
    0:11:14 capitalism, how they approach thinking. Ayn Rand developed her own moral and philosophical system
    0:11:20 to justify individualism and to connect the individual to capitalism and to support
    0:11:25 capitalism as a social and economic system. Friedman struggles a bit more with how to justify
    0:11:32 capitalism and he’ll ultimately come down to freedom as his core value, his God, as he says.
    0:11:38 And so freedom does connect back to the individual, but he’s not justifying capitalism for his own
    0:11:44 sake. He’s justifying it for its ability to underwrite freedom in a social sense and also in the
    0:11:48 individual sense. At a high level, are there interesting differences between them? You already
    0:11:53 mentioned a few, maybe in terms of who they are personally, maybe in terms of how they approach
    0:11:58 the justification for capitalism or maybe other ways. Yeah, for sure. So beyond this idea that
    0:12:03 Milton Friedman takes a while to come to his justification of capitalism,
    0:12:09 whereas Ayn Rand kind of has it from the start. She really focuses on the core quality of
    0:12:15 rationalism and rationality. Rationality is the defining feature of human beings. And so
    0:12:22 she works from there, whereas Milton Friedman eventually converges on this idea of freedom.
    0:12:27 So that’s one part of it. The other is their intellectual styles are really, really different.
    0:12:32 Their interpersonal styles are really different. So Friedman has big ideas, big principles that
    0:12:38 guide him, but he’s also deeply empirical. He spends most of his career doing historical research,
    0:12:43 economic research, pulling data from how people actually make economic decisions and live in
    0:12:49 the world and using them to test and refine his theories. Where Rand, to some degree, we could
    0:12:53 say she’s empirical and that she lives through the Russian Revolution and takes a very big lesson
    0:13:01 from that. But her style of thinking is really first principles, an axiomatic approach, going from
    0:13:08 the basic idea of rationality and then playing that out in different spheres. And so those are
    0:13:14 just very different intellectual approaches. And then they lead in some ways to really different
    0:13:21 ways of thinking about how you get things done in the world. Ayn Rand is a purist. She wants to
    0:13:28 start with the pure belief. She doesn’t want it to be diluted. One of her favorite sayings was,
    0:13:32 it’s earlier than you think. In other words, we’re still moving towards a place where we can really
    0:13:38 hold and express these ideals purely. Friedman, although he didn’t use this terminology, was
    0:13:43 much more half a loaf guy. I’ll take what I can get and then I’ll try to move to where I really
    0:13:49 want to be. But he is able to compromise, especially when he moves from being an economist into being
    0:13:55 more of a political thinker. And so that’s a really different intellectual style. And then
    0:14:02 it also plays out in their lives in that Ayn Rand is incredibly schismatic. I mean, she wants her
    0:14:08 friends to believe what she believes and support what she supports. And she’s willing to break
    0:14:14 a relationship if it doesn’t match. Milton Friedman, he also does tend to have friends
    0:14:21 who agree with him. Yet he’s always willing to debate his opponents and he’s willing to do so
    0:14:27 with a smile on his face. He’s the happy warrior. And he actually will win a lot of debates simply
    0:14:33 by his emotional affect and his cheerfulness and his confidence, where Rand will lose debates because
    0:14:39 she gets so angry in the face of disagreement. So yeah, they have a lot of similarities and a
    0:14:43 lot of differences. And it’s been really fascinating to kind of dive deep into both of them.
    0:14:50 I just re-listened to Ayn Rand’s, I think, last lecture or at least it’s called that. And just
    0:14:58 the confrontational nature of how she answers questions or how she addresses critics and so
    0:15:05 on, there is a kind of charisma to that. So I think both of them are very effective at winning over
    0:15:12 sort of popular support, but in very different styles. It seems like Ayn Rand is very cranky,
    0:15:16 but there’s, I mean, it’s the most charismatic, cranky person I think I’ve ever listened to.
    0:15:24 Yeah, I mean, people talked about her meeting her and coming to believe in her ideas
    0:15:30 in a similar way as they did with Marxism in that suddenly everything made sense.
    0:15:33 And that when they came to believe in objectivism, they felt they had this
    0:15:39 engine for understanding the entire world. Now after a while, for most people, that then became
    0:15:45 confining. But yeah, that’s certainty. And Friedman had some of that as well. He clothed it differently.
    0:15:50 He clothed it in happiness, where Rand kind of clothed it, as you said, in crankiness or anger.
    0:15:55 I mean, there’s also an arc to Rand. She gets kind of angrier and angrier and crankier and crankier
    0:16:00 over the course of her life. What I enjoyed about my research is I was able to get into this early
    0:16:06 moment when she was different and a little more open. And then I kind of watched her close
    0:16:12 her heart in over time. Would it be fair to say that Milton Friedman had a bit more intellectual
    0:16:19 humility, where he would be able to sort of evolve over time and be convinced by the reality of the
    0:16:26 world to change sort of the nuances of policy, the nuances of how he thought about economics
    0:16:31 or about the world? Yeah, absolutely. Friedman believed in being able to say I was wrong.
    0:16:36 And there are some things he said he was wrong about, we’ll delve more into
    0:16:42 monetarism and monetary policy. But he was able to talk about the ways his ideas hadn’t mapped
    0:16:46 onto the world the way he thought they would. He does a really interesting interview at the
    0:16:53 end of his life where he’s beginning to voice some doubts about globalization, which was,
    0:16:56 he was sort of a prophet of globalization, a cheerleader of globalization. He really thought
    0:17:00 it would lead to a better world in all respects. And towards the end of his life, it’s about two
    0:17:07 years before he dies, there’s a note of doubt about how globalization unfolded and what it would mean,
    0:17:11 particularly for the American worker. And so you can see him still thinking. And that to me,
    0:17:17 I had sort of assumed he became crankier and crankier and more and more set in his ways. And
    0:17:20 of course, there’s a phase where he does become that way, especially since he’s in the public
    0:17:24 eye and there’s not room for nuance. But to find in the last years of his life,
    0:17:30 of his life, him being so reflective, that was absolutely not something Rand could do.
    0:17:34 I think there’s a thread throughout this conversation where we should actually also
    0:17:40 say that you’re kind of a historian of ideas. I am a historian of ideas, yes.
    0:17:48 And so we’re talking about today, in part, about two people who kind of fought for ideas,
    0:17:53 for an idea, like we mentioned, freedom for capitalism. And they did it in very different
    0:18:00 ways. And it’s so interesting to see sort of the impact they both had and how their
    0:18:08 elucidation explanation of those ideas like reverberated throughout society and how we together
    0:18:14 as a society figure out what works, the degree to which they have influence on the public,
    0:18:17 the degree to which they have influence on individual administrations like the Reagan
    0:18:24 administration, Nixon and so on, and how it might, like, fade away and then come back
    0:18:31 in the modern times. And it’s so interesting if you just see this whole world as a game of ideas
    0:18:38 where we were like pushing and pulling and trying to figure stuff out. A bunch of people got real
    0:18:45 excited over a hundred years ago about communism and then they tried stuff out and then the
    0:18:52 implementation broke down and we keep playing with ideas. So these are the two greats of playing
    0:18:55 with ideas. I think that’s a thread that just runs through this.
    0:19:01 Yeah. And kind of pushing back against that movement towards communism, social democracy,
    0:19:06 but one difference that I really should emphasize, Rand is a writer of fiction.
    0:19:10 She’s a philosopher, but she’s also a writer of fiction. So she is working
    0:19:16 almost in the mythic register, much more in the psychological register. She’s creating characters
    0:19:22 that people identify with and people relate to experiences they’ve had. And that’s one of the
    0:19:27 reasons she hits so deep. And she’s also offering people, I read all the fan letters to her. People
    0:19:35 would say things like, “I read The Fountainhead and now I’m getting a divorce.” Having
    0:19:40 just these incredible realizations. Milton Friedman didn’t get such things.
    0:19:45 Or I’ll meet someone and they’ll say to me,
    0:19:51 “Ayn Rand is the reason I went to medical school.” A couple of women said this to me a few years back.
    0:19:55 It never even occurred to me that I could be a doctor until I read Ayn Rand and I said,
    0:19:59 “I’m going to go to medical school.” And so she has that really intense impact on people.
    0:20:07 So she thought of herself as rational. She thought of rationality as what she was doing,
    0:20:14 but she was actually doing a mythopoetic psychological work as well. Whereas Friedman,
    0:20:19 on the one hand, was much more rational. There’s a whole set of economic thinking and he provides
    0:20:25 a rational framework for understanding the world and it’s the framework of neoclassical economics.
    0:20:32 At the same time, he does pull on mythologies of the idea of America and the Gilded Age,
    0:20:38 the frontier mythology, the individual immigrant, the settler mythology. He pulls on these,
    0:20:44 but he doesn’t create them and he’s more kind of playing a tune he already has.
    0:20:50 Whereas I think Rand really does something a little bit deeper in her ability to reach into
    0:20:57 people’s psyche and then take that emotional, psychological experience and fuse it to an
    0:21:03 intellectual world and a political world. And that’s really what makes her so powerful.
    0:21:09 And so I think she comes back in to relevancy in a different way than Friedman does because
    0:21:16 I think in some way she’s tapped into a more universal human longing for independence and
    0:21:22 autonomy and self-creation and self-discovery. Nevertheless, there are still pragmatic ideas
    0:21:28 that are still important today for Milton Friedman, even just on the economics level.
    0:21:36 So let’s dig in. Let me try. I took some notes. Let me try to summarize who Milton Friedman is
    0:21:42 and then you can correct me. Okay. So he is widely considered to be one of the greatest,
    0:21:46 the most influential economists in history, not just the 20th century, I think, ever.
    0:21:53 He was an advocate of economic freedom, like we said, and just individual freedom in general.
    0:21:59 He strongly advocated for free market capitalism and limited government intervention in the economy,
    0:22:04 though you do give… I’ve listened to basically everything you have on the internet.
    0:22:08 You give some more depth and nuance on his views on this and in your books.
    0:22:17 He led the famed Chicago School of Economics and he won the Nobel Prize in Economics in 1976.
    0:22:24 He greatly influenced economic policies during the Reagan administration and other administrations.
    0:22:29 He was an influential public intellectual, highly influential, not just among economists.
    0:22:38 He lived 1912 to 2006. So that means he lived and worked through some major world events
    0:22:43 where his ideas were really important, the Great Depression, with the New Deal, World War II,
    0:22:50 with the post-war reconstruction, the rise and fall of the Bretton Woods Monetary System,
    0:22:56 as we may talk about, the Cold War and all the conflicts involved in that,
    0:23:01 sort of the tensions around communism and so on, so the fall of the Soviet Union.
    0:23:08 And also he has some interesting relationships to China’s economic transformation since the 1970s,
    0:23:11 the stagflation of the 1970s, and I’m sure there’s a lot more.
    0:23:19 Can you maybe continue this thread and give a big picture overview of the ideas he is known for?
    0:23:27 Yeah, sure. And that’s a great summary. You learn fast. So let me start with the economics and
    0:23:35 then I can kind of transition to how he used those economic ideas to become a real voice
    0:23:37 in the American conservative movement, the American political realm.
    0:23:43 So I’ll kind of highlight four ideas or contributions or episodes.
    0:23:49 One was his work with Anna Schwartz in revising our understanding of the Great Depression.
    0:23:54 And that’s tightly related to the second, which is the School of Monetarism
    0:24:03 that he and Schwartz really become founders of. Then there is the prediction of stagflation
    0:24:09 and the explanation of that in the 1970s, which really is one of these sort of career-making
    0:24:14 predictions. And we can dig into that. And then in terms of technical economics,
    0:24:21 he’s known for the permanent income hypothesis which he develops with a group of female collaborators
    0:24:27 that I can talk about. So those are kind of four technical pieces and being really brought together
    0:24:32 in what becomes the Chicago School of Economics. He’s undoubtedly the head and the leader of the
    0:24:38 Chicago School of Economics. There’s an earlier generation that he learns from. There’s his
    0:24:44 generation. There’s also a Chicago School of Law and Economics that’s really profoundly influential.
    0:24:48 And then there’ll be kind of a third generation that he’s somewhat distinct from,
    0:24:54 but that goes on to really shape economics. But let me go back to these kind of four pieces,
    0:25:01 and let me start with Great Depression. So Milton Friedman actually lives through the
    0:25:09 Great Depression. He’s in college when it hits, and he is, so he’s in college just 1928 to 1932.
    0:25:16 And he’s aware of the Depression, and he’s deciding, should I study mathematics or should
    0:25:23 I study economics? And he’s had some good economics teachers, but it’s really the context.
    0:25:29 It’s looking around at the slow dissolving of economic prosperity. So he decides to go to
    0:25:34 Chicago. He decides to study economics. And what’s really interesting is that
    0:25:42 the Great Depression is so unexpected. It’s unpredicted. It’s unprecedented. And economists
    0:25:47 are really struggling to know how to respond to it. And so he’s going to arrive at the University
    0:25:54 of Chicago when the field is struggling to know what to do. So he’s in this kind of really open
    0:26:00 space where the institutional economics of the 1920s has failed to predict, which was focused
    0:26:05 on business cycles. This is the irony. Their big thing was charting and understanding business
    0:26:09 cycles. And then we have the biggest business cycle of all time, and they haven’t seen it coming,
    0:26:19 and they don’t have a good explanation for it. And what he will get at Chicago is the remnants of
    0:26:26 the monetary understanding of the economy. And so his teachers, they don’t know exactly what’s
    0:26:33 going on, but they look first to the banking crisis. They look first to the, in 1933, it’s,
    0:26:37 you know, bank runs, failures of, maybe it’s up to a third of American banks. Thousands of banks
    0:26:42 are failing per week. So they’re focused on that. So that’s the first kind of imprint he will have.
    0:26:48 The Great Depression has something to do with a banking system. The second imprint he will have
    0:26:54 is that all of his professors are profoundly concerned about the social crisis. They want
    0:26:59 relief programs. They want them now. They want bank regulation and financial reform. They’re
    0:27:04 very active. This is not laissez-faire by any stretch of the imagination. So Friedman has
    0:27:14 that imprinting. And then about, so that’s, he gets there in ’32, ’36, ’37, the ideas of John Maynard
    0:27:18 Keynes from Britain, which has a different explanation. Keynes has a different explanation of
    0:27:23 the Great Depression, which will kind of make landfall in American economics and be very profoundly
    0:27:29 influential on most American economists, but Friedman already, it’s too late for Friedman. He
    0:27:36 already has a different perspective. So Keynesianism unfolds. I can say more about that, but it basically
    0:27:44 leads to more active federal government participation in the economy. And what underlies
    0:27:49 a lot of that, its adaptation in America particularly, is the idea that capitalism
    0:27:58 has failed. Capitalism has revealed itself to have a profound flaw in that
    0:28:04 its cycles of boom and bust create social instability, chaos. It needs to be tamed. It
    0:28:12 needs to be regulated. And so that becomes the kind of baseline of politics in the United States,
    0:28:16 the understanding of the New Deal, the understanding of the Democratic Party, even to some extent
    0:28:22 the understanding of the Republican Party. And Friedman is never quite sure about that. He has
    0:28:26 a hunch that there’s something else going on, and he does not buy that capitalism has sort of
    0:28:31 ground to a halt, or the other idea is that capitalism has gone through some sort of phase
    0:28:38 transition. And it worked great maybe while we had a frontier. This is a very serious argument
    0:28:44 that people are making. United States used to have a frontier, a place where Europeans hadn’t
    0:28:48 fully settled. Of course, they’re pushing out the native tribes. That’s another story, but
    0:28:53 that this frontier is the engine of economic growth, and the frontier is now over, it’s closed,
    0:28:58 and we’re going to stagnate. There’s a theory of secular stagnation. And so to deal with secular
    0:29:03 stagnation, we’re just going to have to have a more active state. So Friedman is suspicious of all
    0:29:09 these assumptions. And he has this idea that there’s something to do with money. Money is somehow
    0:29:16 important. And so he joins together with Anna Schwartz, who is an economist. She doesn’t at
    0:29:21 this time hold a PhD. She’s working for the National Bureau of Economic Research, and they come
    0:29:27 together to do this study of money in the US economy. And it takes them 12 years to write the
    0:29:33 book. And they’re releasing their ideas, and they’re arguing, and Friedman is writing papers,
    0:29:39 giving talks, saying money’s really important. And nobody’s really believing him. He’s a crank.
    0:29:44 He’s at Chicago. Chicago is a well-known university, but he’s sort of considered a crank.
    0:29:52 And then in ’63, he and Anna Schwartz published this book, and it’s 800 pages. It’s a reinterpretation
    0:29:57 of the history of the United States through money. The central character is money, whether it’s
    0:30:02 specie, greenback, or the US currency. And they have a whole chapter on the Great Depression.
    0:30:08 What they’ve literally done, Schwartz has done most of this. Schwartz has gone to banks and said,
    0:30:13 show me your books. And then she’s added up column by column. How much money is in your vault? How
    0:30:18 much money is on deposit? How much money is circulating? And so they literally have graphs.
    0:30:23 You can see them in the book of how much money has been circulating in the US at various different
    0:30:28 points in time. And when they get to the Great Depression, they find the quantity of money
    0:30:33 available in the economy goes down by a third. And in some ways, this is completely obvious,
    0:30:42 because so many banks have failed. And we don’t have any type of bank insurance at that point.
    0:30:46 So if your bank goes under, your savings are there, the money essentially vanishes. And it’s
    0:30:51 fractional reserve banking, right? So with what you’ve put in, they can loan out up to 90% of their deposits.
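    (A rough aside, not from the episode: the arithmetic of fractional-reserve money is easy to sketch. The reserve ratio and deposit figure below are hypothetical, chosen only to show how deposits multiply on the way up and can vanish the same way on the way down, which is the mechanism behind the Great Contraction discussed here.)

        # Hypothetical sketch of fractional-reserve money creation (Python).
        # A bank keeps reserve_ratio of each deposit and lends out the rest;
        # the loan gets redeposited somewhere, and the cycle repeats.
        def total_deposits(base_deposit: float, reserve_ratio: float, rounds: int = 100) -> float:
            total, deposit = 0.0, base_deposit
            for _ in range(rounds):
                total += deposit
                deposit *= (1 - reserve_ratio)  # the portion lent out and redeposited
            return total

        print(total_deposits(100, 0.10))  # ~1000, roughly base_deposit / reserve_ratio
        # When banks fail and deposits are wiped out, the same multiplier runs in
        # reverse, so the money stock can shrink far more than the cash destroyed.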
    0:30:58 And so Friedman and Schwartz present this argument that what really made the Great
    0:31:03 Depression so bad was this drop in the amount of money, the 30% drop in the money, which they called
    0:31:08 the Great Contraction. And then they go further and they say, well, how did this happen? And why?
    0:31:15 And they pinpoint the Federal Reserve, which is a fairly new institution at that time. And they
    0:31:19 say, what did the Federal Reserve do, the lender of last resort? What did it do in the face of what
    0:31:25 they’re depicting as a massive, unprecedented liquidity crisis? And they find it’s not really
    0:31:32 doing much. And they really dig into the details. And they find that the Federal Reserve has gone
    0:31:37 through a sort of personnel change. And some of the key leaders in the 1920s, Benjamin Strong,
    0:31:42 is one of them. He’s now deceased. And the dominance of the New York Federal Reserve,
    0:31:49 which in their telling is global, it’s interconnected, it’s seen a lot of financial
    0:31:55 things come and go. And they believe that the New York Fed had the understanding to recognize
    0:31:58 this is a liquidity crisis. We should be very generous. We should support all the banks.
    0:32:05 Their influence has diminished in favor of the kind of banks that are more, they don’t say like the
    0:32:08 rubes and the hicks, but it basically is. It’s like, people in charge don’t know what they’re
    0:32:14 doing. And so the Fed pursues this kind of policy of masterly inactivity. They don’t see it as a
    0:32:22 problem. They don’t do much. There’s an enormous liquidity crisis. And that’s their version of
    0:32:27 what the Great Depression is all about, that it’s a financial system meltdown. It’s a liquidity
    0:32:34 crisis. And that in some ways, well, in many ways, they argue very strong counterfactual argument.
    0:32:39 The Federal Reserve could have prevented it, and it did not. And so it becomes then
    0:32:46 an institutional failure and a political failure, not a failure of capitalism as a system.
    0:32:53 And so this book comes out, it’s a blockbuster. And even those economists who’ve been like,
    0:32:58 “Friedman is a crank. I don’t buy it,” are like, “Friedman and Schwartz are onto something.
    0:33:04 Milton Friedman and Anna Schwartz are onto something.” And so that really changes the game. And
    0:33:11 this is also one of his most influential contributions, because Friedman and Schwartz becomes
    0:33:17 the playbook for the Federal Reserve. And we have lived through this, right? The financial crisis,
    0:33:23 the Federal Reserve is ready to loan. COVID, the Federal Reserve is all kinds of new things,
    0:33:30 because no Federal Reserve chair wants to be in the Friedman and Schwartz 2.0 that somebody writes,
    0:33:36 or they’re the bad guy who let the economy melt down. So the specifics of what they say to do
    0:33:42 have obviously evolved as the system has changed. But this is a playbook for how to deal with economic
    0:33:48 crisis. It’s Friedman and Schwartz. And so it’s absolutely fundamental. And that is really going
    0:33:53 to be the place he makes his mark. There’s a lot of things to say here. So first, the book we’re
    0:33:58 talking about is A Monetary History of the United States, in part for which Milton Friedman won the
    0:34:03 Nobel Prize. You’ve also mentioned the influence of the Great Depression. If you’re going to even
    0:34:12 just rewind to that. So he went to, I guess, college at Rutgers. And he had mathematical
    0:34:18 proclivities. So he kind of wanted to be a mathematician. And so it’s kind of a cool crossroads.
    0:34:27 It’s interesting how the right time, the right person arrives, right? So you describe this really
    0:34:32 well that he had his choice to be a mathematician or an economist. An economist at the University of
    0:34:41 Chicago, or a mathematician at Brown University, whichever. And then this is also the beginnings,
    0:34:48 as you’ve described, of mathematical economics. So he fits in nicely into this using,
    0:34:54 I think you said the number of equations started going up per paper, which is a really nice way
    0:35:02 to put it. So really, the right person at the right time to try to solve this puzzle of the economy
    0:35:08 melting down. It’s so interesting. Just one human, it’s just from just zooming in on a single human
    0:35:16 making a decision about life. And it’s hard to know when you’re in it that the world is melting
    0:35:22 down from an economics perspective. And then I could do something about this to figure out what
    0:35:27 it is. And also, I’m going to reject the mainstream narrative about why this happened.
    0:35:32 Yeah. So the other piece of the puzzle, when he goes to Rutgers, he thinks he’ll be an
    0:35:38 actuary. So Milton Friedman’s family, his parents are immigrants, Jewish immigrants from Eastern
    0:35:45 Europe, they’re pretty atypical in that they don’t stay in New York. And they moved to
    0:35:51 Rahway, New Jersey, and they put together a fairly middle class life as kind of, they have a shop,
    0:35:54 they do some wholesale buying and selling. And then his father dies when he’s 16.
    0:36:00 His life becomes more precarious. But it’s never as precarious as he makes it out to be.
    0:36:04 He’s got three older sisters, they earn a good living, and suddenly they all have better grades
    0:36:11 in high school than he does, but he’s the one that goes to college. But it’s actually really
    0:36:17 important that he loses his father figure because he’s then looking for other father figures. And
    0:36:22 he meets two at Rutgers. One is Arthur Burns, who will go on to have a huge influence in his
    0:36:30 career. No relation to me, by the way. But Arthur Burns is like him, a fellow Jewish immigrant boy
    0:36:36 on the make. He’s older. And he’s making a career as an economist. And then there’s Homer Jones,
    0:36:41 who has gone to the University of Chicago and is studying with Frank Knight at Chicago and says,
    0:36:47 you have to go to Chicago. So he has these two mentors. And Burns in particular suggests, oh,
    0:36:52 I could be an economist. That could be my career path. The idea to be an actuary for an insurance
    0:36:57 company, I’m not sure where he got that idea, but he just thought that was something he could do
    0:37:01 as someone who was good at math. And so the college really opens, the perspective opens the door.
    0:37:10 And then I think it’s really key that again, he doesn’t get an explanation that he buys
    0:37:16 for the Great Depression. So then he’s looking for one. And the math part is really interesting
    0:37:23 aspect of his career. Now, he actually comes to Chicago to study with the mathematical economist,
    0:37:31 Henry Schultz. But he gets there and he thinks Schultz is kind of dumb. He really does. He’s
    0:37:36 incredibly arrogant and he just thinks this guy’s not that smart. And it seems that, I mean, Schultz
    0:37:41 did some really important work in the early stages of mathematical economics, but a lot of the oral
    0:37:46 histories about him are like, yeah, he wasn’t that bright. So Friedman’s maybe onto something.
    0:37:54 So he falls into the set of students who are really enthralled with his other professor, Frank
    0:38:00 Knight. And Frank Knight is against math and economics. Frank Knight is like a neoclassical
    0:38:05 economist, but not a mathematical economist. He’s an old school liberal. He’s really concerned about
    0:38:13 liberal democracy, economic liberalism. And Friedman is very deeply influenced by Knight.
    0:38:18 And he continues to pursue mathematical economics. So he’ll go for part of his graduate career. He
    0:38:24 goes to Columbia University, where he actually gets his PhD from. And he works with a mathematical
    0:38:30 economist there. And so he comes out trained in what will eventually be econometrics.
    0:38:36 Statistics and economics, his early publications are in statistics, but it’s not really where his
    0:38:42 intellectual heart and soul are. And eventually, he will turn very profoundly against mathematics
    0:38:47 in economics and become a sort of heterodox strain throughout 20th century economics. He says,
    0:38:55 simple models are better. We need to work on empirical, work off empirical data,
    0:39:02 not construct elegant models, and becomes really sort of counter cultural within economics in
    0:39:06 that way. And the test of a good model is it should actually predict stuff that happened.
    0:39:09 It should predict stuff that happened. It should tie back to what’s going on.
    0:39:14 I’m wondering which direction to go. So first, actually, if we could zoom out on the different
    0:39:20 schools of economics, just the basics. You mentioned neoclassical. We mentioned
    0:39:25 Keynesian economics. What else did we mention? Well, the Chicago School of Economics. Where does
    0:39:33 Austrian economics fit into that pile and Marxian economics? And can we just even just linger and
    0:39:39 try to redefine Keynesian economics and Chicago School of Economics and neoclassical economics
    0:39:44 and Austrian economics, because there’s some overlap and tension.
    0:39:51 Schools of economics. So we could start with classical economics. Classical economics,
    0:39:55 we could think of, Adam Smith is kind of your classic classical economist,
    0:40:02 the founder of the discipline. Classical economics does not really use math. It’s very close to
    0:40:09 political economy. It’s concerned with, as Smith puts it, the wealth of nations. It’s concerned
    0:40:14 to some degree with distribution. It’s concerned to some degree with what makes a good political
    0:40:22 system. And what tends to really define classical economics when you’re looking from a great
    0:40:29 distance is what’s called the labor theory of value. So where does value come from in classical
    0:40:37 economics? It comes from the labor that a person puts into it. So maybe this in some ways comes
    0:40:42 from Locke’s notion of property that you kind of mingle your labor with the natural world.
    0:40:49 We can say labor theory of value. So classical economics, where Smith is arguing against
    0:40:55 mercantilism and for more free trade, often goes by the name of political economy to show it’s more
    0:41:03 capacious. It’s thinking of politics and economics. You can still read these books today. The sentences
    0:41:08 are long. The words are different, but you can still follow along. So the real big transition
    0:41:14 from classical economics and political economy to economics, as it’s understood today, comes
    0:41:20 with the marginal revolution. And the marginal revolution is a scientific revolution that happens
    0:41:24 in a couple of different places simultaneously. This is one of these things that you see in the
    0:41:30 history of science. There’ll be some breakthrough. Darwin has a breakthrough, but somebody else has
    0:41:34 sort of the same breakthrough at the same time, totally differently. So there’s a version of
    0:41:41 marginalism that’s continental. There’s a version in the German-speaking lands, in the French-speaking
    0:41:48 lands, and in Britain. And they all kind of come together. And the shift is in the theory of value.
    0:41:58 So the theory of value in marginalism is on the margin. So say you have one apple and you want
    0:42:06 a second one. How much is going from one apple to two apples worth for you? Probably quite a bit.
    0:42:11 If you had 10 apples, maybe going to 11 apples, doesn’t matter that much. The marginal value is
    0:42:18 less. So what marginalism does, though, most importantly, is it opens the door to math and
    0:42:26 economics, because it means you can graph this. Now, you can depict this relationship graphically.
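    (A small aside, not from the episode: the apple example can be made concrete with a toy utility function. The square-root utility below is an assumption, chosen only to show value rising at a diminishing rate on the margin.)

        # Hypothetical sketch of diminishing marginal value (Python).
        import math

        def marginal_value(n: int) -> float:
            """Extra utility from going from n apples to n + 1, assuming u(n) = sqrt(n)."""
            return math.sqrt(n + 1) - math.sqrt(n)

        print(round(marginal_value(1), 2))   # ~0.41: the second apple is worth a lot
        print(round(marginal_value(10), 2))  # ~0.15: the eleventh apple matters much less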
    0:42:31 And there’s some really interesting work in the history of economics that shows a lot of the
    0:42:38 people who developed marginalism were looking to physics as a model, physics, the queen of the
    0:42:45 sciences. And so they were thinking, they imported terms from the natural world to describe the
    0:42:52 social world through the lens of economics, terms like equilibrium. So the idea being that if you
    0:42:59 looked at a market, a market would reach equilibrium when everybody is bought and sold,
    0:43:05 all that they want, or the price will settle at an equilibrium price when it’s really the demand
    0:43:11 and supply are matching up. And some of these ideas are things we would pick up at a microeconomics
    0:43:18 class? Oh, yes. This is still out there. This is sort of the basic foundation of microeconomics,
    0:43:25 marginal analysis. And so in the German-speaking intellectual tradition, this is the root of
    0:43:31 Austrian economics. And people picking up the marginal revolution in the German-speaking lands
    0:43:39 are opposed to the historicists who are thinking in a more evolutionary way about how societies
    0:43:49 kind of grow and change. And they have a vision of economic ideas as applying differently to different
    0:43:55 types of social arrangements. Or the marginalists, remember, are inspired by physics. And this is
    0:44:02 a set of natural laws that applies anywhere to any sort of human society. So that’s this first
    0:44:10 really big fissure that we’ll see again and again. Are you historically minded? Do certain traits of
    0:44:17 economic life adhere and become expressed in certain types of societies? Or are there universal
    0:44:22 economic laws that flow through any type of society? So that’s kind of a juncture, a break.
    0:44:29 And so marginalism, first, people start using really geometry to kind of graph things, but
    0:44:35 marginalism is also opening up to the possibility of calculus and the possibility of creating models.
    0:44:40 But at that point in time, late 19th century, a model is something like a physicist does,
    0:44:44 like think of an inclined plane and how fast does the ball roll from one to the other? It’s
    0:44:49 a physical representation of the world. And eventually economists will start to create
    0:44:53 mathematical representations of the world. But we’re not quite there yet. So we’re late 19th
    0:44:59 century and we have this fissure, we have this introduction of marginal analysis that marks the
    0:45:05 juncture from classical economics to economics. So let’s say now we have economics, but we still
    0:45:12 have this fissure between historical thinking and let’s call it natural law thinking. That’s not
    0:45:19 quite right, but physical laws versus contingency. And then in the United States, this ends up mapping
    0:45:27 onto debates about capitalism. And so more historically minded economists tend to be
    0:45:33 interested in the progressive movement, and which is invested in taming and regulating
    0:45:41 industrial capitalism and changing its excesses, you know, factory safety laws, wage laws, working
    0:45:48 conditions laws. Yet in general, American economists all use marginal analysis just in
    0:45:54 different ways. The ones who are more drawn to marginal analysis become known as neoclassical
    0:45:59 economists. They’re neoclassical. The neo is because they’re using marginal analysis. The
    0:46:05 classical is because they don’t think we need to change the way the economy operates or the
    0:46:08 government operates. They’re not progressive. Whereas the progressives are saying things like
    0:46:16 we need to use social control. The state and the people collectively and democratically need to
    0:46:26 control the way economics unfolds and make sure things are fair and equal. So that school of
    0:46:31 thought becomes known as institutional economics in the United States by the 20th century. So it’s
    0:46:36 part of the progressive movement late 19th century. Into the 20th century, it really becomes institutional
    0:46:42 economics. And it’s quite dominant. And the neoclassical economists are still there, but they’re
    0:46:48 very much a minority. And Frank Knight, Milton Friedman’s teacher, is one of the minority
    0:46:55 neoclassical economists. And the institutionalists are much more progressive still.
    0:47:01 Is it fair to say that the neoclassical folks and even the classical folks versus the institutional
    0:47:07 economics folks, they have a disagreement about how much government intervention there should be
    0:47:13 in the economy. So neoclassical is less intervention. And then institutional economists,
    0:47:20 the progressive folks, want more intervention. Yes, exactly right. So this is the situation in the
    0:47:28 1920s. But the other piece I should mention is the first generation of progressive economists
    0:47:34 were very radical. They were closely allied with the socialist movement, with labor radicalism.
    0:47:39 And many of them lost their jobs at universities. This kind of connects to the early dawn of
    0:47:45 academic freedom. This is before academic freedom. And they were chastened. They became much more
    0:47:51 mainstream. By the time we get to the 1920s, we don’t really have radical critiques of society
    0:47:58 coming from economists. Much smaller profession, much less important than it is today. And
    0:48:04 fairly peaceful, because the 1920s are a fairly peaceful decade in the United States.
    0:48:11 So this is a situation when the Great Depression hits. And as I mentioned before, the head,
    0:48:17 the kind of most important institutional economist is Wesley Mitchell. And he has said,
    0:48:23 he’s written a whole book on business cycles. But he doesn’t see this business cycle coming,
    0:48:28 and it hits, and he doesn’t have a good explanation for it. Now, perhaps the preeminent neoclassical
    0:48:34 economist was Irving Fisher. Now, Irving Fisher is big into the stock market. And Irving Fisher
    0:48:42 says sometime in late summer, 1929, stocks are going ever higher and will continue to go ever
    0:48:48 higher forever. And so he loses his reputation after the stock market crash. So Milton Friedman
    0:48:53 is stepping into a field in which the greats have been discredited, and there’s an enormous
    0:48:59 economic crisis all around. And everybody’s struggling to figure out why the crisis happened.
    0:49:04 Yes. And the other thing he’s stepping into is a world where in the United States, there’s a
    0:49:11 great deal of anger at capitalism, at the system, unemployed people on the street. In Europe, there’s
    0:49:18 rising fascist movements. In Asia, there’s rising fascist movements. And so everyone’s very concerned
    0:49:23 about this. And Friedman is seeing a lot of this through the lens of Frank Knight, who feels like
    0:49:29 we are maybe reaching the end of what he calls liberalism. He calls himself an old-fashioned
    0:49:33 liberal. We’re reaching the end of representative democratic government, because
    0:49:40 representative democratic government cannot solve these social problems. And capitalism,
    0:49:45 as it has developed, Knight is very pro-capitalist, but he says it’s generating inequality, and this
    0:49:51 is putting too many strains on the system. So Knight will become one of the people who helps
    0:50:00 Friedman think, how do I develop a new theory of capitalism that works in an era of mass democracy,
    0:50:06 where people can vote and people can express at the ballot box their unhappiness with what’s
    0:50:12 happening economically. So this larger movement will generate, of which F.A. Hayek is a part,
    0:50:18 Friedman is a part. That becomes the very early stirrings of trying to think about a new sort
    0:50:24 of liberalism, which will eventually be called neoliberalism. Okay. So if we can just linger on
    0:50:30 definitions of things. So we mentioned what neoclassical is and what institutional economics is.
    0:50:37 What’s Keynesian economics? And the Chicago School of Economics, I guess, is a branch of
    0:50:44 neoclassical that’s a little bit more empirical versus maybe model-based. And Keynesian is this very
    0:50:52 model-heavy, more intervention of government. So the real battle is Keynesian versus everybody
    0:50:58 else. That is what eventually comes to pass in the United States and in the kind of overall developed
    0:51:03 kind of developed profession of economics. The other piece of the puzzle here is the
    0:51:10 introduction of mathematics. And it’s been around the edges, but it will pick up speed in the 1930s,
    0:51:18 like the Econometric Society is founded. They start publishing. People start using more statistical
    0:51:23 and mathematical tools to think about economics. And they’re given a boost sort of inadvertently
    0:51:28 by the rise of Keynesian economics. So Keynes is trained in the neoclassical tradition.
    0:51:35 He’s an absolutely fascinating figure. He’s been there in peace negotiations at Versailles. He
    0:51:41 basically calls World War II. He’s like, hey, we’re going to have another war here,
    0:51:46 caused by Germany, because this peace treaty has been done in such a vindictive way. And people
    0:51:52 have made such bad decisions. He’s there. He sees it happening. And so when the Great Depression
    0:51:58 unfolds, he basically comes up with a new theory for explaining what’s going on. And
    0:52:04 the previous neoclassical understanding is where things go up and things go down. And when they
    0:52:09 go down, there’s a natural mechanism to bring them back up. So when the economy is going down,
    0:52:15 prices are going down, wages are going down. Everybody’s losing money, but eventually firms
    0:52:21 are going to realize, hey, I can hire people cheap. Hey, I can buy stuff cheap. I don’t have a lot of
    0:52:25 competition. Maybe I should get in the game here. And then others will start to get in and then you
    0:52:32 regenerate prosperity in that way. And so Keynes says, sure, that’s one theory, but something
    0:52:38 different is happening right now. Part of why it’s happening is because we have– the working
    0:52:43 class is more empowered now. They’re not simply going to just take low wages and ride them down
    0:52:51 to the floor. We might not hit the floor. But also, he says, people might become too anxious
    0:52:58 to spend. They might not want to invest. And Keynes has these discussions of animal spirits.
    0:53:04 He’s still enough of a political economist to think not just in terms of human rationality,
    0:53:08 but what are some other things going on in human beings? And people might decide to sit on their
    0:53:15 money. They might not invest it. And so what happens then is you could get stuck in a bad
    0:53:20 equilibrium. So in the neoclassical model, the equilibrium kind of restarts and resets itself.
    0:53:25 And he says, no, we could get stuck here. We could get stuck in the depression. And in that case,
    0:53:30 what has to happen, he says, the government stimulates investment and the government itself
    0:53:36 invests. And then he argues that– this is a student of his, Richard Kahn, says,
    0:53:41 as the government invests a dollar, it has a multiplier effect. A dollar spent by the government
    0:53:47 kind of ramifies out throughout the economy. So it takes the government and puts it in the center,
    0:53:50 as opposed to, say, the banking system or the financial system, which would be the
    0:53:57 more Friedman analysis. And for many economists of Friedman’s generation– and he’s a weird
    0:54:02 generation because it’s the generation that becomes dominant. It’s just like four years older,
    0:54:06 the men who become Keynesian economists. But that four years is really important because they come
    0:54:11 in to graduate school in economics and they get exposed to the new ideas of John Maynard Keynes.
    0:54:18 And I think it’s Paul Samuelson calls it– it was like a South Sea virus that
    0:54:24 attacked all of the younger economists, who immediately succumbed, and no one over 50
    0:54:32 ever got the disease because their thinking’s already set. And so Keynesianism, Keynes himself,
    0:54:38 is very suspicious of math and economics. And he and Friedman, it’s fascinating. One of the first
    0:54:43 books by Jan Tinbergen, a Dutch economist, to use math and economics. He has huge volumes.
    0:54:51 Volume one, Keynes pans it. Volume two, Friedman pans it. So they’re on the same page, but what
    0:54:59 happens is as Keynesianism arrives in the United States, Franklin Roosevelt is not really a Keynesian.
    0:55:05 He’s kind of an accidental or experimental Keynesian. And there’s a bunch of different ideas
    0:55:09 in the United States that are very similar to Keynesianism. They’re not theorized,
    0:55:14 but they’re similar ideas that the government has to do something. So this all comes together
    0:55:22 and American economists realize that you can construct models in the Keynesian perspective.
    0:55:29 And if you can use numbers in these models, you can go to Washington, D.C. with numbers,
    0:55:37 and you seem like you have a lot more authority. And so math becomes really
    0:55:46 twinned into Keynesian economics. So the numbers are used as a symbol of expertise.
    0:55:50 We really know what the hell’s going on because we have some numbers, right?
    0:55:54 Right. And we can create a model. And so we can say, okay, in the model, the interest rate is here
    0:55:59 and taxes are here. So let’s play with government spending. Let’s make it up. Let’s make it down.
    0:56:04 And then we can get an estimation. It’ll spit out here’s predicted GDP. So the other piece of
    0:56:11 the Keynesian revolution is it really gets people thinking kind of holistically about the economy
    0:56:21 as one conceptual unit. And you then have what Paul Samuelson will end up calling the neoclassical
    0:56:27 synthesis. And this is still in economics today. If you take micro, you’re going to get supply and
    0:56:32 demand, scarcity, marginal analysis. If you take macro, you’re going to get a very different approach.
    0:56:38 And that’s more Keynesian-based. And so the idea is that, and this makes sense, I mean, you can think
    0:56:43 of this from statistics, right? The way things act individually versus when they’re all added
    0:56:49 together can be very different. So there’s this kind of uneasy piece where economists are using
    0:56:54 kind of neoclassical tools to analyze individual behavior and individual market
    0:56:57 behavior, and they’re shifting to a different paradigm when they think about the economy as
    0:57:03 a whole. And in this paradigm of the economy as a whole, the federal budget, the taxing and
    0:57:08 spending power of the federal government become paramount. And that is called the fiscal revolution.
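    (A brief aside, not from the episode: the multiplier logic mentioned above can be sketched numerically. The marginal propensity to consume used below is a made-up figure, only to show how one round of government spending becomes the next round’s income.)

        # Hypothetical sketch of the Keynesian spending multiplier (Python).
        def total_income(initial_spend: float, mpc: float, rounds: int = 200) -> float:
            total, spend = 0.0, initial_spend
            for _ in range(rounds):
                total += spend
                spend *= mpc  # the share of new income that gets spent again
            return total

        print(total_income(1.0, 0.8))  # ~5.0, i.e. 1 / (1 - MPC)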
    0:57:16 And that’s really the essence of Keynesianism. But the key thing to remember is that Keynesianism
    0:57:22 and Keynes are different. And there’s this famous episode where John Maynard Keynes comes to DC and
    0:57:27 he goes to dinner, and he comes back and he says to one of his friends in London, he’s, “Oh, yeah,
    0:57:36 it was really interesting. I was the only non-Keynesian there.” Yeah. So Keynesianism is more government
    0:57:45 intervention, fiscal policy. So put the government at the center of influencing the economy. And then
    0:57:51 the different flavors of whether it’s Austrian economics or Chicago School of Economics
    0:57:59 is saying, “No, we have to put less government intervention and trust the market more.” And
    0:58:06 the formulation of that from Milton Friedman is trust the money more, not trust, but the money
    0:58:13 supply is the thing that should be focused on. Yes. So the Austrians and the Chicago School see
    0:58:21 economic prosperity and growth comes from individual initiative, individual entrepreneurship,
    0:58:25 kind of private sources. The private market is what drives economic growth, not the public sector.
    0:58:32 And so for Friedman, then the question is, what is the government’s role? And because he’s lived
    0:58:38 through the Great Depression, he’s not laissez-faire, and he won’t ever be laissez-faire. Now, interestingly,
    0:58:44 Hayek, living through the Great Depression, at first is laissez-faire. And he’s like, “Sure,
    0:58:50 like let it rip.” And things get so bad that Hayek’s like, “Okay, that’s not going to work.”
    0:58:54 Can we actually define laissez-faire? So what do we mean? Like, what’s the free market? What’s
0:59:00 laissez-faire? What's the extreme version here? So yeah, laissez-faire means "leave it be" in French.
0:59:07 It's more often used as an insult than as an actual position. Very few people are completely and totally
0:59:12 laissez-faire. The pure laissez-faire position would be the sort of pure,
    0:59:16 maybe pure anarchist position, like the state does nothing, or the state isn’t even there.
0:59:23 But, if I could maybe make it more precise, it would be focused on freedom of contract
0:59:32 as essential. And that means the buyer of labor and the seller of labor must have absolute
    0:59:40 freedom to contract. So that means no minimum wage law, no working hours law, no employment law,
    0:59:45 things like that. That was, and this is all pre-progressive movement. A lot of things are
    0:59:50 that way, right? You know, imagine you’re in 19th century America and you have a farm and you hire
    0:59:56 someone to help you on the farm. You offer the money, they take it. If they fall off a ladder and
    1:00:00 break their back, maybe you help them out, maybe you don’t, right? But there’s not a whole apparatus
    1:00:06 of legal liability and safety and things like that. So that would be one piece. Another piece of
    1:00:15 laissez-faire would be free trade amongst nations. So no regulation of who can invest in a nation or
    1:00:22 who can take money out of a nation. So Nippon Steel could come and invest in US Steel and there would
1:00:28 be no grounds on which to reject that. Or you could, as a billionaire in the United States,
1:00:33 relocate yourself and all your money to another country and the United States couldn't try to keep you
    1:00:40 and nobody else could stop you from coming in. And then in the context of economic crisis,
    1:00:50 laissez-faire would not encompass centrally provided relief because in the pure theory,
    1:00:57 again, very seldom applied purely, but in the pure theory, the wages need to come down far enough
    1:01:03 and people need to be desperate enough to start taking work and to start the machine again.
    1:01:07 So the theory would be if you give people relief, they might not go back to work.
    1:01:14 Now, almost nobody says that in the Great Depression because the situation is so bad
    1:01:20 and people are starving on the street and people feel, for humanitarian and ethical reasons,
    1:01:25 it’s not okay to say that. The Austrians, though at first, Hayek and Lionel Robbins,
    1:01:30 are like, this is a business cycle and it needs to run its course and it will be detrimental
    1:01:34 if we intervene. And then pretty soon, Hayek has to change his tune.
    1:01:38 So the Austrians are the most hardcore in terms of laissez-faire.
    1:01:44 Absolutely. And so Hayek will make the turn towards accepting more of a state and then
    1:01:50 we’ll come to talk about how the state needs to support what he calls the competitive order.
    1:01:58 But his mentor, Ludwig von Mises, still remains very hardcore and is not really open to things
    1:02:03 like unemployment insurance or other state-based interventions.
    1:02:08 What does von Mises say about human suffering that’s witnessed in the Great Depression,
    1:02:13 for example? What are we supposed to do as economists, as humans that define policy?
    1:02:18 What are we supposed to see when people are suffering at scale?
1:02:24 Yeah, I wish I knew the answer to that question. I don't know enough about von Mises and his
    1:02:33 reaction in the Great Depression. I think I would hazard that he would look more down the road and
    1:02:40 say, well, if you start here, you’re going to go places that are bad. But I don’t factually
    1:02:44 know what he said in response. I do know that Hayek’s position doesn’t last very long.
    1:02:51 It’s not a position you can hold to. Maybe you could hold to it in other cycles. The other thing
1:03:00 that was interesting is I found very few Americans saying this. Most who were, were kind of small-town
1:03:07 elected officials. The most famous is Andrew Mellon, quoted by Herbert Hoover. So not directly,
    1:03:12 you don’t have him on record saying this, but apparently Hoover records in his memoirs that
    1:03:20 Mellon said something like, liquidate real estate, liquidate stocks, purge the rottenness
1:03:26 out of the system. People will live a healthier life. And certainly, there were members of the
1:03:31 Federal Reserve who felt like it would create, they didn't say moral hazard, but it would create
1:03:37 what we now call moral hazard, bad habits, were we to intervene and save failing banks, because
    1:03:42 failing banks need to be taught a lesson, they need to be taught discipline. And so a lot of
    1:03:47 people, I think, saw it in the context of discipline. This is discipline. And if you remove the
    1:03:51 discipline, you’ll be taking away something fundamental in society.
1:03:55 So Milton Friedman never quite went all the way to laissez-faire?
    1:04:01 No. No, he didn’t see that. And what’s really interesting is the number of incredibly radical
    1:04:07 proposals that he and his teachers were floating. So I’ve mentioned Frank Knight. Another really
    1:04:15 important influence on Friedman was Henry Simons, who was a junior professor at Chicago. And Simons
    1:04:23 had this idea for what he called 100% money, which would be a law that says banks have to
    1:04:27 hold 100% of the deposits they receive. They can’t loan them out on the margin.
    1:04:32 So this would completely and totally have overhauled the US banking system. And he would have said,
    1:04:36 there’s a category of things called banks where you get deposits. And then there’s going to be a
    1:04:41 category of sort of, he didn’t say investment banks, but investment vehicles that will invest.
    1:04:48 So similar to what did happen in some ways in the banking reforms, in the 1930s, the investment
    1:04:53 banks were split from the deposit banks. And the banks that took deposits were much more
    1:04:58 highly regulated, and they were supported by the FDIC. But the point being, the Chicago
    1:05:04 School had these very radical proposals for reform, go off the gold standard, restrict
1:05:12 the currency, change the banks, immediate relief payments now. What is important to note,
    1:05:17 though, is that they thought of all of those as emergency measures to get through the emergency,
    1:05:24 not as permanent alterations in the state of what had to be and not permanent alterations
    1:05:29 between state and market. Where the Keynesian assumption is things have changed, times have
    1:05:37 changed, we’re in a new dispensation, and we need a new relationship. So Milton Friedman
    1:05:44 is very open to doing things differently in a state of emergency. He will have different ideas
    1:05:48 during World War II than any other time. And that’s why I argue I think he would have been
    1:05:53 supportive of at least the first rounds of coronavirus relief, because I think he would
    1:05:59 have put his emergency thinking hat on. So in that way, he was definitely more flexible.
    1:06:07 You mentioned Hayek. Who is this guy? What’s his relationship to Milton Friedman in the space
    1:06:12 of ideas and in the context of the Great Depression? Can we talk about that a little bit?
    1:06:21 Sure. So F.A. Hayek is an Austrian economist who takes up a posting in London, and he’s
1:06:27 a mentee of Ludwig von Mises. He's writing about business cycles,
    1:06:35 Austrian capital theory, and the depression hits. And he’s one of the few economists who in the
    1:06:41 beginning really is not calling for much intervention. Although, as he realizes how politically
    1:06:45 unpalatable that is, he will develop a more softened version of Austrian economics that has
    1:06:52 room for a whole range of social services. What’s significant about Hayek is that he is also watching
    1:06:57 what’s happening in Austria, what’s happening in Germany, and he’s really worried the same
    1:07:04 thing is going to happen to the Western democracies. And he sees the root cause of this is socialism,
    1:07:08 the shift towards an expanded role for government, which we’ve been talking about is happening in
    1:07:13 the United States. It’s also happening in Britain. And so he writes this book that becomes incredibly
1:07:20 famous, "The Road to Serfdom," basically saying that taking these steps towards a planned economy,
1:07:26 or an economy that's a modified form of capitalism, could lead to serfdom. He's very clear that this is
    1:07:31 not an inevitability, but if the same steps are taken and people follow the same line of thinking,
    1:07:37 we may end up in a sort of coercive totalitarian state. So this becomes enormously popular in the
    1:07:43 United States. First of all, he’s in good touch with Friedman’s teachers, even before this book
    1:07:47 comes out. They see them as kindred spirits. Frank Knight is in touch with him. Henry Simons
    1:07:52 is in touch with him. They all see themselves as liberals. They call themselves old-fashioned,
    1:07:58 unreconstructed liberals. And so even before he becomes famous, Hayek will be trying to kind of
1:08:04 organize thinkers and intellectuals who he believes share his values of what we would call
    1:08:10 today classical liberalism and to kind of create a counter-consensus to the one that’s gathering.
    1:08:17 Now, Hayek also chooses not to argue against Keynes, and he feels that this is a huge missed
    1:08:22 opportunity, that he should have staked out the case against Keynes, and that because he did not,
    1:08:27 people come to believe there is no case against Keynes. Keynes is literally unanswerable.
    1:08:34 So Hayek will have this great regret. He will channel some of his regrets into sort of community
1:08:41 building, specifically developing the Mont Pelerin Society. And it will fall to Friedman to really
    1:08:50 make that case against Keynes. But Hayek will end up at Chicago, and Hayek really influences
    1:08:58 Friedman to think about what Hayek calls the competitive order and how the state can and must
    1:09:05 maintain a competitive order. That is the system of laws, of norms, of practices that makes it
    1:09:11 possible for markets to function. And this is one of these key differentiators between the older
    1:09:18 philosophy of laissez-faire and the newer reconceptualization of liberalism, which says, “Yes,
    1:09:25 we need a state. We need a state that’s not intervening in markets under social democratic
    1:09:30 auspices, but is structuring and supporting markets so that they can function with maximum
1:09:38 freedom, keeping in mind that if the basic social supports needed aren't there, the market is apt to
    1:09:44 generate the type of either inequality or social instability that will call the whole system into
    1:09:51 question.” So Hayek is really key in promoting this modified liberalism. But from being a very
    1:09:58 prominent economist in the 1920s and 1930s, as mathematics becomes the language of economics,
    1:10:03 Hayek is completely left out in the cold. Now, Friedman to some degree is left out in the cold,
1:10:09 but Friedman at least has proved to the mathematical economists that he knows what they're up to,
    1:10:15 and he’s rejecting it from a position of expertise and knowledge. And he literally drives the
1:10:20 mathematical economists out of Chicago. They're clustered in a group called the Cowles Commission,
1:10:28 and he makes their life hell. They flee. They flee the Friedman onslaught. But then when Hayek arrives
    1:10:33 at the University of Chicago, he would like to be considered for a position in the economics
    1:10:37 department. And Friedman, Milton Friedman says, “No way. You’re not really an economist because
    1:10:44 you’re not empirical because you just developed these theories.” So he has an appreciation for
    1:10:51 Hayek as a social thinker, but not as an economist. So what Friedman decides to do, his answer to
    1:10:58 Keynes will be deeply empirical, but it will also be theoretical. And it will create an alternative
    1:11:05 intellectual world and approach for economists who aren’t satisfied with Keynesianism. And almost
    1:11:11 single-handedly, Friedman will introduce sort of political and ideological diversity
    1:11:17 into the field of economics because from his beachhead in Chicago, he will develop the theory
    1:11:26 of monetarism. So what is monetarism? The easy way to summarize it is this famous dictum of Milton
    1:11:34 Friedman’s. Inflation is always and everywhere a monetary phenomenon. And it’s fascinating that he
    1:11:41 becomes an expert in inflation because the first research and the first major research product
    1:11:45 of monetarism is that theory of the Great Depression in a monetary history of the United
    1:11:53 States. And that is the theory of a deflation, all prices going down. And he will go back to an idea
    1:11:59 that Irving Fisher had popularized, but a very old idea, almost a truism, the quantity theory of money,
1:12:05 which says the price level is related to the amount of money circulating in an economy.
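For reference, the standard textbook form of the quantity theory is the equation of exchange. This equation is not quoted in the conversation; it is just the usual way the idea gets written down:

```latex
% Equation of exchange: M = money supply, V = velocity of money,
% P = price level, Q = real output.
M V = P Q
% If velocity V and output Q are roughly stable, a larger M shows up
% as a higher price level P, and a smaller M as a lower one.
```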
    1:12:11 So if you have more money, prices go up. If you have less money, prices go down. Now, this seems
    1:12:17 like very basic and almost too basic to bear repeating. But Friedman is saying this very basic
    1:12:24 relationship holds true even in an advanced industrial economy. And that is what people
    1:12:30 have started to doubt. And if you think about money, you think about banks, you don’t think
    1:12:37 necessarily about the federal budget spending and taxation. And what you see happens in American
    1:12:42 economics, the textbooks previous to the Keynesian Revolution, they spent a lot of time on money,
    1:12:47 they spent a lot of time on interest rates, you can do word counts and other scholars have done
    1:12:52 the word counts. And then word count for money after World War II just plummets. And you start
    1:13:00 seeing things like taxation, budget, those things go up. So what happens is the economics profession
    1:13:05 shifts its attention. It just looks away from money to other things. And Friedman is one of the
    1:13:13 few who’s saying, no, money still matters, money still counts. And it’s a very counterintuitive
    1:13:19 argument to make. It’s a very historical argument to make. And this is absolutely fascinating to me.
    1:13:25 With Anna Schwartz, he develops this 150-year time frame. He also has students working on
    1:13:30 episodes of hyperinflation in different periods of time. He’s also looking back
    1:13:37 to ancient history, inflationary episodes there. And he’s saying this is a law of economics.
    1:13:42 This is something that recurs throughout time. It’s not historical, right? It’s not contingent.
    1:13:49 It’s a law of economics. And his Keynesian counterpoints are saying, no, that’s not
    1:13:54 relevant any longer. Maybe once it was relevant, but it’s not relevant today. Now, in some ways,
    1:14:02 they have a point because in order to pay for World War II, the federal government
    1:14:09 sells a lot of bonds. It issues a lot of debt. And it wants to pay this debt back at a low
    1:14:14 interest rate. And it wants people to keep buying it. It wants the low interest rate
1:14:19 to be competitive with other interest rates. So it wants, in general, low interest rates throughout
    1:14:25 the economy. And the Federal Reserve has been so discredited by the Great Depression that the
1:14:31 Treasury basically runs the Federal Reserve and says, keep interest rates low. And so that's
    1:14:37 what it’s doing. And so the Federal Reserve has stopped being an independent entity. It’s just
    1:14:43 a sub sort of department of the Treasury. But in 1951, they negotiate what’s called the Treasury
    1:14:49 Fed Accord. And the Federal Reserve gets its independence, but it doesn’t really use it.
1:14:57 But statutorily, it now has it. And so most economists are just observing a regime in which
    1:15:01 the Federal Reserve has no power, a regime in which there is really little inflation,
1:15:05 the inflation that is seen is just a little burst of inflation around the Korean War.
    1:15:10 And they’re saying inflation is not really important. It’s not really relevant. And money’s
    1:15:14 not really relevant and important. And so to break through and to make the argument,
    1:15:20 that’s why Friedman and Schwartz go to history. And they’re able to make that argument for history.
    1:15:25 So then Friedman is coming out with a variety of papers that are saying,
1:15:31 you know, when I look at economic fluctuations, he maps them side by side with fluctuations in
1:15:36 the money supply and says, look, they fit. And other economists, remember, they're building
    1:15:41 complicated mathematical models. And Friedman’s doing extremely simple stuff. And they just think
    1:15:47 it’s dumb. It’s not interesting. It’s not true. They just, they don’t buy it at all. And so,
    1:15:53 but after a monetary history of the United States, they have to pay attention. So it’s really in
    1:16:00 those years, Friedman is hammering this idea of monetarism, and it starts to become something
    1:16:06 respectable, bordering on respectable for other economists to look to and think about. And that’s
    1:16:10 really the beginning of the kind of Keynesian monetarist split, where if you start to give
    1:16:16 Friedman any credence, you’re heading towards a monetarist position. Now, at the same time,
    1:16:26 Friedman comes out very publicly in 1964 as a supporter of Barry Goldwater. And Keynesian economics
1:16:31 has found a home in the Democratic Party. Its brightest moment in the sun is probably
    1:16:36 the administration of John F. Kennedy, who brings in a lot of Harvard and Yale professors to the
    1:16:42 Council of Economic Advisers. He proposes a series of spending programs that are really guided by
1:16:49 the Keynesian philosophy. And Barry Goldwater is tremendously controversial, in part for his votes
1:16:54 against civil rights, and Friedman supports him in part because he's a hardcore libertarian
    1:16:59 in an age when that’s not in the political mainstream or not discussed in the political
    1:17:04 mainstream. And I mean, he’s just tremendously unpopular, particularly in all the educated
1:17:09 precincts where Friedman lives. So Friedman is like an outcast and a pariah for his support of
    1:17:15 Goldwater. And so that actually really affects monetarism because people feel that this is now
    1:17:21 becoming a package deal. And so there’s a great reluctance to embrace Friedman’s ideas because
    1:17:28 it seems like you would then have to embrace his politics. So it’s associated with conservatism.
1:17:35 So these are the years when there is a movement that calls itself conservatism.
    1:17:40 And Friedman is very tightly allied with this movement from the beginning, partly through his
    1:17:45 friendship with William F. Buckley. And a lot of people say to me, yeah, but Friedman’s not
    1:17:52 conservative. And this is like a bigger, you have a whole separate podcast on this. But for now,
    1:17:58 I’ll just say that conservative in the United States becomes a political brand that contains
    1:18:04 elements of conservatism that are recognizable across time and space, embrace of tradition,
1:18:11 a comfort with hierarchy, et cetera. And it also has something new and different, which is
    1:18:17 Friedman’s ideas about Milton Friedman’s advocacy of more free markets, less government regulation
    1:18:21 and the benefits of capitalism and the benefits of freedom. And that gets folded into American
    1:18:28 conservatism in part because Milton Friedman is such a powerful intellectual figure. And after
1:18:34 his advocacy of Goldwater, the media realizes this guy is really smart. He has really interesting things
    1:18:39 to say. He makes great copy. He makes a great guest. And he starts writing a column for Newsweek
    1:18:45 magazine, which is a very big deal in a much more consolidated media environment. And he’s quoted
    1:18:50 in all the newspapers. And so his public profile really starts to rise right as he’s pushing
    1:18:55 monetarism as an alternative to the Keynesian synthesis.
    1:18:59 Can we just linger on what is monetarism?
1:19:01 Yes, okay. I didn't quite get into it.
    1:19:04 So like what, okay, the money supply.
    1:19:04 Yes.
1:19:12 So money is this thing, a note, like a notion with which people buy and sell
    1:19:20 stuff. And there’s this fascinating complex dynamical system of people contracting with
1:19:24 each other in this beautiful way. I mean, there's so many pothead questions I want to ask about
    1:19:30 the nature of money. I mean, money is fascinating in that way. And I think for Milton Friedman,
    1:19:39 trusting the flow of money is really important. And the signals that pricing and money in general
    1:19:42 provides is really important.
    1:19:48 So yeah, and some of this, I could take some of this back again to Frank Knight. So one thing
    1:19:55 Frank Knight said to all his students was the market is the best allocation mechanism we have.
1:20:02 The market is what allocates resources in a situation of scarcity. The market allocates them
1:20:09 the best. And Hayek will add to that by saying prices are information signals, and a price
    1:20:14 sends information to buyers and sellers about how they should act. And these are the two of the
    1:20:20 strongest arguments for why the government should not intervene in the price system because it will
    1:20:26 blur information or because it will allocate less efficiently than market allocation will.
    1:20:32 And so what Friedman is really going to add to that is maybe going up a level and thinking
    1:20:40 in the macro about the whole economy and how money circulates through that economy as a whole.
    1:20:47 And so what he and Anna Schwartz do is they construct what are called monetary aggregates.
    1:20:53 This is adding together, say, all the money that’s on deposit in banks and all the money that’s
    1:20:59 believed to be circulating in people’s wallets. And you also have to really go back in time.
    1:21:06 We don’t have credit cards. There is a stock market, but it’s tiny in terms of the number
    1:21:13 of people who invest. There aren’t mutual funds. When travelers checks are introduced,
    1:21:20 this is a big deal. So we have a very simple monetary system. And so Schwartz and Milton
    1:21:25 Friedman start measuring what they call the monetary aggregates. They focus on M1 and M2,
    1:21:32 and their favorite aggregate is M2, which I believe is encompassing deposits and circulating medium.
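As a rough sketch of what constructing an aggregate means in practice (the component names and figures below are invented for illustration; the official M1 and M2 definitions have changed over the decades and are published by the Federal Reserve):

```python
# Stylized monetary aggregates. All figures are hypothetical.
components = {
    "currency_in_circulation": 50.0,   # billions, made-up number
    "checking_deposits": 120.0,        # spendable, thought of as circulating
    "savings_deposits": 200.0,         # interest-bearing, slower to circulate
}

# A narrow, M1-like aggregate: money treated as circulating.
m1 = components["currency_in_circulation"] + components["checking_deposits"]

# A broader, M2-like aggregate: adds savings-type deposits.
m2 = m1 + components["savings_deposits"]

print(f"M1-like aggregate: {m1:.1f}")
print(f"M2-like aggregate: {m2:.1f}")
```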
    1:21:37 The other thing to recall, there’s some fine distinctions between
1:21:48 money in savings accounts and money in checking accounts. Money in savings accounts
1:21:53 can earn interest and is generally believed not to circulate, while money in checking accounts
1:21:58 does not at that time bear interest and cannot legally bear interest, and so it
1:22:02 is thought of as circulating. And then there's different institutional architectures of postal savings
    1:22:09 banks and credit unions. But Friedman is, one, taking the focus to these aggregate amounts of
    1:22:17 money and saying, “These really have a lot to do with economic booms and busts. When we have
    1:22:23 an expansion in the amount of available money, we see an expansion in economic activity. When we
    1:22:32 have a contraction in available money, we have a contraction.” And so he says, “At this stage,
    1:22:38 the government, through the mechanism of the Federal Reserve and its influence on interest rates,
    1:22:44 can either make money more cheaply available and more freely available in the economy,
    1:22:52 or can make money more expensive and slow things down.” But the central core idea of
    1:22:59 monetarism is this is potentially very bad if the government can hit the gas and then hit the
1:23:06 brake and hit the gas and hit the brake based on, say, what a politician wants or what somebody
    1:23:13 at the Federal Reserve wants. You have a lot of instability in the system. And so one of the core
    1:23:20 policy proposals of monetarism is let’s grow the money supply at a steady rate. And in the beginning,
    1:23:26 Friedman just says K percent. He doesn’t even put a number on it because he says the number
    1:23:32 doesn’t matter. What matters is the steadiness in the growth rate because if it’s a steady growth rate,
1:23:38 it will fade into the background and then people will make economic decisions based on the fundamentals,
    1:23:45 not based on what they think is going to happen, not based on hedging against inflation
    1:23:52 or hedging against deflation. They’ll just be able to function. So this is sort of the paradox
    1:23:59 of monetary policy. When it’s happening right, you don’t see it, you don’t notice it. When it’s
    1:24:03 happening wrong, Friedman argues, it can just fundamentally destabilize everything. It can
    1:24:10 cause a great depression, it can cause an artificial boom. And so he’s taking monetary policy at a
    1:24:14 time when most economists think it’s completely irrelevant and saying this is the central game
    1:24:21 of the economy. Now, we live in a world where we believe this and the Federal Reserve chair can’t
    1:24:27 open their mouth without headlines being generated. But Friedman is saying this at a time when the
    1:24:33 Federal Reserve is like a mysterious and secretive organization. It’s not well known,
    1:24:38 it’s not deeply appreciated. Some of the only people who appreciate the Fed’s power are
1:24:46 hardcore rural populists who have constituents who think the banks and money power are the problem,
    1:24:52 who are like throwbacks from the frontier days. So Friedman in the beginning has no constituency
    1:24:59 for this policy, he has no constituency for this analysis. And so just going back to summarize
    1:25:06 monetarism, it’s looking, it’s using the quantity theory of money to analyze the macro economy.
    1:25:15 It’s proposing a policy of slow and steady growth in the money supply. And then it is arguing that
    1:25:21 inflationary episodes when they emerge are profoundly driven by changes in the money supply,
    1:25:28 not by anything else. I mean, and going even up a level as we started,
    1:25:37 how epic is it to develop this idea, to hold this idea and then to convince
    1:25:45 the United States of this idea that money matters, that today we believe is mostly correct
    1:25:54 for now. And so just this idea that goes against the experts and then eventually wins out
    1:26:00 and drives so much of the economy, the biggest, the most powerful economy in the world. So
    1:26:05 fascinating. Yeah. So I mean, that’s a fascinating story. And so what happens is Friedman has
    1:26:10 advanced all these ideas. He’s roiled the economics profession. He’s built a political profile.
1:26:18 And then he becomes the head of the American Economic Association. And he is asked in that
    1:26:23 role to give a presidential address. And so he gives his presidential address December 1967.
    1:26:31 And he says, I’m going to talk about inflation. And I’m going to talk about the trade-off between
    1:26:36 inflation and unemployment. And this is what’s generally known as the Phillips curve. And the
    1:26:42 Phillips curve in its original form is derived of post-World War II data. So it’s derived of
    1:26:51 about 12 years of data. And it shows that when inflation goes up, unemployment goes down. And
    1:26:56 the idea would make sense that as the economy is heating up and lots of things are happening,
    1:27:03 more and more people are getting hired. And so this relationship has led policymakers to think
    1:27:09 that sometimes inflation is good. And if you want to lower unemployment, you could let inflation
1:27:17 kind of go a little bit. And in its crude forms, it comes to seem like a menu, like you could
    1:27:22 take your model and you could plug in, I want this much unemployment. And it would say, well,
    1:27:27 great, this is how much inflation you should do. And so then you would target that inflation rate.
1:27:34 So Friedman gets up and he says, this is wrong. This might work in the short term, but it's not
    1:27:39 going to work in the long term because in the long term, inflation has, first of all,
1:27:45 it has a momentum of its own. Once it gets going, it tends to build on itself, the accelerationist
1:27:52 thesis. It accelerates. And once inflation gets going, and the reason it gets going is because
    1:27:59 workers go to the store and they see the price level has gone up, things have cost more.
1:28:07 They ask for the wages to go up. Then, eventually, the wages will go up too high,
    1:28:11 and they will no longer be hireable or companies will decide, at these high wages,
    1:28:16 I can’t hire as many workers, I’d better lay off. So if inflation keeps going, eventually,
    1:28:21 over the long term, it will result in high unemployment. So he says, theoretically,
    1:28:26 you could end up in a situation where you have high inflation and high unemployment.
    1:28:29 This hasn’t been seen, but he says, theoretically, this could happen. And then he goes and he says,
    1:28:35 and the government has started expanding the money supply, started expanding the money supply
    1:28:41 in 1966. So we’re going to get a bunch of inflation and then we’re going to get a bunch
    1:28:46 of unemployment. And he estimates about how long it will take. And then he says, once this all
    1:28:54 happens, it will take about 20 years to get back to normal. And he predicts the stagflation of the
1:29:03 1970s. Stagflation of the '70s. Again, against the mainstream belief represented by the Phillips
    1:29:10 Curve. Yeah. And what really makes it happen is that many of the economists who most deeply
    1:29:16 dislike Friedman and most deeply dislike his politics in the 1970s as they’re running their
    1:29:21 models, they start to say, Friedman’s right. They start to see in the data that he’s right.
    1:29:26 And a very parallel process happens in Britain. Britain is going through a very similar burst
    1:29:32 of spending, burst of inflation. And so Friedman is vindicated in a very profound way in the way
    1:29:36 that he himself said would be the ultimate vindication, which is my theory should predict.
    1:29:44 So that prediction of stagflation is really this sort of final breakthrough of his ideas
    1:29:52 and also their importance to policy and to thinking about how we should intervene or not in the
    1:29:56 economy and what the role of the Federal Reserve is. Because he’s saying the Federal Reserve is
    1:30:02 incredibly powerful. And finally, people start to believe it. And I don’t know if we said,
    1:30:08 but to make clear, stagflation means high unemployment and high inflation, which is a thing
    1:30:15 like you mentioned was not seen before. And he predicted accurately. And it also disproves the
    1:30:21 relationship, the inverse relationship between unemployment and inflation.
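For readers who want the stylized algebra behind this exchange (these are textbook forms, not equations used in the episode): the simple Phillips curve treats inflation and unemployment as a stable trade-off, while Friedman's expectations-augmented version adds expected inflation, so the trade-off disappears once expectations catch up:

```latex
% Simple Phillips curve (the "menu" reading): pick an unemployment rate u
% below the natural rate u*, get inflation pi.
\pi = -\alpha \,(u - u^{*})

% Expectations-augmented version: expected inflation pi^e shifts the curve.
% In the long run pi = pi^e, so u returns to u* and you are left with
% whatever inflation you created, which is the stagflation scenario.
\pi = \pi^{e} - \alpha \,(u - u^{*})
```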
    1:30:28 Yeah. Now I should say the Phillips Curve is still out there. It’s been expectations augmented. And
    1:30:36 it is relevant in the short term, but Friedman’s warning is still very much apt that if you get
    1:30:43 too focused on unemployment, you can let inflation out of the bag. And so until very recently,
    1:30:49 the Federal Reserve’s tradition has been focusing on inflation, believing that’s fundamental,
    1:30:54 and that will keep unemployment low rather than trying to lower unemployment at the cost of
    1:31:00 raising inflation. Can we go back to Frank Knight and the big picture thing we started
    1:31:05 with, which is the justification of capitalism? Yes. So as you mentioned, Milton Friedman
    1:31:12 searched for a moral justification of capitalism. Frank Knight was a big influence on Milton Friedman
    1:31:19 and including on this topic of understanding the moral justification of capitalism. I think you
    1:31:25 spoke about Knight’s case for capitalism was grounded in the idea that the ability to act
    1:31:31 in the face of uncertainty creates profit. And it should because taking risks should be rewarded.
    1:31:37 So this idea that taking risks in the face of uncertainty should create profit. And that
1:31:43 becomes a justification for the ethics of capitalism. Can you just speak to that?
    1:31:49 Yeah. So Knight is talking about where does profit come from? And to his mind, it comes
    1:31:55 from the entrepreneurial function and the risk taking function. And so he weaves that into why
    1:32:04 capitalism works best and why it’s the most effective allocation machine and why it assigns
    1:32:11 responsibility in a way he believes that a socialist system never could. Now, Knight, though, is not a
    1:32:16 booster of capitalism. It could be in part because he’s just a darkly pessimistic kind of depressive
    1:32:22 guy. And so he’s afraid capitalism is going to collapse and socialism or fascism is going to
    1:32:29 take over or communism. And so he kind of descends into darkness there. Friedman as the more
    1:32:35 optimist believes with Hayek that you can develop a different approach to capitalism that would
    1:32:40 preserve the price system, preserve allocation, but build in social supports, build in a social
    1:32:45 minimum, things like this. But there’s a moment in his career where he’s really struggling to figure
    1:32:50 out like, how do I make this case for capitalism? And basically, the whole sort of conservative
    1:32:53 movement or people who we later call the conservative movement are struggling to make this case.
    1:33:00 And he starts thinking about what makes capitalism work is that if you put forth effort,
    1:33:04 you get a reward. So then you could say, well, people get what they deserve under capitalism.
    1:33:09 But then he kind of stops and he says, that’s not really true because we’re born with such
    1:33:14 different endowments and there’s a huge quotient of luck, right? So some people are just in the
    1:33:20 right position and some people aren’t. So if I say capitalism is moral because people get what
    1:33:27 they deserve, that’s not really true. And he also kind of has like an ethical reaction, which he
    1:33:33 ends up calling like an aesthetic reaction. He’s kind of like, it just doesn’t feel right to say
    1:33:38 that. And so he struggles for a while with like, what do I say? And then he basically says, capitalism,
    1:33:44 it can’t be the core. Discipline of the market can’t be the core to your ethics. It has to be
1:33:48 something else. So that's when he will decide it's freedom, it's individual freedom. That's really
    1:33:54 the ethical core and capitalism makes individual freedom possible because capitalism is dedicated
1:34:04 to maximizing that. And so the defense of capitalism comes through freedom. And at this stage in history,
    1:34:10 he’s able to set aside nice worry about inequality and say, when I look at the data, and this is true
    1:34:16 for the macro data mid-century, incomes are actually converging, right? And also, if you
    1:34:21 look historically, if the country goes from, say, a more feudal agrarian society to a more
    1:34:26 market-based society, incomes will converge. Now, then they might start to diverge, but
1:34:30 Friedman is in the moment when he's seeing the convergence. And so that's what he's really
    1:34:37 focused on. So he believes he can justify capitalism through the ethic of freedom. And he also believes
    1:34:43 that inequality is a problem that can be addressed through specific policies. And it’s not a
    1:34:49 fundamental feature of capitalism. In other words, he doesn’t see capitalism as an engine of inequality
    1:34:53 the way that Frank Knight did and the way that maybe some critics on the left would.
    1:34:59 How did he conceive of freedom? So individual freedom, economic freedom, political freedom,
    1:35:04 civil freedom, what was the tension, the dynamic between those different freedoms for him?
    1:35:10 So he really begins focusing on economic freedom. And he says it’s really important to focus on
    1:35:16 economic freedom because in the United States, we don’t value it enough. So by economic freedom,
    1:35:23 he means the ability to keep what you’ve earned, the ability to make decisions about your business,
    1:35:28 the ability to make decisions about the work that you do. So this will translate into things like
    1:35:32 there shouldn’t be a minimum wage. He believes the minimum wage has bad social
    1:35:36 effects, but he also believes you should be free to accept a job at a wage that you yourself have
    1:35:44 determined is acceptable to you. And there should be very minimal regulation, questions around safety
    1:35:48 and other things because the market will ultimately, if you create an unsafe product,
    1:35:55 it won’t sell. And that will be that’s sort of your incentive. So he really centers economic
    1:35:59 freedom because he thinks especially, and he’s really speaking from his vantage point in the
    1:36:04 universities and speaking to the kind of liberal consensus of the 50s and 60s, he thinks economic
    1:36:09 freedom has been undervalued in the American context. So he really wants to push that forward.
    1:36:13 He’s really kind of taking political freedom for granted. Now later in his career, when he becomes
    1:36:19 famous, he’s traveling the world, he spends time in Chile, and this country is now being
1:36:25 ruled by a dictator, Augusto Pinochet, who starts introducing economic freedom, but there's no
    1:36:29 political freedom. And Milton Friedman believes eventually these two things are going to go
    1:36:35 together and tells Pinochet, “You’ve got economic freedom, and eventually it’s going to mean
    1:36:39 political freedom.” Pinochet is like, “Okay, fine. I’m not really interested in that. I want to
    1:36:44 know what I should do about inflation.” But then when Milton Friedman leaves Chile, he is
1:36:50 attacked and vilified for having been a supporter of the regime,
    1:36:55 which he’s not, but he realizes he has talked too much about economic freedom and he hasn’t
    1:36:58 talked enough about political freedom. And he’s kind of assumed political freedom because he’s
    1:37:04 come from the American context. So then he starts recalibrating them and saying, “You know what?
    1:37:08 If you don’t have political freedom, you’re never going to be able to hold on to economic freedom.”
    1:37:14 So he sees that they need to go together and they don’t naturally go together. And so he starts to
    1:37:20 become more clear in talking about political freedom. Now let’s fast forward to the end of
    1:37:26 his life, and he’s witnessing the emergence of what we call the Asian Tigers. So capitalist economies
    1:37:32 that are doing very well, but they don’t have political freedom. But then he observes, they
    1:37:37 don’t have political freedom in that you can’t vote in a free and fair election, but they also
    1:37:44 don’t have a stazi. They don’t have a KGB. They’re not hauling people off for their wrong opinions.
    1:37:49 So then he says they have something called civic freedom. And so he kind of defines this third
    1:37:55 sphere, civic freedom of debate, discussion, interpersonal relations, but you can’t be political.
    1:38:02 So this is a late in life addition. I don’t think it’s fully theorized. I think what it shows is
    1:38:08 that during the Cold War, he very much believed economic and political freedom, capitalism and
    1:38:15 freedom, democracy, the United States capitalism, this all went together. And he starts to see at
    1:38:19 the end of his life the emergence of different social systems that are using market trading
    1:38:24 and allocation, but aren’t giving people similar freedoms. And he’s kind of puzzling over that.
    1:38:31 Now he always believes that China will democratize. And he thinks China’s on the path to democratization,
1:38:36 in part because Chile does democratize. Eventually, Pinochet is voted out and it's
1:38:41 become a democratic capitalist and very prosperous country. And he thinks that's exactly what's happening
1:38:46 in China. He sees Tiananmen, and he doesn't live long enough to get to where we are now,
1:38:51 in which it doesn't look like political or civic freedom is coming to China anytime soon.
    1:38:58 And he did oppose the dual-track system of China, meaning like the market is bottom up,
    1:39:03 the government in China is top down, and you can’t have both.
    1:39:06 He thought you couldn’t have both. Yeah.
    1:39:08 He thought eventually the market would triumph.
    1:39:12 Well, it’s a really powerful idea to say, okay, maybe there’s not political freedom,
    1:39:18 but just hold on to the economic freedom and eventually that’s going to give political freedom.
    1:39:23 Is that correct to say like start to work on the economic freedom
    1:39:26 and the political freedom piece will take care of itself?
    1:39:31 That’s what he believed. That’s what he believed. Yeah, I think it’s more complicated than that,
    1:39:36 right? The people who gain out of a system of economic freedom could decide to collude
    1:39:41 in a system where there isn’t political freedom. That’s certainly a scenario.
    1:39:46 So, but that was, again, that’s that core idea of freedom, right? And that core belief
    1:39:50 that people want freedom and that people are drawn to freedom.
    1:39:56 Just to go back to Frank Knight a little bit, he wrote an essay called The Ethics of Competition,
1:40:01 the metaphor that economic life is a game, and then maybe that extends to society as a whole,
    1:40:07 like the entirety of it is a competitive game. And Milton Friedman,
    1:40:12 I think, adapted some of this, appreciated some of this. Can you speak to this metaphor?
    1:40:18 Yeah, I think what the metaphor of the game does is it asks you, okay, well, what are the rules then?
    1:40:24 And let’s focus on the rules that keep the game going. So, he didn’t use the concept of an
    1:40:28 infinite game, but I think that’s an interesting one, a game that all the players are in and keep
    1:40:33 going again and again and again. And so, that helped Knight, along with Hayek,
    1:40:41 shift from the allocation question, who’s getting what, are things allocated fairly
    1:40:46 to the more structural question of, like, what are the rules of the game that we need to keep
    1:40:52 this system going? And so, for a while, that led to the discussion of monopoly, well, we need rules
    1:40:58 against concentration, or we need the rule of law. Everyone needs to be treated equally.
    1:41:06 People need to know what they’re up against. And then, going back to monetarism,
    1:41:14 the core of monetarism is a rule. Friedman called it a monetary growth rule. And so, again, what
    1:41:21 keeps the economic game going is a rule about how much the money grows that everybody knows.
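A toy illustration of why a fixed growth rule is predictable in a way discretion is not. This is a sketch, not Friedman's model: the growth numbers are invented, and it assumes a deliberately crude quantity-theory world where the price level simply tracks the money stock:

```python
# K-percent rule vs. discretionary "stop-go" policy, with invented numbers.
years = 10
k = 0.03  # fixed 3% annual money growth under the rule

money_rule = [100.0]
money_discretion = [100.0]

# Hypothetical discretionary choices: hit the gas, then hit the brake.
stop_go = [0.08, 0.10, -0.02, 0.07, -0.01, 0.09, 0.00, 0.06, -0.03, 0.05]

for t in range(years):
    money_rule.append(money_rule[-1] * (1 + k))
    money_discretion.append(money_discretion[-1] * (1 + stop_go[t]))

# Under the rule anyone can forecast next year's money stock (and, in this
# crude world, the price level); under discretion nobody can.
print("year   rule-based M   discretionary M")
for t in range(years + 1):
    print(f"{t:>4}   {money_rule[t]:>12.1f}   {money_discretion[t]:>15.1f}")
```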
    1:41:27 Nobody’s guessing. Nobody’s changing the rules to help their side or to help the people they’re
    1:41:34 friendly with. We all know it’s there. It’s clear. It’s easy. And so, that emphasis on rules, I think,
    1:41:38 really has a through line. It goes into Hayek’s competitive order, and then it goes into the
    1:41:48 monetary growth rule. And then, today, monetary policy makes use of monetary policy rules. We
    1:41:54 have not abandoned discretion, but rules are used as a heuristic or a check, and those come out of
    1:42:03 Friedman’s thinking. And so, it’s really profound. And it was always counterposed to discretion,
    1:42:09 which Friedman worried would be subject to capture or political corruption if you had
    1:42:14 discretion in policymaking or if you had discretion in these very big areas. Then,
    1:42:20 people would stop competing against each other in a market, and they would turn their attention
    1:42:27 to getting control of the rules or the rule makers. So, if there’s clear, transparent rules,
    1:42:33 then you’re free to play the game. Yes, exactly. But then, depending on the rules,
    1:42:40 the game can turn out the equilibrium that arrives at might be different. So, that speaks
    1:42:46 to the mechanism design, the design of the rules. Yeah, and that was, again, to go back to the idea
    1:42:52 separating new liberalism or neoliberalism from classical liberalism was more of a focus on what
    1:42:56 are the rules that are needed. What is the competitive order that we want to set out?
    1:43:03 How do we design in social safeguards? How do we think about it? And so, that shift
    1:43:09 towards monetary policy and focusing on stable monetary growth, that becomes really important
1:43:15 in the post-70s era as one of the basic rules of how capitalist economies should function. And it
    1:43:21 becomes really important because they see the example of, say, countries most notably in Latin
    1:43:28 America where monetary rules weren’t followed and different governments played politics with
    1:43:35 their currencies, and that created just huge upheaval and huge social loss, economic loss,
    1:43:41 just economic disaster. So, my friend, she’s a poker player, philosopher of sorts,
    1:43:46 great human being. She has a podcast called Win-Win that everybody should listen to.
    1:43:51 And the whole purpose of the podcast and her whole way of being in spirit is to find win-win
    1:43:59 solutions. So, do you think of economic life as having such win-win solutions? So, being able
    1:44:05 to find rules where everybody wins or is it always going to be zero sum? I definitely believe
1:44:12 in win-win, but with a big asterisk, like you can have win-win, but it can feel like win-lose,
    1:44:20 which is it’s not just are people getting more, it has a lot to do with do people feel
    1:44:25 they’re getting more and do people feel they’re getting what’s fair and equal. So, you could have
    1:44:33 a situation, for instance, if you look at the history of going back to Chile, it has
    1:44:40 steady growth, steady income growth, steady diminution of inequality, and a high level of
    1:44:46 discontent within the society and a high level of belief that the society is corrupt and unfair.
    1:44:51 And that’s what matters. How people feel about it, how people perceive it,
    1:44:57 matters. And we saw this recently, you can’t just come out with a bunch of statistics and
    1:45:03 tell people you’re winning in this game if they feel like they’re losing. So, that goes to all
    1:45:10 the non-rational factors and all the comparative factors that people have when they think about
    1:45:15 where they are vis-a-vis other people in society. So, we’re just incredibly social creatures. We’re
    1:45:20 incredibly attuned to our status, to rising and falling, to where we sit vis-a-vis others.
    1:45:26 And so, that absolutely has to be attended to. It can’t just be an economic analysis.
    1:45:32 That’s so interesting that the experience of the economy is different than the reality of
    1:45:38 the economy. On the topic of corruption, I think the reality of corruption versus the perception
    1:45:43 of corruption is really important in a lot of these nations. You take Ukraine, for example,
    1:45:50 the perception of corruption has a big impact on the economy. You don’t want to invest, you’re
    1:45:54 very cautious as a business person. The reality of corruption could be way different than the
    1:46:01 actual perception. But if narratives take hold, it’s a self-fulfilling prophecy that it has a
1:46:06 big effect on the psychology of the people involved. It's interesting. Yeah. I mean, this goes back to
    1:46:12 Keynes’ analysis of the Great Depression, right? If people won’t invest, if they’re spooked,
    1:46:18 if the investing classes are spooked, you could be in real trouble. And in some ways,
    1:46:24 this simple analysis of the problem and proposal of a solution was enough to restore
1:46:30 eventually the path to economic prosperity, right? That's Franklin Roosevelt, nothing to fear but
    1:46:36 fear itself. The sense of we know we have a future, we have optimism, then you believe in it. And to
    1:46:42 go back to thinking about money, right? Money works because we all believe in it. It’s a form
    1:46:48 of social trust. And it’s a form of belief and faith in our society and in the other people in it.
    1:46:51 And when that breaks down, the money system will break down as well.
    1:46:57 Is there something Milton Friedman said and thought about how to control the psychology of
    1:47:04 humans at scale? No. I mean, what’s interesting is he does talk, especially in his later work,
    1:47:11 he says we have fiat currency and this is an experiment. And we don’t know how it’s going
    1:47:16 to turn out. And it’s turning out okay right now, but we’ve always had a commodity based or backed
    1:47:24 currency of some form or another. And this is the first time. And so who really knows, so far,
    1:47:30 so good. And he also is very attuned. It’s interesting in his later writings when he’s
    1:47:35 thinking about this to, sure, I could design a monetary system that would be different. But
    1:47:41 when I look at history, I see that monetary systems have always say incorporated the role of the
    1:47:47 state because it’s so important to people. And so therefore, my theoretical designs really have
    1:47:52 to be tempered by what I’ve actually seen happen in history. So maybe you could speak to this
    1:47:59 tension between how much government intervention is okay for Milton Friedman. So he was against
    1:48:04 minimum wage, but he was for guaranteed minimum income. Can you explain actually the difference
    1:48:09 between the two? Yeah. So this was one of the discoveries I made in my research. I found a
    1:48:14 paper from 1938, he wrote advocating what we would call today a universal basic income,
    1:48:20 a minimum income. And he basically sees this as part of the effort to create a new liberalism,
    1:48:25 right? And he basically says we have advanced societies, we have prosperous societies,
    1:48:32 we have decided in keeping with our morals and our ethics that people should not be starving
    1:48:36 in an advanced society like this. The question is how are we going to make that happen?
    1:48:41 And he ended up believing the best thing to do was to put a floor under everybody.
    1:48:48 And he said you can get that based on your income. If you have a lot of income, you don’t get it.
    1:48:52 If you have a little income, you might get a little bit of it. If you have no income,
    1:48:57 you get enough of it. And he believed in the beginning, you should base that on what was
    1:49:02 required to buy food, right? That that would be kind of an objective. You could objectively determine
    1:49:07 the nutrition and the price of food. And so that for him, it’s important, he says,
    1:49:12 it’s keeping with a liberal polity because it’s not intervening in the price system,
    1:49:18 it’s not intervening in economic relations. And it does not, in his view, require a bureaucracy
1:49:25 to administer. It does not, in his view, require that you qualify for it by virtue of being in a
    1:49:32 protected class. You just get it as kind of part of your membership in this general citizenship
    1:49:40 body. And so that, to him, was really different than a minimum wage because it did not interfere
    1:49:46 with the work bargain. His belief about minimum wages was specifically that it priced out unskilled
    1:49:52 labor. That what an unskilled laborer had to offer was a willingness to work for a very low wage.
    1:49:58 And if you set the minimum wage too high, businesses instead of hiring that higher
    1:50:03 priced labor would not hire, or like we could think of today, right? They put in an electronic
    1:50:08 checkout, you know, or something like this where you don’t actually need the labor. So he really
    1:50:13 believed the minimum wage had that perverse incentive. Now, there’s, this is a live debate
1:50:18 on what minimum wages do. And there seems to be a level at which you can set them where they do
    1:50:24 not have that perverse effect and, in fact, can kind of create people with more spending money
    1:50:30 that then powers the economy. So he had a very sort of clinical analysis of that, rather than
1:50:37 an empirical one, a really abstract analysis. But the minimum income is fascinating because it
    1:50:45 seems very leftist to us. But what it is, is it’s purely individualistic. And it never really happened
    1:50:52 because it was so purely individualistic because American social policy typically identifies
    1:50:57 like this group of people is deserving and will give them benefits. So the classic example is
    1:51:03 soldiers, veterans. Another example is mothers raising dependent children. These people deserve
    1:51:08 money. The rest of you, you better go out and work. And so Friedman’s proposal, it really
    1:51:15 caught on in the ’60s. It ultimately went nowhere, but it was no litmus test, no income analysis.
    1:51:20 Just we’re going to give you this much. Everyone’s going to get this much. And he decided once mass
1:51:25 taxation had come in, you could do it through taxes. And you could just do rebates: people who didn't pay
1:51:30 income taxes got a rebate. That actually came to pass. It's the earned income tax credit. And it's
    1:51:36 considered extremely successful by policy analysts. It does what it’s supposed to do. It’s not that
    1:51:44 expensive. And so I see that as a kind of paradigm of his thinking in that instead of creating a
    1:51:50 bureaucracy that does some form of redistribution, or instead of trying to intervene in the market
    1:51:56 for labor or the market for something else, the market for housing, you provide a cash grant that
    1:52:03 people spend for themselves. And so interestingly, that’s what happened in the emergency situation
    1:52:07 of COVID, right? That’s exactly what people did. They followed that model. We just got money out
    1:52:12 quickly. And there’s still a lot of discussion about UBI as something that should be done.
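To make the phase-out mechanics described above concrete, here is a minimal sketch of a negative income tax calculation. The income floor and phase-out rate are purely illustrative assumptions, not figures from Friedman's actual proposal.

```python
# Minimal sketch of a negative income tax (NIT) of the kind discussed above.
# The income floor and phase-out rate are illustrative assumptions, not
# figures from Friedman's actual proposal.

def nit_benefit(earned_income: float,
                income_floor: float = 12_000.0,
                phase_out_rate: float = 0.5) -> float:
    """Cash benefit for a household under a simple NIT.

    A household with no earnings gets the full floor; the benefit shrinks
    by `phase_out_rate` dollars per dollar earned and reaches zero once
    earnings hit income_floor / phase_out_rate.
    """
    return max(income_floor - phase_out_rate * earned_income, 0.0)

if __name__ == "__main__":
    for income in (0, 6_000, 12_000, 24_000, 40_000):
        print(f"earned {income:>6,} -> benefit {nit_benefit(income):>9,.2f}")
```

Because the benefit only phases out gradually, extra work always raises total income, which is the contrast with the benefits cliff mentioned later, and total spending on it rises automatically when incomes fall, which is the automatic-stabilizer property discussed below.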
    1:52:20 And I think it’s always going to be hard to pull off because I think Americans and their elected
    1:52:24 representatives don’t want to provide a universal benefit. They want to provide a targeted benefit
    1:52:30 because they believe there’s like a moral component here. And Friedman advanced a policy that was
    1:52:37 really abstract and really just kind of, it was devoid of judgment. It was like pure and beautiful
    1:52:43 in that way, but utterly impractical. And it really focused on not interfering with the market
    1:52:48 and the signals that the market provides. It was really against price controls for the same kind of
    1:52:54 reason. Yeah, exactly. You could say, okay, but how does this not interfere with the market, right?
    1:52:58 If you provide people with a minimum income, won’t that change their incentives to work, etc?
    1:53:02 I mean, there’s a big body of research on this. Most of it seems to show,
    1:53:09 one, it’s way better than the current benefits cliff where you have to not work to get your
    1:53:17 benefits. And any incentive impact on working seems to be much lower than would be expected. But
    1:53:23 I’ll let the economists and the social scientists dispute that one and figure it out empirically.
    1:53:27 Hopefully we should be able to. Yeah, there’s been a bunch of studies. It’s interesting,
    1:53:31 even just how you conduct studies like this, how you do these kinds of experiments,
    1:53:38 especially if you’re empirically minded. Because a lot of the studies I saw are pretty small.
    1:53:46 So how do you make big conclusions about how to run the world, how to run the economies
    1:53:55 from such small studies? It’s all a fascinating experiment of ideas. And it’s also inspiring to
    1:54:01 see individuals and maybe small groups of individuals like the Chicago School of Economics
    1:54:09 sort of shake out what we believe and how we run the world. Yeah, inspiring. Yeah.
    1:54:14 You call Milton Friedman the last great conservative,
    1:54:22 maybe to be a little bit sort of controversial and make bold statements that get everybody excited.
    1:54:25 But what do you mean by that? And what makes a great conservative?
    1:54:31 So I was really thinking of that in terms of kind of American political identities
    1:54:37 and particularly the 20th century conservative movement, which people are always saying this
    1:54:42 isn’t conservatism. And I said, yes, in America, conservatism is different. It looks different.
    1:54:47 It feels different. Conservatism in America builds in a big component of what we could
    1:54:55 call libertarianism, pro-capitalism, anti-government ideas. And critics will say, but conservatism
    1:55:01 is about conserving institutions and practices and it has a role for the state and an organic
    1:55:07 community. But in the United States, it has always had, since the 20th century, this anti-statist strain:
    1:55:14 let’s let the market rip. Let’s not worry about what the market does to established traditions.
    1:55:19 The market is our tradition. Capitalism is our tradition. So that was really synthesized.
    1:55:24 Many people were there, but Friedman and the importance of his books,
    1:55:31 Free to Choose, Capitalism and Freedom, the television series he did, all of these were
    1:55:37 like core components of this American conservative synthesis as it evolved. And I really see that
    1:55:44 as having broken down. It is scattered into different pieces. We don’t know where they’re
    1:55:51 going to come back together again. But Friedman’s push for open global markets,
    1:55:55 unfettered free trade, that’s getting pushback on both the left and the right.
    1:56:02 That I think is just a major sign that both parties have turned away from this vision.
    1:56:07 I don’t know what they’ve turned to, but the way that Friedman brought these pieces together,
    1:56:11 I think that political moment has passed. So that’s what I was trying to talk about
    1:56:16 with the book title. There’s another way, though, in which I think of him also as a
    1:56:22 conservative, which is that within the field of economics, he went back to this older idea,
    1:56:28 the quantity theory of money, and said, this still has value. This can be applied in the
    1:56:33 modern day. It is something to teach us. And he pushed back against this trend towards
    1:56:38 mathematicization. So he kept writing books. You can still pick up a Friedman book and read it.
    1:56:44 There are lots of economics articles and outputs that are unreadable unless you’re in the field.
    1:56:50 And so I think in that way, he was trying to conserve methodologically and intellectually
    1:56:55 the traditions of the field. The work that he, and particularly Anna Schwartz, did, that
    1:57:01 literal counting of things and deep analysis of data from the field, was completely
    1:57:06 unfashionable in his time. Now, we’ve sort of gone back to it with big data and with computers,
    1:57:10 but he helped bring that forward and preserve that tradition. So I think of him kind of
    1:57:15 intellectually as a conservative, if you think of the mode of his thought. And so,
    1:57:21 I mean, what makes a great conservative is one who takes those older ideas and makes them fresh
    1:57:26 for a new time period. I think that’s exactly what he did.
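For reference, the older idea mentioned above, the quantity theory of money, is conventionally summarized by the equation of exchange. This is the standard textbook formulation, not a quotation from Friedman:

```latex
% Equation of exchange underlying the quantity theory of money
% M: money supply, V: velocity of money, P: price level, Y: real output
M \cdot V = P \cdot Y
% Monetarism, as Friedman revived it, rests on the claim that V is
% relatively stable, so sustained growth in M beyond growth in Y must
% eventually show up as growth in P, i.e. as inflation.
```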
    1:57:32 You’ve also spoken about the fact that in the times when he was sort of out in public,
    1:57:42 there was more of an open battle of ideas, where conservatism had figures like William F. Buckley. There was
    1:57:52 a more vibrant, deep debate over ideas, where it seems less deep now.
    1:57:58 I mean, that is the thing that’s hard, especially for the students I teach today,
    1:58:03 to be like, there were arguments about ideas and conservatives won a bunch of them,
    1:58:10 and that happened in the late 1960s and 1970s, when one set of arguments was about
    1:58:16 economics, like, okay, this idea of stimulating the economy by spending more, it has a downside.
    1:58:21 The downside’s called inflation, and the downside’s called too much regulation.
    1:58:29 You’ve gone too far in kind of bottling up the actual sources of economic growth and dynamism,
    1:58:34 and we have to let those free. In social policy, there was also a critique.
    1:58:40 The Great Society had all these ideas for ending poverty, and people came and analyzed them
    1:58:45 and said, the programs aren’t helping. In some ways, you’ve actually created engines to trap
    1:58:50 people in poverty because you’ve given them a benefit and said, if they actually start to work,
    1:58:55 they lose the benefit. You’ve created all these perverse incentives, and these ideas were fought
    1:59:00 out, they were empirical, they were controversial, and they were based on really deep research
    1:59:10 and really deep argumentation. It seems that era has passed. It seems like we’re driven much more
    1:59:17 quickly by moods rather than thought-through ideas. Right now, it seems like the ideas come
    1:59:24 after; they follow the political mood and try to put together the underpinning of it, where it
    1:59:28 really was the opposite for much of the 20th century. It does seem like we lead with emotional
    1:59:36 turmoil, and the ideas follow, versus leading with the ideas and having, sort of, the emotion of the masses
    1:59:41 respond. Right, exactly. If we think of the evolution of conservatism, it was a whole set
    1:59:49 of ideas that was crafted and refined in the 1950s, 1960s, 1970s, and sort of really found its emotional
    1:59:56 standard bearer, translator, salesperson in Ronald Reagan, who incidentally had been following these
    2:00:01 ideas as they developed and had been honing his ability to express them and apply them politically.
    2:00:08 It’s the very opposite if we look at Trump as the political definer of the era. There’s a set of
    2:00:15 ideas, but it was more attitudes, impulses, vibes, and the ideas are coming after that,
    2:00:22 trying to figure out how they patch on. It’s interesting to watch, to see that difference,
    2:00:28 and I hazard that a lot of it just has to do with the immediacy of the media environment we’re in,
    2:00:33 and it’s just the power of media messages to get out so fast.
    2:00:41 What do you think Milton Friedman would say about Donald Trump, about him winning in 2024,
    2:00:46 and just in general, this political moment? I think he would love Doge.
    2:00:54 I think he would focus on that part because I think he would really love it. He would be
    2:01:01 very alarmed by the idea of tariffs and very alarmed by the return to protectionism. I mean,
    2:01:07 I think he believed that part of what made the world peaceful in the second half of the 20th
    2:01:13 century, as opposed to during World War II, was that the world was knit together more by trade,
    2:01:18 and that was the great hope: that if people traded with each other, they wouldn’t fight. He was also
    2:01:25 a proponent of the free movement of capital. He would absolutely oppose this idea that
    2:01:33 Nippon Steel wasn’t allowed to invest in the United States. I think he would struggle. He
    2:01:39 wholeheartedly embraced Reagan, and he worked to minimize the parts of the Reagan legacy he didn’t
    2:01:44 like. I think he would find it harder to embrace Trump because he’s not of that
    2:01:49 style. He just had a different style, but I’m guessing he would have come around.
    2:01:56 I think he would just say, okay, we have a chance to reduce the size of government. At the same time,
    2:02:03 the spending plans of the Trump administration are not fiscally conservative in any way,
    2:02:09 and that was his concern. It was not so much with debt, but with the feeling that there’s no
    2:02:14 mechanism to stop the growth of government, that it just grows and grows and grows. He ended up
    2:02:22 believing even deficits aren’t so bad because, he thought, they make politicians cautious about
    2:02:29 continuing to spend. I have to believe he would be concerned about the potential threats to the
    2:02:36 US currency’s position as the world’s reserve currency with increased levels of debt and spending.
    2:02:48 He was concerned about low interest rates. He died, I think, in 2006, but it was in the
    2:02:52 beginning; he didn’t see the zero lower bound, but he saw low interest rates and he said this isn’t
    2:02:57 necessarily good. Everyone’s talking about low interest rates as if they’re good, but there
    2:03:06 should be a price on capital. There should be a price on this. It shouldn’t be so low. He still had
    2:03:12 some of the macro insights that I think are important. You wrote the Wall Street Journal essay
    2:03:19 titled How Inflation Ended Neoliberalism and Re-Elected Trump. Can we weave that into this
    2:03:25 discussion in terms of inflation and Trump? What’s the main idea of the essay?
    2:03:34 The main idea is looking back and saying, today we have been living in a world where people have
    2:03:44 been focused on monetary policy, steady monetary policy, free trade, reducing regulation. This is
    2:03:52 all called the neoliberal era. My argument was that a lot of that was driven by inflation.
    2:03:58 We have Milton Friedman predict inflation in 1967. It starts breaking out in the 1970s,
    2:04:08 in Britain and the United States. Every institution was designed around stable prices. Once inflation
    2:04:15 broke out, prices were no longer stable. For example, tax rates weren’t inflation adjusted.
    2:04:21 If your income went up because of inflation, you might bump from a low tax rate to an extremely
    2:04:25 high tax rate, but you don’t actually have more money. On paper, you have more money,
    2:04:28 but everything costs more. You don’t actually have more money and your taxes have gone up.
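As a rough illustration of that bracket-creep mechanism, here is a small sketch with invented brackets and numbers; it is not the actual 1970s US tax schedule.

```python
# Illustrative bracket-creep example with invented brackets and rates;
# not the actual 1970s US tax schedule.

BRACKETS = [  # (upper bound of bracket, marginal rate)
    (20_000, 0.15),
    (40_000, 0.30),
    (float("inf"), 0.50),
]

def tax_owed(nominal_income: float) -> float:
    """Progressive tax on nominal income; brackets are NOT inflation-indexed."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if nominal_income > lower:
            owed += (min(nominal_income, upper) - lower) * rate
        lower = upper
    return owed

# Year 1: $38,000 income, stable prices.
# Year 2: 10% inflation; wages also rise 10%, so real income is unchanged.
year1_income, inflation = 38_000.0, 0.10
year2_income = year1_income * (1 + inflation)

for label, income in (("year 1", year1_income), ("year 2", year2_income)):
    effective = tax_owed(income) / income
    print(f"{label}: income {income:,.0f}, tax {tax_owed(income):,.0f}, "
          f"effective rate {effective:.1%}")
```

In the second year the raise only keeps pace with prices, yet part of the income is now taxed at the top illustrative rate, so the effective tax rate rises even though real income has not.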
    2:04:35 That kicks off the taxpayer revolt. There’s a whole shift of American corporations towards
    2:04:41 focusing on financial investments because the tax breaks they used to get for depreciation,
    2:04:46 for building new factories, are not inflation adjusted. They no longer pay off in an inflationary
    2:04:54 environment. Then when Paul Volcker comes in, early 1980s and starts fighting inflation,
    2:05:00 really pushes up interest rates to bring down inflation. That completely reorders the banking
    2:05:07 sector because banks had statutory legal limits on the interest they could charge. Once
    2:05:14 general market interest rates exceeded that, there was a proliferation of new financial forms to take
    2:05:22 advantage of that. My point was the era we live in was ushered in by inflation. Then everyone
    2:05:29 turned against all the formulations we had and said, “Well, these have hollowed out our industrial
    2:05:36 base. We’ve got too much immigration. We’ve got too much economic openness. We need to
    2:05:40 reshore. We need to focus. We need to turn against all these things. We need to spend more. We’ve
    2:05:49 disinvested.” The net result of that turning away, I argued, is people forgot about inflation.
    2:05:53 They really forgot it could ever exist. You had a whole set of theories on the left,
    2:05:56 like modern monetary theory, that basically said, “We don’t really need to worry about inflation.
    2:06:03 We can spend what we want.” Lo and behold, inflation came back. My argument is that
    2:06:10 has now opened the door to the presidency of Donald Trump, which is potentially a deeply
    2:06:17 transformative moment that will change the size and shape of government, that may change our foreign
    2:06:23 policy profoundly, that may change our immigration policy, that may change the demographics of our
    2:06:30 country. All of that, in my thesis, has been made possible by inflation. The great
    2:06:37 mistake of the past years was to forget how fundamental inflation was to the rise of the
    2:06:46 last political order and to profoundly underestimate how much inflation would change the current
    2:06:50 political order. I just think it’s one of these things. This is why I think you should study
    2:06:55 history, because I think if you had studied history, you would be aware of this. It’s so easy
    2:07:01 for people to forget just like the banks forgot that interest rates could ever go up. They got so
    2:07:07 used to it. It’s only a 10, 15-year thing, but to them, that seems like forever. I really do
    2:07:14 believe what history teaches you to do is just have a much vaster scope in your vision and then
    2:07:18 take into account the possibilities of so many things happening that are different
    2:07:24 than what’s happening today. I just hope we don’t forget about inflation entirely, but here’s the
    2:07:31 thing. There is quite a strong chance that Trump’s policies will initiate even worse inflation,
    2:07:36 and then they will prove to be his undoing. The ironies of inflation could be continuing.
    2:07:45 Like you said, Milton Friedman would be a big fan of Doge. If he was still here today and rolled
    2:07:51 with Elon Musk and Vivek, what advice would he give? What do you think he would focus on in terms
    2:07:58 of where to cut, how to cut, how to think about cutting? His signature policy move, I talk about
    2:08:09 this, is taking the price mechanism and trying to make that into the policy. That seems obvious to
    2:08:14 us today, but in the era that he came in, there would be rent controls. Let’s take away rent
    2:08:21 controls. Let’s let housing prices set themselves. He was very against national parks. I actually
    2:08:27 think the national parks are good, so I hope the Doge people don’t take this up. Rather than an
    2:08:31 allocation to fund the national parks, they should be funded by the revenue that they bring in when
    2:08:39 people visit them. Let’s let prices make the decisions here. I think that would be one of the
    2:08:44 key pieces. The other thing I think he’d really be thinking about, he wrote about this a lot about
    2:08:51 occupational licensure and barriers to entry. He felt like one of the worst things that government
    2:08:57 does and sometimes it’s private entities that do this is create barriers to entry to protect
    2:09:01 industries and markets. He talked about this in the case of the medical profession, which I think
    2:09:07 is actually not a good example because I think we all have a collective investment in having medical
    2:09:14 doctors be highly trained. For instance, you could look at nail technicians or hair cutting.
    2:09:18 There’s often these licensing requirements. Or, there’s a big kerfuffle, I think it’s in
    2:09:22 DC, where they passed a law that to run a childcare center, you have to have a college degree. What does
    2:09:26 that do? That disenfranchises a whole bunch of would-be entrepreneurs who don’t happen to have
    2:09:30 a college degree, but probably could be really good at this particular business. I think he would
    2:09:39 be saying, look out for where private interests have used the state to protect themselves and
    2:09:47 clear away those types of barriers and let competition through prices guide outcomes.
    2:09:53 Yeah, so open up for more competition and allow for more signals from the market
    2:10:02 to drive decisions, which would actually naturally lead to cutting a lot of the bureaucracy of
    2:10:07 government. I think the other thing he would probably be arguing for is again, go back to
    2:10:14 the design of the minimum income or the negative income tax, that there’s a way he ultimately
    2:10:18 decided to run it through the tax system. The government’s already collecting this data.
    2:10:22 They already have your information and they can just send the money out through the system.
    2:10:27 Rather than having a social bureaucracy where you have to come in in person, you have to fill
    2:10:33 out forms, you have to document, do you own a car? What’s your income? Who lives in the household?
    2:10:42 I think he would say, and his analysis of that was, that who that really benefited was the bureaucracy,
    2:10:48 that process, that paperwork that implemented those norms, and that if you could pull that away,
    2:10:54 you could get help out where it was needed much quicker without having this drag of people doing
    2:10:58 sort of unproductive work of administering these systems. I think trying to cut administrative
    2:11:03 overhead and what he didn’t have then, which we have now, is the technology that we have and the
    2:11:11 ability to send benefits out via smartphone or just to move so much faster and to handle
    2:11:17 information on a mass scale so much faster. It’s painful, but I think one of the big things you
    2:11:22 can do is just that, which is digitalize. I don’t know if that’s a word, but just
    2:11:33 convert everything to where the speed of signal can be instantaneous. There’s no paperwork.
    2:11:41 It goes immediately. Then that means that the pricing signals and all these kinds of things
    2:11:45 are just immediately available to people. That seems to be the low-hanging fruit: government
    2:11:53 IT systems could be vastly improved. But that would result, again, in a lot of people getting
    2:12:02 fired. I think somebody submitted a question for me saying, “What are your thoughts as a person
    2:12:07 who cares about compassion? What are your thoughts about government employees, of which there are a lot,
    2:12:15 that are going to be hurt by Doge?” It’s always a really difficult question.
    2:12:22 A lot of people get fired to make room for a new system that’s going to lead to a lot of pain.
    2:12:28 There is going to be a lot of pain. I don’t know what the solution is. I think that’s also part of
    2:12:35 why Friedman favored a minimum income. He talked about it being countercyclical. In other words,
    2:12:41 when things were really bad, the spending level on it would naturally go up. This is what economists
    2:12:48 today call an automatic stabilizer. Then when it’s not needed, the cost of it goes down.
    2:12:54 Maybe there’s a way to sweeten it with honey and have people take buyouts or things
    2:12:59 like that. That would certainly be a way better way to go. I did a podcast with Javier Milei.
    2:13:05 He has consistently praised Milton Friedman and cited him as one of his inspirations.
    2:13:11 So, what do you think Milton Friedman would say about what’s going on in Argentina and
    2:13:11 what Javier Milei is trying to do in Argentina? Yeah, I think he would appreciate it. I mean,
    2:13:22 I think Milei is much more of an Austrian-inspired thinker, but I think he definitely appreciates
    2:13:28 Friedman. On the macro level, Friedman always understood it’s really painful to treat inflation,
    2:13:35 but the more you put it off, the harder it is. So, I think he would be trying to get him,
    2:13:44 as he’s doing, to just message that short-term pain, long-term gain. I think he’d be very supportive.
    2:13:49 I think he’d be thrilled to see also that Malay is very good at explaining these abstract ideas
    2:13:53 and putting his policies in the framework of the bigger picture. That was really meaningful
    2:14:01 to Friedman. I don’t know how politically persuasive it is overall. Milei is very intense.
    2:14:07 He doesn’t have the same sort of gifts of salesmanship and setting people at ease that, say,
    2:14:11 someone like Ronald Reagan had, but it seems to be that’s what his country was calling for right
    2:14:22 now. Yeah, he has more chainsaw-less, more blanket. Javier recollects this line from
    2:14:26 Milton Friedman. I don’t know if this is accurate, but if you strive for equality over freedom,
    2:14:30 you often get neither, but if you strive for freedom, you often get both. Do you think
    2:14:39 there’s truth to this? I think on the big picture, definitely. We’ve seen that focusing too much on
    2:14:47 equality, because equality is such an alluring word, can lead you to downgrade all kinds of
    2:14:52 other things that are really important. But I really think it depends on how you’re defining
    2:15:03 freedom. The statement is too big and too broad. If you’re talking about freedom, if by freedom,
    2:15:10 you mean not having to pay taxes if you’re successful, I think that can have all kinds
    2:15:15 of knock-on effects. The idea that people are able to prosper when they’re educated,
    2:15:20 where is education going to come from? How is that going to be paid for and supported?
    2:15:28 Again, to go back to Knight, if you’re generating too much inequality or people are feeling that
    2:15:33 you’re generating too much inequality, sometimes they value that more than they value freedom.
    2:15:41 I think there has to be more of a balance. It’s hard to make such global statements
    2:15:45 if you have to break them down into what actually do you mean. But again,
    2:15:50 Milei is coming from a very different context, a very different country that has seen
    2:15:56 so much upheaval, so much government intervention, so much inflation, so much political turmoil.
    2:16:01 He’s probably thinking about it differently than Friedman was thinking about it.
    2:16:08 There probably still is a real threat of hyperinflation. There seems to be a very high
    2:16:14 level of corruption or the capacity for corruption, so it’s a really messy situation.
    2:16:20 So, Javier Milei likes to recollect this great line from Milton Friedman, that if you strive for
    2:16:26 equality over freedom, you often get neither, but if you strive for freedom, you often get both.
    2:16:33 Do you think there’s truth to this? Yeah, I think in the macro, for sure. We’ve seen,
    2:16:40 if you really put equality as your goal, it’s such a seductive ideal, and people believe in
    2:16:46 it so much that they can carry out horrible crimes in the name of equality. But then,
    2:16:52 focusing on freedom, these words are too big. They’re so hard to define. So, I think you have
    2:16:58 to ask what is the freedom you’re talking about? If you’re talking about the freedom of ordinary
    2:17:04 people to be entrepreneurial, to make their own way, to start new things, to continue what
    2:17:08 they’re doing, to keep what they’ve earned, for sure, I think that can increase the equality
    2:17:15 overall. If you’re talking about lower taxes, if freedom is just a code for lower taxes, there has
    2:17:22 to be, I mean, lower taxes in general, great. But if you’re one of the top generators of wealth,
    2:17:29 there has to be some way to ensure that, say, education, people prosper when they’re well
    2:17:35 educated. That’s when economies do better. Education is generally state-funded, and you need some way
    2:17:41 to support that and provide for those institutions that structure society that make competition
    2:17:48 possible. So, I think it’s just a really broad statement. Again, Milei is coming from a really
    2:17:54 different context. He’s coming from the South American context from such upheaval, such economic
    2:18:00 devastation, in pursuit of the goal of equality that I think trying to rebalance with that emphasis
    2:18:05 on freedom, I definitely see where he’s coming from. If we can pivot a little bit. We’ve talked
    2:18:11 about Reagan. What are some interesting stories about how Milton Friedman navigated the Reagan,
    2:18:16 and maybe even the Nixon, administrations, and how he was able to gain influence?
    2:18:22 Well, the Nixon administration is an interesting case because, so I’ve been talking about inflation
    2:18:29 and the different consequences it had. One consequence it had is that it began to undermine
    2:18:33 the Bretton Woods currency system that was established in the wake of World War II. Now,
    2:18:40 Bretton Woods, what it did basically, it ended up inadvertently putting the U.S. dollar at the
    2:18:45 center of the world economic system. But under Bretton Woods, countries of the industrialized
    2:18:52 West agreed to trade their currency in set ratios that governments set. A franc was worth so many
    2:18:58 dollars or a German mark was worth so many francs. Then also under this system, countries could come
    2:19:05 to the United States and they could trade the dollars that they held for gold because the U.S.
    2:19:14 was on a modified gold standard. There was a ratio of gold to paper money. The system was set up
    2:19:21 and very quickly, for most countries, the dollar was at the heart of it, and converting
    2:19:25 into and out of dollars was really the mechanism of trade for many of these countries. So,
    2:19:35 Friedman said, what we should have is floating exchange rates. This is an idea, again, of instead
    2:19:41 of having a top-down design of policy, an administered policy, we will have policy set by
    2:19:46 prices, and you’d be able to trade currencies on an open market. They should trade and they
    2:19:52 should fluctuate and that would be fine. Totally outlandish idea. But he was pinpointing the fact
    2:19:58 that Bretton Woods had an instability and that instability began to emerge in the time of inflation.
    2:20:08 So, you have more and more dollars being printed. They’re worth less and less. If European nations
    2:20:14 keep trading their currency for dollars, they’re going to be importing inflation into their own
    2:20:19 economies. So, they say, we don’t want these dollars, we’d like some gold instead and they
    2:20:26 have the right to go to the treasury, send in an order and get gold out. So, they start doing this
    2:20:32 more and more and it becomes, it’s called the gold drain and the United States starts running out of
    2:20:39 gold. They’re aware this is happening through the ’60s. They’re trying various things to fix it and
    2:20:47 when Nixon comes into office in ’68, Friedman sends him a memo and it says,
    2:20:57 “This is going to be a real problem.” He says something like, “This is a running sore and you
    2:21:05 have to lance it right away.” Some very graphic metaphor. Otherwise, it’s going to explode and
    2:21:13 Nixon just files the memo away. Nixon loved people to think he was influenced by and following the
    2:21:18 wisdom of Milton Friedman, but he didn’t actually want to do that. He just wanted the political
    2:21:27 benefit that came from it. So, then comes the moment where the US Treasury Department realizes
    2:21:33 we’re going to run out of gold. What should we do? Everybody de-camps to Camp David and Nixon
    2:21:40 decides we’re just going to stop redeeming currency for gold. It’s called slamming the
    2:21:47 gold window shut, done. He also, at that same meeting, decides to institute price controls.
    2:21:52 He does a whole bunch of stuff. It’s an emergency. He calls it the new economic plan,
    2:21:57 which is an unconscious echo of the Soviet new economic plan, so a problematic name,
    2:22:02 a problematic policy. Friedman is livid at the price controls, but he’s like,
    2:22:07 “Actually, it’s great that you closed the gold window. Let’s go all the way to floating exchange
    2:22:14 rates.” This idea was heresy within the Treasury Department. Everyone’s very committed to the
    2:22:19 idea of the gold standard, convertibility, the United States at the core of
    2:22:24 the financial system, and they kind of hem and haw. But at this point, Friedman has a very close
    2:22:31 relationship with George Schultz. George Schultz is a high-level appointee who will eventually,
    2:22:35 over the course of the Nixon administration, become the Treasury Secretary. So,
    2:22:42 Friedman is feeding Schultz all his ideas about how we should move to floating exchange rates,
    2:22:48 how we shouldn’t try to reconstruct Bretton Woods. The people in Treasury, it’s funny because I’ve
    2:22:51 read some of their accounts, and actually Paul Volcker is in the Treasury Department at this
    2:22:57 time. He can sense that Friedman is in here somewhere, like feeding his boss ideas. He
    2:23:02 doesn’t quite know. In the oral history, Schultz talks about this quite a bit.
    2:23:09 So, at any rate, Friedman exerts this behind-the-scenes influence, and what Schultz does is just let
    2:23:17 Bretton Woods fade away. He doesn’t make grand pronouncements. It just slowly, the world shifts
    2:23:24 to a regime. For a while, it was like a regime of steady prices, and then they call it a steady
    2:23:28 regime of changing prices or whatever. The language changes, the reality changes, and they kind of end
    2:23:33 up where they are. So, that’s a real measure of Friedman’s influence. If there had been another
    2:23:38 economist in Schultz’s ear that said, “No, catastrophe is imminent. We have to go back to
    2:23:42 Bretton Woods,” he probably would have worked harder. The U.S. government would have worked
    2:23:49 harder. That becomes one of these pieces of globalization. What people don’t realize is
    2:23:54 there used to be, in addition to these fixed currency ratios, capital controls: you couldn’t bring capital
    2:23:58 in and out of different countries. You had to register. You couldn’t invest. All these
    2:24:03 rules and strictures, and the fall of Bretton Woods really blows that all open. It’s a precursor
    2:24:10 to globalization. So, Friedman is right there. Now, he’s very ambivalent about Nixon. I mean,
    2:24:14 he sees that Nixon is not an honest person. He thinks he’s very intelligent,
    2:24:22 and Nixon’s dream is to create a new centrist majority. So, he does many things to go back on
    2:24:27 his supposed economic principles and ideals. So, Friedman does not like this. He doesn’t
    2:24:32 like the price controls. He’s in communication with his old mentor, Arthur Burns, who’s now
    2:24:37 the chair of the Federal Reserve, and Burns is basically doing everything wrong in monetary
    2:24:42 policy. And I described this in the book in some detail, these anguished letters back and forth.
    2:24:50 And basically, as I see it, Burns doesn’t have a solid theory of inflation. And the more Friedman
    2:24:55 pushes him, it’s almost like Burns is willfully ignoring Friedman and kind of doing the opposite
    2:25:00 of what Friedman says. So, Burns is running a very loose monetary policy. Inflation is quite
    2:25:04 considerable over the ’70s. I mean, we were all spooked by, what did it get to, 6%, something like
    2:25:11 that, recently, for a very short time. This is inflation going over 10%, hovering at 8% for
    2:25:15 basically the whole decade of the ’70s. It’s going up and down with extremely elevated rates.
    2:25:21 And so, the Carter presidency largely falls, foreign policy is a big part of it, but the
    2:25:25 failure to tame inflation is part of it. And then Reagan comes in. And now,
    2:25:31 Reagan loves Friedman and Friedman loves Reagan. Very mutual feeling. The Reagan administration
    2:25:37 creates an advisory economic board. Friedman’s on it. He’s retired now. He’s entering his golden
    2:25:44 years. But he really has Reagan’s ear. And here, what he does is he convinces Reagan of his theory
    2:25:50 of inflation, which is inflation has been caused. It’s a monetary phenomenon that has been caused
    2:25:59 by bad monetary policy. Inflation has an accelerating dynamic. The only way to end inflation is by
    2:26:04 really showing and signaling that government policy has changed. And when you do that,
    2:26:10 it’s very painful for a short amount of time. People will suffer. But then, you will come out on the
    2:26:17 other side into stable prices. And this is what you need for economic prosperity. So, the man who
    2:26:27 implements this policy, Paul Volcker, is definitely influenced by Friedman. He even buys Friedman’s
    2:26:33 specific technique of the monetary growth rule and of the focus on monetary aggregates, which
    2:26:38 Friedman has said, right, money matters. Aggregates matter. And that’s what money is. Pretty quickly,
    2:26:45 Volcker finds that because of inflation and the financial deregulation and response to it,
    2:26:50 the aggregates don’t work the way Friedman said they would. And so, the specific policy Friedman
    2:26:56 recommends. Volcker tries it for a year or so. It doesn’t work super well. But what does work
    2:27:03 is letting interest rates go high, go above inflation to a point where both the general
    2:27:07 citizenry and the financial markets believe like, oh, they’re actually serious about inflation.
    2:27:11 And because we’ve had a decade of inflation with all these presidents saying,
    2:27:17 Ford, we’re going to whip inflation now, that monetary policy has lost credibility.
    2:27:21 This is why people focus so much on credibility today, because once it’s lost,
    2:27:25 it’s really hard to get it back. And one way Volcker gets it back is interest rates over 20%.
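As a back-of-the-envelope way to read "interest rates over 20%": the ex post real rate is roughly the nominal rate minus inflation (the Fisher approximation). The figures below are illustrative round numbers, not exact historical data.

```latex
% Fisher approximation for the ex post real interest rate
r_{\text{real}} \approx i_{\text{nominal}} - \pi
% With illustrative round numbers for the early 1980s:
% i \approx 20\%, \quad \pi \approx 13\% \;\Rightarrow\; r_{\text{real}} \approx 7\%
```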
    2:27:33 Unemployment, very high, as high as 25% in construction sectors. And as this is happening,
    2:27:37 Milton Friedman is whispering in Reagan’s ear, this is the right thing.
    2:27:43 Stay the course. This is going to work. Now, interestingly, he hates Volcker. Volcker hates
    2:27:48 him. And Friedman will never give Volcker credit for this policy, but he will give Reagan credit
    2:27:57 for this policy. But he deserves credit himself for keeping Reagan from wobbling on this policy and
    2:28:02 just pushing it through. And he also tells Reagan very pragmatically, you better do this now.
    2:28:06 You’ve got a four-year term. Do this in the first two years of your term.
    2:28:11 Things will have turned around by 1984 when you run for reelection and you’ll benefit from it. And
    2:28:16 that’s absolutely what happens. If we could take a small tangent. Sort of a question I have to ask
    2:28:21 about, since there’s so much of Bretton Woods and maybe the gold standard here, maybe just to
    2:28:27 have a general discussion about this whole space of ideas. There’s a lot of people today that care
    2:28:35 about cryptocurrency. What do you think that Milton Friedman would say about cryptocurrency and
    2:28:43 what role crypto might play in the economy, whether he would be for this idea,
    2:28:51 against this idea. And if we could look at it for today and also just 10, 100 years from now.
    2:28:57 There’s a clip, I think it’s in 1992 where people say, oh, Friedman predicted cryptocurrencies
    2:29:03 because he’s talking about how payments will eventually be electronic. So in some ways,
    2:29:07 he definitely, as he was looking at the computer and money, he knew these would come together in
    2:29:15 some way. I think he probably would see a use case for a crypto. He definitely would not buy
    2:29:22 the stronger forms, I think, of crypto ideology, in which we could be heading towards a future
    2:29:25 in which there’s many different currencies that compete or that are distributed or there’s a
    2:29:31 stateless currency. And he addresses this very, very clearly because of Hayek’s denationalization
    2:29:37 of money, a paper in the late ’70s where Hayek argues for this kind of competing currency model
    2:29:42 or regime. And so he’s responding to that. He’s responding to people writing about free banking.
    2:29:48 And he basically says, look, even if you developed a variety of competing currencies,
    2:29:53 eventually society would converge on one. And that’s because people just want one currency
    2:29:57 that they know they don’t want a bunch of different options. Even in places where there have been
    2:30:03 options to do that, they’ve been used very minimally. And then he says, secondly, the state always steps
    2:30:09 in. He says, technically, theoretically, it doesn’t have to, I could draw you a model. I could tell
    2:30:14 you about how it could work without the state. But in actual reality, all human societies,
    2:30:21 through time and space, the state eventually becomes involved in the provision of money because it has
    2:30:27 so many knock-on effects to so many people. So sure, I think he would, again, find a use case
    2:30:32 for crypto. I think it’s interesting, but I don’t think he would see it as this is going to displace
    2:30:38 state money, and we’re going to have a variety of distributed currencies. The other thing he
    2:30:46 really stresses is that a change in a monetary system, it only happens amid great, great crisis.
    2:30:52 So again, you see in countries where the state is not controlling the money well, right? That’s when
    2:30:57 people are more turning to crypto. But he says, because money is so fundamental, there’s going to
    2:31:05 be so much political pressure on any country that gets the currency profoundly wrong that the government
    2:31:09 will fall and another one will replace it, right? So if you look at episodes of hyperinflation,
    2:31:14 they don’t go on very long because they’re so upsetting to people.
    2:31:19 If we can go back in time, we’ve talked about it a bunch, but it’s still a fascinating time.
    2:31:27 The Great Depression, the University of Chicago, there’s these folks like Jacob Viner, Frank Knight,
    2:31:31 Henry Simons, all of these influence the thinking of Milton Friedman.
    2:31:38 There’s this Room 7 situation in the University of Chicago. Just going back there,
    2:31:44 even just speaking almost philosophically, what does it take to explore ideas together,
    2:31:49 sort of like deliberate, argue in that space? And maybe there might be interesting stories
    2:31:55 about that time. It would just be interesting to understand how somebody like Milton Friedman
    2:32:04 forms. The seed is planted and the flower blooms. Yeah, yeah. So he gets to University of Chicago,
    2:32:11 he makes fast friends. And in his third and fourth year, they become what I call the Room 7 gang.
    2:32:16 So Room 7 is, they find an old storeroom in the basement, they take it over, and that’s where
    2:32:22 they have their jam sessions. And what made this world come together was Frank Knight. There was
    2:32:28 a charismatic leader, and there were a bunch of acolytes who clustered around him. That, I think,
    2:32:33 was a key piece of the ingredient. And then there was a sense that they were on to something that
    2:32:38 the rest of the economics field had forgotten or was rejecting. So there was that sense of mission.
    2:32:45 So that seems to have been, there was a formal education piece. And then there was a parallel
    2:32:52 education piece rooted in admiration for a thinker, a shared admiration. And then what that led Friedman
    2:33:00 to do: I found syllabi that he had from non-economics courses, lists of books, and he’d
    2:33:05 written the prices of different ones he wanted to read. So he had John Stuart Mill on Liberty,
    2:33:11 like 50 cents, written in the margin. So he began to educate himself. He gave himself a parallel
    2:33:16 curriculum alongside this very formal economics curriculum. He started reading the traditions
    2:33:21 of political liberalism and then talking them through with friends and then developing a shared
    2:33:28 sense of mission. And the incredible thing is, of those friends in the group, they scattered
    2:33:32 for like 10 years, and then they all came back together. George Stigler, his great friend,
    2:33:40 was hired at Chicago. Aaron Director, who was his wife’s brother, was at Chicago. So many of these
    2:33:45 people continued. He became Frank Knight’s colleague. So that was the base. That was what
    2:33:52 really grew him, that really profound peer group. Now, the other piece I talk about a lot is,
    2:33:58 Friedman was a collaborator, an open-minded collaborator, and he had incredible connections
    2:34:04 with economists who were women. And he basically found, first in the figure of Anna Schwartz,
    2:34:09 later in the figure of this group of women who were his wife’s friends, this kind of untapped
    2:34:14 pool of talent. And so he immersed himself in this whole other world of consumption economics,
    2:34:21 and that resulted in his more technical work on a theory of the consumption function,
    2:34:26 which is the theory of permanent income. So for Friedman, intellectual work and intellectual
    2:34:33 production was always done in this very social context, in a context that blended like friendship
    2:34:40 and intellectual partnership. And he only had a handful of friends who were not also economists
    2:34:47 interested in the same questions he was. So he just lived and breathed ideas all day long.
    2:34:51 Can you speak to the jam sessions? Like, what do we know about the jam sessions? What are we
    2:34:55 talking about here? You’re sitting in the room. Are they analyzing, are they reading papers and
    2:34:59 discussing papers, or are they arguing more like over beers kind of situation?
    2:35:06 Yeah, more arguing over beers. And in this case, there are several people who say it was all about
    2:35:12 Frank Knight. What did he say? What did he mean when he said it? Is he right? And so Knight was
    2:35:17 very, he would say one thing and then say another. If you read him, it’s very hard to follow what he’s
    2:35:22 actually saying because he’s full of qualifications and ironies, it blends. And so he would throw out
    2:35:27 these pieces, and then the students would kind of clutch at them, and then they would come back
    2:35:32 together and try to assemble this sort of worldview. And then Frank Knight fell into this terrible
    2:35:38 depression, and to cheer him up, they planned a big party. And they went back through all of his
    2:35:43 previous writings, and they assembled them into a book that was published. This is the Ethics of
    2:35:48 Competition. And you can read the introduction written in part by Milton Friedman. So not only
    2:35:52 were they talking about Knight and what he said, but then they started poring over his work.
    2:35:57 And one of them described it as like a general equilibrium system where you had to know all
    2:36:02 the parts, and then all of a sudden it all fit together in a whole. So if we step back, what
    2:36:08 they were doing was getting inside the mind of a great thinker and understanding the ways that all
    2:36:14 fit together, and then kind of testing their ideas against Knight’s. And what’s fascinating is one of
    2:36:21 the first papers that Friedman publishes in Statistics is a rebuttal of Frank Knight. He
    2:36:27 publishes a rebuttal of Frank Knight’s ideas about risk and uncertainty. And Frank Knight,
    2:36:33 he kind of took a black swan argument. He said, “Risk, you can calculate. Uncertainty, you can’t,
    2:36:39 like, existentially, philosophically. You can’t get your hands around it. It is the black swan.”
    2:36:44 And Friedman publishes this statistical paper, and he says, “I can put uncertainty on a graph.”
    2:36:50 And so there’s that sort of Freudian killing of the father element when he comes back, and he will
    2:36:57 in some ways turn his back on Knight’s approach and Knight’s pessimism even while it’s like a
    2:37:03 foundation of his thinking. Fascinating. Is there something you could say about the thinking process
    2:37:10 that Milton Friedman followed, like how he developed his ideas? You mentioned there’s a
    2:37:18 strong collaborative component, but there’s another story I saw about that I think his son
    2:37:24 recalled about the argument number system that you mentioned, which I, by the way, if you’re
    2:37:29 going to explain that as a tangent of a tangent, that’s really awesome. I think it’s like number
    2:37:37 one means the other person is right. Number two means you were right and I was wrong. And the number
    2:37:42 system evolved in some ways to be quick and efficient, but in other ways, they also were
    2:37:47 really clear about it. So, you know, something like there’s kind of like three reasons behind it.
    2:37:54 First is if you use a number, it reminds the listener that it’s really hard to say the words
    2:38:00 I was wrong. So, you’re kind of calling on their sympathy by using the number, reminding them that
    2:38:07 you’re doing a hard thing. And then it’s also reminding them that you’re in this family with
    2:38:14 this code. And so you’re signaling your membership and your closeness and your love, really. And
    2:38:18 so it’s supposed to be an easy way to disagree without like breaking the relationship. Yeah. So,
    2:38:25 admitting you’re wrong now comes with this warm fuzzy feeling. Yeah. Yeah. And that’s really, I mean,
    2:38:34 that’s so powerful. I think so much of the friction of human interaction can be boiled down to
    2:38:38 just not being able to admit that you’re wrong, efficiently and quickly and regularly and just
    2:38:44 often. And to be able to do that, that’s really, that’s really powerful. Yeah. I think it’s a really,
    2:38:50 a really neat aspect of their family life for sure. That’s a fun story, but like can we just
    2:38:57 generalize to how he engaged in collaboration, how he developed his ideas, like, as a
    2:39:02 thinking process? So, he taught at the University of Chicago and he tended to teach for six months
    2:39:08 and then have six months off. And he spent the summers in New Hampshire or Vermont. He had a,
    2:39:12 right near that border, they had two different houses. And that to him was the deep thinking time.
    2:39:19 And so, when he’s at Chicago, he’s teaching, he’s arguing, you know, some people love his
    2:39:25 teaching style very much in charge, very much keeping students on their toes, confrontational,
    2:39:30 others found it too much overwhelming, kind of shut them down intellectually and they couldn’t,
    2:39:36 they couldn’t cope with it. And so, I think it was kind of go time when he was teaching. In that
    2:39:41 case, he was, that was a lot of social time interacting, talking, other professors, going
    2:39:46 out and giving papers, arguing with the people at Yale or Harvard. Then he would go and do these
    2:39:51 very deep dives over the summer. He would also regularly do these trips to New York
    2:39:55 to see Anna Schwartz. So, it was a 12-year collaboration. They didn’t have… phone calls were
    2:39:59 really expensive. They did have quite an extensive correspondence, but then they would do these
    2:40:04 meetings. So, he would basically come in at the beginning of the summer, on his way out,
    2:40:09 stop in New York, see Schwartz, and then again on the way back to Chicago. So, you’d have these
    2:40:13 deep check-ins at that point. The other thing that happened is people would come visit him
    2:40:18 in New Hampshire. And so, he would have these, he had a studio separate from the house. He would
    2:40:23 go and he would work. And then at night, his friends would come. His friends were all economists.
    2:40:27 There’s a whole like cluster of economists. They all clustered within driving distance of the
    2:40:32 Dartmouth Library so that they could get their hands on books. And so, they would come over and
    2:40:38 then they would argue and talk into the night. So, I think he did need that deep focus time.
    2:40:44 But it was not like, he also lived a very engaged, very embedded social life.
    2:40:51 A part of which was his marriage. Is there something you could say about love, about marriage,
    2:40:56 about relationship? They made the whole thing work because it was vibrant and they wrote a
    2:41:00 biography together. They did. I mean, they were very complementary. They were kind of the yin and
    2:41:09 the yang. She was very introverted, somewhat suspicious of others, skeptical. And he was
    2:41:17 extremely extroverted, optimistic, high energy. And they also were at a time when it was really
    2:41:21 clear like for a broader society, these are the roles of a man. These are the roles of a woman.
    2:41:28 And they pretty much adopted those. Now, Rose Friedman did some very important economic work.
    2:41:32 She’s part of the early stages of the theory of the consumption function. She didn’t complete her
    2:41:38 degree because she really knew there wasn’t, if she wanted to be married and have children in the
    2:41:44 world she lived in, there wasn’t a real pathway to also being an economist. I do think that a lot of
    2:41:50 that, it’s not, although it feels very gendered, like he’s the man out in the world and she’s in
    2:41:54 private. It’s interesting because her brother, Aaron, director was the same way. He was very
    2:41:59 private man, very shy, very introverted. And he exerted this quiet intellectual influence on
    2:42:05 all of his friends. So I think that was just kind of a family trait of being more quiet,
    2:42:09 preferring to be behind the scenes. It wouldn’t have worked any other way. Because Friedman was
    2:42:17 so out there, so extroverted. And there’s a bit of a sad thing she said. She said,
    2:42:22 “When I married Milton, I lost half of my conversations. When David came along, I lost the
    2:42:29 other half.” So it was a household that was just dominated by male voices in which she didn’t have
    2:42:34 a lot of room. What was tricky for me in my research is she didn’t leave much of a trace.
    2:42:39 She put together Milton Friedman’s archive and she took herself out of it. So I really
    2:42:44 had trouble finding her actual voice in the historical documents. And she didn’t want to
    2:42:50 leave that behind. So it’s an absolutely essential piece of his success because she’s the one who
    2:42:55 pushed him to do the Newsweek column, to do Free to Choose. And she really wrote Capitalism and
    2:43:01 Freedom. She took all his random notes and she put them together into a book. And that became this
    2:43:08 kind of testimony of his ideas. But she shared many of his ideas. And she, without… When I think
    2:43:12 of Friedman, if you take away… Anna Schwartz, if you take away Rose Friedman, if you take away
    2:43:17 the other women who collaborated with him, you have a much thinner resume than the one he actually
    2:43:25 has. Yeah, it’s always sad. It always makes me wonder about the private secret conversations
    2:43:32 between partners. Yeah. Because they’re… They might not show up in the record, but they probably
    2:43:39 influence the person more than almost anything else. Those quiet little conversations. Yeah.
    2:43:49 If we can switch our path to another great mind of the 20th century,
    2:43:55 Ayn Rand. We talked about some of the similarities here about them being fighters for freedom and
    2:44:03 fighters for capitalism. What is Ayn Rand’s philosophy if you can give a big 10-thousand
    2:44:08 summary of Objectivism? Yeah. So she called it Objectivism. China, she used to do this thing
    2:44:15 like, I can stand on one foot and say it. So it goes something like epistemology,
    2:44:21 reason, ethics, selfishness, politics, capitalism. That was kind of how she summarized it. So
    2:44:26 what she did, there’s a couple things she did with Objectivism. First of all, she says the key
    2:44:32 defining element of humanity is rationalism, the rational faculty. So that’s what defines
    2:44:38 what humanity is. Therefore, there is an objective reality that we can access and know with our
    2:44:45 reason. That’s the Objectivist epistemology. And the one social and economic system that lets
    2:44:53 rationality flower and is based upon rationality is capitalism. And then rationality only works
    2:45:01 in her view as an individual capacity. And that rationality teaches that what you should do is
    2:45:11 pursue your interests. And so she ends up calling that selfishness. Now, it’s tricky because selfishness
    2:45:18 has so many strong and negative connotations. And she meant, I think, something closer to
    2:45:27 like self-actualization because she really tried to create this idea and express the idea that
    2:45:35 to be truly selfish did not mean trampling on others, it meant just being motivated by your
    2:45:44 own kind of internal measures and metrics. And so in her fiction, she tries to show this by
    2:45:50 showing the false selfishness of someone like Peter Keating, who’s an architect who kind of steps
    2:45:55 over everybody to advance his career. And she says it’s not true selfishness because true selfishness
    2:46:01 would recognize it’s false to take others’ work and pass it off as your own. Now, the other big
    2:46:10 piece of Objectivism is an approach that’s really inspired by and related to Friedrich Nietzsche’s
    2:46:19 idea of revaluing values or a genealogy of morals. And so she says, what’s happened here is
    2:46:26 Western culture has converged on this idea of altruism as good, being selfless and altruistic
    2:46:34 is good. And this has led us to communism and has led us to devalue the individual in favor of the
    2:46:40 collective. So what we need is a new moral code which elevates selfishness, which elevates the
    2:46:45 individual and which takes all the things that we have been told are bad and actually says they’re
    2:46:50 values. This is what she’s trying to do with Objectivism. I mean, it is about as ambitious
    2:46:57 of an intellectual project as there can be. And that’s what really draws people in. Yet at the
    2:47:04 same time, she’s flying in the face of the way human morals and ethics and societies have evolved.
    2:47:09 And she’s not able to single-handedly recreate them the way she wants them to be.
    2:47:14 Yeah, I mean, she’s not doing herself any favors by taking on the words and trying to rebrand them
    2:47:21 completely, like writing the virtue of selfishness. It’s like, can we just call it self-actualization?
    2:47:26 There’s a negative connotation to selfishness and a positive connotation to altruism.
    2:47:34 So she sometimes it seems takes on the hardest possible form of argument.
2:47:41 Yeah, I mean, she had a student who ended up being very close to her, Nathaniel Branden,
2:47:44 and he was a fervent admirer, and he said, “Can you please not use selfishness,
    2:47:49 like just come up with another word.” But part of her liked it. Part of her wanted to provoke
    2:47:51 and unsettle. She didn’t want to give that up.
2:48:02 I mean, people should listen to her public talks. Her whole aura, her way of being, is
    2:48:08 provocative. And she’s a real powerhouse of an intellectual. So she loves the challenge.
    2:48:16 And that just listening to her in itself is just inspiring, because you could see the individualism
    2:48:23 radiate from her. Yeah, I mean, that was one of the things I found in researching and writing
    2:48:29 about her. She’s an incredibly unusual human being. And so that was her strength, right?
    2:48:34 Because she’s so unusual, but it was also her downfall, because she looked to herself as a model
    2:48:40 or to get insight about humanity. And she never quite processed how different she was from other
    2:48:48 people. So just because we talked about Milton Friedman so much, can we just return to what to
2:48:54 you, given everything we’ve said, are the interesting differences about Ayn Rand, her ideas
    2:49:04 related to Milton Friedman. Yeah, I mean, broadly, we could put Milton Friedman and Ayn Rand in
    2:49:14 some sort of category together, but she has this focus on ethics and rationality and this desire
    2:49:21 to be revolutionary. That’s much stronger than Friedman. Friedman wanted to overthrow the economic
    2:49:28 consensus. He didn’t want to overturn the moral basis of Western society. So she’s also, she does
    2:49:33 something. So in one of Frank Knight’s essays, he talks about the ethics of competition. And he
    2:49:40 says, you basically cannot build an ethics out of competition, because it would be monstrous to do so,
    2:49:47 because it would say the winner of this competition is ethically right. And that would open the door
    2:49:51 to sort of might makes right. And this is what Friedman struggles with. And he says, I can’t
    2:49:56 take capitalist outcomes as ethical unto themselves. I can’t do it. It doesn’t feel right.
    2:50:01 And there’s this line where Frank Knight says, no one would ever do this. And I was like, oh,
    2:50:07 Frank Knight, you haven’t read Ayn Rand yet. You’re a little too early, because that’s what she does.
2:50:13 She takes the outcomes of capitalism and of market competition and says, these have ethical meaning.
    2:50:20 And this is where ethical meaning inheres. And it is ethical to try to succeed and to succeed in a
    2:50:25 capitalist society. Now, what she’s able to do is create a fictional world in which people succeed
    2:50:31 in her fictional capitalist world through ethical behavior. And so she doesn’t really have to wrestle
    2:50:38 with a capitalist world in which people succeed through fraud and corruption and all the other
    2:50:43 things that might go into someone’s success. She creates the best possible take on success under
    2:50:49 capitalism. And then she holds that up as an ideal. And I think what’s important is that so few people
    2:50:55 have done that. And she comes at a time when everybody is emphasizing the downsides of capitalism.
    2:50:59 And she says, there’s another way to look at it. Here are the good sides of capitalism.
2:51:04 And like you said, she was operating, and I really loved the phrasing of that, in the mythic
    2:51:12 register. So she was constructing these characters, these capitalists that are like the highest form,
    2:51:20 these great heroic figures, almost romanticizing them. You mentioned We The Living as one of the
    2:51:28 books that you like of hers the most. But can we stay in the mythic register with the Fountainhead
2:51:35 and Atlas Shrugged? What to you are some sort of memorable, inspiring moments, insightful moments
    2:51:44 from those books that may be scenes or ideas that you take away from them that are important for
2:51:54 people to understand? Yeah. So the Fountainhead is this story of a struggling architect, Howard Roark,
    2:51:59 and she kind of follows his life and his career. And the message is really,
2:52:06 it’s a version of “to thine own self be true,” right? And Roark’s designs are too avant-garde.
    2:52:13 Nobody appreciates him. And he just keeps doing what he wants to do and is just focused on his
    2:52:18 own visions, his own genius. I think that’s been really inspiring to kind of creators of all types.
2:52:24 I think it’s fairly unrealistic as a portrait of human achievement, but it’s an aspirational idea.
    2:52:31 I mean, one phrase that comes to mind is there’s a character, I forget which one, who is in some
2:52:34 sort of adversarial relationship with Howard Roark and says something to him like, “Well,
2:52:41 Mr. Roark, what do you think of me?” And Roark says, “I don’t think of you.” And that to Rand
    2:52:47 was the ideal. You’re not thinking of other people. You’re an island unto yourself. You’re
    2:52:53 focused on your own goals, your own capacities. And you’re not doing it to impress other people
    2:52:57 or to be better than other people or to dominate other people. You’re doing it because you’re
    2:53:04 expressing your inner soul in a way. So that has been very captivating to so many. And The
2:53:08 Fountainhead is one of those books we talked about that people read and then they make
    2:53:13 changes in their life or they feel called to their higher self. And I think there’s also
2:53:18 the scene where Roark, with the Dean of Architecture at school, that’s speaking to what
2:53:25 you’re saying, I think to me is inspiring. So this is the Dean of Architecture that expels Roark
2:53:31 and brings him into a meeting thinking Roark will plead for a second chance. And the Dean says
    2:53:37 that Rourke’s work is contrary to every principle we have tried to teach you, contrary to all
    2:53:43 established precedents and traditions of art. Do you mean to tell me that you’re thinking seriously
    2:53:50 of building that way when and if you are an architect? And then in a gangster-like way,
2:53:56 Roark says yes. And then the Dean asks, my dear fellow, who will let you? And Roark replies,
    2:54:03 that’s not the point. The point is, who will stop me? Yes. I mean, Rand’s coming from communist
    2:54:09 Russia, but it has a bit of the don’t mess with Texas flavor. I might say that really resonates
    2:54:15 with this idea of anyone who’s felt like they’re fighting the powers that be. Yeah, it’s interesting.
    2:54:21 I thought you might be going to the quote where he says something like, I inherit no tradition.
    2:54:27 I stand at the beginning of one. And I really think Rand’s thinking about herself when she says
    2:54:32 that. She inherits nothing. She stands at the start. But the fountainhead comes out in the
2:54:38 middle of World War II and it’s not expected. Rand is an unknown writer. This is kind of a strange
    2:54:44 book. It’s a classic story. It’s turned down by 12 publishers before one takes a chance on it. And
    2:54:50 Rand really loved this story. The editor who read it said, this book is great. And his boss said,
    2:54:57 no. And he said, if you don’t take this book, I’m quitting. And so she idolized him for doing that.
    2:55:03 So they print it and it becomes a bestseller just through word of mouth. So it’s not advertised,
    2:55:08 it gets one good book review, but people tell each other how much they like this book. And it
    2:55:13 keeps printing and selling out printings. It’s made into a movie. And so it lands in this time
    2:55:18 when Americans are engaged in this great collective endeavor of World War II. They’re making all kinds
    2:55:23 of sacrifices for the collective. And I think paradoxically, as they do that, they’re drawn
    2:55:28 to this vision of someone who doesn’t have to compromise at all, who is leading their life
    2:55:32 exactly as they want to. Meanwhile, they might be sleeping on an ocean liner because they’ve
    2:55:35 been drafted to fight in this war. And they’re reading The Fountainhead and they’re feeling
    2:55:40 better about themselves. And so it’s also really interesting. The Fountainhead is hugely popular
2:55:45 in India, which is fascinating. And I’ve talked to people about this. And they basically say,
    2:55:51 this book comes like a breath of fresh air into a very traditional and conformist culture. And
    2:55:55 people just latch onto it and they love it. And it gives them that feeling of freedom and
2:56:00 possibility that they’re hoping for. Yeah, I mean, it really is a book. Atlas
    2:56:06 Shrugged can be a bit of that too, but it’s more the philosophy of objectivism and the details
2:56:12 and the nuance of that seeps into Atlas Shrugged. The Fountainhead is very much like a thing that
    2:56:18 makes you change the path of your life. Yeah. And that, I mean, that’s beautiful to see that books
    2:56:25 can have that power. And Rand knew that she was doing that and she knew what she was doing.
    2:56:31 This wasn’t an accident. And people say, oh, she’s a bad writer. Oh, her characters are so heavy-handed.
    2:56:36 You know, she started as a screenwriter. She started as someone who analyzed films
    2:56:42 for movie studios. She knew exactly how to manipulate plot and character and drama.
    2:56:47 And she also knew that she was writing. You know, people say, oh, Rand is for, you know,
    2:56:51 adolescence. Adolescent teenagers love Rand. And that’s kind of who she was writing for. And she
    2:56:54 said, you know, I’m writing for people as they start out on their life and they’re thinking
    2:57:01 about who they want to be. So she’s not writing, you know, for the weary middle age. She’s writing
    2:57:05 for the young who are looking for inspiration. You know, people say that to me sometimes about
2:57:12 certain books like Rand, but also about The Alchemist. I know a lot of people for whom The
2:57:17 Alchemist, and they’re adults and they’re brilliant people, The Alchemist changed their life. And the
2:57:25 same can be said about The Fountainhead. And I sometimes get criticized for using words that
    2:57:34 are too simple. I think simple words can have power. And the cliche thing sometimes needs to
2:57:43 be said. And sometimes, it effectively needs to be said in an over-the-top way in the mythic
    2:57:48 register, because that’s the thing that resonates with us. Because we are like heroes of our own
    2:57:54 story. And we need to hear that message sometimes to take the bold step, to take the risk, to take
    2:58:00 the leap. Yeah. And I mean, the other thing, she knew she was doing kind of propaganda in a way.
    2:58:04 She was like, I’m doing pro-capitalist propaganda. She has a degree from the University of Leningrad.
    2:58:09 You know, she’s raised up in Soviet Russia. She said, we need to present the case for the other
    2:58:15 side in the same way. And that’s what she did. Why do you think she’s so divisive? People either
    2:58:22 love her or hate her? I mean, I think it’s because of that purity that I’m willing to say,
    2:58:29 sort of you get what you deserve, and that kind of lack of charity. And part of that in her work
    2:58:35 is because she creates this fictional world where she can set everything up just so. And so you don’t
    2:58:43 have contingency or accident or bad luck. Or you don’t really have a lot of children. You don’t
    2:58:49 have handicapped people. You just have this idealized world. And I think it’s really infuriating
2:58:55 for people who feel that’s so inaccurate. How can you be deriving a social theory and philosophy
    2:59:00 around this? And how can you be missing what seems to many people she’s missing the kind of
    2:59:07 ethical instinct or the altruistic or charitable instinct? And so they just become enraged at
    2:59:12 that. And they don’t want to see anyone go that far. And they’re outraged that someone went that far.
    2:59:19 Did the thing that Frank Knight said no one would do. Yeah, it’s just it’s very unsettling.
    2:59:25 Would you say that’s her main blind spot? The main flaw of objectivism is just
    2:59:33 how black and white it paints the world. Or if not, what would you say are the flaws of objectivism?
    2:59:40 So, I mean, the big flaw is that it’s justified through a fictional world. It’s not justified
    2:59:49 through reference to the real world. It’s not empirical in a way. And Rand herself would say
    2:59:56 that she’s not writing about things how they are, but how they should be. And so that idealism
    3:00:04 just really undermines it as a mechanism to understand where we’re actually living.
    3:00:10 And that is a big contrast with Milton Friedman who would focus on how things are versus how
    3:00:16 things should be. And then I think it’s the problem of elevating rationality or any other
    3:00:21 mode of insight or thinking. And so what happens in Rand’s life when I describe this in some detail
    3:00:29 in the book is she essentially creates a cult of reason around her. And people who are drawn into
    3:00:34 this cult, it’s called the collective. It’s a group of young people in New York City who are
    3:00:39 drawn to her work. And she’s already famous, but she’s writing Atlas Shrugged. And so she’s sharing
    3:00:45 drafts of Atlas Shrugged as she goes along. And one of the members of the collective
3:00:50 to bring all of this together is Alan Greenspan, later the head of the Federal Reserve. And he’s
    3:00:55 incredibly taken with her. He’s one of these people who says, I was a narrow technical thinker. I
3:01:00 never thought about ethics or politics or anything bigger until I met Ayn Rand. And she really opened
    3:01:06 my mind. He’s part of this tight-knit group. But in this tight-knit group, they think of themselves,
    3:01:10 we are all individualists. We’re dedicated to individualism and capitalism. We’re different
3:01:17 than everybody else. Over time, they all come to share Ayn Rand’s views and opinions on everything
    3:01:23 from music to art to clothes. She gets a dining room table and a bunch of them get the same dining
    3:01:30 room table. And it becomes incredibly conformist because they’ve all believed they’re acting
3:01:36 rationally. And they believe that to act rationally is to agree with Ayn Rand. And they believe there’s
3:01:43 no other way to make decisions than rationality. And so to disagree with her is to be irrational.
    3:01:48 They don’t want to be irrational. So people get really caught up in this very damaging
    3:01:56 cult-like circle around her. Plus, for a cult of reason, they get awfully emotional when there’s
3:02:06 any disagreement with Ayn Rand. I mean, it’s kind of hilarious. It’s absurd. But it’s also beautiful
3:02:12 to watch this singular figure. We’ve talked about several singular figures, like Frank, right?
    3:02:19 That shakes up the world with her ideas. And of course, it would form a cult. And of course,
    3:02:22 that cult would be full of contradictions and hypocrisies.
3:02:28 Yeah, I mean, it’s amazing. So Murray Rothbard is a famous anarchist, falls into the Ayn Rand cult.
    3:02:36 And then he disagrees. And there’s some type of show trial where he’s told he’s wrong about
    3:02:41 everything. And then he has a little sort of pseudo cult of his own. And two of his cult members
3:02:50 switch over to Ayn Rand. And then one of them, as a gesture of the breaking of their relationship,
    3:02:56 mails him a dollar bill that’s been torn in half. I mean, this is high theatrics, right?
3:03:05 Okay, sticking on the drama and the theatrics. Who was Nathaniel Branden? Can you take me through
3:03:11 the arc of Ayn Rand’s relationship with Nathaniel Branden to their dramatic falling out in 1968?
3:03:19 Yes. So after The Fountainhead, The Fountainhead is optioned to be a film. So Ayn Rand moves
    3:03:22 to Hollywood, where she’s going to help in the writing of the film. She wants a lot of creative
    3:03:27 control. And then she’s also still working in screenwriting and things like this. And so she
    3:03:33 gets a letter from a Canadian student who’s written to her several times. And then he writes
    3:03:37 again, and he says, I’m at UCLA. And she’s like, young man, you’re so full of error. Why don’t
    3:03:42 you come visit me? And I’ll straighten you out. So he comes and they have this real meeting of the
    3:03:48 minds. They talk all night. He comes again. He brings his girlfriend. She loves him. And they
    3:03:54 start this very intense relationship of spending every weekend at her house, basically, staying
    3:03:58 up all night talking about ideas. He becomes completely converted to the Objectivist worldview.
    3:04:05 Rand begins counseling him and his girlfriend about their relationship. Very intense thing.
    3:04:11 Then eventually, they graduate from college and they both enroll in a graduate program in Columbia
3:04:18 and they leave. And after they’ve left, Ayn Rand is just bereft. And within a few months,
    3:04:23 she packs up her home and she moves to New York. Here I am. I like New York better. And so that
3:04:28 becomes the seedbed of the collective. And the Brandens, they get married. They change their
3:04:34 name to Branden. They’ve never publicly spoken on this, but many people have pointed out it has
    3:04:41 the word Rand in the name. So it’s some type of acknowledgement of how important she is to them.
3:04:47 And time goes on and romantic feelings develop between Ayn Rand and Nathaniel Branden,
3:04:54 some 20 years her junior. And they discuss them and they realize that rationality has led them
    3:04:59 to the conclusion that they should be lovers. Right. Right. They’ve rationally decided this,
    3:05:04 but because they’re rational, they need the consent or at least to inform their partners.
    3:05:07 They’re both married. They’re both married. So they call a meeting and they
    3:05:15 obtain the consent or maybe simply inform the others of the rationality of the choice. And then
    3:05:21 they say, but this is only going to be an intellectual relationship, but we’d like a few hours alone
    3:05:25 each week. And we don’t want to be deceptive. So we want you to know and approve of this. So the
    3:05:32 spouses bought into rationality, know and approve. One thing leads to another, it becomes a full,
    3:05:36 romantic and sexual relationship. And although it’s open within these four people, it is not
    3:05:42 open more broadly. And so in all these meetings of the collective, Alan Greenspan, all these other
    3:05:47 people coming up, drinking coffee all night, talking, talking, they all know that Nathaniel
3:05:53 Branden is objectivist number one. They don’t know that there’s a romantic and sexual relationship
3:05:59 happening. It’s kept a secret. And then when Atlas Shrugged comes out, it’s panned by reviewers. People
3:06:05 absolutely hate this book. And Rand is not Howard Roark. She falls into a deep depression
    3:06:12 because her masterpiece has been rejected. And so then the romantic relationship ends,
3:06:18 but the close personal relationship continues. And then over time, Branden, who’s still married
    3:06:24 to his wife, begins an affair with another young woman. And at this point, he has started the
3:06:30 Nathaniel Branden Institute to teach objectivism. And he’s making good money. He’s becoming quite
    3:06:35 famous. She supported the Institute. She supported it. And at first, it was to help her in her
3:06:39 depression because he said, “The world needs to recognize your genius. They missed Atlas Shrugged,
    3:06:44 but I’m going to teach them. I will bring the message.” And it’s very successful. It becomes
3:06:48 its own business. It has a newsletter. It’s a whole world. So that small cult around
3:06:53 Ayn Rand expands to this whole social network. And it’s very much of a piece with this burgeoning
    3:06:57 conservative movement. Objectivists are involved in criticizing the draft. And
    3:07:04 they’re kind of a libertarian, objectivist world going on. All of this is happening.
3:07:09 In the meantime, Nathaniel Branden has found a new partner. And he doesn’t tell Ayn Rand this
3:07:18 because he knows she’ll be upset. And so it goes on for years. And Ayn Rand knows something is going
3:07:24 on, but she can’t quite figure it out. And finally, Barbara Branden says to Nathaniel Branden,
    3:07:30 “You have to tell her. This has just gone on too long.” So she finds out and the whole thing
3:07:37 blows up and she exiles him and she breaks off contact with him. And nobody is ever told what
3:07:42 happened. It splits objectivism. Objectivism breaks in two because some people say,
3:07:49 “How could Ayn Rand do anything wrong?” And other people say, “What is this letter all about?
3:07:52 And what did Nathaniel Branden do? And I’m not just going to take her word for it. I need more
    3:07:57 information.” And then a bunch of people, I read all the accounts of this, a bunch of people are
    3:08:02 like, “Okay, they were having an affair.” And a bunch of other people are like, “No, that couldn’t
    3:08:08 possibly be happening.” And so the whole thing breaks up. But what I argue in my book is actually
    3:08:15 this is to the benefit of Rand’s ideas because Rand herself was so controlling over her ideas.
    3:08:22 And now that she steps back from a public role, objectivism flows into the student libertarian
    3:08:27 movement. Some objectivists become conservatives. It just kind of spreads out more generally. And
3:08:32 you don’t have to drink the Kool-Aid. You don’t have to take the official course. Nathaniel Branden
3:08:37 goes on to be part of the self-esteem movement, the human potential movement, California.
    3:08:43 And Ayn Rand lives another 10 years or so, but she doesn’t do major work after that.
    3:08:53 Since we were talking about some of the, although rationalized, some strange sexual
3:08:59 partnerships that they were engaged in, I have to ask about the Fountainhead and the
3:09:05 quote-unquote “rape scene” in the Fountainhead. Was she intending for that to be
3:09:12 controversial? How are we supposed to read into it? Is it a glimpse into Ayn Rand’s sexuality?
    3:09:20 And maybe broadly, we can say, well, what was her view on sexuality, on sex, on power dynamics,
3:09:26 and relationships? Yeah. I mean, there’s also an objectivist theory of sexuality that is probably the
    3:09:32 least convincing of all the parts of objectivism. And it goes something like your sexual desires
    3:09:40 express your highest values. And they are related in some ways to your rationality,
    3:09:46 right, which is also related to your highest values. So for her, that explained her attraction
3:09:51 to Nathaniel Branden and Nathaniel Branden’s attraction to her was a function of their highest
3:09:57 values. And in fact, Branden imbibed this so deeply that the fact that he was later drawn
    3:10:03 sexually to a woman who was not particularly accomplished, but was beautiful, caused him
    3:10:09 deep anguish and guilt for being non-objectivist. So this is the objectivist theory. Then the
    3:10:15 gender politics are just crazy. And we have to kind of back up and think, okay, so who is Ayn Rand?
3:10:21 She’s born Alisa Rosenbaum in Russia. She is someone who stands out from the crowd from the
    3:10:26 beginning. She never really fits in. She’s not conventionally beautiful by any stretch of the
    3:10:30 imagination. She struggles with her weight, and she doesn’t consider herself to have a beautiful
    3:10:37 face. She’s very independent. She meets none of the metrics of traditional femininity at all.
    3:10:41 She finds love with a man who is very handsome, but very passive.
    3:10:48 Yet she writes in all her fiction about strong manly heroes. So this seems to be like a projection.
    3:10:54 The man she’s actually with is not a strong manly hero. The hero she writes about, she probably
    3:10:57 wouldn’t be able to be in the same room with them for more than one minute before they got
3:11:04 into a raging argument, right? And then she develops this theory about women and men that
    3:11:12 a woman should worship her man, and a woman finds her true expression in worshiping the
    3:11:17 man she’s with. So again, this is not at all how Ayn Rand lives her life. This is like this,
    3:11:25 I would say, compensatory theory for her lack of ability to conform to the gender norms of her day.
    3:11:33 She then articulates them in a very strong and almost distorted and exaggerated way to compensate
3:11:37 for the fact that she doesn’t actually meet them, can’t actually enact them.
    3:11:45 The rape scene, to some degree, embodies that idea that to some degree, that the woman should
    3:11:53 worship the man. I tend to read it more in terms of literary genre. So Rand is a screenwriter,
    3:12:03 a consumer of movies, and that rape scene is paradigmatic for the romance genre. In other
    3:12:11 words, these pulpy romance novels, the hero rapes the heroine, and then they fall in love. That’s
    3:12:16 just the trope of how it works. So it’s crazy when you read it, but if you were reading a bunch of
    3:12:23 novels in this genre, you would find this is very standard. And so that is a huge part of
    3:12:28 this appeal at the time. There’s this feminist who hates Rand, Susan Brownmiller, and she wants to
3:12:28 write an angry denunciation of the rape scene. So she goes to get The Fountainhead, and she’s
    3:12:38 wondering how is she ever going to find the scene in this 800-page book? It’s a library copy because
    3:12:43 she doesn’t want to buy it. And it just falls open to the rape scene because everybody’s gone
    3:12:49 and read it because it’s very racy and explicit for that time. So I’m almost positive she also knew
    3:12:55 that. Like, if I put in this kind of taboo-breaking sex scene, that’s also going to probably be why
    3:13:02 people tell their friends about it. So I think it’s a mess. I think all of the gender and sexuality
3:13:10 stuff that she states is just a total mess. I think it also reminds me of another guy,
3:13:19 Friedrich Nietzsche, who had very strong opinions on women and wrote about what women’s role in society
    3:13:23 should be and different power dynamics and relationships and all that kind of stuff when
    3:13:30 he himself really had trouble getting laid. Yeah. And so you have to sort of always maybe
3:13:36 chuckle or take with a grain of salt the analysis of power dynamics and relationships from these
3:13:45 figures who failed in many regards in their own private life. You mentioned feminists.
    3:13:49 Would you consider Ayn Rand a feminist? I mean, she’s almost an anti-feminist
    3:13:59 because she then goes on and someone writes her a letter about, like, should there be a
    3:14:06 female president or something? This is like the beginning of feminism. And she says, no. No women
    3:14:12 should ever be president because if she’s president, she wouldn’t be able to look up to any man
    3:14:16 because she would be so powerful and therefore she would be corrupt and rotten in the soul
    3:14:23 and unfit to be a leader. It just makes no sense. But that said, she’s a woman and she’s one of
    3:14:30 the most powerful intellects in the 20th century. Yeah. And so the contradictions, I mean, Nietzsche’s
3:14:39 full of contradictions of this sort, that the very fact that she’s one of the most powerful minds
    3:14:47 in history to me means that she is a feminist in the spirit she embodies, right, and what she
    3:14:53 represents. I mean, she lived the ideals of individualism in her life and set aside gender
    3:14:58 norms in her own life. But she did not see herself as part of any… She did not see herself as doing
    3:15:04 this for the benefit of other women or to change society’s views about women. There was no collective
    3:15:13 essence to it. So if feminism has some sort of collective aspect to it or at least some
    3:15:18 identification, one needs to identify with a broader category of women and feel they’re acting
    3:15:27 in behalf of that, she’s definitely not doing that. And she was fair to women in her life,
    3:15:32 promoted them in her life, but did not… I mean, she was very negative about feminism. And then
    3:15:38 because they dress terribly. And then the other thing, it’s really interesting, there’s all these
    3:15:45 kind of homoerotic themes in her writing. And for that reason, many gay men were drawn to her writing.
    3:15:50 And then she would say homosexuals are dirty, terrible people. She would denounce
    3:15:55 people for being homosexual. So there’s a whole actual literature of gay men wrestling with
    3:16:03 Rand and what she says about gay people. So yeah, it’s hard to make sense of. And I just
    3:16:08 think of the enormous pressures. I want to be charitable. I just think of the enormous pressure
    3:16:13 she was under in the culture she was raised in, the expectations that were placed upon her and
3:16:19 her utter inability to meet any of them. And it came out in this very tortured set of ideals
    3:16:27 that she tried to promote. And this kind of lack of ability to introspect in herself and to,
    3:16:31 it was probably too painful to introspect and to think about that. So she just
    3:16:37 tried to rationalize her way through it. And it came out in these very strange theories.
3:16:43 Why do you think that Ayn Rand is, maybe you can correct me, but as far as I can see,
    3:16:48 never mentioned in the list of great thinkers in history or even the great thinkers of the 20th
    3:16:54 century or even the great female thinkers of the 20th century. So you have somebody like Simone de
3:17:02 Beauvoir, Hannah Arendt. I almost never see her in the list. If you Google those silly lists, top
    3:17:08 whatever, top thinkers of the 20th century, she’s not mentioned. Why is that?
    3:17:14 A lot of people just deeply dislike Rand. They deeply dislike her ideas. They don’t think they’re
3:17:21 profound because of their disconnection from other ideas and other understandings of human society.
    3:17:28 I think, I think where you could look at them and say, these ideas are very provocative and
    3:17:32 they’re very deep because she’s not taking anything for granted and she’s flipping everything around
    3:17:38 and forcing you to really think. To a lot of other readers, to her critics, they just look absurd.
    3:17:46 Like, how could you even make these contentions? And I think that because she’s not without
3:17:51 precedents and she’s not without followers, but she doesn’t knit herself into an intellectual
    3:17:59 community the way that these other thinkers do very naturally, that you can see who they influence,
    3:18:05 you can see who they’re in dialogue with. I think my book was one of the first to really
    3:18:09 take Rand and say, she’s a figure in American history. Here’s who she’s connected to. Here’s
    3:18:16 who she’s influenced. And I got a lot of pushback for that. I think now people are more open to it,
    3:18:23 but I think the people who compile these lists really dislike her work and they think it’s shallow
    3:18:31 because they find her fiction overdrawn. They find her work in the mythic register simple and she’s
    3:18:39 also a grand systematic thinker in an age that’s over systems. She’s almost creating an inverse
    3:18:47 Marxism. Marx was writing in 1848. He’s not a thinker of the mid-20th century. I think that’s
    3:18:52 part of it. The lack of a legacy and the dislike of what she had to say and the feeling that she’s
    3:18:58 too detached, her insights are not insights because they’re too idealized rather than being rooted in
    3:19:01 a theory of human nature that people find plausible.
    3:19:10 You study and write about history of ideas in the United States over the past 100 years,
    3:19:19 100 plus years. How do you think ideas evolve and gain power over the populace, over our government,
    3:19:26 over culture? Just looking at evolution of ideas as they dance and challenge each other and
    3:19:35 play in public discourse. What do you think is the mechanism by which they take hold and have
    3:19:42 influence? There’s a couple different ways I think it happens. I really am interested in
    3:19:50 the relationship between the thinker and then the reader and the interpreter of the ideas
    3:19:56 and then the conditions on the ground that make that idea resonate or not resonate.
    3:20:05 As an intellectual historian, I’m studying ideas and I’m always putting them in their
    3:20:10 historical context. What is happening that is making these things resonate, that is making them,
3:20:18 people seek them out. In Rand’s case, she has this credibility because of her experience of communism.
3:20:24 She’s lived through one of the defining moments of the time. Then I think the idea comes out in a sort of
    3:20:30 pure form and then other people rework it and reshape it as they read it. I’m really interested
    3:20:35 in how people form communities around these ideas. A bunch of people started calling themselves
3:20:43 objectivists and getting together to read Rand’s work. That was spontaneous and ground up and wasn’t
3:20:48 supported by any money; nobody planned it. It just happened. Friedman’s a different case in that he
    3:20:53 joins an established tradition of thought that’s been institutionalized in universities. People
    3:20:58 are signing up and paying money and getting credentialed to learn these ideas. To my mind,
    3:21:04 these are two different ways but really emblematic ways of how ideas spread. Rand, I think of this
    3:21:10 more bottom-up, people encounter the idea in a book. They’re blown away by it or they imbibe it
    3:21:14 without even realizing they’re imbibing it and then they’re like, “Well, maybe I don’t like
    3:21:20 Franklin Roosevelt so much or maybe I’ll look another time at Barry Goldwater.” Whereas Friedman,
    3:21:25 you get the idea more top-down. I know I’m getting the idea. I know I’m being positioned
3:21:31 within an elite discourse of economics. I think they go top-down and bottom-up and then they
    3:21:37 hit the events. Friedman’s ideas wouldn’t have gone anywhere without that episode
3:21:42 of stagflation that really made people think they proved out. I think Rand’s ideas really
    3:21:47 caught fire in Cold War America that’s looking for a statement of what does it mean to be an
    3:21:52 individual? What does it mean to live in this mass society because it’s also a time of great
    3:21:57 social conformity and where people are suddenly, they’re working for large corporations.
3:22:04 They’ve served in a large military. The United States is stepping out onto the
    3:22:07 world stage. Everything is bigger. What does it mean to be an individual in that world? That’s
    3:22:12 where Rand’s ideas catch fire. I think a lot about that, about how they trickle through
    3:22:17 different levels of society and then how ideas collide with experience I think is critical.
    3:22:22 What do you think about when they actually take power in government? I think about ideas like
    3:22:30 Marxism and how that evolves into the Bolshevik Revolution and how that takes hold in its
    3:22:37 implementations or you can think about Nazism and with Hitler where it goes from a small number of
    3:22:43 people that get real excited about a thing and then somehow just becomes viral and takes hold
    3:22:52 in power and then that has its consequences. When I think about this historical path of
    3:23:00 Communism and the kind of logics and dynamics of Communism, in many ways it has some echoes with
3:23:07 Rand in that the ideology in its purest form is almost, it’s a rationalist ideology in some ways.
    3:23:11 It’s an analysis of history and how things are supposed to be and I think you mentioned Hannah
3:23:16 Arendt. I think she gives one of the most kind of penetrating analyses of Communism, which she really
    3:23:24 puts in the category of it’s a logical ideology. Logic leads inexorably to its conclusions and
    3:23:31 then experience crops up and experience is different. What does a sort of cult of rationality do when
    3:23:36 it hits experience? Well, it tries to bend experience to its will and that I think is really the
    3:23:46 story of Communism writ large. The question though is why does it catch fire? Why does it draw people
    3:23:52 into political allegiance? I think in the case of Communism, it’s this dream of a more ethical
    3:24:00 world, dream of equality, dream of the powerless rising up against the powerful. That’s drawn in
    3:24:07 so many and then you had the whole addition of Leninism which gave a kind of international
    3:24:12 cast to that and helped people think about what are the relations between poorer and richer countries
    3:24:16 and what can we expect out of them and what might happen, gave a sort of framework for thinking about
    3:24:22 that in a time when the world was becoming more interconnected and those differences were becoming
    3:24:31 more obvious. Fascism to me is unleashing more something primal, something sort of dark and
    3:24:39 primal within people and it’s more a permission structure to indulge in that that is normally
    3:24:44 not there. Those impulses are normally channeled or held down and it seems that when the fascist
    3:24:48 regimes come into power, they give people permission to let those forces out.
3:24:54 I think on Communism, going back to that lecture that Ayn Rand gave,
3:25:04 I think what rings true to me a little bit is that what fuels it is a kind of maybe not resentment
3:25:11 but envy towards the people that have, the have-nots versus the haves, and there’s some
3:25:17 degree to which Nazism has the same kind of envy towards some group, resentment towards some group.
    3:25:24 So it’s given the environment of hard times, hard economic times, combined with the more primal
    3:25:32 just envy of not having and seeing somebody who has it and just constructing a narrative around
    3:25:39 that, that can become a real viral idea. Yeah, it seems like Communism is more animated by this
    3:25:46 idea of injustice. The world is unjust. It should be different and fascism seems like the process
    3:25:54 of scapegoating. We’ve identified the source of the problem and it’s this group and they need
    3:26:00 to be punished for what they’ve done to the rest of us. There is a primal thing, going back to
3:26:08 literature, in 1984, the two minutes of hate, where you can get everybody real excited about hating a thing
    3:26:14 and there’s something primal about us humans where once you’re in that state of hate,
    3:26:24 anyone can direct that hate towards anything, towards any group, towards any idea,
3:26:29 towards anything because we could get caught up in the mass hysteria of the hatred. It’s a
    3:26:40 dangerous thing. You floated the idea, I forget where, of pivoting for your next book towards maybe
3:26:47 writing about postmodernism, which is a set of ideas almost the opposite of Ayn Rand’s philosophy.
    3:26:56 Can you maybe explain your curiosity about, first of all, spaces of ideas, but maybe postmodernism?
    3:27:04 Yeah, I think in the broadest sense, what I’m interested in, two dimensions that guide me
    3:27:07 in doing intellectual history. One is what I talked about, how does an idea go from
    3:27:14 a book, an elite space out to more popular dimensions? How does that happen? What happens
    3:27:20 to the idea along the way? How is it distorted or changed? The other is just search for meaning in
3:27:25 a post-Christian era or a secular era. What are people coming up with
3:27:32 to replace that void in their religious or spiritual lives? I think both Rand and Friedman
    3:27:38 offered these sort of alternatives, right? Objectivism, quasi-rationalist religion. People
    3:27:45 take economics as a theory of the world that almost, you can almost believe in it, right? It
    3:27:49 can almost take that place. And in both cases, how do those ideas travel? When I think about
    3:27:56 postmodernism, it first struck me, if you read the original postmodern thinkers, it’s really
    3:28:01 tough going. I mean, I make my students do it and they suffer. I think they see it’s worthwhile,
    3:28:08 but it’s no fun to read Derrida. But somehow it’s trickled down into, how do we go from like Derrida
    3:28:13 to Tumblr? And I sort of realized, oh, this has happened with postmodernism. It’s followed the
3:28:20 same path, say, from Milton Friedman’s economic theory to Free to Choose on YouTube. We’ve had
    3:28:27 a similar path of high French theory down to Tumblr and I sexually identify as an attack
    3:28:33 helicopter or whatever it may be. And so that was really interesting. And then I also thought,
    3:28:41 well, at the same time, this is clearly a structure of meaning. And I actually think it’s followed
3:28:47 the same path as objectivism, which is turning into its opposite, just distilled down and then
    3:28:51 turning into its opposite. So if objectivism was a group of people who considered themselves
    3:28:56 individualists who ended up deeply conforming to the dictates of a charismatic leader,
    3:29:02 postmodernism started about disrupting binaries. We’re going to be fluid. We’re going to go
    3:29:07 beyond the border. We’re going to disrupt the binary. And it’s devolved in its popular forms
3:29:14 to the reinscribing of many different binaries. Oppressor and oppressed has become this like
    3:29:18 paradigmatic set of glasses you put on to understand the world. So I think the dynamics
    3:29:24 are very, very similar. So I think it’s something in the traffic of the idea from its pure form to
    3:29:30 its popular form, and then how it gets politicized or mobilized in different ways. And behind it
    3:29:36 all, I think, is this human longing for meaning and the inadequacy of the traditional ways that
    3:29:42 need was met at this point in time. By the way, that going from pure form to popular form,
    3:29:49 I remember this might be before the internet, but when I was in college reading Derrida and Foucault
    3:29:58 and not knowing context at all, it was just interesting. I’m able to read pure encapsulations
    3:30:03 of an idea and just kind of like, oh, all right, well, that person believes that and you just kind
    3:30:08 of hold it. But then you realize if you actually take the pure form of that idea and then it creates
    3:30:13 a community around it, you realize what that actually becomes. And you’re like, oh, yeah, no,
    3:30:21 that’s not, although I do consider myself sexually an attack helicopter. That’s it.
    3:30:23 Identify sexually. Yes, beautiful. Okay.
3:30:32 Your process of researching for, let’s say, the biographies of Milton Friedman and Ayn Rand
    3:30:40 seems like an insane amount of work. Yeah. You did incredible work there going to the original
    3:30:51 sources. Can you maybe speak to that? What is required to persevere and to go for so many years,
    3:30:59 to go so deep to the sources? Yeah. So I mean, I go to the archive. That’s where I feel like I’m
    3:31:06 communing with the dead in some ways. I’m seeing what they saw in some ways and reading what they
    3:31:11 felt. And I tell my doctoral students, it’s got to be something that gets you out of bed
    3:31:16 in the morning because there comes a point in your doctoral career where nobody’s,
    3:31:20 there’s nowhere to go. There’s nowhere to be. You got to be getting up because you’re interested
    3:31:24 in what you want to study. And so with Rand, it was this real sense of discovery. I am discovering,
    3:31:28 I want to know about this woman. I want to know where she fits. And the only way to find out
    3:31:37 is to do the research. And so, yeah, I like to go deep. It’s really interesting to me.
    3:31:42 And I should say, in both of these cases, I’ve done it in an institutional structure. I don’t
    3:31:46 know that I would do it independently. So the first was the graduate program in history. It was at
    3:31:53 UC Berkeley. And so I had coursework and then I had structures. I did have people to check in with
3:31:57 and read, but I had a great deal of latitude. I’m very grateful for that. People are like, you wrote a
3:32:02 dissertation on Ayn Rand at Berkeley? I’m like, yeah, hell I did. Berkeley’s like, it’s a great place.
    3:32:06 At the time I was there, there was absolute room for free inquiry.
    3:32:11 Oh, can you just linger on that? So when you said that you’re doing that and doing a dissertation
3:32:22 on Ayn Rand, was there, did people get upset? No, I did have a friendly critic who took it upon
    3:32:26 himself to throw at me everything he thought the outside world would throw at me. I think maybe
    3:32:32 five or 10 years earlier, it wouldn’t have been possible. But the most important thing I had to
    3:32:38 the person I really had to convince this was worth doing was myself, you know, because I knew it was
    3:32:44 an unconventional choice for the field and for a dissertation. But once I convinced myself, I just
    3:32:48 said, well, we’re going to do this and see. And because it was unconventional, it ended up standing
3:32:56 out. And it really was the right time. I started it during the second Bush administration,
3:33:02 George W. Bush’s second term, and people were interested in just conservatism in general and
3:33:06 felt, no matter where they stood on the political spectrum, that objectively, we don’t know
    3:33:11 enough about this. And this is a problem. And so they were open to learning more. So I really kind
3:33:15 of caught that wave in scholarship and caught that wave in American culture where people
3:33:22 wanted to know more. And we should probably say that, I mean, Ayn Rand is at the very least, as you’ve
    3:33:27 mentioned, a kind of gateway to conservatism. Yes, I called her the gateway drug and people
    3:33:34 start with Rand, they’re taken by her, you know, in some ways, she takes the worldview of Milton
    3:33:40 Friedman in terms of what capitalism can accomplish economically. And then she puts it in this
    3:33:46 mythopoetic register and she fictionalizes it. So once people have absorbed that, they want more,
    3:33:52 you know, they go on to learning more of the ideas behind that vision, or they have become true
    3:33:56 believers, they’ve converted. And so then they head off to work for a politician to work for a
3:33:59 think tank to work for a party. And so there’s absolute traffic. Now, not everyone. There’s plenty of
3:34:03 people who read Ayn Rand who don’t take the politics in. It’s a nice story. It’s interesting.
    3:34:09 Just an episode in their life. But for others, it’s really foundational. It really changes them.
    3:34:14 So those were the people I wanted to track very deliberately. I wasn’t trying to do in the round
3:34:18 everything about Ayn Rand. I was like, Ayn Rand and the American right, you know, Goddess of the
3:34:23 Market: Ayn Rand and the American Right is the title. So where did they, where did they take
    3:34:26 her, those who took her in this political direction? What difference did she make?
    3:34:32 If we return to like the actual, your process. Yeah. So you’re showing up and you’re reading
    3:34:39 sources and you’re like, is it kind of like the process of discovery? You’re just kind of like
    3:34:47 taking it all in and seeing what unifying ideas emerge or maybe special moments that
    3:34:54 illustrate an idea emerge? Yeah. I mean, I know with the biography of a person, I am already
    3:35:00 given a start and an end date and a rough narrative of what happens. So I have a kind of structure.
    3:35:06 And then with Rand, both with Rand and Friedman, I started by reading their major books before I
    3:35:11 really read anything about them because I wanted my own experience of the material to be fresh.
    3:35:16 And I had read some on Rand, but not a lot. Similarly, I had read some Friedman, but not
    3:35:21 a lot. So at first it’s like, let me read the major stuff, get oriented, and then just dive into
    3:35:28 the archive and see what’s there. Who are they talking to? What’s going on? In Rand’s case,
    3:35:34 I was interested in her in the United States, not her in Russia. I didn’t have the language
    3:35:39 skills to do that. So I start her in the United States and I start when she publishes her first
    3:35:43 book and she starts getting letters. And who is she writing to? Who’s writing to her?
    3:35:48 And then I start to uncover this world of kind of nascent conservatism. And I’m kind of putting
    3:35:52 that together. And once I have enough, I say, well, that’s a chapter. I’m going to cover that
    3:35:58 chapter. And then there’s going to be the book has come out. And so now I need to start a different
    3:36:02 chapter. What’s her life like after the book has been published? And then I look for that. But I’m
    3:36:07 really, although I have this very high level structure, it’s coming out of the archive,
    3:36:13 the material I’m finding. And if I’m not finding the material there, I won’t cover it in great
3:36:18 detail. Or if I’ve decided it’s outside my aim, then I’m not going to go into great depth on it.
3:36:22 And you’re trying to understand the relationships. It’s so fascinating, like being
3:36:29 in a dark room, trying to reconstruct and shine a light on relationships through reading letters.
    3:36:33 It’s interesting. Yeah. Yeah. I mean, correspondence is really, really helpful.
    3:36:39 Drafts, correspondence, and you know, someone this famous, they have oral histories,
    3:36:43 other people write about them. So you’re reading all these different things and kind of triangulating
    3:36:47 and trying to sort of put them together. And then think about, how do I present this in a
    3:36:53 compelling story? And what do I need to explain? And then also for me, what was really helpful was
    3:36:59 is that because I teach, and I am explaining the kind of broad sweep of 20th century history. So
    3:37:05 you know, I know that Rand’s involved in a labor action at Warner Brothers. But through my teaching,
    3:37:10 I realized, oh, yes, this is a moment of labor strikes across the country. And so then that
    3:37:17 really changes the origin story of Atlas Shrugged, because she’s looking at labor actions. And she
3:37:17 was originally thinking of the book as being called The Strike. So she’s really responding in real
    3:37:29 time and being inspired by what’s happening, you know, in the mid 1940s in the United States.
    3:37:32 So then I can kind of take that and run with that and figure out where to go.
    3:37:37 So you’re super passionate about teaching. You mentioned Milton Friedman had a very
    3:37:45 interesting way of teaching. So what’s your, how do you think of teaching, teaching history,
    3:37:50 teaching history of ideas, teaching great young minds about the past?
3:37:56 Yeah, I mean, it’s great. It’s really inspiring. The old-school, kind of dominating way in
3:38:02 which Friedman taught would not fly in today’s university; it wouldn’t be permitted. And also the
    3:38:08 students wouldn’t respond to it, you know? So I try to share my enthusiasm. I think that’s like
    3:38:12 almost the number one thing I bring is my enthusiasm, like, look how neat and interesting
    3:38:18 these ideas are. I try to keep my own views out pretty much. I try to give the fairest possible
    3:38:24 rendition I can of each thinker. If I find someone really disturbing, I might sidebar at the end of
    3:38:29 the lecture and say, this kind of, you know, I find this unsettling and this, you know, tells me
    3:38:34 something about myself. But most of the time, I’m bringing people into the, like the biography of
    3:38:39 a great thinker, the context of them. And then we, in the lecture, we’ll literally read the work
    3:38:44 together and we’ll talk about it. And I’ll ask the students, what are you finding here? What’s
    3:38:51 jumping out at you? Kind of breaking down the language and really teaching them how to do deep
    3:38:56 reading. So I feel like that is my contribution right now. We’re having trouble reading collectively.
    3:39:00 We’re having trouble paying attention collectively. And I’m trying to cultivate
    3:39:06 their skills to doing that and showing them how I do it and also modeling like this is how I would
    3:39:11 read a text. This is what jumps out to me when I look at, you know, Thomas Kuhn or something like
3:39:17 this. And just show them that studying the history of ideas is really fun. I feel incredibly
3:39:22 privileged to do it, you know. And the other thing is I think this is the time for students in college
    3:39:28 figuring out who they are. Their minds are developing and growing. They can really handle
    3:39:32 complicated hard ideas. They don’t always have the context behind them. So I need to give them
    3:39:36 the hard ideas and then show them this is kind of the context of what’s happening in the world.
    3:39:41 But really, I’m just, I’m showing them the landscape. I don’t have time to go deep.
3:39:48 We have a 10-week quarter, you know, so I’m giving them a flyover. And then I want them to know
    3:39:52 how to go deep and know where they want to go deep. Do the thing that Milton Friedman did, which is
    3:40:00 in parallel. Yes, do their own parallel curriculum. Exactly. Exactly. What advice
    3:40:05 would you give in terms of reading about ideas you agree with and reading ideas you disagree with?
    3:40:10 I mean, even though I think the passion is important for the teaching of the ideas, like,
    3:40:16 dispassion is more important for the reading and understanding of them. So a lot of people have
    3:40:21 said to me like, I could never write about Ayn Rand like she makes me so angry. You know,
    3:40:26 and I’ve never become, I don’t get angry reading her. Like, I’m like, oh, there you go again,
    3:40:32 you know, or like, well, that’s going to cause trouble. You know, and so I guess I’m approaching
    3:40:38 it with a sort of charity, but also with, I’m not, I don’t have huge expectations. I’m not expecting
    3:40:43 to have the light shine on me. I’m not expecting to agree. I’m like, I can be very clinical about
    3:40:49 it. So that’s what’s worked for me. It might not work for others. But and then I just try to find
    3:40:54 the humor in it. You know, like, how, how funny is it? Like these different aspects of them,
    3:40:58 you know, like when teaching my students about Oliver Wendell Holmes, like his,
    3:41:05 his dad wrote a poem about him. He called him the astronaut about how he came from outer space.
    3:41:09 He seemed like he came from outer space. I’m like, this is his dad’s view of his son. Like,
    3:41:14 that’s how weird of a guy he was, you know? And so I try to like find that, keep alert for those
    3:41:19 funny kind of human touches that like, these are ultimately just people, you know, people with
    3:41:23 ideas that they spent enough time polishing up and developing that we still want to read about them
    3:41:28 a hundred years later. What about the dramatic formulation of that same question? Do you think
    3:41:33 there’s some ideas that are good and some of that are evil? Do you think we can draw such lines? Or
3:41:38 is it more complicated, like the old Solzhenitsyn line between good and evil that runs through the
3:41:44 heart of every person? I mean, I philosophically agree with Solzhenitsyn for sure. I do think
    3:41:50 some ideas pull on the good side and some ideas pull on the bad side, like absolutely. And I think
    3:41:55 that’s probably, that’s probably why people dislike Rand so much is they feel like she’s
    3:42:00 giving license to the bad side. And she’s saying it’s okay to be selfish and it’s okay, you know,
    3:42:07 they feel like she’s unloosing the dark forces. And, you know, in some cases that may be true,
    3:42:14 but she’s also unloosing some of the light forces in terms of reflecting on yourself and trying to
    3:42:19 be true. But definitely there are ideas that are dangerous to play with and there are ideas that
    3:42:26 I think give license to the darker sides of human nature. But I think you can see that in the
    3:42:34 historical record. So I think that it’s possible to show that. And obviously there’s some places,
    3:42:37 you know, like Germany, they’re trying, they think the ideas are so dangerous,
    3:42:42 they can’t be allowed to circulate. And in some contexts that may absolutely be true.
    3:42:48 And then still even that we should take with a grain of salt because perhaps censorship of an
    3:42:53 idea is more dangerous than the idea. So all of that, that’s the beautiful thing about us humans,
    3:43:00 we’re always at tension trying to figure out what ideas are the ones that are going to help
    3:43:08 humanity flourish. Pothead question, do humans have ideas or do ideas have us? So where do
    3:43:12 ideas come from? You have Milton Friedman sitting there after Rutgers trying to figure out what
    3:43:21 he can do about the Great Depression. Where, do you ever think about this? I sometimes think that
    3:43:27 aliens are actually ideas. They’re just kind of like travel through human brains and
3:43:38 like captivate us. So we get all real excited. Like with the monolith in 2001: A Space Odyssey,
    3:43:44 a monolith lands and everybody gets excited and somehow this idea just gets everybody
    3:43:51 to be on the same page and it reverberates through the community. And then that results in an
    3:43:56 implementation of some action that results in us figuring out that that idea was actually bad and
3:44:02 we learned new ideas. But it feels like the ideas are running the show. Yeah. I mean, I think in a
3:44:09 lot of cases, I think it's true. Keynes has this famous quote, "Most men are slaves of some defunct
    3:44:19 economist.” That’s funny. So I do think it’s really hard to have an original thought. We are social
    3:44:25 creatures. We encounter the same situations again and again. And so it’s really hard. You’re born
    3:44:30 into these traditions of thinking and being and knowing and most people are never going to question
    3:44:34 them and most people are never going to become aware of them. So again, that’s some of the work of
    3:44:39 what I do as an intellectual historian is like, let’s become aware. Let’s realize that you’re
    3:44:46 carrying a map that’s orienting you to the world in a certain way. And so I think you have to work
    3:44:51 really, really hard to have an original idea. And even then, it’s not a completely original idea.
    3:44:56 It’s a reworking and a reassembling of ideas others have had. So I definitely think it’s
    3:45:03 possible to create autonomy in the realm of ideas and to be an autonomous consumer of ideas. But I
    3:45:09 think on balance, most people are not. And that’s fine. They want to have experiences. They want
    3:45:15 to do other things with their life. Well, Jennifer, thank you so much for this journey through ideas
    3:45:21 today. And thank you so much for your incredible work. It was really fun and fascinating to talk
    3:45:27 with you today. Thank you. Thank you. Thank you for listening to this conversation with Jennifer
    3:45:33 Burns. And now let me try to reflect on and articulate some things I’ve been thinking about.
    3:45:39 If you’d like to submit questions or topics that I can comment on in this way here at the end of
3:45:49 episodes, go to lexfridman.com/ama or contact me for whatever other reason at lexfridman.com/contact.
    3:45:54 Please allow me to say a few words about my interview with the president of Ukraine,
    3:46:00 Volodymyr Zelensky. Now that a few days have passed and I’ve had the chance to think about
    3:46:06 the conversation itself, the response, future upcoming conversations, and what it all means for
    3:46:13 the war in Ukraine, for global geopolitics, and for us humans in general. I’ve gotten a lot of
    3:46:20 heartfelt, positive words from all sides, including, at least so far, literally everybody who knows
    3:46:25 me personally inside Ukraine, which includes a lot of soldiers and many high-profile figures,
    3:46:29 some who are supportive of the president and some who are critical of him.
    3:46:36 Literally all private communication has been positive and supportive. This is usually not the
    3:46:42 case with me. Friends usually will write to me to criticize and to disagree. That’s the whole point
    3:46:49 of friendship. To argue and have fun doing it. There was none of that here, at least so far.
    3:46:54 So, thank you for your support and kind words, it means the world.
    3:47:00 The most common message was please keep pushing for peace. I will.
    3:47:09 But online, on the interwebs, I saw a lot of attacks, sometimes from swarms of online accounts,
    3:47:12 which of course makes me suspicious about the origin of those attacks.
    3:47:19 One of my friends in Ukraine, who by the way thinks the attacks are all propped up by Ukrainian
    3:47:25 bot farms, said there’s no need to say anything extra. Let the interview stand on its own.
    3:47:28 Just keep focused on the mission of pushing for peace.
    3:47:36 Basically, he’s a Ukrainian version of my other friend, Joe Rogan, who to this day says,
    3:47:41 don’t read the comments. This is generally good advice and I try to follow it. But I’m also a
3:47:48 human being. I wore my heart on my sleeve in this interview. This war, for me, is deeply personal.
    3:47:56 And the level of vitriol, misrepresentation and lies about the conversation and about me personally
    3:48:01 was particularly intense and disingenuous. So, I thought I would use this opportunity to say a
    3:48:07 few words, just speak a bit more about how I approach this conversation with President Zelensky
    3:48:13 and conversations in general. This interview is something I poured my heart and soul into,
    3:48:19 preparing a lot. I’ve described parts of the preparation process I follow in the outro to
    3:48:25 the Zelensky conversation. But in general, let me say that I’ve read a lot, listened to a lot,
    3:48:31 and had a lot of private conversations with people on the ground. I have many flaws, but being
    3:48:38 unprepared for this conversation is not one of them. Two low effort attacks got to me a bit,
    3:48:46 if I’m being honest, though I am learning to take it all in stride. First attack is that I’m unprepared,
    3:48:54 uninformed, or naive. I don’t give a damn about the trolls, but I want people who listen to me,
    3:48:59 who support me, who care about my words to know that this is not the case. It never will be the
    3:49:07 case for future conversations, especially ones of this importance. I work extremely hard to prepare.
    3:49:15 Second low effort attack that got to me a bit, is that I’m a shill for Zelensky or a shill for
    3:49:22 Putin. Both accusations were hurled readily and freely by the online mob of all persuasions,
    3:49:28 by the left and the right in the United States, and Europe, by the pro and the anti Zelensky people
    3:49:35 in Ukraine, or of Ukrainian origins, and by the pro and anti-Putin people in Russia, or of Russian
    3:49:44 origins. As I’ve said, over and over, this is not the case, and will never be the case. I’m a shill
    3:49:50 for no one. More than that, I just simply refuse to be caught in any one single echo chamber.
    3:49:56 It’s an ongoing battle, of course, because social media algorithms and the various dogmatic groups
    3:50:03 and tribes out there want to pull you in to their warm embrace of belonging, and humans want to
    3:50:10 belong. But the cost of the path I have chosen is that I will never belong to any one group.
    3:50:19 In the end, like many of us must, I walk alone. And I try to do my best to do what is right,
    3:50:24 to my independent heart and mind, not what is popular with any one group.
    3:50:31 My goals for this conversation were twofold. First, give a large platform to President Zelensky
    3:50:36 to explain his perspective on the war, and to do so in a way that brings out the best in
    3:50:44 who he is as a leader and human being. Second goal was to push for peace, and to give him every
    3:50:49 opportunity possible to signal that he’s ready to make peace, and to provide his vision for what
    3:50:56 that might look like. And just to be clear, by peace, I mean long-lasting peace that minimizes
    3:51:02 suffering of people in the region and maximizes the flourishing of humanity in the coming decades.
    3:51:10 The war in Ukraine has led to over one million casualties and growing every single day.
    3:51:17 For some people, torn apart by loss, tormented and forced into a state of anger and hate,
    3:51:25 peace is a dirty word. To them, nothing less than justice must be accepted.
    3:51:35 I hear this pain. I’ve seen the bodies and the suffering. It’s true, peace will not bring back
    3:51:41 your loved ones, but it will prevent further slaughter of more people, each of whom are someone
    3:51:49 else’s loved ones. So again, the second goal of this conversation was to push for this kind of peace.
    3:51:58 So how did I approach it? Every conversation is its own puzzle, so let me try to explain my
    3:52:04 approach for this one. As I’ve said, I read and listened to a lot of material since February 24,
    3:52:11 2022. There would be many weeks over the past three years where I would spend every day over
    3:52:18 eight hours a day of focused reading and research. There were several rabbit holes that I consistently
    3:52:24 returned to and researched, but the most important line of inquiry was always peace talks. Not just
    3:52:31 in this war, but in other wars in modern history. For this specific war, as part of the background
    3:52:37 prep, I would take notes on every single perspective I could find on every single major diplomatic
    3:52:43 meeting and negotiation that happened in Ukraine-Russia relations since 1991.
    3:52:51 There is a lot of material to go through, and there are a lot of perspectives, even on the very
    3:52:57 2019 meeting that President Zelensky spoke about in this podcast. Just as a small but important
    3:53:04 example, Andrei Bogdan was interviewed twice by Dmitry Gordon and gave a deep inside look
    3:53:11 of the administration of President Zelensky, including that very 2019 meeting. The two interviews
3:53:18 are seven and a half hours, by the way, and from my perspective as an interviewer are a masterclass in
    3:53:24 interviewing. Andrei Bogdan worked directly with President Zelensky as the head of the office of
    3:53:30 the President of Ukraine. He was there for the 2019 face-to-face meeting between Volodymyr Zelensky
    3:53:37 and Vladimir Putin at the Paris summit, along with French President Emmanuel Macron and German
    3:53:45 Chancellor Angela Merkel. This was part of the Normandy format peace talks. In those two interviews,
    3:53:52 Andrei Bogdan gave a very different perspective on that 2019 meeting than did President Zelensky
    3:53:58 to me in our conversation. The perspective being that the failure to negotiate a ceasefire
    3:54:05 and peace was not a simple one-sided story. I don’t think this is the right time for me to dive
    3:54:10 into that data point and be critical. I’m not interested in being critical for the sake of
    3:54:17 criticism. I am interested, once again, in productive conversations, critical or otherwise,
    3:54:24 that push towards peace. The kind I described earlier. This is merely an example of a data
    3:54:31 point I was collecting in my brain. There are many, many others. But all of it taken together
    3:54:38 made it clear to me, and I still believe this, that it is indeed very difficult, but possible,
    3:54:45 to negotiate long-lasting peace with Vladimir Putin. It is certainly true that Ukraine is
    3:54:51 best positioned to negotiate from a place of strength. After the invasion of February 24,
    3:54:59 2022, I believe there were three chances where peace was most achievable. First chance was March
    3:55:06 and April of 2022, with a successful defense of the North. Second chance was the fall of 2022,
3:55:13 with a successful counter-offensive in Kherson and Kharkiv. The third chance is now.
    3:55:19 As he has stated multiple times publicly, Donald Trump is very interested in making peace.
    3:55:25 It is likely that the U.S. financial support for this war will continue to dwindle. So,
    3:55:32 the leverage and the timing for peace negotiation is now. There is unlikely to be another chance like
    3:55:40 this for a long time. Just to zoom out on the conversation piece of this, I interviewed Donald
    3:55:47 Trump and may do so again. I interviewed Vladimir Zelensky and may do so again. And it seems likely
    3:55:55 that I will interview Vladimir Putin in Russia, in the Kremlin. I understand the risks and I accept
    3:56:00 them. The risks for me are not important. I’m not important. I merely want to do my small part in
    3:56:07 pushing for peace in a moment in history when there’s a real chance for that peace to actually be
    3:56:14 achieved. I may be speaking too long, I’m sorry, but I can probably speak for many more hours,
    3:56:20 so this is in fact me trying to be brief. So again, my two goals were to bring out the best in
    3:56:26 President Zelensky as a leader and a human being and to give him every opportunity possible to
    3:56:32 signal that he is ready to make peace and to lay out his vision for what that peace might look like.
    3:56:41 Like I said, step one through ten is prepare well. I did. But step 11 is the actual conversation.
    3:56:46 There the specific psychological and personality quirks and qualities of the guest matter a lot.
    3:56:50 My job is to try to cut through the bullshit walls we put up with human beings
    3:56:56 and reveal directly or indirectly who the person truly is and how they think.
    3:57:03 With Zelensky, he is a deeply empathic and emotional human being who personally feels
    3:57:10 the suffering of the people of Ukraine in this war. This is a strength and perhaps also a weakness.
    3:57:17 But it is an important part of the reason why I said many times that he is a truly historic figure.
    3:57:24 Very few leaders in recent history would be able to pull off what he did, to stay in Kiev,
    3:57:29 to unite the country, to convince the West to join the war effort to the degree they did.
3:57:37 He is also a showman, to borrow the title of the biography I recommended. A man with many layers
    3:57:46 of humor and wit, but also ego and temper. Sometimes fully self-aware and sometimes losing himself
    3:57:52 in the emotional roller coaster of a painful memory or a turn of phrase that he can use as
3:57:59 a springboard for an angry soliloquy. Add to this the fact that we didn't agree on anything:
    3:58:05 what we will talk about or how long we will talk about it. The interview could have easily been
    3:58:11 five minutes or three hours, so I had to quickly gain his trust enough to open up
    3:58:17 and stay for a long-form conversation, but push him enough to reveal the complexities of his
    3:58:24 thought process and his situation. This is where humor and camaraderie was essential and I would
    3:58:29 return to it often, though it was very difficult given the stakes, the heaviness, the seriousness
    3:58:35 of the topic of the war. So in this case, the approach I followed for this conversation is
    3:58:41 constant nudges and questions about peace, often using almost childlike statements or questions.
    3:58:47 I generally like these kinds of questions. On the surface, they may seem naive, but they’re not.
    3:58:53 They are often profound in their simplicity, like a lot of questions that children ask.
    3:58:59 Remember, it was a child who pointed out that the emperor was not wearing any clothes.
    3:59:05 I like the simplicity, the purity, the boldness of such questions to cut through the bullshit
    3:59:11 to the truth. And that truth is that hundreds of thousands of people died in this war
    3:59:18 and are dying every day. And all the other problems, from corruption to suspended elections,
3:59:25 to censorship, cannot be solved until peace is made. I gave the president every single chance
    3:59:31 to signal willingness to negotiate, knowing that both Trump and Putin will listen to this
    3:59:38 conversation. I don’t think he took it and instead chose to speak very crude words towards
    3:59:44 Vladimir Putin. This is fully understandable, but not directly productive to negotiation.
    3:59:51 To clarify, I have hosted many conversations that were intensely critical of Vladimir Putin,
    3:59:56 from Sir Hiplohi to Stephen Kotkin. But this conversation is with a world leader,
    4:00:02 speaking about another world leader during a historic opportunity for peace.
    4:00:08 Crude words of disrespect, while powerful, may harm negotiations.
    4:00:15 Peacemaking in this situation requires compromise in order to avoid further death and suffering.
    4:00:22 And I believe it requires treating the other leader with a seriousness you expect him to treat you
    4:00:29 with. This is what I was pushing for. All that while also putting my ego aside and letting the
    4:00:35 president shine, which is necessary to accomplish both goals one and two that I mentioned previously.
    4:00:41 This is also why I wanted the president to speak about Elon and Trump, to extend the olive branch
    4:00:48 for further avenues of peacemaking. This is not about politics. It is, once again, simply about peace.
    4:00:55 Now, all of this, my words, my attempts were taken out of context and used to attack me by
    4:01:01 some online mobs. As an example, President Zelensky said in a mocking tone that he thinks
    4:01:10 that Vladimir Putin is simply irritated by people who are alive in Ukraine. And I answered, “If you
    4:01:15 believe this, it will be very difficult to negotiate. If you think that the president of a country is
    4:01:22 completely crazy, it is really hard to come to an agreement with him. You have to look at him as a
    4:01:28 serious person who loves his country and loves the people in this country. And he conducts,
    4:01:35 yes, destructive military actions.” The president interrupted me at this point and said,
    4:01:42 “Who are you talking about now? Who loves this country?” And I said, “Putin. Do you think he
    4:01:49 doesn’t love this country?” And the president answered, “No.” Again, this is not a podcast
4:01:55 conversation with a historian or activist, where I somehow, out of nowhere, just for fun,
    4:02:02 waxed poetic about Putin’s or Zelensky’s or Trump’s love of nation. It is a conversation
    4:02:09 with a world leader discussing the opportunity to negotiate peace when a large number of people
    4:02:17 are dying every single day. Even if the heart boils over with hate, leadership now requires
    4:02:23 sitting at the negotiation table and compromising. This may be painful, but it is necessary.
    4:02:28 There are a few other places in the conversation where some online mobs took my words out of
    4:02:35 context and used them to call me naive and to call for more war, saying peace is impossible
    4:02:42 with a man who they claim is the second coming of Hitler. My friends, if you make such attacks on
    4:02:49 this conversation, it is in fact you who are naive and ignorant of the facts of history and
    4:02:59 geopolitics. Peace must be made now in order for death and suffering to stop, in order for Ukraine
    4:03:05 to have a chance to flourish, and in order for the drums of a global war to stop beating,
    4:03:14 a global war that would cripple humanity. This was my goal, once again, to push for peace.
    4:03:22 And I will continue this effort to the best of my ability. Thank you. I love you all.
    4:03:38 [Music]

    Jennifer Burns is a historian of ideas, focusing on the evolution of economic, political, and social ideas in the United States in the 20th century. She wrote two biographies, one on Milton Friedman, and the other on Ayn Rand.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep457-sc
    See below for timestamps, and to give feedback, submit questions, contact Lex, etc.

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Jennifer’s X: https://x.com/profburns
    Jennifer’s Website: https://www.jenniferburns.org

    Jennifer’s Books:
    Milton Friedman biography: https://amzn.to/4hfy1HO
    Ayn Rand biography: https://amzn.to/4afr3A0

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Brain.fm: Music for focus.
    Go to https://brain.fm/lex
    GitHub: Developer platform and AI code editor.
    Go to https://gh.io/copilot
    LMNT: Zero-sugar electrolyte drink mix.
    Go to https://drinkLMNT.com/lex
    Shopify: Sell stuff online.
    Go to https://shopify.com/lex
    AG1: All-in-one daily nutrition drinks.
    Go to https://drinkag1.com/lex

    OUTLINE:
    (00:00) – Introduction
    (10:05) – Milton Friedman
    (24:58) – The Great Depression
    (39:15) – Schools of economic thought
    (50:22) – Keynesian economics
    (58:10) – Laissez-faire
    (1:06:00) – Friedrich Hayek
    (1:11:18) – Money and monetarism
    (1:26:03) – Stagflation
    (1:30:56) – Moral case for capitalism
    (1:34:53) – Freedom
    (1:39:51) – Ethics of competition
    (1:43:37) – Win-win solutions
    (1:45:26) – Corruption
    (1:47:51) – Government intervention
    (1:54:10) – Conservatism
    (2:00:33) – Donald Trump
    (2:03:09) – Inflation
    (2:07:38) – DOGE
    (2:12:58) – Javier Milei
    (2:18:03) – Richard Nixon
    (2:25:17) – Ronald Reagan
    (2:28:24) – Cryptocurrency
    (2:43:40) – Ayn Rand
    (2:51:18) – The Fountainhead
    (3:02:58) – Sex and power dynamics
    (3:19:04) – Evolution of ideas in history
    (3:26:32) – Postmodernism
    (3:37:33) – Advice to students
    (3:45:50) – Lex reflects on Volodymyr Zelenskyy interview

    PODCAST LINKS:
    – Podcast Website: https://lexfridman.com/podcast
    – Apple Podcasts: https://apple.co/2lwqZIr
    – Spotify: https://spoti.fi/2nEwCF8
    – RSS: https://lexfridman.com/feed/podcast/
    – Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
    – Clips Channel: https://www.youtube.com/lexclips

  • #456 – Volodymyr Zelenskyy: Ukraine, War, Peace, Putin, Trump, NATO, and Freedom

    AI transcript
    0:00:06 The following is a conversation with Volodymyr Zelensky, the president of Ukraine.
    0:00:12 It was an intense, raw, and heartfelt conversation, my goal for which was to understand
    0:00:21 and to do all I can to push for peace. Please allow me to say a few words, first about language,
    0:00:26 then about the president, and finally about history. Please skip ahead,
    0:00:33 straight to our conversation, if you like. We spoke in a mix of languages, continuously switching
    0:00:41 from Ukrainian to Russian to English. So, the interpreter was barely hanging on. It was indeed,
    0:00:47 in many ways, a wild ride of a conversation, as the president said, the first of many.
    0:00:55 Language, like many other things in a time of war, is a big deal. We had a choice. Speaking Russian,
    0:01:02 Ukrainian, or English. The president does speak some English, but he’s far from fluent in it,
    0:01:08 and I sadly don’t speak Ukrainian, yet. So, Russian is the only common language we’re both
    0:01:14 fluent in. In case you don’t know, the Russian language is one that the president speaks fluently
    0:01:20 and was his primary language for most of his life. It’s the language I also speak fluently,
    0:01:27 to the degree I speak any language fluently, as does a large fraction of the Ukrainian population.
    0:01:34 So, the most dynamic and powerful conversation between us would be in Russian, without an interpreter,
0:01:42 who in this case added about a two-to-three-second delay, and frankly translated partially and poorly,
    0:01:48 for me at least. Taking away my ability to feel the humor, the wit, the brilliance, the pain,
    0:01:55 the anger, the humanity of the person sitting before me, that I could clearly feel when he was
    0:02:03 speaking fluently in the language I understand, Russian. But all that said, war changes everything.
    0:02:08 The Ukrainian language has become a symbol of the Ukrainian people’s fight for freedom
    0:02:15 and independence. So, we had a difficult choice of three languages, and faced with that choice,
    0:02:23 we said yes, to all three, to the consternation and dismay of the translators.
    0:02:31 We make captions and voice over audio tracks available in English, Ukrainian and Russian,
    0:02:37 so you can listen either to a version that is all one language, or to the original mixed language
    0:02:42 version with subtitles in your preferred language. The default is English overdub.
    0:02:48 On YouTube, you can switch between language audio tracks by clicking the settings gear icon,
    0:02:56 then clicking audio track, and then selecting the language you prefer, English, Ukrainian,
    0:03:05 Russian. To listen to the original mixed language version, please select the English UK audio track.
    0:03:12 Big thank you to Eleven Labs for their help with overdubbing using a mix of AI and humans.
    0:03:17 We will continue to explore how to break down the barriers that language creates,
    0:03:24 with AI and otherwise. This is a difficult but important endeavor. Language, after all,
    0:03:31 is much more than a cold sequence of facts and logic statements. There are words when spoken
0:03:38 in the right sequence and at the right time that can shake the world and turn the tides of history,
    0:03:46 that can start and end wars. Great leaders can find those words, and great translators
    0:03:52 can help these words reverberate to the outskirts of a divided civilization.
    0:03:58 On another note, let me say that President Zelensky is a truly remarkable person
    0:04:05 and a historic figure. I say this as somebody who deeply understands the geopolitical complexity
    0:04:12 and history of the region. I am from this region. My parents were both born in Ukraine,
    0:04:20 Kiev and Kharkiv, both my grandfathers too. I was born in Tajikistan and lived for a time there,
0:04:29 then in Kiev, then Moscow, then the United States. And while I am now, for almost 30 years and to
0:04:37 the day I die, a proud American, my family roots grow deep in the soil of the nations that comprised
    0:04:44 the Soviet Union, including Ukraine, Russia, Belarus, and Tajikistan. I’ve gotten to know and
    0:04:48 have spoken for hours with members of the President’s team and people close to him.
    0:04:55 I spoke to hundreds of Ukrainians since 2022, including soldiers, civilians, politicians,
    0:05:00 artists, religious leaders, journalists, economists, historians, and technologists.
    0:05:07 I listened to hundreds of hours of programs that both support and criticize the President,
    0:05:13 in Ukraine, in Russia, in the United States. I’ve read countless books about this war
0:05:20 and the long arc of history that led up to it. If forced to recommend two, at this moment,
0:05:26 I would say The Russo-Ukrainian War by Serhii Plokhy and The Showman by Simon Shuster,
    0:05:33 which is a good personal behind the scenes biography of the President, focused on 2022.
    0:05:41 But there are many, many more. This is why I can comfortably say that he is a truly singular
    0:05:47 and remarkable human being. It was an honor and pleasure to talk with him on and off the mic.
    0:05:55 Now, it is true that I plan to travel to Moscow and to speak with President Vladimir Putin.
    0:06:01 And I hope to be back in Kiev as well, as President Zelensky said this was our first
    0:06:07 of many more meetings. In all these cases, I seek to do my small part in pushing for peace.
    0:06:13 And in doing all this, I’m deeply grateful for the trust people have given me on all sides,
    0:06:20 for the people attacking me, sometimes lying about me, for the critics in the stands,
0:06:26 chanting the latest slogans of the mass hysteria machine, like the sheep in Animal Farm.
    0:06:34 I love you too. And I assure you that drawing lines between good and evil on a world map
    0:06:41 is much easier than seeing that line between good and evil in every human being,
    0:06:48 including you and me. This is what I try to do. I’m simply a human being who seeks to find and
    0:06:58 surface the humanity in others. And as I’ve said, no amount of money, fame, power, access can buy my
    0:07:06 opinion or my integrity. Now, finally, please allow me to briefly overview some history to give
    0:07:10 background for several topics that President Zelensky references in this conversation.
0:07:16 I recommend my conversation with Serhii Plokhy and many others about the history of the region.
    0:07:23 But here let me start with 1991, when Ukraine declared its independence and the Soviet Union
    0:07:30 collapsed. From this point on, Russia-Ukraine relations were defined in large part by whether
    0:07:36 Ukraine aligned more with Russia or with the West, meaning Europe, United States, NATO, and so on.
0:07:44 In 2004, with the Orange Revolution, a pro-Western candidate, Viktor Yushchenko, became president.
    0:07:51 In 2010, it went the other way, a pro-Russia candidate, Viktor Yanukovych became president.
    0:07:57 The internal tensions grew, and in 2013, Euromaidan protests broke out
    0:08:03 over Yanukovych’s decision to suspend talks with the European Union in favor of closer ties with
    0:08:10 Russia. This set forward a chain of important events in 2014. On the politics front, Yanukovych was
    0:08:17 ousted and fled to Russia, leading to the election of a pro-Western president. Also, in 2014, on the
    0:08:24 war front, Russia annexed Crimea and war broke out in the Donbass region of eastern Ukraine,
    0:08:31 which eventually killed over 14,000 people and continued all the way to 2022, when,
    0:08:39 on February 24, 2022, Russian forces initiated a full-scale invasion of Ukraine.
    0:08:43 This is when the world started to really pay attention.
    0:08:50 Now, some history of peace talks. Volodymyr Zelensky won the presidency in 2019,
    0:08:55 and he discusses, in this conversation, the ceasefire agreements he made with Vladimir Putin
    0:09:03 in 2019, which was one of many attempts at peace, from the two Minsk agreements in 2014 and ’15
    0:09:12 to a series of ceasefire agreements in 2018, ’19, and ’20, all of which failed, in part or in whole.
    0:09:17 All this shows just how difficult ceasefire and peace negotiations are,
    0:09:24 but they are not impossible. It is always worth trying, over and over again, to find the path to
    0:09:32 peace. I believe that presidents Zelensky, Putin, and Trump should meet soon after January 20 this
    0:09:39 year and give everything they got to negotiate a ceasefire and security guarantees that pave the
    0:09:45 way for a long-lasting peace. We discussed several ideas for this in this conversation.
    0:09:54 As I said, this was one of my main goals here, to push for peace. This trip to Kyiv and this
    0:09:59 conversation was a truly special moment for me in my life. It is one I will never forget.
    0:10:05 So to reflect, I say a few more words and answer some questions at the very end if you like to
    0:10:14 listen. But here, let me say thank you to everyone for your support over the years. It means the world.
    0:10:20 And now, a quick few second mention of each sponsor. Check them out in the description.
    0:10:24 It’s the best way to support this podcast. There are no sponsor reads in the middle,
    0:10:32 so, you know, you can skip these, but I do try to make them interesting in case you stick around.
0:10:36 In either case, still please check out the sponsors, buy their stuff. It's the best way
    0:10:45 to support this podcast. We’ve got Notion for Notes and Team Collaboration, Github for all things
0:10:52 programming, including with the help of AI, AG1 for health, Element for electrolytes,
0:10:58 Eight Sleep for naps, and BetterHelp for your mind. If you want to get in touch with me for whatever
0:11:05 reason, go to lexfridman.com/contact. And now onto the full ad reads. This episode is brought to you
    0:11:10 by Notion, a note-taking and team collaboration tool. I believe I mentioned it at the end of the
    0:11:17 podcast. It’s something I use regularly as a big part of my podcast prep and research process.
    0:11:24 I currently only use it at the computer when I’m doing really sort of rigorous systematic
    0:11:30 note-taking. But it is, like I mentioned, really the best integration of AI that I’ve
    0:11:37 used in any note-taking application. I’m a bit delirious at the moment because through the insane
    0:11:46 amount of work that had to be done to bring together the translation for this episode with
    0:11:52 President Zelensky, I’ve gotten very little sleep. So here I am trying to put together a few words
    0:12:04 when the neurons required to assemble said words are just not firing. Anyway, the amount of research,
    0:12:10 the amount of note-taking that I had to do, just the chaos, the whirlpool, the overwhelming amount
    0:12:18 of notes that I took across many books and blog posts. And I was listening to just a large number
    0:12:24 of conversations from all different kinds of perspectives. And I’m not sure those notes were
    0:12:30 sort of directly useful, but they’re building up a knowledge base. They’re building up an intuition.
    0:12:38 They’re making sure that I have a chance to understand. So anyway, Notion played a big part
0:12:45 of that. Try Notion AI for free when you go to notion.com/lex. That's all lowercase: notion.com/lex
    0:12:50 to try the power of Notion AI today. This episode is also brought to you by a new sponsor,
    0:13:01 but obviously one I’ve used for many, many years. It’s GitHub and GitHub Co-Pilot. So GitHub for
    0:13:07 people who somehow don’t know if you’re listening to this and you’re not a developer, it’s basically
    0:13:16 a place where developers go to be happy and to collaborate and to share and to build, especially
    0:13:24 for people who are part of the open source world. So it really is a magical place. And also they
0:13:31 were pioneers in the AI-assisted coding space with GitHub Copilot. Now GitHub Copilot is not just
0:13:39 available in VS Code. It's also available in Neovim. It's available in all the JetBrains IDEs.
    0:13:47 I’ve used JetBrains for a long time and loved it and eventually drifted away. Still have not
0:13:54 tried Neovim. I probably should. Vim, Neovim. That's what all the cool kids are using. Anyway,
    0:14:01 GitHub Co-Pilot and all the different features of AI-assisted coding that they’re continually
0:14:08 developing are available in those IDEs. As I mentioned at the end of the episode, I was an
    0:14:16 Emacs user for probably over 20 years, way more than 20 years. And so I don’t remember exactly when,
0:14:22 but a few months ago, I switched to VS Code. And that was just such a lightbulb moment. It took
    0:14:27 a little bit of time to get adjusted. I missed a bunch of stuff in Emacs, especially because I
    0:14:32 customized everything with Lisp, which is what Emacs is written in. And it’s the sort of the
    0:14:39 back end customization is written in Lisp. And Lisp is its own programming language with an aura
    0:14:48 and a spirit that permeated my being for a long time. So it took a little bit of time to get used
    0:14:56 to VS Code. But really, the magic of Co-Pilot is the thing that allowed me to transition so quickly.
    0:15:02 And they’re facing a lot of steep competition right now. So I’m excited just how seriously
    0:15:09 they’re taking this competitive space of AI-assisted coding and developers win.
    0:15:16 The more competition, the more features developers win. And I, as a developer myself,
0:15:23 just full of joy when I get to pair-program with a good LLM. Anyway, get started with GitHub
    0:15:32 Co-Pilot for free today at gh.io/copilot. This episode is also brought to you by AG1 and all
    0:15:40 in one daily drink to support better health and peak performance. I’ve been traveling crazy places,
    0:15:49 intense schedules, just chaos, taking risks, all that kind of stuff. So to get back to where I can
    0:16:01 drink AG1 and have for brief moments of time the feeling of home is really nice. AG1, for whatever
    0:16:08 reason, is the thing that makes me feel like home. It’s the symbol of the daily habits that I do when
    0:16:18 I have my shit together. And I’m exercising and eating okay and making sure that I’m getting the
    0:16:26 nutrition I need. So in that sense, it’s good to be home. They’ll give you a one month supply of
    0:16:32 fish oil when you sign up at drinkag1.com/lex. This episode is also brought to you by Element,
    0:16:38 my daily zero sugar and delicious electrolyte mix. Now some number of packets of element I actually
    0:16:46 did bring to Ukraine, to Eastern Europe, to Europe as I’m traveling. It’s just so easy to travel with
    0:16:54 and especially when I’m fasting for 24 hours or more, which I was doing not by choice, but
    0:17:04 for the flexibility that it enables, electrolytes really help me avoid the headaches associated
    0:17:12 with not consuming enough calories or fasting or eating only meat or all that kind of stuff.
    0:17:18 It really helps make sure that you avoid what people call the keto flu, but I find that when I’m
    0:17:24 fasting or doing really low carbs at any stage, it just makes me feel better if I make sure the
0:17:31 electrolytes are correct. And the same is true with intense exercise. So get a sample pack
0:17:38 for free with any purchase at drinkLMNT.com/lex. This episode is brought to you by Eight Sleep
0:17:50 and its Pod 4 Ultra. And yes, the irony of the fact that I haven't slept for probably 40 hours
0:17:57 and I'm about to crash, the irony of the fact that I am talking, or attempting to,
    0:18:06 about a really, really nice mattress. I just can’t wait. I can’t wait. It cools the bed,
    0:18:14 warm blanket. It really is, it’s an escape from the insanity, the cruelty,
    0:18:24 the madness of the world. Yeah. So I look forward to that. I’ll look forward to that whenever I
0:18:32 take a power nap or try to get a full night of sleep. Yeah. It's a little respite from the madness
0:18:41 of the world. Go to eightsleep.com/lex and use code LEX to get $350 off the Pod 4 Ultra.
    0:18:48 This episode is also brought to you by BetterHelp, spelled H-E-L-P Help. It’s difficult for me to
    0:18:59 explain the kind of things that war does to people’s minds, to how they see the world,
    0:19:06 how they interact with each other. I’ve seen a lot of pain in my travels and it breaks my heart.
    0:19:16 So that said, the human mind is remarkably resilient to suffering. And that too
    0:19:25 gives me a kind of hope that no matter what, the human spirit prevails and flourishes.
    0:19:31 Sometimes it takes years, sometimes it takes generations, but it does flourish.
    0:19:39 Anyway, I’m reminded of that from BetterHelp. It’s a service that helps you figure out what
    0:19:43 you need to match with a licensed therapist in under 48 hours. You can check them out at
    0:19:50 BetterHelp.com/lex and save on your first month. That’s BetterHelp.com/lex.
    0:19:58 This is the Lex Friedman podcast. And now, dear friends, here’s the president of Ukraine,
    0:20:07 Volodymyr Zelensky.
    0:20:20 If we can explain why the Ukrainian language is very important,
    0:20:23 our conversation will be most effective and impactful if we speak in Russian.
    0:20:27 I speak Russian perfectly, of course, and I understand everything you are talking about.
    0:20:34 However, I can’t respond in Russian the entire interview. It’s because this is how it is today.
    0:20:39 I am not making anything up. You can see it all for yourself. You can feel and hear it.
    0:20:46 Today, there were 73 missile attacks against us and people were killed. There were over 100 drones
    0:20:53 today and this is a daily occurrence. The people who attack us, they speak Russian.
    0:20:59 They attack people who were only recently told that this was actually in defense of
    0:21:08 Russian-speaking people. And this is why I respect neither the leader or director of today’s Russia,
    0:21:16 nor the people. I just, that’s it. And I don’t think that you can just pretend that nothing’s
    0:21:24 happening and give Putin a pass once again for saying that we are one people, that we speak
    0:21:30 one language, etc. They speak the language of weapons. That is a fact. And we are peaceful people,
    0:21:38 peaceful people who want to protect themselves and defend their freedom and their human choice.
    0:21:46 You know, at the beginning of the war, I addressed Russians in Russian.
    0:21:58 Zero effect. They’re mute. They do not listen. They did not listen.
    0:22:03 Some are afraid. Some have other issues. They have different reasons. It’s like when a person is
    0:22:08 drowning. Drowning and people walk by because they can’t hear them. And someone walks on by crying.
    0:22:13 Afraid to save them. It doesn’t change anything for the one drowning.
    0:22:19 They need someone to help them. This is why I honestly despise these people as they are deaf.
    0:22:27 They began the occupation in the supposed defense of the Russian language. And that’s why,
    0:22:32 with all due respect, I would like to give an interview in Ukrainian. This is very,
    0:22:41 this is very important to me. If there are some points that you want me to explain,
    0:22:47 in Russian, I can certainly do that. I can certainly occasionally speak Russian.
    0:22:54 But in general, in general, no, I’m not sure that that you will understand me completely.
    0:22:59 Despite your Ukrainian roots, you are a citizen of the United States, right?
    0:23:10 Yes. That’s why I’m surprised that you don’t understand. Well, it was a long time ago. I
    0:23:20 understand that it was a long time ago. Moreover, a lot has changed. A lot has changed.
    0:23:28 If I may please allow me to say this in Russian. Yes, many things have changed. But I have hope.
    0:23:34 I hope that today, many Russians will hear this, that Vladimir Putin will hear this,
    0:23:38 that the American president, Donald Trump, and the American people will hear this,
    0:23:43 that everyone will hear this. And yes, Ukrainian language is important symbolically.
    0:23:46 But what is also important is that we understand each other well.
    0:23:51 For Donald Trump? Is it important for Donald Trump whether I speak Russian or not?
    0:23:57 Yes. Because unfortunately, and it hurts to admit, but I cannot speak or understand Ukrainian yet.
    0:24:02 So your wit, dynamism, and your humanity will not come through as well and as quickly.
    0:24:06 Remember, I need to wait for two to three seconds to hear it.
    0:24:12 You have a great sense of humor, great stories. With an interpreter translating,
    0:24:15 I simply won’t see this, but I understand that it’s painful.
    0:24:21 Another reason is that I hoped we could show that even though
    0:24:26 it is sometimes said that Russian is banned in Ukraine.
    0:24:29 This is not true. I’m speaking Russian now, right?
    0:24:32 We have people who speak Russian. This is not true, really, it’s not.
    0:24:38 It’s really not true. We disrespect Russian now because of Russians.
    0:24:44 That’s all. When they were saving Russian speakers, they killed Russian speakers,
    0:24:48 many people who actually, many of whom are in the East, right?
    0:24:53 In the East, they lived, lived in the East.
    0:24:56 They destroyed their houses, destroyed their lives.
    0:24:59 It’s not a rhetorical thing. It’s not all talk and blah, blah, blah.
    0:25:01 I don’t have time for blah, blah, blah. Yes.
    0:25:05 So it’s a very, very, very important and sensitive moment.
    0:25:12 The message is that we are not one nation. We are not, you know, the same country.
    0:25:15 We’re different countries. Yes, different countries.
    0:25:23 And I think what is most important is what we’re talking about, not how.
    0:25:27 We’re speaking about it. This is what I think. You’re a smart guy.
    0:25:30 So you have a lot of experience in dialogue of this kind.
    0:25:33 That’s why I think you will, you will understand me.
    0:25:44 Yeah. I, anyway, I think it is far better for Donald Trump to hear my English, not my Russian.
    0:25:46 Your English is much better than my Ukrainian.
    0:25:48 You’re getting better and better at everything.
    0:25:53 That’s true. I’m a very honest guy. That’s why I will be very honest with you.
    0:25:58 Okay. Your Ukrainian is not very good, but we will, but we will work on it.
    0:26:01 Yes. I have many flaws. That’s one of them.
    0:26:06 Sometimes I can speak English. Sometimes, as I understand, we can be very flexible, right?
    0:26:10 Very flexible. Spanish, Swahili.
    0:26:11 Yeah, you see?
    0:26:13 Yeah. Javier Malay needs to understand us.
    0:26:17 So by the way, Javier understood me without any words.
    0:26:20 The language of love, maybe.
    0:26:21 Of respect. Respect.
    0:26:25 I respect him. I had a very good conversation with him. Really brilliant.
    0:26:27 May I sometimes speak Russian and sometimes English?
    0:26:29 Yes. You can use any language you like.
    0:26:33 And I think that’s a very good rule for this first meeting between us.
    0:26:38 As you said, maybe we will meet in the future for the second time.
    0:26:39 Second and third and fourth?
    0:26:43 Yeah, this is good. You can ask questions in the language you’d like,
    0:26:45 and I will answer in the language I can.
    0:26:49 Well, you said you wanted to meet by the sea at some point.
    0:26:52 So for our next meeting, let’s meet by the sea.
    0:26:53 With pleasure.
0:26:59 Next time, it would be much better to meet by our Ukrainian Black Sea or our Azov Sea.
    0:27:01 You know, I’ve been to a lot of…
    0:27:06 I have traveled to many cities in Ukraine, but I have never been to Odessa.
    0:27:08 And everyone tells me that, and I don’t know why.
    0:27:09 You have to.
    0:27:12 Can you explain to me why everyone loves Odessa so much?
    0:27:14 What’s there?
    0:27:19 You know, what’s in Odessa? That’s how they say it.
    0:27:21 What’s there? In Odessa, we’ve got it all.
    0:27:21 Okay.
    0:27:27 Odessa, I love Odessa because of its particular temperament.
    0:27:32 People have their own accent, and it’s so…
    0:27:34 There are many nationalities, you know.
    0:27:39 There are a lot of stories, authentic Odessa cuisine.
    0:27:43 By the way, you know, the cuisine is very different from others.
    0:27:47 The dishes are not like any other dishes, and everything is very tasty.
    0:27:50 Also, there are beautiful people.
    0:27:51 And today, you know,
    0:28:00 you understand people very well, especially after the attacks on Odessa.
    0:28:03 You understand what the people are like.
    0:28:06 Just how Odessites are, very Ukrainian.
    0:28:09 And that’s very cool.
    0:28:10 I love Odessa.
    0:28:12 I go there several times a year.
    0:28:16 I go there several times a year now because…
    0:28:20 Well, now because of strengthening of air defense systems,
    0:28:23 because of this grain corridor, etc.
    0:28:25 I go there more often.
    0:28:28 They have the sun there.
    0:28:29 They have the sea.
    0:28:33 It’s Ukraine, and it’s very cool there.
    0:28:37 Well, when you come and visit me in Texas as a guest for the third time…
    0:28:39 With pleasure.
    0:28:39 Let’s do this.
    0:28:42 How about you?
    0:28:47 My friend Joe Rogan and I will go get some Texas barbecue together.
    0:28:48 Who will pay?
    0:28:50 That’s a good question.
    0:28:53 Putin, Putin, for everything.
    0:28:54 He has to pay.
    0:28:55 Well, yes, we’ll invite him to.
    0:28:56 No, no, no, no.
    0:28:57 Okay.
    0:28:57 Without him.
    0:28:58 Okay, I get it.
    0:28:58 Understood.
    0:29:09 But if the Rome Statute will be accepted by your government before this moment.
    0:29:11 By the way, I don’t know if you know this,
    0:29:14 but Joe has a great comedy club in Austin.
    0:29:15 Joe Rogan.
    0:29:16 Joe Rogan, yes.
    0:29:21 And I think that as a person who respects comedy and stand-up comedy,
    0:29:23 it would be interesting for you to have a look at it.
    0:29:27 No, no, I know him, and I saw a lot of different videos.
    0:29:30 He’s a very talented person.
    0:29:35 So it would be a pleasure if you invite me and I’m able to do it.
    0:29:42 I am a little bit busy, but if I’ll be in the United States,
    0:29:46 I hope that I will have a conversation and a meeting with President Trump.
    0:29:50 And of course, during my visit, if I’ll have the time,
    0:29:52 it would be a pleasure if you’ll invite me with pleasure.
    0:29:53 You know what?
    0:29:55 I will pay.
    0:29:56 Good.
    0:30:00 Yeah, I had to think about it, but you are the president.
    0:30:01 Yes, with you, with pleasure.
    0:30:03 When the war is over, please come.
    0:30:03 Thanks so much.
    0:30:05 And when you’re less busy.
    0:30:06 Thanks so much.
    0:30:09 If we can go back many years, World War II,
    0:30:13 tell me the story of your grandfather who fought in World War II.
    0:30:21 My grandfather, he graduated from the military, military academy,
    0:30:25 and from the very beginning of the war, he went to fight.
    0:30:30 He was in the infantry and he fought through the entire war.
    0:30:31 He had many wounds.
    0:30:37 As they used to say back then, his chest is covered in medals.
    0:30:38 And it’s true.
    0:30:39 He had more than 30.
    0:30:42 Yes, more than 30.
    0:30:45 He was the kind of man he was such.
    0:30:49 He was such a serious man.
    0:30:51 I loved him very much.
    0:30:54 And we had a very close relationship.
    0:30:59 Um, he didn’t like to tell details about the war.
    0:31:03 He never, he never boasted.
    0:31:10 Although I asked him, as a boy would, how many fascists did you kill?
    0:31:12 He never talked about it.
    0:31:21 He believed that the war was a great, a great tragedy, a tragedy for everyone.
    0:31:28 And, uh, Ukraine was occupied and it was a tragedy for Ukraine,
    0:31:31 a tragedy for Europe, and a tragedy for the Jewish people.
    0:31:38 His own brothers, his entire family were executed.
    0:31:46 They were tortured by fascists who had occupied Ukraine and their village.
    0:31:54 His father was the head of the village and he was killed.
    0:31:55 They were shot.
    0:32:00 It was a mass, a mass grave, right?
    0:32:03 Yes, it was a communal burial.
    0:32:08 Some of them were killed outright and others were, they were buried alive.
    0:32:13 His four brothers, they all went to war.
    0:32:15 As soon as the war began, they were all there.
    0:32:23 He was the only one who had a military education and they all died in the war.
    0:32:25 He was the only one who came back.
    0:32:27 He had nobody.
    0:32:37 He came back and he found, found my grandmother, his future wife,
    0:32:40 and she was, she managed, what was it called then?
    0:32:42 I don’t know, they don’t have them anymore.
    0:32:50 It was a childcare facility and orphanage, so to speak, a place where orphans lived,
    0:32:56 children who, who don’t have parents, children of war.
    0:33:03 And she managed this childcare facility with difficult children, as they used to call them,
    0:33:08 difficult children who went through the war, who saw their parents killed.
    0:33:15 And this is how they met, because these difficult children, they,
    0:33:18 well, sometimes behave differently.
    0:33:21 They could steal something, do something bad.
    0:33:27 There were many, many children in the orphanage.
    0:33:31 Yes, that’s how she met my grandfather.
    0:33:34 And I loved him very much.
    0:33:44 And I think that my grandfather, frankly, would never have believed that this war is possible.
    0:33:49 He would never have believed it, because he worked in the police after the war.
    0:33:51 He was a colonel.
    0:33:57 He worked in a criminal investigation all his life.
    0:34:06 So he fought with bandits all his life after the Second World War.
    0:34:12 But also, I believe he fought for justice all his life.
    0:34:14 And we all lived in one apartment.
    0:34:21 And even after his death, I lived with both of my grandmothers and my parents,
    0:34:25 two grandmothers, who both lost their husbands.
    0:34:27 Both of them died.
    0:34:31 Well, it was an ordinary family.
    0:34:36 An ordinary family that lived like everyone lived back then in the Soviet Union.
    0:34:42 And even after the Soviets in the 90s, we lived in one apartment all together.
    0:34:45 What else is there to say?
    0:34:51 But I think the most important thing was values, respect.
    0:34:53 They gave me an education.
    0:34:56 My parents gave me an education.
    0:35:02 No one left me money or apartments, so I didn’t inherit anything material.
    0:35:09 But I believe that our real inheritance is here in our minds and in our hearts.
    0:35:09 I believe that.
    0:35:19 This is one second.
    0:35:26 So if I’m sorry, if you tell a joke, I will laugh about one, two or three seconds later.
    0:35:27 There’s a delay.
    0:35:34 So an ordinary family, but not an ordinary time, a World War II.
    0:35:35 World War II.
0:35:39 Speaking of mass graves, I was at Babyn Yar yesterday.
    0:35:41 A large part of my family died there.
    0:35:45 In moments like this, such a place serves as a stark reminder
    0:35:49 of the profound historical gravity of the Second World War.
    0:35:54 I remember, I remember this song from my youth.
    0:35:59 On June 22nd at four o’clock, Kiev was bombed and the war began.
    0:36:06 I always wondered how it would feel to live in a moment when, when everything changed.
    0:36:11 The path of humanity completely shifts in a single moment, just like that.
    0:36:13 What do you think?
    0:36:17 What do you think about that moment in 1941?
    0:36:22 Now, after the 2022 invasion, how do you perceive the Second World War
    0:36:24 after you have witnessed all of it?
    0:36:32 Well, firstly, the war actually started earlier.
    0:36:35 It started here in Ukraine.
    0:36:41 Kiev was bombed, as you quoted, but the war had already begun before that.
    0:36:50 And I think I perceived it as a start of the full-scale invasion.
    0:36:56 Well, I think it’s hard.
    0:37:01 It’s hard to understand why nobody wants to listen,
    0:37:06 look at and analyze history.
    0:37:15 War, the rise of fascism and Nazism, the emergence of Hitler,
    0:37:18 Goebbels and their entire team.
    0:37:22 At the time, this wasn’t just about one party or even one country.
    0:37:29 It was essentially a wave, a wave of hatred,
    0:37:38 a wave of one race, one race above the rest.
    0:37:48 They were, in fact, constructing, and ultimately implemented, a theory around this idea, later seizing Europe.
    0:37:55 They created a theory of one nation, one race, one world, their world.
    0:38:04 Of course, this idea is absolutely senseless, but it has become radicalized over the years and even gained support.
    0:38:16 A vision of one world, and in principle the so-called Russian world, the ideology Putin promotes and imposes, it wasn’t originally like that.
    0:38:22 He was a different person back then, or maybe he was always like this, but his rhetoric was different.
    0:38:28 At the beginning, remember, he talked about the EU and even about Russia’s future being tied to NATO.
    0:38:34 There were even talks of joining the European Union and NATO; he spoke about shared values with the West.
    0:38:36 That’s how it all sounded back then.
    0:38:46 And we must also look at Hitler who, seriously, before the radical idea of taking over the whole world,
    0:38:54 actually made certain steps, and everyone believed he was helping the economy.
    0:39:02 And to be fair, he did take some steps in that direction, but he was a terrifying person.
    0:39:09 None of those actions justify him, nor do they excuse his actions.
    0:39:14 And that’s why we cannot look at the Second World War as if it started in 1939.
    0:39:23 It didn’t begin in 1941 either. We need to draw conclusions. When did it start? With the weaknesses of the world.
    0:39:30 The division of European states, the Molotov-Ribbentrop pact, all of this happened before 1941.
    0:39:37 People who were more informed, those who dug deeper, whether they were politicians or not,
    0:39:49 whether they were from different walks of life, including business, which was different back then, were speaking about all of this.
    0:39:59 Hitler won’t stop. There’ll be a world war. Hitler will destroy nations. Nations.
    0:40:05 And that’s what happened. Someone looked the other way. What I told you about. Europe was sinking then.
    0:40:12 I gave you an example of it. But the whole world looked the other way and didn’t pay attention and said,
    0:40:17 “No, we can negotiate with him. I’m telling you he is okay. We can negotiate with him.
    0:40:27 He’s just more right-leaning, or whatever it was they said. He’s just very pro-nationalist.”
    0:40:37 This is all nonsense and this is not the first time. And Hitler isn’t the first such case in history.
    0:40:49 We’re dealing with a person who was allowed to act on this desire to destroy.
    0:40:56 He was consumed by it and enjoying it. And what happened to Hitler? Now, what about Putin?
    0:41:01 This invasion was also at four in the morning, around four in the morning.
    0:41:09 There were missile strikes on Ukraine. This is the same. I believe that intentions are also the same, but more on that later.
    0:41:15 By the way, you tell me if this is too long, you can stop me.
    0:41:17 Never long enough. It’s beautiful.
    0:41:29 Okay, so it happened here around four in the morning. Before this, I must honestly say,
    0:41:35 everyone said something, predicted something, etc., but I asked only for one thing.
    0:41:45 Primarily from the United States, if you are sure, if you have the evidence, if you talk to him and he tells you that there’ll be an invasion, if all this scares you,
    0:41:58 I only asked for two things. Send us weapons or better yet, strengthen us with preventive measures so there would be no war.
    0:42:04 It wasn’t the weapons that I was asking for. I asked for sanctions. Intimidate him.
    0:42:11 Please don’t say that. If he comes, if he crosses borders, if he kills, we’re imposing sanctions.
    0:42:16 Well, this is complete bullshit. Sorry, but really.
    0:42:17 Oh, I understand this.
    0:42:18 Oh, wonderful. Yes.
    0:42:20 I understood one word.
    0:42:23 Yeah.
    0:42:25 So they did not help.
    0:42:28 I believe that no, and this is a fact.
    0:42:41 We didn’t receive help. If we assume that words are help, well, then yes, we received a lot of it because there were plenty of words.
    0:42:44 Even more than plenty, yes?
    0:42:49 At four in the morning, there were strikes.
    0:42:53 Morally, is it possible to prepare for war?
    0:42:58 No, it doesn’t happen like you read in books, see in movies and so on.
    0:43:06 What happens to you? I was just looking at my wife and children. My children were asleep, but my wife was awake.
    0:43:13 There were strikes, missile strikes. We heard them.
    0:43:24 To you as a living person, how can this be? You just can’t fully believe this.
    0:43:39 You just don’t understand why now, given everything that happened in World War II, when millions of people died, none of it mattered.
    0:43:45 Still at four, at four in the morning, around four, 3:40, 3:45, remember?
    0:43:48 Around this time, yes, there were missile strikes.
    0:44:04 And later, by the way, a few days after, after the first days of the war, I spoke with Lukashenko on the phone.
    0:44:15 And he apologized. And he said that it was not me. Missiles were launched from my territory, and Putin was the one launching them.
    0:44:21 These are his words. I have witnesses. And I apologize, he said.
    0:44:28 But believe me, that’s what he told me. Volodya, this is not me. I’m not in charge, he told me.
    0:44:33 I’m not in charge. These are just missiles. This is Putin. I told him, don’t do that.
    0:44:41 This was done without me. That’s it. He just, on the phone, I remember this conversation.
    0:44:48 I told him that I believed. I told him, you are a murderer too, I’m just saying.
    0:44:55 And he told me, you must understand, you can’t fight the Russians. I told him that we never fought them.
    0:45:02 I said, it’s war. The missiles came from your land, from Belarus. How did you allow this?
    0:45:11 Then he replied, all right, retaliate then. I still remember him telling me, hit the refinery.
    0:45:16 You know how much I care about it. The Mozyr oil refinery, is that it? I can’t recall.
    0:45:22 The Mozyr oil refinery. I told him, what are you on about? What retaliation?
    0:45:28 Forgive me, Volodya. Yes. This was at five in the morning?
    0:45:33 No, no, no. This was during the first or maybe the second day, second or third day of the war.
    0:45:34 Ah, I see.
    0:45:43 Well, after that, I went back home. I was home with my children, with my wife.
    0:45:50 I just went to my wife very quickly that night at four o’clock. Yes, and just told her, get the children, get ready.
    0:45:56 You’ll probably need to go to my office very soon. And I left. That’s it.
    0:46:01 At this moment, you’re no longer a father.
    0:46:10 What happened to me, unfortunately, because I believe that this is, and not only do I believe, I understand,
    0:46:18 especially now that all of this is the most important thing, because your country is your family.
    0:46:25 The strength is in your family, and this is the most important thing, and I’m the president.
    0:46:31 And therefore, I had to stop being a father in my own family, and my wife had to do everything.
    0:46:41 She had to do everything regarding children, regarding safety, and I had to deal with the state because I’m the president.
    0:46:53 And this is my duty. And I, by the way, am taking this very seriously. I went to the office, and here we are now. You’re very welcome.
    0:47:03 Well, at that moment, on February 24th, 2022, everything changed again, just like in June 1941. Everything changed.
    0:47:12 And history took a turn, the history of humanity took a turn. And for you too, you were the president.
    0:47:21 You were talking about fighting corruption, about the country’s freedom, about interesting and innovative reforms.
    0:47:26 But that morning, on February 24th, everything changed.
    0:47:31 Could you tell me about that morning, the details of your actions?
    0:47:36 When you had to quickly make difficult decisions.
    0:47:40 What was the process for you? How did you make these decisions?
    0:47:53 Did you discuss them with people you trust to understand how to respond to this invasion in every technical, political, and military aspect?
    0:47:56 What was the process for you? How did you make the decision?
    0:48:07 According to our legislation, in principle, I’m the supreme commander of the armed forces of Ukraine, so I had to give corresponding orders.
    0:48:14 Yes, I have a military office, and then later there was a military headquarters where all key people gathered.
    0:48:18 This is not only about the military, it’s about energy, etc., all key things.
    0:48:30 But at that moment, I made the decisions quickly and without a doubt, and I cannot say that I am just that kind of person.
    0:48:42 I’m just a living person who believed that if help is needed right now to help evacuate people, help with children, several cities were blocked.
    0:48:49 I was only thinking about how to deliver food there within a day.
    0:49:01 We did a lot of things, although we understood that they, in fact, occupied part of our state.
    0:49:12 And we distributed weapons to people. That’s how it was.
    0:49:21 Trucks came and simply distributed weapons to ordinary people, just on the street, so that they could defend the capital.
    0:49:34 To ordinary people who understood that if the Russians entered the city, then we would have the same thing that’s happening in other cities per the information we received.
    0:49:43 Thanks to digitalization, by the way, we had very good digitalization before this, and we preserved a lot.
    0:49:50 And even when they were surrounding certain cities, a lot of things still worked.
    0:50:03 The banking system, the internet, we had television, and thanks to this, I made several decisions to ensure that people are united and have all the information.
    0:50:09 Russia is very good at spreading large-scale disinformation.
    0:50:24 Fortunately, I have two decades of experience managing a production studio, TV channels, and large media resources.
    0:50:30 I understood that we needed to build an information network very quickly.
    0:50:37 Thanks to this, I began to address the people constantly. This happened several times, three to five times a day.
    0:50:50 In fact, I became an information source for people who were in cities that were cut off from other information.
    0:51:01 And it was very important for me to keep all things digital, to keep the internet, to stay in touch with everyone, with all the people.
    0:51:13 Initially, that’s the contact we had, and then we also built a media platform where we had all the news agencies of Ukraine.
    0:51:23 And this network was called Marathon, and it was also very important for the people to trust us, and people had to receive information.
    0:51:32 Why? There were waves, waves of Russian disinformation on the first day claiming that I had run away.
    0:51:37 I had to go out into the street. I left the office and went outside.
    0:51:48 I had to do this because I was showing that this was no green screen, to show that it was the street, not some digital manipulation.
    0:51:54 I mean, I did these things, then I touched various objects. Now, people might think that these are small things,
    0:52:01 but I was actually showing that I was in a real place. All of this had an impact.
    0:52:07 I was absolutely sure of my actions. And these contacts, several contacts.
    0:52:14 And then I spoke to the Russians. I addressed Russians. I really did. And then only after that, I gathered.
    0:52:19 It was the first day when I invited all of the journalists here, wasn’t it?
    0:52:27 That was on the first day, I think. Well, not right here, but in the press center in this building.
    0:52:34 I talked to journalists. I asked them not to leave because we needed weapons.
    0:52:44 At that moment, they were handing out rifles to people. And for me, journalists and media platforms were essential voices.
    0:52:51 There were various journalists from different countries here, and they were essentially stuck.
    0:52:59 And I asked them for contacts, those who had access to Russians, Belarusians,
    0:53:04 Kazakhs who understood everything, the same information. And I spoke to them.
    0:53:11 And I spoke to them and spoke in Russian. I told them, you must stop Putin.
    0:53:15 This is terrible. This is horror. This is war. You must stop him.
    0:53:20 And if you stand up now, if you speak out, and if you go out into the streets, this was very important.
    0:53:24 I spoke to them in Russian to show them that there was no problem.
    0:53:30 And that all of these pretexts were made up.
    0:53:37 This is why it’s so painful to talk about the Russian language too, because look, if a person does not want to listen,
    0:53:41 they will not listen no matter what language we speak.
    0:53:49 I disagree with you here. I think and hope that many people in Russia will hear us today.
    0:53:55 They blocked YouTube recently. Are you aware of this in their country?
    0:54:00 I know. And I simply guarantee that this conversation will travel fast on the Internet.
    0:54:03 Everyone will hear you. They will hear you.
    0:54:08 Including the President of Russia will hear you. This is why I have hope.
    0:54:15 He is actually deaf, even if he speaks to you. He is deaf by his very nature.
    0:54:21 Do you understand the difference? You know, for instance, when you talk to Musk,
    0:54:31 you’re talking to an innovator, a scientist about rockets.
    0:54:35 You talk about how to save on costs and how they land.
    0:54:41 And on the other hand, Putin doesn’t launch rockets to save money but to kill people.
    0:54:47 Do you think you can talk to Putin about technology?
    0:54:54 Your guys were interviewing him and he told them about tribal history.
    0:54:59 Do you understand? Imagine a Russian man in his country listening to him.
    0:55:04 You know what Musk is about? Technology, Mars, artificial intelligence.
    0:55:09 And this guy, Putin, is standing there bare-assed, pontificating about tribes.
    0:55:14 You’ve got to understand. You think that when you do interviews,
    0:55:21 like Mr. Tucker, who did an interview there, that you’re about to make them friends.
    0:55:26 How could you… What does this have to do with friends?
    0:55:31 He’s different. He is simply different.
    0:55:33 But it’s still necessary.
    0:55:35 A mammoth stands before you.
    0:55:40 By the way, I must say that when you said bare-assed, it was not translated.
    0:55:42 Could the interpreter please translate?
    0:55:44 This is so that you can understand.
    0:55:46 Now he explained everything to me. I understand.
    0:55:48 That’s great.
    0:55:50 But we still need to talk.
    0:55:53 One should always speak with someone who listens.
    0:55:58 And you must speak when you know that this will benefit you,
    0:56:04 bring peace and calm to the world, not the other way around.
    0:56:08 I love President Trump’s message when he speaks.
    0:56:13 I think that we share a position on peace through strength.
    0:56:15 That is very important.
    0:56:20 It means that if you are strong, you can speak.
    0:56:22 And we need to be strong.
    0:56:27 And Ukraine has to be strong, strong enough.
    0:56:29 Otherwise, what for?
    0:56:41 So he is You-Know-Who, like Voldemort, the one who must not be named.
    0:56:44 Yes, he’s like Voldemort.
    0:56:51 He thrives, subsists, and lives on being subjectivized.
    0:56:56 Instead of isolation, he is offered to step out into the light.
    0:57:03 He’s darkness, personified, and you offer him, as it were, to be subjectivized.
    0:57:04 Why?
    0:57:07 There’s only one reason.
    0:57:09 Fear.
    0:57:12 And you say, we need to talk.
    0:57:18 Listen, we need to be in a strong position and not talk, but end the war.
    0:57:21 Yes, yes, it is possible through dialogue.
    0:57:23 We’re not opposed to it.
    0:57:30 But you just need to be in a strong position to make the other person want it.
    0:57:33 Do you think he wants to end the war?
    0:57:35 That’s what you suggested.
    0:57:36 I think this is naive.
    0:57:37 I’m sorry.
    0:57:42 With all due respect, it’s naive to think he wants to finish the war.
    0:57:45 Let me tell you what.
    0:57:48 The circumstances, sorry for interrupting.
    0:57:49 There’s something we need.
    0:57:57 I think that President Trump not only has will, he has all these possibilities, and it’s not just talk.
    0:57:59 I really count on him.
    0:58:02 And I think that our people really count on him.
    0:58:14 So he has enough power to pressure him, to pressure Putin not into wanting to stop it.
    0:58:18 No, he will not want to, to pressure him to actually stop it.
    0:58:19 That is the difference.
    0:58:21 Don’t rely on his will.
    0:58:23 Putin’s will to stop.
    0:58:25 You won’t see it.
    0:58:26 That’s what I think.
    0:58:27 Sorry.
    0:58:28 No, sorry.
    0:58:29 I interrupted you first.
    0:58:39 But what I would want, I do have what some might call a naive dream of you sitting down with Putin and Trump
    0:58:47 and negotiating a deal about a ceasefire and together finding a path to long-term peace.
    0:58:54 And I think this requires strength, requires negotiations.
    0:58:58 There are a lot of carrots and sticks here that can be used to make a real deal.
    0:59:03 And Trump is very keen on making a deal and ready to negotiate.
    0:59:05 Can I ask you a question?
    0:59:06 Yeah.
    0:59:13 I just really want you and I to be on the same page.
    0:59:21 It’s very important to be in the same information space, extremely important.
    0:59:24 Let’s talk a bit about the ceasefire.
    0:59:28 Let me describe the situation to you.
    0:59:39 In December 2019, in the Normandy format, in Paris, at the Élysée Palace, Macron, Merkel, Putin and I agreed
    0:59:42 on a ceasefire. The U.S. wasn’t there.
    0:59:46 And this, by the way, was a weak point of the meeting.
    0:59:50 If you’d like, we can later discuss why they weren’t there.
    0:59:53 It’s a security guarantee thing in general.
    0:59:57 It’s Germany’s position, etc.
    1:00:02 We agreed on an exchange of hostages, an all-for-all exchange.
    1:00:04 We made a deal to exchange everyone for everyone.
    1:00:05 I think you know that.
    1:00:09 And there was also a meeting that lasted many hours.
    1:00:13 A meeting where we made a deal with him.
    1:00:14 Everyone was tired.
    1:00:17 It was just the two of us in the end.
    1:00:19 And I proposed a ceasefire.
    1:00:23 By the way, no one in Ukraine believed.
    1:00:26 Few believed in the ceasefire.
    1:00:28 And he wanted troop withdrawal.
    1:00:36 I calculated that if there were a withdrawal of troops from the line of contact the way Russians proposed, it would take 20 years.
    1:00:39 I proved it to him just in terms of time.
    1:00:41 Square kilometers.
    1:00:45 Namely the length of the line of contact or delimitation line.
    1:00:51 And we agreed on what I told him, that it would not work out.
    1:00:57 But I had many points because I was deeply involved in the issue.
    1:00:59 I was involved very deeply.
    1:01:01 It’s my thing in general.
    1:01:09 If I start doing something, I can’t stand there like that guy I spoke about with my ass out, you know?
    1:01:12 I must be dressed.
    1:01:14 I must be prepared.
    1:01:17 I must be prepared better.
    1:01:20 Better than anyone in front of me.
    1:01:22 You do sports, right?
    1:01:25 I practiced for many years.
    1:01:30 And we know what fights are like, what boxing is, what kind of sport it is.
    1:01:33 This is what I did and I loved it very much.
    1:01:39 When you step into the ring, you understand everything pretty much.
    1:01:46 And so I stepped into it and I was definitely well prepared.
    1:01:48 But he wasn’t.
    1:01:51 He was not deeply involved in the process.
    1:01:53 What border?
    1:01:54 Where is it?
    1:01:57 How long will it take to disengage troops?
    1:01:58 And why wasn’t he involved?
    1:02:00 You want to know?
    1:02:02 Because he wasn’t going to do any of this.
    1:02:04 This is what confused me.
    1:02:13 If you are not deeply involved in the issue, well, then it’s as if you don’t really need the result.
    1:02:15 That’s what I think.
    1:02:16 So what happened?
    1:02:27 We agreed that there would be a continuation of gas transit in 2019.
    1:02:28 We agreed with him.
    1:02:30 This was the security for Europe.
    1:02:32 Merkel asked me for it.
    1:02:36 And this was extremely important for Germany.
    1:02:38 We agreed with him.
    1:02:42 We agreed, though for him it was just money.
    1:02:46 Secondly, we agreed on an exchange.
    1:02:49 For me, this was the most important thing. For them,
    1:02:52 it was the gas; for me, it was the people.
    1:03:04 And this is a fact because I wanted to have a humanitarian advantage so that there would be further meetings that would lead to sustained peace.
    1:03:08 And third, ceasefire.
    1:03:12 Ceasefire you spoke about.
    1:03:14 What happened?
    1:03:18 The gas contract was signed because he needed it.
    1:03:21 And by the way, he knew everything about it.
    1:03:30 As for exchange, we took the first step and exchanged the people.
    1:03:37 Regarding the ceasefire, well, they started killing us in about a month.
    1:03:47 So I called him and I told him we agreed on a ceasefire.
    1:03:48 Didn’t we?
    1:03:50 Well, it wasn’t a piece of toilet paper, was it?
    1:03:52 This is serious business.
    1:03:53 Or so it seemed.
    1:03:55 It really was serious.
    1:04:01 Merkel, Macron, you and I, we all agreed on this together.
    1:04:05 A ceasefire is important, isn’t it?
    1:04:11 Not for New Year’s because everyone was celebrating New Year’s and now they’re offering us a Christmas ceasefire.
    1:04:12 It’s all the same.
    1:04:15 A ceasefire for two, three days just to get some praise.
    1:04:16 But this isn’t a performance.
    1:04:18 This isn’t some kind of theater.
    1:04:21 No, this, this is about people’s lives.
    1:04:22 And that’s what happened.
    1:04:25 After that, I called him a few more times.
    1:04:28 I think I only had two, three calls with him in total.
    1:04:30 I asked him for a ceasefire.
    1:04:32 He told me it couldn’t be.
    1:04:36 We will, we will figure it out now.
    1:04:44 People from, people from the occupied territory, Russians and separatists, they were all there together.
    1:04:47 They continued to shoot and kill our people.
    1:04:55 Yes, the front lines were quiet, but they killed people.
    1:05:00 They were killing people and I kept calling him.
    1:05:05 I called again and again, but there was nothing until after a few months, the Russians stopped answering the phone.
    1:05:08 We did not have any contact since.
    1:05:12 I wanted another meeting like we had in Normandy.
    1:05:14 I wanted the next meeting.
    1:05:19 I wanted to find a solution, but the Russians refused.
    1:05:26 We tried to make it happen through various European countries and not only European, but the Russians refused.
    1:05:31 They passed along some kind of bullshit, made excuses, they didn’t want it.
    1:05:35 Meanwhile, they were sending their snipers.
    1:05:41 We had evidence, living proof, even video evidence, because some of them were captured back then.
    1:05:43 Those were the snipers in training.
    1:05:44 They were training them.
    1:05:50 They were training them and later those snipers operated in Syria and Africa.
    1:05:54 These snipers were training in our country in the East.
    1:05:57 Ukrainians were living targets.
    1:06:03 They were shooting from the other side, killing people, women, people, children.
    1:06:04 They were shooting.
    1:06:05 It was a hunt.
    1:06:12 By the way, it was in the Russian speaking region in the East where, according to him, everyone is speaking Russian.
    1:06:18 That’s where they were shooting, where the situation currently is the most tense.
    1:06:19 They killed people.
    1:06:25 We sent this information, sent pictures, we sent them to the UN, sent them everywhere.
    1:06:28 We worked very hard, very persistently.
    1:06:32 I met with everyone, but who thought of Ukraine back then?
    1:06:34 They didn’t notice it much.
    1:06:40 They didn’t pay much attention to Crimea being illegally occupied either.
    1:06:46 And to be honest, the United States of America too, everyone was somewhat silent about this issue.
    1:06:47 That’s how it was.
    1:06:52 It was like that before a full-scale war.
    1:06:58 I want to ask you a question about the ceasefire.
    1:07:09 For example, in Mariupol, in Mariupol today, there are American and Ukrainian journalists.
    1:07:15 And everyone will tell you who had contact, who has contact now with Mariupol,
    1:07:20 who fled from there in the last minutes just before the occupation,
    1:07:24 or who was able to leave to escape after the occupation.
    1:07:28 Chernoff, who won an Oscar, was among them.
    1:07:32 And the journalists that left Mariupol, they are here.
    1:07:35 By the way, we had a conversation.
    1:07:44 They will tell you that 20,000, 30,000 civilians were tortured and buried there.
    1:07:47 We do not know the number of victims.
    1:07:51 People who didn’t want to work with them, who refused to cooperate with them,
    1:07:53 people who went on strikes to protest,
    1:07:57 people who did not want to work with the Russians who occupied Mariupol.
    1:07:59 And this is one example, just with this city.
    1:08:01 And I have a question for you.
    1:08:03 What about the millions of children?
    1:08:07 And I will ask you in Russian so that you hear this without delay.
    1:08:10 What about the millions of children over there?
    1:08:14 What if we just arranged a ceasefire without understanding what would happen next?
    1:08:20 Without understanding, what will happen to Ukraine’s security guarantees?
    1:08:23 What about the millions of children in the occupied territories?
    1:08:25 What should I tell them?
    1:08:27 What am I to tell them?
    1:08:29 What is it I should tell them?
    1:08:32 What? Whatever?
    1:08:35 Hey, all of you over there, see ya.
    1:08:39 And those tens of thousands of people buried there, they were.
    1:08:41 Is that what we want?
    1:08:44 Are we ready to forgive them for this?
    1:08:47 We must at least take the first step.
    1:08:52 If this is a ceasefire, we must know that there is a security guarantee
    1:08:55 for the part of Ukraine under our control.
    1:08:59 We need it so that he will not come back.
    1:09:01 This is very important.
    1:09:04 And what do we say to the people who live in those territories?
    1:09:06 These are millions of people.
    1:09:12 Did you know that since 2014 in Donetsk, in the Crimea,
    1:09:15 this is happening in Melitopol as well?
    1:09:17 As in Berdiansk now.
    1:09:21 They are taking all these kids of drafting age.
    1:09:26 Go and fight.
    1:09:28 And if they don’t go, they will be killed.
    1:09:31 This is, do you understand what’s happening?
    1:09:36 That is why a ceasefire, everything I said.
    1:09:43 What I wish for and I believe in President Trump’s power to use
    1:09:50 all of this information to come up with a way to make Ukraine strong.
    1:09:53 And be strong.
    1:09:56 Why am I saying that?
    1:10:00 I will give you an example.
    1:10:07 President Trump will be in the same situation as I was in 2019.
    1:10:09 Precisely the same situation.
    1:10:11 I want to end the war.
    1:10:13 We want a lasting peace for Ukraine.
    1:10:15 We must do this.
    1:10:20 A ceasefire, an exchange of people, and then diplomatically return all territories.
    1:10:24 And we will do this through diplomacy.
    1:10:27 What will happen next with President Trump?
    1:10:31 If the ceasefire happens without security guarantees,
    1:10:35 at least for the territory we control, what does he get?
    1:10:40 If he manages to make a ceasefire deal.
    1:10:45 And three months later, Putin launches a new wave of attacks.
    1:10:48 What will Trump look like?
    1:10:51 What will Ukraine look like?
    1:10:54 What will everyone look like?
    1:10:56 Putin will just do it.
    1:10:58 And why would Putin do it?
    1:11:01 Because today, he’s afraid of Trump.
    1:11:11 But once Trump manages, for example, to do a ceasefire deal without serious security guarantees for Ukraine,
    1:11:13 he will give a pass to Putin.
    1:11:15 Not that he wants to.
    1:11:17 No, he does not want that.
    1:11:19 I believe in what he says.
    1:11:22 But he will give Putin an opportunity.
    1:11:26 Because in Putin’s head, he wants me to fight with Trump.
    1:11:30 Putin’s plan is to end the occupation of our territory.
    1:11:34 This is in his sick head.
    1:11:37 And I’m absolutely sure of this.
    1:11:44 That is why I told you, don’t wait for Putin to want to stop the war.
    1:11:50 Pressure him so that he is forced to stop the war.
    1:11:52 That’s important.
    1:11:56 It’s important to say that what you said about the children is a tragedy.
    1:11:57 War is hell.
    1:12:01 But let me say again, we must find a path to peace.
    1:12:02 There is one.
    1:12:03 What is it?
    1:12:04 There is one.
    1:12:07 Before ceasefire, strong Ukraine.
    1:12:08 Strong Ukraine’s position?
    1:12:10 Yes, we can speak about it with Trump.
    1:12:16 For me, we can speak about security guarantees.
    1:12:20 But a quick step, a quick step is NATO.
    1:12:23 A partial membership NATO.
    1:12:25 Yes, I understand.
    1:12:28 I understand Trump’s feelings about NATO.
    1:12:29 I heard him.
    1:12:32 He’s thinking through all of it, of course.
    1:12:38 But anyway, yes, NATO is a strong security guarantee for all the people for us.
    1:12:40 A part of security guarantees.
    1:12:45 The second part is the arms aid package, which we will not use.
    1:12:50 If a ceasefire works, nobody will use the weapons.
    1:12:51 For what?
    1:12:52 But it has to stay.
    1:12:57 But with all due respect to the United States and to the administration.
    1:12:58 Not like before.
    1:13:01 I don’t want the same situation like we had with Biden.
    1:13:04 I ask for sanctions now, please.
    1:13:06 And weapons now.
    1:13:08 And then we will see.
    1:13:14 If they start it again, of course, we’ll be happy if you’ll give us more and you will stand with us shoulder to shoulder.
    1:13:15 Of course, that is right.
    1:13:21 But it’s different when you have weapons.
    1:13:25 Putin wouldn’t have been able to occupy so much territory.
    1:13:28 It was very difficult for us to push him out.
    1:13:31 But we didn’t have weapons before and that is the same situation.
    1:13:33 It can be the same situation.
    1:13:35 I’m just sharing this with you.
    1:13:40 Like I said at the very beginning, I want to be very honest with you and with your audience.
    1:13:42 Yes, it’s true.
    1:13:47 If we do not have security guarantees, Putin will come again.
    1:13:52 To make it clear, let’s describe the idea that you are speaking about.
    1:13:54 I would like to offer you other ideas too.
    1:14:05 But right now, your idea is that NATO accepts Ukraine minus the five regions of Luhansk, Donetsk, Zaporizhzhia, Kherson and Crimea.
    1:14:14 Just so you understand the situation, the invitation to NATO is legislatively issued to Ukraine.
    1:14:19 So to us, all those territories are still Ukraine.
    1:14:25 But NATO so far can only act in the part that is under Ukrainian control.
    1:14:26 This can be negotiated.
    1:14:28 I am sure about that.
    1:14:32 Yes, this would not be a great success for us.
    1:14:38 But if we see a diplomatic way to end the war, this is one of the ways.
    1:14:39 So it is.
    1:14:42 Sorry, that is a start.
    1:14:47 Secondly, weapons, arms aid package.
    1:14:50 I’m not ready to discuss this publicly right now.
    1:14:55 It’s all written down and President Trump might have seen it or not, but we’ve got no secrets from him.
    1:14:56 Yes.
    1:15:06 But mostly it depends on the willingness of the United States because some of it will come from the EU, some from the United States, of course, together.
    1:15:08 So not just from the United States.
    1:15:11 No, no, no, we need unity with this package.
    1:15:14 So the package and sanctions.
    1:15:16 Yes, sanctions.
    1:15:23 But I think it’s in the interest of all the smart people to not have Russian energy on the market in general.
    1:15:25 So he has to stop it.
    1:15:27 That’s all.
    1:15:28 It’s fine.
    1:15:30 American oil, American gas is okay.
    1:15:31 Why not?
    1:15:32 And it’s cheaper.
    1:15:34 So it will be cheaper for the whole world.
    1:15:36 The money will go to the United States.
    1:15:41 And I think he will be happy and the president and your people will be happy.
    1:15:42 But it’s your decision.
    1:15:43 I’m just sharing.
    1:15:44 Yes, and cheap oil.
    1:15:48 So Putin won’t have so much money for the war.
    1:15:50 And that’s it.
    1:15:52 But this is difficult because it’s a lot.
    1:15:57 You’re saying to continue the sanctions on Russia to accept Ukraine into NATO.
    1:16:00 I need to ask you some difficult questions about this.
    1:16:01 Yes, go on.
    1:16:03 I trust and respect your words today.
    1:16:06 Many people respect and love you in America.
    1:16:08 Trump respects you.
    1:16:10 Loves me.
    1:16:12 Oh, come on now.
    1:16:15 Remember last time you corrected me when I said that you love Javier Millet?
    1:16:16 You said no, no, no.
    1:16:17 I respect him.
    1:16:20 So let’s not talk about love today.
    1:16:26 But could we talk seriously about guaranteeing Russia’s security?
    1:16:27 Okay.
    1:16:31 Can I interview you a little? The question is: what land is the war happening on?
    1:16:36 And where did it start on our soil, on our territory?
    1:16:39 International law was violated.
    1:16:42 The sovereignty of our country was violated.
    1:16:44 Civilians were killed.
    1:16:47 Tens of thousands of our people were taken hostage.
    1:16:51 And everyone will tell you this happened.
    1:16:57 This is what happened when I speak with the global south, which is trying to balance the two sides because of the history,
    1:17:05 because of their roots and because of their shared economic interests with Russia in the past.
    1:17:12 And now, of course, when you talk to them, they are speaking a little bit like you.
    1:17:19 I mean, they’re balancing a little bit, you know, yeah, a little bit in between, but we will work on it.
    1:17:20 Yeah.
    1:17:21 It’s our first meeting.
    1:17:26 During the second one, you will be more on our side, but it’s just very convincing.
    1:17:27 Very charismatic.
    1:17:28 Yeah, thank you.
    1:17:33 But when I speak with them, when I speak, it’s very important.
    1:17:45 Even with their balancing attitude towards the war, they all recognize that this is a war.
    1:17:49 This is not just internal conflict.
    1:18:04 This is a full-scale war that began, that Putin began. And all of them, all of them, if you talk to them, they say,
    1:18:17 but then they all recognize that it’s his own big mistake, Putin’s mistake, and that he’s not right.
    1:18:21 That’s why I said, no, no, he’s not right.
    1:18:22 And you have to begin from this.
    1:18:28 If you begin at the middle between Ukraine and Russia, of course, we can speak like this.
    1:18:31 You are in the middle and say, OK, what’s going on?
    1:18:32 There is a fight.
    1:18:33 Where is the fight?
    1:18:42 It’s not the fight like in Europe when Napoleon is fighting against somebody in the middle of Europe.
    1:18:47 No, this is not in the middle of somewhere of the planet, not the planet.
    1:18:49 It’s concretely on our land.
    1:18:57 So one country with one army, one person came to another.
    1:18:58 That’s it.
    1:19:00 It’s very clear.
    1:19:03 Again, I would like us to find a path to peace.
    1:19:07 So let us nevertheless try to start in the middle.
    1:19:11 What other ideas do you think there might be? You are a very intelligent person.
    1:19:15 Your Russian isn’t that good either.
    1:19:18 And I told you that this is only our first meeting.
    1:19:20 My English is not very good either.
    1:19:22 Your English is very good.
    1:19:23 Thank you.
    1:19:25 To be honest, I’m terrible at speaking in every language.
    1:19:28 Well, there are other ideas.
    1:19:29 For instance, sorry to say this.
    1:19:34 It sounds crazy, but what if both Ukraine and Russia are accepted into NATO?
    1:19:39 Putin himself spoke about Russia maybe joining NATO.
    1:19:43 What you just said is very correct.
    1:19:45 What are the guarantees for Russia?
    1:19:48 It’s not like I’m even interested what happens to them.
    1:19:53 To be honest, I don’t care what will happen to them in the future after the war ends.
    1:20:01 But these are our borders and we must understand what is going on there.
    1:20:05 Well, the NATO guarantees for Ukraine.
    1:20:09 Actually, this is also a security guarantee for the Russians.
    1:20:13 Frankly, I talked about this many times before.
    1:20:21 Sorry, I’m speaking figuratively, but as an example, if you were a father who lost his children,
    1:20:29 a grown man, a grown man, a man, an adult, and the war has ended.
    1:20:35 And he never got justice for real.
    1:20:38 For example, somebody decides to freeze support.
    1:20:39 We won’t give you anything.
    1:20:41 You can’t fight, you can’t continue.
    1:20:49 So we stop, we just stop, without any guarantees, without any support, without financing, without anything, okay.
    1:20:55 And nobody is held accountable, but the man lost his children.
    1:20:59 He will not get anything.
    1:21:01 None of the killers will be in prison.
    1:21:07 All the sanctions will be removed and he lost his children.
    1:21:10 And we have thousands of such people.
    1:21:15 Why do you think they will not go to Russia?
    1:21:21 We’ll find a way and we’ll not kill the Russian soldiers there or somebody there.
    1:21:22 Why wouldn’t they?
    1:21:23 It’s human nature.
    1:21:24 It’s not about us.
    1:21:25 It’s everyone.
    1:21:32 Read American writers; it is always like this after any war.
    1:21:37 If there is no justice for people, there must be punishment for the crime.
    1:21:39 It is only justice.
    1:21:41 How come my child was taken away?
    1:21:43 The war took him.
    1:21:45 This is very scary.
    1:21:54 And even whether it was my son who was fulfilling his constitutional duty or simply a missile that struck a civilian child.
    1:22:04 And if there is no justice and the killers are not punished, why wouldn’t these people come back with hate?
    1:22:06 They will definitely come back.
    1:22:13 So when we talk about NATO, NATO is not only stopping Russia.
    1:22:20 Do not forget NATO is stopping us too.
    1:22:23 Because there will not be justice for everyone.
    1:22:29 We know that NATO does not have the right to solve certain issues with war.
    1:22:32 NATO is a security alliance.
    1:22:35 It is protection, not brainwashing.
    1:22:39 What Putin claims, that this is offensive, is not true.
    1:22:45 NATO is a defensive alliance, a security alliance, and it is security for Russia.
    1:22:54 But unfortunately, there are many options for peace that don’t involve NATO inviting Ukraine as a member.
    1:22:58 Can you imagine security guarantees without NATO membership?
    1:23:07 For example, if America simply leaves NATO, I believe there is a high likelihood that Donald Trump would do such a thing.
    1:23:10 I think it’s very bad for NATO.
    1:23:14 That’s the end. That’s the death of NATO.
    1:23:18 It is a pity because I think that it’s a very good alliance.
    1:23:22 Maybe not everything is good there from the bureaucracy or money, etc.
    1:23:29 But overall, countries that are in NATO don’t fight.
    1:23:35 There is no war on the land of any of these NATO countries.
    1:23:36 I think that is the answer.
    1:23:40 It works or not. It works politically or militarily.
    1:23:42 I don’t know, but it works.
    1:23:48 So without Trump, without the United States of America, there will not be NATO.
    1:23:50 That is the first.
    1:23:54 So, and you say, can we imagine that?
    1:23:55 That what?
    1:23:57 That there could be security guarantees without...
    1:24:02 No, we don’t need guarantees without the United States.
    1:24:07 That’s it, because the United States is a very strong, powerful country.
    1:24:13 The United States has the final word. Of course, Putin said that it was just the Soviet Union,
    1:24:18 where, by the way, Ukraine was the second strongest republic militarily.
    1:24:23 Yes, by the way, but he, of course, always forgets about it.
    1:24:28 But during the World War II, without help of the United States,
    1:24:34 support of your troops, support of your industry, industrially, militarily,
    1:24:42 without your money, without your people, Hitler could have won.
    1:24:44 So the United States helped a lot.
    1:24:50 Of course, Europe, USSR, and of course everybody fought.
    1:24:52 Everybody did a lot.
    1:24:55 But without the United States, it could not have turned out that way.
    1:25:03 I don’t use the word success, because I think that there is no war which ends successfully.
    1:25:10 Because this was a war with seven-figure losses, heavy losses in World War II, millions of people.
    1:25:16 And that’s why without the United States, security guarantees are not possible.
    1:25:22 I mean these security guarantees which can prevent Russian aggression.
    1:25:24 Of course, we have security guarantees.
    1:25:29 Bilaterally, with some countries, financing, support of our internal military,
    1:25:34 and defending, and humanitarian issues, and demining which is very important,
    1:25:38 and helping our children in the school networks.
    1:25:40 By the way, this is a very sensitive point.
    1:25:43 How many? How many bomb shelters?
    1:25:47 How many bomb shelters we built with the partners for the children?
    1:25:52 And it’s a pity that they are underground, but can you imagine their eyes?
    1:25:56 When they came after COVID, you understand what COVID means?
    1:26:01 They had COVID and then the war, and they didn’t see each other for so many years.
    1:26:09 And when they saw each other, even underground, they were very happy and smiling.
    1:26:17 So we have such security guarantees, but it’s not enough to prevent aggression.
    1:26:21 Yes, preventive measures also work to prevent the aggression of Putin.
    1:26:27 Your English is better than my Russian. This is wonderful.
    1:26:29 I’m not sure.
    1:26:30 I’m just giving you compliments.
    1:26:31 Thank you. No, no, thank you.
    1:26:33 I’m supposed to do that kind of thing to a president.
    1:26:35 Thank you so much.
    1:26:39 Okay, once again, without NATO guarantees,
    1:26:46 I have a dream that, let’s say, on January 25, or sometime at the end of January this year,
    1:26:51 you will sit down with Donald Trump, with Vladimir Putin,
    1:26:56 and together negotiate a ceasefire with strict security guarantees.
    1:27:01 And an agreement will be signed.
    1:27:03 What will this look like without NATO?
    1:27:05 I will make it clear.
    1:27:11 And so, first of all, I think January 25 or some other day.
    1:27:14 Well, you just call it January 25.
    1:27:18 And I don’t mind. It’s my birthday.
    1:27:22 And we sit down.
    1:27:26 First of all with Trump.
    1:27:33 We agree with him on how we can stop the war, stop Putin.
    1:27:38 It is important for us to sit down with him.
    1:27:45 Secondly, it is very important for us that Europe, which is very important for us,
    1:27:51 because we are part of Europe, and not only geographically, geopolitically,
    1:27:54 but also in the European Union where we will be.
    1:27:59 For us, it is very important that Europe also has a voice.
    1:28:01 It’s the second thing.
    1:28:07 It won’t be long because Europe will be looking at us and we’ll be looking at Trump.
    1:28:13 And by the way, I now see that when I talk about something with Donald Trump,
    1:28:16 whether we meet in person or we just have a call,
    1:28:20 all the European leaders always ask, “How was it?”
    1:28:23 This shows the influence of Donald Trump.
    1:28:26 And this has never happened before.
    1:28:31 With an American president, I tell you from my experience,
    1:28:37 this also gives you confidence that he can stop this war.
    1:28:43 That is why we and Trump come first and Europe will support Ukraine’s position.
    1:28:50 Because they understand that Ukraine has every right to have its voice heard in this
    1:28:52 because we are at war.
    1:28:55 Trump and I will come to an agreement.
    1:29:04 And I am sure that he can offer strong security guarantees together with Europe.
    1:29:08 And then we can talk to the Russians.
    1:29:14 That’s right. Not just three of us sitting down at once.
    1:29:17 And you still talk to me like that,
    1:29:24 you know, as if Putin wants to sit down and talk, but Ukraine does not.
    1:29:25 This is not true.
    1:29:28 I think that, yes, he is, in fact, ready to talk.
    1:29:30 Did you talk to him?
    1:29:31 On the phone or what?
    1:29:33 How do you normally talk to him?
    1:29:36 I don’t know. Normally by the sea. The same as with you.
    1:29:39 He invites you to the sea with me. Just the three of us.
    1:29:41 No, no, one of us may drown.
    1:29:43 Who? Are you good at swimming?
    1:29:44 Yes, I am a good swimmer.
    1:29:47 You’re a good swimmer. Well…
    1:29:55 And I would like to add that if you have any contact with them, I just want to hear what happens then.
    1:30:03 I have never talked to Vladimir Putin, but I have a feeling that he is ready because Donald Trump is ready.
    1:30:06 I hope you are ready.
    1:30:09 And this is not just a feeling, but a dream.
    1:30:18 I have a dream here that the three of you will get together in a room and make peace.
    1:30:27 And I want to understand what it looks like, what security guarantees look like that would satisfy Ukraine, that would satisfy Russia.
    1:30:33 Ukraine needs security guarantees first and foremost. We are in danger.
    1:30:35 That is why they are called so.
    1:30:38 This is no joke to me.
    1:30:41 Let’s take a few steps back.
    1:30:45 Interesting.
    1:30:51 Why are security guarantees a strong position of Ukraine, strong weapons and so on so important?
    1:30:55 I will give you a little history lesson.
    1:31:00 Although I think you have prepared yourself and know everything perfectly.
    1:31:02 Well, you can correct me on that.
    1:31:09 Yes, Ukraine had security guarantees, the Budapest memorandum.
    1:31:15 Nuclear weapons are the security guarantees that Ukraine had. Ukraine had nuclear weapons.
    1:31:18 I do not want to characterize it as good or bad.
    1:31:21 Today, the fact that we do not have them is bad.
    1:31:23 Why? Because this is war.
    1:31:31 Today we are at war because you have unleashed the hands of a nuclear power.
    1:31:38 A nuclear power is fighting against us, against Ukraine and doing what it wants.
    1:31:45 By the way, even you are now talking about ceasefire, just a ceasefire.
    1:31:51 Maybe give flowers to Putin, maybe to say thank you so much for these years.
    1:31:53 That was a great part of my life.
    1:31:56 No, we are simply not ready for this.
    1:32:01 Why? The Budapest memorandum, nuclear weapons, this is what we had.
    1:32:03 Ukraine used them for protection.
    1:32:06 This does not mean that someone attacked us.
    1:32:08 That doesn’t mean that we would have used it.
    1:32:10 We had that opportunity.
    1:32:12 These were our security guarantees.
    1:32:14 Why am I talking about this in detail?
    1:32:20 Because if you take the Budapest memorandum, by the way, I discussed this with President Trump.
    1:32:23 We have not finished this conversation yet.
    1:32:26 We will continue it regarding the Budapest memorandum.
    1:32:30 The Budapest memorandum included security guarantees for Ukraine.
    1:32:33 At first, three.
    1:32:36 The most important security guarantors for Ukraine.
    1:32:43 Three strategic friends and partners of Ukraine.
    1:32:45 This was in the agreement.
    1:32:51 The United States of America, Russia, and Britain; France and China joined later.
    1:32:57 There were five states. But these are not even security guarantees.
    1:33:00 We now understand that this is not a guarantee of security.
    1:33:04 Because, on the one hand, these are security guarantees.
    1:33:09 But there was an English word, as far as I understand, assurance.
    1:33:13 It is translated as assurance.
    1:33:15 Assurance, right?
    1:33:22 In Russian, it will be an assurance.
    1:33:34 That is, give up nuclear weapons. Ukraine was under pressure from the US and Russia to give them up.
    1:33:37 These two powers were exerting pressure.
    1:33:42 These two states negotiated to ensure that Ukraine does not have nuclear weapons.
    1:33:45 They then agreed, these are the largest states.
    1:33:50 This is the nuclear five that does not even provide security guarantees.
    1:33:59 Now we just need to find these people and we just need to put in jail all of those who, frankly, invented all this.
    1:34:01 So, confidence.
    1:34:03 So, confidence.
    1:34:05 Assurance.
    1:34:11 Assurance that Ukraine would remain territorially integral, with its sovereignty intact.
    1:34:22 It was a piece of paper. If you are curious, by the way, after the occupation of part of our Donbas and Crimea,
    1:34:30 Ukraine sent letters through its diplomats three times, if I remember, three times within a few years.
    1:34:35 We sent letters to all security guarantors, to all members of the Budapest memorandum.
    1:34:37 What did they send?
    1:34:41 What was written on the piece of paper?
    1:34:43 Consultations.
    1:34:48 Ukraine holds consultations if its territorial integrity is violated.
    1:34:52 And everyone should be in consultation.
    1:34:54 Everyone must come.
    1:34:58 Everyone must meet urgently.
    1:35:04 USA, Britain, Russia, France, China.
    1:35:06 Did anyone come?
    1:35:08 You ask?
    1:35:09 No.
    1:35:14 Did anyone reply to these letters, official letters, they are all recorded by diplomats?
    1:35:16 Did anyone conduct consultations?
    1:35:17 No.
    1:35:18 And why not?
    1:35:20 They didn’t give a fuck.
    1:35:23 This is understandable in Russian, right?
    1:35:31 Just as Russia didn’t give a damn, neither did all the other security guarantors of the Budapest memorandum.
    1:35:40 None of them gave a damn about this country, these people, these security guarantees, etc.
    1:35:44 If we just take a pause, it will be another Budapest memorandum.
    1:35:52 The last time was under me, imagine after how many years, in February 2022.
    1:35:55 In February 2022, the war began.
    1:36:01 A full-scale war. Letters for consultations were sent.
    1:36:04 No one answers.
    1:36:08 Next, we are taking a break from the Budapest memorandum.
    1:36:10 The question is simple about Budapest.
    1:36:11 Can we trust this?
    1:36:12 No.
    1:36:20 Whichever of these five countries sat at the negotiating table, it was just a piece of paper.
    1:36:24 Believe me, we will save you.
    1:36:25 No.
    1:36:27 Another.
    1:36:29 This is a train.
    1:36:39 This is a train with waste paper, with security guarantees, which Ukraine has been riding for many years.
    1:36:50 The second car on this train is the Minsk Agreements, the Normandy Format and the Minsk Agreements, where it was written who the signatories were.
    1:36:53 The United States of America was no longer there.
    1:36:55 I understand that Obama was here at the time.
    1:37:04 And as far as I know, I think they were simply not interested in what happened to Ukraine, and where it was in general, where it was located, well, somewhere there.
    1:37:10 Part of something, people, well, people, and let it be, let it be with these people.
    1:37:14 The United States simply did not participate.
    1:37:21 In the Minsk Agreements, there are no claims to the U.S. because they were not guarantors.
    1:37:24 Where is the claim?
    1:37:26 A step back.
    1:37:29 2008, Bucharest.
    1:37:33 Everyone has already learned from the Budapest memorandum.
    1:37:36 Bucharest, 2008.
    1:37:38 Bucharest.
    1:37:43 Mr. Bush, President of the United States.
    1:37:48 Republican says that Ukraine should be in NATO.
    1:37:51 This is the voice of Republicans.
    1:37:52 Check it out.
    1:37:55 Ukraine should be in NATO.
    1:37:58 Everybody is looking at the U.S., always.
    1:37:59 All in favor.
    1:38:00 Who is against?
    1:38:01 Merkel.
    1:38:08 So she opposes, and she forced everyone not to give Ukraine an invitation to join NATO, because that would be a step.
    1:38:11 Seriously, Republicans were in favor.
    1:38:18 The U.S. was in favor, because Republicans and Bush were not afraid of anyone.
    1:38:23 They were not afraid of anyone, and they knew that Ukraine rightly wanted to join NATO.
    1:38:24 Ukraine chose so.
    1:38:25 And what is the question?
    1:38:27 Well, people made their choice.
    1:38:30 Well, and the Russians will not look that way.
    1:38:32 That was not the case then.
    1:38:33 Why?
    1:38:38 Because the Russians were different.
    1:38:40 Next, Minsk.
    1:38:43 We didn’t succeed.
    1:38:49 After the Minsk agreements, as I told you, hundreds of meetings were held.
    1:38:55 I have had hundreds of meetings since 2019.
    1:38:58 We could not think about a ceasefire.
    1:39:01 A ceasefire is our offer.
    1:39:04 This is not somebody’s suggestion.
    1:39:05 This is mine.
    1:39:07 I would like…
    1:39:09 I wanted it. In Ukraine,
    1:39:12 society was divided.
    1:39:13 Not everyone wanted to.
    1:39:14 Half did not want to.
    1:39:15 Half were against.
    1:39:16 Half were in favor.
    1:39:18 Some of them shouted, “Do not believe it.”
    1:39:20 Some of them shouted, “Believe it.”
    1:39:23 I am the president of Ukraine.
    1:39:29 I was given a mandate of trust by 70% of the population to take appropriate steps.
    1:39:31 And I made them.
    1:39:33 This is not a joke.
    1:39:35 We’ll just sit down, the three of us.
    1:39:38 I am simply telling you what is.
    1:39:41 This is, how can I tell you?
    1:39:47 These meetings must be serious and prepared.
    1:39:50 And prepared with those who want peace.
    1:39:52 Ukraine wants peace.
    1:39:53 US wants peace.
    1:39:55 We have to sit down with Trump.
    1:39:57 And that is 100%.
    1:39:59 First and foremost, number one.
    1:40:04 Moreover, he told me on the phone that he is waiting for us to meet.
    1:40:07 And there will be an official visit.
    1:40:11 And my visit would be the first or one of the first to him.
    1:40:13 And for him, this topic is very important.
    1:40:17 I know that he has his own matters, American issues, I understand.
    1:40:19 I heard his election program.
    1:40:27 But regarding international affairs, I think our issue is one of the most pressing issues for President Trump.
    1:40:29 Therefore, I believe very much I trust his words.
    1:40:31 And I hope we will meet again.
    1:40:34 We need to prepare.
    1:40:36 We have many plans to build on.
    1:40:37 And they exist.
    1:40:40 And they are supported by many countries.
    1:40:43 But we need his vision.
    1:40:46 He needs to look at all these details.
    1:40:48 But his vision, please.
    1:40:50 Because he can stop Putin.
    1:40:53 Because Putin is afraid of him.
    1:40:55 That’s a fact.
    1:40:59 But Trump is a president of a democratic country.
    1:41:01 And he does not come for life.
    1:41:03 He is not Putin.
    1:41:05 He will not come for 25 years.
    1:41:08 He will come for his term.
    1:41:10 Please tell me.
    1:41:14 Well, for example, he came for four years.
    1:41:20 And for the fifth year, Putin came with a war.
    1:41:24 Will it make Trump feel better that there was no war during his time?
    1:41:27 And that Ukraine was destroyed after him?
    1:41:29 Why destroyed?
    1:41:31 Putin is whatever you call him.
    1:41:33 A killer, whatever, but not a fool.
    1:41:37 He will be prepared.
    1:41:39 He knows all mistakes.
    1:41:43 He understands how we defeated his army after the invasion began.
    1:41:46 He realized that this was not a Soviet war.
    1:41:48 And that this would not happen with us.
    1:41:49 He will prepare.
    1:41:52 He will put everything into arms production.
    1:41:54 He will have lots of weapons.
    1:41:56 And there will be a very large army.
    1:42:02 And you think that after such humiliation, four years without a war,
    1:42:04 when he did not finish us,
    1:42:08 he will return and fight only against Ukraine?
    1:42:11 He will destroy everything around.
    1:42:15 And if you say there is a risk that Trump, President Trump,
    1:42:17 will withdraw from NATO, for example.
    1:42:19 This is a decision of the United States.
    1:42:23 I’m simply saying that if that happens, Putin will destroy Europe.
    1:42:26 Calculate the size of the armies in Europe.
    1:42:29 It’s just that I say it for a reason.
    1:42:31 Do the calculation.
    1:42:33 Why did Hitler conquer all of Europe then?
    1:42:35 Almost.
    1:42:40 Just count, remember, his armies of millions.
    1:42:42 Calculate what Europe has.
    1:42:44 What are the largest armies?
    1:42:46 We have the largest army.
    1:42:50 The Ukrainian army is the largest in Europe.
    1:42:55 The second largest after ours is four times smaller than us.
    1:42:56 France?
    1:42:58 Yes, 200,000.
    1:43:01 I think the French have about 200,000.
    1:43:04 We have 980,000.
    1:43:06 So this powerful coalition of European nations?
    1:43:08 That will not be enough.
    1:43:10 Yes, it’s not going to be enough.
    1:43:12 But you’re a smart man, there’s a lot of ideas.
    1:43:16 Partnerships with Global South, India,
    1:43:18 Middle East, Saudi Arabia,
    1:43:22 economic partnerships, political partnerships.
    1:43:24 It all protects you.
    1:43:26 First of all, look at one example.
    1:43:31 North Korea.
    1:43:33 Just look at this example.
    1:43:40 12,000 have arrived.
    1:43:48 Today, 3,800 killed or wounded.
    1:43:56 They can bring more, 30,000, 40,000,
    1:44:02 or maybe 500.
    1:44:04 They can bring many people.
    1:44:05 Why?
    1:44:09 Because they have order, autocracy and everything.
    1:44:12 Can Europe bring people together?
    1:44:14 No.
    1:44:19 Will Europe be able to build an army consisting of 2 to 3 million people?
    1:44:21 No, Europe will not want to do this.
    1:44:22 And for what?
    1:44:25 We definitely don’t want a world war with you.
    1:44:27 There is no such purpose.
    1:44:30 There is no such purpose as gathering everyone.
    1:44:31 We do not want any war.
    1:44:36 We want to stop the Russians and they invite North Korean soldiers.
    1:44:41 Invited.
    1:44:44 Their faces are burned.
    1:44:47 They themselves burn their faces.
    1:44:51 Those who cannot escape, injured or killed.
    1:44:52 There’s a video.
    1:44:56 Everything I’m telling you, there is evidence of this.
    1:45:01 So that they are not recognizable, right?
    1:45:05 It means, what does it mean?
    1:45:08 It’s outside of the values that Europe shares.
    1:45:10 Europe counts.
    1:45:15 It means that those guys, they don’t count.
    1:45:17 It’s count, yes?
    1:45:19 They don’t count the number of people.
    1:45:21 That is the answer.
    1:45:22 Can they move more?
    1:45:23 Yes.
    1:45:25 Can they move dozens of thousands?
    1:45:27 Yes, because we see what they have.
    1:45:35 Last year, for example, Europe gave us one million artillery rounds.
    1:45:41 We produced a lot ourselves, but they gave us the initiative.
    1:45:42 It was an initiative.
    1:45:52 One million artillery rounds, 155mm and so on.
    1:46:03 We produced more, but North Korea gave Putin 3.7 million, just gave them to him.
    1:46:05 So he also has a deficit for today.
    1:46:07 It means he needs what?
    1:46:08 He needs time.
    1:46:16 But the number of soldiers and the number of artillery rounds is not everything.
    1:46:23 As you have said, let’s say Donald Trump guarantees security for four years.
    1:46:36 You can form partnerships with India, with Saudi Arabia that enforce punishment, the stick, on oil prices, for example, if any aggressive action is taken.
    1:46:45 You can actually even build, I’ve met a lot of incredible Ukrainian tech people, IT people.
    1:46:51 You can build great companies that form partnerships with the United States, that form partnerships with China.
    1:46:59 And that is a big leverage against the aggression of however many million artillery rounds.
    1:47:01 And that is a sheet of paper.
    1:47:03 You don’t need a sheet of paper for protection.
    1:47:06 Ah, that’s you.
    1:47:09 Well, when you speak.
    1:47:10 In English.
    1:47:11 In English, yeah.
    1:47:18 You don’t even need answers, because now, as we’re talking, you already answered all the questions.
    1:47:28 The first one is that during this time, you need cooperation, a lot of money for this military industry.
    1:47:38 In Ukraine or in Europe, with India, Saudi Arabia, and the United States, you need a lot of money.
    1:47:40 So the question is where you will get it.
    1:47:42 So my answer was to Trump.
    1:47:45 I said, this is one of the security guarantees.
    1:47:50 Take 300 billion of frozen Russian assets.
    1:47:51 We will take it.
    1:47:57 Take the money we need for our domestic production, and we will buy all the weapons from the United States.
    1:48:00 We don’t need gifts from the United States.
    1:48:03 It will be very good for your industry.
    1:48:11 For the United States, we will put money there, Russian money, not Ukrainian, not European, Russian money, Russian assets.
    1:48:13 They have to pay for this.
    1:48:15 We will put it and we will make it.
    1:48:17 This is one of security guarantees.
    1:48:20 Yes, of course, because this is a military guarantee.
    1:48:21 Yes.
    1:48:32 But then the second thing you said: energy prices, a lot of sanctions on products and the Russian shadow fleet, and so on.
    1:48:35 That is the second answer we spoke about before.
    1:48:37 Yes, put more sanctions on them.
    1:48:39 More sanctions.
    1:48:43 It’s okay not to take off sanctions.
    1:48:47 It’s okay with you, but it’s not going to be okay with the president of Russia.
    1:48:50 Yes, but I’m not thinking about how to make it very good for him.
    1:48:52 He’s still a killer.
    1:48:57 I understand, but unfortunately the reality is that a compromise is needed in order to reach an agreement.
    1:49:04 So in your understanding, the fact that he is not jailed after all the murders, he is not jailed despite all the murders,
    1:49:12 and no one in the world is able to put him in his place, send him to prison, do you think this is a small compromise?
    1:49:17 This is not a small compromise, and to forgive him will not be a small compromise.
    1:49:19 To forgive, no one will forgive.
    1:49:22 This is absolutely impossible to forgive him.
    1:49:25 We cannot get into the head and soul of a person who lost their family.
    1:49:28 Nobody will ever accept this.
    1:49:30 Absolutely impossible.
    1:49:32 I don’t know, do you have children?
    1:49:34 No, not yet, but I would like to.
    1:49:36 Yes, God bless.
    1:49:38 And this is the most important thing in life.
    1:49:42 And if they simply took away the most precious thing from you, will you ask
    1:49:46 who ruined your life before going to rip their head off?
    1:49:48 I’m just curious, they took your child away.
    1:49:50 Are you going to ask who did this?
    1:49:52 And they will answer that some dude did it.
    1:49:54 You will say, oh well, then there are no questions.
    1:49:59 No, no, no, you will go to fucking hell and bite their head off.
    1:50:02 And it will be fair.
    1:50:04 Can murderers be forgiven?
    1:50:09 That’s why you make security guarantees.
    1:50:14 What I told you, for those who are here, and what we control, and what will not happen.
    1:50:19 And that those who lost, we will never forget.
    1:50:21 And a matter of time.
    1:50:26 But if you gave us NATO, I just said, this means that after a while,
    1:50:32 everything I said about NATO, after a while, Ukraine will not go against Russia,
    1:50:36 and Russia will not go against Ukraine, because you are in NATO.
    1:50:38 I am just saying, isn’t that a compromise?
    1:50:40 So NATO is a compromise.
    1:50:45 This is not just a security guarantee, in my opinion.
    1:50:51 Look, when rockets were attacking Israel, and Israel is not in NATO.
    1:50:56 NATO countries, aircrafts were deployed.
    1:50:58 Air defense.
    1:51:01 The air defense worked.
    1:51:06 Operated by different Middle Eastern countries.
    1:51:10 These are also security guarantees.
    1:51:16 And, by the way, Israel has nuclear weapons.
    1:51:21 So why do they need NATO, when in fact they have more than NATO has?
    1:51:26 The American, British, and French aviation stepped in.
    1:51:27 There was air defense.
    1:51:32 I don’t remember, from Jordan.
    1:51:35 Listen, thousands of missiles were shot down that way.
    1:51:38 This is, what is this?
    1:51:40 So it’s a guarantee of safety.
    1:51:44 It’s just that it’s not called NATO.
    1:51:48 Is some Uncle Vova irritated by the word NATO?
    1:51:51 There’s a problem with the word?
    1:51:56 And I think he’s simply irritated by people who are alive and living here.
    1:52:00 If you believe this, it will be very difficult to negotiate.
    1:52:04 If you believe that the president of a country is completely crazy,
    1:52:07 it is really hard to come to an agreement with him.
    1:52:12 You have to look at him as a serious person who loves his country
    1:52:14 and loves the people in his country.
    1:52:16 And he conducts, yes, destructive military actions.
    1:52:19 Who are you talking about now, who loves his country?
    1:52:20 Putin.
    1:52:22 Do you think he doesn’t love his country?
    1:52:23 No.
    1:52:25 What is his country?
    1:52:27 He happened to consider Ukraine his country.
    1:52:28 What is his country?
    1:52:29 Explain it.
    1:52:31 Tomorrow he will say that it’s America.
    1:52:33 No pity for the Chechens?
    1:52:36 Do they look like Russians?
    1:52:39 Do they speak Russian?
    1:52:41 Of course.
    1:52:45 Of course they learn in schools like anywhere there’s been Russification.
    1:52:47 Who are the Chechens?
    1:52:50 A different people.
    1:52:52 Another faith.
    1:52:54 Other people.
    1:52:56 Another language.
    1:52:58 A million.
    1:53:01 Eliminated.
    1:53:03 And eliminated how?
    1:53:05 How did he kill them?
    1:53:06 With love?
    1:53:07 I know, fuck.
    1:53:08 By hugging.
    1:53:11 In Ukrainian, as we say,
    1:53:13 strangling by hugging.
    1:53:14 I love you so, so much.
    1:53:17 I love you so much that I want to kill you.
    1:53:18 That’s his love.
    1:53:20 And that’s not love.
    1:53:22 You’re mistaken.
    1:53:24 He does not love his people.
    1:53:25 He loves his inner circle.
    1:53:27 It’s only a small part of the people.
    1:53:31 He doesn’t love them.
    1:53:33 Why, I’ll explain.
    1:53:39 You cannot send your people to another land.
    1:53:45 To die knowing that they will die.
    1:53:46 Children.
    1:53:48 My daughter.
    1:53:49 My daughter.
    1:53:53 She is 20 years old.
    1:53:55 For me, this is a child.
    1:53:57 She is already an adult.
    1:53:59 Of course.
    1:54:01 But she is a child.
    1:54:05 The boys he sends are 18 years old.
    1:54:07 18 years old.
    1:54:08 They are children.
    1:54:10 He sends them.
    1:54:13 It’s not that fascists came to his land
    1:54:16 and he needs to defend it.
    1:54:20 He came to ours and he sent them.
    1:54:22 Chechnya, he sent them.
    1:54:24 Syria, he sent them.
    1:54:26 Africa, he sent them.
    1:54:28 Georgia, he sent them.
    1:54:32 Moldova, Transnistria, that was before him.
    1:54:34 Fine, we can leave that aside.
    1:54:36 He has enough sins of his own.
    1:54:43 And, and then there’s Ukraine, the largest part.
    1:54:57 780,000, 788,000 killed or wounded Russians.
    1:54:59 He calls them all Russians.
    1:55:02 Even those who don’t know, who don’t know how to speak Russian.
    1:55:05 On his territory of Russia.
    1:55:07 Everything they’ve enslaved.
    1:55:08 Yes.
    1:55:10 Proud Varangians.
    1:55:12 So I wonder, is that love?
    1:55:13 What love is this?
    1:55:15 And for what?
    1:55:16 Does he love his people?
    1:55:17 No.
    1:55:19 Does he love his land?
    1:55:22 His country is bigger than America.
    1:55:24 How much land do you need?
    1:55:25 America is huge.
    1:55:28 America is simply an outstanding country.
    1:55:31 Outstanding country.
    1:55:34 Russia is bigger.
    1:55:37 Well, just bigger.
    1:55:39 So, so ask yourself.
    1:55:41 Does he love them?
    1:55:42 What is he doing?
    1:55:45 And what does he love?
    1:55:47 Do you think he’s been everywhere?
    1:55:49 In his Russia?
    1:55:51 It’s impossible to get around it.
    1:55:53 He hasn’t been everywhere.
    1:55:54 He just hasn’t.
    1:55:57 Well, I believe that Donald Trump loves America.
    1:56:01 And I don’t think he has been to every single American city.
    1:56:02 No, no, no.
    1:56:04 I saw his rallies.
    1:56:05 So many rallies.
    1:56:07 No, no, let’s, let’s be honest.
    1:56:08 Let’s be honest.
    1:56:11 He held them, and I saw it, and it’s very difficult.
    1:56:13 He’s not, I mean, he’s not 18.
    1:56:15 Yes, but he’s strong.
    1:56:17 And this is his will.
    1:56:20 Everywhere where the war is.
    1:56:22 I’m sure.
    1:56:25 I pray to God it never will be on your land.
    1:56:26 Yes.
    1:56:28 And I’m sure that it will not be.
    1:56:32 But I’m sure that if you have problems in some region,
    1:56:37 how to say, an earthquake, a hurricane, you have it all.
    1:56:43 Well, I’m sure that President Trump would be there.
    1:56:45 After one day, two or three days,
    1:56:47 I don’t know the security details of all these things,
    1:56:48 but he will be there.
    1:56:51 Otherwise, how will people look at him?
    1:56:53 Yes, of course he will.
    1:56:54 Of course.
    1:56:56 The same about me.
    1:56:58 I’m not comparing myself with him.
    1:57:00 I just go where it is difficult for people.
    1:57:01 I have to come.
    1:57:06 The question, the next question is very simple.
    1:57:08 Region.
    1:57:11 Kursk region.
    1:57:14 The operation there.
    1:57:21 Did Putin, was Putin in Kursk during those four months?
    1:57:22 No.
    1:57:25 Listen, I have tremendous respect for you.
    1:57:27 Admiration for many reasons.
    1:57:30 One of which is you stayed in Kiev.
    1:57:35 And another one is that you visit the front and you talk to the soldiers
    1:57:38 in the front and you talk to people all across Ukraine.
    1:57:39 Absolutely.
    1:57:41 Tremendous respect for that.
    1:57:46 And not enough people say that, you know,
    1:57:50 I had a conversation with Tucker Carlson, for example.
    1:57:53 And, you know, I said that you’re a hero for staying in Kiev.
    1:57:58 And he said, well, he just did a thing that every leader should do.
    1:58:02 But I think not enough leaders do the thing that every leader should do.
    1:58:04 So tremendous respect.
    1:58:06 And I agree with you totally.
    1:58:10 Yes, a leader should go to the, to the front of a war.
    1:58:15 You know, that said, America has waged wars all across the world.
    1:58:22 The wars in, you know, Afghanistan and Iraq cost $9 trillion
    1:58:27 and killed over a million people.
    1:58:35 War is hell and just because war is waged in terrible ways that it is,
    1:58:38 does not mean the leader does not love their country.
    1:58:40 But I take your point.
    1:58:46 I once again have a dream that even if there’s hate that you sit down
    1:58:53 with Donald Trump and Vladimir Putin and you find a way to peace.
    1:58:55 Let me ask you a question.
    1:58:56 What do you think?
    1:59:00 Will there ever be a day when the Ukrainian people forgive the Russian people
    1:59:07 and both peoples will travel back and forth again and marry each other?
    1:59:09 Rekindle and form friendships.
    1:59:11 Will there be such a time in the future?
    1:59:15 I think history has long answered this question.
    1:59:18 I don’t know how it will be for us.
    1:59:22 It will be in the future without a doubt.
    1:59:24 History has shown this time and again.
    1:59:28 After every devastating war,
    1:59:43 one generation, one country recognizes that it is, was an aggressor
    1:59:49 and it comes to realize this is impossible to forgive.
    1:59:53 This is precisely the kind of education they’ve had in Germany
    1:59:57 for many years, even though these children had nothing to do with it.
    2:00:03 It was their grandfathers who participated and not all of them were participants
    2:00:09 of Nazi Germany’s war against, essentially against the world.
    2:00:12 Yes, and against life.
    2:00:17 And therefore they’re still apologizing.
    2:00:19 Apologizing is not easy.
    2:00:22 They know that they were the aggressors.
    2:00:27 If they were guilty, they do not look for compromise in history.
    2:00:30 Compromise in itself buys time.
    2:00:32 And they understand this.
    2:00:39 There are convicted murderers condemned both historically and by their own people.
    2:00:45 Reparations have been paid and security guarantees have been established, by the way,
    2:00:47 and all this is done.
    2:00:51 And when all this is done and recognized in any case,
    2:00:55 people develop relations with each other.
    2:00:56 That’s clear.
    2:01:03 But it can only happen the way it always has, always has in history.
    2:01:06 Russia will have to apologize.
    2:01:07 It will.
    2:01:10 This will happen because they are guilty.
    2:01:11 They are guilty.
    2:01:14 And as I told you, the guilty are different.
    2:01:19 Both those who participated and those who remain silent
    2:01:30 because silence is also about participating, in my opinion.
    2:01:32 Can I ask about Donald Trump?
    2:01:37 We’ve already mentioned him a lot, but let’s focus there.
    2:01:38 What do you admire?
    2:01:41 What do you respect about Donald Trump?
    2:01:51 And also, why do you think he won the election overwhelmingly in 2024, that the American people chose him?
    2:01:53 He was stronger.
    2:02:00 He was much stronger than Kamala Harris, Biden first, and then Kamala Harris, yes?
    2:02:07 He showed that he can intellectually and physically.
    2:02:13 It was an important point to show that if you want to have a strong country, you have to be strong.
    2:02:14 And he was strong.
    2:02:18 And this number of rallies, what I said is not a simple thing.
    2:02:19 He showed that he can.
    2:02:21 He is strong.
    2:02:27 So there aren’t any questions with his, I mean, his age and so on.
    2:02:28 Nothing.
    2:02:29 He is young.
    2:02:32 He is young here and his brains work.
    2:02:35 So I think it’s important, very important.
    2:02:38 And of course, a lot of internal questions.
    2:02:41 I understand, the prices and so on.
    2:02:44 Economic questions and other questions.
    2:02:47 You have questions about other things.
    2:02:48 Immigration, yeah.
    2:02:50 A lot of things.
    2:02:51 I understand.
    2:02:56 So maybe he answered on those questions which people had.
    2:02:58 One of the questions.
    2:03:00 That he will finish the war.
    2:03:01 That he will finish the war.
    2:03:04 Yeah, for me, this is the main question.
    2:03:08 But I said that for him, he’s the president of the United States.
    2:03:12 For him, his priority is his questions in the United States.
    2:03:14 And I understand and I respect it.
    2:03:19 But the second he was speaking about the world, yes, he said that he will finish the war.
    2:03:30 And I hope very much because I think that our people really support his idea.
    2:03:32 That’s why I said, for me,
    2:03:40 it’s very, very important to have enough people around him,
    2:03:44 who will have connections with him about the right things.
    2:03:48 For me, the truth is the right thing.
    2:03:51 What’s going on really in the battlefield?
    2:03:56 What’s going on really with Putin and Russia?
    2:03:58 What he really wants.
    2:04:00 And that is just to have it.
    2:04:07 You know, before any decision, you have to be at the same level of information.
    2:04:13 And we need, really, we need him to know everything from us.
    2:04:14 From you.
    2:04:17 From people in Ukraine.
    2:04:21 From people around who are really afraid.
    2:04:26 Afraid that Putin doesn’t want to stop the war.
    2:04:30 Afraid that he will come back with his aggression.
    2:04:45 So first of all, I should mention that our conversation today will be translated and dubbed into Ukrainian, English, Russian, other languages, Spanish.
    2:04:51 So it will be in your voice. There are great guys originally from Poland.
    2:04:53 It’s a company called Eleven Labs.
    2:04:56 They’ve trained an AI.
    2:05:00 Artificial intelligence sounds truly remarkable in your voice.
    2:05:02 You have the freedom to speak in any language you choose.
    2:05:07 But no matter what, you will always find yourself returning to speaking in Ukrainian.
    2:05:11 That is, when you talk about Donald Trump, you can do it in Ukrainian or Russian.
    2:05:12 Everybody understands.
    2:05:14 Everybody understands.
    2:05:22 But you said that there’s some things about the war that maybe Americans don’t understand.
    2:05:24 So we talked about Putin.
    2:05:27 We talked about the security guarantees.
    2:05:35 But the reality of war, what’s happening on the ground, what do you think that people should understand?
    2:05:39 First of all, they have to understand the idea of Putin’s war.
    2:05:42 It is very important for him.
    2:05:45 I consider this process.
    2:05:52 I think it is very important for him not to give Ukraine independence.
    2:05:56 To prevent Ukraine from developing as an independent country.
    2:06:02 For him, influence, influence on Ukraine cannot be lost.
    2:06:15 And for him, it is like, I think for him, this is such a goal in this last mile.
    2:06:29 And certainly for him, the last mile of his political life.
    2:06:32 And I think that this is the goal for him.
    2:06:39 The second story, I do not want to talk about these banalities, that he wants to return
    2:06:43 all the territories of the Soviet Union, influence over them.
    2:06:45 He does this little by little.
    2:06:48 I just don’t want to, people need to know details.
    2:06:54 For example, Georgia, which was headed towards the EU and NATO, completely turns towards Russia,
    2:06:59 regardless of the fact that they have frozen conflicts.
    2:07:04 They have in Abkhazia what we have with Donbass, which is controlled by militant rebels.
    2:07:08 Abkhazia is not developing, it’s just a part, a very beautiful part of Georgia.
    2:07:10 That has died.
    2:07:13 And if you have the opportunity, then go there someday.
    2:07:16 You will understand it simply died because Putin wanted to.
    2:07:22 He wanted not to allow them to develop, because a frozen conflict means that you will not be accepted into
    2:07:26 the EU, and certainly will not be accepted into NATO, because right now, yes,
    2:07:29 they do not take you because of a frozen conflict.
    2:07:31 And this is what Putin did.
    2:07:33 It’s very important for him not to lose this influence.
    2:07:39 That is, he turned back Georgia, young people, students, everyone leaves, and this is a fact.
    2:07:45 Georgia is quite small and they will leave, they want to live in Europe, they want to develop.
    2:07:49 Somebody in the United States, somebody in Europe, somebody in the EU, somebody in Britain.
    2:07:55 He will now fight for the Moldovan parliament.
    2:07:57 This is his second step.
    2:08:00 You will see in April what happens.
    2:08:04 You will see, oh, he will start turning Moldova away from Europe.
    2:08:07 Although they want to go there, he does not care.
    2:08:16 There will be a pro-Russian party and they will do something with the current president because she has won the elections.
    2:08:20 She is pro-European, but he will turn this back.
    2:08:24 The next steps are completely clear.
    2:08:32 He will do everything wherever he has lost influence, where there was influence, influence of the Soviet Union.
    2:08:38 He’ll turn it back as much as possible and we understand at what price you have seen Syria.
    2:08:43 You saw these tortures, what we saw in Bucha, what we saw everywhere we came
    2:08:46 and where our territories were occupied.
    2:08:50 In Syria, the same happened, there were a thousand people there
    2:08:54 and you have seen it, scientists were found, doctors were found.
    2:09:01 It is clear that any people are capable of generating their own opinion.
    2:09:05 Show their skills, develop society.
    2:09:11 Everyone who can express an opinion, everyone who can shape the independence
    2:09:17 and maturity of society such people are not needed and he wants this in Ukraine.
    2:09:26 And therefore, everyone should understand that Ukraine is like a large wall
    2:09:33 for that Europe. And, God willing, President Trump does not withdraw from NATO.
    2:09:37 Because again, I believe that this is the biggest risk.
    2:09:48 I think two steps, the two steps that Putin would like to see, are a weak NATO,
    2:09:57 and this means without Trump, and a weak Ukraine which cannot survive on the battlefield,
    2:10:04 simply cannot survive, and preventing me from building a strong relationship with Trump.
    2:10:13 I think these two steps, leaving NATO and Ukraine’s weakness, will lead to a large-scale war
    2:10:19 which Putin will wage on all the territories of that Europe.
    2:10:26 Post-Soviet Europe, I mean Soviet Europe, not post-Soviet, but post-World War II period.
    2:10:35 That is Soviet Europe, Soviet-era Europe in order to completely control everything there.
    2:10:42 This is what he will do and besides this, this will happen in any case.
    2:10:52 Even if the US is thinking about leaving NATO, this war will affect the United States
    2:10:55 because North Korea is the first sign.
    2:11:02 North Korean skills, North Korean knowledge which they are now gaining from this war.
    2:11:10 These include mastering new technologies, large-scale drones, missiles, how it works,
    2:11:15 the kind of technological war we have today, cyber war, etc.
    2:11:24 All these skills North Korea will bring home and scale up in that region, and this will be a risk for the Pacific region.
    2:11:33 Security first and foremost, for Japan and for South Korea, they will face these risks 100%
    2:11:41 and it will be clear that Taiwan will also have to face them.
    2:11:51 Without this, it is impossible. This is already happening. This is already happening.
    2:12:03 Therefore, I think that President Trump has all power to stop Putin and give Ukraine strong security guarantees.
    2:12:07 We’ve been talking for two hours without a pause. Do you want to take a break?
    2:12:13 Yes, we will make a pause. We can have coffee, right? Coffee?
    2:12:16 Let’s do it.
    2:12:22 And give the interpreter some water.
    2:12:24 We’ll keep switching languages.
    2:12:28 Like a dragon, you know? Three heads, three translators.
    2:12:38 So one of the difficult decisions you had to make when the war began is to enact martial law.
    2:12:44 So when you won the presidency, you were the warrior for freedom.
    2:12:54 In fact, this war is for freedom, for freedom of the individual, freedom of speech, freedom of religion, freedom.
    2:13:02 But a lot of freedoms had to be curtailed, sacrificed in this fight, because there’s so much focus on the war.
    2:13:14 Do you feel the tension of that, the sacrifice that had to be made in democracy, in freedom, in fighting this war?
    2:13:19 In any case, this war is for our freedom.
    2:13:33 Generally speaking, to be honest, when you understand, over time, when the war passes, you understand that your main values are at home.
    2:13:41 This is your home, your children, your love, God willing, parents are alive.
    2:13:55 And if, and if not alive, then their memory, visiting their grave, choosing how to work, how much, preferably choosing where to work.
    2:13:57 All this is freedom.
    2:14:01 Freedoms are not just a desire, they are an opportunity.
    2:14:08 In any case, you are right because war is a limitation of opportunities.
    2:14:19 In any case, you fight for these opportunities, your parents, your parents and God gave you life, right?
    2:14:22 You fight for your life, your life.
    2:14:28 But we need to understand that first there is a war and then martial law is introduced.
    2:14:32 Martial law is not introduced because someone wanted to.
    2:14:36 You say this is not Pinochet, this is not Pinochet and so on.
    2:14:38 This is a completely different story.
    2:14:49 An aggressor came and according to your legislation, if the border is violated, if there is armed aggression, you have all this written down long ago, written out in legislation.
    2:14:59 You introduce martial law and the introduction of martial law everywhere at all times means, in any case, a restriction of opportunities.
    2:15:06 If opportunities are limited, rights and freedoms are restricted, therefore the war itself restricts rights and freedoms.
    2:15:10 Yes, and you can’t do anything about it.
    2:15:18 We try honestly to balance as much as possible.
    2:15:29 I believe that the business sector works despite the difficulties of the war and we do everything somewhere, you know, there somewhere to reduce some load.
    2:15:34 Unfortunately, we cannot reduce taxes.
    2:15:38 On the contrary, military tax is used for war.
    2:15:40 You need to take money somewhere.
    2:15:47 This, by the way, is about the fact, the fact that the US gave us a lot and Europe too.
    2:15:53 But compared to how much we needed for the war, this is not all.
    2:16:00 As for military salaries, you know, you know that we could not pay the salaries of a million strong army.
    2:16:03 We could not pay it using the money from our partners.
    2:16:05 These are all expenses.
    2:16:12 This is all the money that the country and people have accumulated.
    2:16:14 You can’t do anything.
    2:16:15 I really want to reduce taxes.
    2:16:18 I will tell you frankly, I really want to.
    2:16:26 Well, I think that the whole new tax system, new deregulation, new steps, new reforms, all this will be after the war.
    2:16:30 Although there is something to brag about, this is proof.
    2:16:36 And this is a document.
    2:16:44 Because if you want to get a candidacy for the European Union, you must implement the appropriate number of reforms.
    2:16:46 We do everything.
    2:16:55 During the war, we voted for many reforms, including anti-corruption, banking reforms, land reforms, major reforms.
    2:17:00 We started a large privatization and the war did not stop us.
    2:17:04 Yes, it slowed down, but we went through a lot.
    2:17:07 When do you think you will hold elections?
    2:17:13 Because for people who don’t know, as part of the martial law, elections were suspended and they were delayed and delayed and delayed.
    2:17:19 And I think the next sort of plan is in February of 2025.
    2:17:24 But when do you think there will be presidential elections in Ukraine?
    2:17:28 Elections were postponed once.
    2:17:30 They were not delayed to be clear.
    2:17:33 Elections did not take place in 2024.
    2:17:37 That year, first of all, we need to understand the Constitution.
    2:17:42 They were scheduled to be held in the spring of 2024.
    2:17:48 Due to martial law, under the Constitution, you cannot do this.
    2:17:51 These are the presidential elections.
    2:18:01 The parliamentary elections did not take place in the fall of 2024, according to the Constitution.
    2:18:03 Yes, there are security things.
    2:18:06 There is the Constitution, but there are security things.
    2:18:14 That is, everyone in Ukraine understands that this cannot be done until the war is over or legislation needs to be changed.
    2:18:19 I believe that elections will take place immediately after the end of martial law.
    2:18:21 This is according to the law.
    2:18:29 Or members of the parliament need to get together and change legislation, which will be very difficult to do.
    2:18:31 Because society is against it.
    2:18:33 Why is society against it?
    2:18:38 It is understandable why.
    2:18:42 Because we want elections that we want to trust.
    2:18:47 8.5 million people went abroad.
    2:18:53 The infrastructure needs to be created for these millions of people to vote.
    2:18:56 Millions of people in the occupied territories.
    2:19:00 I’m not even talking about the occupation of 2014.
    2:19:03 I’m talking about the occupation right now.
    2:19:05 What to do with these people?
    2:19:08 This is a difficult question.
    2:19:14 And one of the most unfair ones is how to vote without having a million soldiers.
    2:19:19 That is, it is impossible.
    2:19:25 We need to think about how to change the system if the elections are held in times of war.
    2:19:30 Change the legislation, which should include changes to the voting system.
    2:19:32 To think about online voting.
    2:19:41 Everyone is afraid because of certain attacks, like cyber attacks and so on.
    2:19:43 But we need to think about it.
    2:19:50 I really think that it’s possible that we can end the war in 2025.
    2:19:51 In January.
    2:19:53 We’ve already agreed on it.
    2:19:55 I would very much like to.
    2:19:56 I would very much like to.
    2:19:58 After the war?
    2:20:00 And immediately.
    2:20:02 Yes, immediately.
    2:20:04 In the year of the end of the war, it’s a fact.
    2:20:05 Why?
    2:20:12 Because when martial law ends, you can immediately vote in parliament to hold elections.
    2:20:16 And then everyone, everyone will vote.
    2:20:19 Because there are no restrictive measures.
    2:20:23 And after they vote, I think elections can be held in 90 days.
    2:20:26 Something, something like that.
    2:20:27 Yes.
    2:20:33 And this means that immediately after the end of the war, elections may take place in 90 days.
    2:20:35 Are you running?
    2:20:37 For reelection?
    2:20:38 Even I don’t know, really.
    2:20:39 I don’t know.
    2:20:40 I don’t know.
    2:20:43 It is a very difficult question.
    2:20:47 It depends on how this war will finish.
    2:20:52 It depends on what people will want.
    2:20:56 Mostly it depends on people.
    2:21:00 First of all, and of course, my family.
    2:21:06 We had no time to speak about it with my family.
    2:21:11 And of course, didn’t have a chance because we don’t think about it now.
    2:21:13 I mean, it’s something, you know.
    2:21:22 There are a lot of some, not a lot of, but enough voices in Ukraine from politicians, opposition and etc.
    2:21:23 About this.
    2:21:24 Yes.
    2:21:33 But we don’t think really seriously, didn’t think seriously with my family about it.
    2:21:35 So this is war.
    2:21:37 I mean, how to think about what we’ll be after.
    2:21:41 It’s very difficult, really very difficult.
    2:21:49 If we look at the field of candidates, maybe you can give your opinion about the set of ideas you see out there,
    2:21:52 including your own about the future of Ukraine.
    2:22:00 As I understand, the candidates include Poroshenko, Zaluzhnyi, Arestovych, Budanov, Klitschko, many others.
    2:22:03 This is the internet speaking to me.
    2:22:06 What do you think of the space of ideas that these candidates represent?
    2:22:08 You know, I think it can be.
    2:22:11 There can be even a bigger number of candidates.
    2:22:14 I don’t really know what will be.
    2:22:17 They have rights to participate if they want to.
    2:22:18 Yes.
    2:22:25 If they really want to and can, they can go and do what they want, honestly.
    2:22:27 Most important is what are they doing now?
    2:22:37 I think that all these people are famous Ukrainian people and it’s important for them to do everything they can today,
    2:22:40 not begin any election campaign.
    2:22:46 I think this is what can divide our people, to have elections, you know, during the war.
    2:22:52 I mean, to make these steps, speak about elections a lot, you know, make a big mess about it.
    2:22:54 I think this is not right.
    2:22:57 That’s why I’m not agreeing with some of these people.
    2:23:05 But they can and I think that they can and maybe some of them will and it’s okay.
    2:23:06 It’s normal.
    2:23:08 It’s very normal.
    2:23:11 Our system differs from the system in the United States.
    2:23:14 You have two parties and the parties decide who will be the leader.
    2:23:17 And in Ukraine, everybody can participate.
    2:23:20 Let them.
    2:23:23 You think you’re going to win the debate?
    2:23:29 You versus Zaluzhnyi, Poroshenko, or Arestovych, if you decide to run.
    2:23:31 Do you think you’re going to win the debate?
    2:23:33 Or you’re again focused on the war?
    2:23:35 Oh, I’m really focusing on the war.
    2:23:36 I understand.
    2:23:46 I think the most difficult debate is what will be brought to the table and we spoke about it.
    2:23:49 It will be during the war, how to finish the war.
    2:23:55 I think that is my goal because it will be one of my most complicated debates.
    2:24:03 And for any president who is in a war, of course, but I think this is my goal to win those debates.
    2:24:07 And the other things are not for today.
    2:24:18 As I said, the dream I have is a historic opportunity to make peace, to make lasting peace soon.
    2:24:20 So I’m glad you’re focused on that.
    2:24:26 Let me ask a question about that a lot of people in the United States think about.
    2:24:32 And I care a lot about the future of Ukraine is corruption.
    2:24:36 This is something you have cared a lot about for a long time.
    2:24:43 You won the presidency 2019 in big part, your message of fighting corruption.
    2:24:53 But there are a lot of accusations that during war, I mentioned the 9 trillion dollars in the United States, war breeds corruption.
    2:25:04 So can you speak to that? How have you been fighting corruption, and can you respond to the accusations that there has been corruption in Ukraine?
    2:25:05 You know, it’s very simple.
    2:25:11 First of all, we really have a very sophisticated anti-corruption system.
    2:25:17 Sophisticated not in the sense that it’s difficult to understand, but in that it really consists of many elements.
    2:25:21 It’s the most sophisticated in all of Europe.
    2:25:24 This is another requirement of the European Union.
    2:25:27 It was a requirement for Ukraine.
    2:25:31 And for many years, Ukraine was not trusted.
    2:25:45 I want to tell you that under me, we voted for all the bills, all the anti-corruption reforms, well, almost all reforms, and all anti-corruption bodies today are independent.
    2:25:48 They work as requested.
    2:25:50 I still believe that they are not perfect yet.
    2:25:52 There are many issues.
    2:26:00 There is a judicial system, but also a judicial reform that our partners, the United States, plus the EU demanded from us.
    2:26:02 This is all written out.
    2:26:08 This is written out in specific laws, in specific decrees, in specific decisions.
    2:26:09 We did this.
    2:26:13 We’ve done 99% of this.
    2:26:17 If something has not been done, it means that it is on the way.
    2:26:21 But in principle, all this exists, and there is no such system in Europe as the one we have.
    2:26:25 To say that we do not have corruption would be lying.
    2:26:28 We just talk about it openly.
    2:26:30 We are genuinely fighting against it.
    2:26:42 Look, we have sitting in our prison, Ihor Kolomoisky, who is the most influential Ukrainian oligarch since independence.
    2:26:45 And no one could do anything about him.
    2:26:51 The United States of America wanted to have Kolomoisky and they went to great lengths because of money laundering, etc.
    2:26:57 There are criminal cases in the United States, I think in Delaware, something like that.
    2:26:59 Neither Europe could do anything about it.
    2:27:02 That is, we did a lot with oligarchs.
    2:27:06 Russian oligarchs, sanctions were imposed, they were thrown out.
    2:27:11 Some of them fled the state, but they are all under sanctions.
    2:27:19 We exchanged some of them for our soldiers, such as Medvedchuk, to whose daughter Putin is godfather.
    2:27:32 That is, we fought against the strongest influential oligarchs, which are and were in Ukraine and we eliminated a lot of corruption.
    2:27:35 Of course, corruption exists in everyday life.
    2:27:41 It exists, but institutionally, I am sure that Ukraine will overcome all this.
    2:27:48 This takes a little time. I would say honestly that, listen, what we call corruption,
    2:27:58 in some states of the world is called lobbyism, but this does not mean that there is no corruption there.
    2:28:03 Let’s take the aid you mentioned during the war.
    2:28:08 First of all, we have no money.
    2:28:13 We have no money except for the war.
    2:28:22 We received weapons from the United States of America, from Europe, if we take, for example, money from the United States of America.
    2:28:32 During all this time of the war, around 177 billion have been voted for or decided upon.
    2:28:38 177 billion, let’s be honest.
    2:28:43 We have not received half of this money.
    2:28:48 The second point, which is very important just as an example, is about corruption.
    2:28:52 The first question, whose corruption?
    2:28:54 This is the second.
    2:28:58 Here is just one small example for you.
    2:29:06 When the United States began to transfer us weapons, it was American money, but American weapons.
    2:29:09 Money for these weapons.
    2:29:15 I had, as a president, I had cargo jets.
    2:29:19 Not in Ukraine because of the war, we moved them very quickly to Europe.
    2:29:28 We had cargo. We have good cargo fleet, very good, because of Antonov.
    2:29:41 So I asked the American side to grant me the opportunity, because our jets were at another airfield.
    2:29:51 And I asked America to give me the opportunity to use our jets for transfer, not to pay a lot.
    2:29:55 To whom? To your companies, to American companies.
    2:29:58 No, I didn’t get this opportunity.
    2:30:01 My jets stayed put.
    2:30:07 And the United States cargo jets moved these weapons.
    2:30:11 But everywhere you have to spend money.
    2:30:21 So we could get more weapons, but we have to pay for this very expensive fleet.
    2:30:28 My question, is this corruption or not? Or lobbyism? What is it?
    2:30:31 You mean corruption on the part of the US companies?
    2:30:33 Yes, making such decisions.
    2:30:38 The lobbying for such decisions involves some companies that make these decisions.
    2:30:42 But I can’t be open about it and I couldn’t speak loudly about it.
    2:30:46 I didn’t want, nor did I intend to cause any scandals to arise.
    2:30:49 Because otherwise, you can freeze the support and that’s it.
    2:30:55 And that’s why when we talk about corruption, we must ask who is involved.
    2:31:03 If we had 177 and we got half, where is the other half?
    2:31:07 If you find the second half, you will find corruption.
    2:31:10 There is a perception of corruption.
    2:31:16 People like Donald Trump and Elon Musk really care about fighting corruption.
    2:31:25 What can you say to them to gain their trust that the money is going towards this fight for freedom, towards the war effort?
    2:31:30 In most cases, we did not receive money, we received weapons.
    2:31:36 And where we saw risks that something could happen with a weapon, we would slap everyone on the wrist.
    2:31:42 And believe me, this is not only about Ukraine, on the supply chain, everywhere.
    2:31:49 There are some or other people and companies who want to make money because everyone makes money on the war.
    2:31:51 We did not profit from the war.
    2:31:55 If we found someone, believe me, we slapped everyone on the wrist.
    2:32:03 And we did that, we did that, and we will continue to do so because to this day,
    2:32:11 when someone says that Ukraine was selling weapons, and by the way, Russia was the one pushing this narrative,
    2:32:19 we always responded, our soldiers would kill such people with their own hands without any trial.
    2:32:25 Do you honestly think anyone could steal weapons by the truckload when we ourselves don’t have enough on the front lines?
    2:32:35 And yet we have to provide proof to defend ourselves because when there’s an abundance of such misinformation, distrust starts to grow.
    2:32:41 And you’re right, people listen to various media outlets, see this and lose faith in you.
    2:32:47 In the end, you lose trust, and with it, you lose support.
    2:32:55 Therefore, believe me, we are fighting more against disinformation than against particular cases,
    2:33:02 although I still emphasize once again, at the everyday level, such things are still important.
    2:33:07 We catch these people and we fight them.
    2:33:18 I mentioned Elon Musk. I would be interested to hear what you think of him, why you respect him as a person, as an engineer, as an innovator, as a businessman.
    2:33:22 I would just like to hear from you, what do you think about Elon Musk?
    2:33:27 First of all, I had a conversation with him at the beginning of the war.
    2:33:35 I talked with him. I respect him, first and foremost.
    2:33:39 I respect the self-made man, right? In English, I love such people.
    2:33:46 You know, no one and nothing fell into their lap, but the man did something, did it all himself.
    2:33:56 I worked myself, created a big production company, and I know what it means to make money, to select talented people,
    2:34:06 to impart knowledge to them, to invest money and to create something, something important for certain people, you know.
    2:34:17 And I’m not comparing myself to Musk. He just, well, the man is a great leader of innovations in the world.
    2:34:22 And I believe that such people move the world forward.
    2:34:30 Therefore, I respect the result of his work, and we see this result.
    2:34:39 And for me, it has always been important that your result can be used, that these are not words, but facts.
    2:34:41 Let’s take the war.
    2:34:46 We are very grateful for Starlink. It has helped.
    2:34:51 We used it after Russian missile attacks on the energy infrastructure.
    2:34:55 There were problems with the internet, etc., with connection.
    2:35:02 We used Starlink both at the front and in kindergartens. It was used in schools. It helped children.
    2:35:08 We used it in various infrastructure, and it helped us very much.
    2:35:19 And I would very much like Elon to be on our side as much as possible to support us.
    2:35:23 And yes, I am grateful to him for Starlink. Truly I am.
    2:35:30 First of all, so that our guys have a connection and children too.
    2:35:44 And I am really grateful to him for that. I think we need, I would like him to come to Ukraine, to talk to people here, and to look around, and so on.
    2:35:46 Has Elon visited Kiev or Ukraine yet?
    2:35:47 No.
    2:35:53 I hope the Kiev airport will open soon. Then it will be easier to fly in.
    2:36:05 Yes, I am looking forward to it. Maybe we will open it, but only, you must understand, if the war is over and there is sustainable peace and air defense systems, to be honest.
    2:36:20 And we must ensure that they are long-lasting and effective. Let’s take the airport, for example, and let’s focus on the airport in Rzeszów, which you know very well, as it is handling important cargo for Ukraine in Poland.
    2:36:24 And there are Patriot systems there, because everyone understands what the risk is.
    2:36:28 Well, Russia is a risk, and therefore we need air defense systems.
    2:36:38 And today, today take, for example, the air defense system of one city or another that is being shelled and move it, move it to the airport.
    2:36:44 Well, that would be dishonest. People are more important than planes.
    2:36:55 But there will be a moment. And Trump, by the way, I think that the war will end, and President Trump may be the first leader to travel here by airplane.
    2:36:58 I think it would be symbolic by airplane.
    2:37:03 Again, January 25th, around that date, right? Flying in, meeting Air Force One.
    2:37:04 That would be cool.
    2:37:08 Elon Musk. I will meet you there for the second time, too, on the plane.
    2:37:09 With pleasure.
    2:37:20 And you, by the way, before I forget, let me ask, are you coming on January 20th for President Trump’s inauguration?
    2:37:35 I would like to, of course. I will be considering what is happening then in the war, because there are moments of difficulties, escalation, many missiles, etc.
    2:37:45 But honestly, well, I can’t. I can’t come, especially during the war, unless President Trump invites me personally.
    2:37:59 I’m not sure it’s proper to come, because I know that in general leaders are, for some reason, not usually invited, to the inauguration of presidents of the United States of America.
    2:38:07 Well, and I know that there are leaders who can simply come, want to come, and will come.
    2:38:09 Yeah, I know.
    2:38:16 And I know the temperament of some of these people. They can come at their discretion.
    2:38:19 This is very, very difficult for me.
    2:38:24 I am the kind of person that cannot come without an invitation.
    2:38:32 This is Putin. We did not invite him. He came to us, so to say. And me? I can’t do that.
    2:38:38 No, but didn’t he publicly say that it would be great if you came to the inauguration? Or do you mean, did he invite you officially?
    2:38:46 No, wait, look, look, look. Listen, I am against any bureaucracy. I get rid of it as much as I can.
    2:38:59 Well, you know, there are some complexities involving security. I decide and I fly, and the United States of America officially provides security.
    2:39:08 Not that I need this, mind you. I do not ask for helicopters to fly around and protect me, but they will simply do it themselves, the security service itself.
    2:39:17 They had to do it. I don’t want it. And sometimes I don’t need it. And I am asking them. It was, for example, before the war.
    2:39:25 I think, yes, it was before the war. I had a meeting, yes, with President Trump. It was in 2019.
    2:39:30 I just wanted to go for a run early in the morning because I really wanted to exercise.
    2:39:39 And they, those tall bodyguards, a lot of them, they decided to join me, but I couldn’t really do it because they were in suits.
    2:39:51 And I was in sportswear. I said, no, I can’t. It’s always funny. I’m not, I don’t want to, you know, I don’t want to disturb anybody and cause anyone problems with me.
    2:39:57 And that’s why, if he will invite me, I will come.
    2:39:59 I thought he invited you.
    2:40:00 Yeah?
    2:40:03 Yeah, I thought he publicly invited you. But okay, I hope to see you there.
    2:40:08 I think they had to do some of their steps. I don’t know, but…
    2:40:12 Step, yeah. The stamp was missing.
    2:40:18 Yeah, but with pleasure with my wife, of course. And I think it’s important. It’s important.
    2:40:25 All right, let’s get back to a serious question. Sometimes they say it in America, this question of who is really in power.
    2:40:29 So let me ask, is someone controlling you?
    2:40:37 For example, oligarchs, American politicians, Yermak.
    2:40:48 I wanted to bring this up because I have been here in Ukraine twice since the invasion of 2022.
    2:40:53 And one of the things I’ve learned, well, is that actually nobody controls you.
    2:41:05 And this is, this is one of your strengths as a president, as a person that oligarchs and other rich and powerful people like that cannot control you.
    2:41:07 Can you explain why that is, how you see it?
    2:41:15 I think, and it is indeed true, that I’m generally difficult to deal with.
    2:41:22 I am an ambitious person. I can’t submit to anyone.
    2:41:32 I can live by rules, by laws. I believe that this is the only thing that can control any person today.
    2:41:39 These are the rules and laws of the society or state where you live.
    2:41:45 And I believe that this is the most important thing. There is no person who could control me.
    2:41:55 As I once told President Trump, when we had a meeting, by the way, journalists asked if Trump influenced me during the phone call.
    2:42:03 I told him, I told the journalist the truth then: who can influence me? Only my boy, my son.
    2:42:07 This is the fact, when he calls asking for something, well, then I lift up my arms.
    2:42:12 Yes, and I cannot do anything about it because children are children.
    2:42:18 I have so little time with them. And therefore, when there are these moments, they are precious and important to me.
    2:42:27 I am ready to do anything. Also, probably my parents, they are an authority for me.
    2:42:34 Beyond that, I view it more as a system. No one can control the president.
    2:42:44 Therefore, we have oligarchs who either fled or are in prison because oligarchs usually control cash flows and people and influence politics.
    2:42:49 And we have concrete examples. With sentences, they are not just under house arrest.
    2:42:55 Not just that there are some judgments under which their assets were frozen or sanctions were imposed.
    2:43:01 There are specific people who are behind bars. I think this is the answer regarding the influence.
    2:43:06 Would they like to influence me in the same way as any president of Ukraine?
    2:43:11 Because finance and cash flows always influence politics.
    2:43:18 Well, at least they want to do this. This is regarding the influence.
    2:43:27 And other people on the vertical, they perform tasks as my managers.
    2:43:32 Andriy, whom you mentioned, is one of those managers.
    2:43:38 Well, I am glad that I have such people.
    2:43:42 Well, probably there is nothing else to add here.
    2:43:47 I will just say that your team that I spoke with is an excellent team, excellent people.
    2:43:48 Thank you.
    2:44:00 Okay, one last question. The future of Ukraine. If you look 5, 10, 20 years into the future, what can help Ukraine flourish economically, culturally, politically in the future?
    2:44:05 Digital? It’s very important. Digitalization of all the processes.
    2:44:09 We began this work. We have special Ministry of Digital Transformation.
    2:44:14 Yeah, so this is very good. And we also have our Diia.
    2:44:17 This is the name for all of these services. Yeah.
    2:44:20 So I think that is the most important.
    2:44:29 This is, again, not only convenient; it will eliminate any possibility of future corruption.
    2:44:34 Because you don’t have any, you know, you don’t have any personal connections with people in the government or elsewhere.
    2:44:38 So you’re just on your phone or any other device. That’s it.
    2:44:42 And I think we are doing very well. We are the best in Europe.
    2:44:44 All of Europe recognizes it.
    2:44:53 Some countries of the African Union asked us to provide the same service, and we will do it immediately after the war.
    2:44:57 And I think that we can bring money to Ukraine from this.
    2:45:01 And I think what we also need, we need a tax reform.
    2:45:06 I think it will be very important for the businesses to return.
    2:45:17 A lot of support will come, I think, from U.S. business investment, not as direct aid to us, but as investment in the private sector and resources.
    2:45:25 And I mentioned this to President Trump and to some European leaders who are our key strategic partners that will be happy,
    2:45:33 especially with the Americans, will be happy to sign these contracts and engage in joint investments in many areas.
    2:45:40 And I think we can develop oil, gas, green energy, including solar power.
    2:45:44 And we already have the resources. We can invest money into this.
    2:45:56 We have oil reserves in the Black Sea that we can exploit and we need your expertise and the investment of your companies.
    2:46:03 We have gold and uranium reserves, the largest in Europe, by the way, which is also very important.
    2:46:07 For example, Russia has pushed France out of Africa.
    2:46:13 They urgently need uranium, which we have, so we are ready to open up for investments.
    2:46:20 And this will give us, of course, opportunities, jobs for people, revenue.
    2:46:22 I don’t want cheap labor, honestly.
    2:46:31 What I truly want, especially after the war, is to open up for those people who can really contribute and earn, yes.
    2:46:34 And give a reason to the 8 million people to come back.
    2:46:41 Yes, it’s so important and they will come and we will recover and rebuild Ukraine.
    2:46:47 We will be very open to companies and, of course, we will welcome our people back.
    2:46:50 It’s so important culturally.
    2:46:55 I think the most important thing is to remain open and not change our direction.
    2:47:01 Because culturally aligning with Russia, it’s one idea while aligning with Europe is another.
    2:47:06 Our people have chosen Europe, it’s their choice, it’s our choice, the choice of our nation.
    2:47:08 And I think it’s very important.
    2:47:09 But first you have to end the war.
    2:47:10 Yes, you’re right.
    2:47:11 And we will.
    2:47:13 We want peace, you know?
    2:47:17 I mean, just to make it clear, we want peace.
    2:47:19 Just what I always say.
    2:47:22 You have to come to Ukraine and see for yourself.
    2:47:30 And people will tell you, “No, we can’t forgive those murderers who took our lives.”
    2:47:34 But we still want to make peace.
    2:47:45 And honestly, I think that the highest approval rating of the President of the United States, of Trump, right now is in Ukraine.
    2:47:55 People really believe that he can truly help bring peace.
    2:48:04 Now they have faith, faith that he can make it happen, that he can support Ukraine and he can stop Putin.
    2:48:09 And that he will make sure Putin doesn’t get everything he wants.
    2:48:16 This is very important and it’s why we believe that we must not lose this opportunity.
    2:48:19 I hope you find the path to peace. Thank you.
    2:48:20 Thank you so much.
    2:48:21 Thank you for talking to me.
    2:48:22 Thank you for coming.
    2:48:26 Thank you.
    2:48:30 You started.
    2:48:32 Thank you very much.
    2:48:39 Thank you for listening to this conversation with the President of Ukraine Volodymyr Zelensky.
    2:48:46 And now let me answer some questions and try to reflect on and articulate some things I’ve been thinking about.
    2:48:59 If you would like to submit questions, including in audio and video form, go to lexfridman.com/ama or, to contact me for whatever other reason, go to lexfridman.com/contact.
    2:49:02 First, I got a bunch of questions about this.
    2:49:10 So let me chat about the topic of language and let’s say the mechanics of multilingual conversation.
    2:49:13 Perhaps the details are interesting to some people.
    2:49:20 It also allows me to reflect back on the puzzle of it in this episode and what I can do better next time.
    2:49:30 I already explained in the intro the symbolic, historic, and geopolitical complexity of the choice of language in the conversation with President Zelensky.
    2:49:37 As I said, the Russian language is one that the President speaks fluently and was his primary language for most of his life.
    2:49:41 I speak Russian fluently as well.
    2:49:44 It’s the only common language we are both fluent in.
    2:49:50 So any other combination of languages required an interpreter, including when I spoke English.
    2:50:01 He did need an interpreter when I spoke English and, just like I was, he was visibly encumbered and annoyed by the process of interpretation.
    2:50:09 This is why I tried to speak in Russian to the President instead of English so that he can directly understand me without an interpreter.
    2:50:13 I’m willing to take the hit for that as I am for everything else.
    2:50:15 I’m not trying to protect myself.
    2:50:21 I’m trying to do whatever is best for the conversation, for understanding.
    2:50:28 Though it has been getting harder and harder to stay open, vulnerable, and raw in public,
    2:50:39 while the swarms of chanting internet mobs stop by with their torches and their color-coded hats, flags, frogs, pronouns, and hashtags.
    2:50:47 Anyway, there is a lot of nuanced aspects of the conversational language that I would like to explain here.
    2:50:49 I’ll try to be brief.
    2:50:56 I can recommend a lot of books on this topic of language and communication that reveal just how amazing this technology of language is.
    2:51:04 For example, for a good overview, I recommend John McWhorter’s books and especially his lecture series for The Great Courses on language.
    2:51:06 There are several.
    2:51:14 In the Story of Human Language series, he gives a great discussion on spoken language versus written language,
    2:51:17 and that spoken language often relaxes the rules of communication.
    2:51:31 It uses shorter packets of words, loads in a bunch of subtle cues and meanings, all of which, like I’m trying to describe, are lost when there’s an interpreter in the loop.
    2:51:39 Let me also describe some relevant characteristics of my peculiar language abilities in quotes.
    2:51:41 I was never good at speaking.
    2:51:44 I listen, think, and understand better than I speak.
    2:51:51 For me, this is true for both English and Russian, but it is especially true for Russian.
    2:51:59 The Russian language allows much more room for wit, nonstandard turns of phrase, metaphors, humor, rhyme, musicality,
    2:52:07 and, let’s say, deforming of words that create a lot of room for creativity in how meaning and emotion are conveyed.
    2:52:10 You could do the same in English, but it’s harder.
    2:52:15 I actually find that Brits are sometimes very good at this.
    2:52:18 Like, one of my favorite humans to talk to is Douglas Murray.
    2:52:28 Setting the content of the conversation aside, the sheer linguistic brilliance and wit of dialogue with Douglas is a journey in itself.
    2:52:35 I think Christopher Hitchens had the same, and many others, like I said, especially Brits.
    2:52:45 Anyway, I’m able to detect and understand a lot of dynamism and humor in the Russian language, but I’m slow to generate it,
    2:52:47 in part because I just don’t practice.
    2:52:50 I have very few Russian-speaking friends.
    2:52:56 Funny enough, most of them are Ukrainian, but they speak with me and each other in Russian.
    2:53:01 But of course, as I mentioned, this is slowly changing due to the war.
    2:53:09 But I tried to speak to the president in Russian, so he would avoid needing an interpreter as much as possible.
    2:53:16 One of the things I want to improve for next time is to make sure I give very good equipment for interpretation,
    2:53:27 and arrange for an interpreter I trust to be exceptionally good for the dynamism and the endurance of a three-hour conversation in the style that I tried to do.
    2:53:31 Just to give you some behind-the-scenes details of the experience.
    2:53:42 Equipment-wise, funny enough, it’s not actually so trivial to set up wireless connections from us, the two people talking to the interpreter, and then back to us,
    2:53:46 in a way that’s super robust and has clean audio.
    2:53:51 The audio I had in my ear from the interpreter had a loud background noise,
    2:54:00 so the whole time I’m hearing a shh sound with the voice of the interpreter coming in very quietly.
    2:54:05 What a wonderful experience. This whole life is, frankly.
    2:54:14 Plus, his translation was often incomplete, at least for me, so I had to put together those puzzle pieces continuously.
    2:54:21 But, again, it worked out, and hopefully our constant switching of languages and having a meta-discussion about language
    2:54:29 provided good insights as to the complexity of this fight for our nation’s identity and sovereignty that Ukraine is going through.
    2:54:40 Behind-the-scenes, off-mic, on a personal level, President Zelensky was funny, thoughtful, and just a kind-hearted person.
    2:54:46 And really, the whole team were just great people. It was an experience I’ll never forget.
    2:54:54 After the conversation was recorded, the next challenge was to translate all of this and overdub it and do it super quickly.
    2:55:02 Like, these words I’m speaking now have to be translated and dubbed into Ukrainian and Russian.
    2:55:09 Eleven Labs were really helpful here, especially in bringing the President’s voice to life in different languages.
    2:55:16 But even more than that, they’re just an amazing team who inspired me and everyone involved.
    2:55:22 Please go support Eleven Labs. They are a great company and great people.
    2:55:30 The translation is separate from the text-to-speech and was done in part by AI and a lot by human.
    2:55:35 This is where the fact that we had constant switching between three languages was a real challenge.
    2:55:40 So there are six transition mappings that have to be done.
    2:55:48 English to Ukrainian and Russian, Ukrainian to English and Russian, and then Russian to English and Ukrainian.
    2:55:53 Continuously, sentence by sentence, sometimes word by word.
    2:56:00 And each combination of language to language translation is best done by a person who specializes in that kind of mapping.
    2:56:03 So it was all a beautiful mess, all of it.
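    To make the combinatorics concrete, here is a minimal sketch in Python of how six directed mappings fall out of three languages; the language list and print formatting are purely illustrative, not the actual production pipeline:

        from itertools import permutations

        languages = ["English", "Ukrainian", "Russian"]

        # Each directed source -> target pair needs its own translator,
        # so 3 languages give 3 * 2 = 6 mappings.
        mappings = list(permutations(languages, 2))
        for source, target in mappings:
            print(f"{source} -> {target}")
        print(f"total mappings: {len(mappings)}")  # 6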
    2:56:07 And on top of all that, great translation is super hard.
    2:56:12 For example, I’ve read and listened to a lot of Dostoevsky, both in English and Russian,
    2:56:16 and studied the process of how these books are translated by various translators.
    2:56:22 You can spend a week discussing how to translate a single important sentence well.
    2:56:28 Obviously, in this situation, we don’t have weeks. We have hours for the whole thing.
    2:56:37 One of the things I regret is not putting enough time into the hiring and selecting great translators from Russian and Ukrainian to English, especially.
    2:56:47 I think translation is an art, and so getting a good translator that works well with us is a process that needs more time and effort.
    2:56:49 I’ll be doing that more this month.
    2:56:54 By the way, we have a small but amazing team.
    2:56:58 If you want to join us, go to lexfridman.com/hiring.
    2:57:04 If you’re passionate, work hard, and everyone on the team loves working with you, then we’ll do some epic stuff together.
    2:57:06 We’d love to work with you.
    2:57:16 Like I said about 11 Labs, there are a few things as awesome in life as being able to work hard with an amazing team towards a mission all of us are passionate about.
    2:57:20 Anyway, I’ll probably be doing a few more interviews in the Russian language.
    2:57:28 I do have a lingering goal of interviewing the mathematician Grigori Perelman, but there are also others.
    2:57:37 I will also work on improving my whole pipeline, both equipment-wise and interpreter-wise, in doing these conversations in other languages.
    2:57:47 Because there are many that I would like to do in languages I don’t speak at all, like Mandarin Chinese, Spanish, Arabic, Hindi, Portuguese, French, German.
    2:57:56 I see language as both a barrier for communication and a portal into understanding the spirit of a people connected by that language.
    2:58:02 It’s all a weird and beautiful puzzle, and I’m just excited to get the chance to explore it.
    2:58:06 Alright, I got a question on how I prepare for podcasts.
    2:58:11 So this has evolved and expanded more and more over time.
    2:58:15 There are some podcasts that I prepare hundreds of hours for.
    2:58:23 In AI terms, let’s say, first I’m training a solid background model by consuming as much variety on the topic as possible.
    2:58:32 A lot of this comes down to picking high signal sources, whether it’s blogs, books, podcasts, YouTube videos, X accounts, and so on.
    2:58:41 For this conversation with President Zelensky, for example, since February 2022, I’ve spoken with hundreds of people on the ground.
    2:58:50 I’ve read, on Kindle or audiobook, about 10 books fully, and then I skimmed about 20 more.
    2:58:55 And I don’t mean books about Zelensky, although he does appear in some of them.
    2:59:01 I mean books where this conversation was fully in the back of my mind as I’m reading the book.
    2:59:06 So, for example, I read Red Famine by Anne Applebaum.
    2:59:09 It’s about the Holodomor.
    2:59:11 Does it directly relate to Zelensky?
    2:59:19 Not on the surface, no, but it sort of continues to weave the fabric of my understanding of people, of the history of the region.
    2:59:30 But it’s really important for me to read books from various perspectives, and I’m always trying to calculate the bias under which the author operates,
    2:59:35 and adjusting for that in my brain as I integrate the information.
    2:59:43 For example, Anne Applebaum’s book, Gulag, is very different from Alexander Solzhenitsyn’s The Gulag Archipelago.
    2:59:47 The former is a rigorous comprehensive historical account.
    2:59:54 The latter is a literary, psychological, and personal portrait of Soviet society.
    2:59:57 Both, I think, are extremely valuable.
    3:00:03 On the bias front, for example, The Rise and Fall of the Third Reich by William Shirer is a good example.
    3:00:13 It is full of bias, but he was there, and to me, he has written probably one of the greatest, if not THE greatest book on the Third Reich ever.
    3:00:18 But like I said, it has a lot of inaccuracies and biases. You can read about them online if you like.
    3:00:28 But my job in this case, and in all cases, is to adjust based on my understanding of the author’s biases, take the wisdom from the text where it can be found,
    3:00:34 and put the inaccuracies aside into the proverbial dustbin of history.
    3:00:41 So as I’m reading, I’m writing down my thoughts as they come up, always digging for some deeper insight about human nature.
    3:00:49 If I’m at my computer, I’ll write it down in a Google Doc, and sometimes use Notion or Obsidian.
    3:00:52 If I’m not at my computer, I’ll use Google Keep.
    3:01:00 So for example, if I’m listening to an audiobook and I’m running along the river, if a good idea comes to mind, I’ll stop, think for a few seconds,
    3:01:04 and then do speech-to-text note in Google Keep.
    3:01:11 By the way, I listen to audiobooks at 1x speed. Old school.
    3:01:17 And eventually I get a gigantic pile of thoughts and notes that I look over to refresh my memory.
    3:01:23 But for the most part, I just throw them out. It’s a background model building process.
    3:01:27 By the way, LLMs are increasingly becoming useful here for organization purposes,
    3:01:36 but have not yet been useful, at least for me, for insight extraction or insight generation purposes, and I do try a lot.
    3:01:43 I should mention that my memory for specific facts, names, dates, quotes is terrible.
    3:01:49 What I remember well is high-level ideas. That’s just how my brain works for better or for worse.
    3:02:01 I realize that sometimes forgetting all of the details and the words needed to express them makes me sound simplistic and even unprepared.
    3:02:07 I’m not. But that’s life. We have to accept our flaws and roll with them.
    3:02:13 Aside from books, I also listen to a lot of podcasts and YouTube videos where people are talking about the topic.
    3:02:22 So, for the President Zelensky episode, I listened to probably hundreds of hours of content from his supporters and from his critics on all sides.
    3:02:29 Again, I choose who to listen to based not on their perspective, but based on SNR, signal to noise ratio.
    3:02:36 If I’m regularly getting insights from a person, I will continue listening to them, whether I agree or disagree.
    3:02:46 In the end, this turns out to be a lot of hours of prep, but to say that it’s X hours per episode is not accurate because a lot of this preparation transfers from one guest to another,
    3:02:51 even when there’s an insane level of variety in the guests. We’re all humans after all.
    3:02:57 There is a thread that connects all of it together. Somehow, you feel it closely enough.
    3:03:07 For more technical guests in STEM fields, I’ve read papers, a lot of papers, and also technical blog posts and technical tweet threads.
    3:03:13 This is a very different process. For AI or CS related topics, I will run other people’s code.
    3:03:19 I will write my own, implement stuff from scratch. If it’s a software company, I’ll use their tools and software if relevant.
    3:03:28 But in the actual conversation, I constantly am searching for simple but profound insights at various levels of abstraction.
    3:03:40 Sometimes this means asking a trivial question in hopes of uncovering the non-trivial, counterintuitive but fundamental idea that opens the door to a whole new way of looking at the field.
    3:03:53 And actually, every guest is their own puzzle, like preparing for Rick Rubin was me listening to hundreds of songs he produced and even learning some on guitar, like “Hurt” by Johnny Cash.
    3:04:05 Preparing for the Cursor team episode meant, obviously, that I had to use Cursor fully for several weeks, all of its features, so I switched completely from VS Code to Cursor.
    3:04:18 For Paul Rosalie, round two, especially, I literally went deep into the jungle with Paul and almost died, fully taking the leap toward adventure with him.
    3:04:24 When it gets close to the conversation, I’ll start working on the actual interview questions and notes.
    3:04:29 And there I’m asking myself, what am I personally curious about?
    3:04:39 Like, I love podcasts. I’m a big fan of many, many podcasts. And so I ask myself, what would I want this person to explain on a podcast?
    3:04:48 And maybe what aspect of their thought process or their humanity would I want to be surfaced or have the chance to be surfaced?
    3:04:57 In the actual conversation, I always try to put my ego aside completely and do whatever it takes to have a good conversation and serve the listener.
    3:05:09 This means asking questions simply, trying to define terms and give context if needed, being open-minded, vulnerable, curious, and challenging the guests when needed.
    3:05:17 Despite the claims on the internet, I do ask a lot of challenging questions, including follow-ups, but always with empathy.
    3:05:24 I don’t need to be right. I don’t need to signal my moral or intellectual superiority to anyone.
    3:05:41 I try to do the opposite, actually, because I want the guests to open up, and I trust the intelligence of the listener to see for themselves if the guest is full of shit or not, to detect the flaws and the strengths of how the guest thinks or who they are deep down.
    3:05:51 A lot of times, when interviewers grill the guest, it doesn’t reveal much, except give a dopamine hit to the echo chambers who hate the guest.
    3:05:58 As I said in the intro, I believe the line between good and evil does run through the heart of every man.
    3:06:09 The resulting conversations are sometimes a failure, sometimes because they are too short, sometimes because the chemistry was just not working, sometimes because I fucked it up.
    3:06:16 I try to take risks, give it everything I got, and enjoy the roller coaster of it all, no matter what.
    3:06:26 And, as I said, I trust the listener to put it all together, and I trust the critic to tear it apart, and I love you all for it.
    3:06:31 Alright, I got a bit of a fun question. It’s a long one.
    3:06:38 So, Delian, cool name, wrote in saying he spotted me out in the wild and had a question about it.
    3:06:49 He wrote, I saw Lex working at the Detroit airport between flights. I hesitated and ultimately decided not to interrupt since he was in focus mode, true.
    3:06:53 Lex had his headphones earbuds on, listening to brown noise.
    3:07:03 Microsoft Surface propped up at eye level, Kinesis Advantage keyboard on the table. The use of Microsoft Windows is surprising, but it has been discussed in the past, true.
    3:07:15 The ergonomics of the setup, Surface at eye level, mean that Lex cares about his health, but the anomalously large Kinesis Advantage keyboard seems like such a burden to lug around airports.
    3:07:23 I cannot help but ask, why is it that Lex is going through the hassle to bring this absolutely large keyboard with him as carry on?
    3:07:26 It barely fits in a backpack.
    3:07:32 Carrying it around must be necessary for Lex for some reason. I love the puzzle of this that you’re trying to think through this.
    3:07:38 The pain of lugging this tool around must be much smaller than the problem it solves for, question mark.
    3:07:49 What problem does this keyboard solve? What makes it necessary at the airport? Productivity, health, RSI? Good questions. Thank you, Delian.
    3:07:54 Great question. It made me smile. So I thought I’d answer. I remember that day.
    3:08:07 There was something else about that day aside from the keyboard that I miss. So I am filled with a melancholy feeling that is appropriate for the holiday season.
    3:08:14 So let me try to set the melancholy feeling aside, answer a question about my computer setup when I’m traveling.
    3:08:25 So whether I’m going to SF Boston, Austin, London or the front in Ukraine, I am always bringing the Kinesis keyboard.
    3:08:38 I don’t have RSI or any other health issues of that kind that I’m aware of, even though I’ve been programming, playing guitar, doing all kinds of combat sports my whole life.
    3:08:44 All of which put my hands and fingers in a lot of precarious positions and situations.
    3:08:49 For that reason, and in general, ergonomics have never been a big concern for me.
    3:08:58 I can work on a crappy chair and table, sleep on the floor. It’s all great. I’m happy with all of it.
    3:09:02 So why Kinesis, which by the way is right here.
    3:09:16 I had to think about it. Your question actually made me reflect and I was hoping as I’m answering it, the truth will come out on many levels.
    3:09:27 So it is true that I’m more productive with it. I can type and correct mistakes very fast compared to a regular keyboard, both in natural language typing and in programming.
    3:09:39 So fast enough, I think where it feels like I can think freely without the physical bottlenecks and constraints of fingers moving.
    3:09:48 The bit rate, in Neuralink parlance, is high enough for me to not feel like there is cognitive friction of any kind.
    3:09:52 But the real answer may be the deeper, more honest answer or something else.
    3:10:02 I’ve used the Kinesis keyboard for over 20 years. So maybe it’s like one of those love stories where a guy and a girl love each other.
    3:10:08 And you try to quit because it doesn’t quite work. But every time you leave, you ask yourself why.
    3:10:15 And then you realize that when you’re together, your life is just full of simple joys.
    3:10:22 So what’s the point of leaving? What’s the point of life if not to keep close to you the things that bring you joy?
    3:10:34 Delian, like this keyboard, it brings me joy. It’s a bad metaphor, over-anthropomorphized perhaps, but I never promised a good one.
    3:10:39 I’m like a cheap motel on a road trip, low quality is part of the charm.
    3:10:44 I do have some good motel stories for another time. This does not feel like the appropriate time.
    3:10:56 All that said, to disagree with myself, I did use Emacs also for over 20 years, and in a single week recently switched to VS Code and then Cursor and never looked back.
    3:11:00 So take my romantic nature with a grain of salt.
    3:11:12 So yes, eventually I’ll have to leave. But for now, you’ll keep finding me on occasion in a random airport somewhere listening to brown noise, writing away the hours on this Kinesis keyboard.
    3:11:25 Now, if you see me without it, maybe I’ll feel the same twinge of melancholy I feel now, looking back at that airport in Detroit.
    3:11:37 Anyway, more about my travel setup. If anyone is curious, I usually do travel with a Windows laptop, but I am mostly using Linux on it through WSL, Windows Subsystem for Linux.
    3:11:42 And in some cases, I’m dual booting Linux and Windows.
    3:11:51 I also need to be able to video edit. So on longer trips, I usually have a bigger laptop with a bigger screen, lots of memory, good CPU, good GPU.
    3:11:55 All of that helps with video editing on Adobe Premiere.
    3:12:07 In general, I’m extremely minimalist, except for a few, let’s call them sentimental, things. All my podcast recording equipment fits into a small suitcase.
    3:12:13 I try to keep it as simple as possible. Thank you for the question and see you at the next airport.
    3:12:25 Alright, I think it’s time to bring things to a close. I’d like to give a big thanks to you for giving me your time and your support over the years. It means the world.
    3:12:41 If you want to get in touch with me, go to lexfridman.com/contact. There you can give feedback, ask questions, request guests for the podcast, or submit the Coffee with Lex form if you just want to chat with me over a cup of coffee.
    3:12:50 I’ll be traveling across the world a bunch this year from Europe to South America and more so it would be cool to do some small meetups and meet some interesting people.
    3:12:56 This has been a journey of a lifetime. Thank you for everything.
    3:13:00 On to the next adventure. I love you all.
    3:13:16 [Music]

    Volodymyr Zelenskyy is the President of Ukraine. On YouTube this episode is available in English, Ukrainian, and Russian. Captions and voice-over audio tracks are provided in English, Ukrainian, Russian, and the original mixed-language version, with subtitles available in your preferred language. To listen to the original mixed-language version, please select the English (UK) audio track. The default is English overdub.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep456-sc
    See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

    Transcript:
    https://lexfridman.com/volodymyr-zelenskyy-transcript

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    President Zelenskyy’s X: https://x.com/ZelenskyyUa
    President Zelenskyy’s Instagram: https://instagram.com/zelenskyy_official
    President Zelenskyy’s Website: https://www.president.gov.ua/

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Notion: Note-taking and team collaboration.
    Go to https://notion.com/lex
    GitHub: Developer platform and AI code editor.
    Go to https://gh.io/copilot
    AG1: All-in-one daily nutrition drinks.
    Go to https://drinkag1.com/lex
    LMNT: Zero-sugar electrolyte drink mix.
    Go to https://drinkLMNT.com/lex
    Eight Sleep: Temp-controlled smart mattress cover.
    Go to https://eightsleep.com/lex
    BetterHelp: Online therapy and counseling.
    Go to https://betterhelp.com/lex

    OUTLINE:
    (00:00) – Introduction
    (20:17) – Language
    (30:06) – World War II
    (46:54) – Invasion on Feb 24, 2022
    (53:30) – Negotiating Peace
    (1:13:47) – NATO and security guarantees
    (1:26:39) – Sitting down with Putin and Trump
    (1:46:09) – Compromise and leverage
    (1:51:38) – Putin and Russia
    (2:01:30) – Donald Trump
    (2:12:01) – Martial Law and Elections
    (2:24:21) – Corruption
    (2:33:06) – Elon Musk
    (2:37:10) – Trump Inauguration on Jan 20
    (2:40:18) – Power dynamics in Ukraine
    (2:43:50) – Future of Ukraine
    (2:48:32) – Choice of language
    (2:58:02) – Podcast prep and research process
    (3:06:27) – Travel and setup
    (3:12:13) – Conclusion

    PODCAST LINKS:
    – Podcast Website: https://lexfridman.com/podcast
    – Apple Podcasts: https://apple.co/2lwqZIr
    – Spotify: https://spoti.fi/2nEwCF8
    – RSS: https://lexfridman.com/feed/podcast/
    – Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
    – Clips Channel: https://www.youtube.com/lexclips

    SOCIAL LINKS:
    – X: https://x.com/lexfridman
    – Instagram: https://instagram.com/lexfridman
    – TikTok: https://tiktok.com/@lexfridman
    – LinkedIn: https://linkedin.com/in/lexfridman
    – Facebook: https://facebook.com/lexfridman
    – Patreon: https://patreon.com/lexfridman
    – Telegram: https://t.me/lexfridman
    – Reddit: https://reddit.com/r/lexfridman

  • #455 – Adam Frank: Alien Civilizations and the Search for Extraterrestrial Life

    AI transcript
    0:00:11 The following is a conversation with Adam Frank, an astrophysicist interested in the evolution of star systems and the search for alien civilizations in our universe.
    0:00:19 And now a quick few second mention of each sponsor. Check them out in the description. It’s the best way to support this podcast.
    0:00:28 Let me say as a side note that I had to put a bunch of podcast episodes on hold to focus deeply on preparing for conversations with world leaders.
    0:00:34 So I apologize for including more sponsors on this episode than usual.
    0:00:40 They really wanted me to mention them this year and I’m not sure when I’m going to do another episode.
    0:00:45 We were going to do eight episodes this month, but instead I think we’re doing two.
    0:00:52 We’ll see every single day, every single hour, changes the plan, changes the situation, changes my life.
    0:01:00 So please be patient with me. There are no sponsor reads in the middle so you can skip this long and beautiful list.
    0:01:05 But I do try to make them interesting in case you do listen and I hope you do.
    0:01:13 In either case, please still check out the sponsors, buy their stuff. It is the best way to support this podcast.
    0:01:26 The sponsors are Encord for your ML stack, Eight Sleep for naps, Shopify for e-commerce, NetSuite for business, BetterHelp for the mind, Notion for notes, LMNT for electrolytes, and AG1 for nutrition.
    0:01:31 If you want to get in touch with me for whatever reason, go to lexfridman.com/contact.
    0:01:36 Perhaps you could tell from my voice on top of everything else, I’m also sick.
    0:01:43 What a wonderful, beautiful, challenging life this is and I’m grateful for every second of it.
    0:01:47 All right, and now on to the full ad reads. Let’s go.
    0:01:59 This episode is brought to you by Encord, a platform that provides data-focused AI tooling for data annotation, curation and management, and for model evaluation.
    0:02:11 For example, if you are an independent private or government agency that is running the drones that are flying all over New Jersey and the tri-state area,
    0:02:20 you might be doing the same kind of data annotation and collection, curation and management that Encore excels at.
    0:02:30 Also, if you’re an extraterrestrial species performing the same, I wonder what kind of computation tools alien civilizations have.
    0:02:36 At the physics level, computation is fundamentally a part of the fabric of the universe.
    0:02:49 So surely every advanced civilization would discover how to leverage that computation, how to organize that computation, how to access and communicate with that computation.
    0:02:58 Anyway, think of it: if you have a swarm of drones and you are the ruler of an alien civilization that wants to collect some data about New Jersey,
    0:03:06 you are going to have to do some great machine learning and great machine learning is not just about the algorithms.
    0:03:08 It is so much more about the data.
    0:03:18 So whoever you are running the drone program over New Jersey, go try out Encord to curate, annotate, and manage your AI data at encord.com/lex.
    0:03:20 That’s encord.com/lex.
    0:03:28 By the way, in all seriousness, I will probably talk about drones in New Jersey soon.
    0:03:38 I think it’s a fascinating mystery. Is it China? Is it aliens? Is it the U.S. government? Is it private companies within the U.S. government?
    0:03:43 Is it other nation states? Are nuclear weapons involved?
    0:03:49 And what are the mechanisms that ensure that the U.S. government is transparent about communicating what it discovers?
    0:03:51 These are essential questions.
    0:03:53 Okay, on to Eight Sleep.
    0:03:56 This episode is brought to you by Eight Sleep and its Pod 4 Ultra.
    0:04:02 You know, sleep makes me think about the night and I’ve been watching a lot of war movies.
    0:04:05 I’ve been watching a lot of war reporting.
    0:04:12 I’ve been watching a lot of conversations with soldiers and I’ve been talking to soldiers and there’s something about the night.
    0:04:18 There’s something about the quiet night that serves as the break from the hell of war.
    0:04:28 There’s a song from the Second World War, a song about a soldier writing to a woman he loves.
    0:04:34 That’s just it. Just like a man searched for meaning in the darkest hours of war.
    0:04:38 Those are the things that keep the flame of the heart going.
    0:04:48 Talking about these topics makes it difficult for me to then talk about Eight Sleep and the technology and the comfort of a good night’s
    0:04:52 sleep, somewhere in America.
    0:05:00 That’s one of the things you discover when you travel, especially travel to a country that’s participating in war.
    0:05:09 That the basic comforts, the basic securities, the basic dreams and hopes and the ways of life are taken away.
    0:05:12 And still the human spirit persists.
    0:05:15 Anyway, this is supposed to be an ad read.
    0:05:23 Go to eightsleep.com/lex, use code LEX to get up to $600 off your Pod 4 Ultra purchase when bundled.
    0:05:25 That’s eightsleep.com/lex.
    0:05:32 This episode is also brought to you by Shopify, a platform designed for anyone to sell anywhere with a great looking online store.
    0:05:40 I’ve been reading a lot about the long history of the Silk Road, especially before and after the Mongol Empire and Genghis Khan.
    0:05:47 I’ve been reading a lot about Genghis Khan and the influence he had on revolutionizing the trade network.
    0:05:57 A lot of networks, the trade of not just goods, but information of knowledge, of languages, of ideas, of religions, of peoples.
    0:06:07 And it’s fascinating how roads of that nature, trade, first and foremost, can break down the barriers that divide peoples.
    0:06:10 I suppose it all starts with incentives.
    0:06:19 People are people and they have stuff they don’t need and they want to sell it and other people have stuff they want and they are willing to buy it.
    0:06:28 And those incentives, at scale, overpower any kind of emotional, psychological, historical hatred and all those kinds of things.
    0:06:37 And it’s funny, the little incentives and the mechanisms of capitalism at its best can heal the wounds of war.
    0:06:45 Of course, they can also fuel the military industrial complex, which is the fuel of war.
    0:06:47 Oh, the double-edged sword.
    0:06:57 Anyway, take the Silk Road and fast forward to today and we have Shopify that you can sign up to for $1 per month trial period at Shopify.com/Lex.
    0:06:58 That’s all lowercase.
    0:07:02 Go to Shopify.com/Lex to take your business to the next level today.
    0:07:08 This episode is also brought to you by NetSuite, an all-in-one cloud business management system.
    0:07:20 When I think about NetSuite and all the different administrative modules and the standardized language that allows them to communicate with each other,
    0:07:33 I think about all the empires throughout history that were able to create remarkable administrative systems, the Byzantine Empire, the Roman Empire, the Mongol Empire, as I mentioned.
    0:07:37 None of it works without paperwork.
    0:07:49 You know, bureaucracy, rightfully so, gets a bad rap, but bureaucracy at its best is necessary to manage the affairs of large organizations.
    0:07:55 You know, humans are not very good at working with each other when they scale beyond a thousand people.
    0:07:57 So you need great administrative systems.
    0:08:03 And thankfully today, we have technology, we have tools like Netsuite to do just that.
    0:08:09 Take advantage of NetSuite’s flexible financing plan at netsuite.com/lex. That’s netsuite.com/lex.
    0:08:14 This episode is also brought to you by BetterHelp, spelled H-E-L-P, Help.
    0:08:25 One day in the distant future, AI systems will make for great therapists, but I think that’s a very dangerous road to walk down in the short term.
    0:08:31 I am a person who loves conversation and not small talk.
    0:08:36 The fake niceties that alleviate social friction, I’m not for that.
    0:08:41 I’m in for diving deep through conversation.
    0:08:47 And I think that is something that AI just can’t quite do yet and I would say not even close.
    0:08:48 It is an assistant.
    0:08:50 It is not a therapist.
    0:09:01 So the distinction, the differences, are quite fascinating to analyze, to watch, to try to sort of elucidate and articulate clearly.
    0:09:10 Yeah, so I’m a big fan of talking to a human to explore your own mind and BetterHelp is a very easy, accessible way of doing that.
    0:09:16 Check them out at betterhelp.com/lex and save on your first month. That’s betterhelp.com/lex.
    0:09:27 This episode is brought to you by Notion, a note-taking and team collaboration app that I use and you should use, especially if you’re on a large team,
    0:09:34 to collaborate on all kinds of stuff, including notes and project management, wikis, all that kind of stuff.
    0:09:47 Nuclear weapons have been on my mind quite a bit and I think about the Manhattan Project and I think about the amount of incredible, rapid organization that was involved in that project.
    0:10:00 Just think about the coordination, the coordination of brilliant people working on separate parts of an incredibly complicated project where all of it has to be secret.
    0:10:07 So many of the people working on it may not even be aware of the bigger picture of it or the different modules involved.
    0:10:12 Just imagine the coordination required there, just truly, truly, truly incredible.
    0:10:16 And of course, imagine what modern day tools can do for that.
    0:10:29 Obviously, the Manhattan Project is a top secret project and a controversial one and a complicated one and one that I’ve done many episodes on in terms of its implications.
    0:10:42 But there’s a less controversial perspective on the Manhattan Project of just seeing it as a project that the entirety of a nation or maybe the entirety of a civilization takes on the moonshot project.
    0:10:48 We’re going to go to Mars, we’re going to go out there, we’re going to build something big together.
    0:10:58 I love projects like that at any scale, just the big togetherness where all the bullshit of distraction is thrown away and you just focus.
    0:11:03 So yeah, Notion helps with that kind of thing and they integrate AI extremely well.
    0:11:13 So you should try Notion AI for free when you go to Notion.com/Lex, that’s all lowercase, Notion.com/Lex to try the power of Notion AI today.
    0:11:21 This episode is also brought to you by LMNT, my daily zero-sugar and delicious electrolyte mix.
    0:11:29 Did you know that salt in ancient Rome was a currency also referred to as white gold?
    0:11:47 How crazy is it that things like salt or cinnamon or frankly, gold and silver are things that all of us humans imbue with value for a time and even do horrific things to each other in order to attain more of it, the human greed for salt.
    0:11:51 So dark and so fascinating we humans are.
    0:12:07 Anyway, on a basic level, just thirst, something I’ve experienced in the Amazon jungle, thirst for water, and for that you need electrolytes, not just water: water and salt, plus magnesium and potassium.
    0:12:13 That is the basic thing you want the most when it is gone.
    0:12:22 And I got the chance, the gift, to experience it. Get a sample pack for free with any purchase. Try it at drinkLMNT.com/lex.
    0:12:31 This episode is also brought to you by AG1, a drink I drink every day to feel better about myself.
    0:12:47 It’s basically a great multivitamin, it’s delicious and frankly, I feel quite sad that I’m out of travel packs and I’m going to be gone for a time and I will not have AG1.
    0:13:00 AG1 and LMNT are things that make me feel like I’m home, like everything’s going to be okay. I am bringing LMNT with me because it has these packets, but I went through all the AG1 travel packs.
    0:13:11 So that silly little thing is one of the things that will make me feel homesick. Funny how that is. It’s the little things.
    0:13:29 Anyways, the crazy things I do in terms of physical and mental perturbations to the bodily equilibrium on a daily basis is something that is rescued in part by making sure I get AG1 every single day.
    0:13:38 What am I going to do without AG1? You know what, I’ll probably bring some with me. I changed my mind now and you should do the same.
    0:13:45 They’ll give you one month supply of fish oil when you sign up at drinkag1.com/Lex.
    0:13:56 If you’re still listening to this, thank you. I’m deeply grateful for you, for your support, for being there for so many years. I love you all.
    0:14:06 This is the Lex Friedman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Adam Frank.
    0:14:27 You wrote a book about aliens. So the big question, how many alien civilizations are out there?
    0:14:41 Yeah, that’s the question, right? The amazing thing is that after two and a half millennia of, you know, people yelling at each other or setting each other on fire occasionally over the answer, we now actually have the capacity to answer that question.
    0:14:52 So in the next 10, 20, 30 years, we’re going to have data relevant to the answer to that question. We’re going to have hard data, finally, that will point one way or the other.
    0:15:01 You know, even if we don’t find anything immediately, we will have gone through a number of planets. We’ll be able to start putting limits on how common life is.
    0:15:07 The one answer I can tell you, which was an important part of the problem, is how many planets there are, right?
    0:15:18 And just like people have been arguing about the existence of life elsewhere for 2,500 years, people have been arguing about planets for the exact same amount of time, right?
    0:15:27 You can see Aristotle yelling at Democritus about this. You know, you can see that they had very wildly different opinions about how common planets were going to be and how unique Earth was.
    0:15:36 And that question got answered, right? Which is pretty remarkable that in a lifetime, you can have a 2,500 year old question. The answer is they’re everywhere.
    0:15:43 There are planets everywhere. And it was possible that planets were really rare. We didn’t really understand how planets formed.
    0:15:55 And so if you go back to, say, the turn of the 20th century, there was a theory that said planets formed when two stars passed by each other closely and then material was gravitationally squeezed out.
    0:16:05 In which case those kinds of collisions are so rare that you would expect one in a trillion stars to have planets. Instead, every star in the night sky has planets.
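    As a side note on “putting limits on how common life is,” here is a minimal back-of-the-envelope Drake-style sketch in Python; every factor below is a placeholder assumption for illustration, not a measured value, except that the fraction of stars with planets is now known to be essentially one:

        # Toy Drake-style estimate of communicating civilizations in the galaxy.
        # All numbers are illustrative assumptions, not measurements.
        star_formation_rate = 1.5     # new stars per year in the Milky Way (assumed)
        frac_with_planets   = 1.0     # essentially every star has planets (observed)
        habitable_per_star  = 0.2     # habitable-zone planets per star (assumed)
        frac_life           = 0.1     # fraction that develop life (unknown)
        frac_intelligence   = 0.01    # fraction that develop intelligence (unknown)
        frac_communicating  = 0.1     # fraction that emit detectable signals (unknown)
        lifetime_years      = 10_000  # how long such a civilization signals (unknown)

        n = (star_formation_rate * frac_with_planets * habitable_per_star
             * frac_life * frac_intelligence * frac_communicating * lifetime_years)
        print(f"expected civilizations: {n:.2f}")  # about 0.3 with these guesses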
    0:16:18 So one of the things you’ve done is simulated the formation of stars. How difficult do you think it is to simulate the formation of planets, like simulating the solar system through its entire evolution?
    0:16:25 This is kind of a numerical simulation sneaking up to the question of how many planets are there.
    0:16:42 That actually we’re able to do now. You can run simulations of the formation of planetary systems. So if you run the simulation, really where you want to start is a cloud of gas, these giant interstellar clouds of gas that may have, you know, a million times the mass of the sun in them.
    0:16:56 And so you run a simulation of that. It’s turbulent. The gas is roiling and tumbling. And every now and then you get a place where the gas is dense enough that gravity gets hold of it and it can pull it downward. So you’ll start to form a protostar.
    0:17:03 And a protostar is basically the young star of, you know, this ball of gas where nuclear reactions are getting started.
    0:17:17 But it’s also a disk. So you, as material falls inward, because it’s everything’s rotating, as it falls inward, it’ll spin up and then it’ll form a disk. Material will collect in what’s called an accretion disk or a protoplanetary disk.
    0:17:27 And you can simulate all of that. Once you get into the disk itself and you want to do planets, things get a little bit more complicated because the physics gets more complicated. Now you got to start worrying about dust.
    0:17:44 Because actually, dust, well, dust is the wrong word. It’s smoke, really. These are the tiniest bits of solids. They will coagulate in the disk to form pebbles, right? And then the pebbles will collide to form rocks and the rocks will form boulders, etc., etc.
    0:17:59 That process is super complicated, but we’ve been able to simulate enough of it to begin to get a handle on how planets form, how you accrete enough material to get the first protoplanets or planetary embryos, as we call them.
    0:18:17 And then the next step is those things start slamming into each other to form planetary-sized bodies. And then the planetary bodies slam into each other. Earth, the moon came about because there was a Mars-sized body that slammed into the earth and basically blew off all the material that eventually formed the moon.
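    To give a feel for how many orders of magnitude that dust-to-planet cascade spans, here is a toy calculation in Python of the number of mass doublings from a micron-sized grain to an Earth-mass body; the grain size and density are assumed round numbers, and the doubling picture is a cartoon, not the kind of simulation he is describing:

        import math

        rho = 3000.0              # kg/m^3, assumed rock density
        r_grain = 1e-6            # m, micron-sized dust grain
        m_grain = (4.0 / 3.0) * math.pi * rho * r_grain**3
        m_earth = 5.97e24         # kg

        doublings = math.log2(m_earth / m_grain)
        print(f"grain mass: {m_grain:.2e} kg")            # ~1.3e-14 kg
        print(f"mass doublings needed: {doublings:.0f}")  # roughly 130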
    0:18:23 And all of them have different chemical compositions, different temperatures?
    0:18:43 Yeah. So the temperature of the material in the disk depends on how far away you are from the star. So it decreases, right? And so there’s a really interesting point. So like, you know, close to the star, temperatures are really high. And the only thing that can condense, that can kind of freeze out is going to be stuff like metals.
    0:19:00 So that’s why you find Mercury is this giant ball of iron, basically. And then as you go further out, stuff, you know, the gas gets cooler, and now you can start getting things like water to freeze, right? So there’s something we call the snow line, which is somewhere in our solar system out around between Mars and Jupiter.
    0:19:21 And that’s the reason why the giant planets in our solar system, Jupiter, Saturn, Uranus, and Neptune, all have huge amounts of ice in them, or water and ice. Actually, Jupiter and Saturn don’t have so much, but the moons do. The moons have so much water in them that there are oceans, right? A number of those moons have more water on them than there is water on earth.
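    A minimal sketch of the temperature-versus-distance point: for a Sun-like star, the zero-albedo blackbody equilibrium temperature falls off as one over the square root of distance, and taking roughly 170 K as the temperature where water ice can condense (an assumed round number) puts the snow line between Mars and Jupiter:

        import math

        T_sun = 5772.0    # K, solar effective temperature
        R_sun = 6.957e8   # m, solar radius
        AU = 1.496e11     # m

        def equilibrium_temp(d_m):
            # Zero-albedo blackbody equilibrium temperature at distance d from the Sun.
            return T_sun * math.sqrt(R_sun / (2.0 * d_m))

        for d_au in (0.39, 1.0, 1.52, 2.7, 5.2):  # Mercury, Earth, Mars, ~snow line, Jupiter
            print(f"{d_au:4.2f} AU -> {equilibrium_temp(d_au * AU):6.1f} K")

        # Distance where T drops below ~170 K (assumed ice condensation temperature).
        snow_line_au = (T_sun / 170.0) ** 2 * R_sun / (2.0 * AU)
        print(f"snow line ~ {snow_line_au:.1f} AU")  # about 2.7 AU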
    0:19:41 Do you think it’s possible to do that kind of simulation to have a stronger and stronger estimate of how likely an earth-like planet is? Can we get the physics simulation done well enough to where we can start estimating, like, what are the possible earth-like things that could be generated?
    0:20:02 Yeah, I think we can. I think we’re learning how to do that now. So, you know, one part is, like, trying to just figure out how planets form themselves and doing the simulations, like that cascade from dust grains up to planetary embryos. That’s hard to simulate, because you’ve got to do both the gas and you’ve got to do the dust and the dust colliding and all that physics.
    0:20:30 Once you get up to a planet-sized body, then, you know, you kind of have to switch over to almost like a different kind of simulation. There often what you’re doing is you’re doing, you know, sort of, you’re assuming the planet is this sort of spherical ball, and then you’re doing, you know, like a 1D, a radial calculation, and you’re just asking, like, all right, how is this thing going to, what is the structure of it going to be? Like, am I going to have a solid iron core, or am I going to get a solid iron core with that liquid iron core out around it, like we have on earth?
    0:20:43 And then you get, you know, a silicate, kind of a rocky mantle, and then across all of those details, those are kind of beyond being able to do full 3D simulations from ab initio, from scratch. We’re not there yet.
    0:20:47 How important are those details, like the crust and the atmosphere, do you think?
    0:21:17 Hugely important. So I’m part of a collaboration at the University of Rochester where we’re using the giant laser. It’s literally called the Laboratory for Laser Energetics. We got a huge grant from the NSF to use that laser to, like, slam tiny pieces of silica to understand what the conditions are like at, you know, the center of the earth, or even more importantly, the center of super-earths. This is what’s wild: the most common kind of planet in the universe we don’t have in our solar system.
    0:21:42 Which is amazing, right? We’ve been able to study enough, or observe enough, planets now to get a census. You know, we kind of have an idea of who’s average, who’s weird, and our solar system’s weird, because the average planet has a mass somewhere between a few times the mass of the earth to maybe, you know, 10 times the mass of the earth, and that’s exactly where there are no planets in our solar system.
    0:21:59 So the smaller ones of those we call super earths, the larger ones we call sub-neptunes. And they’re anybody’s guess. Like, we don’t really know what happens to material when you’re squeezed to those pressures, which is like millions, tens of millions of times the pressure on the surface of the earth.
    0:22:07 So those details really will matter of what’s going on in there, because that will determine whether or not you have, say, for example, plate tectonics.
    0:22:20 We think plate tectonics may have been really important for life on earth, for the evolution of complex life on earth. So it turns out, and this is sort of the next generation where we’re going with the, the understanding the evolution of planets in life.
    0:22:31 It turns out that you actually have to think hard about the planetary context for life. You can’t just be like, oh, there’s a warm pond, you know, and then some interesting, you know, chemistry happens in the warm pond.
    0:22:39 You actually have to think about the planet as a whole and what it’s gone through in order to really understand whether a planet is a good place for life or not.
    0:22:44 Why do you think plate tectonics might be useful for the formation of complex life?
    0:22:51 There’s a bunch of different things. One is that, you know, the earth went through a couple of phases of being a snowball planet.
    0:23:03 Like we, you know, we went into a period of glaciation where the pretty much the entire planet was under ice. The oceans were frozen. You know, early on in earth history, there was no, there was barely any land.
    0:23:10 We were actually a water world, you know, with just a couple of Australia sized cratons, they call them proto continents.
    0:23:14 So those, we went through these snowball earth phases.
    0:23:22 And if it wasn’t for the fact that we had kind of an active plate tectonics, which had a lot of volcanism on it, we could have been locked in that forever.
    0:23:33 Like once you get into a snowball state, a planet can be trapped there forever, which is, you know, maybe you already had life form, but then because it’s so cold, you may never get anything more than just microbes, right?
    0:23:46 So what plate tectonics does is because it fosters more volcanism, is that you’re going to get carbon dioxide pumped into the atmosphere, which warms the planet up and gets you out of the snowball earth phase.
    0:23:49 But even more, there’s even more really important things.
    0:24:01 I just finished a paper where we were looking at something called the hard steps model, which is this model that’s been out there for a long time that purports to say, intelligent life in the universe will be really rare.
    0:24:07 And it made all these assumptions about the earth’s history, particularly that the history of life and the history of the planet have nothing to do with each other.
    0:24:15 And it turns out, as I was doing the reading for this, that earth probably early on had a more mild form of plate tectonics.
    0:24:18 And then somewhere about a billion years ago, it ramped up.
    0:24:21 And that ramping up changed everything on the planet.
    0:24:22 Because here’s a funny thing.
    0:24:25 The earth used to be flat, what I mean by that, right?
    0:24:28 So all the flat earthers out there can get excited for one second clip it.
    0:24:35 What I mean by that is that there really weren’t many mountain ranges, right?
    0:24:39 The beginning of, I think the term is orogenesis, mountain building.
    0:24:48 The true Himalayan style giant mountains didn’t happen until this more robust form of plate tectonics, where the plates are really being driven around the planet.
    0:24:54 And that is when you get the crusts hitting each other and they start pushing into these Himalayan style mountains.
    0:25:02 The weathering of that, the erosion of that puts huge amounts of nutrients, you know, things that microbes want to use into the oceans.
    0:25:12 And then what we call the net primary productivity, you know, the photosynthesizers at the bottom of the food chain, how much sugar they are producing, how much photosynthesis they’re doing,
    0:25:15 shot up by a factor of almost a thousand, right?
    0:25:21 So the fact that you had plate tectonics supercharged evolution in some sense.
    0:25:33 You know, like we’re not exactly sure how, how it happened, but it’s clear that the amount of life, the amount of living activity that was happening really got a boost from the fact that suddenly there was plate, this new vigorous form of plate tectonics.
    0:25:43 So it’s nice to have turmoil in terms of temperature, in terms of surface geometries, in terms of the chemistry of the planet, turmoil.
    0:25:45 Yeah, that’s actually really true.
    0:25:49 Because what happens is if you look at the history of life, that’s a really, you know, it’s an excellent point you’re bringing up.
    0:25:56 If you look at the history of life on earth, we get, you know, abiogenesis somewhere around at least 3.8 billion years ago.
    0:26:03 And that's the first microbes. They kind of take over enough that you really do get a biosphere, a biosphere that is actively changing the planet.
    0:26:09 But then you go through this period they call the boring billion, where it's a billion years and it's just microbes.
    0:26:09 Nothing’s happening.
    0:26:10 It’s just microbes.
    0:26:12 I mean, the microbes are doing amazing things.
    0:26:14 They’re inventing fermentation.
    0:26:17 Thank you very much for that, we appreciate it.
    0:26:28 But it’s not until sort of you get probably this, these continents slamming into each other, you really get the beginning of continents forming and driving changes that evolution has to respond to.
    0:26:36 That on a planetary scale, this turmoil, this chaos is creating new niches, as well as closing other ones.
    0:26:38 And biology, evolution has to respond to that.
    0:26:47 And somewhere around there is when you get the Cambrian explosion, when suddenly every body plan appears, you know, evolution goes on an orgy, essentially.
    0:26:54 So yeah, it does look like that chaos or that turmoil was actually very helpful to evolution.
    0:27:02 I wonder if there is some extremely elevated levels of chaos, almost like catastrophes behind every leap of evolution.
    0:27:04 Like, you're not going to have leaps.
    0:27:10 Like in human societies, we have like an Einstein that comes up with a good idea.
    0:27:24 But it feels like on an evolutionary time scale, you need some real big drama going on for the evolutionary system to have to come up with a solution to that drama, like an extra complex solution to that drama.
    0:27:26 Well, I'm not sure if that's true.
    0:27:29 I don’t know if it needs to be like an almost extinction event, right?
    0:27:33 It is certainly true that we have gone through almost-extinction events.
    0:27:42 Sorry, we’ve had, you know, five mass extinctions, but you don’t necessarily see that like there was this giant evolutionary leap happening after those.
    0:27:48 So, you know, with the comet impact, the KT boundary, certainly, you know, lots of niches opened up.
    0:27:49 And that’s why we’re here, right?
    0:27:56 Because, you know, our ancestors were basically just little rodents, rats living under the footsteps of the dinosaurs.
    0:28:00 And it was that comet impact that opened the route for us.
    0:28:04 But it wasn’t, I mean, that still took another, you know, 65 million years.
    0:28:06 It wasn’t like this thing immediately happened.
    0:28:23 But what we found with this hard steps paper, because the whole idea of the hard steps paper was, it was one of these anthropic reasoning kinds of things where Brandon Carter said, Oh, look, the intelligence doesn’t show up on earth until about, you know, almost close to when the end of the sun’s lifetime.
    0:28:32 And so he’s like, well, there should be no reason why the sun’s lifetime and the time for evolution to produce intelligence should be the same.
    0:28:36 And so therefore, and he goes through all this reasoning, anthropic reasoning.
    0:28:43 And he ends up with the idea that like, oh, it must be that the odds of getting intelligence are super low.
    0:28:45 And so that’s the hard steps, right?
    0:28:48 So there was a series of steps in evolution that were, you know, very, very hard.
    0:28:55 And because of that, you can calculate some probability distributions, and everybody loves a good probability distribution, and they went a long way with this.
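To make the flavor of that calculation concrete, here is a small Monte Carlo sketch of the hard-steps idea. It is an illustration only, with arbitrary step counts and waiting times of my own choosing, not Carter's derivation or anything from the paper discussed here: when several individually unlikely steps must all fit inside a fixed habitable window, the rare successful runs tend to finish near the end of the window, at roughly n/(n+1) of it, which is the timing coincidence the model leans on.

```python
import random

# Toy Monte Carlo of the hard-steps timing argument (illustrative parameters).
random.seed(0)
T = 1.0              # habitable window (e.g. the star's lifetime), arbitrary units
N_STEPS = 5          # number of hypothesized hard steps
MEAN_WAIT = 2.0 * T  # mean waiting time per step, so each step is unlikely within T

finish_times = []
for _ in range(1_000_000):
    t = 0.0
    for _ in range(N_STEPS):
        t += random.expovariate(1.0 / MEAN_WAIT)  # exponential waiting time
        if t > T:
            break
    else:
        finish_times.append(t)  # all steps completed inside the window

print(f"successful runs: {len(finish_times)} out of 1,000,000")
print(f"mean completion time of the last step: "
      f"{sum(finish_times) / len(finish_times):.2f} "
      f"(hard-steps expectation ~ {N_STEPS / (N_STEPS + 1):.2f} of the window)")
```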
    0:29:14 But it turns out that the whole thing is flawed because, for one, when you look at it, of course the timescale for the sun's evolution and the timescale for the evolution of life are coupled, because the timescale for the evolution of the earth is about the same as the timescale for the evolution of the sun.
    0:29:15 It’s billions of years.
    0:29:17 The earth evolves over billions of years.
    0:29:19 And life and the earth co-evolve.
    0:29:26 That’s what Brandon Carter didn’t see is that actually the fate of the earth and the fate of life are inextricably combined.
    0:29:29 And this is really important for astrobiology, too.
    0:29:33 Life doesn’t happen on a planet.
    0:29:34 It happens to a planet.
    0:29:37 So this is something that David Grinspoon and Sarah Walker both say.
    0:29:39 And, you know, I agree with this.
    0:29:40 It’s a really nice way of putting it.
    0:29:49 So, you know, plate tectonics, the evolution of an oxygen atmosphere, which only happened because of life.
    0:29:57 These things, you know, these are things that are happening where life and the planet are sort of sloshing back and forth.
    0:30:09 And so rather than to your point about do you need giant catastrophes, maybe not giant catastrophes, but what happens is as the earth and life are evolving together, windows are opening up, evolutionary windows.
    0:30:23 Like, for example, life put oxygen into the atmosphere when life invented this new form of photosynthesis about two and a half billion years ago that broke water apart to, you know, work to do its chemical shenanigans.
    0:30:27 It broke water apart and pushed oxygen into the atmosphere.
    0:30:28 That’s why there’s oxygen in the atmosphere.
    0:30:29 It’s only because of life.
    0:30:35 That opened up huge possibilities, new spaces for evolution to happen.
    0:30:38 But it also changed the chemistry of the planet forever.
    0:30:48 So the evolution, the introduction of oxygen photosynthesis changed the planet forever and it opened up a bunch of windows for evolution that wouldn’t have happened otherwise.
    0:30:52 Like, for example, you and I, we need that amount of oxygen.
    0:30:59 Big brain creatures need an oxygen rich atmosphere because oxygen is so potent for metabolism.
    0:31:04 So you couldn’t get intelligent creatures 100 million years after the planet formed.
    0:31:15 So really on a scale of a planet, when there’s billions, trillions of organisms on a planet, they can actually have planetary scale impact.
    0:31:22 So the chemical shenanigans of an individual organism, when scaled out to trillions, can actually change a planet.
    0:31:33 Yeah, and we know this for a fact now. So there was this thing, Gaia theory, that James Lovelock introduced in the 70s, together with the biologist Lynn Margulis.
    0:31:51 So this Gaia theory was the idea that life takes over a planet, life hijacks the planet, in a way that the sum total of life creates these feedbacks between the planet and the life such that it keeps the planet habitable.
    0:31:52 It’s kind of a homeostasis, right?
    0:31:55 I can go out like right now outside, it’s 100 degrees, right?
    0:32:05 And I go outside, but my internal temperature is going to stay the same, and I can go back to, you know, Rochester, New York in the winter, and it's going to be, you know, zero degrees, but my internal temperature is going to be the same.
    0:32:06 That’s homeostasis.
    0:32:19 The idea of Gaia theory was that life, the biosphere exerts this pressure on the planet or these feedbacks on the planet that even as other things are changing, the planet will always stay in the right kinds of conditions for life.
    0:32:29 Now, when this theory came out, it was very controversial, people were like, oh my God, you know, what are you, smoking weed, you know, and like, there were all these Gaian festivals with Gaian dances.
    0:32:32 And so, you know, it became very popular in the New Age community.
    0:32:37 But Lovelock, actually, they were able to show that, no, this has nothing to do with like the planet being conscious or anything.
    0:32:43 It was about these feedbacks that, that by the biology, the biosphere can exert these feedbacks.
    0:32:48 And now that’s become, whether or not it’s still, we’re still unclear whether there are true Gaian feedbacks.
    0:32:52 In the sense that the planet can really exert complete control.
    0:32:58 But it is absolutely true that the biosphere is a major player in Earth’s history.
    0:33:01 So the biosphere fights for homeostasis on Earth.
    0:33:02 The bias.
    0:33:05 So, OK, what I would say right now is I don’t know if I can say that scientifically.
    0:33:17 I can certainly say that the biosphere does a huge amount of the regulation of the planetary state and over billions of years has strongly modified the evolution of the planet.
    0:33:21 So whether or not a true Gaian feedback would be exactly what you said, right?
    0:33:24 The idea that the biosphere is doing this somehow. Sarah Walker and David Grinspoon
    0:33:31 and I actually did a paper on this, about the idea of planetary intelligence or cognition across a planetary scale.
    0:33:33 And I think that actually is possible.
    0:33:36 It’s not conscious, but there is a kind of cognitive activity going on.
    0:33:41 The biosphere, in some sense, knows what is happening because of these feedbacks.
    0:33:48 So it’s still unclear whether we have these full Gaian feedbacks, but we certainly have semi Gaian feedbacks.
    0:33:54 If there's a perturbation on the planetary scale, temperature, you know, insolation, how much sunlight is coming in.
    0:33:59 The biosphere will start to have feedbacks that will damp that perturbation.
    0:34:02 Temperature goes up, the biosphere starts doing something, temperature comes down.
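As a purely illustrative sketch of that kind of damping feedback, here is a toy loop of my own, not Lovelock's Daisyworld or any model from this conversation: an external forcing pushes temperature up, a slowly ramping biospheric response pulls it back toward a set point, and the gains and set point are arbitrary.

```python
# Toy negative-feedback loop: a forcing raises temperature, and a slowly
# ramping "biosphere" response damps the perturbation back to the set point.
def damped_perturbation(forcing=2.0, setpoint=15.0, gain=0.5, steps=60):
    temperature, response = setpoint, 0.0
    trajectory = []
    for _ in range(steps):
        response += 0.2 * (temperature - setpoint)   # biosphere ramps up or down
        temperature = setpoint + forcing - gain * response
        trajectory.append(round(temperature, 2))
    return trajectory

traj = damped_perturbation()
print(traj[:5], "...", traj[-3:])   # starts near 17.0, relaxes back toward 15.0
```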
    0:34:13 Now, I wonder if the techno sphere also has a Gaian feedback or elements of a Gaian feedback such that the techno sphere will also fight to some degree for homeostasis.
    0:34:14 Open question, I guess.
    0:34:24 Well, that's, I'm glad you asked that question because in that paper that David and Sarah and I wrote, what we were arguing was that over the history of a planet, right?
    0:34:29 When life first forms, you know, 3.8 billion years ago, it’s kind of thin on the ground, right?
    0:34:32 You’ve got the first species, you know, these are all microbes.
    0:34:39 And there are not yet enough of them to exert any kind of these Gaian feedbacks.
    0:34:42 So we call that an immature biosphere.
    0:34:50 But then as time goes on, as life becomes more robust and it begins to exert these feedbacks, keeping the planet in the place where it needs to be for life.
    0:34:52 We call that a mature biosphere, right?
    0:34:56 And the important thing, and we’re going to, I’m sure later on, we’re going to talk about definitions of life and such.
    0:35:03 There’s this great term called auto-poesis that Francisco Varela, the neurobiologist Francisco Varela came up with.
    0:35:10 And he said, you know, one of the defining things about life is this property of auto-poesis, which means self-creating and self-maintaining.
    0:35:16 Life does not create the conditions which will destroy itself, right?
    0:35:19 It’s always trying to keep itself in a place where it can stay alive.
    0:35:26 So the biosphere from this Gaian perspective has been autopoietic for, you know, billions of years.
    0:35:31 Now, we just invented this techno-sphere in the last, you know, couple of hundred years.
    0:35:35 And what we were arguing in that paper is that it’s an immature techno-sphere, right?
    0:35:44 Because right now, with climate change and all the other things we're doing, we know the techno-sphere right now is sort of destroying the conditions under which it needs to maintain itself.
    0:35:55 So the real job for us, if we’re going to last over, you know, geologic timescales, if we want a techno-sphere that’s going to last tens of thousands, hundreds of thousands, millions of years,
    0:36:04 then we’ve got to become mature, which means to not undermine the conditions, to not subvert the conditions that you need to stay alive.
    0:36:07 So as of right now, we're not autopoietic.
    0:36:25 Well, I wonder, if we look across thousands, tens of thousands, hundreds of thousands of years, whether the techno-sphere should create perturbations as a way of developing greater and greater defenses against
    0:36:37 perturbations, which sounds like a ridiculous statement, but basically go out and play in the yard and hurt yourself to just strengthen the, or like drink water from the pond.
    0:36:38 From the pond, yeah, right.
    0:36:40 Get sick a few times.
    0:36:41 Just strengthen the immune system.
    0:36:42 Yeah.
    0:36:44 Well, you know, it’s interesting with the techno-sphere.
    0:36:51 We can talk about this more, but like, you know, we’re just emerging as a techno-sphere in terms of as a interplanetary techno-sphere, right?
    0:37:03 That’s really the next step for us is to, um, David Grinspoon talks about, I love this idea of anti-accretion, like this amazing thing that for the first time, you know, over the entire history of the planet, stuff is coming off the planet, right?
    0:37:08 Used to be everything just fell down, all the meteorites fell down, but now we’re starting to push stuff out.
    0:37:17 And, you know, like the idea of planetary defense or such, you know, we are actually going to start exerting perturbations on the solar system as a whole.
    0:37:19 We’re going to start engineering if we make it, right?
    0:37:24 I always like to say that if we can get through climate change, the prize at the end is the solar system, right?
    0:37:30 So we will, um, we'll literally be engineering the solar system.
    0:37:39 But what you can think of right now, with what's happening with the Anthropocene, the great acceleration, is the techno-sphere, you know, the creation of that.
    0:37:42 That is a giant perturbation on the biosphere, right?
    0:37:47 And, you know, the techno-sphere sits on top of the biosphere.
    0:37:55 And if the techno-sphere undermines the biosphere, its own conditions of habitability, then you're in trouble, right?
    0:37:57 I mean, the biosphere is not going away.
    0:37:58 There’s nothing we could do.
    0:38:01 Like the idea that we have to save the earth is a little ridiculous.
    0:38:05 Like the earth is not a furry little bunny that we need to protect, but it’s the conditions for us, right?
    0:38:11 We, humanity, emerged out of this, out of the Holocene, the last 10,000 years interglacial period.
    0:38:14 We can’t tolerate very different kinds of earths.
    0:38:16 Um, so that’s what I mean about a perturbation.
    0:38:20 Before we forget, I got to ask you about this paper, pretty interesting.
    0:38:23 Uh, there’s an interesting table here about hard steps.
    0:38:40 Abiogenesis, glucose fermentation, tuberic acid, all kinds of steps all the way to homo sapiens, animal intelligence, land ecosystems, endoskeletons, eye precursor, so formation of the eye, complex multicellularity.
    0:38:42 That’s definitely one of the big ones.
    0:38:43 Yeah.
    0:38:43 So interesting.
    0:38:45 I mean, what can you say about this chart?
    0:38:49 So there are all kinds of papers talking about the difficulty of these steps.
    0:38:50 Right.
    0:38:51 And so this was the idea.
    0:39:03 So what Carter said was, you know, using anthropic reasoning, he said, there must be a few very hard steps for the evolution to get through to make it to intelligence, right?
    0:39:05 So there’s some steps are going to be easy.
    0:39:10 So every generation, you know, you roll the dice and yeah, it won’t take long for you to get that step.
    0:39:17 But there must be a few of them, and he said you could even calculate how many there were, five or six, in order to get to intelligence.
    0:39:21 And so this paper here, this plot is all these different people who’ve written all these papers.
    0:39:22 And this is the point.
    0:39:29 Actually, you can see all these papers that were written on the hard steps, each one proposing a different set of what those steps should be.
    0:39:36 And there’s this other idea from biology of the major transitions in evolution, MTEs, that those were the hard steps.
    0:39:40 But what we actually found was that none of those are actually hard.
    0:39:45 The whole idea of hard steps, that there are hard steps is actually suspect.
    0:39:52 So, you know, what’s amazing about this model is it shows how important it is to actually work with people who are in the field, right?
    0:39:56 So, you know, Brandon Carter was a brilliant physicist, the guy who came up with this.
    0:40:06 And then lots of physicists and astrophysicists like me have used this, but the people who actually study evolution and the planet were never involved.
    0:40:14 Right. And if you went and talked to an evolutionary biologist or a biogeophysicist, they'd look at you when you explain this to them, and they'd be like, what?
    0:40:16 Like, what are you guys doing?
    0:40:29 Turns out, none of the details and none of the conceptual structure of this matches what the people who actually study the planet and its evolution have found.
    0:40:34 Is it mostly about the fact that there’s not really discrete big steps?
    0:40:36 Is this a gradual, continual kind of process?
    0:40:37 Well, there’s two things.
    0:40:40 The first most important one was that the planet and the biosphere have evolved together.
    0:40:45 That’s something that every, you know, most biogeophysicists completely accept.
    0:40:48 And it was the first thing that Carter kind of rejected.
    0:40:50 He said, like, no, that’s probably not possible.
    0:40:57 And yet, you know, if he'd only had more discussions with this other community, he would have seen that, no, there are actually windows that open up.
    0:41:01 And then the next thing is this idea of whether a step is hard or not.
    0:41:10 Because for a hard, what we mean by a hard step is that, like I said, every time there’s a generation, every time there’s the next generation born, you’re rolling the dice on whether this mutation will happen.
    0:41:19 And the idea of something being a hard step, there’s two ways in which something might even appear as a hard step and not be or actually not be a hard step at all.
    0:41:24 One is that you see something that has occurred in evolution has only happened once, right?
    0:41:25 So let’s take the opposite.
    0:41:32 We see something that’s happened multiple times, like wings, lots of examples of wings over lots of different evolutionary lineages.
    0:41:35 So that's clearly not hard; making wings is not a hard step.
    0:41:38 There are certain other things that people say, no, that’s a hard step.
    0:41:47 Oxygen, you know, oxygenic photosynthesis. But those tend to be so long ago that we've lost all the information.
    0:41:54 There could be other things in the fossil record that, you know, made this innovation, but they're just gone now.
    0:41:54 So you can’t tell.
    0:41:56 So there’s information loss.
    0:42:04 The other thing is the idea of pulling up the ladder that somebody, you know, some species makes the innovation, but then it fills the niche and nobody else can do it again.
    0:42:13 So yeah, it only happened once, but it happened once because basically the creature was so successful, it took over and there was no space for anybody else to evolve it.
    0:42:24 So yeah, so the interesting thing about this was seeing how, how much once you look at the details of life’s history on earth, how it really shifts you away
    0:42:25 from this hard steps model.
    0:42:28 And it shows you that those details, as we were talking about, like, do you have to know about the planet?
    0:42:30 Do you have to know about plate tectonics?
    0:42:31 Yeah, you’re going to have to.
    0:42:41 I mean, to be fair to Carter, on the first point, it makes it much more complicated if life and the planet are co-evolving.
    0:42:47 Because it’s not, it would be nice to consider the planet as a static thing that sets the initial conditions.
    0:42:54 Yeah. And then we can sort of, from an outside perspective, analyze planets based on the initial conditions they create.
    0:42:58 And then there’s a binary yes or no, will it create life?
    0:43:14 But if they co-evolve, it's just a really complex dynamical system, which is much more difficult from the perspective of SETI, of looking out there and trying to figure out which ones are actually producing life.
    0:43:23 But I think we're at the point now where there may be other kinds of principles at work, because co-evolution actually has its own... it's not deterministic, you're done with determinism, right?
    0:43:29 But complex systems have patterns, complex systems have constraints.
    0:43:33 And that’s actually what we’re going to be looking for, our constraints on them.
    0:43:40 And so, you know, and again, nothing against Carter, it was a brilliant idea, but it just goes to show, you know, there's this great xkcd... you know, I'm a theoretical physicist, right?
    0:43:47 And so I love simplified, give me a simplified model with, you know, it’s a dynamical equation, some initial conditions, I’m very happy.
    0:43:56 But there's this great xkcd comic where, like, you know, somebody's working something out on the board and this physicist is looking over and saying, oh, I just wrote down an equation for that.
    0:43:57 I solved your problem.
    0:43:58 Do you guys even have a journal for this?
    0:44:01 You know, subtitle is Why Everybody Hates Physicists.
    0:44:01 Yeah.
    0:44:04 So sometimes that approach totally works.
    0:44:12 Sometimes physicists, you know, we can be very good at like zooming in on what is important and casting the details aside so you can get to the heart of an issue.
    0:44:15 And that’s very useful sometimes.
    0:44:17 Other times it obfuscates, right?
    0:44:23 Other times it clouds over actually what you needed to focus on, especially when it comes to complexity.
    0:44:33 Speaking of simplifying everything down to an equation, let’s return back to the question of how many alien civilizations are out there.
    0:44:35 And talk about the Drake equation.
    0:44:35 Yeah.
    0:44:38 Can you explain the Drake equation?
    0:44:42 You know, people have various feelings about the Drake equation.
    0:44:47 You know, it can be abused, but basically it was the story actually is really interesting.
    0:44:52 So Frank Drake in 1960 does the first ever astrobiological experiment.
    0:44:56 He gets a radio telescope, points it at a couple of stars and listens for signals.
    0:45:02 That was the first time anybody had done any experiment about any kind of life beyond Earth in the history of humanity.
    0:45:05 And he does it and he’s kind of waiting for everybody to make fun of him.
    0:45:13 Instead, he gets a phone call from the government that says, hey, we want you to do a meeting on interstellar communications, right?
    0:45:17 So he’s like, OK, so they organize a meeting with like just eight people.
    0:45:19 A young Carl Sagan is going to be there as well.
    0:45:25 And like the night before Drake has to come up with an agenda.
    0:45:30 How do you come up with an agenda for a meeting on a topic that no one’s ever talked about before, right?
    0:45:32 And so what he does, and this is what's so brilliant about the Drake equation,
    0:45:37 is he breaks the problem of how many civilizations
    0:45:41 are there out there into a bunch of sub problems, right?
    0:45:43 And he breaks it into seven sub problems.
    0:45:48 Each one of them is a factor in an equation that when you multiply them all together,
    0:45:52 you get the number of civilizations out there that we could communicate with.
    0:45:56 So the first term is the rate at which stars form.
    0:46:00 The second term is the fraction of those stars that have planets, F sub p.
    0:46:05 The next term is the number of planets in the habitable zone, the place where we think life could form.
    0:46:13 The next term after that is the fraction of those planets where an abiogenesis event, where life forms, actually occurs.
    0:46:19 The next one is the fraction of planets on which you start to get intelligence.
    0:46:25 After that, it’s the fraction of planets where that intelligence goes on to create a civilization.
    0:46:29 And then finally, the last term, which is the one that we really care about is the lifetime.
    0:46:31 How long you have a civilization. Now, how long does it last?
    0:46:34 Well, you say we humans, we humans, right?
    0:46:40 Because we’re standing, we’re staring at the guy, you know, multiple guns pointing at a nuclear war, climate change, AI.
    0:46:44 So, you know, how long on in general does civilizations last?
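For reference, the seven factors just listed multiply together as N = R* x f_p x n_e x f_l x f_i x f_c x L. Below is a minimal sketch of that bookkeeping; the two planetary factors use the values quoted later in this conversation, while the biological and lifetime factors are placeholder guesses of mine, not numbers anyone here endorses.

```python
# The Drake equation as plain arithmetic; only f_p and n_e are empirically
# grounded (per the exoplanet census discussed below), the rest are guesses.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Number of communicating civilizations in the galaxy right now."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=2.0,   # stars formed per year in the galaxy (rough)
    f_p=1.0,      # fraction of stars with planets
    n_e=0.2,      # habitable-zone planets per star ("one in five")
    f_l=0.1,      # fraction of those where life starts (placeholder)
    f_i=0.01,     # fraction where intelligence evolves (placeholder)
    f_c=0.1,      # fraction that become detectable civilizations (placeholder)
    L=10_000,     # average civilization lifetime in years (placeholder)
)
print(f"N ~ {N:.1f}")   # ~0.4 with these guesses; the answer swings wildly with
                        # the unknown biological factors and with L
```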
    0:46:50 Now, with each one of these terms, what was brilliant about what he did was that he was quantifying our ignorance, right?
    0:46:55 By breaking the problem up into these seven sub problems, he gave astronomers something to do, right?
    0:46:57 And so, you know, this is always with a new research field.
    0:47:00 You need a research program or else you just have a bunch of vague questions.
    0:47:03 You don’t even know really what you’re trying to do.
    0:47:07 So, you know, the star people could figure out how many stars are forming per year.
    0:47:13 The people who are interested in planets could go and find techniques to discover planets, etc, etc.
    0:47:16 I mean, these are their own fields.
    0:47:20 Essentially, by creating this equation, he’s launching new fields.
    0:47:21 Yeah, that’s exactly.
    0:47:26 He gave astrobiology, which wasn’t even a term then, a roadmap like, OK, you guys go do this.
    0:47:27 You go do that.
    0:47:28 You go do that.
    0:47:37 And it had such far reaching effect on astrobiology because it did break the problem up in a way that gave useful,
    0:47:40 you know, sort of marching orders for all these different groups.
    0:47:51 Like, for example, it’s because of the Drake equation in some sense that people who were involved in SETI pushed NASA to develop the technologies for planet hunting.
    0:48:01 There were these amazing meetings in 1978 and 1979 that were driven in some part by the people who were involved in SETI getting NASA together to say,
    0:48:07 “Look, OK, look, how, you know, what’s what’s the roadmap for us to develop technologies to find, find planets?”
    0:48:18 So, yeah, so, you know, the Drake equation is absolutely foundational for astrobiology, but we should remember that it’s not a law of nature, right?
    0:48:21 It’s not something that’s it’s not equals MC squared.
    0:48:23 And so you can see it being abused in some sense.
    0:48:25 People, you know, it’s generated a trillion papers.
    0:48:26 Some of those papers are good.
    0:48:29 I’ve written some of those and some of those papers are bad.
    0:48:31 You know, I’m not sure where my paper fits in on those.
    0:48:34 So I’m saying, you know, one should be careful about what you’re using it for.
    0:48:43 But in terms of understanding the problem that that astrobiology faces, this really broke it up in a useful way.
    0:48:48 We could talk about each one of these, but let’s just look at exoplanets.
    0:48:48 Yeah.
    0:48:50 So that’s a really interesting one.
    0:48:57 I think when you look back, you know, hundreds of years from now, it was in the 90s when they first detected them, the first in '92 and then in '95.
    0:49:02 '95 to me was really it; that was the discovery of the first planet orbiting a sun-like star.
    0:49:04 To me, that was the dam being broken.
    0:49:09 I think that’s like one of the greatest discoveries in the history of science.
    0:49:10 I agree, I agree.
    0:49:16 Right now, I guess nobody’s celebrating it too much because you don’t know what it really means.
    0:49:28 But I think we almost certainly will find life out there, and it will obviously allow us to generalize across the entire galaxy, the entire universe.
    0:49:36 So if you can find life on a planet, even in the solar system, you can now start generalizing across the entire universe.
    0:49:36 You can.
    0:49:37 All you need is one.
    0:49:41 Like right now, it’s an, you know, our understanding of life, we have one example.
    0:49:43 We have n equals one example of life.
    0:49:45 So that means we could be an accident, right?
    0:49:51 It could be that we’re the only place in the entire universe where this weird thing called life has occurred.
    0:49:54 Get one more example and now you’re done.
    0:49:57 Because if you have one more example, now you’re even, you know, you don’t have to find all the other examples.
    0:49:59 You just know that it’s happened more than once.
    0:50:06 And now you are, you know, in from a Bayesian perspective, you can start thinking like, yeah, this life is not something that’s hard to make.
    0:50:10 Well, let me get your sense of estimates for the Drake equation.
    0:50:15 You've also written a paper expanding on the Drake equation, but what do you think is the answer?
    0:50:22 So the paper, there was this paper we wrote, Woody Sullivan and I in 2016, where we said, look, we have all this exoplanet data now, right?
    0:50:32 So the thing that exoplanet science and the exoplanet census I was talking about before have nailed is F sub p, the fraction of stars that have planets.
    0:50:39 It’s one every freaking star that you see in the sky hosts a family of worlds.
    0:50:44 I mean, it’s mind boggling because every one of those, those are all places, right?
    0:50:47 They’re either, you know, gas giants, probably with moons.
    0:50:49 So the moons are places you can stand and look out.
    0:50:57 Or they’re like terrestrial worlds where even if there’s not life, there’s still snow falling and there’s oceans washing up on, you know, on shorelines.
    0:51:02 It’s incredible to think how many places and stories there are out there.
    0:51:06 So, right, the first term was F sub p, which is how many stars have planets.
    0:51:10 The next term is how many planets are in the habitable zone, right?
    0:51:13 On average, and it turns out to be one over five, right?
    0:51:15 So, you know, we're at 0.2.
    0:51:18 So that means you just count five of them, go out at night and go one, two, three, four, five.
    0:51:24 One of them has an earth like planet, you know, in the habitable zone, like, whoa.
    0:51:26 So what, what defines a habitable zone?
    0:51:34 The habitable zone is an idea that was developed in 1958 by the Chinese American astronomer Su-Shu Huang.
    0:51:36 And it was, it was a brilliant idea.
    0:51:40 It said, look, this is there, you know, I can do this simple calculation.
    0:51:46 If I take a planet and just stick it at some distance from a star of what’s the temperature of the planet?
    0:51:47 What’s the temperature of the surface?
    0:51:53 So now you’re all you’re going to ask, you give it a standard kind of, you know, earth like atmosphere and ask, could there be liquid water on the surface?
    0:51:53 Right.
    0:51:56 We believe that liquid water is really important for life.
    0:51:58 There could be other things, that's fine.
    0:52:04 But, you know, if you were to start off trying to make life, you’d probably choose water as your solvent for it.
    0:52:12 So basically the habitable zone is the band of orbits around a star where you can have liquid water on the surface.
    0:52:15 You could take a glass of water, pour it on the surface and it would just pool up.
    0:52:21 It wouldn’t freeze immediately, which would happen if your planet is too far out and it wouldn’t just boil away if your planet’s too close in.
    0:52:25 So that’s the formal definition of the habitable zone.
    0:52:27 So it’s a nice strict definition.
    0:52:30 There’s probably way more going on than that, but this is a place to start.
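A back-of-the-envelope version of that "stick a planet at some distance and ask its temperature" calculation looks like the sketch below. It is my own illustrative blackbody estimate, not the original 1950s calculation, and it also shows why the simple version is only a starting point: Earth's bare equilibrium temperature comes out below freezing, and it is the greenhouse warming from the atmosphere that lifts the surface into the liquid-water range.

```python
import math

# Blackbody equilibrium temperature of a planet at distance a (in AU) from a
# Sun-like star, assuming fast rotation and an Earth-like albedo of 0.3.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26      # solar luminosity, W
AU = 1.496e11         # astronomical unit, m

def equilibrium_temperature(a_au, luminosity=L_SUN, albedo=0.3):
    flux = luminosity / (4 * math.pi * (a_au * AU) ** 2)   # stellar flux, W m^-2
    return ((1 - albedo) * flux / (4 * SIGMA)) ** 0.25     # kelvin

for a in (0.7, 1.0, 1.5):
    print(f"a = {a} AU  ->  T_eq ~ {equilibrium_temperature(a):.0f} K")
# Roughly 305 K, 255 K, 208 K: Earth's 255 K is below 273 K, so the real
# habitable-zone models fold in an atmosphere and its greenhouse effect.
```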
    0:52:31 Right.
    0:52:33 Well, we should say it’s a place to start.
    0:52:35 I do think it’s too strict of a constraint.
    0:52:36 I would agree.
    0:52:41 We’re talking about temperature where water can be on the surface.
    0:52:50 There’s so many other ways to get the aforementioned turmoil where the temperature varies, whether it’s volcanic.
    0:52:56 So interaction of volcanoes and ice and all of this on the moons of planets that are much farther away, all this kind of stuff.
    0:52:57 Yeah.
    0:53:07 Well, for example, we know in our own solar system, we have say Europa, the moon of Jupiter, which has got a hundred mile deep ocean under 10 miles of ice.
    0:53:08 Right.
    0:53:09 That’s not in the habitable zone.
    0:53:10 That is outside the habitable zone.
    0:53:12 And that may be the best place.
    0:53:14 It’s got more water than Earth does.
    0:53:18 All of its oceans are, you know, it’s twice as much water on Europa than there is on Earth.
    0:53:22 So, you know, that may be a really great place for life to form and it’s outside the habitable zone.
    0:53:26 So, you know, the habitable zone is a good place to start and it helps us.
    0:53:30 And there are reasons why you do want to focus on the habitable zone, because with Europa, I couldn't.
    0:53:35 I won't be able to see it from across telescopic distances, across light years.
    0:53:39 I wouldn't be able to see life on Europa because it's under 10 miles of ice.
    0:53:40 Right.
    0:53:47 So the important thing about planets in the habitable zone is that we're thinking they have atmospheres.
    0:53:54 Atmospheres are the things we can characterize from across 10, 50 light years, and we can see biosignatures, as we're going to talk about.
    0:54:00 So there is a reason why the habitable zone becomes important for the detection of extra solar life.
    0:54:10 But for me, when I look up at the stars, it's very likely that there's a habitable planet or moon around each of the stars, habitable defined broadly.
    0:54:14 Yeah, I think that’s not unreasonable to say.
    0:54:18 I mean, especially since with the formal definition, you get one in five, right?
    0:54:19 One in five is a lot.
    0:54:20 There’s a lot of stars in the sky.
    0:54:29 So yeah, saying that in general, when I look at a star, there's a pretty good chance that there's something habitable orbiting it is not an unreasonable scientific claim.
    0:54:36 To me, it seems like there should be alien civilizations everywhere.
    0:54:39 Why the Fermi Paradox?
    0:54:40 Why haven’t we seen them?
    0:54:43 Okay, the Fermi Paradox.
    0:54:47 Let’s talk about, I love talking about the Fermi Paradox because there is no Fermi Paradox.
    0:54:49 Dun dun dun dun.
    0:54:51 Yeah, so the Fermi Paradox.
    0:54:53 Let’s talk about the Fermi Paradox and the history of it.
    0:54:56 So Enrico Fermi, it’s 1950.
    0:55:01 He’s walking with his friends at Los Alamos Nuclear Weapons Lab to the Cantina.
    0:55:05 And there had been this cartoon in the New Yorker.
    0:55:12 They all read the New Yorker and the cartoon was trying to explain why there had been this rash of garbage cans
    0:55:13 disappearing in New York.
    0:55:16 And this cartoon said, oh, it’s UFOs because this is already, you know, it’s 1950.
    0:55:19 The first big UFO craze happened in ’47.
    0:55:26 So they’d all, they were laughing about this as they’re walking and they started being physicists started talking about interstellar travel, interstellar propulsion, blah, blah.
    0:55:28 You know, conversation goes on for a while.
    0:55:32 Conversation turns to something else, you know, gone to other things.
    0:55:36 About 40 minutes later, over lunch, Fermi blurts out, well, where is everybody?
    0:55:37 Right?
    0:55:38 Typical Fermi sort of thing.
    0:55:42 He’d done the calculation in his head and he suddenly realized that, look, if
    0:55:54 one, if they’re, you know, if intelligence is common, that even traveling at sub light speeds, a civilization could cross, you know, kind of hop from one star system to the other and spread
    0:55:57 it out across the entire galaxy in a few hundred thousand years.
    0:55:58 And he realized this.
    0:56:00 And so he was like, why aren’t they here now?
    0:56:03 And that was the beginning of the Fermi paradox.
    0:56:12 It actually got picked up as a formal thing in 1975 in a paper by Hart, where he actually kind of went through this calculation and showed and said, well, there’s
    0:56:16 nobody here now, therefore, there’s nobody anywhere that, you know, okay.
    0:56:18 So that is what we will call the direct Fermi paradox.
    0:56:20 Why aren’t they here now?
    0:56:25 But something happened where people after SETI began, where people started to, there was this idea of the great silence.
    0:56:33 People got this idea in their head that like, oh, we’ve been looking for decades now for signals of extraterrestrial intelligence that we haven’t found any.
    0:56:35 Therefore, there’s nothing out there.
    0:56:38 But that, so we’ll call that the indirect Fermi paradox.
    0:56:43 And there absolutely is no indirect Fermi paradox for the most mundane of reasons, which is money.
    0:56:45 There’s never been any money to look.
    0:56:53 They’re really, SETI was always done by researchers who were kind of like scabbing some time, you know, some extra time from their other projects.
    0:56:57 So, you know, look a little bit, you know, at the sky with a telescope.
    0:56:58 Telescopes are expensive.
    0:57:06 So Jason Wright, one of my collaborators, he and his students did a study where they looked at the entire search space for SETI, you know, and imagine that’s an ocean.
    0:57:12 All the different stars you have to look at, the radio frequencies you have to look at, how when you look, how often you look.
    0:57:16 And they looked, then they summed up all the SETI searches that had ever been done.
    0:57:17 They went through the literature.
    0:57:25 And what they found was, if that search space, if the sky is an ocean and you're looking for fish, how much of the ocean have we looked at?
    0:57:27 And it turns out to be a hot tub.
    0:57:29 That’s how much of the ocean that we’ve looked up.
    0:57:34 We’ve dragged in a hot tub’s worth of ocean water up and there was no fish in it.
    0:57:37 And so now are we going to say, well, there's no fish in the ocean, right?
    0:57:41 So there is absolutely, positively no indirect Fermi Paradox.
    0:57:45 We just haven’t looked, but we’re starting to look.
    0:57:47 So that’s what’s, you know, finally we’re starting to look.
    0:57:48 That’s what’s exciting.
    0:57:51 The direct Fermi Paradox, there are so many ways out of that, right?
    0:57:57 There’s a book called “77 Solutions to the Fermi Paradox” that it just, you know, you can pick your favorite one.
    0:58:01 It just doesn’t carry a lot of weight because there’s so many ways around it.
    0:58:05 We did an actual simulation, my group, Jonathan Carroll, one of my collaborators.
    0:58:10 We actually simulated the galaxy and we simulated probes moving at sublight speed
    0:58:14 from one star to the other, gathering resources, heading to the next one.
    0:58:19 And so we could actually track the expansion wave across the galaxy.
    0:58:23 Have one abiogenesis event and then watch the whole galaxy get colonized or settled.
    0:58:28 And it is absolutely true that that wave crosses, you know, Hart was right, Fermi was right.
    0:58:30 That wave crosses very quickly.
    0:58:33 But civilizations don’t last forever, right?
    0:58:35 So one question is, when did they visit?
    0:58:37 When did they come to Earth, right?
    0:58:42 So if you give civilizations a finite lifetime, you know, let them last 10,000, 100,000 years.
    0:58:44 What you find is you now have a steady state.
    0:58:46 Civilizations are dying.
    0:58:47 They’re, you know, they’re, they’re coming back.
    0:58:49 They’re traveling between the stars.
    0:58:51 What you find then is you can have big holes opened up.
    0:58:55 You can have regions of space where there is nobody for, you know, millions of years.
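The flavor of that kind of simulation can be caricatured in a few dozen lines. The sketch below is my own toy, not the group's actual model (their probes travel between specific stars and gather resources): random stars on a patch, settlements that can seed nearby empty stars each step, and a finite lifetime after which a settlement dies, which is enough to produce a churning steady state with temporarily empty regions rather than a permanently filled map.

```python
import math
import random

# Toy agent-based settlement model: finite expansion range plus finite
# lifetimes give a steady state with transient "holes", not total occupation.
random.seed(1)
N_STARS, HOP, STEPS = 200, 1.0, 200
P_SETTLE, P_DIE = 0.5, 0.05            # per-step settlement and die-off chances

stars = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N_STARS)]
settled = [False] * N_STARS
settled[0] = True                      # single seed civilization

occupancy = []
for _ in range(STEPS):
    newly_settled, newly_dead = [], []
    for i in range(N_STARS):
        if settled[i]:
            if random.random() < P_DIE:
                newly_dead.append(i)   # this settlement dies out
            for j in range(N_STARS):
                if (not settled[j] and random.random() < P_SETTLE
                        and math.dist(stars[i], stars[j]) < HOP):
                    newly_settled.append(j)
    for i in newly_dead:
        settled[i] = False
    for j in newly_settled:
        settled[j] = True              # dead stars can also be resettled later
    occupancy.append(sum(settled) / N_STARS)

print("fraction of stars settled, every 40 steps:",
      [round(f, 2) for f in occupancy[::40]])
```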
    0:58:59 And so if that, if we’re living in one of those bubbles right now,
    0:59:03 then maybe we were visited, but we were visited 100 million years ago.
    0:59:06 And there was a paper that Gavin Schmidt and I did that showed that if there was a civilization,
    0:59:12 whether it was like dinosaurs or aliens that was here 100 million years ago, there’s no way to tell.
    0:59:14 There’s just, there’s no record left over.
    0:59:16 The fossil record is too sparse.
    0:59:22 The only way maybe you could tell is by looking at the isotopic strata to see if there was anything
    0:59:24 reminiscent of an industrial civilization.
    0:59:30 But the idea that, you know, you’d be able to find, you know, iPhones or toppled buildings
    0:59:33 after 100 million years is there’s no way.
    0:59:41 So if there was an alien camp here, an alien village, a small civilization, maybe even a large civilization.
    0:59:44 Even a large civilization, even if it was 100 million years ago.
    0:59:46 And it lasted 10,000 years, fossil record’s not going to have it.
    0:59:48 Yeah, yeah.
    0:59:50 The fossil record is too sparse, right?
    0:59:52 Most things don’t fossilize.
    0:59:56 And 10,000 years is a, you know, blink in the eye of geological time.
    1:00:01 So we, Gavin, called this the Silurian hypothesis after the Doctor Who episode with the
    1:00:02 lizard creatures, the Silurians.
    1:00:05 And so that paper got a lot of press.
    1:00:09 But it was, you know, it was an important idea.
    1:00:10 And it was, it was really Gavin’s.
    1:00:15 I was just helping with the astrobiology that to recognize that like, yeah, you know, we could have
    1:00:16 been visited a long time ago.
    1:00:17 There just would be no record.
    1:00:20 Yeah, it’s kind of mind blowing.
    1:00:21 It’s really mind blowing.
    1:00:28 And it’s also a good reminder that we’ve been intelligent species have been here for a very
    1:00:29 short amount of time.
    1:00:30 Very short amount of time.
    1:00:30 Yeah.
    1:00:35 This is not to say that there was one. So, whenever I talked about it, you know, I was on Joe
    1:00:36 Rogan for exactly this paper.
    1:00:42 And I had to always emphasize, we're not saying there was a Silurian, you know, but we're just
    1:00:45 saying that if there was, that’s why I love Gavin’s question.
    1:00:47 Gavin’s question was just like, how could you tell?
    1:00:47 Right.
    1:00:49 It was a very beautifully scientific question.
    1:00:53 That’s what we were really showing is that you really, you know, unless you did a very
    1:00:57 specific kind of search, which nobody’s done so far, that, you know, there, there’s not
    1:01:02 an obvious way to tell that there, there could have been civilizations here earlier on.
    1:01:09 I’ve actually been reading a lot about ancient civilizations, and it just makes me
    1:01:17 sad how much of the wisdom of that time is lost and how much guessing is going on, whether
    1:01:20 it’s in South America, like what happened in the jungle?
    1:01:25 Yeah, like the Amazon, the Amazon problem. That was, you know, the conquistadors came and
    1:01:30 wiped everybody out, and even just the plague may have decimated them.
    1:01:33 So yeah, how much of that civilization?
    1:01:34 And there’s a lot of theories.
    1:01:40 And, you know, because archaeology only looks at cities, they don't really know the
    1:01:42 origins of humans.
    1:01:46 And there’s a, there’s a lot of really interesting theories in there, of course, controversial.
    1:01:49 There’s a lot of controversial people in every discipline.
    1:01:53 But archaeology is a fascinating one, because we know so little that you're basically
    1:01:58 storytellers, assembling the picture from just a very few puzzle pieces.
    1:01:59 It’s fascinating.
    1:02:02 It's humbling.
    1:02:08 And it’s sad that there could be entire civilizations, ancient civilizations that are
    1:02:11 either almost entirely or entirely lost.
    1:02:12 Yeah.
    1:02:16 Well, like the indigenous peoples of North America, there could have been like
    1:02:17 millions and millions.
    1:02:21 You know, we get this idea that like, oh, you know, the Europeans came and it was empty,
    1:02:26 you know, but it was may have only been empty because the plague had swept up from the,
    1:02:28 you know, from the, what happened in Mesoamerica.
    1:02:32 So, and, you know, they didn't really build cities, but they had... I mean,
    1:02:35 they didn't build stone cities.
    1:02:36 They built wooden cities, you know.
    1:02:40 Everybody seems to be building pyramids, and they’re really damn good at it.
    1:02:41 I don’t know.
    1:02:42 What is happening with a pyramid?
    1:02:43 Like, why does that appeal? Like, what archetype in our brain is that?
    1:02:45 Like what archetype in our brain is that?
    1:02:53 And it is also really interesting speaking of archetypes is that independent civilizations
    1:03:00 formed, and they had a lot of similar kind of dynamics, like human nature when it, it
    1:03:04 builds up hierarchies in a certain way, builds up myths and religions in a certain way, it
    1:03:08 builds pyramids in a certain way, it goes to war, all this kind of stuff.
    1:03:09 Yeah.
    1:03:11 Independently, they’re just fascinating.
    1:03:15 Santa Fe Institute, the stuff the Santa Fe Institute does on this as complex systems
    1:03:19 you know, there are the origin of hierarchies and such, very cool.
    1:03:22 Yeah, Santa Fe folks, complexity in general is really cool.
    1:03:27 What phenomena emerge when a bunch of small things get together and interact.
    1:03:33 Going back to this, this paper, a new empirical constraint on the prevalence of technological
    1:03:37 species in the universe, this paper that expands on the Drake equation.
    1:03:39 What are some interesting things in this paper?
    1:03:43 Well, so the main thing we were trying to do with this paper is say, look, we have all of
    1:03:45 this exoplanet data, right?
    1:03:49 It’s got to be good for something, especially since two of the terms that have been nailed
    1:03:52 down empirically are two terms in the Drake equation.
    1:03:56 So F sub P, that’s the second term, fraction of stars that have planets.
    1:04:01 And then N sub B, the average number of planets in the habitable zone, those are the
    1:04:03 second and third term in the Drake equation.
    1:04:06 So what that means is all the astronomical terms have been nailed.
    1:04:10 And so we said like, okay, how do we use this to do something with the Drake equation?
    1:04:13 And so we realized is, well, okay, we got to get rid of time.
    1:04:15 The lifetime thing, we can’t say anything about that.
    1:04:21 But if we let that, if we don’t ask how long they last, but instead ask, what’s
    1:04:26 the probability that there have been any civilizations at all, no matter how long
    1:04:26 they lasted.
    1:04:28 I’m not asking whether they exist now or not.
    1:04:34 I’m just asking in general about probabilities to make a technological
    1:04:37 civilization anywhere and at any time in the history of the universe.
    1:04:39 And that we were able to constrain.
    1:04:49 And so what we found was basically that there have been 10 billion trillion habitable
    1:04:51 zone planets in the universe.
    1:04:57 And what that means is those are 10 billion trillion experiments that
    1:04:57 have been run.
    1:05:03 And the only way that we're the only time this whole process,
    1:05:08 from, you know, abiogenesis to a civilization, has occurred is if every
    1:05:09 one of those experiments failed.
    1:05:09 Right.
    1:05:14 So therefore you could put a probability, we called it the pessimism line, right?
    1:05:18 We don’t really know what nature sets for the probability of making intelligent
    1:05:19 civilizations, right?
    1:05:21 But we could set a limit using this.
    1:05:26 We could say, look, if the probability per habitable zone planet is less
    1:05:30 than 10 to the minus 22, one in 10 billion trillion, then yeah, we’re alone.
    1:05:34 If it’s anywhere larger than that, then we’re not the first.
    1:05:35 It’s happened somewhere else.
    1:05:37 And to me, that was, that was mind blowing.
    1:05:40 Doesn’t tell me there’s anybody nearby, the galaxy could be sterile.
    1:05:46 It just told me that like, you know, unless nature really has
    1:05:50 some bias against civilizations, we're not the first time this has happened.
    1:05:53 This has happened elsewhere over the course of cosmic history.
    1:05:57 10 billion trillion experiments.
    1:05:59 Yeah, that’s a lot of experiments.
    1:05:59 That’s a lot.
    1:06:00 Right.
    1:06:01 A thousand is a lot.
    1:06:01 Yeah.
    1:06:02 A hundred is a lot.
    1:06:03 Yeah.
    1:06:10 If we normal humans saw a hundred experiments and we knew that at least
    1:06:16 one time there was a successful human civilization built, I mean, we would say
    1:06:18 for sure in a hundred, you’ll get another one.
    1:06:18 Yeah.
    1:06:19 Yeah.
    1:06:19 So that’s what I mean.
    1:06:22 That’s why, so this, you know, these kinds of arguments, you have to be careful
    1:06:22 with what they can do.
    1:06:26 But what it really, I felt like what this paper showed was that, you know, the
    1:06:28 burden of proof is now on the pessimists, right?
    1:06:30 So that’s why we called it the pessimism line.
    1:06:34 There’s been, you know, throughout history, there’s been, you know, alien
    1:06:37 pessimists and alien optimists, and they’ve been yelling at each other.
    1:06:38 That’s all they had to go with, right?
    1:06:42 You know, and like with Giordano Bruno in 1600, they burned the guy at the
    1:06:43 stake for being an alien optimist.
    1:06:46 But nobody really knew what pessimism or optimism meant.
    1:06:49 This, you know, we sort of thought this was like the Planck length.
    1:06:51 This was sort of the Planck length of astrobiology.
    1:06:55 It gave you an actual number that, you know, if you could somehow calculate what
    1:06:59 the probability, you know, of forming a technological civilization was, this
    1:07:02 thing sort of shows you where the limit is.
    1:07:06 As long as you're above 10 to the minus 22, then absolutely,
    1:07:09 other civilizations have
    1:07:10 occurred in the history of the universe.
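The arithmetic behind that line is short enough to write out. The sketch below just uses the round numbers quoted here (the 2016 paper works the estimate more carefully), plus a standard "at least one other success" probability for a few hypothetical per-planet chances, which is my illustrative addition rather than anything computed in the conversation.

```python
import math

# "10 billion trillion" habitable-zone planets over cosmic history.
n_hz_planets = 10e9 * 1e12                 # = 1e22 "experiments"
pessimism_line = 1.0 / n_hz_planets
print(f"pessimism line ~ {pessimism_line:.0e} per habitable-zone planet")  # 1e-22

# If nature's per-planet probability p of producing a technological civilization
# sits above that line, another success somewhere becomes overwhelmingly likely.
for p in (1e-24, 1e-22, 1e-20):            # hypothetical values, for illustration
    p_other = 1 - math.exp(-p * n_hz_planets)
    print(f"p = {p:.0e}  ->  chance at least one other arose ~ {p_other:.2f}")
```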
    1:07:15 So to me, at least, the big question is f sub l, which is basically abiogenesis.
    1:07:18 How hard is it for life to originate on a planet?
    1:07:22 Cause all the other ones seem very likely.
    1:07:23 Everything seems very likely.
    1:07:26 The only open question to me is like, how hard is it for life to originate?
    1:07:30 There’s lots of ways to, again, you know, we don’t know unless we look and
    1:07:33 the, you know, you had Sarah walk around not too long ago.
    1:07:35 You know, she’s very interested in origins of life.
    1:07:39 Um, uh, so, you know, lots of people are working on this, but I think
    1:07:42 it’s, it’s hard looking at the history of the earth.
    1:07:44 You know, and again, this is, you can do Bayesian arguments on this.
    1:07:48 Um, but yeah, it’s forming life.
    1:07:51 I don’t think it’s hard getting, getting like basic biology started.
    1:07:52 I don’t think it’s hard.
    1:07:53 It’s still wild.
    1:07:57 It’s an amazing process that actually I think requires some deep rethinking
    1:08:01 about how we conceptualize what life is and what life isn’t.
    1:08:03 That’s one of the things I like about Sarah’s work.
    1:08:07 Um, we’re, we’re pursuing on a different level, uh, about the life as
    1:08:11 that the only process or the only system that uses information.
    1:08:16 Um, but still, regardless of all those kinds of details, uh, life is probably
    1:08:16 easy to make.
    1:08:18 That’s, that’s my, that’s my gut feeling.
    1:08:23 You know, I mean, day by day, this changes for me, but I just see once
    1:08:27 you create bacteria, it's off to the races.
    1:08:30 You’re going to get complex life as long as you have enough time.
    1:08:36 I mean, that boring billion... but I just can't imagine, uh, a habitable planet
    1:08:39 not having a couple of billion years to spare.
    1:08:39 Yeah.
    1:08:41 A couple of billion years to spare.
    1:08:45 You know, there is a mystery there about why it took so long, like with
    1:08:48 the Cambrian explosion, but that may be again about these windows, that it
    1:08:53 couldn't happen until the planet and life had evolved
    1:08:57 together enough that they together kind of opened the window for the next step.
    1:09:02 Um, you know, intelligent life, and similarly
    1:09:03 technological civilizations.
    1:09:07 I think there’s a big question about how long those last and how, you know, I’m
    1:09:12 hopeful, you know, um, but, uh, but in terms of just like, I think life is
    1:09:15 absolutely going to be common in the, you know, pretty common in the universe.
    1:09:19 Yeah, I think it’s absolutely like, I think, uh, again, if I were to put
    1:09:23 everything, uh, even advanced civilizations are common.
    1:09:30 So to me, then, the only explanation is L: our galaxy is a
    1:09:32 graveyard of civilizations.
    1:09:33 Yeah.
    1:09:35 Because, you know, you think about it, we've only been around, I mean, as a
    1:09:39 technological civilization, truly, you know, when we think about Drake's, uh, definition, you
    1:09:40 had to have radio telescopes.
    1:09:44 That’s been a hundred years, you know, and if we got another 10,000, a hundred
    1:09:47 thousand years of history, that would be, for us, it’d be pretty amazing, right?
    1:09:51 Um, but that's still, that wouldn't be long enough to really pump up the
    1:09:54 number of civilizations in the galaxy.
    1:09:57 So you really need it to be like hundreds of millions of years.
    1:10:01 And that raises a question, which I am very interested in, which is how do
    1:10:04 we even talk about, I call it the billion year civilization, right?
    1:10:09 How do we even begin to hypothesize or think about in any kind of systematic
    1:10:14 way, what happens to a technological civilization across hundreds of
    1:10:16 millions to a billion years?
    1:10:16 Yeah.
    1:10:19 Like how, how do you even simulate the trajectories that civilizations
    1:10:21 can take across that kind of timescale?
    1:10:22 Yeah.
    1:10:27 Uh, when we, all the data we have is just for the 10,000 years or, or so 20,000
    1:10:30 years that humans have been building civilizations.
    1:10:33 And then just, I don’t, I don’t know what you put it at, but maybe a hundred
    1:10:35 years that we’ve been technological.
    1:10:36 Yeah.
    1:10:38 And we’re ready to blow ourselves to bits or, you know, drive
    1:10:39 ourselves off the planet.
    1:10:40 Yeah.
    1:10:42 No, it’s really interesting, but there’s got to be a way that I think
    1:10:43 that’s really a frontier.
    1:10:45 So you had David Kipping on not too long ago.
    1:10:48 Um, and David and I did a paper, uh, with Caleb Scharf.
    1:10:51 David really drove this, uh, where we, you know, it was a Bayesian
    1:10:53 calculation to sort of ask the question.
    1:10:56 If you, if you were to find a detection, if you were to find a signal
    1:11:00 or, you know, a techno signature, would that come from a civilization
    1:11:03 that was younger, the same age as us, or older?
    1:11:06 And you could see, I mean, this is not hard to do, but it was great.
    1:11:08 The formalism, the formalism was hard, you know, it’s kind of
    1:11:12 intuitive, but the formalism was hard to show that, yeah, they’re older,
    1:11:13 you know, probably much older.
    1:11:16 So that means you really do need to think about, like, okay, how do
    1:11:19 billion year civilizations manifest themselves?
    1:11:20 What signatures will they leave?
    1:11:23 And yeah, can you even, I mean, what’s so cool about it?
    1:11:26 It’s so much fun because you’ve got to, like, you have to, you have
    1:11:28 to imagine the unimaginable.
    1:11:31 Like, you know, would you still, I mean, obviously biological evolution
    1:11:34 can happen on, you know, on those kinds of time scales.
    1:11:37 So you wouldn’t even really be the same thing you started out as, but
    1:11:39 social forms, what kind of social forms?
    1:11:42 Can you imagine that would be continuous over that?
    1:11:43 Or maybe they wouldn’t be continuous.
    1:11:45 You should get, they drop out, you know, they destroy themselves
    1:11:46 and then they come back.
    1:11:51 So maybe it’s, you know, it’s a truncated or a punctuated evolution.
    1:11:53 I mean, but we got to sort of, this is the fun part.
    1:11:54 We have to sort of work this out.
    1:11:59 Well, I mean, one way to approach that question is like, how, what are
    1:12:02 the different ways to achieve homeostasis as you get greater and
    1:12:04 greater technological innovation?
    1:12:10 So like, if you expand out into the universe and you have, uh,
    1:12:13 gone up the Kardashev scale, what, what are the ways you can avoid destroying
    1:12:17 yourself, to achieve stability while still growing?
    1:12:18 Yeah.
    1:12:22 And I mean, that’s an interesting question.
    1:12:23 I think it’s probably simulatable.
    1:12:27 Could be, I mean, you know, agent-based modeling, you could do it with that.
    1:12:30 So, so, you know, our group has used agent-based modeling to do
    1:12:33 something like the Fermi paradox that was, that was agent-based modeling.
    1:12:34 But you can also do this.
    1:12:35 People at Santa Fe have done this.
    1:12:39 Other groups have done this, used agent-based modeling to track the, the
    1:12:44 formation of hierarchies, the formation of stable hierarchies.
    1:12:48 The, so I think that, I think it’s actually very doable, but, um, understanding
    1:12:51 the kind of assumptions and principles that are going into it and what you
    1:12:54 can extract from those, that is what is sort of the frontier.
    1:13:02 Do you think if humans colonize Mars, the dynamic between the civilization
    1:13:07 on Earth and Mars will be fundamentally different than the dynamic between
    1:13:09 individual nations on Earth right now?
    1:13:12 Like that’s, that’s the thing to load into the simulation, the agent-based
    1:13:13 simulation we’re talking about.
    1:13:17 If we settle it, Mars will very quickly want to become its own nation.
    1:13:21 Well, no, there’s already going to be nations on Mars.
    1:13:22 That’s guaranteed.
    1:13:22 Yeah.
    1:13:25 The moment you have two million people, one, the moment you have one million
    1:13:27 people, there’s going to be two tribes.
    1:13:29 And then they’re going to start fighting.
    1:13:30 Right.
    1:13:33 And the question is interplanetary fighting.
    1:13:34 How quickly does that happen?
    1:13:36 And does it have a different nature to it?
    1:13:38 Because of the distances, you know?
    1:13:40 Are you a fan of The Expanse?
    1:13:41 Do you, have you watched The Expanse?
    1:13:42 Great show.
    1:13:45 Cause it’s all about the, I highly recommend it to everybody.
    1:13:46 It’s based on a series of books that are excellent.
    1:13:48 It’s on Prime, six seasons.
    1:13:50 And it’s basically about the settled solar system.
    1:13:53 It takes place about 300 years from now and the entire solar system is settled.
    1:13:57 And it is the best show about interplanetary politics.
    1:14:02 The first season, actually, the journal, what was it, Foreign Affairs, said it was
    1:14:06 the best show on TV about politics, and it takes place, it’s interplanetary.
    1:14:09 So yeah, I think, you know, human beings being human beings.
    1:14:12 Yes, where there will be warfare and there will be conflict.
    1:14:16 I don’t think it’ll be necessarily all that different, you know, because really,
    1:14:20 I think within a few hundred years, we will have lots of people in the solar system.
    1:14:22 And it doesn’t even have to be on Mars.
    1:14:26 We did a paper where we look based on, because I wanted to know about whether
    1:14:29 an idea in the expanse was really possible.
    1:14:32 In the expanse, the asteroid belt, what they’ve done is they have
    1:14:35 colonized the asteroid belt by hollowing out the asteroids and spinning them up
    1:14:37 and living on the inside, right?
    1:14:40 Because they have the Coriolis force and I thought, like, wow, what a cool idea.
    1:14:44 And when I ran the blog for NPR, actually talked to the guys and said,
    1:14:47 did you guys calculate this to see if whether it’s possible?
    1:14:48 Sadly, it’s not possible.
    1:14:52 The rock is just not strong enough that if you tried to spin it up
    1:14:56 to the speeds you need to get one third gravity, which is, I think,
    1:14:59 the minimum you need for human beings, the rock would just fall apart.
    1:14:59 It would break.
    1:15:03 But we came up with another idea, which was that if you could take small
    1:15:07 asteroids, put a giant bag around them, a nanofiber bag and spin those up.
    1:15:12 It would inflate the bag and then even a small, couple of kilometer wide
    1:15:18 asteroid would expand out so you could get like a Manhattan’s worth of material inside.
    1:15:21 So forget about even colonizing Mars, space stations, right?
    1:15:24 Or space habitats with millions of people in them.
    1:15:28 So anyway, the point is that I think, you know, within a few hundred years,
    1:15:32 it is not unimaginable that there will be millions, if not billions,
    1:15:34 of people living in the solar system.
    1:15:38 And you think most of them will be in space habitats versus on Mars
    1:15:39 and on the planetary surface?
    1:15:42 You know, it’s a lot easier on some on some level, right?
    1:15:44 It depends on how like with nanofabrication and such.
    1:15:47 But, you know, getting down to gravity well is hard, right?
    1:15:50 So, you know, there’s a certain way in which there’s a lot of, you know,
    1:15:53 it’s a lot easier to build real estate out of the asteroids.
    1:15:54 But we’ll probably do both.
    1:15:56 I mean, I think what will happen is, you know, the next,
    1:16:00 should we make it through climate change and nuclear war and all the other threats, and AI,
    1:16:05 the next thousand years of human history is the solar system, right?
    1:16:10 And so, you know, I think we’ll settle every nook and cranny we possibly can.
    1:16:14 And it’s, you know, it’s a beautiful, what I love about what’s hopeful about it
    1:16:16 is this idea you’re going to have all of these pockets.
    1:16:20 And, you know, I’m sure there’s going to be a Mormon space habitat, like, you know,
    1:16:23 there’s going to be whatever you want, a libertarian space habitat.
    1:16:25 Everybody’s going to be able to kind of create there.
    1:16:27 There’ll be lots of experiments in human flourishing.
    1:16:31 And those kinds of experiments will be really useful for us to sort of figure
    1:16:36 out better ways for us to interact and have maximum flourishing, maximum wellness,
    1:16:38 maximum democracy, maximum freedom.
    1:16:42 Do you think that’s a good backup solution to go out into space
    1:16:47 sort of to avoid the possibility of humans destroying themselves completely here on Earth?
    1:16:50 Well, I think, you know, I want to be always careful with that because,
    1:16:53 like I said, it’s centuries that we’re talking about, right?
    1:16:57 So, you know, the problem with climate change and same with nuclear war,
    1:16:58 it’s breathing down our necks now.
    1:17:04 So it’s not a, you know, trying to establish a base on Mars is going to be
    1:17:09 so hard that it is not even going to be close to being self-sufficient for a couple
    1:17:11 of, you know, a century at least.
    1:17:13 So it’s not like a backup plan now.
    1:17:16 You know, we have to solve the problem of climate change.
    1:17:17 We have to deal with that.
    1:17:19 There’s still enough nuclear weapons to really do our, you know,
    1:17:22 horrific things to the planet for human beings.
    1:17:24 So I don’t think it’s like a backup plan in that way.
    1:17:26 But I do think, like I said, it’s the prize.
    1:17:31 It’s, you know, if we get through this, then we get the entire solar system to
    1:17:35 sort of play around in and experiment with and do really cool things with.
    1:17:38 Well, I think it could be a lot less than a couple of centuries.
    1:17:43 If there’s an urgency, like a real urgency, like a catastrophe, like,
    1:17:49 maybe a small nuclear war breaks out where it’s like, holy shit,
    1:17:52 this is for sure, for sure a bigger one is looming.
    1:17:56 Yeah, maybe if geopolitically, the war between China and the United States
    1:18:00 escalates, where there’s this tension that builds and builds and builds.
    1:18:03 And it becomes more obvious that we need to really, really be that story.
    1:18:09 I think my only dilemma with that is that I just think that a self-sufficient base
    1:18:12 is so far away that, like I say, you start doing that.
    1:18:14 And then there is a full-scale nuclear exchange.
    1:18:17 That base is, you know, it’s not going to last because it’s just, you know,
    1:18:22 the self-sufficiency requires a kind of economy, like literally a material
    1:18:27 economy that we are so far from with Mars, that we are centuries from.
    1:18:30 Like I said, you know, three centuries, which is not that long.
    1:18:34 Two to three centuries, you know, look at 1820, nobody had traveled faster
    1:18:37 than 60 miles an hour unless they were falling off a cliff, right?
    1:18:41 And now we routinely travel at 500 miles an hour, but it is sort of centuries long.
    1:18:45 So that’s why I think, I think we’d be better off trying to solve these problems
    1:18:49 than, you know, I just think the odds that we’re going to be able to create
    1:18:57 a self-sufficient colony on Mars before that threat comes to head is small.
    1:18:58 So we’d have to deal with the threat.
    1:19:02 Yeah, it’s an interesting scientific and engineering question of how to create
    1:19:06 a self-sufficient colony on Mars or out in space as a space habitat.
    1:19:10 Like where Earth entirely could be destroyed, you could still survive.
    1:19:11 Yeah, yeah.
    1:19:13 Because it’s really what about, you know, thinking about complex systems, right?
    1:19:21 A space habitat, you know, would have to be as robust as an ecosystem, as the kind
    1:19:24 of thing, you know, you go out and you see a pond with all the different webs
    1:19:25 of interactions.
    1:19:30 You know, that’s why I always think that, you know, this process of going
    1:19:34 out into space actually will help us with climate change and with thinking
    1:19:38 about making a long-term sustainable version of human civilization, because
    1:19:42 you really have to think about these webs, the complexity of these webs
    1:19:44 and recognize the biosphere has been doing this forever.
    1:19:46 The biosphere knows how to do this, right?
    1:19:50 And so, A, how do we support, how do we build a vibrant, powerful
    1:19:55 techno sphere that also doesn’t, you know, mess with the biospheres, mess
    1:19:58 with the biosphere’s capacity to support our techno sphere?
    1:20:01 So, you know, by doing this, by trying to build space habitats in some
    1:20:04 sense, you’re thinking about building a small scale version of this.
    1:20:07 So I think, I think the two problems are going to kind of feed back on each other.
    1:20:12 Well, there’s also the other possibility of, uh, like the movie, uh,
    1:20:16 Darren Aronofsky’s Postcard from Earth, where we can create this kind
    1:20:22 of life gun that just shoots life out, so as opposed to, uh, engineering everything.
    1:20:27 Basically seeding life on a bunch of places and letting life do its thing,
    1:20:31 which is really good at doing, it seems like, so as opposed to like the, with
    1:20:36 a space habitat, you basically have to build the entire biosphere and techno
    1:20:38 sphere, the whole, the whole thing, by yourself.
    1:20:42 Yeah, uh, you know, if you just, hey, the aforementioned cockroach
    1:20:48 with some bacteria, place it on Europa, uh, I think you’d be surprised what happens.
    1:20:48 Yeah.
    1:20:49 Right.
    1:20:55 Like honestly, if you put a huge amount of bacteria, like a giant
    1:21:02 number of organisms from Earth into, uh, on Mars, on, uh, some of these moons
    1:21:06 of the other planets in the solar system, do you think like, I feel like
    1:21:08 some of them would actually find a way to survive?
    1:21:11 I, you know, the moon is hard because the moon is just like, there’s no, you
    1:21:14 know, the moon may be really hard, but you know, that’d be, I mean, I wonder
    1:21:16 if somebody’s must have done these experiments, right?
    1:21:18 Like how, because we know there are extremophiles, right?
    1:21:21 We know that they’re, you can go down, you know, 10 miles below the Earth’s
    1:21:24 surface and there are things where there’s no sunlight.
    1:21:29 There’s, you know, the conditions are so extreme and there’s lots of microbes
    1:21:32 having a great time, living off the radioactivity, you know, in the rocks.
    1:21:36 But, you know, they had lots of time to evolve to those conditions.
    1:21:41 So I’m not sure if you dumped a bunch of bacteria, you know, so somebody,
    1:21:42 like somebody must have done these experiments.
    1:21:50 Like, you know, how fast could microbial evolution occur under harsh
    1:21:54 conditions, that you maybe get somebody who figures out, okay, I can deal with this.
    1:21:56 I think the moon’s too much because it’s so sterile.
    1:21:59 But, you know, Mars, I don’t know, maybe, I don’t know.
    1:22:01 We’d have to, that, but it’s an interesting idea.
    1:22:03 I wonder if somebody has done those experiments.
    1:22:06 Yeah, you think somebody would, like, let’s take a bunch of microbes.
    1:22:09 The harsh, take the harshest possible condition of all different kinds,
    1:22:10 temperature, all this kind of stuff.
    1:22:13 Right, pressure, salinity, and then just, like, dump a bunch of things
    1:22:17 that are not used to it and then just see, does everybody just die?
    1:22:18 You know, that’s it.
    1:22:18 There’s, you know.
    1:22:23 The thing about life, it, it flourishes in a non-sterile environment where
    1:22:27 there’s a bunch of options for resources, even if the condition is super harsh.
    1:22:32 In the lab, I don’t know if you can reconstruct harsh conditions plus options
    1:22:33 for survival.
    1:22:34 You know what I mean?
    1:22:40 Like, you have to have the, the, the huge variety of resources that are always
    1:22:44 available on a planet somehow, even when it’s in super harsh conditions.
    1:22:47 So that, so that’s actually not a trivial experiment.
    1:22:50 And I wouldn’t even, if somebody did that experiment in the lab, I’d be a little
    1:22:55 bit skeptical because, like, if, because I could see bacteria doesn’t survive
    1:22:59 in this kind of temperature, but then I’m feeling, I don’t know, I don’t know.
    1:23:00 Is there enough, right?
    1:23:03 Is that, you know, is there, are there other options?
    1:23:05 Like, you know, is the condition rich enough?
    1:23:06 Rich enough, yeah.
    1:23:08 You know, there’s, there’s an alternative view, though, which is there’s
    1:23:11 this great book by Kim Stanley Robinson called Aurora.
    1:23:15 You know, so there’s been a million, um, century ship stories, like where, you
    1:23:19 know, Earth sends out a, you know, generation ship or century ship and it goes
    1:23:21 to another planet and they land and they colonize.
    1:23:24 And on this one, they get all the way there and they think it’s, the plan’s
    1:23:28 going to be habitable and it turns out that it’s not habitable for earth life.
    1:23:30 Like that, you know, there’s, there’s like, you know, bacteria or prions
    1:23:35 actually, you know, super that just like, you know, kill people in the simplest way.
    1:23:38 Um, and the, the important thing about this book was the idea that like, you
    1:23:41 know, life is actually very tied to its planet.
    1:23:42 It may not be so easy.
    1:23:44 I just thought it was a really interesting idea.
    1:23:49 I’m not necessarily supporting it, but that actually life reflects the planetary
    1:23:53 conditions that not the planetary, the planet itself, the whole lineage, the
    1:23:57 whole history of the biosphere, and it may not be so easy to, to just sort
    1:24:00 of be like, oh, just drop it over here and it’ll, you know, cause the bacteria,
    1:24:03 even though they’re individual examples of life, and I kind of believe this,
    1:24:07 the true unit of life, it’s not DNA, it’s not a cell.
    1:24:08 It’s the biosphere.
    1:24:10 It’s the whole community.
    1:24:10 Yeah.
    1:24:15 That’s actually an interesting field of study is how when you arrive from one
    1:24:20 planet to another, so we humans arrive to a planet that has a biosphere, maybe
    1:24:29 a techno sphere, what is the way to integrate without killing yourself or,
    1:24:31 or the other one, or the other one.
    1:24:33 That’s, let’s stick to biology.
    1:24:35 Like that, that’s an interesting question.
    1:24:41 I don’t know if we have a rigorous way of investigating that.
    1:24:45 Because everybody, everything alive, you know, has the same lineage.
    1:24:48 We all come from Luca, you know, the last universal common ancestor.
    1:24:50 And what you see is often in science fiction, people will do things like,
    1:24:56 oh, well, it’s okay because like that bio, that metabolism, that biochemistry is so
    1:24:59 different from ours that we can coexist because they don’t even know each other,
    1:24:59 you know, right?
    1:25:02 That the, you know, and then the other version is you get there, you land and
    1:25:04 instantly, you know, the nose bleeds and you’re dead.
    1:25:08 Unfortunately, I think it’s the latter.
    1:25:11 Yeah, it sort of feels like, it’s the more alien kind of thing.
    1:25:17 So as we look out there, according to the Drake Equation we just discussed,
    1:25:20 it seems impossible to me that there’s not civilizations everywhere.
    1:25:21 So how do we look for them?
    1:25:22 This process of SETI.
    1:25:27 I have to put on my scientist hat and just say, my gut feeling is that dumb life,
    1:25:28 so to speak, is common.
    1:25:33 I am a little agnostic about, I can see ways in which intelligent civilizations
    1:25:38 may be sparse, but, but until, you know, we got to go look, it’s all, it’s all armchair,
    1:25:39 armchair astronomy.
    1:25:41 That’s, that’s from a sort of rigorous scientific perspective.
    1:25:46 From my bro science perspective, it seems, again, smoking the, the aforementioned weed.
    1:25:52 Yeah, after the bomb, yeah, I mean, honestly, it’s, it’s really just, it’s
    1:25:58 impossible to me that there’s not potentially dead, but advanced civilizations
    1:26:00 everywhere in our galaxy.
    1:26:00 Yeah.
    1:26:00 Yeah.
    1:26:02 The potentially dead part, I think.
    1:26:02 Right.
    1:26:05 It could be that, like, making civilizations is easy.
    1:26:06 They just don’t last long.
    1:26:09 So what we, when we went out there, we’d find a lot of extinct civilizations.
    1:26:10 Extinct civilizations.
    1:26:11 Yeah.
    1:26:13 Apex predators don’t survive.
    1:26:17 Like they, they get, get better, better, better and they die and kill themselves
    1:26:17 all somehow.
    1:26:20 Anyway, so just how do we find them?
    1:26:20 Yeah.
    1:26:26 So SETI, search for extraterrestrial intelligence, is a term that I am not fond of
    1:26:27 using anymore.
    1:26:30 I mean, some people in my field are, so I’m sorry, folks.
    1:26:34 But I’m really, what I really like is the idea of techno signatures.
    1:26:38 Cause I think, you know, to me, SETI is the, first of all, intelligence.
    1:26:39 We’re not really looking for intelligence.
    1:26:40 We’re looking for technology.
    1:26:45 I mean, you know, and SETI, the classic idea of SETI is the radio telescopes,
    1:26:47 you know, in Contact, Jodie Foster with the headphones.
    1:26:50 That whole thing is still part, it’s still active.
    1:26:52 There’s still great things going on with it.
    1:26:54 But suddenly this whole new window opened up.
    1:27:00 When we discovered exoplanets, we now found a new way to look for
    1:27:04 intelligent civilizations or life in general, in a way that doesn’t have any
    1:27:07 of the assumptions that have to go into the classic radio SETI.
    1:27:11 And specifically what I mean is we’re not looking for somebody sending us a beacon.
    1:27:16 You really needed that with the classic model for a bunch of different reasons.
    1:27:19 You have to assume they wanted to be found and they were sending you a super
    1:27:19 powerful beacon.
    1:27:25 Now, because we know exactly where to look and we know exactly how to look, we
    1:27:30 can just go about looking for passive signatures of the civilization, going
    1:27:35 about its business, you know, without asking whether they want
    1:27:36 to be contacted or not.
    1:27:39 So this is what we call a biosignature or a techno signature.
    1:27:46 It is an imprint in the light from the planet of the activity of a biosphere
    1:27:47 or a techno sphere.
    1:27:47 And that’s really important.
    1:27:51 Yeah, that, that, that is why kind of the whole Gaia idea ends up being
    1:27:56 astrobiological, that biospheres and techno spheres are so potent, they
    1:27:58 change the entire planet.
    1:28:00 And you can see that from 20 light years.
    1:28:03 So let’s give an example of a biosignature to start off with, which
    1:28:07 would be a signature of a biosphere, oxygen, right?
    1:28:11 And on earth, at least, we know that oxygen is only in the atmosphere
    1:28:13 because life put it there.
    1:28:16 If life went away, the oxygen and particularly oxygen and methane, that
    1:28:19 pair, they would disappear, you know, very quickly.
    1:28:21 They’d react away, they’d all be gone.
    1:28:27 So if you find a planet with oxygen and methane, that’s a good bet that there’s
    1:28:28 a biosphere there.
    1:28:30 Okay, what about techno spheres?
    1:28:34 techno spheres, this is what, you know, so I’m the principal investigator on
    1:28:39 the first grant NASA has ever given to do these kind of exoplanet techno
    1:28:40 signatures.
    1:28:43 NASA was kind of, for reasons we can talk about, NASA had gotten pretty
    1:28:46 gun shy about funding anything about intelligent life.
    1:28:49 But okay, what’s an example of a techno signature?
    1:28:51 Well, one could be atmospheric pollution.
    1:28:54 I’m going to put pollution in quotes here because it doesn’t have to be
    1:28:56 pollution, but gases like chlorofluorocarbons.
    1:29:00 So we’ve dumped, you know, we dumped a huge amount of chlorofluorocarbons into
    1:29:02 the atmosphere by mistake.
    1:29:06 It was affecting the ozone, but we put so much in there that actually this is
    1:29:06 one of the things we did.
    1:29:10 We did a paper where we showed you could detect it across interstellar distances.
    1:29:15 You could look at the atmosphere, look at the light coming from a distant planet,
    1:29:19 pass the light through a spectrograph and see the, the spectral lines, the
    1:29:24 fingerprint, the spectral fingerprint of chlorofluorocarbons in an atmosphere.
    1:29:28 And that would for sure tell you that on that world, there was a technological
    1:29:32 civilization there because there’s no other way to make chlorofluorocarbons
    1:29:35 except through some kind of industrial process.
    1:29:39 So you’re looking for, in the case of the biosphere, you’re looking for anomalies
    1:29:41 in the spectrograph.
    1:29:43 I wouldn’t necessarily call these anomalies.
    1:29:47 I’m looking for things that, for a biosignature, I’m looking for things that
    1:29:48 a geosphere, right,
    1:29:51 you know, that just rock and air, wouldn’t produce on its own.
    1:29:53 What kind of chemicals would life produce?
    1:29:53 Right.
    1:29:56 And that’s, that’s part of the, that’s the interesting thing, right?
    1:29:59 So that’s what, you know, so we can use earth as an example, right?
    1:30:02 We can say, look, oxygen, we know there would be no oxygen in the atmosphere
    1:30:07 if it wasn’t for life. Or dimethyl sulfide, which is a compound that phytoplankton dump
    1:30:09 into the atmosphere, a lot of it, that’s sometimes mentioned.
    1:30:12 And there was even, there was a paper that somebody wrote where it was like,
    1:30:16 well, we’re not saying we see it, but, you know, there’s a bunch of noise
    1:30:17 in the spectra right there.
    1:30:22 So, you know, there’s a whole list of things that earth has done that are in
    1:30:24 the atmosphere that might be biosignatures.
    1:30:26 But now we’re reaching an interesting point.
    1:30:30 The field has matured to the point where we can start asking about agnostic
    1:30:34 biosignatures, things that have nothing to do with earth’s history.
    1:30:40 But we think that, that would still be indications of this weirdness we call life.
    1:30:40 Right?
    1:30:44 What, what is it in general that life does that leaves an imprint?
    1:30:49 So one of these things could be the structure of the network of chemical reactions.
    1:30:52 That biology always produces very different chemical networks,
    1:30:53 who’s reacting with who,
    1:30:56 than just rock and water, right?
    1:31:02 So, so there’s been some proposals for networked, you know, biosignatures.
    1:31:06 Information theory, you can use, you can try and look at the information
    1:31:11 that is in the different compounds that are you find in the atmosphere.
    1:31:14 And maybe that information shows you like, oh, if there’s too much
    1:31:16 information here, there must have been biology happening.
    1:31:17 It’s not just rock.
    1:31:18 Same thing for techno.
    1:31:22 We’re, that’s what we’re working on right now, that for techno signatures as well.
    1:31:25 So how do you detect techno signatures?
    1:31:25 Okay.
    1:31:28 So with techno signatures, I gave the example of chlorofluorocarbons.
    1:31:32 So that would be an example of, and again, that one is a non-agnostic one
    1:31:34 because we sort of like, oh, we produced chlorofluorocarbons.
    1:31:35 Maybe they will, right?
    1:31:37 And there’s solar panels, right?
    1:31:42 You can actually, the glint off of solar panels will produce a, the way the light
    1:31:46 is reflected off of solar panels, no matter what it’s made out of,
    1:31:51 actually, there was a paper that Manasvi Lingam and Avi Loeb did in, I think
    1:31:53 it was 2017, we’ve just followed up on it.
    1:31:55 That actually could act as a techno signature.
    1:31:59 You’d be able to see in the reflected light, this sort of big jump that would
    1:32:04 occur. Or because of city lights, city, artificial illumination.
    1:32:08 If the, if there’s really like, you know, large scale cities, like, you know,
    1:32:13 Coruscant in Star Wars or Trantor in Foundation, those city lights would
    1:32:18 be detectable, you know, the spectral imprint of those across 20, 30 light years.
    1:32:23 So, you know, our job in this grant is to develop the first ever library of
    1:32:24 techno signatures.
    1:32:26 Nobody’s really ever thought about this before.
    1:32:32 So we’re trying to come up with all the possible ideas for what a civilization
    1:32:37 might produce that could be visible across, you know, interstellar distances.
    1:32:40 And are these good ones, or are these ones going to be hard to detect, or such?
    1:32:42 City lights.
    1:32:47 So if a planet is all lit up with artificial light across 20 to 30 light years,
    1:32:48 we can see it.
    1:32:49 Yeah.
    1:32:52 If you looked at Earth at night from a distance where, you know, looked at
    1:32:56 spectra and you had sensitive enough instruments, you’d be able to see all the
    1:33:00 sodium lights and the reflected light off of, you know, they bounce off the ground,
    1:33:02 right, that the light bounces off the ground.
    1:33:07 So you’d convolve the, the sodium lamps with the reflected spectra from the
    1:33:09 ground and yeah, you’d be able to see that there’s city lights.
    1:33:13 Now, increase that by a factor of a thousand, you know, if you had a, a
    1:33:17 Trantor, you’d be able to detect that across interstellar distances.
    1:33:19 Thomas Beatty did this work, who’s now working with us.
    1:33:23 What do you think is the most detectable thing about Earth?
    1:33:26 Uh, wow, we just, this is fun.
    1:33:29 We just had Sofia Sheikh, who’s part of our collaboration, just did a paper.
    1:33:30 We did Earth from Earth.
    1:33:35 If you were looking at Earth with Earth technology for a bunch of different
    1:33:39 techno signatures, how close would you have to be to be able to detect them?
    1:33:42 And most of them turn out to be, you’d have to be pretty close, at least out to
    1:33:45 the Oort cloud, but actually it’s, it is our radio signatures still.
    1:33:47 That is still most detectable.
    1:33:49 By the way, when you said you had to be pretty close and then you said the Oort
    1:33:52 cloud, that’s not very close, but you mean like from an interstellar.
    1:33:53 Interstellar distance.
    1:33:55 Cause the real question, you know, we really want to know is like, I’m sitting
    1:33:57 here on Earth, I’m looking at these exoplanets.
    1:34:00 The nearest star is four light years away.
    1:34:02 So that’s like the minimum distance.
    1:34:07 Um, so what can, if I’m looking at exoplanets, what kind of signals could I
    1:34:07 see?
    1:34:12 What is detectable about Earth with our current technology from the, our
    1:34:13 nearest solar system?
    1:34:14 Oh my God, there’s all kinds of stuff.
    1:34:18 Well, like our, our, the, the, um, chlorofluorocarbons, you can see, you
    1:34:21 know, you can see Earth’s pollution and you know, I think city lights, you
    1:34:25 had to be within, you know, within the solar system.
    1:34:29 If they do direct imaging of Earth, they’re going to need much more powerful.
    1:34:32 But let me tell you what the, let’s, let’s talk about direct imaging for a
    1:34:33 moment, because I just have to go on.
    1:34:34 This is such a cool idea, right?
    1:34:38 So what we really want, and the next generation of space telescopes and such
    1:34:39 is we’re trying to do direct imaging.
    1:34:44 We’re trying to get, uh, you know, an image of a planet separated from its
    1:34:47 star to be able to see the reflected light or the actual emission from the
    1:34:48 planet itself.
    1:34:48 Yeah.
    1:34:52 By the way, just to clarify, direct imaging means literally like a picture.
    1:34:53 A picture.
    1:34:56 But the problem is, is that with the, even with the, the, the prep, the
    1:35:00 thing that’s going to come after JWST, it’s going to be a pixel, right?
    1:35:01 You’re not going to get any kind of resolution.
    1:35:03 You’ll be able to get the light from it, which you’ll be able to pass
    1:35:05 through a spectrograph, but you’re not going to be able to take a picture.
    1:35:10 But there is this idea called the solar gravity lens telescope.
    1:35:11 I think that’s what it is.
    1:35:13 And the idea is insane, right?
    1:35:16 So, general relativity says, look, massive bodies distort space.
    1:35:18 They actually curve space time.
    1:35:21 So, um, the sun is a massive body.
    1:35:25 And so that means that the light passing near the sun gets focused.
    1:35:26 Like a lens, right?
    1:35:30 So the idea is to send a bunch of telescopes out kind of into the
    1:35:35 Oort cloud and then look back towards the sun, towards an exoplanet that is
    1:35:39 behind, not directly behind the sun, but is, you know, in the direction of the
    1:35:44 sun, and then let the, let the sun act like a lens and collect, focus the
    1:35:45 light onto the telescope.
    1:35:49 And you would be able to get, and they’ve done, it’s amazing.
    1:35:50 Like they’ve already, this idea is insane.
    1:35:55 They’d be able to get, if everything works out, 24 kilometer resolution.
    1:35:59 You’d be able to see Manhattan on an exoplanet.
    1:36:02 And this thing, it sounds insane, but actually, you know, NASA, they’ve
    1:36:06 already got, the team has already gotten through like sort of three levels of NASA.
    1:36:09 You know, there’s, there’s the NASA program for like, give us your wackiest idea.
    1:36:10 Right.
    1:36:13 And then the ones that survive that are like, okay, tell us whether that wacky
    1:36:15 idea, you know, is even feasible.
    1:36:16 And then, and they’re marching along.
    1:36:20 And the idea is that like, you know, and they even have plans for how you’d be
    1:36:25 able to get these probes out into the Oort cloud on relatively fast timescales.
    1:36:30 You need to be about 500 times as far from the sun as Earth is.
    1:36:33 Um, but right now everything looks like the idea seems to hold together.
    1:36:38 So probably when I’ll be dead, but when you’re an old man, um, it’s
    1:36:41 possible that something like this, could you imagine having like, yeah,
    1:36:46 res, that kind of resolution, a picture of an exoplanet down to, you know,
    1:36:47 kilometers.
    1:36:49 So I’m very excited about that.
    1:36:52 I can only imagine having a picture like that.
    1:36:56 And then there’s some, um, mysterious artifacts that you’re seeing.
    1:36:57 Yeah.
    1:37:03 I mean, it’s both, um, inspiring and, and almost heartbreaking that we
    1:37:09 can see, like, I think we would be able to see a civilization, where there’s
    1:37:12 like a lot of scientists who agree that this is very likely something, and then we
    1:37:14 can’t, we can’t get there.
    1:37:17 But you know, I mean, again, this is the thing about being long-lived.
    1:37:20 We’ve got to get to the point where we’re long lived enough that, so let’s
    1:37:23 say we found like, this is what I always liked to, let’s imagine that we
    1:37:27 find, say 10 light years away, we find a planet that looks like it’s got
    1:37:28 techno signatures, right?
    1:37:29 It doesn’t end there.
    1:37:32 Like that would be the most important discovery in the history of humanity.
    1:37:34 And it wouldn’t be like, well, okay, we’re done.
    1:37:38 The first thing we’d do is we’d build a bigger telescope to try and do that
    1:37:38 imaging, right?
    1:37:41 And then the next thing after that, we plan a mission there, right?
    1:37:46 There’s there, we would figure out, like with Breakthrough, Breakthrough
    1:37:50 Starshot, there was this idea of trying to use, you know, giant lasers to
    1:37:55 propel small spacecraft, light sails, almost to the speed of light.
    1:37:57 So they would get there in 10 years and take pictures.
    1:38:00 And so we’ll, you know, if we actually made this discovery, there would be
    1:38:05 the impulse, there would be the effort to actually try and send something to,
    1:38:06 to get there.
    1:38:10 Now, you know, we probably couldn’t land, we could, but the, you know,
    1:38:14 so maybe we, maybe we take 30 years to build, 10 years to get there, 10
    1:38:15 years to get the picture back.
    1:38:18 Okay, you’re dead, but your kids are, you know what I mean?
    1:38:20 So it becomes now this multi-generational project.
    1:38:22 How long did it take to build the pyramids?
    1:38:25 How long did it take to build the giant cathedrals, right?
    1:38:27 Those were multi-generational projects.
    1:38:30 And I think we’re on the cusp of that kind of project.
    1:38:33 I think that would probably unite humans.
    1:38:34 I think it would play a big role.
    1:38:35 I think it would be helpful.
    1:38:36 I mean, human beings are a mess.
    1:38:37 Let’s face it.
    1:38:41 But I think having that record, that’s why I always say to people, discovery
    1:38:44 of life of any kind of life, even if it was microbial life, it wouldn’t matter.
    1:38:48 That to know that we’re not an accident, to know that there is probably, if we
    1:38:50 found one example of life, we’d know that we’re not an accident and there’s
    1:38:53 probably lots of life and that we’re a community.
    1:38:56 We’re part of a cosmic kind of community of life.
    1:38:58 And who knows what life has done, right?
    1:39:00 We don’t really, all bets are off with life.
    1:39:04 Since we’re talking about the future of telescopes, let’s talk about our
    1:39:08 current super sexy, awesome telescope, the James Webb Space Telescope, that I
    1:39:10 still can’t believe actually worked.
    1:39:10 I can’t believe it worked.
    1:39:12 I was really skeptical.
    1:39:15 I was like, okay, guys, all right, sure.
    1:39:20 We only got one shot for this incredibly complicated piece of hardware to unfold.
    1:39:23 So what kind of stuff can we see with it?
    1:39:27 I’ve been just looking through different kinds of announcements that have been
    1:39:29 detected, there’s been some direct imaging.
    1:39:30 Yes, like a single pixel.
    1:39:36 The kinds of exoplanets we’re able to direct image, I guess would have to be hot.
    1:39:40 Hot, usually far away from the, you know, reasonably far away from the star.
    1:39:43 I think, you know, JWST is really kind of at the hairy edge of being able to do
    1:39:44 much with this.
    1:39:47 What’s more important, I think, for JWST is the spectra.
    1:39:49 And the problem with spectra is that there are no sexy pictures.
    1:39:51 It’s like, hey, look at this wiggly line.
    1:39:57 But being able to find and characterize atmospheres around terrestrial exoplanets
    1:40:00 is the critical next step.
    1:40:01 That’s where we are right now.
    1:40:04 In order to look for life, we’re going to be, we need to find planets with
    1:40:05 atmospheres, right?
    1:40:09 And then we need to be able to do this thing called characterization, where
    1:40:12 we look at the spectral fingerprints for what’s in the atmosphere.
    1:40:13 Is there carbon?
    1:40:14 Is there carbon dioxide?
    1:40:15 Is there oxygen?
    1:40:15 Is there methane?
    1:40:18 Um, and that’s the most exciting thing.
    1:40:23 For example, there was this planet K2-18b, which had, they did a beautiful
    1:40:24 job getting the spectra.
    1:40:28 And the spectra indicated it may be an entirely new kind of habitable world
    1:40:30 called a Hycean world.
    1:40:33 Hycean meaning hydrogen ocean world.
    1:40:37 And that is a kind of planet that it would be a, uh, you know, kind of in the
    1:40:41 super-Earth sub-Neptune domain we were talking about, you know, maybe eight times
    1:40:46 the mass of the Earth, but it’s got a layer of hydrogen, an atmosphere of hydrogen.
    1:40:48 Hydrogen is an amazing greenhouse gas.
    1:40:53 So hydrogen will keep the, uh, the planet underneath it warm enough that
    1:40:55 you could get liquid water.
    1:40:59 You can get a giant ocean of, uh, uh, of liquid water.
    1:41:02 And that’s an entirely different kind of planet that could be habitable planet.
    1:41:05 You know, it could be a 60 degree warm ocean.
    1:41:11 So the data that came out of JWST for that planet was good enough to
    1:41:14 be able to indicate like, oh yeah, you know what the models from what we
    1:41:17 understand what the models, this looks like it’s a, it could be a Hycean world.
    1:41:20 And it’s 120 light years away from earth.
    1:41:21 Yeah.
    1:41:22 And so isn’t that amazing?
    1:41:25 You can, it’s 120 light years away, but we can see into the atmosphere.
    1:41:29 We can see to the atmosphere so well that we can be like, oh, look, methane,
    1:41:32 methane was a five sigma detection.
    1:41:37 Like you knew that the data were so good that it was like the gold standard of science.
    1:41:42 What about detecting, uh, maybe, uh, the direct imaging or in other
    1:41:48 ways, megastructures that the civilizations build, you know, what’s great
    1:41:50 about megastructures is first of all, it’s fun to say, who doesn’t want to say
    1:41:52 megastructure, alien megastructure, right?
    1:41:55 Every morning I’m looking for an opportunity to say that.
    1:42:00 Um, so the, the, the, the ur-example of this is the Dyson sphere, right?
    1:42:00 Which is amazing.
    1:42:03 Cause, you know, it was literally 1960 that this idea came up.
    1:42:04 Can you explain the Dyson sphere?
    1:42:05 Yeah, the Dyson sphere.
    1:42:08 So Freeman Dyson, you know, one of the greatest physicists ever, um, who
    1:42:11 was very broad-minded and thought about a lot of different things.
    1:42:15 He recognized that, you know, when a civilization, as civilizations progress,
    1:42:19 what they’re going to need is ever more energy to do ever more, you know,
    1:42:20 amazing things.
    1:42:22 And what’s the best energy source in a solar system?
    1:42:23 It’s the star, right?
    1:42:29 So if you surrounded the star with solar collecting machines, sunlight
    1:42:34 collecting machines, um, the limit of this would be to actually build a sphere,
    1:42:37 an actual sphere, around your star that had all solar panels on the inside.
    1:42:41 You could capture every photon the star produced, which is, you know,
    1:42:43 this insane amount of light.
    1:42:47 You would have enough power now to do anything to re-engineer your solar system.
    1:42:48 Um, so that was a Dyson sphere.
    1:42:51 It turns out that a Dyson sphere doesn’t really work cause it’s unstable.
    1:42:55 You know, but a Dyson swarm is, and that’s really what he meant.
    1:43:00 You know, this large collection of large orbiting structures that we’re
    1:43:01 able to collect light.
    1:43:01 Yeah.
    1:43:05 So he didn’t actually mean a rigid sphere structure.
    1:43:06 Yeah.
    1:43:07 He basically meant a swarm.
    1:43:11 So that, like you said, and then the limit basically starts to look like a sphere.
    1:43:13 People started to say, yeah, it was like a sphere.
    1:43:17 And we actually almost thought we might have found one of these, um, uh,
    1:43:19 back with, uh, Boyajian’s star.
    1:43:22 We saw, you know, the way we detect planets is through the transit method
    1:43:26 where the planet passes in front of the star and there’s a dip in the star light.
    1:43:27 It’s a little eclipse basically.
    1:43:29 And we know exactly what they should look like.
    1:43:33 And then with this one star, there were these really weird transits where like,
    1:43:36 it was like this little dragon’s tooth and then there’d be another one
    1:43:39 and another one and another one and then nothing and then three more.
    1:43:43 And in the paper that was written about this, they suggested, you know,
    1:43:45 they went through the list of, oh, it could be comets,
    1:43:46 could be chunks of a broken up planet.
    1:43:49 And it could also be an alien megastructure.
    1:43:52 And of course the news picked up on this and like everybody’s, you know,
    1:43:54 newsfeed the next day, alien megastructures discovered.
    1:43:58 Turns out, sadly, they were not alien megastructures.
    1:44:00 They were probably gas or dust clouds.
    1:44:03 Um, but it raised the possibility like, oh, these are observable.
    1:44:06 And people have worked out the details of what they would look like.
    1:44:08 You don’t really need direct imaging.
    1:44:09 You can do transits, right?
    1:44:11 They’re big enough that when they pass in front of the star,
    1:44:13 they’re going to produce a little blip of light because that’s what
    1:44:14 they’re supposed to do, right?
    1:44:15 They’re absorbing starlight.
    1:44:19 So people have worked out, like, well, a square one or a triangular one,
    1:44:20 but that wouldn’t be a Dyson sphere.
    1:44:23 There would be like one object, one object, right?
    1:44:25 Which is what, if it’s a swarm, you’d expect like the light to be like
    1:44:28 blinking in and out as these things pass in front of, you know,
    1:44:32 if you’ve got thousands of these, much of the time they’ll be blotting
    1:44:34 out the star, sometimes they won’t be, right?
    1:44:39 And so you’re going to get an irregular sort of signal, a transit signal.
    1:44:39 Yeah.
    1:44:41 One you wouldn’t expect from a star that doesn’t have anything.
    1:44:42 Exactly.
    1:44:44 Or just a planet, right?
    1:44:44 Or a couple of planets.
    1:44:48 There’d be so many of these that it would be like beep, beep, blip, blip, blip, blip.
    1:44:54 And that usually doesn’t happen in a star system because there’s only
    1:44:55 just a handful of planets.
    1:44:56 That’s exactly what it is.
    1:44:57 Everything coagulates.
    1:45:00 In a stable solar system, you get a handful of planets, you know,
    1:45:03 five, 10, that’s it probably, and nothing else.
    1:45:07 So if now suddenly you see lots of these little microtransits, it’s
    1:45:10 telling you there’s something else that’s big enough to create a transit.
    1:45:14 But, you know, too many of them, and also with an irregular shape to the
    1:45:18 transit itself, that these are, these could be megastructures.
    1:45:21 How many people are looking for megastructures now?
    1:45:26 Well, the main groups looking for megastructures are, again, Jason Wright
    1:45:29 at Penn State and collaborators.
    1:45:31 The way they’re looking for it though is for infrared light.
    1:45:35 Because, you know, the second law of thermodynamics says, look, if you capture
    1:45:39 all of this starlight, you’re going to warm up the, you know, your thing’s
    1:45:41 going to warm up and emit in the infrared.
    1:45:45 There’s just going to be waste heat, waste heat and waste light from this.
    1:45:49 That feels like a louder, clearer way to detect it.
    1:45:49 Right.
    1:45:51 And that’s actually, you know, Dyson, that’s actually why Dyson proposed it.
    1:45:54 He wasn’t really proposing it because like he was saying, this is what
    1:45:56 civilizations are going to do.
    1:45:58 He proposed it because he was like, oh, we want to start looking for alien
    1:45:59 civilizations.
    1:46:02 Here’s something that would have a detectable signature.
    1:46:07 Um, so, uh, Jason and company have done, you know, pretty good searches.
    1:46:11 And recently they’ve made news because, you know, they were able to eliminate a
    1:46:12 lot of places.
    1:46:14 No, these are not Dyson spheres, but they did have a couple that were like
    1:46:18 anomalous enough that they’re like, well, this is kind of what it would look like.
    1:46:19 It’s not a detection.
    1:46:21 And they were saying, they would never say it’s a detection, but they were
    1:46:23 like, they were not non-detections.
    1:46:25 And they’re potential candidates.
    1:46:25 Potential candidates.
    1:46:26 Yeah.
    1:46:26 Love it.
    1:46:28 We have megastructure candidates.
    1:46:29 That’s inspiring.
    1:46:32 What other megastructures do you think that could be?
    1:46:35 I mean, that, so that’s Dyson spheres about capturing the energy of a star.
    1:46:36 Yeah.
    1:46:37 Well, there could be other.
    1:46:41 Well, there’s something called the Clarke belt, right?
    1:46:43 So we have a bunch of satellites that are in geosynchronous orbit.
    1:46:47 Nothing naturally is going to end up in geosynchronous orbit, right?
    1:46:49 Geosynchronous orbit is one particular orbit that’s really useful.
    1:46:52 If you want to beam things straight down, or if you want to put a space
    1:46:53 elevator up, right?
    1:46:58 Um, so, uh, there’s this idea that if, you know, a civilization becomes
    1:47:02 you know, advanced enough that it’s really using geosynchronous orbit,
    1:47:05 that you actually get a belt, something that would actually be detectable
    1:47:07 from a distance via a transit.
    1:47:11 Uh, there’s been a couple of papers written about the possibility of these
    1:47:16 Clarke belts, densely occupied Clarke belts, being a megastructure.
    1:47:20 It’s not as mega as a Dyson swarm, but it’s, you know, kind of planetary scale.
    1:47:22 You think it’s detectable, a Clarke belt?
    1:47:23 It could be.
    1:47:26 I mean, like in our list of techno signatures, it would be down there,
    1:47:29 but it would be, again, if you had an advanced enough civilization that did
    1:47:33 enough of this, you’d certainly have a Clarke belt.
    1:47:35 And the question is whether or not it’s detectable.
    1:47:35 Yeah.
    1:47:37 Probably Dyson sphere is the, that’s the more exciting.
    1:47:38 Let’s go to one.
    1:47:39 Yeah, yeah.
    1:47:42 Speaking of the Dyson sphere, let’s talk about the Kardashev scale.
    1:47:43 Right.
    1:47:47 What is the Kardashev scale and where are humans on it?
    1:47:47 Right.
    1:47:49 So the Kardashev scale was from the same time.
    1:47:54 This is this golden age of SETI, like kind of like ’60, ’59 to ’65.
    1:47:58 When it just starts, like this is, you know, Frank Drake has done his
    1:48:01 first experiment, people are like, Oh my God, this is even possible.
    1:48:04 And so people are just thrown out these ideas.
    1:48:07 And as I, you know, said in the book, science is conservative.
    1:48:09 And what I mean by that is it holds on to its best ideas.
    1:48:13 So Kardashev comes up with this idea that look, if we’re, again, it’s always
    1:48:14 about detectability.
    1:48:18 If we’re looking for civilizations, we should think about what are the stages,
    1:48:23 what are the natural stages, natural in quotes, that a civilization goes through.
    1:48:27 And he was thinking in terms of energy use, which is like a good physicist.
    1:48:35 So the, he said, look, the first hurdle in terms of energy or threshold
    1:48:38 that a civilization will go through is using all the starlight that falls
    1:48:39 onto a planet.
    1:48:41 He called that a type one civilization.
    1:48:45 In whatever way you’re doing it, some large fraction of the starlight
    1:48:47 that falls on your planet, you are using for your own ends.
    1:48:52 The next would be to use all the starlight there is from that star.
    1:48:53 Right.
    1:48:54 So that’s the Dyson sphere.
    1:48:58 So actually, Dyson had already proposed his idea of the swarm
    1:48:59 and Kardashev was picking up on it.
    1:49:01 So that’s a type two civilization.
    1:49:06 Type three is galactic scale, a civilization that could use all the starlight
    1:49:07 in a galaxy.
    1:49:07 Right.
    1:49:09 So we are now, where are we now?
    1:49:12 Remarkably on a log scale, we’re at point seven of a type one.
    1:49:14 So we’re not even type one.
    1:49:15 No, no, no, we’re not even type one.
    1:49:21 But according to, there was a paper written by a group that said, you know,
    1:49:25 if we continue on our path, we’ll be at a type one at around 2300.
    1:49:26 2300.
    1:49:28 So this is on a log scale.
    1:49:32 So point seven.
    1:49:37 So type one is about 10 to the 16th watts. Type two is 10 orders of magnitude
    1:49:39 larger than that, 10 to the 26th watts.
    1:49:44 And I think the estimate for the galaxy is another 10 orders of magnitude.
    1:49:44 Yeah.
    1:49:47 Cause there’s, of order, a hundred billion stars.
    1:49:49 So that’s a lot.
    1:49:50 That’s a lot.
    1:49:53 Do you think humans ever get to type one?
    1:49:57 Um, I think, you know, there’s a problem with type one, which is that, you know,
    1:49:59 we already know about climate change, right?
    1:50:03 The effects of our harvesting energy to do the work of civilization are already
    1:50:06 changing the climate state, right?
    1:50:08 And that’s something that, you know, Kardashev couldn’t have recognized.
    1:50:15 When you, you know, there’s, there’s, uh, the first law of thermodynamics, right?
    1:50:17 Which is just about energy, you know, the different forms of energy.
    1:50:20 Then there’s the second law, which is about when you use that energy.
    1:50:22 And Kardashev wasn’t thinking about the second law.
    1:50:28 If you get all that energy and you use it, there’s waste heat.
    1:50:29 You don’t get to use it all, right?
    1:50:32 You can only, second law tells you that if, you know, I have a tank of
    1:50:36 gasoline, I can only use a certain fraction of the energy in that tank.
    1:50:38 And the rest is going to go to heating up the engine block.
    1:50:43 Um, so that second law tells you that, you know, you can only use so much energy
    1:50:48 before the climate state is like, uh, oh, you know, sorry, is going to change on you.
    1:50:52 So there’s a way in which we probably can’t get to a type one without like
    1:50:54 devastating the earth’s climate.
    1:50:58 So we’re probably going to have to figure out, the most important thing actually
    1:51:01 here is probably, this is why space, the colonization or settlement of space, becomes important.
    1:51:05 We have an idea that we’ve been working on for a while called service worlds, right?
    1:51:12 That at some point you probably move a lot of your, um, industry off world, right?
    1:51:15 We’ve got Mercury, for example, there’s nothing on Mercury.
    1:51:16 There’s no life on Mercury.
    1:51:18 Why don’t you put your energy harvesting there?
    1:51:19 Right.
    1:51:21 Because you can’t mess with the biosphere.
    1:51:23 The biosphere is more powerful than you are.
    1:51:23 Right.
    1:51:31 And so, yeah, so, so there’s limits to how much energy we can harvest to do work on
    1:51:34 the earth without really adversely affecting the biosphere.
    1:51:39 It does seem that the best response to climate change is not to use less technology,
    1:51:48 but to, to invent better technology and to invent technology that avoids the destructive effects.
    1:51:49 This is the frontier we’re at.
    1:51:52 And that was the topic of my last book, Light of the Stars.
    1:51:56 It’s like you’ve got, you have to do the astrobiology of the Anthropocene.
    1:52:00 You have to see the transition that we’re going through now of the Anthropocene on a
    1:52:03 kind of planetary astrobiological framework.
1:52:07 And, you know, that paper we were talking about with the 10 billion trillion worlds,
1:52:10 that was actually in service of the work I was doing for this other book, where I wanted
1:52:13 to know, how often do you go through an Anthropocene?
1:52:17 Does every technological civilization trigger its own
1:52:21 planetary crisis, its own climate Anthropocene crisis?
    1:52:24 And the answer we actually came up from doing models was like, yeah, probably.
    1:52:28 And then the question is, are you smart enough to figure out how to readjust what you’re
    1:52:32 doing technologically so that you’re not, you know, that all boats rise, right?
    1:52:36 You want to figure out how to do this so that the biosphere becomes even more productive
    1:52:39 and healthy and resilient.
    1:52:40 So yeah, right.
    1:52:42 It’s the kind of technology.
    1:52:46 I think there’s probably absolutely limits on how much energy you can use, use.
    1:52:48 But how do you use that energy?
1:52:52 And then also, yeah, getting off planet eventually. If you want to use 10 times
1:52:56 more energy than that, you're not going to do it on-world.
    1:53:02 So how do we detect alien type one, two and three civilizations?
    1:53:07 So we’ve been kind of talking about basically type one civilization detection.
    1:53:08 Yeah, right.
    1:53:12 Maybe with the Dyson sphere, you start to get like a little bit more type two.
    1:53:16 But it feels like if you have a type two civilization, it won’t be
    1:53:18 just the Dyson sphere, right?
    1:53:22 It feels like that just for the same reason you mentioned climate change.
    1:53:28 But now at the star system level, they’re probably expanding, right?
    1:53:31 So how, how would you detect a type two?
    1:53:34 How about propulsion plumes, right?
    1:53:39 If you’re expanding, no, no, we just, I literally just put in a NASA proposal now.
1:53:42 Thomas Beatty, who's joined us from the University of Wisconsin,
1:53:46 has an idea to look for plumes, right?
1:53:51 If you have a solar system wide civilization, right?
    1:53:53 And you’ve got space truckers going back and forth, right?
    1:53:56 You know, from Mars to, you know, they’re doing the in settlers run.
    1:54:00 They’re accelerating and decelerating the whole way there, right?
    1:54:04 If you want to get to Mars in a couple of weeks, you have your fusion drive
    1:54:08 on the entire way out there, you flip and burn and have it on, you know.
    1:54:11 So you’re also always have gravity, you have thrust gravity.
    1:54:14 So would those plumes be detectable?
    1:54:17 Because now you’ve got spaceships going all over the place and the odds that,
    1:54:20 like, you know, the plume is going to cross your field of view becomes,
    1:54:21 could become pretty high.
    1:54:25 So, yeah, that’s, I think that’s a good way of looking for.
    1:54:31 That’s one idea of looking for, you know, large scale interplanetary,
    1:54:34 which is kind of like when you’re getting to a type type two.
    1:54:38 Another possibility is looking for the tailings of asteroid mining.
1:54:42 This was an idea from a group at the Harvard-Smithsonian that, you know,
1:54:46 would we be able to look for, if you're really chewing up asteroids to build
1:54:50 space habitats, you know, the dust particles left around.
    1:54:52 And would they look different from, just say, the dust, you know,
    1:54:54 from just regular collisions?
    1:54:56 So pollution of all different kinds.
    1:54:57 Pollution of all different kinds.
    1:54:58 And trash also.
    1:54:58 Okay.
    1:55:02 So trash is an interesting idea when you come to the actual solar system, right?
    1:55:06 We are actually, there’s a whole other field of techno signatures,
    1:55:07 which are things in the solar system.
    1:55:12 What if somebody came by a billion years ago, you know,
    1:55:13 and left some stuff, right?
    1:55:17 So the earth has been showing biosignatures for billions of years.
1:55:21 And, you know, a species like us, at our level, looking at earth,
1:55:24 would have been able to know that earth had life on it, had a biosignature,
1:55:27 had a biosphere, for billions of years.
    1:55:31 So maybe somebody sent something by, you know, a half a billion years ago.
1:55:37 So, um, this idea of looking, say, at the moon for artifacts that have been
1:55:40 there for a long time is something that a number of people are doing.
    1:55:43 We’re just working on a paper where we just calculated, this was super fun.
    1:55:49 We calculated how long would the lunar lander exist on the moon
    1:55:52 before micrometeorites just chewed it down, right?
    1:55:55 How long would you be able to land on the moon and go, oh, look, there’s,
    1:55:57 you know, there’s somebody was here and left some debris.
    1:56:01 Um, so there’s this process called gardening, which is just the micrometeorite
    1:56:03 constant range of micrometeorites.
    1:56:07 You know, and that’s where you get the lunar regolith, that fine powder
    1:56:09 on the moon is because of this gardening.
    1:56:13 And it turns out it is literally hundreds of millions to billions of years.
    1:56:14 Oh, nice.
    1:56:18 That, uh, yeah, that the lunar lander will be visible.
    1:56:21 Oh, so we should be able to find artifacts.
    1:56:21 Yeah.
    1:56:23 If there’s art, if there are artifacts on the, and people have proposed
    1:56:27 doing this with, um, artificial intelligence, we have, you know, the moon has
    1:56:31 been mapped down to like a couple of meters with various probes and all that
    1:56:32 data is sitting there.
    1:56:35 So have, why not use machine learning to like look through all those things
    1:56:39 and look for anything that looks not like the lunar surface.
    1:56:43 And they did a test program where they gave it, they gave the computer, you
    1:56:46 know, sort of like, I don’t know, 50 miles around the Apollo 11 or Apollo,
    1:56:50 maybe it was Apollo 17 site, and it instantly was able to pull out the lander.
1:56:54 I mean, the whole task of looking for anomalies, something that doesn't look
1:56:57 like the lunar surface, it may sound obvious, but it's not exactly obvious.
1:57:05 Like, detecting anomalies, I mean, detect something that doesn't look
1:57:05 right about this room.
1:57:08 It's actually really difficult, really difficult.
    1:57:09 It’s really difficult.
    1:57:11 And it’s, you know, what’s cool, it’s a really information
    1:57:13 theoretic kind of proposal.
    1:57:16 You really have to use information theory to say like, what’s the background?
    1:57:20 What’s, you know, well, how do I define something that I can say that looks weird?
    1:57:25 So, yeah, maybe when you’re looking at a spectrograph or something, like, it’s
    1:57:30 still, it’s still like, it’s going to look really weird potentially.
    1:57:35 Like we’re kind of, we’re kind of hypothesizing all the things that humans
    1:57:36 would build and how do we detect that.
    1:57:39 That could be really weird stuff.
    1:57:43 That’s why there’s this emphasis now on these agnostic signatures, right?
    1:57:45 So, um, actually disequilibrium is a nice one.
    1:57:50 For one way to define life is it is a system that is far from equilibrium, right?
    1:57:51 It’s alive, right?
1:57:54 Cause as soon as it dies, it goes back to equilibrium.
1:57:58 And so you can look at all the chemicals in an atmosphere, even if you don't know
1:58:00 whether these are chemicals that have
1:58:04 anything to do with life, and ask about the degree of disequilibrium, the degree to
1:58:08 which they show that that atmosphere has not, you know, that the chemicals have
1:58:11 not all just reacted
1:58:13 away to an equilibrium state.
    1:58:16 You can actually tell that in very general ways using what’s called a Gibbs,
    1:58:17 the Gibbs free energy.
    1:58:19 And that, that’s kind of a signature.
    1:58:24 Like if you see an atmosphere that is wildly out of equilibrium, you know,
    1:58:27 that indicates that there’s some, there’s something happening on that planet,
    1:58:33 biosphere or techno sphere that is pumping gases, you know, into the, um,
    1:58:36 into the atmosphere that is keeping the whole system from relaxing.
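(A small worked illustration of the Gibbs free energy point, with assumed approximate numbers for the standard free energies of formation and present-day partial pressures: methane and oxygen coexisting in Earth's atmosphere sit enormously far from chemical equilibrium, which is exactly the kind of agnostic signal being described.)

```python
import math

# Reaction: CH4 + 2 O2 -> CO2 + 2 H2O(g)
# Approximate standard Gibbs free energies of formation near 298 K, in kJ/mol.
dGf = {"CH4": -50.5, "O2": 0.0, "CO2": -394.4, "H2O": -228.6}
dG0 = (dGf["CO2"] + 2 * dGf["H2O"]) - (dGf["CH4"] + 2 * dGf["O2"])  # ~ -801 kJ/mol

# Rough present-day partial pressures in bar (assumed round numbers).
p = {"CH4": 1.8e-6, "O2": 0.21, "CO2": 4e-4, "H2O": 0.01}
Q = (p["CO2"] * p["H2O"] ** 2) / (p["CH4"] * p["O2"] ** 2)  # reaction quotient

R, T = 8.314e-3, 288.0  # kJ/(mol*K), mean surface temperature in K
dG = dG0 + R * T * math.log(Q)
print(f"dG ~ {dG:.0f} kJ/mol")  # ~ -800 kJ/mol: wildly out of equilibrium,
# i.e. something keeps resupplying methane against all that oxygen.
```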
    1:58:41 So is it possible we can detect anomalies in, in space time?
    1:58:44 Well, you, you could detect, and there’s, there’s been some work on this, like
1:58:47 with the Alcubierre drive, you know, these proposals for warp drives.
    1:58:48 And we can talk about that later.
    1:58:52 I’m skeptical of those, but, um, cause it may really be possible that you just
    1:58:56 can’t go fast from the speed of light, but people have done work on like, you
    1:59:01 know, what would be the signature of, uh, an Accubre drive?
    1:59:02 What would be the signature?
    1:59:06 You like, you know, could you detect if you’re using a drive like that, then
    1:59:09 you certainly are distorting space time, which means any light that’s passing by
    1:59:13 has gotten, you know, it’s, it’s, it’s trajectory has gotten altered because
    1:59:15 it had to pass through the distorted space time.
    1:59:18 So yeah, there are possibilities along with that.
    1:59:20 You know, one of the funny things, I don’t know if they’ve gotten past this,
1:59:23 but somebody had calculated the problem with the Alcubierre drive or this warp
1:59:28 drive was that if you dropped out of warp, there would be this spray of gamma
1:59:31 rays that would, like, sterilize any planet in front of you.
1:59:34 So it's like, well, yeah, you probably don't want to do that, but that
1:59:36 would be a great bio- or techno signature.
1:59:37 I don't know.
1:59:38 The planet, obliterated.
1:59:40 So you think it's not possible to travel faster than light?
    1:59:41 I wouldn’t say that.
    1:59:42 I wouldn’t say that.
    1:59:45 But what I think, you know, if you look at the physics, we understand, right?
    1:59:45 Yeah.
    1:59:52 Um, the, you know, every possibility for faster than light travel really
    1:59:54 relies on something that doesn’t exist, right?
    1:59:58 So, so, you know, the cool thing is Einstein’s field equations.
    1:59:59 You can actually play with them.
    2:00:00 The equations are right there.
2:00:04 You can add things to the, you know, right or left hand side that allow
2:00:07 you to get something like the Alcubierre drive.
2:00:10 That was a metric that, you know, showed you, like, oh, it's a warp bubble.
2:00:15 It's a warping of space time that moves through space time faster than
2:00:16 the speed of light, right?
2:00:20 Because nothing can move across space time faster than the speed of light,
2:00:23 but space time itself can move faster than the speed of light.
    2:00:27 But here’s the problem with all of those proposals is they all need something.
2:00:31 The thing you added, the little fictional term you added into the equations
2:00:35 is something called, um, exotic matter, and it doesn't exist.
2:00:37 It's really just something we dreamed up to make the equations do
2:00:38 what we wanted them to do.
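(For reference, the equations being "played with" here, in their usual textbook forms: Einstein's field equations, and the Alcubierre warp-bubble line element, where $v_s(t)$ is the bubble's speed and $f(r_s)$ is a shape function equal to 1 inside the bubble and 0 far away. Choosing this metric on the left-hand side forces a stress-energy tensor on the right that requires negative energy density, the "exotic matter" that has never been observed.)

$$G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}$$

$$ds^2 = -c^2\,dt^2 + \big(dx - v_s(t)\, f(r_s)\, dt\big)^2 + dy^2 + dz^2$$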
    2:00:45 So, you know, it’s a nice fiction, but really right now, you know, you know,
    2:00:49 we live in this weird moment in history of the great acceleration.
    2:00:55 We’re like, the technology we use now is, you know, is completely different
    2:00:59 from the technology we used 10 years ago is remarkably different
    2:01:01 from the technology from a hundred years ago.
2:01:06 Um, but, you know, I remember playing, um, uh, Assassin's Creed where everybody's
2:01:09 like, you know, what is it, it's 1200 and everybody's like stab, stab, stab.
    2:01:10 And I was like, yeah, it’s a great game.
    2:01:16 And then I got Assassin’s Creed two and, uh, it was 300 years later and everybody’s
    2:01:21 like stab, stab, stab and it was like 300 years and the technology hadn’t changed.
    2:01:23 And that was actually true for most of human history, right?
    2:01:28 You used your great grandfather’s tools because there was no need to have any
    2:01:30 other new tools and you probably did his job.
    2:01:34 Uh, so, you know, we can be fooled into thinking like, Oh, you know,
    2:01:36 technology is just going to go on forever.
    2:01:39 We’re always going to find new advances as opposed to sometimes things just
    2:01:41 flatten out for a long time.
    2:01:45 So you have to be careful about that bias that we have living in this time of
    2:01:46 great acceleration.
    2:01:52 Yeah, but, uh, also it is a great acceleration and we also are not good at
    2:01:55 predicting what that entails if it does keep accelerating.
2:02:00 So for example, somebody like, um, Eric Weinstein often talks about how we under
2:02:03 invest in theoretical physics research.
    2:02:10 Basically like we’re trying too hard for traditional chemical propulsion on
    2:02:14 rockets versus like trying to hack physics.
    2:02:21 Sort of warp drives and so on, because it’s really hard to do space travel.
    2:02:25 And it seems like in the long arc of human history, if we survive the way
    2:02:30 to really travel across long distances is going to be some new, totally new thing.
    2:02:31 Right.
    2:02:31 Right.
    2:02:34 So it’s not going to be an engineering problem.
    2:02:38 It’s going to be a physics, a fundamental physics, fun about the physics.
    2:02:42 Well, yeah, I mean, I agree with that in principle, but I think there’s been, you
    2:02:44 know, I mean, there’s a lot of ideas out there.
    2:02:46 People, you know, string theory, people have been playing with string theory
    2:02:48 now for 40 years.
    2:02:51 It’s not like people haven’t been, not like there hasn’t been a lot of effort.
    2:02:53 And, you know, and again, I’m not going to predict.
    2:02:57 I think it’s entirely possible that we have, you know, there’s incredible
    2:03:00 boundaries of physics that have yet to be poked through.
    2:03:03 In which case, then all bets are off, right?
    2:03:06 Once you get sort of, you know, interstellar, fast interstellar travel.
    2:03:08 Whoa, you know, who knows what can happen.
    2:03:13 Um, but I tend to be drawn to like science fiction stories that take the
    2:03:17 speed of light seriously, like what kind of civilization can you build where like
2:03:22 it takes, you know, 50 years to get to where you're going and 50 years back.
    2:03:23 Like, so, I don’t know.
    2:03:26 I mean, yeah, there’s no way I’m going to say that, that we won’t get warp drives.
    2:03:29 But as of right now, there’s, it’s all fictional.
    2:03:32 It’s, you know, it’s barely even a coherent concept.
    2:03:36 Well, it’s also a really exciting possibility of hacking this whole thing by
    2:03:41 extending human lifespan or extending our notion of, of time.
2:03:47 And maybe, as dark as it sounds, the value of an individual human life versus
    2:03:50 the value of life from the perspective of generations.
2:03:54 So you can have something like a generational ship that travels for hundreds
2:04:00 of thousands of years, and you're not sad, uh, that you'll never see the
2:04:07 destination because you value the, uh, prolonged survival of
2:04:08 humanity over your own individual life.
    2:04:09 Yeah.
    2:04:10 It’s a wild ethical question.
    2:04:14 Isn’t it one of the, that book I told you about Aurora was suck.
    2:04:18 I love the book because it was such a sort of inversion of the usual.
    2:04:20 Cause you know, I’ve read, I love science fiction.
    2:04:23 I’ve read so many generationship stories and they get to that planet.
    2:04:25 The planet turns out to be uninhabitable.
    2:04:28 It’s inhabited, but it’s uninhabitable for earth because again, he has this
    2:04:31 idea of like, you know, life is particular to their planets.
2:04:36 So they turn around and they come back and then when they land, the main character
2:04:39 goes, there's still people who are, you know, arguing for more generation ships.
2:04:42 And she goes and she punches the guy out cause she spent her whole life in a
2:04:46 tube, you know. I thought that was a really interesting inversion.
    2:04:48 You know, the interesting thing about, about, we were talking about these
2:04:52 space habitats, but if you really had a space habitat, not some super cramped,
    2:04:55 you know, crappy, usual version of a century ship, but if you had these
    2:04:58 like space habitats that were really, you know, like the O’Neill cylinders,
    2:05:00 they’re actually pretty nice places to live.
    2:05:04 Put a thruster on those, you know, like why, why keep them in the solar system?
    2:05:09 Maybe that’s, maybe space is full of like these sort of traveling space habitats
    2:05:12 that are in some sense a, you know, their worlds in them, in and of themselves.
    2:05:17 There’s the show Silo, which raises the question of basically, if you’re
    2:05:22 putting on a generational ship, what do you tell the inhabitants of that ship?
    2:05:24 You might want to lie to them.
    2:05:25 Yeah.
    2:05:29 You might want to tell them a story that they believe because there is a society,
    2:05:30 there’s human nature.
    2:05:35 It’s like, how do you maintain homeostasis of that little society?
    2:05:40 I mean, that’s a fascinating technical question, the social question, the
    2:05:41 psychology question.
2:05:43 You know, the generation ship too, which I talked about in the
2:05:47 book. And also the idea, you know, you talked about extending human lifetimes
2:05:53 or, you know, the stasis, the cryostasis, which is a mainstay of science fiction,
2:05:53 you know, right.
2:05:56 You can basically be put in suspended animation and such.
    2:05:59 None of these things we know are possible, but you know, it’s so interesting.
    2:06:02 And this is why I love science fiction, the way it seeds ideas, right?
    2:06:05 All these ideas we’re going to talk about because they’ve been staples of
    2:06:07 science fiction for 50 years.
    2:06:09 I mean, the whole field of cryogenics.
    2:06:09 Yeah.
    2:06:10 Where are we at with that?
    2:06:10 Yeah.
    2:06:13 I wonder what the state of the art is for a complex organism.
    2:06:17 Can you freeze, how long can you freeze and then unfreeze?
    2:06:17 Right.
    2:06:19 Maybe, maybe like with bacteria, you could do freeze.
    2:06:20 Oh, bacteria can last.
    2:06:22 This is the thing about panspermia, right?
    2:06:28 How long can, you know, how long can a bacteria survive in a rock that’s
    2:06:33 been blasted, you know, if there’s a common impact across, you know, interstellar
    2:06:35 distances, that does seem to actually be possible.
    2:06:36 People have done those kinds of calculations.
    2:06:41 It’s not out of the realm of possibility, but a complex organism, multi-cellular,
    2:06:43 multi-systemic or multi-systems, right?
    2:06:44 With organs and such.
    2:06:46 Also, what makes an organism?
    2:06:49 I mean, it could, you know, which part do you want to preserve?
    2:06:55 Cause maybe the, for humans, it seems like, uh, like what makes a personality?
    2:06:59 It feels like you want to preserve a set of memories.
    2:07:05 Like if I woke up in a different body with the same memories, I pretty much, I
    2:07:06 would feel like I would be the same person.
2:07:07 Altered Carbon?
    2:07:09 Have you, that’s a, that’s a great series.
    2:07:12 I think it’s on Netflix, just to, you know, that’s a really great series.
    2:07:14 Well, that’s exactly the idea of sleeves.
    2:07:17 Everybody’s able to like, you know, you can re-sleeve in another body.
    2:07:20 Um, and it raises exactly sort of this question.
    2:07:22 It’s not the greatest cyberpunk, but it’s pretty good.
    2:07:25 It’s got, it’s got some great, great action sequences too.
    2:07:30 As we get better and better advancements in large language models that are able
2:07:36 to be fine-tuned on you, it raises a question because, I mean, to me, that
2:07:39 already passed the Turing test, as we traditionally have defined it.
    2:07:43 Is, so if there’s going to be an LLM that’s able to copy you in terms of
    2:07:48 language extremely well, it’s going to raise ethical and, uh, I don’t know,
2:07:53 philosophical questions about what makes you, you. Like, if there's a thing
2:07:59 that can talk exactly like you, like, what is the thing that makes you, you?
2:08:04 It's going to speak about your memories very effectively.
    2:08:08 This leads us to, if we’re going to get to the, the blind spot, I, I, you know,
    2:08:13 I am of the opinion, heretical in some camps, that, you know, the brain
    2:08:17 is not the minimal, the minimal structure for consciousness.
    2:08:19 You know, it’s the whole body.
    2:08:19 It’s embodied.
    2:08:22 It may actually, in some sense, it’s communities, actually.
    2:08:26 Um, so yeah, so I don’t, I mean, I’m, you know, I could be wrong, but this is,
    2:08:28 you know, this is what this whole work that I did with Marcelo
    2:08:32 Gleiser and Evan Thompson, the, um, philosophy of science, which is
    2:08:34 interesting because it leads to this question about, you know, right.
    2:08:36 Oh, maybe we should just download ourselves into computers.
    2:08:36 Right.
    2:08:41 That’s another story that, that one tells, I’m super skeptical about those, but
    2:08:44 is that’s one of the narratives about interstellar travel is just like, and
    2:08:47 that anybody we meet is going to be a machine anyway, whether it’s like,
    2:08:51 whether it’s downloaded bodies or it’s just going to be artificial intelligence.
    2:08:54 Like there’s the whole idea of how long does biological evolution last?
    2:08:58 Maybe it’s a very short period before everybody, you know, goes to, or the
    2:09:02 machine’s takeover and, you know, kill you, or, you know, it’s some hybrid.
    2:09:04 What do you think aliens look like?
    2:09:08 So we talked about all the different kinds of bio signatures.
    2:09:11 They might leave over techno signatures, but what would they look like?
    2:09:15 When we show up, are they going to have arms and legs?
    2:09:18 Are they, uh, going to be recognizable at all?
    2:09:20 Are they going to be carbon based?
    2:09:21 Yeah.
    2:09:22 So great question.
    2:09:27 And this question gets to the heart of thinking about life, right?
    2:09:28 About what life is.
    2:09:30 And this is the physical part of that.
    2:09:33 There’s also sort of the informational part of it.
    2:09:38 Um, but let’s just talk about the physical part of it, which is, you know, life.
    2:09:42 Anything that we’re going to call life is probably going to work on Darwinian evolution.
    2:09:44 That’s the nice thing about Darwinian evolution.
    2:09:46 Just like we know the laws of physics are general.
    2:09:49 The laws of Darwinian evolution are kind of this logic, this basic logic.
    2:09:54 Um, that, you know, anything we’d reasonably call life probably has to operate
    2:09:55 under these kinds of principles.
    2:10:01 And so, you know, evolution is about solving problems that, you know, to survive.
    2:10:05 Um, that the environment presents and the environment.
    2:10:10 So it’s going to present these problems in physical and chemical terms so that you’d expect.
    2:10:15 Um, you expect a kind of balance between what we call convergence, evolutionary
    2:10:18 convergence and evolutionary contingency.
    2:10:23 So, you know, if you’ve got to move along a surface, you know, a surface between, you know,
    2:10:27 a hard surface and air, then the idea of some kind of jointed stick, right?
    2:10:30 Legs make sense that you’re probably going to trigger that.
    2:10:34 You know, if you look at Earth’s history, multiple times, multiple lineages that
    2:10:37 had nothing to do with each other are going to solve the problem of getting
    2:10:42 towards energy sources using some kind of, you know, a stick like apparatus.
    2:10:43 So that’s about movement.
    2:10:43 Yeah.
    2:10:45 So that’s one problem that has to be solved.
    2:10:47 One problem that has to be solved is I got to get to food, right?
    2:10:49 Another problem is I got to get away from predators, right?
    2:10:50 Um, you’ve seen wings.
    2:10:56 We’ve seen wings, the line that went through dinosaurs to birds, involved wings, insects,
    2:10:59 evolved wings, mammals, evolved wings.
    2:11:02 If the gas is dense enough that a curved surface, if you move through the curved
    2:11:04 surface, it’s going to produce lift.
    2:11:05 Yeah, there you go.
    2:11:06 Evolution will trip on that.
    2:11:12 So I think you, you can expect certain classes of solutions to the basic problems that
    2:11:17 life is going to, is going to be presented with, stay alive, reproduce.
    2:11:22 Um, but one of the weird things about like with the UFO things is that you always
    2:11:24 see like, oh, they all look like humans.
    2:11:26 They’re just like basically humans with, you know, triangular heads.
    2:11:29 And that’s where we get to, um, contingency, right?
    2:11:31 So what we’ve been talking about is convergence.
    2:11:35 You expect that evolution will converge on wings multiple times when presented
    2:11:38 with the problems that wings can solve.
    2:11:42 Um, but con, contingency is accidents, right?
    2:11:46 That, you know, you’ve got something that’s evolving a certain kind of wing,
    2:11:47 a leathery wing, right?
    2:11:50 Uh, and then, you know, the climate changes and they all die out.
    2:11:51 End of story.
    2:11:53 Or, you know, an asteroid, that total accident asteroid hits.
    2:11:58 And so, uh, contingency accidents play also a huge role in evolution.
    2:12:03 And one of the things that, you know, lots of evolutionary biologists have talked
    2:12:06 about is the idea that if you ran the tape of Earth’s history over again, would
    2:12:08 you get the same creatures?
2:12:12 Now, um, uh, Stephen Jay Gould was of the opinion that, no way, you wouldn't
2:12:16 find anything on earth that resembled any species today.
    2:12:19 They’ve done experiments actually on this with, uh, E. coli.
    2:12:22 You take, you know, you take a bunch of E. coli, you let them evolve for a while.
    2:12:26 You take a bunch of them out, freeze them, let one, you know, let that population
    2:12:27 continue to evolve.
    2:12:30 The other one’s frozen now started over again with the frozen.
    2:12:34 And it seems to be that contingency tends to win, right?
    2:12:37 The contingency, at least from what we can tell, I mean, that’s not a, that’s not
    2:12:41 a hard result, but in those experiments, what you find is that accidents really
    2:12:41 do matter.
    2:12:43 So the idea, and this is important.
    2:12:47 So yes, you should expect legs or jointed sticks.
    2:12:48 How many joints they’re going to be?
    2:12:49 Anybody’s guess.
    2:12:54 Um, you know, do you expect humanoids, you know, things with a, you know, uh, a
    2:12:58 sensing apparatus on top of a shoulder with two arms and two legs, that’s
    2:13:02 probably a pretty random set of occurrences that led to that.
    2:13:06 I guess what is a brain versus the nervous system?
    2:13:09 Like, where’s most of the cognition competition going on?
    2:13:10 Yeah.
    2:13:11 Yeah.
    2:13:13 You could see that in organisms.
    2:13:18 Like I actually had, I don’t know how the brain evolved.
    2:13:19 Like, why does it have to be in one place?
    2:13:20 Doesn’t have to be.
    2:13:24 So my favorite word, word of the day is liquid brains, right?
    2:13:27 This idea of distributed cognition, which, um, fascinating idea.
    2:13:32 And we’ve come to understand how much, uh, distributed cognition there is.
    2:13:37 Obviously you social animals, like termites, et cetera, and ants.
    2:13:39 That’s an example of distributed cognition.
    2:13:41 The organism is the whole colony.
2:13:43 This is one thing that's been really interesting in this area of study
2:13:46 when it comes to aliens: we've come to recognize that human
2:13:50 intelligence, it's not actually, the kinds of things that go into
2:13:54 intelligence are distributed all across the biosphere.
    2:13:58 Lots of different examples of things show various pieces of what we have.
    2:14:01 Jason Wright will describe it as like a deck of cards.
    2:14:02 The cards are all there.
    2:14:06 We got the hand that actually led to the kind of technological progress that we,
    2:14:10 we see, but the kinds of, you know, the basic idea of using tools, the basic idea
    2:14:14 of recognizing each other eye to eye, all the things that we define as intelligence.
2:14:19 You can find them in many other, um, places, across many other
2:14:21 lineages across the earth.
    2:14:24 So it could be, they could be very, very different with something like, yeah,
    2:14:29 maybe that’s, you know, the hive mind idea or, you know, bacterial colonies
    2:14:33 that actually managed to, you know, come to their own version of high cognition.
2:14:40 Well, I wonder, if we stretch out time across 10, 20 billion years,
2:14:46 whether Darwinian evolution stops working at some point in terms
2:14:51 of the biology or the chemistry of the organisms and it switches to ideas.
2:14:54 Which, for example, operate much more rapidly.
2:14:58 Maybe, I guess, it's a kind of Darwinian evolution on the space of memes or
2:15:03 whatever, which technology seems to operate on, and, yeah, but certainly
    2:15:06 markets can operate in ways that look very Darwinian.
    2:15:12 So basically a planet is working hard to get to the first kind of organisms that’s
    2:15:17 able to be a nice platform for ideas to compete.
    2:15:17 Yeah.
    2:15:19 And then it kind of stops evolving there.
    2:15:21 And then, then it’s ideas that take off.
    2:15:21 Right, right.
    2:15:23 Cause yeah, cultural, like it’s true.
    2:15:28 It’s amazing that cultural evolution totally disconnects from, from the
    2:15:29 Darwinian process.
    2:15:32 But I’d be careful to say that like a planet is working hard to do this.
    2:15:33 Cause, you know, it’s really impotent looking at us.
    2:15:39 Like what we think of is ideas and culture and, you know, it’s quite possible.
    2:15:41 We’re going to make it another 200 years and this is gone.
    2:15:41 Right.
    2:15:44 Cause it actually wasn’t a very good idea long term.
    2:15:45 We just don’t know.
    2:15:50 Oh, so maybe the idea generation organism is actually the thing that destroys.
    2:15:52 Not the biosphere, but it destroys itself.
    2:15:54 It may not be very long term.
    2:15:58 It may be very potent for a short period of time, but that it’s not sustainable.
    2:16:00 It doesn’t become like we were talking about before mature.
    2:16:06 It’s very hard to make it into integrated into a mature bio slash techno sphere.
    2:16:08 And of course, you know, evolution is not working for anything.
    2:16:10 Well, here’s the actually interesting thing.
    2:16:10 Right.
    2:16:13 So people are very much, you know, evolutionary biologists will get very,
2:16:14 their hair will stand on end
2:16:16 if you start talking about evolution having a purpose or anything,
2:16:21 but the very interesting thing about purpose is that once you do get to an idea
    2:16:27 generating species or collective organism, um, yeah, then, uh, you know,
    2:16:30 kind of all bets are off and there is goals.
    2:16:32 There is teleology.
2:16:37 There is, you know, now suddenly, you know, absolutely, a direction implied.
    2:16:40 So that’s kind of the cool, interesting thing that once you get to that evolution
    2:16:43 stops being goal lists and direction lists.
2:16:46 And suddenly, yeah, we're the ones who supply it, or any kind of creature
2:16:49 like us has an absolute direction that they decide on.
    2:16:53 Although you could argue that from a perspective of the entire human civilization,
    2:16:54 we’re also directionless.
    2:17:01 We have a sense that there’s a direction in this cluster of humans.
    2:17:04 And then there’s another cluster as a different set of direction.
    2:17:06 There’s all kinds of religions that are competing.
    2:17:08 There’s different ideologies that are competing.
    2:17:14 And when you just zoom out across, if we survive across thousands of years,
    2:17:15 it will seem directionless.
    2:17:17 It will seem like a pinball.
    2:17:20 It’s an unholy mess.
    2:17:24 But, you know, but at some point, like the expansion into the solar system.
    2:17:26 Like that would be both direction.
    2:17:29 I mean, depending on how you look at it, it was directional.
2:17:32 There was a decision that the collective of human beings
2:17:36 made to, like, agree to start spreading out into the solar system.
    2:17:40 So that was definitely a goal there that may have been reached
    2:17:44 in some crazy sort of, you know, nonlinear way.
    2:17:45 But it was still, right?
    2:17:48 There was still, it’s still a goal was set and it was achieved.
    2:17:50 If there’s advanced civilizations out there,
    2:17:56 what do you think is the proper protocol for interacting with them?
2:17:58 Do you think they would be peaceful?
2:18:00 Do you think they would be warlike?
    2:18:02 Like, what do we do next?
2:18:05 We detect, we detect a civilization through all the techno
2:18:08 signatures we've been talking about, maybe direct imaging.
    2:18:09 Maybe there’s really strong signal.
    2:18:13 We come up with a strategy of how to actually get there.
    2:18:13 Yeah.
    2:18:16 But what’s the, then the generals, as they always do.
    2:18:19 The military industrial complex.
    2:18:20 We’ve watched that movie.
2:18:26 What kind of rockets, what kind of, do we bring rockets?
    2:18:26 Right.
2:18:30 Well, I think, you know, so this also, this general question
2:18:33 also leads to METI, messaging extraterrestrial intelligence.
    2:18:36 And I am definitely of the opinion of like, you should be very careful, you
    2:18:39 know, like, I don’t think it’s necessarily a bad idea to have your head
    2:18:40 below the grass.
    2:18:44 Um, you know, the people who advocate like, oh, yeah, we should be sending,
    2:18:49 you know, powerful messages that are easily detectable into interstellar space.
    2:18:51 I’m like, why would you, because we just don’t know.
    2:18:53 Like, I’m not going to say they are warlike.
    2:18:54 I’m not going to say they’re not warlike.
    2:18:57 I have no idea, you know, but we sure as hell.
    2:19:00 Well, first of all, who gets to decide that the idea that a bunch of
    2:19:03 astronomers who happen to have a radio telescope, I don’t, you know,
    2:19:07 who speaks for earth, which I think was a great book somebody wrote.
    2:19:12 Um, so, you know, definitely we should, we should be cautious, I would say,
    2:19:14 because we just have zero information.
    2:19:17 And the idea, you used to have this idea of well, if they’re advanced,
    2:19:18 they’ve managed to survive.
    2:19:22 So of course they’re going to be wearing togas, you know, and be singing kumbaya.
    2:19:25 But I just wouldn’t, I just wouldn’t assume that it’s also possible, though,
    2:19:29 that like their cognitive structure is so different that we’re not even living
    2:19:31 in the same universe in a certain way.
    2:19:32 I think we have to be prepared for that.
    2:19:39 We may not even be able to recognize each other in some way as, as cognizing beings.
    2:19:40 One of my favorite movies is Arrival.
    2:19:42 I don’t know if you’ve ever seen that one.
    2:19:44 I really love that one because, you know, they literally, they have a different
    2:19:47 language, they have a different cognitive structure in terms of their language.
    2:19:49 And they’re literally kind of living in a different physics.
    2:19:53 Different physics, different language, different, different, everything.
    2:19:53 Yeah.
2:19:58 But in the case of Arrival, we can at least, like, recognize that they're there.
    2:20:01 And they managed to cross the language barrier.
    2:20:02 Yeah.
    2:20:06 So, but that’s both sides have an interest in communicating, which you kind
    2:20:11 of suppose that an advanced civilization would have a curiosity.
    2:20:16 Because like, how do you become advanced without a kind of curiosity about the
    2:20:17 mysterious, about the other.
    2:20:23 But also, you know, if they’re long lived, they may just be like, we’re not even interested.
    2:20:28 Like we’ve done this, we’re like, we, you know, you know, 10, 10 billion year, sorry,
    2:20:31 say 10 million years ago, we were really interested in that, in this, in communicating
    2:20:34 with you, you know, young and young and, but now we’re not at all.
    2:20:37 And that’s just, you know, one of the beauties of this, again, is how to think
    2:20:41 about this systematically, because you’re so far past the hairy edge, right?
    2:20:46 Of our experience, of what we know that you want to think about it, right?
    2:20:49 You don’t want to be like, don’t know, can’t say anything, because that’s not fun.
    2:20:53 But you also have to sort of systematically go after your own biases, right?
    2:20:56 So the one of the things I loved about Arrival too, was, you know, Carl
    2:21:00 Sagan always had this idea, like we’ll teach him math, we’ll teach him our math.
    2:21:01 Then they’ll teach us their math.
    2:21:04 And then, you know, we’ll be telling each other knock, knock jokes, you know,
    2:21:06 and swapping cures for cancer.
    2:21:09 And, you know, in the movie, like they send a Carl Sagan guy in and a linguist.
    2:21:12 And the Carl Sagan guy fails immediately, right?
    2:21:15 And it’s the linguist who understands that language is actually embodied.
    2:21:17 Language is not just something that happens in your head.
    2:21:19 It’s actually the whole experience.
    2:21:20 And she’s the one who breaks through.
2:21:26 And it just points to the idea of, um, how utterly different the cognitive
2:21:29 structures of a different species could be.
    2:21:33 So somehow we have to figure out how to think about it, but be so careful of our
2:21:37 biases, or figure out, like, a systematic way to break through our biases and not
2:21:39 just tell stories, make science fiction movies.
    2:21:40 You know what I mean?
    2:21:41 Yeah.
    2:21:42 Yeah.
    2:21:46 Speaking of biases, do you think aliens have visited earth?
    2:21:49 You’ve mentioned that they could have visited and started civilizations.
    2:21:51 I wouldn’t, we wouldn’t even know about it.
    2:21:55 If it was a hundred million years ago, how could we even begin to answer this
    2:21:56 question?
    2:21:58 Whether they’ve got to look, got to look, got to figure out ways to look.
    2:22:02 So I, you know, I mean, I, I don’t put it, it’s not high on my list of, you know,
    2:22:07 things that I’m, I think are probable, but it certainly it needs to be explored.
    2:22:09 You know, and unless you look, you never know.
    2:22:13 So looking on the moon, look at, where would we find if, if aliens had passed
    2:22:17 through the solar system anytime in the last three billion years, where might we
    2:22:18 find artifacts?
2:22:20 Where might artifacts still be around? Earth, probably not,
2:22:23 because of weathering and resurfacing.
    2:22:27 Um, the moon’s a good place, uh, certain kinds of orbits, you know, maybe they
    2:22:29 parked a probe in an orbit that was stable.
    2:22:31 So you got to figure out which orbits actually you could put something there
    2:22:33 and it’ll last for a billion years.
    2:22:38 So those are the kind of questions I don’t, like I said, I don’t, it’s not high
    2:22:41 on my list of thinking this could happen, but it could happen.
    2:22:43 I certainly can’t, unless you look, you don’t know.
    2:22:48 What about speaking of biases, what about if aliens visiting earth is the
    2:22:53 elephant in the room, meaning like, uh, the potential of aliens say seeding life on earth?
    2:22:56 Uh, you mean like in that directed panspermia?
    2:23:01 Directed panspermia or seeding some aspect of the evolution?
    2:23:03 Like 2001.
    2:23:04 Yeah.
    2:23:04 Yeah.
    2:23:10 Uh, you know, it’s great story, but you know, always with Occam’s razor or whatever
    2:23:15 with science, if I can, if I can answer that question without that extra, very
    2:23:18 detailed, uh, hypothesis, then I should.
    2:23:22 And you know, the idea that evolution is a natural process, that’s what I would
    2:23:23 go for first, right?
    2:23:26 There’s, there’s, that just seems, it’s so much easier to do it.
2:23:31 That way than adding, you know, sort of, cause it's kind of a deus ex machina thing
    2:23:33 of like, oh, then the aliens came down and they solved that problem that you’re
    2:23:36 trying to solve by just coming down and putting their finger on the scales.
    2:23:42 So to you, the origin of life is, uh, is a pretty simple thing that doesn’t
    2:23:43 require an alien.
    2:23:46 I wouldn’t say that it’s not a simple thing, but it doesn’t, you know, putting,
    2:23:50 I think, cause you know, all you’re doing is kicking the can down the road, right?
    2:23:52 The aliens, the aliens formed, right?
    2:23:56 So you’re just saying like, all right, I’m just kicking the can down the road
    2:23:56 to the aliens.
2:23:59 How did they, how did, what was their abiogenesis event?
    2:24:02 Well, so from a different perspective, I’m just saying, it seems to me that
    2:24:06 there’s obviously advanced civilizations everywhere throughout the galaxy and
    2:24:08 through the universe from the Drake equation perspective.
    2:24:11 And then if I was an alien, what would I do?
    2:24:19 You know, I’ve gotten a chance to learn about the uncontacted tribes in the Amazon.
    2:24:23 I recently went to the Amazon, you get to understand how they function and how
    2:24:29 the humans in the Amazon, they’re in contact with the civilized world, how
    2:24:30 they interact with the uncontacted tribes.
    2:24:35 First of all, the uncontacted tribes are very violent towards the outside world,
    2:24:37 but everybody else try to stay away from them.
    2:24:40 They try to kind of protect them, don’t talk about them or don’t, don’t talk
    2:24:42 about their location and all this kind of stuff.
    2:24:47 And I’ve begun to internalize and understand that perspective of why you’re doing
    2:24:47 that.
    2:24:51 And if I was an alien civilization, if I probably would be doing a similar kind
2:24:55 of thing, and of course, there's always the teenager or the troll who's going to
2:24:59 start messing with the stuff, or the scientists, you know, right.
    2:25:03 And so it’s not from our perspective.
    2:25:03 Yes.
    2:25:08 And if you’re in the Truman show, like Occam’s razor, but like also the Occam’s
    2:25:15 razor from the perspective of the alien civilization, we have to have the humility
    2:25:19 to understand that that interaction will be extremely difficult to detect.
    2:25:20 That won’t be obvious.
    2:25:21 Right.
    2:25:24 I understand the logic of what you’re saying, but the problem for me with that
    2:25:28 is that right there, the first you have to assume that alien civilizations are
    2:25:31 common, which I’m not sure about it, that most of them may be dead.
    2:25:34 Or they’re not yet still, you know, like I, while I think that life is common.
    2:25:35 And again, this is just my biases.
    2:25:35 Right.
    2:25:41 So now the problem is how do we sort out sort of, you know, the, the, the biases
    2:25:47 we’re bringing or the assumptions we’re bringing in from, you know, from the, the
    2:25:50 sort of causal chain that comes out of that.
    2:25:53 I would first want to try and do this without it.
    2:25:55 Like, you know, if we’re looking at the origin of life or the evolution of life
    2:26:00 on earth, I’d want to do it just on its own without asking for this other layer.
    2:26:05 Because it requires a bunch of these other assumptions, which also have
    2:26:07 their own sort of breaking of causal chains.
    2:26:11 Cause I don’t really like the idea that when you ask, what would you do
    2:26:12 if you were an alien?
    2:26:17 But again, like alien minds could be so unbelievably different, right?
    2:26:20 That they wouldn’t even recognize the question you just posed, right?
    2:26:23 Cause it just like, you know, we’re very much, we have a very particular
    2:26:27 kind of cognitive structure, you know, and, and we’re very governed by, you know,
    2:26:31 even if you went and talked to, this is an interesting thing to think about, you
    2:26:34 know, if I could suddenly magically appear a hundred thousand years ago and
    2:26:37 talk to a hunter-gatherer about their worldview and their motivations, you
    2:26:41 know, I might find something that’s like, there were no resemblance to things
    2:26:44 that I think are sort of, oh, that’s what naturally humans do.
    2:26:45 Well, let me, let me ask you this question.
    2:26:47 Let’s, let’s together do the thought experiment.
    2:26:52 If we create a time machine that allows us to travel back and talk to them or
    2:26:59 we discover maybe a primitive alien civilization on a nearby star system,
    2:27:01 what, what would we do?
    2:27:01 Yeah.
    2:27:03 I think that’s a great question.
    2:27:05 I mean, so, you know, it’s interesting how that even brings up the ethical
    2:27:06 questions, right?
    2:27:10 Let’s say that, you know, would we, we’d have to first sort of sort out what
    2:27:14 are the consequences for them and what do we feel our ethical responsibilities are
    2:27:17 to them and also, sorry, from a capitalist perspective.
    2:27:20 What are we to gain from this interaction?
    2:27:21 Right, right, right.
    2:27:23 You look at the way the missionaries, you know, missionaries had these
    2:27:27 interactions because they thought converting them to whatever religion they
    2:27:29 were, you know, was the most important.
    2:27:30 That’s what the gain was.
    2:27:34 So from our perspective, I mean, we’d have to sort that out.
    2:27:40 I think given, you know, if we’re doing this thought experiment, we are curious.
    2:27:42 And I think eventually we’d want to reach out to them.
    2:27:47 Now, I think when you say we, let’s start with the people in this room, right?
    2:27:52 But there is, I wonder who the dominant forces are in the world, because I think
    2:27:58 there’s a lot of people, the military, they will probably move first so they
    2:28:03 can steal whatever advantage they can from this new discovery so they can
    2:28:05 hurt China or China hurt America.
    2:28:07 That’s one perspective.
    2:28:12 Then there’s the, the capitalist who will see like how the benefit of the
    2:28:15 costs here and how can I make money off of this?
    2:28:16 There’s opportunity here.
    2:28:18 There’s gold in them hills.
2:28:22 And I wonder, I think the scientists are just not going to, unlike the movies,
2:28:24 we're not going to get much say.
2:28:26 They're going to be like, hey guys, wait, wait a minute.
    2:28:28 They would engage probably.
    2:28:32 I mean, it’s just as, as a human society as we are now, we would engage.
    2:28:35 And we would be detectable, I think.
    2:28:36 In our engagement.
    2:28:37 In our engagement.
    2:28:39 Yeah, yeah, probably.
    2:28:44 So using that trivial bias logic, I just, it just feels like aliens would need
    2:28:46 to be engaging in a very obvious way.
    2:28:48 Yeah, yeah, yeah.
2:28:53 This brings up that old Fermi paradox for me.
    2:28:56 Uh, what do you make of all the UFO sightings?
    2:29:03 I am all in favor of an open, agnostic, you know, transparent scientific
    2:29:05 investigation of UFOs and UAPs.
    2:29:12 But the idea that, that there’s any data that we have that links UFOs and
    2:29:15 UAPs to non-human technology, I just think they’re the standards.
    2:29:20 They just, none of what is claimed to be the data lives up to the standards of
    2:29:20 evidence.
    2:29:22 So let’s just take a moment on that idea of standards of evidence because I’ve
    2:29:25 made a big deal about this both, you know, in the book and elsewhere.
    2:29:26 Whenever I talk about this.
    2:29:30 So what people have to understand about science is we are really scientists.
    2:29:32 We are really mean to each other.
    2:29:35 We are brutal to each other because we have this thing that we call standards
    2:29:39 of evidence and it’s the idea of like, you have a piece of evidence that you
    2:29:44 want to link to a claim and, you know, under what conditions can you say, oh,
    2:29:48 look, I’ve got evidence of, you know, this claim X, Y and C.
    2:29:53 And in science, we are so mean to each other about whether or not that piece
    2:29:55 of evidence lives up to the standards that we have.
    2:29:59 And we spent 400 years determining what those standards are.
    2:30:02 Um, and that is why cell phones work, right?
    2:30:07 If you didn’t have super rigorous standards about, you know, what you think
    2:30:10 that’s, oh, this little antenna, I’ve invented a new kind of antenna that I
    2:30:13 can slip into the cell phone and I, you know, I can show you that it works.
    2:30:15 You know, if you didn’t have these standards, you know, you did every
    2:30:17 cell phone would be a brick, right?
2:30:21 And when it comes to UFOs and UAPs, the evidence you have and the claim that
    2:30:26 though this shows that, you know, we are being visited by non-human, uh,
2:30:31 advanced civilization just doesn't even come close to the same standards
2:30:34 that I'm going to have to obey or, whatever, live under.
2:30:39 If my team, you know, the group I work with, if one of them says, look, we've
2:30:42 discovered, he wants to announce that, oh, we've discovered, uh, a techno
2:30:44 signature on an alien planet.
    2:30:47 We’re going to get shredded as we expect to be.
    2:30:48 We expect to be beaten up.
    2:30:52 And, you know, the UAP UFO community should expect the same thing.
    2:30:56 You don’t get, you know, you don’t get a pass because it’s a really cool topic.
    2:30:57 So that’s where I am right now.
    2:31:01 I just don’t think any of the evidence is even close to anything that
    2:31:02 could support that claim.
    2:31:07 Well, I generally assign a lot of value to anecdotal evidence from pilots.
    2:31:13 Not scientific value, but just like, it’s always nice to get anecdotal
    2:31:15 evidence as a first step.
    2:31:17 I was like, hmm, I wonder if there’s something there.
    2:31:20 But unfortunately with this topic, there’s so much excitement around it.
    2:31:24 There’s a lot of people that are, uh, basically trying to make money off of it.
    2:31:26 There’s hoaxes, all this kind of stuff.
    2:31:29 So even, even if there’s some signal, there’s just so much noise.
    2:31:30 It’s very difficult to operate with.
    2:31:33 So how do we get better signal?
    2:31:40 So you’ve talked about sort of, if we wanted to really search for UFOs on earth.
    2:31:40 Right.
    2:31:44 And, uh, maybe detect things like weird physics.
    2:31:47 What kind of instruments would we be using?
    2:31:47 Yeah.
    2:31:51 So, uh, you know, in the book, I talked about the idea of this is really stupid,
    2:31:54 but you know, you want to look up, you want to look down and you want to look
    2:31:54 all around.
    2:31:55 I think that’s brilliant.
    2:31:58 I mean, that’s, it’s simple, not stupid.
    2:31:59 It’s like literally.
    2:32:03 So you want to do ground based detectors that, you know, upward looking
    2:32:06 ground based sectors of the kind we’re already building for meteors, right?
    2:32:07 For tracking meteors.
    2:32:10 You want to have space based detectors, put them on satellites.
    2:32:12 This is what the NASA UAP panel was thinking about.
    2:32:16 And then probably on pile, you know, all, we have lots of people in the sky.
    2:32:21 There should be detectors, uh, on the planes or at least, you know, some
    2:32:24 kind of alert system that if some pilot says, Oh, look, I’m seeing something.
    2:32:25 I don’t understand.
    2:32:29 Boop, presses the red button and that triggers the ground based and, uh,
    2:32:34 is space based, um, uh, data collectors and the data collectors themselves.
    2:32:36 This is something that people really don’t understand and it’s so important.
    2:32:41 In order to actually do science with anything, the data you have, you have
    2:32:45 to understand where it came from, like down to the, you know, the nth degree.
    2:32:51 You have to know how that camera behaves in a bunch of different wavelengths.
    2:32:52 You have to have characterized that.
2:32:56 You have to know what the software does, what the limits of the software
2:32:59 are, possibly have to know what happened to the camera, was it refurbished
2:33:04 recently, um, in, you know, in every spectral wavelength, uh, in all of its
    2:33:08 data, um, collection and, and, and processing, you have to know all of those
    2:33:11 steps and having them all characterized because especially if you want to claim
    2:33:15 like, Oh my God, I saw something take a right hand turn at Mach 500, right?
    2:33:19 You better have all of that nailed down before you make that kind of claim.
    2:33:23 So we have to have characterized detectors looking up, down and maybe on, on
    2:33:24 planes themselves.
    2:33:26 We need a rational search strategy.
    2:33:29 So let’s say you want to lay out these, uh, ground based detectors.
    2:33:30 Where do you put them?
    2:33:30 Right?
    2:33:32 There’s only so much money in the world.
    2:33:35 So, you know, do you want to put them near places where you’ve seen a lot of
    2:33:39 things beforehand, or do you want to, you know, have them try and do a, a sparse
    2:33:40 coverage of the entire country?
2:33:44 Um, and then you need the, uh, the data analysis, right?
    2:33:47 You’re going to have so much data, so many false positives or, you know,
    2:33:51 false triggering that you need a way of sorting through enormous amounts of
    2:33:53 data and figuring out what you’re going to throw out and what you’re going to
    2:33:53 keep.
    2:33:55 And all of these things we’re used to doing in other scientific
    2:33:56 enterprises.
    2:34:00 And without that, if we don’t do that, we’re going to be having the same damn
    2:34:03 argument about these things for, you know, the next hundred years.
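(One concrete way that false-positive flood gets tamed is to require coincident triggers from independent, well-characterized sensors; a minimal sketch with made-up trigger rates.)

```python
# Hypothetical numbers, purely to illustrate why demanding coincident triggers
# from independent sensors crushes the false-alarm rate.
r_camera = 50 / 86400.0  # assumed: 50 spurious camera triggers per day, in Hz
r_radar = 20 / 86400.0   # assumed: 20 spurious radar triggers per day, in Hz
window = 2.0             # seconds within which two triggers count as one event

chance_coincidences_per_sec = r_camera * r_radar * window
chance_coincidences_per_year = chance_coincidences_per_sec * 86400 * 365

print(f"Single-sensor false alarms per year: {(50 + 20) * 365}")
print(f"Chance coincidences per year: {chance_coincidences_per_year:.1f}")
# Tens of thousands of single-sensor alarms collapse to a handful of
# coincidences actually worth a human's attention.
```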
    2:34:09 But if I asked you, I give you a trillion dollars and ask you to allocate to one
    2:34:16 place, looking out, steady, or looking at earth, what should you allocate?
    2:34:18 Oh God, looking out, looking out, because that’s the bet.
    2:34:21 You know, as I always like to say, here’s my, my codification of this.
    2:34:24 If you said, Hey, Adam, I’d like to find some Nebraskans.
    2:34:27 And I said, Oh good, let’s go to the Himalayas.
    2:34:29 You know, you’d be like, why am I going there?
2:34:32 I’m like, well, you know, maybe there’s some, you know,
2:34:33 some Nebraskans in the Himalayas.
2:34:34 You’d say no, no, let’s go to Nebraska.
    2:34:40 If we’re looking for aliens, why don’t we look on alien planets where they live?
    2:34:44 Cause that’s, we have that technology now, as opposed to the, you know, the, the
    2:34:48 bucket of assumptions that you have to come up with in order to say, like, Oh,
    2:34:48 they’re here right now.
    2:34:50 You know, they just happen to be here right now.
    2:34:53 And also the very important thing, I called this the high beam argument.
    2:34:57 You know, to deal with the UFO stuff, you have to deal with all of, you have to
    2:35:00 answer these weird irrational things that are happening.
    2:35:05 Like, okay, there’s an advanced civilization that is visiting earth regularly.
    2:35:07 They don’t want to be detected.
    2:35:11 They’ve got super powerful technology, but they really suck at using it because
    2:35:14 they, we keep seeing them, we keep seeing them, but then they disappear.
    2:35:14 Right.
    2:35:19 I mean, explain to me what rational world that works under.
    2:35:22 It’s like, you know, so there’s that whole sort of argument you’ve got to
    2:35:27 explain, like why, if they want to stay hidden, are they so bad at it?
    2:35:31 So, you know, that’s why I take that level of difficulty.
    2:35:33 And then I put it on top of where should I look?
    2:35:37 I should look at the, the, you know, I should look at where they, where they’re
2:35:41 from. That makes me want to look at, to do the telescopic stuff.
    2:35:41 Yeah.
    2:35:48 I think the more likely explanation is either the sensors are not working correctly
    2:35:51 or it’s a secret military technology being tested.
    2:35:51 Absolutely.
2:35:55 I mean, listen, I, that’s why, again, I think UAP, you know,
2:35:58 absolutely, UAP should be studied scientifically.
    2:36:02 Um, uh, but if I had to make a bet and it’s just a bet, I would say this is,
2:36:05 you know, this is peer state adversary stuff.
    2:36:10 When I did, I did a, a New York Times op-ed for this in 2021, which blew up.
    2:36:13 And, um, and so, you know, I had a lot of, you know, people talking to me.
2:36:16 While I was doing that, I sort of looked at the signals intelligence people,
2:36:21 the SIGINT and the ELINT, electronic intelligence, communities.
    2:36:23 And what they were saying about, you know, the New York Times articles
    2:36:27 and the, the various videos, and really none of them were talking about UFOs.
2:36:29 They were all talking about, you know, peer state.
2:36:31 That’s why I learned the word peer state adversaries.
    2:36:35 How like even simple drone technologies, you can, you know, and you want to,
    2:36:39 you purposely want to do this, you want to, um, fake, you know, signals into
    2:36:42 the electronics, uh, of their adversary.
    2:36:46 So they crank it up so then you can just soak up all the electromagnetic
    2:36:49 radiation and know exactly what those advanced radars can do.
    2:36:52 That said, I’m not saying that that’s what this is.
2:36:58 If I was the head of an alien civilization and I chose to
2:37:03 minimize the amount of contact I’m making, I would try to figure out what
2:37:07 would these humans like to see?
    2:37:13 That’s why like the big heads in the humanoid form, like, I mean, that’s
    2:37:15 kind of like how would I would approach communication.
    2:37:18 If I, if I was much more intelligent, I would observe them enough.
2:37:23 It’s like, all right, if I wanted to communicate with an ant colony, I
2:37:26 would observe it long enough to see what the basic elements of communication are.
    2:37:27 Yeah.
    2:37:27 Yeah.
    2:37:31 And maybe I would do a trivial thing, like do a, like a fake ant.
    2:37:31 Right.
    2:37:32 A robot ant.
    2:37:35 A robot ant, but then it’s not enough to just do a robot ant.
    2:37:38 You’d have to do a robot ant that like moves in the way they do.
    2:37:42 And maybe aliens are just shitty at doing the robot ants.
    2:37:45 But no, I do sort of, I just wanted to make the case for that.
    2:37:49 This is the plot, actually, of a great science fiction book called Eon by Greg
    2:37:52 Bear, and the idea was like these sort of, you know, this, this is actually
2:37:58 where I first, I got, I became sort of more than agnostic, anti-METI, because
    2:38:01 the idea is that, yes, our aliens come, they, you know, they sort of make their
    2:38:04 arrival, and really their point is to get rid of us.
    2:38:06 It’s the, it’s the dark forest hypothesis.
    2:38:10 And what they do is they sort of literally the way they present themselves is
    2:38:14 in this sort of classic UFO thing, and they do it and they, you know, they arrive
    2:38:16 at the, this was during the Soviet Union, they arrive at the USSR, they arrive
    2:38:20 in China, and they’re kind of faking us out so that we never can organize
    2:38:24 ourselves against, so it was really, they did exactly kind of what you’re
    2:38:27 talking about, but for nefarious purposes.
    2:38:28 Okay.
2:38:29 Let me ask the pothead question.
    2:38:32 Another, yet another, the whole conversation.
    2:38:33 I’m sorry.
    2:38:34 Boggs before breakfast.
2:38:37 It’s, it’s science and pothead questions back and forth.
    2:38:44 Okay, what if aliens take a form that’s unlike what we kind of traditionally
    2:38:50 envision in analyzing physical objects?
    2:38:52 What if they take the form of, say, ideas?
2:38:58 What if, real pothead here, it’s consciousness itself, like the subjective
2:39:04 experience is an alien being? Maybe ideas is an easier one to visualize,
2:39:07 because we can think of ideas as entities traveling from human to human.
    2:39:11 When, you know, I made the claim that the most important, that finding
    2:39:14 life, any kind of life would be the most important discovery in human history.
    2:39:18 And one of the reasons is, again, as I said, that, you know, life, if we’re not
    2:39:22 an accident and there’s other life, then there’s probably lots of other life.
    2:39:27 And because the most significant thing about life is it can innovate, right?
2:39:33 If I give you a star and, you know, tell you the mass and the composition,
2:39:35 you can basically pretty much use the laws of physics to tell exactly what’s
    2:39:37 going to happen to that star over its entire lifetime.
    2:39:40 Maybe not the little tiny details, but overall, it’s going to be a white dwarf.
2:39:41 It’s going to be a black hole, end of story.
    2:39:44 If I gave you a single cell and said what’s going to happen in a few billion
    2:39:48 years, you’d never be able to predict a giant rabbit that can punch you in the face,
    2:39:49 right, a kangaroo.
    2:39:53 So life has this possibility of innovating, of being creative.
    2:39:57 So here’s, so what it means is, and that’s a part of it, kind of a fundamental
    2:39:59 definition of what it means to be alive.
    2:40:00 It goes past itself.
    2:40:07 So give life enough time, you know, and what are the, what are the
    2:40:07 end results?
    2:40:09 Like, you know, there’s, there’s, you know, like, that’s why I love
    2:40:10 science fiction so much.
    2:40:15 It does, at some point does life reach a point where it climbs into the laws
    2:40:19 of physics itself, it becomes the laws of physics or, you know, these, these
    2:40:23 sort of lie at the, the extreme limits of thinking about what, what we mean by
    2:40:27 reality, what we mean by, you know, uh, uh, experience.
2:42:30 Um, but I’m not sure there’s much we can do with them scientifically,
    2:40:33 but it, you know, they’re, they’re open-ended question about the open-ended
    2:40:37 nature of what it means to be alive and what life can do.
    2:40:42 Since you said it’s the biggest question, which is an interesting thought
    2:40:45 experiment, what is the biggest scientific question we can possibly answer?
    2:40:49 You know, some people might say about, like, what happened before the big
    2:40:52 bang, like some big physics questions about the universe.
    2:40:58 I can see the argument for, you know, how many alien civilizations, or if
    2:41:01 there’s other life out there, you want to speak to that a little bit?
    2:41:03 Like why, why is the, why is it?
    2:41:07 Is it the biggest question in your, why is it number one in your top five?
    2:41:08 I’ve evolved in this, right?
    2:41:10 You know, I started off as a theoretical physicist.
2:41:13 I went into, um, computational astrophysics and magnetohydrodynamics
    2:41:16 of star formation, but I always, you know, I was a philosophy minor.
    2:41:19 I always had the sort of bigger questions sort of floating around the back of my mind.
    2:41:24 And what I’ve come to now is the most important question in the, for physics
    2:41:25 is what is life?
    2:41:29 What the hell is the difference between a rock and a cell fundamentally?
    2:41:32 And what I really mean by this, and this is where I’m going to go non-traditional,
    2:41:36 um, is that really the fundamental question that is the, is agency?
    2:41:39 What does it mean to be an autonomous agent?
    2:41:41 How the hell does that happen?
    2:41:43 You know, it’s so, I’m not a reductionist.
    2:41:45 I’m not somebody who’s just like, well, you just put together enough chemicals
    2:41:47 and bing, bang, boom, and you know, it suddenly appears.
    2:41:54 There’s something that really is going to demand a reconception of what nature itself is.
    2:41:56 And so yeah, black holes are super cool.
    2:41:57 Cosmology is super cool.
2:42:04 But really this question of, of what is life, especially by viewing it from the inside,
    2:42:07 because it’s really about the verb to be, right?
    2:42:10 Really, what is the most, what is the most impressive philosophical question
    2:42:12 beyond science is the verb to be?
    2:42:15 What is, what is being, right?
    2:42:19 This is what Stephen Hawking said when he talked about what puts the fire in the equations.
    2:42:20 The fire, right?
    2:42:22 The fire is this, this presence.
    2:42:25 And this is where it touches things like, you know, whatever you want to say it,
    2:42:28 the sacred spirituality, whatever you want to talk about.
    2:42:31 My first book was about science and, and human spirituality.
    2:42:36 So it’s like, you know, so this question of life, what makes life as a physical system,
    2:42:42 you know, so different is, is to me much, because it’s, you know, that’s where being appears.
    2:42:45 Being doesn’t appear out there, right?
    2:42:47 The only place that ever appears to any of us is us.
    2:42:51 So, you know, I can do this kind of projection into this third person thing,
    2:42:53 but nobody ever has that, that God’s eye view.
    2:42:54 That’s a story we tell.
    2:43:00 This is where, you know, this between us is where the verb to be appears.
    2:43:07 So this is something that you write about in the blind spot, why science cannot ignore human experience,
    2:43:15 sort of trying to pull the fire into the process of science.
    2:43:18 And it’s a kind of critique of materialism.
    2:43:20 Can you explain the main thesis of this book?
    2:43:20 Yeah.
    2:43:24 So the idea of the blind spot is that there is this thing.
    2:43:27 That is central to science.
    2:43:29 So the blind, we’re using the blind spot as a metaphor, right?
    2:43:34 So the eye has an optic nerve and the optic nerve is what allows vision to happen.
    2:43:38 So you can’t have vision without the optic nerve, but actually you’re blind to the optic nerve.
    2:43:41 There’s a little hole in your vision where the optic nerve is.
    2:43:45 And what we’re saying is that science has something like this.
2:43:49 That there is something without which science would not be possible.
    2:43:51 But that science, the way it’s been configured.
    2:43:55 And actually, when we mean the blind spot, I’ll get into exactly what I mean, what it is.
    2:43:57 But it’s not really science.
    2:44:00 It is a set of ideas that got glued on to science.
    2:44:03 It’s a metaphysics that got glued on to science.
    2:44:06 And so what is that thing that is, what is the blind spot?
    2:44:07 It’s experience.
    2:44:09 It is presence.
    2:44:12 And by experience, people have to be very careful because I’m not talking about being an observer.
    2:44:15 It’s the, you know, there’s lots of words for it.
    2:44:16 There’s direct experience.
    2:44:23 There is presence being the life world within the philosophy called phenomenology.
    2:44:24 There’s the life world.
    2:44:28 It’s this sort of raw presence that you can’t get away from until you die.
    2:44:32 And then who the hell knows, you know, that like, you know, as long as you’re around, it’s there.
    2:44:35 And what we’re saying is that that is the way to say this.
    2:44:41 That is the precondition for the possibility of science.
    2:44:47 And the whole nature of science, the way it has evolved is that it purposely pushed that out.
    2:44:49 It pushed that out so it could make progress.
    2:44:52 And that’s fine for a certain class of problems.
    2:44:58 But when we try to answer, when we try and go deeper, there’s a whole other class of problems.
    2:45:03 The nature of consciousness, the nature of time, quantum mechanics, that comes back to bite us.
    2:45:09 And that if we don’t learn how to take, understand that that is always the background,
    2:45:11 that experience is always the background.
2:45:17 Then we just end up with these paradoxes that require this intellectual yoga to get out of.
    2:45:20 I think you give a bunch of examples of that, like looking at temperature as a number.
    2:45:23 There’s a very sort of objective scientific way of looking at that.
    2:45:25 And then there’s the experience of the temperature.
2:45:29 And that’s how you get to what we call the parable of temperature.
    2:45:30 So what is the blind spot?
    2:45:32 We use the term, it’s a constellation.
    2:45:33 It’s not just materialism.
    2:45:37 It’s a constellation of ideas that are all really sort of philosophical views.
    2:45:42 They’re not what science says, but because of the evolution of the history of science and culture,
    2:45:44 they got like pin the tail on the donkey.
    2:45:48 They were sort of pinned on and to tell us that this is what science says.
    2:45:49 So what is it?
    2:45:55 One is reductionism, that you are nothing but your nerve cells, which are nothing but the chemistry,
    2:45:58 which is nothing but, you know, all the way down to quarks.
    2:45:58 That’s it.
    2:45:59 So that’s reductionism.
    2:46:07 The objective frame that science gives us this God’s eye view, this third person view of the world to view the world from the outside.
    2:46:09 That that’s what science, you know, bequeaths to us, that view.
    2:46:14 Physicalism, that everything in the world is basically made of stuff.
    2:46:16 There’s nothing else to talk about, right?
    2:46:19 That that’s all there is and everything can be reduced to that.
    2:46:24 And then also the reification of mathematics, that mathematics is somehow more real than this.
    2:46:25 And there’s a bunch of other things.
    2:46:32 But all of these together, what they all do is they end up pushing experience out and saying experience is an epiphenomena.
    2:46:33 Consciousness.
    2:46:39 I don’t, I tend not to use the word consciousness because it’s, I think it gets, you know, it leads us in the wrong direction.
2:46:44 We should focus on experience because it’s kind of a verb, in a way, or it’s verb-like.
    2:46:53 So yeah, and that this, by being blind to that, we end up with these paradoxes and problems that really not only block science,
    2:46:56 but also have been detrimental to society as a whole, especially where we’re at right now.
    2:47:02 So you actually say that that from a perspective of detrimental society, that there’s a crisis of meaning.
    2:47:09 And then we respond to that in a way that’s counterproductive to these bigger questions, scientific questions.
    2:47:15 So the three ways, the three responses you mentioned is scientific triumphalism.
    2:47:20 And then on the other side is rejecting science completely, both on the left and the right.
    2:47:24 I think the postmodernist on the left and anti-establishment people on the right.
    2:47:28 And then just pseudoscience that kind of does this in between thing.
    2:47:32 Can you just speak to those responses and to the crisis of meaning?
    2:47:33 Right, right.
    2:47:39 So the crisis of meaning is that, you know, on the one hand, science wants to tell us that we’re insignificant.
    2:47:40 We’re not important.
    2:47:42 We’re just, you know, biological machines.
    2:47:46 And, you know, so we’re basically an insignificant part of the universe.
2:47:51 And on the other hand, we also find ourselves being completely significant in cosmology.
    2:47:56 We have to figure out how to look from the inside at cosmology.
    2:47:57 We’re always the observers.
    2:48:00 We’re at the center of this, you know, collapsing wave front of light.
    2:48:03 You know, in quantum mechanics, it really comes in.
    2:48:06 It comes in, you know, the measurement problem just puts us front and center.
    2:48:11 We’ve spent a hundred, some people spent a hundred years trying to ignore the measurement part of the measurement problem.
    2:48:13 So on the one hand, we’re insignificant.
    2:48:14 And on the other hand, we’re central.
    2:48:15 So which one is it, right?
    2:48:21 And so this all comes from not understanding actually the foundational role of experience.
    2:48:27 This inability, we can’t, it’s, we can’t do science without already being present in the world.
2:48:31 We can’t reduce what happens in science to some sort of formal system.
2:48:36 A lot of it is about, we love our formal systems, you know, our mathematics, and we’re substituting them for experience.
    2:48:40 That’s one of the things that we, there’s two philosophers we really like who are heroes.
2:48:45 One is Husserl, who was a mathematician who invented phenomenology.
    2:48:51 And the other is Whitehead, who was one of the greatest mathematicians of the 20th century.
2:48:54 And Husserl came up with this idea of the surreptitious substitution.
    2:49:01 Part of the blind spot is substituting a formal system, a calculus of, you know, data for actual experience.
2:49:06 That that’s more important than experience. And so let me, before I go to those three responses,
2:49:10 just do the parable of temperature, because I think it’ll help people understand what we mean.
    2:49:14 So think about degree Celsius, right?
    2:49:19 We kind of have in the modern scientific culture we live in, we think like, oh, yeah, degree Celsius.
    2:49:24 They’re out there, the universe, it’s, you know, the molecular cloud in space is 10 degrees, you know, Kelvin.
    2:49:32 The way we got there is we’ve forgotten how that idea is rooted in experience, right?
2:49:37 We started off with science by, we had the experience, the subjective experience of hot and cold.
    2:49:40 I feel hot, I feel cold, you feel hot, you feel cold.
2:49:48 Science was this process of trying to extract from those experiences what the philosopher Michel Bitbol calls the structural invariants.
    2:49:51 The things that, like, we could both kind of do agree on.
2:50:03 So, you know, we figured out, like, oh, we could make a graduated little cylinder that’s got mercury in it, and that, you know, hot things will be higher on that graduated cylinder, cold things will be lower.
    2:50:07 And we can both kind of figure out what we’re going to agree on our standards for that.
    2:50:09 And then we have thermometry, yay.
    2:50:16 We have a way of sort of like having a structural invariant of this sort of very personal experience of hot or cold.
    2:50:19 And then from that, we can come up with thermodynamics, et cetera.
    2:50:28 And then we end up at the bottom, you know, at the end of that with this idea of, like, every day I wake up and I check my phone and I’m like, oh, it’s going to be, you know, 60 degrees out, great.
    2:50:42 And we start thinking that 60 degrees is more real than hot and cold, that thermodynamics, the whole formal structure of thermodynamics is more real than the basic experience of hot and cold that it came from, you know.
    2:50:50 It required that bodily experience that also, not just me, you, I have to tell you, you know, it’s part of my communication with you, cold today, isn’t it?
    2:50:50 Right.
    2:51:01 That from that basic, irreducible experience of being in the world, you know, with everything that involves, I developed degrees Celsius, but then I forgot about it.
    2:51:02 I forgot the experience.
    2:51:04 So that’s called the amnesia of experience.
    2:51:18 So that’s what we mean by the, you know, how the blind spot emerges, how we end up, how science purposely pushes experience out of the way so it can make progress, but then it forgets that experience was important.
    2:51:19 So where does this show up?
    2:51:23 Why is this, you know, what are the responses to trying to get this back in?
    2:51:25 And where, where, where does this crisis of meaning emerge?
    2:51:31 So scientific triumphalism is the idea that only, the only thing that’s true for us are scientific truths, right?
    2:51:41 Unless it can be codified in a formal system and represented as data, you know, captured in some kind of scientific causal network, it doesn’t even exist, right?
2:51:47 And anything else, anything that can’t be formalized in that way, is an epiphenomenon.
    2:51:48 It’s not real.
2:51:59 So, you know, scientific triumphalism is this response to the, you know, I could call it the mystery, the weirdness of experience, by kind of just ignoring it completely.
    2:52:08 So there’s no other truth, you know, art, music, you know, human spirituality, it’s all actually reducible just to neuro, you know, neural correlates.
    2:52:11 So that’s one way that it’s been dealt with.
2:52:12 The other way is this sort of rejection, right?
2:52:23 You’ve got, on the postmodern, you know, academic left, this thing like science is just a game, you know, it’s just a game that the powerful come up with, which is also not true.
2:52:27 Science is totally potent and requires an account of what is happening.
    2:52:31 So that’s another way to push sort of science away or respond to it.
    2:52:42 The denial, science denial that happens, that’s also another way of sort of, you know, not understanding the balance that science is trying, that we need to establish with experience.
    2:52:53 And then there’s just pseudoscience, which wants to sort of say like, oh, you know, the new age movement or whatever, which wants to have, you know, wants to deal with experience by kind of elevating it in this weird pseudo spiritual way.
    2:52:56 Or, you know, it said that doesn’t have the rigor of science.
    2:53:02 So, you know, all of these ways, all of these responses, we have this difficulty about experience.
    2:53:07 We need to understand how experience fits into the web of meaning.
    2:53:11 And we don’t really have an accurate, we don’t have a good way of doing it yet.
    2:53:19 And the point of the book was to identify very clearly how the problem manifests, what the problem is, and what its effects are in the various sciences.
    2:53:26 And by the way, we should mention that at least the first two responses, they kind of feed each other.
    2:53:40 There’s a, just to observe the scientific community, those who sort of gravitate a little bit towards the scientific triumphalism, they, there’s an arrogance that builds in the human soul.
    2:53:49 I mean, it has to do with PhDs, it has to do with sitting on an academic throne, all of those things, and the human nature with the egos and so on, it builds.
    2:53:52 And of course that, nobody likes arrogance.
    2:54:01 And so the, those that reject science, the arrogance is fuel for the people that reject science, which just goes back, and it’s just, is this divide that builds.
    2:54:05 Yeah, no, and that was a problem like when you saw, so like I said, you know, my first book was about science and human spirituality.
    2:54:13 So I was trying to say that like, you know, science is actually, if we look at what happens in human spirituality, not religion, religion is about politics, right?
2:54:20 But about, you know, for the entire history of the species, we’ve, we’ve had this experience of, for lack of a better word, the sacredness.
2:54:24 I’m not connecting this to God or anything, I’m just saying this experience of, like, the more.
    2:54:34 And then, you know, with the new atheist movement, you’ve got people saying that like, anybody who feels that is an idiot, you know, they just can’t handle the hardcore science.
    2:54:46 When in fact, their views of the world are so denuded of it, they can’t even see the role that experience plays in how they came up with their formal systems, you know, and experience fundamentally is weird, you know, mysterious.
    2:54:50 It’s like, it’s, it’s, you know, kind of goes down forever in some sense, there is always more.
    2:55:01 So yeah, that arrogance then, just if you’re telling everybody who’s not hardcore enough to do the, you know, standard model of cosmology, that they’re idiots, that’s not going to bode well for your, you know, the advance of your project.
2:55:19 So you’re proposing, at least, to consider the idea that experience is fundamental, that experience is not just an illusion that emerges from a set of quarks, that there could be something about the conscious experience of the world that is at the core of reality.
    2:55:20 Yeah, but I wouldn’t do it.
    2:55:24 I wouldn’t, because, you know, there’s panpsychism, right, which is all the way there.
    2:55:24 Yeah.
    2:55:27 Panpsychism is like, that’s literally one of the laws of physics.
    2:55:28 Right, right.
    2:55:36 But see, what all those do is like, just the idea of, say, like, physicalism versus idealism, which are kind of the two philosophical schools you can go with.
    2:55:38 Physicalism says, all that exists is physical.
    2:55:40 Idealism says, all that exists is mind.
    2:55:48 We’re actually saying, look, both of these, to take either of those positions is already to project out into that third person view, right?
    2:55:53 And that third person view, we want to really emphasize, is a fiction.
    2:55:55 It’s a useful fiction when you’re doing science, right?
    2:56:01 If I want to do, like, you know, the Newtonian physics of billiard balls on a pool table, great.
    2:56:03 I don’t want to have to think about experience at all, right?
    2:56:13 But, you know, if I’m asking deeper questions, I can’t ignore the fact that there really is no third person view and that any story I tell about the world is coming from.
    2:56:19 It’s not just first person, but it’s literally, because I’m going to argue that experience always involves all of us.
2:56:30 Experience always originates out of a community, that, you know, you’re always telling those stories from the perspective of already existing, of already being in experience.
2:56:40 So whatever account we want to give of the world is going to have to take experience as being irreducible, as the irreducible starting point.
    2:56:42 So ultimately, like, we don’t have an answer.
    2:56:45 Like, that’s when people are like, well, what are you suggesting is your alternative?
    2:56:48 It’s like, look, that’s the good work of the next science to come.
    2:56:50 Well, our job was to point out the problem with this.
2:56:57 But what we would argue, and we’re thinking about the next book, is this is really going to require a new conception of nature, right?
    2:57:07 That doesn’t sort of jump right to that third person, that fictional third person view and somehow figures out how to do science, recognizing that it always starts from experience.
    2:57:14 It always starts from this field of experience or in phenomenology, the world is the life world that you’re embedded in.
    2:57:16 You can’t un-embed yourself from it.
    2:57:23 So how do you do, so one of the things that Whitehead said was, you know, we have to avoid the bifurcation of nature.
    2:57:30 And what he meant by that is the bifurcation into, like, sort of scientific concepts, wavelength, you know, think about, like, the seeing a sunset.
    2:57:38 You can say, like, oh, look, it’s just wavelengths, you know, and scattering particles and your experience of the redness, the actual experience of the redness and all the other things.
    2:57:39 It’s not just red.
    2:57:40 There’s no qualia.
    2:57:41 There’s no pure redness.
    2:57:44 Everything that’s happening in the experiential part is just an epiphenomenon.
    2:57:46 It’s just, you know, brain states, whatever.
    2:57:48 He said, you can’t do that.
    2:57:49 They’re just, they’re both real.
    2:57:53 They’re both accounts or both, they both need to be integrated.
    2:57:57 And so that required, I think, a really a different conception of what we mean by nature.
2:58:08 Is it something like incorporating in the physics, in the study of nature, the observer, the experiencing observer, or is that still also looking from a third-person view?
    2:58:10 I think that that’s what we have to figure out, right?
    2:58:13 And so actually, you know, a great place to think about this is quantum mechanics, right?
    2:58:22 Cause one of the things we’re arguing is like, look, in the chapter that I wrote on, cause it was, I wrote this with Evan Thompson, who’s a wonderful philosopher and Marcelo
    2:58:24 Gleiser, who’s a theoretical physicist.
    2:58:33 Um, when I was writing the chapter on the origin of the blind spot, like, you know, sort of what, how this emerged out of history, my subheader was like, well, it made sense at the time.
    2:58:39 Cause it did, you know, it really, there was a reason why people adopted this third person, God’s eye deterministic view.
    2:58:43 This view of sort of like, yeah, the perfect clockwork of the universe.
    2:58:44 Yeah, totally made sense.
    2:58:53 But by the time you got to the beginning of the 20th century, science itself was telling you like, and no place does this appear more than in quantum mechanics, right?
2:59:08 Quantum mechanics slams you with the measurement problem, you know, uh, the most important thing about quantum mechanics is you have a dynamical equation, the Schrödinger equation, in which, you know, like we talked about before, you put in initial conditions.
    2:59:13 And now you got a differential equation and you crank out the differential equation and it makes predictions for the future, right?
2:59:19 Exactly like Newtonian physics or its higher versions, the Lagrangian or Hamiltonian formulations.
    2:59:28 But then this other thing happens where it’s like, oh, by the way, as soon as you look at it, as soon as the measurement is made, I have a whole nother set of rules for you.
2:59:30 You know, that’s the Born, what we call the Born rule.
    2:59:35 And it was telling you right from the beginning that measurement matters, right?
    2:59:40 So when you’re asking like, how will we do this, quantum mechanics is actually pointing to how to do it.
    2:59:43 So, you know, there’s been all these different interpretations of the quantum mechanics.
    2:59:47 Many of them try to pretend the measurement problem isn’t there.
    2:59:59 Go to enormous lengths like the, the many worlds interpretation, literally inventing an infinite number of unobservable parallel universes to avoid the thing that quantum mechanics is telling them, which is that measurements matter.
3:00:07 And then you get something like QBism, which, and I’m going to advocate for it, is a new interpretation of quantum mechanics, which puts the Born rule at the center, right?
3:00:13 Instead of, like, focusing on the Schrödinger equation and the weird things that come out of it, like Schrödinger’s cat and all that other stuff,
3:00:16 it says, no, no, actually, the real mystery is the Born rule.
3:00:18 Let’s think about the Born rule.
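Written out, the two rules being contrasted are, in standard textbook form (nothing specific to QBism here): the Schrödinger equation, which evolves the state deterministically from its initial conditions, and the Born rule, which assigns probabilities only once a measurement is made.

```latex
% Unitary dynamics: the Schrodinger equation evolves the state deterministically.
i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle \;=\; \hat{H}\,|\psi(t)\rangle
% Measurement: the Born rule gives the probability of observing outcome a.
P(a) \;=\; \big|\langle a \,|\, \psi \rangle\big|^{2}
```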
    3:00:24 And like you said, that puts the agent, the agent and information at the center of the whole thing.
    3:00:27 So that’s not a thing you’re trying to get rid of.
    3:00:31 That’s the thing you’re trying to integrate at the center of the thing in quantum mechanics.
    3:00:43 It becomes super obvious, but maybe the same kind of thing should be incorporated in every layer of study of nature.
    3:00:43 Absolutely.
    3:00:44 That’s exactly it.
    3:00:47 So, you know, one of the things that’s really interesting to me, so I’m, you know, I have a project.
3:00:52 I’m part of a big project with Chris Fuchs and Jacques Pienaar on QBism.
    3:00:53 So I’ve been part of that.
    3:00:56 And what I’ve been amazed by is the language they use.
3:00:59 So what’s cool about QBism is it comes from quantum information theory.
    3:01:02 It’s a pretty modern version of thinking about quantum mechanics.
3:01:14 And it’s always about: you have an agent who makes an action on the world, and then the information they get from that action, through the experiment,
3:01:19 that action in the world, updates their priors, updates their, their, you know, their Bayesian beliefs.
3:01:20 That’s why it’s called QBism.
3:01:24 Quantum Bayesianism: they update on the information they’ve gotten from the world.
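The “updates their priors” step is ordinary Bayesian conditioning; as a generic statement (not the full QBist formalism), after an action yields data d, the agent’s beliefs over hypotheses h are revised as:

```latex
% Generic Bayesian update: prior P(h) becomes posterior P(h|d) after data d.
P(h \mid d) \;=\; \frac{P(d \mid h)\,P(h)}{\sum_{h'} P(d \mid h')\,P(h')}
```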
    3:01:40 Now, this turns out to be kind of the same language that we’re using in a project that’s about the physics of life, where we have a grant from the Templeton Foundation to look at semantic information and the role of semantic information in living systems like cells.
    3:01:48 So, you know, we have Shannon information, which is a probability distribution that tells you, you know, basically how much surprise there is in a, in a message.
    3:01:51 Semantic information focuses on meaning, right?
    3:02:04 Focuses on, in a very simple way, just like, what is, how much of the information that I’m, that the agent, you know, the critter is getting from the world actually has, helps it survive, right?
    3:02:06 That’s the most basic idea of meaning, right?
    3:02:08 We can get all philosophical about meaning, but this is it.
    3:02:10 Does it help me stay alive or not?
    3:02:26 And the whole question of agency and autonomy that occurs in this setting of just asking about how do cells move up a chemical gradient to get more food kind of has the same feel, the same, you know, sort of architecture as what’s going on in quantum mechanics.
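As a rough illustration of the distinction being drawn here, the sketch below computes Shannon surprise for an outcome and then a crude stand-in for semantic information: how much better a simulated cell does at climbing a chemical gradient with its sensor intact versus scrambled. The scenario and all numbers are invented, meant only to echo the idea that meaningful information is information that helps the critter stay viable.

```python
# Hedged toy example: Shannon surprise vs. a crude "semantic" measure.
# The 1-D gradient-climbing cell and all parameters are invented.
import math
import random

def shannon_surprise(p):
    """Surprise (in bits) of an outcome with probability p."""
    return -math.log2(p)

def run_cell(scramble_sensor=False, steps=50, trials=2000):
    """Fraction of trials in which a cell starting at x=0 reaches food at x=+10.
    Each step it senses which way the gradient increases; scrambling the sensor
    replaces that reading with a coin flip."""
    successes = 0
    for _ in range(trials):
        x = 0
        for _ in range(steps):
            sensed_uphill = +1                      # true gradient points toward +x
            if scramble_sensor:
                sensed_uphill = random.choice([-1, +1])
            # the cell mostly trusts its sensor, with a little motor noise
            x += sensed_uphill if random.random() < 0.8 else -sensed_uphill
            if x >= 10:
                successes += 1
                break
    return successes / trials

if __name__ == "__main__":
    print("surprise of a 1-in-8 outcome:", shannon_surprise(1 / 8), "bits")  # 3.0
    viability_intact = run_cell(scramble_sensor=False)
    viability_scrambled = run_cell(scramble_sensor=True)
    # The gap is a crude stand-in for semantic information: how much the
    # sensed signal actually helps the cell survive.
    print("semantic value of the signal ~", viability_intact - viability_scrambled)
```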
    3:02:50 So I think what you said is exactly it. How do we bring this sort of recognition that there’s always us, the agent or life, the agent interacting with the world and drawing it, both giving information and passing information back as a way of doing science, doing hardcore science with experiments, but never forgetting that agency, which also means experience in some sense, is at the center of the whole thing.
3:03:06 So you think that could be something like QBism, quantum Bayesianism, that creates a theory, like a Nobel Prize winning theory, sort of hardcore real theories that put the agent at the center.
    3:03:08 Yes, that’s what we’re looking for.
    3:03:10 I think that is really, that’s the exciting part.
    3:03:16 And it’s a move, you know, the scientific triumphalist thing says, you know, we understand why people love this.
    3:03:24 Like, I have these equations and these equations represent, you know, there’s this platonic idea that they are, you know, they exist eternally on their own.
    3:03:26 It’s kind of quasi religious, right?
    3:03:30 It’s sort of like somehow look, these equations are the, you’re reading the mind of God.
    3:03:37 But this other approach to me is just as exciting, because what you’re saying is there’s us and the world, they’re inseparable, right?
    3:03:52 It’s always us and the world. And what we’re now finding about is this kind of co-creation, this interaction, you know, between the agent and the world, such that these powerful laws of physics that need an account, like in no way am I saying these laws aren’t important.
    3:04:07 These laws are amazing, but they need an account, but not an account that strips, you know, that turns the experience, turns the agent into just a, you know, an epiphenomena that pushes the agent out and makes it seem as if the agent is not the most
    3:04:08 important part of the story.
    3:04:23 So if you pull on this thread and say there’s a whole discipline born of this, putting the agent as the primary thing in a theory and a physics theory, like how is it possible it just like breaks the whole thing open?
    3:04:42 So there’s this whole effort of, you know, unifying general relativity and quantum mechanics of like coming up with a theory of everything. What if these are like the tip of the iceberg? What if the agent thing is like really important?
    3:04:56 So, you know, listen, that that would be like kind of my dream. I’m not going to be the one to do it because I’m not smart enough to do it. But, you know, Marcelo and I have for a while have been sort of critical of where foundational physics has been for a while with string theory.
    3:05:11 I spent my whole life listening to talks about string theory real soon, you know, and it’s gotten ever more disconnected from, you know, data observations. There were people talking for a while that it’s post empirical.
3:05:30 And, you know, I always wanted to write a paper or an article that was like, physicists have been smoking their own stash, right? There’s this way we’ve gotten used to, like, you know, you have to out-weird the other person: my theory is 38 dimensions, my theory is 22 dimensions, but it’s got, you know, psychedelic squirrels in it.
    3:05:41 And so there’s been a problem. There’s a problem. I don’t need to tell you there’s a crisis in physics or there’s a crisis in cosmology. Other people have used that. That’s been the headline on scientific American stories.
    3:06:10 So they’re clearly another direction has to be found. And maybe it has nothing to do with this. But I suspect that because so many times the agent or the having to deal with the view from the inside or the role of agency, like when it comes to time, thinking that you can replace the block universe with the actual experience of time, you know, clocks don’t tell time, we use clocks to tell time.
    3:06:22 So maybe that even like the fundamental nature of time can’t be viewed from the outside, that there’s a new physics theory that is going to come from that comes from this agential informational computational view.
    3:06:29 I don’t know. But that’s kind of what I think it would be fertile ground to explore.
    3:06:35 Yeah, the time is really interesting one. This time is really important to us humans. What is time?
3:06:56 Yeah, that’s right. What is time? So the way we have tended to view it, and this is what Husserl talks about with the surreptitious substitution, is we’ve taken Einstein’s beautiful, powerful formal system for viewing time, and we substituted that for the actual experience of time, right?
    3:07:09 So the block universe where like next Tuesday is already written down, you know, it’s in the block, you know, the four dimensional universe, all events are already there, which is very potent for making certain kinds of predictions within this sort of, you know, the scientific framework.
    3:07:17 But, you know, it is not lived time. And, you know, this was pointed out to Einstein and he eventually recognized it.
3:07:31 Very famous meeting between Henri Bergson, who was the most famous philosopher of the early 20th century, and Einstein, where Einstein was giving a talk on relativity and Bergson, whose whole thing was about time and was about duration.
3:07:45 He wanted to separate the scientific image of time, the map of time, from the actual terrain, for which he used the word duration. Like, for we humans, duration is full.
    3:07:49 It’s sort of, it’s stretched out. It’s got a little bit of the past, a little bit of the future, a little bit of the present.
    3:07:57 Music is the best example, right? You’re hearing music, you’re both already anticipating what’s going to happen, and you’re, you know, remembering what’s going on.
    3:08:14 There’s a kind of phenomenal structure there, which is different from the representation of time that you have with the formal mathematics and what, you know, the way we would look at this is that the problem with the surreptitious substitution, the problem with the blind spot,
    3:08:37 is it says, Oh, no, no, the formal system is time, but really the only place time appears is with us, right? Where we’re, you know, so having a theory that actually could start with us, you know, and then stretch out into the universe rather than imposing this imaginary third person view back on us, you know, could that’s a route towards a different way of approaching the whole problem.
3:08:44 I just wonder who’s the observer? I mean, defining what the agent is in any kind of frame is difficult.
3:08:56 Right. And so that, but that’s the good work of the science ahead of us. Right. So what happened with this idea of the structural invariants I was talking about? So, you know, we start with experience, which is irreducible, there’s no atoms of experience, right, it’s a whole.
    3:09:06 And we go through the whole process, which is a communal process, by the way, there’s a philosopher Robert Crease, who talks about the workshop that’s starting in like the 1700s, 1600s, we developed this communal
    3:09:17 space to work in, sometimes it was literally a physical space, a laboratory, where these ideas would be pulled apart, refined, argued over, and then validated and we went to the next step.
3:09:30 So this idea of pulling out from experience these thinner, abstract structural invariants, the things that we could actually do science with, it’s kind of, like, we call it an ascending spiral of abstraction, right.
    3:10:00 So the problem with the way we do things now is we take that those abstractions, which came from experience, and then with something like, you know, a computational model of consciousness or experience, we think we can put it back in, like you literally pulled out these super thin things, these abstractions, you know, neglecting experience, because that’s the only way to do science, and then you think somehow I’m going to put, I’m going to jam experience back in and, you know, have an explanation for experience.
    3:10:09 So do you think it’s possible to show that something like free will is quote unquote real, if you integrate experience back into the physics, into the physics model of the world?
    3:10:14 What I would say is that free will is a given, and that’s the thing about experience, right.
    3:10:24 So one of the things that Whitehead said, I really love this quote, he says it’s not the job of either science or philosophy to account for the concrete, it’s the job to account for the abstract.
3:10:38 The concrete, what’s happening between us right now, is just given, you know, it’s just, it’s presented to us every day. If you want an explanation, fine, but the explanation actually doesn’t add anything to it, right.
    3:10:47 So that free will in some sense is the nature of being an agent, right, to be an agent, agency and autonomy are sort of the two things that are, you know, they’re equivalent.
    3:10:50 And so in some sense, to be an agent is to be autonomous.
    3:11:04 And so then the question really to ask is, can you have an account for agency and autonomy that captures aspects of its, its arising in the world or the way it and the world sort of co arise.
    3:11:20 But the idea, you know, the reason why we argue about free will often is because we already have this blind spot view that the world is deterministic because of our equations, which themselves, we treat the equations as if they’re more real than experience, you know, and the equations are a paler, you know, they don’t
    3:11:28 corral experience, they are a thinner, you know, representation, as we like to say, don’t confuse the map for the terrain.
    3:11:32 What’s happening between us right now in this, you know, all the weirdness of it, that’s the terrain.
    3:11:40 The map is what I can write down on equations and then in the workshop do experiments on super powerful needs an account, but experience overflows that.
    3:11:49 What if the experience is an illusion, like, how do we know what if the agency that we experience is an illusion?
3:11:58 An illusion looking from where, right? Because to just take that stance already requires that you’ve pushed yourself into that third-person view, right.
3:12:15 And so what we’re saying is that’s a third-person view, which now you’re going to say, like, oh, I’ve got a whole other set of ontological entities, meaning, you know, things that I think exist in God’s living room, you know, that are independent of me and the community of living things I’m part of.
3:12:27 So you’re pushing it elsewhere, it’s just, like, there’s a stack of turtles probably. If this experience, the human experience, is an illusion, maybe there’s an observer for whom it’s not an illusion.
    3:12:29 So you always have to find an observer somewhere.
    3:12:30 Yeah, right.
    3:12:40 And that’s where that’s why, you know, fundamentally, the blind spot, especially the scientific triumphalist part is following a religious impulse, you know, it’s wanting the God’s eye view.
    3:12:41 And you know, it’s really interesting.
    3:12:50 And when we think about this and the way this gets talked about, especially publicly, you know, there’s a line of philosophical inquiry that this language gets couched in.
    3:12:56 And it is actually a pretty, it’s only one version of philosophy, right.
    3:12:58 So it is pretty much what we call the analytic tradition, right.
    3:13:06 But there’s even in Europe or in the Western tradition, and you know, for Western, what we’ll call Western philosophy, there’s phenomenology.
3:13:10 There’s Husserl and Heidegger and Merleau-Ponty, who took an entirely different track.
    3:13:13 They were really interested in the structure of experience.
    3:13:20 They spent all their time trying to understand, trying to develop a language that could kind of climb into the circle that is experience.
    3:13:20 Right.
    3:13:23 You experience, you’re not going to be able to start with axioms and work your way to it.
    3:13:24 It’s over, it’s given.
    3:13:29 So you have to kind of jump in and then try and find a language to account for its structure.
    3:13:44 But then, so that has not been part of this discussion about you’ll never, good luck finding a YouTube video where someone, you know, a famous scientist is talking about science from a phenomenological point of view, even though it’s a huge branch of philosophy.
    3:13:48 And then you get the philosophies that occurred from other cores of civilization, right.
    3:13:55 So there’s the, there’s the Western core out of which comes the Greeks and the, you know, the Judeo-Christian Islamic tradition.
    3:13:58 But then you get India and you get Asia, and they developed their own.
    3:14:03 They were highly complex societies that developed their own responses to these questions.
    3:14:12 And they, for reasons because they had contemplative practice, they were very focused on like direct, trying to like directly probe attention and experience.
    3:14:16 They asked questions in ways that the West never really did.
    3:14:18 Phenomenology kind of started it.
    3:14:27 But, you know, there’s, there’s philosophers like Nagarjuna and Vasubandhu, and they’re like the Plato and the, you know, Aristotle of, you know, sort of those philosophies.
3:14:30 And they were really focused on experience. In the West,
3:14:39 I think maybe because we had the Judeo-Christian tradition, where we already had this kind of God who was going to be the frame, you could always point to that frame.
    3:14:48 The, in the, the traditions that came from the classical philosophies of India and Asia, they started always with, they wanted to know about experience.
    3:14:54 Their whole philosophies and their logic and their, their argumentation was based on, “I’ve got this experience.
    3:14:56 I can’t get out of this experience.
    3:14:58 How do I reason from it?”
    3:15:03 So I think there’s like a lot of other philosophical traditions that we could draw from, you know, not like slavishly.
    3:15:09 We don’t all have to become Buddhists to do it, but there are traditions that really tried to work this out in a way that the Western traditions.
    3:15:10 Just didn’t.
    3:15:17 But there’s also the practical fact that it’s difficult to build a logical system on top of experience.
    3:15:20 It’s difficult to have the rigor of science on top of experience.
    3:15:25 And so it’s, as science advances, we might get better and better.
    3:15:39 Like the same is, it’s very difficult to have any kind of mathematical or kind of scientific rigor to, why complexity emerges from simple rules and simple objects, sort of the Santa Fe questions.
    3:15:40 Yeah, I think, but I think we can do it.
    3:15:42 I think there’s aspects of it.
    3:15:45 I mean, as long as you’re never trying to like, “This is what experience is.”
3:15:52 Like, I think that’s kind of where we are, you know, you’re never going to have a causal account of experience because it’s just given.
    3:15:57 But you can do lots about, and that’s what the good work is, is to, “How do I approach this?
    3:16:00 How do I approach this in a way that’s rigorous that I can do experiments with also?”
3:16:07 But so, for example, I was just reading this beautiful paper that was talking about, you know, this is what we’re accounting for with our semantic information too.
    3:16:09 Causal closure.
    3:16:11 Love this idea, right?
3:16:14 The idea that, so we talked about autopoiesis a while back, right?
    3:16:20 The idea that living systems are, they are self-creating and self-maintaining.
    3:16:23 So the membrane, cell membrane is a great example of this, right?
    3:16:26 The cell membrane, you can’t have a cell without a cell membrane.
    3:16:30 The cell membrane lets stuff through, keeps other stuff out, right?
    3:16:40 But the cell membrane is part of the processes and it’s a product of the processes that the cell membrane needs, right?
    3:16:43 In some sense, the cell membrane, cell membrane creates itself.
    3:16:45 So there’s this strange, it’s always with life.
    3:16:47 There’s always this strange loop.
    3:16:53 And so somehow figuring out how to jump into that strange loop is, you know, the science that’s ahead of us.
    3:17:01 And so this idea of causal closure, accounting for how the, you know, we talked about like a downward causation, right?
    3:17:04 So reductionism says everything only depends on the microstate.
    3:17:06 Everything just depends on the atoms, right?
    3:17:06 That’s it.
    3:17:10 You don’t really, if you know, if you know the Lagrangian for the standard model, you’re done.
    3:17:13 You know, of course, in principle, you need God’s computer, but fine.
    3:17:15 You know, in principle, you know, in principle, it can be done.
    3:17:17 Causal closure.
    3:17:21 And there’s, I was just reading this great paper that sort of argues for this.
3:17:33 There’s ways in which, using epsilon machines and all this machinery from information theory, you can see how the system can organize itself so that it decouples from the microstates.
    3:17:40 Now, the macro state fundamentally no longer needs the microstate for its own description, its own account of the laws.
    3:17:44 Whether that paper is true or not, it’s an example of heading down that road.
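A toy version of “the macrostate no longer needs the microstate”: in a lumpable Markov chain, the coarse-grained states have well-defined transition probabilities of their own, so the macro-level dynamics are closed at their own level. The matrix below is invented for illustration and is not a reconstruction of the epsilon-machine paper being referenced.

```python
# Hedged toy illustration of macro-level causal closure: a 4-state Markov chain
# whose lumping into two macrostates {A, B} is exact (strong lumpability), so
# the macro transition matrix fully describes macro dynamics on its own.
import numpy as np

# Micro transition matrix over states 0,1 (lump A) and 2,3 (lump B).
# Rows are chosen so the probability of jumping into each lump is the same
# for every micro state within a lump.
T_micro = np.array([
    [0.5, 0.2, 0.2, 0.1],   # from 0: P(A)=0.7, P(B)=0.3
    [0.3, 0.4, 0.1, 0.2],   # from 1: P(A)=0.7, P(B)=0.3
    [0.1, 0.3, 0.2, 0.4],   # from 2: P(A)=0.4, P(B)=0.6
    [0.2, 0.2, 0.5, 0.1],   # from 3: P(A)=0.4, P(B)=0.6
])

lumps = {"A": [0, 1], "B": [2, 3]}

def macro_transition(T, lumps):
    """Build the coarse-grained transition matrix; it is well defined only
    because every micro state in a lump has the same lump-to-lump rates."""
    names = list(lumps)
    M = np.zeros((len(names), len(names)))
    for i, a in enumerate(names):
        representative = lumps[a][0]          # any member gives the same row
        for j, b in enumerate(names):
            M[i, j] = T[representative, lumps[b]].sum()
    return names, M

names, T_macro = macro_transition(T_micro, lumps)
print(names)
print(T_macro)   # [[0.7 0.3], [0.4 0.6]]: macro dynamics closed at its own level
```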
    3:17:46 There’s also Robert Rosen’s work.
3:17:59 He was a theoretical biologist who, you know, talked about closure to efficient cause, that living systems, you know, are organizationally closed, are causally closed, so that they don’t depend anymore on the microstate.
    3:18:01 And he made, he had a proof, which is very contentious.
    3:18:04 Nobody knows if it’s, you know, some argue it’s true, some argue it’s not.
3:18:10 But he said that because of this, living systems are not Church-Turing computable.
    3:18:13 They cannot be represented as formal systems.
3:18:15 So, you know, in that way, they’re not algorithms.
3:18:18 Living systems will not be algorithms.
    3:18:21 They can only be partially captured by algorithms.
    3:18:26 Now, again, people fight back and forth about whether or not his proof was, you know, is valid or not.
    3:18:36 But I’m saying I’m giving you examples of like, you know, when you, when you see the blind spot, when you acknowledge the blind spot, it opens up a whole other class of kinds of scientific investigations.
    3:18:39 You know, the book we thought was going to be really heretical, right?
    3:18:46 You know, obviously, you know, most, most public facing scientists are very sort of in that, especially scientific triumphal.
    3:18:48 And so we were just like, waiting, you know, waiting for the fight.
    3:18:55 And then the review from science came out and it was like, totally pro, you know, they was very positive.
    3:19:01 We’re like, oh my God, you know, and then a review came out in nature physics and it was totally positive.
    3:19:09 And then a review came out in the Wall Street Journal, because we kind of criticized not capitalism, but we criticized sort of all industrial economies.
3:19:12 for having sort of been touched by the blind spot.
3:19:13 Socialism, communism, it doesn’t matter.
    3:19:20 These extractive, you know, had sort of had that sort of view that the world is just reducible to, you know, resources.
    3:19:23 The Wall Street Journal gave us a great review.
    3:19:38 So it feels like there’s actually out there, there is some among working scientists in particular, there is some dissatisfaction with this triumphalist view and a recognition that we need to shift something in order to like jump past these hurdles that we’ve been arguing about.
    3:19:41 Forever, and we’re not, you know, we’re sort of stuck in a vortex.
    3:19:46 Well, it is, I mean, I think there’s a hunger to acknowledge that there’s an elephant in the room like that.
3:19:48 We’re just removing the agent.
    3:19:54 Like it’s, everyone is doing it and it’s like, yeah, yeah, there’s the experience.
    3:19:58 And then there’s the third person perspective on the world.
3:20:06 And so, man, doing science, applying scientific rigor, from a first-person perspective is very difficult.
    3:20:07 I mean, it’s fascinating.
    3:20:14 I think we can do it, because, you know, what's really interesting is this: I think it's not just first person, it's first and second, right?
    3:20:24 Because one idea is that, oh, science gives us this objective third-person view; that's one way of talking about objectivity.
    3:20:30 There’s a whole other way is that I do the experiment, you do the experiment, we talk to each other, we agree on methods, and we both get the same result.
    3:20:33 That is a very different way of thinking about objectivity.
    3:20:41 And it acknowledges that, you know, when we talk about agents, agency and individuality are flexible, right?
    3:20:47 So there's a great paper, speaking of Santa Fe, by David Krakauer, where they looked at sort of information-theoretic measures of individuality.
    3:20:54 What you find is it’s actually pretty fluid, like my liver cell is an individual, but really it’s part of the liver.
    3:20:57 And my liver is, you know, a separate system, but really it’s part of me.
    3:21:07 But I’m, so I’m an individual, yay, but actually I’m part of a society, like, and I couldn’t be me without the entire community of, say, language users, right?
    3:21:09 I wouldn’t even be able to frame any questions.
    3:21:16 And my community of language users is part of ecosystems, right, that are alive, that I am a part of a lineage of.
    3:21:17 This is like Sarah Walker stuff.
    3:21:21 And then that those ecosystems are part of the biosphere, right?
    3:21:07 We're never separable, as opposed to this very atomizing, triumphalist science view that wants, like, Boltzmann brains.
    3:21:30 You’re just a brain floating in the space, you know?
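    Another purely illustrative sketch (the toy dynamics and the boundary score below are my own assumptions, loosely inspired by the information-theoretic individuality idea mentioned above, not the actual paper's formalism): score a candidate "individual" by how much of the predictability of its own next state lives inside its boundary versus in the larger system it sits in.

    ```python
    # Toy "where do you draw the individual?" score.  Two coupled bits X and Y:
    # each alone is only partly self-predicting, while the pair taken together
    # is (nearly) informationally closed -- the boundary you draw matters.
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(1)

    def step(x, y):
        # Coupled dynamics with a little independent noise on each component.
        x_next = (x ^ y) if rng.random() < 0.9 else int(rng.integers(0, 2))
        y_next = (x & y) if rng.random() < 0.9 else int(rng.integers(0, 2))
        return x_next, y_next

    T = 200_000
    x, y = 0, 1
    traj = []
    for _ in range(T):
        traj.append((x, y))
        x, y = step(x, y)

    def mutual_info(xs, ys):
        # I(X; Y), estimated from joint counts.
        n = len(xs)
        joint, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
        return sum((c / n) * np.log2((c / n) / ((px[a] / n) * (py[b] / n)))
                   for (a, b), c in joint.items())

    def closure_score(select):
        # Fraction of the predictive information about the candidate's own
        # future that is already available inside its boundary.
        part = [select(s) for s in traj]
        inside = mutual_info(part[:-1], part[1:])
        total = mutual_info(traj[:-1], part[1:])
        return inside / total if total > 0 else 0.0

    print("X alone       :", round(closure_score(lambda s: s[0]), 3))
    print("Y alone       :", round(closure_score(lambda s: s[1]), 3))
    print("X and Y as one:", round(closure_score(lambda s: s), 3))
    # The joint system scores ~1.0; each component alone scores lower.
    # In this sense "individuality" is a matter of degree and of boundary choice.
    ```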
    3:21:40 Yeah, there's a fascinating degree to which agency is fluid, like you are an individual, but you and I talking is a kind of individual.
    3:21:41 Yeah.
    3:21:47 And then the person listening to this right now is also an individual.
    3:21:47 Right.
    3:21:48 I mean, that’s a weird thing.
    3:21:49 That’s a weird thing, right?
    3:21:51 Because there’s like, there’s a broadcast nature too.
    3:21:54 This is why the information-theoretic framing matters.
    3:22:00 So the idea that we're pursuing now, which I get really excited about, is this idea of information architecture, right?
    3:22:05 Or organization, informational organization, because, you know, physicalism is like, everything's atoms.
    3:22:15 But, you know, Kant recognized... Kant is apparently the one who came up with the word organism, because he recognized that life has a weird organization that seemed specifically different from machines.
    3:22:31 And so how do we engage with the idea that organization, which can often be cast in information-theoretic or even computational terms, is sort of not really quite physical, right?
    3:22:41 It's embodied in the physical, it has to be instantiated in the physical, but it also has this other realm of design, you know, and not design like intelligent design.
    3:22:46 But there’s a, you know, organization itself is a relationship of constraints and information flow.
    3:22:52 And I think, again, that’s an entirely new, interesting way that we might get a very different kind of science that would flow out of that.
    3:22:58 So going back to Kant and organism versus machine.
    3:23:03 So I showed you a couple of legged robots.
    3:23:04 Very cool.
    3:23:08 Is it possible for machines to have agency?
    3:23:11 I would not discount that possibility.
    3:23:23 I think, you know, there's no reason I would say it's impossible that machines could manifest, whatever it is, that strange loop that we're talking about, that autopoiesis.
    3:23:29 I don’t think there’s a reason to say it can’t happen in silicon.
    3:23:35 I think whatever it would be, it would be very different from us, as opposed to the idea that, oh, it'd be just like us, but now instantiated in silicon.
    3:23:39 I think it might have very different kind of experiential nature.
    3:23:45 I don't think what we have now, like the LLMs, are really there.
    3:23:49 But, yeah, I'm not going to say that it's not possible.
    3:23:54 I wonder how far you can get with imitation, which is essentially what LLMs are doing.
    3:23:55 So imitating humans.
    3:24:04 And I wouldn't discount the possibility that through imitation you can achieve what you call consciousness,
    3:24:07 or agency, or the ability to have experience.
    3:24:10 I think most of us humans think, oh, that's just fake.
    3:24:15 That's copying. But there's some degree to which we humans are just copying each other.
    3:24:20 We just are really good imitation machines, starting from when we're babies.
    3:24:23 We were born into this world and we're just learning to imitate each other.
    3:24:31 And through the imitation, and the tension and the disagreements in the imitations, we gain personality, perspective, all that kind of stuff.
    3:24:35 Yeah, I think so. It's possible, right?
    3:24:47 It's possible, but I think the view I'm advocating would say that one of the most important parts of agency is what's called 4E, the 4E theory of cognition.
    3:24:52 Embodied, enacted, embedded, and there's another one, extended.
    3:25:09 So the idea is that you actually have to be in a body, which is itself part of an environment; the physical nature of it, and the extension into other living systems as well, is essential.
    3:25:15 So that's why I think the LLMs are not going to get there; it's not just imitation. This goes to the brain-in-the-vat thing.
    3:25:21 I did an article about the brain in the vat, which was really Evan's argument; I was reporting on it.
    3:25:25 But they said, look, in the end, the only way to actually get a real brain in a vat is actually to have a brain in a body.
    3:25:29 And it could be a robot body, you know, but you still need a brain in a body.
    3:25:36 So I don't think LLMs will get there, because you really need to be embedded in a world. At least that's the 4E idea.
    3:25:50 The 4E approach to cognition argues that cognition does not occur solely in the head, but is also embodied, embedded, enacted, and extended by way of extra-cranial processes and structures.
    3:25:56 Though very much in vogue, 4E cognition has received relatively few critical evaluations.
    3:26:05 This is from a paper: by reflecting on two recent collections, this article reviews the 4E paradigm with a view to assessing its strengths and weaknesses.
    3:26:06 That’s fascinating.
    3:26:12 I mean, yeah, the branches of what cognition is extend far, and it could go really far.
    3:26:13 Right.
    3:26:20 There’s a great story about an interaction between Jonas Salk, who is very much a reductionist, you know, the great biologist, and
    3:26:25 Gregory Bateson, who was a cyberneticist, and Bateson always loved to poke people.
    3:26:27 And he said to Salk, he said, you know, where’s your mind?
    3:26:32 And, you know, Salk went up here and Bateson said, no, no, no, out here.
    3:26:34 And what he really meant was this extended idea.
    3:26:42 It's not just within your cranium. To have experience... experience in some sense is not a thing you have.
    3:26:44 It is a thing you do, right?
    3:26:56 You almost perform it, in a way, which is why both actually having a body, and having that body itself be in a world with other bodies, is, from this perspective, really important.
    3:27:03 And it's very attractive to me. And, you know, again, if we're really going to do science with these ideas, we're going to have to have them crash up against data.
    3:27:08 We can't just armchair it, you know, or couch-quarterback it.
    3:27:11 But I think there’s a lot of possibility here.
    3:27:16 It’s a very radically different way of looking at what we mean by nature.
    3:27:26 What do you make of the fact that this individual observer, you as an individual observer, only gets a finite amount of time to exist in this world?
    3:27:27 Does it make you sad?
    3:27:30 No, actually, it doesn’t make me sad.
    3:27:33 So, okay, so, you know, full reveal.
    3:27:37 I have been doing contemplative practice in the Zen tradition for 30 years.
    3:27:40 I’ve been staring at a wall for 30 years.
    3:27:42 And it’s taught me a lot, right?
    3:27:47 You know, I’m really, I really value what that practice has given me about the nature of experience.
    3:27:51 And one of the things it's taught me is that, you know, I don't really matter all that much.
    3:28:01 This thing I call Adam Frank is really kind of a construct, you know; there's this process going on, of which I am actually fundamentally a part.
    3:28:02 And that’s super cool.
    3:28:05 But, you know, it’s going to go, you know, I don’t know where it came from.
    3:28:06 It’s going to go.
    3:28:09 I don’t really need it to, you know, and then, and then who in the hell knows?
    3:28:11 You know, I’m not, I’m not an advocate for an afterlife.
    3:28:15 But just that, like, you know, what I love, Zen has this idea of beyond birth and death.
    3:28:17 And they don’t mean reincarnation.
    3:28:20 What they mean is, dude, you don’t even really understand what life is.
    3:28:21 You know what I mean?
    3:28:24 Like, at this core level of your own experience.
    3:28:29 So, you know, your ideas about what death is are equally ill-formed, you know?
    3:28:34 And it’s, it’s, so, you know, the contemplative practice really tries to focus on experience itself.
    3:28:39 Like spend five days at a Zen session doing contemplative practice from, you know,
    3:28:42 seven a.m. until nine p.m., obviously with breaks.
    3:28:47 And you’ll really get a much deeper understanding of, like, what my own experience is.
    3:28:48 What is it really like?
    3:28:52 You have, you, it forces you to learn how to stabilize your attention because, you know,
    3:28:55 attention is kind of like this thing, like it’s usually just like, oh, over there.
    3:28:56 Oh, my foot hurts.
    3:28:57 Oh, I got to do my taxes.
    3:28:58 Oh, that, you know, what’s that guy over there?
    3:29:00 Why is he wearing those stupid shoes?
    3:29:03 And with the contemplative practice, you learn how to stabilize it.
    3:29:07 And once you stabilize it, you can now begin to sort of explore the phenomenal nature of it.
    3:29:12 So what I think I've learned from that is, like, kind of, whatever, you know,
    3:29:14 I'm not really all that real to begin with.
    3:29:16 The Adam Frank, the identity, the thing.
    3:29:20 And the part of me that is real is, you know... everything's coming and going.
    3:29:21 It’s all coming and going.
    3:29:26 Well, how could, how could I ever not come and go when the entire world is just, you know,
    3:29:29 Buddhism has this idea of codependent arising.
    3:29:30 Nothing exists.
    3:29:32 Nothing has self-nature.
    3:29:33 Nothing exists by itself.
    3:29:37 It’s an endless, infinitely connected web.
    3:29:42 But still, there’s a deliciousness to the individual experience.
    3:29:48 You get attached to it, and it ends, and it's good while it lasts, and it sucks that it ends.
    3:29:51 Like you can be like, ah, well, everything comes and goes.
    3:29:54 But like I was eating ice cream yesterday.
    3:29:59 Found this awesome low carb ice cream called Delights here in Austin.
    3:30:01 And, you know, it ends.
    3:30:06 And I was staring at the empty container, and it was...
    3:30:07 That’s beautiful, man.
    3:30:08 I love that.
    3:30:10 You could say like, yeah, well, that’s how it all is.
    3:30:15 But can I say, that's what I've learned from practice, because I love your idea of the deliciousness of it.
    3:30:21 You know, what I think happens with contemplative practice, when it deepens, is that it's not just...
    3:30:23 you're not just saying it, right?
    3:30:25 This is why, you know, I do koan practice.
    3:30:28 It's a tradition in Zen, a teaching method
    3:30:31 that was established, like, a thousand years ago.
    3:30:32 There are these books of koans.
    3:30:37 And every koan... you know, if you've ever read Gödel, Escher, Bach, he's got a whole chapter on koans.
    3:30:41 They’re kind of non-logical problems that you have to work on.
    3:30:46 One of my favorites was: stop the sound of the distant temple bell.
    3:30:48 You know, you’re like, what?
    3:30:51 Every time my teacher gives it to me, I’m like, what are you talking about?
    3:30:54 You know, this is a whole Zen thing of like, up is down, but down is up.
    3:30:55 You must understand this.
    3:30:59 So, you know, your job with these koans is to sit with them.
    3:31:02 To sit with them until you sort of, you know, realize what the
    3:31:06 thing is trying to teach you, what aspect of experience it's trying to teach you.
    3:31:07 So there’s no answer.
    3:31:09 There’s no, and in fact, actually, you don’t give an answer.
    3:31:11 You actually usually have to demonstrate.
    3:31:14 The first time I sat with a koan, the guy was like, don't tell me the answer.
    3:31:15 Show me the answer.
    3:31:17 I was like, what are you talking about?
    3:31:20 But after doing these for years now, you know, I've kind of
    3:31:22 learned the language of them.
    3:31:25 So I could never just tell you. If I gave you a
    3:31:26 koan and told you the answer,
    3:31:27 You’d be like, what?
    3:31:30 You know, it’s never, it’s not the words.
    3:31:34 It's the, you know... so, like, your experience of, yeah, the cup is empty: with
    3:31:36 contemplative practice, as it deepens over years...
    3:31:38 it really does take years, just like anything in math.
    3:31:40 It took me years to understand Lagrangians.
    3:31:43 You kind of come to a deeper understanding where, with the words like,
    3:31:45 it’s not just like, oh, everything changes.
    3:31:48 You actually feel that movement.
    3:31:52 Like you feel it, breath to breath, you know. And sometimes,
    3:31:57 and this is messed up, I have this feeling that is just joy, and it's
    3:31:58 not connected to anything.
    3:31:58 Right.
    3:31:59 That’s what I’ve kind of gotten from practice.
    3:32:04 It’s just like, yeah, you know, that passage, that, that infinite passage of
    3:32:07 moment to moment, that is truly the way things are.
    3:32:08 And it’s okay.
    3:32:10 Like not, it’s not okay because I have a feeling about it.
    3:32:10 Okay.
    3:32:11 I want it to be okay.
    3:32:12 It just is okay.
    3:32:14 It’s a really, it’s a pretty awesome thing.
    3:32:15 That’s beautiful.
    3:32:19 I mean, I, I, I, maybe it’s the genetics, maybe it’s the biochemistry of my brain,
    3:32:24 but I generally have that joy about experience, just amorphous joy, but it
    3:32:28 seems like, again, maybe it’s my Eastern European roots, but there’s always like
    3:32:30 a melancholy that’s also sitting next to the joy.
    3:32:36 And I think it always feels like they’re intricately linked.
    3:32:41 So the melancholy is about, maybe about the finiteness of experience.
    3:32:44 And the joy is just about the beauty of experience.
    3:32:45 And they’re just kind of sitting there.
    3:32:46 Yeah.
    3:32:49 Which is cool actually, because, you know, I also come from
    3:32:53 Eastern Europe, my roots are Eastern European as well, going back, and I get it.
    3:32:53 Right.
    3:32:56 I mean, you know, the, but that’s also the cool thing.
    3:32:58 I think one of the things is, is like, yeah, well that, that is what it is.
    3:32:59 That is what it is.
    3:33:00 Right.
    3:33:00 You don’t have to do anything.
    3:33:03 You don’t have to like manipulate or move it around or like, yeah, this is the
    3:33:04 experience, you know?
    3:33:08 Can you speak to the, just the practical nature of sitting there from 7am to 9pm?
    3:33:10 I’m like, what the hell are you doing?
    3:33:11 What’s, what’s powerful?
    3:33:12 What’s fascinating to you?
    3:33:15 What have you learned from just the experience of staring at a wall?
    3:33:15 Yeah.
    3:33:16 Yeah.
    3:33:19 So, um, you know, it's not really... I mean, you're facing a
    3:33:22 wall, and what you're doing is, you know, you're just sitting. You
    3:33:24 know, there's different meditative practices, right?
    3:33:25 There’s counting breaths.
    3:33:26 So that’s usually what I do.
    3:33:29 I sit down, I start counting breaths and for the first half hour, it’s just like,
    3:33:30 blah, blah, blah.
    3:33:32 I’m thinking, like I said, I’m thinking about my taxes.
    3:33:34 I’m thinking about what I got to do later on.
    3:33:35 Yada, yada, yada.
    3:33:39 First time I ever did a full session, a two day session, I swear to God, I had
    3:33:43 Bruce Springsteen’s Born to Run album track through from the beginning to the
    3:33:45 end, with the pauses, back when they were LPs.
    3:33:45 Yeah.
    3:33:47 The fricking nice, you know?
    3:33:49 Cause my mind was just like, I need to do something.
    3:33:51 So it literally played the whole album in order.
    3:33:53 That’s pretty cool, actually.
    3:33:56 Yeah, it was pretty amazing to see, you know, 'cause you really do, you see the
    3:33:59 dynamics of your mind. But what happens is, and this took me a while...
    3:34:05 I used to hate sitting, you know, I'd do it, but after a while, the
    3:34:09 mind gets exhausted, like that part of the mind, the upper level, the roof-
    3:34:11 brain chatter, is just like, there's nothing else to do.
    3:34:15 And then you get bored, and now I realize that's when something
    3:34:16 interesting is going to happen.
    3:34:20 'Cause you kind of drop down, and now it's a very physical practice.
    3:34:23 People think you're just sitting there not thinking, or thinking about not
    3:34:27 thinking; it actually becomes a very physical process where you're really just
    3:34:28 following the breath.
    3:34:33 You're kind of riding the breath, and it gets very quiet, you know. And within
    3:34:37 that quietness, there's a path, you know, because
    3:34:40 obviously Buddhism is always, like, you know, not
    3:34:42 about thinking, but there's a huge literature.
    3:34:45 So these guys are always saying, don't think, and yet they've written all this stuff. But
    3:34:47 they're guideposts, they're like the finger pointing at the moon.
    3:34:51 And, you know, there’s the idea of first, you know, your mind is usually
    3:34:52 scattered, right?
    3:34:54 Like right now, when I walk out, I'm going to go get the Uber, and my
    3:34:55 mind's going to be all over the place.
    3:34:59 But with sitting, first you concentrate the mind so that there’s no more
    3:34:59 scatter anymore.
    3:35:01 The thoughts are still happening, but you're just not up
    3:35:02 there with them.
    3:35:03 You’re not even paying attention to them.
    3:35:09 And then as time goes on, you unify the mind, which is this very powerful
    3:35:13 thing where kind of the self drops away, you know, and there’s just this
    3:35:15 presence, it’s kind of like a raw presence.
    3:35:20 And that's often where the joy wells up from. But you sit with
    3:35:21 whatever comes; maybe you're going to sit and you're going to have it.
    3:35:24 Like, you know, maybe you’re going to go through like an hour of being
    3:35:26 bummed out about your mom who died or something.
    3:35:29 You know, you’re just going to sit with whatever comes up.
    3:35:30 You're going to make that commitment.
    3:35:32 That's what the sitting part is: you're making the commitment,
    3:35:33 I'm going to sit here with whatever comes up.
    3:35:34 I will not be moved.
    3:35:37 And then what you come away with... over time, it actually
    3:35:39 changes kind of who you are.
    3:35:42 Like I’m still the asshole I was from New Jersey growing up, but I just
    3:35:45 have more space now for things, you know?
    3:35:48 Well, yeah.
    3:35:52 Once Jersey, always Jersey. But I love that you had Bruce Springsteen.
    3:35:53 He’s just blasting in your head.
    3:35:54 Yeah, that was amazing.
    3:35:55 Why are we here?
    3:35:59 What do you think is the, is the, is the purpose, the meaning of human existence?
    3:35:59 It's good
    3:36:02 that we saved this for the last question, because I'm going to give this answer,
    3:36:04 which is so corny.
    3:36:05 Um, it’s love.
    3:36:08 And I'm not messing around, because that's really, actually, what happens to you.
    3:36:12 So within Buddhism, there’s the idea of the Bodhisattva principle.
    3:36:13 You’re here to help.
    3:36:14 You’re just here to help, right?
    3:36:19 Compassion, like that’s a really essential part of this path, of the Dharma path.
    3:36:22 And when I first started out, I was like, um, I don’t care about compassion.
    3:36:23 I’m here for knowledge, right?
    3:36:26 You know, I started contemplative practice because of the
    3:36:27 usual thing: I was suffering.
    3:36:29 You know, the reason everybody comes to things like this: life
    3:36:33 was hard, I was going through stuff. But I also wanted knowledge.
    3:36:35 I wanted to understand the foundational nature of reality.
    3:36:36 So it was like compassion, whatever.
    3:36:39 But then I found out that you can’t get that.
    3:36:40 You can't get there.
    3:36:42 You can't get to love without compassion.
    3:36:49 Somehow in this process, you realize that it really is about helping
    3:36:51 all sentient beings.
    3:36:53 That's the way they put it, you know: just being here to help.
    3:36:57 So I know that sounds cornball, but especially for a guy from Jersey, which
    3:36:59 is like, you know, the main thing is to get over.
    3:37:01 You’re like, your job is to get over.
    3:37:03 But that's really what I found.
    3:37:06 And that's where that joy comes from, the joy.
    3:37:08 Some of that joy is just... it's like this.
    3:37:11 One of the things I have... you know, there's a kind
    3:37:13 of experience I'll have in contemplative practice, which I'll carry
    3:37:16 out into the world, which is just this gratitude for the fact that the world
    3:37:18 is just, the world gives you everything.
    3:37:19 And this is a certain way, right?
    3:37:24 Just the blue sky and the breath, the world is just giving you itself
    3:37:25 completely unhindered.
    3:37:26 It holds nothing back.
    3:37:28 And, uh, yeah, that’s kind of the experience.
    3:37:31 And then you kind of like, oh, I need to be helpful because who’s not
    3:37:32 having this experience, you know?
    3:37:34 So just love for the world as it is.
    3:37:37 Love for the way it is, and for all the beings who are suffering. Everybody's suffering.
    3:37:41 Everybody’s, you know, your worst political opponent, they’re suffering,
    3:37:46 you know, and our job is just to try and drop our biases and our stories
    3:37:49 and see this fundamental level at which life is occurring.
    3:37:53 And, uh, hopefully there are many alien civilizations out there going
    3:37:55 through the same journey out of suffering towards love.
    3:37:59 Yeah, that would be... you know, that may be a universal thing about
    3:38:00 what it means to be alive.
    3:38:00 I hope so.
    3:38:01 I hope so too.
    3:38:04 Either that, or they're coming to eat us, especially if they're a Type III
    3:38:07 civilization; they've got really big guns.
    3:38:13 Uh, well, this was a truly mind blowing, fascinating, just awesome conversation.
    3:38:14 Adam, thank you for everything you do.
    3:38:15 And thank you for talking to me.
    3:38:16 Oh, thank you.
    3:38:17 This was a lot of fun.
    3:38:20 Thanks for listening to this conversation with Adam Frank.
    3:38:24 To support this podcast, please check out our sponsors in the description.
    3:38:28 And now let me leave you with some words from Carl Sagan.
    3:38:33 The cosmos is all that is, or ever was, or ever will be.
    3:38:37 Our feeblest contemplations of the cosmos stir us.
    3:38:42 There's a tingling in the spine, a catch in the voice, a faint sensation, as if a
    3:38:44 distant memory, of falling from a height.
    3:38:50 We know we are approaching the greatest of mysteries.
    3:38:54 Thank you for listening and hope to see you next time.

    Adam Frank is an astrophysicist studying star systems and the search for extraterrestrial life and alien civilizations.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep455-sc
    See below for timestamps, and to give feedback, submit questions, contact Lex, etc.

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Adam’s Website: https://adamfrankscience.com
    Adam’s X: https://x.com/adamfrank4
    Adam’s Instagram: https://instagram.com/adamfrankscience
    Adam’s Books:
    The Little Book of Aliens: https://amzn.to/3OTX1rP
    Light of the Stars: https://amzn.to/4iMKC6C
    The Blind Spot: https://amzn.to/4gOCe4K
    The Constant Fire: https://amzn.to/3ZVnxX4

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Encord: AI tooling for annotation & data management.
    Go to https://encord.com/lex
    Eight Sleep: Temp-controlled smart mattress cover.
    Go to https://eightsleep.com/lex
    Shopify: Sell stuff online.
    Go to https://shopify.com/lex
    NetSuite: Business management software.
    Go to http://netsuite.com/lex
    BetterHelp: Online therapy and counseling.
    Go to https://betterhelp.com/lex
    Notion: Note-taking and team collaboration.
    Go to https://notion.com/lex
    LMNT: Zero-sugar electrolyte drink mix.
    Go to https://drinkLMNT.com/lex
    AG1: All-in-one daily nutrition drinks.
    Go to https://drinkag1.com/lex

    OUTLINE:
    (00:00) – Introduction
    (14:22) – Planet formation
    (19:32) – Plate tectonics
    (26:54) – Extinction events
    (31:04) – Biosphere
    (34:02) – Technosphere
    (38:17) – Emergence of intelligence
    (44:29) – Drake equation
    (48:43) – Exoplanets
    (51:28) – Habitable zones
    (54:30) – Fermi Paradox
    (1:03:28) – Alien civilizations
    (1:12:55) – Colonizing Mars
    (1:25:11) – Search for aliens
    (1:41:37) – Alien megastructures
    (1:47:43) – Kardashev scale
    (1:52:56) – Detecting aliens
    (1:59:38) – Warp drives
    (2:05:45) – Cryogenics
    (2:09:03) – What aliens look like
    (2:17:48) – Alien contact
    (2:28:53) – UFO sightings
    (2:40:38) – Physics of life
    (3:06:29) – Nature of time
    (3:22:53) – Cognition
    (3:27:16) – Mortality

    PODCAST LINKS:
    – Podcast Website: https://lexfridman.com/podcast
    – Apple Podcasts: https://apple.co/2lwqZIr
    – Spotify: https://spoti.fi/2nEwCF8
    – RSS: https://lexfridman.com/feed/podcast/
    – Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
    – Clips Channel: https://www.youtube.com/lexclips