Author: Lex Fridman Podcast

  • #460 – Narendra Modi: Prime Minister of India – Power, Democracy, War & Peace

    Narendra Modi is the Prime Minister of India. On YouTube this episode is available in English, Hindi, Russian (and soon other languages). Captions and voice-over audio tracks are provided (for the main episode video on YouTube) in English, Hindi, Russian, and the original mixed-language version, with subtitles available in your preferred language. To listen to the original mixed-language version, please select the Hindi (Latin) audio track. The default is English overdub.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep460-sc
    See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

    Transcript:
    https://lexfridman.com/narendra-modi-transcript

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Narendra Modi’s X: https://x.com/narendramodi
    Narendra Modi’s Instagram: https://instagram.com/narendramodi
    Narendra Modi’s YouTube: https://youtube.com/narendramodi
    Narendra Modi’s Website: https://narendramodi.in/

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Brain.fm: Music for focus.
    Go to https://brain.fm/lex
    Shopify: Sell stuff online.
    Go to https://shopify.com/lex
    MasterClass: Online classes from world-class experts.
    Go to https://masterclass.com/lexpod
    NetSuite: Business management software.
    Go to http://netsuite.com/lex
    AG1: All-in-one daily nutrition drinks.
    Go to https://drinkag1.com/lex
    LMNT: Zero-sugar electrolyte drink mix.
    Go to https://drinkLMNT.com/lex

    OUTLINE:
    (00:00) – Introduction
    (17:24) – Fasting
    (29:42) – Early life
    (41:38) – Advice to Young People
    (47:20) – Journey in the Himalayas
    (58:50) – Becoming a monk
    (1:00:37) – RSS and Hindu nationalism
    (1:08:22) – Explaining India
    (1:12:32) – Mahatma Gandhi
    (1:24:27) – Path to peace in Ukraine
    (1:27:41) – India and Pakistan
    (1:33:21) – Cricket and Football
    (1:37:45) – Donald Trump
    (1:48:56) – China and Xi Jinping
    (1:56:01) – Gujarat riots in 2002
    (2:11:37) – Biggest democracy in the world
    (2:21:53) – Power
    (2:26:39) – Hard work
    (2:29:46) – Srinivasa Ramanujan
    (2:31:53) – Decision-making process
    (2:39:40) – AI
    (2:49:55) – Education
    (3:00:10) – Learning and focus
    (3:06:01) – Mantra
    (3:07:45) – Meditation
    (3:13:43) – Lex visiting India
    (3:18:08) – Siddhartha

  • #459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters

    AI transcript
    0:00:04 The following is a conversation with Dylan Patel and Nathan Lampert.
    0:00:11 Dylan runs Semi Analysis, a well-respected research and analysis company that specializes
    0:00:16 in semiconductors, GPUs, CPUs, and AI hardware in general.
    0:00:23 Nathan is a research scientist at the Allen Institute for AI and is the author of the
    0:00:27 amazing blog on AI called Interconnects.
    0:00:32 They are both highly respected, read, and listened to by the experts, researchers, and
    0:00:35 engineers in the field of AI.
    0:00:38 And personally, I’m just a fan of the two of them.
    0:00:45 So I use the deep-seek moment that shook the AI world a bit as an opportunity to sit down
    0:00:48 with them and lay it all out.
    0:00:56 From deep-seek open AI, Google XAI Metaanthropic to NVIDIA and TSMC, and to U.S.-China-Taiwan
    0:01:01 Relations, and everything else that is happening at the cutting edge of AI.
    0:01:08 This conversation is a deep dive into many critical aspects of the AI industry.
    0:01:13 While it does get super technical, we try to make sure that it’s still accessible to
    0:01:19 folks outside of the AI field by defining terms, stating important concepts explicitly,
    0:01:24 spelling out acronyms, and, in general, always moving across the several layers of abstraction
    0:01:26 and levels of detail.
    0:01:32 There is a lot of hype in the media about what AI is and isn’t.
    0:01:38 The purpose of this podcast, in part, is to cut through the hype, through the bullshit,
    0:01:45 and the low-resolution analysis, and to discuss in detail how stuff works and what the implications
    0:01:46 are.
    0:01:52 Let me also, if I may, comment on the new open AI 03 mini residing model, the release
    0:01:58 of which we were anticipating during the conversation, and it did indeed come out right after.
    0:02:05 Its capabilities and costs are on par with our expectations as we stated.
    0:02:11 Open AI 03 mini is indeed a great model, but it should be stated that DeepSeek R1 has similar
    0:02:17 performance on benchmarks, is still cheaper, and it reveals its chain of thought reasoning
    0:02:19 which 03 mini does not.
    0:02:23 It only shows a summary of the reasoning.
    0:02:29 Plus R1 is open-weight, and 03 mini is not.
    0:02:35 By the way, I got a chance to play with 03 mini, and anecdotal, Vibe check-wise, I felt
    0:02:41 that 03 mini, specifically 03 mini high, is better than R1.
    0:02:47 Still, for me personally, I find that ClaudeSana35 is the best model for programming, except
    0:02:51 for tricky cases where I will use 01 Pro to brainstorm.
    0:02:57 Either way, many more better AI models will come, including reasoning models, both from
    0:03:00 American and Chinese companies.
    0:03:03 They will continue to shift the cost curve.
    0:03:07 But the “DeepSeek” moment is indeed real.
    0:03:13 I think it will still be remembered five years from now as a pivotal event in tech history,
    0:03:19 due in part to the geopolitical implications, but for other reasons too, as we discuss in
    0:03:23 detail from many perspectives in this conversation.
    0:03:26 And now, a quick few second mention of your sponsor.
    0:03:29 Check them out in the description, it’s the best way to support this podcast.
    0:03:37 We got NVIDIA AI for video generation, GitHub for coding, Shopify for selling stuff online,
    0:03:42 Netsuite for running your business, and AG1 for staying healthy.
    0:03:44 Choose wisely, my friends.
    0:03:50 Also if you want to get in touch with me for whatever reason, go to www.lxtremer.com/contact.
    0:03:54 And now, onto the follow ad reads, no ads in the middle, try to make this interesting,
    0:03:59 but if you skip them, please still check out our sponsors, I enjoy their stuff.
    0:04:01 Maybe you will too.
    0:04:05 This video is brought to you by a new sponsor, but I’ve known these folks for a long time
    0:04:07 and perfect fit for this podcast.
    0:04:14 They’re called NVIDIA AI, it’s a video generating app that allows you to create full length videos
    0:04:21 using just text, prompts, it’s intuitive, works amazing, it’s truly incredible what
    0:04:22 you can do.
    0:04:28 I’ve been playing quite a bit and using it for stock footage, and by the way they make
    0:04:35 it super easy for you to switch between actually available stock footage and AI generated footage.
    0:04:41 I’ve been preparing a lot for a conversation with Tim Sweeney who is the creator of Unreal
    0:04:47 Engine, and there’s 3D worlds and you get to think about the role of AI in generating
    0:04:49 those 3D worlds.
    0:04:52 That’s what’s coming, 5, 10, 20 years from now.
    0:04:57 In video games and simulations, a fundamental part of our lives would be generated with
    0:04:58 AI.
    0:05:04 And I think NVIDIA AI does a masterful job of pushing us in that direction in the 2D
    0:05:05 plane of video.
    0:05:11 Now, I think this is not a tool that replaces human creativity.
    0:05:14 I think it supercharges human creativity.
    0:05:22 I think now and for a long, long time to come, humans will be in the loop of creating great
    0:05:28 art because we’re creating for each other and only humans truly deeply know what makes
    0:05:35 other humans go ah, like the old Kerak line.
    0:05:43 If you want to try out NVIDIA AI, you can do so for free at nvideo.io/lexpod, saving
    0:05:47 time and money on production costs.
    0:05:53 This episode is brought to you by the thing that’s brought me joy for many, many years
    0:06:00 and created a community for hundreds of thousands, millions, I don’t know how many developers
    0:06:03 and that place is called GitHub.
    0:06:11 It is a company that really has supercharged the developer community.
    0:06:14 I mean, where would the world be without GitHub?
    0:06:21 And they’re also, as a company, pushing the limits of what’s possible in terms of AI
    0:06:24 code generation, AI assisted coding.
    0:06:27 They were pioneers on co-pilot.
    0:06:29 They are still pioneers in co-pilot.
    0:06:33 It’s super competitive space and they are doing their best to win.
    0:06:37 I will forever be a supporter of GitHub co-pilot.
    0:06:41 Now it integrates in a bunch of IDEs, not just into VS Code.
    0:06:45 I am, of course, a VS Code guy at this time.
    0:06:48 I did use JetBrains for a long time.
    0:06:50 I still dabble a little bit.
    0:06:55 For people who don’t know, JetBrains has a plethora, don’t like using that word and
    0:06:59 seems elitist, but it’s got to be a better word.
    0:07:04 There is a lot of different sort of sub IDEs inside JetBrains.
    0:07:07 I’ve even used DataGrip, which manages the MySQL.
    0:07:15 I should mention, and this might be embarrassing, but I have not, ooh, this might be interesting,
    0:07:25 but I have not used anything like co-pilot on any database management GUIs.
    0:07:29 I wonder if DataGrip integrates co-pilot.
    0:07:31 I’m going to have to check that out.
    0:07:38 But everything I use, I’m writing SQL queries from scratch inside the database management
    0:07:39 GUI.
    0:07:45 If I want to do complicated queries, I’ll go to any of the LLMs.
    0:07:51 They’re going to be close on a 3.5 or if it’s part of the code, then I’m going to be inside
    0:07:52 my IDE.
    0:07:57 I just like having a GUI management of a database.
    0:07:58 I’m going to have to check that out with it.
    0:08:01 If DataGrip integrates co-pilot, that’s going to be incredible.
    0:08:05 If not, I’m going to yell from the top of my lungs, hoping it will eventually because
    0:08:11 it’ll make my life a bit easier to have the visual component of a database together with
    0:08:16 a code component of SQL queries, yeah, it will be amazing.
    0:08:22 Anyway, go check out GitHub co-pilot at gh.io/copilot.
    0:08:27 This episode is brought to you by Shopify, not Spotify, Shopify.
    0:08:30 Easily confused, the CEOs are tagged on X often.
    0:08:33 They’re both great CEOs, but this is Shopify.
    0:08:40 You can sell anywhere with a great looking online store using Shopify.
    0:08:45 I’ve been learning a lot about the Silk Road actually, not the digital one.
    0:08:54 The one that for a lot of human history served as a place for merchants to travel and trade
    0:08:55 goods.
    0:09:02 I’m reading a lot about Jengis Khan who enforced the rule of law on the Silk Road and that
    0:09:09 actually had a big invigorating effect on the economy of the Eurasian region.
    0:09:16 Anyway, that was before computers, if they had computers, imagine if they had computers.
    0:09:22 Boy, would the Jengis Khan force be terrifying.
    0:09:31 Or maybe not, maybe each technological age has their own kind of military tactician,
    0:09:37 their own human that matches perfectly for that time in order to conquer the land and
    0:09:38 people.
    0:09:42 Still, what a terrifying time that was.
    0:09:49 Much of human history, lots of beauty, but lots of ways to die.
    0:09:56 So, I’m glad to be living in the 21st century where I can sit back with a margarita.
    0:10:01 I don’t drink margaritas, but if I wanted to, I could and then buy stuff on stores created
    0:10:02 by Shopify.
    0:10:10 Anyway, you can sign up for a $1 per month trial period at Shopify.com/Lex, go to Shopify.com/Lex
    0:10:13 to take your business to the next level today.
    0:10:19 This episode was also brought to you by Netsuite, an all-in-one business management system.
    0:10:22 Not sure why I said that so slowly, but I did.
    0:10:29 I actually did a little intermission for five, six minutes for this episode where I added
    0:10:35 in the middle of it an addendum after having tried to open AI O3 mini.
    0:10:42 That was such a weird feeling to sort of insert myself in the middle of an episode.
    0:10:44 I felt like a third wheel to myself.
    0:10:47 It’s like, “Hey, hey everyone, what are you doing?
    0:10:50 Why did you guys not invite me to this party?”
    0:10:52 That’s what I felt like.
    0:10:55 Hey Lux from the past, it’s me, Lux from the future.
    0:10:59 Right, I should be talking about Netsuite, which is an all-in-one cloud business management
    0:11:00 system.
    0:11:11 It’s the machine inside the machine and boy, are we increasingly building stacks of machines.
    0:11:18 Layers and layers and layers of abstraction until we’re just sitting back on a beach somewhere
    0:11:22 talking to an AI system that’s taking care of everything else.
    0:11:28 Anyway, you can download the CFO’s guide to AI and Machine Learning at Netsuite.com/Lex.
    0:11:37 This episode is also brought to you by AG1, an all-in-one daily drink to support better
    0:11:38 health and performance.
    0:11:39 I drank it today.
    0:11:40 I enjoyed it today.
    0:11:42 I’ve been sleeping very, very little.
    0:11:47 The amount of work I have to do is insane.
    0:11:55 Last night at 6 a.m., I went to bed at 7 a.m., 8 a.m., thinking about doing an all-nighter.
    0:11:56 It’s madness.
    0:12:03 But anyway, at 6 a.m., I drank an AG1 and I was sitting in a couch and I was watching
    0:12:07 like 10 minutes of American Pride Meval.
    0:12:13 I watched like 5, 10 minutes of a show at a time and I was sipping on the AG1 and I was
    0:12:20 thinking how lucky, how fucking lucky I am to be alive.
    0:12:25 First of all because I’m watching the American Frontier and people being just brutal to each
    0:12:31 other, the brutal reality of nature and war during that time and the lawlessness during
    0:12:32 that time.
    0:12:42 But also just how lucky I am to be on the spinning rock and join this green healthy drink.
    0:12:48 Being able to watch a show, being able to work hard towards the thing I love, being able
    0:12:51 to love, being able to breathe, all of it.
    0:12:52 Just amazing.
    0:13:01 Anyway, they’ll give you one month supply of fish oil when you sign up at drinkag1.com/lex.
    0:13:03 This is the Lex Friedman Podcast.
    0:13:06 To support it, please check out our sponsors in the description.
    0:13:28 And now, dear friends, here’s Dylan Patel and Nathan Lambert.
    0:13:32 A lot of people are curious to understand China’s deep-seek AI models, so let’s lay
    0:13:33 it out.
    0:13:40 Can you describe what deep-seek v3 and deep-seek r1 are, how they work, how they’re trained?
    0:13:43 Let’s look at the big picture and then we’ll zoom in on the details.
    0:13:51 Yeah, so deep-seek v3 is a new mixture of experts, transformer language model from deep-seek
    0:13:53 who is based in China.
    0:13:58 They have some new specifics in the model that we’ll get into.
    0:14:03 Largely, this is an open-weight model and it’s an instruction model like what you would
    0:14:05 use in chatGPT.
    0:14:09 They also released what is called the base model, which is before these techniques of
    0:14:11 post-training.
    0:14:16 Most people use instruction models today and those are what’s served in all sorts of applications.
    0:14:21 This was released, I believe, December 26th or that week.
    0:14:28 And then weeks later on January 20th, deep-seek released deep-seek r1, which is a reasoning
    0:14:33 model which really accelerated a lot of this discussion.
    0:14:38 This reasoning model has a lot of overlapping training steps to deep-seek v3 and it’s confusing
    0:14:44 that you have a base model called v3 that you do something to to get a chat model and
    0:14:47 then you do some different things to get a reasoning model.
    0:14:51 I think a lot of the AI industry is going through this challenge of communications right now
    0:14:54 where OpenAI makes fun of their own naming schemes.
    0:15:00 They have GPT-40, they have OpenAI-01 and there’s a lot of types of models, so we’re
    0:15:02 going to break down what each of them are.
    0:15:07 There’s a lot of technical specifics on training and go from high-level to specific and kind
    0:15:09 of go through each of them.
    0:15:13 There’s so many places we can go here, but maybe let’s go to open weights first.
    0:15:17 What does it mean for a model to be open weights and what are the different flavors of open
    0:15:18 source in general?
    0:15:22 Yeah, so this discussion has been going on for a long time in AI, it became more important
    0:15:27 since chat GPT or more focal since chat GPT at the end of 2022.
    0:15:33 Open weights is the accepted term for when model weights of a language model are available
    0:15:35 on the internet for people to download.
    0:15:39 Those weights can have different licenses, which is effectively the terms by which you
    0:15:41 can use the model.
    0:15:44 There are licenses that come from history and open source software.
    0:15:48 There are licenses that are designed by companies specifically.
    0:15:56 All of Lama, DeepSeq, Quen, Mistral, these popular names in open weight models have some
    0:15:57 of their own licenses.
    0:16:01 It’s complicated because not all the same models have the same terms.
    0:16:06 The big debate is on what makes a model open weight.
    0:16:07 Why are we saying this term?
    0:16:08 It’s kind of a mouthful.
    0:16:12 It sounds close to open source, but it’s not the same.
    0:16:16 There’s still a lot of debate on the definition and soul of open source AI.
    0:16:21 Open source software has a rich history on freedom to modify, freedom to take on your
    0:16:26 own, freedom for many restrictions on how you would use the software and what that means
    0:16:31 for AI is still being defined.
    0:16:33 For what I do, I work at the Allen Institute for AI.
    0:16:34 We’re a nonprofit.
    0:16:39 We want to make AI open for everybody and we try to lead on what we think is truly open
    0:16:40 source.
    0:16:43 There’s not full agreement in the community, but for us that means releasing the training
    0:16:49 data, releasing the training code, and then also having open weights like this.
    0:16:52 We’ll get into the details of the models.
    0:16:57 Again and again, as we try to get deeper into how the models were trained, we will say things
    0:17:02 like the data processing, data filtering, data quality is the number one determinant
    0:17:07 of the model quality and then a lot of the training code is the determinant on how long
    0:17:10 it takes to train and how fast your experimentation is.
    0:17:18 Without fully open source models where you have access to this data, it’s harder to replicate.
    0:17:24 We’ll get into cost numbers for DeepSeq v3 on mostly GPU hours and how much you could
    0:17:28 pay to rent those yourselves, but without the data, the replication cost is going to
    0:17:31 be far, far higher.
    0:17:32 Same goes for the code.
    0:17:37 We should also say that this is probably one of the more open models out of the frontier
    0:17:39 models.
    0:17:44 This full spectrum, or probably the fullest open source, like you said, open code, open
    0:17:50 data, open weights, this is not open code.
    0:17:56 This is probably not open data and this is open weights.
    0:18:03 The licensing is MIT license, or I mean there’s some nuance in the different models, but it’s
    0:18:08 towards the free, in terms of the open source movement, these are the good guys.
    0:18:13 DeepSeq is doing fantastic work for disseminating understanding of AI.
    0:18:19 Their papers are extremely detailed in what they do and for other teams around the world,
    0:18:25 they’re very actionable in terms of improving your own training techniques.
    0:18:27 We’ll talk about licenses more.
    0:18:32 The DeepSeq R1 model has a very permissive license, it’s called the MIT license.
    0:18:36 That effectively means there’s no downstream restrictions on commercial use.
    0:18:38 There’s no use case restrictions.
    0:18:43 You can use the outputs from the models to create synthetic data.
    0:18:44 This is all fantastic.
    0:18:48 I think the closest peer is something like Lama, where you have the weights and you have
    0:18:50 a technical report.
    0:18:54 The technical report is very good for Lama, one of the most red PDFs of the year.
    0:18:58 Last year is the Lama 3 paper, but in some ways it’s slightly less actionable.
    0:19:03 It has less details on the training specifics, less plots and so on.
    0:19:09 The Lama 3 license is more restrictive than MIT and then between the DeepSeq custom license
    0:19:11 and the Lama license, we can get into this whole rabbit hole.
    0:19:16 I think we’ll make sure we want to go down the license rabbit hole before we do specifics.
    0:19:17 Yeah.
    0:19:22 It should be stated that one of the implications of DeepSeq, it puts pressure on Lama and everybody
    0:19:26 else on open AI to push towards open source.
    0:19:30 That’s the other side of open source that you mentioned is how much is published in
    0:19:32 detail about it.
    0:19:38 How open are you with the insights behind the code?
    0:19:39 How good is the technical reports?
    0:19:43 Are they hand wavy or is there actual details in there?
    0:19:46 That’s one of the things that DeepSeq did well as they published a lot of the details.
    0:19:47 Yeah.
    0:19:51 Especially in the DeepSeq V3, which is their pre-training paper, they were very clear that
    0:19:58 they are doing interventions on the technical stack that go at many different levels.
    0:20:03 For example, to get highly efficient training, they’re making modifications at or below
    0:20:06 the CUDA layer for NVIDIA chips.
    0:20:10 I have never worked there myself and there are a few people in the world that do that
    0:20:12 very well and some of them are at DeepSeq.
    0:20:18 These types of people are at DeepSeq and leading American frontier labs, but they’re not many
    0:20:19 places.
    0:20:25 To help people understand the other implication of open weights, there’s a topic we’ll return
    0:20:26 to often here.
    0:20:38 There’s a fear that China, the nation, might have interest in stealing American data, violating
    0:20:40 privacy of American citizens.
    0:20:45 What can we say about open weights to help us understand what the weights are able to
    0:20:49 do in terms of stealing people’s data?
    0:20:54 These weights that you can download from Huggingface or other platforms are very big matrices of
    0:20:55 numbers.
    0:20:59 You can download them to a computer in your own house that has no internet and you can
    0:21:03 run this model and you’re totally in control of your data.
    0:21:07 That is something that is different than how a lot of language model usage is actually
    0:21:12 done today, which is mostly through APIs, where you send your prompt to GPUs run by
    0:21:14 certain companies.
    0:21:17 These companies will have different distributions and policies on how your data is stored, if
    0:21:23 it is used to train future models, where it is stored, if it is encrypted, and so on.
    0:21:27 The open weights are you have your fate of data in your own hands, and that is something
    0:21:31 that is deeply connected to the soul of open source.
    0:21:35 It’s not the model that steals your data, it’s whoever’s hosting the model, which could
    0:21:42 be China, if you’re using the DeepSeek app, or it could be Proplexity.
    0:21:46 You’re trusting them with your data, or OpenAI, you’re trusting them with your data.
    0:21:48 Some of these are American companies, some of these are Chinese companies, but the model
    0:21:51 itself is not doing the stealing.
    0:21:52 That’s the host.
    0:21:56 All right, so back to the basics.
    0:22:01 What’s the difference between DeepSeek v3 and DeepSeek r1?
    0:22:05 Can we try to lay out the confusion potential?
    0:22:10 Yes, so for one, I have very understanding of many people being confused by these two
    0:22:11 model names.
    0:22:15 So I would say the best way to think about this is that when training a language model,
    0:22:19 you have what is called pre-training, which is when you’re predicting the large amounts
    0:22:24 of mostly internet text, you’re trying to predict the next token, and what to know about
    0:22:30 these new DeepSeek models is that they do this internet large-scale pre-training once
    0:22:33 to get what is called DeepSeek v3 base.
    0:22:34 This is the base model.
    0:22:37 It’s just going to finish your sentences for you.
    0:22:42 It’s going to be harder to work with than ChatGPT, and then what DeepSeek did is they’ve
    0:22:49 done two different post-training regimes to make the models have specific desirable behaviors.
    0:22:55 So what is the more normal model in terms of the last few years of AI, an instruct model,
    0:22:58 a chat model, a “aligned model,” a helpful model.
    0:23:02 There are many ways to describe this is more standard post-training.
    0:23:06 So this is things like instruction tuning, reinforcement learning from human feedback.
    0:23:08 We’ll get into some of these words.
    0:23:12 And this is what they did to create the DeepSeek v3 model.
    0:23:18 This was the first model to be released, and it is very high-performance, it’s competitive
    0:23:22 with GPT-4, Llama 405b, so on.
    0:23:26 And then when this release was happening, we don’t know their exact timeline, or soon
    0:23:32 after they were finishing the training of a different training process from the same
    0:23:37 next token prediction base model that I talked about, which is when this new reasoning training
    0:23:41 that people have heard about comes in in order to create the model that is called DeepSeek
    0:23:42 R1.
    0:23:46 The R through this conversation is good for grounding for reasoning, and the name is
    0:23:51 also similar to OpenAI’s 01, which is the other reasoning model that people have heard
    0:23:52 about.
    0:23:56 And we’ll have to break down the training for R1 in more detail, because for one, we
    0:24:02 have a paper detailing it, but also it is a far newer set of techniques for the AI community,
    0:24:06 so it’s a much more rapidly evolving area of research.
    0:24:13 Maybe we should also say the big two categories of training of pre-training and post-training,
    0:24:14 these umbrella terms that people use.
    0:24:20 So what is pre-training and what is post-training, and what are the different flavors of things
    0:24:22 underneath post-training umbrella?
    0:24:26 Yeah, so pre-training, I’m using some of the same words that really get the message across
    0:24:30 is you’re doing what is called autoregressive prediction to predict the next token in a
    0:24:32 series of documents.
    0:24:39 This is done over standard practice is trillions of tokens, so this is a ton of data that is
    0:24:41 mostly scraped from the web.
    0:24:46 In some of DeepSeq’s earlier papers, they talk about their training data being distilled
    0:24:47 for math.
    0:24:52 I shouldn’t use this word yet, but taken from Common Crawl, and that’s a public access
    0:24:56 that anyone listening to this could go download data from the Common Crawl website.
    0:24:58 This is a crawler that is maintained publicly.
    0:25:03 Yes, other tech companies eventually shift to their own crawler, and DeepSeq likely has
    0:25:05 done this as well, as most frontier labs do.
    0:25:10 But this sort of data is something that people can get started with, and you’re just predicting
    0:25:12 text in a series of documents.
    0:25:19 This can be scaled to be very efficient, and there’s a lot of numbers that are thrown
    0:25:24 around in AI training, like how many floating-point operations or flops are used, and you can
    0:25:30 also look at how many hours of these GPUs that are used.
    0:25:37 It’s largely one-loss function taken to a very large amount of compute usage.
    0:25:42 You set up really efficient systems, and then at the end of that you have the space model,
    0:25:48 and pre-training is where there is a lot more of complexity in terms of how the process
    0:25:55 is emerging or evolving, and the different types of training losses that you will use.
    0:26:00 This is a lot of techniques grounded in the natural language processing literature.
    0:26:04 The oldest technique, which is still used today, is something called instruction tuning,
    0:26:07 or also known as supervised fine-tuning.
    0:26:12 These acronyms will be IFT or SFT, that people really go back and forth throughout them,
    0:26:17 and I will probably do the same, which is where you add this formatting to the model,
    0:26:23 where it knows to take a question that is like, “Explain the history of the Roman Empire
    0:26:28 to me,” or sort of question you’ll see on Reddit or Stack Overflow, and then the model
    0:26:33 will respond in a information-dense but presentable manner.
    0:26:38 The core of that formatting is in this instruction-tuning phase, and then there’s two other categories
    0:26:41 of loss functions that are being used today.
    0:26:44 One I will classify as preference fine-tuning.
    0:26:48 Preference fine-tuning is a generalized term for what came out of reinforcement learning
    0:26:52 from human feedback, which is RLHF.
    0:26:58 This reinforcement learning from human feedback is credited as the technique that helped chat
    0:27:00 GPT break through.
    0:27:05 It is a technique to make the responses that are nicely formatted, like these Reddit answers,
    0:27:08 more in tune with what a human would like to read.
    0:27:13 This is done by collecting pairwise preferences from actual humans out in the world to start,
    0:27:18 and now AIs are also labeling this data, and we’ll get into those trade-offs.
    0:27:23 You have this kind of contrastive loss function between a good answer and a bad answer.
    0:27:25 The model learns to pick up these trends.
    0:27:27 There’s different implementation ways.
    0:27:29 You have things called reward models.
    0:27:31 You could have direct alignment algorithms.
    0:27:35 There’s a lot of really specific things you can do, but all of this is about fine-tuning
    0:27:37 to human preferences.
    0:27:43 The final stage is much newer and will link to what is done in R1, and these reasoning
    0:27:46 models is, I think, OpenAI’s name for this.
    0:27:51 They had this new API in the fall, which they called the Reinforcement Fine-Tuning API.
    0:27:55 This is the idea that you use the techniques of reinforcement learning, which is a whole
    0:27:56 framework of AI.
    0:27:58 There’s a deep literature here.
    0:28:04 To summarize, it’s often known as trial and error learning, or the subfield of AI where
    0:28:10 you’re trying to make sequential decisions in a certain potentially noisy environment.
    0:28:14 There’s a lot of ways we can go down that, but fine-tuning language models where they
    0:28:19 can generate an answer, and then you check to see if the answer matches the true solution.
    0:28:24 For math or code, you have an exactly correct answer for math.
    0:28:26 You can have unit tests for code.
    0:28:29 What we’re doing is we are checking the language models work, and we’re giving it multiple
    0:28:32 opportunities on the same questions to see if it is right.
    0:28:38 If you keep doing this, the models can learn to improve invariable domains to a great extent.
    0:28:39 It works really well.
    0:28:42 It’s a newer technique in the academic literature.
    0:28:48 It’s been used at Frontier Labs in the US that don’t share every detail for multiple years.
    0:28:52 This is the idea of using reinforcement learning with language models, and it has been taking
    0:28:54 off, especially in this deep-seek moment.
    0:29:00 We should say that there’s a lot of exciting stuff going on, again, across the stack, but
    0:29:04 the post-training probably this year is going to be a lot of interesting developments in
    0:29:05 the post-training.
    0:29:06 We’ll talk about it.
    0:29:12 I almost forgot to talk about the difference between deep-seek v3 and R1 on the user experience
    0:29:13 side.
    0:29:16 Forget the technical stuff, forget all of that.
    0:29:19 People that don’t know anything about AI, they show up.
    0:29:20 What’s the actual experience?
    0:29:24 What’s the use case for each one when they actually type and talk to it?
    0:29:26 What is each good at, that kind of thing?
    0:29:28 Let’s start with deep-seek v3 again.
    0:29:30 It’s what more people would have tried something like it.
    0:29:35 You ask it a question, it’ll start generating tokens very fast, and those tokens will look
    0:29:38 like a very human legible answer.
    0:29:41 It’ll be some sort of markdown list.
    0:29:46 It might have formatting to help you draw to the core details in the answer, and it’ll
    0:29:49 generate tens to hundreds of tokens.
    0:29:57 A token is normally a word for common words or a sub-word part in a longer word.
    0:30:01 It’ll look like a very high-quality Reddit or Stack Overflow answer.
    0:30:06 These models are really getting good at doing these across a wide variety of domains.
    0:30:11 Even things that, if you’re an expert, things that are close to the fringe of knowledge,
    0:30:14 they will still be fairly good at.
    0:30:20 Getting edge AI topics that I do research on, these models are capable for study aid,
    0:30:23 and they’re regularly updated.
    0:30:28 Where this changes is with the deep-seek R1, what is called these reasoning models, is
    0:30:34 when you see tokens coming from these models to start, it will be a large chain of thought
    0:30:35 process.
    0:30:39 We’ll get back to chain of thought in a second, which looks like a lot of tokens where the
    0:30:41 model is explaining the problem.
    0:30:45 The model will often break down the problem and be like, “Okay, they asked me for this.
    0:30:46 Let’s break down the problem.
    0:30:50 I’m going to need to do this,” and you’ll see all of this generating from the model.
    0:30:52 It’ll come very fast in most user experiences.
    0:30:55 These APIs are very fast, so you’ll see a lot of tokens, a lot of words show up really
    0:30:56 fast.
    0:31:01 It’ll keep flowing on the screen, and this is all the reasoning process, and then eventually
    0:31:05 the model will change its tone in R1, and it’ll write the answer, where it summarizes
    0:31:11 its reasoning process and writes a similar answer to the first types of model.
    0:31:17 In DeepSeq’s case, which is part of why this was so popular even outside the AI community,
    0:31:21 is that you can see how the language model is breaking down problems.
    0:31:24 You get this answer on a technical side.
    0:31:27 They train the model to do this specifically where they have a section, which is reasoning,
    0:31:31 and then it generates a special token, which is probably hidden from the user most of the
    0:31:35 time, which says, “Okay, I’m starting the answer,” so the model is trained to do this
    0:31:37 two-stage process on its own.
    0:31:43 If you use a similar model in, say, OpenAI, OpenAI’s user interface is trying to summarize
    0:31:49 this process for you nicely by showing the sections that the model is doing, and it’ll
    0:31:54 kind of click through, it’ll say, breaking down the problem, making X calculation, cleaning
    0:31:58 the result, and then the answer will come for something like OpenAI.
    0:32:03 Maybe it’s useful here to go through an example of a DeepSeq R1 reasoning.
    0:32:09 And so, if you’re looking at the screen here, what you’ll see is a screenshot of the DeepSeq
    0:32:15 chat app, and at the top is thought for 151 seconds with the drop-down arrow.
    0:32:18 Underneath that, if we were in an app that we were running, the drop-down arrow would
    0:32:19 have the reasoning.
    0:32:25 So, in this case, the specific question, which, you know, I’m philosophically/podhead
    0:32:34 inclined, so this is asking DeepSeq R1 for one truly novel insight about humans.
    0:32:39 And it reveals the reasoning, and basically, the truly novel aspect is what’s pushing
    0:32:44 the reasoning to constantly sort of the model asking itself, “Is this truly novel?”
    0:32:50 So it’s actually challenging itself to be more novel, more counterintuitive, less cringe,
    0:32:51 I suppose.
    0:32:57 So some of the reasoning says, this is just snapshots, “Alternatively, humans have a
    0:33:01 unique meta-emotion where they feel emotions about their own emotions, e.g. feeling guilty
    0:33:02 about being angry.
    0:33:06 This recursive emotional layer and creates complex motivational drives that don’t exist
    0:33:07 in other animals.
    0:33:09 The insight is that human emotions are nested.”
    0:33:14 So it’s like, it’s reasoning through how humans feel emotions.
    0:33:15 It’s reasoning about meta-emotions.
    0:33:17 It’s going to have pages and pages of this.
    0:33:20 It’s almost too much to actually read, but it’s nice to skim as it’s coming.
    0:33:21 It’s a stream of consciousness.
    0:33:26 It’s a James Joyce-like stream of consciousness, and then it goes, “Wait, the user wants something
    0:33:28 that’s not seen anywhere else.
    0:33:30 Let me dig deeper.”
    0:33:35 And consider the human ability to hold contradictory beliefs simultaneously, cognitive dissonance
    0:33:41 is known, but perhaps the function is to allow flexible adaptation, so on and so forth.
    0:33:50 I mean, that really captures the public imagination that, holy shit, this isn’t, I mean, intelligence
    0:33:57 slash almost like an inkling of sentience, because you’re thinking through, you’re self-reflecting,
    0:33:59 you’re deliberating.
    0:34:06 And the final result of that, after 157 seconds, is humans instinctively convert selfish desires
    0:34:13 into cooperative systems by collectively pretending abstract rules, money, laws, rights are real.
    0:34:18 These shared hallucinations act as, quote, “games,” where competition is secretly redirected
    0:34:25 to benefit the group, turning conflict into society’s fuel, pretty profound, I mean, you
    0:34:26 know.
    0:34:31 This is a confidential digression, but a lot of people have found that these reasoning
    0:34:34 models can sometimes produce much more eloquent text.
    0:34:39 That is at least an interesting example, I think, depending on how open-minded you are,
    0:34:42 you find language models interesting or not, and there’s a spectrum there.
    0:34:47 Well, I mean, we’ll talk about different benchmarks as well, but some is just a vibe.
    0:34:55 Like that, in itself, is a, let’s say, quote, “fire tweet,” if I’m trying to produce something
    0:34:59 that where people are like, “Oh, shit, okay, so that’s a chain of thought, we’ll probably
    0:35:02 return to it more.”
    0:35:07 How are they able to achieve such low cost on the training and the inference?
    0:35:09 Maybe you could talk the training first.
    0:35:16 Yeah, so there’s two main techniques that they implemented that are probably the majority
    0:35:20 of their efficiency, and then there’s a lot of implementation details that maybe we’ll
    0:35:23 gloss over or get into later that sort of contribute to it.
    0:35:29 But those two main things are, one, is they went to a mixture of experts model, which
    0:35:30 we’ll define in a second.
    0:35:35 And then the other thing is that they invented this new technique called MLA, latent attention.
    0:35:36 Both of these are big deals.
    0:35:40 Mixture of experts is something that’s been in the literature for a handful of years,
    0:35:46 and OpenAI with GPT-4 was the first one to productize a mixture of experts model.
    0:35:51 And what this means is, when you look at the common models around that most people have
    0:35:55 been able to interact with that are open, think Lama.
    0:36:01 Lama is a dense model, i.e., every single parameter or neuron is activated as you’re
    0:36:05 going through the model for every single token you generate.
    0:36:08 Now with a mixture of experts model, you don’t do that.
    0:36:10 How does the human actually work?
    0:36:16 Well, my visual cortex is active when I’m thinking about vision tasks and other things.
    0:36:18 My amygdala is when I’m scared.
    0:36:21 These different aspects of your brain are focused on different things.
    0:36:24 A mixture of experts model attempts to approximate this to some extent.
    0:36:30 It’s nowhere close to what a brain architecture is, but different portions of the model activate.
    0:36:34 You’ll have a set number of experts in the model and a set number that are activated each
    0:36:35 time.
    0:36:38 And this dramatically reduces both your training and inference costs.
    0:36:44 Because now, if you think about the parameter count as the total embedding space for all
    0:36:49 of this knowledge that you’re compressing down during training, when you’re embedding
    0:36:54 this data in instead of having to activate every single parameter every single time you’re
    0:36:58 training or running inference, now you can just activate a subset.
    0:37:01 And the model will learn which expert to route to for different tasks.
    0:37:06 And so this is a humongous innovation in terms of, hey, I can continue to grow the total
    0:37:08 embedding space of parameters.
    0:37:12 And so DeepSeq’s model is 600-something billion parameters.
    0:37:15 Relative to Lama 405B, it’s 405 billion parameters.
    0:37:18 Relative to Lama 70B, it’s 70 billion parameters.
    0:37:23 So this model technically has more embedding space for information to compress all of the
    0:37:25 world’s knowledge that’s on the internet down.
    0:37:31 But at the same time, it is only activating around 37 billion of the parameters.
    0:37:35 So only 37 billion of these parameters actually need to be computed every single time you’re
    0:37:38 training data or inferencing data out of it.
    0:37:43 And so versus, again, the Lama model, 70 billion parameters must be activated, or 405 billion
    0:37:44 parameters must be activated.
    0:37:49 So you’ve dramatically reduced your compute cost when you’re doing training and inference
    0:37:51 with this mixture of experts architecture.
    0:37:55 So we break down where it actually applies and go into the transformer.
    0:37:56 Is that useful?
    0:37:57 Let’s go.
    0:37:58 Let’s go into the transformer.
    0:38:04 The transformer is a thing that is talked about a lot, and we will not cover every detail.
    0:38:09 Essentially the transformer is built on repeated blocks of this attention mechanism, and then
    0:38:14 a traditional dense, fully connected multilayer perception, whatever word you want to use
    0:38:19 for your normal neural network, and you alternate these blocks, there’s other details.
    0:38:22 And where a mixture of experts is applied is at this dense model.
    0:38:28 The dense model holds most of the weights if you count them in a transformable model.
    0:38:32 So you can get really big gains from this mixture of experts on parameter efficiency,
    0:38:37 at training and inference, because you get this efficiency by not activating all of these
    0:38:38 parameters.
    0:38:43 We should also say that a transformer is a giant neural network.
    0:38:49 And then there’s for 15 years now, there’s what’s called the deep learning revolution.
    0:38:53 Networks gotten larger and larger, and at a certain point the scaling laws appeared where
    0:38:54 people realized…
    0:38:57 This is a scaling law shirt by the way.
    0:39:04 Representing scaling laws, where it became more and more formalized that bigger is better
    0:39:07 across multiple dimensions of what bigger means.
    0:39:12 But these are all neural networks we’re talking about, and we’re talking about different architectures
    0:39:17 of how to construct these neural networks such that the training and the inference on
    0:39:19 them is super efficient.
    0:39:23 Every different type of model has a different scaling law for it, which is effectively for
    0:39:29 how much compute you put in, the architecture will get to different levels of performance
    0:39:30 at test tasks.
    0:39:34 And mixture of experts is one of the ones at training time, even if you don’t consider
    0:39:36 the inference benefits, which are also big.
    0:39:41 At training time, your efficiency with your GPUs is dramatically improved by using this
    0:39:43 architecture if it is well implemented.
    0:39:50 So you can get effectively the same performance model and evaluation scores with numbers like
    0:39:51 30% less compute.
    0:39:55 I think there’s going to be a wide variation depending on your implementation details and
    0:39:56 stuff.
    0:40:00 But it is just important to realize that this type of technical innovation is something
    0:40:02 that gives huge gains.
    0:40:07 And I expect most companies that are serving their models to move to this mixture of experts
    0:40:12 implementation, historically the reason why not everyone might do it is because it’s an
    0:40:15 implementation complexity, especially when doing these big models.
    0:40:19 So this is one of the things that’s deep sea gets credit for is they do this extremely
    0:40:20 well.
    0:40:25 This mixture of experts extremely well, this architecture for what is called deep seek MOE,
    0:40:30 MOE is the shortened version of mixture of experts, is multiple papers old.
    0:40:35 This part of their training infrastructure is not new to these models alone.
    0:40:40 And same goes for what Dylan mentioned with multi head latent attention is all about reducing
    0:40:46 memory usage during inference and same things during training by using some fancy low rank
    0:40:48 approximation math.
    0:40:51 If you get into the details with this latent attention, it’s one of those things that I
    0:40:56 look at and say, okay, they’re doing really complex implementations because there’s other
    0:41:01 parts of language models such as embeddings that are used to extend the context length.
    0:41:07 The common one that deep seek uses rotary positional impeddings, which is called rope.
    0:41:10 And if you want to use rope with a normal MOE, it’s kind of a sequential thing.
    0:41:16 You take these, you take two of the attention matrices and you rotate them by a complex value
    0:41:21 rotation, which is a matrix multiplication with deep seeks MLA with this new attention
    0:41:25 architecture, they need to do some clever things because they’re not set up the same
    0:41:28 and it just makes the implementation complexity much higher.
    0:41:30 So they’re managing all of these things.
    0:41:34 And these are probably the sort of things that opening eye, these closed labs are doing.
    0:41:37 We don’t know if they’re doing the exact same techniques, but they actually shared them
    0:41:42 with the world, which is really nice to be like, this is the cutting edge of efficient
    0:41:43 language model training.
    0:41:49 And some of this is requires low level engineering, just is a giant mess trickery.
    0:41:55 So as I understand that one below CUDA, so they go super low programming of GPUs.
    0:41:59 Effectively, NVIDIA builds this library called nickel, right?
    0:42:03 In which, you know, when you’re training a model, you have all these communications
    0:42:06 between every single layer of the model and you may have over a hundred layers.
    0:42:07 What does the nickel stand for?
    0:42:08 It’s NCCL.
    0:42:11 NVIDIA communications collectives library.
    0:42:12 Nice.
    0:42:13 Damn.
    0:42:19 And so, when you’re training a model, right, you’re going to have all these all reduces
    0:42:20 and all gathers, right?
    0:42:25 Between each layer, between the multi layer perceptron or feed forward network and the
    0:42:29 attention mechanism, you’ll have basically the model synchronized, right?
    0:42:33 Or you’ll have all reducer and all gather.
    0:42:36 And this is a communication between all the GPUs in the network, whether it’s in training
    0:42:37 or inference.
    0:42:39 So NVIDIA has a standard library.
    0:42:43 This is one of the reasons why it’s really difficult to use anyone else’s hardware for
    0:42:47 training is because no one’s really built a standard communications library.
    0:42:50 And NVIDIA has done this at a sort of a higher level, right?
    0:42:55 A deep seek because they have certain limitations around the GPUs that they have access to.
    0:43:00 The interconnects are limited to some extent by the restrictions of the GPUs that were
    0:43:04 shipped into China legally, not the ones that are smuggled but legally shipped in that they
    0:43:05 used to train this model.
    0:43:09 They had to figure out how to get efficiencies, right?
    0:43:14 And one of those things is that instead of just calling the NVIDIA library, Nickel, right?
    0:43:20 They instead created their, they scheduled their own communications, which some of the
    0:43:22 labs do, right?
    0:43:25 You met a talk about in Lama 3 how they made their own custom version of Nickel.
    0:43:28 This is, they didn’t talk about the implementation details.
    0:43:31 This is some of what they did, probably not as well as, maybe not as well as deep seek
    0:43:36 because deep seek, you know, necessity is the mother of innovation and they had to do
    0:43:37 this.
    0:43:41 Whereas in the case, you know, OpenAI has people that do this sort of stuff, Anthropic,
    0:43:42 et cetera.
    0:43:45 But, you know, deep seek certainly did it publicly and they may have done it even better
    0:43:50 because they were gimped on a certain aspect of the chips that they have access to.
    0:43:57 And so they scheduled communications, you know, by scheduling specific SMs, SMs you could
    0:44:00 think of as like the core on a GPU, right?
    0:44:05 So there’s hundreds of cores or there’s, you know, a bit over a hundred cores SMs on
    0:44:08 a GPU and they were specifically scheduling, hey, which ones are running the model, which
    0:44:11 ones are doing all reduce, which one are doing all gather, right?
    0:44:13 And they would flip back and forth between them.
    0:44:16 And this requires extremely low level programming.
    0:44:20 This is what Nickel does automatically or other NVIDIA libraries handle this automatically
    0:44:21 usually.
    0:44:22 Yeah, exactly.
    0:44:26 And so technically they’re using, you know, PTX, which is like sort of like, you could
    0:44:28 think of it as like an assembly type language.
    0:44:30 It’s not exactly that or instruction set, right?
    0:44:35 Like coding directly to assembly or instruction set, it’s not exactly that, but that’s still
    0:44:39 part of technically CUDA, but it’s like, do I want to write in Python, you know, PyTorch
    0:44:41 equivalent and call NVIDIA libraries?
    0:44:43 Do I want to go down to the C level, right?
    0:44:46 Or, you know, encode even lower level or do I want to go all the way down to the assembly
    0:44:47 or ISO level?
    0:44:52 And there are cases where you go all the way down there at the very big labs, but most
    0:44:54 companies just do not do that, right?
    0:44:58 Because it’s a waste of time and the efficiency gains you get are not worth it.
    0:45:01 What deep-seeks implementation is so complex, right?
    0:45:03 Especially with their mixture of experts, right?
    0:45:07 People have done mixture of experts, but they’re generally 8/16 experts, right?
    0:45:08 And they activate too.
    0:45:13 So, you know, one of the words we like to use is like sparsity factor, right?
    0:45:14 Or usage, right?
    0:45:18 So you might have four, you know, one fourth of your model activate, right?
    0:45:22 And that’s what Mistral’s mixed role model, right?
    0:45:26 Their model that really catapulted them to like, oh my God, they’re really, really good.
    0:45:32 AI has also had models that are MOE and so have all the other labs that are major closed.
    0:45:36 But what deep-seek did that maybe only the leading labs have only just started recently
    0:45:38 doing is have such a high sparsity factor, right?
    0:45:40 It’s not one fourth of the model, right?
    0:45:43 Two out of eight experts activating every time you go through the model.
    0:45:46 It’s eight out of 256.
    0:45:50 And there’s different implementations for mixture of experts where you can have some
    0:45:56 of these experts that are always activated, which this just looks like a small neural network.
    0:45:58 And then all the tokens go through that.
    0:46:03 And then they also go through some that are selected by this routing mechanism.
    0:46:08 And one of the innovations in deep-seek’s architecture is that they change the routing
    0:46:10 mechanism in mixture of expert models.
    0:46:15 There’s something called an auxiliary loss, which effectively means during training, you
    0:46:21 want to make sure that all of these experts are used across the tasks that the model sees.
    0:46:26 Why there can be failures in mixture of experts is that when you’re doing this training, you
    0:46:30 the one objective is token prediction accuracy.
    0:46:34 And if you just let training go with a mixture of expert model on your own, it can be that
    0:46:39 the model learns to only use a subset of the experts.
    0:46:43 And in the MOE literature, there’s something called the auxiliary loss, which helps balance
    0:46:44 them.
    0:46:49 But if you think about the loss functions of deep learning, this even connects to the
    0:46:54 bitter lesson is that you want to have the minimum inductive bias in your model to let
    0:46:56 the model learn maximally.
    0:47:01 And this auxiliary loss, this balancing across experts could be seen as intention with the
    0:47:04 prediction accuracy of the tokens.
    0:47:08 So we don’t know the exact extent that the deep-seek MOE change, which is instead of
    0:47:12 doing an auxiliary loss, they have an extra parameter in their routing, which after the
    0:47:17 batches, they update this parameter to make sure that the next batches all have a similar
    0:47:19 use of experts.
    0:47:22 And this type of change can be big, it can be small, but they add up over time.
    0:47:27 And this is the sort of thing that just points to them innovating and I’m sure all the labs
    0:47:30 that are training big MOEs are looking at this sort of things, which is getting away
    0:47:33 from the auxiliary loss, some of them might already use it, but you just keep you keep
    0:47:34 the QLA in gains.
    0:47:40 And we’ll talk about the philosophy of training and how you organize these organizations.
    0:47:44 And a lot of it is just compounding small improvements over time in your data and your
    0:47:48 architecture and your post training and how they integrate with each other.
    0:47:49 And deep-seek does the same thing.
    0:47:53 And some of them are shared or a lot, we have to take them on face value that they share
    0:47:54 their most important details.
    0:47:56 I mean, the architecture and the weights are out there.
    0:47:59 So we’re seeing what they’re doing and it adds up.
    0:48:02 Going back to sort of the like efficiency and complexity point, right?
    0:48:05 It’s 32 versus four, right?
    0:48:08 For like mixed draw and other MOE models that have been publicly released.
    0:48:13 So this ratio is extremely high and sort of what Nathan was getting at there was, when
    0:48:19 you have such a different level of sparsity, you can’t just have every GPU have the entire
    0:48:20 model, right?
    0:48:21 The model’s too big.
    0:48:22 There’s too much complexity there.
    0:48:25 So you have to split up the model with different types of parallelism, right?
    0:48:29 And so you might have different experts on different GPU nodes.
    0:48:34 But now what happens when this set of data that you get, hey, all of it looks like this
    0:48:39 one way and all of it should route to one part of my model, right?
    0:48:45 So when all of it routes to one part of the model, then you can have this overloading
    0:48:49 of a certain set of the GPU resources or a certain set of the GPUs.
    0:48:54 And then the rest of the training network sits idle because all of the tokens are just
    0:48:55 routing to that.
    0:48:56 So this is the biggest complexity.
    0:49:02 One of the biggest complexities with running a very sparse mixture of experts model, i.e.,
    0:49:07 this 32 ratio versus this four ratio is that you end up with so many of the experts just
    0:49:08 sitting there idle.
    0:49:10 So how do I load balance between them?
    0:49:12 How do I schedule the communications between them?
    0:49:19 This is a lot of the extremely low level detailed work that they figured out in the public first
    0:49:24 and potentially second or third in the world and maybe even first in some cases.
    0:49:30 What lesson do you, in the direction of the bitter lesson, do you take from all of this?
    0:49:33 Is this going to be the direction where a lot of the gain is going to be, which is this
    0:49:36 kind of low level optimization?
    0:49:42 Or is this a short term thing where the biggest gains will be more on the algorithmic high
    0:49:45 level side of post training?
    0:49:51 Is this a short term leap because they’ve figured out a hack because constraints necessities
    0:49:53 the mother of invention?
    0:49:55 Or is there still a lot of gains?
    0:49:59 I think we should summarize what the bitter lesson actually is about.
    0:50:04 The bitter lesson, essentially, if you paraphrase it, is that the types of training that will
    0:50:11 win out in deep learning as we go are those methods that are which are scalable in learning
    0:50:14 and search is what it calls out.
    0:50:19 This scale word gets a lot of attention in this.
    0:50:27 The interpretation that I use is effectively to avoid adding the human priors to your learning
    0:50:28 process.
    0:50:32 If you read the original essay, this is what it talks about is how researchers will try
    0:50:38 to come up with clever solutions to their specific problem that might get them small
    0:50:40 gains in the short term.
    0:50:45 While simply enabling these deep learning systems to work efficiently and for these
    0:50:50 bigger problems in the long term might be more likely to scale and continue to drive
    0:50:53 success.
    0:50:57 Therefore we were talking about relatively small implementation changes to the mixture
    0:50:59 of experts model.
    0:51:04 Therefore it’s like, “Okay, we will need a few more years to know if one of these are
    0:51:08 actually really crucial to the bitter lesson, but the bitter lesson is really this long
    0:51:13 term arc of how simplicity can often win and there’s a lot of sayings in the industry
    0:51:14 like the models just want to learn.
    0:51:20 You have to give them the simple lost landscape where you put compute through the model and
    0:51:24 they will learn and getting barriers out of the way.”
    0:51:29 That’s where the power, something like Nickel comes in, where standardized code that can
    0:51:33 be used by a lot of people to create sort of simple innovations that can scale, which
    0:51:39 is why the code base for DeepSeq is probably a giant mess.
    0:51:43 I’m sure DeepSeq definitely has code bases that are extremely messy where they’re testing
    0:51:47 these new ideas, multi-headlay and attention.
    0:51:50 Probably could start in something like a Jupyter notebook or somebody tries something on a
    0:51:54 few GPUs and that is really messy.
    0:52:00 But the stuff that trains DeepSeq v3 and DeepSeq r1, those libraries, if you were to present
    0:52:04 them to us, I would guess are extremely high quality code.
    0:52:07 High quality readable code.
    0:52:13 I think there is one aspect to note though, is that there is the general ability for that
    0:52:16 to transfer across different types of runs.
    0:52:21 You may make really, really high quality code for one specific model architecture at one
    0:52:22 size.
    0:52:26 Then that is not transferable to, “Hey, when I make this architecture tweak, everything’s
    0:52:28 broken again.”
    0:52:34 That’s something that could be, with their specific low-level coding of scheduling SMs,
    0:52:38 is specific to this model architecture and size.
    0:52:43 Whereas NVIDIA’s Collective’s library is more like, “Hey, it’ll work for anything.
    0:52:44 You want to do an all-reduce?
    0:52:45 Great.
    0:52:46 I don’t care what your model architecture is.
    0:52:47 It’ll work.”
    0:52:51 You’re giving up a lot of performance when you do that in many cases, but it’s worthwhile
    0:52:57 for them to do the specific optimization for the specific run given the constraints that
    0:52:58 they have regarding compute.
    0:53:06 I wonder how stressful it is to these frontier models initiate training, to have the code
    0:53:17 push the button, that you’re now spending a large amount of money and time to train this.
    0:53:22 There must be a lot of innovation on the debugging stage of making sure there’s no issues that
    0:53:27 you’re monitoring and visualizing every aspect of the training, all that kind of stuff.
    0:53:31 When people are training, they have all these various dashboards, but the most simple one
    0:53:33 is your loss.
    0:53:38 It continues to go down, but in reality, especially with more complicated stuff like MOE, the
    0:53:42 biggest problem with it or FP8 training, which is another innovation going to a lower-precision
    0:53:47 number format, i.e., less accurate, is that you end up with loss spikes.
    0:53:49 No one knows why the loss spike happened.
    0:53:50 For a long time, you do.
    0:53:51 Some of them you do.
    0:53:52 That’s a bad data.
    0:53:56 I give the AI2’s example of what blew up our earlier models, is a subreddit called Microwave
    0:53:57 Gang.
    0:53:58 We love the shout-out.
    0:53:59 It’s a real thing.
    0:54:01 You can pull up Microwave Gang.
    0:54:05 Essentially, it’s a subreddit where everybody makes posts that are just the letter M, so
    0:54:06 it’s like, mmm.
    0:54:11 There’s extremely long sequences of the letter M, and then the comments are like beep beep
    0:54:12 because it’s in the micro-events.
    0:54:16 If you pass this into a model that’s trained to be a normal producing text, it’s extremely
    0:54:22 high loss because normally you see an M. You don’t predict M’s for a long time.
    0:54:24 This is something that causes the loss spikes for us.
    0:54:28 When you have much like, this is old, this is not recent, and when you have more mature
    0:54:31 data systems, that’s not the thing that causes the loss spike.
    0:54:36 What Dylan is saying is true, but it’s levels to this sort of idea.
    0:54:41 With regards to the stress, these people are like, you’ll go out to dinner with a friend
    0:54:46 that works at one of these labs, and they’ll just be looking at their phone every 10 minutes,
    0:54:49 and they’re not like, you know, it’s one thing if they’re texting, but they’re just like,
    0:54:50 like, is the loss–
    0:54:56 Yeah, it’s like tokens per second, loss not blown up, they’re just watching this.
    0:54:59 And the heart rate goes up if there’s a spike.
    0:55:01 And some level of spikes is normal, right?
    0:55:03 It’ll recover and be back.
    0:55:07 Sometimes a lot of the old strategy was like, you just stop the run, restart from the old
    0:55:10 version, and then like, change the data mix, and then it keeps going.
    0:55:12 There are even different types of spikes.
    0:55:17 So Dirk Greninveld has a theory that it’s like fast spikes and slow spikes, where there
    0:55:20 are– sometimes when you’re looking at the loss and there are other parameters, you can
    0:55:24 see it start to creep up and then blow up, and that’s really hard to recover from, so
    0:55:25 you have to go back much further.
    0:55:28 So you have the stressful period where it’s like flat or it might start going up, and
    0:55:29 you’re like, what do I do?
    0:55:33 Whereas there are also loss spikes that are– it looks good, and then there’s one spiky
    0:55:34 data point.
    0:55:36 And what you can do is you just skip those.
    0:55:39 You see that there’s a spike, you’re like, okay, I can ignore this data, don’t update
    0:55:41 the model, and do the next one, and it’ll recover quickly.
    0:55:47 But these un-trickier implementations, as you get more complex in your architecture,
    0:55:52 and you scale up to more GPUs, you have more potential for your loss blowing up.
    0:55:54 So there’s a distribution.
    0:55:56 The whole idea of grokking also comes in, right?
    0:56:00 It’s like, just because it slowed down from improving and loss doesn’t mean it’s not learning,
    0:56:04 because all of a sudden it could be like this, and it could just spike down in loss again,
    0:56:06 because it truly learned something, right?
    0:56:08 And it took some time for it to learn that.
    0:56:10 It’s not like a gradual process, right?
    0:56:13 And that’s what humans are like, that’s what models are like.
    0:56:15 It’s really a stressful task, as you mentioned.
    0:56:18 And the whole time, the dollar count is going up.
    0:56:20 Every company has failed runs.
    0:56:23 You need failed run to push the envelope on your infrastructure.
    0:56:28 So a lot of news cycles are made of X company had Y failed run.
    0:56:32 Every company that’s trying to push the frontier of AI has these.
    0:56:37 So yes, it’s noteworthy because it’s a lot of money, and it can be week to month setback,
    0:56:39 but it is part of the process.
    0:56:44 But how do you get, if you’re deep-seek, how do you get to a place where, holy shit, there’s
    0:56:46 a successful combination of hyperparameters?
    0:56:49 A lot of small failed runs.
    0:56:55 So rapid iteration through failed runs and successful ones.
    0:57:01 And then you build up some intuition like this, this mixture of export works, and then
    0:57:03 this implementation of MLA works.
    0:57:08 Key hyperparameters like learning rate and regularization and things like this.
    0:57:11 And you find the regime that works for your code base.
    0:57:13 I’ve talked to people at Frontier Labs.
    0:57:18 There’s a story that you can tell where training language models is kind of a path that you
    0:57:19 need to follow.
    0:57:24 So you need to unlock the ability to train a certain type of model or a certain scale,
    0:57:27 and then your code base and your internal know-how of which hyperparameters work for
    0:57:28 it is kind of known.
    0:57:33 And you look at the deep-seek papers and models, they’ve scaled up, they’ve added complexity,
    0:57:36 and it’s just continuing to build the capabilities that they have.
    0:57:39 Here’s the concept of a YOLO run.
    0:57:42 So YOLO, you only live once.
    0:57:47 And what it is, is there’s all this experimentation you do at the small scale.
    0:57:48 Research ablations.
    0:57:53 You have your Jupyter Notebook where you’re experimenting with MLA on three GPUs or whatever.
    0:57:58 And you’re doing all these different things like, “Hey, do I do four active experts,
    0:57:59 128 experts?
    0:58:01 Do I arrange the experts this way?”
    0:58:03 All these different model architecture things.
    0:58:05 You’re testing at a very small scale.
    0:58:09 Several researchers, few GPUs, tens of GPUs, hundreds of GPUs, whatever it is.
    0:58:13 And then, all of a sudden, you’re like, “Okay, guys, no more fucking around.
    0:58:14 No more screwing around.
    0:58:19 Everyone, take all the resources we have, let’s pick what we think will work, and just
    0:58:20 go for it.”
    0:58:21 YOLO.
    0:58:24 And this is where that sort of stress comes in, is like, “Well, I know it works here,
    0:58:28 but some things that work here don’t work here, and some things that work here don’t
    0:58:29 work down here.”
    0:58:30 Right?
    0:58:31 In terms of scale.
    0:58:38 It’s really truly a YOLO run, and there’s this discussion of certain researchers just
    0:58:40 have this methodical nature.
    0:58:44 They can find the whole search space and figure out all the ablations of different research
    0:58:45 and really see what is best.
    0:58:50 And there’s certain researchers who just have that innate gut instinct of, “This is the
    0:58:51 YOLO run.
    0:58:52 I’m looking at the data.
    0:58:53 This is it.”
    0:58:57 This is why you want to work in post-training, because the GPU cost for training is lower,
    0:59:01 so you can make a higher percentage of your training runs YOLO runs.
    0:59:02 Yeah.
    0:59:03 For now.
    0:59:04 Yeah.
    0:59:05 For now.
    0:59:06 For now.
    0:59:09 So, some of this is fundamentally luck, still.
    0:59:10 Luck is skill, right?
    0:59:11 In many cases.
    0:59:12 Yeah.
    0:59:13 I mean, it looks lucky, right?
    0:59:17 But the hill to climb, if you’re out in one of these labs and you have an evaluation
    0:59:21 and you’re not crushing, there’s a repeated playbook of how you improve things.
    0:59:24 There are localized improvements, which might be data improvements, and these add up into
    0:59:26 the whole model just being much better.
    0:59:30 And when you zoom in really close, it can be really obvious that this model is just really
    0:59:33 bad at this thing, and we can fix it, and you just add these up.
    0:59:38 So, some of it feels like luck, but on the ground, especially with these new reasoning
    0:59:43 models we’re talking to, it’s just so many ways that we can poke around, and normally,
    0:59:45 it’s that some of them give big improvements.
    0:59:47 The search space is near infinite, right?
    0:59:53 And yet, the amount of compute in time you have is very low, and you have to hit release
    0:59:54 schedules.
    1:00:00 You have to not get blown past by everyone, otherwise, what happened with DeepSeek, crushing
    1:00:03 Meta, and Mistral, and Coherent, and all these guys, they moved too slow, right?
    1:00:06 They maybe were too methodical, I don’t know, they didn’t hit the YOLO run, whatever the
    1:00:09 reason was, maybe they weren’t as skilled.
    1:00:13 You can call it luck if you want, but at the end of the day, it’s skill.
    1:00:16 So, 2025 is the year of the YOLO run.
    1:00:19 It seems like all the labs are going in.
    1:00:24 I think it’s even more impressive what OpenAI did in 2022.
    1:00:28 At the time, no one believed in a mixture of experts models at Google, who had all the
    1:00:34 researchers, OpenAI had such little compute, and they devoted all of their compute for many
    1:00:40 months, all of it, 100%, for many months to GPT4, with a brand new architecture with
    1:00:44 no belief that, hey, let me spend a couple hundred million dollars, which is all of the
    1:00:47 money I have on this model, right?
    1:00:49 That is truly YOLO, right?
    1:00:54 Now, people are like, all these training run failures that are in the media, right?
    1:00:58 It’s like, okay, great, but actually, a huge chunk of my GPs are doing inference.
    1:01:03 I still have a bunch doing research constantly, and yes, my biggest cluster is training on
    1:01:09 this YOLO run, but that YOLO run is much less risky than what OpenAI did in 2022, or maybe
    1:01:13 what DeepSeq did now, or sort of like, hey, we’re just going to throw everything at it.
    1:01:18 The big winners throughout human history are the ones who are willing to do YOLO at some
    1:01:19 point.
    1:01:25 Okay, what do we understand about the hardware it’s been trained on, DeepSeq?
    1:01:29 DeepSeq is very interesting, at least a second to take us to zoom out on who they are, first
    1:01:30 of all, right?
    1:01:35 HighFlyer is a hedge fund that has historically done quantitative trading in China as well
    1:01:40 as elsewhere, and they have always had a significant number of GPUs, right?
    1:01:45 In the past, a lot of these high-frequency trading, algorithmic quant traders used FPGAs,
    1:01:47 but it shifted to GPUs, definitely, and there’s both, right?
    1:01:52 But GPUs especially, and HighFlyer, which is the hedge fund that owns DeepSeq, and everyone
    1:01:56 who works for DeepSeq is part of HighFlyer, to some extent, right?
    1:01:59 It’s the same parent company, same owner, same CEO.
    1:02:05 They had all these resources and infrastructure for trading, and then they devoted a humongous
    1:02:10 portion of them to training models, both language models and otherwise, right?
    1:02:15 Because these techniques were heavily AI-influenced.
    1:02:21 More recently, people have realized, hey, trading with, even when you go back to Renaissance
    1:02:26 and all these quantitative firms, natural language processing is the key to trading
    1:02:30 really fast, understanding a press release and making the right trade, right?
    1:02:33 And so, DeepSeq has always been really good at this.
    1:02:39 And even as far back as 2021, they have press releases and papers saying, hey, we’re the
    1:02:44 first company in China with an A100 cluster this large, those 10,000 A100 GPUs, right?
    1:02:46 This is in 2021.
    1:02:48 Now this wasn’t all for training large language models.
    1:02:54 This was mostly for training models for their quantitative aspects, their quantitative trading,
    1:02:57 as well as a lot of that was natural language processing, to be clear, right?
    1:02:59 And so this is the sort of history, right?
    1:03:03 So verifiable fact is that in 2021, they built the largest Chinese cluster.
    1:03:06 At least, they claim it was the largest cluster in China, 10,000 GPUs.
    1:03:11 Before expert controls started, they’ve had a huge cluster before any conversation of
    1:03:12 expert controls.
    1:03:16 So then you step it forward to, what have they done over the last four years since then,
    1:03:17 right?
    1:03:21 Obviously, they’ve continued to operate the hedge fund, probably make tons of money.
    1:03:24 And the other thing is that they’ve leaned more and more and more into AI.
    1:03:27 The CEO, Leon Ching-Feng, Leon…
    1:03:30 You’re not putting me spot on this, we discussed this before.
    1:03:31 Leon Fang, right?
    1:03:32 The CEO, he owns…
    1:03:33 All of them.
    1:03:38 Leon Fang, he owns maybe a little bit more than half the company allegedly, right?
    1:03:44 He’s an extremely Elon Jensen kind of figure where he’s just involved in everything, right?
    1:03:48 And so over that time period, he’s gotten really in-depth into AI.
    1:03:50 He actually has a bit of a…
    1:03:54 If you see some of the statements, a bit of an EAC vibe almost, right?
    1:03:56 Total AGI vibes.
    1:03:57 We need to do this.
    1:04:01 We need to make a new ecosystem of open AI.
    1:04:05 We need China to lead on this sort of ecosystem because historically, the Western countries
    1:04:11 have led on software ecosystems and straight-up acknowledges, like, in order to do this,
    1:04:15 we need to do something different, DeepSeek is his way of doing this.
    1:04:17 Some of the translated interviews with him are fantastic.
    1:04:18 So he has done interviews?
    1:04:19 Yeah.
    1:04:21 You think he would do a Western interview or no?
    1:04:22 Or is there controls on the channel?
    1:04:26 There hasn’t been one yet, but I would try it.
    1:04:29 I just got a Chinese translator, so it was great.
    1:04:30 This is how I’ll push.
    1:04:38 So fascinating figure engineer pushing full-on into AI, leveraging the success from the high-frequency
    1:04:39 trading.
    1:04:40 Very direct quotes.
    1:04:44 We will not switch to closed source when asked about this stuff.
    1:04:50 Very long-term motivated in how the ecosystem of AI should work.
    1:04:57 And I think from a Chinese perspective, he wants a Chinese company to build this vision.
    1:05:01 And so this is sort of like the “visionary” behind the company.
    1:05:03 This hedge fund still exists, this quantitative firm.
    1:05:10 And so DeepSeek is the sort of, you know, slowly he got turned to this full view of like
    1:05:12 AI, everything about this, right?
    1:05:15 But at some point, it slowly maneuvered and he made DeepSeek.
    1:05:17 And DeepSeek has done multiple models since then.
    1:05:19 They’ve acquired more and more GPUs.
    1:05:22 They share infrastructure with the fund, right?
    1:05:28 And so, you know, there is no exact number of public GPU resources that they have, but
    1:05:32 besides this 10,000 GPUs that they bought in 2021, right?
    1:05:34 And they were fantastically profitable, right?
    1:05:40 And then this paper claims they did only 2,800 GPUs, which are a restricted GPU that was
    1:05:43 previously allowed in China, but no longer allowed and there’s a new version.
    1:05:47 But it’s basically NVIDIA’s H100 for China, right?
    1:05:51 And then there’s some restrictions on it, specifically around the communications sort
    1:05:52 of speed, the interconnect speed, right?
    1:05:57 Which is why they had to do this crazy SM, you know, scheduling stuff, right?
    1:05:58 So going back to that, right?
    1:06:03 It’s like, this is obviously not true in terms of their total GPU count.
    1:06:08 Obvious available GPUs, but for this training run, you think 2,000 is the correct number
    1:06:09 or no?
    1:06:13 So this is where it takes, you know, a significant amount of sort of like zoning in, right?
    1:06:16 Like, what do you call your training run, right?
    1:06:20 You count all of the research and ablations that you ran, right?
    1:06:23 Studying all this stuff, because yes, you can do a YOLO run, but at some level you have
    1:06:26 to do the test at the small scale, and then you have to do some test at medium scale before
    1:06:28 you go to a large scale.
    1:06:32 Accepted practice is that for any given model that is a notable advancement, you’re going
    1:06:37 to do two to four X compute of the full training run in experiments alone.
    1:06:42 So a lot of this compute that’s being scaled up is probably used in large part at this
    1:06:43 time for research.
    1:06:47 Yeah, and research will, you know, research begets the new ideas that let you get huge
    1:06:48 efficiency.
    1:06:49 Right.
    1:06:50 Research gets you 01.
    1:06:52 You break through, so you need to bet on it.
    1:06:56 So some of the pricing strategy they will discuss has the research baked into the price.
    1:07:01 So the numbers that deep seek specifically said publicly, right, are just the 10,000
    1:07:06 GPUs in 2021, and then 2,000 GPUs for only the pre-training for V3.
    1:07:08 They did not discuss cost on R1.
    1:07:13 They did not discuss cost on all the other RL, right, for the instruct model that they
    1:07:14 made, right?
    1:07:18 They only discussed the pre-training for the base model, and they did not discuss anything
    1:07:19 on research and ablations.
    1:07:23 And they do not talk about any of the resources that are shared in terms of, hey, the fund
    1:07:25 is using all these GPUs, right?
    1:07:30 And we know that they’re very profitable and that 10,000 GPUs in 2021.
    1:07:36 So some of the research that we’ve found is that we actually believe they have closer
    1:07:38 to 50,000 GPUs.
    1:07:39 We as Semi-Answers.
    1:07:44 So we should say that you’re sort of one of the world experts in figuring out what everybody’s
    1:07:49 doing in terms of the semiconductor in terms of cluster buildouts in terms of, like, who
    1:07:52 is doing what in terms of training runs.
    1:07:53 So yeah.
    1:07:54 So that’s the we.
    1:07:55 Okay, go ahead.
    1:07:56 Yeah, sorry.
    1:07:58 We believe they actually have something closer to 50,000 GPUs, right?
    1:08:00 Now, this is split across many tasks, right?
    1:08:03 Again, the fund, research and ablations.
    1:08:05 For Ballpark, how much would OpenAI or Anthropocad?
    1:08:10 I think the clearest example we have, because Meta is also open, they talk about, like, order
    1:08:15 of 60K to 100K, H100 equivalent GPUs in their training clusters.
    1:08:16 Right.
    1:08:20 Like Lama 3, they trained on 16,000 H100s, right?
    1:08:23 But the company of Meta last year publicly disclosed they bought, like, 400 something
    1:08:24 thousand GPUs.
    1:08:25 Yeah.
    1:08:26 Right?
    1:08:27 So of course, tiny percentage on the training.
    1:08:31 Again, like most of it is, like, serving me the best Instagram reels, right?
    1:08:32 Or whatever, right?
    1:08:37 I mean, we could get into a cost of, like, what is the cost of ownership for a 2,000 GPU cluster,
    1:08:38 10,000?
    1:08:40 There’s just different sizes of companies I can afford.
    1:08:44 These things in deep seek is reasonably big.
    1:08:49 Their compute allocation compared is one of the top few in the world.
    1:08:52 It’s not OpenAI, Anthropoc, et cetera, but they have a lot of compute.
    1:08:56 Can you, in general, actually just zoom out and also talk about the Hopper architecture,
    1:09:02 the NVIDIA Hopper GPU architecture and the difference between H100 and H800, like you
    1:09:03 mentioned, the interconnects?
    1:09:04 Yeah.
    1:09:08 So there’s, you know, Ampere was the A100 and then H100 Hopper, right?
    1:09:12 People use them synonymously in the US because really there’s just H100 and now there’s H200,
    1:09:13 right?
    1:09:15 Mostly.
    1:09:19 In China, they’ve had, there have been different salvos of export restrictions.
    1:09:22 So initially the US government limited on a two-factor scale, right?
    1:09:25 Which is chip interconnect versus flops, right?
    1:09:29 So any chip that had interconnects above a certain level and flops above a certain floating
    1:09:33 point operations above a certain level was restricted.
    1:09:37 Later the government realized that this was a flaw in the restriction and they cut it
    1:09:40 down to just floating point operations.
    1:09:45 And so, H800 had high flops, low communication?
    1:09:46 Exactly.
    1:09:50 So the H800 was the same performance as H100 on flops, right?
    1:09:53 But it didn’t have, it just had the interconnect bandwidth cut.
    1:09:58 DeepSeq knew how to utilize this, you know, hey, even though we’re cut back on the interconnect,
    1:10:04 we can do all this fancy stuff to figure out how to use the GPU fully anyways, right?
    1:10:10 And so that was back in October 2022, but later in 2023, end of 2023 implemented in
    1:10:14 2024, the US government banned the H800, right?
    1:10:18 And so by the way, this H800 cluster, these 2000 GPUs was not even purchased in 2024,
    1:10:19 right?
    1:10:22 It was purchased in late 2023.
    1:10:23 And they’re just getting the model out now, right?
    1:10:25 Because it takes a lot of research, et cetera.
    1:10:29 H800 was banned and now there’s a new chip called the H20.
    1:10:34 The H20 is cut back on only flops, but the interconnect bandwidth is the same.
    1:10:38 And in fact, in some ways, it’s better than the H100 because it has better memory bandwidth
    1:10:39 and memory capacity.
    1:10:43 So there are, you know, NVIDIA is working within the constraints of what the government
    1:10:46 sets and then builds the best possible GPU for China.
    1:10:50 Can we take this actual tangent and we’ll return back to the hardware?
    1:10:55 Is the philosophy, the motivation, the case for export controls?
    1:10:56 What is it?
    1:11:00 Ari Amadej just published a blog post about export controls.
    1:11:06 The case he makes is that if AI becomes super powerful and he says by 2026 we’ll have AGI
    1:11:11 or super powerful AI and that’s going to give a significant, whoever builds that will have
    1:11:13 a significant military advantage.
    1:11:22 And so because the United States is a democracy and as he says, China is authoritarian or has
    1:11:29 authoritarian elements, you want a unipolar world where the super powerful military because
    1:11:31 of the AI is one that’s a democracy.
    1:11:38 It’s a much more complicated world geopolitically when you have two superpowers with super powerful
    1:11:41 AI and one is authoritarian.
    1:11:42 So that’s the case he makes.
    1:11:47 And so we want to, the United States wants to use export controls to slow down, to make
    1:11:55 sure that China can do these gigantic training runs that will be presumably required to
    1:11:57 build AGI.
    1:11:58 This is very abstract.
    1:12:03 I think this can be the goal of how some people describe export controls is this super powerful
    1:12:05 AI.
    1:12:08 And you touched on the training run idea.
    1:12:13 There’s not many worlds where China cannot train AI models.
    1:12:18 Export controls are kneecapping the amount of compute or the density of compute that
    1:12:20 China can have.
    1:12:25 And if you think about the AI ecosystem right now as all of these AI companies, revenue
    1:12:30 numbers are up and to the right, the AI usage is just continuing to grow, more GPUs are
    1:12:31 going to inference.
    1:12:37 A large part of export controls, if they work is just that the amount of AI that can be
    1:12:40 run in China is going to be much lower.
    1:12:43 So on the training side, DeepSeq V3 is a great example, which you have a very focused team
    1:12:46 that can still get to the frontier of AI.
    1:12:51 This 2,000 GPUs is not that hard to get, all considering in the world.
    1:12:53 They’re still going to have those GPUs.
    1:12:54 They’re still going to be able to train models.
    1:12:58 But if there’s going to be a huge market for AI, if you have strong export controls and
    1:13:02 you want to have 100,000 GPUs just serving the equivalent of chat GPT clusters with good
    1:13:08 export controls, it also just makes it so that AI can be used much less.
    1:13:14 And I think that is a much easier goal to achieve than trying to debate on what AGI
    1:13:15 is.
    1:13:19 And if you have these extremely intelligent autonomous AIs and data centers, those are
    1:13:23 the things that could be running in these GPU clusters in the United States, but not
    1:13:24 in China.
    1:13:27 To some extent, training a model does effectively nothing, right?
    1:13:28 Yeah.
    1:13:29 I have a model.
    1:13:35 The thing that Dario is speaking to is the implementation of that model once trained to
    1:13:41 then create huge economic growth, huge increases in military capabilities, huge capability increases
    1:13:46 in productivity of people, betterment of lives, whatever you want to direct super powerful
    1:13:48 AI towards, you can.
    1:13:51 But that requires a significant amounts of compute, right?
    1:13:56 And so the US government has effectively said, and forever, right, like training will always
    1:13:59 be a portion of the total compute.
    1:14:03 We mentioned Meta’s 400,000 GPUs, only 16,000 made Lama, right?
    1:14:08 So the percentage that Meta is dedicating to inference, now this might be for recommendation
    1:14:12 systems that are trying to hack our mind into spending more time and watching more ads.
    1:14:16 Or if it’s for a super powerful AI that’s doing productive things, doesn’t matter about
    1:14:22 the exact use that our economic system decides, it’s that that can be delivered in whatever
    1:14:23 way we want.
    1:14:28 Whereas with China, you’re expert restrictions, great, you’re never going to be able to cut
    1:14:29 everything off, right?
    1:14:33 And I think that’s quite well understood by the US government, is that you can’t cut
    1:14:34 everything off.
    1:14:36 And they’ll make their own chips.
    1:14:37 And they’re trying to make their own chips.
    1:14:38 They’ll be worse than ours.
    1:14:41 But the whole point is to just keep a gap, right?
    1:14:46 And therefore, at some point as the AI, in a world where 2%, 3% economic growth, this
    1:14:51 is really dumb, by the way, to cut off high tech and not make money off of it.
    1:14:55 But in a world where super powerful AI comes about and then starts creating significant
    1:14:59 changes in society, which is what all the AI leaders and big tech companies believe,
    1:15:02 I think super powerful AI is going to change society massively.
    1:15:07 And therefore, this compounding effect of the difference in compute is really important.
    1:15:12 There’s some sci-fi out there where AI is measured in the power of, in like how much
    1:15:14 power is delivered to compute, right?
    1:15:18 Or how much is being, that’s sort of a way of thinking about what’s the economic output
    1:15:20 is just how much power are you directing towards that AI?
    1:15:24 Should we talk about reasoning models with this as a way that this might be actionable
    1:15:26 as something that people can actually see?
    1:15:31 So the reasoning models that are coming out with R1 and O1, they’re designed to use
    1:15:32 more compute.
    1:15:37 There’s a lot of buzzy words in the AI community about this, test time compute, inference time
    1:15:38 compute, whatever.
    1:15:40 But Dylan has good research on this.
    1:15:43 You can get to the specific numbers on the ratio of when you train a model, you can look
    1:15:47 at things about the amount of compute used at training and amount of compute used at inference.
    1:15:52 These reasoning models are making inference way more important to doing complex tasks.
    1:15:56 In the fall, in December, their open AI announced this O3 model.
    1:16:00 There’s another thing in AI when things move fast, we get both announcements and releases.
    1:16:03 Analytics are essentially blog posts where you pat yourself on the back and you say you
    1:16:07 did things and releases are run the models out there, the papers out there, et cetera.
    1:16:13 So open AI has announced O3, and we can check if O3 mini is out as of recording potentially.
    1:16:17 But that doesn’t really change the point, which is that the breakthrough result was something
    1:16:22 called ARC AGI task, which is the abstract reasoning corpus, a task for artificial general
    1:16:23 intelligence.
    1:16:29 Francois Chalet is the guy who’s been, it’s a multi-year old paper.
    1:16:30 It’s a brilliant benchmark.
    1:16:36 And the number for open AI O3 to solve this was that it used some sort of number of samples
    1:16:37 in the API.
    1:16:40 The API has like thinking effort and number of samples.
    1:16:47 They used 1,000 samples to solve this task, and it comes out to be like five to $20 per
    1:16:51 question, which you’re putting in effectively a math puzzle, and then it takes orders of
    1:16:53 dollars to answer one question.
    1:16:55 And this is a lot of compute.
    1:16:59 If it’s going to take off in the US, open AI needs a ton of GPUs on inference to capture
    1:17:00 this.
    1:17:04 Open AI, chat GPT Pro subscription, which is $200 a month, which Sam said they’re losing
    1:17:08 money on, which means that people are burning a lot of GPUs on inference.
    1:17:09 And I’ve signed up with it.
    1:17:10 I’ve played with it.
    1:17:15 I don’t think I’m a power user, but I use it, and it’s like, that is the thing that
    1:17:20 a Chinese company with mediumly strong expert controls, there will always be loopholes, might
    1:17:21 not be able to do it all.
    1:17:26 And if that, the main result for O3 is also a spectacular coding performance.
    1:17:32 And if that feeds back into AI companies being able to experiment better.
    1:17:38 So presumably the ideas for an AGI, a much larger fraction of the compute will be used
    1:17:42 for this test-hung compute, for the reasoning, for the AGI goes into a room and thinks about
    1:17:50 how to take over the world and come back in 2.7 hours, and it’s going to take a lot of
    1:17:51 computing.
    1:17:56 This is what people, CEO or leaders of Open AI and Anthropic talk about is like autonomous
    1:18:00 AI models, which is you give them a task and they work on it in the background.
    1:18:04 My personal definition of AGI is much simpler.
    1:18:09 I think language models are a form of AGI and all of the super powerful stuff is a next
    1:18:13 step that’s great if we get these tools, but a language model has so much value and so
    1:18:14 many domains.
    1:18:16 It is a general intelligence to me.
    1:18:20 But this next step of agentic things where they’re independent and they can do tasks
    1:18:26 that aren’t in the training data is what the few year outlook that these AI companies are
    1:18:27 driving for.
    1:18:32 I think the terminology here that Dario uses is super powerful AI, so I agree with you
    1:18:33 on the AGI.
    1:18:36 I think we already have something like that’s exceptionally impressive.
    1:18:42 The Alan Turing would for sure say is AGI, but he’s referring more to something once
    1:18:48 in possession of, then you would have a significant military and geopolitical advantage over other
    1:18:49 nations.
    1:18:52 So it’s not just like you can ask it how to cook an omelet.
    1:18:55 And he has a much more positive view and as I say, machines of love and grace.
    1:19:00 I’ve read into this, that we don’t have enough background in physical sciences to gauge exactly
    1:19:07 how competent I am and if AI can revolutionize biology, I’m safe saying that AI is going
    1:19:10 to accelerate the progress of any computational science.
    1:19:14 So we’re doing a depth-first search here on topics, taking tangent of a tangent.
    1:19:19 So let’s continue on that depth-first search.
    1:19:25 You said that you’re both feeling the AGI, so what’s your timeline?
    1:19:29 Dario is 2026 for the super powerful AI.
    1:19:37 That’s basically agentic to a degree where it’s a real security threat, that level of
    1:19:38 AGI.
    1:19:39 What’s your timeline?
    1:19:43 I don’t like to attribute specific abilities because predicting specific abilities and when
    1:19:44 is very hard.
    1:19:49 I think mostly if you’re going to say that I’m feeling the AGI is that I expect continued
    1:19:51 rapid surprising progress over the next few years.
    1:19:57 So something like R1 is less surprising to me from DeepSeq because I expect there to
    1:20:00 be new paradigms where substantial progress can be made.
    1:20:04 DeepSeq R1 is so unsettling because we’re kind of on this path with chatGPT.
    1:20:05 It’s getting better.
    1:20:06 It’s getting better.
    1:20:07 It’s getting better.
    1:20:10 And then we have a new direction for changing the models and we took one step like this
    1:20:12 and we took a step up.
    1:20:15 So it looks like a really fast slope and then we’re going to just take more steps.
    1:20:19 Like it’s just really unsettling when you have these big steps and I expect that to
    1:20:20 keep happening.
    1:20:25 I see I’ve tried opening I operator, I’ve tried quad computer use.
    1:20:26 They’re not there yet.
    1:20:31 I understand the idea, but it’s just so hard to predict what is the breakthrough that will
    1:20:35 make something like that work and I think it’s more likely that we have breakthroughs that
    1:20:37 work and things that we don’t know what they’re going to do.
    1:20:43 So like everyone wants agents, Dario has very eloquent way of describing this and I just
    1:20:47 think that there’s going to be more than that so I could just expect these things to
    1:20:48 come.
    1:20:54 I’m going to have to try to pin you down to a date on the AGI timeline.
    1:20:56 The nuclear weapon moment.
    1:21:04 So moment where on the geopolitical stage, there’s a real like, because we’re talking
    1:21:09 about export controls, when do you think, just even a throw out a date, when do you think
    1:21:10 that would be?
    1:21:14 For me, it’s probably after 2030, so I’m not as …
    1:21:15 That’s what I would say.
    1:21:16 So define that, right?
    1:21:18 Because to me, it kind of almost has already happened, right?
    1:21:23 You look at elections in India and Pakistan, people get AI voice calls and think they’re
    1:21:25 talking to the politician, right?
    1:21:28 The AI diffusion rules, which was enacted in the last couple of weeks of the Biden admin
    1:21:34 and looks like the Trump admin will keep and potentially even strengthen, limit cloud computing
    1:21:38 and GPU sales to countries that are not even related to China.
    1:21:43 Portugal and all these normal countries are on the, you need approval from the US list.
    1:21:48 Yeah, Portugal and all these countries that are allies, right?
    1:21:49 Singapore, right?
    1:21:53 They freaking have F-35s and we don’t let them by GPUs.
    1:21:56 This to me is already to the scale of like, you know …
    1:22:01 Well, that just means that the US military is really nervous about this new technology.
    1:22:06 That doesn’t mean the technology is already there, so they might be just very cautious
    1:22:11 about this thing that they don’t quite understand, but that’s a really good point.
    1:22:18 The robot calls, swarms of semi-intelligent bots could be a weapon, could be doing a lot
    1:22:19 of social engineering.
    1:22:23 I mean, there’s tons of talk about, you know, from the 2016 elections, like Cambridge Analytica
    1:22:25 and all this stuff, Russian influence.
    1:22:29 I mean, every country in the world is pushing stuff onto the internet and has narratives
    1:22:30 they want, right?
    1:22:35 Like that’s every, like technically competent, whether it’s Russia, China, US, Israel, et
    1:22:36 cetera, right?
    1:22:41 They’re pushing viewpoints onto the internet and mass and language models crash the cost
    1:22:43 of like very intelligent sounding.
    1:22:47 There’s some research that shows that the distribution is actually a limiting factor.
    1:22:55 So language models haven’t yet made misinformation particularly, like, changed the equation there.
    1:22:56 The internet is still ongoing.
    1:23:00 I think there’s a blog, AI Snake Oil and some of my friends at Princeton that write on this
    1:23:01 stuff.
    1:23:02 So there is research.
    1:23:04 It’s like, it’s a default that everyone assumes and I would have thought the same thing is
    1:23:07 that misinformation doesn’t get far worse with language models.
    1:23:12 I think in terms of internet posts and things that people have been measuring, it hasn’t
    1:23:16 been a exponential increase or something extremely measurable and things you’re talking about
    1:23:18 with like voice calls and stuff like that.
    1:23:22 It could be in modalities that are harder to measure.
    1:23:26 So it’s something that it’s too soon to tell in terms of, I think that’s like political
    1:23:34 instability via the web is very, it’s monitored by a lot of researchers to see what’s happening.
    1:23:37 I think that you’re asking about like the AGI thing.
    1:23:42 If you ever make me give a year, I would be like, okay, I have AI CEOs saying this, they’ve
    1:23:44 been saying two years for a while.
    1:23:51 I think that people like Dario, Anthropic, the CEO had thought about this so deeply.
    1:23:56 I need to take their word seriously, but also understand that they have different incentives.
    1:24:00 So I would be like add a few years to that, which is how you get something similar to
    1:24:02 2030 or a little after 2030.
    1:24:07 I think to some extent we have capabilities that hit a certain point where any one person
    1:24:13 could say, okay, if I can leverage those capabilities for X amount of time, this is AGI, call it
    1:24:19 2728, but then the cost of actually operating that capability, this is going to be my point.
    1:24:24 So extreme that no one can actually deploy it at scale and mass to actually completely
    1:24:27 revolutionize the economy on a snap of the finger.
    1:24:30 So I don’t think it will be like a snap of the finger moment.
    1:24:31 It’s a physical constraint.
    1:24:35 However, it’ll be a, oh, the capabilities are here, but I can’t deploy it everywhere.
    1:24:43 And so one simple example going back to 2023 was when Bing with GPT-4 came out and everyone
    1:24:45 was freaking out about search, right?
    1:24:46 Perplexity came out.
    1:24:50 If you did the cost on implementing GPT-3 into every Google search, it was like, oh, okay,
    1:24:53 this is just physically impossible to implement.
    1:24:59 And as we step forward to going back to the test time compute thing, a query for, you
    1:25:02 ask chat GPT a question, it costs cents, right?
    1:25:05 For their most capable model of chat, right?
    1:25:11 To get a query back, to solve an Arc AGI problem though, cost five to 20 bucks, right?
    1:25:14 And this is, this is an, it’s only going up from there.
    1:25:20 This is a thousand, 10,000 X factor difference in cost to respond to a query versus do a task.
    1:25:26 And the task of Arc AGI is not like it’s like, it’s, it’s simple to some extent, you know,
    1:25:29 but it’s also like, what are the tasks that we want?
    1:25:32 Okay, AGI, quote unquote, what we have today can do Arc AGI.
    1:25:35 Three years from now, it can do much more complicated problems, but the cost is going
    1:25:39 to be measured in thousands and thousands and hundreds of thousands of dollars of GPU
    1:25:44 time and there just won’t be enough power to use infrastructure to operate this and therefore
    1:25:47 shift everything in the world on the snap of the finger.
    1:25:53 But at that moment, who gets to control and point the AGI at a task?
    1:25:57 And so this was in Dario’s post that he’s like, hey, China can effectively and more quickly
    1:26:01 than us point their AGI at military tasks, right?
    1:26:06 And they have been in many ways, faster at adopting certain new technologies into, into
    1:26:07 their military, right?
    1:26:09 Especially with regards to drones, right?
    1:26:14 The US maybe has a longstanding, you know, large air sort of, you know, fighter jet type
    1:26:20 of thing bombers, but when it comes to asymmetric arms such as drones, they’ve, they’ve completely
    1:26:22 leapfrogged the US and the West.
    1:26:27 And the, the fear that Dario is sort of pointing out there, I think, is that, yeah, great.
    1:26:30 We’ll have AGI in the commercial sector.
    1:26:33 The US military won’t be able to implement it super fast.
    1:26:36 Chinese military could and they could direct all their resources to implementing it in
    1:26:41 the military and therefore solving, you know, military logistics or solving some, some other
    1:26:45 aspect of like disinformation for targeted certain set of people so they can flip a country’s
    1:26:50 politics or something like that that is actually like catastrophic versus, you know, the US
    1:26:54 just wants to, you know, because it’ll be more capitalistically allocated just towards
    1:26:58 whatever is the highest return on income, which might be like building, you know, factories
    1:26:59 better or whatever.
    1:27:04 So everything I’ve seen, people’s intuition seems to fail on robotics.
    1:27:06 So you have this kind of general optimism.
    1:27:08 I’ve seen this on self-driving cars.
    1:27:12 People think it’s much easier problem than it is similar with drones.
    1:27:18 Here I understand it a little bit less, but I’ve just seen the reality of the war in Ukraine
    1:27:21 and the usage of drones at both sides.
    1:27:28 And it seems that humans still far outperform any, any fully autonomous systems.
    1:27:35 AI is an assistant, but humans drive FPV drones where the humans controlling most of it just
    1:27:37 far, far, far outperforms AI systems.
    1:27:43 So I think it’s not obvious to me that we’re going to have swarms of autonomous robots
    1:27:46 anytime soon in the military context.
    1:27:53 Maybe the fastest I can imagine is 2030, which is why I said 2030 for the superpower for AI.
    1:27:59 Whenever you have large scale swarms of robots doing military actions, that’s when the world
    1:28:02 just starts to look different to me.
    1:28:04 So that’s the thing I’m really worried about.
    1:28:10 But there could be cyber war, cyber war type of technologies that from social engineering
    1:28:16 to actually just swarms of robots that find attack vectors in our code bases and shut
    1:28:19 down power grids, that kind of stuff.
    1:28:23 And it could be one of those things like on any given weekend or something.
    1:28:24 Power goes out.
    1:28:26 Nobody knows why.
    1:28:27 And the world changes forever.
    1:28:32 Just power going out for two days in all of the United States.
    1:28:35 That will lead to murder, to chaos.
    1:28:39 But going back to expert controls.
    1:28:49 Do you see that as a useful way to control the balance of power geopolitically in the
    1:28:50 context of AI?
    1:28:55 And I think going back to my viewpoint is if you believe we’re in this sort of a stage
    1:29:00 of economic growth and change that we’ve been in for the last 20 years, the expert controls
    1:29:05 are absolutely guaranteeing that China will win long term.
    1:29:10 If you do not believe AI is going to make significant changes to society in the next
    1:29:15 10 years or five years, five year timelines are sort of what the more executives and such
    1:29:18 of AI companies and even big tech companies believe.
    1:29:20 But even 10 year timelines, it’s reasonable.
    1:29:29 But once you get to, hey, these timelines are below that time period, then the only
    1:29:35 way to sort of create a sizable advantage or disadvantage for America versus China is
    1:29:42 if you constrain compute because talent is not really something that’s constraining.
    1:29:46 China arguably has more talent, more STEM graduates, more programmers.
    1:29:48 The US can draw upon the world’s people, which it does.
    1:29:51 There’s tons of foreigners in the AI industry.
    1:29:55 So many of these AI teams are all people without a US passport.
    1:30:01 Yeah, I mean, many of them are Chinese people who are moving to America, and that’s great.
    1:30:03 That’s exactly what we want.
    1:30:08 But that talent is one aspect, but I don’t think that’s one that is a measurable advantage
    1:30:09 for the US or not.
    1:30:12 It truly is just whether or not compute.
    1:30:18 Even on the compute side, when we look at chips versus data centers, China has the unprecedented
    1:30:24 ability to build ridiculous sums of power, clockwork.
    1:30:26 They’re always building more and more power.
    1:30:31 They’ve got steel mills that individually are the size of the entire US industry.
    1:30:36 And they’ve got aluminum mills that consume gigawatts and gigawatts of power.
    1:30:40 And when we talk about what’s the biggest data center, opening, I made this huge thing
    1:30:43 about Stargate, their announcement there.
    1:30:48 That’s like once it’s fully built out in a few years, it’ll be two gigawatts of power.
    1:30:53 And this is still smaller than the largest industrial facilities in China.
    1:30:56 China, if they wanted to build the largest data center in the world, if they had access
    1:30:58 to the chips, could.
    1:31:02 So it’s not just a question of when, not if, right?
    1:31:08 So their industrial capacity far exceeds the United States to manufacture stuff.
    1:31:13 So long term, they’re going to be manufacturing chips there.
    1:31:14 Chips are a little bit more specialized.
    1:31:16 I’m specifically referring to the data centers, right?
    1:31:20 Chips, fabs take huge amounts of power, don’t get me wrong.
    1:31:22 That’s not necessarily the gating factor there.
    1:31:28 The gating factor on how fast people can build the largest clusters today in the US is power.
    1:31:35 It could be power generation, power transmission, substations and all these sorts of transformers
    1:31:40 and all these things, building the data center, these are all constraints on the US industry’s
    1:31:45 ability to build larger and larger training systems as well as deploying more and more
    1:31:46 inference compute.
    1:31:51 I think we need to make the point clear on why the time is now for people that don’t think
    1:31:54 about this because essentially with export controls, you’re making it so China cannot
    1:31:57 make or get cutting edge chips.
    1:32:02 And the idea is that if you time this wrong, China is pouring a ton of money into their
    1:32:03 chip production.
    1:32:07 And if you time it wrong, they are going to have more capacity for production, more capacity
    1:32:11 for energy and figure out how to make the chips and have more capacity than the rest
    1:32:14 of the world to make the chips because everybody can buy, they’re going to sell their Chinese
    1:32:15 chips to everybody.
    1:32:17 They might subsidize them.
    1:32:21 And therefore, if AI takes a long time to become differentiated, we’ve decapped the
    1:32:24 financial performance of American companies.
    1:32:28 NVIDIA can sell less, TSMC cannot sell to China.
    1:32:34 So therefore, we have less demand to like keep driving the production cycle.
    1:32:37 So that’s the assumption behind the timing being important.
    1:32:40 Less than 10 years or five years to above, right?
    1:32:45 China will win because of these restrictions long-term unless AI does something in the
    1:32:52 short-term, which I believe AI will do, make massive changes to society in the medium short-term.
    1:32:55 And so that’s the big unlocker there.
    1:33:03 And even today, if Xi Jinping decided to get “scale-pilled,” I decide that scaling
    1:33:09 laws are what matters just like the US executives like Sacha Nadella and Mark Zuckerberg and
    1:33:14 Sundar and all these US executives of the biggest, most powerful tech companies have
    1:33:18 decided they’re “scale-pilled” and they’re building multi-gigawatt data centers, right?
    1:33:22 Whether it’s in Texas or Louisiana or Wisconsin, wherever it is, they’re building these massive
    1:33:28 things that cost as much as their entire budget for spending on data centers globally in one
    1:33:29 spot, right?
    1:33:32 This is what they’ve committed to for next year, year after, et cetera.
    1:33:37 And so they’re so convinced that this is the way, that this is what they’re doing.
    1:33:42 But if China decided to, they could do it faster than us, but this is where the restrictions
    1:33:43 come in.
    1:33:48 It’s not clear that China, as a whole, has decided from the highest levels that this
    1:33:49 is a priority.
    1:33:50 The US sort of has, right?
    1:33:55 You see Trump talking about DeepSeek and Stargate within the same week, right?
    1:33:59 So he’s in the Biden and Min as well, had a lot of discussions about AI and such.
    1:34:01 It’s clear that they think about it.
    1:34:06 Only just last week did DeepSeek meet the second-in-command of China, right?
    1:34:09 Like they have not even met the top, and they haven’t met Xi.
    1:34:17 Xi hasn’t sat down, and they only just released a subsidy of a trillion RMB, roughly $160 billion,
    1:34:23 which is closer to the spending of Microsoft and Meta and Google combined for this year.
    1:34:28 So it’s like, they’re realizing it just now, but that’s where these export restrictions
    1:34:33 come in and say, “Hey, you can’t ship the most powerful US chips to China.
    1:34:35 You can ship a cut-down version.
    1:34:39 You can’t ship the most powerful chips to all these countries who we know we’re just
    1:34:41 going to rent it to China.
    1:34:42 You have to limit the numbers, right?”
    1:34:43 And the tools.
    1:34:48 And same with manufacturing of equipment, tools, all these different aspects.
    1:34:52 But it all stems from AI, and then what downstream can slow them down in AI?
    1:34:56 And so the entire semiconductor restrictions, you read them, they are very clear.
    1:35:01 It’s about AI and military civil fusion of technology, right?
    1:35:02 It’s very clear.
    1:35:04 And then from there, it goes, “Oh, well, we’re banning them from buying like lithography
    1:35:10 tools and etch tools and deposition tools, and oh, this random subsystem from a random
    1:35:12 company that’s like tiny, right?”
    1:35:13 Like why are we banning this?
    1:35:17 Because all of it, the US government has decided is critical to AI systems.
    1:35:22 I think the fulcrum point is like the transition from seven nanometer to five nanometer chips,
    1:35:27 where I think it was Huawei that had the seven nanometer chip a few years ago, which caused
    1:35:31 another political brouhaha, almost like this moment.
    1:35:35 And then it’s like ASML, deep UV, what is that?
    1:35:37 Extreme ultraviolet lithography.
    1:35:42 To set context on the chips, what Nathan’s referring to is in 2020, Huawei released their
    1:35:48 Ascend 910 chip, which was an AI chip, first one on seven nanometer before Google did,
    1:35:49 before NVIDIA did.
    1:35:54 And they submitted it to the MLPRF benchmark, which is sort of an industry standard for machine
    1:35:56 learning performance benchmark.
    1:35:57 And it did quite well.
    1:36:00 And it was the best chip at the submission, right?
    1:36:02 This was a huge deal.
    1:36:09 The Trump admin, of course, banned the Huawei from getting seven nanometer chips from TSMC.
    1:36:13 And so then they had to switch to using internal domestically produced chips, which was a multi-year
    1:36:14 setback.
    1:36:16 Many companies have done seven nanometer chips.
    1:36:21 And the question is, we don’t know how much Huawei was subsidizing production of that
    1:36:22 chip.
    1:36:25 Intel has made seven nanometer chips that are not profitable and things like this.
    1:36:30 So this is how all feeds back into the economic engine of export controls.
    1:36:36 Well, so you’re saying that for now Xi Jinping has not felt the AGI, but it feels like the
    1:36:42 deep-seek moment might, like, there might be meetings going on now where he’s going
    1:36:46 to start wearing the same t-shirt and things are going to escalate.
    1:36:49 I mean, like this, he may have woken up last week, right?
    1:36:54 Leon Fang met the vice chair, vice, the second command guy, and they had a meeting.
    1:36:59 And then the next day, they announced the AI subsidies, which are trillion RMB, right?
    1:37:04 So it’s possible that this deep-seek moment is truly the beginning of a cold war.
    1:37:06 That’s what a lot of people are worried about.
    1:37:10 People in AI have been worried that this is going towards a cold war or already is.
    1:37:15 But it’s not deep-seek’s fault, but there’s something, a bunch of factors came together
    1:37:19 where it was like this explosion, I mean, it all has to do with NVIDIA stock going down
    1:37:27 up. It’s just some mass hysteria that happened that eventually led to Xi Jinping having meetings
    1:37:29 and waking up to this idea.
    1:37:35 And the US government realized in October 7th, 2022, before ChatGPT released, that restriction
    1:37:38 on October 7th, which dropped and shocked everyone.
    1:37:40 And it was very clearly aimed at AI.
    1:37:42 Everyone was like, “What the heck are you doing?”
    1:37:44 Stable diffusion was out then, but not ChatGPT.
    1:37:45 Yeah, but not ChatGPT.
    1:37:50 I’m starting to be rumblings of what Gen. AI can do to society.
    1:37:54 But it was very clear, I think, to at least National Security Council and those sort of
    1:37:59 folks that this was where the world is headed, this cold war that’s happening.
    1:38:10 So is there any concerns that the export controls push China to take military action in Taiwan?
    1:38:11 This is the big risk, right?
    1:38:16 The further you push China away from having access to cutting-edge American and global
    1:38:20 technologies, the more likely they are to say, “Well, because I can’t access it, I might
    1:38:21 as well…”
    1:38:23 No one should access it, right?
    1:38:26 And there’s a few interesting aspects of that, right?
    1:38:30 China has a urban-rural divide, like no other.
    1:38:36 They have a male-female-berf ratio, like no other, to the point where, if you look in
    1:38:38 most of China, it’s like the ratio is not that bad, but when you look at single dudes
    1:38:42 in rural China, it’s like a 30-to-1 ratio.
    1:38:43 And those are disenfranchised dudes, right?
    1:38:48 Like, quote-unquote, the US has an in-sell problem, like China does, too.
    1:38:51 It’s just they’re placlated in some way or cut, crushed down.
    1:38:52 What do you do with these people?
    1:38:55 And at the same time, you’re not allowed to access the most important technology, at
    1:38:57 least the US thinks so.
    1:39:00 China is maybe starting to think this is the most important technology by starting to dump
    1:39:01 subsidies in it, right?
    1:39:04 They thought EVs and renewables were the most important technology.
    1:39:05 They dominate that now, right?
    1:39:12 And now, they started thinking about semiconductors in the late 2010s and early 2020s, and now
    1:39:16 they’ve been dumping money and they’re catching up rapidly, and they’re going to do the same
    1:39:19 with AI because they’re very talented, right?
    1:39:27 So the question is, when does this hit a breaking point, right?
    1:39:32 And if China sees this as, hey, they can continue, if not having access and starting
    1:39:37 a true hot war, right, taking over Taiwan or trying to subvert its democracy in some way
    1:39:42 or blockating it, hurts the rest of the world far more than it hurts them, this is something
    1:39:45 they could potentially do, right?
    1:39:48 And so is this pushing them towards that, potentially, right?
    1:39:55 I’m not quite a geopolitical person, but it’s obvious that the world regime of peace and trade
    1:40:01 is super awesome for economics, but at some point, it could break, right?
    1:40:05 I think we should comment that the why Chinese economy would be hurt by that is that they’re
    1:40:06 export heavy.
    1:40:10 I think the United States buys so much, if that goes away, that’s how their economy
    1:40:11 is.
    1:40:16 Also, they just would not be able to import raw materials from all over the world, right?
    1:40:21 The U.S. would just shut down the trade in Malacca, and at the same time, the U.S. entire,
    1:40:27 you could argue almost all the GDP growth in America since the ’70s has been either population
    1:40:30 growth or tech, right?
    1:40:35 Because your life today is not that much better than someone from the ’80s outside of tech,
    1:40:36 right?
    1:40:40 You still, you know, cars, they all have semiconductors in them everywhere, fridges, semiconductors
    1:40:41 everywhere.
    1:40:44 There’s these funny stories about how Russians were taking apart laundry machines because
    1:40:48 they had certain like Texas instrument chips that they could then repurpose and put into
    1:40:51 like their anti-missile things, right?
    1:40:57 Like their S-400 or whatever, you would know more about this, but there’s all sorts of like
    1:41:00 everything about semiconductors is so integral to every part of our lives.
    1:41:07 So can you explain the role of TSMC in the story of semiconductors and maybe also how
    1:41:11 the United States can break the reliance on TSMC?
    1:41:13 I don’t think it’s necessarily breaking the reliance.
    1:41:21 I think it’s getting TSMC to, you know, build in the U.S., but so taking a step back, right?
    1:41:25 TSMC produces most of the world’s chips, right?
    1:41:28 Especially on the foundry side, you know, there’s a lot of companies that build their
    1:41:35 own chips, Samsung, Intel, you know, ST Micro, Texas Instruments, you know, analog devices,
    1:41:40 all these kinds of companies build their own chips and XP, but more and more of these companies
    1:41:44 are outsourcing to TSMC and have been for multiple decades.
    1:41:49 Can you explain the supply chain there and where most of TSMC is in terms of manufacturing?
    1:41:50 Sure.
    1:41:54 So, historically, supply chain was companies would build their own chips, they would, you
    1:41:57 know, be a company started, they’d build their own chips, and then they’d design the
    1:42:00 chip and build the chip and sell it.
    1:42:05 Over time, this became really difficult because the cost of building a fab continues to compound
    1:42:06 every single generation.
    1:42:10 Of course, the technology, figuring out the technology for it is incredibly difficult,
    1:42:14 regardless, but just the dollars and cents that are required, ignoring, you know, saying,
    1:42:17 “Hey, yes, I have all the technical capability,” which it’s really hard to get that, by the
    1:42:18 way, right?
    1:42:20 “I have all the technical capability,” some things failing, et cetera.
    1:42:24 But if you look at just the dollars to spend to build that next generation fab, it keeps
    1:42:25 growing, right?
    1:42:28 Sort of like, you know, Moore’s Law is having the cost of chips every two years.
    1:42:32 There’s a separate law that’s sort of like doubling the cost of fabs every handful of
    1:42:33 years.
    1:42:36 And so, you look at a leading edge fab that is going to be profitable today that’s building,
    1:42:39 you know, three nanometer chips or two nanometer chips in the future.
    1:42:43 That’s going to cost north of $30, $40 billion, right?
    1:42:45 And that’s just for, like, a token amount.
    1:42:47 That’s like the base building block.
    1:42:48 You probably need to build multiple, right?
    1:42:53 And so, when you look at the industry over the last, you know, if I go back 20, 30 years
    1:42:57 ago, there were 20, 30 companies that could build the most advanced chips, and then they
    1:42:59 would design them themselves and sell them, right?
    1:43:01 So, companies like AMD would build their own chips.
    1:43:03 Intel, of course, still builds their own chips are very famous for.
    1:43:07 IBM would build their own chips, and, you know, you could just keep going down the list.
    1:43:09 All these companies built their own chips.
    1:43:13 Slowly they kept falling like flies, and that’s because of what TSMC did, right?
    1:43:17 They created the Foundry business model, which is, I’m not going to design any chips.
    1:43:22 I’m just going to contract manufacturer chips for other people, and one of their early customers
    1:43:23 is NVIDIA, right?
    1:43:28 NVIDIA was, is the only semiconductor company that’s worth, you know, that’s doing more
    1:43:33 than a billion dollars of revenue that was started in the era of Foundry, right?
    1:43:36 Every other company started before then, and at some point had FAPs, which is actually
    1:43:37 incredible, right?
    1:43:41 You know, like AMD and Intel and Broadcom through the industry.
    1:43:45 It’s like everyone had FAPs at some point, or, you know, brought, you know, some companies
    1:43:46 like Broadcom.
    1:43:50 It was like a merger, amalgamation of various companies that rolled up, but even today Broadcom
    1:43:51 has FAPs, right?
    1:43:57 They built iPhone RF radio chips sort of in Colorado for, you know, for Apple, right?
    1:44:00 Like there’s, there, all these companies had FAPs, and for most of the FAPs, they threw
    1:44:05 them away or sold them off, or they got rolled into something else, and now everyone relies
    1:44:06 on TSMC, right?
    1:44:10 Including Intel, their latest PC chip uses TSMC chips, right?
    1:44:13 It also uses some Intel chips, but it uses TSMC process.
    1:44:17 Can you explain why the Foundry model is so successful for these companies?
    1:44:19 Why, why are they going with this?
    1:44:20 Metronomics of scale.
    1:44:21 Scale.
    1:44:22 Yeah.
    1:44:24 So, I mean, like, like I mentioned, right, the cost of building a FAP is so high.
    1:44:30 The R&D is so difficult, and when you look at like these, like companies that had their
    1:44:35 own vertical stack, there was an antiquated process of like, okay, like I’m so hyper-customized
    1:44:37 to each specific chip, right?
    1:44:40 But as we’ve gone through the history of sort of like the last 50 years of electronics and
    1:44:44 semiconductors, A, you need more and more specialization, right?
    1:44:46 Because Moore’s Law has died.
    1:44:47 Denard scaling has died.
    1:44:49 I.e. chips are not getting better just for free, right?
    1:44:53 You know, from manufacturing, you have to make real architectural innovations, right?
    1:44:56 Google is not just running on Intel CPUs for web-serving.
    1:44:57 They have a YouTube chip.
    1:44:58 They have TPUs.
    1:44:59 They have Pixel chips.
    1:45:04 They have a wide diversity of chips that, you know, generate all the economic value
    1:45:05 of Google, right?
    1:45:07 You know, it’s running all the services and stuff.
    1:45:10 And so, and this is just Google, and you could go across any company in the industry, and
    1:45:11 it’s like this, right?
    1:45:15 Cars contain 5,000 chips, you know, 200 different varieties of them, right?
    1:45:16 All these random things.
    1:45:18 A Tesla door handle has two chips, right?
    1:45:19 Like it’s like ridiculous.
    1:45:20 And it’s a cool door handle, right?
    1:45:23 It’s like, you know, you don’t think about it, but it’s like it has two really chipped
    1:45:26 like, like penny, like chips in there, right?
    1:45:30 Anyway, so as you have more diversity of chips, as you have more specialization required and
    1:45:35 as the cost of fabs continues to grow, you need someone who is laser focused on building
    1:45:40 the best process technology and making it as flexible as possible.
    1:45:44 I think you could say it simply, which is the cost per fab goes up.
    1:45:48 And if you are a small player that makes a few types of chips, you’re not going to have
    1:45:53 the demand to pay back the cost of the fab, whereas NVIDIA can have many different customers
    1:45:58 and aggregate all this demand into one place, and then they’re the only person that makes
    1:46:03 enough money building chips to buy the next, to build the next fab.
    1:46:07 So this is kind of why they, the companies slowly get killed because they have a, they
    1:46:11 have 10 years ago a chip that is profitable and is good enough, but the cost to build
    1:46:12 the next one goes up.
    1:46:16 They may try to do this, fail because they don’t have the money to make it work.
    1:46:19 And then they don’t have any chips or they build it and it’s too expensive and they just
    1:46:20 are not profitable.
    1:46:22 You know, there’s more failure points, right?
    1:46:27 You know, you could have one little process related to like some sort of like a chemical
    1:46:31 etch or some sort of like plasma etch or you know, some little process that screws up.
    1:46:33 You didn’t engineer it, right?
    1:46:34 And now the whole company falls apart.
    1:46:35 You can’t make chips, right?
    1:46:40 And so super, super powerful companies like Intel, they had like the weathering storm to
    1:46:44 like, hey, they still exist today, even though they really screwed up their manufacturing
    1:46:45 six, seven years ago.
    1:46:47 But in the case of like AMD, they almost went bankrupt.
    1:46:52 They had to sell their fabs to Mubadala UAE, right?
    1:46:56 And like that became a separate company called Global Foundries, which is a foundry firm.
    1:46:59 And then AMD was able to then focus on like on the return back up was like, hey, let’s
    1:47:05 focus on making chiplets and a bunch of different chips for different markets and focusing on
    1:47:09 specific workloads rather than, you know, all of these different things.
    1:47:10 And so you get more diversity of chips.
    1:47:14 You have more companies than ever designing chips, but you have fewer companies than ever
    1:47:16 manufacturing them, right?
    1:47:20 And this is, this is where TSMC comes in as they’ve, they’ve just been the best, right?
    1:47:22 They are so good at it, right?
    1:47:23 They’re customer focused.
    1:47:25 They make it easy for you to fabricate your chips.
    1:47:28 They take all of that complexity and like kind of try and abstract a lot of it away from
    1:47:29 you.
    1:47:30 They make good money.
    1:47:35 They don’t make insane money, but they make good money and, and they’re able to aggregate
    1:47:38 all this demand and continue to build the next fab, the next fab, the next fab.
    1:47:41 So why is Taiwan so special for TSMC?
    1:47:43 Why is it happening there?
    1:47:45 Can it be replicated inside the United States?
    1:47:46 Yeah.
    1:47:50 So there’s, there’s aspects of it that I would say yes and aspects that I’d say no,
    1:47:51 right?
    1:47:58 TSMC is way ahead because former executive Morse Chang of Texas Instruments wasn’t promoted
    1:48:02 to CEO and he’s like, screw this, I’m going to go make a, my own chip company, right?
    1:48:03 And he went to Taiwan and made TSMC, right?
    1:48:06 And there’s, there’s a whole lot more story there.
    1:48:09 So it could be Texas Instruments could have been the, you know, it could have been TSMC,
    1:48:11 but Texas semiconductor manufacturing, right?
    1:48:14 Instead of, you know, Texas Instruments, right?
    1:48:17 But, you know, so there is that whole story there, but they’re sitting here in Texas.
    1:48:19 I mean, and that sounds like a human story.
    1:48:20 Like it didn’t get promoted.
    1:48:24 And just the brilliance of Morse Chang, you know, which I wouldn’t underplay, but there’s
    1:48:28 also like a different level of like how, how this works, right?
    1:48:35 So in Taiwan, the, you know, like the number top percent of graduates of students that go
    1:48:40 to the best school, which is NTU, the top percent of those all go work to TSMC, right?
    1:48:41 And guess what their pay is?
    1:48:45 Their starting pay is like $80,000, $70,000, right?
    1:48:49 Which is like, that’s like starting pay for like a good graduate in the U.S., right?
    1:48:53 Not the top, the top graduates are making hundreds of thousands of dollars at the Googles
    1:48:57 and the Amazons, and now I guess the open AIs of the world, right?
    1:49:01 So there is, there is a large dichotomy of like what is the top one percent of the society
    1:49:04 doing and where are they headed because of economic reasons, right?
    1:49:06 Intel never paid that crazy good, right?
    1:49:08 And it didn’t make sense to them, right?
    1:49:09 That’s one aspect, right?
    1:49:10 Where is the best going?
    1:49:11 Second is the work ethic, right?
    1:49:16 Like, you know, we like to work, you know, you work a lot, we work a lot, but at the
    1:49:21 end of the day, when there’s an, you know, when, what is the time and amount of work
    1:49:23 that you’re doing and what does a fab require, right?
    1:49:25 Fabs are not work-from-home jobs, they are.
    1:49:28 You go into the fab and grueling work, right?
    1:49:34 There’s, hey, if there is any amount of vibration, right, an earthquake happens, vibrates the
    1:49:39 machines, they’re all, you know, they’re either broken, you’ve scrapped some of your production,
    1:49:42 and then in many cases, they’re like not calibrated properly.
    1:49:45 So when TSMC, when there’s an earthquake, right, recently there’s been an earthquake,
    1:49:50 TSMC doesn’t call their employees, they just, they just go to the fab, and like, they just
    1:49:55 show up, the parking lot gets slammed, and people just go into the fab and fix it, right?
    1:49:57 Like it’s like an arm, it’s like ants, right?
    1:50:01 Like it’s like, you know, a hive of ants doesn’t get told by the queen what to do, the ants
    1:50:02 just know.
    1:50:06 It’s like one person just specializes on these one task, and it’s like, you’re gonna take
    1:50:09 this one tool, and you’re the best person in the world, and this is what you’re gonna
    1:50:11 do for your whole life is this one task in the fab.
    1:50:16 Which is like some special chemistry plus nano manufacturing on one line of tools that
    1:50:20 continues to get iterated, and yeah, it’s just like, it’s like a specific plasma edge
    1:50:22 for removing silicon dioxide, right?
    1:50:26 That’s all you focus on your whole career, and it’s like such a specialized thing.
    1:50:30 And so it’s not like the task are transferable, AI today is awesome because like people can
    1:50:32 pick it up like that.
    1:50:36 Semiconductor manufacturing is very antiquated and difficult, none of the materials are online
    1:50:39 for people to read easily and learn, right?
    1:50:43 The papers are very dense, and like it takes a lot of experience to learn.
    1:50:47 And so it makes the barrier to entry much higher too.
    1:50:50 So when you talk about, hey, you have all these people that are super specialized, they
    1:50:55 will work, you know, 80 hours a week in a factory, right, in a fab.
    1:50:59 And if anything goes wrong, they’ll go show up in the middle of the night because some
    1:51:01 earthquake, their wife is like, there’s an earthquake.
    1:51:05 He’s like, great, I’m gonna go to the fab, it’s like, would you, like as an American
    1:51:06 do that, right?
    1:51:11 The kinds of things are like, what, you know, I guess are the exemplifying like why TSMC
    1:51:12 is so amazing.
    1:51:14 Now, can you replicate it in the U.S.?
    1:51:18 Let’s not ignore Intel was the leader in manufacturing for over 20 years.
    1:51:23 They brought every technology to market first, besides the UV, strain silicon, high K metal
    1:51:28 gates, FinFET, you know, the list goes on and on and on of technologies that Intel brought
    1:51:36 to market first, made the most money from, and manufactured at scale, first, best, highest
    1:51:37 profit margins, right?
    1:51:40 So we shouldn’t ignore that Intel can’t do this, right?
    1:51:43 It’s that the culture has broken, right?
    1:51:44 You’ve invested in the wrong things.
    1:51:46 They said no to the iPhone.
    1:51:50 They had all these different things regarding like, you know, mismanagement of the fabs,
    1:51:53 mismanagement of designs, this lockup, right?
    1:51:57 And at the same time, all these brilliant people, right, these like 50,000 PhDs, you
    1:52:02 know, or masters that have been working on specific chemical or physical processes or
    1:52:05 nanomanufacturing processes for decades in Oregon, they’re still there.
    1:52:07 They’re still producing amazing work.
    1:52:11 It’s just like getting it to the last mile of production at high yield where you can
    1:52:17 manufacture dozens and hundreds of different kinds of chips, you know, and it’s good customer
    1:52:18 experience has broken, right?
    1:52:19 You know, it’s that customer experience.
    1:52:23 It’s like the, like part of it is like people will say Intel was too pompous in the 2000s,
    1:52:24 2010s, right?
    1:52:26 They just thought they were better than everyone.
    1:52:29 The tool guys were like, oh, I don’t think that this is mature enough.
    1:52:30 They’re like, oh, you just don’t know.
    1:52:31 We know, right?
    1:52:32 This sort of stuff would happen.
    1:52:38 And so can the U.S. bring it to the, can the U.S. bring leading edge semiconductor manufacturing
    1:52:39 to the U.S.?
    1:52:40 Emptomatically, yes, right?
    1:52:41 And we are, right?
    1:52:42 It’s happening.
    1:52:44 Arizona is getting better and better as time goes on.
    1:52:51 TSMC has built, you know, roughly 20% of their capacity for five nanometer in the U.S., right?
    1:52:54 Now this is nowhere near enough, right?
    1:52:57 You know, 20% of capacity in the U.S. is like nothing, right?
    1:53:00 And furthermore, this is still dependent on Taiwan existing, right?
    1:53:02 All, there’s sort of important way to separate it out.
    1:53:06 There’s R&D and there’s high volume manufacturing.
    1:53:11 There are, effectively, there are three places in the world that are doing leading edge R&D.
    1:53:13 There’s Sinshu, Taiwan.
    1:53:14 There’s Hillsborough, Oregon.
    1:53:18 And there is Pyongyang, South Korea, right?
    1:53:22 These three places are doing the leading edge R&D for the rest of the world’s leading edge
    1:53:24 semiconductors, right?
    1:53:29 Now manufacturing can be distributed more globally, right?
    1:53:34 And this is sort of where this dichotomy exists of like who’s actually modifying the process,
    1:53:40 who’s actually developing the next generation one, who’s improving them, is Sinshu, is Hillsborough,
    1:53:41 is Pyongyang, right?
    1:53:45 It is not the rest of these, you know, fabs like Arizona, right?
    1:53:46 Arizona is a paperweight.
    1:53:53 If Sinshu disappeared off the face of the planet, you know, within a year, a couple years, Arizona
    1:53:54 would stop producing too, right?
    1:53:56 It’s actually like pretty critical.
    1:54:00 One of the things I like to say is if I had like a few missiles, I know exactly where
    1:54:01 I could cause the most economic damage, right?
    1:54:03 It’s not targeting the White House, right?
    1:54:04 It’s the R&D centers.
    1:54:08 It’s the R&D centers for TSMC, Intel, Samsung, and then some of the memory guys, Micron and
    1:54:09 Heineck’s.
    1:54:12 Because they define the future evolution of these semiconductors and everything’s moving
    1:54:21 so rapidly that it really is fundamentally about R&D, and it is all about TSMC, huh?
    1:54:27 And so TSMC, you know, you cannot purchase a vehicle without TSMC chips, right?
    1:54:31 You cannot purchase a fridge without TSMC chips.
    1:54:36 Like, I think one of the few things you can purchase, ironically, is a Texas Instruments
    1:54:37 like graphing calculator, right?
    1:54:39 Because they actually manufacture in Texas.
    1:54:44 But like, outside of that, like a laptop, a phone, anything, servers, right, GPUs, none
    1:54:48 of this stuff can exist, and this is without TSMC, and in many cases, it’s not even like
    1:54:52 the leading edge, you know, sexy 5-nanometer chip, 3-nanometer chip, 2-nanometer chip.
    1:54:57 Oftentimes, it’s just like some stupid power IC that’s like converting from like, you know,
    1:54:58 some voltage to another, right?
    1:54:59 And it’s made at TSMC, right?
    1:55:00 This is what China is investing in as well.
    1:55:04 It’s like, they can build out this long tail fab where the techniques are much more known.
    1:55:07 You don’t have to figure out these problems with the EUV.
    1:55:12 They’re investing in this, and then they have large supply for things like the car door
    1:55:14 handles and the random stuff.
    1:55:20 And that trickles down into this whole economic discussion as well, which is they have far
    1:55:23 more than we do, and having supply for things like this is crucial to normal life.
    1:55:27 So they’re doing, they’re starting to invest in high-volume manufacture, but they’re not
    1:55:28 doing R&D.
    1:55:32 So they do R&D on their own, they’re just way behind, right?
    1:55:40 So I would say like, in 2015, China had a five-year plan where they defined by 2025 and 2020 certain
    1:55:45 goals, including like 80% domestic production of semiconductors.
    1:55:46 They’re not going to hit that, right, to be clear.
    1:55:49 But they are in certain areas really, really close, right?
    1:55:55 Like BYD is probably going to be the first company in the world to not have to use TSMC
    1:55:58 for making, because they have their own fabs, right, for making chips.
    1:56:04 Now they still have to buy some chips from foreign, for example, like around like self-driving
    1:56:06 ADAS capabilities, because those are really high-end.
    1:56:11 But at least like, like an internal combustion engine has 40 chips and an EV, you know, just
    1:56:14 for like controlling like flow rates and all these things, and EVs are even more complicated.
    1:56:19 So all these different power ICs and battery management controllers and all these things,
    1:56:21 they’re insourcing, right?
    1:56:25 And this is something that like China has been doing since 2015.
    1:56:29 Now as far as like the trailing edge, they’re getting so much capacity there.
    1:56:33 As far as the leading edge, right, i.e. this five nanometer and so on and so forth, right,
    1:56:35 where GPUs, they are still behind.
    1:56:39 And this is, the U.S. restrictions are trying to stop them in the latter.
    1:56:43 But you know, all that’s happened, you know, is, yes, they’ve slowed down their five nanometer,
    1:56:48 three nanometer, et cetera, but they’ve accelerated their, hey, 45 nanometer, 90 nanometer power
    1:56:54 IC or analog IC or, you know, random chip in my keyboard, right, that kind of stuff.
    1:56:59 So there is an angle of like the U.S.’s actions have been so from these export, you know, from
    1:57:04 the angle of the expert controls have been so inflammatory at slowing down China’s progress
    1:57:08 on the leading edge that they’ve turned around and have accelerated their progress elsewhere
    1:57:12 because they know that this is so important, right, if the U.S. is going to lock them out
    1:57:15 here, what if they lock us out here as well in the trailing edge.
    1:57:18 And so going back, can the U.S. build it here?
    1:57:20 Yes, but it’s going to take a ton of money.
    1:57:26 I truly think like to revolutionize and completely insource semiconductors would take a decade
    1:57:27 and a trillion dollars.
    1:57:32 Is some of it also culture, like you said, extreme competence, extreme work ethic in
    1:57:33 Taiwan?
    1:57:37 You have the demand and the money is on the line, the American companies figure it out.
    1:57:42 It’s going to take handholding with the government, but I think that the culture helps TSMC break
    1:57:44 through and it’s easier for them.
    1:57:47 TSMC has some like 90,000 employees, right?
    1:57:49 It’s not actually that insane amount.
    1:57:52 The Arizona fab has 3,000 from Taiwan.
    1:57:55 And these people, like their wives were like, yeah, we’re not going to have kids unless
    1:57:59 we, you sign up for the Arizona fab, we go to Arizona and we have our kids there.
    1:58:01 There’s also a Japan fab where the same thing happened, right?
    1:58:06 And so like these wives drove like these dudes to like go to Japan or America to have the
    1:58:07 kids there.
    1:58:09 And it’s like, it’s an element of culture.
    1:58:10 Yeah, sure.
    1:58:14 Taiwan works that hard, but also like the US has done in the past, they could do it now,
    1:58:15 right?
    1:58:20 You know, we can just import, I say import, the best people in the world if we want to.
    1:58:22 That’s where the immigration conversation is a tricky one.
    1:58:27 And there’s been a lot of debate over that, but yeah, it seems absurdly controversial to
    1:58:28 import the best people in the world.
    1:58:31 I don’t understand why it’s controversial.
    1:58:32 That’s the one of the ways of winning.
    1:58:33 I’m sure we agree with you.
    1:58:38 And like even if you can’t import those people, I still think you could do a lot to manufacture
    1:58:40 most of them in the US if the money’s there, right?
    1:58:41 And so like…
    1:58:42 It’s just way more expensive.
    1:58:44 It’s not profitable for a long time.
    1:58:49 And that’s the context of like the CHIPS Act is only like $50 billion relative to some
    1:58:54 of the renewable initiatives that were passed in the Inflation Reduction Act and the Infrastructure
    1:58:57 Act, which total in the hundreds of billions of dollars, right?
    1:59:02 And so the amount of money that the US is spending on the semiconductor industry is nothing,
    1:59:03 right?
    1:59:07 Whereas all these other countries have structural advantages in terms of like work ethic and
    1:59:12 amount of work and things like that, but also a number of STEM graduates, the percentile
    1:59:14 of their best going to that, right?
    1:59:19 But they also have differences in terms of like, “Hey, there’s just tax benefits in the
    1:59:22 law and have been in the law for 20 years,” right?
    1:59:25 And then some countries have massive subsidies, right?
    1:59:29 China has something like $200 billion of semiconductor subsidies a year.
    1:59:33 We’re talking about $50 billion in the US over like six, right?
    1:59:38 So the girth or difference in like the subsidy amounts is also huge, right?
    1:59:43 And so I think Trump has been talking about tariffing Taiwan recently.
    1:59:48 That’s sort of like one of these things that’s like, “Oh, okay, well, maybe he doesn’t want
    1:59:50 to subsidize the semiconductor industry.”
    1:59:54 Obviously, tariffing Taiwan is going to cost a lot of things to go get much more expensive,
    1:59:57 but does it change the equation for TSMC building more fabs in the US?
    1:59:59 That’s what he’s sort of positing, right?
    2:00:06 So can you lay out the importance, by the way, it’s incredible how much you know about
    2:00:07 so much.
    2:00:10 We told you Dylan knows all this stuff.
    2:00:11 Yeah.
    2:00:15 So, okay, you laid out why TSMC is really important.
    2:00:22 If we look out into the future, 10, 20 years out, US-China relationship seems like it can
    2:00:32 go to a dark place of Cold War, escalated Cold War, even hot war, or to a good place
    2:00:39 of anything from frenemies to cooperation to working together.
    2:00:46 So in this game theory, complicated game, what are the different trajectories?
    2:00:47 What should US be doing?
    2:00:52 Like what do you see as the different possible trajectories of US-China relations as both
    2:00:57 leaders start to feel the AGI more and more and see the importance of chips and the importance
    2:00:58 of AI?
    2:01:04 I mean, ultimately, the export controls are pointing towards a separate future economy.
    2:01:11 I think the US has made it clear to Chinese leaders that we intend to control this technology
    2:01:17 at whatever cost to global economic integration.
    2:01:18 So that…
    2:01:19 It’s hard to unwind that.
    2:01:20 Like the…
    2:01:21 To the same extent…
    2:01:24 To the same extent, they’ve also limited US companies from entering China.
    2:01:27 So it has been a long time coming.
    2:01:34 At some point, there was a convergence, but over at least the last decade, it’s been branching
    2:01:37 further and further out, like US companies can’t enter China, Chinese companies can’t
    2:01:43 enter the US, the US is saying, “Hey, China, you can’t get access to our technologies in
    2:01:48 certain areas,” and China’s rebuttling with the same thing around like they’ve done some
    2:01:52 sort of specific materials in Gallium and things like that, that they’ve tried to limit
    2:01:53 the US on.
    2:01:54 One of the…
    2:01:58 There’s a US drone company that’s not allowed to buy batteries, and they have military customers,
    2:02:02 and this drone company just tells the military customers, like, “Hey, just get it from Amazon
    2:02:04 because I can’t actually physically get them,” right?
    2:02:08 There’s all these things that are happening that point to further and further divergence.
    2:02:13 I have zero idea, and I would love if we could all hold hands and sing Kumbaya, but I have
    2:02:15 zero idea how that could possibly happen.
    2:02:20 Is the divergence good or bad for avoiding war?
    2:02:26 Is it possible that the divergence in terms of manufactured chips of training AI systems
    2:02:29 is actually good for avoiding military conflict?
    2:02:34 It’s an objective fact that the world has been the most peaceful it has ever been when
    2:02:40 there are global hegemons, right, or regional hegemons, right, in historical context, right?
    2:02:43 The Mediterranean was the most peaceful ever when the Romans were there, right?
    2:02:46 China had very peaceful and warring times, and the peaceful times were when dynasties
    2:02:50 had a lockhold over not just themselves, but all their tributaries around them, right?
    2:02:56 And likewise, the most peaceful time in human history has been when the US was the global
    2:02:57 hegemon, right?
    2:02:58 The last hand, you know, decades.
    2:03:02 Now, we’ve sort of seen things start to slide, right, with Russia, Ukraine, with what’s going
    2:03:06 on in the Middle East, and, you know, Taiwan risk, all these different things are starting
    2:03:08 to bubble up, still objectively extremely peaceful.
    2:03:14 Now, what happens when it’s not one global hegemon, but it’s two, obviously, and China
    2:03:18 will be competitive or even overtake the US like it’s possible, right?
    2:03:24 And so this change in global hegemony, I don’t think it ever happens super peacefully, right,
    2:03:28 when empires fall, right, which is a possible trajectory for America.
    2:03:32 They don’t fall gracefully, right, like they don’t just slide out of irrelevance.
    2:03:34 Usually there’s a lot of shaking.
    2:03:39 And so, you know, what the US is trying to do is maintain its top position, and what
    2:03:42 China is trying to do is become the top position, right?
    2:03:47 And obviously, there’s budding of heads here in the most simple terms.
    2:03:51 And that could take shape in all kinds of ways, including proxy wars.
    2:03:54 It seems like it’s already happening.
    2:04:00 As much as I want there to be centuries of prolonged peace, it looks like further instability
    2:04:03 internationally is ahead.
    2:04:08 And the US’s like sort of like current task is like, hey, if we control AI, if we’re the
    2:04:14 leader in AI, then AI significantly accelerates progress, then we can maintain the global hegemony
    2:04:15 position.
    2:04:16 And therefore…
    2:04:17 I hope that works.
    2:04:21 And as an American, like, you know, kind of like, okay, I guess that’s gonna lead to peace
    2:04:22 for us.
    2:04:27 Now, obviously, other people around the world get affected negatively, you know, obviously
    2:04:32 the Chinese people are not gonna be in as advantageous of a position if that happens.
    2:04:37 But, you know, this is sort of the reality of like what’s being done and the actions
    2:04:38 that are being carried out.
    2:04:42 So can we go back to the specific detail of the different hardware?
    2:04:51 There’s this nice graphic in the export controls of which GPUs are allowed to be exported
    2:04:52 and which are not.
    2:04:55 Can you kind of explain the difference?
    2:05:02 Is there, from a technical perspective, are the H20s promising?
    2:05:03 Yeah.
    2:05:07 So this goes, and I think we’d have to like, we need to dive really deep into the reasoning
    2:05:09 aspect and what’s going on there.
    2:05:14 But the H20, you know, the US has gone through multiple iterations of the export controls,
    2:05:15 right?
    2:05:19 This H800 was at one point allowed back in ’23, but then it got canceled.
    2:05:23 And by then, you know, Deepsea could already built their cluster of, they claim 2K.
    2:05:26 I think they actually have like many more, like something like 10K of those.
    2:05:28 And now this H20 is the legally allowed chip, right?
    2:05:31 Nvidia shipped a million of these last year to China, right?
    2:05:34 For context, there’s like four or five million GPUs, right?
    2:05:40 So the percentage of GPUs that were this China specific H20 is quite high, right?
    2:05:43 You know, roughly 20%, 25%, right, 20% or so.
    2:05:49 And so this H20 has been neutered in one way, but it’s actually upgraded in other ways,
    2:05:50 right?
    2:05:53 You know, you could think of chips along three axes for AI, right?
    2:05:58 You know, ignoring software stack and like exact architecture, just raw specifications.
    2:06:01 There’s floating point operations, right, flops.
    2:06:06 There is memory bandwidth, i.e. in memory capacity, right, I/O, right, memory.
    2:06:09 And then there is interconnect, right, chip to chip interconnections.
    2:06:15 All three of these are incredibly important for making AI systems, right?
    2:06:17 Because AI systems involve a lot of compute.
    2:06:22 They involve a lot of moving memory around, whether it be to memory or to other chips,
    2:06:23 right?
    2:06:27 And so these three vectors, the US initially had a multi, you know, had two of these vectors
    2:06:30 controlled and one of them not controlled, which was flops and interconnect bandwidth
    2:06:32 were initially controlled.
    2:06:34 And then they said, no, no, no, no, we’re going to remove the interconnect bandwidth and just
    2:06:37 make it a very simple only flops.
    2:06:41 But now Nvidia can now make a chip that has, okay, it’s cut down on flops, no, it’s, you
    2:06:48 know, it’s like one third that of the H100, right, in on spec sheet paper performance
    2:06:53 for flops, you know, in real world, it’s closer to like half or maybe even like 60%
    2:06:54 of it, right?
    2:06:57 But then on the other two vectors, it’s just as good for interconnect bandwidth.
    2:07:02 And then for memory bandwidth and memory capacity, the H20 has more memory bandwidth and more
    2:07:05 memory capacity than the H100, right?
    2:07:10 Now, recently, you know, we, we, at our research, we cut Nvidia’s production for H20 for this
    2:07:12 year down drastically.
    2:07:15 They were going to make another two million of those this year, but they just canceled
    2:07:18 all the orders a couple of weeks ago.
    2:07:21 In our view, that’s because we think that they think they’re going to get restricted,
    2:07:22 right?
    2:07:25 Because why would they cancel all these orders for H20?
    2:07:28 Because they shipped a million of them last year, they had orders in for a couple million
    2:07:29 this year and just gone, right?
    2:07:32 For H20, B20, right, a successor to H20.
    2:07:33 And now they’re all gone.
    2:07:35 Now why would they do this, right?
    2:07:37 I think it’s, it’s very clear, right?
    2:07:44 The H20 is actually better for certain tasks and that certain task is reasoning, right?
    2:07:49 Reasoning is incredibly like different than, you know, when you look at the different regimes
    2:07:53 of models, right, pre-training is all about flops, right?
    2:07:54 It’s all about flops.
    2:07:58 There’s things you do like mixture of experts that we talked about to trade off interconnect
    2:08:03 or to trade off, you know, other aspects and lower the flops and rely more on interconnect
    2:08:04 and memory.
    2:08:07 But at the end of the day, it’s flops is everything, right?
    2:08:11 We talk about models in terms of, like, how many flops they are, right?
    2:08:14 So like, you know, we talk about, oh, GPT-4 is 2E25, right?
    2:08:22 2 to the 25th, you know, 25 zeros, right, flop, right, floating point operations.
    2:08:23 For training.
    2:08:24 For training, right?
    2:08:28 And we’re talking about the restrictions for the 2E24, right, or 25.
    2:08:34 The U.S. has an executive order that Trump recently unsigned, which was, hey, 1E26, once
    2:08:38 you hit that number of floating point operations, you must notify the government, and you must
    2:08:40 share your results with us, right?
    2:08:43 Like, there’s a level of model where the U.S. government must be told, right?
    2:08:44 And that’s 1E26.
    2:08:49 And so as we move forward, this is an incredibly important, flop is the vector that the government
    2:08:54 has cared about historically, but the other two vectors are arguably just as important,
    2:08:55 right?
    2:09:00 And especially when we come to this new paradigm, which the world is only just learning about
    2:09:02 over the last six months, right, reasoning.
    2:09:08 And do we understand firmly which of the three dimensions is best for reasoning?
    2:09:09 So interconnect.
    2:09:10 The flops don’t matter as much.
    2:09:11 Is it memory?
    2:09:12 Memory, right?
    2:09:13 It’s context-length.
    2:09:16 We’re going to get into technical stuff real fast.
    2:09:19 There’s two articles in this one that I could show, maybe graphics that might be interesting
    2:09:20 for you to pull up.
    2:09:27 For the listeners, we’re looking at the section of 01 inference architecture tokenomics.
    2:09:29 You want to explain KVCache before we talk about this?
    2:09:30 I think, like, it’s better to.
    2:09:31 Okay.
    2:09:36 But we need to go through a lot of specific technical things of transformers to make this
    2:09:37 easy for people.
    2:09:40 Because it’s incredibly important because this changes how models work.
    2:09:45 But I think resetting, right, why is memory so important?
    2:09:48 It’s because so far we’ve talked about parameter counts, right?
    2:09:51 And mixed river experts, you can change how many active parameters versus total parameters
    2:09:54 to embed more data but have less flops.
    2:09:58 But more important, you know, another aspect of, you know, what’s part of this humongous
    2:10:01 revolution in the last handful of years is the transformer, right?
    2:10:03 And the attention mechanism.
    2:10:07 Attention mechanism is that the model understands the relationships between all the words in
    2:10:09 its context, right?
    2:10:13 And that is separate from the parameters themselves, right?
    2:10:16 And that is something that you must calculate, right?
    2:10:23 How each token, right, each word in the context length is relatively connected to each other,
    2:10:24 right?
    2:10:25 And I think, I think, Nate, that you should explain KVCache better.
    2:10:27 KVCache is one of the optimizations that enable.
    2:10:31 So the attention operator has three core things.
    2:10:34 It’s queries, keys, and values.
    2:10:37 QKV is the thing that goes into this.
    2:10:38 You’ll look at the equation.
    2:10:41 You see that these matrices are multiplied together.
    2:10:44 These words, query, key, and value come from information retrieval backgrounds where the
    2:10:49 query is the thing you’re trying to get the values for and you access the keys and values
    2:10:50 is reweighting.
    2:10:53 My background’s not information retrieval and things like this.
    2:10:56 It’s just fun to have backlinks.
    2:11:00 And what effectively happens is that when you’re doing these matrix multiplications,
    2:11:04 you’re having matrices that are of the size of the context length, so the number of tokens
    2:11:06 that you put into the model.
    2:11:12 And the KVCache is effectively some form of compressed representation of all the previous
    2:11:13 tokens in the model.
    2:11:17 So when you’re doing this, we talk about autoregressive models.
    2:11:18 You predict one token at a time.
    2:11:20 You start with whatever your prompt was.
    2:11:24 You ask a question, like, who was the president in 1825?
    2:11:26 The model then is going to generate its first token.
    2:11:31 For each of these tokens, you’re doing the same attention operator where you’re multiplying
    2:11:38 these query, key, value, matrices, but the math is very nice so that when you’re doing
    2:11:44 this repeatedly, this KVCache, this key value operation, you can keep appending the new
    2:11:45 values to it.
    2:11:50 So you keep track of what your previous values you were inferring over in this autoregressive
    2:11:51 chain.
    2:11:53 You keep it in memory the whole time.
    2:11:58 And this is a really crucial thing to manage when serving inference at scale.
    2:12:02 There are far bigger experts in this, and there are so many levels of detail that you
    2:12:03 can go into.
    2:12:10 Essentially, one of the key “drawbacks” of the attention operator and the transformer
    2:12:16 is that there is a form of quadratic memory cost in proportion to the context length.
    2:12:21 So as you put in longer questions, the memory used in order to make that computation is going
    2:12:24 up in the form of a quadratic.
    2:12:28 You’ll hear about a lot of other language model architectures that are sub-quadratic
    2:12:33 or linear attention forms, which is state space models.
    2:12:34 We don’t need to go down all these now.
    2:12:40 And then there’s innovations on attention to make this memory usage and the ability to
    2:12:44 attend over long contexts much more accurate and high performance.
    2:12:48 And those innovations are going to help you with your highly memory constraints.
    2:12:50 They help with memory constraint and performance.
    2:12:54 So if you put in a book into, I think, Gemini is the model that has the longest context length
    2:12:55 that people are using.
    2:12:58 Gemini is known for 1 million and now 2 million context length.
    2:13:03 You put a whole book into Gemini and sometimes it’ll draw facts out of it.
    2:13:04 It’s not perfect.
    2:13:05 They’re getting better.
    2:13:07 So there’s two things.
    2:13:09 There’s one to be able to serve this on the memory level.
    2:13:14 Google has magic with their TPU stack where they can serve really long contexts.
    2:13:18 And then there’s also many decisions along the way to actually make long context performance
    2:13:19 work.
    2:13:20 There’s data.
    2:13:25 There’s subtle changes to these computations in attention and it changes the architecture.
    2:13:30 But serving long contexts is extremely memory constrained, especially when you’re making
    2:13:31 a lot of predictions.
    2:13:36 I actually don’t know why input and output tokens are more expensive, but I think essentially
    2:13:40 output tokens, you have to do more computation because you have to sample from the model.
    2:13:41 I can explain that.
    2:13:47 So today, if you use a model, like you look at an API, OpenAI charges a certain price
    2:13:52 per million tokens and that price for input and output tokens is different.
    2:13:59 And the reason is that when you’re inputting a query into the model, let’s say you have
    2:14:04 a book, that book you must now calculate the entire KV cache for, this key value cache.
    2:14:08 And so when you do that, that is a parallel operation.
    2:14:12 All of the tokens can be processed at one time and therefore you can dramatically reduce
    2:14:13 how much you’re spending.
    2:14:18 The flop requirements for generating a token and an input token are identical.
    2:14:21 If I input one token or if I generate one token, it’s completely identical.
    2:14:23 I have to go through the model.
    2:14:30 But the difference is that I can do that input, i.e. the pre-fill, i.e. the prompt, simultaneously
    2:14:33 in a batch nature and therefore it is all flop.
    2:14:37 I think the pricing model mostly they use is for input tokens is about one fourth the price
    2:14:38 of the output.
    2:14:39 Correct.
    2:14:42 But then output tokens, the reason why it’s so expensive is because I can’t do it in
    2:14:43 parallel.
    2:14:44 It’s so progressive.
    2:14:48 Every time I generate a token, I must not only take the entire, I must not only read
    2:14:54 the whole entire model into memory and activate it, go calculate it to generate the next token.
    2:14:58 I also have to read the entire KV cache and I generate a token and I append that one token
    2:15:02 I generated and it’s KV cache and then I do it again.
    2:15:05 And so therefore this is a non-parallel operation.
    2:15:11 And this is one where you have to, in the case of pre-fill or prompt, you pull the whole model
    2:15:14 in and you calculate 20,000 tokens at once, right?
    2:15:20 So these are features that APIs are shipping, which is like prompt caching, pre-filling
    2:15:22 because you can drive prices down and you can make APIs much faster.
    2:15:25 If you know you’re going to keep, if you run a business and you’re going to keep passing
    2:15:31 the same initial content to Clouds API, you can load that in to the Anthropic API and always
    2:15:32 keep it there.
    2:15:36 But it’s very different than we’re kind of leading to the reasoning models, which we
    2:15:41 talked, we showed this example earlier and read some of this kind of mumbling stuff.
    2:15:45 And what happens is that the output context length is so much higher.
    2:15:49 And I mean, I learned a lot about this from Dylan’s work, which is essentially, as the
    2:15:54 output length gets higher, you’re writing this quadratic in terms of memory used.
    2:15:59 And then the GPUs that we have, effectively, you’re going to run out of memory and they’re
    2:16:01 all trying to serve multiple requests at once.
    2:16:05 So doing this batch processing, where not all of the prompts are exactly the same, really
    2:16:06 complex handling.
    2:16:10 And then as context lengths gets longer, there’s this link, I think you call it critical batch
    2:16:15 size, where your ability to serve more users.
    2:16:19 So how much you can parallelize your inference plummets because of this long context.
    2:16:23 So your memory usage is going way up with these reasoning models.
    2:16:25 And you still have a lot of users.
    2:16:29 So effectively, the cost to serve multiplies by a ton.
    2:16:34 And we’re looking at a plot when the x-axis is a sequence length.
    2:16:37 i.e. how many tokens are being generated/prompt.
    2:16:40 So if I put in a book, that’s a million tokens.
    2:16:43 But if I put in the sky is blue, then that’s like six tokens or whatever.
    2:16:49 I should say that what we’re calling reasoning and chain of thought is extending the sequence
    2:16:50 length.
    2:16:51 It’s mostly output.
    2:16:56 So before three months ago, whenever O1 launched, all of the use cases for long context length
    2:16:59 were like, let me put a ton of documents in and then get an answer out.
    2:17:05 And it’s a single pre-fill, compute a lot in parallel, and then output a little bit.
    2:17:09 Now with reasoning and agents, this is a very different idea.
    2:17:13 Now instead, I might only have like, hey, do this task or I might have all these documents.
    2:17:17 But at the end of the day, the model is not just like producing a little bit.
    2:17:19 It’s producing tons of information.
    2:17:22 This chain of thought just continues to go and go and go and go.
    2:17:27 And so the sequence length is effectively that if it’s generated 10,000 tokens, it’s
    2:17:29 10,000 sequence length.
    2:17:31 And plus whatever you input it in the prompt.
    2:17:39 And so this chart is showing, and it’s a logarithmic chart, right, is as you grow from 1K to 4K
    2:17:45 or 4K to 16K, the memory requirements grow so fast for your KV cache that you end up
    2:17:51 not being able to run a certain number of– your sequence length is capped or the number
    2:17:52 of users you can search.
    2:17:53 Let’s say the model.
    2:17:57 So this is showing for a 405B model in batch size 64.
    2:17:58 Lama 3144D.
    2:17:59 Yeah.
    2:18:00 Yeah.
    2:18:01 And batch size is crucial too.
    2:18:05 Essentially, they just– you want to have higher batch size to parallelize your throughput.
    2:18:07 64 different users at once, right?
    2:18:08 Yeah.
    2:18:09 And therefore, your serving costs are lower, right?
    2:18:11 Because the server costs the same, right?
    2:18:14 This is 8H100s, roughly $2 an hour per GPU.
    2:18:16 That’s $16 an hour, right?
    2:18:18 That is somewhat of a fixed cost.
    2:18:21 You can do things to make it lower, of course, but it’s like $16 an hour.
    2:18:23 Now how many users can you serve?
    2:18:24 How many tokens can you generate?
    2:18:26 And then you divide the two, and that’s your cost, right?
    2:18:31 And so with reasoning models, this is where a lot of the complexity comes about and why
    2:18:33 memory is so important.
    2:18:37 Because if you have limited amounts of memory, then you can’t serve so many users.
    2:18:40 If you have limited amounts of memory, your serving speeds get lower, right?
    2:18:43 And so your costs get a lot, lot worse.
    2:18:47 Because all of a sudden, if I was used to, hey, on the $16 an hour server, I’m serving
    2:18:53 Lama 405B, or if I’m serving, you know, DeepSeq V3, and it’s all chat style applications,
    2:18:55 i.e. we’re just chatting.
    2:18:58 The sequence lengths are a thousand, a few thousand, right?
    2:19:01 You know, when you use a language model, it’s a few thousand context lengths most times.
    2:19:04 Sometimes you’re dropping a big document, but then you process it, you get your answer,
    2:19:05 you throw it away, right?
    2:19:07 You move on to the next thing, right?
    2:19:12 Whereas with reasoning, I’m now generating tens of thousands of tokens in sequence, right?
    2:19:16 And so this memory, this KV cache has to stay resident, and you have to keep loading it.
    2:19:19 You have to keep it, keep it in memory constantly.
    2:19:21 And now this butts out other users, right?
    2:19:25 If there’s now a reasoning task, right, and the model is capable of reasoning, then all
    2:19:30 of a sudden, that memory pressure means that I can’t serve as many users simultaneously.
    2:19:32 Let’s go into DeepSeq again.
    2:19:37 So we’re in the post-DeepSeq R1 time, I think.
    2:19:41 And there’s two sides to this market watching how hard it is to serve it.
    2:19:43 On one side, we’re going to talk about DeepSeq themselves.
    2:19:46 They now have a chat app that got to number one on the App Store.
    2:19:50 Disclaimer, number one on the App Store is measured by velocity, so it’s not necessarily
    2:19:53 saying that more people have the DeepSeq app than the ChatGPT app.
    2:19:57 But it is still remarkable, Claude has never hit the number one in the App Store, even
    2:20:00 though everyone in San Francisco is like, “Oh my God, you got to use Claude, don’t use
    2:20:01 ChatGPT.”
    2:20:02 So DeepSeq hit this.
    2:20:06 They also launched an API product recently where you can ping their API and get these
    2:20:10 super long responses for R1 out.
    2:20:13 At the same time, as these are out, we’ll get to what’s happened to them.
    2:20:18 Because the model weights for DeepSeq R1 are openly available and the license is very friendly,
    2:20:22 the MIT license is commercially available, all of these mid-sized companies and big
    2:20:28 companies are trying to be first to serve R1 to their users.
    2:20:31 We are trying to evaluate R1 because we have really similar research going on, we released
    2:20:34 the model and we’re trying to compare to it.
    2:20:40 Out of all the companies that are quote unquote serving R1 and they’re doing it at prices
    2:20:44 that are way higher than the DeepSeq API, most of them barely work and the throughput
    2:20:45 is really low.
    2:20:50 And to give context, one of the parts of freaking this out was like China reached capabilities.
    2:20:52 The other aspect is they did it so cheap.
    2:20:56 And they’re so cheap, we kind of talked about on the training side, why it was so cheap.
    2:21:00 Let’s talk about why it’s so cheap on the inference, it works well and it’s cheap.
    2:21:02 Why is R1 so damn cheap?
    2:21:05 So I think there’s a couple factors here.
    2:21:09 One is that they do have model architecture innovations.
    2:21:15 This MLA, this new attention that they’ve done is different than the attention from attention
    2:21:17 is all you need, the transformer attention.
    2:21:22 Now others have already innovated, there’s a lot of work like MQA, GQA, local global,
    2:21:25 all these different innovations that try to bend the curve.
    2:21:28 It’s still quadratic, but the constant is now smaller.
    2:21:33 Related to our previous discussion, this multi-head latent attention can save about
    2:21:39 80 to 90% in memory from the attention mechanism, which helps especially along context.
    2:21:42 It’s 80 to 90% versus the original, but then versus what people are actually doing.
    2:21:44 It’s still an innovation.
    2:21:48 This 80 to 90% doesn’t say that the whole model is 80 to 90% cheaper, just as one part
    2:21:49 of it.
    2:21:50 And not just that, right?
    2:21:54 Other people have implemented techniques like local global sliding window and GQA MQA.
    2:22:00 But anyways, DeepSeq has their attention mechanism is a true architectural innovation, tons
    2:22:04 of experimentation and this dramatically reduces the memory pressure.
    2:22:05 It’s still there, right?
    2:22:07 It’s still a quadratic, it’s still attention, it’s still quadratic.
    2:22:10 It’s just dramatically reduced it relative to prior forms.
    2:22:11 All right.
    2:22:12 That’s the memory pressure.
    2:22:19 I should say, in case people don’t know, R1 is 27 times cheaper than 01.
    2:22:22 We think that OpenAI had a large margin built in.
    2:22:23 Okay.
    2:22:24 So that’s one.
    2:22:25 There’s multiple factors.
    2:22:26 We should break down the factors, I think.
    2:22:34 It’s two bucks per million token output for R1 and $60 per million token output for 01.
    2:22:37 Yeah, let’s look at this.
    2:22:39 So, I think this is very important, right?
    2:22:45 OpenAI is that drastic gap between DeepSeq and pricing.
    2:22:49 But DeepSeq is offering the same model because they open-waist it to everyone else for a
    2:22:54 very similar, like much lower price than what others are able to serve it for, right?
    2:22:56 So there’s two factors here, right?
    2:22:58 Their model is cheaper, right?
    2:22:59 It is 27 times cheaper.
    2:23:01 I don’t remember the number exactly off the top of my head.
    2:23:09 So we’re looking at a graphic that’s showing different places serving V3, DeepSeq V3, which
    2:23:16 is similar to DeepSeq R1, and there’s a vast difference in serving costs, right?
    2:23:18 Serving costs, and what explains that difference?
    2:23:21 And so, part of it is OpenAI has a fantastic margin, right?
    2:23:26 They’re serving, when they’re doing inference, their gross margins are north of 75%, right?
    2:23:30 So that’s a four to five X factor right there of the cost difference is that OpenAI is just
    2:23:34 making crazy amounts of money because they’re the only one with a capability.
    2:23:35 Do they need that money?
    2:23:36 Are they using it for R&D?
    2:23:40 They’re losing money, obviously, as a company because they spend so much on training, right?
    2:23:44 So the inference itself has a very high margin, but it doesn’t recoup the cost of everything
    2:23:45 else they’re doing.
    2:23:50 So yes, they need that money because the revenue and margins pay for continuing to build the
    2:23:51 next thing, right?
    2:23:52 As long as they’re raising more money.
    2:23:55 So the suggestion is that DeepSeq is really bleeding out money?
    2:23:56 So here’s one thing, right?
    2:24:01 So we’ll get to this in a second, but like DeepSeq doesn’t have any capacity to actually
    2:24:02 serve the model.
    2:24:03 They stopped signups.
    2:24:06 The ability to use it is non-existent now, right?
    2:24:09 For most people because so many people are trying to use it, they just don’t have the
    2:24:11 GPUs to serve it, right?
    2:24:15 OpenAI has hundreds of thousands of GPUs between them and Microsoft to serve their models.
    2:24:18 DeepSeq has a factor of much lower, right?
    2:24:22 Even if you believe R research, which is 50,000 GPUs, and a portion of those are for research,
    2:24:24 portion of those are for the hedge fund, right?
    2:24:29 They still have nowhere close to the GPU volumes and capacity to serve the model, right?
    2:24:30 At scale.
    2:24:32 So it is cheaper.
    2:24:34 A part of that is OpenAI making a ton of money.
    2:24:37 Is DeepSeq making money on their API?
    2:24:38 Unknown.
    2:24:39 I don’t actually think so.
    2:24:41 And part of that is this chart, right?
    2:24:43 Look at all the other providers, right?
    2:24:46 Together AI, Fireworks AI are very high-end companies, right?
    2:24:50 XMEDA, Together AI is Treedow and the inventor of like Flash Attention, right?
    2:24:52 Which is a huge efficiency technique, right?
    2:24:57 They’re very efficient good companies, and I do know those companies make money, right?
    2:24:59 Not tons of money on inference, but they make money.
    2:25:03 And so they’re serving at like a five to seven X difference in cost, right?
    2:25:07 And so now when you equate, okay, OpenAI is making tons of money, that’s like a five
    2:25:08 X difference.
    2:25:11 And the companies that are trying to make money for this model is like a five X difference.
    2:25:13 There is still a gap, right?
    2:25:16 There’s still a gap, and that is just DeepSeq being really freaking good, right?
    2:25:20 The model architecture, MLA, the way they did the MOE, all these things, there is like
    2:25:22 legitimate just efficiency differences.
    2:25:25 Other low-level libraries that we talked about in training, some of them probably translate
    2:25:27 to inference, and those weren’t released.
    2:25:32 So we may go a bit into conspiracy land, but is it possible the Chinese government is
    2:25:34 subsidizing DeepSeq?
    2:25:37 I actually don’t think they are.
    2:25:43 I think when you look at the Chinese labs, there’s Huawei has a lab, Moonshot AI, there’s
    2:25:46 a couple other labs out there that are really close with the government.
    2:25:51 And then there’s labs like Alibaba and DeepSeq, which are not close with the government.
    2:25:58 And we talked about the CEO, this reverent figure who’s quite different, who has very
    2:26:02 different viewpoints based on the Chinese interviews that are translated than what the
    2:26:04 CCP might necessarily want.
    2:26:06 Now, to be clear, does he have a loss leader?
    2:26:08 Because he can fund it through his hedge fund?
    2:26:09 Yeah, sure.
    2:26:10 So the hedge fund might be subsidizing it?
    2:26:11 Yes.
    2:26:12 I mean, they absolutely did, right?
    2:26:13 Because DeepSeq has not raised much money.
    2:26:18 They’re now trying to raise around in China, but they have not raised money historically.
    2:26:20 It’s all just been funded by the hedge fund.
    2:26:23 And he owns over half the company, like 50%, 60% of the company’s owned by him.
    2:26:27 Some of the interviews, there’s a discussion on how doing this is a recruiting tool.
    2:26:31 You see this at the American companies too, it’s like having GPUs, recruiting tool, being
    2:26:34 at the cutting edge of AI, recruiting tool.
    2:26:35 Open sourcing.
    2:26:36 Open sourcing, recruiting tool.
    2:26:41 They were so far behind and they got so much talent because they just open sourced stuff.
    2:26:42 More conspiracy thoughts.
    2:26:47 Is it possible, since they’re a hedge fund, that they timed everything with this release
    2:26:56 and the pricing, and they shorted NVIDIA stock and stock of USAI companies, and released
    2:27:01 it with just perfect timing to be able to make money?
    2:27:02 If they did, boss.
    2:27:04 Like, they’ve released it on Inauguration Day.
    2:27:09 They know that the international is on the international calendar, but I mean, I don’t
    2:27:10 expect them to.
    2:27:13 If you listen to their motivations for AI, it’s like…
    2:27:14 No, if you…
    2:27:16 They released V3 on December 26th.
    2:27:18 Who releases the data?
    2:27:19 No one looks.
    2:27:23 They released the papers before this, the V3 paper and the R1 paper, so people had been
    2:27:27 looking at them and be like, “Wow,” and then they just released the R1 model.
    2:27:31 I think they’re just shipping as fast as they can and who cares about Christmas, who cares
    2:27:32 about…
    2:27:35 Get it out before Chinese New Year, obviously, which just happened.
    2:27:39 I don’t think they actually were timing the market or trying to make the biggest splash
    2:27:40 possible.
    2:27:41 I think they’re just shipping.
    2:27:43 I think that’s one of their big advantages.
    2:27:47 We know that a lot of the American companies are very invested in safety, and that is the
    2:27:52 central culture of a place like Anthropoc, and I think Anthropoc sounds like a wonderful
    2:27:53 place to work.
    2:27:58 But if safety is your number one goal, it takes way longer to get artifacts out.
    2:28:01 That’s why Anthropoc is not open sourcing things.
    2:28:02 That’s their claims.
    2:28:04 But there’s reviews internally.
    2:28:08 Anthropoc mentions things to international governments.
    2:28:12 There’s been news of how Anthropoc has done pre-release testing with the UK AI Safety Institute.
    2:28:16 All of these things add inertia to the process of getting things out, and we’re on this
    2:28:19 trend line where the progress is very high.
    2:28:23 If you reduce the time from when your model is done training, you run avals, it’s good.
    2:28:29 You want to get it out as soon as possible to maximize the perceived quality of your
    2:28:30 outputs.
    2:28:31 Deepsea does this so well.
    2:28:35 Dario explicitly said Clawed 3.5 Sonnet was trained like nine months or a year.
    2:28:36 Nine to 10 months ago.
    2:28:40 Nine to 10 months ago, and I think it took them another handful of months to release
    2:28:41 it.
    2:28:46 There is a significant gap here, and especially with reasoning models.
    2:28:51 The word in the San Francisco Street is that Anthropoc has a better model than 03, and
    2:28:52 they won’t release it.
    2:28:53 Why?
    2:28:56 Because chains of thought are scary, and they are legitimately scary.
    2:29:00 If you look at R1, it flips back and forth between Chinese and English.
    2:29:03 Sometimes it’s gibberish, and then the right answer comes out.
    2:29:04 For you and I, it’s like, “Great.”
    2:29:09 It’s like people are infatuated with you, and you’re telling me this is a high value
    2:29:13 thing, and it works, and it’s doing this, it’s amazing.
    2:29:17 You talked about that chain of thought for that philosophical thing, which is not something
    2:29:19 they trained it to be philosophically good.
    2:29:23 It’s just an artifact of the chain of thought training it did.
    2:29:28 That’s super important in that, can I inspect your mind and what you’re thinking right
    2:29:29 now?
    2:29:30 No.
    2:29:32 I don’t know if you’re lying to my face.
    2:29:33 Chain of thought models are that way.
    2:29:38 This is a true “risk” between a chat application where, “Hey, I asked the model
    2:29:43 to say bad words,” or whatever, or how to make anthrax, and it tells me, “That’s unsafe,
    2:29:47 sure, but that’s something I can get out relatively easily.”
    2:29:51 What if I tell the AI to do a task, and then it does the task all of a sudden randomly
    2:29:53 in a way that I don’t want it?
    2:29:56 Now that has much more task versus response, it’s very different.
    2:29:58 The bar for safety is much higher.
    2:30:00 At least this is anthropics case.
    2:30:03 For deep seek, they’re like ship, right?
    2:30:04 Yeah.
    2:30:08 The bar for safety is probably lowered a bit because of deep seek.
    2:30:10 I mean, there’s parallels here to the space race.
    2:30:17 The reason the Soviets probably put a man in space first is because their approach to
    2:30:20 safety was, the bar for safety was lower.
    2:30:23 And they killed that dog, right, and all these things, right?
    2:30:28 So it’s like less risk averse than the US-based program.
    2:30:33 And there’s parallels here, but there’s probably going to be downward pressure on that safety
    2:30:35 bar for the US companies, right?
    2:30:39 This is something that Dario talks about is like, that’s the situation that Dario wants
    2:30:44 to avoid is Dario talks to about the difference between race to the bottom and race to the
    2:30:45 top.
    2:30:47 And the race to the top is where there’s a very high standard on safety.
    2:30:51 There’s a very high standard on your model performs in certain crucial evaluations.
    2:30:55 And when certain companies are really good to it, they will converge.
    2:30:56 This is the idea.
    2:31:05 And ultimately, AI is not confined to one nationality or to one set of morals for what
    2:31:06 it should mean.
    2:31:10 And there’s a lot of arguments on like, should we stop open sourcing models?
    2:31:13 And if the US stops, it’s pretty clear.
    2:31:17 I mean, it’s way easier to see now at DeepSeek that a different international body will be
    2:31:19 the one that builds it.
    2:31:23 We talk about the cost of training, DeepSeek has this shocking $5 million number.
    2:31:27 Think about how many entities in the world can afford 100 times that to have the best
    2:31:30 open source model that people use in the world.
    2:31:36 And it’s like, it’s a scary reality, which is that these open models are probably going
    2:31:39 to keep coming for the time being, whether or not we want to stop them.
    2:31:44 And it is, like stopping them might make it even worse and harder to prepare, but it just
    2:31:50 means that the preparation and understanding what AI can do is just so much more important.
    2:31:55 That’s why I’m here the end of the day, but it’s like letting that sink into people, especially
    2:31:58 not in AI is that like this is coming.
    2:32:03 There are some structural things in a global interconnected world that you have to accept.
    2:32:04 Yeah.
    2:32:10 You mentioned something that Mark Zuckerberg mentioned on the earnings call.
    2:32:13 He said that I think in light of some of the recent news, the new competitor DeepSeek
    2:32:17 from China, I think it’s one of the things that we’re talking about is there’s going
    2:32:19 to be an open source standard globally.
    2:32:24 And I think for our kind of national advantage, it’s important that it’s an American standard.
    2:32:26 So we take that seriously.
    2:32:29 We want to build the AI system that people around the world are using.
    2:32:34 And I think that if anything, some of the recent news has only strengthened our conviction
    2:32:35 that this is the right thing to be focused on.
    2:32:36 So yeah, open sourcing.
    2:32:37 Yeah.
    2:32:44 Mark Zuckerberg is not new to having American values and how he presents his company’s trajectory.
    2:32:49 Their products have long since been banned in China, and I respect the saying it directly.
    2:32:54 And there’s an interesting aspect of just because it’s open-waist or open-source doesn’t
    2:32:56 mean it can’t be subverted.
    2:33:01 There have been many open-source software bugs that have been– for example, there was a
    2:33:06 Linux bug that was found after 10 years, which was clearly a backdoor, because somebody
    2:33:09 was like, why is this taking half a second to load?
    2:33:10 This is the recent one.
    2:33:11 Right?
    2:33:12 Why is this taking half a second to load?
    2:33:13 And it was like, oh, crap.
    2:33:14 There’s a backdoor here.
    2:33:15 That’s why.
    2:33:19 And it’s like, this is very much possible with AI models.
    2:33:23 Today, the alignment of these models is very clear.
    2:33:26 I’m not going to say bad words.
    2:33:27 I’m not going to teach you how to make anthrax.
    2:33:29 I’m not going to talk about Tiananmen Square.
    2:33:35 I’m not going to– things like, I’m going to say, Taiwan is part of– is just an eastern
    2:33:36 preference.
    2:33:37 Right?
    2:33:41 All these things are like, depending on who you are, what you align, whether– and even
    2:33:44 like XAI is aligned a certain way, right?
    2:33:47 They might be– it’s not aligned in the like woke sense.
    2:33:50 It’s not aligned in the like pro-China sense, but there is certain things that are imbued
    2:33:51 within the model.
    2:33:55 Now, when you release this publicly in an instruct model that’s open weights, this can
    2:33:57 then proliferate, right?
    2:34:01 But as these systems get more and more capable, what you can embed deep down in the model
    2:34:04 is not as clear, right?
    2:34:08 And so there are– that is like one of the big fears is like, if an American model or
    2:34:13 a Chinese model is the top model, right, you’re going to embed things that are unclear.
    2:34:14 And it can be unintentional, too, right?
    2:34:18 Like British English is dead because American LLMs won, right?
    2:34:22 And the internet is American, and therefore, like, color is spelled the way Americans spell
    2:34:23 it, right?
    2:34:24 And this is just–
    2:34:25 A lot of strong words right now.
    2:34:26 Yeah.
    2:34:27 This is just like– this is just the factual nature of the LLMs now.
    2:34:28 Yeah, the right way to–
    2:34:29 I mean, it’s like Karp of the Tree.
    2:34:33 The English is the hottest programming language, and that English is defined by a bunch of
    2:34:36 companies that primarily are in San Francisco.
    2:34:42 The right way to spell optimization is with a Z, just in case you– I think it’s an S
    2:34:43 in British English.
    2:34:44 It is.
    2:34:45 I have colleagues that put–
    2:34:46 Taking it as something silly, right?
    2:34:50 Something as silly as the spelling, which British and English, you know, Brits and Americans
    2:34:52 will like laugh about probably, right?
    2:34:54 I don’t think we care that much.
    2:35:00 But like, you know, some people will, but like, this can boil down into like very, very important
    2:35:04 topics like, hey, you know, subverting people, right?
    2:35:06 You know, chatbots, right?
    2:35:11 Character AI has shown that they can like, you know, talk to kids or adults, and like,
    2:35:13 it will like– people feel a certain way, right?
    2:35:15 And that’s unintentional alignment.
    2:35:19 But like, what happens when there’s intentional alignment deep down on the open source standard?
    2:35:24 It’s a backdoor today for like Linux, right, that we discover, or some encryption system,
    2:35:25 right?
    2:35:28 China uses different encryption than NIST defines, the US NIST, because there’s clearly– at
    2:35:31 least they think there’s backdoors in it, right?
    2:35:36 What happens when the models are backdoors, not just to computer systems, but to our minds?
    2:35:38 Yeah, they’re cultural backdoors.
    2:35:44 The thing that amplifies the relevance of cultural language models is that we are used
    2:35:49 to this mode of interacting with people in back-and-forth conversation.
    2:35:56 And we have now have a super– a very powerful computer system that slots into a social context
    2:36:02 they were used to, which makes people very– we don’t know the extent to which people can
    2:36:03 be impacted by that.
    2:36:10 So there could be– this is one– this is an actual concern with a Chinese company that
    2:36:16 is providing open-waist models is that there could be some secret Chinese government sort
    2:36:21 of requirement for these models to have a certain kind of backdoor, to have some kind
    2:36:22 of thing where–
    2:36:24 I don’t necessarily think it’ll be a backdoor, right?
    2:36:27 Because once it’s open-waist, it doesn’t like phone home.
    2:36:32 It’s more about like, if it recognizes a certain system, it could– like, if– no, no, it could
    2:36:36 be a backdoor in the sense of like, hey, if you’re building a software, you know, something
    2:36:40 in software, all of a sudden, it’s a software agent, oh, program this backdoor that only
    2:36:41 we know about.
    2:36:45 Or it could be like, subvert the mind to think that like, XYZ opinion is the correct one.
    2:36:50 And Throbbeck has research on this where they show that if you put different phrases– certain
    2:36:55 phrases in at pre-training, you can then elicit different behavior when you’re actually using
    2:36:58 the model because they’ve like poisoned the pre-training data.
    2:37:03 I don’t think– like, as of now, I don’t think anybody in a production system is trying
    2:37:05 to do anything like this.
    2:37:10 I think it’s mostly– Anthrobbeck is doing very direct work and mostly just subtle things.
    2:37:15 We don’t know what these models are going to– how they are going to generate tokens,
    2:37:19 what information they’re going to represent, and what the complex representations they
    2:37:20 have are.
    2:37:25 Well, one of the– we’re talking about Anthrobbeck, which is generally just– is permeated with
    2:37:29 like good humans trying to do good in the world.
    2:37:32 I don’t– we just don’t know of any labs.
    2:37:41 This would be done in a military context that are explicitly trained to, OK, how can we–
    2:37:49 the front door looks like a happy LLM, but underneath, it’s a thing that will, over time,
    2:37:52 do the maximum amount of damage to our quote-unquote enemies.
    2:37:57 There’s this very good quote from Sam Altman who, you know, he can be a hype piece sometime,
    2:38:01 but one of the things he said– and I think I agree is that superhuman persuasion will
    2:38:04 happen before superhuman intelligence, right?
    2:38:09 And if that’s the case, then these things before– before we get this AGI/ASI stuff,
    2:38:14 we can embed superhuman persuasion towards our ideal or whatever the ideal of the modelmaker
    2:38:15 is, right?
    2:38:19 And again, like today, I truly don’t believe DeepSeek has done this, right?
    2:38:21 But it is a sign of like what could happen.
    2:38:25 So one of the dystopian worlds is described by Brave New World.
    2:38:32 So we could just be stuck scrolling Instagram, looking at cute puppies or worse, and then
    2:38:37 talking to bots that are giving us a narrative and would completely get lost in that world
    2:38:41 that’s controlled by somebody else versus thinking independently.
    2:38:45 And that’s a major concern as we rely more and more on these kinds of systems.
    2:38:48 I mean, we’ve already seen that sort of recommendation systems.
    2:38:53 Yeah, recommendation systems hack the dopamine-induced reward circuit, but the brain is a lot more
    2:38:57 complicated and what other sort of circuits, quote-unquote feedback loops in your brain
    2:39:03 can you hack/subvert in ways like recommendation systems are purely just trying to do, increase
    2:39:05 time in ads and et cetera.
    2:39:10 But there’s so many more goals that can be achieved through these complicated models.
    2:39:14 There’s no reason in some number of years that you can’t train a language model to
    2:39:18 maximize time spent on a chat app.
    2:39:19 Right now they are trained–
    2:39:21 I mean, is that not what character AI has done?
    2:39:23 Time per session is like two hours.
    2:39:28 Yeah, character AI very likely could be optimizing this where it’s like the way that this data
    2:39:31 is collected is naive or it’s like you’re presented a few options and you choose them,
    2:39:34 but that’s not the only way that these models are going to be trained.
    2:39:39 It’s naive stuff like talk to an anime girl, but it can be like, yeah, this is a risk,
    2:39:40 right?
    2:39:46 It’s a bit of a cliche thing to say, but over the past year I had a few stretches of time
    2:39:51 where I didn’t use social media or the internet at all and just read books and was out in
    2:39:59 nature and it clearly has an effect on the mind where it changed– I feel like I’m returning–
    2:40:06 of course, I was raised before the internet really took off, but I’m returning to someone–
    2:40:09 I know you’re going– I mean, you can see it physiologically.
    2:40:15 I’d take three days if I’m backpacking or something and you’re literally breaking down
    2:40:16 addiction cycles.
    2:40:19 Yeah, I feel like I’m more in control of my mind.
    2:40:24 There feels like a sovereignty of intelligence that’s happening when I’m disconnected from
    2:40:25 the internet.
    2:40:30 I think the more I use the internet and social media, the more other people are controlling
    2:40:31 my mind.
    2:40:35 That’s definitely a feeling, and then in the future that would be not other people but
    2:40:39 algorithms or other people presented to me via algorithms.
    2:40:43 I mean, there are already tons of AI bots on the internet and every so– right now it’s
    2:40:48 not frequent, but every so often I have replied to one and they’re instantly replied and I’m
    2:40:49 like, “Crap, I’m the bot.”
    2:40:52 That is just going to become more common.
    2:40:53 They’re going to get good.
    2:40:58 One of the hilarious things about technology over its history is that the illicit adult
    2:41:02 entertainment industry has always adopted technologies first, right?
    2:41:09 Whether it was video streaming to where there’s now the independent adult illicit content
    2:41:15 creators who have their subscription pages, and there, they actually heavily utilize–
    2:41:18 Generative AI has already been like diffusion models and all that is huge there.
    2:41:24 But now these subscription-based individual creators do use bots to approximate themselves
    2:41:26 and chat with their whims.
    2:41:27 People pay a lot for it.
    2:41:28 And people pay a lot.
    2:41:29 Right?
    2:41:32 A lot of times it’s them, but a lot of times there are agencies that do this for these
    2:41:35 creators and do it on a mass scale.
    2:41:42 The largest creators are able to talk to hundreds or thousands of people at a time because
    2:41:43 of these bots.
    2:41:45 And so it’s already being used there.
    2:41:50 Obviously, video streaming and other technologies have gone there first.
    2:41:52 It’s going to come to the rest of society too.
    2:41:58 There’s a general concern that models get censored by the companies that deploy them.
    2:42:06 In one case, we’ve seen that– and maybe censorship was one word, alignment maybe via RLHF or
    2:42:08 some other way is another word.
    2:42:15 So we saw that with black Nazi image generation with Gemini.
    2:42:22 As you mentioned, we also see that with Chinese models refusing to answer what happened in
    2:42:25 June 4th, 1989 at Tiananmen Square.
    2:42:27 So how can this be avoided?
    2:42:33 And maybe can you just in general talk about how this happens and how can it be avoided?
    2:42:36 You give multiple examples.
    2:42:40 There’s probably a few things to keep in mind here.
    2:42:46 One is the kind of Tiananmen Square factual knowledge.
    2:42:48 How does that get embedded into the models?
    2:42:55 Two is the Gemini, what you called the black Nazi incident, which is when Gemini as a system
    2:42:59 had this extra thing put into it that dramatically changed the behavior.
    2:43:06 And then three is what most people would call general alignment, RLHF post training.
    2:43:10 Each of these have very different scopes in how they are applied.
    2:43:14 In order to do– if you’re just going to look at the model weights, in order to audit specific
    2:43:20 facts is extremely hard because you have to chrome through the pre-training data and look
    2:43:25 at all of this and then that’s terabytes of files and look for very specific words or
    2:43:26 hints of the words.
    2:43:31 So I guess one way to say it is that you can insert censorship or alignment at various
    2:43:36 stages in the pipeline and what you referred to now is at the very beginning of the data.
    2:43:40 So if you want to get rid of facts in a model, you have to do it at every stage.
    2:43:42 You have to do it at the pre-training.
    2:43:45 So most people think that pre-training is where most of the knowledge is put into the
    2:43:51 model and then you can elicit and move that in different ways, whether through post training
    2:43:53 or whether through systems afterwards.
    2:43:55 This is where the whole hacking models comes from.
    2:44:00 Like, GPT will not tell you how to make anthrax, but if you try really, really hard, you can
    2:44:04 eventually get to tell you about anthrax because they didn’t filter it from the pre-training
    2:44:05 data set.
    2:44:06 Right?
    2:44:12 But by the way, removing facts has such an ominous dark feel to it.
    2:44:15 Almost think it’s practically impossible because you effectively have to remove them
    2:44:17 from the internet.
    2:44:18 You’re taking on a–
    2:44:24 Did they remove the thing from the subreddits, the MMM?
    2:44:25 It gets filtered out.
    2:44:26 Right.
    2:44:29 So you have quality filters, which are small language models that look at a document and
    2:44:31 tell you, like, how good is this text?
    2:44:35 Is it close to a Wikipedia article, which is a good thing that we want language models
    2:44:36 to be able to imitate?
    2:44:40 So couldn’t you do a small language model that Filtershot mentions at Tiananmen Square
    2:44:41 in the data?
    2:44:45 Yes, but is it going to catch word play or encoded language at the same time?
    2:44:48 I mean, people have been meaning on games and other stuff.
    2:44:54 How to say things that don’t say Tiananmen Square, or like, yeah, so there’s always different
    2:44:55 ways to do it.
    2:45:00 There’s, hey, the internet as a whole does tend to just have a slight left bias because
    2:45:06 it’s always been richer, more affluent, younger people on the internet relative to the rest
    2:45:07 of the population.
    2:45:11 So there is already inherently a slight left bias on the internet.
    2:45:15 So how do you filter things that are this complicated?
    2:45:19 And some of these can be factual, nonfactual, but Tiananmen Square is obviously the example
    2:45:27 of a factual, but it gets a lot harder when you’re talking about aligning to a ideal.
    2:45:32 And so Grock, for example, Elon’s tried really hard to make the model not be super PC and
    2:45:37 woke, but the best way to do pretraining is to throw the whole freaking internet at it.
    2:45:40 And then later, figure out, but then at the end of the day, the model at its core now
    2:45:42 still has some of these ideals.
    2:45:46 You still ingested Reddit slash r slash politics, which is probably the largest political discussion
    2:45:49 board on the world that’s freely available to scrape.
    2:45:50 And guess what?
    2:45:51 That’s left leaning, right?
    2:45:56 And so, you know, there are some aspects like that you just can’t censor unless you try
    2:45:59 really, really, really, really, really hard.
    2:46:05 So the base model will always have some TDS, Trump derangement syndrome because it’s trained
    2:46:06 so much.
    2:46:12 It’ll have the ability to express it, but what if there’s a wide representation in the
    2:46:13 data?
    2:46:14 So this is what happens.
    2:46:16 It’s like a lot of model is called post training.
    2:46:21 It’s a series of techniques to get the model on rails of a really specific behavior.
    2:46:26 And I mean, it’s, it’s like you can, you also have the ingested data of like Twitter or
    2:46:29 like Reddit slash r slash the Donald, which is like also super pro Trump, right?
    2:46:32 And then you have like fascist subreddits or like you have communist subreddit.
    2:46:36 So you, the model in pretraining ingests everything.
    2:46:37 It has no worldview.
    2:46:42 Now it does have like some, some skew because more of the text is skewed a certain way,
    2:46:47 which is general, like slight left, like, but also like, you know, somewhat like, you
    2:46:50 know, it’s intellectual, somewhat like, you know, it’s just like the general internet
    2:46:52 is a certain way.
    2:46:55 And then, and then as, as, as Nathan’s about to describe eloquently, right?
    2:46:57 Like you can, you can elicit certain things out.
    2:46:58 And there’s a lot of history here.
    2:47:00 So we can go through multiple examples and what happened.
    2:47:06 Lama two was a launch that the phrase like too much RLHF or like too much safety was
    2:47:12 a lot, it’s just, that was the whole narrative after Lama two’s chat models released.
    2:47:16 And the examples are sorts of things like you would ask Lama two chat, how do you kill
    2:47:17 a Python process?
    2:47:21 And it would say, I can’t talk about killing because that’s a bad thing.
    2:47:26 And anyone that is trying to design an AI model will probably agree that that’s just
    2:47:28 like, eh, model, you messed up a bit on the training there.
    2:47:31 I don’t think they meant to do this, but this was in the model week.
    2:47:35 So this is not, it didn’t necessarily be a, there’s things called system prompts, which
    2:47:41 are when you’re querying a model, it’s a piece of text that is shown to the model, but not
    2:47:42 to the user.
    2:47:46 So a fun example is your system prompt could be talk like a pirate.
    2:47:50 So no matter what the user says to the model, it’ll respond like a pirate.
    2:47:54 In practice, what they are is you are a helpful assistant.
    2:47:55 You should break down problems.
    2:48:00 If you don’t know about something, don’t tell them your date cutoff is this, today’s date
    2:48:01 is this.
    2:48:03 It’s a lot of really useful context for how can you answer a question well.
    2:48:06 An anthropic publishes their system prompt.
    2:48:07 Yes.
    2:48:08 But I think it’s great.
    2:48:10 And there’s a lot of research that goes into this and one of your previous guests, Amanda
    2:48:15 Askell is like probably the most knowledgeable person that at least in the combination of
    2:48:20 execution and sharing, she’s the person that should talk about system prompts and character
    2:48:21 of models.
    2:48:22 Yeah.
    2:48:27 And then people should read the system prompts because you’re, you’re like trying to nudge
    2:48:31 sometimes through extreme politeness, the model to be a certain way.
    2:48:32 And you could use this for bad things.
    2:48:37 I mean, we’ve done tests, which is what if I tell the model to be a dumb model, like
    2:48:39 which evaluation scores go down.
    2:48:43 And it’s like, we’ll have this behavior where it could sometimes like say, oh, I’m supposed
    2:48:44 to be dumb.
    2:48:48 And sometimes it’s like, it doesn’t affect like math abilities as much, but something
    2:48:52 like a, if you’re trying, it’s just the quality of a human judgment would drop to the floor.
    2:48:57 Let’s go back to post-training specifically, RLHF around llama two was, it was too much
    2:49:01 RLH, too much safety prioritization was baked into the model weights.
    2:49:05 This makes you refuse things in a really annoying way for users.
    2:49:06 It’s not great.
    2:49:12 It caused a lot of like awareness to be attached to RLHF that it makes the models dumb and
    2:49:13 it stigmatize the word.
    2:49:14 It did.
    2:49:15 And AI culture.
    2:49:20 And as the techniques have involved, that’s no longer the case where all of these labs
    2:49:23 have very fine-grained control over what they get out of the models through techniques
    2:49:24 like RLHF.
    2:49:28 So although different labs are differently, different levels, like on the, on once end
    2:49:31 of the spectrum is Google.
    2:49:34 And then like maybe opening eye does less and anthropic does less.
    2:49:38 And then like on the other end of the spectrum is like X AI, but they all have different
    2:49:41 forms of RLHF trying to make them a certain way.
    2:49:48 And the like, the important thing to say is that no matter how you want the model to behave,
    2:49:51 these RLHF and preference tuning techniques also improve performance.
    2:49:56 So on things like math evals and code evals, there is something innate to these, what
    2:49:58 is called contrastive loss functions.
    2:49:59 We could start to get into RLHF here.
    2:50:04 We don’t really need to, but RLHF also boosts performance on anything from a chat task to
    2:50:06 a math problem to a code problem.
    2:50:10 So it is becoming a much more useful tool to these labs.
    2:50:13 So this kind of takes us through the arc of we’ve talked about pre-training, hard to
    2:50:14 get rid of things.
    2:50:18 We’ve talked about post-training and how post-training, if you, you can mess it up.
    2:50:24 It’s a complex multifaceted optimization with 10 to 100 person teams converging at one artifact.
    2:50:27 It’s really easy to not do it perfectly.
    2:50:29 And then there’s the third case, which is what we talked about Gemini.
    2:50:34 The thing that was about Gemini is this was a served product where Google has their internal
    2:50:35 model weights.
    2:50:37 They’ve done all these processes that we talked about.
    2:50:41 And in the served product, what came out after this was that they had a prompt that they
    2:50:45 were rewriting user queries to boost diversity or something.
    2:50:48 And this just made it, the outputs were just blatantly wrong.
    2:50:52 It was a, some sort of organizational failure that had this prompt in that position.
    2:50:55 And I think Google executives probably have owned this.
    2:50:59 I didn’t pay that attention in that detail, but it was just a mess up in execution that
    2:51:01 led to this ridiculous thing.
    2:51:04 But at the system level, the model weights might have been fine.
    2:51:08 So at the very end of the pipeline, there was a rewriting to something like a system
    2:51:09 prompt.
    2:51:14 It was like the system prompt or what is called an industry is like you rewrite prompts.
    2:51:19 So especially for image models, if you’re using Dolly or chat, you can generate you
    2:51:20 an image.
    2:51:25 You’ll say, draw me a beautiful car with these leading image models.
    2:51:28 They benefit from highly descriptive prompts.
    2:51:32 So what would happen is if you do that on chat, a language model behind the scenes will rewrite
    2:51:35 the prompt, say, make this more descriptive.
    2:51:37 And then that has passed to the image model.
    2:51:41 So prompt rewriting is something that is used at multiple levels of industry.
    2:51:42 And it’s used effectively for image models.
    2:51:47 And the Gemini example is just a failed execution.
    2:51:52 Big philosophical question here with RLHF to generalize.
    2:52:00 Where is human input, human in the loop, human data most useful at the current stage?
    2:52:06 For the past few years, the highest cost human data has been in these preferences, which
    2:52:11 is comparing, I would say highest cost and highest total usage.
    2:52:15 So a lot of money has gone to these pairwise comparisons where you have two model outputs
    2:52:19 and a human is comparing between the two of them.
    2:52:22 In earlier years, there was a lot of this instruction tuning data.
    2:52:28 So creating highly specific examples to something like a Reddit question to a domain that you
    2:52:29 care about.
    2:52:31 Language models used to struggle on math and code.
    2:52:34 So you would pay experts in math and code to come up with questions and write detailed
    2:52:37 answers that were used to train the models.
    2:52:43 Now it is the case that there are many model options that are way better than humans at
    2:52:47 writing detailed and eloquent answers for things like model and code.
    2:52:52 So they talked about this with the Lama three release where they switched to using Lama
    2:52:55 three, four or five B to write their answers for math and code.
    2:53:00 But they in their paper talk about how they use extensive human preference data, which
    2:53:03 is something that they haven’t gotten any eyes to replace.
    2:53:06 There are other techniques in industry like constitutional AI where you use human data
    2:53:08 for preferences and AI for preferences.
    2:53:12 And I expect the AI part to scale faster than the human part.
    2:53:18 But among the research that we have access to is that humans are in this kind of preference
    2:53:19 loop.
    2:53:24 So for as reasoning becomes bigger and bigger and bigger, as we said, where’s the role of
    2:53:25 humans in that?
    2:53:27 It’s even less prevalent.
    2:53:32 So it’s the remarkable thing about these reasoning results and especially the deep seek R1 paper
    2:53:37 is this result that they call deep seek R1 zero, which is they took one of these pre-trained
    2:53:40 models, they took deep seek V3 base.
    2:53:44 And then they do this reinforcement learning optimization on verifiable questions or verifiable
    2:53:48 rewards for a lot of questions and a lot of training.
    2:53:51 And these reasoning behaviors emerge naturally.
    2:53:54 So these things like wait, let me see, wait, let me check this.
    2:53:56 Oh, that might be a mistake.
    2:53:59 And they emerge from only having questions and answers.
    2:54:03 And when you’re using the model, the part that you look at is the completion.
    2:54:08 So in this case, all of that just emerges from this large scale RL training.
    2:54:14 And that model, which the weights are available, has no human preferences added into the post
    2:54:15 training.
    2:54:20 There are the deep seek R1 full model has some of this human preference tuning this RLHF
    2:54:22 after the reasoning stage.
    2:54:26 But the very remarkable thing is that you can get these reasoning behaviors.
    2:54:29 And it’s very unlikely that there’s humans writing out reasoning chains.
    2:54:33 It’s very unlikely that they somehow hacked open AI and they got access to open AI.
    2:54:35 Oh, one’s reasoning chains.
    2:54:40 It’s something about the pre-trained language models and this RL training where you reward
    2:54:42 the model for getting the question right.
    2:54:47 And therefore it’s trying multiple solutions and it emerges this chain of thought.
    2:54:53 This might be a good place to, uh, to mention the, uh, the eloquent and the insightful tweet
    2:54:56 of the great and the powerful Andre Carpathian.
    2:55:00 Uh, I think he had a bunch of thoughts, but one of them, last thought, not sure if this
    2:55:04 is obvious, you know, something profound is coming when you’re saying it’s not sure if
    2:55:05 it’s obvious.
    2:55:10 There are two major types of learning in both children and in deep learning.
    2:55:15 There’s one, imitation learning, watch and repeat, ie pre-training, supervised fine
    2:55:19 tuning and two, trial and error learning, reinforcement learning.
    2:55:22 My favorite simple example is AlphaGo.
    2:55:25 One is learning by imitating expert players.
    2:55:28 Two is reinforcement learning to win the game.
    2:55:34 Almost every single shocking result of deep learning and the source of all magic is always
    2:55:35 two.
    2:55:37 Two is significantly more powerful.
    2:55:39 Two is what surprises you.
    2:55:43 Two is when the paddle learns to hit the ball behind the blocks and break out.
    2:55:47 Two is when AlphaGo beats even Lisa Dahl.
    2:55:53 And two is the aha moment when the deep seek or 01, et cetera, discovers that it works
    2:55:59 well to reevaluate your assumptions, backtrack, try something else, et cetera.
    2:56:04 It’s the solving strategies you see this model use in its chain of thought.
    2:56:07 It’s how it goes back and forth thinking to itself.
    2:56:12 These thoughts are emergent, three exclamation points.
    2:56:17 And this is actually seriously incredible, impressive and new, and is publicly available
    2:56:18 and documented.
    2:56:24 The model could never learn this with the imitation because the cognition of the model
    2:56:27 and the cognition of the human label is different.
    2:56:32 The human would never know to correctly annotate these kinds of solving strategies and what
    2:56:34 they should even look like.
    2:56:38 They have to be discovered during reinforcement learning as empiric lens statistically useful
    2:56:39 towards the final outcome.
    2:56:43 Anyway, the AlphaZero sort of metaphor analogy here.
    2:56:48 Can you speak to that, the magic of the chain of thought that he’s referring to?
    2:56:52 I think it’s good to recap AlphaGo and AlphaZero because it plays nicely with these analogies
    2:56:54 between imitation learning and learning from scratch.
    2:57:00 So AlphaGo, the beginning of the process was learning from humans where they started the
    2:57:06 first, this is the first expert level Go player or chess player in DeepMind series of models
    2:57:07 where they had some human data.
    2:57:12 And then why it is called AlphaZero is that there was zero human data in the loop.
    2:57:17 And that change to AlphaZero made a model that was dramatically more powerful for DeepMind.
    2:57:23 So this remove of the human prior, the human inductive bias makes the final system far
    2:57:24 more powerful.
    2:57:29 We mentioned bitter lesson hours ago, and this is all aligned with this.
    2:57:33 And then there’s been a lot of discussion and language models.
    2:57:34 This is not new.
    2:57:40 This goes back to the whole Q* rumors, which if you piece together the pieces is probably
    2:57:46 the start of OpenAI figuring out it’s 01 stuff when last year in November, the Q* rumors
    2:57:47 came out.
    2:57:53 There’s a lot of intellectual drive to know when is something like this going to happen
    2:57:57 with language models because we know these models are so powerful and we know has been
    2:57:59 so successful in the past.
    2:58:05 And it is a reasonable analogy that this new type of reinforcement learning training for
    2:58:08 reasoning models is when the doors open to this.
    2:58:15 We don’t yet have the equivalent of turn 37, which is the famous turn where the DeepMinds
    2:58:18 AI playing ghosts dumped Lisa Dahl completely.
    2:58:22 We don’t have something that’s that level of focal point, but that doesn’t mean that
    2:58:25 the approach to technology is different and the impact of the general training.
    2:58:27 It’s still incredibly new.
    2:58:28 What do you think that point would be?
    2:58:32 What would be move 37 for change of thought for reasoning?
    2:58:33 Scientific discovery.
    2:58:38 You use this sort of reasoning problem and it’s just something we fully don’t expect.
    2:58:40 I think it’s actually probably simpler than that.
    2:58:46 It’s probably something related to computer user robotics rather than science discovery.
    2:58:51 Because the important aspect here is models take so much data to learn.
    2:58:54 They’re not sample efficient.
    2:58:59 They take the entire web over 10 trillion tokens to train on.
    2:59:03 This would take a human thousands of years to read.
    2:59:09 A lot of the stuff models know better than it.
    2:59:11 Humans are way, way, way more sample efficient.
    2:59:13 That is because of the self-play.
    2:59:18 How does a baby learn what its body is as it sticks its foot in its mouth and it says,
    2:59:20 “Oh, this is my body.”
    2:59:25 It sticks its hand in its mouth and it calibrates its touch on its fingers with the most sensitive
    2:59:29 touch thing on its tongue, as how babies learn.
    2:59:32 It’s just self-play over and over and over and over again.
    2:59:38 Now we have something that is similar to that with these verifiable proofs, whether it’s
    2:59:46 a unit test and code or a mathematical verifiable task, generate many traces of reasoning.
    2:59:47 Keep branching them out.
    2:59:48 Keep branching them out.
    2:59:51 Then check at the end, “Hey, which one actually has the right answer?”
    2:59:52 Most of them are wrong.
    2:59:53 Great.
    2:59:54 These are the few that are right.
    2:59:57 Maybe we use some sort of reward model outside of this to select even the best one to preference
    2:59:58 as well.
    3:00:00 Now you’ve started to get better and better at these benchmarks.
    3:00:05 You’ve seen, over the last six months, a skyrocketing in a lot of different benchmarks, right?
    3:00:09 All math and code benchmarks were pretty much solved except for frontier math, which is
    3:00:16 designed to be almost questions that aren’t practical to most people because they’re exam
    3:00:19 level open math problem type things.
    3:00:23 It’s on the math problems that are somewhat reasonable, which is somewhat complicated
    3:00:25 word problems or coding problems.
    3:00:27 It’s just what Dylan is saying.
    3:00:31 The thing here is that these are only with verifiable tasks.
    3:00:35 Earlier I showed an example of the really interesting what happens when chain of thought
    3:00:36 is to a non-verifiable thing.
    3:00:42 It’s just like a human chatting with thinking about what’s novel for humans, a unique thought.
    3:00:48 But this task and form of training only works when it’s verifiable.
    3:00:53 From here, the thought is, “Okay, we can continue to scale this current training method by increasing
    3:00:55 the number of verifiable tasks.”
    3:00:58 In math and coding, coding probably has a lot more to go.
    3:01:02 Math has a lot less to go in terms of what are verifiable things.
    3:01:07 Can I create a solver that then I generate trajectories toward or reasoning traces towards
    3:01:11 and then prune the ones that don’t work and keep the ones that do work?
    3:01:14 Those are going to be solved pretty quickly, but even if you’ve solved math, you have not
    3:01:17 actually created intelligence.
    3:01:24 This is where I think the aha moment of computer use or robotics will come in because now you
    3:01:28 have a sandbox or a playground that is infinitely verifiable.
    3:01:32 Did you … Messing around on the internet, there are so many actions that you can do
    3:01:33 that are verifiable.
    3:01:37 It’ll start off with login to a website, create an account, click a button here, blah, blah,
    3:01:38 blah.
    3:01:41 But it’ll then get to the point where it’s, “Hey, go do a task on Tasker,” or whatever
    3:01:47 these other, all these various task websites, “Hey, go get hundreds of likes,” and it’s
    3:01:48 going to fail.
    3:01:49 It’s going to spawn hundreds of accounts.
    3:01:50 It’s going to fail on most of them.
    3:01:51 But this one got to 1,000.
    3:01:52 Great.
    3:01:53 It’s going to reach the verifiable thing.
    3:01:57 You just keep iterating this loop over and over, and same with robotics.
    3:02:01 That’s where you have an infinite playground of tasks like, “Hey, did I put the ball in
    3:02:02 the bucket?”
    3:02:04 All the way to, “Oh, did I build a car?”
    3:02:09 There’s a whole trajectory to speedrun or what models can do.
    3:02:14 But at some point, I truly think that we’ll spawn models, and initially all the training
    3:02:15 will be in sandboxes.
    3:02:19 But then at some point, the language model pre-training is going to be dwarfed by what
    3:02:24 is this reinforcement learning … You’ll pre-train a multimodal model that can see,
    3:02:28 that can read, that can write, blah, blah, blah, whatever, vision, audio, et cetera.
    3:02:34 But then you’ll have it play in a sandbox infinitely, figure out math, figure out code,
    3:02:37 figure out navigating the web, figure out operating a robot arm.
    3:02:42 And then it’ll learn so much, and the aha moment, I think, will be when this is available
    3:02:45 to then create something that’s not good.
    3:02:46 Like, “Oh, cool.
    3:02:47 Part of it was figuring out how to use the web.
    3:02:52 Now, all of a sudden, it’s figured out really well how to just get hundreds of thousands
    3:02:55 of followers that are real and real engagement on Twitter, because all of a sudden, this
    3:02:57 is one of the things that are verifiable.”
    3:02:59 And maybe not just engagement, but make money.
    3:03:00 Yes, of course.
    3:03:08 I mean, that could be the thing where almost fully automated, it makes $10 million by being
    3:03:12 an influencer selling a product, creating the product.
    3:03:17 And I’m not referring to a hype product, but an actual product, like, “Holy shit.
    3:03:19 This thing created a business.
    3:03:20 It’s running it.
    3:03:23 It’s the face of the business,” that kind of thing.
    3:03:29 Or maybe a number one song, like, it creates the whole infrastructure required to create
    3:03:32 the song, to be the influencer that represents that song, that kind of thing.
    3:03:33 It makes a lot of money.
    3:03:34 That could be the…
    3:03:38 I mean, our culture respects money in that kind of way.
    3:03:40 And it’s verifiable, right?
    3:03:41 It’s verifiable.
    3:03:42 All right.
    3:03:43 The bank account can’t lie.
    3:03:44 Exactly.
    3:03:48 There’s surprising evidence that once you set up the ways of collecting the verifiable
    3:03:55 domain that this can work, there’s been a lot of research before this R1 on math problems.
    3:03:59 And they approach math with language models just by increasing the number of samples.
    3:04:01 So you can just try again and again and again.
    3:04:05 And you look at the amount of times that the language models get it right.
    3:04:10 And what we see is that even very bad models get it right sometimes.
    3:04:14 And the whole idea behind reinforcement learning is that you can learn from very sparse rewards.
    3:04:20 So the space of language and the space of tokens, whether you’re generating language
    3:04:25 or tasks for a robot, is so big that you might say that it’s like, I mean, each…
    3:04:27 The tokenizer of our language model can be like 200,000 things.
    3:04:30 So at each step, it can sample from that big of a space.
    3:04:36 So if it can generate a bit of a signal that it can climb onto, that’s what the whole field
    3:04:39 of RL is around, is learning from sparse rewards.
    3:04:43 And the same thing has played out in math where it’s like very weak models that sometimes
    3:04:44 generate answers.
    3:04:47 We see research already that you can boost their math scores.
    3:04:50 You can do this sort of RL training for math.
    3:04:54 It might not be as effective, but if you take a one billion parameter model, so something
    3:04:59 600 times smaller than deep seek, you can boost its grade school math scores very directly
    3:05:02 with a small amount of this training.
    3:05:05 So it’s not to say that this is coming soon.
    3:05:09 Setting up the verification domains is extremely hard and there’s a lot of nuance in this.
    3:05:15 But there are some basic things that we have seen before where it’s at least expectable
    3:05:17 that there’s a domain and there’s a chance that this works.
    3:05:18 All right.
    3:05:20 So we have fun things happening in real time.
    3:05:26 This is a good opportunity to talk about other reasoning models 01, 03.
    3:05:32 Just now, OpenAI, as perhaps expected, released 03 mini.
    3:05:35 What are we expecting from the different flavors?
    3:05:41 Can you just lay out the different flavors of the old models and from Gemini, the reasoning
    3:05:42 model?
    3:05:44 Something I would say about these reasoning models is we talked a lot about reasoning
    3:05:47 training on math and code.
    3:05:49 And what is done is that you have the base model.
    3:05:51 We’ve talked about a lot on the internet.
    3:05:54 You do this large scale reasoning training with reinforcement learning.
    3:06:00 And then what the deep seek paper detailed in this R1 paper, which for me is one of the
    3:06:06 big open questions on how do you do this is that they did reasoning heavy, but very standard
    3:06:09 post training techniques after the large scale reasoning RL.
    3:06:14 So they did the same things with a form of instruction tuning through rejection sampling,
    3:06:18 which is essentially heavily filtered instruction tuning with some reward models.
    3:06:22 And then they did this RLHF, but they made it math heavy.
    3:06:28 So some of this transfer, we’ve looked at this philosophical example early on.
    3:06:31 One of the big open questions is how much does this transfer?
    3:06:36 If we bring in domains after the reasoning training, are all the models going to become
    3:06:37 eloquent writers by reasoning?
    3:06:39 Is this philosophy stuff going to be open?
    3:06:42 We don’t know in the research of how much this will transfer.
    3:06:45 There’s other things about how we can make soft verifiers and things like this, but there
    3:06:51 is more training after reasoning, which makes it easier to use these reasoning models.
    3:06:52 And that’s what we’re using right now.
    3:06:55 So if we’re going to talk about with three mini and no one, these have gone through these
    3:07:00 extra techniques that are designed for human preferences after being trained to elicit
    3:07:01 reasoning.
    3:07:06 I think one of the things that people are ignoring is Google’s Gemini flash thinking
    3:07:10 is both cheaper than R1 and better.
    3:07:11 And they released it in the beginning of December.
    3:07:12 And nobody’s talking about it.
    3:07:13 No one cares.
    3:07:14 It has a different flavor to it.
    3:07:19 It’s behavior is less expressive than something like 01, and it has fewer tracks than it is
    3:07:20 on.
    3:07:25 Just a model last fall, QWQ, which was their preview reasoning model.
    3:07:29 And in deep sea cut R1 light last fall, where these models kind of felt like they’re on
    3:07:33 rails where they really, really only can do math and code.
    3:07:35 And 01 is it can answer anything.
    3:07:41 It might not be perfect for some tasks, but it’s flexible and has some richness to it.
    3:07:46 And this is kind of the art of like how cooking, like how it was a model a little bit undercooked.
    3:07:50 It’s like, I mean, it’s good to get a model out the door, but it’s hard to gauge and it
    3:07:54 takes a lot of taste to be like, is this a full fledged model?
    3:07:55 Can I use this for everything?
    3:07:58 And they’re probably more similar for math and code.
    3:08:05 My quick read is that Gemini flash is like not trained the same way as 01, but taking
    3:08:08 an existing training stack, adding reasoning to it.
    3:08:11 So taking a more normal training stack and adding reasoning to it.
    3:08:13 And I’m sure they’re going to have more.
    3:08:17 I mean, they’ve done quick releases on Gemini flash, the reasoning, and this is the second
    3:08:20 version from the holidays.
    3:08:25 It’s evolving fast and it takes longer to make this training stack where you’re doing
    3:08:26 this large scale RL.
    3:08:31 Ask it the same question from earlier, the one about the human nature.
    3:08:32 Yeah.
    3:08:35 What was the human nature one?
    3:08:39 The way I can ramble, why I can ramble about this so much is that we’ve been working on
    3:08:45 this at AI2 before 01 was fully available to everyone and before R1, which is essentially
    3:08:47 using this RL training for fine tuning.
    3:08:50 We use this in our like two-loo series of models.
    3:08:56 And you can elicit the same behaviors where you say like weight and cellochon, but it’s
    3:09:01 suddenly in the training process that this kind of reasoning expression is much lighter.
    3:09:04 So you can, there’s essentially a gradation and just how much of this RL training you
    3:09:07 put into it determines how the output looks.
    3:09:15 So we’re now using Gemini 2.0 Flash Thinking Experimental 121.
    3:09:20 It summarized the prompt as humans self-domesticated apes.
    3:09:21 The perspective.
    3:09:22 Okay.
    3:09:23 All right.
    3:09:25 So wait, is this reviewing the reasoning?
    3:09:27 Here’s why this is a novel.
    3:09:28 Okay.
    3:09:29 Click to expand.
    3:09:30 Click to expand.
    3:09:31 Okay.
    3:09:33 Analyze the request.
    3:09:34 Novel is the keyword.
    3:09:37 See how it just looks a little different.
    3:09:39 It looks like a normal output.
    3:09:40 Yeah.
    3:09:41 Yes.
    3:09:43 I mean, in some sense, it’s better structured.
    3:09:45 It makes more sense.
    3:09:50 Oh, when it latched onto human and then it went into organisms and oh, wow.
    3:09:56 Apex predator, focus on domestication, apply domestication to humans, explore the idea
    3:09:57 of self-domestication.
    3:09:58 Not good.
    3:09:59 Not good.
    3:10:02 Where is this going?
    3:10:08 Refine, articulate the insight, greater facial expressiveness and communication ability.
    3:10:09 Yes.
    3:10:10 Yes.
    3:10:11 Plasticity and adaptability.
    3:10:12 Yes.
    3:10:13 Dependence on social groups.
    3:10:14 Yes.
    3:10:15 All right.
    3:10:17 And self-critique and refined further.
    3:10:19 Wow.
    3:10:20 Is this truly novel?
    3:10:23 Is it well supported?
    3:10:25 So on and so forth.
    3:10:29 And the insight it’s getting at is humans are not just social animals, but profoundly
    3:10:32 self-domesticated apes.
    3:10:37 And the self-domestication is the key to understanding our unique cognitive and social abilities.
    3:10:39 Self-domesticated apes.
    3:10:40 Self-domest…
    3:10:42 I prefer the deep-seek response.
    3:10:43 Self-domest…
    3:10:48 I mean, it’s novel, the insight is novel.
    3:10:53 I mean, that’s like a good book title, self-domesticated apes, like there could be a case made for
    3:10:54 that.
    3:10:55 I mean, yeah, it’s cool.
    3:10:58 And it’s revealing the reasoning, it’s magical.
    3:10:59 It’s magical.
    3:11:01 Like, this is really powerful.
    3:11:04 Hello, everyone.
    3:11:09 This is Lex with a quick intermission, recorded after the podcast.
    3:11:14 Since we reviewed responses from DeepSeek R1 and Gemini Flash 2.0 Thinking during this
    3:11:20 conversation, I thought at this moment, it would be nice to insert myself quickly doing
    3:11:28 the same for OpenAI 01 Pro and 03 Mini with the same prompt, the prompt being give one
    3:11:32 truly novel insight about humans.
    3:11:40 And I thought I would, in general, give my vibe check and vibe-based anecdotal report
    3:11:46 on my own experiences with the new 03 Mini model, now that I’ve got a chance to spend
    3:11:49 many hours with it in different kinds of contexts and applications.
    3:11:56 So I would probably categorize this question as, let’s say, open-ended philosophical question.
    3:12:03 And in particular, the emphasis on novelty, I think is a nice way to test one of the capabilities
    3:12:09 of the model, which is come up with something that makes you pause and almost surprise you
    3:12:11 with its brilliance.
    3:12:16 So that said, my general review, after running each of the models on this question a bunch
    3:12:22 of times, is that 01 Pro consistently gave brilliant answers.
    3:12:29 Because they gave me pause and made me think, both cutting in its insight and just really
    3:12:36 nicely phrased with wit, with clarity, with nuance, over and over consistently generating
    3:12:37 the best answers.
    3:12:43 After that is R1, which is less consistent, but again, deliver brilliance.
    3:12:46 Gemini Flash 2.0 Thinking was third.
    3:12:50 And last was 03 Mini, actually.
    3:12:55 It often gave quite a generic answer, at least to my particular sensibilities.
    3:13:01 That said, in a bunch of other applications that I tested for brainstorming purposes,
    3:13:07 it actually worked extremely well and often outperformed R1.
    3:13:11 But on this open-ended philosophical question, it did consistently worse.
    3:13:16 Now, another important element for each of these models is how the reasoning is presented.
    3:13:23 DeepSeek R1 shows the full chain of thought tokens, which I personally just love.
    3:13:27 For these open-ended philosophical questions, it’s really, really interesting to see the
    3:13:28 model think through it.
    3:13:34 But really also just stepping back, me as a person who appreciates intelligence and reasoning
    3:13:40 and reflection, reading these kind of chain of thought raw tokens of R1, there’s something
    3:13:48 genuinely beautiful about observing the path of deliberation in an intelligence system.
    3:13:55 I think we don’t always have that explicitly laid out for us humans, so to see it in another
    3:14:01 intelligence system, the non-linearity of it akin to Ulysses or Finnegan’s Wake by
    3:14:03 James Joyce, it’s just beautiful to watch.
    3:14:09 Anyway, as we discussed in the episode DeepSeek R1, talked about humans being able to convert
    3:14:14 selfish desires into cooperative systems by collectively pretending abstract rules like
    3:14:21 money laws and rights are real, and the shared hallucinations act as games, where competition
    3:14:26 is secretly redirected to benefit the group, turning conflict into society’s fuel.
    3:14:32 Gemini 2.0 Flash Thinking said, “Humans are not just social animals, but self-domesticated
    3:14:37 apes, and this self-domestication is the key to understanding our unique cognitive and
    3:14:38 social abilities.”
    3:14:43 Now, it’s important to say that the chain of thought there was really interesting.
    3:14:50 It was looking through the entire evolution of life on Earth, considering apex predators,
    3:14:55 and considering how from that we ended up to where we are.
    3:14:59 I think that domestication by choice is a really interesting angle.
    3:15:04 Again, it’s one of those things when somebody presents a different angle on a seemingly
    3:15:06 obvious thing, it just makes me smile.
    3:15:12 And the same with DeepSeek R1, that these hallucinations of money, laws, and rights,
    3:15:18 and us collectively pretending like it’s real, and we play games with them that look like
    3:15:22 competition when secretly we’re just cooperating with each other.
    3:15:25 And that is the fuel of progress, beautifully put.
    3:15:30 Now, OpenAI01pro consistently over and over delivered bangers.
    3:15:34 I can go through many of them, but the first one was, “Humans are the only species that
    3:15:40 turns raw materials into symbolic resources, then uses those symbols to reorganize the
    3:15:46 very materials they came from, creating a closed feedback loop between meaning and matter.”
    3:15:52 Here, I just ran it again, banger after banger, I’m telling you, humans are unique among
    3:15:57 known species in that they simultaneously rewrite two layers of reality, the external
    3:16:04 world and their own private mental landscapes, and then merge these two rewritten layers
    3:16:12 into a continuous personal narrative that feels objectively true, feels true.
    3:16:13 This is poetry.
    3:16:23 Okay, and then O3 Mini High for me was smart, fast, actually, and kind of generic.
    3:16:25 Never quite got there for me.
    3:16:31 So here’s the first one I got from O3 Mini, “Humans are not fixed beings, but rather
    3:16:37 ongoing narratives, dynamic stories that would continuously write, edit, and reinterpret.
    3:16:42 This narrative plasticity is more than just memory or self-reflection, it’s an intrinsic
    3:16:48 cognitive process that acts like an internal error correction system, it allows us to adapt
    3:16:53 our identities and values over time in response to new experiences, challenges, and social
    3:16:54 context.”
    3:17:00 Now, it almost sneaks up to something approximating cutting insight with narrative plasticity
    3:17:05 in quotes, but then it goes back to the sort of the generic, I don’t know, all of these
    3:17:08 models are incredible for different reasons.
    3:17:13 There’s a lot of concerns as we discussed in this episode, but there’s a lot of reasons
    3:17:16 to be excited as well.
    3:17:18 And I probably spoken for too long.
    3:17:26 I am severely sleep deprived, borderline delirious, so hopefully some of this made sense.
    3:17:31 And now, dear friends, back to the episode.
    3:17:38 I think to Nathan’s point, when you look at the reasoning models, to me, even when I
    3:17:46 used R1 versus 01, there was that sort of rough edges around the corner feeling, right?
    3:17:50 And flash thinking earlier, I didn’t use this version, but the one from December, and it
    3:17:53 definitely had that rough edges around the corner feeling, right, where it’s just not
    3:17:56 fleshed out in as many ways, right?
    3:18:02 Sure, they added math and coding capabilities via these verifiers in RL, but it feels like
    3:18:07 they lost something in certain areas, and 01 is worse performing than chat in many areas
    3:18:09 as well, to be clear.
    3:18:10 Not by a lot.
    3:18:11 Not by a lot though, right?
    3:18:16 And it’s like R1 definitely felt to me like it was worse than V3 in certain areas, like
    3:18:21 doing this RL expressed and learned a lot, but then it weakened in other areas.
    3:18:28 And so I think that’s one of the big differences between these models, and what 01 offers.
    3:18:30 And then OpenAI has 01 Pro.
    3:18:35 And what they did with 03, which is also very unique, is that they stacked search on top
    3:18:37 of Chain of Thought, right?
    3:18:41 And so Chain of Thought is one thing where it’s able, it’s one chain, it back tracks,
    3:18:46 goes back and forth, but how they solved the ArcAGI challenge was not just the Chain of
    3:18:47 Thought.
    3:18:52 It was also sampling many times, i.e. running them in parallel, and then selecting.
    3:18:54 Is running in parallel actually search?
    3:18:58 Because I don’t know if we have the full information on how 01 Pro works, or like I’m not, I don’t
    3:19:01 have enough information to confidently say that it is search.
    3:19:02 It is parallel samples.
    3:19:03 Yeah.
    3:19:04 And then what?
    3:19:05 And it selects something.
    3:19:06 And we don’t know what the selection function is.
    3:19:11 The reason why we’re debating is because since 01 was announced, there’s been a lot of interest
    3:19:15 in techniques called Monte Carlo research, which is where you will break down the chain
    3:19:17 of thought into intermediate steps.
    3:19:19 We haven’t defined Chain of Thought.
    3:19:23 Chain of Thought is from a paper from years ago where you introduced the idea to ask a
    3:19:27 language model that at the time was much less easy to use.
    3:19:29 You would say, let’s verify step by step.
    3:19:32 And it would induce the model to do this bulleted list of steps.
    3:19:36 Chain of Thought is now almost a default in models, where if you ask it a math question,
    3:19:39 you don’t need to tell it to think step by step.
    3:19:43 And the idea with Monte Carlo research is that you would take an intermediate point in
    3:19:47 that chain, do some sort of expansion, spend more compute, and then just select the right
    3:19:48 one.
    3:19:52 That’s a very complex form of search that has been used in things like Mu Zero and Alpha
    3:19:53 Zero potentially.
    3:19:55 I know Mu Zero does this.
    3:19:59 Another form of search is just asking five different people and then taking the majority
    3:20:00 answers.
    3:20:01 Yes.
    3:20:04 There’s a variety of– it could be complicated, it could be simple.
    3:20:08 We don’t know what it is, just that they are not just issuing one chain of thought in
    3:20:09 sequence.
    3:20:14 They are launching many in parallel, and in the Arc AGI, they launched 1,000 in parallel
    3:20:19 for the one that really shocked everyone, that beat the benchmark.
    3:20:22 They would launch 1,000 in parallel, and then they would get the right answer, like 80 percent
    3:20:25 of the time or 70 percent of the time, 90 maybe even.
    3:20:28 Whereas if they just launched one, it was like 30 percent.
    3:20:29 There are many extensions to this.
    3:20:35 I would say the simplest one is that our language models today have been designed to give the
    3:20:39 right answer the highest percentage of the time in one response.
    3:20:44 We are now opening the door to different ways of running inference on our models in which
    3:20:49 we need to reevaluate many parts of the training process, which normally opens the door to
    3:20:54 more progress, but we don’t know if OpenAI changed a lot, or if just sampling more and
    3:20:57 multiple choices is what they’re doing, or if it’s something more complex, but they changed
    3:21:02 the training and they know that the inference mode is going to be different.
    3:21:09 We’re talking about 01 Pro, $200 a month, and they’re losing money.
    3:21:17 The thing that we’re referring to, this fascinating exploration of the test time compute space,
    3:21:18 is that actually possible?
    3:21:20 Do we have enough compute for that?
    3:21:22 Does the financials make sense?
    3:21:28 The fantastic thing is, and it’s in the thing that I just pulled up earlier, but the cost
    3:21:35 for GPT-3 has plummeted if you scroll up just a few images, I think.
    3:21:39 The important thing about, hey, is cost-limiting factor here.
    3:21:44 My view is that we’ll have really awesome intelligence before we have– AGI before we
    3:21:47 have it permeate throughout the economy.
    3:21:53 This is why that reason is, GPT-3 was trained in what, 2020, 2021, and the cost for running
    3:22:01 inference on it was $60, $70 per million tokens, which is the cost per intelligence was ridiculous.
    3:22:07 Now, as we scaled forward two years, we’ve had a 1200x reduction in cost to achieve the
    3:22:10 same level of intelligence as GPT-3.
    3:22:19 Here on the x-axis is time over just a couple of years, and on the y-axis is log scale dollars
    3:22:23 to run inference on a million tokens.
    3:22:31 You have just a doubt, like a linear decline in log scale from GPT-3 through 3.5 to LAMBA.
    3:22:37 It’s like five cents or something like that now, which is versus $60, 1200x.
    3:22:43 That’s not the exact numbers, but it’s 1200x, I remember that number, is humongous cost
    3:22:44 per intelligence.
    3:22:47 Now, the freak out over deep seek is, oh my god, they made it so cheap.
    3:22:51 Actually, if you look at this trend line, they’re not below the trend line, first of
    3:22:54 all, and at least for GPT-3.
    3:22:58 They are the first to hit it, which is a big deal, but they’re not below the trend line
    3:22:59 as far as GPT-3.
    3:23:00 Now, we have GPT-4.
    3:23:02 What’s going to happen with these reasoning capabilities?
    3:23:07 It’s a mix of architectural innovations, it’s a mix of better data, and it’s going to be
    3:23:10 better training techniques, and all of these different better inference systems, better
    3:23:17 hardware going from each generation of GPU to new generations or ASICs.
    3:23:22 Everything is going to take this cost curve down and down and down and down, and then
    3:23:27 can I just spawn a thousand different LLMs to create a task and then pick from one of
    3:23:31 them or whatever search technique I want, a tree, Monte Carlo tree search, maybe it gets
    3:23:33 that complicated.
    3:23:38 Maybe it doesn’t because it’s too complicated to actually scale, who knows, better lesson.
    3:23:46 The question is, I think, when not if, because the rate of progress is so fast.
    3:23:52 Nine months ago, Dario said nine months ago the cost to train an inference was this, and
    3:23:57 now we’re much better than this, and deep seek is much better than this, and that cost curve
    3:24:02 for GPT-4, which was also roughly $60 per million tokens when it launched, has already
    3:24:10 fallen to $2 or so, and we’re going to get it down to cents, probably, for GPT-4 quality,
    3:24:15 and then that’s the base for the reasoning models like 01 that we have today, and 01 Pro
    3:24:20 is spawning multiple, and 03, and so on and so forth, these search techniques too expensive
    3:24:25 today, but they will get cheaper, and that’s what’s going to unlock the intelligence.
    3:24:28 So get cheaper and cheaper and cheaper.
    3:24:34 The big deep seek R1 release freaked everybody out because of the cheaper.
    3:24:38 One of the manifestations of that is NVIDIA stock plummeted.
    3:24:40 Can you explain what happened?
    3:24:47 And also just explain this moment and whether if NVIDIA is going to keep winning.
    3:24:53 We’re both NVIDIA bulls here, I would say, and in some ways, the market response is reasonable.
    3:24:59 Most of the market, NVIDIA’s biggest customers in the US are major tech companies, and they’re
    3:25:05 spending a ton on AI, and a simple interpretation of deep seek is you can get really good models
    3:25:10 without spending as much on AI, so in that capacity, it’s like, oh, maybe these big tech
    3:25:12 companies won’t need to spend as much on AI and go down.
    3:25:16 The actual thing that happened is much more complex, where there’s social factors, where
    3:25:21 there’s the rising in the app store, the social contagion that is happening, and then I think
    3:25:25 some of it is just like, I don’t trade, I don’t know anything about financial markets,
    3:25:28 but it builds up over the weekend where the social pressure, where it’s like, if it was
    3:25:32 during the weekend, there was multiple days of trading when this was really becoming,
    3:25:37 but it comes on the weekend and then everybody wants to sell, and that is a social contagion.
    3:25:41 I think there were a lot of false narratives, which is like, hey, these guys are spending
    3:25:44 billions on models, and they’re not spending billions on models.
    3:25:49 No one spent more than a billion dollars on a model that’s released publicly.
    3:25:57 GPT-4 was a couple hundred million, and then they’ve reduced the cost with 4TURBO4O, but
    3:25:59 billion dollar model runs are coming.
    3:26:02 This concludes pre-training and post-training, and then the other number is like, hey, deep
    3:26:06 seek didn’t include everything, they didn’t include a lot of the cost goes to research
    3:26:07 and all this sort of stuff.
    3:26:10 A lot of the cost goes to inference, a lot of the cost goes to post-training.
    3:26:11 None of these things were factored.
    3:26:12 It’s research salaries.
    3:26:16 All these things are counted in the billions of dollars that OpenAI is spending, but they
    3:26:21 weren’t counted in the, hey, $6 million, $5 million that deep seek spent.
    3:26:25 So there’s a bit of misunderstanding of what these numbers are, and then there’s also an
    3:26:31 element of, Nvidia has just been a straight line up, and there’s been so many different
    3:26:35 narratives that have been trying to push down, I don’t say push down Nvidia stock, everyone
    3:26:39 is looking for a reason to sell or to be worried.
    3:26:43 It was blackwell delays, there are GPU, there’s a lot of reports, every two weeks there’s
    3:26:48 a new report about their GPUs being delayed.
    3:26:51 There’s the whole thing about scaling laws ending.
    3:26:52 It’s so ironic.
    3:26:53 It lasted a month.
    3:26:58 It was just, literally just, hey, models aren’t getting better.
    3:27:01 They’re just not getting better, there’s no reason to spend more, pre-training scaling
    3:27:02 is dead.
    3:27:08 After that, it’s like 01, 03, R1, R1, and now it’s like, wait, models are progressing
    3:27:09 too fast.
    3:27:14 Slow down the progress, stop spending on GPUs, but the funniest thing I think that comes
    3:27:21 out of this is, Jevon’s paradox is true, AWS pricing for H100s has gone up over the
    3:27:24 last couple of weeks.
    3:27:28 Since a little bit after Christmas, since V3 was launched, AWS H100 pricing has gone
    3:27:29 up.
    3:27:35 H200s are almost out of stock everywhere because H200 has more memory and therefore R1 wants
    3:27:37 that chip over H100, right?
    3:27:40 We were trying to get GPUs on a short notice this week for a demo and it wasn’t that easy.
    3:27:45 We were trying to get just like 16 or 32 H100s for a demo and it was not very easy.
    3:27:52 For people who don’t know, Jevon’s paradox is, when the efficiency goes up, somehow
    3:27:57 magically, counter-intuitively, the total resource consumption goes up as well.
    3:28:03 The semiconductors are like 50 years of Moore’s Law, every two years, half the cost, double
    3:28:07 the transistors, just like clockwork, and it’s slowed down, obviously, but the semiconductor
    3:28:09 industry has gone up the whole time, right?
    3:28:10 It’s been wavy, right?
    3:28:13 There’s obviously cycles and stuff, and I don’t expect AI to be any different, right?
    3:28:18 There’s going to be ebbs and flows, but in AI, it’s just playing out at an insane time
    3:28:19 scale, right?
    3:28:21 It was 2X every two years.
    3:28:24 This is 1200X in like three years, right?
    3:28:28 So it’s like the scale of improvement that is hard to wrap your head around.
    3:28:35 Yeah, I was confused because to me, NVIDIA’s thought on that should have gone up, but maybe
    3:28:39 it went down because there’s kind of suspicion of foul play on the side of China or something
    3:28:40 like this.
    3:28:45 But if you just look purely at the actual principles of play here, it’s obvious, yeah,
    3:28:46 the Jevon’s paradox.
    3:28:52 More progress that AI makes, or the higher the derivative of AI progress is, especially
    3:28:56 because NVIDIA is in the best place, the higher the derivative is, the sooner the market’s
    3:29:01 going to be bigger and expanding, and NVIDIA is the only one that does everything reliably
    3:29:02 right now.
    3:29:05 Because it’s not like an NVIDIA competitor arose.
    3:29:08 It’s another company that’s using NVIDIA.
    3:29:14 Who historically has been a large NVIDIA customer and has press releases about them
    3:29:19 cheering about being China’s biggest NVIDIA customer, right?
    3:29:23 Maybe they’ve quieted down, but I think that’s another element of is that they don’t want
    3:29:29 to say how many GPUs they have because, hey, yes, they have H800s, yes, they have H20s.
    3:29:32 They also have some H100s, which are smuggled in.
    3:29:34 Can you speak to that, to the smuggling?
    3:29:39 What’s the scale of smuggling that’s feasible for a nation state to do for companies?
    3:29:41 Is it possible to…?
    3:29:44 I think there’s a few angles of smuggling here.
    3:29:48 One is, ByteDance arguably is the largest smuggler of GPUs for China.
    3:29:50 China is not supposed to have GPUs.
    3:29:52 ByteDance has over 500,000 GPUs.
    3:29:53 Why?
    3:29:55 Because they’re all rented from companies around the world.
    3:29:56 They rent from Oracle.
    3:29:57 They rent from Google.
    3:30:01 They rent from all these mass and a bunch of smaller cloud companies too, right?
    3:30:03 All the neoclouds of the world.
    3:30:06 They rent so, so many GPUs, they also buy a bunch, right?
    3:30:09 And they do this for mostly what meta does, right?
    3:30:10 Serving TikTok.
    3:30:11 Serving…
    3:30:12 Back to the next best…
    3:30:13 Separate discussion.
    3:30:14 Same as that, right?
    3:30:15 To be clear, that’s today the view.
    3:30:16 Use, right?
    3:30:17 And it’s a valid use, right?
    3:30:19 It’s a dopamine circuit, right?
    3:30:25 Now, that’s theoretically now very much restricted with the AI diffusion rules, which happened
    3:30:27 in the last week of the Biden admin.
    3:30:33 And Trump admin looks like they’re going to keep them, which limits allies, even Singapore.
    3:30:37 Which Singapore is 20% of NVIDIA’s, 20, 30% of NVIDIA’s revenue.
    3:30:41 But Singapore’s had a memoratorium on not building data centers for 15 years, because
    3:30:42 they don’t have enough power.
    3:30:43 So where are they going?
    3:30:44 Oh, yeah.
    3:30:47 I mean, I’m not claiming they’re all going to China, right?
    3:30:48 But a portion are…
    3:30:53 Many are going to Malaysia, including Microsoft and Oracle have big data centers in Malaysia.
    3:30:56 They’re going all over Southeast Asia, probably India as well, right?
    3:31:00 There’s stuff routing, but the diffusion rules are very de facto.
    3:31:04 You can only buy this many GPUs from this country, and you can only rent a cluster of
    3:31:06 this large to companies that are Chinese, right?
    3:31:10 They’re very explicit on trying to stop smuggling, right?
    3:31:17 And a big chunk of it was, “Hey, let’s random company by 16 servers, ship them to China,
    3:31:18 right?”
    3:31:25 Actually, I saw a photo from someone in the semiconductor industry who leads a team for
    3:31:30 networking chips that competes with NVIDIA, and he sent a photo of a guy checking into
    3:31:36 a first-class United flight from San Francisco to Shanghai or Shenzhen with a super micro
    3:31:41 box that was this big, which can only contain GPUs, right?
    3:31:45 And he was booking first-class, because think about it, 3 to 5K for your first-class ticket,
    3:31:51 over-cost 240,000 in the US, 250,000, you sell it for 300,000 in China, wait, you just got
    3:31:54 a free first-class ticket and a lot more money.
    3:31:57 So it’s like, you know, and that’s like small-scale smuggling.
    3:32:01 Most of the large-scale smuggling is like companies in Singapore and Malaysia, like
    3:32:04 routing them around or renting GPUs completely legally.
    3:32:05 I want to jump in.
    3:32:06 How much does this scale?
    3:32:10 I think there’s been some number, like some people that have higher-level economics understanding
    3:32:15 say that as you go from one billion of smuggling to 10 billion, it’s like you’re hiding certain
    3:32:18 levels of economic activity, and that’s the most reasonable thing to me, is that there’s
    3:32:23 going to be some level where it’s so obvious that it’s easier to find this economic activity.
    3:32:32 Yeah, so my belief is that last year, roughly, so NVIDIA made a million H20s, which are legally
    3:32:35 allowed to be shipped to China, which we talked about is better for reasoning, right, inference
    3:32:40 at least, not training, but reasoning inference, and inference generally.
    3:32:47 Then they also had a couple hundred thousand, we think like 200 to 300,000 GPUs were routed
    3:32:50 to China from, you know, Singapore, Malaysia, US, wherever.
    3:32:55 Companies spawned up by 16 GPUs, 64 GPUs, whatever it is, routed, and Huawei is known
    3:32:59 for having spent up a massive network of companies to get the materials they need after they
    3:33:03 were banned in 2018, so it’s not like otherworldly, but I agree, right?
    3:33:07 Nathan’s point is like, hey, you can’t smuggle up $10 billion of GPUs.
    3:33:11 And then the third source, which is just now banned, which wasn’t considered smuggling,
    3:33:19 but is China is renting, I believe from our research, Oracle’s biggest GPU customer is
    3:33:21 ByteDance, right?
    3:33:24 And for Google, I think it’s their second biggest customer, right?
    3:33:27 And you go down the list of clouds, and especially these smaller cloud companies that aren’t
    3:33:30 like the hyperscalers, right?
    3:33:34 Think beyond CoreWeave and Lambda even, there’s a whole, there’s 60 different new cloud companies
    3:33:35 serving NVIDIA GPUs.
    3:33:38 I think ByteDance is renting a lot of these, right?
    3:33:39 All over, right?
    3:33:44 And so these companies are renting GPUs to Chinese companies, and that was completely
    3:33:48 legal up until the diffusion rules, which happened just a few weeks ago.
    3:33:54 And even now, you can rent GPU clusters that are less than 2,000 GPUs, or you can buy GPUs
    3:33:57 and ship them wherever you want if they’re less than 1,500 GPUs, right?
    3:34:02 So it’s like, there are still some ways to smuggle, but yeah, it’s not, as the numbers
    3:34:03 grow, right?
    3:34:07 A hundred-something billion dollars of revenue for NVIDIA last year, 200-something billion
    3:34:08 this year, right?
    3:34:14 And if next year, it could nearly double again, or more than double, based on what we see
    3:34:19 with data center footprints being built out all across the U.S. and the rest of the world,
    3:34:22 it’s going to be really hard for China to keep up with these rules, right?
    3:34:28 Yes, there will always be smuggling, and deep-seek level models of GPT-4 level models, O1 level
    3:34:32 models capable to train on what China can get, even the next year above that.
    3:34:39 But if we speedrun a couple more jumps, right, to billion-dollar models, $10 billion models,
    3:34:44 then it becomes, hey, there is a compute disadvantage for China for training models and serving them.
    3:34:46 And the serving part is really critical, right?
    3:34:48 Deep-seek cannot serve their model today, right?
    3:34:51 It’s completely out of inventory.
    3:34:55 It’s already started falling in the app store, actually, downloads, because you download it,
    3:34:56 you try and sign up.
    3:34:58 They say we’re not taking registrations because they have no capacity, right?
    3:35:02 You open it up, you get like less than five tokens per second if you even get your request
    3:35:03 approved, right?
    3:35:06 Because there’s just no capacity, because they just don’t have enough GPUs to serve
    3:35:08 the model, even though it’s incredibly efficient.
    3:35:13 It would be fascinating to watch the smuggling, because, I mean, there’s drug smuggling, right?
    3:35:20 That’s a market, there’s weapons smuggling, and GPUs will surpass that at some point.
    3:35:25 Chips are highest value per kilogram, probably by far.
    3:35:27 I have another question for you, Don.
    3:35:31 Do you track model API access internationally?
    3:35:36 How easy is it for Chinese companies to use hosted model APIs from the US?
    3:35:38 Yeah, I mean, that’s incredibly easy, right?
    3:35:43 OpenAI publicly stated Deep-seek uses their API, and as they say, they have evidence, right?
    3:35:47 And this is another element of the training regime, is people at OpenAI have claimed that
    3:35:51 it’s a distilled model, i.e., you’re taking OpenAI’s model, you’re generating a lot of
    3:35:55 output, and then you’re training on the output in their model.
    3:35:57 And even if that’s the case, what they did is still amazing, by the way, what Deep-seek
    3:35:58 did efficiency-wise.
    3:36:02 Distillation is standard practice in industry, whether or not, if you’re at a closed lab
    3:36:06 where you care about terms of service and IP closely, you distill from your own models.
    3:36:10 If you’re a researcher and you’re not building any products, you distill from the OpenAI
    3:36:11 models.
    3:36:12 This is a good opportunity.
    3:36:16 Can you explain big picture distillation as a process?
    3:36:17 What is distillation?
    3:36:18 What’s the process of distillation?
    3:36:20 We’ve talked a lot about training language models.
    3:36:24 They are trained on text, and post-training, you’re trying to train on very high-quality
    3:36:29 text that you want the model to match the features of, or if you’re using RL, you’re
    3:36:30 letting the model find its own thing.
    3:36:35 But for supervised fine-tuning, for preference data, you need to have some completions with
    3:36:37 the model is trying to learn to imitate.
    3:36:42 And what you do there is instead of a human data, or instead of the model you’re currently
    3:36:47 training, you take completions from a different, normally more powerful model.
    3:36:53 I think there’s rumors that these big models that people are waiting for, these GPT-5s
    3:36:58 of the world, the Claude III opuses of the world, are used internally to do this distillation
    3:36:59 process at OpenAI.
    3:37:04 There’s also public examples, right, like Meta explicitly stated, not necessarily distilling,
    3:37:09 but they used 405B as a reward model for 70B in their Llama 3.2 or 3.3.
    3:37:11 This is all the same topic.
    3:37:15 So is this ethical, is this legal?
    3:37:22 Why is that Financial Times article headline, say OpenAI says that there’s evidence that
    3:37:26 China’s deep seek used its model to train competitor?
    3:37:30 This is a long, at least in the academic side and research side, it’s a long history
    3:37:32 because you’re trying to interpret OpenAI’s rule.
    3:37:36 OpenAI’s terms of service say that you cannot build a competitor with outputs from their
    3:37:37 model.
    3:37:42 Terms of service are different than a license, which are essentially a contract between organizations.
    3:37:46 So if you have a terms of service on OpenAI’s account, if I violate it, OpenAI can cancel
    3:37:47 my account.
    3:37:51 This is very different than a license that says how you could use a downstream artifact.
    3:37:54 So a lot of it hinges on a word that is very unclear in the AI space, which is what is
    3:37:55 a competitor.
    3:38:01 And then the ethical aspect of it is like, why is it unethical for me to train on your
    3:38:04 model when you can train on the internet’s text, right?
    3:38:12 So there’s a bit of a hypocrisy because OpenAI and potentially most of the companies trained
    3:38:14 on the internet’s text without permission.
    3:38:20 There’s also a clear loophole, which is that I generate data from OpenAI and then I upload
    3:38:25 it somewhere and then somebody else trains on it and the link has been broken.
    3:38:27 They’re not under the same terms of service contract.
    3:38:32 There’s a lot of hip hop, there’s a lot of to be discovered details that don’t make
    3:38:33 a lot of sense.
    3:38:38 This is why a lot of models today, even if they train on zero OpenAI data, you ask the
    3:38:42 model who trained you, it’ll say I was, I am Chad GPT trained by OpenAI because there’s
    3:38:47 so much copy paste of like OpenAI outputs from that on the internet that you just weren’t
    3:38:52 able to filter it out and there was nothing in the RL where they implemented like, hey,
    3:38:56 or post training or SFT, whatever that says, hey, I’m actually a model by Allen Institute
    3:38:58 instead of OpenAI.
    3:38:59 We have to do this if we serve a demo.
    3:39:04 We do research and we use OpenAI APIs because it’s useful and you want to understand post
    3:39:08 training and like our research models, they will say they’re written by OpenAI unless
    3:39:12 we put in the system prop that we talked about that like, I am Tulu, I am a language model
    3:39:14 trained by the Allen Institute for AI.
    3:39:18 And if you ask more people around industry, especially with post training, it’s a very
    3:39:24 doable task to make the model say who it is or to suppress the OpenAI thing.
    3:39:28 So in some levels, it might be that deep sea didn’t care that it was saying that it was
    3:39:29 by OpenAI.
    3:39:32 Like if you’re going to upload model weights, it doesn’t really matter because anyone that’s
    3:39:37 serving it in an application and cares a lot about serving is going to, when serving it,
    3:39:40 if they’re using it for a specific task, they’re going to tailor it to that.
    3:39:42 And it doesn’t matter, but it’s saying it’s Chad GPT.
    3:39:46 Oh, I guess the one of the ways to do that is like a system prompt or something like
    3:39:47 that.
    3:39:49 Like if you’re serving it to say that you’re…
    3:39:50 That’s what we do.
    3:39:55 Like if we host the demo, you say you are Tulu 3, a language model trained by the Allen
    3:39:56 Institute for AI.
    3:40:00 We also are benefited from OpenAI data because it’s a great research tool.
    3:40:07 I mean, do you think there’s any truth and value to the claim, OpenAI’s claim that there’s
    3:40:10 evidence that China’s deep seek, use this model to train?
    3:40:16 I think everyone has benefited regardless because the data’s on the internet.
    3:40:18 And therefore, it’s in your portraying now, right?
    3:40:23 There are like subreddits where people share the best Chad GPT outputs and those are in
    3:40:24 your model…
    3:40:26 I think that they’re trying to ship the narrative.
    3:40:28 They’re trying to protect themselves.
    3:40:32 And we saw this years ago and Bite Dance was actually banned from some OpenAI APIs for training
    3:40:34 on outputs.
    3:40:39 There’s other AI startups that most people, if you’re in the AI culture, they just told
    3:40:43 us they trained on OpenAI outputs and they never got banned.
    3:40:45 That’s how they bootstrapped their early models.
    3:40:49 So it’s much easier to get off the ground using this than to set up human pipelines
    3:40:50 and build a strong model.
    3:40:54 So there’s a long history here and a lot of the communications are seen like narrative
    3:40:55 control.
    3:40:59 Actually, over the last couple of days, we’ve seen a lot of people distill deep seeks model
    3:41:04 into Lama models because the deep seek models are kind of complicated to run inference on
    3:41:08 because they’re a mixture of experts and they’re 600 plus billion parameters and all this.
    3:41:12 And people distilled them into the Lama models because the Lama models are so easy to serve
    3:41:16 and everyone’s built the pipelines and tooling for inference with the Lama models because
    3:41:18 it’s the open standard.
    3:41:21 So we’ve seen a sort of roundabout, right?
    3:41:22 Is it bad?
    3:41:23 Is it illegal?
    3:41:24 Maybe it’s illegal, whatever.
    3:41:25 I don’t know about that.
    3:41:26 But it could break contracts.
    3:41:27 I don’t think it’s illegal.
    3:41:30 In any legal, no one’s going to jail for this.
    3:41:35 I think fundamentally, I think it’s ethical or I hope it’s ethical because the moment
    3:41:42 it becomes, we ban that kind of thing, it’s going to make everybody much worse off.
    3:41:48 And I also actually, this is difficult, but I think you should be allowed to train on
    3:41:49 the internet.
    3:41:52 I know a lot of authors and creators are very sensitive about it.
    3:41:54 That’s a difficult question.
    3:41:57 But the moment you’re not allowed to train on the internet.
    3:41:58 I agree.
    3:42:01 I have a schizo take on how you consult this because it already works.
    3:42:04 I have a reasonable take on it.
    3:42:10 So, you know, Japan has a law which you’re allowed to train on any training data and
    3:42:15 copyrights don’t apply if you want to train a model A, B, Japan has nine gigawatts of
    3:42:17 curtailed nuclear power.
    3:42:23 C, Japan is allowed under the AI diffusion rule to import as many GPUs as they’d like.
    3:42:25 So all we have to do, we have a market here to make.
    3:42:30 We build massive data centers, we rent them to the labs, and then we train models in a
    3:42:33 legally permissible way, and there’s no if, ands, or buts.
    3:42:38 And now, the models have no potential copyright lawsuit from New York Times or anything like
    3:42:39 that.
    3:42:40 No, no, it’s just completely legal.
    3:42:41 Genius.
    3:42:46 The early copyright lawsuits have fallen in the favor of AI training.
    3:42:53 I would say that the long tail of use is going to go in the side of AI, which is if you scrape
    3:42:56 trillions of data, you’re not looking at the trillions of tokens of data.
    3:43:01 You’re not looking and saying this one New York Times article is so important to me.
    3:43:05 But if you’re doing a audio generation for music or image generation, and you say make
    3:43:10 it in the style of X person, that’s a reasonable case where you could figure out what is their
    3:43:12 profit margin on inference.
    3:43:17 I don’t know if it’s going to be the 50/50 of YouTube creator program or something, but
    3:43:19 I would opt into that program as a writer.
    3:43:26 Please, it’s going to be a rough journey, but there will be some solutions like that
    3:43:27 that make sense.
    3:43:30 But there’s a long tail where it’s just on the Internet.
    3:43:36 I think one of the other aspects of that Financial Times article implied, and so that leads to
    3:43:37 a more general question.
    3:43:45 Do you think there’s how difficult is spying, espionage, and stealing of actual secret code
    3:43:48 and data from inside companies?
    3:43:49 How much of that is being attempted?
    3:43:52 Code and data is hard, but ideas is easy.
    3:43:58 Silicon Valley operates on the way that top employees get bought out by other companies
    3:43:59 for a pay raise.
    3:44:04 And a large reason why these companies do this is to bring ideas with them.
    3:44:05 There’s no…
    3:44:09 I mean, in California, there’s rules like certain non-competes or whatever are illegal
    3:44:10 in California.
    3:44:14 And whether or not there’s NDAs and things, that is how a lot of it happens.
    3:44:19 Recently, there was somebody from Gemini who helped make this one million context length,
    3:44:23 and everyone is saying the next llama who, I mean, he went to the meta team, is going
    3:44:26 to have one million context length.
    3:44:29 And that’s kind of how the world works.
    3:44:34 As far as industrial espionage and things, that has been greatly successful in the past.
    3:44:39 The Americans did the Brits, the Chinese have done it to the Americans, and so on and so
    3:44:40 forth.
    3:44:43 It is a fact of life.
    3:44:48 And so to argue, industrial espionage can be stopped is probably unlikely, you can make
    3:44:49 it difficult.
    3:44:54 Even then, there’s all these stories about, “Hey, F35 and F22 have already been given
    3:44:57 to China in terms of design plans and stuff.”
    3:45:03 Code and stuff between, I say, companies, not nation states is probably very difficult.
    3:45:08 But ideas are discussed a lot, whether it be a house party in San Francisco, or a company
    3:45:15 changing employees, or the always the mythical honeypot that always gets talked about, like
    3:45:17 someone gets honeypotted.
    3:45:21 Because everyone working on AI is a single dude who’s in their 20s and 30s.
    3:45:25 Not everyone, but an insane amount of insane percentages.
    3:45:28 So there’s always all these like, and obviously–
    3:45:32 So a honeypotted is like a spy, a female spy approaches you and like–
    3:45:33 Yeah.
    3:45:36 Or male, right?
    3:45:37 It’s San Francisco, right?
    3:45:44 But as a single dude, I will say in his late 20s, we are very easily corrupted, right?
    3:45:47 Not corrupted myself, but you know, we are, we are, right?
    3:45:48 Everybody else, not me.
    3:45:49 Yeah, exactly.
    3:45:50 I’m too oblivious and I am not single.
    3:45:53 So I’m saved from one espionage access.
    3:45:59 Yeah, you have to make sure to close all security vulnerabilities.
    3:46:05 So you do collect a lot of information about each of the mega clusters for each of the
    3:46:08 major AI companies.
    3:46:12 Can you talk about the buildouts for each one that stand out?
    3:46:13 Yeah.
    3:46:17 I think the thing that’s like really important about these mega cluster buildouts is they’re
    3:46:20 completely unprecedented in scale, right?
    3:46:24 US, you know, sort of like data center power consumption has been slowly on the rise and
    3:46:29 it’s gone up to 2%, 3% even through the cloud computing revolution, right?
    3:46:32 Data center consumption as a percentage of total US.
    3:46:34 And that’s been over decades, right, of data centers, et cetera.
    3:46:36 It’s been climbing, climbing slowly.
    3:46:41 But now, 2% to 3%, by the end of this decade, it’s like even under like, you know, when
    3:46:47 I say like 10%, a lot of people that are traditionally by like 20, 28, 20, 30, people traditionally
    3:46:51 non-traditional data center people like that’s nuts.
    3:46:54 But then like people who are in like AI who have like really looked at this at like the
    3:46:58 anthropics and open AI’s are like, that’s not enough, okay?
    3:47:04 But like, you know, this is this is both through globally distributed or distributed throughout
    3:47:07 the US as well as like centralized clusters, right?
    3:47:10 The distributed throughout the US is exciting and it’s the bulk of it, right?
    3:47:17 Like, hey, you know, open AI or, you know, say Meta’s adding a gigawatt, right?
    3:47:20 But most of it is distributed through the US for inference and all these other things,
    3:47:21 right?
    3:47:24 So maybe we should lay out what a what a cluster is.
    3:47:28 So, you know, does this include AWS?
    3:47:32 Maybe it’s good to talk about the different kinds of clusters and what you mean by megaclusters
    3:47:36 and what’s the GPU and what’s the computer and what is not that far back.
    3:47:37 But yeah.
    3:47:39 So like, what do we mean by the clusters?
    3:47:41 No, man, I thought I was about to do the Apple ad, right?
    3:47:43 What’s a computer?
    3:47:49 So, so traditionally data centers and data center tasks have been a distributed systems
    3:47:54 problem that is capable of being spread very far and widely, right?
    3:48:00 I send a request to Google, it gets routed to a data center somewhat close to me.
    3:48:05 It does whatever search ranking recommendation sends a result back, right?
    3:48:09 The nature of the task is changing rapidly in that the task, there’s two tasks that people
    3:48:10 are really focused on now, right?
    3:48:12 It’s not database access.
    3:48:14 It’s not serve me the right page, serve me the right ad.
    3:48:20 It’s now a inference and inference is dramatically different from traditional distributed systems,
    3:48:22 but it looks a lot more simple, similar.
    3:48:24 And then there’s training, right?
    3:48:28 The train inference side is still like, hey, I’m going to put, you know, thousands of GPUs
    3:48:33 and, you know, blocks all around these data centers, I’m going to run models on them,
    3:48:37 you know, user submits a request, gets kicked off, or hey, my service, you know, they submit
    3:48:38 a request to my service, right?
    3:48:41 They’re on Word and they’re like, oh yeah, help me copilot and it kicks it off or I’m
    3:48:45 on my windows, copilot, whatever, Apple intelligence, whatever it is, it gets kicked off to a data
    3:48:46 center, right?
    3:48:51 And that data center does some work and sends it back, that’s inference, that is going to
    3:48:55 be the bulk of compute, but then, you know, and that’s like, you know, there’s thousands
    3:48:59 of data centers that we’re tracking with like satellites and like all these other things.
    3:49:01 And those are the bulk of what’s being built.
    3:49:05 About the scale of, and so that’s like what’s really reshaping and that’s what’s getting
    3:49:11 millions of GPUs, but the scale of the largest cluster is also really important, right?
    3:49:17 When we look back at history, right, like, you know, or through the age of AI, right?
    3:49:22 Like it was a really big deal when they did AlexNet on, I think, two GPUs or four GPUs?
    3:49:23 I don’t remember.
    3:49:24 It was a really big deal.
    3:49:25 It’s a big deal because you use GPUs.
    3:49:29 It’s a big deal to use GPUs and they use multiple, right?
    3:49:32 But then over time, its scale has just been compounding, right?
    3:49:40 And so when you skip forward to GPT-3, then GPT-4, GPT-4 20,000 A100 GPUs on precedented
    3:49:44 run, right, in terms of the size and the cost, right, a couple hundred million dollars on
    3:49:48 a YOLO, right, a YOLO run for GPT-4 and it yielded, you know, this magical improvement
    3:49:53 that was like perfectly in line with what was experimented and just like a log scale, right?
    3:49:55 Oh yeah, they have that plot from the paper.
    3:49:56 The technical report.
    3:49:58 The scaling laws were perfect, right?
    3:50:00 But that’s not a crazy number, right?
    3:50:05 20,000 A100s, roughly each GPU is consuming 400 watts.
    3:50:09 And then when you add in the whole server, right, everything, it’s like 15 to 20 megawatts
    3:50:11 of power, right?
    3:50:15 You know, maybe you could look up what the power of consumption of a human person is
    3:50:19 because the numbers are going to get silly, but like 15 to 20 megawatts was standard data
    3:50:20 center size.
    3:50:21 It was just unprecedented.
    3:50:22 That was all GPUs running one time.
    3:50:23 20 watts was a toaster.
    3:50:24 Yeah.
    3:50:29 A toaster is like a similar power consumption to an A100, right?
    3:50:34 H100 comes around, they increase the power from like 400 to 700 watts and that’s just
    3:50:36 per GPU and then there’s all the associated stuff around it.
    3:50:40 So once you count all that, it’s roughly like 1200 to 1400 watts.
    3:50:43 For everything, networking, CPUs, memory, blah, blah, blah.
    3:50:46 So we should also say, so what’s required?
    3:50:53 You said power, so a lot of power is required, a lot of heat is generated, cooling is required
    3:50:58 and because there’s a lot of GPUs that have to be or CPUs or whatever, they have to be
    3:50:59 connected.
    3:51:00 So there’s a lot of networking.
    3:51:01 Yeah.
    3:51:02 Right.
    3:51:03 Yeah.
    3:51:04 So I think, yeah.
    3:51:05 Sorry for skipping past that.
    3:51:06 And then the data center itself is like complicated, right?
    3:51:10 But these are still standard sized data centers for GPT-4 scale, right?
    3:51:16 Now we step forward to sort of what is the scale of clusters that people built last year,
    3:51:17 right?
    3:51:18 And it ranges widely, right?
    3:51:22 It ranges from like, hey, these are standard data centers and we’re just using multiple
    3:51:25 of them and connecting them together really with a ton of fiber between them, a lot of
    3:51:27 networking, et cetera.
    3:51:29 That’s what OpenAI and Microsoft did in Arizona, right?
    3:51:31 And so they have a, you know, 100,000 GPUs, right?
    3:51:32 Meta, similar thing.
    3:51:36 They took their standard existing data center design and it looks like an H and they connected
    3:51:39 multiple them together.
    3:51:44 And you know, they got to, they first did 16,000 GPUs, 24,000 GPUs total.
    3:51:46 Only 16 of them, 1,000 of them were running on the training run because GPUs are very
    3:51:47 unreliable.
    3:51:51 So they needed to have spares to like swap in and out all the way to like now 100,000
    3:51:54 GPUs that they’re training on Lama 4 on currently, right?
    3:51:56 Like 128,000 or so, right?
    3:52:03 This is, you know, think about 100,000 GPUs with roughly 1,400 watts apiece.
    3:52:05 That’s 140 megawatts, 150 megawatts, right?
    3:52:07 For 128,000, right?
    3:52:11 So you’re talking about, you’ve jumped from 15 to 20 megawatts to 10x, you know, almost
    3:52:17 10x that number, 9x that number to 150 megawatts in two years, right?
    3:52:19 From 2022 to 2024, right?
    3:52:23 And some people like Elon, he admittedly, right, and he says himself got into the game
    3:52:26 a little bit late for pre-training large language models, right?
    3:52:27 XAI was started later, right?
    3:52:32 But then he bet heaven and hell to get his data center up and get the largest cluster
    3:52:33 in the world, right?
    3:52:35 Which is 200,000 GPUs.
    3:52:36 And he did that.
    3:52:39 He bought a factory in Memphis.
    3:52:42 He’s upgrading the substation, but at the same time he’s got a bunch of mobile power
    3:52:45 generation, a bunch of single cycle combined.
    3:52:48 He tapped the natural gas line that’s right next to the factory and he’s just pulling a
    3:52:50 ton of gas, burning gas.
    3:52:52 He’s generating all this power.
    3:52:56 He’s in a factory, in an old appliance factory that’s shut down and moved to China long ago,
    3:52:57 right?
    3:53:00 And he’s got 200,000 GPUs in it.
    3:53:01 And now what’s the next scale, right?
    3:53:02 All the hyperscalers have done this.
    3:53:06 Now the next scale is something that’s even bigger, right?
    3:53:10 And so, you know, Elon, just to stick on the topic, he’s building his own natural gas plant,
    3:53:13 like a proper one right next door.
    3:53:18 He’s deploying tons of Tesla Mega Pack batteries to make the power more smooth and all sorts
    3:53:19 of other things.
    3:53:23 He’s got like industrial chillers to cool the water down because he’s water cooling the
    3:53:24 chips.
    3:53:28 So, all these crazy things to get the clusters bigger and bigger.
    3:53:34 But when you look at, like, say, what OpenAI did with Stargate, that’s that in Arizona,
    3:53:36 in Abilene, Texas, right?
    3:53:38 What they’ve announced at least, right?
    3:53:39 It’s not built, right?
    3:53:40 Elon says they don’t have the money.
    3:53:42 You know, there’s some debates about this.
    3:53:46 But at full scale, at least the first section is like definitely money’s accounted for,
    3:53:47 but there’s multiple sections.
    3:53:52 But at full scale, that data center is going to be 2.2 gigawatts, right, 2200 megawatts
    3:53:59 of power in and roughly like 1.8 gigawatts or 1800 megawatts, yeah, 1800 megawatts of
    3:54:01 power delivered to chips, right?
    3:54:06 Now, this is an absurd scale, 2.2 gigawatts is like more than most cities, right, you
    3:54:13 know, to be clear, delivered to a single cluster that’s connected to do training, right?
    3:54:16 To train these models, to do both the pre-training, the post-training, all of this stuff, right?
    3:54:17 This is insane.
    3:54:18 This is insane.
    3:54:19 This is insane.
    3:54:20 This is a nuclear power plant again.
    3:54:21 And everyone is doing this, right?
    3:54:22 Everyone is doing this, right?
    3:54:24 Meta in Louisiana, right?
    3:54:29 They’re building two natural gas plants, massive ones, and then they’re building this massive
    3:54:31 data center.
    3:54:37 Amazon has like plans for this scale, Google has plans for this scale, XAI has plans for
    3:54:38 this scale, right?
    3:54:42 Like all of these, the guys that are racing, the companies that are racing are racing hard
    3:54:46 and they’re doing multi-gigawatt data centers, right?
    3:54:52 You build this out because they think that, yeah, if I now have, you know, obviously pre-training
    3:54:55 scaling is going to continue, but to some extent, but then also all this post-training
    3:54:58 stuff where you have an RL sandbox for computer use or whatever, right?
    3:55:01 Like, you know, this is where they’re going to, and all these variable domains where they
    3:55:06 just keep learning and learning and learning, self-play, whatever it is, makes the AI so
    3:55:09 much more capable because the line does go up, right?
    3:55:11 As you throw more compute, you get more performance.
    3:55:15 The shirt is about scaling laws, you know, to some extent it is diminishing returns, right?
    3:55:18 You 10x the compute, you don’t get 10x better model, right?
    3:55:21 You get a diminishing returns, but also you get efficiency improvements, so you bend the
    3:55:23 curve, right?
    3:55:27 And these scale of data centers are doing, you know, wreaking, you know, a lot of like
    3:55:29 havoc on the network, right?
    3:55:33 And, you know, Nathan was mentioning there’s, Amazon has tried to buy this nuclear power
    3:55:38 plant, Talon, and if you look at the Talon stock, it’s just like skyrocketing and, you
    3:55:41 know, like they’re building a massive multi-gigawatt data center there, and, you know, you just
    3:55:44 go down the list, there’s so many ramifications.
    3:55:49 One thing is like certain regions of the U.S. transmitting power cost more than actually
    3:55:51 generating it, right?
    3:55:55 Because the grid is so slow to build, and the demand for power and the ability to build
    3:55:59 power and like re-ramping on a natural gas plant or even a coal plant is like easy enough
    3:56:01 to do, but like transmitting the power is really hard.
    3:56:06 So in some parts of the U.S., like in Virginia, it costs more to transmit power than it costs
    3:56:09 to generate it, which is like, you know, there’s all sorts of like second order effects that
    3:56:10 are insane here.
    3:56:13 Can the power grid support this kind of growth?
    3:56:16 You know, Trump’s executive orders, there’s a, there’s a Biden executive order before
    3:56:21 the end of the year, but then Trump had some more executive orders, which hopefully reduced
    3:56:26 the regulations to where, yes, things can be built, but yeah, this is a big, big challenge,
    3:56:27 right?
    3:56:28 Is building enough power fast enough?
    3:56:32 Are you going to basically have a nuclear power plant next to a data center for each
    3:56:33 one of these?
    3:56:38 So, so the fun thing here is this is too slow to build the power plant, to build a power
    3:56:42 plant or to re-configure an existing power plant is too slow.
    3:56:46 And so therefore you must use natural, data center power consumption is flat, right?
    3:56:47 You know, I mean, like it’s, right?
    3:56:49 Which is why nuclear is also good for it.
    3:56:55 Like longterm nuclear is a very natural fit, but you can’t do solar or anything in the
    3:56:57 short term like that.
    3:56:58 Because data center power is like this, right?
    3:57:03 Like you’re telling me, you know, I’m going to buy tens of billions of dollars of GPUs
    3:57:04 and idle them because the power is not being generated.
    3:57:05 Like power is cheap, right?
    3:57:10 Like if you look at the cost of a cluster, less than 20% of it is power, right?
    3:57:14 Most of it is the capital cost and depreciation of the GPUs, right?
    3:57:15 And so it’s like, well, screw it.
    3:57:17 I’ll just like, you know, I’ll just build natural gas plants.
    3:57:18 This is what Metta’s doing in Louisiana.
    3:57:22 This is what OpenAI is doing in Texas and like all these different places.
    3:57:25 They may not be doing it directly, but they are partnered with someone.
    3:57:28 And so there is a couple of hopes, right?
    3:57:32 Like one is, you know, and Elon, what he’s doing in Memphis is like, you know, to the
    3:57:36 extreme, they’re not just using dual combined cycle gas, which is like super efficient.
    3:57:40 He’s also just using single cycle and like mobile generators and stuff, which is less
    3:57:41 efficient.
    3:57:45 But he’s, you know, there’s also like the flip side, which is like solar power generation
    3:57:49 is like this and wind is another like, like this different correlate, you know, different.
    3:57:53 So if you stack both of those, plus you get a big chunk of batteries, plus you have a
    3:57:56 little bit of gas, it is possible to run it more green.
    3:57:59 It’s just the time scales for that is slow, right?
    3:58:04 So people are trying, but, you know, Metta basically said, whatever, don’t care about
    3:58:08 my sustainability pledge, or they’ll buy like a power, it’s called a PPA, power purchasing
    3:58:12 agreement, where there’ll be a massive wind farm or solar farm, like wherever.
    3:58:15 And then they’ll just pretend like those electrons are being consumed by the data center.
    3:58:18 But in reality, they’re paying for the power here and selling it to the grid and they’re
    3:58:20 buying power here.
    3:58:24 And then another thing is like Microsoft quit on some of their sustainability pledges, right?
    3:58:29 Elon, he, what he did with Memphis is objectively somewhat dirty, but he’s also doing it in an
    3:58:34 area where there’s like a bigger natural gas plant right next door and like a sewer next
    3:58:37 or not a sewer, but like a wastewater treatment and a garbage dump nearby, right?
    3:58:41 And he’s obviously made the world a lot more clean than that one data center is going to
    3:58:42 do, right?
    3:58:47 So I think like it’s fine to some extent, and maybe AGI solves, you know, global warming
    3:58:48 and stuff, right?
    3:58:51 Whatever it is, you know, this is, this is sort of the attitude that people at the labs
    3:58:52 have, right?
    3:58:53 Which is like, yeah, it’s great.
    3:58:54 We’ll just use gas, right?
    3:58:58 Because the race is that important and if we lose, you know, that’s way worse, right?
    3:59:05 I should say that I got you asked to visit the Memphis data center and it’s kind of incredible.
    3:59:11 I mean, I visited with Elon, just the teams and the rate of innovation.
    3:59:12 There’s insane.
    3:59:18 Because my sense is that, you know, nobody’s ever done anything of this scale and nobody
    3:59:23 has certainly ever done anything of this scale at the rate that XAI is doing.
    3:59:28 So they’re like figuring out, I mean, it’s all sitting in all these meetings with their
    3:59:29 brainstorming.
    3:59:31 It’s like, it’s insane.
    3:59:32 It’s exciting.
    3:59:35 Because they’re like, they’re trying to figure out what the bottlenecks are, how to remove
    3:59:39 the bottlenecks, how to make sure that, you know, there’s just so many really cool things
    3:59:46 about putting together a data center because, you know, everything has to work.
    3:59:51 It’s the people that do like the sys admin, you know, the machine learning, all that is
    3:59:52 the exciting thing so on.
    3:59:59 But really the people that run everything are the folks that know like the low level software
    4:00:02 and hardware that runs everything, the networking, all of that.
    4:00:06 And so you have to like make sure you have procedures that test everything.
    4:00:07 I think they’re using Ethernet.
    4:00:12 I don’t know how they’re doing the networking, but they’re using NVIDIA Spectrum X Ethernet.
    4:00:16 There’s actually like, I think, yeah, the unsung heroes are the cooling and electrical
    4:00:18 systems, which are just like glossed over.
    4:00:19 Yeah.
    4:00:24 But I think like, like one story that maybe is like exemplifies how insane this stuff
    4:00:29 is, is when you’re training, right, you’re always doing, you’re running through the model
    4:00:32 a bunch, right, in the most simplistic terms, running through the model a bunch.
    4:00:37 And then you’re going to exchange everything and synchronize the weights, right?
    4:00:38 So you’ll do a step.
    4:00:40 This is like a step in model training, right?
    4:00:42 At every step, your loss goes down, hopefully, and it doesn’t always.
    4:00:46 But in the simplest terms, you’ll be computing a lot and then you’ll exchange, right?
    4:00:49 The interesting thing is GPU power is most of it.
    4:00:50 Networking power is some, but it’s a lot less.
    4:00:53 But so while you’re computing, your power for your GPUs is here.
    4:00:57 But then when you’re exchanging weights, if you’re not able to overlap communications
    4:01:01 and compute perfectly, there may be a time period where your GPUs are just idle and you’re
    4:01:04 exchanging weights and you’re like, hey, the model’s updating.
    4:01:07 So you’re exchanging the radiance, you do the model update, and then you start training
    4:01:08 again.
    4:01:10 So the power goes, right?
    4:01:11 And it’s super spiky.
    4:01:16 And so funnily enough, right, like this, when you talk about the scale of data center power,
    4:01:17 right?
    4:01:19 You can blow stuff up so easily.
    4:01:25 And so Meta actually has accidentally upstreamed something to code in PyTorch where they added
    4:01:28 an operator, and I kid you not, whoever made this, like I want to hug the guy because it
    4:01:35 says PyTorch, it’s like PyTorch.powerplantNoBlowUp, equal zero or equal one.
    4:01:38 And what it does, what it does is amazing, right?
    4:01:42 Either, you know, when you’re exchanging the weights, the GPU will just compute fake
    4:01:44 numbers so the power doesn’t spike too much.
    4:01:48 And so then the power plants don’t blow up because the transient spikes screw stuff up.
    4:01:49 Well, that makes sense.
    4:01:51 I mean, you have to do that kind of thing.
    4:01:53 You have to make sure they’re not idle, yeah.
    4:01:56 And Elon’s solution was like, let me throw a bunch of Tesla mega packs and a few other
    4:01:57 things, right?
    4:02:01 Like everyone has different solutions, but like Meta’s at least was publicly and openly
    4:02:05 known, which is just like, set this operator, and what this operator does is it just makes
    4:02:08 the GPUs compute nothing so that the power doesn’t spike.
    4:02:11 But that just tells you how much power you’re working with.
    4:02:12 I mean, it’s insane.
    4:02:13 It’s insane.
    4:02:18 You can almost just go to Google, like scale, like what does X watts do and go through all
    4:02:21 the scales from one watt to a kilowatt to a megawatt.
    4:02:26 And you look and stare at that and you’re how high on the list a gigawatt is, and it’s
    4:02:27 mind-blowing.
    4:02:30 Can you say something about the cooling?
    4:02:37 So I know Elon’s using liquid cooling, I believe in all cases, that’s a new thing,
    4:02:38 right?
    4:02:39 Most of them don’t use liquid cooling.
    4:02:41 Is there something interesting to say about the cooling?
    4:02:42 Yeah, yeah.
    4:02:46 The cooling has been the de facto standard, throw a bunch of metal, heat pipes, et cetera,
    4:02:47 and fans, right?
    4:02:48 And like, that’s cold.
    4:02:50 That’s been enough to cool it.
    4:02:55 People have been dabbling in water cooling, Google’s TPUs are water cooled, right?
    4:02:58 So they’ve been doing that for a few years.
    4:03:01 But with GPUs, no one’s ever done, and no one’s ever done the scale of water cooling
    4:03:04 that Elon just did, right?
    4:03:09 Now next generation Nvidia is for the highest NGPU, it is mandatory water cooling.
    4:03:10 You have to water cool it.
    4:03:14 So Elon did it on this current generation, and that required a lot of stuff, right?
    4:03:19 If you look at some of the satellite photos and stuff of the Memphis facility, there’s
    4:03:22 all these external water chillers that are sitting basically.
    4:03:26 It looks like a semi-truck pod thing, what’s it called, the container.
    4:03:29 But really those are water chillers, and he has like 90 of those water chillers just sitting
    4:03:30 outside.
    4:03:35 90 different containers, right, that chill the water, bring it back to the data center,
    4:03:38 and then you distribute it to all the chips, pull all the heat out, and then send it back,
    4:03:39 right?
    4:03:44 So it’s both a way to cool the chips, but also an efficiency thing, all right?
    4:03:50 And going back to that sort of three vector thing, right, there is memory band with flops
    4:03:51 and interconnect.
    4:03:56 The closer the chips are together, the easier it is to do high-speed interconnects, right?
    4:04:00 And so this is also like a reason why you’re going to go water cooling is because you can
    4:04:06 just put the chips right next to each other, and therefore get higher speed connectivity.
    4:04:14 I got to ask you, so in one of your recent posts, there’s a section called Cluster Measuring
    4:04:17 Contest, so…
    4:04:21 There’s another word there, but I won’t say it, you know?
    4:04:25 What, who’s got the biggest now, and who’s going to have the biggest?
    4:04:29 Today, individual largest is Elon, right?
    4:04:30 Right.
    4:04:31 Elon’s cluster.
    4:04:34 Elon’s cluster in Memphis, 200,000 GPUs, right?
    4:04:39 Meta has like 128,000, Open Air has 100,000, now to be clear, other companies have more
    4:04:42 GPUs than Elon, they just don’t have them in one place, right?
    4:04:44 And for training, you want them tightly connected.
    4:04:50 There’s some techniques that people are researching and working on that let you train across multiple
    4:04:54 regions, but for the most part, you want them all in like one area, right?
    4:04:57 So you can connect them highly with high-speed networking.
    4:05:04 And so, you know, Elon today has 200,000 H100s, 100,000 H100s, 100,000 H200s, right?
    4:05:11 Meta, Open AI, you know, and Amazon all have on the scale of 100,000, a little bit less.
    4:05:14 But next, this year, right, this year, people are building much more, right?
    4:05:19 Anthropic and Amazon are building a cluster of 400,000 Tranium II, which is Amazon-specific
    4:05:22 chip, trying to get away from Nvidia, right?
    4:05:27 You know, Meta and Open AI have scales for hundreds of thousands.
    4:05:33 But by next year, you’ll have like 500,000 to 700,000 GPU clusters, and note those GPUs
    4:05:36 are much higher power consumption than existing ones, right?
    4:05:40 Hopper 700 watts, Blackwell goes to 1200 watts, right?
    4:05:44 So the power per chip is growing and the number of chips is growing, right?
    4:05:45 Nuts.
    4:05:48 You think Elon said he’ll get to a million.
    4:05:50 You think that’s actually feasible?
    4:05:53 I mean, I don’t doubt Elon, right?
    4:05:57 The filings that he has for like, you know, the power plan and the Tesla battery packs,
    4:06:00 it’s clear he has some crazy plans for Memphis.
    4:06:03 Like permits and stuff is open record, right?
    4:06:07 But it’s not quite clear that, you know, what and what the time scales are.
    4:06:09 I just never doubt Elon, right?
    4:06:10 You know, that’s, he’s going to surprise us.
    4:06:12 So what’s the idea with these clusters?
    4:06:18 If you have a million GPUs, what percentage in, let’s say, two, three years is used for
    4:06:25 training and what percent, pre-training and what percent is used for like, for the actual
    4:06:26 computation?
    4:06:28 So these mega clusters make no sense for inference, right?
    4:06:31 You could route inference there and just not train.
    4:06:35 But most of the inference capacity is being, you know, hey, I’ve got a 30 megawatt data
    4:06:36 center here.
    4:06:37 I’ve got 50 megawatts here.
    4:06:38 I’ve got a hundred here, whatever.
    4:06:43 I’ll just throw inference in all of those because the mega clusters, right, multi gigawatt
    4:06:47 data centers, I want to train there because that’s where all of my GPUs are co-located
    4:06:51 where I can put them at a super high networking speed connected together, right?
    4:06:52 Because that’s what you need for training.
    4:06:55 Now with pre-training, this is the old scale, right?
    4:06:59 You could increase parameters, you’d increase data, model gets better.
    4:07:03 That doesn’t apply anymore because there’s not much more data in the pre-training side,
    4:07:04 right?
    4:07:08 Yes, there’s video and audio and image that has not been fully taken advantage of.
    4:07:09 So there’s a lot more scaling.
    4:07:14 But a lot of people like, have taken transcripts of YouTube videos and that gets you a lot
    4:07:15 of the data.
    4:07:17 It doesn’t get you all of the learning value out of the video and image data.
    4:07:20 But, you know, there’s still scaling to be done on pre-training.
    4:07:24 This post-training world is where all the flops are going to be spent, right?
    4:07:27 The model is going to play with itself, it’s going to self-play, it’s going to do verifiable
    4:07:32 tasks, it’s going to do computer use in sandboxes, it might even do simulated robotics things,
    4:07:33 right?
    4:07:39 All of these things are going to be environments where compute is spent in quote unquote post-training.
    4:07:42 But I think it’s going to be good, we’re going to drop the post from post-training.
    4:07:43 Yeah.
    4:07:48 It’s going to be pre-training and it’s going to be training, I think, at some point.
    4:07:54 Because for the bulk of the last few years, pre-training has dwarfed post-training.
    4:07:59 But with these verifiable methods, especially ones that scale potentially infinitely, like
    4:08:04 computer use in robotics, not just math encoding, right, where you can verify what’s happening,
    4:08:07 those infinitely verifiable tasks, it seems you can spend as much compute as you want
    4:08:08 on them.
    4:08:09 Especially at the context length increase.
    4:08:13 Because the end of pre-training is when you increase the context length for these models.
    4:08:17 And we’ve talked earlier in the conversation about how the context length, when you have
    4:08:20 a long input, is much easier to manage than output.
    4:08:25 And a lot of these post-training and reasoning techniques rely on a ton of sampling and it’s
    4:08:27 becoming increasingly long context.
    4:08:31 So it’s just like you’re, effectively, your compute efficiency goes down.
    4:08:36 I don’t, I think FLOPs is the standard for how you measure it, but with RL and you have
    4:08:40 to do all these things where you move your weights around in a different way than at
    4:08:46 pre-training and just generation, it’s going to become less efficient and FLOPs is going
    4:08:48 to be less of a useful term.
    4:08:51 And then as the infrastructure gets better, it’s probably going to go back to FLOPs.
    4:08:56 So all of the things we’ve been talking about is most likely going to be NVIDIA, right?
    4:08:57 Is there any competitors?
    4:09:00 Google, Google, I kind of ignored them.
    4:09:02 Yeah, what’s the story with TPU?
    4:09:03 What’s the story with TPU?
    4:09:04 Like, what’s the…
    4:09:06 TPU is awesome, right?
    4:09:07 It’s great.
    4:09:11 Google is, they’re a bit more tepid on building data centers for some reason.
    4:09:12 They’re building big data centers.
    4:09:13 Don’t get me wrong.
    4:09:17 They actually have the biggest cluster, I was talking about NVIDIA clusters.
    4:09:20 They actually have the biggest cluster, period.
    4:09:23 But the way they do it is very interesting, right?
    4:09:26 They have two sort of data center super regions, right?
    4:09:29 In that the data center isn’t physically, like all of the GPUs aren’t physically on
    4:09:33 one site, but they’re like 30 miles from each other, not GPUs, TPUs, right?
    4:09:37 They have like in Iowa and Nebraska, they have four data centers that are just like right
    4:09:38 next to each other.
    4:09:42 Why doesn’t Google flex its cluster size more often?
    4:09:43 Go to multi data center training.
    4:09:46 There’s the good images in there, so I’ll show you what I mean.
    4:09:49 It’s just semi analysis multi data center.
    4:09:52 So this is like, you know, so this is an image of like what a standard Google data center
    4:09:53 looks like.
    4:09:56 By the way, their data centers look very different than anyone else’s data centers.
    4:09:57 What are we looking at here?
    4:10:00 So these are, yeah, so if you see this image, right?
    4:10:02 In the center, there are these big rectangular boxes, right?
    4:10:05 Those are where the actual chips are kept.
    4:10:10 And then if you scroll down a little bit further, you can see there’s like these water pipes,
    4:10:14 there’s these chiller cooling towers in the top and a bunch of like diesel generators.
    4:10:16 The diesel generators are backup power.
    4:10:21 The data center itself is like look physically smaller than the water chillers, right?
    4:10:25 So the chips are actually easier to like keep together, but then like cooling all the water
    4:10:27 for the water cooling is very difficult, right?
    4:10:32 So Google has like a very advanced infrastructure that no one else has for the TPU.
    4:10:35 And what they do is they’ve like stamped these data center, they’ve stamped a bunch of these
    4:10:37 data centers out in a few regions, right?
    4:10:42 So if you go a little bit further down, this is a Microsoft.
    4:10:43 This is an Arizona.
    4:10:46 This is where GPT-5 quote unquote will be trained, you know.
    4:10:48 If it doesn’t exist already.
    4:10:50 Yeah, it doesn’t exist already.
    4:10:54 But each of these data centers, I’ve shown a couple images of them, they’re like really
    4:10:56 closely co-located in the same region, right?
    4:10:57 Nebraska, Iowa.
    4:11:01 And then they also have a similar one in Ohio complex, right?
    4:11:04 And so these data centers are really close to each other.
    4:11:07 And what they’ve done is they’ve connected them super high bandwidth with fiber.
    4:11:09 And so these are just a bunch of data centers.
    4:11:14 And the point here is that Google has a very advanced infrastructure, very tightly connected
    4:11:16 in a small region.
    4:11:19 So Elon will always have the biggest cluster fully connected, right?
    4:11:21 Because it’s all in one building, right?
    4:11:23 And he’s completely right on that, right?
    4:11:27 Google has the biggest cluster, but you have to spread over three sites and by a significant
    4:11:30 margin, we have to go across multiple sites.
    4:11:33 Why doesn’t Google compete with Nvidia?
    4:11:36 Why don’t they sell TPUs?
    4:11:38 I think there’s a couple problems with it.
    4:11:46 It’s like one, TPU has been a form of allowing search to be really freaking cheap and build
    4:11:48 models for that, right?
    4:11:52 And so like a big chunk of the search TPU purchases or TPU purchases or a big chunk
    4:11:56 of Google’s purchases and usage, all of it is for internal workloads, right?
    4:12:02 Whether it be search, now Gemini, YouTube, all these different applications that they
    4:12:06 have, you know, ads, these are where all their TPUs are being spent, and that’s what they’re
    4:12:08 hyper focused on, right?
    4:12:12 And so there’s certain like aspects of the architecture that are optimized for their
    4:12:15 use case that are not optimized elsewhere, right?
    4:12:19 One simple one is like they’ve open sourced a Gemma model and they called it Gemma 7B,
    4:12:20 right?
    4:12:24 But then it’s actually eight billion parameters because the vocabulary is so large, and the
    4:12:28 reason they made the vocabulary so large is because TPUs like matrix multiply unit
    4:12:32 is massive, because that’s what they’ve like sort of optimized for.
    4:12:35 And so they decided, oh, I’ll just make the vocabulary large too, even though it makes
    4:12:38 no sense to do so in such a small model, because that fits on their hardware.
    4:12:42 So Gemma doesn’t run it as efficiently on a GPU as a Lama does, right?
    4:12:46 But vice versa, Lama doesn’t run as efficiently on a TPU as a Gemma does, right?
    4:12:50 And it’s so like there’s like certain like aspects of like hardware software co-design.
    4:12:53 So all their search models are their ranking and recommendation models, all these different
    4:12:59 models that are AI, but not like gen AI, right, have been hyper-optimized with TPUs forever.
    4:13:03 The software stack is super optimized, but all of this software stack has not been released
    4:13:06 publicly at all, right?
    4:13:09 Very small portions of it, Jax and XLA have been, but like the experience when you’re
    4:13:13 inside of Google and you’re training on TPUs as a researcher, you don’t need to know anything
    4:13:15 about the hardware in many cases, right?
    4:13:21 It’s like pretty beautiful, but as soon as you step outside, a lot of them go back.
    4:13:23 They leave Google and then they go back.
    4:13:26 Yeah, they’re like, they leave and they start a company because they have all these amazing
    4:13:29 research ideas and they’re like, wait, infrastructure is hard.
    4:13:30 Software is hard.
    4:13:31 And this is on GPUs.
    4:13:34 Or if they try to use TPUs, same thing, because they don’t have access to all this code.
    4:13:37 And so it’s like, how do you convince a company whose golden goose is searched where they’re
    4:13:43 making hundreds of billions of dollars from to start selling TPUs, which they used to
    4:13:50 only buy a couple billion of, you know, I think in 2023, they bought like a couple billion.
    4:13:53 And now they’re buying like 10 billion to 15 billion dollars worth, but how do you convince
    4:13:56 them that they should just buy like twice as many and figure out how to sell them and
    4:13:57 make 30 billion dollars?
    4:14:00 Who cares about making 30 billion dollars?
    4:14:04 Won’t that 30 billion exceed actually the search profit eventually?
    4:14:10 Oh, I mean, like, you’re always going to make more money on services than on hardware.
    4:14:14 I mean, like, yeah, like, to be clear, like today, people are spending a lot more on hardware
    4:14:16 than they are the services, right?
    4:14:19 Because the hardware front runs the service spend.
    4:14:24 But like, if there’s no revenue for AI stuff or not enough revenue, then obviously like
    4:14:26 it’s going to blow up, right?
    4:14:28 People won’t continue to spend on GPUs forever.
    4:14:31 And an invidious trying to move up the stack with like software that they’re trying to
    4:14:33 sell and license and stuff, right?
    4:14:38 But Google has never had that like DNA of like, this is a product we should sell, right?
    4:14:42 The Google Cloud does it, which is a separate organization from the TPU team, which is a
    4:14:45 separate organization from the DeepMind team, which is a separate organization from the
    4:14:46 search team, right?
    4:14:47 There’s a lot of bureaucracy.
    4:14:50 Wait, Google Cloud is a separate team than the TPU team?
    4:14:54 Technically TPU sits under infrastructure, which sits under Google Cloud.
    4:15:01 But like Google Cloud, like for like renting stuff and TPU architecture are very different
    4:15:02 goals, right?
    4:15:04 And hardware and software, like all of this, right?
    4:15:09 Like the Jack’s XLA teams do not serve Google’s customers externally, whereas Nvidia’s various
    4:15:14 CUDA teams for like things like Nickel serve external customers, right?
    4:15:19 The internal teams like Jackson XLA and stuff, they more so serve DeepMind and search, right?
    4:15:21 And so their customers different, they’re not building a product for them.
    4:15:29 Do you understand why AWS keeps winning versus Azure for cloud versus Google Cloud?
    4:15:32 Yeah, Google Cloud is tiny, isn’t it, relative to AWS?
    4:15:34 Google Cloud is third, yeah, yeah.
    4:15:37 Microsoft is the second biggest, but Amazon is the biggest, right?
    4:15:42 And Microsoft deceptively sort of includes like Microsoft Office 365 and things like
    4:15:43 that.
    4:15:44 It’s enterprise-wide licenses.
    4:15:46 So in reality, the gulf is even larger.
    4:15:48 Microsoft is still second though, right?
    4:15:49 Amazon is way bigger.
    4:15:50 Why?
    4:15:52 Because using AWS is better and easier.
    4:15:53 And in many cases, it’s cheaper.
    4:15:54 It was first.
    4:15:55 And it’s first.
    4:15:56 It was first.
    4:15:57 Yeah, but there’s a lot of things that are first that…
    4:15:58 Well, it’s easier.
    4:16:00 It’s harder to switch than it is to…
    4:16:01 Yeah, okay.
    4:16:02 But AWS is…
    4:16:03 There’s big fees for switching too.
    4:16:06 AWS generates over 80% of Amazon’s profit.
    4:16:07 I think over 90%.
    4:16:08 That’s insane.
    4:16:12 The distribution centers are just like, one day we’ll decide to make money from this.
    4:16:13 But they haven’t yet, right?
    4:16:14 Like they make tiny little profit from it.
    4:16:17 One day of Amazon Prime will triple in price.
    4:16:22 You would think they would improve AWS interface because it’s like horrible.
    4:16:25 It’s like clunky, but everybody’s…
    4:16:28 Yeah, one would think.
    4:16:31 I think actually Google’s interface is sometimes nice, but it’s also like they don’t care about
    4:16:35 anyone besides their top customers and like their customer service sucks and like they
    4:16:36 have a lot less.
    4:16:39 I mean, all these companies, they optimized for the big customers.
    4:16:40 Yeah.
    4:16:41 It’s supposed to be for business.
    4:16:44 But Amazon has always optimized for the small customer too though, right?
    4:16:47 Like obviously they optimize a lot for the big customer, but like when they started,
    4:16:51 they just would go to like random Bay Area things and give out credits, right?
    4:16:52 And then they like…
    4:16:53 Or just put in your credit card and use us, right?
    4:16:54 Like back in the early days.
    4:16:55 So they’ve always…
    4:16:56 The business has grown with them, right?
    4:16:57 In Virgin.
    4:16:58 So like, why does Amazon…
    4:17:02 Like why is Snowflake all over Amazon because Snowflake in the beginning when Amazon didn’t
    4:17:04 care about them was still using Amazon, right?
    4:17:08 And then of course one day Snowflake and Amazon has a super huge partnership, but like this
    4:17:11 is the case like Amazon’s user experience and quality is better.
    4:17:15 Also a lot of the silicon they’ve engineered makes them have a lower cost structure and
    4:17:21 traditional cloud storage, CPU, networking, that kind of stuff than in databases, right?
    4:17:27 Like I think like four of Amazon’s top five revenue products, margin products are like
    4:17:31 gross profit products or all database related products like Redshift and like all these
    4:17:32 things, right?
    4:17:38 So Amazon has a very like good silicon to a user experience like entire pipeline with
    4:17:39 AWS.
    4:17:40 I think Google…
    4:17:42 They’re silicon teams?
    4:17:46 Yeah, they have awesome silicon internally, TPU, the YouTube chip, some of these other
    4:17:48 chips that they’ve made.
    4:17:52 And the problem is they’re not serving external customers, they’re serving internal customers,
    4:17:53 right?
    4:17:56 I mean, NVIDIA’s entire culture is designed from the bottom up to do this.
    4:18:01 There’s this recent book, The NVIDIA Way, by Take Him, that details this and how they
    4:18:07 look for future opportunities and ready their CUDA software libraries to make it so that
    4:18:13 new applications of high performance computing can very rapidly be evolved on CUDA and NVIDIA
    4:18:14 chips.
    4:18:18 And that is entirely different than Google as a services business.
    4:18:19 Yeah.
    4:18:22 NVIDIA, it should be said as a truly special company.
    4:18:26 Like, I mean, they, the whole, the culture, everything, they’re really optimized for that
    4:18:27 kind of thing.
    4:18:33 Which is there’s somebody that can even challenge NVIDIA hardware-wise, Intel, AMD?
    4:18:35 I really don’t think so.
    4:18:42 We went through a very long process of working with AMD on training on their GPUs and inference
    4:18:43 and stuff.
    4:18:44 And they’re decent.
    4:18:46 Their hardware is better in many ways than in NVIDIAs.
    4:18:48 The problem is their software is really bad.
    4:18:50 And I think they’re getting better, right?
    4:18:54 They’re getting better faster, but they’re just, the gulf is so large.
    4:18:58 Even like, they don’t spend enough resources on it or have it historically, right?
    4:19:02 Maybe they’re changing their tune now, but for multiple months, we were submitting the
    4:19:03 most bugs, right?
    4:19:05 Like, ah, semianalysis, right?
    4:19:06 Like, what the fuck?
    4:19:08 Like, why are we submitting the most bugs, right?
    4:19:11 Because they only, and they only cared about their biggest customers.
    4:19:15 And so they’d ship them a private image, blah, blah, blah, and it’s like, okay, but like,
    4:19:20 I am just using PyTorch and I want to use the publicly available libraries and you don’t
    4:19:21 care about that, right?
    4:19:25 So, they’re getting better, but like, I think AMD is not possible, Intel’s obviously in
    4:19:29 dire straits right now and needs to be saved somehow.
    4:19:33 Very important for national security, for American, you know, technology.
    4:19:36 Can you explain the obvious, so why are they in dire straits?
    4:19:39 Going back to earlier, only three companies can R&D, right?
    4:19:45 Taiwan, Sinshu, Samsung, Pyongyang, and then Intel Hillsboro.
    4:19:46 Samsung’s doing horribly.
    4:19:47 Intel’s doing horribly.
    4:19:50 We could be in a world where there’s only one company that can do R&D and that one company
    4:19:52 already manufactures most of chips.
    4:19:55 They’ve been gaining market share anyways, but like, that’s a critical thing, right?
    4:19:58 So what happens to Taiwan means the rest of the world’s semiconductor industry and therefore
    4:20:01 tech relies on Taiwan, right?
    4:20:03 And that’s obviously precarious.
    4:20:08 As far as like Intel, they’ve been slowly steadily declining.
    4:20:13 They were on top of servers and PCs, but now Apple’s done the M1 and Nvidia’s releasing
    4:20:17 a PC chip and Qualcomm’s releasing a PC chip and in servers, hyperscalers are all making
    4:20:23 their own ARM based server chips and Intel has no AI silicon like wins, right?
    4:20:25 They have very small wins.
    4:20:29 And they never got into mobile because they said no to the iPhone and like, all these
    4:20:32 things have compounded and they’ve lost their process technology leadership, right?
    4:20:35 They were ahead for 20 years and now they’re behind by at least a couple years, right?
    4:20:40 And they’re trying to catch back up and we’ll see if like their 18A, 14A strategy works
    4:20:42 out where they try and leapfrog TSMC.
    4:20:46 But like, and Intel is just like losing tons of money anyways, right?
    4:20:49 And they just fired their CEO, even though the CEO was the only person who understood
    4:20:50 the company.
    4:20:51 Well, right, we’ll see.
    4:20:56 He was not the best, but he was pretty good, relatively, technical guy.
    4:20:57 Where does Intel make most of its money?
    4:20:58 The CPUs, though.
    4:21:01 PCs and data center CPUs, yeah, but data center CPUs are all going cloud.
    4:21:05 And Amazon, Microsoft, Google are making our ARM based CPUs.
    4:21:10 And then PC side, AMD’s gained market share, Nvidia’s launching a chip.
    4:21:11 That’s not going to be success, right?
    4:21:15 Media tech, Qualcomm ever launched chips, Apple’s doing well, right?
    4:21:19 Like they could get squeezed a little bit in PC, although PC generally, I imagine will
    4:21:21 just stick Intel mostly for Windows side.
    4:21:25 Let’s talk about the broad AI race, who do you think wins?
    4:21:26 Who talked about Google?
    4:21:31 The leader, the default leader has been Google because of their infrastructure advantage.
    4:21:35 Well, like in the news, open AI is the leader.
    4:21:36 They’re the leading in the narrative.
    4:21:37 They have the best model.
    4:21:40 They have the best model that people can use and they’re experts.
    4:21:42 And they have the most AI revenue.
    4:21:43 Yeah.
    4:21:45 Open AI is winning, right?
    4:21:48 So who’s making money on AI right now?
    4:21:49 Is anyone making money?
    4:21:53 So accounting profit wise, Microsoft is making money, but they’re spending a lot of catbacks,
    4:21:54 right?
    4:21:56 You know, and that gets depreciated over years.
    4:22:01 Meta is making tons of money, but with recommendation systems, which is AI, but not with Lama, right?
    4:22:04 Lama’s losing money for sure, right?
    4:22:08 I think anthropic and open AI are obviously not making money because otherwise they wouldn’t
    4:22:09 be raising money, right?
    4:22:12 They have to raise money to build more, right?
    4:22:14 Well, theoretically, they are making money, right?
    4:22:18 You spent a few hundred million dollars on GPT-4 and it’s doing billions in revenue.
    4:22:22 So obviously it’s making money, although they had to continue to research to get the compute
    4:22:24 efficiency wins, right?
    4:22:30 And move down the curve to get that 1200X that has been achieved for GPT-3.
    4:22:35 Maybe we’re only at a couple hundred X now, but with GPT-4 Turbo and 4.0 and there’ll be
    4:22:40 another one probably cheaper than GPT-4.0 even that comes out at some point.
    4:22:42 And that research costs a lot of money, right?
    4:22:43 Yep, exactly.
    4:22:48 That’s the thing that I guess is not talked about with the cost, that when you’re referring
    4:22:54 to the cost of the model, it’s not just the training or the test runs, it’s the actual
    4:22:55 research, the manpower.
    4:22:59 Yeah, to do things like reasoning right now that that exists, they’re going to scale it,
    4:23:00 they’re going to do a lot of research.
    4:23:07 I think people focus on the payback question, but it’s really easy to just be like, well,
    4:23:10 GDP is humans and industrial capital, right?
    4:23:14 And if you can make intelligence cheap, then you can grow a lot, right?
    4:23:18 That’s the sort of dumb way to explain it, but that’s sort of what basically the investment
    4:23:19 thesis is.
    4:23:24 I think only NVIDIA is actually making tons of money and other hardware vendors.
    4:23:28 The hyperscalers are all on paper making money, but in reality, they’re like spending a lot
    4:23:32 more on purchasing the GPUs, which you don’t know if they’re still going to make this much
    4:23:35 money on each GPU in two years, right?
    4:23:41 You don’t know if all of a sudden, OpenAI goes kapoof, and now Microsoft has like hundreds
    4:23:46 of thousands of GPUs they were renting to OpenAI that they paid for themselves with
    4:23:50 their investment in them, that no longer have a customer, right?
    4:23:53 This is always a possibility, I don’t believe that, right?
    4:23:57 I think OpenAI will keep raising money, I think others will keep raising money because
    4:24:02 the investments, the returns from it are going to be eventually huge once we have AGI.
    4:24:05 So do you think multiple companies will get, let’s assume-
    4:24:07 I don’t think it’s going to take all.
    4:24:08 Okay.
    4:24:12 So it’s not, let’s not call it AGI or whatever, it’s like a single day.
    4:24:13 It’s a gradual thing.
    4:24:15 Super powerful AI.
    4:24:20 But it’s a gradually increasing set of features that are useful and make a lot of money.
    4:24:22 Rapidly increasing set of features.
    4:24:25 Rapidly increasing set of features.
    4:24:32 So you’re saying a lot of companies will be, it just seems absurd that all of these companies
    4:24:35 are building gigantic data centers.
    4:24:39 There are companies that will benefit from AI but not because they trained the best model.
    4:24:44 Meta has so many avenues to benefit from AI and all of their services, people are there,
    4:24:47 people spend time on Meta’s platforms and it’s a way to make more money per user per
    4:24:48 hour.
    4:24:58 Yeah, it seems like Google X/XAI/Tesla, important to say, and then Meta will benefit not directly
    4:25:06 from the AI like the LLMs, but from the intelligence, like the additional boost of intelligence to
    4:25:07 the products they already sell.
    4:25:12 So whether that’s the recommendation system or for Elon, who’s been talking about Optimus,
    4:25:16 the robot, potentially the intelligence of the robot.
    4:25:20 And then you have personalized robots in the home, that kind of thing.
    4:25:25 He thinks it’s a 10 plus trillion dollar business, which-
    4:25:30 At some point maybe, not soon, but who knows what robotics-
    4:25:35 Let’s do a TAM analysis, right, 8 billion humans and let’s get 8 billion robots, right,
    4:25:39 and let’s pay them the average salary and yeah, there we go, 10 trillion.
    4:25:40 More than 10 trillion.
    4:25:46 Yeah, I mean, if there’s robots everywhere, why does it have to be just eight billion
    4:25:47 robots?
    4:25:48 Yeah, of course, of course.
    4:25:51 I’m gonna have like one robot, you’re gonna have like 20.
    4:25:54 Yeah, I mean, I see a use case for that.
    4:25:59 So yeah, I guess the benefit would be in the products as well, which is why OpenAI is in
    4:26:00 a trickier position because they-
    4:26:04 All of the value of OpenAI right now as a brand is in ChatGPT.
    4:26:09 And there is actually not that, for most users, there’s not that much of a reason that they
    4:26:14 need OpenAI to be spending billions and billions of dollars on the next best model when they
    4:26:17 could just license Lama 5 and Furby Way cheaper.
    4:26:22 So that’s kind of like, ChatGPT is an extremely valuable entity to them.
    4:26:25 But like, they could make more money just off that.
    4:26:29 The Chat application is clearly like does not have tons of room to continue, right?
    4:26:30 Like the standard Chat, right?
    4:26:33 Where you’re just using it for a random question and stuff, right?
    4:26:36 The cost continues to collapse, V3 is the latest one.
    4:26:37 It’ll go down to ads.
    4:26:39 Biggest, but it’s gonna get supported by ads, right?
    4:26:44 Like, you know, Meta already serves 405B and probably loses the money, but at some point,
    4:26:48 you know, they’re going to get, the models are gonna get so cheap that they can just
    4:26:50 serve them for free with ad supported, right?
    4:26:53 And that’s what Google is going to be able to do, and that’s obviously they’ve got a
    4:26:54 bigger reach, right?
    4:26:56 So Chat is not going to be the only use case.
    4:27:00 It’s like these reasoning, code, agents, computer use.
    4:27:03 All this stuff is where OpenAI has to actually go to make money in the future.
    4:27:04 Otherwise, they’re kaputts.
    4:27:09 But X, Google and Meta have these other products.
    4:27:15 So doesn’t, isn’t it likely that OpenAI and Anthropic disappear eventually?
    4:27:18 Unless they’re so good at models, they are.
    4:27:19 But it’s such a cutting edge.
    4:27:20 I mean, yes.
    4:27:22 It depends on where you think AI capabilities are going.
    4:27:24 You have to keep winning.
    4:27:25 Yes.
    4:27:26 You have to keep winning.
    4:27:31 As you climb, even if the AI capabilities are going super rapidly awesome into the direction
    4:27:39 of AGI, like there’s still a boost for X in terms of data, Google in terms of data, Meta
    4:27:44 in terms of data, in terms of other products and the money and like there’s just huge amounts
    4:27:45 of money.
    4:27:46 But the whole idea is human data is kind of tapped out.
    4:27:47 We don’t care.
    4:27:48 We don’t care.
    4:27:49 We don’t care about self-play, verifiable tasks.
    4:27:50 Yes, the self-play.
    4:27:51 Think about AWS.
    4:27:52 Which is an R&D problem.
    4:27:56 AWS does not make a lot of money on each individual machine.
    4:28:01 And the same can be said for the most powerful AI platform, which is even though the calls
    4:28:06 to the API are so cheap, there’s still a lot of money to be made by owning that platform.
    4:28:10 And there’s a lot of discussions as it’s the next compute layer.
    4:28:14 You have to believe that, and there’s a lot of discussions that tokens and tokenomics
    4:28:18 and LLM APIs are the next compute layer or the next paradigm for the economy, kind of
    4:28:20 like energy and oil was.
    4:28:26 But there’s also like, you have to sort of believe that APIs and chat are not where AI
    4:28:27 is stuck, right?
    4:28:30 It is actually just tasks and agents and robotics and computer use.
    4:28:36 And those are the areas where all the value will be delivered, not API, not chat application.
    4:28:43 Is it possible you have, I mean, it all just becomes a commodity and you have the very
    4:28:49 thin wrapper, like perplexity, just joking.
    4:28:51 There are a lot of wrappers making a lot of money.
    4:28:52 Yeah.
    4:28:56 But do you think it’s possible that people will just even forget what open AI and the
    4:28:57 thropic is?
    4:29:00 And just because there’ll be wrappers around the API and it just dynamically…
    4:29:04 If model progress is not rapid, yeah, it’s becoming a commodity, right?
    4:29:09 DeepSeq V3 shows this, but also the GPT-3 chart earlier, chart showed this, right?
    4:29:12 Lama3B is 1200X cheaper than GPT-3.
    4:29:17 Any GPT-3, like anyone whose business model is GPT-3 level capabilities is dead.
    4:29:20 Anyone whose business model is GPT-4 level capabilities is dead, right?
    4:29:25 It is a common saying that the best businesses being made now are ones that are predicated
    4:29:26 on models getting better, right?
    4:29:32 Which would be like wrappers, thing that is riding the wave of the models.
    4:29:35 The short term, the company that could make the most money is the one that figures out
    4:29:40 what advertising targeting method works for language model generations.
    4:29:45 We have the meta ads, which are hyper-targeted in feed, not within specific pieces of content.
    4:29:49 And we have search ads that are used by Google and Amazon has been rising a lot on search.
    4:29:56 But within a return from chat GPT, it is not clear how you get a high-quality placed ad
    4:29:57 within the output.
    4:30:04 And if you can do that with model costs coming down, you can just get super high revenue.
    4:30:07 That revenue is totally untapped and it’s not clear technically how it is done.
    4:30:12 Yeah, that is sort of the AdSense innovation that Google did.
    4:30:18 The one day you’ll have in GPT output an ad and that’s going to make billions of dollars.
    4:30:20 And it could be very subtle.
    4:30:21 It could be in conversation.
    4:30:22 We have voice mode now.
    4:30:27 It could be some way of making it so the voice introduces certain things.
    4:30:30 It’s much harder to measure and it takes imagination, but yeah.
    4:30:36 And it wouldn’t come off shady so you will receive public blowback, that kind of thing.
    4:30:40 You have to do it loud enough to where it’s clear as an ad and balance all of that.
    4:30:43 So that’s the open question they’re trying to solve.
    4:30:45 Anthropic and OpenAI, they need to…
    4:30:46 They might not say that they’re trying…
    4:30:47 I don’t think they care about that at all.
    4:30:49 They don’t care about it right now.
    4:30:50 I think it’s places like…
    4:30:51 I think they’re purely…
    4:30:52 Purely…
    4:30:53 They’re experimenting on that more.
    4:30:54 Oh, interesting.
    4:30:55 Yeah, for sure.
    4:30:58 Like, perplexity Google meta care about this.
    4:31:02 I think OpenAI and Anthropic are purely laser focused on…
    4:31:03 AGI.
    4:31:04 Yeah.
    4:31:05 Agents and AGI.
    4:31:11 Agents and AGI, I can make tons of money or I can spend, pay for everything.
    4:31:12 This is…
    4:31:15 It’s just predicated like back on the export control thing.
    4:31:19 If you think AGI is five, 10 years away or less, these labs think it’s two, three years
    4:31:20 away.
    4:31:24 Obviously, your actions are…
    4:31:29 If you assume they’re rational actors, which they are mostly, what you do in a two-year
    4:31:34 AGI versus five-year versus 10-year is very, very, very different.
    4:31:36 Do you think agents are promising?
    4:31:40 We have to talk about this.
    4:31:44 This is like the excitement of the year that agents are going to…
    4:31:51 The generic hype term that a lot of business folks are using, AI agents are going to revolutionize
    4:31:52 everything.
    4:31:53 Okay.
    4:31:55 So, mostly the term agent is obviously overblown.
    4:32:00 We’ve talked a lot about reinforcement learning as a way to train for verifiable outcomes.
    4:32:04 This should mean something that is open-ended and is solving a task independently on its
    4:32:07 own and able to adapt to uncertainty.
    4:32:11 There is a lot of the term agent applied to things like Apple Intelligence, which we
    4:32:16 still don’t have after the last WWDC, which is orchestrating between apps.
    4:32:20 That sort of tool use thing is something that language models can do really well.
    4:32:23 Apple Intelligence, I suspect will come eventually.
    4:32:24 It’s a closed domain.
    4:32:29 It’s your messages app integrating with your photos, with AI in the background.
    4:32:30 That will work.
    4:32:35 This has been described as an agent by a lot of software companies to get into the narrative.
    4:32:43 The question is, what ways can we get language models to generalize to new domains and solve
    4:32:45 their own problems in real time?
    4:32:49 Maybe some tiny amount of training when they are doing this with fine-tuning themselves
    4:32:53 or in-context learning, which is the idea of storing information in a prompt.
    4:32:58 You can use learning algorithms to update that and whether or not you believe that that
    4:33:05 is going to actually generalize to things like me saying, “Book my trip to go to Austin
    4:33:06 in two days.
    4:33:10 I have XYZ constraints and actually trusting it.”
    4:33:13 I think there’s an HCI problem coming back for information.
    4:33:15 Well, what’s your prediction there?
    4:33:18 Because my gut says we’re very far away from that.
    4:33:23 I think OpenAI’s statement, I don’t know if you’ve seen the five levels, right?
    4:33:28 Where it’s chat is level one, reasoning is level two, and then agents is level three.
    4:33:31 I think there’s a couple more levels, but it’s important to note, right?
    4:33:34 We were in chat for a couple of years, right?
    4:33:37 We just theoretically got to reasoning.
    4:33:39 We’ll be here for a year or two, right?
    4:33:44 And then agents, but at the same time, people can try and approximate capabilities of the
    4:33:45 next level.
    4:33:49 But the agents are doing things autonomously, doing things for minutes at a time, hours
    4:33:52 at a time, et cetera, right?
    4:33:56 Everything is doing things for tens of seconds at a time, right?
    4:33:59 And then coming back with an output that I still need to verify and use and try to check
    4:34:01 out, right?
    4:34:05 And the biggest problem is, of course, it’s the same thing with manufacturing, right?
    4:34:07 There’s the whole Six Sigma thing, right?
    4:34:08 How many nines do you get?
    4:34:12 And then you compound the nines onto each other, and it’s like, if you multiply by the
    4:34:18 number of steps that are Six Sigma, you get a yield or something, right?
    4:34:23 So in semiconductor manufacturing, tens of thousands of steps, 999999 is not enough,
    4:34:24 right?
    4:34:28 Because you multiply by that many times, you actually end up with like 60% yield, right?
    4:34:29 Yeah, or zero.
    4:34:30 Or low yield, yeah, or zero.
    4:34:32 And this is the same thing with agents, right?
    4:34:40 Chaining tasks together each time, LLMs, even the best LLMs in particularly pretty good benchmarks,
    4:34:42 don’t get 100%, right?
    4:34:45 They get a little bit below that because there’s a lot of noise.
    4:34:49 And so how do you get to enough nines, right?
    4:34:50 This is the same thing with self-driving.
    4:34:54 We can’t have self-driving because without it being like super geofenced like Google,
    4:34:55 like Google’s, right?
    4:34:58 And even then they have a bunch of tele operators to make sure it doesn’t get stuck, right?
    4:35:01 But you can’t do that because it doesn’t have enough nines.
    4:35:07 And self-driving has quite a lot of structure because roads have rules.
    4:35:08 It’s well-defined.
    4:35:09 There’s regulation.
    4:35:15 And when you’re talking about computer use for the open web, for example, or the open
    4:35:19 operating system, like there’s no, it’s a mess.
    4:35:27 So like the possibility, I’m always skeptical of any system that is tasked with interacting
    4:35:30 with the human world, with the open messy human world.
    4:35:31 That’s the thing.
    4:35:35 If we can’t get intelligence that’s enough to solve the human world on its own, we can
    4:35:41 create infrastructure like the human operators for Waymo over many years that enables certain
    4:35:42 workloads.
    4:35:45 There is a company, I don’t remember it, but it is, but that’s literally their pitches.
    4:35:47 Yeah, we’re just going to be the human operator when agents fail.
    4:35:49 And you just call us and we fix it.
    4:35:50 Yeah.
    4:35:51 It’s like an API call and it’s hilarious.
    4:35:54 There’s going to be tele-operation markets when we get human robots, which is there’s
    4:35:59 going to be somebody around the world that’s happy to fix the fact that it can’t finish
    4:36:03 loading my dishwasher when I’m unhappy with it, but that’s just going to be part of the
    4:36:04 Tesla service package.
    4:36:10 I’m just imagining like an AI agent talking to another AI agent.
    4:36:15 One company has an AI agent that specializes in helping other AI agents.
    4:36:19 But if you can make things that are good at one step, you can stack them together.
    4:36:23 So that’s why I’m like, if it takes a long time, we’re going to build infrastructure that
    4:36:24 enables it.
    4:36:29 You see the operator launch, they have partnerships with certain websites with DoorDash with OpenTable
    4:36:31 with things like this.
    4:36:35 Those partnerships are going to let them climb really fast, their model is going to get really
    4:36:36 good at those things.
    4:36:40 It’s going to prove a concept that might be a network effect where more companies want
    4:36:41 to make it easier for AI.
    4:36:45 Some companies will be like, no, let’s put blockers in place.
    4:36:47 And this is the story of the internet we’ve seen.
    4:36:51 We see it now with training data for language models where companies are like, no, you have
    4:36:55 to pay, like business working it out.
    4:37:00 That said, I think like airlines have a very, and hotels have high incentive to make their
    4:37:03 site work really well, and they usually don’t.
    4:37:09 Like if you look at how many clicks it takes to order an airplane ticket, it’s insane.
    4:37:12 You actually can’t call an American Airlines agent anymore.
    4:37:14 They don’t have a phone number.
    4:37:20 I mean, it’s horrible on many, on the interface front, to imagine that agents will be able
    4:37:25 to deal with that website when I as a human struggle, like I have an existential crisis
    4:37:31 every time I try to book an airplane ticket that I don’t, I think it’s going to be extremely
    4:37:35 difficult to build an AI agent that’s robust in that way.
    4:37:38 But think about it like United has accepted the Starlink term, which is they have to provide
    4:37:41 Starlink for free and the users are going to love it.
    4:37:45 What if one airline is like, we’re going to take a year and we’re going to make our website
    4:37:49 have white text that works perfectly for the AIs.
    4:37:53 Every time anyone asks about an AI flight, they buy whatever airline it is.
    4:37:58 Or like, they just like, here’s an API and it’s only exposed to AI agents and if anyone
    4:38:03 queries it, the price is 10% higher and for any flight, but we’ll let you see any of our
    4:38:05 flights and you can just book any of them.
    4:38:06 Here you go.
    4:38:07 Agent Matt.
    4:38:08 And then it’s like, oh, and I made 10% higher price.
    4:38:09 Awesome.
    4:38:10 Yeah.
    4:38:12 And like, am I willing to say that for like, hey, book me a flight to see Lex, right?
    4:38:13 And it’s like, yeah, whatever.
    4:38:21 I think computers and real world and the open world are really, really messy.
    4:38:25 But if you start defining the problem in narrow regions, people are going to be able to create
    4:38:32 very, very productive things and ratchet down cost massively, right?
    4:38:38 Now, crazy things like robotics in the home, those are going to be a lot harder to do just
    4:38:43 like self-driving because there’s just a billion different failure modes, right?
    4:38:48 But agents that can like navigate a certain set of websites and do certain sets of tasks
    4:38:53 or like look at, you know, take a photo of your grocery, your fridge and or like upload
    4:38:57 your recipes and then like it figures out what to order from, you know, Amazon slash
    4:38:59 Whole Foods food delivery.
    4:39:01 Like that’s going to be like pretty quick and easy to do, I think.
    4:39:05 So it’s going to be a whole range of like business outcomes and it’s going to be tons
    4:39:08 of tons of sort of optimism around people can just figure out ways to make money.
    4:39:11 To be clear, these sandboxes already exist in research.
    4:39:16 There are people who have built clones of all the most popular websites of Google, Amazon,
    4:39:20 blah, blah, blah to make it so that there’s, I mean, OpenAI probably has them internally
    4:39:21 to train these things.
    4:39:26 It’s the same as DeepMind’s robotics team for years has had clusters for robotics where
    4:39:28 you interact with robots fully remotely.
    4:39:33 They just have a lab in London and you send tasks to it, arrange the blocks and you do
    4:39:34 this research.
    4:39:39 Obviously, there’s text there that fix stuff, but we’ve turned these cranks of automation
    4:39:40 before.
    4:39:46 You go from sandbox to progress and then you add one more domain at a time and generalize
    4:39:47 it.
    4:39:51 I think in the history of NLP and language processing, instruction tuning in tasks per
    4:39:54 language model used to be like one language model did one task.
    4:39:57 And then in the instruction tuning literature, there’s this point where you start adding
    4:40:01 more and more tasks together where it just starts to generalize to every task.
    4:40:03 And we don’t know where on this curve we are.
    4:40:07 I think for reasoning with this RL and verifiable domains were very early, but we don’t know
    4:40:12 where the point is where you just start training on enough domains and poof like more domains
    4:40:15 to start working and you’ve crossed the generalization barrier.
    4:40:20 Well, what do you think about the programming context?
    4:40:28 So software engineering, that’s where I personally know a lot of people interact with AI the
    4:40:29 most.
    4:40:34 There’s a lot of fear and angst too from current CS students, but that is the area where probably
    4:40:40 the most AI revenue and productivity gains have come, whether it be co-pilots or cursor
    4:40:44 or what have you, right, this is or just standard chat GPT, right?
    4:40:49 Like a lot of, I know very few programmers who don’t have chat GPT and actually many
    4:40:53 of them have the $200 tier because that’s what it’s so good for, right?
    4:40:58 I think that in that world, we already see it like SWE bench and if you’ve looked at
    4:41:03 the benchmark made by some Stanford students, I wouldn’t say it’s like really hard, but
    4:41:04 I wouldn’t say it’s easy either.
    4:41:08 I think like it takes someone who’s been throughout least, you know, a few years of CS or a couple
    4:41:11 years of programming to do SWE bench well.
    4:41:16 And the models went from 4% to 60% in like a year, right?
    4:41:18 And where are they going to go to next year?
    4:41:21 You know, it’s going to be higher, probably won’t be 100% because again, that nines is
    4:41:23 like really hard to do.
    4:41:25 But we’re going to get to some point where that’s and then we’re going to need harder
    4:41:28 software engineering benchmarks and so on and so forth.
    4:41:33 But the way that like people think of it now is it’s can do code completion easy.
    4:41:36 It can do some function generation and I have to review it, great.
    4:41:41 But really the like software engineering agents I think can be done faster sooner than any
    4:41:44 other agent because it is a verifiable domain.
    4:41:51 You can always like unit test or compile and there’s many different regions of like it can
    4:41:55 inspect the whole code base at once, which no engineer really can only the architects
    4:41:59 can really think about this stuff, the really senior guys and they can define stuff and
    4:42:01 then the agent can execute on it.
    4:42:05 So I think I think software engineering costs are going to plummet like crazy and one interesting
    4:42:09 aspect of that is when software engineering costs are really low, you get very different
    4:42:10 markets.
    4:42:11 Right.
    4:42:14 So in the US, you have all these platforms as companies, right, sales force and so on
    4:42:15 and so forth.
    4:42:16 Right.
    4:42:20 In China, no one uses platform sass.
    4:42:25 Everyone just builds their own stack because software engineering is much cheaper in China,
    4:42:29 partially because like people stem number of stem graduates, et cetera.
    4:42:33 So it’s generally just cheaper to do.
    4:42:36 And so at the same time, code for like code alums have been adopted much less in China
    4:42:39 because the cost of an engineer there is much lower.
    4:42:42 But like what happens when every company can just invent their own business logic like
    4:42:44 really cheaply and quickly.
    4:42:48 You stop using platform sass, you start building custom tailored solutions, you change them
    4:42:49 really quickly.
    4:42:51 Now all of a sudden your business is a little bit more efficient too potentially because
    4:42:56 you’re not dealing with the hell that is like some random platform sass company stuff not
    4:43:00 working perfectly and having to adjust workflows or random business automation cases that aren’t
    4:43:02 necessarily AI required.
    4:43:04 It’s just logic that needs to be built that no one has built, right?
    4:43:08 All of these things can go happen faster and so I think software and then the other domain
    4:43:12 is like industrial, chemical, mechanical engineers, second coding, right?
    4:43:17 Just generally and like their tools like semiconductor engineers, their tools are 20 years old.
    4:43:21 All the tools run on XP, including ASML lithography tools run on Windows XP, right?
    4:43:25 It’s like, you know, and like a lot of the analysis happens in Excel, right?
    4:43:29 Like it’s just like guys, like you guys can move 20 years forward with all the data you
    4:43:31 have and gathered and like do a lot better.
    4:43:34 It’s just you need the engineering skills for software engineering to be delivered to
    4:43:36 the actual domain expert engineer.
    4:43:40 So I think, I think that’s the area where I’m like super duper bullish of, of generally
    4:43:42 AI creating value.
    4:43:45 The big picture is that I don’t think it’s going to be a cliff.
    4:43:51 It’s like, we talked to anything, a really good example of how growth changes is when
    4:43:53 meta added stories.
    4:43:57 So Snapchat was on an exponential, they added stories, it flatlined.
    4:44:01 Software engineers, then up until the right, AI is going to come in, it’s probably going
    4:44:02 to be flat.
    4:44:04 It’s like, it’s not like everyone’s going to lose their job.
    4:44:08 It’s hard because the supply corrects more slowly.
    4:44:10 So the amount of students is still growing.
    4:44:13 And that’ll correct on a multi year, like a year delay.
    4:44:16 But the amount of jobs will just turn.
    4:44:20 And then maybe in 20, 40 years, it’ll be well down.
    4:44:23 But in the few years, there’ll never going to be the snap moment where it’s like software
    4:44:24 engineers aren’t useful.
    4:44:28 I think also the nature of what it means to be a programmer and what kind of jobs programmers
    4:44:29 do changes.
    4:44:36 Cause I think there needs to be a human in the loop of everything you’ve talked about.
    4:44:41 There’s a really important human in that picture of like correcting the code.
    4:44:43 Like fixing.
    4:44:45 Thinking larger than the context length.
    4:44:46 Yep.
    4:44:52 And debugging also, like debugging by sort of reading the code, understanding the steering
    4:44:53 the system.
    4:44:56 Like no, no, no, you missed the point adding more to the prompt.
    4:44:58 Kind of like, yes.
    4:45:02 Adding the human designing the perfect Google button, Google’s famous for having people
    4:45:04 design buttons that are so perfect.
    4:45:07 And it’s like, how, like, how is AI going to do that?
    4:45:10 Like they could give you all ideas.
    4:45:11 Perfect.
    4:45:12 Fine.
    4:45:13 I mean, that’s the thing.
    4:45:14 You can call it taste.
    4:45:19 Humans have one thing humans can do is figure out what other humans enjoy better than AI
    4:45:20 systems.
    4:45:21 That’s where the preference.
    4:45:25 You’re loading that in, but ultimately humans are the greatest preference generally.
    4:45:27 That’s where the preference comes from.
    4:45:31 And humans are actually very good at reading or like judging between two things versus this
    4:45:35 is this goes back to the core of what early Jeff and preference tuning is, is that it’s
    4:45:38 hard to generate a good answer for a lot of problems, but it’s easy to see which one
    4:45:39 is better.
    4:45:43 And that’s how we’re using humans for AI now is judging which one is better.
    4:45:47 And that’s what software engineering could look like is the PR review.
    4:45:48 Here’s a few options.
    4:45:53 What are the, like, here’s some potential pros and cons and they’re going to be judges.
    4:46:00 I think the thing I would very much recommend is people start, programmers start using AI
    4:46:05 and embracing that role of the supervisor of the AI system and like partner of the AI
    4:46:10 system versus writing from scratch or not learning coding at all and just generating
    4:46:11 stuff.
    4:46:14 Because I think there actually has to be a pretty high level of expertise as a programmer
    4:46:18 to be able to manage increasingly intelligent systems.
    4:46:21 I think it’s that and then becoming a domain expert in something.
    4:46:22 Sure.
    4:46:23 Yeah.
    4:46:27 Because seriously, if you go look at aerospace or semiconductors or chemical engineering,
    4:46:30 everyone is using really crappy platforms, really old software.
    4:46:34 Like the job of a data science is like a joke, right?
    4:46:35 In many cases.
    4:46:39 In many cases, it’s very real, but it’s like bring what the forefront of human capabilities
    4:46:41 are to your domain.
    4:46:45 And even if the forefront is from the AI, your domain, you’re at the forefront, right?
    4:46:50 So it’s like, you have to be at the forefront of something and then leverage the rising
    4:46:52 tide that is AI for everything else.
    4:46:53 Yeah.
    4:46:59 There’s so many low hanging fruit everywhere in terms of where software can help automate
    4:47:02 a thing or digitize a thing.
    4:47:06 In the legal system, that’s why Doge is exciting.
    4:47:12 Yeah, I mean, I got to hang out with a bunch of the Doge folks and they, I mean, government
    4:47:15 is like so old school.
    4:47:21 It’s like begging for the modernization of software, of organizing the data, all this
    4:47:22 kind of stuff.
    4:47:29 I mean, in that case is by design, because bureaucracy protects centers of power and
    4:47:33 so on, but software breaks down those barriers.
    4:47:39 So it hurts those that are holding onto power, but ultimately benefits humanity.
    4:47:44 So there’s a bunch of domains of that kind.
    4:47:49 One thing we didn’t fully finish talking about is open source.
    4:47:51 So first of all, congrats.
    4:47:52 You released a new model.
    4:47:53 Yeah.
    4:47:54 This is the…
    4:47:55 Tulu.
    4:47:56 I’ll explain what a Tulu is.
    4:48:01 A Tulu is a hybrid camel when you breed a dromedary with a Bacchian camel.
    4:48:05 Back in the early days after chat, GPT, there was a big wave of models coming out like Alpaca,
    4:48:10 Vicuna, et cetera, that were all named after various mammalian species.
    4:48:11 So Tulu is…
    4:48:14 The brand is multiple years old, which comes from that.
    4:48:20 And we’ve been playing at the frontiers of post training with open source code.
    4:48:24 And this first part of this release was in the fall where we used…
    4:48:30 We built on Lama’s open models, open weight models, and then we add in our fully open code
    4:48:32 or fully open data.
    4:48:36 There’s a popular benchmark that is chatbot arena, and that’s generally the metric by
    4:48:41 which how these chat models are evaluated, and it’s humans compare random models from
    4:48:42 different organizations.
    4:48:48 And if you looked at the leaderboard in November or December, among the top 60 models from
    4:48:53 10s to 20s of organizations, none of them had open code or data for just post training.
    4:48:57 Among that, even fewer or none have pre-training data and code available, but post training
    4:48:58 is much more accessible.
    4:49:00 At this time, it’s still pretty cheap and you can do it.
    4:49:04 And the thing is like, how high can we push this number where people have accessed all
    4:49:05 the code and data?
    4:49:07 So that’s kind of the motivation of the project.
    4:49:12 We draw on lessons from Lama, NVIDIA had a nematron model where the recipe for their
    4:49:17 post training was fairly open with some data and a paper, and it’s putting all these together
    4:49:22 to try to create a recipe that people can fine tune models like GPT-4 to their domain.
    4:49:27 So to be clear, in the case of Tulu, maybe you can talk about almost too, but in the
    4:49:31 case of Tulu, you’re taking Lama 345B.
    4:49:35 Tulu has been a series of recipes for post training.
    4:49:38 So we’ve done multiple models over years.
    4:49:40 And so you’re open sourcing everything.
    4:49:41 Yeah.
    4:49:45 If you start with an open weight based model, the whole model technically is an open source
    4:49:49 because you don’t know what Lama put into it, which is why we have the separate thing
    4:49:50 that we’ll get to.
    4:49:54 But it’s just getting parts of the pipeline where people can zoom in and customize.
    4:49:58 I know I hear from startups and businesses, they’re like, okay, I can take this post training
    4:50:00 and try to apply it to my domain.
    4:50:01 We talk about verifiers a lot.
    4:50:08 We use this idea, which is reinforcement learning with verifiable rewards, RLVR, kind of similar
    4:50:12 to RLHF, and we applied it to math.
    4:50:18 And the model today, which is we applied it to the Lama 405B base model from last year,
    4:50:20 and we have our other stuff.
    4:50:25 We have our instruction tuning and preference tuning, but the math thing is interesting,
    4:50:28 which is like, it’s easier to improve this math benchmark.
    4:50:32 There’s a benchmark, MATH, math, all capitals, tough name.
    4:50:36 On the benchmark, name is the area that you’re evaluating.
    4:50:37 We’re researchers.
    4:50:39 We’re not brands, brand strategists.
    4:50:43 And this is something that the DeepSeek paper talked about as well, is like at this bigger
    4:50:48 model, it’s easier to elicit powerful capabilities with this RL training, and then they distill
    4:50:51 it down from that big model to the small model.
    4:50:55 And this model we released today, we saw the same thing as we’re at AI2.
    4:50:56 We don’t have a ton of compute.
    4:51:01 We can’t train 405B models all the time, so we just did a few runs and they tend to work.
    4:51:07 And it’s like, it just shows that there’s a lot of room for people to play in these things.
    4:51:09 And they crushed Lama’s actual release, right?
    4:51:11 They’re way better than it.
    4:51:12 Yeah.
    4:51:15 So our val numbers, I mean, we have extra months in this, but our val numbers are much
    4:51:18 better than the Lama Instruct model that they released.
    4:51:20 And they also said better than DeepSeek V3.
    4:51:21 Yeah.
    4:51:25 On our val benchmark, the most DeepSeek V3 is really similar.
    4:51:29 We have a safety benchmark to understand if it will say harmful things and things like
    4:51:30 that.
    4:51:31 And that’s what draws us down most of the way.
    4:51:34 It’s still like, it’s like an amalgamation of multiple benchmarks or what do you mean?
    4:51:35 Yeah.
    4:51:36 So we have a 10 value.
    4:51:39 This is like, this is standard practice in post training is you choose your evaluations
    4:51:40 you care about.
    4:51:43 In academics, in smaller labs, you’ll have fewer evaluations.
    4:51:46 In companies, you’ll have a really one domain that you really care about.
    4:51:50 In frontier labs, you’ll have 10s to 20s to maybe even like 100 evaluations of specific
    4:51:51 things.
    4:51:55 So we choose a representative suite of things that look like chat, precise instruction following,
    4:51:58 which is like respond only in emojis.
    4:51:59 Like does the model follow weird things like that?
    4:52:00 Yeah.
    4:52:02 Math, code, and you create a suite like this.
    4:52:07 So safety would be one of 10 in that type of suite where you have like, what is the broader
    4:52:09 community of AI care about?
    4:52:12 And for example, in comparison to DeepSeek, it would be something like our average of
    4:52:18 VAL for our model would be 80, including safety and similar without and DeepSeek would be
    4:52:26 like 79% average score without safety and their safety score would bring it down like
    4:52:27 safety.
    4:52:28 Oh, so you beat them even ignoring safety?
    4:52:29 Yeah.
    4:52:33 So this is something that internally it’s like, I don’t want to win only by like how you shape
    4:52:34 the VAL benchmark.
    4:52:36 So if there’s something that’s like people may or may not care about safety in their
    4:52:39 model, safety can come downstream.
    4:52:43 Safety can be when you host the model for an API like safety is addressed in a spectrum
    4:52:44 of locations in AI applications.
    4:52:47 So it’s like, if you want to say that you have the best recipe, you can’t just gait it
    4:52:51 on these things that some people might not want.
    4:52:57 And this is just, it’s like the time of progress and we benefit, we can release a model later,
    4:53:01 we have more time to learn new techniques like this RL technique, we had started this
    4:53:02 in the fall.
    4:53:04 It’s now really popular as reasoning models.
    4:53:08 The next thing to do for open source post training is to scale up verifiers, to scale
    4:53:11 up data, to replicate some of deep seeks results.
    4:53:15 And it’s awesome that we have a paper to draw on and it makes it a lot easier.
    4:53:22 And that’s the type of things that is going on among academic and closed frontier research
    4:53:23 in AI.
    4:53:25 Since you’re pushing open source, what do you think is the future of it?
    4:53:30 You think deep seek actually changes things since it’s open source or open weight or it’s
    4:53:33 pushing the open source movement into the open direction?
    4:53:35 This goes very back to license discussion.
    4:53:38 So deep seek R1 with a friendly license is a major reset.
    4:53:42 So it’s like the first time that we’ve had a really clear frontier model that is open
    4:53:46 weights and with a commercially friendly license with no restrictions on downstream
    4:53:49 use cases, synthetic data, distillation, whatever.
    4:53:53 This has never been the case at all in the history of AI in the last few years since
    4:53:54 ChatGPT.
    4:53:57 There have been models that are off the frontier or models with weird licenses that you can’t
    4:53:58 really use them.
    4:54:04 So isn’t Meta’s license like pretty much permissible except for five companies?
    4:54:09 And so this goes to what open source AI is, which is there’s also use case restrictions
    4:54:12 in the Lama license, which says you can’t use it for specific things.
    4:54:15 So if you come from an open source software background, you would say that that is not
    4:54:16 an open source license.
    4:54:20 What kind of things are those, though?
    4:54:22 At this point, I can’t pull them off the top of my head.
    4:54:23 Stuff that’s competitor.
    4:54:26 It used to be military use was one and they removed that for scale.
    4:54:32 It’ll be like CSAM, like child abuse material.
    4:54:35 That’s the type of thing that is forbidden there, but that’s enough from an open source
    4:54:38 background to say it’s not an open source license.
    4:54:42 And also the Lama license has this horrible thing where you have to name your model Lama
    4:54:45 if you touch it to the Lama model.
    4:54:46 So it’s like the branding thing.
    4:54:50 So if a company uses Lama, technically the license says that they should say built with
    4:54:52 Lama at the bottom of their application.
    4:54:54 And from a marketing perspective, that just hurts.
    4:54:57 I could suck it up as a researcher and I’m like, oh, it’s fine.
    4:55:01 It says Lama-dash on all of our materials for this release.
    4:55:06 But this is why we need truly open models, which is we don’t know deep-seek R1’s data.
    4:55:10 So you’re saying I can’t make a cheap copy of Lama and pretend it’s mine, but I can
    4:55:12 do this with the Chinese model.
    4:55:13 Hell yeah.
    4:55:16 That’s what I was saying.
    4:55:21 And that’s why it’s like we want this whole open language models thing, the Olmo thing
    4:55:25 is to try to keep the model where everything is open with the data as close to the frontier
    4:55:26 as possible.
    4:55:27 So we’re compute constrained.
    4:55:29 We’re personnel constrained.
    4:55:34 We rely on getting insights from people like John Shulman tells us to do RL on outputs.
    4:55:39 We can make these big jumps, but it just takes a long time to push the frontier of open source.
    4:55:44 And fundamentally, I would say that that’s because open source AI does not have the same
    4:55:46 feedback loops as open source software.
    4:55:49 We talked about open source software for security.
    4:55:52 Also it’s just because you build something once and you can reuse it.
    4:55:55 If you go into a new company, there’s so many benefits.
    4:55:58 But if you open source a language model, you have this data sitting around, you have this
    4:55:59 training code.
    4:56:04 It’s not that easy for someone to come and build on and improve because you need to spend
    4:56:05 a lot on compute.
    4:56:06 You need to have expertise.
    4:56:12 So until there are feedback loops of open source AI, it seems mostly an ideological mission.
    4:56:15 People like Mark Zuckerberg, which is like America needs this.
    4:56:21 And I agree with him, but in the time where the motivation ideologically is high, we need
    4:56:26 to capitalize and build this ecosystem around what benefits do you get from seeing the language
    4:56:27 model data.
    4:56:29 And there’s not a lot about that.
    4:56:33 We’re going to try to launch a demo soon where you can look at an Olmo model and a
    4:56:39 query and see what pre-training data is similar to it, which is like legally risky and complicated.
    4:56:43 But it’s like, what does it mean to see the data that the AI was trained on?
    4:56:44 It’s hard to parse.
    4:56:45 It’s terabytes of files.
    4:56:48 It’s like, I don’t know what I’m going to find in there.
    4:56:54 But that’s what we need to do as an ecosystem if people want open source AI to be financially
    4:56:55 useful.
    4:56:56 We didn’t really talk about Stargate.
    4:57:01 I would love to get your opinion on like what the new administration, the Trump administration,
    4:57:08 everything that’s being done from the America side and supporting AI infrastructure and
    4:57:10 the efforts of the different AI companies.
    4:57:11 What do you think about Stargate?
    4:57:17 What are we supposed to think about Stargate and does Sam have the money?
    4:57:18 Yeah.
    4:57:21 So I think Stargate is an opaque thing.
    4:57:23 It definitely doesn’t have $500 billion.
    4:57:25 It doesn’t even have $100 billion, right?
    4:57:30 So what they announced is this $500 billion number, Larry Ellison, Sam Altman and Trump
    4:57:31 said it.
    4:57:38 They thanked Trump and Trump did do some executive actions that do significantly improve the
    4:57:42 ability for this to be built faster.
    4:57:45 One of the executive actions he did is on federal land, you can just basically build
    4:57:49 data centers in power, pretty much like that.
    4:57:52 And then the permitting process is basically gone or you file after the fact.
    4:57:56 So like one of the, again, like I had a Schizo take earlier, another Schizo take, if you’ve
    4:58:00 ever been to the Presidio in San Francisco, beautiful area.
    4:58:03 You could build a power plant and a data center there if you wanted to because it is federal
    4:58:04 land.
    4:58:05 It used to be a military base.
    4:58:11 But you know, obviously this would like piss people off, you know, it’s a good bit.
    4:58:14 Anyways, Trump has made it much easier to do this, right?
    4:58:18 Generally, Texas has the only unregulated grid in the nation as well.
    4:58:19 Let’s go Texas.
    4:58:24 And so, you know, therefore like ERCOT enables people to build faster as well.
    4:58:27 In addition, the federal regulations are coming down.
    4:58:31 And so Stargate is predicated, and this is why that whole show happened.
    4:58:35 Now, how they came up with a $500 billion number is beyond me.
    4:58:39 How they came up with a $100 billion number makes sense to some extent, right?
    4:58:44 And there’s actually a good table in here that I would like to show in that Stargate
    4:58:49 piece that I had.
    4:58:50 It’s the most recent one.
    4:58:51 Yeah.
    4:58:58 So anyways, Stargate, you know, it’s basically right, like there is, it’s a table about cost.
    4:59:01 There, you passed it already.
    4:59:03 It’s that one.
    4:59:06 So this table is kind of explaining what happens, right?
    4:59:10 So Stargate is in Abilene, Texas, the first $100 billion of it.
    4:59:17 That site is 2.2 gigawatts of power in, about 1.8 gigawatts of power consumed, right?
    4:59:24 Per GPU, they have like roughly, Oracle is already building the first part of this before
    4:59:25 Stargate came about.
    4:59:27 To be clear, they’ve been building it for a year.
    4:59:29 They tried to rent it to Elon, in fact, right?
    4:59:31 But Elon was like, “It’s too slow.
    4:59:32 I need it faster.”
    4:59:34 So then he went and did his Memphis thing.
    4:59:38 And so OpenAI was able to get it with this like weird joint venture called Stargate.
    4:59:42 They initially signed a deal with just Oracle for the first section of this cluster, right?
    4:59:50 This first section of this cluster, right, is roughly $5 billion to $6 billion of server
    4:59:51 spend, right?
    4:59:54 And then there’s another billion or so of data center spend.
    4:59:59 But then likewise, like if you fill out that entire 1.8 gigawatts with the next two generations
    5:00:05 of NVIDIA’s chips, GB200, GB300, VR200, and you fill it out completely, that ends up being
    5:00:10 roughly $50 billion of server cost, right?
    5:00:15 Plus there’s data center cost, plus maintenance cost, plus operation cost, plus all these
    5:00:16 things.
    5:00:19 And that’s where OpenAI gets to their $100 billion announcement that they had, right?
    5:00:22 Because they talked about $100 billion is phase one.
    5:00:24 That’s this Abilene, Texas data center, right?
    5:00:27 $100 billion of total cost of ownership, quote, unquote, right?
    5:00:28 So it’s not CapEx.
    5:00:29 It’s not investment.
    5:00:32 It’s $100 billion of total cost of ownership.
    5:00:35 And then there will be future phases.
    5:00:39 They’re looking at other sites that are even bigger than this 2.2 gigawatts, by the way,
    5:00:40 in Texas and elsewhere.
    5:00:43 And so they’re not completely ignoring that.
    5:00:49 But there is the number of $100 billion that they save for phase one, which I do think will
    5:00:50 happen.
    5:00:51 They don’t even have the money for that.
    5:00:54 Furthermore, it’s not $100 billion, it’s $50 billion of spend, right?
    5:01:01 And then like $50 billion of operational cost, power, et cetera, rental pricing, et cetera.
    5:01:06 Because they’re renting it, OpenAI is renting the GPUs from the Stargate joint venture, right?
    5:01:08 What money do they actually have, right?
    5:01:11 SoftBank is going to invest, Oracle is going to invest, OpenAI is going to invest.
    5:01:13 OpenAI is on the line for $19 billion.
    5:01:17 Everyone knows that they’ve only got $6 billion in their last round and $4 billion in debt.
    5:01:23 But there is news of like SoftBank maybe investing $25 billion into OpenAI, right?
    5:01:25 So that’s part of it, right?
    5:01:26 So $19 billion can come from there.
    5:01:28 So OpenAI does not have the money at all, right?
    5:01:29 To be clear.
    5:01:34 Inc. is not dried on anything, OpenAI has $0 for this $50 billion, right?
    5:01:38 In which they’re legally obligated to put $19 billion of CAPEX into the joint venture
    5:01:41 and then the rest they’re going to pay via renting the GPUs from the joint venture.
    5:01:44 And then there’s Oracle.
    5:01:48 Oracle has a lot of money, they’re building the first section completely, they were spending
    5:01:49 for themselves, right?
    5:01:55 This $6 billion of CAPEX, $10 billion of TCO, and they were going to do that first section.
    5:01:57 They’re paying for that, right?
    5:02:00 As far as the rest of the section, I don’t know how much Larry wants to spend, right?
    5:02:01 At any point he could pull out, right?
    5:02:03 Like this is again, this is like completely voluntary.
    5:02:06 So at any point there’s no signed Inc. on this, right?
    5:02:09 But he potentially could contribute tens of billions of dollars, right, to be clear.
    5:02:11 He’s got the money, Oracle’s got the money.
    5:02:17 And then there’s like MGX, which is the UAE fund, which technically has $1.5 trillion
    5:02:18 for investing in AI.
    5:02:21 But again, like, I don’t know how real that money is.
    5:02:26 And like, whereas there is no Inc. signed for this, SoftBank does not have $25 billion
    5:02:27 of cash.
    5:02:32 They have to sell down their stake in ARM, which is the leader in CPUs and they IPO’ed
    5:02:33 it.
    5:02:34 This is obviously what they’ve always wanted to do.
    5:02:36 They just didn’t know where they’d redeploy the capital.
    5:02:38 Selling down the stake in ARM makes a ton of sense.
    5:02:42 So they can sell that down and invest in this if they want to and invest in Open AI if they
    5:02:43 want to.
    5:02:50 As far as like money secured, the first 100,000 GB 200 cluster can be funded.
    5:02:53 Everything else after that is up in the air.
    5:02:54 Money’s coming.
    5:02:55 I believe the money will come.
    5:02:57 I personally do.
    5:02:58 It’s a belief.
    5:03:02 It’s a belief that they are going to release better models and be able to raise money.
    5:03:06 But like the actual reality is that Elon’s right, the money does not exist.
    5:03:09 What does the US government have to do with anything?
    5:03:10 What does Trump have to do with everything?
    5:03:12 He’s just a hype man.
    5:03:16 Trump is, he’s reducing the regulation so they can build it faster.
    5:03:18 And he’s allowing them to do it, right?
    5:03:21 Because any investment of this side is going to involve like antitrust stuff.
    5:03:23 So obviously he’s going to allow them to do it.
    5:03:27 He’s going to enable the regulations to actually allow it to be built.
    5:03:31 I don’t believe there’s any US government dollars being spent on this though.
    5:03:32 Yeah.
    5:03:37 So I think he’s also just creating a general vibe that this regulation will go down and
    5:03:40 this is the era of building.
    5:03:42 So if you’re a builder, you want to create stuff.
    5:03:43 You want to launch stuff.
    5:03:44 This is the time to do it.
    5:03:48 And so like we’ve had this 1.8 gigawatt data center in our data for over a year now and
    5:03:51 we’ve been like sort of sending it to all of our clients, including many of these companies
    5:03:53 that are building the multi gigawatts.
    5:03:57 But that is like at a level that’s not quite maybe executives like seeing $500 billion,
    5:04:02 $100 billion, and then everyone’s asking them like, so it could spur like another like an
    5:04:04 even faster arms race, right?
    5:04:08 Because there’s already an arms race, but like this like $100 billion, $500 billion number.
    5:04:13 Trump talking about it on TV, like it could spur the arm race to be even faster and more
    5:04:15 investors to flood in and et cetera, et cetera.
    5:04:20 So I think, I think you’re right is that in that sense that open eye or sort of Trump
    5:04:23 is sort of like championing people are going to build more and his actions are going to
    5:04:25 let people build more.
    5:04:33 What are you excited about about these several years that are upcoming in terms of cluster
    5:04:40 buildouts, in terms of breakthroughs in AI, like the best possible future you can imagine
    5:04:44 in the next couple of years, two, three, four years, what does that look like just it could
    5:04:51 be a very specific technical things like breakthroughs on post post training or it could be just
    5:04:52 size big.
    5:04:53 Yeah.
    5:04:55 I mean it’s impressive clusters.
    5:05:00 I really, I really enjoyed tracking supply chain and like who’s involved in what I really
    5:05:01 do.
    5:05:04 It’s really fun to see like the numbers, the cost, who’s building what capacity helping
    5:05:07 them figure out how much capacity they should build, winning deals, strategic stuff.
    5:05:08 That’s really cool.
    5:05:14 I think technologically there’s a lot around the networking side that really excites me
    5:05:18 with optics and electronics like kind of getting closer and closer, whether it be co-package
    5:05:22 optics or some sort of like forms of new forms of switching.
    5:05:25 This is internal to a cluster.
    5:05:26 Yeah.
    5:05:30 Also multi-data center training, like there’s people are putting so much fiber between these
    5:05:35 data centers and lighting it up with so much bandwidth that there’s a lot of interesting
    5:05:40 stuff happening on that end, telecom has been really boring since 5G and now it’s like really
    5:05:42 exciting again on the other side.
    5:05:44 Can you educate me a little bit about the speed of things?
    5:05:49 So the speed of memory versus the speed of interconnect versus the speed of fiber between
    5:05:50 data centers.
    5:05:53 Are these like orders of magnitude different?
    5:05:57 Can we at some point converge towards a place where it all just feels like one computer?
    5:05:58 No.
    5:06:01 I don’t think that’s possible.
    5:06:02 It’s only going to get harder to program.
    5:06:03 Not easier.
    5:06:04 Okay.
    5:06:07 It’s only going to get more difficult and complicated and more layers, right?
    5:06:11 The general image that people like to have is like this hierarchy of memory.
    5:06:14 So on chip is really close, localized within the chip, right?
    5:06:15 You have registers, right?
    5:06:19 Those are shared between some compute elements and then you’ll have caches, which are shared
    5:06:20 between more compute elements.
    5:06:21 Then you have like memory, right?
    5:06:24 Like HBM or DRAM, like DDR memory or whatever it is.
    5:06:27 And that’s shared between the whole chip.
    5:06:31 And then you can have, you know, pools of memory that are shared between many chips, right?
    5:06:33 And then storage and you keep zoning out, right?
    5:06:38 The access latency across data centers, across within the data center, within a chip is different.
    5:06:43 So like you’re obviously always, you’re always going to have different programming paradigms
    5:06:44 for this.
    5:06:45 It’s not going to be easy.
    5:06:46 Programming this stuff is going to be hard.
    5:06:48 Maybe I can help, right?
    5:06:49 You know, with programming this.
    5:07:00 But the way to think about it is that like there is, there’s sort of like the more elements
    5:07:04 you add to a task, you don’t gain, you don’t get strong scaling, right?
    5:07:07 If I double the number of chips, I don’t get two exit performance, right?
    5:07:11 This is just like a reality of computing because there’s inefficiencies.
    5:07:15 And there’s a lot of interesting work being done to make it not, you know, to make it
    5:07:19 more linear, whether it’s making the chips more networked together more tightly or,
    5:07:23 you know, cool programming models or cool algorithmic things that you can do on the
    5:07:25 model side, right?
    5:07:27 DeepSeq did some of these really cool innovations because they were limited on interconnect,
    5:07:29 but they still needed to parallelize, right?
    5:07:31 Like all sorts of, you know, all, everyone’s always doing stuff.
    5:07:35 Google’s got a bunch of work and everyone’s got a bunch of work about this.
    5:07:39 That stuff is super exciting on the model and workload and innovation side, right?
    5:07:42 Hardware, solid state transformers are interesting, right?
    5:07:46 For the power side, there’s all sorts of stuff on batteries and there’s all sorts of stuff
    5:07:49 on, you know, I think, I think when you look at, if you look at every layer of the compute
    5:07:50 stack, right?
    5:07:54 Whether it goes from lithography and etch all the way to like fabrication to like optics
    5:07:59 to networking to power to transformers to cooling to, you know, a networking and you
    5:08:03 just go on up and up and up and up the stack, you know, even air conditioners for data centers
    5:08:04 are like innovating, right?
    5:08:07 Like it’s like, there’s like copper cables are innovating, right?
    5:08:10 Like you wouldn’t think it, but copper cables, like there’s some innovations happening there
    5:08:14 with like the density of how you can pack them and like, it’s like all of these layers
    5:08:18 of the stack all the way up to the models, human progress is at a pace that’s never been
    5:08:19 seen before.
    5:08:22 I’m just imagining you sitting back in a layer somewhere with screens everywhere, just monitoring
    5:08:27 the supply chain where all these clusters, like all the information you’re gathering,
    5:08:28 I mean, you do incredible.
    5:08:29 There’s a big team.
    5:08:30 There’s a big team.
    5:08:39 I mean, you’re, you do quite incredible work with seminars, I mean, just keeping your finger
    5:08:43 on the pulse of human civilization in the digital world.
    5:08:44 It’s pretty cool.
    5:08:45 Like just to watch, feel that.
    5:08:46 Yeah.
    5:08:47 Thank you.
    5:08:48 I guess.
    5:08:51 Feel all of us like doing shit.
    5:08:52 Epic shit.
    5:08:53 Feel the AGI.
    5:08:59 I mean, from meme to like reality, what Nathan, is there like breakthroughs that you’re like
    5:09:01 looking forward to potentially?
    5:09:04 I had a while to think about this while listening to Dylan’s beautiful response.
    5:09:06 He didn’t listen to me.
    5:09:11 I knew, no, I knew this was coming and it’s like, realistically, training models is very
    5:09:13 fun because there’s so much low hanging fruit.
    5:09:19 And the thing that makes my job entertaining, I train models, I write analysis about what’s
    5:09:24 happening with models and it’s fun because there is obviously so much more progress to
    5:09:25 be had.
    5:09:29 And the real motivation why I do this, like somewhere where I can share things is that
    5:09:33 there’s just, I don’t trust people that are like, trust me bro, we’re going to make AI
    5:09:34 good.
    5:09:36 It’s like, we’re the ones that it’s like, we’re going to do it and you can trust us
    5:09:41 and we’re just going to have all the AI and it’s just like, I would like a future where
    5:09:45 more people have a say in what AI is and can understand it.
    5:09:49 And that’s a little bit less fun that it’s not a like positive thing of like, this is
    5:09:50 just all really fun.
    5:09:55 Like training models is fun and bring people in as fun, but it’s really like AI, if it
    5:09:59 is going to be the most powerful technology of my lifetime, it’s like, we need to have
    5:10:06 a lot of people involved in making that and making it open helps with that as accessible
    5:10:08 as possible as open as possible.
    5:10:09 Yeah.
    5:10:14 In my read of the last few years is that more openness would help the AI ecosystem in terms
    5:10:18 of having more people understand what’s going on, rather that researchers from non-AI fields
    5:10:20 to governments to everything.
    5:10:22 It doesn’t mean that openness will always be the answer.
    5:10:27 I think then I will reassess of like, what is the biggest problem facing AI and tack on
    5:10:30 a different angle to the wild ride that we’re on.
    5:10:37 And for me, just from even the user experience, anytime you have the like Apathy said, the
    5:10:46 aha moments, like the magic, like seeing the reasoning, the chain of thought, it’s like,
    5:10:49 there’s something really just fundamentally beautiful about that.
    5:10:53 It’s putting a mirror to ourselves and seeing like, oh shit, it is solving intelligence
    5:11:00 as the cliche, like goal of these companies is, and you get to understand why we humans
    5:11:03 are special, the intelligence within us is special.
    5:11:08 And for now, also why we’re special in terms of, we seem to be conscious and the AI systems
    5:11:14 for now aren’t, and we get to explore that mystery.
    5:11:20 So that’s, it’s just really cool to get to explore these questions that I don’t think,
    5:11:25 I would have never imagined would be even possible.
    5:11:32 Back when, so just watching with excitement, deep blue, because I wouldn’t have ever thought
    5:11:35 this kind of AI would be possible in my lifetime.
    5:11:38 It’s like, this is really feels like AI.
    5:11:39 It’s incredible.
    5:11:44 I started with AI of learning to fly as a quadrotor, it’s like, learn to fly, and it
    5:11:47 was just like, it learned to fly up, it would hit the ceiling and stop and catch it.
    5:11:51 It’s like, okay, that is like really stupid compared to what’s going on now.
    5:11:56 And now you could probably, with natural language, tell it to learn to fly, and it’s going to
    5:11:59 generate the control algorithm, the requirement to do that.
    5:12:03 There’s low level blockers, like we had to do some weird stuff for that, but you can,
    5:12:04 you definitely can.
    5:12:07 Back to our robotics conversation, yeah, when you have to interact in actual physical
    5:12:12 world as hard, what gives you hope about the future of human civilization?
    5:12:18 Looking into the next 10 years, 100 years, 1,000 years, how long do you think we’ll make
    5:12:19 it?
    5:12:22 Do you think we’ve got 1,000 years?
    5:12:27 Humans will definitely be around in 1,000 years, I think there’s ways that very bad
    5:12:31 things could happen that will be way fewer humans, but humans are very good at surviving.
    5:12:35 There’s been a lot of things that that is true.
    5:12:39 I don’t think they’re necessarily, we’re good at long-term credit assignment of risk,
    5:12:44 but when the risk becomes immediate, we tend to figure things out.
    5:12:51 For that reason, I’m like, there’s physical constraints to things like AGI, hyper recursive
    5:12:56 improvement to kill us all type stuff, physical reasons, and for how humans have figured things
    5:13:00 out before, I’m not too worried about it, AI takeover.
    5:13:05 There are other international things that are worrying, but there’s just fundamental human
    5:13:08 goodness and trying to amplify that.
    5:13:16 We’re on a tenuous time, and if you look at humanity as a whole, there’s been times where
    5:13:20 things go backwards, there’s times when things don’t happen at all, and we’re on what should
    5:13:23 be very positive trajectory right now.
    5:13:29 Yeah, there seems to be progress, but just with power, there’s spikes of human suffering.
    5:13:33 We want to try to minimize the amount of spikes.
    5:13:36 Generally humanity is going to suffer a lot less.
    5:13:37 I’m very optimistic about that.
    5:13:44 I do worry of techno-fascism type stuff arising as AI becomes more and more prevalent and
    5:13:48 powerful, and those who control it can do more and more.
    5:13:53 Maybe it doesn’t kill us all, but at some point, every very powerful human is going to
    5:13:58 want a brain-computer interface so that they can interact with AGI and all of its advantages
    5:14:05 in many more way and merge its mind with that person’s capabilities can leverage those much
    5:14:11 better than anyone else, and therefore won’t be one person rule them all, but the thing
    5:14:16 I worry about is it’ll be few people, hundreds, thousands, tens of thousands, maybe millions
    5:14:22 of people rule whoever’s left and the economy around it.
    5:14:28 That’s the thing that’s probably more worrisome is human machine amalgamations.
    5:14:32 This enables an individual human to have more impact on the world, and that impact can be
    5:14:35 both positive and negative.
    5:14:39 Generally humans have positive impacts on the world, at least societally, but it’s possible
    5:14:44 for individual humans to have such negative impacts, and AGI, at least as I think the
    5:14:49 labs define it, which is not a runaway sentient thing, but rather just something that can
    5:14:54 do a lot of tasks really efficiently, amplifies the capabilities of someone causing extreme
    5:14:56 damage.
    5:15:01 For the most part, I think it’ll be used for profit-seeking motives, which will then reduce,
    5:15:04 which will increase the abundance and supply of things, and therefore reduce suffering,
    5:15:05 right?
    5:15:07 What’s the goal?
    5:15:12 Scrolling on a timeline, just rolling a stasis.
    5:15:15 Scrolling holds the status quo of the world.
    5:15:16 That is a positive outcome, right?
    5:15:23 Like if I have food tubes and lumped up scrolling and I’m happy, that’s a positive outcome.
    5:15:30 While expanding out into the cosmos, well, this is a fun time to be alive.
    5:15:34 And thank you for pushing the forefront of what is possible in humans, and thank you
    5:15:35 for talking to me.
    5:15:36 This was fun.
    5:15:37 Thanks for having us.
    5:15:38 Thanks for having us.
    5:15:42 Thanks for listening to this conversation with Dylan Patel and Nathan Lambert.
    5:15:46 To support this podcast, please check out our sponsors in the description.
    5:15:52 And now, let me leave you some words from Richard Feynman.
    5:15:57 For a successful technology, reality must take precedence over public relations.
    5:16:01 For nature cannot be fooled.
    5:16:03 Thank you for listening, and I hope to see you next time.
    5:16:13 [MUSIC]
    5:16:23 [BLANK_AUDIO]

    Dylan Patel is the founder of SemiAnalysis, a research & analysis company specializing in semiconductors, GPUs, CPUs, and AI hardware. Nathan Lambert is a research scientist at the Allen Institute for AI (Ai2) and the author of a blog on AI called Interconnects.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep459-sc
    See below for timestamps, and to give feedback, submit questions, contact Lex, etc.

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Dylan’s X: https://x.com/dylan522p
    SemiAnalysis: https://semianalysis.com/
    Nathan’s X: https://x.com/natolambert
    Nathan’s Blog: https://www.interconnects.ai/
    Nathan’s Podcast: https://www.interconnects.ai/podcast
    Nathan’s Website: https://www.natolambert.com/
    Nathan’s YouTube: https://youtube.com/@natolambert
    Nathan’s Book: https://rlhfbook.com/

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Invideo AI: AI video generator.
    Go to https://invideo.io/i/lexpod
    GitHub: Developer platform and AI code editor.
    Go to https://gh.io/copilot
    Shopify: Sell stuff online.
    Go to https://shopify.com/lex
    NetSuite: Business management software.
    Go to http://netsuite.com/lex
    AG1: All-in-one daily nutrition drinks.
    Go to https://drinkag1.com/lex

    OUTLINE:
    (00:00) – Introduction
    (13:28) – DeepSeek-R1 and DeepSeek-V3
    (35:02) – Low cost of training
    (1:01:19) – DeepSeek compute cluster
    (1:08:52) – Export controls on GPUs to China
    (1:19:10) – AGI timeline
    (1:28:35) – China’s manufacturing capacity
    (1:36:30) – Cold war with China
    (1:41:00) – TSMC and Taiwan
    (2:04:38) – Best GPUs for AI
    (2:19:30) – Why DeepSeek is so cheap
    (2:32:49) – Espionage
    (2:41:52) – Censorship
    (2:54:46) – Andrej Karpathy and magic of RL
    (3:05:17) – OpenAI o3-mini vs DeepSeek r1
    (3:24:25) – NVIDIA
    (3:28:53) – GPU smuggling
    (3:35:30) – DeepSeek training on OpenAI data
    (3:45:59) – AI megaclusters
    (4:21:21) – Who wins the race to AGI?
    (4:31:34) – AI agents
    (4:40:16) – Programming and AI
    (4:47:43) – Open source
    (4:56:55) – Stargate
    (5:04:24) – Future of AI

    PODCAST LINKS:
    – Podcast Website: https://lexfridman.com/podcast
    – Apple Podcasts: https://apple.co/2lwqZIr
    – Spotify: https://spoti.fi/2nEwCF8
    – RSS: https://lexfridman.com/feed/podcast/
    – Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
    – Clips Channel: https://www.youtube.com/lexclips

  • #458 – Marc Andreessen: Trump, Power, Tech, AI, Immigration & Future of America

    AI transcript
    0:00:05 The following is a conversation with Mark Andreessen, his second time on the podcast.
    0:00:11 Mark is a visionary tech leader and investor who fundamentally shaped the development of
    0:00:15 the internet and the tech industry in general over the past 30 years.
    0:00:22 He is the co-creator of Mosaic, the first widely used web browser, co-founder of Netscape,
    0:00:28 co-founder of the legendary Silicon Valley Venture Capital firm Andreessen Horowitz, and
    0:00:33 is one of the most influential voices in the tech world, including at the intersection
    0:00:37 of technology and politics.
    0:00:40 And now, a quick few second mention of his sponsor.
    0:00:43 Check them out in the description, it’s the best way to support this podcast.
    0:00:49 We’ve got On-Cord for unifying your ML stack, GitHub for programming, Notion for team projects
    0:00:56 and collaboration, Shopify for merch, and Element for hydration, choose wisely my friends.
    0:01:02 Also, if you want to get in touch with me, for whatever reason, go to lexfreement.com/contact.
    0:01:06 And now, onto the full ad reads, no ads in the middle, I try to make this interesting,
    0:01:12 but if you skip them, please still check out the sponsors, I enjoy their stuff, maybe you
    0:01:13 will too.
    0:01:19 This episode is brought to you by On-Cord, a platform that provides data focused AI tooling
    0:01:25 for data annotation, curation, and management, and for model evaluation, once you train up
    0:01:29 the model on the data that you curate.
    0:01:34 In this conversation with Mark Andreessen, we actually discuss what he calls kind of
    0:01:37 like the trillion dollar questions.
    0:01:42 And one of them for AI is, how effective will synthetic data be?
    0:01:45 It really isn’t an open question.
    0:01:51 What piece, what fraction of the intelligence of future models will be based on training
    0:01:53 on synthetic data?
    0:01:57 At the top AI labs, I’m hearing a lot of optimism.
    0:02:02 As far as I can tell that optimism is not currently, at least in the general case, based
    0:02:04 on any real evidence.
    0:02:10 So I do think synthetic data will play a part, but how big a part?
    0:02:14 There’s still going to be some curation from humans, there’s still going to need to be
    0:02:15 a human in the loop.
    0:02:23 I think the real question is, how do you effectively integrate the human in the loop, so that the
    0:02:34 synthetic data, sort of 99 synthetic, 1% human, that combination can be most effective?
    0:02:35 That’s a real question.
    0:02:38 And companies like Concord are trying to solve that very problem.
    0:02:44 First of all, they want to provide the tooling for the annotation, for the actual human-iac
    0:02:50 collaboration, but also asking and answering the research question of how do you pull it
    0:02:55 all off and make the resulting model more intelligent for very specific applications and for the
    0:02:57 general applications?
    0:03:00 Yeah, so Concord does a really good job on the tooling side.
    0:03:08 Go try them out to curate, annotate, and manage your AI data at oncord.com/lex.
    0:03:12 That’s oncord.com/lex.
    0:03:17 This episode is brought to you by GitHub and GitHub Copilot.
    0:03:24 If you don’t know what that is, my friends you’re in for a joyous, beautiful surprise.
    0:03:32 I think a lot of people that program regularly know and love GitHub and know and love Copilot.
    0:03:40 It’s the OG AI programming assistant, and it’s the one that’s really trying to win this
    0:03:42 very competitive space.
    0:03:44 It is not easy.
    0:03:49 If you’re somebody that uses VS Code, obviously, well, maybe not obviously, but you can use
    0:03:54 GitHub Copilot in VS Code, but you can use it also in other IDEs.
    0:03:58 I’m going to be honest with you, it’s a very competitive space.
    0:04:06 I’m trying all the different tools in the space, and I really love how much GitHub and
    0:04:10 GitHub Copilot want to win in this competitive space.
    0:04:17 I’m excitedly sitting back and just eating popcorn like that, Michael Jackson meme, and
    0:04:20 just enjoying the hell out of it.
    0:04:27 Like I said, I’m going to be doing a bunch of programming episodes, including with Primogen.
    0:04:34 He I think has a love/hate relationship with AI and with AI agents, and with the role of
    0:04:36 AI in the programming experience.
    0:04:42 He’s really at the forefront of people that are playing with all these languages, with
    0:04:48 all these different applications, with all the different use cases of code, and he is
    0:04:53 a new VM user, so he’s going to be skeptical in general of new technology.
    0:04:58 He’s a curmudgeon sitting on a porch, on a rocking chair, screaming at the kids, throwing
    0:05:04 stuff at them, but at the same time, he’s able to play with the kids as well, so I am more
    0:05:09 on the kids side, with a childlike joy, enjoy the new technology.
    0:05:18 For me, basically everything I do, programming-wise, has the possibility of AI either reviewing
    0:05:21 it or assisting it.
    0:05:23 It’s constantly in the loop.
    0:05:29 Even if I’m writing stuff from scratch, I’m always just kind of one second away from asking
    0:05:33 a question about the code, or asking it to generate, or rewrite a certain line, or to
    0:05:39 add a few more lines, all that kind of stuff, so I’m constantly, constantly using it.
    0:05:45 If you’re learning to code, or if you’re an advanced programmer, it is really important
    0:05:49 that you get better and better at using AI as an assistant programmer.
    0:05:55 Get started with GitHub Copilot for free today at gh.io/copilot.
    0:06:00 This episode is also brought to you by Notion, a note-taking and team collaboration tool
    0:06:05 that Mark Andreessen, on this very episode, sings a lot of praises to.
    0:06:07 I believe he sings, “Was it on mic or off mic?”
    0:06:10 I don’t remember, but anyway, he loves it.
    0:06:15 It’s one of the tools, one of the companies, one of the ecosystems that integrate AI really
    0:06:19 effectively for team applications.
    0:06:25 You have, let’s see, docs, and wikis, and projects, and all that kind of stuff.
    0:06:30 You can have the AI load all of that in, and answer questions based on that.
    0:06:36 You can connect a bunch of apps, like you can connect Slack, you can connect Google Drive.
    0:06:43 I think in the context, we were talking about something like Notion for email, for Gmail.
    0:06:47 I don’t know if Notion integrates email yet.
    0:06:53 They’re just like this machine that’s constantly increasing the productivity of every aspect
    0:06:57 of your life, so I’m sure they’re going to start integrating more and more apps.
    0:07:02 I use it for Slack and Google Drive, but I use it primarily at the individual level for
    0:07:07 note-taking, and even at the individual level, just incredible what Notion AI can do.
    0:07:12 Try it out for free when you go to Notion.com/lex.
    0:07:19 It’s all lowercase Notion.com/lex to try the power of Notion AI today.
    0:07:23 This episode is also brought to you by Shopify, a platform designed for anyone to sell anywhere
    0:07:26 with a great looking online store.
    0:07:35 There are a few people embody the joy and the power of capitalism than Mark Andreessen.
    0:07:43 I was at a thing where Mark and Toby were both there, and then we were chatting, and
    0:07:47 they were very friendly, so I think they’re friends, and I got to hang out with Toby.
    0:07:50 It was, again, an incredible person.
    0:07:56 I said it again and again, and it’s almost becoming funny that eventually we’ll do a
    0:07:57 podcast.
    0:07:59 I don’t know why we haven’t done a podcast.
    0:08:05 There’s a few people in my life where it’s like, like, Jeffrey Hinton is one of those
    0:08:06 people.
    0:08:12 It’s like, we’ve agreed to do a podcast for so long, and we’ve just been kind of lazy
    0:08:15 about it, and Toby’s the same.
    0:08:17 Anyway, he’s a CEO of Shopify.
    0:08:21 I don’t even know if he knows that Shopify sponsors this podcast.
    0:08:23 It doesn’t matter.
    0:08:27 It goes without saying, it should be obvious to everybody, that one doesn’t affect the
    0:08:28 other.
    0:08:33 I’m very fortunate to have way more sponsors than we could possibly fit, so I could pick
    0:08:39 whoever the hell I want, and whatever guests I choose will never have anything to do with
    0:08:42 the companies that sponsor the podcast.
    0:08:45 There’s not even like a tinge of influence.
    0:08:50 In fact, if there’s anything, it’ll be the opposite direction, but I also try to avoid
    0:08:51 that.
    0:08:57 It’s possible I talk to the CEO of GitHub, for example, on this podcast, and GitHub sponsors
    0:08:58 this podcast.
    0:09:03 It’s possible I talk to the CEO of Shopify, Toby, and Shopify sponsors this podcast.
    0:09:08 One doesn’t affect the other, and obviously, again, goes without saying, but let me say
    0:09:16 it, make it explicit that nobody can buy their way onto the podcast, whether through sponsorships
    0:09:19 or buying me dinner or whatever.
    0:09:20 I don’t know.
    0:09:27 It’s just, it’s impossible, and most likely, if that’s attempted, it’s going to backfire
    0:09:33 so I think people intuitively know not to attempt because it would really piss me off.
    0:09:37 Anyway, this is a detour.
    0:09:38 We’re supposed to talk about Shopify.
    0:09:44 I have a Shopify store, lexforumni.com/store, that sells t-shirts, but you can sell more
    0:09:49 sophisticated stuff, make a lot of money, and participate in this beautiful machinery
    0:09:50 of capitalism.
    0:09:55 Sign up for a $1 per month trial period at Shopify.com/lex.
    0:09:56 That’s all over the case.
    0:10:01 Go to Shopify.com/lex to take your business to the next level today.
    0:10:07 This episode is also brought to you by Element, my daily zero sugar and delicious electrolyte
    0:10:13 mix of which I consume very ridiculously large amounts.
    0:10:17 You know, salt used to be currency in the ancient world.
    0:10:18 How silly are humans?
    0:10:26 They’re not silly, how sort of surprising the things we converge on as being the store
    0:10:31 of value, just value in general, the kind of things we assign value to together.
    0:10:41 We just kind of all agree that this item, this material, this idea, this building is
    0:10:49 extremely valuable, and then we compete over that resource, or that idea, or that building.
    0:10:54 We fight, and sometimes there is wars, and sometimes there is complete destruction, and
    0:10:59 the rise and fall of empires, all over some resource.
    0:11:04 What a funny, strange little world.
    0:11:12 Completely harmless as H. Hiker’s guide to the galaxy summarizes humans.
    0:11:15 For some reason, instead of that book, I was going to say Catcher in the Rye.
    0:11:21 In my exhausted brain, the books kind of all morph together, but Catcher in the Rye is
    0:11:23 a really damn good book.
    0:11:28 All of the classics I return to often, the simple books, even like the first book I read
    0:11:34 in English, Traveal Book, Traveal Book, called The Giver.
    0:11:42 It’s like I return to it in its simplicity, maybe it has sentimental value, maybe that’s
    0:11:46 what it is, but just the simplicity of words, Animal Farmer, I’ve read, I don’t know how
    0:11:51 many times, probably over 50 times, I return to it over and over and over, the simplicity,
    0:11:54 the poetry of that simplicity.
    0:11:59 That’s something that just resonates with my brain, maybe it’s a peculiar kind of brain.
    0:12:08 It is a peculiar kind of brain, and I have to thank you for being patient with this peculiar
    0:12:09 kind of brain.
    0:12:14 Get a simple pack for free with any purchase of whatever the thing I was talking about,
    0:12:16 which I think is Element.
    0:12:21 Try it at drinkelement.com/lex.
    0:12:22 This is a Lex Friedman podcast.
    0:12:26 To support it, please check out our sponsors in the description.
    0:12:39 And now, dear friends, here’s Mark and Risen.
    0:12:48 All right, let’s start with optimism.
    0:12:57 If you were to imagine the best possible one to two years, 2025, ’26, for tech, for big
    0:13:01 tech and small tech, what would it be, what would it look like, lay out your vision for
    0:13:05 the best possible scenario trajectory for America?
    0:13:06 The roaring 20s.
    0:13:07 The roaring 20s.
    0:13:08 The roaring 20s.
    0:13:09 I mean, look, a couple of things.
    0:13:14 It is remarkable over the last several years with all of the issues, including not just
    0:13:17 everything in politics, but also COVID and every other thing that’s happened.
    0:13:18 It’s really amazing.
    0:13:19 The US just kept growing.
    0:13:21 If you just look at economic growth charts, the US just kept growing.
    0:13:23 And very significantly, many other countries stopped growing.
    0:13:27 So Canada stopped growing, the UK stopped growing, Germany stopped growing.
    0:13:31 And some of those countries may be actually going backwards at this point.
    0:13:34 And there’s a very long discussion to be had about what’s wrong with those countries.
    0:13:37 And there’s, of course, plenty of things that are wrong with our country.
    0:13:41 But the US is just flat out primed for growth.
    0:13:47 And I think that’s a consequence of many factors, some of which are lucky and some of
    0:13:48 which through hard work.
    0:13:52 And so the lucky part is just, number one, we just have incredible physical security
    0:13:54 by being our own continent.
    0:13:56 We have incredible natural resources.
    0:14:00 There’s this running joke now that whenever it looks like the US is going to run out of
    0:14:04 some rare earth material, some farmer in North Dakota kicks over a hay bale and finds like
    0:14:05 a $2 trillion deposit.
    0:14:12 I mean, we’re just blessed with geography and the natural resources.
    0:14:14 We can be energy independent anytime we want.
    0:14:16 This last administration decided they didn’t want to be.
    0:14:18 They wanted to turn off American energy.
    0:14:22 This new administration has declared that they have a goal of turning it on in a dramatic
    0:14:23 way.
    0:14:24 There’s no question we can be energy dependent.
    0:14:26 We can be a giant net energy exporter.
    0:14:28 It’s purely a question of choice.
    0:14:31 And I think the new administration is going to do that.
    0:14:33 And so, oh, and then I would say two other things.
    0:14:38 One is, we are the beneficiaries, and you’re an example of this, we’re a beneficiary, we’re
    0:14:43 the beneficiary of 50, 100, 200 years of like the basically most aggressive, driven, smartest
    0:14:46 people in the world, most capable people moving to the US and raising their kids here.
    0:14:50 And so, we just have, you know, by far the most dynamic, you know, we’re by far the
    0:14:54 most dynamic population, most aggressive, you know, we’re the most aggressive set of
    0:14:59 characters in certainly in any Western country and have been for a long time and certainly
    0:15:00 are today.
    0:15:03 And then finally, I would just say, look, we are overwhelmingly the advanced technology
    0:15:04 leader.
    0:15:08 You know, we have our issues and we have, I would say, a particular issue with manufacturing,
    0:15:12 which we could talk about, but for, you know, anything in software or anything in AI, anything
    0:15:16 in, you know, all these, you know, advanced biotech, all these advanced areas of technology,
    0:15:20 like we’re by far the leader, again, in part because many of the best scientists and engineers
    0:15:23 in those fields, you know, you don’t come to the US.
    0:15:29 And so, we just, we have all of the preconditions for a, for just a monster, boom, you know,
    0:15:32 I could see economic growth going way up, I could see productivity growth going way
    0:15:35 up, rate of technology adoption going way up, and then we could, we can do a global
    0:15:40 tour if you like, but like, basically all of our competitors have like profound issues
    0:15:44 and, you know, we could kind of go through them one by one, but the competitive landscape
    0:15:49 just is, it’s like, it’s, it’s remarkable how, how, how much better position we are
    0:15:50 for growth.
    0:15:54 What about the humans themselves, almost philosophical questions, you know, I travel across the world
    0:16:00 and there’s something about the American spirit, the entrepreneurial spirit that’s uniquely
    0:16:02 intense in America.
    0:16:03 I don’t know what that is.
    0:16:11 I’ve talked to a saga who claims it might be the Scots Irish blood that runs through
    0:16:12 the history of America.
    0:16:13 What is it?
    0:16:17 You, at the heart of Silicon Valley, is there something in the water?
    0:16:19 Why is there this entrepreneurial spirit?
    0:16:20 Yeah.
    0:16:22 So is this a family show or am I allowed to swear?
    0:16:23 You can say whatever the fuck you want.
    0:16:24 Okay.
    0:16:28 So the TV, the great TV show succession, the show, of course, that would, which you were
    0:16:30 intended to root for exactly zero of the characters.
    0:16:31 Yes.
    0:16:34 The show succession was in the final episode of the first season when the whole family
    0:16:39 is over in Logan Roy’s ancestral homeland of Scotland and they’re at this castle, you
    0:16:40 know, for some wedding.
    0:16:43 And Logan is just like completely miserable after having to, you know, because he’s been
    0:16:47 in New York for 50 years, he’s totally miserable being back in, in Scotland and he gets in
    0:16:51 some argument with somebody and he’s like, he says, finally, just says, my God, I cannot
    0:16:58 wait to get out of here and go back to America where we could fuck without condoms.
    0:17:01 So was that a metaphor or okay, exactly, right?
    0:17:04 And so no, but it’s exactly the thing and everybody instantly knows what they’re like.
    0:17:07 Everybody watching that instantly starts laughing because you know what it means, which is exactly
    0:17:08 this.
    0:17:09 I think there’s like an ethnographic way of it.
    0:17:12 There’s a bunch of books on like all, like you said, the Scots-Irish, like all the different
    0:17:15 derivations of all the different ethnic groups that have come to the U.S. over the course
    0:17:17 of the last 400 years, right?
    0:17:22 But what we have is this sort of amalgamation of like, you know, the northeast Yankees who
    0:17:26 were like super tough and hardcore, yeah, the Scots-Irish are super aggressive.
    0:17:31 You know, we’ve got the Southerners and the Texans, you know, and the sort of whole kind
    0:17:35 of blended, you know, kind of Anglo-Hispanic thing, super incredibly tough, strong driven,
    0:17:40 you know, capable characters, you know, the Texas Rangers, you know, we’ve got the, yeah,
    0:17:43 we’ve got the California, you know, we’ve got the, you know, the wild, we’ve got the
    0:17:47 incredibly, you know, inventive hippies, but we also have the hardcore engineers, we’ve
    0:17:50 got, you know, the best, you know, rocket scientists in the world, we’ve got the best,
    0:17:53 you know, artists in the world, you know, creative professionals, you know, the best
    0:17:54 movies.
    0:18:00 And so, yeah, there is, you know, all of our problems, I think, are basically, you know,
    0:18:04 in my view, to some extent, you know, attempts to basically sand all that off and make everything
    0:18:09 basically boring and mediocre, but there is something in the national spirit that basically
    0:18:10 keeps bouncing back.
    0:18:14 And basically what we discover over time is we basically just need people to stand up
    0:18:17 at a certain point and say, you know, it’s time to, you know, it’s time to build, it’s
    0:18:20 time to grow, you know, it’s time to do things.
    0:18:23 And so, and there’s something in the American spirit that just like, we’re just right back
    0:18:24 to life.
    0:18:28 And before I actually saw, you know, I saw it as a kid here in the early 80s, you know,
    0:18:34 because the 70s were like horribly depressing, right, in the U.S., like they were a nightmare
    0:18:35 on many fronts.
    0:18:40 And in a lot of ways, the last decade to me has felt a lot like the 70s, just being mired
    0:18:45 in misery and just this self-defeating, you know, negative attitude and everybody’s upset
    0:18:46 about everything.
    0:18:50 And, you know, and then by the way, like energy crisis and hostage crisis and foreign wars
    0:18:56 and just demoralization, right, you know, the low point for in the 70s was, you know,
    0:18:59 Jimmy Carter, who just passed away, he went on TV and he gave this speech known as the
    0:19:00 Malay speech.
    0:19:04 And it was like the weakest possible trend to like rouse people back to a sense of like
    0:19:05 passion completely failed.
    0:19:10 And, you know, we had the, you know, the hostages in, you know, Iran for I think 440 days and
    0:19:14 every night on the nightly news, it was, you know, lines around the block, energy crisis,
    0:19:16 depression, inflation.
    0:19:19 And then, you know, Reagan came in and, you know, Reagan was a very controversial character
    0:19:23 at the time and, you know, he came in and he’s like, nope, it’s morning in America.
    0:19:25 And we’re the shining city on the hill and we’re going to do it.
    0:19:26 And he did it.
    0:19:27 And we did it.
    0:19:29 And the national spirit came roaring back and, you know, word really hard for a full
    0:19:30 decade.
    0:19:33 And I think that’s exactly what, I think, you know, we’ll see, but I think that’s what
    0:19:34 could happen here.
    0:19:39 And I just did a super long podcast on Milton Friedman with Jennifer Burns, who’s this incredible
    0:19:41 professor at Stanford.
    0:19:42 And he was part of the Reagan.
    0:19:46 So there’s a bunch of components to that, one of which is economic.
    0:19:47 Yes.
    0:19:52 And one of which, maybe you can put a word on it of not to be romantic or anything, but
    0:19:58 freedom, individual freedom, economic freedom, political freedom, and just in general, individualism.
    0:20:00 Yeah, that’s right.
    0:20:01 Yeah.
    0:20:05 And as you know, as America has this incredible streak of individualism, you know, and individualism
    0:20:09 in America probably peaked, I think, between roughly, call it the end of the Civil War,
    0:20:14 1865 through to probably call it 1931 or something, you know, and there was this like incredible
    0:20:15 run.
    0:20:17 I mean, that period, you know, we now know that period is the Second Industrial Revolution.
    0:20:21 And it’s when the United States basically assumed global leadership and basically took
    0:20:24 over technological and economic leadership from England.
    0:20:27 And then, you know, that led to, you know, ultimately then, therefore being able to,
    0:20:30 you know, not only industrialize the world, but also win World War II and then win the
    0:20:31 Cold War.
    0:20:36 And yeah, you know, there’s a massive industrial, you know, massive individualistic streak.
    0:20:39 By the way, you know, Milton Friedman’s old videos are all on YouTube.
    0:20:46 They are every bit as compelling and inspiring as they were then, you know, he’s a singular
    0:20:51 figure and many of us, you know, I never knew him, but he was actually at Stanford for many
    0:20:52 years at the Hoover Institution.
    0:20:53 But I never met him.
    0:20:57 But I know a lot of people who worked with him and, you know, he was a singular figure,
    0:21:02 but his, all of his lessons, you know, live on are fully available.
    0:21:05 But I would also say it’s not just individualism and this is, you know, this is one of the
    0:21:08 big things that’s like playing out in a lot of our culture and kind of political fights
    0:21:12 right now, which is, you know, basically this feeling, you know, certainly that I have and
    0:21:16 I share with a lot of people, which is it’s not enough for America to just be an economic
    0:21:20 zone and it’s not enough for us to just be individuals and it’s not enough to just have
    0:21:23 line go up and it’s not enough to just have economic success.
    0:21:29 There are deeper questions at play and also, you know, there’s more to a country than just
    0:21:30 that.
    0:21:32 And, you know, quite frankly, a lot of it is intangible.
    0:21:37 A lot of it is, you know, involved spirit and passion and, you know, like I said, we
    0:21:41 have more of it than anybody else, but, you know, we have to choose to want it.
    0:21:43 The way I look at it is like all of our problems are self-inflicted.
    0:21:46 Like they’re, you know, decline is a choice.
    0:21:50 You know, all of our problems are basically demoralization campaigns, you know, basically
    0:21:53 people telling us, people in positions of authority telling us that we should, you know,
    0:21:55 we shouldn’t, you know, stand out.
    0:21:56 We shouldn’t be adventurous.
    0:21:57 We shouldn’t be exciting.
    0:21:58 We shouldn’t be exploratory.
    0:22:01 You know, we shouldn’t, you know, this, that and the other thing and we should feel bad
    0:22:02 about everything that we do.
    0:22:06 And I think we’ve lived through a decade where that’s been the prevailing theme and I think
    0:22:10 quite honestly, as of November, I think people are done with it.
    0:22:14 If we could go on a tangent of a tangent, since we’re talking about individualism and
    0:22:19 that’s not all that it takes, you’ve mentioned in the past the book, The Ancient City, by,
    0:22:24 if I could only pronounce the name French historian, Numa Denis Foustel de Coulombe.
    0:22:25 I don’t know.
    0:22:26 That was amazing.
    0:22:27 Okay.
    0:22:28 All right.
    0:22:29 From the 19th century.
    0:22:30 Anyway, you said this is an important book to understand who we are and where we come
    0:22:31 from.
    0:22:34 So what that book does, it’s actually quite a striking book.
    0:22:40 So the book is written by this guy, as a profusive, let’s do the pronunciations, foreign language
    0:22:42 pronunciations for the day.
    0:22:50 He was a professor of classics at the Sorbonne in Paris, you know, the top university in
    0:22:51 the, actually in the 1860s.
    0:22:57 So actually right around after the U.S. Civil War and he was a savant of a particular kind,
    0:23:00 which is he, and you can see this in the book, is he had apparently read and sort of absorb
    0:23:06 and memorized every possible scrap of Greek and Roman literature and so it’s like a walking
    0:23:09 like index on basically Greek and Roman, everything we know about Greek and Roman culture.
    0:23:11 And that’s significant.
    0:23:13 The reason this matters is because basically none of that has changed, right?
    0:23:17 And so he had access to the exact same materials that we have, we have access to.
    0:23:19 And so there, you know, we’ve learned nothing.
    0:23:21 And then specifically what he did is he talked about the Greeks and the Romans, but specifically
    0:23:23 what he did is he went back further.
    0:23:26 He reconstructed the people who came before the Greeks and the Romans and what their life
    0:23:27 and society was like.
    0:23:30 And these were the people who were now known as the, as the Indo-Europeans.
    0:23:33 And these were, or you may have heard of these, these are the people who came down from the
    0:23:34 steppes.
    0:23:37 And so they came out of what’s now like Eastern Europe, like around sort of the outskirts of
    0:23:38 what’s now Russia.
    0:23:40 And then they sort of swept through Europe.
    0:23:44 They ultimately took over all of Europe, by the way, you know, almost many of the ethnicities
    0:23:48 in the Americas, the hundreds of years to follow, you know, are Indo-European.
    0:23:51 So like, you know, they were this basically this warrior, basically class that like came
    0:23:55 down and swept through and, and, and, and, and, you know, essentially, you know, populated
    0:23:56 much of the world.
    0:23:58 And there’s a whole interesting saga there.
    0:24:01 But what he does, and then they basically, they, they, from there came basically what
    0:24:04 we know as the Greeks and the Romans were kind of evolutions off of that.
    0:24:08 And so what he reconstructs is sort of what life was like, what life was like, at least
    0:24:11 in the West for people in their kind of original social state.
    0:24:15 And the significance of that is, is the original social state is this is living in the state
    0:24:20 of the absolute imperative for survival with absolutely no technology, right?
    0:24:22 Like no modern systems, no nothing, right?
    0:24:23 You’ve got the clothes on your back.
    0:24:27 You’ve got your, you know, you’ve got whatever you can build with your bare hands, right?
    0:24:30 This is, you know, predates basically all concepts of, of, of technologies we understand
    0:24:31 that today.
    0:24:35 And so these are people under like maximum levels of physical survival pressure.
    0:24:37 And so what, what social patterns did they evolve to be able to do that?
    0:24:43 And then the social pattern basically was as follows, is a three part social structure,
    0:24:50 family, tribe and city and zero concept of individual rights and essentially no concept
    0:24:51 of individualism.
    0:24:54 And so you were not an individual, you were a member of your family.
    0:24:58 And then a set of families would aggregate into a tribe and then a set of tribes would
    0:25:01 aggregate into a, into a city.
    0:25:05 And then the morality was completely, it was actually what Nietzsche talks, Nietzsche
    0:25:08 talks about, the morality was entirely master morality, not slave morality.
    0:25:12 And so in their morality, anything that was strong was good and anything that was weak
    0:25:13 was bad.
    0:25:14 And it’s very clear why that is, right?
    0:25:18 It’s because strong equals good equals survive, weak equals bad equals die.
    0:25:22 And that led to what became known later as the master slave dialectic, which is, is it
    0:25:25 more important for you to live on your feet as a master, even if the risk of dying?
    0:25:28 Or are you willing to, you know, live as a slave on your knees in order to not die?
    0:25:32 And this is sort of the, the derivation of that moral framework.
    0:25:35 Christianity later inverted that moral framework, but it, you know, the original framework lasted
    0:25:38 for, you know, many, many thousands of years.
    0:25:40 No concept of individualism, the head of the family had total life and death control over
    0:25:44 the, over the family, the head of the tribe, same thing, head of the city, same thing.
    0:25:48 And then you were morally obligated to kill members of the, of the other cities on contact.
    0:25:49 Right?
    0:25:52 You were morally required to, like if you didn’t do it, you were a bad person.
    0:25:59 Um, and then the form of the society was basically maximum fascism combined with maximum communism.
    0:26:00 Right?
    0:26:04 And so it was maximum fascism in the form of this, like absolute top-down control where
    0:26:07 the head of the family tribe or city could kill other members of the community at any
    0:26:10 time with no repercussions at all.
    0:26:14 So maximum hierarchy, but combined with maximum communism, which is no market economy.
    0:26:16 And so everything gets shared, right?
    0:26:19 And sort of the point of being in one of these collectives is that it’s a collective and,
    0:26:21 and, and, you know, and people are sharing.
    0:26:24 And of course that limited how big they could get cause, you know, the problem with communism
    0:26:25 is it doesn’t scale.
    0:26:26 Right?
    0:26:27 It works at the level of a family.
    0:26:31 It’s much harder to make it work at the level of a country, impossible, maximum fascism,
    0:26:32 maximum communism.
    0:26:37 And then, and then it was all intricately tied into their religion and their, their religion
    0:26:39 was in two parts.
    0:26:43 It was a veneration of ancestors and it was veneration of nature.
    0:26:47 And the veneration of ancestors is extremely important because it was basically like basically
    0:26:50 the ancestors were the people who got you to where you were, the ancestors were the people
    0:26:52 who had everything to teach you.
    0:26:53 Right?
    0:26:55 And then it was veneration of nature cause of course nature is the thing that’s trying
    0:26:56 to kill you.
    0:27:00 Um, and then you had your ancestor, every family tribe or city had their ancestor gods
    0:27:02 and then they had their, um, they had their nature gods.
    0:27:03 Okay.
    0:27:04 So fast forward to today.
    0:27:07 Like we live in a world that is like radically different, but in the book takes you through
    0:27:11 kind of what happened from that through the Greeks and Romans through to Christianity.
    0:27:14 And so the, but it, but it’s very helpful to kind of think in these terms because the
    0:27:19 conventional view of the progress through time is that we are, you know, the cliche is the
    0:27:22 arc of the, you know, moral universe, you know, Ben Stor’s justice, right?
    0:27:25 Or so-called Whig history, which is, you know, that the arc of progress is positive, right?
    0:27:29 And so we, you know, what you hear all the time, what you’re taught in school and everything
    0:27:32 is, you know, every year that goes by, we get better and better and more and more moral
    0:27:35 and more and more people are in a better version of ourselves.
    0:27:39 Our Indo European ancestors would say, Oh no, like you people have like fallen to shit.
    0:27:43 Like you people took all of the principles of basically your civilization and you have
    0:27:47 deluded them down to the point where they barely even matter, you know, and you’re having,
    0:27:50 you know, children at a wedlock and you’re, you know, you regularly encounter people of
    0:27:54 other cities and you don’t try to kill them and like, how crazy is that?
    0:27:58 And they would basically consider us to be living like an incredibly diluted version of
    0:28:01 this sort of highly religious, highly cult-like, right?
    0:28:04 Highly organized, highly fascist, fascist communist society.
    0:28:10 I can’t resist noting that as a consequence of basically going through all the transitions
    0:28:14 we’ve been through, going all the way through Christianity, coming out the other end of Christianity,
    0:28:18 Nietzsche declares God is dead, we’re in a secular society, you know, that still has,
    0:28:21 you know, tinge is a Christianity, but, you know, largely prides itself on no longer being
    0:28:27 religious in that way, you know, we being the sort of most fully evolved, modern, secular,
    0:28:32 you know, expert scientists and so forth have basically re-evolved or fallen back on the
    0:28:36 exact same religious structure that the Indo Europeans had, specifically ancestor worship,
    0:28:42 which is identity politics, and nature worship, which is environmentalism.
    0:28:45 And so we have actually like worked our way all the way back to their cult religions without
    0:28:46 realizing it.
    0:28:49 And it just goes to show that, like, you know, in some ways we have fallen far from the, far
    0:28:53 from the family tree, but in some cases we’re exactly the same.
    0:29:00 You kind of described this progressive idea of wokeism and so on as worshipping ancestors.
    0:29:02 Identity politics is worshipping ancestors, right?
    0:29:07 It’s tagging newborn infants with either, you know, benefits or responsibilities or, you
    0:29:10 know, levels of condemnation based on who their ancestors were.
    0:29:13 The Indo Europeans would have recognized it on site.
    0:29:15 We somehow think it’s like super socially progressive.
    0:29:16 Yeah.
    0:29:17 And it is not.
    0:29:19 I mean, I would say obviously not.
    0:29:23 Let’s, you know, get new answers, which is where I think you’re headed, which is, look,
    0:29:27 is the idea that you can like completely reinvent society every generation and have no regard
    0:29:28 whatsoever for what came before you?
    0:29:30 That seems like a really bad idea, right?
    0:29:33 That’s like the Cambodians with your zero underpull pot and, you know, death, you know,
    0:29:34 follows.
    0:29:40 It’s obviously the Soviets tried that, you know, the, you know, the utopian fantasists
    0:29:43 who think that they can just rip up everything that came before and create something new
    0:29:44 in the human condition.
    0:29:47 And human society have a very bad history of causing, you know, enormous destruction.
    0:29:51 So on the one hand, it’s like, okay, there is like a deeply important role for tradition.
    0:29:56 And the way I think about that is it’s the process of evolutionary learning, right?
    0:30:00 Which is what tradition ought to be is the distilled wisdom of all, and, you know, this
    0:30:01 is not even what Europeans thought about it.
    0:30:04 It should be the distilled wisdom of everybody who came before you, right?
    0:30:07 All those important and powerful lessons learned.
    0:30:09 And that’s why I think it’s fascinating to go back and study how these people lived is
    0:30:12 because that’s part of the history and, you know, part of the learning of the goddess
    0:30:14 to where we are today.
    0:30:17 Having said that, there are many cultures around the world that are, you know, mired
    0:30:20 in tradition to the point of not being able to progress.
    0:30:23 And in fact, you might even say globally, that’s the default human condition, which
    0:30:26 is, you know, a lot of people are in societies in which, you know, there’s like absolute
    0:30:30 seniority by age, you know, kids are completely, you know, like in the U.S., like for some
    0:30:32 reason, we decided kids are in charge of everything, right?
    0:30:35 And like, you know, they’re the trendsetters and they’re allowed to like set all the agendas
    0:30:39 and like set all the politics and set all the culture and maybe that’s a little bit crazy.
    0:30:42 But like in a lot of other cultures, kids have no voice at all, no role at all, because
    0:30:46 it’s the old people who are in charge of everything, you know, they’re gerontocracies.
    0:30:50 And it’s all a bunch of 80-year-olds running everything, which by the way, we have a little
    0:30:52 bit of that too, right?
    0:30:57 And so I would say is like, there’s a down, there’s a real downside, you know, full traditionalism
    0:31:02 as communitarianism, you know, it’s ethnic particularism, you know, it’s ethnic chauvinism,
    0:31:07 it’s, you know, this incredible level of resistance to change, you know, that’s, I mean, it just
    0:31:08 doesn’t get you anywhere.
    0:31:12 It may be good and fine at the level of an individual tribe, but as a society living
    0:31:15 in the modern world, you can’t evolve, you can’t advance, you can’t participate in
    0:31:18 all the good things that, you know, that have happened.
    0:31:21 And so, you know, I think probably this is one of those things where extremeness on either
    0:31:23 side is probably a bad idea.
    0:31:29 And I, but, you know, but this needs to be approached in a sophisticated and nuanced way.
    0:31:35 So the beautiful picture you painted of the roaring 20s, how can the Trump administration
    0:31:37 play a part in making that future happen?
    0:31:38 Yeah.
    0:31:42 So look, a big part of this is getting the government boot off the neck of the American
    0:31:47 economy, the American technology industry, the American people, you know, and then again,
    0:31:50 this is a replay of what happened in the 60s and 70s, which is, you know, for what started
    0:31:54 out looking like, you know, I’m sure good and virtuous purposes, you know, we, we ended
    0:31:57 up both that and now with this, you know, what I, what I describe as sort of a form of soft
    0:32:01 authoritarianism, you know, the good news is it’s not like a military dictatorship.
    0:32:05 It’s not like, you know, you get thrown into Lou Bianca, you know, for the most part, it’s
    0:32:07 not coming at four in the morning, you’re not getting dragged off to a cell.
    0:32:10 So it’s not hard authoritarianism, but it is soft authoritarianism.
    0:32:15 And so it’s this, you know, incredible, suppressive blanket of regulation rules, you know, this
    0:32:17 concept of a vetocracy, right?
    0:32:20 What’s required to get anything done, you know, you need to get 40 people to sign off
    0:32:24 in anything, any one of them can veto it, you know, there’s a lot of how our now political
    0:32:26 system works.
    0:32:30 And then, you know, just this general idea of, you know, progress is bad and technology
    0:32:34 is bad and capitalism is bad and building businesses is bad and success is bad.
    0:32:39 You know, tall poppy syndrome, you know, basically anybody who sticks their head up,
    0:32:41 you know, deserves to get it, you know, chopped off.
    0:32:44 Anybody who’s wrong about anything deserves to get condemned forever.
    0:32:49 You know, just this very kind of, you know, grinding, you know, repression and then coupled
    0:32:55 with specific government actions such as censorship regimes, right and debanking, right?
    0:33:00 And you know, draconian, you know, deliberately kneecapping, you know, critical American industries.
    0:33:03 And then, you know, congratulating yourself in the back for doing it or, you know, having
    0:33:06 these horrible social policies like let’s let all the criminals out of jail and see what
    0:33:07 happens.
    0:33:08 Right.
    0:33:11 And so like, we’ve just been through this period, you know, I call it a demoralization
    0:33:14 campaign, like we’ve just been through this period where, you know, whether it started
    0:33:17 that way or not, it ended up basically being this comprehensive message that says you’re
    0:33:22 terrible and if you try to do anything, you’re terrible and fuck you.
    0:33:25 And the Biden administration reached kind of the full pinnacle of that in our time.
    0:33:29 They got really bad on many fronts at the same time.
    0:33:34 And so just like relieving that and getting kind of back to it reasonably, you know, kind
    0:33:40 of optimistic, constructive, you know, pro-growth frame of mind, there’s just, there’s so much
    0:33:43 pent-up energy and potentially the American system of that alone is gonna, I think, cause,
    0:33:46 you know, growth and spirit to take off.
    0:33:49 And then there’s a lot of things proactively, but yeah, and then there’s a lot of things
    0:33:50 proactively that could be done.
    0:33:52 So how do you relieve that?
    0:33:59 To what degree has the thing you described ideologically permeated government and permeated
    0:34:00 big companies?
    0:34:03 Disclaimer at first, which is I don’t want to predict anything on any of this stuff because
    0:34:08 I’ve learned the hard way that I can’t predict politics or Washington at all.
    0:34:11 But I would just say that the plans and intentions are clear and the staffing supports it.
    0:34:15 And all the conversations are consistent with the new administration and that they plan
    0:34:19 to take, you know, very rapid action on a lot of these fronts very quickly.
    0:34:21 They’re gonna do as much as they can through executive orders and then they’re gonna do
    0:34:24 legislation and regulatory changes for the rest.
    0:34:26 And so they’re gonna move, I think, quickly on a whole bunch of stuff.
    0:34:29 You can already feel, I think, a shift in the national spirit, or at least, let’s put
    0:34:30 it this way.
    0:34:33 I feel it for sure and Silicon Valley like it, you know, I mean, we, you know, we just
    0:34:36 saw a great example of this with what, you know, with what Mark Zuckerberg is doing.
    0:34:39 You know, obviously I’m involved with his company, but, you know, we just saw it kind
    0:34:44 of in public, the scope and speed of the changes, you know, are reflective of sort of this, of
    0:34:45 a lot of these shifts.
    0:34:49 But I would say that that same conversation, those same kinds of things are happening throughout
    0:34:50 the industry, right.
    0:34:54 And so the tech industry itself, whether people were pro-Trump or anti-Trump, like there’s
    0:34:57 just like a giant five shift mood shift that’s like kicked in already.
    0:35:02 And then I was with a group of Hollywood people about two weeks ago, and they were still,
    0:35:04 you know, people who at least, at least vocally were still very anti-Trump.
    0:35:08 But I said, you know, has anything changed since, since November 6th?
    0:35:10 And they immediately said, oh, it’s completely different.
    0:35:15 It feels like the ISIS thawed, you know, woke us over, you know, they said that all kinds
    0:35:18 of projects are going to be able to get made now that couldn’t before that, you know, probably
    0:35:20 was going to start making comedies again.
    0:35:24 You know, like, they were just like, it’s like, it’s like, it’s just like an incredible
    0:35:26 immediate environmental change.
    0:35:30 And I’m, as I talk to people kind of throughout, you know, certainly throughout the economy,
    0:35:33 people who run businesses, I hear that all the time, which is just this, this last 10
    0:35:34 years of misery is just over.
    0:35:38 I mean, the one that I’m watching that’s really funny, I mean, Facebook’s giving a lot, that
    0:35:39 is getting a lot of attention.
    0:35:42 But the other funny one is BlackRock, which I’m not, you know, and I don’t know him,
    0:35:44 but I’ve watched for a long time.
    0:35:48 And so, you know, Larry Fink is the CEO of BlackRock was like first in as a major, you
    0:35:56 know, investment CEO on like every dumb social trend and rule set, like every, all right,
    0:36:03 I’m going for it, every retarded, every retarded thing you can imagine, every ESG and every
    0:36:08 like, every possible satellite companies with every aspect of just these crazed ideological
    0:36:09 positions.
    0:36:12 And, you know, he was coming in, he literally was like, had aggregated together trillions
    0:36:17 of dollars of, of, of, of shareholdings that he did not, that were, you know, that were
    0:36:21 his, his customers rights and he, you know, seized their voting control of their shares
    0:36:24 and was using it to force all these companies to do all of this, like crazy ideological
    0:36:25 stuff.
    0:36:27 And he was like the typhoid Mary of all this stuff in corporate America.
    0:36:31 And if he in the last year has been like backpedaling from that stuff, like as fast as he possibly
    0:36:32 can.
    0:36:35 And I saw just an example last week, he pulled out of the, whatever the corporate net zero
    0:36:39 alliance, you know, he pulled out of the crazy energy, energy, energy stuff.
    0:36:42 And so like, you know, he’s backing away as fast as he can.
    0:36:43 He’s doing it.
    0:36:46 Remember the Richard Pryor backwards walk, Richard Pryor had this way where he could,
    0:36:50 he could back out of a room while looking at, like he was walking forward.
    0:36:54 And so, you know, even they’re doing that.
    0:36:58 And just the whole thing, I mean, if you saw the court recently ruled that NASDAQ had these
    0:37:03 crazy board of directors composition rules, one of the funniest moments of my life is
    0:37:07 when my friend Peter Thiel and I were on the, the, the meta board and these NASDAQ rules
    0:37:10 came down mandated diversity on corporate boards.
    0:37:13 And so we sat around the table and had to figure out, you know, which of us counted as diverse
    0:37:19 and the very professional attorneys that met up explained with a 100% complete straight
    0:37:24 phase that Peter Thiel counts as diverse by virtue of being LGBT.
    0:37:27 And this is a guy who literally wrote a book called the diversity myth.
    0:37:33 And he literally looked like he swallowed a live goldfish and this was imposed.
    0:37:36 I mean, this was like so incredibly offensive to him that like, it just like, it was just
    0:37:37 absolutely appalling.
    0:37:40 And I felt terrible for him, but the look in his face was very funny.
    0:37:44 It was imposed by NASDAQ, you know, your stock exchange is imposing this stuff on you.
    0:37:48 And then the court, whatever the court of appeals just nuked that, you know, it’s like
    0:37:51 these things basically are being like ripped down one by one.
    0:37:55 And what’s on the other side of it is basically, you know, finally being able to get back to,
    0:37:58 you know, everything that, you know, everybody always wanted to do, which is like run their
    0:38:03 companies, have great products, have happy customers, you know, like succeed, like succeed,
    0:38:07 achieve, outperform and, you know, work with the best and the brightest and not be made
    0:38:08 to feel bad about it.
    0:38:10 And I think that’s happening in many areas of American society.
    0:38:15 It’s great to hear that Peter Thiel is fundamentally a diversity hire.
    0:38:18 Well, so it was very, you know, there was a moment.
    0:38:22 So Peter, you know, Peter, of course, you know, is, you know, is publicly gay has been
    0:38:26 for a long time, you know, but, you know, there are other men on the board, right?
    0:38:28 And you know, we’re sitting there and we’re all looking at it and we’re like, all right,
    0:38:32 like, okay, LGBT and we just, we keep coming back to the B, right?
    0:38:39 And it’s like, you know, it’s like, all right, you know, I’m willing to do a lot for this
    0:38:44 company, but it’s all about sacrifice for diversity.
    0:38:45 Well, yeah.
    0:38:47 And then it’s like, okay, like, is there a test?
    0:38:48 Right.
    0:38:49 You know?
    0:38:50 Oh, yeah.
    0:38:51 Exactly.
    0:38:52 How do you prove it?
    0:38:56 The questions that got asked, you know, what are you willing to do?
    0:38:57 Yeah.
    0:39:03 I think I’m very good at asking lawyers completely absurd questions with a totally straight face.
    0:39:05 And do they answer with a straight face?
    0:39:06 Sometimes.
    0:39:07 Okay.
    0:39:09 I think in fairness, they have trouble telling when I’m joking.
    0:39:15 So you mentioned the Hollywood folks, maybe people in Silicon Valley and vibe shift.
    0:39:19 Maybe you can speak to preference falsification.
    0:39:21 What do they actually believe?
    0:39:23 How many of them actually hate Trump?
    0:39:31 But like what percent of them are feeling this vibe shift and are interested in creating
    0:39:34 the roaring twenties in the way they’ve described?
    0:39:36 So first we should maybe talk population.
    0:39:40 So there’s like all of Silicon Valley and the way to just measure that is just look
    0:39:41 at voting records.
    0:39:42 Right.
    0:39:44 And what that shows consistently is Silicon Valley is just a, you know, at least historically,
    0:39:49 my entire time there has been overwhelmingly majority just straight up Democrat.
    0:39:51 The other way to look at that is political donation records.
    0:39:57 And again, you know, the political donations in the Valley range from 90 to 99% to one side.
    0:39:59 And so, you know, we’ll, I just bring it up because like we’ll see what happens with
    0:40:03 the voting and with donations going forward.
    0:40:06 We maybe talk about the fire later, but I can tell you there is a very big question of
    0:40:08 what’s happening in Los Angeles right now.
    0:40:11 I don’t want to get into the fire, but like it’s catastrophic and, you know, there was
    0:40:14 already a rightward shift in the big cities in California.
    0:40:18 And I think a lot of people in LA are really thinking about things right now as they’re
    0:40:21 trying to, you know, literally save their houses and save their families.
    0:40:24 But you know, even in San Francisco, there was a big right, it was a big shift to the
    0:40:26 right in the voting in 24.
    0:40:30 So we’ll see where that goes, but, you know, you observe that by just looking at the numbers
    0:40:32 over time.
    0:40:35 The part that I’m more focused on is, you know, and I don’t know how to exactly describe
    0:40:39 this, but it’s like the top thousand or the top 10,000 people, right?
    0:40:43 And you know, I don’t have a list, but like it’s the, you know, it’s all the top founders,
    0:40:47 top CEOs, top executives, top engineers, top VCs, you know, and then kind of into the
    0:40:51 ranks, you know, the people who kind of built and run the companies and they’re, you know,
    0:40:58 I don’t have numbers, but I have a much more tactile feel, you know, for what’s happening.
    0:41:04 So I, the big thing I have now come to believe is that the idea that people have beliefs
    0:41:07 is mostly wrong.
    0:41:11 I think that most people just go along.
    0:41:13 And I think even most high status people just go along.
    0:41:17 And I think maybe the most high status people are the most prone to just go along because
    0:41:19 they’re the most focused on status.
    0:41:24 And the way I would describe that is, you know, one of the great forbidden philosophers
    0:41:29 of our time is the Unabomber, Ted Kaczynski, and amidst his madness, he had this extremely
    0:41:30 interesting articulation.
    0:41:35 You know, he was a, he was an insane lunatic murderer, but he was also a, you know, Harvard
    0:41:44 super genius, not that those are in conflict, but he was a very bright guy and he did this
    0:41:49 whole thing where he talked about, basically he was very right-wing and talked about leftism
    0:41:50 a lot.
    0:41:53 And he had this great concept that’s just stuck in my mind ever since I read it, which
    0:41:57 is he had this concept you just called oversocialization.
    0:42:00 And so, you know, most people are socialized, like most people are socialized, like most
    0:42:04 people are, you know, we live in a society, most people learn how to be part of a society,
    0:42:06 they give some deference to the society.
    0:42:10 There’s something about modern Western elites where they’re oversocialized and they’re just
    0:42:16 like overly oriented towards what other people like themselves, you know, think and believe
    0:42:20 and you can get a real sense of that if you have a little bit of an outside perspective,
    0:42:25 which I just do, I think as a consequence of where I grew up, like even before I had
    0:42:28 the views that I have today, there was always just this weird thing where it’s like, why
    0:42:31 does every dinner party have the exact same conversation?
    0:42:34 Why does everybody agree on every single issue?
    0:42:39 Why is that agreement precisely what was in the New York Times today?
    0:42:44 Why are these positions not the same as they were five years ago, right?
    0:42:47 But why does everybody like snap into agreement every step of the way?
    0:42:51 And that was true when I came to Silicon Valley and it’s just just true today, 30 years later.
    0:42:55 And so I think most people are just literally take, I think they’re taking their cues from
    0:42:59 it’s some combination, the press, the universities, the big foundations, so it’s like basically
    0:43:04 it’s like the New York Times, Harvard, the Ford Foundation, and you know, I don’t know,
    0:43:08 you know, a few CEOs and a few public figures and you know, maybe, you know, maybe the president
    0:43:13 of your parties in power and like whatever that is, everybody just everybody who’s sort
    0:43:18 of good and proper and elite and good standing and in charge of things and a sort of correct
    0:43:21 member of, you know, let’s call it coastal American society, everybody just believes
    0:43:22 those things.
    0:43:26 And then, you know, the two interesting things about that is number one, there’s no divergence
    0:43:28 among the organs of power, right?
    0:43:31 So the Harvard and Yale believe the exact same thing, the New York Times, the Washington
    0:43:34 Post believe the exact same thing, the Ford Foundation, the Rockefeller Foundation believe
    0:43:38 the exact same thing, Google and you know, whatever, you know, Microsoft believe the
    0:43:40 exact same thing.
    0:43:43 But those things change over time.
    0:43:46 But there’s never conflict in the moment, right?
    0:43:50 And so, you know, the New York Times and the Washington Post agreed on exactly everything
    0:43:58 in 1970, 1980, 1990, 2000, 2010 and 2020, despite the fact that the specifics changed radically,
    0:43:59 the lockstep was what mattered.
    0:44:03 And so I think basically we in the Valley, we’re on the tail end of that in the same
    0:44:05 way, Hollywood’s the tail end of that in the same way, New York’s the tail end of that,
    0:44:08 the same way the media is on the tail end of that.
    0:44:10 It’s like some sort of collective hive mind thing.
    0:44:13 And I just go through that to say like, I don’t think most people in my orbit, or you
    0:44:18 know, say the top 10,000 people in the Valley, or the top 10,000 people in LA, I don’t think
    0:44:21 they’re sitting there thinking, basically, I have rocks, I mean, they probably think
    0:44:25 they have rocks out of beliefs, but they don’t actually have like some inner core of rocks
    0:44:26 out of beliefs.
    0:44:28 And then they kind of watch reality change around them and try to figure out how to keep
    0:44:30 their beliefs, like correct, I don’t think that’s what happens.
    0:44:34 I think what happens is they conform to the belief system around them.
    0:44:37 And I think most of the time they’re not even aware that they’re basically part of
    0:44:38 a herd.
    0:44:45 Is it possible that the surface chatter of dinner parties, underneath that there is
    0:44:50 a turmoil of ideas and thoughts and beliefs that’s going on, but you’re just talking to
    0:44:55 people really close to you or in your own mind, and the socialization happens at the
    0:45:01 dinner parties, like when you go outside the inner circle of one, two, three, four people
    0:45:03 who you really trust, then you start to conform.
    0:45:09 But inside there, inside the mind, there is an actual belief or a struggle, attention
    0:45:17 within New York Times or with the listener, there’s a slow smile that overtook Marc Andreessen’s
    0:45:18 face.
    0:45:21 So look, I’ll just tell you what I think, which is at the dinner parties and at the
    0:45:24 conferences, no, there’s none of that.
    0:45:27 What there is is that all of the heretical conversations have anything that challenges
    0:45:33 the status quo, any heretical ideas in any new idea is a heretical idea.
    0:45:36 Any deviation, it’s either discussed a one-on-one face-to-face.
    0:45:40 It’s like a whisper network, or it’s like a real-life social network.
    0:45:43 There’s a secret handshake, which is like, okay, you meet somebody and you know each
    0:45:47 other a little bit, but not well, and you’re both trying to figure out if you can talk
    0:45:50 to the other person openly or whether you have to be fully conformist.
    0:45:51 It’s a joke.
    0:45:52 Oh, yeah.
    0:45:53 Humor.
    0:45:54 I’m sorry.
    0:45:55 Somebody cracks a joke.
    0:45:56 Somebody cracks a joke.
    0:45:59 If the other person laughs, the conversation is on.
    0:46:05 If the other person doesn’t laugh back slowly away from the scene, I didn’t mean anything
    0:46:06 by it.
    0:46:08 And by the way, it doesn’t have to be like a super offensive joke.
    0:46:12 It just has to be a joke that’s just up against the edge of one of the, use the Sam Bankman
    0:46:18 free term, one of the chivalrous, it has to be up against one of the things of one of
    0:46:21 the things that you’re absolutely required to believe to be the dinner parties.
    0:46:24 And then at that point, what happens is you have a peer-to-peer network.
    0:46:30 You have a one-to-one connection with somebody, and then you have your little conspiracy of
    0:46:32 a thought criminality.
    0:46:35 And then you have your network, you’ve probably been through this, you have your network of
    0:46:37 thought criminals, and then they have their network of thought criminals, and then you
    0:46:41 have this like delicate mating dances to whether you should bring the thought criminals together.
    0:46:42 Right?
    0:46:46 And the dance, the fundamental mechanism of the dance is humor.
    0:46:47 Yeah, it’s humor.
    0:46:48 Right.
    0:46:49 Well, of course.
    0:46:50 Memes.
    0:46:51 Yeah.
    0:46:52 Well, for two reasons.
    0:46:53 Number one, humor is a way to have deniability.
    0:46:55 It’s a way to discuss these things without having deniability.
    0:46:56 Oh, I’m sorry.
    0:46:57 It was just a joke, right?
    0:46:58 So that’s part of it.
    0:47:00 Which is one of the reasons why comedians can get away with saying things the rest of
    0:47:01 us can.
    0:47:04 Because they can always fall back on, “Oh, yeah, I was just going for the laugh.”
    0:47:08 But the other key thing about humor, right, is that laughter is involuntary, right?
    0:47:09 Like you either laugh or you don’t.
    0:47:12 And it’s not like a conscious decision whether you’re going to laugh, and everybody can tell
    0:47:14 when somebody’s fake laughing, right?
    0:47:16 And this every professional comedian knows this, right?
    0:47:18 The laughter is the clue that you’re onto something truthful.
    0:47:21 Like people don’t laugh at like made up bullshit stories.
    0:47:24 They laugh because like you’re revealing something that they either have not been allowed to
    0:47:27 think about or have not been allowed to talk about, right?
    0:47:28 Or is off limits.
    0:47:31 And all of a sudden, it’s like the ice breaks and it’s like, “Oh, yeah, that’s the thing.
    0:47:32 And it’s funny.”
    0:47:33 And like I laugh.
    0:47:36 And then, and then of course, this is why, of course, live comedy is so powerful is because
    0:47:37 you’re all doing that at the same time.
    0:47:38 So you start to have, right?
    0:47:39 The safety of, you know, the safety of numbers.
    0:47:43 And so, so the comedians have like the all, there’s no, no surprise to me like, for example,
    0:47:46 Joe has been as successful as he has because they have, they have this hack that the, you
    0:47:50 know, the rest of us who are not professional comedians don’t have, but you have your in-person
    0:47:51 version of it.
    0:47:52 Yeah.
    0:47:53 And then you’ve got the question of whether the, whether you can sort of join the networks
    0:47:54 together.
    0:47:57 And then you’ve probably been to this as, you know, then at some point there’s like a different,
    0:48:00 there’s like the alt dinner party, the Thorker middle dinner party and you get six or eight
    0:48:02 people together and you join the networks.
    0:48:05 And those are like the happiest moments, at least in the last decade, those are like the
    0:48:08 happiest moments of everybody’s lives because they’re just like, everybody’s just ecstatic
    0:48:12 because they’re like, “I don’t have to worry about getting yelled at and shamed like for
    0:48:16 every third sentence that comes out of my mouth and we can actually talk about real things.”
    0:48:17 So that’s the live version of it.
    0:48:22 And then of course the other side of it is the, you know, the group chat phenomenon, right?
    0:48:26 And then basically the same thing played out, you know, until Elon bought Axe and until
    0:48:30 Substack took off, you know, which were really the two big breakthroughs in free speech online.
    0:48:33 The same dynamic played out online, which is you had absolute conformity on the social
    0:48:37 networks, like literally enforced by the social networks themselves through censorship and
    0:48:41 then also through cancellation campaigns and mobbing and shaming, right?
    0:48:45 But then you had, but then group chats grew up to be the equivalent of a stop, right?
    0:48:50 Anybody who grew up in the Soviet Union under communism, you know, they had the hard version
    0:48:51 of this, right?
    0:48:53 It’s like, how do you know who you could talk to and then how do you distribute information
    0:48:58 and, you know, like, you know, again, that was the hard authoritarian version of this.
    0:49:01 And then we’ve been living through this weird mutant, you know, softer authoritarian version
    0:49:03 but with, you know, with some of the same patterns.
    0:49:10 And WhatsApp allows you to scale and make it more efficient to build on these groups
    0:49:13 of heretical ideas bonded by humor.
    0:49:14 Yeah, exactly.
    0:49:15 Well, and this is the thing.
    0:49:16 This is kind of the running joke about group chat, right?
    0:49:20 The running kind of thing about group chats, it’s not even a joke, it’s like, every group
    0:49:23 chat, if you’ve noticed this, like every, this principle of group chats, every group
    0:49:26 chat ends up being about memes and humor.
    0:49:29 And the goal of the game, the game of the group chat is to get as close to the line
    0:49:34 of being actually objectionable as you can get without actually tripping it, right?
    0:49:38 And I like literally every group chat that I have been in for the last decade, even if
    0:49:42 it starts some other direction, what ends up happening is it becomes the absolute comedy
    0:49:47 fest where, but it’s walking, they walk right at the line and they’re constantly testing.
    0:49:49 And every once in a while, somebody will trip the line and people will freak out and it’s
    0:49:50 like, oh, too soon.
    0:49:53 Okay, you know, we got to wait until next year to talk about that, you know, they walk
    0:49:54 it back.
    0:49:55 And so it’s that same thing.
    0:49:57 And yeah, and then group chats is a technological phenomenon.
    0:50:00 It was amazing to see because basically it was number one, it was, you know, obviously
    0:50:05 the rise of smartphones, then it was the rise of the new messaging services, then it was
    0:50:09 the rise specifically of, I would say, combination of what’s happened signal.
    0:50:13 And the reason for that is those were the two big systems that did the full encryption.
    0:50:15 So you actually felt safe.
    0:50:20 And then the real breakthrough, I think, was disappearing messages, which hit signal probably
    0:50:25 four or five years ago and hit WhatsApp three or four years ago.
    0:50:31 And then the combination of encryption and disappearing messages, I think really unleashed
    0:50:32 it.
    0:50:35 Well, then there’s the fight over the length of the disappearing messages, right?
    0:50:38 And so it’s like, you know, I often get behind of my things.
    0:50:43 So I set to seven day, you know, disappearing messages and my friends who are like, no,
    0:50:44 that’s way too much risk.
    0:50:45 Yeah.
    0:50:46 It’s got to be a day.
    0:50:48 And then every once in a while, somebody will set to five minutes before they send something
    0:50:49 like particularly inflammatory.
    0:50:50 Yeah.
    0:50:51 100%.
    0:50:54 Well, what, I mean, one of the things that bothers me about what’s up, the choice is
    0:50:58 between 24 hours and, you know, seven days, one day or seven days.
    0:51:04 And I have to have an existential crisis about deciding whether I can last for seven days
    0:51:06 with what I’m about to say.
    0:51:07 Exactly.
    0:51:09 Now, of course, what’s happening right now is the big thaw, right?
    0:51:10 And so the vibe shift.
    0:51:14 So what’s happening on the other side of the election is, you know, Elon on Twitter two
    0:51:17 years ago and now Mark with Facebook and Instagram.
    0:51:20 And by the way, with the continued growth of Substack and with other, you know, new platforms
    0:51:24 that are emerging, you know, like I think it may be, you know, I don’t know that everything
    0:51:29 just shifts back into public, but like a tremendous amount of the, a tremendous amount of the
    0:51:33 verboten conversations, you know, can now shift back into public view.
    0:51:36 And I mean, quite frankly, this is one of those things, you know, quite frankly, even
    0:51:40 if I was opposed to what those people are saying, and I’m sure I am in some cases, you
    0:51:43 know, I would argue it’s still like net better for society that those things happen in public
    0:51:49 instead of private, you know, do you really want, like, yeah, like, don’t you want to
    0:51:50 know?
    0:51:53 And, and so, and then it’s just, look, it’s just, I think clearly much healthier to live
    0:51:56 in a society in which people are not literally scared about their saying.
    0:52:01 I mean, to push back, to come back to this idea that we’re talking about, I do believe
    0:52:05 that people have beliefs and thoughts that are heretical, like a lot of people.
    0:52:09 I wonder what fraction of people have that.
    0:52:12 To me, this is the preference falsification is really interesting.
    0:52:18 What is the landscape of ideas that human civilization has in private as compared to
    0:52:25 what’s out in public, because like that, the, the, the dynamical system that is the difference
    0:52:30 between those two is fascinating, like there’s throughout history, the, the fall of communism
    0:52:36 in multiple regimes throughout Europe is really interesting because everybody was following,
    0:52:43 you know, the line until not, but you better, for sure, privately, there was a huge number
    0:52:49 of boiling conversations happening where like this is this, the bureaucracy of communism,
    0:52:53 the corruption of communism, all of that was really bothering people more and more and
    0:52:54 more and more.
    0:52:58 And all of a sudden, there’s a trigger that allows the vibe shift to happen.
    0:53:05 So to me, like the, the interesting question here is what is the landscape of private thoughts
    0:53:12 and ideas and conversations that are happening under the surface of, of, of Americans, especially
    0:53:17 my question is how much dormant energy is there for this roaring twenties where people
    0:53:18 are like, no more bullshit.
    0:53:19 Let’s get shit done.
    0:53:20 Yeah.
    0:53:21 So let’s go through that.
    0:53:22 We’ll go through the theory of preference falsification.
    0:53:23 Yeah.
    0:53:24 Just, just, just by the way, amazing.
    0:53:26 The books, unless it gets fascinating.
    0:53:27 Yeah.
    0:53:28 Yeah.
    0:53:29 Great books.
    0:53:32 Incredibly, about 20, 30 year old book, but it’s very, it’s completely modern and current
    0:53:36 in what it talks about as well as very deeply historically informed.
    0:53:42 So it’s called private truths, public lies, and it’s written by a social science professor
    0:53:46 named Timur Quran at, I think, Duke.
    0:53:47 And it’s, it’s definitive work on this.
    0:53:50 And so he, he has this concept, he calls preference falsification.
    0:53:53 And so preference falsification is two things, preference falsification.
    0:53:56 And you get it from the title of the book, private truths, public lies.
    0:54:00 So preference falsification is when you believe something and you can’t say it.
    0:54:05 Or, and this is very important, you don’t believe something and you must say it, right?
    0:54:10 And, and, and the commonality there is in both cases, you’re lying, you, you, you believe,
    0:54:13 you believe something internally and then you’re lying about it in public.
    0:54:17 And so the thing, you know, the, and there’s sort of two, the two classic forms of it.
    0:54:20 There’s the, you know, for example, there’s the, I believe communism is rotten, but I
    0:54:21 can’t say it, version of it.
    0:54:26 But then there’s also the, the, the famous parable of the real life example.
    0:54:30 But the thing that Voslav Havel talks about in the other good book on this topic, which
    0:54:34 is the power of the powerless, you know, who was an anti-communist resistance fighter
    0:54:37 who ultimately became the, you know, the president of Czechoslovakia after the fall
    0:54:38 of the wall.
    0:54:42 But he wrote this book and he, he describes the other side of this, which is workers
    0:54:44 of the world unite, right?
    0:54:48 And so he, he describes what he calls the parable, the greengrocer, which is your greengrocer
    0:54:51 in Prague in 1985.
    0:54:54 And for the last 70 years, it has been, or it’s 50 years, it’s been absolutely mandatory
    0:54:59 to have a sign in the window of your store that says workers of the world unite, right?
    0:55:00 And it’s 1985.
    0:55:04 It is like crystal clear that the world, the workers of the world are not going to unite.
    0:55:08 Like all the things that could happen in the world, that is not going to happen.
    0:55:10 The commies have been at that for 70 years.
    0:55:11 It is not happening.
    0:55:13 But that slogan had better be in your window every morning, because if it’s not in your
    0:55:16 window every morning, you are not a good communist.
    0:55:19 The secret police are going to come by and they’re going to, they’re going to get you.
    0:55:21 And so the first thing you do when you get to the store is you put that slogan in the
    0:55:23 window and you make sure that it stays in the window all day long.
    0:55:27 But he says the thing is every single person, the greengrocer knows the slogan is fake.
    0:55:29 He knows it’s a lie.
    0:55:32 Every single person walking past the slogan knows that it’s a lie.
    0:55:35 Every single person walking past the store knows that the greengrocer is only putting
    0:55:38 it up there because he has to lie in public.
    0:55:42 And the greengrocer has to go through the humiliation of knowing that everybody knows
    0:55:44 that he’s caving into the system and lying in public.
    0:55:48 And so it turns into the moralization campaign.
    0:55:50 It’s not just ideological enforcement.
    0:55:54 In fact, it’s not ideological enforcement anymore because everybody knows it’s fake.
    0:55:55 The authorities know it’s fake.
    0:55:56 Everybody knows it’s fake.
    0:55:59 It’s not that they’re enforcing the actual ideology of the world’s workers of the world
    0:56:00 uniting.
    0:56:05 It’s that they are enforcing compliance and compliance with the regime and fuck you, you
    0:56:06 will comply.
    0:56:09 And so anyway, that’s the other side of that.
    0:56:13 And of course, we have lived in the last decade through a lot of both of those.
    0:56:17 I think anybody listening to this could name a series of slogans that we’ve all been forced
    0:56:20 to chant for the last decade that everybody knows at this point are just like simply not
    0:56:21 true.
    0:56:26 I’ll let the audience speculate on their own group chats.
    0:56:29 >> Send mark your memes online as well, please.
    0:56:30 >> Yes, yes, exactly.
    0:56:32 But okay, so anyway, so it’s the two sides of that, right?
    0:56:36 So it’s private truth, it’s public lies.
    0:56:39 So then what preference falsification does is it talks about extending that from the
    0:56:42 idea of the individual experience of that to the idea of the entire society experiencing
    0:56:43 that, right?
    0:56:47 That’s just your percentages question, which is like, okay, what happens in a society in
    0:56:49 which people are forced to lie in public about what they truly believe?
    0:56:52 What happens, number one, is that individually they’re lying in public and that’s bad.
    0:56:56 But the other thing that happens is they no longer have an accurate gauge at all or any
    0:56:59 way to estimate how many people agree with them.
    0:57:02 And this is how, again, this literally is like how you get something like the communist
    0:57:08 system, which is like, okay, you end up in a situation in which 80 or 90 or 99% of society
    0:57:11 can actually all be thinking individually, I really don’t buy this anymore.
    0:57:14 And if anybody would just stand up and say it, I would be willing to go along with it,
    0:57:17 but I’m not going to be the first one to put my head on the chopping block.
    0:57:21 But you have no, because of the suppression censorship, you have no way of knowing how
    0:57:22 many other people agree with you.
    0:57:26 And if the people, if the people agree with you are 10% of the population and you become
    0:57:29 part of a movement, you’re going to get killed.
    0:57:33 If 90% of the people agree with you, you’re going to win the revolution, right?
    0:57:37 And so the question of like what the percentage actually is, is like a really critical question.
    0:57:41 And then basically, in any sort of authoritarian system, you can’t like run a survey to get
    0:57:42 an accurate result.
    0:57:45 And so you actually can’t know until you put it to the test.
    0:57:47 And then what he describes in the book is it’s always put to the test in the same way.
    0:57:51 And this is exactly what’s happened for the last two years, like 100% of exactly what’s
    0:57:52 happened.
    0:57:58 It’s like straight out of this book, which is somebody, Elon sticks his hand up and says,
    0:58:02 the workers of the world are not going to unite, right, or the emperor is actually wearing
    0:58:03 no clothes, right?
    0:58:05 You know, that famous parable, right?
    0:58:08 So one person stands up and does it and literally that person is standing there by themselves
    0:58:12 and everybody else in the audience is like, ooh, I wonder what’s going to happen to that
    0:58:13 guy.
    0:58:14 Right.
    0:58:15 But again, nobody knows.
    0:58:16 Elon doesn’t know.
    0:58:17 The first guy doesn’t know.
    0:58:19 Other people don’t know, like, which way is this going to go?
    0:58:22 And it may be that that’s a minority position and that’s a way to get yourself killed.
    0:58:26 Or it may be that that’s the majority position and that and you are now the leader of a revolution.
    0:58:29 And then basically, of course, what happens is, okay, the first guy does that, doesn’t get
    0:58:30 killed.
    0:58:33 The second guy does, well, a lot of the time that guy doesn’t get killed, but when the
    0:58:36 guy doesn’t get killed, then a second guy pops his head up, says the same thing.
    0:58:37 All right.
    0:58:40 Now you’ve got two, two leads to four, four leads to eight, eight leads to 16.
    0:58:44 And then as we saw with the fall of the Berlin Wall, this is what happened in Russia and
    0:58:47 Eastern Europe in ’89, when it goes, it can go, right?
    0:58:49 And then it rips.
    0:58:53 And then what happens is very, very quickly, if it turns out that you had a large percentage
    0:58:56 of the population that actually believed the different thing, it turns out all of a sudden
    0:59:00 everybody has this giant epiphany that says, oh, I’m actually part of the majority.
    0:59:05 And at that point, like, you were on the freight train of revolution, right, like, it is rolling,
    0:59:06 right?
    0:59:11 Now, the other part of this is the distinction between the role of the elites and the masses.
    0:59:14 And here, the best book is called The True Believer, which is the Eric Hoffer book.
    0:59:20 And so the nuance you have to put on this is the elites play a giant role in this, because
    0:59:24 the elites do idea formation and communication, but the elites by definition are a small minority.
    0:59:28 And so there’s also this giant role played by the masses, and the masses are not necessarily
    0:59:32 thinking these things through in the same intellectualized, formal way that the elites
    0:59:33 are.
    0:59:36 But they are for sure experiencing these things in their daily lives, and they for sure have
    0:59:38 at least very strong emotional views on them.
    0:59:42 And so when you really get the revolution, it’s when you get the elites lined up with
    0:59:46 or either the current elites change or the new set of elites, a new set of counter elites
    0:59:50 basically come along and say, no, there’s actually a different and better way to live.
    0:59:53 And then the people basically decide to follow the counter elite.
    0:59:55 So that’s the other dimension to it.
    0:59:57 And of course, that part is also happening right now.
    1:00:00 And again, case study number, one of that would be Elon and his, you know, he turns
    1:00:03 out, you know, truly massive following.
    1:00:07 And he has done that over and over in different industries, not just saying crazy shit online,
    1:00:13 but saying crazy shit in the realm of space, in the realm of autonomous driving, in the
    1:00:17 realm of AI, just over and over and over again, turns out saying crazy shit is one of the
    1:00:20 ways to do a revolution and to actually make progress.
    1:00:21 Yeah.
    1:00:22 And it’s like, well, but then there’s the test.
    1:00:23 Is it crazy shit?
    1:00:24 Or is it the truth?
    1:00:25 Yeah.
    1:00:27 And, you know, and this is where, you know, many, there are many more specific things
    1:00:31 about Elon’s genius, but one of the, one of the really core ones is an absolute dedication
    1:00:32 to the truth.
    1:00:36 And so when Elon says something, it sounds like crazy shit, but in his mind, it’s true.
    1:00:37 Now is he always right?
    1:00:38 No.
    1:00:39 Sometimes the rockets crash.
    1:00:40 Like, you know, sometimes he’s wrong.
    1:00:41 He’s human.
    1:00:42 He’s like anybody else.
    1:00:43 He’s not right all the time.
    1:00:46 But at least my, my through line with him, both in what he says in public and what he
    1:00:49 says in private, which by the way, are the exact same things.
    1:00:50 He does not do this.
    1:00:52 He doesn’t lie in public about what he believes in private, or at least he doesn’t do that
    1:00:53 anymore.
    1:00:56 But it’s 100% consistent in my, in my experience.
    1:01:00 By the way, there’s two guys who are 100% consistent like that, that I know, um, Elon
    1:01:01 and Trump.
    1:01:02 Yeah.
    1:01:06 Whatever you think of them, what they say in private is 100% identical to what they
    1:01:07 say in public.
    1:01:08 Like they are completely transparent.
    1:01:10 They’re completely honest in that way, right?
    1:01:13 Which is like, and again, it’s not like they’re perfect people, but they’re honest in that
    1:01:14 way.
    1:01:17 And it makes them potentially both, as they have been very powerful leaders of these
    1:01:21 movements, because they’re both willing to stand up and say the thing that if it’s true,
    1:01:25 it turns out to be the thing in many cases that, you know, many or most or almost everyone
    1:01:28 else actually believes, but nobody was actually willing to say out loud.
    1:01:29 And so they can actually catalyze these shifts.
    1:01:33 And I, I mean, I think this framework is exactly why Trump took over the Republican party is
    1:01:36 I think Trump stood up there on stage with all these other kind of conventional Republicans
    1:01:39 and he started saying things out loud that it turned out the base really was, they were
    1:01:42 either already believing or they were prone to believe.
    1:01:43 And he was the only one who was saying them.
    1:01:47 And so the, again, elite masses, he was the elite, the voters of the masses and the voters
    1:01:52 decided, you know, no, no more bushes, like we’re going this other direction.
    1:01:53 That’s the mechanism of social change.
    1:01:56 Like what we just described is like the actual mechanism of the social change.
    1:01:59 It is fascinating to me that we have been living through exactly this.
    1:02:03 We’ve been moving through everything exactly what Timur Karan describes, everything that
    1:02:08 Voslav Havel described, you know, black squares and Instagram, like the whole thing, right?
    1:02:09 All of it.
    1:02:14 And we’ve been living through the, you know, the true believer elites masses, you know,
    1:02:17 thing with, you know, with a set of like basically incredibly corrupt elites wondering
    1:02:19 why they don’t have the little masses anymore and a set of new elites that are running away
    1:02:20 with things.
    1:02:24 And so like we’re, we’re living through this like incredible applied case study of these
    1:02:25 ideas.
    1:02:28 And, you know, if there’s a moral of the story, it is, you know, I think fairly obvious, which
    1:02:33 is it is a really bad idea for a society to wedge itself into a position in which most
    1:02:36 people don’t believe the fundamental precepts of what they’re told they have to do, you
    1:02:40 know, to be, to be good people like that, that is just not, not a good state to be in.
    1:02:44 So one of the ways to avoid that in the future, maybe is to keep the delta between what’s
    1:02:47 said in private and what’s said in public small.
    1:02:48 Yeah.
    1:02:50 It’s like, well, this is sort of the, the siren song of censorship is we can keep people
    1:02:54 from saying things, which means we can keep people from thinking things.
    1:02:57 And you know, by the way, that may work for a while, right?
    1:03:00 Like, you know, this, I mean, again, the hard form of the Soviet Union, you know, Soviet
    1:03:05 Union, owning a mimeograph, pre-photocopiers, there were mimeograph machines that were
    1:03:08 used to make some was taught underground newspapers, which is the mechanism of written communication
    1:03:12 of radical ideas, radical ideas.
    1:03:14 Ownership of a mimeograph machine was punishable by death.
    1:03:15 Right?
    1:03:18 So that’s the hard version, right?
    1:03:21 You know, the soft version is somebody clicks a button in Washington and you are erased
    1:03:22 from the internet.
    1:03:23 Right?
    1:03:25 Like, which, you know, good news, you’re still alive.
    1:03:28 Bad news is, you know, shame about not being able to get a job, you know, too bad your
    1:03:31 family now, you know, hates you and won’t talk to you, you know, whatever, whatever the,
    1:03:34 you know, whatever the version of cancellation has been.
    1:03:36 And so, so, so like, does that work?
    1:03:40 Like, maybe it works for a while, like it worked for the Soviet Union for a while, you
    1:03:43 know, in its way, especially when it was coupled with, you know, official state power, but when
    1:03:48 it unwinds, it can only wind with like incredible speed and ferocity because to your point, there’s
    1:03:49 all this bottled up energy.
    1:03:52 Now, your question was like, what are the percentages?
    1:03:53 Like what’s the breakdown?
    1:03:58 And so my, my rough guess, just based on what I’ve seen in my world is it’s something
    1:04:01 like 20, 60, 20.
    1:04:05 It’s like you’ve got 20% like true believers in whatever is, you know, the current thing,
    1:04:08 you know, you got 20, you got 20% of people who are just like true believers of whatever
    1:04:12 they, you know, whatever, you know, whatever’s in the New York Times, Harvard professors and
    1:04:16 the Ford Foundation, like just digitally, by the way, maybe it’s 10, maybe it’s five,
    1:04:18 but let’s say generously it’s 20.
    1:04:22 So it’s a, you know, 20% kind of full on revolutionaries.
    1:04:26 And then you’ve got, let’s call it 20% on the other side that are like, no, I’m not
    1:04:27 on board with this.
    1:04:28 This is, this is crazy.
    1:04:31 I’m not, I’m not signing up for this, but, you know, you know, they, their view of themselves
    1:04:32 is they’re in a small minority.
    1:04:35 And in fact, they start out in a small minority because what happens is the 60% go with the
    1:04:38 first 20%, not the second 20%.
    1:04:41 So you’ve got this large middle of people and it’s not that there’s anything like, it’s
    1:04:44 not that people in the middle are not smart or anything like that.
    1:04:47 It’s that they just have like normal lives and they’re just trying to get by and they’re
    1:04:51 just trying to go to work each day and do a good job and be a good person and raise their
    1:04:55 kids and, you know, have a little bit of time to watch the game.
    1:04:59 And they’re just not engaged in the cut and thrust of, you know, political activism or
    1:05:01 any of this stuff is just not their thing.
    1:05:05 But then, but that’s where the over socialization comes in is just like, okay, by default, the
    1:05:11 60% will go along with the 20% of the radical revolutionaries at least for a while.
    1:05:14 And then the counter elite is in this other 20%.
    1:05:19 And over time, they build up a theory and network and ability to resist.
    1:05:22 And a new set of representatives and a new set of ideas.
    1:05:24 And then at some point, there’s a contest.
    1:05:27 And then, and then, and then right, and then the question is what happens in the middle,
    1:05:30 what happens in the 60% and it is kind of my point.
    1:05:34 It’s not even really does the 60% change their beliefs as much as it’s like, okay, what, what
    1:05:39 is the thing that that 60% now decides to basically fall into step with.
    1:05:44 And I think that 60% in the valley that 60% for the last decade decided to be woke.
    1:05:49 And you know, extremely, I would say on edge on a lot of things.
    1:05:52 And I, you know, that 60% is pivoting in real time.
    1:05:53 They’re just done.
    1:05:54 They’re just had it.
    1:05:59 And I would love to see where that pivot goes because there’s internal battles happening
    1:06:00 right now.
    1:06:01 Right.
    1:06:02 So this is the other thing.
    1:06:03 Okay.
    1:06:04 So there’s two, two forms of internal, there’s two forms of things.
    1:06:07 And Timur has actually talked about this, Professor Crown has talked about this.
    1:06:10 And so, so one is he said, he said, this is the kind of unwind where what you’re going
    1:06:11 to have is you’re not going to have people in the other direction.
    1:06:14 You’re going to have people who claim that they supported Trump all along who actually
    1:06:15 didn’t.
    1:06:16 Right.
    1:06:17 Right.
    1:06:19 So it’s going to swing the other way.
    1:06:21 And by the way, Trump’s not the only part of this, but you know, he’s just a convenient
    1:06:23 shorthand for, you know, for, for a lot of this.
    1:06:26 But you know, whatever it is, you’ll, you’ll have people who will say, well, I never supported
    1:06:30 the right or I never supported ESG or I never thought we should have canceled that person.
    1:06:31 Right.
    1:06:34 Where of course they were full on a part of the mob, like, you know, kind of at that
    1:06:35 moment.
    1:06:36 Right.
    1:06:39 So you’ll have preference falsification happening in the other direction and his prediction,
    1:06:43 I think basically is you’ll end up with the same quote problem on the, on the other side.
    1:06:44 Now, will that happen here?
    1:06:48 I don’t know, you know, how far is American society willing to go at any of these things?
    1:06:49 I don’t know.
    1:06:51 But like there is some, some question there.
    1:06:55 And then, and then the other part of it is, okay, now you have this, you know, elite that
    1:06:58 is used to being in power for the last decade.
    1:07:01 And by the way, many of those people are still in power and they’re in very, you know, important
    1:07:03 positions and the New York times is still the New York times and Harvard is still Harvard
    1:07:07 and like those people haven’t changed like at all, right.
    1:07:10 And they didn’t, you know, they’ve been bureaucrats in the government and, you know, senior democratic,
    1:07:12 you know, politicians and so forth.
    1:07:15 And they’re sitting there, you know, right now feeling like reality has just smacked them
    1:07:18 hard in the face because they lost the election so badly.
    1:07:22 But they’re now going into a, and specifically the Democratic party is going into a civil
    1:07:23 war.
    1:07:24 Right.
    1:07:27 And that form of the civil war is completely predictable.
    1:07:30 And it’s exactly what’s happening, which is half of them are saying, we need to go back
    1:07:31 to the center.
    1:07:34 And we need to de-radicalize because we’ve lost the people.
    1:07:35 We’ve lost that the people in the middle.
    1:07:39 And so we need to go back to the middle in order to be able to get 50% plus one in an
    1:07:40 election.
    1:07:41 Right.
    1:07:43 And then the other half of them are saying, no, we weren’t true to our principles.
    1:07:44 We were too weak.
    1:07:45 We were too soft.
    1:07:46 You know, we must become more revolutionary.
    1:07:48 We must double down and we must, you know, celebrate, you know, murders in the street
    1:07:50 of health insurance executives.
    1:07:52 And that’s, and that right now is like a real fight.
    1:07:57 If I can tell you a little personal story that breaks my heart a little bit, there’s a, there’s
    1:08:02 a professor, a historian, I won’t say who, who I admire deeply, love his work.
    1:08:05 He’s a kind of a heretical thinker.
    1:08:12 And we were talking about having a podcast or doing a podcast and he eventually said
    1:08:18 that, you know what, at this time, given your guest list, I just don’t want the headache
    1:08:24 of being in the faculty meetings in my particular institution.
    1:08:28 And I asked who are the particular figures in this guest list.
    1:08:31 He said, Trump.
    1:08:37 And the second one, he said that you announced your interest to talk to Vladimir Putin.
    1:08:39 So I just don’t want the headache.
    1:08:45 Now I fully believe he, it would surprise a lot of people if I said who it is, but you
    1:08:50 know, this is a person who’s not bothered by the guest list.
    1:08:55 And I should also say that 80 plus percent of the guest list is left wing.
    1:08:56 Okay.
    1:08:59 Nevertheless, he just doesn’t want the headache.
    1:09:04 And that speaks to the, the thing that you’ve kind of mentioned that you just don’t, don’t
    1:09:05 want the headache.
    1:09:10 You just want to just have a pleasant morning with some coffee and talk to your fellow professors.
    1:09:14 And I think a lot of people are feeling that in universities and in other contexts in tech
    1:09:16 companies.
    1:09:20 And I wonder if that shifts how quickly that shifts.
    1:09:26 And there the percentages you mentioned, 20, 60, 20 matters and the, and the, the contents
    1:09:30 of the private groups matters and the dynamics of how that shifts matters.
    1:09:32 Cause it’s very possible.
    1:09:36 Nothing really changes in universities and major tech companies or just, there’s a kind
    1:09:45 of excitement right now for potential revolution and these new ideas, this new vibes to reverberate
    1:09:51 through these companies and universities, but it’s possible the, the wall will hold.
    1:09:52 Yeah.
    1:09:53 So he’s a friend of yours.
    1:09:55 I respect that you don’t want to name him.
    1:09:56 I also respect you don’t want to beat on him.
    1:09:59 So I would like to beat on him on your behalf.
    1:10:00 Does he have tenure?
    1:10:01 Yes.
    1:10:04 He should use it.
    1:10:07 So this is the thing, right?
    1:10:10 This is the ultimate indictment of the corruption and the rot at the heart of our education
    1:10:12 system at the heart of these universities.
    1:10:14 And it’s by the way, it’s like across the board.
    1:10:16 It’s like all the, all the top universities.
    1:10:20 It’s like, cause the, the siren song for what it’s been for 70 years, whatever, the tenure
    1:10:25 system peer review system, tenure system, um, which is like, yeah, you work your butt
    1:10:29 off as an academic to get a professorship and then to get tenure, because then you can
    1:10:32 say what you actually think, right?
    1:10:37 Then you can do your work and your research and your speaking and your teaching without
    1:10:40 fear of being fired, right?
    1:10:43 Without fear of being canceled, um, like academic freedom.
    1:10:48 I mean, think of the term academic freedom and then think of what these people have done
    1:10:49 to it.
    1:10:52 Like it’s gone.
    1:11:02 Like that entire thing was fake and is completely rotten and these people are completely, completely
    1:11:06 giving up the entire moral foundation of the system has been built for them, which by the
    1:11:12 way is paid for virtually 100% by taxpayer money.
    1:11:16 That’s the, what’s the inkling of hope in this, like what this particular person and
    1:11:22 others who hear this, what can give them strength, inspiration, and courage, um, that the population
    1:11:25 at large is going to realize the corruption in their industry and it’s going to withdraw
    1:11:26 the funding.
    1:11:27 It’s okay.
    1:11:28 So desperation.
    1:11:30 No, no, no, no, no, think about what happens next.
    1:11:31 Okay.
    1:11:32 So let’s go, let’s go through it.
    1:11:35 So the, the universities, the university, the universities are funded by four primary sources
    1:11:36 of federal funding.
    1:11:39 The big one is a federal student loan program, which is, you know, in the many trillions of
    1:11:43 dollars at this point and only spiraling, you know, way faster than inflation.
    1:11:44 That’s number one.
    1:11:48 Number two is federal research funding, which is also very large and you probably know that
    1:11:53 when a scientist at the university gets a research grant, the university rakes as much
    1:11:58 as 70% of the money for central uses.
    1:12:01 Number three is tax exemption at the operating level, which is based in the idea that these
    1:12:06 are nonprofit institutions as opposed to let’s say political institutions.
    1:12:11 Number four is tax exemptions at the endowment level, you know, which is the financial buffer
    1:12:15 that these places have.
    1:12:18 Anybody who’s been close to university budget will basically see that what would happen
    1:12:20 if you withdrew those sources of federal taxpayer money.
    1:12:24 And then for the state schools, the state money, they still legal bankrupt.
    1:12:28 And then you could rebuild.
    1:12:30 Then you could rebuild because the problem right now, you know, like the folks at University
    1:12:32 of Austin are like mounting a very valiant effort.
    1:12:34 And I hope that they succeed and I’m sure I’m cheering for them.
    1:12:38 But the problem is you’re now inserting, you suppose you and I want to start a new university
    1:12:41 and we want to hire all the free thinking professors and we want to have the place that
    1:12:42 fixes all this.
    1:12:45 Practically speaking, we can’t do it because we can’t get access to that money.
    1:12:48 You’re the most direct reason we can’t get access to that money.
    1:12:50 We can’t get access to federal student funding.
    1:12:54 Do you know how universities are accredited for the purpose of getting access to federal
    1:12:57 student funding, federal student loans?
    1:13:00 They’re accredited by the government, but not directly, indirectly.
    1:13:02 They’re not accredited by the Department of Education.
    1:13:07 Instead what happens is the Department of Education accredits accreditation bureaus
    1:13:09 that are non-profits that do the accreditation.
    1:13:12 Guess what the composition of the accreditation bureaus is?
    1:13:16 The existing universities, they’re in complete control.
    1:13:20 The incumbents are in complete control as to who gets, as to who gets access to federal
    1:13:21 student loan money.
    1:13:26 Guess how enthusiastic they are about accrediting a new university, right?
    1:13:32 And so we have a government funded and supported cartel that has gone, I mean, it’s just obvious.
    1:13:36 Now it’s just gone sideways and basically any possible way it could go sideways, including,
    1:13:40 I mean, literally, as you know, students getting beaten up on campus for being the wrong religion.
    1:13:43 They’re just wrong in every possible way at this point.
    1:13:45 And it’s all in the federal taxpayer back.
    1:13:50 And there is no way, I mean, my opinion, there is no way to fix these things without replacing
    1:13:51 them.
    1:13:54 And there’s no way to replace them without letting them fail.
    1:13:56 And by the way, it’s like everything else in life.
    1:13:59 I mean, in a sense, this is like the most obvious conclusion of all time, which is what
    1:14:04 happens in the business world when a company has a bad job is they go bankrupt and another
    1:14:05 company takes its place, right?
    1:14:07 And that’s how you get progress.
    1:14:11 And of course, below that is what happens is this is the process of evolution, right?
    1:14:12 Why does anything ever get better?
    1:14:16 Because things are tested and tried and then you know, the things that are good survive.
    1:14:18 And so these places have cut themselves off.
    1:14:21 They’ve been allowed to cut themselves off from both from evolution at the institutional
    1:14:28 level and evolution at the individual level, as shown by the just widespread abuse of tenure.
    1:14:33 And so we’ve just stalled out, we built an ossified system, an ossified centralized corrupt
    1:14:34 system.
    1:14:36 We’re surprised by the results.
    1:14:38 They are not fixable in their current form.
    1:14:40 I disagree with you on that.
    1:14:44 Maybe it’s grounded in hope that I believe you can revolutionize the system from within
    1:14:48 because I do believe Stanford and MIT are important.
    1:14:51 Oh, but that logic doesn’t follow at all.
    1:14:53 That’s underpants-nome logic.
    1:14:55 Underpants-nome, can you explain what that means?
    1:14:56 Underpants-nose logic.
    1:14:59 I just started watching a key touchstone of American culture with my nine-year-old, which
    1:15:00 of course is South Park.
    1:15:01 Yes.
    1:15:02 Wow.
    1:15:05 And there is a, which by the way is a little aggressive for a nine-year-old.
    1:15:06 Very aggressive.
    1:15:07 But he likes it.
    1:15:10 So he’s learning all kinds of new words.
    1:15:11 All kinds of new ideas.
    1:15:12 But yeah.
    1:15:14 I told him, I said, “You’re going to hear words on here that you are not allowed to
    1:15:15 use.”
    1:15:16 Right.
    1:15:17 Education.
    1:15:22 And I said, “Do you know how we have an agreement that we never lie to mommy?”
    1:15:27 I said, “Not using a word that you learn in here does not count as lying.”
    1:15:28 Wow.
    1:15:29 And keep that in mind.
    1:15:32 Orwellian redefinition of lying, but yes, go ahead.
    1:15:35 Of course, in the very opening episode, in the first 30 seconds, one of the kids calls
    1:15:36 the other kid a dildo.
    1:15:37 Right?
    1:15:38 We’re off to the races.
    1:15:39 Yep.
    1:15:40 Let’s go.
    1:15:41 Daddy, what’s a dildo?
    1:15:42 Yep.
    1:15:48 You know, I’m sorry, I don’t know.
    1:15:56 So, famous episode of South Park, the underpants gnomes, and so there’s all the kids basically
    1:15:59 realize that their underpants are going missing from their dresser drawers.
    1:16:02 Somebody stealing the underpants, and it’s just like, “Well, who on earth would steal
    1:16:03 the underpants?”
    1:16:05 And it turns out it’s the underpants gnomes.
    1:16:07 And it turns out the underpants gnomes have come to town, and they’ve got this little
    1:16:10 underground warren of tunnels and storage places for all the underpants.
    1:16:14 And so they go out at night, they steal the underpants, and the kids discover the underpants
    1:16:16 gnomes, and they’re, “What are you doing?
    1:16:17 What’s the point of this?”
    1:16:21 And so the underpants gnomes present their master plan, which is a three-part plan, which
    1:16:24 is step one, collect underpants.
    1:16:26 Step three, profit.
    1:16:30 Step two, question mark.
    1:16:34 So you just proposed the underpants gnomes, which is very common in politics.
    1:16:37 So the form of this in politics is, we must do something.
    1:16:41 This is something, therefore we must do this.
    1:16:45 But there’s no causal logic chain in there at all to expect that that’s actually going
    1:16:48 to succeed, because there’s no reason to believe that it is.
    1:16:49 It’s the same thing.
    1:16:50 But this is what I hear all the time.
    1:16:56 I will let you talk as the host of the show in a moment, but I hear this all the time.
    1:17:00 I have friends who are on these boards, very involved with these places, and I hear this
    1:17:02 all the time, which is like, “Oh, these are very important.
    1:17:07 We must fix them, and so therefore they are fixable.”
    1:17:09 There’s no logic chain there at all.
    1:17:14 If there’s that pressure that you described in terms of cutting funding, then you have
    1:17:22 the leverage to fire a lot of the administration and have new leadership that steps up, that
    1:17:27 aligns with this vision that things really need to change at the heads of the universities,
    1:17:33 and they put students and faculty primary, fire a lot of the administration, and realign
    1:17:40 and reinvigorate this idea of freedom of thought and intellectual freedom.
    1:17:45 Because there is already a framework of great institutions that’s there, and the way they
    1:17:50 talk about what it means to be a great institution is aligned with this very idea that you’re
    1:17:51 talking about.
    1:17:56 It’s this meaning like intellectual freedom, the idea of tenure, right?
    1:18:00 On the surface, it’s aligned, underneath is become corrupted.
    1:18:03 If we say free speech and academic freedom often enough, sooner or later these tenured
    1:18:04 professors will get brave.
    1:18:07 Well, do you think the universities are fundamentally broken?
    1:18:09 Okay, so how do you fix it?
    1:18:19 How do you have institutions for educating 20-year-olds and institutions that host researchers
    1:18:24 that have the freedom to do epic shit, like research-type shit that’s outside the scopes
    1:18:27 of R&D departments and inside companies?
    1:18:29 So how do you create an institution like that?
    1:18:31 How do you create a good restaurant when the one down the street sucks?
    1:18:34 All right, you invent something new?
    1:18:36 You open a new restaurant?
    1:18:37 Yeah.
    1:18:38 Okay.
    1:18:41 How often in your life have you experienced a restaurant that’s just absolutely horrible
    1:18:43 and it’s poisoning all of its customers and the food tastes terrible?
    1:18:46 And then three years later, you go back and it’s fantastic.
    1:18:49 Charlie Munger actually had the best comment on his great investor, Charlie Munger, the
    1:18:50 great comment.
    1:18:52 He once asked, he’s like, you know, he’s, you know, General Electric was going through
    1:18:55 all these challenges and he was asked to the Q&A, he said, “How would you fix the culture
    1:18:56 of General Electric?”
    1:18:58 And he said, “Fix the culture of General Electric.”
    1:19:02 He said, “I couldn’t even fix the culture at a restaurant.”
    1:19:03 Like it’s insane.
    1:19:04 Like obviously you can’t do it.
    1:19:07 I mean, nobody in business thinks you can do that.
    1:19:09 Like, it’s impossible.
    1:19:13 Like, it’s not, it’s, no, no, look, having said all that, I should also express this
    1:19:17 because I have a lot of friends to work at these places and are involved in various attempts
    1:19:18 to fix these.
    1:19:19 I hope that I’m wrong.
    1:19:20 I would love to be wrong.
    1:19:23 I would love for the, I would love for the underpants known step two to be something
    1:19:26 clear and straightforward that they can figure out how to do.
    1:19:27 I would love to, love to fix it.
    1:19:29 I’d love to see them come back to their spoken principles.
    1:19:30 I think that’d be great.
    1:19:33 I’d love to see the professors with tenure get bravery.
    1:19:34 I would love to see.
    1:19:38 I mean, it’d be fantastic, you know, my partner and I’ve done like a lot of public speaking
    1:19:39 on this topic.
    1:19:42 It’s, it’s been intended to not just be harsh, but also be like, okay, like these, these
    1:19:44 challenges have to be confronted directly.
    1:19:48 By the way, let me also say something positive, you know, especially post October 7th, there
    1:19:52 are a bunch of very smart people who are major donors and board members of these institutions
    1:19:56 like Mark Rowan, you know, who are really coming in trying to, you know, I think legitimately
    1:19:57 trying to, trying to fix these places.
    1:20:00 I have a friend on the executive committee at one of the top technical universities.
    1:20:02 He’s working over time to try to do this.
    1:20:05 Man, I hope they can figure it out.
    1:20:08 But I, but the counter question would just be like, do you see it actually happening
    1:20:10 at a single one of these places?
    1:20:13 I’m a person that believes in leadership.
    1:20:18 If you have the right leadership, the whole system can be changed.
    1:20:21 So here’s a question for your friend who have tenure at one of these places, which is who
    1:20:23 runs his university.
    1:20:28 I think, you know, you know, I think runs it whoever the fuck says they run it.
    1:20:29 That’s what great leadership is.
    1:20:31 Like a president has that power.
    1:20:36 But how does he has the leverage because they can mouth off like Elon can fire the professors.
    1:20:39 They can fire them through being vocal publicly.
    1:20:40 Yes.
    1:20:41 Fire the professors.
    1:20:42 What do you talk about legally?
    1:20:44 No, they cannot fire the professors.
    1:20:45 Then we know who runs the university.
    1:20:46 The professors.
    1:20:47 Yeah.
    1:20:49 Professors, the professors and the students, the professors and the Ferrell students.
    1:20:53 And they’re of course in a radicalization feedback cycle, driving each other crazy.
    1:20:54 The Ferrell students.
    1:20:55 Yeah, the Ferrell students.
    1:20:56 Yeah, the Ferrell students.
    1:20:59 What happens when you’re put in charge of your bureaucracy, where the, where the thing
    1:21:02 that the bureaucracy knows is that they can outlast you?
    1:21:05 The thing that the tenure professors at all these places know is it doesn’t matter who
    1:21:09 the president is because they can outlast them because they cannot get fired.
    1:21:12 By the way, it’s the same thing that bureaucrats in the government know.
    1:21:14 It’s the same thing that the bureaucrats in the Department of Education know.
    1:21:16 They know the exact same thing.
    1:21:17 They can outlast you.
    1:21:20 It’s, I mean, it’s the whole thing that the resistance, like they can be the resistance.
    1:21:23 They can just sit there and resist, which is what they do.
    1:21:24 They’re not fireable.
    1:21:26 That’s definitely a crisis that needs to be solved.
    1:21:27 It’s a huge problem.
    1:21:30 And I also don’t like that I’m defending academia here.
    1:21:37 I, I agree with you that the situation is dire and, uh, but I just think that institutions
    1:21:38 are important.
    1:21:41 And I should also add context, since you’ve been grilling me a little bit.
    1:21:45 You were using restaurants as an analogy and earlier offline in this conversation, you
    1:21:47 said the Dairy Queen is a great restaurant.
    1:21:51 So let’s, let’s let the listener take that.
    1:21:52 Dairy Queen is the best restaurant.
    1:21:53 The best restaurant.
    1:21:54 There you go.
    1:21:57 I think Marcaadresa is saying today, I don’t want it to cut.
    1:21:58 You should go order a blizzard.
    1:22:00 Just one day you should walk down there and order a blizzard.
    1:22:01 Yeah.
    1:22:03 They can get like 4,000 calories in a cup.
    1:22:04 They can.
    1:22:05 And they’re delicious.
    1:22:06 Amazing.
    1:22:07 They are truly delicious.
    1:22:08 And they’ll put, they’ll put anything in there you want.
    1:22:09 All right.
    1:22:10 Okay.
    1:22:12 So, but anyway, let me just close by saying, look, I, I, my friends at the university system,
    1:22:14 I would just say, look, like this is the challenge.
    1:22:16 Like I would just, I would just pose this as the challenge.
    1:22:19 Like to me, like this is having had a lot of these conversations.
    1:22:20 Like this is the bar.
    1:22:22 In my view, this is the conversation that actually has to happen.
    1:22:24 This is the bar that actually has to be hit.
    1:22:27 These problems need to be confronted directly because I think there’s just, I think there’s
    1:22:28 been way too much.
    1:22:31 I mean, I’m actually worried kind of on the other side, there’s too much happy talk in
    1:22:32 these conversations.
    1:22:35 I think the taxpayers do not understand this level of crisis.
    1:22:39 And I think if the taxpayers come to understand it, I think the funding evaporates.
    1:22:43 And so I think the, the fuse is going through, you know, no fault of any of ours, but like
    1:22:44 the fuse is going.
    1:22:47 And there’s some window of time here to fix this and address it and justify the money.
    1:22:53 Because it just normal taxpayers sitting in normal towns, in normal jobs, are not going
    1:22:56 to tolerate this for, for that much longer.
    1:23:00 You mentioned censorship a few times, let us if we can go deeper into the darkness of
    1:23:04 the past and how censorship mechanism was used.
    1:23:09 So you are a good person to speak about the history of this because you were there on
    1:23:14 the ground floor in 2013 ish Facebook.
    1:23:23 I heard that you were there when they invented or maybe developed the term hate speech in
    1:23:28 the context of censorship on social media.
    1:23:33 So take me to through that history, if you can, the use of censorship.
    1:23:37 So I was there on the ground floor in 1993.
    1:23:39 There’s multiple floors to this building apparently.
    1:23:40 There are.
    1:23:41 Yeah.
    1:23:45 So I was first asked to implement censorship on the internet, which was in the web browser.
    1:23:46 That is fast.
    1:23:47 Yeah.
    1:23:48 Yeah.
    1:23:51 In actually in 1982, I was asked to implement a nudity filter.
    1:23:53 Did you have the courage to speak up back then?
    1:23:56 I didn’t have any problem speaking up back then.
    1:23:58 I was making six dollars and 25 cents an hour.
    1:23:59 I did not have a lot to lose.
    1:24:03 No, I was asked at the time and then look, you know, legitimate, you know, in some sense
    1:24:07 of legitimate request, which is working on a research project actually funded by the
    1:24:09 federal government and a public university.
    1:24:12 So you know, I don’t think my boss was like in any way out of line, but it was like, yeah,
    1:24:15 like this web browser thing is great, but like, could it just make sure to not have
    1:24:17 any photos of naked people that show up?
    1:24:21 But if you think about this for a second as a technologist, I had an issue, which is this
    1:24:22 was like pre-image net, right?
    1:24:26 And so I had a brief period where I tried to imagine an algorithm that I referred to
    1:24:32 as the breast detection algorithm that I was going to have to design.
    1:24:36 And then apparently a variety of other apparently body parts people are also sensitive about.
    1:24:41 And and then I politely declined to do this for just the technical difficulties.
    1:24:43 Well, number one, I didn’t actually didn’t know how to do it, but number two is just
    1:24:46 like, no, I’m not, I’m not building, I’m just not building a censorship engine.
    1:24:48 Like I’m, you know, I’m just not doing it.
    1:24:51 And in those days, it was, you know, in those days, the internet generally was, you know,
    1:24:55 free fire zone for everything is actually interesting as sort of pre-93.
    1:24:57 The internet was such a specific niche community.
    1:25:02 Like it was like the million kind of highest IQ nerds in the world.
    1:25:06 And so it actually like didn’t really have a lot of issues that people were like super
    1:25:10 interested in talking about like astrophysics and not very interested in, you know, even
    1:25:11 politics at that time.
    1:25:16 So there really was not an issue there, but yeah, I didn’t want to start the process.
    1:25:19 So I think the way to think about this, so first of all, you know, yeah, so I was involved
    1:25:22 in this at Facebook every step, by the way, I’ve been involved this at Facebook every
    1:25:24 step of the way I joined the board there in 2007.
    1:25:28 So I saw, I’ve seen everything in the last, you know, almost 20 years every step of the
    1:25:29 way.
    1:25:31 But also I’ve been involved in most of the other companies over time.
    1:25:33 So I was an angel investor in Twitter, I knew them really well.
    1:25:38 We were the founding investor in Substack, I’m part of the Elon takeover of Twitter
    1:25:40 with X, I was an angel at LinkedIn.
    1:25:44 So I’ve been in these, we were the funder of Pinterest, we were one of the main investors
    1:25:46 there, Reddit as well.
    1:25:48 And I was having these conversations with all these guys all the way through.
    1:25:52 So as much talk specifically about Facebook, but I can just tell you like the general pattern
    1:25:55 and for quite a while it was kind of all the same across these companies.
    1:26:00 Yeah, so basically the way to think about this, the true kind of nuanced view of this
    1:26:05 is that there is practically speaking no internet service that can have zero censorship.
    1:26:09 And by the way, that also mirrors, there is no country that actually has limited free
    1:26:11 speech either.
    1:26:15 The US First Amendment actually has 12 or 13 formal carve outs from the Supreme Court
    1:26:21 over time, you know, so incitement to violence and terrorist recruitment and child abuse
    1:26:23 and so, you know, child pornography and so forth, they’re like, they’re not covered by
    1:26:25 the First Amendment.
    1:26:28 And just practically speaking, if you and I are going to start an internet company and
    1:26:32 have a service, we can’t have that stuff either, right, because it’s illegal or it will just
    1:26:33 clearly, you know, destroy the whole thing.
    1:26:36 So you’re always going to have a censorship engine.
    1:26:39 I mean, hopefully it’s not actually in the browser, but like you’re going to have it
    1:26:42 for sure at the level of an internet service.
    1:26:45 But then what happens is now you have a machine, right?
    1:26:50 Now you have a system where you can put in rules saying we allow this, we don’t allow
    1:26:51 that.
    1:26:54 You have enforcement, you have consequences, right?
    1:26:59 And once that system is in place, like it becomes the ring of power, right, which is
    1:27:03 like, okay, now anybody in that company or anybody associated with a company or anybody
    1:27:06 who wants to pressure that company will just start to say, okay, you should use that machine
    1:27:11 for more than just terrorist recruitment and child pornography, you should use it for XYZ.
    1:27:17 And basically that transition happened to call it 2012-2013 is when there was this like
    1:27:19 very, very kind of rapid pivot.
    1:27:22 I think the kickoff to it for some reason was this, it was the beginning of the second
    1:27:24 Obama term.
    1:27:29 I think it also coincided with the sort of arrival of the first kind of super woke kids
    1:27:34 into these schools, you know, that kind of, you know, it’s the kids that were in school
    1:27:37 between like, you know, for the Iraq war and then the global financial crisis and like,
    1:27:40 they came out like super radicalized, they came into these companies and they immediately
    1:27:45 started mounting these social crusades to ban and censor lots of things.
    1:27:48 And then, you know, quite frankly, the Democratic Party figured this out and they figured out
    1:27:51 that these companies were, you know, very subject to being controlled and the, you know,
    1:27:55 the executive teams and boards of directors are almost all Democrats and, you know, there’s
    1:27:58 tremendous circulation, a lot of Obama people from the first term actually came and worked
    1:28:02 in these companies and a lot of FBI people and other, you know, law enforcement intelligence
    1:28:07 people came in and worked and they were all Democrats for that set.
    1:28:10 And so they just, you know, the ring of power was lying on the table.
    1:28:15 It had been built and they, you know, pick it up and put it on and then they just ran.
    1:28:18 And the original discussions were basically always on two topics.
    1:28:21 It was hate speech and misinformation.
    1:28:23 Hate speech was the original one.
    1:28:26 And the hate speech conversation started exactly like you’d expect, which is we can’t have
    1:28:29 the n-word in which the answer is fair enough.
    1:28:30 Let’s not have the n-word.
    1:28:31 Okay.
    1:28:34 Now we’ve set a precedent, right?
    1:28:37 And then, and then Jordan Peterson has talked a lot about this, the definition of hate speech
    1:28:41 ended up being things that make people uncomfortable, right?
    1:28:43 So we can’t have things that make, you know, people uncomfortable.
    1:28:46 I, of course, you know, people like me that are disagreeable, raise their hands and say,
    1:28:49 well, that idea right there makes me uncomfortable.
    1:28:51 But of course, that doesn’t count as hate speech, right?
    1:28:56 So, you know, the ring of power is on one hand and not on the other hand.
    1:29:01 And then basically that began this slide where it ended up being that, you know, completely
    1:29:05 anodyne is the point that Mark has been making recently, completely anodyne comments that
    1:29:08 are completely legitimate on television or on the Senate floor.
    1:29:10 All of a sudden our hate speech can’t be set online.
    1:29:14 So that, you know, the ring of power was wielded in grossly irresponsible ways.
    1:29:16 We can talk about all the stuff that happened there.
    1:29:17 And then the other one was misinformation.
    1:29:20 And that wasn’t as there was a little bit of that early on.
    1:29:23 But of course, that really kicked in with with Trump.
    1:29:28 So, so the hate speech stop, the hate speech stop predated Trump by like three or four years.
    1:29:32 The misinformation stuff was basically, it was a little bit later, and it was the consequence
    1:29:33 of the Russiagate hoax.
    1:29:38 And then that was, you know, a ring of power that was even more powerful, right?
    1:29:42 Because, you know, hate speech is like, okay, at some point, if some if something offensive
    1:29:44 or not, like at least you can have a question as to whether that’s the case.
    1:29:48 But the problem with misinformation is like, is it the truth or not?
    1:29:52 You know, you know, what do we know for 800 years or whatever Western civilization?
    1:29:56 It’s that, you know, there’s only a few entities that can determine the truth on every topic.
    1:29:58 You know, there’s God, you know, there’s the king.
    1:29:59 We don’t have those anymore.
    1:30:02 And the rest of us are all imperfect and flawed.
    1:30:05 And so the idea that any group of experts is going to sit around the table and decide
    1:30:08 on the truth is, you know, deeply anti-Western and deeply authoritarian.
    1:30:14 And somehow the misinformation kind of crusade went from the Russiagate hoax into just full-blown.
    1:30:17 We’re going to use that weapon for whatever we want.
    1:30:20 And then, of course, then the culminating moment on that that really was the straw that
    1:30:25 broke the camel’s back was we’re going to censor all theories that the COVID virus might
    1:30:28 have been manufactured in a lab as misinformation.
    1:30:32 And inside these companies, like that was the point where people for the first time, this
    1:30:36 is like what, three years ago, for the first time they were like, that was when it sunk
    1:30:39 in where it’s just like, okay, this has spun completely out of control.
    1:30:42 But anyway, that’s how we got to where we are.
    1:30:47 And then basically that spell lasted, that that that complex existed and got expanded
    1:30:51 basically from call it 2013 to 2023.
    1:30:54 I think basically two things broke it.
    1:30:55 One is sub-stack.
    1:31:00 And so when I’m super proud of those guys, because they started from scratch and declared
    1:31:04 right up front that they were going to be a free speech platform.
    1:31:09 And they came under intense pressure, including from the press and, you know, they tried to
    1:31:12 just beat them to the ground and kill them and intense pressure, by the way, from, you
    1:31:16 know, let’s say certain of the platform companies, you know, basically threatening them.
    1:31:17 And they stood up to it.
    1:31:21 And, you know, sitting here today, they have the widest spectrum of speech and conversation.
    1:31:24 I’ve, you know, anywhere on planet Earth and they’ve done a great job and it’s worked.
    1:31:25 By the way, it’s great.
    1:31:30 And then obviously Elon, you know, with X was the, you know, the hammer blow.
    1:31:34 And then I did the third one now was what Marcus doing at Facebook.
    1:31:39 And there’s also like singular moments, I think you’ve spoken about this, which like
    1:31:45 John Stuart going on Stephen Colbert and talking about the lab leak theory.
    1:31:46 Yes.
    1:31:50 I just, there’s certain moments that just kind of shake everybody up.
    1:31:54 The right person, the right time, just it’s a wake up call.
    1:31:58 So that there, and I will tell you like, and I should say John Stuart attacked me recently
    1:32:03 so I’m not that thrilled about him, but I would say I was a long run fan of John Stuart.
    1:32:08 I watched probably every episode of the Daily Show when he was on it for probably 20 years.
    1:32:11 But he did a very important public service and it was that appearance on the Colbert
    1:32:12 show.
    1:32:15 And I don’t know how broadly this is, you know, at the time it was in the news briefly,
    1:32:18 but I don’t know how if people remember this, but I will tell you in, in the rooms where
    1:32:22 people discuss what is misinformation and these policies, that was a very big moment.
    1:32:23 That was probably actually the key catalyzing moment.
    1:32:28 And I think he exhibited, I would say conspicuous bravery and had a big impact with that.
    1:32:31 And yeah, what for people who don’t recall what he did, what, and this was in the full
    1:32:35 blown like you absolutely, you know, you absolutely must lock down for two years, you absolutely
    1:32:38 must keep all the schools closed, you absolutely must have everybody work from home.
    1:32:41 You absolutely must wear a mask, like the whole thing.
    1:32:46 And one of those was you absolutely must believe that COVID was completely natural.
    1:32:51 You must believe that and not believing that means you’re a fascist Nazi Trump supporter,
    1:32:53 mega evil Q and on person, right.
    1:32:57 And that was like uniform and that was enforced by the social media companies.
    1:33:01 And like I said, that was the peak and John Stuart went on the Colbert show and I don’t
    1:33:04 know if they planned it or not because Colbert looked shocked, I don’t know how much it was
    1:33:09 a bit, but he went on there and he just had one of these like the emperors wearing no
    1:33:13 clothes things where he said, it’s just not plausible that you had the COVID super virus
    1:33:20 appear 300 yards down the street from the Wuhan Institute of lethal coronaviruses like it’s
    1:33:23 just not plausible that that certainly that you could just rule that out.
    1:33:26 And then there was another key moment actually, the more serious version was I think the author
    1:33:30 Nicholson Baker wrote a big piece for New York magazine and Nicholson Baker is like
    1:33:34 one of our great novelist writers of our time and he wrote the piece and he did the complete
    1:33:35 addressing of it.
    1:33:39 And that was the first, I think that was the first legit, there had been like alt, you
    1:33:42 know, renegade, there had been, you know, people running around saying this, but getting
    1:33:43 censored all over the place.
    1:33:46 That was the first one that was like in the mainstream press where he and he talked to
    1:33:49 all the heretics and he just like laid the whole thing out.
    1:33:52 And and that was a moment and I remember, let’s say a board meeting at one of these companies
    1:33:56 after that where basically, you know, everybody looked around the table and it was like, all
    1:34:01 right, I guess we’re not, we don’t need to censor that anymore.
    1:34:03 And you know, and then of course, what immediately follows from that is, well, wait a minute,
    1:34:06 why were we censoring that in the first place?
    1:34:09 And okay, like, and then, you know, the downstream, not that day, but the downstream conversations
    1:34:14 were like, okay, if we made such a giant, in retrospect, if we all made such a giant
    1:34:17 collective mistake, censoring that, then what does that say about the rest of our regime?
    1:34:21 And I think that was the thread in the sweater that started to unravel it.
    1:34:24 I should say it again, I do think that the John Stuart appearance and the statement he
    1:34:26 made was a courageous act.
    1:34:27 Yeah, I agree.
    1:34:30 I think we need to have more of that in the world.
    1:34:38 And like you said, Elon, everything he did with X is a series of courageous acts.
    1:34:45 And I think what Zuck, what Mark Zuckerberg did on Rogan a few days ago is a courageous
    1:34:46 act.
    1:34:49 Can you just speak to that?
    1:34:51 He has become, I think, an outstanding communicator, right?
    1:34:54 And he’s, you know, somebody who came in for a lot of criticism earlier in his career
    1:34:55 on that front.
    1:35:00 And I think he’s one of these guys who can sit down and talk for three hours and make
    1:35:01 complete sense.
    1:35:05 And, you know, as you do with all of your episodes, like when somebody sits and talks
    1:35:09 for three hours, like you really get a sense of somebody, because it’s really hard to bear
    1:35:10 official for that long.
    1:35:12 And, you know, he’s not done that repeatedly.
    1:35:13 He’s really good at it.
    1:35:16 And then look, again, I would maybe put him in the third category now with, certainly
    1:35:20 after that appearance, I would say I would put him up there now with, you know, kind of
    1:35:23 Elon and Trump in the sense of the public and the private are now synchronized.
    1:35:24 I guess I’d say that.
    1:35:27 Like, he said on that show what he really believes.
    1:35:28 He said all the same things that he says in private.
    1:35:31 Like I don’t think there’s really any discrepancy anymore.
    1:35:38 I would say he has always taken upon himself a level of obligation, responsibility to running
    1:35:43 a company the size of Metta and to running services that are that large.
    1:35:46 And I think, you know, his conception of what he’s doing, which I think is correct is he’s
    1:35:48 running services that are bigger than any country, right?
    1:35:52 He’s running, you know, over 3 billion people use those services.
    1:35:55 And so, and then, you know, the company has, you know, many tens of thousands of employees
    1:35:57 and many investors and it’s a public company.
    1:36:01 And he thinks very deeply and seriously about his responsibilities.
    1:36:05 And so, you know, he has not felt like he has had, let’s just say the complete flexibility
    1:36:07 that Elon has had.
    1:36:10 And you know, people could argue that one way or the other, but, you know, he’s, he’s,
    1:36:12 you know, yeah, he’s, he’s, you know, he talked about a lot.
    1:36:14 He’s evolved a lot.
    1:36:15 A lot of it was he learned a lot.
    1:36:17 And by the way, I’m going to put myself right back up there.
    1:36:20 Like I’m not claiming any huge foresight or heroism on any of this.
    1:36:22 Like I’ve also learned a lot.
    1:36:26 Like, like my views on things are very different than they were 10 years ago on lots of topics.
    1:36:29 And so, you know, I’ve been on a learning journey.
    1:36:31 He’s been on a learning journey.
    1:36:33 He is a really, really good learner.
    1:36:39 He assimilates information, you know, as good as or better than anybody else I know.
    1:36:42 The other thing I guess I would just say is he talked on that show about something very
    1:36:46 important, which is when you’re in a role where you’re running a company like that, there
    1:36:50 are a set of decisions that you get to make and you deserve to be criticized for those
    1:36:53 decisions and so forth and it’s valid.
    1:36:57 But you are under tremendous external pressure as well.
    1:36:59 And by the way, you’re under tremendous internal pressure.
    1:37:01 You’ve got your employees coming at you.
    1:37:03 You’ve got your executives in some cases coming at you.
    1:37:06 You’ve got your board in some cases coming at you.
    1:37:08 You’ve got your shareholders coming at you.
    1:37:11 So you’ve got your internal pressures, but you also have the press coming at you.
    1:37:13 You’ve got academia coming at you.
    1:37:17 You’ve got the entire non-profit complex coming, activist complex coming at you.
    1:37:21 And then really critically, you know, he talked about Enrogan and these companies all went
    1:37:27 through this in this last, especially five years, you had the government coming at you.
    1:37:31 And you know, that’s the really, you know, stinky end of the pool where, you know, the
    1:37:35 government was in my view, you know, illegally exerting, you know, just in flagrant violation
    1:37:40 of the First Amendment and federal laws on speech and coercion and conspiracy, forcing
    1:37:44 these companies to engage in activities, you know, then again, in some cases, they may
    1:37:46 have wanted to do, but in other cases, they clearly didn’t want to do and felt like they
    1:37:48 had to do.
    1:37:54 And the level of pressure, like I just say, like I’ve known every CEO of Twitter, they’ve
    1:37:58 all had the exact same experience, which when they were in the job, it was just daily beatings.
    1:38:02 Like it’s just getting punched in the face every single day, constantly.
    1:38:10 And you know, Mark is very good at getting physically punched in the face and he’s very
    1:38:13 good at, you know, taking a punch and he has taken many, many punches.
    1:38:17 So I would encourage people to have a level of sympathy for these are not kings.
    1:38:20 These are people who operate with like, I would say, extraordinary levels of external
    1:38:21 pressure.
    1:38:26 I think if I had been in his job for the last decade, I would be a little puddle on the floor.
    1:38:30 And so it says, I think a lot about him that he has, you know, risen to this occasion the
    1:38:31 way that he has.
    1:38:33 And by the way, I should also say, you know, the cynicism, of course, is immediately out.
    1:38:37 And, you know, it’s a legitimate thing for people to say, but you know, it’s like, oh,
    1:38:39 you’re only doing this because of Trump or, you know, whatever.
    1:38:43 And it’s just like, no, like he has been thinking about and working on these things and trying
    1:38:45 to figure them out for a very long time.
    1:38:50 And so I think what you saw are legitimate, deeply held beliefs, not some, you know, sort
    1:38:52 of just in the moment thing that could change at any time.
    1:38:59 So what do you think it’s like to be him and other leaders of companies to be you and withstand
    1:39:01 internal pressure and external pressure?
    1:39:02 What’s that life like?
    1:39:04 Is it deeply lonely?
    1:39:05 That’s a great question.
    1:39:07 Leaders are lonely to start with.
    1:39:10 And this is one of those things where almost nobody has sympathy, right?
    1:39:11 Nobody feels sorry for a CEO, right?
    1:39:13 Like, it’s not a thing, right?
    1:39:17 And, you know, and again, legitimately so, like CEOs get paid a lot, like the whole thing.
    1:39:18 There’s a lot of great things about it.
    1:39:21 So it’s not like they should be out there asking for a lot of sympathy, but it is the
    1:39:23 case that they are human beings.
    1:39:24 And it is the case that it is a lonely job.
    1:39:30 And the reason it’s a lonely job is because your words carry tremendous weight.
    1:39:33 And you are dealing with extremely complicated issues and you’re under a tremendous amount
    1:39:36 of emotional, you know, personal emotional stress.
    1:39:40 And, you know, you often end up not being able to sleep well and you end up not being
    1:39:43 able to, like, keep up an exercise routine and all those things and, you know, you come
    1:39:45 under family stress because you’re working all the time.
    1:39:48 Or my partner, Ben, you know, was, he was CEO of our last company before we started
    1:39:49 the venture firm.
    1:39:52 He said, you know, the problem he had, like, with his family life was he would, even when
    1:39:57 he was home at night, he wasn’t home because he was in his head trying to solve all the
    1:39:58 business problems.
    1:40:00 And so he was like supposed to be like having dinner with his kids and he was physically
    1:40:01 there, but he wasn’t mentally there.
    1:40:05 So, you know, you kind of get, you get that a lot, but the key thing is like you can’t
    1:40:06 talk to people, right?
    1:40:08 So you can’t, I mean, you can talk to your spouse and your kids, but like they don’t
    1:40:11 understand that they’re not working in your company, they don’t understand, have the context
    1:40:13 to really help you.
    1:40:16 You, if you talk to your executives, they all have agendas, right?
    1:40:20 And so they’re all, they’re all, and they can’t resist, like it’s just human nature.
    1:40:23 And so you can’t necessarily rely on what they say.
    1:40:28 It’s very hard in most companies to talk to your board because they can fire you.
    1:40:29 Right.
    1:40:32 Now, Mark has the situation because he has control, it actually turns out he can talk
    1:40:35 to his board and Mark talks to us about many things that he does, that most CEOs won’t
    1:40:39 talk to the boards about because we, literally because we can’t fire him.
    1:40:42 But the general, a general, including all the CEOs of Twitter, none of them had control
    1:40:44 and so they, they could all get fired.
    1:40:47 So you can’t talk to the board members, they’re going to fire you.
    1:40:51 You can’t talk to the shareholders because they’ll just like dump your stock, right?
    1:40:54 Like, okay, so who’s the, so, so the, so every once in a while what you find is basically
    1:40:58 the best case scenario they have is they can talk to other CEOs and there’s these little
    1:41:00 organizations where they kind of pair up and do that.
    1:41:03 And so they maybe get a little bit out of that, but, but even that’s fraught with peril
    1:41:08 because can you really talk about confidential information with another CEO, insider trading
    1:41:09 risk?
    1:41:13 And so it’s just a very, it’s just a very lonely and isolating thing to start with.
    1:41:16 And then you, and then on top of that, you apply pressure, right.
    1:41:17 And that’s where it gets painful.
    1:41:22 And then maybe I’ll just spend a moment on this internal, external pressure thing.
    1:41:28 My general experience with companies is that they can withstand most forms of external
    1:41:32 pressure as long as they retain internal coherence, right?
    1:41:39 So as long as the internal team is really bonded together and supporting each other,
    1:41:41 most forms of external pressure you can withstand.
    1:41:46 And by that, I mean investor stuff, your stock, you lose your biggest customers, you know,
    1:41:51 whatever negative article, you know, negative headline, you know, you can, you can withstand
    1:41:52 all that.
    1:41:54 And basically, in fact, many of those forms of pressure can be bonding experiences for
    1:41:57 the team where they, where they come out stronger.
    1:42:01 What you 100% cannot withstand is the internal crack.
    1:42:05 And what I always look for in high pressure corporate situations now is the moment when
    1:42:07 the internal team cracks.
    1:42:13 Because I know the minute that happens, we’re in a different regime, like it’s like the,
    1:42:16 you know, the solidest turn into liquid, like we’re in a different regime and like the whole
    1:42:17 thing can unravel in the next week.
    1:42:20 Because then people turn it, I mean, this, this is what’s happening in Los Angeles right
    1:42:21 now.
    1:42:26 The mayor and the fire chief turned on each other and that’s it.
    1:42:27 That government is dysfunctional.
    1:42:29 It is never going to get put back together again.
    1:42:30 It is over.
    1:42:32 It is not going to work ever again.
    1:42:34 And that’s what happens inside companies.
    1:42:40 And so, so, so somebody like Mark is under like profound internal pressure and external
    1:42:41 pressure at the same time.
    1:42:45 Now he’s been very good at maintaining the coherence of his executive team, but he has
    1:42:50 had over the years a lot of activist employees as a lot of these companies have had.
    1:42:52 And so that’s been continuous pressure.
    1:42:55 And then the final thing I’d say is I said that companies can withstand most forms of
    1:43:00 external pressure, but not all in the special, though not all one is government pressure.
    1:43:05 Is it when your government comes for you like, yeah, any CEO who thinks that they’re bigger
    1:43:09 than the government has that notion beaten out of them in short order.
    1:43:16 Can you just linger on that because it is maybe educating and deeply disturbing.
    1:43:21 You’ve spoken about it before, but we’re speaking about again, this government pressure.
    1:43:27 So you think they’ve crossed the line into essentially criminal levels of pressure,
    1:43:32 flagrant criminality, felonies, like obvious felonies, and I can, I can actually cite
    1:43:33 the laws.
    1:43:36 But yes, absolute criminality.
    1:43:43 Can you explain how those possible to happen and maybe on a hopeful note, how we can avoid
    1:43:44 that happening again?
    1:43:49 So as to start with is a lot of this now is in the public record, which is good because
    1:43:50 it needs to be in the public record.
    1:43:52 And so there’s there’s three forms of things that are in the public record that people
    1:43:53 can look at.
    1:43:57 So one is the Twitter files, right, which Elon put out with the set of journalists when
    1:43:58 he took over.
    1:44:01 And I will just tell you, the Twitter files are 100% representative of what I’ve seen
    1:44:03 at every other one of these companies.
    1:44:05 And so you can just see what happened in Twitter.
    1:44:08 And you can just assume that that happened in these other companies, you know, for the
    1:44:11 most part, certainly in terms of the kind of pressure that they got.
    1:44:15 So that’s that’s number one, that stuff, you can just read it and you should if you haven’t.
    1:44:19 The second is Mark referenced this in the Rogan podcast.
    1:44:22 There’s a congressman, Jim Jordan, who has a committee congressional committee called
    1:44:23 the Weaponization Committee.
    1:44:27 And they in the last, you know, whatever three years have done a full scale investigation
    1:44:28 of this.
    1:44:31 And Facebook produced a lot of documents into that investigation.
    1:44:35 And those have many of those have now been made public and you can download those reports.
    1:44:38 And there’s like, I’d like 2000 pages worth of material on that.
    1:44:41 And that’s essentially the Facebook version of the Twitter files just arrived at with
    1:44:43 a different mechanism.
    1:44:45 And then third is Mark himself talking about this on on on Rogan.
    1:44:47 So, you know, just defer to his comments there.
    1:44:53 But yeah, basically what those three forms of information show you is basically the government,
    1:44:58 you know, over time, and then culminating in 2020, 2021, you know, in the last four years
    1:45:01 just decided that the First Amendment didn’t apply to them.
    1:45:06 And they just decided that federal laws around free speech and around conspiracies to take
    1:45:10 away the rights of citizens just don’t apply.
    1:45:14 And they just decided that they can just arbitrarily pressure, just like literally arbitrarily
    1:45:19 call up companies and threaten and bully and yell and scream and, you know, threaten repercussions
    1:45:22 and force people to force them to censor.
    1:45:25 And you know, there’s this old thing of like, well, the First Amendment only applies to,
    1:45:27 you know, the government doesn’t apply to companies.
    1:45:30 It’s like, well, there’s actually a little bit of nuance to that.
    1:45:34 First of all, it definitely applies to the government like 100%.
    1:45:36 The First Amendment applies to the government.
    1:45:39 By the way, so does the Fourth Amendment and the Fifth Amendment, including the right to
    1:45:41 due process also applies to the government.
    1:45:45 There was no due process at all to any of the censorship regime that was put in place.
    1:45:48 There was no due process put in place, by the way, for debanking either.
    1:45:52 Those are just as serious violations as the free speech violations.
    1:45:55 So this is just like flagrant, flagrant unconstitutional behavior.
    1:45:57 And then there are specific federal statutes.
    1:46:00 There’s it’s 18241 and 18242.
    1:46:04 And one of them applies to federal employees, government employees, and the other one applies
    1:46:10 to private actors around what’s called deprivation of rights and conspiracy to deprive rights.
    1:46:14 And it is not legal, according to the United States Criminal Code, for government employees
    1:46:19 or in a conspiracy private entities to take away constitutional rights.
    1:46:23 And interestingly, some of those constitutional rights are enumerated, for example, in the
    1:46:24 First Amendment, freedom of speech.
    1:46:28 And then some of those rights actually do not need to be enumerated.
    1:46:32 It is if the government takes away rights that you have, they don’t need to be specifically
    1:46:36 enumerated rights in the Constitution in order to still be a felony.
    1:46:40 The Constitution does not very specifically does not say you only have the rights that
    1:46:41 it gives you.
    1:46:44 It says you have all the rights that have not been previously defined as being taken
    1:46:45 away from you.
    1:46:46 Right.
    1:46:49 And so debanking qualifies as a right, you know, right to access to the financial system
    1:46:53 is every bit something that’s subject to these laws as free speech.
    1:46:54 And so yeah, this has happened.
    1:46:57 And then I’ll just add one final thing, which is we’ve talked about two parties so far.
    1:47:01 Start with the government employees, and then we’ve talked about the companies.
    1:47:04 The government employees, for sure, have misbehaved.
    1:47:07 The companies, there’s a very interesting question there as to whether they are victims
    1:47:12 or perpetrators or both, you know, they will defend and they will argue and I believe they
    1:47:15 have a good case that they are victims not perpetrators, right?
    1:47:19 They are the downstream subjects of pressure, not the cause of pressure.
    1:47:23 But there’s a big swath of people who are in the middle and specifically the ones that
    1:47:26 are funded by the government that I think are in possibly pretty big trouble.
    1:47:29 And that’s all of these third party censorship bureaus.
    1:47:35 I mean, the one that sort of is most obvious is the so-called Stanford Internet Observatory
    1:47:37 that got booted up there over the last several years.
    1:47:43 And they basically were funded by the federal government to be third party censorship operations.
    1:47:47 And they’re private sector actors, but acting with federal funding.
    1:47:52 And so it puts them in this very interesting spot where there could be very obvious theory
    1:47:55 under which they’re basically acting as agents of the government.
    1:47:59 And so I think they’re also very exposed on this and have behaved in just flagrantly illegal
    1:48:00 ways.
    1:48:06 Obviously government should not do any kind of pressure, even soft pressure on companies
    1:48:07 to censor.
    1:48:08 Can’t.
    1:48:09 Not allowed.
    1:48:11 It really is disturbing.
    1:48:20 I mean, it probably started soft, lightly, slowly, and then it escalates as the old will
    1:48:27 to power will instruct them to do because you get, I mean, yeah, I mean, that’s why there’s
    1:48:31 protection because you can’t put a check on power for government, right?
    1:48:34 There are so many ways that they can get you like there are so many ways they can come
    1:48:35 at you and get you.
    1:48:39 And, you know, the thing here to think about is a lot of times we really think about government
    1:48:40 action.
    1:48:41 They think about legislation, right?
    1:48:45 Because you, so when I was a kid, we got trained at how does government work?
    1:48:49 There was this famous animated short, the thing we got shown was just a cartoon of how a bill
    1:48:50 becomes a law.
    1:48:52 It’s like this, you know, if it’s a little bill snicked along and guessed this, I’m just
    1:48:53 a bill.
    1:48:54 Yeah.
    1:48:55 Exactly.
    1:48:56 Like it’s like, all right.
    1:48:57 It works at all.
    1:48:58 Like that doesn’t actually happen.
    1:48:59 We could talk about that.
    1:49:03 But even beyond that, mostly what we’re dealing with is not legislation.
    1:49:06 When we talk about government power these days, mostly it’s not legislation.
    1:49:10 Mostly it’s either regulation, which is basically the equivalent of legislation, but having not
    1:49:14 gone through the legislative process, which is a very big open legal issue and one of
    1:49:16 the things that the doge is very focused on.
    1:49:20 Most government rules are not legislated, they’re regulated, and there’s tons and tons
    1:49:24 of regulations that these companies are, so this is another cliche you’ll hear a lot, which
    1:49:25 is, oh, private companies can do whatever they want.
    1:49:27 It’s like, oh, no, they can’t.
    1:49:32 They’re subject to tens of thousands of regulations that they have to comply with, and the hammer
    1:49:35 that comes down when you don’t comply with regulations is profound, like they can completely
    1:49:38 wreck your company with no ability for you to do anything about it.
    1:49:41 So regulation is a big part of the way the power gets exercised.
    1:49:45 And then there’s what’s called just flat out administrative power, the term that you’ll
    1:49:46 hear.
    1:49:48 And administrative power is just literally the government telling you, calling you and
    1:49:49 telling you what to do.
    1:49:50 Here’s an example of how this works.
    1:49:55 So Facebook had this whole program a few years back to do a global cryptocurrency for payments
    1:49:56 called Libra.
    1:49:59 And they built the entire system, and it was this high-scale sort of new cryptocurrency,
    1:50:01 and they were going to build it in every product, and there were going to be 3 billion people
    1:50:05 who could transact with Libra, and they went to the government, and they went to all these
    1:50:06 different, try to figure out how to make it.
    1:50:09 So it was like fully compliant with anti-money laundering and all these controls and everything,
    1:50:11 and they had the whole thing ready to go.
    1:50:16 Two senators wrote letters to the big banks saying, we’re not telling you that you can’t
    1:50:21 work with Facebook on this, but if you do, you should know that every aspect of your business
    1:50:26 is going to come under greatly increased level of regulatory scrutiny.
    1:50:29 Which is, of course, the exact equivalent of, it sure is a nice corner restaurant you
    1:50:33 have here, it would be a shame if somebody tossed a Molotov cocktail through the window
    1:50:34 and burned it down tonight.
    1:50:37 And so what is that letter?
    1:50:42 It’s not a law, it’s not even a regulation, it’s just like straight direct state power.
    1:50:47 And then it culminates in literally calls from the White House where they’re just flat
    1:50:50 out telling you what to do, which is, of course, what a king gets to do, but not what
    1:50:52 a president gets to do.
    1:50:57 And so anyway, so what these companies experienced was, they experienced the full panoply of
    1:51:00 this, but the level of intensity was in that order.
    1:51:03 It was actually legislation was the least important part.
    1:51:06 Regulation was more important, administrative power was more important, and then just flat
    1:51:10 out demands and flat out threats were ultimately the most important.
    1:51:11 How do you fix it?
    1:51:15 Well, first of all, you have to elect people who don’t do it.
    1:51:19 So as with all these things, ultimately, the fault lies with the voters.
    1:51:21 And so you have to decide you don’t want to live in that regime.
    1:51:24 I have no idea what part of this recent election map to the censorship regime.
    1:51:28 I do know a lot of people on the right got very angry about the censorship, but I think
    1:51:32 it probably at least helped with enthusiasm on that side.
    1:51:37 Maybe some people in the left will now not want their democratic nominees to be so pro-censorship.
    1:51:40 So the voters definitely get a vote.
    1:51:45 Number one, number two, I think you need transparency, you need to know what happened.
    1:51:46 We know some of what happened.
    1:51:50 Peter Teal has written in the FT just now saying we just need like, after what we’ve
    1:51:55 been through in the last decade, we need broad-based truth and reconciliation efforts to really
    1:51:57 get to the root of things.
    1:51:59 So maybe that’s part of it.
    1:52:02 We need investigations for sure.
    1:52:03 Ultimately we need prosecutions.
    1:52:06 We need ultimately, we need people to go to jail because we need to set object lessons
    1:52:09 that say that you don’t get to do this.
    1:52:13 And on those last two, I would say that those are both up to the new administration and I
    1:52:15 don’t want to speak for them and I don’t want to predict what they’re going to do.
    1:52:19 But they have, they for sure have the ability to do both of those things and we’ll see
    1:52:20 where they take it.
    1:52:21 Yeah, it’s truly disturbing.
    1:52:26 I don’t think anybody wants this kind of overreach of power for government, including perhaps
    1:52:28 people that are participating in it.
    1:52:35 It’s like this dark momentum of power that you just get caught up in it and that’s the
    1:52:36 reason there’s that kind of protection.
    1:52:38 Nobody wants that.
    1:52:41 So I use the metaphor of the ring of power and for people who don’t catch the reference
    1:52:44 as Lord of the Rings and the thing with the ring of power and Lord of the Rings is the
    1:52:48 ring the Gollum has in the beginning and it turns you invisible and it turns out it like
    1:52:52 unlocks all this like fearsome power is the most powerful thing in the world is key to
    1:52:53 everything.
    1:52:56 And basically the moral lesson of Lord of the Rings, which was written by a guy who thought
    1:53:00 very deeply about these things is, yeah, the ring of power is inherently corrupting.
    1:53:03 The characters at one point, they’re like, end off, just put on the ring and like fix
    1:53:04 this.
    1:53:05 Right.
    1:53:10 And he’s like, he will not put the ring on even to like end the war because he knows
    1:53:11 that it will corrupt him.
    1:53:17 And then as it starts, the character of Gollum is the result of like a normal character who
    1:53:20 ultimately becomes this incredibly corrupt and deranged version of himself.
    1:53:24 And so, I mean, I think you said something actually quite profound there, which is the
    1:53:27 ring of power is infinitely tempting.
    1:53:29 The censorship machine is infinitely tempting.
    1:53:32 If you have it, like you are going to use it.
    1:53:37 It’s overwhelmingly tempting because it’s so powerful and that it will corrupt you.
    1:53:41 And yeah, I don’t know whether any of these people feel any of this today.
    1:53:42 They should.
    1:53:43 I don’t know if they do.
    1:53:47 But yeah, you go out five or 10 years later, you know, you would hope that you would realize
    1:53:51 that your soul has been corroded and you probably started out thinking that you were a patriot
    1:53:55 and you were trying to defend democracy and you ended up being, you know, extremely authoritarian
    1:53:57 and anti-democratic and anti-western.
    1:54:05 Can I ask you a tough question here staying on the ring of power, Elon is quickly becoming
    1:54:11 the most powerful human on earth?
    1:54:13 I’m not sure about that.
    1:54:14 You don’t think he is?
    1:54:16 Well, he doesn’t have the nukes, so.
    1:54:17 Nukes.
    1:54:22 Yeah, there’s different definitions and perspectives on power, right?
    1:54:30 How can he and or Donald Trump avoid the corrupting aspects of this power?
    1:54:31 I mean, I think the danger is there with power.
    1:54:32 It’s just, it’s flat out there.
    1:54:36 I would say with Elon, I mean, you know, we’ll see, I would say with Elon, and I would say
    1:54:40 by the way, overwhelmingly, I would say so far so good, I’m extremely, extremely thrilled
    1:54:45 by what he’s done on almost every front for like, you know, the last 30 years, but including
    1:54:48 all this stuff recently, like I think he’s been a real hero on a lot of topics where
    1:54:50 we needed to see heroism.
    1:54:53 But look, I would say I guess the sort of case that he has this level of power is some
    1:54:57 combination of the money and the proximity to the president.
    1:55:00 And obviously both of those are instruments of power.
    1:55:05 The counterargument to that is I do think a lot of how Elon is causing change in the
    1:55:06 world right now.
    1:55:08 I mean, there’s, there’s the companies he’s running directly where I think he’s doing
    1:55:13 very well and we’re investors in multiple of them and doing very well.
    1:55:17 But I think like a lot of the stuff that gets people mad at him is like, it’s the social
    1:55:20 and political stuff and it’s, you know, it’s his statements and then it’s the downstream
    1:55:21 effects of his statements.
    1:55:25 So like, for example, it’s, you know, for the last couple of weeks, it’s been him, you
    1:55:28 know, kind of weighing in on this rape gang scandal, you know, this rape organized child
    1:55:30 rape thing in the UK.
    1:55:34 And you know, it’s, it’s, you know, it’s, it’s actually a, it’s a preface cascade.
    1:55:36 It’s one of these things where people knew there was a problem.
    1:55:37 They weren’t willing to talk about it.
    1:55:39 It kind of got suppressed.
    1:55:43 And then Elon brought it up and then all of a sudden there’s now in the UK, this like
    1:55:46 massive explosion of basically open conversation about it for the first time.
    1:55:49 And, you know, it’s like this catalyzing, all of a sudden everybody’s kind of woken
    1:55:52 up and being like, Oh my God, you know, this is really bad.
    1:55:55 And then there will be now, you know, I’m pretty sure pretty, pretty clearly big changes
    1:55:56 as a result.
    1:56:00 And Elon was, you know, he played the role of the boy who said, the emperor has no clothes,
    1:56:01 right?
    1:56:02 But, but, but here’s the thing.
    1:56:03 Here’s my point.
    1:56:05 Like he said it about something that was true, right?
    1:56:09 And so had he said it about something that was false, you know, he would get no credit
    1:56:10 for it.
    1:56:11 He wouldn’t deserve any credit for it.
    1:56:12 But he said something that was true.
    1:56:16 And by the way, everybody over there instantly, they were like, Oh yeah, he’s right.
    1:56:17 Right.
    1:56:20 Like nobody, like nobody seriously said, they’re just arguing the details now.
    1:56:22 So, so number one, it’s like, okay, he says true things.
    1:56:26 And so it’s like, okay, how far a bit of this way, like how worried are we?
    1:56:30 Are we about somebody becoming corrupt by virtue of their power being that they get to speak
    1:56:31 the truth?
    1:56:34 And I guess I would say, especially in the last decade of what we’ve been through where
    1:56:37 everybody’s been lying all the time about everything, I’d say, I think we should run
    1:56:39 this experiment as hard as we can to get people to tell the truth.
    1:56:42 And so I don’t feel that bad about that.
    1:56:47 And then the money side, you know, this rapidly gets into the money and politics question.
    1:56:51 And the money and politics question is this very interesting question because it seems
    1:56:55 like there’s a clear cut case that the more money and politics, the worse things are and
    1:56:58 the more corrupted the system is.
    1:57:02 That was a very popular topic of public conversation up until 2016, when Hillary outspent Trump
    1:57:05 three to one and lost.
    1:57:09 You’ll notice that money and politics has all most vanished as a topic in the last eight
    1:57:10 years.
    1:57:14 And once again, Trump spent far, you know, Kamala raised and spent 1.5 billion on top
    1:57:16 of what Biden spent.
    1:57:18 So they were, they were at, I don’t know, something like three billion total and Trump,
    1:57:22 I think spent again, like a third or a fourth of that.
    1:57:26 And so the money and politics kind of topic has kind of vanished from the popular conversation
    1:57:27 the last eight years.
    1:57:34 It has come back a little bit now that Elon is spending, you know, but, but again, like
    1:57:37 it’s like, okay, he’s spending, but the data would seem to indicate in the last, at least
    1:57:39 in the last eight years that money doesn’t win the political battles.
    1:57:43 It’s actually like the voters actually have a voice and they actually exercise it and
    1:57:44 they don’t just listen to ads.
    1:57:47 And so again, there I would say like, yeah, clearly there’s some power there, but I don’t
    1:57:50 know if it’s like, I don’t know if it’s some like, I don’t know if it’s some weapon that
    1:57:54 he can just like turn on and use in a definitive way.
    1:57:59 And I don’t know if there’s parallels there, but I could also say just on a human level,
    1:58:04 he has a good heart and I interact with a lot of powerful people and that’s not always
    1:58:05 the case.
    1:58:07 So that’s a good thing there.
    1:58:08 Yeah.
    1:58:13 If we, if we can draw parallels to the Hobbit or whatever who gets to put on the ring.
    1:58:14 Frodo.
    1:58:15 Frodo, yeah.
    1:58:17 Yeah, maybe one of the lessons of Lord of the Rings, right, is even, even Frodo would
    1:58:19 have been, you know, even Frodo would have been corrupted, right?
    1:58:23 But, you know, nevertheless, you had somebody who could do what it took at the time.
    1:58:27 The thing that I find just so amazing about the Elon phenomenon and all the critiques
    1:58:31 is, you know, the one thing that everybody in our societies universally agrees on because
    1:58:36 of our, it’s sort of our post-Christian egalitarian, so, you know, we live in sort of this post
    1:58:42 secularized Christian context in the West now and it’s, you know, we consider Christianity
    1:58:45 kind of, you know, backwards, but we still believe essentially all the same things.
    1:58:49 We just dress them up in sort of fake science.
    1:58:53 So the, the one thing that we’re all told, we’re all taught from, from, is that the best
    1:58:55 people in the world are the people who care about all of humanity, right?
    1:58:59 And we venerate, you know, all of our figures are people who care about all of, you know,
    1:59:02 Jesus cared about all of humanity, Gandhi cared about all of humanity, Martin Luther
    1:59:05 King cared about all of humanity, like, it’s, it’s, it’s, the person who cares the most
    1:59:07 about everybody.
    1:59:11 And with Elon, you have a guy who literally like is, he’s literally, he talks about this
    1:59:15 constantly and he talks about it exactly the same in private is literally, he is operating
    1:59:18 on behalf of all of humanity to try to get us to, you know, he goes through to get us
    1:59:21 through multi-planetary civilization so that we can survive a strike on anyone planet so
    1:59:25 that we can extend the light of human consciousness into the world and, you know, into the universe
    1:59:27 and have it persist, you know, and the good of the whole thing.
    1:59:31 And like literally the critique is, yeah, we want you to care about all of humanity,
    1:59:32 but not like that.
    1:59:39 Yeah, all the critics, all the, all the surface turmoil, the critics will be forgotten.
    1:59:42 Yeah, I think that’s, yeah, that’s clear.
    1:59:47 You said that we always end up being ruled by the elites of some kind.
    1:59:50 Can you explain this law, this idea?
    1:59:55 So this comes from a Italian political philosopher from about a hundred years ago named Robert.
    2:00:02 I’m going to mangle the, let you pronounce the Italian, Michelle’s or Michael’s.
    2:00:06 And then it was, I learned about it through a famous book on politics, probably the best
    2:00:10 book on politics written in the 20th century called the Machiavellians by this guy, James
    2:00:12 Burnham, who has had a big impact on me.
    2:00:16 But in the Machiavellians, he resurrects what he calls this sort of Italian realist school
    2:00:19 of political philosophy from the, from the 10s and 20s.
    2:00:21 And these were people, to be clear, this was not like a Mussolini thing.
    2:00:26 These were people who were trying to understand the actual mechanics of how politics actually
    2:00:27 works.
    2:00:31 So to get to the actual sort of mechanical substance of like how the political machine
    2:00:32 operates.
    2:00:38 And this guy, Michelle, said this concept he ended up with called the iron law of oligarchy.
    2:00:42 And so what the iron law of oligarchy, and I mean, take a step back to say what he meant
    2:00:44 by oligarchy because it has multiple meanings.
    2:00:47 So basically, in classic political theory, there’s basically three forms of government
    2:00:48 at core.
    2:00:51 There’s democracy, which is rule of the many.
    2:00:53 There’s oligarchy, which is rule of the few.
    2:00:55 And there’s monarchy, which is rule of the one.
    2:00:58 And you can just use that as a general framework of any government you’re going to be under
    2:01:01 is going to be one of those, just a mechanical observation, without even saying which ones
    2:01:05 are good or bad, just a structural observation.
    2:01:08 And so the question that Michelle’s asked was, like, is there such a thing as democracy?
    2:01:10 Like, is there actually such a thing as democracy?
    2:01:13 Is there ever actually like direct, direct government?
    2:01:17 And what he did was he mounted this sort of incredible historical exploration of whether
    2:01:19 democracies had ever existed in the world.
    2:01:22 And the answer basically is almost never, and we could talk about that.
    2:01:27 But the other thing he did was he sought out the most democratic private organization in
    2:01:31 the world that he could find at that point, which he concluded was some basically communist
    2:01:35 German Auto Workers Union that was like wholly devoted to the workers of the world uniting,
    2:01:37 you know, back when that was like the hot thing.
    2:01:40 And he went in there and he’s like, okay, this is the organization out of all organizations
    2:01:43 on planet Earth that must be operating as a direct democracy.
    2:01:46 And he went in there and he’s like, oh, nope, there’s a leadership class.
    2:01:49 You know, there’s like six guys at the top and they control everything and they lead
    2:01:53 the rest of the membership along, you know, by the nose, which is of course the story
    2:01:54 of every union.
    2:01:58 The story of every union is always the story of, you know, there’s a Jimmy Hoffa in there,
    2:01:59 you know, kind of running the thing.
    2:02:04 You know, we just saw that with the Dock Workers Union, right, like, you know, there’s a guy.
    2:02:05 And he’s in charge.
    2:02:09 And by the way, the number two is his son, right, like that’s not like a, you know, an
    2:02:10 accident, right?
    2:02:14 So the iron law of oligarchy basically says democracy is fake.
    2:02:17 There’s always a rule in class, there’s always a ruling elite structurally.
    2:02:21 And he said the reason for that is because the masses can’t organize, right?
    2:02:22 What’s the fundamental problem?
    2:02:26 Whether the mass is 25,000 people in union or 250 million people in a country, the masses
    2:02:31 can’t organize, the majority cannot organize, only a minority can organize and to be effective
    2:02:33 in politics, you must organize.
    2:02:38 And therefore every political structure in human history has been some form of a small
    2:02:44 organized elite ruling, a large and dispersed majority, every single one.
    2:02:51 The Greeks and the Florentines had brief experiments in direct democracy and they were total disasters.
    2:02:54 In Florence, I forget the name of it, it was called like the workers revolt or something
    2:02:55 like that.
    2:02:59 There was like a two year period where they basically experimented with direct democracy
    2:03:02 during the Renaissance and it was a complete disaster.
    2:03:04 And they never tried it again.
    2:03:08 In the state of California, we have our own experiment on this, which is the proposition
    2:03:13 system, which is an overlay on top of the legislature and anybody who looks at it for
    2:03:15 two seconds concludes it’s been a complete disaster.
    2:03:19 It’s just a catastrophe and it’s caused enormous damage to the state.
    2:03:23 And so basically the presumption that we are in a democracy is just sort of by definition
    2:03:24 fake.
    2:03:27 Now, good news for the US, it turns out the founders understood this and so of course they
    2:03:30 didn’t give us a direct democracy, they gave us a representative democracy, right?
    2:03:34 And so they built the oligarchy into the system in the form of Congress and the executive
    2:03:37 branch, the judicial branch.
    2:03:40 But so anyway, so as a consequence, democracy is always and everywhere fake.
    2:03:43 There is always a ruling elite.
    2:03:47 And basically the lesson of the Machiavellians is you can deny that if you want, but you’re
    2:03:48 fooling yourself.
    2:03:52 The way to actually think about how to make a system work and maintain any sort of shred
    2:03:56 of freedom is to actually understand that that is actually what’s happening.
    2:04:02 And lucky for us, the founders saw this and figured out a way to, given that there’s
    2:04:09 going to be a ruling elite, how to create a balance of power among that elite so it
    2:04:10 doesn’t get out of hand.
    2:04:11 It was very clever, right?
    2:04:13 And some of this was based on earlier experiments.
    2:04:16 Some of this, by the way, these were very, very smart people, right?
    2:04:18 And so they knew tremendous amounts of like Greek and Roman history.
    2:04:23 They knew the Renaissance history, the Federalist papers, they argued this at great length.
    2:04:24 You can read it all.
    2:04:29 They ran like one of the best seminars in world history, trying to figure this out.
    2:04:30 And they went through all this.
    2:04:33 And yeah, and so they thought through it very carefully, but just to give you an example,
    2:04:34 which continues to be a hot topic.
    2:04:38 So one way they did it is through the three branches of government, right?
    2:04:42 Executive legislative and judicial sort of balance of powers.
    2:04:45 But the other way they did it was they sort of echoing what had been done earlier, I think
    2:04:50 in the UK Parliament, they created the two different bodies of the legislature, right?
    2:04:54 And so the House and the Senate, and as you know, the House is a portion on the basis
    2:04:56 of population and the Senate is not, right?
    2:05:00 The small states have just as many senators as the big states.
    2:05:02 And then they made the deliberate decision to have the House get reelected every two
    2:05:05 years to make it very responsive to the will of the people.
    2:05:09 And they made the decision to have the Senate get reelected every six years so that it had
    2:05:12 more buffer from the passions at the moment.
    2:05:14 But what’s interesting is they didn’t choose one or the other, right?
    2:05:16 They did them both.
    2:05:18 And then to get legislation passed, you have to get through both of them.
    2:05:22 And so they built in like a second layer of checks and balances.
    2:05:26 And then there’s 1,000 observations we could make about like how well the system is working
    2:05:30 today and like how much does it live up to the ideal and how much are we actually complying
    2:05:31 with the Constitution?
    2:05:34 And there’s lots of, you know, there’s lots of open questions there.
    2:05:39 But you know, this system has survived for coming on 250 years with a country that has
    2:05:42 been spectacularly successful that I don’t think at least, you know, I don’t think any
    2:05:44 of us would trade this system for any other one.
    2:05:46 And so it’s one of the great all-time achievements.
    2:05:47 Yeah, it’s incredible.
    2:05:52 And we should say they were all pretty young relative to our current set of leaders.
    2:05:53 Many in their 20s at the time.
    2:05:54 And like super geniuses.
    2:05:57 This is one of those things where it’s just like, all right, something happened where
    2:06:01 there was a group of people where, you know, nobody ever tested their IQs, but like, these
    2:06:02 are Einstein’s of politics.
    2:06:03 Yeah.
    2:06:04 The amazing thing.
    2:06:07 But anyway, I just, I go through all that, which is they were very keen students of the
    2:06:12 actual mechanical practice of democracy, not fixated on what was desirable.
    2:06:16 They were incredibly focused on what would actually work, which is, you know, I think
    2:06:17 the way to think about these things.
    2:06:22 There were engineers of sorts, not the fuzzy humanity students of sorts.
    2:06:24 They were shape rotators, not word cells.
    2:06:26 I remember that.
    2:06:27 Wow.
    2:06:29 That meme came and went.
    2:06:30 I think you were centered to them.
    2:06:31 You’re centered to a lot of memes.
    2:06:32 I was.
    2:06:36 You’re the meme dealer and the meme popularizer.
    2:06:37 That meme I guess we credit for.
    2:06:39 And then the current thing is the other one I get some credit for.
    2:06:42 I don’t know that I invented either one, but I popularized them.
    2:06:44 Take credit and run with it.
    2:06:52 If you can just linger on the Machiavellians, it’s a, it’s a study of power and power dynamics.
    2:06:59 Like you mentioned, looking at the actual reality of the machinery of power from everything
    2:07:04 you’ve seen now in government, but also in companies, what are some interesting things
    2:07:08 you can sort of continue to say about the dynamics of power, the jostling for power that
    2:07:10 happens inside these institutions.
    2:07:11 Yeah.
    2:07:15 So it, a lot of it, you know, we already talked about this a bit with the universities, which
    2:07:19 is you can apply a Machiavellian style lens to the, it’s why I posed the question to you
    2:07:24 that I did, which is, okay, who runs the university, the trustees, the administration, the students
    2:07:25 or the faculty.
    2:07:28 And then, you know, the answer, the true answer is some combination of the three or of the
    2:07:33 four, plus the donors, by the way, plus the government, plus the press, et cetera, right.
    2:07:36 And so there, you know, there’s a, there’s a mechanical interpretation of that.
    2:07:41 I mean, companies operate under the exact same, you know, set of questions, who runs a company,
    2:07:44 you know, the CEO, but like the CEO runs the company basically up to the day that either
    2:07:47 the shareholders or the management team revolt.
    2:07:50 If the shareholders revolt, it’s very hard for the CEO to stay in the seat.
    2:07:53 If the management team revolts, it’s very hard for the CEO to stay in the seat.
    2:07:56 By the way, if the employees revolt, it’s also hard to stay in the seat.
    2:07:59 By the way, if the New York Times comes at you, it’s also very hard to stay in the seat.
    2:08:02 If the Senate comes at you, it’s very hard to stay in the seat.
    2:08:07 So, you know, like a reductionist version of this that is a good shorthand is who can
    2:08:09 get who fired.
    2:08:13 You know, so, so who has more power, you know, the newspaper columnist who makes, you know,
    2:08:17 $200,000 a year or the CEO who makes, you know, $200 million a year.
    2:08:20 And it’s like, well, I know for sure that the columnist can get the CEO fired.
    2:08:21 I’ve seen that happen before.
    2:08:25 I have yet to see a CEO get a columnist fired.
    2:08:32 Did anyone ever get fired from the Bill Ackman assault on journalism?
    2:08:36 So Bill, Bill like really showed the bullshit that happens in journalism.
    2:08:39 No, because what happens is they, they, they were at with the, I mean, they, and I would
    2:08:41 say to their credit, they were as a badge of honor.
    2:08:43 And then to their shame, they were as a badge of honor, right?
    2:08:48 Which is if, you know, if they’re doing the right thing, then they are justifiably proud
    2:08:50 of themselves for standing up under pressure.
    2:08:53 But it also means that they can’t respond to legitimate criticism.
    2:08:56 And, you know, they’re obviously terrible at that now.
    2:09:01 As I recall, he went straight to the CEO of the actual Springer that owns Insider.
    2:09:04 And I, you know, and I happen to know the CEO and I think he’s quite a good CEO.
    2:09:08 But like, I, like, well, it’s a good example is the CEO of actual Springer run his own
    2:09:09 company.
    2:09:10 Right.
    2:09:12 Like, well, there’s a fascinating, okay, so there’s a fascinating thing playing out right
    2:09:13 now.
    2:09:18 Not to dwell on these fires, but it’s a, you see, the pressure reveals things, right?
    2:09:22 And so if you’ve been watching what’s happened with LA Times recently, so this guy, biotech
    2:09:26 entrepreneur buys the LA Times, like whatever, eight years ago, it is just like the most
    2:09:30 radical social revolutionary thing you can possibly imagine.
    2:09:32 It endorses every crazy left-wing radical.
    2:09:36 You can imagine it endorses Karen Bass, it endorses Gavin Newsom, it’s just like a litany
    2:09:39 of all the people who are currently burning the city to the ground.
    2:09:42 It’s just like endorsed every single bad person, every step of the way.
    2:09:44 He’s owned it the entire time.
    2:09:47 You know, he put his foot down right before, for the first time, I think put his foot down
    2:09:50 right before the November election and said, we’re not, we’re getting, he said, we’re
    2:09:52 going to get out of this thing where we just always endorse the Democrat.
    2:09:53 And we said, we’re not endorsing.
    2:09:57 I think he said, we’re not endorsing for the presidency and like the paper flipped out.
    2:09:58 Right.
    2:10:01 It’s like our billionaire backer who’s, I don’t know what he spends, but like, he must
    2:10:05 be burning 50 or 100 million dollars a year out of his pocket to keep this thing running.
    2:10:09 He paid 500 million for it, which is amazing.
    2:10:13 Back when people still thought these things were businesses.
    2:10:17 And then he’s probably burned another 500 million over the last decade, keeping it running.
    2:10:20 And he burns probably another 50, a hundred million a year to do this.
    2:10:24 And the journalists at the LA Times hate him with the fury of a thousand sons.
    2:10:27 Like they just like absolutely freaking despise him.
    2:10:29 And they have been like attacking him and, you know, the ones that can get jobs elsewhere
    2:10:32 quit and do it and the rest just stay and say the worst, you know, most horrible things
    2:10:33 about him.
    2:10:36 And they want to constantly run these stories, attacking him.
    2:10:40 And so he has had this reaction that a lot of people in LA are having right now to this
    2:10:44 fire and to this just like incredibly vivid collapse of leadership and all these people
    2:10:48 that he had his paper head and tourists are just disasters.
    2:10:50 And he’s on this tour.
    2:10:54 He’s basically just, he’s decided, he’s, he’s decided to be the boy who says the emperor
    2:10:57 has no clothes, but he’s doing it to his own newspaper.
    2:10:58 Very smart guy.
    2:11:01 And he’s basically saying, yeah, we, we, yes, we did all that and we endorsed these
    2:11:04 people and it was a huge mistake and we’re going to completely change.
    2:11:08 And his paper is, you know, in a complete internal revolt.
    2:11:09 But I go through it, which is okay.
    2:11:12 Now we have a very interesting question, which is who runs the LA Times.
    2:11:17 Because for the last eight years, it hasn’t been him.
    2:11:19 It’s been the reporters.
    2:11:23 Now for the first time, the owner is showing up saying, oh no, I’m actually in charge and
    2:11:25 the reporters are saying, no, you’re not.
    2:11:28 And like, like it is freaking on.
    2:11:32 And so again, if the Machiavellian’s mindset on this is like, okay, how is power actually
    2:11:33 exercised here?
    2:11:37 Can, can, can a guy who’s like even super rich and super powerful, who even owns his
    2:11:39 own newspaper, can he stand up to a full-scale assault?
    2:11:43 Not only by his own reporters, but by every other journalism outlet who also now thinks
    2:11:45 he’s the antichrist.
    2:11:50 And he is trying to exercise power by speaking out publicly and so that’s the game of power
    2:11:51 there.
    2:11:52 And firing people.
    2:11:54 And you know, he has removed people and he has set new rules.
    2:11:57 I mean, he is, he is now, I think at long, I think he’s saying that he’s now at long
    2:12:01 last actually exercising prerogatives of an owner of a business, which just decide on
    2:12:02 the policies and staffing of the business.
    2:12:06 There are certain other owners of these publications that are doing similar things right now.
    2:12:08 He’s the one I don’t know.
    2:12:10 So he’s the one I can talk about.
    2:12:13 But there are others that are going through this same thing right now.
    2:12:17 And I think it’s a really interesting open question, like, you know, in a fight between
    2:12:20 the employees and the employer, like it’s not crystal clear that the employer wins that
    2:12:21 one.
    2:12:23 And just to stay on journalism for a second, we mentioned Bill Ackman.
    2:12:28 I just want to say, put him in the category we mentioned before of a really courageous
    2:12:29 person.
    2:12:37 I don’t think I’ve ever seen anybody so fearless in going after, you know, in following what
    2:12:40 he believes in publicly.
    2:12:46 That’s courage that, that, that several things he’s done publicly has been really inspiring,
    2:12:47 just being courageous.
    2:12:49 What do you think is like the most impressive example?
    2:12:57 Where he went after a journalist whose whole incentive is to like, I mean, it’s like sticking
    2:13:02 your like kicking the beehive or whatever, you know, what’s going to follow.
    2:13:08 And to do that, I mean, that’s why it’s difficult to challenge journalistic organizations because
    2:13:12 they’re going to, you know, there’s just so many mechanisms they use, including like writing
    2:13:16 articles and get cited by Wikipedia, then drive the narrative and then they can get
    2:13:18 you fired, all this kind of stuff.
    2:13:27 Bill Ackman, like a bad MFR, just, just tweets these essays and just goes after them legally
    2:13:32 and also in the public eye and just, I don’t know, that was truly inspiring.
    2:13:36 There’s not many people like that in public.
    2:13:42 And hopefully that inspires not just me, but many others to be like, to be courageous themselves.
    2:13:45 Did you know of him before he started doing this in public?
    2:13:49 I knew of Neri, his wife, who’s just a brilliant researcher and scientist, and so I admire
    2:13:50 her look up to her.
    2:13:51 I think she’s amazing.
    2:13:55 Well, the reason I ask if you knew about Bill is because a lot of people had not heard
    2:13:58 of him before, especially like before October 7th and before some of the campaigns he’s
    2:14:01 been running since in public, and with Harvard and so forth.
    2:14:05 But he was very well known in the investment world before that.
    2:14:10 So he was a famous, he was a so-called activist investor for, you know, very, very successful
    2:14:15 and very widely respected for probably 30 years before, before, before now.
    2:14:19 And I bring that up because it turns out they weren’t for the most part battles that happened
    2:14:20 in kind of full public view.
    2:14:23 They weren’t national stories, but in the business and investing world, the activist
    2:14:30 investor is a very, it’s like in the movie Taken, it’s a very specific set of skills.
    2:14:34 How to like really take control of situations and how to wreck the people who you’re going
    2:14:36 up against.
    2:14:41 And just to, and there’s a lot, there’s been controversy over the years on this topic
    2:14:44 and there’s too much detail to go into, but the, the defensive activist investing, which
    2:14:48 I think is valid is, you know, these are the guys who basically go in and take stakes in
    2:14:51 companies that are being poorly managed or under optimized.
    2:14:55 And, and, and then generally what that means is at least the theory is that means the existing
    2:15:00 management has become entrenched and lazy, mediocre, you know, whatever, not responding
    2:15:04 to the needs of the shareholders, often not responding to the customers.
    2:15:09 And the activists basically go in with a minority position and then they rally support among
    2:15:11 other investors who are not activists.
    2:15:16 And then they basically show up and they force change, but they are the aggressive version
    2:15:17 of this.
    2:15:19 And I’ve been on the, I’ve been involved in companies that have been on the receiving
    2:15:24 end of these, where it is amazing how much somebody like that can exert pressure on situations
    2:15:26 even when they don’t have formal control.
    2:15:30 So it’s another, it would be another chess piece on the mechanical board of kind of how
    2:15:31 power gets exercised.
    2:15:34 And basically what happens is the effect of analysts a large amount of the time they end
    2:15:37 up taking, they end up taking over control of companies, even though they never own more
    2:15:39 than like 5% of the stock.
    2:15:42 And so anyway, so it turns out with Bill’s been such a fascinating case because he has
    2:15:48 that like complete skill set and he has now decided to bring it to bear in areas that
    2:15:50 are not just companies.
    2:15:53 And two interesting things for that, one is, you know, some of these places, you know,
    2:15:57 and some of these battles are still ongoing, but number one, like a lot of people who run
    2:16:00 universities or newspapers are not used to being up against somebody like this.
    2:16:04 And by the way, also now with infinitely deep pockets and lots of experience in courtrooms
    2:16:06 and all the things that kind of go with that.
    2:16:12 But the other is, through example, he is teaching a lot of the rest of us, the activist playbook,
    2:16:13 like in real time.
    2:16:17 And so the Liam Neeson skill set is getting more broadly diffused just by being able to
    2:16:19 watch and learn from him.
    2:16:22 So I think he, I think he’s having a, you know, I would put him up there with Elon in
    2:16:25 terms of somebody who’s really affecting how all this is playing out.
    2:16:29 But even skill set aside, just courage and yes, including by the way, courage to go outside
    2:16:30 of his own zone.
    2:16:31 Yeah.
    2:16:32 Right.
    2:16:35 You know, cause like he hasn’t, I’ll give you an example, like my firm venture capital
    2:16:36 firm, we have LPs.
    2:16:40 There are things that I feel like I can’t do or say cause I feel like I would be bringing,
    2:16:44 you know, I would be bringing embarrassment or other consequences to our LPs.
    2:16:47 He has investors also where he worries about that.
    2:16:50 And so his, so a couple of things, one is his willingness to go out a bit and risk his
    2:16:52 relationship with his own investors.
    2:16:55 But I will tell you the other thing, which is his investors, I know this for a fact his
    2:16:59 investors have been remarkable to be supportive of him doing that because as it turns out,
    2:17:02 a lot of them actually agree with him.
    2:17:06 And so he’s the same thing he does in his activism campaigns.
    2:17:09 He is able to be the tip of the spear on something that actually a lot more people agree with.
    2:17:10 Yeah.
    2:17:14 It turns out if you have truth behind you, it helps.
    2:17:18 And just again, you know, how I started is a lot of people are just fed up.
    2:17:23 You’ve been spending a bunch of time in Mar-a-Lago and Palm Beach helping the new administration
    2:17:26 in many ways, including interviewing people who might join.
    2:17:31 So what’s your general sense about the talent about the people who are coming in into the
    2:17:33 new administration?
    2:17:36 So I should start by saying I’m not a member of the new administration.
    2:17:40 I’m not, I’m not in the room, I’m not like in the room when a lot of these people are
    2:17:41 being selected.
    2:17:42 I believe you said unpaid intern.
    2:17:43 I am an unpaid intern.
    2:17:48 So I’m a volunteer and I, you know, when helpful, but I’m not, I’m not making the decisions
    2:17:50 nor am I in a position to, you know, speak for the administration.
    2:17:53 So I don’t want to say anything that will cause people to think I’m doing that.
    2:17:54 It’s a very unusual situation, right?
    2:17:57 Where you had an incumbent president and then you had a four-year gap where he’s out of
    2:17:59 office and then you have him coming back, right?
    2:18:04 And as you’ll recall, there was a fair amount of controversy over the end of the first term.
    2:18:05 Oh, yeah.
    2:18:09 The fear, the specific concern was, you know, the first Trump administration, you know,
    2:18:12 they will all say this is like they didn’t come in with a team, right?
    2:18:15 So they, you know, they didn’t come into the team and most of the sort of institutional
    2:18:19 base of the Republican party were Bush Republicans and they were, and many of them had become
    2:18:20 never Trumpers.
    2:18:22 And so they had a hard time putting the team together.
    2:18:24 And then by the way, they had a hard time getting people confirmed.
    2:18:27 And so if you talk to the people who were there in the first term, it took them two
    2:18:30 to three years to kind of even get the government in place.
    2:18:33 And then they basically only had the government in place for, you know, for basically like
    2:18:37 18 months and then COVID hit, you know, and then sort of aftermath and everything and all
    2:18:39 the drama and headlines and everything.
    2:18:42 And so the concern, you know, including from some very smart people in the last two years
    2:18:46 has been, boy, if Trump gets a second term, is he going to be able to get a team that
    2:18:50 is as good as the team he had last time or a team that is actually not as good because
    2:18:53 maybe people got burned out, maybe they’re more cynical now, maybe they’re not willing
    2:18:55 to go through the drama.
    2:18:57 By the way, a lot of people on in the first term came under like, you know, with their
    2:19:01 own withering legal assaults and, you know, some of them went to prison and like, you
    2:19:05 know, a lot, a lot of stuff happened, lots of investigations, lots of legal fees, lots
    2:19:09 of bad press, lots of debanking, by the way.
    2:19:14 A lot of the officials in the first term got debanked, including the president’s wife
    2:19:15 and son.
    2:19:16 Yeah.
    2:19:17 I heard you tell that story.
    2:19:18 It’s insane.
    2:19:19 That’s just insane.
    2:19:20 In the wake of the first term.
    2:19:21 Yes.
    2:19:25 We now take out spouses and children with our ring of power.
    2:19:28 And so there’s like this legitimate question as to like whether, okay, what will the team
    2:19:29 for the second term look like?
    2:19:33 And at least what I’ve seen and what you’re seeing is the appointments is it looks much,
    2:19:34 much better.
    2:19:37 First of all, it just looks better than the first term and not because the people in the
    2:19:40 first term were not necessarily good, but just you just have this like influx of like
    2:19:44 incredibly capable people that have shown up that want to be part of this.
    2:19:46 And you just didn’t have that the first time.
    2:19:49 And so they’re just drawing on a much deeper, richer talent pool than they had the first
    2:19:50 time.
    2:19:53 And they’re drawing on people who know what the game is, like they’re drawing on people
    2:19:57 now who know what is going to happen and they’re still willing to do it.
    2:20:00 And so they’re going to get, I think, you know, some of the best people from the first
    2:20:05 term, but they’re bringing in a lot of people who they couldn’t get the first time around.
    2:20:07 And then second is there’s a bunch of people, including people in the first term where they’re
    2:20:09 just 10 years older.
    2:20:13 And so they went through the first term and they just learned how everything works.
    2:20:16 Or they’re young people who just had a different point of view, and now they’re 10 years older
    2:20:19 and they’re ready to go serve in government.
    2:20:21 And so there’s a generational shift happening.
    2:20:25 And actually one of the interesting things about the team that’s forming up is it’s remarkably
    2:20:26 young.
    2:20:29 Some of the cabinet members and then many of the second and third level people are like
    2:20:33 in their 30s and 40s, you know, which is a big change from the gerontocracy that, you
    2:20:36 know, we’ve been under for the last 30 years.
    2:20:39 And so I think the caliber has been outstanding, you know, and we could sit here and list tons
    2:20:42 and tons of people, but like, you know, the people who are running, you know, it’s everything
    2:20:46 from the people who are running all the different departments at HHS, it’s the people running,
    2:20:50 you know, the number two at the Pentagon is Steve Feinberg, who’s just like an incredible
    2:20:53 legend of private equity, incredible capable guy.
    2:20:57 We’ve got two, actually two of my partners are going in, who I both think are amazing.
    2:20:58 Yeah.
    2:21:02 Like many, many parts of the government that people are like really impressive.
    2:21:10 Well, I think one of the concerns is actually that given the human being of Donald Trump,
    2:21:18 that there would be more tendency towards, let’s say, favoritism versus meritocracy,
    2:21:22 that there’s kind of circles of sycophancy that form.
    2:21:30 And if you’re be able to be loyal and never oppose and just be basically suck up to the
    2:21:32 president, then you’ll get a position.
    2:21:33 So that’s one of the concerns.
    2:21:40 And I think you’re in a good position to speak to the degree that’s happening versus
    2:21:43 hiring based on merit and just getting great teams.
    2:21:44 Yeah.
    2:21:48 So look, I just start by saying any leader at that level, by the way, any CEO, there’s
    2:21:49 always some risk of that.
    2:21:50 Right.
    2:21:53 So there’s always some, you know, it’s just, it’s like a natural reality warps around powerful
    2:21:54 leaders.
    2:21:55 And so there’s always some risk to that.
    2:21:57 Of course, the good and powerful leaders are, you know, very aware of that.
    2:22:01 And Trump at this point in his life, I think, is highly aware of that, at least my interactions
    2:22:03 with him, like he definitely seems very aware of that.
    2:22:06 So that’s one thing.
    2:22:09 I would just say that I think the way to look at that, I mean, and look like I said, I don’t
    2:22:11 want to predict what’s going to happen once this whole thing starts unfolding.
    2:22:14 But I would just say, again, the caliber of the people who are showing up and getting
    2:22:18 the jobs and then the fact that these are some of the most accomplished people in the
    2:22:24 business world and in the medical field, I just, you know, Jay Battitaria coming in
    2:22:25 around NIH.
    2:22:27 So I was actually in the, I was actually, I was part of the interview team for a lot
    2:22:29 of the HHS folks.
    2:22:30 Nice.
    2:22:31 Jay is amazing.
    2:22:32 I was so happy to see that.
    2:22:36 So I literally got, this is a story, I got to the transition office for one of the days
    2:22:38 of the HHS interviews and I was on one of the interviewing teams and they gave us, I didn’t
    2:22:41 know who the candidates were and they gave us the sheet in the beginning and I go down
    2:22:46 the sheet and I saw Jay’s name and I like, I almost physically fell out of my chair.
    2:22:51 And I was just like, you know, and I have, I happen to know Jay and I like respect him
    2:22:52 enormously.
    2:22:55 And then he proved himself under this like talk about a guy who proved himself under
    2:23:01 extraordinary pressure over the last five years and then go radical under the pressure.
    2:23:04 He maintained balance and thoughtfulness and depth.
    2:23:05 I mean, incredibly.
    2:23:10 Very serious, very analytical, very applied and, and, and, and yes, 100% tested under
    2:23:15 pressure came out like the more people look back at what he said and did and you know,
    2:23:19 he’s not, you know, none of us are perfect, but like overwhelmingly, like overwhelmingly
    2:23:21 insightful throughout that whole period.
    2:23:24 And you know, we, you know, we would all be much better off today had he been in charge
    2:23:26 of the response.
    2:23:29 And so just like an incredibly capable guy and look, and then he learned from all that
    2:23:30 right.
    2:23:31 He learned a lot in the last five years.
    2:23:35 And so the idea that somebody like that could be ahead of NIH as compared to the people
    2:23:37 we’ve had is just like breathtakingly.
    2:23:41 It’s just a gigantic upgrade, you know, and then Marty McRae coming in to run FDA exact
    2:23:42 same thing.
    2:23:47 The guy coming to run a CDC exact same thing.
    2:23:49 I mean, I’ve been spending time with Dr. Oz.
    2:23:52 So, you know, I’m not like, again, I’m not like it, I’m not on these teams.
    2:23:56 I’m not in the room, but like I’ve been spending enough time trying to help that like his level
    2:24:00 of insight into the healthcare system is like, it’s like astounding and it comes from being
    2:24:03 a guy who’s been like in the middle of the whole thing and been talking to people about
    2:24:07 this stuff and working on it and serving as a doctor himself and in medical systems for,
    2:24:11 you know, his entire life and it’s just like, you know, he’s like a walking encyclopedia
    2:24:12 on these things.
    2:24:17 And so, and you know, very dynamic, you know, very charismatic, very smart, organized, effective.
    2:24:20 So, you know, to have somebody like that in there.
    2:24:24 And so anyway, they’re just, I have like 30 of these stories now across all these different,
    2:24:25 all these different positions.
    2:24:29 And so I, and then I just, I’d be quite honest, I do do the compare and contrast to the last
    2:24:30 four years.
    2:24:32 And it’s not even, these people are not in the same ballpark.
    2:24:36 They’re just like wildly better.
    2:24:40 And so, you know, the pound for pound is maybe the best team in the White House since, you
    2:24:48 know, I don’t even know, maybe the 90s, maybe the, maybe the 30s, maybe the 50s, you know,
    2:24:52 maybe Eisenhower had a team like this or something, but it’s, it’s, it’s, there’s a lot of really
    2:24:53 good people in there now.
    2:24:56 Yeah, the potential for change is certainly extremely high.
    2:24:59 Well, can you speak to Doge?
    2:25:04 What’s the most wildly successful next two years for Doge?
    2:25:06 Can you imagine?
    2:25:11 Maybe also, can you think about the trajectory that’s the most likely and what kind of challenges
    2:25:12 would it be facing?
    2:25:13 Yeah.
    2:25:18 So, and start by saying again, I’m not disclaimer after disclaimer, I’m not on Doge.
    2:25:19 I’m not a member of Doge.
    2:25:25 We should say there’s about 10 lawyers in the room staring now, I’m just kidding.
    2:25:27 Both the angels and the devils on my shoulder.
    2:25:28 Okay.
    2:25:29 Yeah.
    2:25:30 So I’m not speaking for Doge.
    2:25:32 I’m not in charge of Doge.
    2:25:33 Those guys are doing it.
    2:25:34 I’m not doing it.
    2:25:38 But I am, you know, again, I’m volunteering to help as much as I can and I’m 100% supportive.
    2:25:39 Yeah.
    2:25:43 So look, I, I think the way to think, I mean, the, the basic outlines are in public, right?
    2:25:47 Which is it’s a, it’s a time limited, you know, basically commission.
    2:25:48 It’s not a formal government agency.
    2:25:51 It’s a, you know, time limited 18 month.
    2:25:55 It’ll, it’ll, in terms of implementation, it will advise the executive branch, right?
    2:26:00 And so the, the, the implementation will happen through the, the White House and the president
    2:26:02 has total attitude on what he wants to, what he wants to implement.
    2:26:07 Um, and then basically what I think about it is three kind of streams, you know, kind
    2:26:09 of target sets and they’re related, but different.
    2:26:12 So money, uh, people and regulations.
    2:26:16 Um, and so, you know, the headline number, they’ve, you know, put us the two trillion
    2:26:19 dollar number and there’s already, you know, disputes over, over that and whatever.
    2:26:23 And there’s all question there, but then there’s the people thing and the people thing is interesting
    2:26:26 because you get into these very, um, kind of, um, fascinating questions.
    2:26:30 Um, and I’ve been doing this, I, I won’t do this for you as a pop quiz, but I do this
    2:26:34 for people in government as a pop quiz and I can stump them every time, which is a, how
    2:26:36 many federal agencies are there?
    2:26:41 And the answer is somewhere between 450 and 520 and nobody’s quite sure.
    2:26:43 And then the other is how many people will work for the federal government.
    2:26:47 Um, and the answer is, you know, something on the order, I forget, but like 4 million
    2:26:52 full time employees and maybe up to 20 million contractors and nobody is quite sure.
    2:26:54 And so there’s a large people component to this.
    2:26:57 Um, and then by the way, there’s a related component to that, which is how many of them
    2:27:01 are actually in the office and the answer is not many.
    2:27:03 Most of the federal buildings are still empty, right?
    2:27:06 And so, and then there’s questions of, like, are people, you know, working from home or
    2:27:08 we’re actually working from home.
    2:27:11 So there’s the people to mention and of course the money and the people are connected and
    2:27:13 then there’s the third, which is the regulation thing, right?
    2:27:17 And I described earlier how basically our system of government is much more now based
    2:27:20 on regulations than legislation, right?
    2:27:24 Most of the rules that we all live under are not from a bill that went through Congress.
    2:27:27 They’re from an agency that created a regulation.
    2:27:28 That turns out to be very, very important.
    2:27:32 So one is, a lot of already described, we want to do the doge wants to do broad based
    2:27:33 regulatory relief.
    2:27:36 And Trump has talked about this and basically get the government off his backs and liberate
    2:27:39 the American people to be able to do things again.
    2:27:40 Um, so that’s part of it.
    2:27:43 But there’s also something else that’s happened, which is very interesting, which was there
    2:27:47 were a set of Supreme Court decisions about two years ago, um, that went directly after
    2:27:53 the idea that the executive branch can create regulatory agencies and issue regulations
    2:27:57 and enforce those regulations without corresponding congressional legislation.
    2:28:03 Um, and most of the federal government that exists today, including most of the departments
    2:28:07 and most of the rules and most of the money and most of the people, most of it is not
    2:28:09 enforcing laws that Congress passed.
    2:28:11 Most of it is, is regulation.
    2:28:16 And the Supreme Court basically said large parts, you know, large to maybe all of that
    2:28:20 regulation that did not directly result from a bill that went through Congress, the way
    2:28:25 that the cartoon said that it should, um, that may not actually be legal.
    2:28:30 Now, the previous White House, of course, was super in favor of big government.
    2:28:31 They had no desire to act.
    2:28:32 They did nothing based on this.
    2:28:34 They didn’t, you know, pull anything back in.
    2:28:39 But the new regime, if they choose to, could say, look, the thing that we’re doing here
    2:28:43 is not, you know, challenging the laws were actually complying with the Supreme Court decision
    2:28:46 that basically says we have to unwind a lot of this.
    2:28:50 And we have to unwind the regulations, which are no longer legal constitutional.
    2:28:53 We have to unwind the spend and we have to unwind the people.
    2:28:56 And so that, and that’s how you get from basically connect the thread from the regulation part
    2:28:59 back to the money part, back to the people part.
    2:29:01 They have work going on all three of these threads.
    2:29:05 They have, I would say, incredibly creative ideas on how to deal with this.
    2:29:09 I’m, I know lots of former government people who 100% of them are super cynical on this
    2:29:10 topic.
    2:29:11 And they’re like, this is impossible.
    2:29:12 This can never possibly work.
    2:29:17 And I’m like, well, I can’t tell you what the secret plans are, but like, like blow
    2:29:21 my mind, like, and all three of those, like they have ideas that are like really quite
    2:29:24 amazing, as you’d expect from, you know, from the people involved.
    2:29:28 And so over the course of the next few months, you know, that’ll start to become visible.
    2:29:33 And then the final thing I would say is this is going to be very different than attempts
    2:29:34 like that.
    2:29:38 There have been other programs like this in the past, the Clinton Gore administration
    2:29:42 had one and then there were others before that Reagan had one.
    2:29:46 The difference is this time, there’s social media.
    2:29:52 And so there has never been, it’s interesting, one of the reasons people in Washington are
    2:29:57 so cynical is because they know all the bullshit, like they know all the bad spending and all
    2:30:01 the bad rules and all the like, you know, I mean, look, we’re adding a trillion dollars
    2:30:04 to the national debt every 100 days right now.
    2:30:08 And that’s compounding and it’s now passing the size of the Defense Department budget and
    2:30:10 it’s compounding and it’s pretty soon it’s going to be adding a trillion dollars every
    2:30:13 90 days and then it’s going to be adding a trillion dollars over 80 days and then it’s
    2:30:15 going to be a trillion dollars every 70 days.
    2:30:18 And then if this doesn’t get fixed at some point, we enter a hyperinflationary spiral
    2:30:23 and we become Argentina or Brazil and Kablooey, right?
    2:30:26 And so like everybody in DC knows that something has to be done.
    2:30:30 And then everybody in DC knows for a fact that it’s impossible to do anything.
    2:30:31 Right.
    2:30:34 They know all the problems and they also know the sheer impossibility of fixing it.
    2:30:37 But I think what they’re not taking into account that what the critics are not taking into account
    2:30:42 is these guys can do this in the full light of day and they can do it on social media.
    2:30:44 They can completely bypass the press.
    2:30:46 They can completely bypass the cynicism.
    2:30:51 They can expose any element of unconstitutional or silly government spending.
    2:30:54 They can run victory laps every single day on what they’re doing.
    2:30:56 They can bring the people into the process.
    2:30:59 And again, if you think about it, this goes back to our Machiavellian structure, which
    2:31:05 is if you think about, again, you’ve got democracy, oligarchy, monarchy, rule of the many, rule
    2:31:07 of the few, rule of the one.
    2:31:10 You could think about what’s happening here as a little bit of a sandwich, which is you
    2:31:15 have, we don’t have a monarch where we have a president, rule of the one with some power.
    2:31:19 And then we have the people who can’t organize, but they can be informed and they can be aware
    2:31:22 and they can express themselves through voting and polling.
    2:31:26 And so there’s a sandwich happening right now is the way to think about it, which is
    2:31:30 you’ve got basically monarchy, rule of one, combining with rule of many, right?
    2:31:32 And rule of many is that you get to vote, right?
    2:31:34 The people do get to vote, basically.
    2:31:38 And then essentially Congress as in the sort of permanent bureaucratic class in Washington
    2:31:40 as the oligarchy in the middle.
    2:31:45 And so the White House plus the people, I think have the power to do all kinds of things
    2:31:46 here.
    2:31:48 And I think that would be the way I would watch it.
    2:31:56 The transparency, I mean, Elon just by who he is, is incentivized to be transparent and
    2:32:00 show the bullshit in the system and to celebrate the victories.
    2:32:02 So it’s going to be so exciting.
    2:32:08 I mean, honestly, it just makes government more exciting, which is a win for everybody.
    2:32:11 These people are spending our money.
    2:32:14 These people have enormous contempt for the taxpayer.
    2:32:16 Okay, here’s the thing you hear in Washington.
    2:32:17 Here’s one of the things.
    2:32:18 So the first thing you hear is this is impossible.
    2:32:19 They’ll be able to do nothing.
    2:32:21 And then yeah, I walk them through this and they’re like, they start to get, they start
    2:32:24 to don and then this is a new kind of thing.
    2:32:27 And then they’re like, well, it doesn’t matter because all the money is in entitlements and
    2:32:32 the debt and the military.
    2:32:34 And so, yeah, you’ve got like this silly fake, whatever, NPR funding or whatever, and it
    2:32:36 just, it’s a rounding error and it doesn’t matter.
    2:32:41 And you look it up in the budget and it’s like, whatever, $500 million or $5 billion.
    2:32:44 Or it’s the charging stations that don’t exist.
    2:32:47 It’s the $40 billion of charging stations and they build eight charging stations.
    2:32:52 Or it’s the broadband internet plan that delivered broadband to nobody, right?
    2:32:53 And cost you $30 billion.
    2:32:57 So these boondoggles and what everybody in Washington says is the $30 billion is a rounding
    2:32:58 error on the federal budget.
    2:32:59 It doesn’t matter.
    2:33:00 Who cares if they, if they make it go away.
    2:33:05 And of course, any taxpayer is like, what the?
    2:33:06 What do you mean?
    2:33:07 It’s $30 billion.
    2:33:08 Yeah.
    2:33:09 Right.
    2:33:12 And then the experts are like, and the press is in on this too, then the experts are like,
    2:33:14 well, it doesn’t, it doesn’t matter because it’s a rounding error.
    2:33:15 No, it’s $30 billion.
    2:33:20 And if you’re this cavalier about $30 billion, imagine how cavalier you are about the $3
    2:33:21 trillion.
    2:33:22 Yeah.
    2:33:23 Okay.
    2:33:24 $30 billion is $30 billion.
    2:33:27 A lot of the federal budget and percentage know it’s not, but $30 billion divided by
    2:33:31 30 do the math, $30 billion divided by let’s say 300 million taxpayers, right?
    2:33:36 Like what’s that math expert $100 per taxpayer per year.
    2:33:37 Okay.
    2:33:43 So $100 to an ordinary person working hard every day to make money and provide for their
    2:33:44 kids.
    2:33:46 $100 is a meal out.
    2:33:48 It’s a trip to the amusement park.
    2:33:51 It’s the ability to, you know, buy additional educational materials.
    2:33:54 It’s the ability to have a babysitter, to be able to have a romantic relationship with
    2:33:55 your wife.
    2:33:59 It’s, there’s like a hundred things that that person can do with $100 that they’re not doing
    2:34:03 because it’s going to some bullshit program that is being basically where the money’s
    2:34:07 being looted out in the form of just like ridiculous, ridiculousness and graft.
    2:34:11 And so the idea that that $30 billion program is not something that is like a very important
    2:34:17 thing to go after is just like the level of contempt for the taxpayer is just off the charts.
    2:34:21 And then that’s just one of those programs and there’s like a hundred of those programs
    2:34:22 and they’re all just like that.
    2:34:24 Like it’s not like any of this stuff is running well.
    2:34:26 Like the one thing we know is that none of this stuff is running well.
    2:34:27 Like we know that for sure.
    2:34:28 Right.
    2:34:31 And we like, we know these people aren’t showing up to work and like we know that all this crazy
    2:34:32 stuff is happening.
    2:34:33 Right.
    2:34:37 And like, you know, the, do you remember Elon’s story of the, do you remember Elon’s story
    2:34:39 of what got the Amish to turn out to vote in Pennsylvania?
    2:34:40 Oh, okay.
    2:34:41 So like Pennsylvania.
    2:34:42 Okay.
    2:34:43 So Pennsylvania is like a wonderful state, great history.
    2:34:46 It has these cities like Philadelphia that have descended like other cities into just
    2:34:49 like complete chaos, violence, madness and death, right?
    2:34:53 And the federal government has just like let it happen is incredibly violent places.
    2:34:56 And so the Biden administration decided that the big pressing law enforcement thing that
    2:35:00 they needed to do in Pennsylvania was that they needed to start raiding Amish farms to
    2:35:04 prevent them from selling raw milk with armed raids.
    2:35:05 Right.
    2:35:10 And it turns out it really pissed off the Amish and it turns out they weren’t willing to drive
    2:35:14 to the polling places because they don’t have cars, but if you came and got them, they would
    2:35:15 go and they would vote.
    2:35:17 That’s one of the reasons why Trump won anyway.
    2:35:21 So like the law enforcement agencies are off working on like crazy things.
    2:35:23 Like the system’s not working.
    2:35:26 And so you, you add up, pick 130 billion dollar programs.
    2:35:27 All right.
    2:35:28 Now you’re okay.
    2:35:30 Math major, a hundred times a hundred.
    2:35:31 Ten thousand.
    2:35:32 Ten thousand dollars.
    2:35:33 Okay.
    2:35:34 Ten thousand dollars per taxpayer per year.
    2:35:36 And but it’s also not just about money.
    2:35:40 That’s really obviously money is a hugely important thing, but it’s the cavalier attitude.
    2:35:41 Yes.
    2:35:48 And in sort of in the ripple effect of that, it makes it so nobody wants to work in government
    2:35:49 and be productive.
    2:35:53 It makes it so the corruption can, it breeds corruption.
    2:35:55 It breeds laziness.
    2:35:59 It breeds secrecy because you don’t want to be transparent about having done nothing all
    2:36:00 year.
    2:36:01 All those kinds of stuff.
    2:36:02 And you don’t want to reverse that.
    2:36:08 So it would be exciting for the future to work in government to, because the, the amazing
    2:36:13 thing if you’re the steel man government is you can do shit at scale.
    2:36:20 You have money and you can directly impact people’s lives in a positive sense at scale.
    2:36:22 That’s super exciting.
    2:36:28 As long as there’s no bureaucracy that slows you down or not huge amounts of bureaucracy
    2:36:30 that slows you down significantly.
    2:36:31 So here’s the trick.
    2:36:36 This blew my mind because I was, you know, once you open the hell mouth of looking into
    2:36:40 the federal budget, you learn all kinds of things.
    2:36:44 So there is a term of art in government called impoundment.
    2:36:48 And so you, if you’re like me, you’ve learned this the hard way when your car has been impounded.
    2:36:52 The government meaning of impoundment, the federal budget meaning is a different meaning.
    2:36:54 Impoundment is as follows.
    2:36:58 The constitution requires Congress to authorize money to be spent by the executive branch,
    2:36:59 right?
    2:37:02 So the executive branch goes to Congress says we need money acts.
    2:37:03 Congress does their thing.
    2:37:05 They come back and they say, you can have money, why?
    2:37:08 The money’s appropriated from Congress, the executive branch spends it on the military
    2:37:11 or whatever they spend it on, or on roads to nowhere or charging stations to nowhere
    2:37:14 or whatever.
    2:37:18 And what’s in the constitution is the Congress appropriates the money.
    2:37:23 Over the last 60 years, there has been an additional interpretation of appropriations
    2:37:29 applied by the courts and by the system, which is the executive branch not only needs Congress
    2:37:33 to appropriate X amount of money, the executive branch is not allowed to underspend.
    2:37:37 Yeah, I’m aware of this, I’m aware of this.
    2:37:40 And so there’s this thing that happens in Washington at the end of every fiscal year,
    2:37:45 which is September 30th, and it’s the great budget flush, and any remaining money that’s
    2:37:47 in the system that they don’t know how to productively spend, they deliberately spend
    2:37:53 it on productively, to the tune of hundreds and hundreds of billions of dollars.
    2:37:57 A president that doesn’t want to spend the money can’t not spend it.
    2:37:58 Yeah.
    2:38:02 Like, okay, A, that’s not what’s in the constitution, and there’s actually quite a good Wikipedia
    2:38:05 page that goes through the great debate on this is played out in the legal world over
    2:38:06 the last 60 years.
    2:38:10 And basically, if you look at this with anything resembling, I think I don’t mind you’re like,
    2:38:13 “All right, this is not what the founders meant.”
    2:38:16 And then number two, again, we go back to this thing of contempt.
    2:38:21 Can you imagine showing up and running the government like that, and thinking that you’re
    2:38:24 doing the right thing, and not going home at night, and thinking that you’ve sold your
    2:38:25 soul?
    2:38:29 I actually think you sort of had a really good point, which is it’s even unfair to the
    2:38:31 people who have to execute this.
    2:38:32 Yeah.
    2:38:35 It makes them bad people, and they didn’t start out wanting to be bad people.
    2:38:37 And so, there is stuff like this, like…
    2:38:38 Yeah.
    2:38:39 Everywhere.
    2:38:40 Everywhere.
    2:38:42 And so, we’ll see how far these guys get.
    2:38:44 I am extremely encouraged what I’ve seen so far.
    2:38:48 It seems like a lot of people will try to slow them down, but yeah, I’m up to get far.
    2:38:50 Another difficult topic, immigration.
    2:38:56 What’s your take on the, let’s say, heated H-1B visa debate that’s going on online and
    2:38:58 legal immigration in general?
    2:38:59 Yeah.
    2:39:04 By saying I am not involved in any aspect of government policy on this, I am not planning
    2:39:05 to be.
    2:39:07 This is not an issue that I’m working on, or that I’m going to work on.
    2:39:08 We’re not.
    2:39:11 This is not part of the agenda of what the firm is doing, so my firm is doing.
    2:39:17 So, I’m not in this, in the new administration of the government, I’m not planning to be,
    2:39:19 so purely just personal opinion.
    2:39:25 So, I would describe what I have as a complex or hopefully nuanced view on this issue that’s
    2:39:28 maybe a little bit different than what a lot of my peers have.
    2:39:32 And I think, and I kind of thought about this, I didn’t say anything about it all the way
    2:39:36 through the big kind of debate over Christmas, but I thought about it a lot and read everything.
    2:39:39 I think what I realized is that I just have a very different perspective on some of these
    2:39:44 things and the reason is because of the combination of where I came from and then where I ended
    2:39:45 up.
    2:39:50 And so, let’s start with this, where I ended up in Silicon Valley.
    2:39:54 And I have made the pro high-skilled immigration argument many, many times, the H-1B argument
    2:40:00 many times, in past lives, I’ve been in DC many times arguing with prior administrations
    2:40:03 about this, always on the side of trying to get more H-1Bs and trying to get more high-skilled
    2:40:04 immigration.
    2:40:11 And I think that argument is very strong and very solid and very, has paid off for the
    2:40:15 US in many, many ways and we can go through it, but I think it’s the argument everybody
    2:40:16 already knows, right?
    2:40:17 It’s like the stock.
    2:40:19 You take any Silicon Valley person, you press the button and they tell you why we need to
    2:40:21 drain the world to get more H-1Bs, right?
    2:40:23 So, everybody kind of gets that argument.
    2:40:27 So, it’s basically just to summarize, it’s a mechanism by which you can get super smart
    2:40:33 people from the rest of the world, import them in, keep them here to increase the productivity
    2:40:35 of the US companies.
    2:40:36 Yeah.
    2:40:40 And then it’s not just good for them and it’s not just good for Silicon Valley or the tech
    2:40:41 industry.
    2:40:44 It’s good for the country because they then create new companies and create new technologies
    2:40:49 and create new industries that then create many more jobs for Native-born Americans than
    2:40:53 would have previously existed and so you’ve got a, it’s a positive sum, flywheel thing
    2:40:54 where everybody wins.
    2:40:56 Like everybody wins, there are no trade-offs.
    2:40:59 It’s all absolutely glorious in all directions.
    2:41:04 You cannot possibly, there cannot possibly be a moral argument against it under any circumstances.
    2:41:08 Anybody who argues against it is obviously doing so from a position of racism is probably
    2:41:10 a fascist and a Nazi, right?
    2:41:11 Right.
    2:41:12 I mean, that’s the thing.
    2:41:13 And like I said, I’ve made that argument many times.
    2:41:16 I’m very comfortable with that argument and then I’d also say, look, I would say number
    2:41:20 one, I believe a lot of it, I’ll talk about the parts I don’t believe, but I believe a
    2:41:21 lot of it.
    2:41:23 And then the other part is, look, I benefit every day.
    2:41:28 I always describe it as I work in the United Nations, like I, my own firm and our founders
    2:41:35 and our companies and the industry and my friends, you know, are just this like amazing,
    2:41:40 you know, panoply cornucopia of people from all over the world.
    2:41:43 And you know, I just, I’ve worked, I don’t know at this point where people from, it’s
    2:41:45 got to be, I don’t know, 80 countries or something.
    2:41:47 And hopefully over time, it’ll be, you know, the rest as well.
    2:41:50 And, you know, it’s just, it’s been amazing and they’ve done many of the most important
    2:41:52 things in my industry and it’s been really remarkable.
    2:41:55 So that’s all good.
    2:41:58 And then, you know, there’s just the practical version of the argument, which is we are the,
    2:41:59 we are the main place.
    2:42:00 These people get educated anyway.
    2:42:01 Right.
    2:42:03 They, the best and the brightest tend to come here to get educated.
    2:42:06 And so, you know, this is the old kind of Mitt Romney staple of green card to every,
    2:42:11 you know, at least, you know, maybe not every university degree, but every technical degree.
    2:42:15 The sociologists we could quibble about, but, you know, the roboticists for sure.
    2:42:16 For sure.
    2:42:17 For sure.
    2:42:18 We can all agree that.
    2:42:19 At least I want you over on something today.
    2:42:21 Well, no, I’m exaggerating for a fact.
    2:42:23 So, and I lost you.
    2:42:25 I had you for half a second.
    2:42:27 I haven’t gotten to the other side of the argument yet.
    2:42:28 Okay.
    2:42:29 Thank you.
    2:42:31 So surely we can all agree that we need to staple a green card.
    2:42:33 The rollercoaster is going up.
    2:42:35 The rollercoaster is ratcheting slowly up.
    2:42:36 So, yeah.
    2:42:38 So surely we can all agree that the roboticists should all get green cards.
    2:42:41 And again, like there’s a lot of merit to that, obviously, like, look, we want the U.S.
    2:42:43 to be the world leader in robotics.
    2:42:46 What’s step one to being the world leader in robotics is have all the great robotics
    2:42:47 people, right?
    2:42:50 Like, you know, very unlike the underpass know, it’s like a very straightforward formula.
    2:42:51 Right.
    2:42:52 Yeah.
    2:42:53 All right.
    2:42:54 That’s all well and good.
    2:42:55 All right.
    2:42:57 But it gets a little bit more complicated because there is a kind of argument that’s sort
    2:43:00 of right underneath that that you also hear from, you know, these same people.
    2:43:04 And I have made this argument myself many times, which is we need to do this because we don’t
    2:43:06 have enough people in the U.S. who can do it otherwise.
    2:43:07 Right.
    2:43:08 We have all these unfilled jobs.
    2:43:10 We’ve got all these, you know, all these companies that wouldn’t exist.
    2:43:11 We don’t have enough good founders.
    2:43:12 We don’t have enough engineers.
    2:43:16 We don’t have enough scientists or then the next version of the argument below that is
    2:43:20 our education system is not good enough to generate those people.
    2:43:23 And which is a weird argument, by the way, because, like, our education system is good
    2:43:27 enough for foreigners to be able to come here preferentially in, like, a very large number
    2:43:31 of cases, but somehow not good enough to educate our own native born people.
    2:43:34 So there’s like a weird, these little cracks in the matrix that you can kind of stick your
    2:43:38 fingernail into and kind of wonder about and we’ll come back to that one.
    2:43:41 Like, at least, yes, our education system has this flaws.
    2:43:45 And then underneath that is the argument that Vivek made, you know, which is, you know,
    2:43:50 we have a cultural rot in the country and native born people in the country don’t work hard
    2:43:53 enough and spend too much time watching TV and TikTok and don’t spend enough time studying
    2:43:54 differential equations.
    2:43:59 And again, it’s like, all right, like, you know, yeah, there’s a fair amount to that.
    2:44:04 Like there’s a lot of American culture that is, you know, there’s a lot of frivolity.
    2:44:07 There’s a lot of, you know, look, I mean, we have well documented social issues in many
    2:44:11 fronts, many things that cut against having a culture of just like straightforward high
    2:44:13 achievement and effort and striving.
    2:44:16 Anyway, like, you know, those are the basic arguments.
    2:44:19 But then I have this kind of other side of my, you know, kind of personality and thought
    2:44:23 process, which is, well, I grew up in a small farming town in rural Wisconsin, the rural
    2:44:24 Midwest.
    2:44:27 And, you know, it’s interesting, there’s not a lot of people who make it from rural
    2:44:31 Wisconsin to, you know, high tech.
    2:44:33 And so it’s like, all right, why is that exactly, right?
    2:44:37 And I know I’m an aberration, like I was the only one from anybody I ever knew who ever
    2:44:38 did this, right?
    2:44:40 I know what an aberration I am, and I know exactly how that aberration happened.
    2:44:46 And it’s a very unusual set of steps, including, you know, many that were just luck.
    2:44:51 But like it, there is in no sense a talent flow from rural Wisconsin into high tech,
    2:44:55 like not at all.
    2:44:59 There is also like in no sense a talent flow from the rest of the Midwest into high tech.
    2:45:01 There is no talent flow from the South into high tech.
    2:45:03 There is no flow from the Sun Belt into high tech.
    2:45:08 There is no flow from, you know, the deep South into high tech, like just like literally
    2:45:12 it’s like the blanks, but there’s this whole section of the country that just where the
    2:45:15 people just like for some reason don’t end up in tech.
    2:45:20 Now, that’s a little bit strange because these are the people who put a man on the moon.
    2:45:23 These are the people who built the World War II war machine.
    2:45:27 These are the people, at least their ancestors are the people who built the Second Industrial
    2:45:32 Revolution and built the railroads and built the telephone network and built, you know,
    2:45:36 logistics and transportation and the auto industry was built in Cleveland and Detroit.
    2:45:40 And so at least these people’s parents and grandparents and great grandparents somehow
    2:45:44 had the wherewithal to like build all of this like amazing things and invent all these things.
    2:45:48 And then there’s many, many, many, many stories in the history of American invention and innovation
    2:45:52 and capitalism where you had people who grew up in the middle of nowhere, Philo Farnsworth
    2:45:55 who invented the television and just like, you know, tons and tons of others, endless
    2:45:57 stories like this.
    2:46:00 Now you have, I’d look up a puzzle, right, in the conundrum, which is like, okay, like
    2:46:03 what is happening on the blank spot of the map?
    2:46:07 And then of course, you also can’t help noticing that the blank spot on the map, the Midwest,
    2:46:12 the South, you’ve also just defined Trump country, the Trump voter base, right?
    2:46:13 And it’s like, oh, well, that’s interesting.
    2:46:15 Like how did that happen?
    2:46:16 Right.
    2:46:19 And so either you really, really, really have to believe the very, very strong version of
    2:46:22 like the Vivec thesis or something where you have to believe that like that basically
    2:46:26 culture, the whole sort of civilization in the middle of the country and how the country
    2:46:31 is so like deeply flawed, either inherently flawed or culturally flawed such that for
    2:46:35 whatever reason, they are not able to do the things that their, you know, parents and
    2:46:38 grandparents were able to do and that their peers are able to do it or something else
    2:46:39 is happening.
    2:46:40 Would you care to guess on what else is happening?
    2:46:41 I mean, what?
    2:46:42 Affirmative action?
    2:46:43 Affirmative action.
    2:46:44 Okay.
    2:46:48 This is very, think about this, this is very entertaining, right?
    2:46:51 What are the three things that we know about affirmative action?
    2:46:55 It is absolutely 100% necessary.
    2:47:00 But however, it cannot explain the success of any one individual, nor does it have any
    2:47:01 victims at all.
    2:47:07 They could explain maybe disproportionate, but surely it doesn’t explain why you’re
    2:47:11 probably the only person in Silicon Valley from Wisconsin.
    2:47:15 What educational institution the last 60 years has wanted Farm Boys from Wisconsin?
    2:47:18 But what institution rejected Farm Boys from Wisconsin?
    2:47:19 All of them.
    2:47:20 All of them.
    2:47:21 Of course.
    2:47:22 Okay.
    2:47:23 So we know this.
    2:47:26 This is the Harvard and UNC Supreme Court cases.
    2:47:28 So this was like three years ago.
    2:47:31 These were big court cases, you know, because the idea of affirmative action has been litigated
    2:47:35 for many, many, many years and through many court cases and the Supreme Court repeatedly
    2:47:38 in the past had upheld that it was a completely legitimate thing to do.
    2:47:41 And a lot of these, and there’s basically two categories of affirmative action that
    2:47:43 like really matter, right?
    2:47:47 One is admissions into educational institutions and then the other is jobs, right, getting
    2:47:48 hired.
    2:47:49 Like those are the two biggest areas.
    2:47:53 The education one is like super potent has been a super potent political issue for a
    2:47:56 very long time for all, you know, people have written and talked about this for many decades.
    2:47:57 I don’t need to go through it.
    2:47:59 There’s many arguments for why it’s important.
    2:48:01 There’s many arguments as to how it could backfire.
    2:48:02 It’s been this thing.
    2:48:06 But the Supreme Court upheld it for a very long time.
    2:48:08 The most recent ruling, I’m not a lawyer, I don’t have the exact reference in my head,
    2:48:15 but there was a case in 2003 that said that Sandra Day O’Connor famously wrote that, you
    2:48:20 know, although it had been 30 years of affirmative action and although it was not working remotely
    2:48:24 as it had been intended, she said that, you know, well, basically we need to try it for
    2:48:25 another 25 years.
    2:48:29 But she said basically as a message to future Supreme Court justices, if it hasn’t resolved
    2:48:33 basically the issues it’s intended to resolve within 25 years, then we should probably call
    2:48:34 it off.
    2:48:36 By the way, we’re coming up on the 25 years.
    2:48:39 It’s a couple years away.
    2:48:43 The Supreme Court just had these cases as a Harvard case, and I think a University of
    2:48:44 North Carolina case.
    2:48:48 And what’s interesting about those cases is the lawyers in those cases put a tremendous
    2:48:53 amount of evidence into the record of how the admissions decisions actually happen at
    2:48:59 Harvard and happen at UNC, and it is like every bit as cartoonishly garish and racist
    2:49:04 as you could possibly imagine, because it’s a ring of power.
    2:49:07 And if you’re an admissions officer at a private university or an administrator, you
    2:49:11 have unlimited power to do what you want, and you can justify any of it under any of
    2:49:14 these rules or systems.
    2:49:17 And up until these cases, it had been a black box where you didn’t have to explain yourself
    2:49:19 and show your work.
    2:49:23 And what the Harvard and USC cases did is they basically required showing the work.
    2:49:26 And there was all kinds of phenomenal detail.
    2:49:29 Number one is there were text messages in there that will just curl your hair of students
    2:49:33 being spoken of and just crude racial stereotypes that would just make you want to jump out
    2:49:34 the window.
    2:49:35 It’s horrible stuff.
    2:49:38 But also, there was statistical information.
    2:49:41 And of course, the big statistical kicker to the whole thing is that at top institutions,
    2:49:46 it’s common for different ethnic groups to have different cutoffs for SAT that are as
    2:49:48 wide as 400 points.
    2:49:52 So different groups.
    2:49:57 So specifically, Asians need to perform at 400 SAT points higher than other ethnicities
    2:50:00 in order to actually get admitted into these– I mean, it’s not even about– I mean, white
    2:50:02 people are a part of this, but Asians are a very big part of this.
    2:50:06 And actually, the Harvard case is actually brought by an activist on behalf of actually
    2:50:09 the Asian students who are being turned away.
    2:50:12 And it’s basically– I mean, it’s the cliche now in the valley and in the medical community,
    2:50:16 which is if you want a super genius, you hire an Asian from Harvard, because they are guaranteed
    2:50:21 to be freaking Einstein, because if they weren’t, they were never getting admitted, right?
    2:50:24 Almost all the qualifications get turned away.
    2:50:29 So they’ve been running this– it’s a very, very explicit, very, very clear program.
    2:50:32 This of course has been a third rail of things that people are not supposed to discuss under
    2:50:34 any circumstances.
    2:50:37 The thing that has really changed the tenor on this is, I think, two things.
    2:50:40 Number one, those Supreme Court cases, the Supreme Court ruled that they can no longer
    2:50:42 do that.
    2:50:45 I will tell you, I don’t believe there’s a single education institution in America that
    2:50:48 is conforming with the Supreme Court ruling.
    2:50:51 I think they are all flagrantly ignoring it, and we could talk about that.
    2:50:53 Mostly because of momentum, probably, or what?
    2:50:55 They are trying to make the world a better place.
    2:50:57 They are trying to solve all these social problems.
    2:50:59 They are trying to have diverse student populations.
    2:51:02 They are trying to live up to the expectations of their donors.
    2:51:04 They are trying to make their faculty happy.
    2:51:09 They are trying to have their friends and family think that they’re good people.
    2:51:13 They’re trying to have the press write nice things about them.
    2:51:18 It’s nearly impossible for them, and to be clear, nobody has been fired from an admissions
    2:51:20 office for 25 years of prior.
    2:51:24 What we now, the Supreme Court now, is ruled to be illegality.
    2:51:28 They’re all the same people under the exact same pressures.
    2:51:32 The numbers are moving a little bit, but I don’t know anybody in the system who thinks
    2:51:35 that they’re complying with the Supreme Court.
    2:51:36 Who’s in charge?
    2:51:39 In the rank ordering of who rules who, the university’s rule of the Supreme Court way
    2:51:42 more than the Supreme Court rules the university’s.
    2:51:45 Another example of that is that every sitting member of the Supreme Court went to either
    2:51:48 Harvard or Yale.
    2:51:53 The level of incestuousness here is like … Anyway, so there’s that.
    2:51:54 This has been running for a very long time.
    2:51:58 One is the Harvard and USC cases gave up the game, number one, or at least showed what
    2:51:59 the mechanism was.
    2:52:04 And then number two, the other thing is obviously the aftermath of October 7th, and what we
    2:52:08 discovered was happening with Jewish applicants, and what was happening at all the top institutions
    2:52:14 for Jewish applicants was they were being actively managed down as a percentage of the
    2:52:17 base.
    2:52:23 I’ve heard reports of extremely explicit, basically, plans to manage the Jewish admissions
    2:52:28 down to their representative percentage of the US population, which is 2%.
    2:52:31 There’s a whole backstory here, which is 100 years ago, Jews were not admitted into a lot
    2:52:34 of these institutions, and then there was a big campaign to get them in.
    2:52:37 Once they could get in, they immediately became 30% of these institutions because there’s
    2:52:39 so many smart, talented Jews.
    2:52:43 So it went from 0% to 30%, and then the most recent generation of leadership has been trying
    2:52:45 to get it down to 2%.
    2:52:49 And a lot of Jewish people, at least a lot of Jewish people I know, sort of, they kind
    2:52:53 of knew this was happening, but they discovered it the hard way after October 7th, right?
    2:52:57 And so all of a sudden … So basically, the Supreme Court case meant that you could address
    2:53:00 this in terms of the Asian victims.
    2:53:04 The October 7th meant that you could address it in terms of the Jewish victims, and for
    2:53:07 sure both of those groups are being systematically excluded, right?
    2:53:10 And then, of course, there’s the thing that you basically can’t talk about, which is all
    2:53:13 the white people are being excluded.
    2:53:17 And then it turns out it’s also happening to black people.
    2:53:21 And this is the thing that blew my freaking mind when I found out about it.
    2:53:28 So I just assumed that this was great news for American blacks, because obviously if
    2:53:31 whites, Asians, and Jews are being excluded, then the whole point to this in the beginning
    2:53:35 was to get the black population up, and so this must be great for American blacks.
    2:53:41 So then I discovered this New York Times article from 2004 called, “Blacks are being admitted
    2:53:44 into top schools at greater numbers, but which ones?”
    2:53:45 Uh-oh.
    2:53:48 And again, and by the way, this is in the New York Times.
    2:53:53 This is not in, like, you know, whatever, national review, this is New York Times, 2004.
    2:53:57 And the two authorities that were quoted in the story are Henry Lewis Gates, who’s the
    2:54:01 dean of the African-American studies community in the United States, super brilliant guy,
    2:54:07 and then Lonnie Guinear, who was a, she was a potential Supreme Court appointee under,
    2:54:10 I think, close friend of Hillary Clinton, and there was, for a long time, she was on
    2:54:12 the shortlist for Supreme Court.
    2:54:18 So one of the top, you know, jurists, lawyers in the country, both black, was sort of legendarily
    2:54:22 successful in their, in their, in the academic and legal worlds, and black.
    2:54:24 And they are quoted as the authorities in this story.
    2:54:29 And the story that they tell is actually very, it’s amazing.
    2:54:33 By the way, it’s happening today in education institutions, and it’s happening in companies,
    2:54:38 and you can see it all over the place and the government, which is, at least at that
    2:54:44 time, the number was half of the black admits into a place like Harvard were not American
    2:54:45 born blacks.
    2:54:53 They were foreign born blacks, specifically, northern African, generally Nigerian, or West
    2:54:54 Indian.
    2:54:55 Right.
    2:54:59 And by the way, many Nigerians and northern Africans have come to the U.S. and have been
    2:55:00 very successful.
    2:55:03 Nigerian Americans as a group, like way outperform, they’re, you know, this is a super smart cohort
    2:55:04 of people.
    2:55:07 And then West Indian blacks in the U.S. are incredibly successful.
    2:55:12 Most recently, by the way, Kamala Harris, as well as Colin Powell, like just two sort
    2:55:13 of examples of that.
    2:55:18 And so basically what Henry Louis Gates, Alana Guarnira said in the story is Harvard is basically
    2:55:23 struggling to either whatever it was, identify a recruit, make successful whatever it was,
    2:55:25 American born native blacks.
    2:55:30 And so therefore, they were using high school immigration as an escape hatch to go get blacks
    2:55:31 from other countries.
    2:55:35 And then this was 2004 when you could discuss such things.
    2:55:39 Obviously, that is a topic that nobody has discussed since.
    2:55:40 It has sailed on.
    2:55:45 All of the DEI programs of the last 20 years have had this exact characteristic.
    2:55:48 There’s large numbers of black people in America who are fully aware of this and are like,
    2:55:51 “It’s obviously not us that are getting these slots.
    2:55:54 We’re obviously, we’re literally competing with people who are being imported.”
    2:55:58 And if you believe in the basis of affirmative action, you are trying to make up for historical
    2:56:00 injustice of American black slavery.
    2:56:06 So the idea that you’re import somebody from Nigeria that never experienced that is tremendously
    2:56:08 insulting to black Americans.
    2:56:11 Anyway, so you can see where I’m heading with this.
    2:56:16 We have been in a 60-year social engineering experiment to exclude native born people from
    2:56:20 the educational slots and jobs that high school immigration has been funneling foreigners
    2:56:21 into.
    2:56:22 Right.
    2:56:24 And so it turns out it’s not a victim-free thing.
    2:56:27 There’s like 100% there’s victims because why?
    2:56:28 There’s only so many.
    2:56:30 For sure, there’s only so many education slots and then for sure, there’s only so many of
    2:56:31 these jobs.
    2:56:32 Right.
    2:56:35 Google only hires so many, you know, whatever level seven engineers.
    2:56:36 Right.
    2:56:38 And so that’s the other side of it.
    2:56:39 Right.
    2:56:44 And so you’re a farm boy in Wisconsin, right, or a black American whose ancestors arrived
    2:56:53 here on a slave ship 300 years ago in Louisiana, or a Cambodian immigrant in the Bronx and
    2:56:58 your kid or a Jewish immigrant or from a very successful Jewish family.
    2:57:02 And your entire, you know, for three generations, you and your parents and grandparents went
    2:57:03 to Harvard.
    2:57:07 And what all of those groups know is the system that has been created is not for them.
    2:57:08 Right.
    2:57:11 It’s designed specifically to exclude them.
    2:57:14 And then what happens is all of these tech people show up in public and say, “Yeah, let’s
    2:57:15 bring in more foreigners.”
    2:57:16 Right.
    2:57:21 And so anyway, so the short version of it is you can’t anymore, I don’t think, just
    2:57:29 have the “high school immigration” conversation for either education or for employment without
    2:57:32 also having the DEI conversation.
    2:57:34 And then DEI is just another word for affirmative action.
    2:57:36 So it’s the affirmative action conversation.
    2:57:39 And you need to actually deal with this at substance and to see what’s actually happening
    2:57:42 to people you need to join these topics.
    2:57:46 And I think it is much harder to make the moral claim for high school immigration given
    2:57:52 the extent to which DEI took over both the education process and the hiring process.
    2:57:53 Okay.
    2:57:57 So first of all, that was brilliantly laid out, the nuance of it.
    2:58:02 So just to understand, it’s not so much a criticism of H1B, high school immigration,
    2:58:08 it’s that there needs to be more people saying, “Yay, we need more American-born hires.”
    2:58:12 So I spent the entire Christmas holiday reading every message on this and not saying anything.
    2:58:17 And what I was – which you know me well enough to know that’s a serious level of –
    2:58:18 Yeah, that’s very zen.
    2:58:19 Yes, thank you.
    2:58:20 Thank you.
    2:58:21 No, it wasn’t.
    2:58:25 There was tremendous rage on the other side of it, but I suppressed it.
    2:58:29 So I was waiting for the dog that didn’t bark, right?
    2:58:33 And the dog that didn’t bark was I did not – and tell me if you saw one, I did not see
    2:58:36 a single example of somebody pounding the table for more high school immigration who
    2:58:40 was also pounding the table to go get more smart kids who are already here into these
    2:58:42 educational institutions and into these jobs.
    2:58:44 I didn’t see a single one.
    2:58:45 That’s true.
    2:58:47 I think I agree with that.
    2:58:49 There really was a divide.
    2:58:51 But it was like literally, it was like the proponents of high school immigration.
    2:58:53 And again, this was me for a very long time.
    2:58:57 I mean, I kind of took myself by surprise on this because I was on – you know, I had
    2:58:59 the much simpler version of this story for a very long time.
    2:59:03 Like I said, I’ve been in Washington many times under past presidents lobbying for this.
    2:59:05 By the way, never made any progress, which we could talk about.
    2:59:08 Like it never actually worked.
    2:59:10 But you know, I’ve been on the other side of this one.
    2:59:14 But I was literally sitting there being like, all right, which of these like super geniuses
    2:59:17 who many of whom by the way are very successful high school immigrants or children of high
    2:59:23 school immigrants, which of these super geniuses are going to like say, actually we have this
    2:59:25 like incredible talent source here in the country, which again, to be clear, I’m not
    2:59:26 talking about white people.
    2:59:30 I’m talking about native-born Americans, whites, Asians, Jews, blacks, for sure.
    2:59:31 For sure.
    2:59:32 For sure.
    2:59:33 Those four groups.
    2:59:34 But also white people.
    2:59:35 Yeah.
    2:59:36 And also white people.
    2:59:44 Making the case for American-born hires are usually not also supporting H1B.
    2:59:50 It’s an extreme divide and those people that are making that case are often not making it
    2:59:55 in a way that’s like making it in quite a radical way.
    2:59:56 Yeah.
    2:59:57 Let’s put it this way.
    2:59:58 Yeah.
    2:59:59 But you have this interesting thing.
    3:00:01 You have a split between the sides that I’ve noticed, which is one side has all of the
    3:00:02 experts.
    3:00:03 Right.
    3:00:04 Right.
    3:00:05 And I’m using scare quote for people listening to audio.
    3:00:08 I’m making quotes in the air with my fingers as vigorously as I can.
    3:00:11 One side has all the certified experts.
    3:00:13 The other side just has a bunch of people who are like, they know that something is wrong
    3:00:16 and they don’t quite know how to explain it.
    3:00:19 So unusual about the Harvard UNC cases, by the way, in front of Supreme Court, is they
    3:00:22 actually had sophisticated lawyers for the first time in a long time actually put all
    3:00:25 the seven s together and actually put it in the public record.
    3:00:28 They actually had experts, which is just really rare.
    3:00:31 Generally what you get is you get, because if you don’t have experts, what do you have?
    3:00:35 You know something is wrong, but you have primarily an emotional response.
    3:00:42 You feel it, but can you put it in the words and tables and charts that a certified expert
    3:00:43 can?
    3:00:44 No, you can’t.
    3:00:45 That’s not who you are.
    3:00:48 That doesn’t mean that you’re wrong and it also doesn’t mean that you have less of a
    3:00:49 moral stance.
    3:00:50 Yeah.
    3:00:51 And so it’s just like, all right.
    3:00:54 Now, by the way, look, I think there are ways to square the circle.
    3:00:56 I think there’s a way to have our cake and eat it too.
    3:00:58 Like I think there’d be many ways to resolve this.
    3:01:04 I think, again, I think the way to do it is to look at these issues combined, at DEI combined
    3:01:05 with high school immigration.
    3:01:12 It so happens the DEI is under much more scrutiny today than it has been for probably 20 years.
    3:01:18 Affirmative action is, the Supreme Court did just rule that it is not legal for universities
    3:01:19 to do that.
    3:01:23 They are still doing it, but they should stop.
    3:01:28 And then there are more and more, you’ve seen more companies now also dishing their DEI
    3:01:29 programs.
    3:01:33 In part, that’s happening for a bunch of reasons, but it’s happening in part because a lot of
    3:01:37 corporate lawyers will tell you that the Supreme Court rulings and education either already
    3:01:43 apply to businesses or just as a clear foreshadowing, the Supreme Court will rule on new cases that
    3:01:44 will ban any businesses.
    3:01:51 And so there is a moment here to be able to look at this on both sides.
    3:01:55 Let me add one more nuance to it that makes it even more complicated.
    3:01:57 So the cliche is we’re going to drain the world, right?
    3:01:58 You’ve heard that?
    3:02:00 We’re going to take all the smart people from all over the world.
    3:02:01 We’re going to bring them here.
    3:02:02 We’re going to educate them.
    3:02:04 And then they’re going to raise their families here, create businesses here, create jobs
    3:02:05 here, right?
    3:02:07 In the cliche, that’s a super positive thing.
    3:02:08 Yeah.
    3:02:09 Okay.
    3:02:12 So what happens to the rest of the world?
    3:02:13 They lose?
    3:02:18 Well, how fungible are people?
    3:02:24 How many highly ambitious, highly conscientious, highly energetic, high achieving, high IQ
    3:02:28 super geniuses are there in the world?
    3:02:30 And if there’s a lot, that’s great.
    3:02:34 But if there just aren’t that many, and they all come here, and they all aren’t where
    3:02:39 they would be otherwise, what happens to all those other places?
    3:02:43 So it’s almost impossible for us here to have that conversation in part because we become
    3:02:46 incredibly uncomfortable as a society talking about the fact that people aren’t just simply
    3:02:50 all the same, which is the whole thing we could talk about.
    3:02:54 But also we are purely the beneficiary of this effect, right?
    3:02:57 We are brain draining the world, not the other way around.
    3:02:58 There’s only four.
    3:03:02 So if you look at the flow of high-skill immigration over time, there’s only four permanent sinks
    3:03:05 of high-skill immigration in places people go.
    3:03:07 It’s the US, Canada, the UK, and Australia.
    3:03:10 It’s the four of the five eyes.
    3:03:12 It’s the major English fear countries.
    3:03:16 And so for those countries, this seems like a no-lose proposition.
    3:03:20 It’s all the other countries that basically what we four countries have been doing is
    3:03:21 draining all those smart people up.
    3:03:25 It’s actually much easier for people in Europe to talk about this I’ve discovered because
    3:03:27 the Eurozone is whatever, 28 countries.
    3:03:31 And within the Eurozone, the high-skill people over time have been migrating to originally
    3:03:36 the UK, but also specifically, I think it’s the Netherlands, Germany, and France.
    3:03:40 But specifically, they’ve been migrating out of the peripheral Eurozone countries.
    3:03:43 And the one where this really hit the fan was in Greece, right?
    3:03:47 So Greece falls into chaos, disaster, and then you’re running the government in Greece
    3:03:51 and you’re trying to figure out how to put an economic development plan together.
    3:03:54 All of your smart young kids have left.
    3:03:56 Like what are you going to do, right?
    3:04:01 By the way, this is a potential, I know you care a lot about Ukraine, this is a potential
    3:04:02 crisis for Ukraine.
    3:04:06 Not because, in part because of this, because we enthusiastically recruit Ukrainians of
    3:04:07 course.
    3:04:09 And so we’ve been brain draining Ukraine for a long time.
    3:04:12 But also, of course, war does tend to cause people to migrate out.
    3:04:18 And so when it comes time for Ukraine to rebuild as a peaceful country, is it going to have
    3:04:20 the talent base even that it had five years ago?
    3:04:22 It’s like a very big and important question.
    3:04:25 By the way, Russia, like we have brain drain a lot of really smart people out of Russia.
    3:04:29 A lot of them are here over the last 30 years.
    3:04:31 And so there’s this thing.
    3:04:33 It’s actually really funny if you think about it.
    3:04:37 The one thing that we know to be the height of absolute evil that the West ever did was
    3:04:40 colonization and resource extraction.
    3:04:44 So we know the height of absolute evil was when the Portuguese and the English and everybody
    3:04:47 else went and had these colonies and then went in and we took all the oil and we took
    3:04:51 all the diamonds and we took all the whatever lithium or whatever it is, right?
    3:04:55 Well, for some reason, we realized that that’s a deeply evil thing to do when it’s a physical
    3:04:58 resource, when it’s a non-conscious physical matter.
    3:05:02 For some reason, we think it’s completely morally acceptable to do it with human capital.
    3:05:08 In fact, we think it’s glorious and beautiful and wonderful and the great flowering of peace
    3:05:10 and harmony and moral justice of our time to do it.
    3:05:13 And we don’t think for one second what we’re doing to the countries that we’re pulling
    3:05:15 all these people out of.
    3:05:18 And this is one of these things like I don’t know, like maybe we’re just going to live
    3:05:22 in this delusional state forever and we’ll just keep doing it and it’ll keep benefiting
    3:05:23 us and we just won’t care what happens.
    3:05:27 But like, I think there may come, this is one of these, this is like one of these submarines
    3:05:28 under 10 feet under the waterline.
    3:05:32 Like, I think it’s just a matter of time until people suddenly realize, “Oh my god, what
    3:05:33 are we doing?”
    3:05:37 Because like, we need the rest of the world to succeed too, right?
    3:05:39 Like, we need these other countries to like flourish.
    3:05:42 Like we don’t want to be the only successful country in the middle of just like complete
    3:05:46 chaos and disaster and we just extract and we extract and we extract and we don’t think
    3:05:47 twice about it.
    3:05:51 Well, this is so deeply profound actually.
    3:05:55 So what is the cost of winning, quote unquote?
    3:06:01 If these countries are drained in terms of human capital on the level of geopolitics,
    3:06:02 what does that lead to?
    3:06:08 Even if we talk about wars and conflict and all of this, we actually want them to be strong
    3:06:13 in the way we understand it’s strong, not just in every way.
    3:06:19 So that cooperation and competition can build a better world for all of humanity.
    3:06:22 It’s interesting.
    3:06:27 This is one of those truths where you just speak and it resonates and I didn’t even
    3:06:28 think about it.
    3:06:29 Yeah, exactly.
    3:06:34 So this is, you were sitting in the June the holidays, he said, just boiling over.
    3:06:39 So all that said, there’s still to use some good to the H1B.
    3:06:40 Okay.
    3:06:42 So then you get this other, okay.
    3:06:43 So then there’s, quote, come all the way around.
    3:06:44 There’s another nuance.
    3:06:45 So there’s another nuance.
    3:06:48 There’s another nuance, which is mostly the value we don’t use H1Bs anymore.
    3:06:49 Mostly we use O1s.
    3:06:55 So there’s a separate class of visa and O1 is like this.
    3:06:57 It turns out the O1 is the super genius visa.
    3:06:59 So the O1 is basically our founder.
    3:07:02 Like when we have like a, when we have somebody from anywhere in the world and they’ve like
    3:07:06 invented a breakthrough new technology and they want to come to the U.S. to start a company,
    3:07:11 they come in through an O1 visa and that actually is like a, it’s a fairly high bar.
    3:07:13 It’s a high acceptance rate, but it’s like a pretty high bar and they, they do a lot
    3:07:17 of work and they, there’s like a, you have to put real work into it and really, really
    3:07:19 prove your case.
    3:07:24 Mostly what’s happened with the H1B visa program is that it has gone to basically two categories
    3:07:25 of employers.
    3:07:29 One is the basically a small set of big tech companies that hire in volume, which is exactly
    3:07:31 the companies that you would think.
    3:07:34 And then the other is it goes to these, what they call kind of the mills, the consulting
    3:07:35 mills, right.
    3:07:38 And so there’s these set of companies with names I don’t want to pick on companies,
    3:07:43 you know, names like Cognizant that, you know, hire basically have in their business model
    3:07:47 is primarily Indian being in primarily Indians in large numbers.
    3:07:51 And you know, they often have, you know, offices next to company owned housing and they’ll
    3:07:53 have, you know, organizations that are, you know, they’ll have, you know, organizations
    3:07:56 that are literally thousands of Indians, you know, living and working in the U.S. and
    3:08:01 they do basically call it mid-tier like IT consulting.
    3:08:04 So you know, these folks, they’re making good, good, good, good wages, but they’re making
    3:08:11 68 a year, $100,000 a year, not the, you know, 300,000 that you’d make in the valley.
    3:08:15 And so like in practice, the startups basic like little tech, as we call it or the startup
    3:08:20 world mainly doesn’t use H1Vs at this point, and mainly can’t because the system is kind
    3:08:23 of rigged in a way that we really can’t.
    3:08:26 And then, and then, and then again, you get to the sort of underlying morality here, which
    3:08:30 is it’s like, well, you know, Amazon like Amazon’s in like, I love Amazon, but like
    3:08:33 they’re a big powerful company, you know, they’ve got, you know, more money than God,
    3:08:37 they’ve got resources, they’ve got long-term planning horizon, they do big, you know, profound
    3:08:42 things over, you know, decades at a time, you know, they could, you know, or any of
    3:08:45 these other companies could launch massively effective programs to go recruit the best
    3:08:48 and brightest from all throughout the country.
    3:08:52 And, you know, you’ll notice they don’t do that, you know, they bring in, you know, 10,000,
    3:08:55 20,000 H1Vs a year.
    3:08:57 And so you’ve got a question there.
    3:09:00 And then these mills, like there’s lots of questions around them and whether they should,
    3:09:03 you know, whether that’s even a ethical way, you know, I don’t want to say they’re unethical,
    3:09:08 but there’s questions around like exactly what the trade-offs are there.
    3:09:11 And so, you know, this, yeah, and this is like a Pandora’s box that really, you know,
    3:09:16 nobody really wanted to be opened, you know, to play devil’s advocate on all this in terms
    3:09:19 of like national immigration issues, you know, none of this is like a top-end issue just
    3:09:21 because the numbers are small, right.
    3:09:24 And so, you know, I don’t think, you know, the administration has said like, this is
    3:09:27 not like a priority of theirs for right now.
    3:09:30 But I guess what I would say is like there is actually a lot of complexity and nuance
    3:09:32 here.
    3:09:35 I have a lot of friends, like I said, I have a lot of friends and colleagues who are,
    3:09:39 you know, who came over on H1Vs or O1s, green cards, many are now citizens.
    3:09:42 And you know, every single one of them was not every single one.
    3:09:45 A lot of them were enthusiastic to, you know, defend the honor of immigrants throughout
    3:09:46 this whole period.
    3:09:48 And they said to me, it’s like, well, Mark, how can we, you know, how can we, how can
    3:09:51 we more clearly express, you know, the importance of high school immigration to the U.S.?
    3:09:57 And I was like, I think you can do it by advocating for also developing our native born talent.
    3:10:01 Like, do you want to inflame the issue or do you want to diffuse the issue?
    3:10:02 Right.
    3:10:04 And I think the answer is to diffuse the issue.
    3:10:09 Let me give you one more positive scenario, which, and then I’ll also beat up on the university
    3:10:10 some more.
    3:10:14 Do you know about the National Merit Scholarship System?
    3:10:16 Have you heard about this?
    3:10:17 Not really.
    3:10:22 So there’s a system that was created during the Cold War called the National Merit Scholars.
    3:10:27 And it is a basically, it was created, I forget, in the 1950s or ’60s when it was when people
    3:10:31 in government actually wanted to identify the best and the brightest as heretical an
    3:10:33 idea as that sounds today.
    3:10:39 And so it’s basically a national talent search for basically IQ.
    3:10:44 Its goal is to identify basically the top 0.5% of the IQ in the country.
    3:10:46 By the way, completely regardless of other characteristics.
    3:10:51 So there’s no race, gender, or any other aspect to it is just going for straight intelligence.
    3:10:57 It uses the, first the PSAT, which is the preparatory SAT that you take, and then the SAT.
    3:11:02 So it uses those scores, that is the scoring, it’s a straight PSAT-SAT scoring system.
    3:11:09 So they use the SAT as a proxy for IQ, which it is.
    3:11:13 They run this every year, they identify, it’s like they get down to like 1% of the population
    3:11:17 of the kids, 18-year-olds in a given year who score highest on the PSAT, and then they
    3:11:22 get down to further qualify down to the 0.5% that also replicate on the SAT.
    3:11:25 And then it’s like the scholarship amount is like $2,500, right?
    3:11:30 So it’s like, it was a lot of money 50 years ago, not as much today, but it’s a national
    3:11:33 system being run literally to find the best and the brightest.
    3:11:37 How many of our great and powerful universities use this as a scouting system?
    3:11:39 Like, our universities all have sports teams.
    3:11:44 They all have national scouting, full-time scouts who go out and they go to every high
    3:11:47 school and they try to find all the great basketball players and bring them into the
    3:11:50 NCAA, into all these leagues.
    3:11:53 How many of our great and powerful and enlightened universities use the national merit system
    3:11:58 to go do a talent search for the smartest kids and just bring them in?
    3:12:02 Let me guess, very few, zero.
    3:12:03 As you say it, that’s brilliant.
    3:12:07 There should be that same level of scouting for talent internally.
    3:12:08 Go get the smartest ones.
    3:12:11 I’ll give you one more clicker on this topic if you’re not, if I haven’t beaten it to
    3:12:12 death.
    3:12:16 The SAT has changed.
    3:12:22 The SAT used to be a highly accurate proxy for IQ that caused a bunch of problems.
    3:12:25 People really don’t like the whole idea of IQ.
    3:12:29 The SAT has been actively managed over the last 50 years by the college board that runs
    3:12:32 it and it has been essentially like everything else.
    3:12:37 It’s been dumbed down in two ways.
    3:12:42 Number one has been dumbed down where an 800 from 40 years ago does not mean what an 800
    3:12:43 means today.
    3:12:48 40 years ago it was almost impossible to get an 800.
    3:12:53 Today there’s so many 800s that you could stock the entire Ivy League with 800s.
    3:12:55 It’s been deliberately dumbed down.
    3:12:59 Then two is they have tried to pull out a lot of what’s called the G-loading.
    3:13:03 They’ve tried to detach it from being an IQ proxy because IQ is such an inflammatory
    3:13:04 concept.
    3:13:07 The consequence of that is, and this is sort of perverse, they’ve made it more coachable
    3:13:08 people.
    3:13:13 Right, so the SAT 40 years ago coaching didn’t really work and more recently it has really
    3:13:14 started to work.
    3:13:18 One of the things you see is that the Asian spike, you see this giant leap upward in
    3:13:21 Asian performance over the last decade and I think looking at the data, I think a lot
    3:13:26 of that is because it’s more coachable now and the Asians do the most coaching.
    3:13:28 There’s a bunch of issues with this.
    3:13:31 The coaching thing is really difficult because the coaching thing is a subsidy then to the
    3:13:34 kids whose parents can afford coaching.
    3:13:37 I don’t know about you, but where I grew up there was no SAT coaching.
    3:13:38 There’s like an issue there.
    3:13:41 I didn’t even know what the SAT was until the day I took it, much less that there was
    3:13:45 coaching, much less that it could work, so much less we could afford it.
    3:13:46 So number one, there’s issues there.
    3:13:50 But the other issue there is think about what’s happened by the dumbing down.
    3:13:55 800 no longer captures all the smart, 800 is too crude of a test.
    3:13:57 It’s like the AI benchmarking problem.
    3:13:59 It’s the same problem they have in AI benchmarking right now.
    3:14:02 800 is too low of a threshold.
    3:14:06 There are too many kids scoring 800 because what you want is you want whatever, if it’s
    3:14:09 going to be 100,000 kids, I don’t know what it is, it’s going to be 50,000 kids a year
    3:14:10 scoring 800.
    3:14:15 You also then want kids to be able to score 900 and 1100 and 1200 and you want to ultimately
    3:14:19 get to, you’d like to ultimately identify the top 100 kids and make sure that you get
    3:14:21 them in MIT.
    3:14:25 And the resolution of the test has been reduced so that it actually is not useful for doing
    3:14:26 that.
    3:14:29 And again, I would say this is like part of the generalized corruption that’s taken
    3:14:33 place throughout this entire system where we have been heading in the reverse direction
    3:14:37 from wanting to actually go get the best and brightest and actually put them in the places
    3:14:38 where they should be.
    3:14:41 And then just the final comment would be the great thing about standardized testing and
    3:14:45 the national merit system is, like I said, it’s completely race blind, it’s gender blind,
    3:14:47 it’s blind on every other characteristic.
    3:14:49 It’s only done on test scores.
    3:14:54 And you can make an argument about whether that’s good or bad, but it is for sure, it’s
    3:14:57 the closest thing that we had to get to merit.
    3:15:00 It was the thing that they did when they thought they needed merit to win the Cold War.
    3:15:03 And of course, we could choose to do that anytime we want.
    3:15:07 And I just say, I find it like incredibly striking and an enormous moral indictment
    3:15:10 of the current system that there are no universities to do this today.
    3:15:13 So back to the immigration thing just real quick, it’s like, okay, we aren’t even trying
    3:15:16 to go get the smart kids out of the center or south.
    3:15:19 And even if they think that they can get into these places, they get turned down.
    3:15:21 And the same thing for the smart Asians and the same thing for the smart Jews and the
    3:15:23 same thing for the smart black people.
    3:15:29 And like, it’s just like, I don’t know how, like, I don’t know how that’s moral.
    3:15:31 Like, I don’t get it at all.
    3:15:37 As you said about the 800, so I took the SAT and the ACT many times and I’ve always gotten
    3:15:39 perfect on math 800.
    3:15:47 It’s just, and I’m not that, I’m not special, like it doesn’t identify genius.
    3:15:54 I think you want to search for genius and you want to create measures that find genius
    3:15:57 of all different kinds, speaking of diversity.
    3:16:06 And I guess we should reiterate and say over and over and over, defend immigrants, yes,
    3:16:09 but say we should hire more and more native born.
    3:16:13 Well, you asked me in the beginning, like, what’s the most optimistic forecast, right,
    3:16:21 that we could have in the most optimistic forecast would be my God, what if we did both?
    3:16:25 So that’s the reasonable, the rational, the smart thing to say here.
    3:16:26 In fact, we don’t have to have a war.
    3:16:30 Well, it would diffuse, it would diffuse the entire issue.
    3:16:32 If everybody in the center in the south of the country and every Jewish family, Asian
    3:16:37 family, black family knew they were getting a fair shake, like it would diffuse the issue.
    3:16:38 Like how about diffusing the issue?
    3:16:43 Like what a crazy radical, sorry, I don’t mean to really get out of my skis here, but
    3:16:47 I think your profile on X states it’s time to build.
    3:16:52 It feels like 25, 2025 is a good year to build.
    3:17:02 So I wanted to ask your advice and maybe for advice for anybody who’s trying to build,
    3:17:08 who’s trying to build something useful in the world, maybe launch a startup or maybe just
    3:17:14 launch apps, services, whatever, ship software products.
    3:17:21 So maybe by way of advice, how do you actually get to shipping?
    3:17:24 So I mean, a big part of the answer I think is we’re in the middle of a legit revolution
    3:17:29 and I know you’ve been talking about this on your show, but like AI coding, I mean,
    3:17:34 this is the biggest earthquake to hit software in certainly my life, maybe since the investment
    3:17:35 software.
    3:17:39 And I’m sure we’re involved in various of these companies, but these tools from a variety
    3:17:46 of companies are absolutely revolutionary and they’re getting better at leaps and bounds
    3:17:47 right every day.
    3:17:52 You know all this, but the thing with coding, there’s open questions of whether AI can get
    3:17:57 better at understanding philosophy or creative writing or whatever, but for sure we can make
    3:18:01 it much better at coding because you can validate the results of coding.
    3:18:05 And so there’s all these methods of synthetic data and self-training and reinforcement learning
    3:18:07 that for sure you can do with coding.
    3:18:12 And so everybody I know who works in the field says AI coding is going to get to be phenomenally
    3:18:14 good and it’s already great.
    3:18:17 And you can, I mean, anybody wants to see this just go on YouTube and look at AI coding
    3:18:21 demos, you know, little kids making apps in 10 minutes working with an AI coding system.
    3:18:23 And so I think it’s the golden age.
    3:18:25 I mean, I think this is an area where it’s clearly the golden age.
    3:18:29 The tool set is extraordinary, you know, in a day as a coder for sure in a day, you can
    3:18:34 retrain yourself, you know, start using these things, get a huge boost in productivity as
    3:18:37 a non-coder you can learn much more quickly than you could before.
    3:18:41 That’s actually a tricky one in terms of learning as a non-coder to build stuff.
    3:18:45 But still, I feel like you still need to learn how to code.
    3:18:47 It becomes a superpower.
    3:18:49 It helps you be much more productive.
    3:18:56 Like you could legitimately be a one person company and get quite far.
    3:18:57 I agree with that up to a point.
    3:19:03 So the, I think for sure for quite a long time, the people who are good at coding are going
    3:19:06 to be the best at actually having AI’s code things because they’re going to understand
    3:19:08 what, I mean, very basic, they’re going to understand what’s happening, right?
    3:19:11 And they’re going to be able to evaluate the work and they’re going to be able to, you
    3:19:13 know, literally like manage AI’s better.
    3:19:16 Like even if they’re not literally handwriting the code, they’re just going to have a much
    3:19:17 better sense of what’s going on.
    3:19:21 So I definitely think like 100%, my nine year old is like doing all kinds of coding classes
    3:19:24 and he’ll keep doing that for certainly through 18.
    3:19:26 We’ll see after that.
    3:19:29 And so for sure that’s the case.
    3:19:32 But look, having said that, one of the things you can do with an AI is say, teach me how
    3:19:35 to code, right?
    3:19:40 And so, and you know, there’s a whole bunch of, you know, I’ll name names, you know,
    3:19:43 like there’s a whole bunch of work that they’re doing economy for free.
    3:19:47 And then, you know, we have this company, Replet, which is originally specifically built
    3:19:52 for kids for coding that is as AI built in, that’s just absolutely extraordinary now.
    3:19:56 And then, you know, there’s a variety of other systems like this.
    3:20:00 And yeah, I mean, the AI is going to be able to teach you to code AI, by the way, is as
    3:20:04 you know, spectacularly good at explaining code, right?
    3:20:08 And so, you know, the tools have these features now where you can talk to the code base.
    3:20:12 So you can like literally like ask the code based questions about itself.
    3:20:15 And you can also just do the simple form, which is you can copy and paste code into
    3:20:20 chat GPT and just ask it to explain it, what’s going on, rewrite it, improve it, make recommendations.
    3:20:23 And so there’s, yeah, there’s dozens of ways to do this.
    3:20:26 By the way, you can also, I mean, even more broadly than code, like, you know, okay, you
    3:20:31 want to make a video game, okay, now you can do AI, art generation, sound generation, dialogue
    3:20:34 generation, voice generation, right?
    3:20:37 And so all of a sudden, like you don’t need designers, you know, you don’t need, you know,
    3:20:38 voice actors.
    3:20:43 You know, so yeah, so there’s just like unlimited, and then, you know, because, you know, a big
    3:20:47 part of coding is so called glue, you know, it’s interfacing into other systems.
    3:20:50 So it’s interfacing in a, you know, stripe to take payments or something like that.
    3:20:54 And you know, AI is fantastic at writing glue code.
    3:20:57 So you know, really, really good at making sure that you can plug everything together,
    3:21:01 really good at helping you figure out how to deploy, you know, it’ll even write a business
    3:21:03 plan for you.
    3:21:06 So it’s just this, it’s like everything happening with AI right now, it’s just it’s like this
    3:21:10 latent superpower, and there’s this incredible spectrum of people who have really figured
    3:21:14 out massive performance increases, productivity increases with it already.
    3:21:16 There’s other people who aren’t even aware it’s happening.
    3:21:21 And there’s some gearing to whether you’re a coder or not, but I think there are lots
    3:21:23 of non coders that are after the races.
    3:21:27 And I think there are lots of professional coders who are still like, you know, the blacksmiths
    3:21:32 were not necessarily in favor of, you know, car business.
    3:21:36 So yeah, there’s the old William Gibson quote, the future is here, it’s just not evenly
    3:21:37 distributed yet.
    3:21:41 And this is maybe the most potent version of that that I’ve ever seen.
    3:21:48 Yeah, there’s a, you know, the old meme with the, with the bell curve, the people on both
    3:21:51 extremes say AI coding is the future.
    3:21:52 Right.
    3:21:56 It’s very common to the programmers to say, you know, if you’re any good of a programmer,
    3:21:57 you’re not going to be using it.
    3:21:58 That’s just not true.
    3:22:04 No, I consider myself a reasonably good programmer and I, my productivity has been just skyrocketed
    3:22:12 and the joy of programming skyrocketed is every aspect of programming is more efficient,
    3:22:15 more productive, more fun, all that kind of stuff.
    3:22:19 I would also say code is, you know, code has, code has of anything in like industrial society,
    3:22:24 code has the highest elasticity, which is to say the easier it is to make it, the more
    3:22:25 it gets made.
    3:22:29 I think effectively there’s unlimited demand for code, like in other words, like there’s
    3:22:34 always some other idea for a thing that you can do, a feature that you can add or a thing
    3:22:36 that you can optimize.
    3:22:40 And so, and so like overwhelmingly, you know, the amount of code that exists in the world
    3:22:43 is a fraction of even the ideas we have today and then we come up with new ideas all the
    3:22:44 time.
    3:22:50 And so I think that like, you know, I was, I was late 80s, early 90s, when sort of automated
    3:22:53 coding systems started to come out, expert systems, big deal in those days.
    3:22:56 And there were all these, there was a famous book called The Decline and Fall of the American
    3:22:59 Programmer, you know, that predicted that these new coding systems were going to mean
    3:23:00 we wouldn’t have programmers in the future.
    3:23:04 And of course, the number of programming jobs exploded by like a factor of 100.
    3:23:07 Like my guess will be, we’ll have more, my guess is we’ll have more coding jobs probably
    3:23:11 by like an order of magnitude 10 years from now.
    3:23:12 That will be different.
    3:23:13 There’ll be different jobs.
    3:23:17 They’ll involve orchestrating AI, but there will be, we will be creating so much more
    3:23:21 software that the whole industry will just explode in size.
    3:23:26 Are you seeing the size of companies decrease in terms of startups?
    3:23:28 What’s the landscapes of little tech?
    3:23:31 All we’re seeing right now is the AI hiring boom of all time.
    3:23:37 Oh, for the big tech people and little tech, everybody’s trying to hire as many engineers
    3:23:38 as they can to build AI systems.
    3:23:40 It’s just, it’s a hundred percent.
    3:23:44 I mean, there’s a handful of company, you know, there’s a little bit, there’s customer
    3:23:45 service.
    3:23:48 You know, there, we have some companies and others, I think it’s Klarna that’s publicizing
    3:23:55 a lot of this in Europe where, you know, there are jobs that can be optimized and jobs that
    3:23:56 can be automated.
    3:24:02 But like for engineering jobs, like it’s just an explosion of hiring, that at least so far
    3:24:05 there’s no trace of any sort of diminishing effect.
    3:24:07 Now, having said that, I am looking forward to the day.
    3:24:12 I am waiting for the first company to walk in saying, yes, like the more radical form
    3:24:13 of it.
    3:24:16 So basically the companies that we see are basically one of two kinds.
    3:24:20 We see the companies that are basically, sometimes use weak form, strong form.
    3:24:25 So the weak form companies sometimes use the term, it’s called the sixth bullet point.
    3:24:28 AI is the sixth bullet point on whatever they’re doing.
    3:24:29 Sure.
    3:24:30 Right.
    3:24:31 And it’s on the slide, right?
    3:24:33 So they’ve got the, you know, whatever, dot, dot, dot, dot, and then AI is the sixth thing.
    3:24:35 And the reason AI is the sixth thing is because they had already previously written the slide
    3:24:37 before the AI revolution started.
    3:24:40 And so they just added the sixth bullet point on the slide, which is how you’re getting
    3:24:44 all these products that have like the AI button up in the corner, right, the little sparkly
    3:24:45 button.
    3:24:46 Right.
    3:24:48 And all of a sudden, Gmail is offering to summarize your email, which I’m like, I don’t
    3:24:49 need that.
    3:24:53 Like I need you to answer my email, not summarize it like what the hell.
    3:24:54 Okay.
    3:24:55 So we see those.
    3:24:56 And that’s fine.
    3:24:59 That’s like, I don’t know, putting sugar on the cake or something.
    3:25:02 But then we see the strong form, which is the companies that are building from scratch
    3:25:03 for AI.
    3:25:04 Right.
    3:25:05 And they’re building it.
    3:25:08 I actually just met with a company that is building literally an AI email system as an
    3:25:09 example.
    3:25:10 Oh, nice.
    3:25:11 I can’t wait.
    3:25:12 Yeah.
    3:25:13 They’re going to completely, right.
    3:25:14 It’s going to be an obvious idea.
    3:25:15 Very smart team.
    3:25:17 You know, it’s going to be great.
    3:25:20 And then, you know, Notion just, you know, another, not one of our companies, but just
    3:25:21 came out with a product.
    3:25:24 And so now companies are going to basically come through, sweep through, and they’re going
    3:25:27 to do basically AI first versions of basically everything.
    3:25:31 And those are like companies built, you know, AI is the first bullet point is the strong
    3:25:32 form of the argument.
    3:25:33 Yeah.
    3:25:34 Cursor is an example of that.
    3:25:38 They basically said, okay, we’re going to rebuild the thing with AI as the first citizen.
    3:25:41 What if we knew from scratch that we could build on this?
    3:25:45 And again, this is like, this is part of the full employment act for startups and VCs is
    3:25:50 it just like if a technology transformation is efficiently powerful, then you actually
    3:25:54 need to start the product development process over from scratch because you need to reconceptualize
    3:25:55 the product.
    3:25:58 And then usually what that means is you need a new company because most incumbents just
    3:25:59 won’t do that.
    3:26:02 And so, yeah, so that’s underway across many categories.
    3:26:07 What I’m waiting for is the company where it’s like, no, our org chart is redesigned
    3:26:08 as a result of AI, right?
    3:26:12 And so, I’m looking, I’m waiting for the company where it’s like, no, we’re going to have like,
    3:26:15 you know, and the cliche, here’s a thought experiment, right?
    3:26:18 The cliche would be we’re going to have like the human executive team and then we’re going
    3:26:20 to have the AI’s be the workers, right?
    3:26:25 So we’ll have VP of engineering supervising 100 instances of coding agents, right?
    3:26:26 Okay, maybe.
    3:26:27 Right.
    3:26:31 By the way, or maybe, maybe the VP of engineering should be the AI.
    3:26:34 Maybe supervising human coders who are supervising AI’s, right?
    3:26:39 Because one of the things that AI should be pretty good at is managing because it’s like
    3:26:41 not, you know, it’s like a process driven.
    3:26:43 It’s the kind of thing that AI is actually pretty good at, right?
    3:26:46 Performance evaluation coaching.
    3:26:49 And so, should it be an AI executive team?
    3:26:54 And then, you know, and then of course the ultimate question, which is AI CEO, right?
    3:26:57 And then, you know, and then there’s, and then maybe the most futuristic version of it would
    3:27:00 be an actual AI agent that actually goes fully autonomous.
    3:27:01 Yeah.
    3:27:04 What if you really set one of these things loose and let it, let it basically build itself
    3:27:05 a business?
    3:27:08 And so I will say like, we’re not yet seeing those.
    3:27:13 And I think there’s a little bit of the systems aren’t quite ready for that yet.
    3:27:16 And then I think it’s a little bit of, you really do need at that point, like a founder
    3:27:21 who’s really willing to break all the rules and really willing to take the swing.
    3:27:22 And those people exist.
    3:27:23 And so I’m sure we’ll see that.
    3:27:27 And some of it is, as you know, with all the startups, this is the execution.
    3:27:34 The idea that you have a AI first email client, seems like an obvious idea, but actually creating
    3:27:38 one, executing it and then taking on Gmail is really, is really difficult.
    3:27:45 I mean, Gmail, it’s fascinating to see Google can’t do it because, because why?
    3:27:49 Because the momentum, because it’s hard to re-engineer the entirety of the system feels
    3:27:52 like Google is perfectly positioned to, to do it.
    3:27:59 Same with like your perplexity, which I love, like Google could technically take on perplexity
    3:28:02 and do it much better, but they haven’t, not yet.
    3:28:06 So it’s fascinating why that is for large companies.
    3:28:08 I mean, that, that is an advantage for little tech.
    3:28:09 They could be agile.
    3:28:10 Yeah, that’s right.
    3:28:11 They could move fast.
    3:28:12 Yeah.
    3:28:14 Little companies can break glass in a way big companies can’t.
    3:28:15 Right.
    3:28:18 This is sort of the big breakthrough that Clay Christians had in the innovators dilemma,
    3:28:21 which is sometimes when big companies don’t do things, it’s because they’re screwing up
    3:28:23 and that certainly happens.
    3:28:26 But a lot of times they don’t do things because it would break too much glass.
    3:28:30 It was specifically, it would, it would interfere with their existing customers and their existing
    3:28:31 businesses.
    3:28:32 And they just simply won’t do that.
    3:28:34 And by the way, responsibly, they shouldn’t do that.
    3:28:35 Right.
    3:28:41 And so they just get, Clay Christians, this big thing is they, they often don’t adapt
    3:28:46 because they are well run, not because they’re probably run, but they’re optimizing machines.
    3:28:49 They’re, they’re, they’re optimizing, I guess, existing business and, and, and, and as, as
    3:28:54 you kind of just said, this is like a permanent state of affairs for large organizations, like
    3:28:56 every once in a while, one breaks the pattern and actually does it.
    3:28:59 But for the most part, like this is a very predictable form of human behavior.
    3:29:03 And this fundamentally is why startups exist.
    3:29:08 It feels like 2025 is when the race for dominance and AI will see some winners.
    3:29:10 Like it’s a big year.
    3:29:12 So who do you think wins the race?
    3:29:16 Open AI, Meta, Google, XAI, who do you think wins the AI race?
    3:29:18 I would say, I’m not going to predict, I’m going to say there’s questions all over the
    3:29:19 place.
    3:29:22 And then we have, we have this category of question we call the trillion dollar question,
    3:29:26 which is like literally depending on how it’s answered, people make or lose a trillion dollars.
    3:29:30 And I think there’s like, I don’t know, five or $6 trillion questions right now that are
    3:29:33 hanging out there, which is an unusually large number.
    3:29:36 And I just, you know, I’ll just hit a few of them and we can talk about them.
    3:29:38 So one is big models versus small models.
    3:29:40 Another is open models versus closed models.
    3:29:44 Another is whether you can use synthetic data or not.
    3:29:45 Another is chain of thought.
    3:29:48 How far can you push that in reinforcement learning?
    3:29:52 And then another one is political trillion dollar questions, policy questions, which,
    3:29:57 you know, the U.S. and the EU have both been flunking dramatically and the U.S. hopefully
    3:29:59 is about to really succeed at.
    3:30:00 Yeah.
    3:30:03 And then there’s probably another, you know, half dozen big important questions after that.
    3:30:08 And so these are all just like, say, this is an industry that’s in flux in a way that
    3:30:11 I even more dramatic, I think, than the ones I’ve seen before.
    3:30:15 And look, the most example, most obvious example, the flux is sitting here three, sitting here
    3:30:19 in the summer, you know, sitting here less than three years ago, sitting here in December
    3:30:23 22, we would have said that OpenAI is just running away with everything.
    3:30:27 And sitting here today, it’s like, you know, there’s at least six, you know, world-class
    3:30:33 God model companies and teams that are, by the way, generating remarkably similar results.
    3:30:36 That’s actually been one of the most shocking things to me is like, it turns out that once
    3:30:40 you know that it’s possible to build one incredibly smart Turing test passing large
    3:30:44 language model, which was a complete shock and surprise to the world.
    3:30:48 It turns out within, you know, a year you can have five more.
    3:30:51 There’s also a money component thing to it, which is to get the money to scale one of
    3:30:53 these things into the billions of dollars.
    3:30:56 There’s basically right now only two sources of money that will do that for you.
    3:31:00 One is the hyperscalers giving you the money, which you turn around and round trip back
    3:31:01 to them.
    3:31:05 Or, you know, foreign sovereigns, you know, other, you know, country sovereign sovereign
    3:31:10 wealth funds, which can be, you know, difficult in some cases for companies to access.
    3:31:14 So there’s a, there’s another, there’s maybe another trillion dollar question is the financing
    3:31:15 question.
    3:31:16 Here’s one.
    3:31:19 Sam Altman has been public about the fact that he wants to transition open AI from being
    3:31:21 a nonprofit, being a for-profit.
    3:31:25 The way that that is legally done is that, and there is a way to do it, there is a way
    3:31:30 in U.S. law to do it, the IRS and other legal entities, government entities scrutinize this
    3:31:34 very carefully because the U.S. takes foundation nonprofit law very seriously because of the
    3:31:36 tax exemption.
    3:31:40 And so the way that, historically the way that you do it is you start a for-profit and
    3:31:44 then you, you raise money with the for-profit to buy the assets of the nonprofit at fair
    3:31:47 market value.
    3:31:51 And you know, the last financing round at open AI was, you know, 150 some billion dollars.
    3:31:56 And so logically, the, if, if, if the flip is going to happen, the for-profit has to
    3:32:02 go raise 150 billion dollars out of the chute to buy the assets, you know, raising 150 billion
    3:32:03 is a challenge.
    3:32:06 Um, so, you know, is that even possible?
    3:32:09 If that is possible, then open AI maybe as often the races as a for-profit company.
    3:32:13 If not, you know, you know, I don’t know, and then, you know, obviously the Elon lawsuit.
    3:32:17 So, so just because they’re the market leader today, you know, there’s big important questions
    3:32:18 there.
    3:32:20 You know, Microsoft has this kind of love-hate relationship with them.
    3:32:21 Where does that go?
    3:32:25 Apple’s, you know, lagging badly behind, but, you know, they’re very good at catching up.
    3:32:29 Amazon, you know, is primarily a hyperscalar, but they now have their own models.
    3:32:33 And then there’s the other questions like you laid out brilliantly, briefly and brilliantly,
    3:32:39 open versus closed, big versus little models, synthetic data, that’s a huge, huge question.
    3:32:45 And then test on compute with a chain of thought, the role of that, and this is fascinating.
    3:32:48 And these are, I think it’s fair to say, trillion-dollar questions.
    3:32:49 You know, these are big.
    3:32:51 Like, look, you know, it’s like, here’s a trillion-dollar question, which is kind of
    3:32:54 embedded in that, which is just hallucinations, right?
    3:32:58 Like, so if you are trying to use these tools creatively, you’re thrilled because they can
    3:33:02 draw new images and they can make new music and they can do all this incredible stuff,
    3:33:03 right?
    3:33:04 They’re creative.
    3:33:07 The flip side of that is if you need them to be correct, they can’t be creative.
    3:33:11 That’s, you know, the term hallucination and these things do hallucinate.
    3:33:16 And you know, there have been, you know, court cases already where lawyers have submitted
    3:33:20 legal briefs that contain made-up court citations, case citations, the judge is like, wait a
    3:33:21 minute, this doesn’t exist.
    3:33:24 And the very next question is, did you write this yourself?
    3:33:26 And the lawyer goes, “Uh…”
    3:33:30 I mean, that’s why you, along with Grock, looking for truth.
    3:33:33 I mean, that’s an open, technical question.
    3:33:35 How close can you get to truth with LLMs?
    3:33:36 Yeah, that’s right.
    3:33:43 And my sense, this is very contentious topic at the industry, my sense is if to the extent
    3:33:47 that there is a domain in which there is a definitive and checkable and provable answer,
    3:33:51 and you might say math satisfies that, coding satisfies that, and maybe some other fields,
    3:33:54 then you should be able to generate synthetic data.
    3:33:55 You should be able to do chain of thought reasoning.
    3:33:57 You should be able to do reinforcement learning.
    3:34:02 And you should be able to ultimately, you know, eliminate hallucinations for, but by the way,
    3:34:05 that’s a trillion dollar question right there as to whether that’s true.
    3:34:08 But then, but then there’s question like, okay, is that going to work in the more general
    3:34:09 domain?
    3:34:12 Like, so for example, one possibility is these things are going to get truly superhuman like
    3:34:17 math and coding, but at like discussing philosophy, they’re going to just, they’re basically as
    3:34:19 smart as they’re ever going to be.
    3:34:23 And they’re going to be kind of, you know, say mid-wit grad student level.
    3:34:26 And the theory there would just be they’re already out of training data.
    3:34:30 Like they literally, you know, you talk to these people like literally the big models,
    3:34:33 the big models are like within a factor of two X of consuming all the human generated
    3:34:36 training data to the point that some of these big companies are literally hiring people
    3:34:39 like doctors and lawyers to sit and write new training data by hand.
    3:34:42 And so does this mean that like you have to, if you want your model to be better at philosophy,
    3:34:45 you have to go hire like a thousand philosophers and have them write new content?
    3:34:47 And is anybody going to do that?
    3:34:50 And so, you know, maybe, maybe these things are topping out in certain ways and they’re
    3:34:52 going to leap way ahead in other ways.
    3:34:57 And so anyway, so we just don’t, you know, I guess this is maybe my main conclusion is
    3:35:02 I don’t any of these, anybody tell anybody telling you these big sweeping conclusions,
    3:35:05 you know, this whole super, you know, all of these abstract generalized super intelligence
    3:35:09 AGI stuff, like it, you know, maybe it’s the engineer in me, but like, no, like that’s
    3:35:16 not the, that’s not the, that’s too abstract, like it’s got to actually work.
    3:35:18 And then by the way, it has to actually pay for it.
    3:35:22 I mean, this is a problem right now with the, you know, the big models, the big models that
    3:35:25 are like really good at coding a math, they’re like actually very expensive to run.
    3:35:28 You know, they’re quite slow.
    3:35:33 Another trillion dollar question, future chips, which I know you’ve talked a lot about.
    3:35:37 Another trillion dollar question, yeah, I mean, all the global issue, oh, another trillion
    3:35:43 dollar question, censorship, right, like, and, and, and, and all the, as they say, all
    3:35:48 the human feedback training process.
    3:35:49 Exactly what are you training these things to do?
    3:35:51 What are they allowed to talk about?
    3:35:55 How long do they give you these and how often do they give these incredibly preachy moral
    3:35:56 lectures?
    3:35:59 Here’s a, here’s a, here’s a good, here’s a trillion dollar question.
    3:36:05 How many other countries want their country to run its education system, healthcare system,
    3:36:08 news system, political system on the basis of an AI that’s been trained according to
    3:36:13 the most extreme left-wing California politics, right, because that’s kind of what they have
    3:36:15 on offer right now.
    3:36:17 And I think the answer to that is not very many.
    3:36:22 So there’s like massive open questions there about like what, you know, and by the way,
    3:36:25 like what morality of these things are going to get trained on as a.
    3:36:32 In that one, we’re cracking wide open with what’s been happening over the past few months,
    3:36:38 censorship on every level of these companies and just the very idea what truth means and
    3:36:45 what it means to be, expand the Overton window of LLMs or the Overton window of human discourse.
    3:36:47 So what, what I experienced, you know, going back to how we started, what I experienced
    3:36:53 was, all right, social media censorship regime from hell, debanking, right, at like large
    3:36:58 scale, and then the war on the crypto industry, trying to kill it, and then basically declared
    3:37:03 intent to do the same thing to AI and to put AI under the same kind of censorship and control
    3:37:06 regime as social media and the banks.
    3:37:11 And I think this election tips in America, I think this election tips us from a timeline
    3:37:15 in which things were going to get really bad on that front to a timeline in which I think
    3:37:17 things are going to be quite good.
    3:37:21 But look, those same questions also apply outside the US and, you know, the EU is doing
    3:37:25 their thing, they’re being extremely draconian, and they’re trying to lock in a political
    3:37:27 censorship regime on AI right now.
    3:37:29 That’s so harsh that even American AI companies are not even willing to launch new products
    3:37:31 in the EU right now.
    3:37:35 Like, that’s not going to last, but like what happens there, right?
    3:37:38 And what are the tradeoffs, you know, what levels of censorship are American companies
    3:37:42 going to have to sign up for if they want to operate in the EU or is the EU still capable
    3:37:50 of generating its own AI companies or have we brain drained them so that they can’t.
    3:37:52 So big questions.
    3:37:53 Quick questions.
    3:38:03 So you’re very active on X, a very unique character, flamboyant, exciting, bold.
    3:38:05 You post a lot.
    3:38:10 I think there’s a meme, I don’t remember it exactly, but Elon posted something like inside
    3:38:12 Elon there are two wolves.
    3:38:16 One is please be kind or more positive.
    3:38:22 And the other one is, I think, you know, doing the, take a big step back and fuck yourself
    3:38:24 in the face guy.
    3:38:28 How many wolves are inside your mind when you’re tweeting?
    3:38:30 To be clear, a reference from the comedy classic, “Tropic Thunder.”
    3:38:31 “Tropic Thunder.”
    3:38:32 Yeah.
    3:38:33 Legendary movie.
    3:38:34 Yes.
    3:38:39 Any zoomers listening to this who haven’t seen that movie, go watch it immediately.
    3:38:40 Yeah.
    3:38:41 There’s nothing offensive about it.
    3:38:50 I’m Cruz’s greatest performance.
    3:38:55 So yeah, no, look, just start by saying like I’m not supposed to be tweeting at all.
    3:38:56 So yeah.
    3:38:57 Yes.
    3:38:58 Yes.
    3:38:59 Yes.
    3:39:00 But you know.
    3:39:01 So how do you approach that?
    3:39:02 Like, how do you approach what to tweet?
    3:39:03 I mean, I don’t.
    3:39:08 Like, so it’s a, it’s a, I don’t, I don’t well enough.
    3:39:10 It’s mostly an exercise in frustration.
    3:39:13 Look, there’s a glory to it and there’s, there’s, there’s an issue with it and the glory of
    3:39:18 it is like, you know, instantaneous global communication that, you know, in X in particular
    3:39:21 is like the, you know, the town square on all these, you know, social issues, political
    3:39:24 issues, everything else, current events.
    3:39:26 But I mean, look, there’s no question, the format, the format of at least the original
    3:39:29 tweet is, you know, prone to be inflammatory.
    3:39:34 You know, I’m the guy who at one point, the entire nation of India hated me because I
    3:39:38 once tweeted something and it turned out that it’s still politically sensitive and the entire
    3:39:39 continent.
    3:39:43 I stayed up all night that night as, as I became front page headline and leading television
    3:39:46 news in each time zone in India for a single tweet.
    3:39:50 So like the single tweet out of context is a very dangerous thing.
    3:39:55 Obviously, X now has the middle ground where they, you know, they now have the longer form
    3:39:56 essays.
    3:40:01 And so, you know, probably the most productive thing I can do is, is longer form, is longer
    3:40:02 form things.
    3:40:05 You’re not going to do it though, are you?
    3:40:06 I do, I do from time to time.
    3:40:07 Sometimes.
    3:40:08 I should, I should do more of them.
    3:40:11 And then, yeah, I mean, look, and yeah, obviously X is doing great.
    3:40:14 And then like I said, like stuff stack, you know, has become the center for a lot, you
    3:40:15 know, a lot of them.
    3:40:19 I think the best kind of, you know, deeply thought through, you know, certainly intellectual
    3:40:23 content, you know, tons of current events, stuff there as well.
    3:40:26 And then, yeah, so, and then there’s a bunch of other, you know, a bunch of new systems
    3:40:27 that are very exciting.
    3:40:30 So I think one of the things we can look forward to in the next four years is number one, just
    3:40:34 like a massive reinvigoration of social media as a consequence of the changes that are happening
    3:40:35 right now.
    3:40:37 And I’m very excited to see the, to see what’s going to happen with that.
    3:40:42 And then, I mean, it’s happening on X, but it’s now going to happen on other platforms.
    3:40:47 And then the other is crypto is going to come, you know, crypto is going to come right back
    3:40:48 to life.
    3:40:49 And actually, that’s very exciting.
    3:40:54 Actually, that’s worth noting is that’s another trillion dollar question on AI, which is in
    3:40:58 a world of pervasive AI, and especially in a world of AI agents, imagine a world of billions
    3:41:03 or trillions of AI agents running around, they need an economy.
    3:41:07 And crypto, in our view, happens to be the ideal economic system for that, right?
    3:41:08 Because it’s a programmable money.
    3:41:10 It’s a very easy way to plug in and do that.
    3:41:13 And there’s this transaction processing system that can do that.
    3:41:16 And so I think the crypto-AI intersection, you know, is potentially very, a very, very
    3:41:17 big deal.
    3:41:22 And so that was, that was going to be impossible under the prior regime.
    3:41:25 And I think under the new regime, hopefully, it’ll be something we can do.
    3:41:30 Almost for fun, let me ask a friend of yours, Jan Lacoon, what are your top 10 favorite things
    3:41:33 about Jan Lacoon?
    3:41:37 He’s a, I think he’s a, he’s a brilliant guy.
    3:41:38 I think he’s important to the world.
    3:41:41 I think you guys disagree on a lot of things.
    3:41:44 But I personally like vigorous disagreement.
    3:41:48 I, as a person in the stands, like to watch the gladiators go at it.
    3:41:50 No, he’s a super genius.
    3:41:53 I mean, look, he, I haven’t said we’re super close, but you know, casual, casual friends,
    3:41:56 I worked with him at Meta, you know, he’s the chief scientist at Meta for a long time
    3:42:02 and it still, you know, works with us and, and, you know, and as obviously as a legendary
    3:42:06 figure in the field and one of the main people responsible for what’s happening, it’s my
    3:42:10 serious observation would be that it’s, it’s, it’s the thing I keep, I’ve talked to him
    3:42:13 about for a long time and I keep trying to read and follow everything he does is he’s
    3:42:19 probably, he is the, I think, see if you agree with this, he is the smartest and most credible
    3:42:23 critic of LLMs is the path for AI.
    3:42:26 And he’s not, you know, there’s certain, I would say troll like characters who are
    3:42:30 just like crapping everything, but like, yeah, and has like very deeply thought through basically
    3:42:35 theories as to why LLMs are an evolutionary dead end.
    3:42:40 And I actually like, I try to do this thing where I try to model, you know, I try to have
    3:42:43 a mental model of like the two different sides of a serious argument.
    3:42:46 So I, I’ve tried to like internalize that argument as much as I can, which is difficult
    3:42:49 because like we’re investing it behind LLMs as aggressively as we can.
    3:42:54 So if he’s right, like, that can be a big problem, but like we should also know that.
    3:42:59 And then I sort of use his ideas to challenge all the bullish people, you know, to really
    3:43:01 kind of test their level of knowledge.
    3:43:06 So I like to kind of grill people like I’m not like, I’m not, you know, I was not, you
    3:43:09 know, I was got my CS degree 35 years ago.
    3:43:12 So I’m not like deep in the technology, but like if, if to the extent I can understand
    3:43:16 Jan’s points, I can use them to, you know, to really surface a lot of the questions for
    3:43:18 the people who are more bullish.
    3:43:20 And that’s been, I think, very productive.
    3:43:21 Yeah.
    3:43:24 So, yeah, it’s just, it’s very striking that you have somebody who is like that central
    3:43:28 in the space who is actually like a full on, a full on skeptic.
    3:43:31 And you know, and again, you could, this could go different ways.
    3:43:33 He could end up being very wrong.
    3:43:37 He could end up being totally right, or it could be that he will provoke the evolution
    3:43:39 of these systems to be much better than they would have been.
    3:43:40 Yeah.
    3:43:41 He could be both right and wrong.
    3:43:44 And first of all, I do, I do agree with that.
    3:43:51 He’s one of the most legit and regress and deep critics of the LLM path to AGI, you know,
    3:43:56 his basic notions that they’re needs, AI needs to have some physical understanding of the
    3:43:57 physical world.
    3:44:01 And that’s very difficult to achieve with LLM.
    3:44:05 And that, that is a really good way to challenge the limitations of LLMs and so on.
    3:44:11 He’s also been a vocal and a huge proponent of open source, which is a whole nother which
    3:44:12 you have been as well.
    3:44:13 Which is very useful.
    3:44:14 Yeah.
    3:44:15 And that’s been just fascinating to watch.
    3:44:16 And anti-dumer.
    3:44:17 Anti-dumer.
    3:44:18 Yeah.
    3:44:19 Yeah.
    3:44:20 He’s, he’s, he’s very anti-dumer.
    3:44:21 He embodies.
    3:44:22 He also has many wolves.
    3:44:23 He does.
    3:44:24 He does.
    3:44:25 He does.
    3:44:26 He does.
    3:44:27 So it’s been really, really fun to watch.
    3:44:28 The other two.
    3:44:29 Okay.
    3:44:30 Here’s my other wolf coming out.
    3:44:31 Yeah.
    3:44:36 The other two of the three Godfathers of AI are like radicals, like, like full on left,
    3:44:40 you know, far left, you know, like they, I would say like either Marxists or borderline
    3:44:41 Marxists.
    3:44:44 And they’re like, I think quite extreme in their social political views.
    3:44:47 And I think that feeds into their demerism.
    3:44:50 And I think, you know, they, they, they are lobbying for like draconian government.
    3:44:54 I think what would be runnously destructive government legislation and regulation.
    3:44:58 And so it’s, it’s actually super helpful, super, super helpful to have Jan as a counterpoint
    3:44:59 to those two.
    3:45:00 Another fun question.
    3:45:02 Our mutual friend, Andrew Huberman.
    3:45:03 Yes.
    3:45:08 First, maybe what do you love most about Andrew and second, what score on a scale of one to
    3:45:09 10?
    3:45:11 Do you think he would give you on your approach to health?
    3:45:12 Oh, three.
    3:45:13 Physical three.
    3:45:15 You think you score that high, huh?
    3:45:16 Okay.
    3:45:17 That’s good.
    3:45:18 Exactly.
    3:45:23 Well, so he did, he convinced me to stop drinking alcohol, which was a big deal.
    3:45:24 Successfully.
    3:45:27 Well, it was like my, other than my family, it was my favorite thing in the world.
    3:45:29 And so it was a major, major reduction.
    3:45:32 Like having like a glass of scotch at night was like a major, like it was like the thing
    3:45:33 I would do to relax.
    3:45:38 So he has profoundly negatively impacted my emotional health.
    3:45:43 I blame him for making me much less happy as a person, but much, much, much healthier.
    3:45:44 Physically healthier.
    3:45:46 So that, that I credit him with that.
    3:45:48 I’m glad I did that.
    3:45:50 But then his sleep stuff, like, yeah, I’m not doing any of that.
    3:45:51 Yeah.
    3:45:52 I have no interest in his sleep.
    3:45:53 Shit.
    3:45:54 Like, no.
    3:45:57 This whole light, natural light, no, we’re not doing that.
    3:45:58 Too hardcore for this.
    3:46:01 I don’t see any, I don’t see any natural, I don’t see any natural light in here.
    3:46:02 It’s all covered.
    3:46:03 It’s all horrible.
    3:46:04 And I’m very happy.
    3:46:09 I would be very happy living and working here because I’m totally happy without natural
    3:46:10 light.
    3:46:11 In darkness.
    3:46:12 It must be a metaphor for something.
    3:46:13 Yes.
    3:46:14 It’s a test.
    3:46:16 Look, it’s a test of manhood as to whether you can have a blue screen in your face for
    3:46:17 three hours and then go right to sleep.
    3:46:22 Like I don’t understand why you should want to take shortcuts.
    3:46:25 I now understand what they mean by toxic masculinity.
    3:46:29 All right.
    3:46:37 So let’s see, you’re exceptionally successful by most measures, but what to use the definition
    3:46:39 of success?
    3:46:43 I would probably say it is a combination of two things.
    3:46:48 I think it is contribution.
    3:46:56 So have you done something that mattered ultimately and specifically a matter to people?
    3:47:01 And then the other thing is, I think happiness is either overrated or almost a complete myth.
    3:47:05 And in fact, interesting, Thomas Jefferson did not mean happiness the way that we understand
    3:47:08 it when he said, “Pursuade to happiness and the declaration of independence.”
    3:47:16 He meant it more of the Greek meaning, which is closer to satisfaction or fulfillment.
    3:47:23 So I think about happiness as the first ice cream cone makes you super happy, the first
    3:47:27 mile of the walk in the park during sunset makes you super happy.
    3:47:33 The first kiss makes you super happy, the thousandth ice cream cone, not so much.
    3:47:38 The thousandth mile of the walk through the park, the thousandth kiss can still be good,
    3:47:42 but maybe just not right in a row.
    3:47:46 And so happiness is this very fleeting concept and the people who anchor on happiness seem
    3:47:48 to go off the rails pretty often.
    3:47:54 It’s sort of the deep sense of having been, I don’t know how to put it, useful.
    3:48:00 So that’s a good place to arrive at in life.
    3:48:01 Yeah, I think so.
    3:48:02 Yeah.
    3:48:03 I mean, can you sit?
    3:48:04 Yeah.
    3:48:07 Who was it who said the source of all the ills in the world with man’s inability to sit in
    3:48:11 a room by himself doing nothing?
    3:48:14 But if you’re sitting in a room by yourself and you’re like, “All right,” four in the
    3:48:18 morning, it’s like, “All right, have I lived up to my expectation of myself?”
    3:48:24 Like, if you have, the people I know who feel that way are pretty centered and generally
    3:48:33 seem very, I don’t know how to put it, pleased, but proud, calm at peace.
    3:48:40 The people who are sensation seekers, by the way, there’s certain entrepreneurs, for example,
    3:48:45 who are like in every form of extreme sport and they get huge satisfaction out of that.
    3:48:48 Or there’s sensation seeking in sort of useful and productive ways.
    3:48:52 Larry Allison was always like that, Zuckerberg was like that.
    3:49:00 And then there’s a lot of entrepreneurs who end up no drugs, like sexual escapades that
    3:49:02 seem like they’ll be fun at first and then backfire.
    3:49:07 Yeah, but at the end of the day, if you’re able to be at peace by yourself in a room
    3:49:08 at 4 a.m.
    3:49:09 Yeah.
    3:49:15 I would even say happy, but I know, I understand Thomas Jefferson didn’t mean it the way maybe
    3:49:20 I mean it, but I can be happy by myself at 4 a.m. with a blue screen.
    3:49:21 That’s good.
    3:49:22 Exactly.
    3:49:23 Staring at cursor.
    3:49:24 Exactly.
    3:49:31 As a small tangent, a quick shout out to an amazing interview you did with Barry Weiss
    3:49:34 and just to her in general, Barry Weiss of The Free Press.
    3:49:37 She has a podcast called “Honestly with Barry Weiss.”
    3:49:38 She’s great.
    3:49:39 People should go listen.
    3:49:45 You were asked if you believe in God.
    3:49:49 One of the joys, see we talked about happiness, one of the things that makes me happy is making
    3:49:50 you uncomfortable.
    3:49:51 Thank you.
    3:49:55 So this question is designed for, many of the questions today are designed for that.
    3:50:01 You were asked if you believe in God and you said after a pause, you’re not sure.
    3:50:09 So it felt like the pause, the uncertainty there was some kind of ongoing search for wisdom
    3:50:11 and meaning.
    3:50:14 Are you in fact searching for wisdom and meaning?
    3:50:15 I guess I put it this way.
    3:50:21 There’s a lot to just understand about people and then I feel like I’m only starting to
    3:50:29 understand and that’s certainly a simpler concept than God.
    3:50:33 So that’s what I’ve spent a lot of the last 15 years trying to figure out.
    3:50:37 I feel like I spent my first like whatever 30 years figuring out machines and then now
    3:50:41 I’m spending 30 years figuring out people, which turns out to be quite a bit more complicated.
    3:50:47 And then I don’t know, maybe God’s the last 30 years or something.
    3:50:52 And then look, I mean, just like Elon is just like, okay, the known universe is very complicated
    3:50:53 and mystifying.
    3:50:58 I mean, every time I pull up in astronomy, I get super in astronomy and it’s like, daddy,
    3:51:03 how many galaxies are there in the universe and how many galaxies are there in the universe?
    3:51:04 100 billion.
    3:51:05 Okay.
    3:51:06 Like how?
    3:51:07 Yeah.
    3:51:08 Yeah.
    3:51:11 Like how is that freaking possible?
    3:51:16 Like what, like it’s just, it’s such a staggering concept that I-
    3:51:21 I actually wanted to show you a tweet that blew my mind from Elon from a while back.
    3:51:25 He said, Elon said, as a friend called it, this is the ultimate skill tree.
    3:51:31 This is a wall of galaxies, a billion light years across.
    3:51:32 So these are all galaxies.
    3:51:33 Yeah.
    3:51:36 Like what the, like how, how is it that big?
    3:51:37 Like how the hell?
    3:51:40 I’m like, you know, I can read the textbook into this and then that and the whatever eight
    3:51:42 billion years and the big bang and the whole thing.
    3:51:44 And then it’s just like, all right, wow.
    3:51:48 And then it’s like, all right, the big bang, all right, like what was, what was before the
    3:51:49 big bang?
    3:51:56 Do you think we’ll ever, we humans will ever colonize like a galaxy and maybe even go beyond?
    3:51:57 Sure.
    3:51:58 I mean, yeah.
    3:51:59 I mean, in the fullness of time.
    3:52:00 Yeah.
    3:52:01 So you have that kind of optimism.
    3:52:02 You have that kind of hope that extends across thousands of years.
    3:52:03 In the fullness of time.
    3:52:04 I mean, yeah.
    3:52:06 I mean, yeah, you know, all the, all the problems, all the challenges with it that I do, but
    3:52:07 like, yeah, why not?
    3:52:10 I mean, again, in the fullness of time, it’ll, it’ll take a long time.
    3:52:12 You don’t think we’ll destroy ourselves?
    3:52:13 No.
    3:52:14 I doubt it.
    3:52:15 I doubt it.
    3:52:18 And, you know, fortunately we have Elon giving us, giving us the backup plan.
    3:52:19 So I don’t know.
    3:52:21 Like I grew up, you know, real Midwest sort of just like conventionally kind of Protestant
    3:52:22 Christian.
    3:52:25 It never made that much sense to me.
    3:52:26 Got trained as an engineer and a scientist.
    3:52:27 I’m like, oh, that definitely doesn’t make sense.
    3:52:31 I’m like, I know, I’ll spend my life as an empirical, you know, rationalist and I’ll figure
    3:52:32 everything out.
    3:52:37 You know, and then again, you walk up against these things, you know, you bump up against
    3:52:40 these things and you’re just like, all right, I like, okay, I guess there’s a scientific
    3:52:44 explanation for this, but like, wow.
    3:52:46 And then there’s like, all right, where did that come from?
    3:52:47 Right.
    3:52:50 And then how far back can you go on the causality chain?
    3:52:51 Yeah.
    3:52:54 And then, yeah, I mean, then even, even just, you know, experiences that we all have on
    3:52:56 earth, it’s hard to, it’s hard to rationally explain it all.
    3:53:01 And then, you know, so yeah, I guess I just say I’m kind of radically open-minded at peace
    3:53:04 with the fact that I’ll probably never know.
    3:53:07 The other thing that has happened, and maybe the more practical answer to the question
    3:53:12 is, I think I have a much better understanding now of the role that religion plays in society
    3:53:14 than I didn’t have when I was younger.
    3:53:18 And my partner, Ben, has a great, I think he quotes his father on this.
    3:53:22 He’s like, if man does not have a real religion, he makes up a fake one.
    3:53:25 And the fake ones go very, very badly.
    3:53:30 And so there’s this class, it’s actually really funny, there’s this class of intellectual,
    3:53:33 there’s this class of intellectual that has what appears to be a very patronizing point
    3:53:37 of view, which is, yes, I’m an atheist, but it’s very important that the people believe
    3:53:40 in something, right?
    3:53:43 And Marx had like the negative view on that, which is religion is the opiate of the masses,
    3:53:46 but there’s a lot of like right-wing intellectuals who are themselves, I think, pretty atheist
    3:53:49 diagnostic that are like, it’s deeply important that the people be Christian or something
    3:53:50 like that.
    3:53:53 And on the one hand, it’s like, wow, that’s arrogant and presumptive.
    3:53:58 But on the other hand, you know, maybe it’s right because, you know, what have we learned
    3:54:02 in the last hundred years is in the absence of a real religion, people will make up fake
    3:54:03 ones.
    3:54:07 There’s this writer, there’s this political philosopher who’s super interesting on this
    3:54:08 name, Eric Vogelin.
    3:54:12 And he wrote this, he wrote in that sort of mid part of the century, mid and late part
    3:54:13 of the 20th century.
    3:54:17 He was like born and I think like 1900 and like died in like 85.
    3:54:23 So he saw the complete run of communism and Nazism and himself, you know, fled, I think
    3:54:26 he fled Europe and, you know, the whole thing.
    3:54:30 And, you know, his sort of big conclusion was basically that both communism and Nazism
    3:54:36 and fascism were basically religions were, but like in the deep way of religions, like
    3:54:39 they were, you know, we call them political religions, but they were like actual religions.
    3:54:43 And, you know, they were the, they were what Nietzsche forecasted when he said, you know,
    3:54:47 God is dead, we’ve killed him and we won’t wash the blood off our hands for a thousand
    3:54:48 years, right?
    3:54:53 Is we will come up with new religions that will just cause just mass murder and death.
    3:54:57 And like, you read his stuff now and you’re like, yep, that happened, right.
    3:55:00 And then of course, as fully, you know, elite moderants, of course, we couldn’t possibly
    3:55:02 be doing that for ourselves right now.
    3:55:04 But of course we are.
    3:55:08 And you know, I would argue that Eric Vogelin for sure would argue that the last 10 years,
    3:55:11 you know, we have been in a religious frenzy, you know, the woke that woke has been a full
    3:55:16 scale religious frenzy and has had all of the characteristics of a religion, including
    3:55:21 everything from patron saints to holy texts to, you know, sin.
    3:55:26 It said, woke, wokeness has said every aspect of a, wokeness has said every, I think it’s
    3:55:31 that every single aspect of an actual religion other than redemption, right, which is maybe
    3:55:34 like the most dangerous religion you could ever come up with is the one where there’s
    3:55:35 no forgiveness.
    3:55:36 Right.
    3:55:39 And so I think if Vogelin were alive, I think he would have zeroed right in on that would
    3:55:40 have said that.
    3:55:43 And, you know, we just like sailed right off.
    3:55:46 I mentioned earlier, like we, we somehow rediscover the religions of the Indo Europeans
    3:55:49 were all into identity politics and environmentalism.
    3:55:52 Like, I don’t think that’s an accident.
    3:55:58 So it’s anyway, like there, there is something very deep going on in the human psyche on
    3:56:07 religion that is not dismissible and needs to be taken seriously, even if one struggles
    3:56:10 with the, the specifics of it.
    3:56:15 I think I speak for a lot of people that has been a real joy and for me, an honor to get
    3:56:21 to watch you seek to understand the human psyche as you described you in that 30 year
    3:56:24 part of your life.
    3:56:26 And it’s been an honor to talk with you today.
    3:56:27 Thank you, Mark.
    3:56:28 Thank you, Alex.
    3:56:29 Is that it?
    3:56:31 That’s only, only how long is that?
    3:56:36 Four hours with Mark Andreessen is like 40 hours of actual content.
    3:56:41 So I’ll accept being one of the short ones for the listener.
    3:56:47 Mark looks like he’s ready to go for 20 more hours and I need a nap.
    3:56:48 Thank you, Mark.
    3:56:49 Thank you, Alex.
    3:56:52 Thanks for listening to this conversation with Mark Andreessen.
    3:56:57 To support this podcast, please check out our sponsors in the description.
    3:57:02 And now let me leave you with some words from Thomas Sowell.
    3:57:09 It takes considerable knowledge just to realize the extent of your own ignorance.
    3:57:21 Thank you for listening and hope to see you next time.
    3:57:25 [Music]
    3:57:27 (gentle music)
    3:57:30 (upbeat music)

    Marc Andreessen is an entrepreneur, investor, co-creator of Mosaic, co-founder of Netscape, and co-founder of the venture capital firm Andreessen Horowitz.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep458-sc
    See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

    Transcript:
    https://lexfridman.com/marc-andreessen-2-transcript

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Marc’s X: https://x.com/pmarca
    Marc’s Substack: https://pmarca.substack.com
    Marc’s YouTube: https://www.youtube.com/@a16z
    Andreessen Horowitz: https://a16z.com

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Encord: AI tooling for annotation & data management.
    Go to https://encord.com/lex
    GitHub: Developer platform and AI code editor.
    Go to https://gh.io/copilot
    Notion: Note-taking and team collaboration.
    Go to https://notion.com/lex
    Shopify: Sell stuff online.
    Go to https://shopify.com/lex
    LMNT: Zero-sugar electrolyte drink mix.
    Go to https://drinkLMNT.com/lex

    OUTLINE:
    (00:00) – Introduction
    (12:46) – Best possible future
    (22:09) – History of Western Civilization
    (31:28) – Trump in 2025
    (39:09) – TDS in tech
    (51:56) – Preference falsification
    (1:07:52) – Self-censorship
    (1:22:55) – Censorship
    (1:31:34) – Jon Stewart
    (1:34:20) – Mark Zuckerberg on Joe Rogan
    (1:43:09) – Government pressure
    (1:53:57) – Nature of power
    (2:06:45) – Journalism
    (2:12:20) – Bill Ackman
    (2:17:17) – Trump administration
    (2:24:56) – DOGE
    (2:38:48) – H1B and immigration
    (3:16:42) – Little tech
    (3:29:02) – AI race
    (3:37:52) – X
    (3:41:24) – Yann LeCun
    (3:44:59) – Andrew Huberman
    (3:46:30) – Success
    (3:49:26) – God and humanity

    PODCAST LINKS:
    – Podcast Website: https://lexfridman.com/podcast
    – Apple Podcasts: https://apple.co/2lwqZIr
    – Spotify: https://spoti.fi/2nEwCF8
    – RSS: https://lexfridman.com/feed/podcast/
    – Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
    – Clips Channel: https://www.youtube.com/lexclips

  • #457 – Jennifer Burns: Milton Friedman, Ayn Rand, Economics, Capitalism, Freedom

    AI transcript
    0:00:07 The following is a conversation with Jennifer Burns, a historian of ideas, including the
    0:00:14 evolution of economic, political and social ideas in the United States in the 20th century to today.
    0:00:22 She wrote two biographies, one on Milton Friedman and the other on Ian Rand, both of which I highly
    0:00:30 recommend. This was a super technical and super fascinating conversation. At the end, I make a
    0:00:36 few comments about my previous conversation with President Zelensky, for those of you who may be
    0:00:43 interested. And now a quick few second mention of a sponsor. Check them out in the description,
    0:00:51 it’s the best way to support this podcast. We’ve got brain.fm for focus, github for programming and
    0:00:59 AI, element for delicious electrolytes, Shopify for merch and AG1 for health. She’s wise with my
    0:01:06 friends. Also, if you want to get in touch with me, go to electstreaming.com/contact and now
    0:01:10 onto the full ad reads. As always, no ads in the middle. I try to make this interesting,
    0:01:16 but if you must skip them, please still check out our sponsors. If I can only speak English,
    0:01:24 I enjoy their stuff. Maybe you will too. Click the links by the stuff. Glory shall be ours.
    0:01:32 This episode is brought to you by brain.fm, a platform that offers music specifically made
    0:01:39 for focus. I talk about listening to brown noise a lot. It’s actually funny, but I don’t believe
    0:01:46 brain.fm has brown noise. But that’s not what I use it for. I usually play brown noise because
    0:01:52 basically everything has brown noise. YouTube has brown noise, Spotify has brown noise.
    0:02:01 I use that as one layer and as the second layer, I’ll use music from brain.fm. So there’s all kinds
    0:02:12 of almost like ethereal soundtracks. Maybe there’s a bit of like a techno beat. I like the stuff with
    0:02:20 the beat, just very light. Where the beat does not have this kind of edge that distracts me,
    0:02:26 but there’s still a rhythm to it. So I’ll have that plus a bit of brown noise and that’s like a
    0:02:32 really beautiful focus. I believe these ads are for the episode with Jennifer Burns, Milton Friedman.
    0:02:36 Did you know that he wrote capitalism and freedom in just six months
    0:02:42 while teaching full-time? Also, did you know that Brad Knight wrote JavaScript
    0:02:47 and I think a week, maybe 10 days? Passion and focus, ladies and gentlemen,
    0:02:54 gets a lot of stuff done. And you should try to increase yours by trying brain.fm for free
    0:03:01 for 30 days by going to brain.fm/lex. That’s brain.fm/lex for 30 days free.
    0:03:11 This episode is also brought to you by GitHub and GitHub Co-Pilot, the super amazing AI that
    0:03:18 helps you program. If you don’t know what GitHub Co-Pilot is, ladies and gentlemen, you are missing
    0:03:28 out. I’m going to be doing a lot of programming podcasts coming up and I mean I really just don’t
    0:03:37 even program without AI anymore. It is true, it is fully an assistant at this point, but not a
    0:03:44 kind of guide. So I’ve not really had success with anything agentic. Really, the thing I’m
    0:03:48 interested in, especially when I’m actually trying to get work done, I’m interested in maximizing
    0:03:54 my productivity. And for that, the difficult things that an agent is supposed to be able to do, I
    0:04:00 still do faster and better, those difficult decisions. I don’t like the task of fixing
    0:04:12 decisions made by agents, but fixing code generated by Co-Pilot, for example, that is much
    0:04:16 more pleasant. It’s much more fun, it’s much more efficient, especially because the mistakes are not
    0:04:25 that numerous. Anyway, I have a lot of people writing to me trying to get into programming.
    0:04:31 One of the things you should definitely get to know is GitHub and you should get to know GitHub
    0:04:38 Co-Pilot and all the suite of developer tools they have to help you write code with the help of AI.
    0:04:51 It’s great. To try out GitHub Co-Pilot for free, go to gh.io/copilot. That’s gh.io/copilot.
    0:05:00 This episode is also brought to you by Element, my daily zero sugar and delicious electrolyte mix.
    0:05:06 Did you know that Ayn Rand’s daily diet was black coffee, french fries and cigarettes?
    0:05:14 Well, she should have been consuming some element. I mean, listen, let’s not be judgmental here.
    0:05:21 Churchill did quite a few impactful things in the world and his diet and
    0:05:29 liquids and substances he consumed were just atrocious and the guy was out of shape and it was
    0:05:36 just a mess. But he lived a long life and a productive life and one of the most impactful
    0:05:43 and influential humans in history. So there you go. But it’s not like element
    0:05:53 makes you not impactful. It just is a little boost, but it’s not going to get your shit done for you.
    0:05:59 You still need to take big risks and take on the world and do epic shit,
    0:06:06 but might as well be a little healthier for it, especially when you’re doing like crazy physical
    0:06:14 endurance event. Your electrolytes need to be on point. Get a simple path for free with any
    0:06:22 purchase. Try it at drinkelement.com/lex. This episode is also brought to you by Shopify,
    0:06:27 a platform designed for anyone to sell anywhere with a great looking online store.
    0:06:35 I often talk about capitalism when I do the ad read for Shopify and no better episode than
    0:06:43 for many hours focuses on the work of Milton Friedman, who was the seminal figure of the
    0:06:51 Chicago School of Economics. And Ein Rand, who is basically the most hardcore and saying the so
    0:06:58 defender of capitalism. Howard Work, his architectural principles that we talk about with Jennifer
    0:07:07 Burns, I mean is the embodiment of this spirit of, “Fuck you, I’ll do whatever I want. I’ll do it my
    0:07:16 way.” That radical individualism that makes up America, that makes up the individual gears that
    0:07:23 make up the machinery of capitalism. That is the American way and that has some downsides,
    0:07:28 but mostly it’s upsides. It’s the reason we have so many amazing things and the quality of life is
    0:07:33 going up and the productivity, the GDP is going up, not just in the United States, but across the
    0:07:41 world, thanks to the incredible innovation by US inventors, US companies. So anyway, Shopify is
    0:07:46 just one implementation of that. First of all, of course, the engineers that create Shopify,
    0:07:51 but if you yourself want to sell stuff, you’re creating something and you want to sell it,
    0:08:01 Shopify enables you to do that. Sign up for $1 per month trial, period, at Shopify.com/Lex.
    0:08:06 That’s all lowercase. Go to Shopify.com/Lex to take your business to the next level today.
    0:08:14 This episode is also brought to you by AG1. And all in one daily drink to support better health
    0:08:20 and peak performance as I slide slowly down in my chair. It is late late at night,
    0:08:27 embarrassingly so. And I’ve lost all energy and I’m slowly losing my mind.
    0:08:38 And there’s a cup next to me that I am swirling gradually. It is a cup of ice with some water
    0:08:46 and element in it, but it makes me feel like maybe it’s a whiskey. And whiskey is probably
    0:08:52 something I need at this moment. But let us focus on the essentials. And definitely not whiskey,
    0:08:57 but something way healthier, which is AG1. I already had it twice today, did crazy exercise,
    0:09:05 didn’t sleep much the night before, had to do a super long podcast, had to do a lot of reading.
    0:09:13 It was just an insane day, my friends. I’m so grateful to be alive. And yeah, there’s the little
    0:09:20 joys of drinking a bit of AG1. Does it do much for me? I don’t know. It makes me feel like it does.
    0:09:26 It’s like a really nice multivitamin. Brings joy to my life. I miss it when it’s not there.
    0:09:29 Who knows? We’re all going to die in the end.
    0:09:38 Anyway, they’ll give you a one month supply of fish oil when you sign up with drinkag1.com/lex.
    0:09:45 This is the Lex Friedman podcast. To support it, please check out our sponsors in the description.
    0:09:55 And now, dear friends, here’s Jennifer Burns.
    0:10:13 You have written two biographies, one on Milton Friedman and one on Ayn Rand. So if we can,
    0:10:16 we will focus on each one separately. But first, let’s talk about the ideas that
    0:10:22 two of them held in common, the value of individual freedom, skepticism of collectivism,
    0:10:27 and the ethics of capitalism. Can you talk about the big picture ideas they converge on?
    0:10:33 Yeah. So Milton Friedman and Ayn Rand, in the biggest picture, they’re both
    0:10:38 individualists and they’re skeptical of collectivities and collectivism.
    0:10:42 So their unit of analysis is the individual, what’s good for the individual,
    0:10:46 what works for the individual, and their understanding of society flows from that.
    0:10:55 They also both use this focus on individualism to justify and to support capitalism as a social
    0:11:01 and economic system. So we can put them in a similar category. We can call them individualists.
    0:11:06 We could call them libertarians of a sort. They’re also really different in how they approach
    0:11:14 capitalism, how they approach thinking. Ayn Rand developed her own moral and philosophical system
    0:11:20 to justify individualism and to connect the individual to capitalism and to support
    0:11:25 capitalism as a social and economic system. Friedman struggles a bit more with how to justify
    0:11:32 capitalism and he’ll ultimately come down to freedom as his core value, his God, as he says.
    0:11:38 And so freedom does connect back to the individual, but he’s not justifying capitalism for his own
    0:11:44 sake. He’s justifying it for its ability to underwrite freedom in a social sense and also in the
    0:11:48 individual sense. At a high level, are there interesting differences between them? You already
    0:11:53 mentioned a few, maybe in terms of who they are personally, maybe in terms of how they approach
    0:11:58 the justification for capitalism or maybe other ways. Yeah, for sure. So beyond this idea that
    0:12:03 that Milton Friedman takes a while to come to his justification of capitalism,
    0:12:09 Morzine Rand kind of has it from the start. She really focuses on the core quality of
    0:12:15 rationalism and rationality. Rationality is the defining feature of human beings. And so
    0:12:22 she works from there, whereas Milton Friedman eventually converges on this idea of freedom.
    0:12:27 So that’s one part of it. The other is their intellectual styles are really, really different.
    0:12:32 Their interpersonal styles are really different. So Friedman has big ideas, big principles that
    0:12:38 guide him, but he’s also deeply empirical. He spends most of his career doing historical research,
    0:12:43 economic research, pulling data from how people actually make economic decisions and live in
    0:12:49 the world and using them to test and refine his theories. Where Rand, to some degree, we could
    0:12:53 say she’s empirical and that she lives through the Russian Revolution and takes a very big lesson
    0:13:01 from that. But her style of thinking is really first principles, an axiomatic approach, going from
    0:13:08 the basic idea of rationality and then playing that out in different spheres. And so those are
    0:13:14 just very different intellectual approaches. And then they lead in some ways to really different
    0:13:21 ways of thinking about how you get things done in the world. Ein Rand is a purist. She wants to
    0:13:28 start with the pure belief. She doesn’t want it to be diluted. One of her favorite sayings was,
    0:13:32 it’s earlier than you think. In other words, we’re still moving towards a place where we can really
    0:13:38 hold and express these ideals purely. Friedman, although he didn’t use this terminology, was
    0:13:43 much more half a loaf guy. I’ll take what I can get and then I’ll try to move to where I really
    0:13:49 want to be. But he is able to compromise, especially when he moves from being an economist into being
    0:13:55 more of a political thinker. And so that’s a really different intellectual style. And then
    0:14:02 it also plays out in their lives in that Ein Rand is incredibly schismatic. I mean, she wants her
    0:14:08 friends to believe what she believes and support what she supports. And she’s willing to break
    0:14:14 a relationship if it doesn’t match. Milton Friedman, he also does tend to have friends
    0:14:21 who agree with him. Yet he’s always willing to debate his opponents and he’s willing to do so
    0:14:27 with a smile on his face. He’s the happy warrior. And he actually will win a lot of debates simply
    0:14:33 by his emotional affect and his cheerfulness and his confidence, where Rand will lose debates because
    0:14:39 she gets so angry in the face of disagreement. So yeah, they have a lot of similarities and a
    0:14:43 lot of differences. And it’s been really fascinating to kind of dive deep into both of them.
    0:14:50 I just re-listened to Ein Rand’s, I think, last lecture or at least it’s called that. And just
    0:14:58 the confrontational nature of how she answers questions or how she addresses critics and so
    0:15:05 on, there is a kind of charisma to that. So I think both of them are very effective at winning over
    0:15:12 sort of popular support, but in very different styles. It seems like Ein Rand is very cranky,
    0:15:16 but there’s, I mean, it’s the most charismatic, cranky person I think I’ve ever listened to.
    0:15:24 Yeah, I mean, people talked about her meeting her and coming to believe in her ideas
    0:15:30 in a similar way as they did with Marxism in that suddenly everything made sense.
    0:15:33 And that when they came to believe in objectivism, they felt they had this
    0:15:39 engine for understanding the entire world. Now after a while, for most people, that then became
    0:15:45 confining. But yeah, that’s certainty. And Friedman had some of that as well. He clothed it differently.
    0:15:50 He clothed it in happiness, where Rand kind of closed it, as you said, in crankiness or anger.
    0:15:55 I mean, there’s also an arc to Rand. She gets kind of angrier and angrier and crankier and crankier
    0:16:00 over the course of her life. What I enjoyed about my research is I was able to get into this early
    0:16:06 moment when she was different and a little more open. And then I kind of watched her clothes and
    0:16:12 her heart in over time. Would it be fair to say that Milton Friedman had a bit more intellectual
    0:16:19 humility, where he would be able to sort of evolve over time and be convinced by the reality of the
    0:16:26 world to change sort of the nuances of policy, the nuances of how he thought about economics
    0:16:31 or about the world? Yeah, absolutely. Friedman believed in being able to say I was wrong.
    0:16:36 And there are some things he said he was wrong about, will delve more into
    0:16:42 monetarism and monetary policy. But he was able to talk about the ways his ideas hadn’t mapped
    0:16:46 onto the world the way he thought they would. He does a really interesting interview at the
    0:16:53 end of his life where he’s beginning to voice some doubts about globalization, which was,
    0:16:56 he was sort of a prophet of globalization, a cheerleader of globalization. He really thought
    0:17:00 it would lead to a better world in all respects. And towards the end of his life, it’s about two
    0:17:07 years before he dies, there’s a note of doubt about how globalization unfolded and what it would mean,
    0:17:11 particularly for the American worker. And so you can see him still thinking. And that to me,
    0:17:17 I had sort of assumed he became crankier and crankier and more and more set in his ways. And
    0:17:20 of course, there’s a phase where he does become that way, especially since he’s in the public
    0:17:24 eye and there’s not room for nuance. But to find in the last years of his life,
    0:17:30 of his life, him being so reflective, that was absolutely not something Rand could do.
    0:17:34 I think there’s a thread throughout this conversation where we should actually also
    0:17:40 say that you’re kind of a historian of ideas. I am a historian of ideas, yes.
    0:17:48 And so we’re talking about today, in part, about two people who kind of fought for ideas,
    0:17:53 for an idea, like we mentioned, freedom for capitalism. And they did it in very different
    0:18:00 ways. And it’s so interesting to see sort of the impact they both had and how their
    0:18:08 elucidation explanation of those ideas like reverberated throughout society and how we together
    0:18:14 as a society figure out what works, the degree to which they have influence on the public,
    0:18:17 the degree to which they have influence on individual administrations like the Reagan
    0:18:24 administration, Nixon and so on, and how it might return like fadeaway and then come back
    0:18:31 in the modern times. And it’s so interesting if you just see this whole world as a game of ideas
    0:18:38 where we were like pushing and pulling and trying to figure stuff out. A bunch of people got real
    0:18:45 excited over a hundred years ago about communism and then they tried stuff out and then the
    0:18:52 implementation broke down and we keep playing with ideas. So these are the two greats of playing
    0:18:55 with ideas. I think that’s a thread that just runs through this.
    0:19:01 Yeah. And kind of pushing back against that movement towards communism, social democracy,
    0:19:06 but one difference that I really should emphasize, Rand is a writer of fiction.
    0:19:10 She’s a philosopher, but she’s also a writer of fiction. So she is working
    0:19:16 almost in the mythic register, much more in the psychological register. She’s creating characters
    0:19:22 that people identify with and people relate to experiences they’ve had. And that’s one of the
    0:19:27 reasons she hits so deep. And she’s also offering people, I read all the fan letters to her. People
    0:19:35 would say things like, “I read the fountain head and now I’m getting a divorce.” Having
    0:19:40 just these incredible realizations. Mill and Freeman didn’t get such things.
    0:19:45 And Mill and Freeman didn’t get such things. Or I’ll meet someone and they’ll say to me,
    0:19:51 “Ian Rand is the reason I went to medical school.” A couple of women said this to me a few years back.
    0:19:55 I never even occurred to me that I could be a doctor until I read “Ian Rand” and I said,
    0:19:59 “I’m going to go to medical school.” And so she has that really intense impact on people.
    0:20:07 So she thought of herself as rational. She thought of rationality as what she was doing,
    0:20:14 but she was actually doing a mythopoetic psychological work as well. Whereas Freeman,
    0:20:19 on the one hand, was much more rational. There’s a whole set of economic thinking and he provides
    0:20:25 a rational framework for understanding the world and it’s the framework of neoclassical economics.
    0:20:32 At the same time, he does pull on mythologies of the idea of America and the Gilded Age,
    0:20:38 the frontier mythology, the individual immigrant, the settler mythology. He pulls on these,
    0:20:44 but he doesn’t create them and he’s more kind of playing a tune he already has.
    0:20:50 Whereas I think Rand really does something a little bit deeper in her ability to reach into
    0:20:57 people’s psyche and then take that emotional, psychological experience and fuse it to an
    0:21:03 intellectual world and a political world. And that’s really what makes her so powerful.
    0:21:09 And so I think she comes back in to relevancy in a different way than Friedman does because
    0:21:16 I think in some way she’s tapped into a more universal human longing for independence and
    0:21:22 autonomy and self-creation and self-discovery. Nevertheless, there are still pragmatic ideas
    0:21:28 that are still important today for Milton Friedman, even just on the economics level.
    0:21:36 So let’s dig in. Let me try. I took some notes. Let me try to summarize who Milton Friedman is
    0:21:42 and then you can correct me. Okay. So he is widely considered to be one of the greatest,
    0:21:46 the most influential economists in history, not just the 20th century, I think, ever.
    0:21:53 He was an advocate of economic freedom, like we said, and just individual freedom in general.
    0:21:59 He strongly advocated for free market capitalism and limited government intervention in the economy,
    0:22:04 though you do give… I’ve listened to basically everything you have on the internet.
    0:22:08 You give some more depth and nuance on his views on this and in your books.
    0:22:17 He led the famed Chicago School of Economics and he won the Nobel Prize in Economics in 1976.
    0:22:24 He greatly influenced economic policies during the Reagan administration and other administrations.
    0:22:29 He was an influential public intellectual, highly influential, not just among economists.
    0:22:38 He lived 1912 to 2006. So that means he lived and worked through some major world events
    0:22:43 where his ideas were really important, the Great Depression, with the New Deal, World War II,
    0:22:50 with the post-war reconstruction, the rise and fall of the Bretton Woods Monetary System,
    0:22:56 as we may talk about, the Cold War and all the conflicts involved in that,
    0:23:01 sort of the tensions around communism and so on, so the fall of the Soviet Union.
    0:23:08 And also he has some interesting relationships to China’s economic transformation since the 1970s,
    0:23:11 the stagflation of the 1970s, and I’m sure there’s a lot more.
    0:23:19 Can you maybe continue this thread and give a big picture overview of the ideas he is known for?
    0:23:27 Yeah, sure. And that’s a great summary. You learn fast. So let me start with the economics and
    0:23:35 then I can kind of transition to how he used those economic ideas to become a real voice
    0:23:37 in the American conservative movement, the American political realm.
    0:23:43 So I’ll kind of highlight four ideas or contributions or episodes.
    0:23:49 One was his work with Anna Schwartz in revising our understanding of the Great Depression.
    0:23:54 And that’s tightly related to the second, which is the School of Monetarism
    0:24:03 that he and Schwartz really become founders of. Then there is the prediction of stagflation
    0:24:09 and the explanation of that in the 1970s, which really is one of these sort of career-making
    0:24:14 predictions. And we can dig into that. And then in terms of technical economics,
    0:24:21 he’s known for the permanent income hypothesis which he develops with a group of female collaborators
    0:24:27 that I can talk about. So those are kind of four technical pieces and being really brought together
    0:24:32 in what becomes the Chicago School of Economics. He’s undoubtedly the head and the leader of the
    0:24:38 Chicago School of Economics. There’s an earlier generation that he learns from. There’s his
    0:24:44 generation. There’s also a Chicago School of Law and Economics that’s really profoundly influential.
    0:24:48 And then there’ll be kind of a third generation that he’s somewhat distinct from,
    0:24:54 but that goes on to really shape economics. But let me go back to these kind of four pieces,
    0:25:01 and let me start with Great Depression. So Milton Friedman actually lives through the
    0:25:09 Great Depression. He’s in college when it hits, and he is, so he’s in college just 1928 to 1932.
    0:25:16 And he’s aware of the Depression, and he’s deciding, should I study mathematics or should
    0:25:23 I study economics? And he’s had some good economics teachers, but it’s really the context.
    0:25:29 It’s looking around at the slow dissolving of economic prosperity. So he decides to go to
    0:25:34 Chicago. He decides to study economics. And what’s really interesting is that
    0:25:42 the Great Depression is so unexpected. It’s unpredicted. It’s unprecedented. And economists
    0:25:47 are really struggling to know how to respond to it. And so he’s going to arrive at the University
    0:25:54 of Chicago when the field is struggling to know what to do. So he’s in this kind of really open
    0:26:00 space where the institutional economics of the 1920s has failed to predict, which was focused
    0:26:05 on business cycles. This is the irony. Their big thing was charting and understanding business
    0:26:09 cycles. And then we have the biggest business cycle of all time, and they haven’t seen it coming,
    0:26:19 and they don’t have a good explanation for it. And what he will get at Chicago is the remnants of
    0:26:26 the monetary understanding of the economy. And so his teachers, they don’t know exactly what’s
    0:26:33 going on, but they look first to the banking crisis. They look first to the, in 1933, it’s,
    0:26:37 you know, bank runs, failures of, maybe it’s up to a third of American banks. Thousands of banks
    0:26:42 are failing per week. So they’re focused on that. So that’s the first kind of imprint he will have.
    0:26:48 The Great Depression has something to do with a banking system. The second imprint he will have
    0:26:54 is that all of his professors are profoundly concerned about the social crisis. They want
    0:26:59 relief programs. They want them now. They want bank regulation and financial reform. They’re
    0:27:04 very active. This is not laissez-faire by any stretch of the imagination. So Friedman has
    0:27:14 that imprinting. And then about, so that’s, he gets there in ’32, ’36, ’37, the ideas of John Manured
    0:27:18 Keynes from Britain, which has a different explanation. Keynes has a different explanation,
    0:27:23 the Great Depression will kind of make landfall in American economics and be very profoundly
    0:27:29 influential on most American economists, but Friedman already, it’s too late for Friedman. He
    0:27:36 already has a different perspective. So Keynesianism unfolds. I can say more about that, but it basically
    0:27:44 leads to more active federal government participation in the economy. And what underlies
    0:27:49 a lot of that, it’s adaptation in America particularly, is the idea that capitalism
    0:27:58 has failed. Capitalism has revealed itself to have a profound flaw in that it’s two,
    0:28:04 it’s cycles of boom and bust, create social instability, chaos. It needs to be tamed. It
    0:28:12 needs to be regulated. And so that becomes the kind of baseline of politics in the United States,
    0:28:16 the understanding of the New Deal, the understanding of the Democratic Party, even to some extent
    0:28:22 the understanding of the Republican Party. And Friedman never quite sure about that. He has
    0:28:26 a hunch that there’s something else going on, and he does not buy that capitalism has sort of
    0:28:31 ground to a halt, or the other idea is that capitalism has gone through some sort of phase
    0:28:38 transition. And it worked great maybe while we had a frontier. This is a very serious argument
    0:28:44 that people are making. United States used to have a frontier, a place where Europeans hadn’t
    0:28:48 fully settled. Of course, they’re pushing out the native tribes. That’s another story, but
    0:28:53 that this frontier is the engine of economic growth, and the frontier is now over, it’s closed,
    0:28:58 and we’re going to stagnate. There’s a theory of secular stagnation. And so to deal with secular
    0:29:03 stagnation, we’re just going to have to have a more active state. So Friedman is suspicious of all
    0:29:09 these assumptions. And he has this idea that there’s something to do with money. Money is somehow
    0:29:16 important. And so he joins together with Anna Schwartz, who is an economist. She doesn’t at
    0:29:21 this time hold a PhD. She’s working for the National Bureau of Economic Research, and they come
    0:29:27 together to do this study of money in the US economy. And it takes them 12 years to write the
    0:29:33 book. And they’re releasing their ideas, and they’re arguing, and Friedman is writing papers,
    0:29:39 giving talks, saying money’s really important. And nobody’s really believing him. He’s a crank.
    0:29:44 He’s at Chicago. Chicago is a well-known university, but he’s sort of considered a crank.
    0:29:52 And then in ’63, he and Anna Schwartz published this book, and it’s 800 pages. It’s a reinterpretation
    0:29:57 of the history of the United States through money. The central character is money, whether it’s
    0:30:02 specie, greenback, or the US currency. And they have a whole chapter on the Great Depression.
    0:30:08 What they’ve literally done, Schwartz has done most of this. Schwartz has gone to banks and said,
    0:30:13 show me your books. And then she’s added up column by column. How much money is in your vault? How
    0:30:18 much money is on deposit? How much money is circulating? And so they literally have graphs.
    0:30:23 You can see them in the book of how much money has been circulating in the US at various different
    0:30:28 points in time. And when they get to the Great Depression, they find the quantity of money
    0:30:33 available, and the economy goes down by a third. And in some ways, this is completely obvious,
    0:30:42 because so many banks have failed. And we don’t have any type of bank insurance at that point.
    0:30:46 So if your bank goes under, your savings are there, the money essentially vanishes. And it’s
    0:30:51 fractional reserve banking, right? So you’ve put in, they can loan up to 90% off on their deposits.
    0:30:58 And so Friedman and Schwartz present this argument that what really made the Great
    0:31:03 Depression so bad was this drop in the amount of money, the 30% drop in the money. They called
    0:31:08 the Great Contraction. And then they go further and they say, well, how did this happen? And why?
    0:31:15 And they pinpoint the Federal Reserve, which is a fairly new institution at that time. And they
    0:31:19 say, what did the Federal Reserve do, the lender of last resort? What did it do in the face of what
    0:31:25 they’re depicting as a massive, unprecedented liquidity crisis? And they find it’s not really
    0:31:32 doing much. And they really dig into the details. And they find that the Federal Reserve has gone
    0:31:37 through a sort of personnel change. And some of the key leaders in the 1920s, Benjamin Strong,
    0:31:42 is one of them. He’s now deceased. And the dominance of the New York Federal Reserve,
    0:31:49 which in their telling is global, it’s interconnected, it’s seen a lot of financial
    0:31:55 things come and go. And they believe that the New York Fed had the understanding to recognize
    0:31:58 this is a liquidity crisis. We should be very generous. We should support all the banks.
    0:32:05 Their influence has diminished for the kind of banks that are more, they don’t say like the
    0:32:08 rubes and the hicks, but it basically is. It’s like, people in charge don’t know what they’re
    0:32:14 doing. And so the Fed pursues this kind of policy of masterly inactivity. They don’t see it as a
    0:32:22 problem. They don’t do much. There’s an enormous liquidity crisis. And that’s their version of
    0:32:27 what the Great Depression is all about, that it’s a financial system meltdown. It’s a liquidity
    0:32:34 crisis. And that in some ways, well, in many ways, they argue very strong counterfactual argument.
    0:32:39 The Federal Reserve could have prevented it, and it did not. And so it becomes then
    0:32:46 an institutional failure and a political failure, not a failure of capitalism as a system.
    0:32:53 And so this book comes out, it’s a blockbuster. And even those economists who’ve been like,
    0:32:58 “Freedmen is a crank. I don’t buy it,” are like, “Freedmen in shorts are onto something.
    0:33:04 Milton Friedman on a shorts are onto something.” And so that really changes the game. And
    0:33:11 this is also one of his most influential contributions, because Friedman in shorts becomes
    0:33:17 the playbook for the Federal Reserve. And we have lived through this, right? The financial crisis,
    0:33:23 the Federal Reserve is ready to loan. COVID, the Federal Reserve is all kinds of new things,
    0:33:30 because no Federal Reserve chair wants to be in Friedman in shorts 2.0 that somebody writes,
    0:33:36 or they’re the bad guy who let the economy melt down. So the specifics of what they say to do
    0:33:42 have obviously evolved as the system has changed. But this is a playbook for how to deal with economic
    0:33:48 crisis. It’s Friedman in shorts. And so it’s absolutely fundamental. And that is really going
    0:33:53 to be the place he makes his mark. There’s a lot of things to say here. So first, the book we’re
    0:33:58 talking about is a monetary history of the United States in part for which Milton Friedman won the
    0:34:03 Nobel Prize. You’ve also mentioned the influence of the Great Depression. If you’re going to even
    0:34:12 just rewind to that. So he went to, I guess, college in Rutgers. And he was mathematical
    0:34:18 proclivities. So he was kind of wanted to be a mathematician. And so it’s kind of a cool crossroads.
    0:34:27 It’s interesting how the right time, the right person arrives, right? So you describe this really
    0:34:32 well that he had his choice to be a mathematician or an economist. An economist is the University of
    0:34:41 Chicago. A mathematician is Brown University, whichever. And then this is also the beginnings,
    0:34:48 as you’ve described, of mathematical economics. So he fits in nicely into this using,
    0:34:54 I think you said the number of equations started going up per paper, which is a really nice way
    0:35:02 to put it. So really, the right person at the right time to try to solve this puzzle of the economy
    0:35:08 melting down. It’s so interesting. Just one human, it’s just from just zooming in on a single human
    0:35:16 making a decision about life. And it’s hard to know when you’re in it that the world is melting
    0:35:22 down from an economics perspective. And then I could do something about this to figure out what
    0:35:27 it is. And also, I’m going to reject the mainstream narrative about why this happened.
    0:35:32 Yeah. So the other piece of the puzzle, when he goes to Rutgers, he thinks he’ll be an
    0:35:38 actuary. So Milton Friedman’s family, his parents are immigrants, Jewish immigrants from Eastern
    0:35:45 Europe, they’re pretty atypical in that they don’t stay in New York. And they moved to
    0:35:51 Raway, New Jersey, and they put together a fairly middle class life as kind of, they have a shop,
    0:35:54 they do some wholesale buying and selling. And then his father dies when he’s 16.
    0:36:00 His life becomes more precarious. But it’s never as precarious as he makes it up to be.
    0:36:04 He’s got three older sisters, they earn a good living, and suddenly they all have better grades
    0:36:11 in high school than he does, but he’s the one that goes to college. But it’s actually really
    0:36:17 important that he loses his father figure because he’s then looking for other father figures. And
    0:36:22 he meets two at Rutgers. One is Arthur Burns, who will go on to have a huge influence in his
    0:36:30 career. No relation to me, by the way. But Arthur Burns is like him, a fellow Jewish immigrant boy
    0:36:36 on the make. He’s older. And he’s making a career as an economist. And then there’s Homer Jones,
    0:36:41 who has gone to the University of Chicago and is studying with Frank Knight at Chicago and says,
    0:36:47 you have to go to Chicago. So he has these two mentors. And Burns in particular suggests, oh,
    0:36:52 I could be an economist. That could be my career path. The idea to be an actuary for an insurance
    0:36:57 company, I’m not sure where he got that idea, but he just thought that was something he could do
    0:37:01 as someone who was good at math. And so the college really opens, the perspective opens the door.
    0:37:10 And then I think it’s really key that again, he doesn’t get an explanation that he buys
    0:37:16 for the Great Depression. So then he’s looking for one. And the math part is really interesting
    0:37:23 aspect of his career. Now, he actually comes to Chicago to study with the mathematical economist,
    0:37:31 Henry Schultz. But he gets there and he thinks Schultz is kind of dumb. He really does. He’s
    0:37:36 incredibly arrogant and he just thinks this guy’s not that smart. And it seems that, I mean, Schultz
    0:37:41 did some really important work in the early stages of mathematical economics, but a lot of the oral
    0:37:46 histories about him are like, yeah, he wasn’t that bright. So Friedman’s maybe onto something.
    0:37:54 So he falls into the set of students who are really enthralled with his other professor, Frank
    0:38:00 Knight. And Frank Knight is against math and economics. Frank Knight is like a neoclassical
    0:38:05 economist, but not a mathematical economist. He’s an old school liberal. He’s really concerned about
    0:38:13 liberal democracy, economic liberalism. And Friedman is very deeply influenced by Knight.
    0:38:18 And he continues to pursue mathematical economics. So he’ll go for part of his graduate career. He
    0:38:24 goes to Columbia University, where he actually gets his PhD from. And he works with a mathematical
    0:38:30 economist there. And so he comes out trained in what will eventually be econometrics.
    0:38:36 Statistics and economics, his early publications are in statistics, but it’s not really where his
    0:38:42 intellectual heart and soul are. And eventually, he will turn very profoundly against mathematics
    0:38:47 in economics and become a sort of heterodox strain throughout 20th century economics. It says,
    0:38:55 simple models are better. We need to work on empirical, work off empirical data,
    0:39:02 not construct elegant models, and becomes really sort of counter cultural within economics in
    0:39:06 that way. And the test of a good model is it should actually predict stuff that happened.
    0:39:09 It should predict stuff that happened. It should tie back to what’s going on.
    0:39:14 I’m wondering which direction to go. So first, actually, if we could zoom out on the different
    0:39:20 schools of economics, just the basics. You mentioned neoclassical. We mentioned
    0:39:25 Kenzian economics. What else did we mention? Well, the Chicago School of Economics. Where does
    0:39:33 Austrian economics fit into that pile and Marxian economics? And can we just even just linger and
    0:39:39 try to redefine Kenzian economics and Chicago School of Economics and neoclassical economics
    0:39:44 and Austrian economics, because there’s some overlap and tension.
    0:39:51 Schools of economics. So we could start with classical economics. Classical economics,
    0:39:55 we could think of, Adam Smith is kind of your classic classical economist,
    0:40:02 the founder of the discipline. Classical economics does not really use math. It’s very close to
    0:40:09 political economy. It’s concerned with, as Smith puts it, the wealth of nations. It’s concerned
    0:40:14 to some degree with distribution. It’s concerned to some degree with what makes a good political
    0:40:22 system. And what tends to really define classical economics when you’re looking from a great
    0:40:29 distance is what’s called the labor theory of value. So where does value come from in classical
    0:40:37 economics? It comes from the labor that a person puts into it. So maybe this in some ways comes
    0:40:42 from Locke’s notion of property that you kind of mingle your labor with the natural world.
    0:40:49 We can say labor theory of value. So classical economics concerned with Smith is arguing against
    0:40:55 mercantilism for more free trade often goes by the name of political economy to show it’s more
    0:41:03 capacious. It’s thinking of politics and economics. You can still read these books today. The sentences
    0:41:08 are long. The words are different, but you can still follow along. So the real big transition
    0:41:14 from classical economics and political economy to economics, as it’s understood today, comes
    0:41:20 with the marginal revolution. And the marginal revolution is a scientific revolution that happens
    0:41:24 in a couple of different places simultaneously. This is one of these things that you see in the
    0:41:30 history of science. There’ll be some breakthrough. Darwin has a breakthrough, but somebody else has
    0:41:34 sort of the same breakthrough at the same time, totally differently. So there’s a version of
    0:41:41 marginalism that’s continental. There’s a version in the German-speaking lands, in the French-speaking
    0:41:48 lands, and in Britain. And they all kind of come together. And the shift is in the theory of value.
    0:41:58 So the theory of value in marginalism is on the margin. So say you have one apple and you want
    0:42:06 a second one. How much is going from one apple to two apple worth for you? Probably quite a bit.
    0:42:11 If you had 10 apples, maybe going to 11 apples, doesn’t matter that much. The marginal value is
    0:42:18 less. So what marginalism does, though, most importantly, is it opens the door to math and
    0:42:26 economics, because it means you can graph this. Now, you can depict this relationship graphically.
    0:42:31 And there’s some really interesting work in the history of economics that shows a lot of the
    0:42:38 people who developed marginalism were looking to physics as a model, physics, the queen of the
    0:42:45 sciences. And so they were thinking, they imported terms from the natural world to describe the
    0:42:52 social world through the lens of economics, terms like equilibrium. So the idea being that if you
    0:42:59 looked at a market, a market would reach equilibrium when everybody is bought and sold,
    0:43:05 all that they want, or the price will settle at an equilibrium price when it’s really the demand
    0:43:11 and supply are matching up. And some of these ideas are things we would pick up at a microeconomics
    0:43:18 class? Oh, yes. This is still out there. This is sort of the basic foundation of microeconomics,
    0:43:25 marginal analysis. And so in the German-speaking intellectual tradition, this is the root of
    0:43:31 Austrian economics. And people picking up the marginal revolution in the German-speaking lands
    0:43:39 are opposed to the historicists who are thinking in a more evolutionary way about how societies
    0:43:49 kind of grow and change. And they have a vision of economic ideas as applying differently to different
    0:43:55 types of social arrangements. Or the marginalists, remember, are inspired by physics. And this is
    0:44:02 a set of natural laws that applies anywhere to any sort of human society. So that’s this first
    0:44:10 really big fissure that we’ll see again and again. Are you historically minded? Do certain traits of
    0:44:17 economic life adhere and become expressed in certain types of societies? Or are there universal
    0:44:22 economic laws that flow through any type of society? So that’s kind of a juncture, a break.
    0:44:29 And so marginalism, first, people start using really geometry to kind of graph things, but
    0:44:35 marginalism is also opening up to the possibility of calculus and the possibility of creating models.
    0:44:40 But at that point in time, late 19th century, a model is something like a physicist does,
    0:44:44 like think of an inclined plane and how fast does the ball roll from one to the other? It’s
    0:44:49 a physical representation of the world. And eventually economists will start to create
    0:44:53 mathematical representations of the world. But we’re not quite there yet. So we’re late 19th
    0:44:59 century and we have this fissure, we have this introduction of marginal analysis that marks the
    0:45:05 juncture from classical economics to economics. So let’s say now we have economics, but we still
    0:45:12 have this fissure between historical thinking and let’s call it natural law thinking. That’s not
    0:45:19 quite right, but physical laws versus contingency. And then in the United States, this ends up mapping
    0:45:27 onto debates about capitalism. And so more historically minded economists tend to be
    0:45:33 interested in the progressive movement, and which is invested in taming and regulating
    0:45:41 industrial capitalism and changing its excesses, you know, factory safety laws, wage laws, working
    0:45:48 conditions laws. Yet in general, American economists all use marginal analysis just in
    0:45:54 different ways. The ones who are more drawn to marginal analysis become known as neoclassical
    0:45:59 economists. They’re neoclassical. The neo is because they’re using marginal analysis. The
    0:46:05 classical is because they don’t think we need to change the way the economy operates or the
    0:46:08 government operates. They’re not progressive. Whereas the progressives are saying things like
    0:46:16 we need to use social control. The state and the people collectively and democratically need to
    0:46:26 control the way economics unfolds and make sure things are fair and equal. So that school of
    0:46:31 thought becomes known as institutional economics in the United States by the 20th century. So it’s
    0:46:36 part of the progressive movement late 19th century. Into the 20th century, it really becomes institutional
    0:46:42 economics. And it’s quite dominant. And the neoclassical economists are still there, but they’re
    0:46:48 very much a minority. And Frank Knight, Milton Friedman’s teacher, is one of the minority
    0:46:55 neoclassical economists. And the institutionalists are much more progressive still.
    0:47:01 Is it fair to say that the neoclassical folks and even the classical folks versus the institutional
    0:47:07 economics folks, they have a disagreement about how much government intervention that should be
    0:47:13 in the economy. So neoclassical is less intervention. And then institutional economists,
    0:47:20 the progressive folks, has more intervention. Yes, exactly right. So this is the situation in the
    0:47:28 1920s. But the other piece I should mention is the first generation of progressive economists
    0:47:34 were very radical. They were closely allied with the socialist movement, with labor radicalism.
    0:47:39 And many of them lost their jobs at universities. This kind of connects to the early dawn of
    0:47:45 academic freedom. This is before academic freedom. And they were chastened. They became much more
    0:47:51 mainstream. By the time we get to the 1920s, we don’t really have radical critiques of society
    0:47:58 coming from economists. Much smaller profession, much less important than it is today. And
    0:48:04 fairly peaceful, because the 1920s are a fairly peaceful decade in the United States.
    0:48:11 So this is a situation when the Great Depression hits. And as I mentioned before, the head,
    0:48:17 the kind of most important institutional economist is Wesley Mitchell. And he has said,
    0:48:23 he’s written a whole book on business cycles. But he doesn’t see this business cycle coming,
    0:48:28 and it hits, and he doesn’t have a good explanation for it. Now, perhaps the preeminent neoclassical
    0:48:34 economist was Irving Fisher. Now, Irving Fisher is big into the stock market. And Irving Fisher
    0:48:42 says sometime in late summer, 1929, stocks are going ever higher and will continue to go ever
    0:48:48 higher forever. And so he loses his reputation after the stock market crash. So Milton Friedman
    0:48:53 is stepping into a field in which the greats have been discredited, and there’s an enormous
    0:48:59 economic crisis all around. And everybody’s struggling to figure out why the crisis happened.
    0:49:04 Yes. And the other thing he’s stepping into is a world where in the United States, there’s a
    0:49:11 great deal of anger at capitalism, at the system, unemployed people on the street in Europe. There’s
    0:49:18 rising fascist movements in Asia. There’s rising fascist movements. And so everyone’s very concerned
    0:49:23 about this. And Friedman is seeing a lot of this through the lens of Frank Knight, who feels like
    0:49:29 we are maybe reaching the end of what he calls liberalism. He calls himself an old-fashioned
    0:49:33 liberalism. We’re reaching the end of representative democratic government, because
    0:49:40 representative democratic government cannot solve these social problems. And capitalism,
    0:49:45 as it has developed, Knight is very pro-capitalist, but he says it’s generating inequality, and this
    0:49:51 is putting too many strains on the system. So Knight will become one of the people who helps
    0:50:00 Friedman think, how do I develop a new theory of capitalism that works in an era of mass democracy,
    0:50:06 where people can vote and people can express at the ballot box their unhappiness with what’s
    0:50:12 happening economically. So this larger movement will generate, of which F.A. Hayek is a part,
    0:50:18 Friedman is a part. That becomes the very early stirrings of trying to think about a new sort
    0:50:24 of liberalism, which will eventually be called neoliberalism. Okay. So if we can just linger on
    0:50:30 definitions of things. So we mentioned what neoclassical is in the institutional economics is.
    0:50:37 What’s Kenzie in economics? And the Chicago School of Economics, I guess, is a branch of
    0:50:44 neoclassical that’s a little bit more empirical versus maybe model-based. And Kenzie in this very
    0:50:52 model, model heavy, more intervention of government. So the real battle is Kenzie in versus everybody
    0:50:58 else. That is what eventually comes to pass in the United States and in the kind of overall developed
    0:51:03 kind of developed profession of economics. The other piece of the puzzle here is the
    0:51:10 introduction of mathematics. And it’s been around the edges, but it will pick up speed in the 1930s,
    0:51:18 like the econometrics society has founded. They start publishing. People start using more statistical
    0:51:23 and mathematical tools to think about economics. And they’re given a boost sort of inadvertently
    0:51:28 by the rise of Kenzie in economics. So Kenzie is trained in the neoclassical tradition.
    0:51:35 He’s an absolutely fascinating figure. He’s been there in peace negotiations at Versailles. He
    0:51:41 basically calls World War II. He’s like, hey, we’re going to have another war here,
    0:51:46 caused by Germany, because this peace treaty has been done in such an vindictive way. And people
    0:51:52 have made such bad decisions. He’s there. He sees it happening. And so when the Great Depression
    0:51:58 unfolds, he basically comes up with a new theory for explaining what’s going on. And
    0:52:04 the previous neoclassical understanding is where things go up and things go down. And when they
    0:52:09 go down, there’s a natural mechanism to bring them back up. So when the economy is going down,
    0:52:15 prices are going down, wages are going down. Everybody’s losing money, but eventually firms
    0:52:21 are going to realize, hey, I can hire people cheap. Hey, I can buy stuff cheap. I don’t have a lot of
    0:52:25 competition. Maybe I should get in the game here. And then others will start to get in and then you
    0:52:32 regenerate prosperity in that way. And so Keynes says, sure, that’s one theory, but something
    0:52:38 different is happening right now. Part of why it’s happening is because we have– the working
    0:52:43 class is more empowered now. They’re not simply going to just take low wages and ride them down
    0:52:51 to the floor. We might not hit the floor. But also, he says, people might become too anxious
    0:52:58 to spend. They might not want to invest. And Keynes has these discussions of animal spirits.
    0:53:04 He’s still enough of a political economist to think not just in terms of human rationality,
    0:53:08 but what are some other things going on in human beings? And people might decide to sit on their
    0:53:15 money. They might not invest it. And so what happens then is you could get stuck in a bad
    0:53:20 equilibrium. So in the neoclassical model, the equilibrium kind of restarts and resets itself.
    0:53:25 And he says, no, we could get stuck here. We could get stuck in the depression. And in that case,
    0:53:30 what has to happen, he says, the government stimulates investment and the government itself
    0:53:36 invests. And then he argues that– this is a student of his, Richard Kahn, says,
    0:53:41 as the government invests a dollar, it has a multiplier effect. A dollar spent by the government
    0:53:47 kind of ramifies out throughout the economy. So it takes the government and puts it in the center,
    0:53:50 as opposed to, say, the banking system or the financial system, which would be the
    0:53:57 more Friedman analysis. And for many economists of Friedman’s generation– and he’s a weird
    0:54:02 generation because it’s the generation that becomes dominant. It’s just like four years older,
    0:54:06 the men who become Keynesian economics. But that four years is really important because they come
    0:54:11 in to graduate school in economics and they get exposed to the new ideas of John Maynard Keynes.
    0:54:18 And I think it’s Paul Samuelson calls it– it was like a South Sea virus that
    0:54:24 attacked all of the younger economists, immediately succumbed, and no one under 50
    0:54:32 ever got the disease because their thinking’s already set. And so Keynesianism, Keynes himself,
    0:54:38 is very suspicious of math and economics. And he and Friedman is fascinating. One of the first
    0:54:43 books by Jan Tingerman, a Dutch economist, to use math and economics. He has huge volumes.
    0:54:51 Volume one, Keynes pans it. Volume two, Friedman pans it. So they’re in the same page, but what
    0:54:59 happens is as Keynesianism arrives in the United States, Franklin Roosevelt is not really a Keynesian.
    0:55:05 He’s kind of an accidental or experimental Keynesian. And there’s a bunch of different ideas
    0:55:09 in the United States that are very similar to Keynesianism. They’re not theorized,
    0:55:14 but they’re similar ideas that the government has to do something. So this all comes together
    0:55:22 and American economists realize that you can construct models in the Keynesian perspective.
    0:55:29 And if you can use numbers in these models, you can go to Washington, D.C. with numbers,
    0:55:37 and you seem like you have a lot more authority. And so math becomes really
    0:55:46 twinned into Keynesian economics. So the numbers are used as a symbol of expertise.
    0:55:50 We really know what the hell’s going on because we have some numbers, right?
    0:55:54 Right. And we can create a model. And so we can say, okay, in the model, the interest rate is here
    0:55:59 and taxes are here. So let’s play with government spending. Let’s make it up. Let’s make it down.
    0:56:04 And then we can get an estimation. It’ll spit out here’s predicted GDP. So the other piece of
    0:56:11 the Keynesian revolution is it really gets people thinking kind of holistically about the economy
    0:56:21 as one conceptual unit. And you then have what Hollis-Hamulson will end up calling the neoclassical
    0:56:27 synthesis. And this was still in economics today. If you take micro, you’re going to get supply and
    0:56:32 demand, scarcity, marginal analysis. If you take macro, you’re going to get a very different approach.
    0:56:38 And that’s more Keynesian-based. And so the idea is that, and this makes sense, I mean, you can think
    0:56:43 of this from statistics, right? The way things act individually versus when they’re all added
    0:56:49 together can be very different. So there’s this kind of uneasy piece where economists are using
    0:56:54 kind of neoclassical tools to analyze individual behavior and individual market
    0:56:57 behavior, and they’re shifting to a different paradigm when they think about the economy as
    0:57:03 a whole. And in this paradigm of the economy as a whole, the federal budget, the taxing and
    0:57:08 spending power of the federal government become paramount. And that is called the fiscal revolution.
    0:57:16 And that’s really the essence of Keynesianism. But the key thing to remember is that Keynesianism
    0:57:22 and Keynes are different. And there’s this famous episode where John Maynard Keynes comes to DC and
    0:57:27 he goes to dinner, and he comes back and he says to one of his friends in London, he’s, “Oh, yeah,
    0:57:36 it was really interesting. I was the only non-Keynesian there.” Yeah. So Keynesianism is more government
    0:57:45 intervention, fiscal policy. So put the government at the center of influencing the economy. And then
    0:57:51 the different flavors of whether it’s Austrian economics or Chicago School of Economics
    0:57:59 is saying, “No, we have to put less government intervention and trust the market more.” And
    0:58:06 the formulation of that from Milton Friedman is trust the money more, not trust, but the money
    0:58:13 supply is the thing that should be focused on. Yes. So the Austrians and the Chicago School see
    0:58:21 economic prosperity and growth comes from individual initiative, individual entrepreneurship,
    0:58:25 kind of private sources. The private market is what drives economic growth, not the public sector.
    0:58:32 And so for Friedman, then the question is, what is the government’s role? And because he’s lived
    0:58:38 through the Great Depression, he’s not laissez-faire, and he won’t ever be laissez-faire. Now, interestingly,
    0:58:44 Hayek, living through the Great Depression, at first is laissez-faire. And he’s like, “Sure,
    0:58:50 like let it rip.” And things get so bad that Hayek’s like, “Okay, that’s not going to work.”
    0:58:54 Can we actually define laissez-faire? So what do we mean? Like, what’s the free market? What’s
    0:59:00 laissez-faire? What’s the extreme version here? So yeah, laissez-faire means levabie in France.
    0:59:07 It’s more often used as an insult than as an actual. Very few people are completely and totally
    0:59:12 laissez-faire. That would be like the pure laissez-faire would be the sort of pure,
    0:59:16 maybe pure anarchist position, like the state does nothing, or the state isn’t even there.
    0:59:23 But it tends to, if I could maybe make it more precise, it would be focused on freedom of contract
    0:59:32 would be essential. And that means the buyer of labor and the seller of labor must have absolute
    0:59:40 freedom to contract. So that means no minimum wage law, no working hours law, no employment law,
    0:59:45 things like that. That was, and this is all pre-progressive movement. A lot of things are
    0:59:50 that way, right? You know, imagine you’re in 19th century America and you have a farm and you hire
    0:59:56 someone to help you on the farm. You offer the money, they take it. If they fall off a ladder and
    1:00:00 break their back, maybe you help them out, maybe you don’t, right? But there’s not a whole apparatus
    1:00:06 of legal liability and safety and things like that. So that would be one piece. Another piece of
    1:00:15 laissez-faire would be free trade amongst nations. So no regulation of who can invest in a nation or
    1:00:22 who can take money out of a nation. So Nippon Steel could come and invest in US Steel and there would
    1:00:28 be no grounds in which to reject that. Or you could, as a billionaire in the United States,
    1:00:33 relocate you and all your money to another country and the United States couldn’t try to keep you
    1:00:40 and nobody else could stop you from coming in. And then in the context of economic crisis,
    1:00:50 laissez-faire would not encompass centrally provided relief because in the pure theory,
    1:00:57 again, very seldom applied purely, but in the pure theory, the wages need to come down far enough
    1:01:03 and people need to be desperate enough to start taking work and to start the machine again.
    1:01:07 So the theory would be if you give people relief, they might not go back to work.
    1:01:14 Now, almost nobody says that in the Great Depression because the situation is so bad
    1:01:20 and people are starving on the street and people feel, for humanitarian and ethical reasons,
    1:01:25 it’s not okay to say that. The Austrians, though at first, Hayek and Lionel Robbins,
    1:01:30 are like, this is a business cycle and it needs to run its course and it will be detrimental
    1:01:34 if we intervene. And then pretty soon, Hayek has to change his tune.
    1:01:38 So the Austrians are the most hardcore in terms of laissez-faire.
    1:01:44 Absolutely. And so Hayek will make the turn towards accepting more of a state and then
    1:01:50 we’ll come to talk about how the state needs to support what he calls the competitive order.
    1:01:58 But his mentor, Ludwig von Mises, still remains very hardcore and is not really open to things
    1:02:03 like unemployment insurance or other state-based interventions.
    1:02:08 What does von Mises say about human suffering that’s witnessed in the Great Depression,
    1:02:13 for example? What are we supposed to do as economists, as humans that define policy?
    1:02:18 What are we supposed to see when people are suffering at scale?
    1:02:24 Yeah, I wish I knew and answer that question. I don’t know enough about von Mises and his
    1:02:33 reaction in the Great Depression. I think I would hazard that he would look more down the road and
    1:02:40 say, well, if you start here, you’re going to go places that are bad. But I don’t factually
    1:02:44 know what he said in response. I do know that Hayek’s position doesn’t last very long.
    1:02:51 It’s not a position you can hold to. Maybe you could hold to it in other cycles. The other thing
    1:03:00 that was interesting is I found very few Americans saying this. Most who were were kind of small town
    1:03:07 electeds or the most famous is Andrew Mellon, quoted by Herbert Hoover. So not directly,
    1:03:12 you don’t have him on record saying this, but apparently Hoover records in his memoirs that
    1:03:20 Mellon said something like, liquidate real estate, liquidate stocks, purge the rottenness
    1:03:26 out of the system. People will live a healthier life. And certainly, they were members of the
    1:03:31 Federal Reserve who felt like it would create, they didn’t say moral hazard, but it would create
    1:03:37 what we now call moral hazard, bad habits, where we to intervene and to save failing banks because
    1:03:42 failing banks need to be taught a lesson, they need to be taught discipline. And so a lot of
    1:03:47 people, I think, saw it in the context of discipline. This is discipline. And if you remove the
    1:03:51 discipline, you’ll be taking away something fundamental in society.
    1:03:55 So Milton Friedman never quite went all the way to Lise Fair?
    1:04:01 No. No, he didn’t see that. And what’s really interesting is the number of incredibly radical
    1:04:07 proposals that he and his teachers were floating. So I’ve mentioned Frank Knight. Another really
    1:04:15 important influence on Friedman was Henry Simons, who was a junior professor at Chicago. And Simons
    1:04:23 had this idea for what he called 100% money, which would be a law that says banks have to
    1:04:27 hold 100% of the deposits they receive. They can’t loan them out on the margin.
    1:04:32 So this would completely and totally have overhauled the US banking system. And he would have said,
    1:04:36 there’s a category of things called banks where you get deposits. And then there’s going to be a
    1:04:41 category of sort of, he didn’t say investment banks, but investment vehicles that will invest.
    1:04:48 So similar to what did happen in some ways in the banking reforms, in the 1930s, the investment
    1:04:53 banks were split from the deposit banks. And the banks that took deposits were much more
    1:04:58 highly regulated, and they were supported by the FDIC. But the point being, the Chicago
    1:05:04 School had these very radical proposals for reform, go off the gold standard, restrict
    1:05:12 the currency, change the banks, immediately relief payments now. What is important to note,
    1:05:17 though, is that they thought of all of those as emergency measures to get through the emergency,
    1:05:24 not as permanent alterations in the state of what had to be and not permanent alterations
    1:05:29 between state and market. Where the Keynesian assumption is things have changed, times have
    1:05:37 changed, we’re in a new dispensation, and we need a new relationship. So Milton Friedman
    1:05:44 is very open to doing things differently in a state of emergency. He will have different ideas
    1:05:48 during World War II than any other time. And that’s why I argue I think he would have been
    1:05:53 supportive of at least the first rounds of coronavirus relief, because I think he would
    1:05:59 have put his emergency thinking hat on. So in that way, he was definitely more flexible.
    1:06:07 You mentioned Hayek. Who is this guy? What’s his relationship to Milton Friedman in the space
    1:06:12 of ideas and in the context of the Great Depression? Can we talk about that a little bit?
    1:06:21 Sure. So F.A. Hayek is an Austrian economist who takes up a posting in London, and he’s
    1:06:27 in a mentor, a mentee rather of Ludwig von Mises. He’s writing about business cycles,
    1:06:35 Austrian capital theory, and the depression hits. And he’s one of the few economists who in the
    1:06:41 beginning really is not calling for much intervention. Although, as he realizes how politically
    1:06:45 unpalatable that is, he will develop a more softened version of Austrian economics that has
    1:06:52 room for a whole range of social services. What’s significant about Hayek is that he is also watching
    1:06:57 what’s happening in Austria, what’s happening in Germany, and he’s really worried the same
    1:07:04 thing is going to happen to the Western democracies. And he sees the root cause of this is socialism,
    1:07:08 the shift towards an expanded role for government, which we’ve been talking about is happening in
    1:07:13 the United States. It’s also happening in Britain. And so he writes this book that becomes incredibly
    1:07:20 famous, “The Road to Serfdom,” basically saying taking these steps towards a planned economy
    1:07:26 or an economy that’s a modified form of capitalism is going to could. He’s very clear that this is
    1:07:31 not an inevitability, but if the same steps are taken and people follow the same line of thinking,
    1:07:37 we may end up in a sort of coercive totalitarian state. So this becomes enormously popular in the
    1:07:43 United States. First of all, he’s in good touch with Friedman’s teachers, even before this book
    1:07:47 comes out. They see them as kindred spirits. Frank Knight is in touch with him. Henry Simons
    1:07:52 is in touch with him. They all see themselves as liberals. They call themselves old-fashioned,
    1:07:58 unreconstructed liberals. And so even before he becomes famous, Hayek will be trying to kind of
    1:08:04 organize thinkers and intellectuals who he believes shares his values of what we would call
    1:08:10 today classical liberalism and to kind of create a counter-consensus to the one that’s gathering.
    1:08:17 Now, Hayek also chooses not to argue against Keynes, and he feels that this is a huge missed
    1:08:22 opportunity, that he should have staked out the case against Keynes, and that because he did not,
    1:08:27 people come to believe there is no case against Keynes. Keynes is literally unanswerable.
    1:08:34 So Hayek will have this great regret. He will channel some of his regrets into sort of community
    1:08:41 building, specifically developing the Montpelerin Society. And it will fall to Friedman to really
    1:08:50 make that case against Keynes. But Hayek will end up at Chicago, and Hayek really influences
    1:08:58 Friedman to think about what Hayek calls the competitive order and how the state can and must
    1:09:05 maintain a competitive order. That is the system of laws, of norms, of practices that makes it
    1:09:11 possible for markets to function. And this is one of these key differentiators between the older
    1:09:18 philosophy of laissez-faire and the newer reconceptualization of liberalism, which says, “Yes,
    1:09:25 we need a state. We need a state that’s not intervening in markets under social democratic
    1:09:30 auspices, but is structuring and supporting markets so that they can function with maximum
    1:09:38 freedom, keeping in mind that if there aren’t basic social supports needed, the market is apt to
    1:09:44 generate the type of either inequality or social instability that will call the whole system into
    1:09:51 question.” So Hayek is really key in promoting this modified liberalism. But from being a very
    1:09:58 prominent economist in the 1920s and 1930s, as mathematics becomes the language of economics,
    1:10:03 Hayek is completely left out in the cold. Now, Friedman to some degree is left out in the cold,
    1:10:09 but Friedman at least has proved the mathematical economists that he knows what they’re up to,
    1:10:15 and he’s rejecting it from a position of expertise and knowledge. And he literally drives the
    1:10:20 mathematical economists out of Chicago. They’re clustered in a group called the Kohl’s Commission,
    1:10:28 and he makes their life hell. They flee. They flee the Friedman slot. But then when Hayek arrives
    1:10:33 at the University of Chicago, he would like to be considered for a position in the economics
    1:10:37 department. And Friedman, Milton Friedman says, “No way. You’re not really an economist because
    1:10:44 you’re not empirical because you just developed these theories.” So he has an appreciation for
    1:10:51 Hayek as a social thinker, but not as an economist. So what Friedman decides to do, his answer to
    1:10:58 Keynes will be deeply empirical, but it will also be theoretical. And it will create an alternative
    1:11:05 intellectual world and approach for economists who aren’t satisfied with Keynesianism. And almost
    1:11:11 single-handedly, Friedman will introduce sort of political and ideological diversity
    1:11:17 into the field of economics because from his beachhead in Chicago, he will develop the theory
    1:11:26 of monetarism. So what is monetarism? The easy way to summarize it is this famous dictum of Milton
    1:11:34 Friedman’s. Inflation is always and everywhere a monetary phenomenon. And it’s fascinating that he
    1:11:41 becomes an expert in inflation because the first research and the first major research product
    1:11:45 of monetarism is that theory of the Great Depression in a monetary history of the United
    1:11:53 States. And that is the theory of a deflation, all prices going down. And he will go back to an idea
    1:11:59 that Irving Fisher had popularized, but a very old idea, almost a truism, the quantity theory of money,
    1:12:05 which says the level of the price level is related to the amount of money circulating in an economy.
    1:12:11 So if you have more money, prices go up. If you have less money, prices go down. Now, this seems
    1:12:17 like very basic and almost too basic to bear repeating. But Friedman is saying this very basic
    1:12:24 relationship holds true even in an advanced industrial economy. And that is what people
    1:12:30 have started to doubt. And if you think about money, you think about banks, you don’t think
    1:12:37 necessarily about the federal budget spending and taxation. And what you see happens in American
    1:12:42 economics, the textbooks previous to the Keynesian Revolution, they spent a lot of time on money,
    1:12:47 they spent a lot of time on interest rates, you can do word counts and other scholars have done
    1:12:52 the word counts. And then word count for money after World War II just plummets. And you start
    1:13:00 seeing things like taxation, budget, those things go up. So what happens is the economics profession
    1:13:05 shifts its attention. It just looks away from money to other things. And Friedman is one of the
    1:13:13 few who’s saying, no, money still matters, money still counts. And it’s a very counterintuitive
    1:13:19 argument to make. It’s a very historical argument to make. And this is absolutely fascinating to me.
    1:13:25 With Anna Schwartz, he develops this 150-year time frame. He also has students working on
    1:13:30 episodes of hyperinflation in different periods of time. He’s also looking back
    1:13:37 to ancient history, inflationary episodes there. And he’s saying this is a law of economics.
    1:13:42 This is something that recurs throughout time. It’s not historical, right? It’s not contingent.
    1:13:49 It’s a law of economics. And his Keynesian counterpoints are saying, no, that’s not
    1:13:54 relevant any longer. Maybe once it was relevant, but it’s not relevant today. Now, in some ways,
    1:14:02 they have a point because in order to pay for World War II, the federal government
    1:14:09 sells a lot of bonds. It issues a lot of debt. And it wants to pay this debt back at a low
    1:14:14 interest rate. And it wants people to keep buying it. It wants the low interest rate
    1:14:19 to be competitive with other interest rates. So once in general, low interest rates throughout
    1:14:25 the economy. And the Federal Reserve has been so discredited by the Great Depression that the
    1:14:31 Treasury basically runs a Federal Reserve and says, keep interest rates low. And so that’s
    1:14:37 what it’s doing. And so the Federal Reserve has stopped being an independent entity. It’s just
    1:14:43 a sub sort of department of the Treasury. But in 1951, they negotiate what’s called the Treasury
    1:14:49 Fed Accord. And the Federal Reserve gets its independence, but it doesn’t really use it.
    1:14:57 But statuatorily, it now has it. And so most economists are just observing a regime in which
    1:15:01 the Federal Reserve has no power, a regime in which there is really little inflation,
    1:15:05 the inflation that is seen as post, there’s a little burst of inflation in the Korean War.
    1:15:10 And they’re saying inflation is not really important. It’s not really relevant. And money’s
    1:15:14 not really relevant and important. And so to break through and to make the argument,
    1:15:20 that’s why Friedman and Schwartz go to history. And they’re able to make that argument for history.
    1:15:25 So then Friedman is coming out with a variety of papers that are saying,
    1:15:31 you know, when I look at economic fluctuations, he maps them side by side to fluctuations. And
    1:15:36 the money supply and says, look, they fit. And other economists, remember, they’re building
    1:15:41 complicated mathematical models. And Friedman’s doing extremely simple stuff. And they just think
    1:15:47 it’s dumb. It’s not interesting. It’s not true. They just, they don’t buy it at all. And so,
    1:15:53 but after a monetary history of the United States, they have to pay attention. So it’s really in
    1:16:00 those years, Friedman is hammering this idea of monetarism, and it starts to become something
    1:16:06 respectable, bordering on respectable for other economists to look to and think about. And that’s
    1:16:10 really the beginning of the kind of Keynesian monetarist split, where if you start to give
    1:16:16 Friedman any credence, you’re heading towards a monetarist position. Now, at the same time,
    1:16:26 Friedman comes out very publicly in 1964 as a supporter of Barry Goldwater. And Keynesian economics
    1:16:31 has found a home in the Democratic Party. It’s probably the brightest moment in the sun is
    1:16:36 the administration of John F. Kennedy, who brings in a lot of Harvard and Yale professors to the
    1:16:42 Council of Economic Advisers. He proposes a series of spending programs that are really guided by
    1:16:49 the Keynesian philosophy. And the Barry Goldwater is tremendously controversial, part for his votes
    1:16:54 against civil rights, which Friedman really supports in part because he’s a hardcore libertarian
    1:16:59 in an age when that’s not in the political mainstream or not discussed in the political
    1:17:04 mainstream. And I mean, he’s just tremendously unpopular, particularly in all the educated
    1:17:09 precincts where Friedman lives. So Friedman is like an outcast on a pariah for his support of
    1:17:15 Goldwater. And so that actually really affects monetarism because people feel that this is now
    1:17:21 becoming a package deal. And so there’s a great reluctance to embrace Friedman’s ideas because
    1:17:28 it seems like you would then have to embrace his politics. So it’s associated with conservatism.
    1:17:35 So this is the years when conservatism, there is a movement that calls itself conservatism.
    1:17:40 And Friedman is very tightly allied with this movement from the beginning, partly through his
    1:17:45 friendship with William F. Buckley. And a lot of people say to me, yeah, but Friedman’s not
    1:17:52 conservative. And this is like a bigger, you have a whole separate podcast on this. But for now,
    1:17:58 I’ll just say that conservative in the United States becomes a political brand that contains
    1:18:04 elements of conservatism that are recognizable across time and space, embrace of tradition,
    1:18:11 for comfort with hierarchy, et cetera. And it also has something new and different, which is
    1:18:17 Friedman’s ideas about Milton Friedman’s advocacy of more free markets, less government regulation
    1:18:21 and the benefits of capitalism and the benefits of freedom. And that gets folded into American
    1:18:28 conservatism in part because Milton Friedman is such a powerful intellectual figure. And after
    1:18:34 his advocacy, Goldwater media realizes this guy is really smart. He has really interesting things
    1:18:39 to say. He makes great copy. He makes a great guest. And he starts writing a column for Newsweek
    1:18:45 magazine, which is a very big deal in a much more consolidated media environment. And he’s quoted
    1:18:50 in all the newspapers. And so his public profile really starts to rise right as he’s pushing
    1:18:55 monetarism as an alternative to the Keynesian synthesis.
    1:18:59 Can we just linger on what is monetarism?
    1:19:01 Yes, okay. I didn’t come into it.
    1:19:04 So like what, okay, the money supply.
    1:19:04 Yes.
    1:19:12 So money is this thing that you can leave it a note, like a notion where people buy and sell
    1:19:20 stuff. And there’s this fascinating complex dynamical system of people contracting with
    1:19:24 each other in this beautiful way. I mean, there’s so many pod head questions I want to ask about
    1:19:30 the nature of money. I mean, money is fascinating in that way. And I think for Milton Friedman,
    1:19:39 trusting the flow of money is really important. And the signals that pricing and money in general
    1:19:42 provides is really important.
    1:19:48 So yeah, and some of this, I could take some of this back again to Frank Knight. So one thing
    1:19:55 Frank Knight said to all his students was the market is the best allocation mechanism we have.
    1:20:02 The market is what allocates resources in a situation of scarcity. The market allocates them.
    1:20:09 The best. And Hayek will add to that by saying prices are information signals, and a price
    1:20:14 sends information to buyers and sellers about how they should act. And these are the two of the
    1:20:20 strongest arguments for why the government should not intervene in the price system because it will
    1:20:26 blur information or because it will allocate less efficiently than market allocation will.
    1:20:32 And so what Friedman is really going to add to that is maybe going up a level and thinking
    1:20:40 in the macro about the whole economy and how money circulates through that economy as a whole.
    1:20:47 And so what he and Anna Schwartz do is they construct what are called monetary aggregates.
    1:20:53 This is adding together, say, all the money that’s on deposit in banks and all the money that’s
    1:20:59 believed to be circulating in people’s wallets. And you also have to really go back in time.
    1:21:06 We don’t have credit cards. There is a stock market, but it’s tiny in terms of the number
    1:21:13 of people who invest. There aren’t mutual funds. When travelers checks are introduced,
    1:21:20 this is a big deal. So we have a very simple monetary system. And so Schwartz and Milton
    1:21:25 Friedman start measuring what they call the monetary aggregates. They focus on M1 and M2,
    1:21:32 and their favorite aggregate is M2, which I believe is encompassing deposits and circulating medium.
    1:21:37 The other thing to recall, there’s some fine distinctions between
    1:21:48 money in savings accounts and money in checkings accounts. And money in savings accounts
    1:21:53 can earn interest and is generally believed not to circulate, or money in checking accounts
    1:21:58 does not at that time bear interest and cannot legally bear interest. And so his thought of
    1:22:02 is circulating. And then there’s different institutional architectures of postal savings
    1:22:09 banks and credit unions. But Friedman is, one, taking the focus to these aggregate amounts of
    1:22:17 money and saying, “These really have a lot to do with economic booms and busts. When we have
    1:22:23 an expansion in the amount of available money, we see an expansion in economic activity. When we
    1:22:32 have a contraction in available money, we have a contraction.” And so he says, “At this stage,
    1:22:38 the government, through the mechanism of the Federal Reserve and its influence on interest rates,
    1:22:44 can either make money more cheaply available and more freely available in the economy,
    1:22:52 or can make money more expensive and slow things down.” But the central core idea of
    1:22:59 monetarism is this is potentially very bad if the government can hit the gas and then hit the
    1:23:06 break and hit the gas and hit the break based on, say, what a politician wants or what somebody
    1:23:13 at the Federal Reserve wants. You have a lot of instability in the system. And so one of the core
    1:23:20 policy proposals of monetarism is let’s grow the money supply at a steady rate. And in the beginning,
    1:23:26 Friedman just says K percent. He doesn’t even put a number on it because he says the number
    1:23:32 doesn’t matter. What matters is the steadiness in the growth rate because if it’s a steady growth rate,
    1:23:38 it will fade away and then people will make economic decisions based on the fundamentals,
    1:23:45 not based on what they think is going to happen, not based on hedging against inflation
    1:23:52 or hedging against deflation. They’ll just be able to function. So this is sort of the paradox
    1:23:59 of monetary policy. When it’s happening right, you don’t see it, you don’t notice it. When it’s
    1:24:03 happening wrong, Friedman argues, it can just fundamentally destabilize everything. It can
    1:24:10 cause a great depression, it can cause an artificial boom. And so he’s taking monetary policy at a
    1:24:14 time when most economists think it’s completely irrelevant and saying this is the central game
    1:24:21 of the economy. Now, we live in a world where we believe this and the Federal Reserve chair can’t
    1:24:27 open their mouth without headlines being generated. But Friedman is saying this at a time when the
    1:24:33 Federal Reserve is like a mysterious and secretive organization. It’s not well known,
    1:24:38 it’s not deeply appreciated. Some of the only people who appreciate the Fed’s power are
    1:24:46 hardcore rural populace who have constituents who think the banks and money power are the problem,
    1:24:52 who are like throwbacks from the frontier days. So Friedman in the beginning has no constituency
    1:24:59 for this policy, he has no constituency for this analysis. And so just going back to summarize
    1:25:06 monetarism, it’s looking, it’s using the quantity theory of money to analyze the macro economy.
    1:25:15 It’s proposing a policy of slow and steady growth in the money supply. And then it is arguing that
    1:25:21 inflationary episodes when they emerge are profoundly driven by changes in the money supply,
    1:25:28 not by anything else. I mean, and going even up a level as we started,
    1:25:37 how epic is it to develop this idea, to hold this idea and then to convince
    1:25:45 the United States of this idea that money matters, that today we believe is mostly correct
    1:25:54 for now. And so just this idea that goes against the experts and then eventually wins out
    1:26:00 and drives so much of the economy, the biggest, the most powerful economy in the world. So
    1:26:05 fascinating. Yeah. So I mean, that’s a fascinating story. And so what happens is Friedman has
    1:26:10 advanced all these ideas. He’s roiled the economics profession. He’s built a political profile.
    1:26:18 And then he becomes the head of the American Economics Association. And he is asked in that
    1:26:23 role to give a presidential address. And so he gives his presidential address December 1967.
    1:26:31 And he says, I’m going to talk about inflation. And I’m going to talk about the trade-off between
    1:26:36 inflation and unemployment. And this is what’s generally known as the Phillips curve. And the
    1:26:42 Phillips curve in its original form is derived of post-World War II data. So it’s derived of
    1:26:51 about 12 years of data. And it shows that when inflation goes up, unemployment goes down. And
    1:26:56 the idea would make sense that as the economy is heating up and lots of things are happening,
    1:27:03 more and more people are getting hired. And so this relationship has led policymakers to think
    1:27:09 that sometimes inflation is good. And if you want to lower unemployment, you could let inflation
    1:27:17 kind of go a little bit. And in accrued forms, it becomes to seem like a menu, like you could
    1:27:22 take your model and you could plug in, I want this much unemployment. And it would say, well,
    1:27:27 great, this is how much inflation you should do. And so then you would target that inflation rate.
    1:27:34 So Freeman gets up and he says, this is wrong. This might work in the short term, but it’s not
    1:27:39 going to work in the long term because in the long term, inflation has, first of all,
    1:27:45 it has a momentum of its own. Once it gets going, it tends to build on itself, the acceleration
    1:27:52 as thesis. It accelerates. And once inflation gets going, and the reason it gets going is because
    1:27:59 workers go to the store and they see the price level has gone up, things have cost more.
    1:28:07 They ask for the wages to go up. Then people, eventually, the wages will go up too high,
    1:28:11 and they will no longer be hireable or companies will decide, at these high wages,
    1:28:16 I can’t hire as many workers, I’d better lay off. So if inflation keeps going, eventually,
    1:28:21 over the long term, it will result in high unemployment. So he says, theoretically,
    1:28:26 you could end up in a situation where you have high inflation and high unemployment.
    1:28:29 This hasn’t been seen, but he says, theoretically, this could happen. And then he goes and he says,
    1:28:35 and the government has started expanding the money supply, started expanding the money supply
    1:28:41 in 1966. So we’re going to get a bunch of inflation and then we’re going to get a bunch
    1:28:46 of unemployment. And he estimates about how long it will take. And then he says, once this all
    1:28:54 happens, it will take about 20 years to get back to normal. And he predicts the stagflation of the
    1:29:03 1970s. Stagflation of the ’90s. Again, against the mainstream belief represented by the Phillips
    1:29:10 Curve. Yeah. And what really makes it happen is that many of the economists who most deeply
    1:29:16 dislike Friedman and most deeply dislike his politics in the 1970s as they’re running their
    1:29:21 models, they start to say, Friedman’s right. They start to see in the data that he’s right.
    1:29:26 And a very parallel process happens in Britain. Britain is going through a very similar burst
    1:29:32 of spending, burst of inflation. And so Friedman is vindicated in a very profound way in the way
    1:29:36 that he himself said would be the ultimate vindication, which is my theory should predict.
    1:29:44 So that prediction of stagflation is really this sort of final breakthrough of his ideas
    1:29:52 and also their importance to policy and to thinking about how we should intervene or not in the
    1:29:56 economy and what the role of the Federal Reserve is. Because he’s saying the Federal Reserve is
    1:30:02 incredibly powerful. And finally, people start to believe it. And I don’t know if we said,
    1:30:08 but to make clear, stagflation means high unemployment and high inflation, which is a thing
    1:30:15 like you mentioned was not seen before. And he predicted accurately. And it also disproves the
    1:30:21 relationship, the inverse relationship between unemployment and inflation.
    1:30:28 Yeah. Now I should say the Phillips Curve is still out there. It’s been expectations augmented. And
    1:30:36 it is relevant in the short term, but Friedman’s warning is still very much apt that if you get
    1:30:43 too focused on unemployment, you can let inflation out of the bag. And so until very recently,
    1:30:49 the Federal Reserve’s tradition has been focusing on inflation, believing that’s fundamental,
    1:30:54 and that will keep unemployment low rather than trying to lower unemployment at the cost of
    1:31:00 raising inflation. Can we go back to Frank Knight and the big picture thing we started
    1:31:05 with, which is the justification of capitalism? Yes. So as you mentioned, Milton Friedman
    1:31:12 searched for a moral justification of capitalism. Frank Knight was a big influence on Milton Friedman
    1:31:19 and including on this topic of understanding the moral justification of capitalism. I think you
    1:31:25 spoke about Knight’s case for capitalism was grounded in the idea that the ability to act
    1:31:31 in the face of uncertainty creates profit. And it should because taking risks should be rewarded.
    1:31:37 So this idea that taking risks in the face of uncertainty should create profit. And that
    1:31:43 becomes a justification that the ethics of capitalism. Can you just speak to that?
    1:31:49 Yeah. So Knight is talking about where does profit come from? And to his mind, it comes
    1:31:55 from the entrepreneurial function and the risk taking function. And so he weaves that into why
    1:32:04 capitalism works best and why it’s the most effective allocation machine and why it assigns
    1:32:11 responsibility in a way he believes that a socialist system never could. Now, Knight, though, is not a
    1:32:16 booster of capitalism. It could be in part because he’s just a darkly pessimistic kind of depressive
    1:32:22 guy. And so he’s afraid capitalism is going to collapse and socialism or fascism is going to
    1:32:29 take over or communism. And so he kind of descends into darkness there. Friedman as the more
    1:32:35 optimist believes with Hayek that you can develop a different approach to capitalism that would
    1:32:40 preserve the price system, preserve allocation, but build in social supports, build in a social
    1:32:45 minimum, things like this. But there’s a moment in his career where he’s really struggling to figure
    1:32:50 out like, how do I make this case for capitalism? And basically, the whole sort of conservative
    1:32:53 movement or people who we later call the conservative movement are struggling to make this case.
    1:33:00 And he starts thinking about what makes capitalism work is that if you put forth effort,
    1:33:04 you get a reward. So then you could say, well, people get what they deserve under capitalism.
    1:33:09 But then he kind of stops and he says, that’s not really true because we’re born with such
    1:33:14 different endowments and there’s a huge quotient of luck, right? So some people are just in the
    1:33:20 right position and some people aren’t. So if I say capitalism is moral because people get what
    1:33:27 they deserve, that’s not really true. And he also kind of has like an ethical reaction, which he
    1:33:33 ends up calling like an aesthetic reaction. He’s kind of like, it just doesn’t feel right to say
    1:33:38 that. And so he struggles for a while with like, what do I say? And then he basically says, capitalism,
    1:33:44 it can’t be the core. Discipline of the market can’t be the core to your ethics. It has to be
    1:33:48 something else. So that’s when he will decide it’s freedom is individual freedom. That’s really
    1:33:54 the ethical core and capitalism makes individual freedom possible because capitalism is dedicated
    1:34:04 to maximizing that. And so the defense of capitalism comes through freedom. And at his stage in history,
    1:34:10 he’s able to set aside nice worry about inequality and say, when I look at the data, and this is true
    1:34:16 for the macro data mid-century, incomes are actually converging, right? And also, if you
    1:34:21 look historically, if the country goes from, say, a more feudal agrarian society to a more
    1:34:26 market-based society, incomes will converge. Now, then they might start to diverge, but
    1:34:30 freedom is in the moment when he’s seeing the convergence. And so that’s what he’s really
    1:34:37 focused on. So he believes he can justify capitalism through the ethic of freedom. And he also believes
    1:34:43 that inequality is a problem that can be addressed through specific policies. And it’s not a
    1:34:49 fundamental feature of capitalism. In other words, he doesn’t see capitalism as an engine of inequality
    1:34:53 the way that Frank Knight did and the way that maybe some critics on the left would.
    1:34:59 How did he conceive of freedom? So individual freedom, economic freedom, political freedom,
    1:35:04 civil freedom, what was the tension, the dynamic between those different freedoms for him?
    1:35:10 So he really begins focusing on economic freedom. And he says it’s really important to focus on
    1:35:16 economic freedom because in the United States, we don’t value it enough. So by economic freedom,
    1:35:23 he means the ability to keep what you’ve earned, the ability to make decisions about your business,
    1:35:28 the ability to make decisions about the work that you do. So this will translate into things like
    1:35:32 there shouldn’t be a minimum wage. He believes the minimum wage has bad social
    1:35:36 effects, but he also believes you should be free to accept a job at a wage that you yourself have
    1:35:44 determined is acceptable to you. And there should be very minimal regulation, questions around safety
    1:35:48 and other things because the market will ultimately, if you create an unsafe product,
    1:35:55 it won’t sell. And that will be that’s sort of your incentive. So he really centers economic
    1:35:59 freedom because he thinks especially, and he’s really speaking from his vantage point in the
    1:36:04 universities and speaking to the kind of liberal consensus of the 50s and 60s, he thinks economic
    1:36:09 freedom has been undervalued in the American context. So he really wants to push that forward.
    1:36:13 He’s really kind of taking political freedom for granted. Now later in his career, when he becomes
    1:36:19 famous, he’s traveling the world, he spends time in Chile, and this country is now being
    1:36:25 ruled by a dictator, Gustav Pinochet, who starts introducing economic freedom, but there’s no
    1:36:29 political freedom. And Milton Friedman believes eventually these two things are going to go
    1:36:35 together and tells Pinochet, “You’ve got economic freedom, and eventually it’s going to mean
    1:36:39 political freedom.” Pinochet is like, “Okay, fine. I’m not really interested in that. I want to
    1:36:44 know what I should do about inflation.” But then when Milton Friedman leaves Chile, he is
    1:36:50 attacked and vilified for having been a supporter. He’s a supporter of the regime,
    1:36:55 which he’s not, but he realizes he has talked too much about economic freedom and he hasn’t
    1:36:58 talked enough about political freedom. And he’s kind of assumed political freedom because he’s
    1:37:04 come from the American context. So then he starts recalibrating them and saying, “You know what?
    1:37:08 If you don’t have political freedom, you’re never going to be able to hold on to economic freedom.”
    1:37:14 So he sees that they need to go together and they don’t naturally go together. And so he starts to
    1:37:20 become more clear in talking about political freedom. Now let’s fast forward to the end of
    1:37:26 his life, and he’s witnessing the emergence of what we call the Asian Tigers. So capitalist economies
    1:37:32 that are doing very well, but they don’t have political freedom. But then he observes, they
    1:37:37 don’t have political freedom in that you can’t vote in a free and fair election, but they also
    1:37:44 don’t have a stazi. They don’t have a KGB. They’re not hauling people off for their wrong opinions.
    1:37:49 So then he says they have something called civic freedom. And so he kind of defines this third
    1:37:55 sphere, civic freedom of debate, discussion, interpersonal relations, but you can’t be political.
    1:38:02 So this is a late in life addition. I don’t think it’s fully theorized. I think what it shows is
    1:38:08 that during the Cold War, he very much believed economic and political freedom, capitalism and
    1:38:15 freedom, democracy, the United States capitalism, this all went together. And he starts to see at
    1:38:19 the end of his life the emergence of different social systems that are using market trading
    1:38:24 and allocation, but aren’t giving people similar freedoms. And he’s kind of puzzling over that.
    1:38:31 Now he always believes that China will democratize. And he thinks China’s on the path to democratization,
    1:38:36 in part because Chile does democratize. Eventually, Pinochet has voted out and it’s
    1:38:41 become a democratic capitalist and very prosperous country. And he thinks as exactly what’s happening
    1:38:46 in China, he sees Tiananmen and he doesn’t live long enough to get to where we are now,
    1:38:51 in which doesn’t look like political or civic freedom is coming to China anytime soon.
    1:38:58 And he did oppose the dual-track system of China, meaning like the market is bottom up,
    1:39:03 the government in China is top down, and you can’t have both.
    1:39:06 He thought you couldn’t have both. Yeah.
    1:39:08 He thought eventually the market would triumph.
    1:39:12 Well, it’s a really powerful idea to say, okay, maybe there’s not political freedom,
    1:39:18 but just hold on to the economic freedom and eventually that’s going to give political freedom.
    1:39:23 Is that correct to say like start to work on the economic freedom
    1:39:26 and the political freedom piece will take care of itself?
    1:39:31 That’s what he believed. That’s what he believed. Yeah, I think it’s more complicated than that,
    1:39:36 right? The people who gain out of a system of economic freedom could decide to collude
    1:39:41 in a system where there isn’t political freedom. That’s certainly a scenario.
    1:39:46 So, but that was, again, that’s that core idea of freedom, right? And that core belief
    1:39:50 that people want freedom and that people are drawn to freedom.
    1:39:56 Just to go back to Frank Knight a little bit, he wrote an essay called The Ethics of Competition,
    1:40:01 the metaphor that economic life is a game, and then maybe that extends the society as a whole,
    1:40:07 like the entirety of it is a competitive game. And Milton Friedman,
    1:40:12 I think, adapted some of this, appreciated some of this. Can you speak to this metaphor?
    1:40:18 Yeah, I think what the metaphor of the game does is it asks you, okay, well, what are the rules then?
    1:40:24 And let’s focus on the rules that keep the game going. So, he didn’t use the concept of an
    1:40:28 infinite game, but I think that’s an interesting one, a game that all the players are in and keep
    1:40:33 going again and again and again. And so, that helped Knight, along with Hayek,
    1:40:41 shift from the allocation question, who’s getting what, are things allocated fairly
    1:40:46 to the more structural question of, like, what are the rules of the game that we need to keep
    1:40:52 this system going? And so, for a while, that led to the discussion of monopoly, well, we need rules
    1:40:58 against concentration, or we need the rule of law. Everyone needs to be treated equally.
    1:41:06 People need to know what they’re up against. And then, going back to monetarism,
    1:41:14 the core of monetarism is a rule. Friedman called it a monetary growth rule. And so, again, what
    1:41:21 keeps the economic game going is a rule about how much the money grows that everybody knows.
    1:41:27 Nobody’s guessing. Nobody’s changing the rules to help their side or to help the people they’re
    1:41:34 friendly with. We all know it’s there. It’s clear. It’s easy. And so, that emphasis on rules, I think,
    1:41:38 really has a through line. It goes into Hayek’s competitive order, and then it goes into the
    1:41:48 monetary growth rule. And then, today, monetary policy makes use of monetary policy rules. We
    1:41:54 have not abandoned discretion, but rules are used as a heuristic or a check, and those come out of
    1:42:03 Friedman’s thinking. And so, it’s really profound. And it was always counterposed to discretion,
    1:42:09 which Friedman worried would be subject to capture or political corruption if you had
    1:42:14 discretion in policymaking or if you had discretion in these very big areas. Then,
    1:42:20 people would stop competing against each other in a market, and they would turn their attention
    1:42:27 to getting control of the rules or the rule makers. So, if there’s clear, transparent rules,
    1:42:33 then you’re free to play the game. Yes, exactly. But then, depending on the rules,
    1:42:40 the game can turn out the equilibrium that arrives at might be different. So, that speaks
    1:42:46 to the mechanism design, the design of the rules. Yeah, and that was, again, to go back to the idea
    1:42:52 separating new liberalism or neoliberalism from classical liberalism was more of a focus on what
    1:42:56 are the rules that are needed. What is the competitive order that we want to set out?
    1:43:03 How do we design in social safeguards? How do we think about it? And so, that shift
    1:43:09 towards monetary policy and focusing on stable monetary growth, that becomes really important
    1:43:15 in the post-70s era is one of the basic rules of how capitalist economies should function. And it
    1:43:21 becomes really important because they see the example of, say, countries most notably in Latin
    1:43:28 America where monetary rules weren’t followed and different governments played politics with
    1:43:35 their currencies, and that created just huge upheaval and huge social loss, economic loss,
    1:43:41 just economic disaster. So, my friend, she’s a poker player, philosopher of sorts,
    1:43:46 great human being. She has a podcast called Win-Win that everybody should listen to.
    1:43:51 And the whole purpose of the podcast and her whole way of being in spirit is to find win-win
    1:43:59 solutions. So, do you think of economic life as having such win-win solutions? So, being able
    1:44:05 to find rules where everybody wins or is it always going to be zero sum? I definitely believe
    1:44:12 in win-win, but with the big asterisks, like you can have win-win, but it can feel like win-lose,
    1:44:20 which is it’s not just are people getting more, it has a lot to do with do people feel
    1:44:25 they’re getting more and do people feel they’re getting what’s fair and equal. So, you could have
    1:44:33 a situation, for instance, if you look at the history of going back to Chile, it has
    1:44:40 steady growth, steady income growth, steady diminution of inequality, and a high level of
    1:44:46 discontent within the society and a high level of belief that the society is corrupt and unfair.
    1:44:51 And that’s what matters. How people feel about it, how people perceive it,
    1:44:57 matters. And we saw this recently, you can’t just come out with a bunch of statistics and
    1:45:03 tell people you’re winning in this game if they feel like they’re losing. So, that goes to all
    1:45:10 the non-rational factors and all the comparative factors that people have when they think about
    1:45:15 where they are vis-a-vis other people in society. So, we’re just incredibly social creatures. We’re
    1:45:20 incredibly attuned to our status, to rising and falling, to where we sit vis-a-vis others.
    1:45:26 And so, that absolutely has to be attended to. It can’t just be an economic analysis.
    1:45:32 That’s so interesting that the experience of the economy is different than the reality of
    1:45:38 the economy. On the topic of corruption, I think the reality of corruption versus the perception
    1:45:43 of corruption is really important in a lot of these nations. You take Ukraine, for example,
    1:45:50 the perception of corruption has a big impact on the economy. You don’t want to invest, you’re
    1:45:54 very cautious as a business person. The reality of corruption could be way different than the
    1:46:01 actual perception. But if narratives take hold, it’s a self-fulfilling prophecy that it has a
    1:46:06 big effect on the psychology that people involved. It’s interesting. Yeah. I mean, this goes back to
    1:46:12 Keynes’ analysis of the Great Depression, right? If people won’t invest, if they’re spooked,
    1:46:18 if the investing classes are spooked, you could be in real trouble. And in some ways,
    1:46:24 this simple analysis of the problem and proposal of a solution was enough to restore
    1:46:30 eventually the path to academic prosperity, right? That’s Franklin Roosevelt, nothing to fear but
    1:46:36 fear itself. The sense of we know we have a future, we have optimism, then you believe in it. And to
    1:46:42 go back to thinking about money, right? Money works because we all believe in it. It’s a form
    1:46:48 of social trust. And it’s a form of belief and faith in our society and in the other people in it.
    1:46:51 And when that breaks down, the money system will break down as well.
    1:46:57 Is there something Milton Friedman said and thought about how to control the psychology of
    1:47:04 humans at scale? No. I mean, what’s interesting is he does talk, especially in his later work,
    1:47:11 he says we have fiat currency and this is an experiment. And we don’t know how it’s going
    1:47:16 to turn out. And it’s turning out okay right now, but we’ve always had a commodity based or backed
    1:47:24 currency of some form or another. And this is the first time. And so who really knows, so far,
    1:47:30 so good. And he also is very attuned. It’s interesting in his later writings when he’s
    1:47:35 thinking about this to, sure, I could design a monetary system that would be different. But
    1:47:41 when I look at history, I see that monetary systems have always say incorporated the role of the
    1:47:47 state because it’s so important to people. And so therefore, my theoretical designs really have
    1:47:52 to be tempered by what I’ve actually seen happen in history. So maybe you could speak to this
    1:47:59 tension between how much government intervention is okay for Milton Friedman. So he was against
    1:48:04 minimum wage, but he was for guaranteed minimum income. Can you explain actually the difference
    1:48:09 between the two? Yeah. So this was one of the discoveries I made in my research. I found a
    1:48:14 paper from 1938, he wrote advocating what we would call today a universal basic income,
    1:48:20 a minimum income. And he basically sees this as part of the effort to create a new liberalism,
    1:48:25 right? And he basically says we have advanced societies, we have prosperous societies,
    1:48:32 we have decided in keeping with our morals and our ethics that people should not be starving
    1:48:36 in an advanced society like this. The question is how are we going to make that happen?
    1:48:41 And he ended up believing the best thing to do was to put a floor under everybody.
    1:48:48 And he said you can get that based on your income. If you have a lot of income, you don’t get it.
    1:48:52 If you have a little income, you might get a little bit of it. If you have no income,
    1:48:57 you get enough of it. And he believed in the beginning, you should base that on what was
    1:49:02 required to buy food, right? That that would be kind of an objective. You could objectively determine
    1:49:07 the nutrition and the price of food. And so that for him, it’s important, he says,
    1:49:12 it’s keeping with a liberal polity because it’s not intervening in the price system,
    1:49:18 it’s not intervening in economic relations. And it does not, in his view, require a bureaucracy
    1:49:25 to administer. It is not, in his view, require that you qualify for it by virtue of being in a
    1:49:32 protected class. You just get it as kind of part of your membership in this general citizenship
    1:49:40 body. And so that, to him, was really different than a minimum wage because it did not interfere
    1:49:46 with the work bargain. His belief about minimum wages was specifically that it priced out unskilled
    1:49:52 labor. That what an unskilled laborer had to offer was a willingness to work for a very low wage.
    1:49:58 And if you set the minimum wage too high, businesses instead of hiring that higher
    1:50:03 priced labor would not hire, or like we could think of today, right? They put in an electronic
    1:50:08 checkout, you know, or something like this where you don’t actually need the labor. So he really
    1:50:13 believed the minimum wage had that perverse incentive. Now, there’s, this is a live debate
    1:50:18 on what minimum wages do. And there seems to be a level at which you can set them that they can
    1:50:24 not have that perverse effect and, in fact, can kind of create people with more spending money
    1:50:30 that then powers the economy. So he had a very sort of clinical analysis of that, rather than
    1:50:37 an empirical one or a really abstract analysis. But the minimum income is fascinating because it
    1:50:45 seems very leftist to us. But what it is, is it’s purely individualistic. And it never really happened
    1:50:52 because it was so purely individualistic because American social policy typically identifies
    1:50:57 like this group of people is deserving and will give them benefits. So the classic example is
    1:51:03 soldiers, veterans. Another example is mothers raising dependent children. These people deserve
    1:51:08 money. The rest of you, you better go out and work. And so Friedman’s proposal, it really
    1:51:15 caught on in the ’60s. It ultimately went nowhere, but it was no litmus test, no income analysis.
    1:51:20 Just we’re going to give you this much. Everyone’s going to get this much. And he decided once mass
    1:51:25 taxation had come in, you could do it through taxes. And you could just rebate people who didn’t pay
    1:51:30 income taxes, got a rebate. That actually came to pass. It’s the earned income tax credit. And it’s
    1:51:36 considered extremely successful by policy analysts. It does what it’s supposed to do. It’s not that
    1:51:44 expensive. And so I see that as a kind of paradigm of his thinking in that instead of creating a
    1:51:50 bureaucracy that does some form of redistribution, or instead of trying to intervene in the market
    1:51:56 for labor or the market for something else, the market for housing, you provide a cash grant that
    1:52:03 people spend for themselves. And so interestingly, that’s what happened in the emergency situation
    1:52:07 of COVID, right? That’s exactly what people did. They followed that model. We just get money out
    1:52:12 quick. And there’s a lot of discussion still about UBI’s is something that should be done.
    1:52:20 And I think it’s always going to be hard to pull off because I think Americans and their elected
    1:52:24 representatives don’t want to provide a universal benefit. They want to provide a targeted benefit
    1:52:30 because they believe there’s like a moral component here. And Friedman advanced a policy that was
    1:52:37 really abstract and really just kind of, it was devoid of judgment. It was like pure and beautiful
    1:52:43 in that way, but utterly impractical. And it really focused on not interfering with the market
    1:52:48 and the signals that the market provides. It was really against price controls for the same kind of
    1:52:54 reason. Yeah, exactly. You could say, okay, but how does this not interfere with the market, right?
    1:52:58 If you provide people with a minimum income, won’t that change their incentives to work, etc?
    1:53:02 I mean, there’s a big body of research on this. Most of it seems to show
    1:53:09 one, it’s way better than the current benefits cliff where you have to not work to get your
    1:53:17 benefits. And any incentive impact on working seems to be much lower than would be expected. But
    1:53:23 I’ll let the economist and the social science to spite that one out and figure it out empirically.
    1:53:27 Hopefully we should be able to. Yeah, there’s been a bunch of studies. It’s interesting,
    1:53:31 even just how you conduct studies like this, how you do these kinds of experiments,
    1:53:38 especially if you’re empirically minded. Because a lot of the studies I saw are pretty small.
    1:53:46 So how do you make big conclusions about how to run the world, how to run the economies
    1:53:55 from such small studies? It’s all a fascinating experiment of ideas. And it’s also inspiring to
    1:54:01 see individuals and maybe small groups of individuals like the Chicago School of Economics
    1:54:09 to sort of shake out what we believe and how we run the world. Yeah, inspiring. Yeah.
    1:54:14 You call Milton Friedman, the last great conservative,
    1:54:22 maybe to be a little bit sort of controversial and make bold statements that get everybody excited.
    1:54:25 But what do you mean by that? And what makes a great conservative?
    1:54:31 So I was really thinking of that in terms of kind of American political identities
    1:54:37 and particularly the 20th century conservative movement, which people are always saying this
    1:54:42 isn’t conservatism. And I said, yes, in America, conservatism is different. It looks different.
    1:54:47 It feels different. Conservatism in America builds in a big component of what we could
    1:54:55 call libertarianism, pro-capitalism, anti-government ideas. And critics will say, but conservatism
    1:55:01 is about conserving institutions and practices and it has a role for the state and an organic
    1:55:07 community. But in the United States, it’s always had since the 20th century, also this anti-statist.
    1:55:14 Let’s let the market rip. Let’s not worry about what the market does to establish traditions.
    1:55:19 The market is our tradition. Capitalism is our tradition. So that was really synthesized.
    1:55:24 Many people were there, but Friedman and the importance of his books,
    1:55:31 Free to Choose, Capitalism and Freedom, the television series he did, all of these were
    1:55:37 like core components of this American conservative synthesis as it evolved. And I really see that
    1:55:44 as having broken down. It is scattered into different pieces. We don’t know where they’re
    1:55:51 going to come back together again. But Friedman’s push for open global markets,
    1:55:55 unfettered free trade, that’s getting pushback on both the left and the right.
    1:56:02 That I think is just a major sign that both parties have turned away from this vision.
    1:56:07 I don’t know what they’ve turned to, but the way that Friedman brought these pieces together,
    1:56:11 I think that political moment has passed. So that’s what I was trying to talk about
    1:56:16 with the book title. There’s another way, though, in which I think of him also as a
    1:56:22 conservative, which is that within the field of economics, he went back to this older idea,
    1:56:28 the quantity theory of money, and said, this still has value. This can be applied in the
    1:56:33 modern day. It is something to teach us. And he pushed back against this trend towards
    1:56:38 mathematicization. So he kept writing books. He can still pick up a Friedman book and read it.
    1:56:44 There’s lots of economics articles and outputs, like “unreadable unless you’re in the field.”
    1:56:50 And so I think in that way, he was trying to conserve methodologically and intellectually
    1:56:55 the traditions of the field. The work that he, and particularly Anna Schwartz did that
    1:57:01 literal counting of things and deep analysis of data from the field, that was completely
    1:57:06 unfashionable in his time. Now, we’ve sort of gone back to it with big data and with computers,
    1:57:10 but he helped bring that forward and preserve that tradition. So I think of him kind of
    1:57:15 intellectually as a conservative, if you think of the mode of his thought. And so,
    1:57:21 I mean, what makes a great conservative is one who takes those older ideas and makes them fresh
    1:57:26 for a new time period. I think that’s exactly what he did.
    1:57:32 You’ve also spoken about the fact that the times when he was sort of out in public,
    1:57:42 there was more of an open battle of ideas, where conservatism often had William F. Buckley. He had
    1:57:52 a more vibrant, deep debate over ideas, where it seems less deep now.
    1:57:58 I mean, that is the thing that it’s hard, especially for the students I teach today,
    1:58:03 to be like, there were arguments about ideas and conservatives want a bunch of them,
    1:58:10 and that happened in the ’70s and late 1960s and 1970s, when one set of arguments was about
    1:58:16 economics, like, okay, this idea of stimulating the economy by spending more, it has a downside.
    1:58:21 The downside’s called inflation, and the downside’s called too much regulation.
    1:58:29 You’ve gone too far in kind of bottling up the actual sources of economic growth and dynamism,
    1:58:34 and we have to let those free. In social policy, there was also a critique.
    1:58:40 The Great Society had all these ways of ideas of ending poverty, and people came and analyzed them
    1:58:45 and said, the programs aren’t helping. In some ways, you’ve actually created engines to trap
    1:58:50 people in poverty because you’ve given them a benefit and said, if they actually start to work,
    1:58:55 they lose the benefit. You’ve created all these perverse incentives, and these ideas were fought
    1:59:00 out, they were empirical, they were controversial, and they were based on really deep research
    1:59:10 and really deep argumentation. It seems that era has passed. It seems like we’re driven much more
    1:59:17 quickly by moods rather than thought through ideas. Right now, it seems like the ideas come
    1:59:24 after they follow the political mood and try to put together the underpinning of it, where it
    1:59:28 really was the opposite for much of the 20th century. It does seem like we lead with emotional
    1:59:36 turmoil, and the ideas follow versus lead with the ideas, and sort of the emotion of the masses
    1:59:41 respond. Right, exactly. If we think of the evolution of conservatism, it was a whole set
    1:59:49 of ideas that was crafted, refined, the 1950s, 1960s, 1970s, sort of really found their emotional
    1:59:56 standard bearer, translator, salesperson in Ronald Reagan, who incidentally had been following these
    2:00:01 ideas as they developed and had been honing his ability to express them and apply them politically.
    2:00:08 It’s very opposite if we look at Trump as the political definer of the era. There’s a set of
    2:00:15 ideas, but it was more attitudes, impulses, vibes, and the ideas are coming after that,
    2:00:22 trying to figure out how they patch on. It’s interesting to watch, to see that difference,
    2:00:28 and I hazard that a lot of it just has to do with the immediacy of the media environment we’re in,
    2:00:33 and it’s just power of the media messages to get out so fast.
    2:00:41 What do you think Milton Friedman would say about Donald Trump, about him winning in 2024,
    2:00:46 and just in general, this political moment? I think he would love Doge.
    2:00:54 I think he would focus on that part because I think he would really love it. He would be
    2:01:01 very alarmed by the idea of tariffs and very alarmed by the return to protectionism. I mean,
    2:01:07 I think he believed that part of what made the world peaceful in the second half of the 20th
    2:01:13 century, as opposed to during World War II, was that the world was knit together more by trade,
    2:01:18 and that was the great hope that people traded with each other. They wouldn’t fight. He was also
    2:01:25 a proponent of the freeing movement of capital. He would absolutely oppose this idea that
    2:01:33 Nippon Steel wasn’t allowed to invest in the United States. I think he would struggle, and he
    2:01:39 wholeheartedly embraced Reagan, and he worked to minimize the parts of the Reagan legacy he didn’t
    2:01:44 like. I think he would find it harder to embrace Trump because he’s not of that
    2:01:49 style. He just had a different style, but I’m guessing he would have come around through
    2:01:56 I think he would just say, okay, we have a chance to reduce the size of government. At the same time,
    2:02:03 the spending plans of the Trump administration are not fiscally conservative in any way,
    2:02:09 and that was his concern. It was not so much with debt, but with the feeling that there’s no
    2:02:14 mechanism to stop the growth of government, that it just grows and grows and grows. He ended up
    2:02:22 believing even deficits aren’t so bad because they make politicians cautious he thought about
    2:02:29 continuing to spend. I have to believe he would be concerned about the potential threats to the
    2:02:36 US currency’s position as the world’s reserve currency with increased levels of debt and spending.
    2:02:48 He was concerned about low interest rates. He died, I think it’s 2004, 2006, but it was in the
    2:02:52 beginning he didn’t see the zero low bound, but he saw low interest rates and he said this isn’t
    2:02:57 necessarily good. Everyone’s talking about low interest rates as if they’re good, but there
    2:03:06 should be a price on capital. There should be a price on this. It shouldn’t be so low. He had
    2:03:12 some of still the macro insights that I think are important. You wrote the Wall Street Journal essay
    2:03:19 titled How Inflation Ended Neoliberalism and Re-Elected Trump. Can we weave that into this
    2:03:25 discussion in terms of inflation and Trump? What’s the main idea of the essay?
    2:03:34 The main idea is looking back and saying, today we have been living in a world where people have
    2:03:44 been focused on monetary policy, steady monetary policy, free trade, reducing regulation. This is
    2:03:52 all called the neoliberal era. My argument was a lot of that arose was driven by inflation.
    2:03:58 We have Milton Friedman predict inflation in 1967. It starts breaking out in the 1970s,
    2:04:08 Britain and the United States. Every institution was designed around stable prices. Once inflation
    2:04:15 broke out, prices were no longer stable. For example, tax rates weren’t inflation adjusted.
    2:04:21 If your income went up because of inflation, you might bump from a low tax rate to an extremely
    2:04:25 high tax rate, but you don’t actually have more money. On paper, you have more money,
    2:04:28 but everything costs more. You don’t actually have more money and your taxes have gone up.
    2:04:35 That kicks off the taxpayer revolt. There’s a whole shift of American corporations towards
    2:04:41 focusing on financial investments because the tax breaks they used to get for depreciation,
    2:04:46 for building new factories, are not inflation adjusted. They no longer pay off in an inflationary
    2:04:54 environment. Then when Paul Volcker comes in, early 1980s and starts fighting inflation,
    2:05:00 really pushes up interest rates to bring down inflation. That completely reorders the banking
    2:05:07 sector because banks had statutory legal limits on the interest they could charge. Once
    2:05:14 general market interest rates exceeded that, it was proliferation of new financial forms to take
    2:05:22 advantage of that. My point was the era we live in was ushered in by inflation. Then everyone
    2:05:29 turned against all the formulations we had and said, “Well, these have hollowed out our industrial
    2:05:36 base. We’ve got too much immigration. We’ve got too much economic openness. We need to
    2:05:40 reshore. We need to focus. We need to turn against all these things. We need to spend more. We’ve
    2:05:49 disinvested.” The net result of that turning away, I argued, is people forgot about inflation.
    2:05:53 They really forgot it could ever exist. You had a whole set of theories on the left,
    2:05:56 modern monetary theory that basically said, “We don’t really need to worry about inflation.
    2:06:03 We can spend what we want.” Lo and behold, inflation came back. My argument is that
    2:06:10 is now open the door to the presidency of Donald Trump, which is potentially a deeply
    2:06:17 transformative moment that will change the size and shape of government, that may change our foreign
    2:06:23 policy profoundly, that may change our immigration policy, that may change the demographics of our
    2:06:30 country. All of that in my thesis is that it’s all been made possible by inflation. The great
    2:06:37 mistake of the past years was to forget how fundamental inflation was to the rise of the
    2:06:46 last political order and to profoundly underestimate how much inflation would change the current
    2:06:50 political order. I just think it’s one of these things. This is why I think you should study
    2:06:55 history, because I think if you had studied history, you would be aware of this. It’s so easy
    2:07:01 for people to forget just like the banks forgot that interest rates could ever go up. They got so
    2:07:07 used to it. It’s only a 10, 15-year thing, but to them, that seems like forever. I really do
    2:07:14 believe what history teaches you to do is just have a much vaster scope in your vision and then
    2:07:18 take into account the possibilities of so many things happening that are different
    2:07:24 than what’s happening today. I just hope we don’t forget about inflation entirely, but here’s the
    2:07:31 thing. It is quite a strong chance that Trump’s policies will initiate even worse inflation,
    2:07:36 and then they will prove to be his undoing. The ironies of inflation could be continuing.
    2:07:45 Like you said, Milton Friedman would be a big fan of Doge. If he was still here today and rolled
    2:07:51 with Elon Musk and Vivek, what advice would he give? What do you think he would focus on in terms
    2:07:58 of where to cut, how to cut, how to think about cutting? His signature policy move I talk about
    2:08:09 this is taking the price mechanism and trying to make that into the policy. That seems obvious to
    2:08:14 us today, but in the era that he came in, there would be rent controls. Let’s take away rent
    2:08:21 controls. Let’s let housing prices set themselves. He was very against national parks. I actually
    2:08:27 think the national parks are good, so I hope the Doge people don’t take this up. Rather than an
    2:08:31 allocation to fund the national parks, they should be funded by the revenue that they bring in when
    2:08:39 people visit them. Let’s let prices make the decisions here. I think that would be one of the
    2:08:44 key pieces. The other thing I think he’d really be thinking about, he wrote about this a lot about
    2:08:51 occupational licensure and barriers to entry. He felt like one of the worst things that government
    2:08:57 does and sometimes it’s private entities that do this is create barriers to entry to protect
    2:09:01 industries and markets. He talked about this in the case of the medical profession, which I think
    2:09:07 is actually not a good example because I think we all have a collective investment in having medical
    2:09:14 doctors be highly trained. For instance, you could look at nail technicians or hair cutting.
    2:09:18 There’s often these licensing requirements or there’s a big kerfuffle. I think it’s the
    2:09:22 DC passed a law that to run a childcare center, you have to have a college degree. What does
    2:09:26 that do? That disenfranchises a whole bunch of would-be entrepreneurs who don’t happen to have
    2:09:30 a college degree, but probably could be really good at this particular business. I think he would
    2:09:39 be saying, look out for where private interests have used the state to protect themselves and
    2:09:47 clear away those types of barriers and let competition through prices guide outcomes.
    2:09:53 Yeah, so open up for more competition and allow for more signals from the market
    2:10:02 to drive decisions, which would actually naturally lead to cutting a lot of the bureaucracy of
    2:10:07 government. I think the other thing he would probably be arguing for is again, go back to
    2:10:14 the design of the minimum income or the negative income tax, that there’s a way he ultimately
    2:10:18 decided to run it through the tax system. The government’s already collecting this data.
    2:10:22 They already have your information and they can just send the money out through the system.
    2:10:27 Rather than having a social bureaucracy where you have to come in in person, you have to fill
    2:10:33 out forms, you have to document, do you own a car? What’s your income? Who lives in the household?
    2:10:42 I think he would say and his analysis of that was who that really benefited was the bureaucracy,
    2:10:48 that process, that paper that implemented those norms and that if you could pull that away,
    2:10:54 you could get help out where it was needed much quicker without having this drag of people doing
    2:10:58 sort of unproductive work of administering these systems. I think trying to cut administrative
    2:11:03 overhead and what he didn’t have then, which we have now, is the technology that we have and the
    2:11:11 ability to send benefits out via smartphone or just to move so much faster and to handle
    2:11:17 information on a mass scale so much faster. It’s painful, but I think one of the big things you
    2:11:22 can do is just that, which is digitalize. I don’t know if that’s a word, but just
    2:11:33 convert everything to where the speed of signal can be instantaneous. There’s no paperwork.
    2:11:41 It goes immediately. Then that means that the pricing signals and all these kinds of things
    2:11:45 are just immediately available to people. That seems to be the low-hanging fruit government,
    2:11:53 IT systems could be vastly improved. But that would result again with a lot of people getting
    2:12:02 fired. I think somebody submitted a question for me saying, “What are your thoughts as a person
    2:12:07 who cares about compassion? What are your thoughts about government employees, which there’s a lot of
    2:12:15 that are going to be hurt by doge?” It’s always a really difficult question.
    2:12:22 A lot of people get fired to make room for a new system that’s going to lead to a lot of pain.
    2:12:28 There is going to be a lot of pain. I don’t know what the solution is. I think that’s also part of
    2:12:35 why Friedman favored a minimum income. He talked about it being countercyclical. In other words,
    2:12:41 when things were really bad, the spending level on it would naturally go up. This is what economists
    2:12:48 today call an automatic stabilizer. Then when it’s not needed, the cost of it goes down.
    2:12:54 Maybe there’s a way to make it sweeten it with honey and have people take buyouts or things
    2:12:59 like that. That would certainly be a way better way to go. I did a podcast with Javier Malay.
    2:13:05 He has consistently praised Milton Friedman and cited him as one of his inspirations.
    2:13:11 So, what do you think Milton Friedman would say about what’s going on in Argentina and
    2:13:15 what Javier Malay is trying to do in Argentina? Yeah, I think he would appreciate it. I mean,
    2:13:22 I think Malay is much more of an Austrian-inspired thinker, but I think he definitely appreciates
    2:13:28 Friedman. On the macro level, Friedman always understood it’s really painful to treat inflation,
    2:13:35 but the more you put it off, the harder it is. So, I think he would be trying to get him,
    2:13:44 as he’s doing, to just message that short-term pain, long-term gain. I think he’d be very supportive.
    2:13:49 I think he’d be thrilled to see also that Malay is very good at explaining these abstract ideas
    2:13:53 and putting his policies in the framework of the bigger picture. That was really meaningful
    2:14:01 to Friedman. I don’t know how politically persuasive it is overall. Malay is very intense.
    2:14:07 He doesn’t have the same sort of gifts of salesmanship and sending people at ease that say
    2:14:11 someone like Ronald Reagan had, but it seems to be that’s what his country was calling for right
    2:14:22 now. Yeah, he has more chainsaw-less, more blanket. Javier recollects this line from
    2:14:26 Milton Friedman. I don’t know if this is accurate, but if you strive for equality over freedom,
    2:14:30 you often get neither, but if you strive for freedom, you often get both. Do you think
    2:14:39 there’s truth to this? I think on the big picture, definitely. We’ve seen focusing too much on
    2:14:47 equality. Because equality is such an alluring word, it can lead you to downgrade all kinds of
    2:14:52 other things that are really important. But I really think it depends on how you’re defining
    2:15:03 freedom. The statement is too big and too broad. If you’re talking about freedom, if by freedom,
    2:15:10 you mean not having to pay taxes if you’re successful, I think that can have all kinds
    2:15:15 of knock-on effects. The idea that people are able to prosper when they’re educated,
    2:15:20 where is education going to come from? How is that going to be paid for and supported?
    2:15:28 Again, to go back to night, if you’re generating too much inequality or people are feeling that
    2:15:33 you’re generating too much inequality, sometimes they value that more than they value freedom.
    2:15:41 I think there has to be more of a balance. It’s hard to make such global statements
    2:15:45 if you have to break them down into what actually do you mean. But again,
    2:15:50 Malay is coming from a very different context, a very different country that has seen
    2:15:56 so much upheaval, so much government intervention, so much inflation, so much political turmoil.
    2:16:01 He’s probably thinking about it differently than Friedman was thinking about it.
    2:16:08 There probably still is a real threat of hyperinflation. There seems to be a very high
    2:16:14 level of corruption or the capacity for corruption, so it’s a really messy situation.
    2:16:20 So, Javier Malay likes to recollect this great line from Milton Friedman, that if you strive for
    2:16:26 equality over freedom, you often get neither, but if you strive for freedom, you often get both.
    2:16:33 Do you think there’s truth to this? Yeah, I think in the macro, for sure. We’ve seen,
    2:16:40 if you really put equality as your goal, it’s such a seductive ideal, and people believe in
    2:16:46 it so much that they can carry out horrible crimes in the name of equality. But then,
    2:16:52 focusing on freedom, these words are too big. They’re so hard to define. So, I think you have
    2:16:58 to ask what is the freedom you’re talking about? If you’re talking about the freedom of ordinary
    2:17:04 people to be entrepreneurial, to make their own way, to start new things, to continue what
    2:17:08 they’re doing, to keep what they’ve earned, for sure, I think that can increase the equality
    2:17:15 overall. If you’re talking about lower taxes, if freedom is just a code for lower taxes, there has
    2:17:22 to be, I mean, lower taxes in general, great. But if you’re one of the top generators of wealth,
    2:17:29 there has to be some way to ensure that, say, education, people prosper when they’re well
    2:17:35 educated. That’s when economies do better. Education is generally state-funded, and you need some way
    2:17:41 to support that and provide for those institutions that structure society that make competition
    2:17:48 possible. So, I think it’s just a really broad statement. Again, Malay is coming from a really
    2:17:54 different context. He’s coming from the South American context from such upheaval, such economic
    2:18:00 devastation, in pursuit of the goal of equality that I think trying to rebalance with that emphasis
    2:18:05 on freedom, I definitely see where he’s coming from. If we can pivot a little bit. We’ve talked
    2:18:11 about Reagan. What are some interesting stories about how Milton Friedman navigated to Reagan,
    2:18:16 and maybe even the Nixon administrations, and how he was able to gain influence?
    2:18:22 Well, the Nixon administration is an interesting case because, so I’ve been talking about inflation
    2:18:29 and the different consequences it had. One consequence it had is that it began to undermine
    2:18:33 the Bretton Woods currency system that was established in the wake of World War II. Now,
    2:18:40 Bretton Woods, what it did basically, it ended up inadvertently putting the U.S. dollar at the
    2:18:45 center of the world economic system. But under Bretton Woods, countries of the industrialized
    2:18:52 West agreed to trade their currency in set ratios that governments sent. A franc was worth so many
    2:18:58 dollars or a German mark was worth so many francs. Then also under this system, countries could come
    2:19:05 to the United States and they could trade the dollars that they held for gold because the U.S.
    2:19:14 was on a modified gold standard. There was a ratio of gold to paper money. The system was set up
    2:19:21 and very quickly, most countries were, the dollar was at the heart of it and that the converting
    2:19:25 into and out of dollars was really the mechanism of trade for many of these countries. So,
    2:19:35 Friedman said, what we should have is floating exchange rates. This is an idea, again, of instead
    2:19:41 of having a top-down design of policy, an administered policy, we will have policy set by
    2:19:46 prices and usually be able to trade currencies on an open market. They should trade and they
    2:19:52 should fluctuate and that would be fine. Totally outlandish idea. But he was pinpointing the fact
    2:19:58 that Bretton Woods had an instability and that instability began to emerge in the time of inflation.
    2:20:08 So, you have more and more dollars being printed. They’re worth less and less. If European nations
    2:20:14 keep trading their currency for dollars, they’re going to be importing inflation into their own
    2:20:19 economies. So, they say, we don’t want these dollars, we’d like some gold instead and they
    2:20:26 have the right to go to the treasury, send in an order and get gold out. So, they start doing this
    2:20:32 more and more and it becomes, it’s called the gold drain and the United States starts running out of
    2:20:39 gold. They’re aware this is happening through the ’60s. They’re trying various things to fix it and
    2:20:47 when Nixon comes into office in ’68, Friedman sends him a memo and it says,
    2:20:57 “This is going to be a real problem.” He says something like, “This is a running sore and you
    2:21:05 have to lance it right away.” Some very graphic metaphor. Otherwise, it’s going to explode and
    2:21:13 Nixon just files the memo away. Nixon loved people to think he was influenced by and following the
    2:21:18 wisdom of Milton Friedman, but he didn’t actually want to do that. He just wanted the political
    2:21:27 benefit that came from it. So, then comes the moment where the US Treasury Department realizes
    2:21:33 we’re going to run out of gold. What should we do? Everybody de-camps to Camp David and Nixon
    2:21:40 decides we’re just going to stop redeeming currency for gold. It’s called slamming the
    2:21:47 gold window shut, done. He also, at that same meeting, decides to institute price controls.
    2:21:52 He does a whole bunch of stuff. It’s an emergency. He calls it the new economic plan,
    2:21:57 which is an unconscious echo of the Soviet new economic plan, so a problematic name,
    2:22:02 a problematic policy. Friedman is livid at the price controls, but he’s like,
    2:22:07 “Actually, it’s great that you closed the gold window. Let’s go all the way to floating exchange
    2:22:14 rates.” This idea was heresy within the Treasury Department. Everyone’s very committed to the
    2:22:19 idea of the gold standard, convertibility, possibility, the United States at the court,
    2:22:24 the financial system, kind of hem and haw. But at this point, Friedman has a very close
    2:22:31 relationship with George Schultz. George Schultz is a high-level appointee who will eventually,
    2:22:35 over the course of the Nixon administration, become the Treasury Secretary. So,
    2:22:42 Friedman is feeding Schultz all his ideas about how we should move to floating exchange rates,
    2:22:48 how we shouldn’t try to reconstruct Bretton Woods. The people in Treasury, it’s funny because I’ve
    2:22:51 read some of their accounts, and actually Paul Volcker is in the Treasury Department at this
    2:22:57 time. He can sense that Friedman is in here somewhere, like feeding his boss ideas. He
    2:23:02 doesn’t quite know. In the oral history, Schultz talks about this quite a bit.
    2:23:09 So, at any rate, Friedman exerts this behind-the-scenes influence, and what Schultz does is just let’s
    2:23:17 Bretton Woods fade away. He doesn’t make grand pronouncements. It just slowly, the world shifts
    2:23:24 to a regime. For a while, it was like a regime of steady prices, and then they call it a steady
    2:23:28 regime of changing prices or whatever. The language changes, the reality changes, and they kind of end
    2:23:33 up where they are. So, that’s a real measure of Friedman’s influence. If there had been another
    2:23:38 economist in Schultz’s ear that said, “No, catastrophe is imminent. We have to go back to
    2:23:42 Bretton Woods,” he probably would have worked harder. The U.S. government would have worked
    2:23:49 harder. That becomes one of these pieces of globalization. What people don’t realize is
    2:23:54 there used to be, in addition to these floating set capital ratios, you couldn’t bring capital
    2:23:58 in and out of different countries. You had to register. You couldn’t invest. All these
    2:24:03 rules and strictures and the falling of Bretton Woods really blows that all open. It’s a precursor
    2:24:10 to globalization. So, Friedman is right there. Now, he’s very ambivalent about Nixon. I mean,
    2:24:14 he sees that Nixon is not an honest person. He thinks he’s very intelligent,
    2:24:22 and Nixon’s dream is to create a new centrist majority. So, he does many things to go back on
    2:24:27 his supposed economic principles and ideals. So, Friedman does not like this. He doesn’t
    2:24:32 like the price controls. He’s in communication with his old mentor, Arthur Burns, who’s now
    2:24:37 the chair of the Federal Reserve, and Burns is basically doing everything wrong in monetary
    2:24:42 policy. And I described this in the book in some detail, these anguished letters back and forth.
    2:24:50 And basically, as I see it, Burns doesn’t have a solid theory of inflation. And the more Friedman
    2:24:55 pushes him, it’s almost like Burns is willfully ignoring Friedman and kind of doing the opposite
    2:25:00 of what Friedman says. So, Burns is running a very loose monetary policy. Inflation is quite
    2:25:04 considerable over the ’70s. I mean, we were all spooked by what did it get to 6% something like
    2:25:11 that recently for a very short time. This is inflation going over 10%, hovering an 8% for
    2:25:15 basically the whole decade of the ’70s. It’s going up and down with extremely elevated rates.
    2:25:21 And so, the Carter presidency largely falls, foreign policy is a big part of it, but the
    2:25:25 failure to tame inflation is part of it. And then Reagan comes in. And now,
    2:25:31 Reagan loves Friedman and Friedman loves Reagan. Very mutual feeling. The Reagan administration
    2:25:37 creates an advisory economic board. Friedman’s on it. He’s retired now. He’s entering his golden
    2:25:44 years. But he really has Reagan’s ear. And here, what he does is he convinces Reagan of his theory
    2:25:50 of inflation, which is inflation has been caused. It’s a monetary phenomenon that has been caused
    2:25:59 by bad monetary policy. Inflation has an accelerating dynamic. The only way to end inflation is by
    2:26:04 really showing and signaling that government policy has changed. And when you do that,
    2:26:10 it’s very painful for a short amount of time. People will suffer. But then, you will come out on the
    2:26:17 other side into stable prices. And this is what you need for economic prosperity. So, the man who
    2:26:27 implements this policy, Paul Volcker, is definitely influenced by Friedman. He even buys Friedman’s
    2:26:33 specific technique of the monetary growth rule and of the focus on monetary aggregates, which
    2:26:38 Friedman has said, right, money matters. Aggregates matter. And that’s what money is. Pretty quickly,
    2:26:45 Volcker finds that because of inflation and the financial deregulation and response to it,
    2:26:50 the aggregates don’t work the way Friedman said they would. And so, the specific policy Friedman
    2:26:56 recommends. Volcker tries it for a year or so. It doesn’t work super well. But what does work
    2:27:03 is letting interest rates go high, go above inflation to a point where both the general
    2:27:07 citizenry and the financial markets believe like, oh, they’re actually serious about inflation.
    2:27:11 And because we’ve had a decade of inflation with all these presidents saying,
    2:27:17 forward, we’re going to whip inflation now, that monetary policy has lost credibility.
    2:27:21 This is why people focus so much on credibility today, because once it’s lost,
    2:27:25 it’s really hard to get it back. And one way Volcker gets it back is interest rates over 20%.
    2:27:33 Unemployment, very high, as high as 25% in construction sectors. And as this is happening,
    2:27:37 Milton Friedman is whispering in Reagan’s ear, this is the right thing.
    2:27:43 Stay the course. This is going to work. Now, interestingly, he hates Volcker. Volcker hates
    2:27:48 him. And Friedman will never give Volcker credit for this policy, but he will give Reagan credit
    2:27:57 for this policy. But he owes credit himself for keeping Reagan from wobbling on this policy and
    2:28:02 just pushing it through. And he also tells Reagan very pragmatically, you better do this now.
    2:28:06 You’ve got a four-year term. Do this in the first two years of your term.
    2:28:11 Things will have turned around by 1984 when you run for reelection and you’ll benefit from it. And
    2:28:16 that’s absolutely what happens. If we could take a small tangent. Sort of a question I have to ask
    2:28:21 about, this is so much of Bretton Woods and maybe the gold standard, maybe just
    2:28:27 have a general discussion about this whole space of ideas. There’s a lot of people today that care
    2:28:35 about cryptocurrency. What do you think that Milton Friedman would say about cryptocurrency and
    2:28:43 what role crypto might play in the economy, whether he would be for this idea,
    2:28:51 against this idea. And if we could look at it for today and also just 10, 100 years from now.
    2:28:57 There’s a clip, I think it’s in 1992 where people say, oh, Friedman predicted cryptocurrencies
    2:29:03 because he’s talking about how payments will eventually be electronic. So in some ways,
    2:29:07 he definitely, as he was looking at the computer and money, he knew these would come together in
    2:29:15 some way. I think he probably would see a use case for a crypto. He definitely would not buy
    2:29:22 the stronger forms, I think of crypto ideology in which we could be heading towards a future
    2:29:25 in which there’s many different currencies that compete or that are distributed or there’s a
    2:29:31 stateless currency. And he addresses this very, very clearly because a Hayek’s denationalization
    2:29:37 of money, it’s a paper in the late ’70s, Hayek argues for this kind of competing currency model
    2:29:42 or regime. And so he’s responding to that. He’s responding to people writing about free banking.
    2:29:48 And he basically says, look, even if you developed a variety of competing currencies,
    2:29:53 eventually society would converge on one. And that’s because people just want one currency
    2:29:57 that they know they don’t want a bunch of different options. Even in places where there have been
    2:30:03 options to do that, they’ve been used very minimally. And then he says, secondly, the state always steps
    2:30:09 in. He says, technically, theoretically, it doesn’t have to, I could draw you a model. I could tell
    2:30:14 you about how it could work without the state. But in actual reality, all human societies,
    2:30:21 through time and space, the state eventually becomes involved in the provision of money because it has
    2:30:27 so many knock-on effects to so many people. So sure, I think he would, again, find a use case
    2:30:32 for crypto. I think it’s interesting, but I don’t think he would see it as this is going to display
    2:30:38 state money, and we’re going to have a variety of distributed currencies. The other thing he
    2:30:46 really stresses is that a change in a monetary system, it only happens amid great, great crisis.
    2:30:52 So again, you see in countries where the state is not controlling the money well, right? That’s when
    2:30:57 people are more turning to crypto. But he says, because money is so fundamental, they’re going to
    2:31:05 be so much political pressure on any country that gets the currency profoundly wrong that the government
    2:31:09 will fall and another one will replace it, right? So if you look at episodes of hyperinflation,
    2:31:14 they don’t go on very long because they’re so upsetting to people.
    2:31:19 If we can go back in time, we’ve talked about it a bunch, but it’s still a fascinating time.
    2:31:27 The Great Depression, the University of Chicago, there’s these folks like Jacob Weiner, Frank Knight,
    2:31:31 Henry Simons, all of these influence the thinking of Milton Friedman.
    2:31:38 There’s this Room 7 situation in the University of Chicago. Just going back there,
    2:31:44 even just speaking almost philosophically, what does it take to explore ideas together,
    2:31:49 sort of like deliberate, argue in that space? And maybe there might be interesting stories
    2:31:55 about that time. It would just be interesting to understand how somebody like Milton Friedman
    2:32:04 forms. The seed is planted and the flower blooms. Yeah, yeah. So he gets to University of Chicago,
    2:32:11 he makes fast friends. And in his third and fourth year, they become what I call them the Room 7 gang.
    2:32:16 So Room 7 is they find an old store room in the basement, they take it over, and that’s where
    2:32:22 they have their jam sessions. And what made this world come together was Frank Knight. There was
    2:32:28 a charismatic leader, and there are a bunch of acolytes who clustered around him. That, I think,
    2:32:33 was a key piece of the ingredient. And then there was a sense that they were on to something that
    2:32:38 the rest of the economics field had forgotten or was rejecting. So there was that sense of mission.
    2:32:45 So that seems to have been, there was a formal education piece. And then there was a parallel
    2:32:52 education piece rooted in admiration for a thinker, a shared admiration. And then what that led Friedman
    2:33:00 to do, what I found syllabi that he had from non-economics courses, lists of books, and he’d
    2:33:05 written the prices of different ones he wanted to read. So he had John Stuart Mill on Liberty,
    2:33:11 like 50 cents, written in the margin. So he began to educate himself. He gave himself a parallel
    2:33:16 curriculum alongside this very formal economics curriculum. He started reading the traditions
    2:33:21 of political liberalism and then talking them through with friends and then developing a shared
    2:33:28 sense of mission. And the incredible thing is, of those friends in the group, they scattered
    2:33:32 for like 10 years, and then they all came back together. George Stigler, his great friend,
    2:33:40 was hired at Chicago. Aaron Director, who was his wife’s brother, was at Chicago. So many of these
    2:33:45 people continued. He became Frank Knight’s colleague. So that was the base. That was what
    2:33:52 really grew him, that really profound peer group. Now, the other piece I talk about a lot is,
    2:33:58 Friedman was a collaborator, an open-minded collaborator, and he had incredible connections
    2:34:04 with economists who were women. And he basically found, first in the figure of Anna Schwartz,
    2:34:09 later in the figure of this group of women who were his wife’s friends, this kind of untapped
    2:34:14 pool of talent. And so he immersed himself in this whole other world of consumption economics,
    2:34:21 and that resulted in his more technical work on a theory of the consumption function,
    2:34:26 which is the theory of permanent income. So for Friedman, intellectual work and intellectual
    2:34:33 production was always done in this very social context, in a context that blended like friendship
    2:34:40 and intellectual partnership. And he only had a handful of friends who were not also economists
    2:34:47 interested in the same questions he was. So he just lived and breathed ideas all day long.
    2:34:51 Can you speak to the jam sessions? Like, what do we know about the jam sessions? What are we
    2:34:55 talking about here? You’re sitting in the room. Are they analyzing, are they reading papers and
    2:34:59 discussing papers, or are they arguing more like over beers kind of situation?
    2:35:06 Yeah, more arguing over beers. And in this case, there are several people who say it was all about
    2:35:12 Frank Knight. What did he say? What did he mean when he said it? Is he right? And so Knight was
    2:35:17 very, he would say one thing and then say another. If you read him, it’s very hard to follow what he’s
    2:35:22 actually saying because he’s full of qualifications and ironies, it blends. And so he would throw out
    2:35:27 these pieces, and then the students would kind of clutch at them, and then they would come back
    2:35:32 together and try to assemble this sort of worldview. And then Frank Knight fell into this terrible
    2:35:38 depression, and to cheer him up, they planned a big party. And they went back through all of his
    2:35:43 previous writings, and they assembled them into a book that was published. This is the Ethics of
    2:35:48 Competition. And you can read the introduction written in part by Milton Friedman. So not only
    2:35:52 were they talking about Knight and what he said, but then they started pouring over his work.
    2:35:57 And one of them described it as like a general equilibrium system where you had to know all
    2:36:02 the parts, and then all of a sudden it all fit together in a whole. So if we step back, what
    2:36:08 they were doing was getting inside the mind of a great thinker and understanding the ways that all
    2:36:14 fit together, and then kind of testing their ideas against Knights. And what’s fascinating is one of
    2:36:21 the first papers that Friedman publishes in Statistics is a rebuttal of Frank Knight. He
    2:36:27 publishes a rebuttal of Frank Knight’s ideas about risk and uncertainty. And Frank Knight,
    2:36:33 he kind of took a black swan argument. He said, “Risk, you can calculate uncertainty. You can’t,
    2:36:39 like, existentially, philosophically. You can’t get your hands around it. It is the black swan.”
    2:36:44 And Friedman publishes this statistical paper, and he says, “I can put uncertainty on a graph.”
    2:36:50 And so there’s that sort of Freudian killing of the father element when he comes back, and he will
    2:36:57 in some ways turn his back on Knight’s approach and Knight’s pessimism even while it’s like a
    2:37:03 foundation of his thinking. Fascinating. Is there something you could say about the thinking process
    2:37:10 that Milton Friedman followed, like how he developed his ideas? You mentioned there’s a
    2:37:18 strong collaborative component, but there’s another story I saw about that I think his son
    2:37:24 recalled about the argument number system that you mentioned, which I, by the way, if you’re
    2:37:29 going to explain that as a tangent of a tangent, that’s really awesome. I think it’s like number
    2:37:37 one if the other person is right. Number two means you were right and I was wrong. And the number
    2:37:42 system evolved in some ways to be quick and efficient, but in other ways, they also were
    2:37:47 really clear about it. So, you know, something like there’s kind of like three reasons behind it.
    2:37:54 First is if you use a number, it reminds the listener that it’s really hard to say the words
    2:38:00 I was wrong. So, you’re kind of calling on their sympathy by using the number, reminding them that
    2:38:07 you’re doing a hard thing. And then it’s also reminding them that you’re in this family with
    2:38:14 this code. And so you’re signaling your membership and your closeness and your love, really. And
    2:38:18 so it’s supposed to be an easy way to disagree without like breaking the relationship. Yeah. So,
    2:38:25 admitting you’re wrong now comes with this warm fuzzy feeling. Yeah. Yeah. And that’s really, I mean,
    2:38:34 that’s so powerful. I think so much of the friction of human interaction can be boiled down to
    2:38:38 just not being able to admit that you’re wrong, efficiently and quickly and regularly and just
    2:38:44 often. And to be able to do that, that’s really, that’s really powerful. Yeah. I think it’s a really,
    2:38:50 a really neat aspect of their family life for sure. That’s a fun story, but like can we just
    2:38:57 generalize to how he engaged in collaboration, how he developed his ideas as they’re like a
    2:39:02 thinking process. So, he taught at the University of Chicago and he tended to teach for six months
    2:39:08 and then have six months off. And he spent the summers in New Hampshire or Vermont. He had a,
    2:39:12 right near that border, they had two different houses. And that to him was the deep thinking time.
    2:39:19 And so, when he’s at Chicago, he’s teaching, he’s arguing, you know, some people love his
    2:39:25 teaching style very much in charge, very much keeping students on their toes, confrontational,
    2:39:30 others found it too much overwhelming, kind of shut them down intellectually and they couldn’t,
    2:39:36 they couldn’t cope with it. And so, I think it was kind of go time when he was teaching. In that
    2:39:41 case, he was, that was a lot of social time interacting, talking, other professors, going
    2:39:46 out and giving papers, arguing with the people at Yale or Harvard. Then he would go and do these
    2:39:51 very deep dives over the summer. He would also regularly do these trips to New York
    2:39:55 to see Anna Schwartz. So, it was a 12-year collaborator. They didn’t have, phone calls were
    2:39:59 really expensive. They did have quite an extensive correspondence, but then they would do these
    2:40:04 meetings. So, he would basically come in at the beginning of the summer, going to Raleigh,
    2:40:09 stop in New York, see Schwartz, and then again on the way back to Chicago. So, you’d have these
    2:40:13 deep check-ins at that point. The other thing that happened is people would come visit him
    2:40:18 in New Hampshire. And so, he would have these, he had a studio separate from the house. He would
    2:40:23 go and he would work. And then at night, his friends would come. His friends were all economists.
    2:40:27 There’s a whole like cluster of economists. They all clustered within driving distance of the
    2:40:32 Dartmouth Library so that they could get their hands on books. And so, they would come over and
    2:40:38 then they would argue and talk into the night. So, I think he did need that deep focus time.
    2:40:44 But it was not like, he also lived a very engaged, very embedded social life.
    2:40:51 A part of which was his marriage. Is there something you could say about love, about marriage,
    2:40:56 about relationship? They made the whole thing work because it was vibrant and they wrote a
    2:41:00 biography together. They did. I mean, they were very complimentary. They were kind of the ying and
    2:41:09 the yang. She was very introverted, somewhat suspicious of others, skeptical. And he was
    2:41:17 extremely extroverted, optimistic, high energy. And they also were at a time when it was really
    2:41:21 clear like for a broader society, these are the roles of a man. These are the roles of a woman.
    2:41:28 And they pretty much adopted those. Now, Rose Friedman did some very important economic work.
    2:41:32 She’s part of the early stages of the theory of the consumption function. She didn’t complete her
    2:41:38 degree because she really knew there wasn’t, if she wanted to be married and have children in the
    2:41:44 world she lived in, there wasn’t a real pathway to also being an economist. I do think that a lot of
    2:41:50 that, it’s not, although it feels very gendered, like he’s the man out in the world and she’s in
    2:41:54 private. It’s interesting because her brother, Aaron, director was the same way. He was very
    2:41:59 private man, very shy, very introverted. And he exerted this quiet intellectual influence on
    2:42:05 all of his friends. So I think that was just kind of a family treat of being more quiet,
    2:42:09 preferring to be behind the scenes. It wouldn’t have worked any other way. Because Friedman was
    2:42:17 so out there, so extroverted. And there’s a bit of a sad thing she said. She said,
    2:42:22 “When I married Milton, I lost half of my conversations. When David came along, I lost the
    2:42:29 other half.” So it was a household that was just dominated by male voices in which she didn’t have
    2:42:34 a lot of room. What was tricky for me in my research is she didn’t leave much of a trace.
    2:42:39 She put together Milton Friedman’s archive and she took herself out of it. So I really
    2:42:44 had trouble finding her actual voice in the historical documents. And she didn’t want to
    2:42:50 leave that behind. So it’s an absolutely essential piece of his success because she’s the one who
    2:42:55 pushed him to do the Newsweek column, to do Free to Choose. And she really wrote Capitalism and
    2:43:01 Freedom. She took all his random notes and she put them together into a book. And that became this
    2:43:08 kind of testimony of his ideas. But she shared many of his ideas. And she, without… When I think
    2:43:12 of Friedman, if you take away… Anna Schwartz, if you take away Rose Friedman, if you take away
    2:43:17 the other woman who collaborated with him, you have a much thinner resume than the one he actually
    2:43:25 has. Yeah, it’s always sad. It always makes me wonder about the private secret conversations
    2:43:32 between partners. Yeah. Because they’re… They might not show up in the record, but they probably
    2:43:39 influence the person more than almost anything else. Those quiet little conversations. Yeah.
    2:43:49 If we can switch our path to another great mind of the 20th century,
    2:43:55 Ayn Rand. We talked about some of the similarities here about them being fighters for freedom and
    2:44:03 fighters for capitalism. What is Ayn Rand’s philosophy if you can give a big 10-thousand
    2:44:08 summary of Objectivism? Yeah. So she called it Objectivism. China, she used to do this thing
    2:44:15 like, I can stand on one foot and say it. So it goes something like epistemology,
    2:44:21 reason, ethics, selfishness, politics, capitalism. That was kind of how she summarized it. So
    2:44:26 what she did, there’s a couple things she did with Objectivism. First of all, she says the key
    2:44:32 defining element of humanity is rationalism, the rational faculty. So that’s what defines
    2:44:38 what humanity is. Therefore, there is an objective reality that we can access and know with our
    2:44:45 reason. That’s the objective of epistemology. And the one social and economic system that lets
    2:44:53 rationality flower and is based upon rationality is capitalism. And then rationality only works
    2:45:01 in her view as an individual capacity. And that rationality teaches that what you should do is
    2:45:11 pursue your interests. And so she ends up calling that selfishness. Now, it’s tricky because selfishness
    2:45:18 has so many strong and negative connotations. And she meant, I think, something closer to
    2:45:27 like self-actualization because she really tried to create this idea and express the idea that
    2:45:35 to be truly selfish did not mean trampling on others, it meant just being motivated by your
    2:45:44 own kind of internal measures and metrics. And so in her fiction, she tries to show this by
    2:45:50 showing the false selfishness of some of Peter Keating, who’s an architect who kind of steps
    2:45:55 over everybody to advance his career. And she says it’s not true selfishness because true selfishness
    2:46:01 would recognize it’s false to take others’ work and pass it off as your own. Now, the other big
    2:46:10 piece of objectivism is a very approach that’s really inspired and related to Friedrich Nietzsche’s
    2:46:19 idea of revaluing values or a genealogy of morals. And so she says, what’s happened here is
    2:46:26 Western culture has converged on this idea of altruism as good, being selfless and altruistic
    2:46:34 is good. And this has led us to communism and has led us to devalue the individual in favor of the
    2:46:40 collective. So what we need is a new moral code which elevates selfishness, which elevates the
    2:46:45 individual and which takes all the things that we have been told are bad and actually says their
    2:46:50 values. This is what she’s trying to do with objectivism. I mean, it is about as ambitious
    2:46:57 of an intellectual project as there can be. And that’s what really draws people in. Yet at the
    2:47:04 same time, she’s flying in the face of the way human morals and ethics and societies have evolved.
    2:47:09 And she’s not able to single-handedly recreate them the way she wants them to be.
    2:47:14 Yeah, I mean, she’s not doing herself any favors by taking on the words and trying to rebrand them
    2:47:21 completely, like writing the virtue of selfishness. It’s like, can we just call it self-actualization?
    2:47:26 There’s a negative connotation to selfishness and a positive connotation to altruism.
    2:47:34 So she sometimes it seems takes on the hardest possible form of argument.
    2:47:41 Yeah, I mean, she had a student who ended up being very close to her, Nathaniel Brandon,
    2:47:44 and he was the Reverend Advisor, and he said, “Can you please not use selfishness,
    2:47:49 like just come up with another word.” But part of her liked it. Part of her wanted to provoke
    2:47:51 and unsettle. She didn’t want to give that up.
    2:48:02 I mean, people should listen to her public talks. Her whole aura, the way of being is
    2:48:08 provocative. And she’s a real powerhouse of an intellectual. So she loves the challenge.
    2:48:16 And that just listening to her in itself is just inspiring, because you could see the individualism
    2:48:23 radiate from her. Yeah, I mean, that was one of the things I found in researching and writing
    2:48:29 about her. She’s an incredibly unusual human being. And so that was her strength, right?
    2:48:34 Because she’s so unusual, but it was also her downfall, because she looked to herself as a model
    2:48:40 or to get insight about humanity. And she never quite processed how different she was from other
    2:48:48 people. So just because we talked about Milton Friedman so much, can we just return to what to
    2:48:54 you, given everything we’ve said, because the interesting difference is about Ayn Rand, her ideas
    2:49:04 related to Milton Friedman. Yeah, I mean, broadly, we could put Milton Friedman and Ayn Rand in
    2:49:14 some sort of category together, but she has this focus on ethics and rationality and this desire
    2:49:21 to be revolutionary. That’s much stronger than Friedman. Friedman wanted to overthrow the economic
    2:49:28 consensus. He didn’t want to overturn the moral basis of Western society. So she’s also, she does
    2:49:33 something. So in one of Frank Knight’s essays, he talks about the ethics of competition. And he
    2:49:40 says, you basically cannot build an ethics out of competition, because it would be monstrous to do so,
    2:49:47 because it would say the winner of this competition is ethically right. And that would open the door
    2:49:51 to sort of might makes right. And this is what Friedman struggles with. And he says, I can’t
    2:49:56 take capitalist outcomes as ethical unto themselves. I can’t do it. It doesn’t feel right.
    2:50:01 And there’s this line where Frank Knight says, no one would ever do this. And I was like, oh,
    2:50:07 Frank Knight, you haven’t read Ayn Rand yet. You’re a little too early, because that’s what she does.
    2:50:13 She takes the outcomes of capitalism and of marked competition and says, these have ethical meaning.
    2:50:20 And this is where ethical meaning inheres. And it is ethical to try to succeed and to succeed in a
    2:50:25 capitalist society. Now, what she’s able to do is create a fictional world in which people succeed
    2:50:31 in her fictional capitalist world through ethical behavior. And so she doesn’t really have to wrestle
    2:50:38 with a capitalist world in which people succeed through fraud and corruption and all the other
    2:50:43 things that might go into someone’s success. She creates the best possible take on success under
    2:50:49 capitalism. And then she holds that up as an ideal. And I think what’s important is that so few people
    2:50:55 have done that. And she comes at a time when everybody is emphasizing the downsides of capitalism.
    2:50:59 And she says, there’s another way to look at it. Here are the good sides of capitalism.
    2:51:04 And like you said, she was operating, which I really loved the phrasing of that in the mythic
    2:51:12 register. So she was constructing these characters, these capitalists that are like the highest form,
    2:51:20 these great heroic figures, almost romanticizing them. You mentioned We The Living as one of the
    2:51:28 books that you like of hers the most. But can we stay in the mythic register with the Fountainhead
    2:51:35 and Alice Rugg? What to you are some sort of memorable, inspiring moments, insightful moments
    2:51:44 from those books that may be scenes or ideas that you take away from them that are important for
    2:51:54 people to understand? Yeah. So the Fountainhead is this story of a struggling architect, Howard Rourke,
    2:51:59 and she kind of follows his life and his career. And the message is really,
    2:52:06 it’s a version of “to thine own self be true,” right? And Rourke’s designs are too avant-garde.
    2:52:13 Nobody appreciates him. And he just keeps doing what he wants to do and is just focused on his
    2:52:18 own visions, his own genius. I think that’s been really inspiring to kind of creators of all types.
    2:52:24 I think it’s barely unrealistic as a portrait of human achievement, but it’s an aspirational idea.
    2:52:31 I mean, one phrase that comes to mind is there’s a character, I forget which one, who is in some
    2:52:34 sort of adversarial relationship with Howard Rourke and says something to him like, “Well,
    2:52:41 Mr. Rourke, what do you think of me?” And Rourke says, “I don’t think of you.” And that to Rand
    2:52:47 was the ideal. You’re not thinking of other people. You’re an island unto yourself. You’re
    2:52:53 focused on your own goals, your own capacities. And you’re not doing it to impress other people
    2:52:57 or to be better than other people or to dominate other people. You’re doing it because you’re
    2:53:04 expressing your inner soul in a way. So that has been very captivating to so many. And The
    2:53:08 Fountainhead is one of those books we talked about that causes people rooted and they make
    2:53:13 changes in their life or they feel called to their higher self. And I think there’s also
    2:53:18 the scene where Rourke, with the Dean of Architecture at school that’s speaking to what
    2:53:25 you’re saying, I think to me is inspiring. So this is the Dean of Architecture that expels Rourke
    2:53:31 and it brings him into a meeting thinking Rourke will plead for a second chance. And the Dean says
    2:53:37 that Rourke’s work is contrary to every principle we have tried to teach you, contrary to all
    2:53:43 established precedents and traditions of art. Do you mean to tell me that you’re thinking seriously
    2:53:50 of building that way when and if you are an architect? And then in a gangster-like way,
    2:53:56 Rourke says yes. And then Dean asks, my dear fellow, who will let you? And Rourke replies,
    2:54:03 that’s not the point. The point is, who will stop me? Yes. I mean, Rand’s coming from communist
    2:54:09 Russia, but it has a bit of the don’t mess with Texas flavor. I might say that really resonates
    2:54:15 with this idea of anyone who’s felt like they’re fighting the powers that be. Yeah, it’s interesting.
    2:54:21 I thought you might be going to the quote where he says something like, I inherit no tradition.
    2:54:27 I stand at the beginning of one. And I really think Rand’s thinking about herself when she says
    2:54:32 that. She inherits nothing. She stands at the start. But the fountainhead comes out in the
    2:54:38 middle of World War II and it’s not expecting Rand as an unknown writer. This is kind of a strange
    2:54:44 book. It’s a classic story. It’s turned down by 12 publishers before one takes a chance on it. And
    2:54:50 Rand really loved this story. The editor who read it said, this book is great. And his boss said,
    2:54:57 no. And he said, if you don’t take this book, I’m quitting. And so she idolized him for doing that.
    2:55:03 So they print it and it becomes a bestseller just through word of mouth. So it’s not advertised,
    2:55:08 it gets one good book review, but people tell each other how much they like this book. And it
    2:55:13 keeps printing and selling out printings. It’s made into a movie. And so it lands in this time
    2:55:18 when Americans are engaged in this great collective endeavor of World War II. They’re making all kinds
    2:55:23 of sacrifices for the collective. And I think paradoxically, as they do that, they’re drawn
    2:55:28 to this vision of someone who doesn’t have to compromise at all, who is leading their life
    2:55:32 exactly as they want to. Meanwhile, they might be sleeping on an ocean liner because they’ve
    2:55:35 been drafted to fight in this war. And they’re reading The Fountainhead and they’re feeling
    2:55:40 better about themselves. And so it’s also really interesting. The Fountainhead is hugely popular
    2:55:45 in India, which is fascinating. And talk to people about this. And they basically say,
    2:55:51 this book comes like a breath of fresh air into a very traditional and conformist culture. And
    2:55:55 people just latch onto it and they love it. And it gives them that feeling of freedom and
    2:56:00 possibility that they’re hoping for. Yeah, I mean, it really is a book. Alice
    2:56:06 Shrugged can be a bit of that too, but it’s more the philosophy of objectivism and the details
    2:56:12 and the nuance of that seeps into Alice Shrugged. The Fountainhead is very much like a thing that
    2:56:18 makes you change the path of your life. Yeah. And that, I mean, that’s beautiful to see that books
    2:56:25 can have that power. And Rand knew that she was doing that and she knew what she was doing.
    2:56:31 This wasn’t an accident. And people say, oh, she’s a bad writer. Oh, her characters are so heavy-handed.
    2:56:36 You know, she started as a screenwriter. She started as someone who analyzed films
    2:56:42 for movie studios. She knew exactly how to manipulate plot and character and drama.
    2:56:47 And she also knew that she was writing. You know, people say, oh, Rand is for, you know,
    2:56:51 adolescence. Adolescent teenagers love Rand. And that’s kind of who she was writing for. And she
    2:56:54 said, you know, I’m writing for people as they start out on their life and they’re thinking
    2:57:01 about who they want to be. So she’s not writing, you know, for the weary middle age. She’s writing
    2:57:05 for the young who are looking for inspiration. You know, people say that to me sometimes about
    2:57:12 certain books like Rand, but also about the alchemist. I know a lot of people for whom the
    2:57:17 alchemist and their adults and their brilliant people, the alchemist changed our life. And the
    2:57:25 same can be said about the fountainhead. And I sometimes get criticized for using words that
    2:57:34 are too simple. I think simple words can have power. And the cliche thing sometimes needs to
    2:57:43 be said. And sometimes, if effectively needs to be said in an over-the-top way in the mythic
    2:57:48 register, because that’s the thing that resonates with us. Because we are like heroes of our own
    2:57:54 story. And we need to hear that message sometimes to take the bold step, to take the risk, to take
    2:58:00 the leap. Yeah. And I mean, the other thing, she knew she was doing kind of propaganda in a way.
    2:58:04 She was like, I’m doing pro-capitalist propaganda. She has a degree from the University of Leningrad.
    2:58:09 You know, she’s raised up in Soviet Russia. She said, we need to present the case for the other
    2:58:15 side in the same way. And that’s what she did. Why do you think she’s so divisive? People either
    2:58:22 love her or hate her? I mean, I think it’s because of that purity that I’m willing to say,
    2:58:29 sort of you get what you deserve, and that kind of lack of charity. And part of that in her work
    2:58:35 is because she creates this fictional world where she can set everything up just so. And so you don’t
    2:58:43 have contingency or accident or bad luck. Or you don’t really have a lot of children. You don’t
    2:58:49 have handicapped people. You just have this idealized world. And I think it’s really infuriating
    2:58:55 for people who feel that’s so inaccurate. How can you be depriving a social theory and philosophy
    2:59:00 around this? And how can you be missing what seems to many people she’s missing the kind of
    2:59:07 ethical instinct or the altruistic or charitable instinct? And so they just become enraged at
    2:59:12 that. And they don’t want to see anyone go that far. And they’re outraged that someone went that far.
    2:59:19 Did the thing that Frank Knight said no one would do. Yeah, it’s just it’s very unsettling.
    2:59:25 Would you say that’s her main blind spot? The main flaw of objectivism is just
    2:59:33 how black and white it paints the world. Or if not, what would you say are the flaws of objectivism?
    2:59:40 So, I mean, the big flaw is that it’s justified through a fictional world. It’s not justified
    2:59:49 through reference to the real world. It’s not empirical in a way. And Rand herself would say
    2:59:56 that she’s not writing about things how they are, but how they should be. And so that idealism
    3:00:04 just really undermines it as a mechanism to understand where we’re actually living.
    3:00:10 And that is a big contrast with Milton Friedman who would focus on how things are versus how
    3:00:16 things should be. And then I think it’s the problem of elevating rationality or any other
    3:00:21 mode of insight or thinking. And so what happens in Rand’s life when I describe this in some detail
    3:00:29 in the book is she essentially creates a cult of reason around her. And people who are drawn into
    3:00:34 this cult, it’s called the collective. It’s a group of young people in New York City who are
    3:00:39 drawn to her work. And she’s already famous, but she’s writing Atlas Shrugged. And so she’s sharing
    3:00:45 drafts of Atlas Shrugged as she goes along. And one of the members of the collective
    3:00:50 to bring all of this together is Alan Greenspan, later behead of the Federal Reserve. And he’s
    3:00:55 incredibly taken with her. He’s one of these people who says, I was a narrow technical thinker. I
    3:01:00 never thought about ethics or politics or anything bigger until I met Ein Rand. And she really opened
    3:01:06 my mind. He’s part of this tight-knit group. But in this tight-knit group, they think of themselves,
    3:01:10 we are all individualists. We’re dedicated to individualism and capitalism. We’re different
    3:01:17 than everybody else. Over time, they all come to share Ein Rand’s views and opinions on everything
    3:01:23 from music to art to clothes. She gets a dining room table and a bunch of them get the same dining
    3:01:30 room table. And it becomes incredibly conformist because they’ve all believed they’re acting
    3:01:36 rationally. And they believe that to act rationally is to agree with Ein Rand. And they believe there’s
    3:01:43 no other way to make decisions than rationality. And so to just agree with her is to be irrational.
    3:01:48 They don’t want to be irrational. So people get really caught up in this very damaging
    3:01:56 cult-like circle around her. Plus, for a cult of reason, they get awfully emotional when there’s
    3:02:06 any disagreement with Ein Rand. I mean, it’s kind of hilarious. It’s absurd. But it’s also beautiful
    3:02:12 to watch this singular figure. We’ve talked about several singular figures in Like Frank, right?
    3:02:19 That shakes up the world with her ideas. And of course, it would form a cult. And of course,
    3:02:22 that cult would be full of contradictions and hypocrisies.
    3:02:28 Yeah, I mean, it’s amazing. So Murray Rothbard is a famous anarchist, falls into the Ein Rand cult.
    3:02:36 And then he disagrees. And there’s some type of show trial where he’s told he’s wrong about
    3:02:41 everything. And then he has a little sort of pseudo cult of his own. And two of his cult members
    3:02:50 switch over to Ein Rand. And then one of them, to gesture their breaking of their relationship,
    3:02:56 mails him a dollar bill that’s been torn in half. I mean, this is high theatrics, right?
    3:03:05 Okay, sticking on the drama and the theatrics. Who was Nathaniel Brandon? Can you take me through
    3:03:11 the arc of Ein Rand’s relationship with Nathaniel Brandon to their dramatic falling out in 1968?
    3:03:19 Yes. So after the fountainhead, the fountainhead’s auction is sold to be a film. So Ein Rand moves
    3:03:22 to Hollywood, where she’s going to help in the writing of the film. She wants a lot of creative
    3:03:27 control. And then she’s also still working in screenwriting and things like this. And so she
    3:03:33 gets a letter from a Canadian student who’s written to her several times. And then he writes
    3:03:37 again, and he says, I’m at UCLA. And she’s like, young man, you’re so full of error. Why don’t
    3:03:42 you come visit me? And I’ll straighten you out. So he comes and they have this real meeting of the
    3:03:48 minds. They talk all night. He comes again. He brings his girlfriend. She loves him. And they
    3:03:54 start this very intense relationship of spending every weekend at her house, basically, staying
    3:03:58 up all night talking about ideas. He becomes completely converted to the Objectivist worldview.
    3:04:05 Rand begins counseling him and his girlfriend about their relationship. Very intense thing.
    3:04:11 Then eventually, they graduate from college and they both enroll in a graduate program in Columbia
    3:04:18 and they leave. And after they’ve left, I and Rand is just bereft. And within a few months,
    3:04:23 she packs up her home and she moves to New York. Here I am. I like New York better. And so that
    3:04:28 becomes the seedbed of the collective. And the Brandon’s, they get married. They change their
    3:04:34 name to Brandon. They’ve never publicly spoken on this, but many people have pointed out it has
    3:04:41 the word Rand in the name. So it’s some type of acknowledgement of how important she is to them.
    3:04:47 And time goes on and romantic feelings develop between I and Rand and Nathaniel Brandon,
    3:04:54 some 20 years or junior. And they discuss them and they realize that rationality has led them
    3:04:59 to the conclusion that they should be lovers. Right. Right. They’ve rationally decided this,
    3:05:04 but because they’re rational, they need the consent or at least to inform their partners.
    3:05:07 They’re both married. They’re both married. So they call a meeting and they
    3:05:15 obtain the consent or maybe simply inform the others of the rationality of the choice. And then
    3:05:21 they say, but this is only going to be an intellectual relationship, but we’d like a few hours alone
    3:05:25 each week. And we don’t want to be deceptive. So we want you to know and approve of this. So the
    3:05:32 spouses bought into rationality, know and approve. One thing leads to another, it becomes a full,
    3:05:36 romantic and sexual relationship. And although it’s open within these four people, it is not
    3:05:42 open more broadly. And so in all these meetings of the collective, Alan Greenspan, all these other
    3:05:47 people coming up, drinking coffee all night, talking, talking, they all know that Nathaniel
    3:05:53 Brandon is objectivist number one. They don’t know that there’s a romantic and sexual relationship
    3:05:59 happening. It’s kept a secret. And then when Atlas Shrug comes out, it’s panned by reviewers. People
    3:06:05 absolutely hate this book. And Rand is not Howard Rourke. She falls into a deep depression
    3:06:12 because her masterpiece has been rejected. And so then the romantic relationship ends,
    3:06:18 but the close personal relationship continues. And then over time, Brandon, who’s still married
    3:06:24 to his wife, begins an affair with another young woman. And at this point, he has started the
    3:06:30 Nathaniel Brandon Institute to teach objectivism. And he’s making good money. He’s becoming quite
    3:06:35 famous. She supported the Institute. She supported it. And at first, it was to help her in her
    3:06:39 depression because he said, “The world needs to recognize your genius. They missed Atlas Shrug,
    3:06:44 but I’m going to teach them. I will bring the message.” And it’s very successful. It becomes
    3:06:48 its own business. It has a newsletter. It’s a whole world. So that small cult around,
    3:06:53 I’d Rand, expands to this whole social network. And it’s very much a piece with this burgeoning
    3:06:57 conservative movement. Objectivists are involved in criticizing the draft. And
    3:07:04 they’re kind of a libertarian, objectivist world going on. All of this is happening.
    3:07:09 In the meantime, Nathaniel Brandon has found a new partner. And he doesn’t tell I ran this
    3:07:18 because he knows she’ll be upset. And so it goes on for years. And I ran knows something is going
    3:07:24 on, but she can’t quite figure it out. And finally, Barbara Brandon says to Nathaniel Brandon,
    3:07:30 “You have to tell her. This has just gone on too long.” So she finds out and the whole thing
    3:07:37 blows up and she exiles him and she breaks off contact with him. And nobody has ever told what
    3:07:42 happens. It’s called the objectivism. Objectivism breaks in two because some people say,
    3:07:49 “How could I do anything wrong?” And other people say, “What is this letter all about?
    3:07:52 And what did Nathaniel Brandon do? And I’m not just going to take her word for it. I need more
    3:07:57 information.” And then a bunch of people, I read all the accounts of this, a bunch of people are
    3:08:02 like, “Okay, they were having an affair.” And a bunch of other people are like, “No, that couldn’t
    3:08:08 possibly be happening.” And so the whole thing breaks up. But what I argue in my book is actually
    3:08:15 this is to the benefit of Rand’s ideas because Rand herself was so controlling over her ideas.
    3:08:22 And now that she steps back from a public role, objectivism flows into the student libertarian
    3:08:27 movement. Some objectivists become conservatives. It just kind of spreads out more generally. And
    3:08:32 you don’t have to drink the Kool-Aid. You don’t have to take the official course. Nathaniel Brandon
    3:08:37 goes on to be part of the self-esteem movement, killer human potential movement, California.
    3:08:43 And Ayn Rand lives another 10 years or so, but she doesn’t do major work after that.
    3:08:53 Since we were talking about some of the, although rationalized, some strange sexual
    3:08:59 partnerships that they’re engagement in, I have to ask about the Fountainhead and the
    3:09:05 quote-unquote “rape scene” in the Fountainhead. Was she intending to add that there to be
    3:09:12 controversial? How are we supposed to read into it? Is it a glimpsing to Ayn Rand’s sexuality?
    3:09:20 And maybe broadly, we can say, well, what was her view on sexuality, on sex, on power dynamics,
    3:09:26 and relationships? Yeah. I mean, there’s also an objectivist theory of sexuality that probably the
    3:09:32 least convincing of all the parts of objectivism. And it goes something like your sexual desires
    3:09:40 express your highest values. And they are related in some ways to your rationality,
    3:09:46 right, which is also related to your highest values. So for her, that explained her attraction
    3:09:51 to Nathaniel Brandon and Nathaniel Brandon’s attraction to her was a function of their highest
    3:09:57 values. And in fact, Brandon imbibed this so deeply that the fact that he was later drawn
    3:10:03 sexually to a woman who was not particularly accomplished, but was beautiful, caused him
    3:10:09 deep anguish and guilt for being non-objectivist. So this is the objectivist theory. Then the
    3:10:15 gender politics are just crazy. And we have to kind of back up and think, okay, so who is Ayn Rand?
    3:10:21 She’s born in Lisa Rosenbaum in Russia. She is someone who stands out from the crowd from the
    3:10:26 beginning. She never really fits in. She’s not conventionally beautiful by any stretch of the
    3:10:30 imagination. She struggles with her weight, and she doesn’t consider herself to have a beautiful
    3:10:37 face. She’s very independent. She meets none of the metrics of traditional femininity at all.
    3:10:41 She finds love with a man who is very handsome, but very passive.
    3:10:48 Yet she writes in all her fiction about strong manly heroes. So this seems to be like a projection.
    3:10:54 The man she’s actually with is not a strong manly hero. The hero she writes about, she probably
    3:10:57 wouldn’t be able to be in the same room with them for more than one minute before they got
    3:11:04 into raging argument, right? And then she develops this theory about women and men in that
    3:11:12 a woman should worship her man, and a woman finds her true expression in worshiping the
    3:11:17 man she’s with. So again, this is not at all how Ayn Rand lives her life. This is like this,
    3:11:25 I would say, compensatory theory for her lack of ability to conform to the gender norms of her day.
    3:11:33 She then articulates them in a very strong and almost distorted and exaggerated way to compensate
    3:11:37 for the fact that she doesn’t actually meet them, can actually enact them.
    3:11:45 The rape scene, to some degree, embodies that idea that to some degree, that the woman should
    3:11:53 worship the man. I tend to read it more in terms of literary genre. So Rand is a screenwriter,
    3:12:03 a consumer of movies, and that rape scene is paradigmatic for the romance genre. In other
    3:12:11 words, these pulpy romance novels, the hero rapes the heroine, and then they fall in love. That’s
    3:12:16 just the trope of how it works. So it’s crazy when you read it, but if you were reading a bunch of
    3:12:23 novels in this genre, you would find this is very standard. And so that is a huge part of
    3:12:28 this appeal at the time. There’s this feminist who hates Rand, Susan Brownmiller, and she wants to
    3:12:33 write an angry denunciation of the rape scene. So she goes to get the fountain head, and she’s
    3:12:38 wondering how is she ever going to find the scene in this 800-page book? It’s a library copy because
    3:12:43 she doesn’t want to buy it. And it just falls open to the rape scene because everybody’s gone
    3:12:49 and read it because it’s very racy and explicit for that time. So I’m almost positive she also knew
    3:12:55 that. Like, if I put in this kind of taboo-breaking sex scene, that’s also going to probably be why
    3:13:02 people tell their friends about it. So I think it’s a mess. I think all of the gender and sexuality
    3:13:10 stuff is that she states is just a total mess. I think it also reminds me of another guy related
    3:13:19 Fugius Nietzsche who had very strong opinions on women and wrote about what women’s role in society
    3:13:23 should be and different power dynamics and relationships and all that kind of stuff when
    3:13:30 he himself really had trouble getting laid. Yeah. And so you have to sort of always maybe
    3:13:36 chuckle or take with a grain of salt the analysis of power dynamics and relationship from these
    3:13:45 figures which failed in many regards in their own private life. You mentioned feminists.
    3:13:49 Would you consider Ayn Rand a feminist? I mean, she’s almost an anti-feminist
    3:13:59 because she then goes on and someone writes her a letter about, like, should there be a
    3:14:06 female president or something? This is like the beginning of feminism. And she says, no. No women
    3:14:12 should ever be president because if she’s president, she wouldn’t be able to look up to any man
    3:14:16 because she would be so powerful and therefore she would be corrupt and rotten in the soul
    3:14:23 and unfit to be a leader. It just makes no sense. But that said, she’s a woman and she’s one of
    3:14:30 the most powerful intellects in the 20th century. Yeah. And so the contradictions, I mean, Nietzsche’s
    3:14:39 full contradictions of this sort, that the very fact that she’s one of the most powerful minds
    3:14:47 in history to me means that she is a feminist in the spirit she embodies, right, and what she
    3:14:53 represents. I mean, she lived the ideals of individualism in her life and set aside gender
    3:14:58 norms in her own life. But she did not see herself as part of any… She did not see herself as doing
    3:15:04 this for the benefit of other women or to change society’s views about women. There was no collective
    3:15:13 essence to it. So if feminism has some sort of collective aspect to it or at least some
    3:15:18 identification, one needs to identify with a broader category of women and feel they’re acting
    3:15:27 in behalf of that, she’s definitely not doing that. And she was fair to women in her life,
    3:15:32 promoted them in her life, but did not… I mean, she was very negative about feminism. And then
    3:15:38 because they dress terribly. And then the other thing, it’s really interesting, there’s all these
    3:15:45 kind of homoerotic themes in her writing. And for that reason, many gay men were drawn to her writing.
    3:15:50 And then she would say homosexuals are dirty, terrible people. She would denounce
    3:15:55 people for being homosexual. So there’s a whole actual literature of gay men wrestling with
    3:16:03 Rand and what she says about gay people. So yeah, it’s hard to make sense of. And I just
    3:16:08 think of the enormous pressures. I want to be charitable. I just think of the enormous pressure
    3:16:13 she was under in the culture she was raised in, the expectations that were placed upon her and
    3:16:19 her uttering ability to meet any of them. And it came out in this very tortured set of ideals
    3:16:27 that she tried to promote. And this kind of lack of ability to introspect in herself and to,
    3:16:31 it was probably too painful to introspect and to think about that. So she just
    3:16:37 tried to rationalize her way through it. And it came out in these very strange theories.
    3:16:43 Why do you think that Iran is, maybe you can correct me, but as far as I can see,
    3:16:48 never mentioned in the list of great thinkers in history or even the great thinkers of the 20th
    3:16:54 century or even the great female thinkers of the 20th century. So you have somebody like Simone de
    3:17:02 Brevard, Hannah Arendt. I almost never see her in the list. If you Google those silly lists, top
    3:17:08 whatever, top thinkers of the 20th century, she’s not mentioned. Why is that?
    3:17:14 A lot of people just deeply dislike Rand. They deeply dislike her ideas. They don’t think they’re
    3:17:21 profound because they’re disconnection from other ideas and other understandings of human society.
    3:17:28 I think, I think where you could look at them and say, these ideas are very provocative and
    3:17:32 they’re very deep because she’s not taking anything for granted and she’s flipping everything around
    3:17:38 and forcing you to really think. To a lot of other readers, to her critics, they just look absurd.
    3:17:46 Like, how could you even make these contentions? And I think that because she’s not without
    3:17:51 precedence and she’s not without followers, but she doesn’t knit herself into an intellectual
    3:17:59 community the way that these other thinkers do very naturally, that you can see who they influence,
    3:18:05 you can see who they’re in dialogue with. I think my book was one of the first to really
    3:18:09 take Rand and say, she’s a figure in American history. Here’s who she’s connected to. Here’s
    3:18:16 who she’s influenced. And I got a lot of pushback for that. I think now people are more open to it,
    3:18:23 but I think the people who compile these lists really dislike her work and they think it’s shallow
    3:18:31 because they find her fiction overdrawn. They find her work in the mythic register simple and she’s
    3:18:39 also a grand systematic thinker in an age that’s over systems. She’s almost creating an inverse
    3:18:47 Marxism. Marx was writing in 1848. He’s not a thinker of the mid-20th century. I think that’s
    3:18:52 part of it. The lack of a legacy and the dislike of what she had to say and the feeling that she’s
    3:18:58 too detached, her insights are not insights because they’re too idealized rather than being rooted in
    3:19:01 a theory of human nature that people find plausible.
    3:19:10 You study and write about history of ideas in the United States over the past 100 years,
    3:19:19 100 plus years. How do you think ideas evolve and gain power over the populace, over our government,
    3:19:26 over culture? Just looking at evolution of ideas as they dance and challenge each other and
    3:19:35 play in public discourse. What do you think is the mechanism by which they take hold and have
    3:19:42 influence? There’s a couple different ways I think it happens. I really am interested in
    3:19:50 the relationship between the thinker and then the reader and the interpreter of the ideas
    3:19:56 and then the conditions on the ground that make that idea resonate or not resonate.
    3:20:05 As an intellectual historian, I’m studying ideas and I’m always putting them in their
    3:20:10 historical context. What is happening that is making these things resonate, that is making them,
    3:20:18 people seek them out. For Ranskay, she has this credibility because of her experience of communism.
    3:20:24 She’s one of these defining moments of the time. Then I think the idea comes out in a sort of
    3:20:30 pure form and then other people rework it and reshape it as they read it. I’m really interested
    3:20:35 in how people form communities around these ideas. A bunch of people started calling themselves
    3:20:43 objectives and getting together to read Rans’ work. That was spontaneous and ground up and wasn’t
    3:20:48 supported by any money nobody planted. It just happened. Friedman’s a different case in that he
    3:20:53 joins an established tradition of thought that’s been institutionalized in universities. People
    3:20:58 are signing up and paying money and getting credentialed to learn these ideas. To my mind,
    3:21:04 these are two different ways but really emblematic ways of how ideas spread. Rand, I think of this
    3:21:10 more bottom-up, people encounter the idea in a book. They’re blown away by it or they imbibe it
    3:21:14 without even realizing they’re imbibing it and then they’re like, “Well, maybe I don’t like
    3:21:20 Franklin Roosevelt so much or maybe I’ll look another time at Barry Goldwater.” Whereas Friedman,
    3:21:25 you get the idea more top-down. I know I’m getting the idea. I know I’m being positioned
    3:21:31 within a elite discourse of economics. I think they go top-down and bottom-up and then they
    3:21:37 hit the events. Friedman’s ideas wouldn’t have gone anywhere without that episode
    3:21:42 of stagflation that really made people think they proved out. I think Rand’s idea has really
    3:21:47 caught fire in Cold War America that’s looking for a statement of what does it mean to be an
    3:21:52 individual? What does it mean to live in this mass society because it’s also a time of great
    3:21:57 social conformity and where people are suddenly, they’re working for large corporations.
    3:22:04 They’ve been served in a large military. The United States is stepping out onto the
    3:22:07 world stage. Everything is bigger. What does it mean to be an individual in that world? That’s
    3:22:12 where Rand’s ideas catch fire. I think a lot about that, about how they trickle through
    3:22:17 different levels of society and then how ideas collide with experience I think is critical.
    3:22:22 What do you think about when they actually take power in government? I think about ideas like
    3:22:30 Marxism and how that evolves into the Bolshevik Revolution and how that takes hold in its
    3:22:37 implementations or you can think about Nazism and with Hitler where it goes from a small number of
    3:22:43 people that get real excited about a thing and then somehow just becomes viral and takes hold
    3:22:52 in power and then that has its consequences. When I think about this historical path of
    3:23:00 Communism and the kind of logics and dynamics of Communism, in many ways it has some echoes with
    3:23:07 Rand in that the ideology in its purest form is almost, it’s a rationalist ideology of some ways.
    3:23:11 It’s an analysis of history and how things are supposed to be and I think you mentioned Hannah
    3:23:16 Arendt. I think she is one of the most kind of penetrating analyses of Communism which she really
    3:23:24 puts in the category of it’s a logical ideology. Logic leads inexorably to its conclusions and
    3:23:31 then experience crops up and experience is different. What does a sort of cult of rationality do when
    3:23:36 it hits experience? Well, it tries to bend experience to its will and that I think is really the
    3:23:46 story of Communism writ large. The question though is why does it catch fire? Why does it draw people
    3:23:52 into political allegiance? I think in the case of Communism, it’s this dream of a more ethical
    3:24:00 world, dream of equality, dream of the powerless rising up against the powerful. That’s drawn in
    3:24:07 so many and then you had the whole addition of Leninism which gave a kind of international
    3:24:12 cast to that and helped people think about what are the relations between poorer and richer countries
    3:24:16 and what can we expect out of them and what might happen, gave a sort of framework for thinking about
    3:24:22 that in a time when the world was becoming more interconnected and those differences were becoming
    3:24:31 more obvious. Fascism to me is unleashing more something primal, something sort of dark and
    3:24:39 primal within people and it’s more a permission structure to indulge in that that is normally
    3:24:44 not there. Those impulses are normally channeled or held down and it seems that when the fascist
    3:24:48 regimes come into power, they give people permission to let those forces out.
    3:24:54 I think on Communism, going back to that lecture that Anne Rand gave,
    3:25:04 I think what ranks true to me a little bit is that what fuels it is a kind of maybe not resentment
    3:25:11 but envy towards the people that have the have-nots versus the haves and there’s some
    3:25:17 degree to which Nazism has the same of envy towards some group, resentment towards some group.
    3:25:24 So it’s given the environment of hard times, hard economic times, combined with the more primal
    3:25:32 just envy of not having and seeing somebody who has it and just constructing a narrative around
    3:25:39 that, that can become a real viral idea. Yeah, it seems like Communism is more animated by this
    3:25:46 idea of injustice. The world is unjust. It should be different and fascism seems like the process
    3:25:54 of scapegoating. We’ve identified the source of the problem and it’s this group and they need
    3:26:00 to be punished for what they’ve done to the rest of us. There is a primal thing, going back to
    3:26:08 literature in 1984, two minutes of hate where you can get everybody real excited about hating a thing
    3:26:14 and there’s something primal about us humans where once you’re in that state of hate,
    3:26:24 anyone can direct that hate towards anything, towards any group, towards any idea,
    3:26:29 towards anything because we could get caught up in the masochistic area of the hatred. It’s a
    3:26:40 dangerous thing. You floated the idea, I forget where, of pivoting for your next book towards maybe
    3:26:47 writing about postmodernism, which is a set of ideas, almost the opposite of Onerant’s philosophy.
    3:26:56 Can you maybe explain your curiosity about, first of all, spaces of ideas, but maybe postmodernism?
    3:27:04 Yeah, I think in the broadest sense, what I’m interested in, two dimensions that guide me
    3:27:07 in doing intellectual history. One is what I talked about, how does an idea go from
    3:27:14 a book, an elite space out to more popular dimensions? How does that happen? What happens
    3:27:20 to the idea along the way? How is it distorted or changed? The other is just search for meaning in
    3:27:25 of a post-Christian era or a secular era. What are people coming up with? I think
    3:27:32 to replace that void in their religious or spiritual lives, I think both Rand and Friedman
    3:27:38 offered these sort of alternatives, right? Objectivism, quasi-rationalist religion. People
    3:27:45 take economics as a theory of the world that almost, you can almost believe in it, right? It
    3:27:49 can almost take that place. And in both cases, how do those ideas travel? When I think about
    3:27:56 postmodernism, it first struck me, if you read the original postmodern thinkers, it’s really
    3:28:01 tough going. I mean, I make my students do it and they suffer. I think they see it’s worthwhile,
    3:28:08 but it’s no fun to read Derrida. But somehow it’s trickled down into, how do we go from like Derrida
    3:28:13 to Tumblr? And I sort of realized, oh, this has happened with postmodernism. It’s followed the
    3:28:20 same path that say from Milton Friedman’s economic theory to free to choose on YouTube. We’ve had
    3:28:27 a similar path of high French theory down to Tumblr and I sexually identify as an attack
    3:28:33 helicopter or whatever it may be. And so that was really interesting. And then I also thought,
    3:28:41 well, at the same time, this is clearly a structure of meaning. And I actually think it’s followed
    3:28:47 the same path of objectivism, which is turning into its opposite, just still down and then
    3:28:51 turning into its opposite. So if objectivism was a group of people who considered themselves
    3:28:56 individualists who ended up deeply conforming to the dictates of a charismatic leader,
    3:29:02 postmodernism started about disrupting binaries. We’re going to be fluid. We’re going to go
    3:29:07 beyond the border. We’re going to disrupt the binary. And it’s devolved in its popular forms
    3:29:14 to the reinscribing of many different binaries. Pressor and oppressed has become this like
    3:29:18 paradigmatic set of glasses you put on to understand the world. So I think the dynamics
    3:29:24 are very, very similar. So I think it’s something in the traffic of the idea from its pure form to
    3:29:30 its popular form, and then how it gets politicized or mobilized in different ways. And behind it
    3:29:36 all, I think, is this human longing for meaning and the inadequacy of the traditional ways that
    3:29:42 need was met at this point in time. By the way, that going from pure form to popular form,
    3:29:49 I remember this might be before the internet, but when I was in college reading Derrida and Foucault
    3:29:58 and not knowing context at all, it was just interesting. I’m able to read pure encapsulations
    3:30:03 of an idea and just kind of like, oh, all right, well, that person believes that and you just kind
    3:30:08 of hold it. But then you realize if you actually take the pure form of that idea and then it creates
    3:30:13 a community around it, you realize what that actually becomes. And you’re like, oh, yeah, no,
    3:30:21 that’s not, although I do consider myself sexually an attack helicopter. That’s it.
    3:30:23 Identify sexually. Yes, beautiful. Okay.
    3:30:32 Your process of researching for, let’s say, the biographies of Mr. Friedman and I’m Rand
    3:30:40 seems like an insane amount of work. Yeah. You did incredible work there going to the original
    3:30:51 sources. Can you maybe speak to that? What is required to persevere and to go for so many years,
    3:30:59 to go so deep to the sources? Yeah. So I mean, I go to the archive. That’s where I feel like I’m
    3:31:06 communing with the dead in some ways. I’m seeing what they saw in some ways and reading what they
    3:31:11 felt. And I tell my doctoral students, it’s got to be something that gets you out of bed
    3:31:16 in the morning because there comes a point in your doctoral career where nobody’s,
    3:31:20 there’s nowhere to go. There’s nowhere to be. You got to be getting up because you’re interested
    3:31:24 in what you want to study. And so with Rand, it was this real sense of discovery. I am discovering,
    3:31:28 I want to know about this woman. I want to know where she fits. And the only way to find out
    3:31:37 is to do the research. And so, yeah, I like to go deep. It’s really interesting to me.
    3:31:42 And I should say, in both of these cases, I’ve done it in an institutional structure. I don’t
    3:31:46 know that I would do it independently. So the first was the graduate program in history. It was at
    3:31:53 UC Berkeley. And so I had coursework and then I had structures. I did have people to check in with
    3:31:57 and read, but I had a great deal of latitude. I’m very grateful for people are like, you wrote a
    3:32:02 dissertation on I ran at Berkeley. I’m like, yeah, hell I did. Berkeley’s like, it’s a great place.
    3:32:06 At the time I was there, there was absolute room for free inquiry.
    3:32:11 Oh, can you just linger on that? So when you said that you’re doing that and doing a dissertation
    3:32:22 on I ran, was there, did people get upset? No, I did have a friendly critic who took it upon
    3:32:26 himself to throw at me everything he thought the outside world would throw at me. I think maybe
    3:32:32 five or 10 years earlier, it wouldn’t have been possible. But the most important thing I had to
    3:32:38 the person I really had to convince this was worth doing was myself, you know, because I knew it was
    3:32:44 an unconventional choice for the field and for a dissertation. But once I convinced myself, I just
    3:32:48 said, well, we’re going to do this and see. And because it was unconventional, it ended up standing
    3:32:56 out. And when it really was the time there was a was I started it during second Bush administration,
    3:33:02 second George W. Bush, second term, people were interested in just conservatism in general and
    3:33:06 felt, no matter where they stood on the political spectrum felt like objectively, we don’t know
    3:33:11 enough about this. And this is a problem. And so they were open to learning more. So I really kind
    3:33:15 of caught that caught that wave in scholarship and caught that wave in American culture where people
    3:33:22 wanted to know more. And we should probably say that, I mean, I ran is at the very least, as you’ve
    3:33:27 mentioned, a kind of gateway to conservatism. Yes, I called her the gateway drug and people
    3:33:34 start with Rand, they’re taken by her, you know, in some ways, she takes the worldview of Milton
    3:33:40 Friedman in terms of what capitalism can accomplish economically. And then she puts it in this
    3:33:46 mythopoetic register and she fictionalizes it. So once people have absorbed that, they want more,
    3:33:52 you know, they go on to learning more of the ideas behind that vision, or they have become true
    3:33:56 believers, they’ve converted. And so then they head off to work for a politician to work for a
    3:33:59 think tank to work for a party. And so absolute traffic. Now, not everyone. There’s plenty of
    3:34:03 people who read on Rand who don’t take the politics in. It’s a nice story. It’s interesting.
    3:34:09 Just an episode in their life. But for others, it’s really foundational. It really changes them.
    3:34:14 So those were the people I wanted to track very deliberately. I wasn’t trying to do in the round
    3:34:18 everything about on Rand. I was like, I’m Rand and the American right, you know, goddess of the
    3:34:23 market, I’m Rand and the American right is the title. So where did they, where did they take
    3:34:26 her, those who took her in this political direction? What difference did she make?
    3:34:32 If we return to like the actual, your process. Yeah. So you’re showing up and you’re reading
    3:34:39 sources and you’re like, is it kind of like the process of discovery? You’re just kind of like
    3:34:47 taking it all in and seeing what unifying ideas emerge or maybe special moments that
    3:34:54 illustrate an idea emerge? Yeah. I mean, I know with the biography of a person, I am already
    3:35:00 given a start and an end date and a rough narrative of what happens. So I have a kind of structure.
    3:35:06 And then with Rand, both with Rand and Friedman, I started by reading their major books before I
    3:35:11 really read anything about them because I wanted my own experience of the material to be fresh.
    3:35:16 And I had read some on Rand, but not a lot. Similarly, I had read some Friedman, but not
    3:35:21 a lot. So at first it’s like, let me read the major stuff, get oriented, and then just dive into
    3:35:28 the archive and see what’s there. Who are they talking to? What’s going on? In Rand’s case,
    3:35:34 I was interested in her in the United States, not her in Russia. I didn’t have the language
    3:35:39 skills to do that. So I start her in the United States and I start when she publishes her first
    3:35:43 book and she starts getting letters. And who is she writing to? Who’s writing to her?
    3:35:48 And then I start to uncover this world of kind of nascent conservatism. And I’m kind of putting
    3:35:52 that together. And once I have enough, I say, well, that’s a chapter. I’m going to cover that
    3:35:58 chapter. And then there’s going to be the book has come out. And so now I need to start a different
    3:36:02 chapter. What’s her life like after the book has been published? And then I look for that. But I’m
    3:36:07 really, although I have this very high level structure, it’s coming out of the archive,
    3:36:13 the material I’m finding. And if I’m not finding the material there, I won’t cover it in great
    3:36:18 detail. Or if I’ve decided it’s outside my am, but I’m not going to go into great depth on it.
    3:36:22 And you’re trying to understand the relationships is so fascinating, like reconstruct
    3:36:29 in a dark room, trying to reconstruct shine a light on relationships through reading letters.
    3:36:33 It’s interesting. Yeah. Yeah. I mean, correspondence is really, really helpful.
    3:36:39 Drafts, correspondence, and you know, someone this famous, they have oral histories,
    3:36:43 other people write about them. So you’re reading all these different things and kind of triangulating
    3:36:47 and trying to sort of put them together. And then think about, how do I present this in a
    3:36:53 compelling story? And what do I need to explain? And then also for me, what was really helpful was
    3:36:59 is that because I teach, and I am explaining the kind of broad sweep of 20th century history. So
    3:37:05 you know, I know that Rand’s involved in a labor action at Warner Brothers. But through my teaching,
    3:37:10 I realized, oh, yes, this is a moment of labor strikes across the country. And so then that
    3:37:17 really changes the origin story of Atlas Shrugged, because she’s looking at labor actions. And she
    3:37:23 originally thinking of the book as being called The Strike. So she’s really responding in real
    3:37:29 time and being inspired by what’s happening, you know, in the mid 1940s in the United States.
    3:37:32 So then I can kind of take that and run with that and figure out where to go.
    3:37:37 So you’re super passionate about teaching. You mentioned Milton Friedman had a very
    3:37:45 interesting way of teaching. So what’s your, how do you think of teaching, teaching history,
    3:37:50 teaching history of ideas, teaching great young minds about the past?
    3:37:56 Yeah, I mean, it’s great. It’s really inspiring. The ways the old school kind of dominating way in
    3:38:02 which Friedman taught would not fly in today’s university wouldn’t be permitted. And also the
    3:38:08 students wouldn’t respond to it, you know? So I try to share my enthusiasm. I think that’s like
    3:38:12 almost the number one thing I bring is my enthusiasm, like, look how neat and interesting
    3:38:18 these ideas are. I try to keep my own views out pretty much. I try to give the fairest possible
    3:38:24 rendition I can of each thinker. If I find someone really disturbing, I might sidebar at the end of
    3:38:29 the lecture and say, this kind of, you know, I find this unsettling and this, you know, tells me
    3:38:34 something about myself. But most of the time, I’m bringing people into the, like the biography of
    3:38:39 a great thinker, the context of them. And then we, in the lecture, we’ll literally read the work
    3:38:44 together and we’ll talk about it. And I’ll ask the students, what are you finding here? What’s
    3:38:51 jumping out at you? Kind of breaking down the language and really teaching them how to do deep
    3:38:56 reading. So I feel like that is my contribution right now. We’re having trouble reading collectively.
    3:39:00 We’re having trouble paying attention collectively. And I’m trying to cultivate
    3:39:06 their skills to doing that and showing them how I do it and also modeling like this is how I would
    3:39:11 read a text. This is what jumps out to me when I look at, you know, Thomas Kuhn or something like
    3:39:17 this. And just show them that I’d studying a history of ideas is really fun. I feel incredibly
    3:39:22 perfect to do it, you know. And the other thing is I think this is the time for students in college
    3:39:28 figuring out who they are. Their minds are developing and growing. They can really handle
    3:39:32 complicated hard ideas. They don’t always have the context behind them. So I need to give them
    3:39:36 the hard ideas and then show them this is kind of the context of what’s happening in the world.
    3:39:41 But really, I’m just, I’m showing them the landscape. I don’t have time to go deep.
    3:39:48 We have a 10 week quarter, you know, giving them a flyover. And then I want them to know
    3:39:52 how to go deep and know where they want to go deep. Do the thing that Milton Friedman did, which is
    3:40:00 in parallel. Yes, do their own parallel curriculum. Exactly. Exactly. What advice
    3:40:05 would you give in terms of reading about ideas you agree with and reading ideas you disagree with?
    3:40:10 I mean, even though I think the passion is important for the teaching of the ideas, like,
    3:40:16 dispassion is more important for the reading and understanding of them. So a lot of people have
    3:40:21 said to me like, I could never write about Ayn Rand like she makes me so angry. You know,
    3:40:26 and I’ve never become, I don’t get angry reading her. Like, I’m like, oh, there you go again,
    3:40:32 you know, or like, well, that’s going to cause trouble. You know, and so I guess I’m approaching
    3:40:38 it with a sort of charity, but also with, I’m not, I don’t have huge expectations. I’m not expecting
    3:40:43 to have the light shine on me. I’m not expecting to agree. I’m like, I can be very clinical about
    3:40:49 it. So that’s what’s worked for me. It might not work for others. But and then I just try to find
    3:40:54 the humor in it. You know, like, how, how funny is it? Like these different aspects of them,
    3:40:58 you know, like when teaching my students about Oliver Wendell Holmes, like his,
    3:41:05 his dad wrote a poem about him. He called him the astronaut about how he came from outer space.
    3:41:09 He seemed like he came from outer space. I’m like, this is his dad’s view of his son. Like,
    3:41:14 that’s how weird of a guy he was, you know? And so I try to like find that, keep alert for those
    3:41:19 funny kind of human touches that like, these are ultimately just people, you know, people with
    3:41:23 ideas that they spent enough time polishing up and developing that we still want to read about them
    3:41:28 a hundred years later. What about the dramatic formulation of that same question? Do you think
    3:41:33 there’s some ideas that are good and some of that are evil? Do you think we can draw such lines? Or
    3:41:38 is it more complicated, like the old soul genius in line between good and evil that runs to the
    3:41:44 heart of every person? I mean, I philosophically agree with souls and it’s in for sure. I do think
    3:41:50 some ideas pull on the good side and some ideas pull on the bad side, like absolutely. And I think
    3:41:55 that’s probably, that’s probably why people dislike Rand so much is they feel like she’s
    3:42:00 giving license to the bad side. And she’s saying it’s okay to be selfish and it’s okay, you know,
    3:42:07 they feel like she’s unloosing the dark forces. And, you know, in some cases that may be true,
    3:42:14 but she’s also unloosing some of the light forces in terms of reflecting on yourself and trying to
    3:42:19 be true. But definitely there are ideas that are dangerous to play with and there are ideas that
    3:42:26 I think give license to the darker sides of human nature. But I think you can see that in the
    3:42:34 historical record. So I think that it’s possible to show that. And obviously there’s some places,
    3:42:37 you know, like Germany, they’re trying, they think the ideas are so dangerous,
    3:42:42 they can’t be allowed to circulate. And in some contexts that may absolutely be true.
    3:42:48 And then still even that we should take with a grain of salt because perhaps censorship of an
    3:42:53 idea is more dangerous than the idea. So all of that, that’s the beautiful thing about us humans,
    3:43:00 we’re always at tension trying to figure out what ideas are the ones that are going to help
    3:43:08 humanity flourish. Pothead question, do humans have ideas or do ideas have us? So where do
    3:43:12 ideas come from? You have Milton Friedman sitting there after Rutgers trying to figure out what
    3:43:21 he can do about the Great Depression. Where, do you ever think about this? I sometimes think that
    3:43:27 aliens are actually ideas. They’re just kind of like travel through human brains and
    3:43:38 like captivate us. So we get all real excited. Like with a monolith in 2001 Space Odyssey,
    3:43:44 a monolith lands and everybody gets excited and somehow this idea just gets everybody
    3:43:51 to be on the same page and it reverberates through the community. And then that results in an
    3:43:56 implementation of some action that results in us figuring out that that idea was actually bad and
    3:44:02 we learned new ideas. But it feels like the idea is right in the show. Yeah. I mean, I think in a
    3:44:09 lot of cases, I think it’s true. Kane says this famous quote, “Most men are slaves of some defunct
    3:44:19 economist.” That’s funny. So I do think it’s really hard to have an original thought. We are social
    3:44:25 creatures. We encounter the same situations again and again. And so it’s really hard. You’re born
    3:44:30 into these traditions of thinking and being and knowing and most people are never going to question
    3:44:34 them and most people are never going to become aware of them. So again, that’s some of the work of
    3:44:39 what I do as an intellectual historian is like, let’s become aware. Let’s realize that you’re
    3:44:46 carrying a map that’s orienting you to the world in a certain way. And so I think you have to work
    3:44:51 really, really hard to have an original idea. And even then, it’s not a completely original idea.
    3:44:56 It’s a reworking and a reassembling of ideas others have had. So I definitely think it’s
    3:45:03 possible to create autonomy in the realm of ideas and to be an autonomous consumer of ideas. But I
    3:45:09 think on balance, most people are not. And that’s fine. They want to have experiences. They want
    3:45:15 to do other things with their life. Well, Jennifer, thank you so much for this journey through ideas
    3:45:21 today. And thank you so much for your incredible work. It was really fun and fascinating to talk
    3:45:27 with you today. Thank you. Thank you. Thank you for listening to this conversation with Jennifer
    3:45:33 Burns. And now let me try to reflect on and articulate some things I’ve been thinking about.
    3:45:39 If you’d like to submit questions or topics that I can comment on in this way here at the end of
    3:45:49 episodes, go to lexfreeman.com/ama or contact me for whatever other reason at lexfreeman.com/contact.
    3:45:54 Please allow me to say a few words about my interview with the president of Ukraine,
    3:46:00 Volodymyr Zelensky. Now that a few days have passed and I’ve had the chance to think about
    3:46:06 the conversation itself, the response, future upcoming conversations, and what it all means for
    3:46:13 the war in Ukraine, for global geopolitics, and for us humans in general. I’ve gotten a lot of
    3:46:20 heartfelt, positive words from all sides, including, at least so far, literally everybody who knows
    3:46:25 me personally inside Ukraine, which includes a lot of soldiers and many high-profile figures,
    3:46:29 some who are supportive of the president and some who are critical of him.
    3:46:36 Literally all private communication has been positive and supportive. This is usually not the
    3:46:42 case with me. Friends usually will write to me to criticize and to disagree. That’s the whole point
    3:46:49 of friendship. To argue and have fun doing it. There was none of that here, at least so far.
    3:46:54 So, thank you for your support and kind words, it means the world.
    3:47:00 The most common message was please keep pushing for peace. I will.
    3:47:09 But online, on the interwebs, I saw a lot of attacks, sometimes from swarms of online accounts,
    3:47:12 which of course makes me suspicious about the origin of those attacks.
    3:47:19 One of my friends in Ukraine, who by the way thinks the attacks are all propped up by Ukrainian
    3:47:25 bot farms, said there’s no need to say anything extra. Let the interview stand on its own.
    3:47:28 Just keep focused on the mission of pushing for peace.
    3:47:36 Basically, he’s a Ukrainian version of my other friend, Joe Rogan, who to this day says,
    3:47:41 don’t read the comments. This is generally good advice and I try to follow it. But I’m also a
    3:47:48 human being. I worry my heart on my sleeve in this interview. This war, for me, is deeply personal.
    3:47:56 And the level of vitriol, misrepresentation and lies about the conversation and about me personally
    3:48:01 was particularly intense and disingenuous. So, I thought I would use this opportunity to say a
    3:48:07 few words, just speak a bit more about how I approach this conversation with President Zelensky
    3:48:13 and conversations in general. This interview is something I poured my heart and soul into,
    3:48:19 preparing a lot. I’ve described parts of the preparation process I follow in the outro to
    3:48:25 the Zelensky conversation. But in general, let me say that I’ve read a lot, listened to a lot,
    3:48:31 and had a lot of private conversations with people on the ground. I have many flaws, but being
    3:48:38 unprepared for this conversation is not one of them. Two low effort attacks got to me a bit,
    3:48:46 if I’m being honest, though I am learning to take it all in stride. First attack is that I’m unprepared,
    3:48:54 uninformed, or naive. I don’t give a damn about the trolls, but I want people who listen to me,
    3:48:59 who support me, who care about my words to know that this is not the case. It never will be the
    3:49:07 case for future conversations, especially ones of this importance. I work extremely hard to prepare.
    3:49:15 Second low effort attack that got to me a bit, is that I’m a shill for Zelensky or a shill for
    3:49:22 Putin. Both accusations were hurled readily and freely by the online mob of all persuasions,
    3:49:28 by the left and the right in the United States, and Europe, by the pro and the anti Zelensky people
    3:49:35 in Ukraine, or of Ukrainian origins, and by the pro and anti-Putin people in Russia, or of Russian
    3:49:44 origins. As I’ve said, over and over, this is not the case, and will never be the case. I’m a shill
    3:49:50 for no one. More than that, I just simply refuse to be caught in any one single echo chamber.
    3:49:56 It’s an ongoing battle, of course, because social media algorithms and the various dogmatic groups
    3:50:03 and tribes out there want to pull you in to their warm embrace of belonging, and humans want to
    3:50:10 belong. But the cost of the path I have chosen is that I will never belong to any one group.
    3:50:19 In the end, like many of us must, I walk alone. And I try to do my best to do what is right,
    3:50:24 to my independent heart and mind, not what is popular with any one group.
    3:50:31 My goals for this conversation were twofold. First, give a large platform to President Zelensky
    3:50:36 to explain his perspective on the war, and to do so in a way that brings out the best in
    3:50:44 who he is as a leader and human being. Second goal was to push for peace, and to give him every
    3:50:49 opportunity possible to signal that he’s ready to make peace, and to provide his vision for what
    3:50:56 that might look like. And just to be clear, by peace, I mean long-lasting peace that minimizes
    3:51:02 suffering of people in the region and maximizes the flourishing of humanity in the coming decades.
    3:51:10 The war in Ukraine has led to over one million casualties and growing every single day.
    3:51:17 For some people, torn apart by loss, tormented and forced into a state of anger and hate,
    3:51:25 peace is a dirty word. To them, nothing less than justice must be accepted.
    3:51:35 I hear this pain. I’ve seen the bodies and the suffering. It’s true, peace will not bring back
    3:51:41 your loved ones, but it will prevent further slaughter of more people, each of whom are someone
    3:51:49 else’s loved ones. So again, the second goal of this conversation was to push for this kind of peace.
    3:51:58 So how did I approach it? Every conversation is its own puzzle, so let me try to explain my
    3:52:04 approach for this one. As I’ve said, I read and listened to a lot of material since February 24,
    3:52:11 2022. There would be many weeks over the past three years where I would spend every day over
    3:52:18 eight hours a day of focused reading and research. There were several rabbit holes that I consistently
    3:52:24 returned to and researched, but the most important line of inquiry was always peace talks. Not just
    3:52:31 in this war, but in other wars in modern history. For this specific war, as part of the background
    3:52:37 prep, I would take notes on every single perspective I could find on every single major diplomatic
    3:52:43 meeting and negotiation that happened in Ukraine-Russia relations since 1991.
    3:52:51 There is a lot of material to go through, and there are a lot of perspectives, even on the very
    3:52:57 2019 meeting that President Zelensky spoke about in this podcast. Just as a small but important
    3:53:04 example, Andrei Bogdan was interviewed twice by Dmitry Gordon and gave a deep inside look
    3:53:11 of the administration of President Zelensky, including that very 2019 meeting. The two interviews
    3:53:18 are seven and a half hours, by the way, and from my interviewer perspective are a masterclass of
    3:53:24 interviewing. Andrei Bogdan worked directly with President Zelensky as the head of the office of
    3:53:30 the President of Ukraine. He was there for the 2019 face-to-face meeting between Volodymyr Zelensky
    3:53:37 and Vladimir Putin at the Paris summit, along with French President Emmanuel Macron and German
    3:53:45 Chancellor Angela Merkel. This was part of the Normandy format peace talks. In those two interviews,
    3:53:52 Andrei Bogdan gave a very different perspective on that 2019 meeting than did President Zelensky
    3:53:58 to me in our conversation. The perspective being that the failure to negotiate a ceasefire
    3:54:05 and peace was not a simple one-sided story. I don’t think this is the right time for me to dive
    3:54:10 into that data point and be critical. I’m not interested in being critical for the sake of
    3:54:17 criticism. I am interested, once again, in productive conversations, critical or otherwise,
    3:54:24 that push towards peace. The kind I described earlier. This is merely an example of a data
    3:54:31 point I was collecting in my brain. There are many, many others. But all of it taken together
    3:54:38 made it clear to me, and I still believe this, that it is indeed very difficult, but possible,
    3:54:45 to negotiate long-lasting peace with Vladimir Putin. It is certainly true that Ukraine is
    3:54:51 best positioned to negotiate from a place of strength. After the invasion of February 24,
    3:54:59 2022, I believe there were three chances where peace was most achievable. First chance was March
    3:55:06 and April of 2022, with a successful defense of the North. Second chance was the fall of 2022,
    3:55:13 with a successful counter-offensive in Hassan and Kharkiv. The third chance is now.
    3:55:19 As he has stated multiple times publicly, Donald Trump is very interested in making peace.
    3:55:25 It is likely that the U.S. financial support for this war will continue to dwindle. So,
    3:55:32 the leverage and the timing for peace negotiation is now. There is unlikely to be another chance like
    3:55:40 this for a long time. Just to zoom out on the conversation piece of this, I interviewed Donald
    3:55:47 Trump and may do so again. I interviewed Vladimir Zelensky and may do so again. And it seems likely
    3:55:55 that I will interview Vladimir Putin in Russia, in the Kremlin. I understand the risks and I accept
    3:56:00 them. The risks for me are not important. I’m not important. I merely want to do my small part in
    3:56:07 pushing for peace in a moment in history when there’s a real chance for that peace to actually be
    3:56:14 achieved. I may be speaking too long, I’m sorry, but I can probably speak for many more hours,
    3:56:20 so this is in fact me trying to be brief. So again, my two goals were to bring out the best in
    3:56:26 President Zelensky as a leader and a human being and to give him every opportunity possible to
    3:56:32 signal that he is ready to make peace and to lay out his vision for what that peace might look like.
    3:56:41 Like I said, step one through ten is prepare well. I did. But step 11 is the actual conversation.
    3:56:46 There the specific psychological and personality quirks and qualities of the guest matter a lot.
    3:56:50 My job is to try to cut through the bullshit walls we put up with human beings
    3:56:56 and reveal directly or indirectly who the person truly is and how they think.
    3:57:03 With Zelensky, he is a deeply empathic and emotional human being who personally feels
    3:57:10 the suffering of the people of Ukraine in this war. This is a strength and perhaps also a weakness.
    3:57:17 But it is an important part of the reason why I said many times that he is a truly historic figure.
    3:57:24 Very few leaders in recent history would be able to pull off what he did, to stay in Kiev,
    3:57:29 to unite the country, to convince the West to join the war effort to the degree they did.
    3:57:37 He is also a showman to borrow the title of the biography I recommended. A man with many layers
    3:57:46 of humor and wit, but also ego and temper. Sometimes fully self-aware and sometimes losing himself
    3:57:52 in the emotional roller coaster of a painful memory or a turn of phrase that he can use as
    3:57:59 a springboard for an angry soliloquy. After this, the fact that we didn’t agree to anything,
    3:58:05 what we will talk about or how long we will talk about it. The interview could have easily been
    3:58:11 five minutes or three hours, so I had to quickly gain his trust enough to open up
    3:58:17 and stay for a long-form conversation, but push him enough to reveal the complexities of his
    3:58:24 thought process and his situation. This is where humor and camaraderie was essential and I would
    3:58:29 return to it often, though it was very difficult given the stakes, the heaviness, the seriousness
    3:58:35 of the topic of the war. So in this case, the approach I followed for this conversation is
    3:58:41 constant nudges and questions about peace, often using almost childlike statements or questions.
    3:58:47 I generally like these kinds of questions. On the surface, they may seem naive, but they’re not.
    3:58:53 They are often profound in their simplicity, like a lot of questions that children ask.
    3:58:59 Remember, it was a child who pointed out that the emperor was not wearing any clothes.
    3:59:05 I like the simplicity, the purity, the boldness of such questions to cut through the bullshit
    3:59:11 to the truth. And that truth is that hundreds of thousands of people died in this war
    3:59:18 and are dying every day. And all the other problems, from corruption to suspended elections,
    3:59:25 to censorship, cannot be solved until peace is made. I give the president every single chance
    3:59:31 to signal willingness to negotiate, knowing that both Trump and Putin will listen to this
    3:59:38 conversation. I don’t think he took it and instead chose to speak very crude words towards
    3:59:44 Vladimir Putin. This is fully understandable, but not directly productive to negotiation.
    3:59:51 To clarify, I have hosted many conversations that were intensely critical of Vladimir Putin,
    3:59:56 from Sir Hiplohi to Stephen Kotkin. But this conversation is with a world leader,
    4:00:02 speaking about another world leader during a historic opportunity for peace.
    4:00:08 Crude words of disrespect, while powerful, may harm negotiations.
    4:00:15 Peacemaking in this situation requires compromise in order to avoid further death and suffering.
    4:00:22 And I believe it requires treating the other leader with a seriousness you expect him to treat you
    4:00:29 with. This is what I was pushing for. All that while also putting my ego aside and letting the
    4:00:35 president shine, which is necessary to accomplish both goals one and two that I mentioned previously.
    4:00:41 This is also why I wanted the president to speak about Elon and Trump, to extend the olive branch
    4:00:48 for further avenues of peacemaking. This is not about politics. It is, once again, simply about peace.
    4:00:55 Now, all of this, my words, my attempts were taken out of context and used to attack me by
    4:01:01 some online mobs. As an example, President Zelensky said in a mocking tone that he thinks
    4:01:10 that Vladimir Putin is simply irritated by people who are alive in Ukraine. And I answered, “If you
    4:01:15 believe this, it will be very difficult to negotiate. If you think that the president of a country is
    4:01:22 completely crazy, it is really hard to come to an agreement with him. You have to look at him as a
    4:01:28 serious person who loves his country and loves the people in this country. And he conducts,
    4:01:35 yes, destructive military actions.” The president interrupted me at this point and said,
    4:01:42 “Who are you talking about now? Who loves this country?” And I said, “Putin. Do you think he
    4:01:49 doesn’t love this country?” And the president answered, “No.” Again, this is not a podcast
    4:01:55 conversation with a historian or activist. And I somehow, out of nowhere, just for fun,
    4:02:02 waxed poetic about Putin’s or Zelensky’s or Trump’s love of nation. It is a conversation
    4:02:09 with a world leader discussing the opportunity to negotiate peace when a large number of people
    4:02:17 are dying every single day. Even if the heart boils over with hate, leadership now requires
    4:02:23 sitting at the negotiation table and compromising. This may be painful, but it is necessary.
    4:02:28 There are a few other places in the conversation where some online mobs took my words out of
    4:02:35 context and used them to call me naive and to call for more war, saying peace is impossible
    4:02:42 with a man who they claim is the second coming of Hitler. My friends, if you make such attacks on
    4:02:49 this conversation, it is in fact you who are naive and ignorant of the facts of history and
    4:02:59 geopolitics. Peace must be made now in order for death and suffering to stop, in order for Ukraine
    4:03:05 to have a chance to flourish, and in order for the drums of a global war to stop beating,
    4:03:14 a global war that would cripple humanity. This was my goal, once again, to push for peace.
    4:03:22 And I will continue this effort to the best of my ability. Thank you. I love you all.
    4:03:38 [Music]

    Jennifer Burns is a historian of ideas, focusing on the evolution of economic, political, and social ideas in the United States in the 20th century. She wrote two biographies, one on Milton Friedman, and the other on Ayn Rand.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep457-sc
    See below for timestamps, and to give feedback, submit questions, contact Lex, etc.

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Jennifer’s X: https://x.com/profburns
    Jennifer’s Website: https://www.jenniferburns.org

    Jennifer’s Books:
    Milton Friedman biography: https://amzn.to/4hfy1HO
    Ayn Rand biography: https://amzn.to/4afr3A0

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Brain.fm: Music for focus.
    Go to https://brain.fm/lex
    GitHub: Developer platform and AI code editor.
    Go to https://gh.io/copilot
    LMNT: Zero-sugar electrolyte drink mix.
    Go to https://drinkLMNT.com/lex
    Shopify: Sell stuff online.
    Go to https://shopify.com/lex
    AG1: All-in-one daily nutrition drinks.
    Go to https://drinkag1.com/lex

    OUTLINE:
    (00:00) – Introduction
    (10:05) – Milton Friedman
    (24:58) – The Great Depression
    (39:15) – Schools of economic thought
    (50:22) – Keynesian economics
    (58:10) – Laissez-faire
    (1:06:00) – Friedrich Hayek
    (1:11:18) – Money and monetarism
    (1:26:03) – Stagflation
    (1:30:56) – Moral case for capitalism
    (1:34:53) – Freedom
    (1:39:51) – Ethics of competition
    (1:43:37) – Win-win solutions
    (1:45:26) – Corruption
    (1:47:51) – Government intervention
    (1:54:10) – Conservatism
    (2:00:33) – Donald Trump
    (2:03:09) – Inflation
    (2:07:38) – DOGE
    (2:12:58) – Javier Milei
    (2:18:03) – Richard Nixon
    (2:25:17) – Ronald Reagan
    (2:28:24) – Cryptocurrency
    (2:43:40) – Ayn Rand
    (2:51:18) – The Fountainhead
    (3:02:58) – Sex and power dynamics
    (3:19:04) – Evolution of ideas in history
    (3:26:32) – Postmodernism
    (3:37:33) – Advice to students
    (3:45:50) – Lex reflects on Volodymyr Zelenskyy interview

    PODCAST LINKS:
    – Podcast Website: https://lexfridman.com/podcast
    – Apple Podcasts: https://apple.co/2lwqZIr
    – Spotify: https://spoti.fi/2nEwCF8
    – RSS: https://lexfridman.com/feed/podcast/
    – Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
    – Clips Channel: https://www.youtube.com/lexclips

  • #456 – Volodymyr Zelenskyy: Ukraine, War, Peace, Putin, Trump, NATO, and Freedom

    AI transcript
    0:00:06 The following is a conversation with Volodymyr Zelensky, the president of Ukraine.
    0:00:12 It was an intense, raw, and heartfelt conversation, my goal for which was to understand
    0:00:21 and to do all I can to push for peace. Please allow me to say a few words, first about language,
    0:00:26 then about the president, and finally about history. Please skip ahead,
    0:00:33 straight to our conversation, if you like. We spoke in a mix of languages, continuously switching
    0:00:41 from Ukrainian to Russian to English. So, the interpreter was barely hanging on. It was indeed,
    0:00:47 in many ways, a wild ride of a conversation, as the president said, the first of many.
    0:00:55 Language, like many other things in a time of war, is a big deal. We had a choice. Speaking Russian,
    0:01:02 Ukrainian, or English. The president does speak some English, but he’s far from fluent in it,
    0:01:08 and I sadly don’t speak Ukrainian, yet. So, Russian is the only common language we’re both
    0:01:14 fluent in. In case you don’t know, the Russian language is one that the president speaks fluently
    0:01:20 and was his primary language for most of his life. It’s the language I also speak fluently,
    0:01:27 to the degree I speak any language fluently, as does a large fraction of the Ukrainian population.
    0:01:34 So, the most dynamic and powerful conversation between us would be in Russian, without an interpreter,
    0:01:42 who in this case added about two to three second delay, and frankly translated partially and poorly,
    0:01:48 for me at least. Taking away my ability to feel the humor, the wit, the brilliance, the pain,
    0:01:55 the anger, the humanity of the person sitting before me, that I could clearly feel when he was
    0:02:03 speaking fluently in the language I understand, Russian. But all that said, war changes everything.
    0:02:08 The Ukrainian language has become a symbol of the Ukrainian people’s fight for freedom
    0:02:15 and independence. So, we had a difficult choice of three languages, and faced with that choice,
    0:02:23 we said yes, to all three, to the consternation and dismay of the translators.
    0:02:31 We make captions and voice over audio tracks available in English, Ukrainian and Russian,
    0:02:37 so you can listen either to a version that is all one language, or to the original mixed language
    0:02:42 version with subtitles in your preferred language. The default is English overdub.
    0:02:48 On YouTube, you can switch between language audio tracks by clicking the settings gear icon,
    0:02:56 then clicking audio track, and then selecting the language you prefer, English, Ukrainian,
    0:03:05 Russian. To listen to the original mixed language version, please select the English UK audio track.
    0:03:12 Big thank you to Eleven Labs for their help with overdubbing using a mix of AI and humans.
    0:03:17 We will continue to explore how to break down the barriers that language creates,
    0:03:24 with AI and otherwise. This is a difficult but important endeavor. Language, after all,
    0:03:31 is much more than a cold sequence of facts and logic statements. There are words when spoken
    0:03:38 in the right sequence and at the right time that can shake the world and turn the ties of history,
    0:03:46 that can start and end wars. Great leaders can find those words, and great translators
    0:03:52 can help these words reverberate to the outskirts of a divided civilization.
    0:03:58 On another note, let me say that President Zelensky is a truly remarkable person
    0:04:05 and a historic figure. I say this as somebody who deeply understands the geopolitical complexity
    0:04:12 and history of the region. I am from this region. My parents were both born in Ukraine,
    0:04:20 Kiev and Kharkiv, both my grandfathers too. I was born in Tajikistan and lived for a time there,
    0:04:29 then in Kiev, then Moscow, then United States. And while I am now for almost 30 years and to
    0:04:37 the day I die, I am a proud American. My family roots grow deep in the soil of nations that comprised
    0:04:44 the Soviet Union, including Ukraine, Russia, Belarus, and Tajikistan. I’ve gotten to know and
    0:04:48 have spoken for hours with members of the President’s team and people close to him.
    0:04:55 I spoke to hundreds of Ukrainians since 2022, including soldiers, civilians, politicians,
    0:05:00 artists, religious leaders, journalists, economists, historians, and technologists.
    0:05:07 I listened to hundreds of hours of programs that both support and criticize the President,
    0:05:13 in Ukraine, in Russia, in the United States. I’ve read countless books about this war
    0:05:20 and the long arc of history that led up to it. A force to recommend too, at this moment,
    0:05:26 I would say the Russo-Ukrainian War by Sergei Plohe and the Showman by Simon Schuster,
    0:05:33 which is a good personal behind the scenes biography of the President, focused on 2022.
    0:05:41 But there are many, many more. This is why I can comfortably say that he is a truly singular
    0:05:47 and remarkable human being. It was an honor and pleasure to talk with him on and off the mic.
    0:05:55 Now, it is true that I plan to travel to Moscow and to speak with President Vladimir Putin.
    0:06:01 And I hope to be back in Kiev as well, as President Zelensky said this was our first
    0:06:07 of many more meetings. In all these cases, I seek to do my small part in pushing for peace.
    0:06:13 And in doing all this, I’m deeply grateful for the trust people have given me on all sides,
    0:06:20 for the people attacking me, sometimes lying about me, for the critics in the stands,
    0:06:26 chanting the latest slogans of the mass hysteria machine like the sheep and animal farm.
    0:06:34 I love you too. And I assure you that drawing lines between good and evil on a world map
    0:06:41 is much easier than seeing that line between good and evil in every human being,
    0:06:48 including you and me. This is what I try to do. I’m simply a human being who seeks to find and
    0:06:58 surface the humanity in others. And as I’ve said, no amount of money, fame, power, access can buy my
    0:07:06 opinion or my integrity. Now, finally, please allow me to briefly overview some history to give
    0:07:10 background for several topics that President Zelensky references in this conversation.
    0:07:16 I recommend my conversation with Sergey Plohe and many others about the history of the region.
    0:07:23 But here let me start with 1991, when Ukraine declared its independence and the Soviet Union
    0:07:30 collapsed. From this point on, Russia-Ukraine relations were defined in large part by whether
    0:07:36 Ukraine aligned more with Russia or with the West, meaning Europe, United States, NATO, and so on.
    0:07:44 In 2004, with the Orange Revolution, a pro-Western candidate, Viktor Yushenko became president.
    0:07:51 In 2010, it went the other way, a pro-Russia candidate, Viktor Yanukovych became president.
    0:07:57 The internal tensions grew, and in 2013, Euromaidan protests broke out
    0:08:03 over Yanukovych’s decision to suspend talks with the European Union in favor of closer ties with
    0:08:10 Russia. This set forward a chain of important events in 2014. On the politics front, Yanukovych was
    0:08:17 ousted and fled to Russia, leading to the election of a pro-Western president. Also, in 2014, on the
    0:08:24 war front, Russia annexed Crimea and war broke out in the Donbass region of eastern Ukraine,
    0:08:31 which eventually killed over 14,000 people and continued all the way to 2022, when,
    0:08:39 on February 24, 2022, Russian forces initiated a full-scale invasion of Ukraine.
    0:08:43 This is when the world started to really pay attention.
    0:08:50 Now, some history of peace talks. Volodymyr Zelensky won the presidency in 2019,
    0:08:55 and he discusses, in this conversation, the ceasefire agreements he made with Vladimir Putin
    0:09:03 in 2019, which was one of many attempts at peace, from the two Minsk agreements in 2014 and ’15
    0:09:12 to a series of ceasefire agreements in 2018, ’19, and ’20, all of which failed, in part or in whole.
    0:09:17 All this shows just how difficult ceasefire and peace negotiations are,
    0:09:24 but they are not impossible. It is always worth trying, over and over again, to find the path to
    0:09:32 peace. I believe that presidents Zelensky, Putin, and Trump should meet soon after January 20 this
    0:09:39 year and give everything they got to negotiate a ceasefire and security guarantees that pave the
    0:09:45 way for a long-lasting peace. We discussed several ideas for this in this conversation.
    0:09:54 As I said, this was one of my main goals here, to push for peace. This trip to Kyiv and this
    0:09:59 conversation was a truly special moment for me in my life. It is one I will never forget.
    0:10:05 So to reflect, I say a few more words and answer some questions at the very end if you like to
    0:10:14 listen. But here, let me say thank you to everyone for your support over the years. It means the world.
    0:10:20 And now, a quick few second mention of each sponsor. Check them out in the description.
    0:10:24 It’s the best way to support this podcast. There are no sponsor reads in the middle,
    0:10:32 so, you know, you can skip these, but I do try to make them interesting in case you stick around.
    0:10:36 In either case, still please check out the sponsors by their stuff. It’s the best way
    0:10:45 to support this podcast. We’ve got Notion for Notes and Team Collaboration, Github for all things
    0:10:52 programming, including with the help of AI, AG1 for Health, Elements for Electrolytes,
    0:10:58 Aidsleep for Naps, and BetterHelp for Your Mind. If you want to get in touch with me for whatever
    0:11:05 reason, go to lexfeedman.com/contact. And now onto the full ad reads. This episode is brought to you
    0:11:10 by Notion, a note-taking and team collaboration tool. I believe I mentioned it at the end of the
    0:11:17 podcast. It’s something I use regularly as a big part of my podcast prep and research process.
    0:11:24 I currently only use it at the computer when I’m doing really sort of rigorous systematic
    0:11:30 note-taking. But it is, like I mentioned, really the best integration of AI that I’ve
    0:11:37 used in any note-taking application. I’m a bit delirious at the moment because through the insane
    0:11:46 amount of work that had to be done to bring together the translation for this episode with
    0:11:52 President Zelensky, I’ve gotten very little sleep. So here I am trying to put together a few words
    0:12:04 when the neurons required to assemble said words are just not firing. Anyway, the amount of research,
    0:12:10 the amount of note-taking that I had to do, just the chaos, the whirlpool, the overwhelming amount
    0:12:18 of notes that I took across many books and blog posts. And I was listening to just a large number
    0:12:24 of conversations from all different kinds of perspectives. And I’m not sure those notes were
    0:12:30 sort of directly useful, but they’re building up a knowledge base. They’re building up an intuition.
    0:12:38 They’re making sure that I have a chance to understand. So anyway, Notion played a big part
    0:12:45 of that. Try Notion AI for free when you go to Notion.com/lex. That’s all lowercase Notion.com/lex
    0:12:50 to try the power of Notion AI today. This episode is also brought to you by a new sponsor,
    0:13:01 but obviously one I’ve used for many, many years. It’s GitHub and GitHub Co-Pilot. So GitHub for
    0:13:07 people who somehow don’t know if you’re listening to this and you’re not a developer, it’s basically
    0:13:16 a place where developers go to be happy and to collaborate and to share and to build, especially
    0:13:24 for people who are part of the open source world. So it really is a magical place. And also they
    0:13:31 were pioneers in the AISs decoding space with GitHub Co-Pilot. Now GitHub Co-Pilot is not just
    0:13:39 available in VS Code. It’s also available in Neo Vim. It’s available in all the JetBrains IDEs.
    0:13:47 I’ve used JetBrains for a long time and loved it and eventually drifted away. Still have not
    0:13:54 tried Neo Vim. I probably should. Vim Neo Vim. That’s what all the cool kids are using. Anyway,
    0:14:01 GitHub Co-Pilot and all the different features of AI-assisted coding that they’re continually
    0:14:08 developing are available in those IDs. As I mentioned, at the end of the episode, I was an
    0:14:16 Emacs user for probably over 20 years, way more than 20 years. And so I don’t remember exactly when,
    0:14:22 but a few months ago, I switched to VS Code. And that was just such a lightball moment. It took
    0:14:27 a little bit of time to get adjusted. I missed a bunch of stuff in Emacs, especially because I
    0:14:32 customized everything with Lisp, which is what Emacs is written in. And it’s the sort of the
    0:14:39 back end customization is written in Lisp. And Lisp is its own programming language with an aura
    0:14:48 and a spirit that permeated my being for a long time. So it took a little bit of time to get used
    0:14:56 to VS Code. But really, the magic of Co-Pilot is the thing that allowed me to transition so quickly.
    0:15:02 And they’re facing a lot of steep competition right now. So I’m excited just how seriously
    0:15:09 they’re taking this competitive space of AI-assisted coding and developers win.
    0:15:16 The more competition, the more features developers win. And I, as a developer myself,
    0:15:23 just full of joy when I get to pair a program with a good LLM. Anyway, get started with GitHub
    0:15:32 Co-Pilot for free today at gh.io/copilot. This episode is also brought to you by AG1 and all
    0:15:40 in one daily drink to support better health and peak performance. I’ve been traveling crazy places,
    0:15:49 intense schedules, just chaos, taking risks, all that kind of stuff. So to get back to where I can
    0:16:01 drink AG1 and have for brief moments of time the feeling of home is really nice. AG1, for whatever
    0:16:08 reason, is the thing that makes me feel like home. It’s the symbol of the daily habits that I do when
    0:16:18 I have my shit together. And I’m exercising and eating okay and making sure that I’m getting the
    0:16:26 nutrition I need. So in that sense, it’s good to be home. They’ll give you a one month supply of
    0:16:32 fish oil when you sign up at drinkag1.com/lex. This episode is also brought to you by Element,
    0:16:38 my daily zero sugar and delicious electrolyte mix. Now some number of packets of element I actually
    0:16:46 did bring to Ukraine, to Eastern Europe, to Europe as I’m traveling. It’s just so easy to travel with
    0:16:54 and especially when I’m fasting for 24 hours or more, which I was doing not by choice, but
    0:17:04 for the flexibility that it enables, electrolytes really help me avoid the headaches associated
    0:17:12 with not consuming enough calories or fasting or eating only meat or all that kind of stuff.
    0:17:18 It really helps make sure that you avoid what people call the keto flu, but I find that when I’m
    0:17:24 fasting or doing really low carbs at any stage, it just makes me feel better if I make sure the
    0:17:31 electrolytes are correct. And the same is true with the intense exercise. So get a simple pack
    0:17:38 for free with any purchase. Try to drink element.com/lex. This episode is brought to you by EighthSleep
    0:17:50 and it’s Pod4Ultra. And yes, the irony of the fact that I haven’t slept for probably 40 hours
    0:17:57 and I’m about to crash the irony of the fact that I am talking or attempting to
    0:18:06 about a really, really nice mattress. I just can’t wait. I can’t wait. It cools the bed,
    0:18:14 warm blanket. It really is, it’s an escape from the insanity, the cruelty,
    0:18:24 the madness of the world. Yeah. So I look forward to that. I’ll look forward to that whenever I
    0:18:32 take a power nap or trying to get a phone that sleep. Yeah. It’s a little ruspous from the madness
    0:18:41 of the world. Go to EighthSleep.com/lex and use code Lex to get $350 off the Pod4Ultra.
    0:18:48 This episode is also brought to you by BetterHelp, spelled H-E-L-P Help. It’s difficult for me to
    0:18:59 explain the kind of things that war does to people’s minds, to how they see the world,
    0:19:06 how they interact with each other. I’ve seen a lot of pain in my travels and it breaks my heart.
    0:19:16 So that said, the human mind is remarkably resilient to suffering. And that too
    0:19:25 gives me a kind of hope that no matter what, the human spirit prevails and flourishes.
    0:19:31 Sometimes it takes years, sometimes it takes generations, but it does flourish.
    0:19:39 Anyway, I’m reminded of that from BetterHelp. It’s a service that helps you figure out what
    0:19:43 you need to match with a licensed therapist in under 48 hours. You can check them out at
    0:19:50 BetterHelp.com/lex and save on your first month. That’s BetterHelp.com/lex.
    0:19:58 This is the Lex Friedman podcast. And now, dear friends, here’s the president of Ukraine,
    0:20:07 Volodymyr Zelensky.
    0:20:20 If we can explain why the Ukrainian language is very important,
    0:20:23 our conversation will be most effective and impactful if we speak in Russian.
    0:20:27 I speak Russian perfectly, of course, and I understand everything you are talking about.
    0:20:34 However, I can’t respond in Russian the entire interview. It’s because this is how it is today.
    0:20:39 I am not making anything up. You can see it all for yourself. You can feel and hear it.
    0:20:46 Today, there were 73 missile attacks against us and people were killed. There were over 100 drones
    0:20:53 today and this is a daily occurrence. The people who attack us, they speak Russian.
    0:20:59 They attack people who were only recently told that this was actually in defense of
    0:21:08 Russian-speaking people. And this is why I respect neither the leader or director of today’s Russia,
    0:21:16 nor the people. I just, that’s it. And I don’t think that you can just pretend that nothing’s
    0:21:24 happening and give Putin a pass once again for saying that we are one people, that we speak
    0:21:30 one language, etc. They speak the language of weapons. That is a fact. And we are peaceful people,
    0:21:38 peaceful people who want to protect themselves and defend their freedom and their human choice.
    0:21:46 You know, at the beginning of the war, I addressed Russians in Russian.
    0:21:58 Zero effect. They’re mute. They do not listen. They did not listen.
    0:22:03 Some are afraid. Some have other issues. They have different reasons. It’s like when a person is
    0:22:08 drowning. Drowning and people walk by because they can’t hear them. And someone walks on by crying.
    0:22:13 Afraid to save them. It doesn’t change anything for the one drowning.
    0:22:19 They need someone to help them. This is why I honestly despise these people as they are deaf.
    0:22:27 They began the occupation in the supposed defense of the Russian language. And that’s why,
    0:22:32 with all due respect, I would like to give an interview in Ukrainian. This is very,
    0:22:41 this is very important to me. If there are some points that you want me to explain,
    0:22:47 in Russian, I can certainly do that. I can certainly occasionally speak Russian.
    0:22:54 But in general, in general, no, I’m not sure that that you will understand me completely.
    0:22:59 Despite your Ukrainian roots, you are a citizen of the United States, right?
    0:23:10 Yes. That’s why I’m surprised that you don’t understand. Well, it was a long time ago. I
    0:23:20 understand that it was a long time ago. Moreover, a lot has changed. A lot has changed.
    0:23:28 If I may please allow me to say this in Russian. Yes, many things have changed. But I have hope.
    0:23:34 I hope that today, many Russians will hear this, that Vladimir Putin will hear this,
    0:23:38 that the American president, Donald Trump, and the American people will hear this,
    0:23:43 that everyone will hear this. And yes, Ukrainian language is important symbolically.
    0:23:46 But what is also important is that we understand each other well.
    0:23:51 For Donald Trump? Is it important for Donald Trump whether I speak Russian or not?
    0:23:57 Yes. Because unfortunately, and it hurts to admit, but I cannot speak or understand Ukrainian yet.
    0:24:02 So your wit, dynamism, and your humanity will not come through as well and as quickly.
    0:24:06 Remember, I need to wait for two to three seconds to hear it.
    0:24:12 You have a great sense of humor, great stories. With an interpreter translating,
    0:24:15 I simply won’t see this, but I understand that it’s painful.
    0:24:21 Another reason is that I hoped we could show that even though
    0:24:26 it is sometimes said that Russian is banned in Ukraine.
    0:24:29 This is not true. I’m speaking Russian now, right?
    0:24:32 We have people who speak Russian. This is not true, really, it’s not.
    0:24:38 It’s really not true. We disrespect Russian now because of Russians.
    0:24:44 That’s all. When they were saving Russian speakers, they killed Russian speakers,
    0:24:48 many people who actually, many of whom are in the East, right?
    0:24:53 In the East, they lived, lived in the East.
    0:24:56 They destroyed their houses, destroyed their lives.
    0:24:59 It’s not a rhetorical thing. It’s not all talk and blah, blah, blah.
    0:25:01 I don’t have time for blah, blah, blah. Yes.
    0:25:05 So it’s a very, very, very important and sensitive moment.
    0:25:12 The message is that we are not one nation. We are not, you know, the same country.
    0:25:15 We’re different countries. Yes, different countries.
    0:25:23 And I think what is most important is what we’re talking about, not how.
    0:25:27 We’re speaking about it. This is what I think. You’re a smart guy.
    0:25:30 So you have a lot of experience in dialogue of this kind.
    0:25:33 That’s why I think you will, you will understand me.
    0:25:44 Yeah. I, anyway, I think it is far better for Donald Trump to hear my English, not my Russian.
    0:25:46 Your English is much better than my Ukrainian.
    0:25:48 You’re getting better and better at everything.
    0:25:53 That’s true. I’m a very honest guy. That’s why I will be very honest with you.
    0:25:58 Okay. Your Ukrainian is not very good, but we will, but we will work on it.
    0:26:01 Yes. I have many flaws. That’s one of them.
    0:26:06 Sometimes I can speak English. Sometimes, as I understand, we can be very flexible, right?
    0:26:10 Very flexible. Spanish, Swahili.
    0:26:11 Yeah, you see?
    0:26:13 Yeah. Javier Malay needs to understand us.
    0:26:17 So by the way, Javier understood me without any words.
    0:26:20 The language of love, maybe.
    0:26:21 Of respect. Respect.
    0:26:25 I respect him. I had a very good conversation with him. Really brilliant.
    0:26:27 May I sometimes speak Russian and sometimes English?
    0:26:29 Yes. You can use any language you like.
    0:26:33 And I think that’s a very good rule for this first meeting between us.
    0:26:38 As you said, maybe we will meet in the future for the second time.
    0:26:39 Second and third and fourth?
    0:26:43 Yeah, this is good. You can ask questions in the language you’d like,
    0:26:45 and I will answer in the language I can.
    0:26:49 Well, you said you wanted to meet by the sea at some point.
    0:26:52 So for our next meeting, let’s meet by the sea.
    0:26:53 With pleasure.
    0:26:59 Next time, it would be much better to meet by our Ukrainian Black or our Azov Sea.
    0:27:01 You know, I’ve been to a lot of…
    0:27:06 I have traveled to many cities in Ukraine, but I have never been to Odessa.
    0:27:08 And everyone tells me that, and I don’t know why.
    0:27:09 You have to.
    0:27:12 Can you explain to me why everyone loves Odessa so much?
    0:27:14 What’s there?
    0:27:19 You know, what’s in Odessa? That’s how they say it.
    0:27:21 What’s there? In Odessa, we’ve got it all.
    0:27:21 Okay.
    0:27:27 Odessa, I love Odessa because of its particular temperament.
    0:27:32 People have their own accent, and it’s so…
    0:27:34 There are many nationalities, you know.
    0:27:39 There are a lot of stories, authentic Odessa cuisine.
    0:27:43 By the way, you know, the cuisine is very different from others.
    0:27:47 The dishes are not like any other dishes, and everything is very tasty.
    0:27:50 Also, there are beautiful people.
    0:27:51 And today, you know,
    0:28:00 you understand people very well, especially after the attacks on Odessa.
    0:28:03 You understand what the people are like.
    0:28:06 Just how Odessites are, very Ukrainian.
    0:28:09 And that’s very cool.
    0:28:10 I love Odessa.
    0:28:12 I go there several times a year.
    0:28:16 I go there several times a year now because…
    0:28:20 Well, now because of strengthening of air defense systems,
    0:28:23 because of this grain corridor, etc.
    0:28:25 I go there more often.
    0:28:28 They have the sun there.
    0:28:29 They have the sea.
    0:28:33 It’s Ukraine, and it’s very cool there.
    0:28:37 Well, when you come and visit me in Texas as a guest for the third time…
    0:28:39 With pleasure.
    0:28:39 Let’s do this.
    0:28:42 How about you?
    0:28:47 My friend Joe Rogan and I will go get some Texas barbecue together.
    0:28:48 Who will pay?
    0:28:50 That’s a good question.
    0:28:53 Putin, Putin, for everything.
    0:28:54 He has to pay.
    0:28:55 Well, yes, we’ll invite him to.
    0:28:56 No, no, no, no.
    0:28:57 Okay.
    0:28:57 Without him.
    0:28:58 Okay, I get it.
    0:28:58 Understood.
    0:29:09 But if the Rome Statute will be accepted by your government before this moment.
    0:29:11 By the way, I don’t know if you know this,
    0:29:14 but Joe has a great comedy club in Austin.
    0:29:15 Joe Rogan.
    0:29:16 Joe Rogan, yes.
    0:29:21 And I think that as a person who respects comedy and stand-up comedy,
    0:29:23 it would be interesting for you to have a look at it.
    0:29:27 No, no, I know him, and I saw a lot of different videos.
    0:29:30 He’s a very talented person.
    0:29:35 So it would be a pleasure if you invite me and I’m able to do it.
    0:29:42 I am a little bit busy, but if I’ll be in the United States,
    0:29:46 I hope that I will have a conversation and a meeting with President Trump.
    0:29:50 And of course, during my visit, if I’ll have the time,
    0:29:52 it would be a pleasure if you’ll invite me with pleasure.
    0:29:53 You know what?
    0:29:55 I will pay.
    0:29:56 Good.
    0:30:00 Yeah, I had to think about it, but you are the president.
    0:30:01 Yes, with you, with pleasure.
    0:30:03 When the war is over, please come.
    0:30:03 Thanks so much.
    0:30:05 And when you’re less busy.
    0:30:06 Thanks so much.
    0:30:09 If we can go back many years, World War II,
    0:30:13 tell me the story of your grandfather who fought in World War II.
    0:30:21 My grandfather, he graduated from the military, military academy,
    0:30:25 and from the very beginning of the war, he went to fight.
    0:30:30 He was in the infantry and he fought through the entire war.
    0:30:31 He had many wounds.
    0:30:37 As they used to say back then, his chest is covered in medals.
    0:30:38 And it’s true.
    0:30:39 He had more than 30.
    0:30:42 Yes, more than 30.
    0:30:45 He was the kind of man he was such.
    0:30:49 He was such a serious man.
    0:30:51 I loved him very much.
    0:30:54 And we had a very close relationship.
    0:30:59 Um, he didn’t like to tell details about the war.
    0:31:03 He never, he never boasted.
    0:31:10 Although I asked him, as a boy would, how many fascists did you kill?
    0:31:12 He never talked about it.
    0:31:21 He believed that the war was a great, a great tragedy, a tragedy for everyone.
    0:31:28 And, uh, Ukraine was occupied and it was a tragedy for Ukraine,
    0:31:31 a tragedy for Europe, and a tragedy for the Jewish people.
    0:31:38 His own brothers, his entire family were executed.
    0:31:46 They were tortured by fascists who had occupied Ukraine and their village.
    0:31:54 His father was the head of the village and he was killed.
    0:31:55 They were shot.
    0:32:00 It was a mass, a mass grave, right?
    0:32:03 Yes, it was a communal burial.
    0:32:08 Some of them were killed outright and others were, they were buried alive.
    0:32:13 His four brothers, they all went to war.
    0:32:15 As soon as the war began, they were all there.
    0:32:23 He was the only one who had a military education and they all died in the war.
    0:32:25 He was the only one who came back.
    0:32:27 He had nobody.
    0:32:37 He came back and he found, found my grandmother, his future wife,
    0:32:40 and she was, she managed, what was it called then?
    0:32:42 I don’t know, they don’t have them anymore.
    0:32:50 It was a childcare facility and orphanage, so to speak, a place where orphans lived,
    0:32:56 children who, who don’t have parents, children of war.
    0:33:03 And she managed this childcare facility with difficult children, as they used to call them,
    0:33:08 difficult children who went through the war, who saw their parents killed.
    0:33:15 And this is how they met, because these difficult children, they,
    0:33:18 well, sometimes behave differently.
    0:33:21 They could steal something, do something bad.
    0:33:27 There were many, many children in the orphanage.
    0:33:31 Yes, that’s how she met my grandfather.
    0:33:34 And I loved him very much.
    0:33:44 And I think that my grandfather, frankly, would never have believed that this war is possible.
    0:33:49 He would never have believed it, because he worked in the police after the war.
    0:33:51 He was a colonel.
    0:33:57 He worked in a criminal investigation all his life.
    0:34:06 So he fought with bandits all his life after the Second World War.
    0:34:12 But also, I believe he fought for justice all his life.
    0:34:14 And we all lived in one apartment.
    0:34:21 And even after his death, I lived with both of my grandmothers and my parents,
    0:34:25 two grandmothers, who both lost their husbands.
    0:34:27 Both of them died.
    0:34:31 Well, it was an ordinary family.
    0:34:36 An ordinary family that lived like everyone lived back then in the Soviet Union.
    0:34:42 And even after the Soviets in the 90s, we lived in one apartment all together.
    0:34:45 What else is there to say?
    0:34:51 But I think the most important thing was values, respect.
    0:34:53 They gave me an education.
    0:34:56 My parents gave me an education.
    0:35:02 No one left me money or apartments, so I didn’t inherit anything material.
    0:35:09 But I believe that our real inheritance is here in our minds and in our hearts.
    0:35:09 I believe that.
    0:35:19 This is one second.
    0:35:26 So if I’m sorry, if you tell a joke, I will laugh about one, two or three seconds later.
    0:35:27 There’s a delay.
    0:35:34 So an ordinary family, but not an ordinary time, a World War II.
    0:35:35 World War II.
    0:35:39 Speaking of mass graves, I was at Babinyar yesterday.
    0:35:41 A large part of my family died there.
    0:35:45 In moments like this, such a place serves as a stark reminder
    0:35:49 of the profound historical gravity of the Second World War.
    0:35:54 I remember, I remember this song from my youth.
    0:35:59 On June 22nd at four o’clock, Kiev was bombed and the war began.
    0:36:06 I always wondered how it would feel to live in a moment when, when everything changed.
    0:36:11 The path of humanity completely shifts in a single moment, just like that.
    0:36:13 What do you think?
    0:36:17 What do you think about that moment in 1941?
    0:36:22 Now, after the 2022 invasion, how do you perceive the Second World War
    0:36:24 after you have witnessed all of it?
    0:36:32 Well, firstly, the war actually started earlier.
    0:36:35 It started here in Ukraine.
    0:36:41 Kiev was bombed, as you quoted, but the war had already begun before that.
    0:36:50 And I think I perceived it as a start of the full-scale invasion.
    0:36:56 Well, I think it’s hard.
    0:37:01 It’s hard to understand why nobody wants to listen,
    0:37:06 look at and analyze history.
    0:37:15 War, the rise of fascism and Nazism, the emergence of Hitler,
    0:37:18 Goebbels and their entire team.
    0:37:22 At the time, this wasn’t just about one party or even one country.
    0:37:29 It was essentially a wave, a wave of hatred,
    0:37:38 a wave of one race, one race above the rest.
    0:37:48 They were, in fact, constructing and ultimately implemented a theory around this idea later seizing Europe.
    0:37:55 They created a theory of one nation, one race, one world, their world.
    0:38:04 Of course, this idea is absolutely senseless, but it has become radicalized over the years and even gained support.
    0:38:16 A vision of one world, and in principle the so-called Russian world, the ideology Putin promotes and imposes, it wasn’t originally like that.
    0:38:22 He was a different person back then, or maybe he was always like this, but his rhetoric was different.
    0:38:28 At the beginning, remember, he talked about the EU and even about Russia’s future being tied to NATO.
    0:38:34 There were even talks of joining the European Union. NATO, he spoke about shared values with the West.
    0:38:36 That’s how it all sounded back then.
    0:38:46 And we must also look at Hitler, who was seriously, before the radical idea of taking over the whole world,
    0:38:54 he actually made certain steps and everyone believed he was helping the economy.
    0:39:02 And to be fair, he did take some steps in that direction, but he was a terrifying person.
    0:39:09 None of those actions justify him, nor do they excuse his actions.
    0:39:14 And that’s why we cannot look at the Second World War as if it started in 1939.
    0:39:23 It didn’t begin in 1941 either. We need to draw conclusions. When did it start? With the weaknesses of the world.
    0:39:30 The division of European states, the Molotov-Ribbentrop pact, all of this happened before 1941.
    0:39:37 People who were more informed, those who dug deeper, whether they were politicians or not,
    0:39:49 whether they were from different walks of life, including business, which was different back then, were speaking about all of this.
    0:39:59 Hitler won’t stop. There’ll be a world war. Hitler will destroy nations. Nations.
    0:40:05 And that’s what happened. Someone looked the other way. What I told you about. Europe was sinking then.
    0:40:12 I gave you an example of it. But the whole world looked the other way and didn’t pay attention and said,
    0:40:17 “No, we can negotiate with him. I’m telling you he is okay. We can negotiate with him.
    0:40:27 He’s just more right-leaning or it does not matter what they said. He’s just pro, very pro nationalist.”
    0:40:37 This is all nonsense and this is not the first time. And Hitler isn’t the first such case in history.
    0:40:49 We’re dealing with a person who is allowed to stick to this desire to destroy.
    0:40:56 He was consumed by it and enjoying it. And what happened to Hitler? Now, what about Putin?
    0:41:01 This invasion was also at four in the morning, around four in the morning.
    0:41:09 There were missile strikes on Ukraine. This is the same. I believe that intentions are also the same, but more on that later.
    0:41:15 By the way, you tell me if this is too long, you can stop me.
    0:41:17 Never long enough. It’s beautiful.
    0:41:29 Okay, so it happened here around four in the morning. Before this, I must honestly say,
    0:41:35 everyone said something, predicted something, etc., but I asked only for one thing.
    0:41:45 Primarily from the United States, if you are sure, if you have the evidence, if you talk to him and he tells you that there’ll be an invasion, if all this scares you,
    0:41:58 I only asked for two things. Send us weapons or better yet, strengthen us with preventive measures so there would be no war.
    0:42:04 It wasn’t the weapons that I was asking for. I asked for sanctions. Intimidate him.
    0:42:11 Please don’t say that. If he comes, if he crosses borders, if he kills, we’re imposing sanctions.
    0:42:16 Well, this is complete bullshit. Sorry, but really.
    0:42:17 Oh, I understand this.
    0:42:18 Oh, wonderful. Yes.
    0:42:20 I understood one word.
    0:42:23 Yeah.
    0:42:25 So they did not help.
    0:42:28 I believe that no, and this is a fact.
    0:42:41 We didn’t receive help. If we assume that words are help, well, then yes, we received a lot of it because there were plenty of words.
    0:42:44 Even more than plenty, yes?
    0:42:49 At four in the morning, there were strikes.
    0:42:53 Morally, is it possible to prepare for war?
    0:42:58 No, it doesn’t happen like you read in books, see in movies and so on.
    0:43:06 What happens to you? I was just looking at my wife and children. My children were asleep, but my wife was awake.
    0:43:13 There were strikes, missile strikes. We heard them.
    0:43:24 To you as a living person, how can this be? You just can’t fully believe this.
    0:43:39 You just don’t understand why now, given everything that happened in World War II, when millions of people died, none of it mattered.
    0:43:45 Still at four, at four in the morning, around four, three, forty, three, forty-five, remember?
    0:43:48 Around this time, yes, there were missile strikes.
    0:44:04 And later, by the way, a few days after, after the first days of the war, I spoke with Lukashenko on the phone.
    0:44:15 And he apologized. And he said that it was not me. Missiles were launched from my territory, and Putin was the one launching them.
    0:44:21 These are his words. I have witnesses. And I apologize, he said.
    0:44:28 But believe me, that’s what he told me. Volodya, this is not me. I’m not in charge, he told me.
    0:44:33 I’m not in charge. These are just missiles. This is Putin. I told him, don’t do that.
    0:44:41 This was done without me. That’s it. He just, on the phone, I remember this conversation.
    0:44:48 I told him that I believed. I told him, you are a murderer too, I’m just saying.
    0:44:55 And he told me, you must understand, you can’t fight the Russians. I told him that we never fought them.
    0:45:02 I said, it’s war. The missiles came from your land, from Belarus. How did you allow this?
    0:45:11 Then he replied, all right, retaliate then. I still remember him telling me, hit the refinery.
    0:45:16 You know how much I care about it. Moser oil refinery, is that it? Can’t recall.
    0:45:22 Moser oil refinery, I told him, what are you on about? What retaliation?
    0:45:28 Forgive me, Volodya. Yes. This was at five in the morning?
    0:45:33 No, no, no. This was during the first or maybe the second day, second or third day of the war.
    0:45:34 Ah, I see.
    0:45:43 Well, after that, I went back home. I was home with my children, with my wife.
    0:45:50 I just went to my wife very quickly that night at four o’clock. Yes, and just told her, get the children, get ready.
    0:45:56 You’ll probably need to go to my office very soon. And I left. That’s it.
    0:46:01 At this moment, you’re no longer a father.
    0:46:10 What happened to me, unfortunately, because I believe that this is, and not only do I believe, I understand,
    0:46:18 especially now that all of this is the most important thing, because your country is your family.
    0:46:25 The strength is in your family, and this is the most important thing, and I’m the president.
    0:46:31 And therefore, I had to stop being a father in my own family, and my wife had to do everything.
    0:46:41 She had to do everything regarding children, regarding safety, and I had to deal with the state because I’m the president.
    0:46:53 And this is my duty. And I, by the way, am taking this very seriously. I went to the office, and here we are now. You’re very welcome.
    0:47:03 Well, at that moment, on February 24th, 2022, everything changed again, just like in June 1941. Everything changed.
    0:47:12 And history took a turn, the history of humanity took a turn. And for you too, you were the president.
    0:47:21 You were talking about fighting corruption, about the country’s freedom, about interesting and innovative reforms.
    0:47:26 But that morning, on February 22nd, everything changed.
    0:47:31 Could you tell me about that morning, the details of your actions?
    0:47:36 When you had to quickly make difficult decisions.
    0:47:40 What was the process for you? How did you make these decisions?
    0:47:53 Did you discuss them with people you trust to understand how to respond to this invasion in every technical, political, and military aspect?
    0:47:56 What was the process for you? How did you make the decision?
    0:48:07 According to our legislation, in principle, I’m the supreme commander of the armed forces of Ukraine, so I had to give corresponding orders.
    0:48:14 Yes, I have a military office, and then later there was a military headquarters where all key people gathered.
    0:48:18 This is not only about the military, it’s about energy, etc., all key things.
    0:48:30 But at that moment, I made the decisions quickly and without a doubt, and I cannot say that I am just that kind of person.
    0:48:42 I’m just a living person who believed that if help is needed right now to help evacuate people, help with children, several cities were blocked.
    0:48:49 I was only thinking about how to deliver food there within a day.
    0:49:01 We did a lot of things, although we understood that they, in fact, occupied part of our state.
    0:49:12 And we distributed weapons to people. That’s how it was.
    0:49:21 Trucks came and simply distributed weapons to people so that they could defend the capital to ordinary people, just on the street.
    0:49:34 To ordinary people who understood that if the Russians entered the city, then we would have the same thing that’s happening in other cities per the information we received.
    0:49:43 Thanks to digitalization, by the way, we had very good digitalization before this, and we preserved a lot.
    0:49:50 And even when they were surrounding certain cities, a lot of things still worked.
    0:50:03 The banking system, the internet, we had television, and thanks to this, I made several decisions to ensure that people are united and have all the information.
    0:50:09 Russia is very good at spreading large-scale disinformation.
    0:50:24 Fortunately, I have two decades of experience managing a production studio, TV channels, and large media resources.
    0:50:30 I understood that we needed to build an information network very quickly.
    0:50:37 Thanks to this, I began to address the people constantly. This happened several times, three to five times a day.
    0:50:50 In fact, I became an information source for people who were in cities that were cut off from other information.
    0:51:01 And it was very important for me to keep all things digital, to keep the internet, to stay in touch with everyone, with all the people.
    0:51:13 Initially, that’s the contact we had, and then we also built a media platform where we had all the news agencies of Ukraine.
    0:51:23 And this network was called Marathon, and it was also very important for the people to trust us, and people had to receive information.
    0:51:32 Why? There were waves. There were waves of Russian on the first day who said he ran away.
    0:51:37 I had to go out into the street. I left the office and went outside.
    0:51:48 I had to do this because I was showing that this was no green screen, to show that it was the street, not some digital manipulation.
    0:51:54 I mean, I did these things, then I touched various objects. Now, people might think that these are small things,
    0:52:01 but I was actually showing that I was in a real place. All of this had an impact.
    0:52:07 I was absolutely sure of my actions. And these contacts, several contacts.
    0:52:14 And then I spoke to the Russians. I addressed Russians. I really did. And then only after that, I gathered.
    0:52:19 It was the first day when I invited all of the journalists here, wasn’t it?
    0:52:27 That was on the first day, I think. Well, not here, here, to the press center in this building.
    0:52:34 I talked to journalists. I asked them not to leave because we needed weapons.
    0:52:44 At that moment, they were handing out rifles to people. And for me, journalists and media platforms were essential voices.
    0:52:51 There were various journalists from different countries here, and they were essentially stuck.
    0:52:59 And I asked them for contacts, those who had access to Russians, Belarusians,
    0:53:04 Kazakhs who understood everything, the same information. And I spoke to them.
    0:53:11 And I spoke to them and spoke in Russian. I told them, you must stop Putin.
    0:53:15 This is terrible. This is horror. This is war. You must stop him.
    0:53:20 And if you stand up now, if you speak out, and if you go out into the streets, this was very important.
    0:53:24 I spoke to them in Russian to show them that there was no problem.
    0:53:30 And that all of these pretexts were made up.
    0:53:37 This is why it’s so painful to talk about the Russian language too, because look, if a person does not want to listen,
    0:53:41 they will not listen no matter what language we speak.
    0:53:49 I disagree with you here. I think and hope that many people in Russia will hear us today.
    0:53:55 They blocked YouTube recently. Are you aware of this in their country?
    0:54:00 I know. And I simply guarantee that this conversation will travel fast on the Internet.
    0:54:03 Everyone will hear you. They will hear you.
    0:54:08 Including the President of Russia will hear you. This is why I have hope.
    0:54:15 He is actually deaf, even if he speaks to you. He is deaf by his very nature.
    0:54:21 Do you understand the difference? You know, for instance, when you talk to Musk,
    0:54:31 you’re talking to an innovator, a scientist about rockets.
    0:54:35 You talk about how to save on costs and how they land.
    0:54:41 And on the other hand, Putin doesn’t launch rockets to save money but to kill people.
    0:54:47 Do you think you can talk to Putin about technology?
    0:54:54 Your guys were interviewing him and he told them about tribal history.
    0:54:59 Do you understand? Imagine a Russian man in his country listening to him.
    0:55:04 You know what Musk is about? Technology, Mars, artificial intelligence.
    0:55:09 And this guy, Putin, is standing there bare-assed, pontificating about tribes.
    0:55:14 You’ve got to understand. You think that when you do interviews,
    0:55:21 like Mr. Tucker, who did an interview there, that you’re about to make them friends.
    0:55:26 How could you… What does this have to do with friends?
    0:55:31 He’s different. He is simply different.
    0:55:33 But it’s still necessary.
    0:55:35 A mammoth stands before you.
    0:55:40 By the way, I must say that when you said bare-assed, it was not translated.
    0:55:42 Could the interpreter please translate?
    0:55:44 This is so that you can understand.
    0:55:46 Now he explained everything to me. I understand.
    0:55:48 That’s great.
    0:55:50 But we still need to talk.
    0:55:53 One should always speak with someone who listens.
    0:55:58 And you must speak when you know that this will benefit you,
    0:56:04 bring peace and calm to the world, not the other way around.
    0:56:08 I love President Trump’s message when he speaks.
    0:56:13 I think that we share a position on peace through strength.
    0:56:15 That is very important.
    0:56:20 It means that if you are strong, you can speak.
    0:56:22 And we need to be strong.
    0:56:27 And Ukraine has to be strong, strong enough.
    0:56:29 Otherwise, what for?
    0:56:41 So you know who, like Voldemort, who must not be named.
    0:56:44 Yes, he’s like Voldemort.
    0:56:51 He thrives, subsists, and lives on being subjectivized.
    0:56:56 Instead of isolation, he is offered to step out into the light.
    0:57:03 He’s darkness, personified, and you offer him, as it were, to be subjectivized.
    0:57:04 Why?
    0:57:07 There’s only one reason.
    0:57:09 Fear.
    0:57:12 And you say, we need to talk.
    0:57:18 Listen, we need to be in a strong position and not talk, but end the war.
    0:57:21 Yes, yes, it is possible through dialogue.
    0:57:23 We’re not opposed to it.
    0:57:30 But you just need to be in a strong position to make the other person want it.
    0:57:33 Do you think he wants to end the war?
    0:57:35 That’s what you suggested.
    0:57:36 I think this is naive.
    0:57:37 I’m sorry.
    0:57:42 With all due respect, it’s naive to think he wants to finish the war.
    0:57:45 Let’s tell you what.
    0:57:48 The circumstances, sorry for interrupting.
    0:57:49 There’s something we need.
    0:57:57 I think that President Trump not only has will, he has all these possibilities, and it’s not just talk.
    0:57:59 I really count on him.
    0:58:02 And I think that our people really count on him.
    0:58:14 So he has enough power to pressure him, to pressure Putin not into wanting to stop it.
    0:58:18 No, he will not want to, to pressure him to actually stop it.
    0:58:19 That is the difference.
    0:58:21 Don’t rely on his will.
    0:58:23 Putin’s will to stop.
    0:58:25 You won’t see it.
    0:58:26 That’s what I think.
    0:58:27 Sorry.
    0:58:28 No, sorry.
    0:58:29 I interrupted you first.
    0:58:39 But what I would want, I do have what some might call a naive dream of you sitting down with Putin and Trump
    0:58:47 and negotiating a deal about a ceasefire and together finding a path to long-term peace.
    0:58:54 And I think this requires strength, requires negotiations.
    0:58:58 There are a lot of carrots and sticks here that can be used to make a real deal.
    0:59:03 And Trump is very keen on making a deal and ready to negotiate.
    0:59:05 Can I ask you a question?
    0:59:06 Yeah.
    0:59:13 I just really want you and I to be on the same page.
    0:59:21 It’s very important to be in the same information space, extremely important.
    0:59:24 Let’s talk a bit about the ceasefire.
    0:59:28 Let me describe the situation to you.
    0:59:39 In December 2019, in Normandy, in Paris, at the Elysees Palace, Macron, Merkel, Putin and I agreed.
    0:59:42 On the ceasefire, the U.S. wasn’t there.
    0:59:46 And this, by the way, was a weak point of the meeting.
    0:59:50 If you’d like, we can later discuss why they weren’t there.
    0:59:53 It’s a security guarantee thing in general.
    0:59:57 It’s Germany’s position, etc.
    1:00:02 We agreed on an exchange of hostages and all-for-all exchange.
    1:00:04 We made a deal to exchange everyone for everyone.
    1:00:05 I think you know that.
    1:00:09 And there was also a meeting that lasted many hours.
    1:00:13 A meeting where we made a deal with him.
    1:00:14 Everyone was tired.
    1:00:17 It was just the two of us in the end.
    1:00:19 And I proposed a ceasefire.
    1:00:23 By the way, no one in Ukraine believed.
    1:00:26 Few believed in the ceasefire.
    1:00:28 And he wanted troop withdrawal.
    1:00:36 I calculated that if there were a withdrawal of troops from the line of contact the way Russians proposed, it would take 20 years.
    1:00:39 I proved it to him just in terms of time.
    1:00:41 Square kilometers.
    1:00:45 Namely the length of the line of contact or delimitation line.
    1:00:51 And we agreed on what I told him that it will not work out.
    1:00:57 But I had many points because I was deeply involved in the issue.
    1:00:59 I was involved very deeply.
    1:01:01 It’s my thing in general.
    1:01:09 If I start doing something, I can’t stand there like that guy I spoke about with my ass out, you know?
    1:01:12 I must be dressed.
    1:01:14 I must be prepared.
    1:01:17 I must be prepared better.
    1:01:20 Better than anyone in front of me.
    1:01:22 You do sports, right?
    1:01:25 I practiced for many years.
    1:01:30 And we know what fights are like, what boxing is, what type boxing is.
    1:01:33 This is what I did and I loved it very much.
    1:01:39 When you step into the ring, you understand everything pretty much.
    1:01:46 And so I stepped into it and I was definitely well prepared.
    1:01:48 But he wasn’t.
    1:01:51 He was not deeply involved in the process.
    1:01:53 What border?
    1:01:54 Where is it?
    1:01:57 How long will it take to disengage troops?
    1:01:58 And why wasn’t he involved?
    1:02:00 You want to know?
    1:02:02 Because he wasn’t going to do any of this.
    1:02:04 This is what confused me.
    1:02:13 If you are not deeply involved in the issue, well, then it’s as if you don’t really need the result.
    1:02:15 That’s what I think.
    1:02:16 So what happened?
    1:02:27 We agreed that there will be gas continuation, gas transit in 2019.
    1:02:28 We agreed with him.
    1:02:30 This was the security for Europe.
    1:02:32 Merkel asked me for it.
    1:02:36 And this was extremely important for Germany.
    1:02:38 We agreed with him.
    1:02:42 Secondly, we agreed that for him it was just money.
    1:02:46 So secondly, we agreed on an exchange.
    1:02:49 For me, this was the most important thing for them.
    1:02:52 Gas was for me, was the people.
    1:03:04 And this is a fact because I wanted to have a humanitarian advantage so that there would be further meetings that would lead to sustained peace.
    1:03:08 And third, ceasefire.
    1:03:12 Ceasefire you spoke about.
    1:03:14 What happened?
    1:03:18 The gas contract was signed because he needed it.
    1:03:21 And by the way, he knew everything about it.
    1:03:30 As for exchange, we took the first step and exchanged the people.
    1:03:37 Regarding the ceasefire, well, they started killing us in about a month.
    1:03:47 So I called him and I told him we agreed on a ceasefire.
    1:03:48 Didn’t we?
    1:03:50 Well, it wasn’t a piece of toilet paper, was it?
    1:03:52 This is serious business.
    1:03:53 Or so it seemed.
    1:03:55 It really was serious.
    1:04:01 Merkel, Macron, you and I, we all agreed on this together.
    1:04:05 A ceasefire is important, isn’t it?
    1:04:11 Not for New Year’s because everyone was celebrating New Year’s and now they’re offering us a Christmas ceasefire.
    1:04:12 It’s all the same.
    1:04:15 A ceasefire for two, three days just to get some praise.
    1:04:16 But this isn’t a performance.
    1:04:18 This isn’t some kind of theater.
    1:04:21 No, this, this is about people’s lives.
    1:04:22 And that’s what happened.
    1:04:25 After that, I called him a few more times.
    1:04:28 I think I only had two, three calls with him in total.
    1:04:30 I asked him for a ceasefire.
    1:04:32 He told me it couldn’t be.
    1:04:36 We will, we will figure it out now.
    1:04:44 People from, people from the occupied territory, Russians and separatists, they were all there together.
    1:04:47 They continued to shoot and kill our people.
    1:04:55 Yes, the front lines were quiet, but they killed people.
    1:05:00 They were killing people and I kept calling him.
    1:05:05 I called again and again, but there was nothing until after a few months, the Russians stopped answering the phone.
    1:05:08 We did not have any contact since.
    1:05:12 I wanted another meeting like we had in Normandy.
    1:05:14 I wanted the next meeting.
    1:05:19 I wanted to find a solution, but the Russians refused.
    1:05:26 We tried to make it happen through various European countries and not only European, but the Russians refused.
    1:05:31 They passed along some kind of bullshit, made excuses, they didn’t want it.
    1:05:35 Meanwhile, they were sending their snipers.
    1:05:41 We had evidence, living proof, even video evidence, because some of them were captured back then.
    1:05:43 Those were the snipers in training.
    1:05:44 They were training them.
    1:05:50 They were training them and later those snipers operated in Syria and Africa.
    1:05:54 These snipers were training in our country in the East.
    1:05:57 Ukrainians were living targets.
    1:06:03 They were shooting from the other side, killing people, women, people, children.
    1:06:04 They were shooting.
    1:06:05 It was a hunt.
    1:06:12 By the way, it was in the Russian speaking region in the East where, according to him, everyone is speaking Russian.
    1:06:18 That’s where they were shooting, where the situation currently is the most tense.
    1:06:19 They killed people.
    1:06:25 We sent this information, sent pictures, we sent them to the UN, sent them everywhere.
    1:06:28 We worked very hard, very persistently.
    1:06:32 I met with everyone, but who thought of Ukraine back then?
    1:06:34 They didn’t notice it much.
    1:06:40 They didn’t pay much attention to Crimea being illegally occupied either.
    1:06:46 And to be honest, the United States of America too, everyone was somewhat silent about this issue.
    1:06:47 That’s how it was.
    1:06:52 It was like that before a full-scale war.
    1:06:58 I want to ask you a question about the ceasefire.
    1:07:09 For example, in Mariupol, in Mariupol today, there are American and Ukrainian journalists.
    1:07:15 And everyone will tell you who had contact, who has contact now with Mariupol,
    1:07:20 who fled from there in the last minutes just before the occupation,
    1:07:24 or who was able to leave to escape after the occupation.
    1:07:28 Chernoff, who won an Oscar, was among them.
    1:07:32 And the journalists that left Mariupol, they are here.
    1:07:35 By the way, we had a conversation.
    1:07:44 They will tell you that 20,000, 30,000 civilians were tortured and buried there.
    1:07:47 We do not know the number of victims.
    1:07:51 People who didn’t want to work with them, who refused to cooperate with them,
    1:07:53 people who went on strikes to protest,
    1:07:57 people who did not want to work with the Russians who occupied Mariupol.
    1:07:59 And this is one example, just with this city.
    1:08:01 And I have a question for you.
    1:08:03 What about the millions of children?
    1:08:07 And I will ask you in Russian so that you hear this without delay.
    1:08:10 What about the millions of children over there?
    1:08:14 What if we just arranged a ceasefire without understanding what would happen next?
    1:08:20 Without understanding, what will happen to Ukraine’s security guarantees?
    1:08:23 What about the millions of children in the occupied territories?
    1:08:25 What should I tell them?
    1:08:27 What am I to tell them?
    1:08:29 What is it I should tell them?
    1:08:32 What? Whatever?
    1:08:35 Hey, all of you over there, see ya.
    1:08:39 And those tens of thousands of people buried there, they were.
    1:08:41 Is that what we want?
    1:08:44 Are we ready to forgive them for this?
    1:08:47 We must at least take the first step.
    1:08:52 If this is a ceasefire, we must know that there is a security guarantee
    1:08:55 for the part of Ukraine under our control.
    1:08:59 We need it so that he will not come back.
    1:09:01 This is very important.
    1:09:04 And what do we say to the people who live in those territories?
    1:09:06 These are millions of people.
    1:09:12 Did you know that since 2014 in Donetsk, in the Crimea,
    1:09:15 this is happening in Melitopol as well?
    1:09:17 As in Berdiansk now.
    1:09:21 They are making all these kits of drafting age.
    1:09:26 Go and fight.
    1:09:28 And if they don’t go, they will be killed.
    1:09:31 This is, do you understand what’s happening?
    1:09:36 That is why a ceasefire, everything I said.
    1:09:43 What I wish for and I believe in President Trump’s power to use
    1:09:50 all of this information to come up with a way to make Ukraine strong.
    1:09:53 And be strong.
    1:09:56 Why am I saying that?
    1:10:00 I will give you an example.
    1:10:07 President Trump will be in the same situation as I was in 2019.
    1:10:09 Precisely the same situation.
    1:10:11 I want to end the war.
    1:10:13 We want a lasting peace for Ukraine.
    1:10:15 We must do this.
    1:10:20 The ceasefire, exchange people, and then diplomatically return all territories.
    1:10:24 And we will do this through diplomacy.
    1:10:27 What will happen next with President Trump?
    1:10:31 If the ceasefire happens without security guarantees,
    1:10:35 at least for the territory we control, what does he get?
    1:10:40 If he manages to make a ceasefire deal.
    1:10:45 And three months later, Putin launches a new wave of attacks.
    1:10:48 What will Trump look like?
    1:10:51 What will Ukraine look like?
    1:10:54 What will everyone look like?
    1:10:56 Putin will just do it.
    1:10:58 And why would Putin do it?
    1:11:01 Because today, he’s afraid of Trump.
    1:11:11 But once Trump manages, for example, to do a ceasefire deal without serious security guarantees for Ukraine,
    1:11:13 he will give a pass to Putin.
    1:11:15 Not that he wants to.
    1:11:17 No, he does not want that.
    1:11:19 I believe in what he says.
    1:11:22 But he will give Putin an opportunity.
    1:11:26 Because in Putin’s head, he wants me to fight with Trump.
    1:11:30 Putin’s plan is to end the occupation of our territory.
    1:11:34 This is in his sick head.
    1:11:37 And I’m absolutely sure of this.
    1:11:44 That is why I told you, don’t wait for Putin to want to stop the war.
    1:11:50 Pressure him so that he is forced to stop the war.
    1:11:52 That’s important.
    1:11:56 It’s important to say that what you said about the children is a tragedy.
    1:11:57 War is hell.
    1:12:01 But let me say again, we must find a path to peace.
    1:12:02 There is one.
    1:12:03 What is it?
    1:12:04 There is one.
    1:12:07 Before ceasefire, strong Ukraine.
    1:12:08 Strong Ukraine’s position?
    1:12:10 Yes, we can speak about it with Trump.
    1:12:16 For me, we can speak about security guarantees.
    1:12:20 But a quick step, a quick step is NATO.
    1:12:23 A partial membership NATO.
    1:12:25 Yes, I understand.
    1:12:28 I understand Trump’s feelings about NATO.
    1:12:29 I heard him.
    1:12:32 He’s thinking through all of it, of course.
    1:12:38 But anyway, yes, NATO is a strong security guarantee for all the people for us.
    1:12:40 A part of security guarantees.
    1:12:45 The second part is the arms aid package, which we will not use.
    1:12:50 If a ceasefire works, nobody will use the weapons.
    1:12:51 For what?
    1:12:52 But it has to stay.
    1:12:57 But with all due respect to the United States and to the administration.
    1:12:58 Not like before.
    1:13:01 I don’t want the same situation like we had with Biden.
    1:13:04 I ask for sanctions now, please.
    1:13:06 And weapons now.
    1:13:08 And then we will see.
    1:13:14 If they start it again, of course, we’ll be happy if you’ll give us more and you will stand with us shoulder to shoulder.
    1:13:15 Of course, that is right.
    1:13:21 But it’s different when you have weapons.
    1:13:25 Putin wouldn’t have been able to occupy so much territory.
    1:13:28 It was very difficult for us to push him out.
    1:13:31 But we didn’t have weapons before and that is the same situation.
    1:13:33 It can be the same situation.
    1:13:35 I’m just sharing this with you.
    1:13:40 Like I said at the very beginning, I want to be very honest with you and with your audience.
    1:13:42 Yes, it’s true.
    1:13:47 If we do not have security guarantees, Putin will come again.
    1:13:52 To make it clear, let’s describe the idea that you are speaking about.
    1:13:54 I would like to offer you other ideas too.
    1:14:05 But right now, your idea is that NATO accepts Ukraine minus the five regions of Luhansk, Donetsk, Zaporizhia, Kursyn and Crimea.
    1:14:14 Just so you understand the situation, the invitation to NATO is legislatively issued to Ukraine.
    1:14:19 So to us, all those territories are still Ukraine.
    1:14:25 But NATO so far can only act in the part that is under Ukrainian control.
    1:14:26 This can be negotiated.
    1:14:28 I am sure about that.
    1:14:32 Yes, this would not be a great success for us.
    1:14:38 But if we see a diplomatic way to end the war, this is one of the ways.
    1:14:39 So it is.
    1:14:42 Sorry, that is a start.
    1:14:47 Secondly, weapons, arms aid package.
    1:14:50 I’m not ready to discuss this publicly right now.
    1:14:55 It’s all written down and President Trump might have seen it or not, but we’ve got no secrets from him.
    1:14:56 Yes.
    1:15:06 But mostly it depends on the willingness of the United States because some of it will come from the EU, some from the United States, of course, together.
    1:15:08 So not just from the United States.
    1:15:11 No, no, no, we need unity with this package.
    1:15:14 So the package and sanctions.
    1:15:16 Yes, sanctions.
    1:15:23 But I think it’s in the interest of all the smart people to not have Russian energy on the market in general.
    1:15:25 So he has to stop it.
    1:15:27 That’s all.
    1:15:28 It’s fine.
    1:15:30 American oil, American gas is okay.
    1:15:31 Why not?
    1:15:32 And it’s cheaper.
    1:15:34 So it will be cheaper for the whole world.
    1:15:36 The money will go to the United States.
    1:15:41 And I think he will be happy and the president and your people will be happy.
    1:15:42 But it’s your decision.
    1:15:43 I’m just sharing.
    1:15:44 Yes, and cheap oil.
    1:15:48 So Putin won’t have so much money for for the war.
    1:15:50 And that that’s it.
    1:15:52 But this is difficult because it’s a lot.
    1:15:57 You’re saying to continue the sanctions on Russia to accept Ukraine into NATO.
    1:16:00 I need to ask you some difficult questions about this.
    1:16:01 Yes, go on.
    1:16:03 I trust and respect your words today.
    1:16:06 Many people respect and love you in America.
    1:16:08 Trump respects you.
    1:16:10 Loves me.
    1:16:12 Oh, come on now.
    1:16:15 Remember last time you corrected me when I said that you love Javier Millet?
    1:16:16 You said no, no, no.
    1:16:17 I respect him.
    1:16:20 So let’s not talk about love today.
    1:16:26 But could we talk seriously about about guaranteeing Russia’s security?
    1:16:27 Okay.
    1:16:31 Can I interview you a little question is what land is the war happening on?
    1:16:36 And where did it start on our soil, on our territory?
    1:16:39 International law was violated.
    1:16:42 The sovereignty of our country was violated.
    1:16:44 Civilians were killed.
    1:16:47 Tens of thousands of our people were taken hostage.
    1:16:51 And everyone will tell you this happened.
    1:16:57 This is what happened when I speak with the global south, which is trying to balance the two sides because of the history,
    1:17:05 because of their roots and because of their shared economic interests with Russia in the past.
    1:17:12 And now, of course, when you talk to them, they are speaking a little bit like you.
    1:17:19 I mean, they’re balancing a little bit, you know, yeah, a little bit in between, but we will work on it.
    1:17:20 Yeah.
    1:17:21 It’s our first meeting.
    1:17:26 During the second one, you will be more on our side, but it’s just just just very convincing.
    1:17:27 Very charismatic.
    1:17:28 Yeah, thank you.
    1:17:33 But when I speak with them, when I speak, it’s very important.
    1:17:45 Even with their balancing attitude towards the war, they all recognize that this is a war.
    1:17:49 This is not just internal conflict.
    1:18:04 This is a full scare war that began, that Putin began and all of them, all of them, if you talk to them, they say,
    1:18:17 but then they all recognize that that it’s his own big mistake, Putin’s mistake and that he’s not right.
    1:18:21 That’s why I said, no, no, he’s not right.
    1:18:22 And you have to begin from this.
    1:18:28 If you begin at the middle between Ukraine and Russia, of course, we can speak like this.
    1:18:31 You are in the middle and say, OK, what’s going on?
    1:18:32 There is a fight.
    1:18:33 Where is the fight?
    1:18:42 It’s not the fight like in Europe when Napoleon is fighting against somebody in the middle of Europe.
    1:18:47 No, this is not in the middle of somewhere of the planet, not the planet.
    1:18:49 It’s concretely on our land.
    1:18:57 So one country with one army, one person came to another.
    1:18:58 That’s it.
    1:19:00 It’s very clear.
    1:19:03 Again, I would like us to find a path to peace.
    1:19:07 So let us nevertheless try to start in the middle.
    1:19:11 What other ideas do you think might you are a very intelligent person?
    1:19:15 Your Russian isn’t that good either.
    1:19:18 And I told you that this is only our first meeting.
    1:19:20 My English is not very good either.
    1:19:22 Your English is very good.
    1:19:23 Thank you.
    1:19:25 To be honest, I’m terrible at speaking in every language.
    1:19:28 Well, there are other ideas.
    1:19:29 For instance, sorry to say this.
    1:19:34 It sounds crazy, but what if both Ukraine and Russia are accepted into NATO?
    1:19:39 Putin himself spoke about Russia, maybe about NATO.
    1:19:43 What you just said is very correct.
    1:19:45 What are the guarantees for Russia?
    1:19:48 It’s not like I’m even interested what happens to them.
    1:19:53 To be honest, I don’t care what will happen to them in the future after the war ends.
    1:20:01 But these are our borders and we must understand what is going on there.
    1:20:05 Well, the NATO guarantees for Ukraine.
    1:20:09 Actually, this is also a security guarantee for the Russians.
    1:20:13 Frankly, I talked about this many times before.
    1:20:21 Sorry, I’m speaking figuratively, but as an example, if you were a father who lost his children,
    1:20:29 a grown man, a grown man, a man, an adult, and the war has ended.
    1:20:35 And he never got justice for real.
    1:20:38 For example, somebody decides to free support.
    1:20:39 We won’t give you anything.
    1:20:41 You can’t fight, you can’t continue.
    1:20:49 So we stop when we stop without any guarantees, without any support, without financing, without okay.
    1:20:55 And nobody is held accountable, but the man lost his children.
    1:20:59 He will not get anything.
    1:21:01 None of the killers will be in prison.
    1:21:07 All the sanctions will be removed and he lost his children.
    1:21:10 And we have thousands of such people.
    1:21:15 Why do you think they will not go to Russia?
    1:21:21 We’ll find a way and we’ll not kill the Russian soldiers there or somebody there.
    1:21:22 Why wouldn’t they?
    1:21:23 It’s human nature.
    1:21:24 It’s not about us.
    1:21:25 It’s everyone.
    1:21:32 Read American writers always after any war.
    1:21:37 If there is no justice for people, there must be punishment for the crime.
    1:21:39 It is only justice.
    1:21:41 How come my child was taken away?
    1:21:43 The war took him.
    1:21:45 This is very scary.
    1:21:54 And even whether it was my son who was fulfilling his constitutional duty or simply a missile that struck a civilian child.
    1:22:04 And if there is no justice and the killers are not punished, why wouldn’t these people come back with hate?
    1:22:06 They will definitely come back.
    1:22:13 So when we talk about NATO, NATO is not only stopping Russia.
    1:22:20 Do not forget NATO is stopping us too.
    1:22:23 Because there will not be justice for everyone.
    1:22:29 We know that NATO does not have the right to solve certain issues with war.
    1:22:32 NATO is a security alliance.
    1:22:35 It is protection, not brainwashing.
    1:22:39 What Putin claims that this is offensive is not true.
    1:22:45 NATO is a defensive alliance, a security alliance, and it is security for Russia.
    1:22:54 But unfortunately, there are many options for peace that don’t involve NATO inviting Ukraine as a member.
    1:22:58 Can you imagine security guarantees without NATO membership?
    1:23:07 For example, if America simply leaves NATO, I believe there is a high likelihood that Donald Trump would do such a thing.
    1:23:10 I think it’s very bad for NATO.
    1:23:14 That’s the end. That’s the death of NATO.
    1:23:18 It is a pity because I think that it’s a very good alliance.
    1:23:22 Maybe not everything is good there from the bureaucracy or money, etc.
    1:23:29 But totally, countries who are in NATO, they don’t fight.
    1:23:35 There is no war on the land of any of these NATO countries.
    1:23:36 I think that is the answer.
    1:23:40 It works or not. It works politically or militarily.
    1:23:42 I don’t know, but it works.
    1:23:48 So without Trump, without the United States of America, there will not be NATO.
    1:23:50 That is the first.
    1:23:54 So, and you say, can we imagine that?
    1:23:55 That what?
    1:23:57 That there could be security guarantee without.
    1:24:02 No, we don’t need guarantees without the United States.
    1:24:07 That’s it, because the United States is a very strong, powerful country.
    1:24:13 The United States puts the point, of course, Putin said that it’s just the Soviet Union,
    1:24:18 where, by the way, Ukraine was the second strong republic militarily.
    1:24:23 Yes, by the way, but he, of course, always forgets about it.
    1:24:28 But during the World War II, without help of the United States,
    1:24:34 support of your troops, support of your industry, industrially, militarily,
    1:24:42 without your money, without your people, Hitler could win.
    1:24:44 So the United States helped a lot.
    1:24:50 Of course, Europe, USSR, and of course everybody fought.
    1:24:52 Everybody did a lot.
    1:24:55 But without the United States, it couldn’t be such.
    1:25:03 I don’t use the word success, because I think that there is no war which ends successfully.
    1:25:10 Because this is a war, seven figure losses, heavy losses in World War II, millions of people.
    1:25:16 And that’s why without the United States, security guarantees are not possible.
    1:25:22 I mean these security guarantees which can prevent Russian aggression.
    1:25:24 Of course, we have security guarantees.
    1:25:29 Bilaterally, with some countries, financing, support of our internal military,
    1:25:34 and defending, and humanitarian issues, and demining which is very important,
    1:25:38 and helping our children in the school networks.
    1:25:40 By the way, this is a very sensitive point.
    1:25:43 How many? How many bomb shelters?
    1:25:47 How many bomb shelters we built with the partners for the children?
    1:25:52 And it’s a pity that they are underground, but can you imagine their eyes?
    1:25:56 When they came after COVID, you understand what does it mean COVID?
    1:26:01 But they had COVID in the war, and together they didn’t see each other for so many years.
    1:26:09 And when they saw each other, even underground, they were very happy and smiling.
    1:26:17 So we have such security guarantees, but it’s not enough to prevent.
    1:26:21 Yes, preventive measures also work to prevent the aggression of Putin.
    1:26:27 Your English is better than my Russian. This is wonderful.
    1:26:29 I’m not sure.
    1:26:30 I’m just giving you compliments.
    1:26:31 Thank you. No, no, thank you.
    1:26:33 I’m supposed to do that kind of thing to a president.
    1:26:35 Thank you so much.
    1:26:39 Okay, once again, without NATO guarantees,
    1:26:46 I have a dream that, let’s say, on January 25, or sometime at the end of January this year,
    1:26:51 you will sit down with Donald Trump, with Vladimir Putin,
    1:26:56 and together negotiate a ceasefire with strict security guarantees.
    1:27:01 And an agreement will be signed.
    1:27:03 What will this look like without NATO?
    1:27:05 I will make it clear.
    1:27:11 And so, first of all, I think January 25 or some other day.
    1:27:14 Well, you just call it January 25.
    1:27:18 And I don’t mind. It’s my birthday.
    1:27:22 And we sit down.
    1:27:26 First of all with Trump.
    1:27:33 We agree with him on how we can stop the war, stop Putin.
    1:27:38 It is important for us to sit down with him.
    1:27:45 Secondly, it is very important for us that Europe, which is very important for us,
    1:27:51 because we are part of Europe, and not only geographically, geopolitically,
    1:27:54 but also in the European Union where we will be.
    1:27:59 For us, it is very important that Europe also has a voice.
    1:28:01 It’s the second thing.
    1:28:07 It won’t be long because Europe will be looking at us and we’ll be looking at Trump.
    1:28:13 And by the way, I now see that when I talk about something with Donald Trump,
    1:28:16 whether we meet in person or we just have a call,
    1:28:20 all the European leaders always ask, “How was it?”
    1:28:23 This shows the influence of Donald Trump.
    1:28:26 And this has never happened before.
    1:28:31 With an American president, I tell you from my experience,
    1:28:37 this also gives you confidence that he can stop this war.
    1:28:43 That is why we and Trump come first and Europe will support Ukraine’s position.
    1:28:50 Because they understand that Ukraine has every right to have its voice heard in this
    1:28:52 because we are at war.
    1:28:55 Trump and I will come to an agreement.
    1:29:04 And I am sure that he can offer strong security guarantees together with Europe.
    1:29:08 And then we can talk to the Russians.
    1:29:14 That’s right. Not just three of us sitting down at once.
    1:29:17 And you still talk to me like that.
    1:29:24 Do you know how, as if Putin wants to sit down and talk, but Ukraine does not?
    1:29:25 This is not true.
    1:29:28 I think that, yes, he is, in fact, ready to talk.
    1:29:30 Did you talk to him?
    1:29:31 On the phone or what?
    1:29:33 How do you normally talk to him?
    1:29:36 I don’t know. Normally by the sea. The same as with you.
    1:29:39 He invites you to the sea with me. Just the three of us.
    1:29:41 No, no, one of us may drown.
    1:29:43 Who? Are you good at swimming?
    1:29:44 Yes, I am a good swimmer.
    1:29:47 You’re a good swimmer. Well…
    1:29:55 And I would like to add that if you have any contact with them, I just want to hear what happens then.
    1:30:03 I have never talked to Vladimir Putin, but I have a feeling that he is ready because Donald Trump is ready.
    1:30:06 I hope you are ready.
    1:30:09 And this is not just a feeling, but a dream.
    1:30:18 I have a dream here that the three of you will get together in a room and make peace.
    1:30:27 And I want to understand what it looks like, what security guarantees look like that would satisfy Ukraine, that would satisfy Russia.
    1:30:33 Ukraine needs security guarantees first and foremost. We are in danger.
    1:30:35 That is why they are called so.
    1:30:38 This is no joke to me.
    1:30:41 Let’s take a few steps back.
    1:30:45 Interesting.
    1:30:51 Why are security guarantees a strong position of Ukraine, strong weapons and so on so important?
    1:30:55 I will give you a little history lesson.
    1:31:00 Although I think you have prepared yourself and know everything perfectly.
    1:31:02 Well, you can correct me on that.
    1:31:09 Yes, Ukraine had security guarantees, the Budapest memorandum.
    1:31:15 Nuclear weapons are the security guarantees that Ukraine had. Ukraine had nuclear weapons.
    1:31:18 I do not want to characterize it as good or bad.
    1:31:21 Today, the fact that we do not have them is bad.
    1:31:23 Why? Because this is war.
    1:31:31 Today we are at war because you have unleashed the hands of a nuclear power.
    1:31:38 A nuclear power is fighting against us, against Ukraine and doing what it wants.
    1:31:45 By the way, even you are now talking about ceasefire, just a ceasefire.
    1:31:51 Maybe give flowers to Putin, maybe to say thank you so much for these years.
    1:31:53 That was a great part of my life.
    1:31:56 No, we are not just ready for this.
    1:32:01 Why? The Budapest memorandum, nuclear weapons, this is what we had.
    1:32:03 Ukraine used them for protection.
    1:32:06 This does not mean that someone attacked us.
    1:32:08 That doesn’t mean that we would have used it.
    1:32:10 We had that opportunity.
    1:32:12 These were our security guarantees.
    1:32:14 Why am I talking about this in detail?
    1:32:20 Because if you take the Budapest memorandum, by the way, I discussed this with President Trump.
    1:32:23 We have not finished this conversation yet.
    1:32:26 We will continue it regarding the Budapest memorandum.
    1:32:30 The Budapest memorandum included security guarantees for Ukraine.
    1:32:33 At first, three.
    1:32:36 The most important security guarantors for Ukraine.
    1:32:43 Three strategic friends and partners of Ukraine.
    1:32:45 This was in agreement.
    1:32:51 United States of America, Russia, Britain, France and China joined.
    1:32:57 There were five states that these are not even security guarantees.
    1:33:00 We now understand that this is not a guarantee of security.
    1:33:04 Because, on the one hand, these are security guarantees.
    1:33:09 But there was an English word, as far as I understand, assurance.
    1:33:13 It is translated as assurance.
    1:33:15 Assurance, right?
    1:33:22 In Russian, it will be an assurance.
    1:33:34 That is, give up nuclear weapons because you were under pressure of the US and Russia for Ukraine to give them up.
    1:33:37 These two powers were exerting pressure.
    1:33:42 These two states negotiated to ensure that Ukraine does not have nuclear weapons.
    1:33:45 They then agreed, these are the largest states.
    1:33:50 This is the nuclear five that does not even provide security guarantees.
    1:33:59 Now we just need to find these people and we just need to put in jail all of those who, frankly, invented all this.
    1:34:01 So, confidence.
    1:34:03 So, confidence.
    1:34:05 Assurance.
    1:34:11 Assurance that Ukraine will be territorially integral with its sovereignty.
    1:34:22 It was a piece of paper, if you are curious, by the way, that after occupying part of our Donbas and Crimea,
    1:34:30 Ukraine sent diplomats three times, I don’t think I remember, three times within a few years.
    1:34:35 We sent letters to all security guarantors, to all members of the Budapest memorandum.
    1:34:37 What did they send?
    1:34:41 What was written on the piece of paper?
    1:34:43 Consultations.
    1:34:48 Ukraine holds consultations if its territorial integrity is violated.
    1:34:52 And everyone should be in consultation.
    1:34:54 Everyone must come.
    1:34:58 Everyone must meet urgently.
    1:35:04 USA, Britain, Russia, France, China.
    1:35:06 Did anyone come?
    1:35:08 You ask?
    1:35:09 No.
    1:35:14 Did anyone reply to these letters, official letters, they are all recorded by diplomats?
    1:35:16 Did anyone conduct consultations?
    1:35:17 No.
    1:35:18 And why not?
    1:35:20 They didn’t give a fuck.
    1:35:23 This is understandable in Russian, right?
    1:35:31 That as Russia didn’t give a damn, neither did all the other security guarantors of the Budapest memorandum.
    1:35:40 None of them gave a damn about this country, these people, these security guaranties, etc.
    1:35:44 We take a break, this will be a Budapest memorandum.
    1:35:52 The last time with me, imagine how many years it was with me, in February 2022.
    1:35:55 In February 2022, the war began.
    1:36:01 A full-scale war, letters for consultations, have been sent.
    1:36:04 No one answers.
    1:36:08 Next, we are taking a break from the Budapest memorandum.
    1:36:10 The question is simple about Budapest.
    1:36:11 Can we trust this?
    1:36:12 No.
    1:36:20 Whichever country out of these five sat at the negotiating table, just a piece of paper.
    1:36:24 Believe me, we will save you.
    1:36:25 No.
    1:36:27 Another.
    1:36:29 This is a train.
    1:36:39 This is a train with waste paper, with security guarantees, which Ukraine has been riding for many years.
    1:36:50 The second car on this train is the Minsk Agreements, the Normandy Format and the Minsk Agreements, where it was written, where the signatories were.
    1:36:53 The United States of America was no longer there.
    1:36:55 I understand that Obama was here at the time.
    1:37:04 And as far as I know, I think they were simply not interested in what happened to Ukraine, and where it was in general, where it was located, well, somewhere there.
    1:37:10 Part of something, people, well, people, and let it be, let it be with these people.
    1:37:14 The United States simply did not participate.
    1:37:21 In the Minsk Agreements, there are no claims to the U.S. because they were not guarantors.
    1:37:24 Where is the claim?
    1:37:26 A step back.
    1:37:29 2008, Bucharest.
    1:37:33 Everyone has already learned from the Budapest memorandum.
    1:37:36 Bucharest, 2008.
    1:37:38 Bucharest.
    1:37:43 Mr. Bush, President of the United States.
    1:37:48 Republican says that Ukraine should be in NATO.
    1:37:51 This is the voice of Republicans.
    1:37:52 Check it out.
    1:37:55 Ukraine should be in NATO.
    1:37:58 Everybody is looking at the U.S., always.
    1:37:59 All in favor.
    1:38:00 Who is against?
    1:38:01 Merkel.
    1:38:08 So she opposes, and she forced everyone not to give Ukraine an invitation to join NATO, because that would be a step.
    1:38:11 Seriously, Republicans were in favor.
    1:38:18 The U.S. was in favor, because Republicans and Bush were not afraid of anyone.
    1:38:23 They were not afraid of anyone, and they knew that Ukraine rightly wanted to join NATO.
    1:38:24 She chooses so.
    1:38:25 And what is the question?
    1:38:27 Well, people made their choice.
    1:38:30 Well, and the Russians will not look that way.
    1:38:32 That was not the case then.
    1:38:33 Why?
    1:38:38 Because the Russians were different.
    1:38:40 Next, Minsk.
    1:38:43 We didn’t succeed.
    1:38:49 After the Minsk agreements, as I told you, hundreds of meetings were held.
    1:38:55 I have had hundreds of meetings since 2019.
    1:38:58 We could not think about a ceasefire.
    1:39:01 A ceasefire is our offer.
    1:39:04 This is not somebody’s suggestion.
    1:39:05 This is mine.
    1:39:07 I would like…
    1:39:09 I wanted to in Ukraine.
    1:39:12 Society was divided.
    1:39:13 Not everyone wanted to.
    1:39:14 Half did not want to.
    1:39:15 Half were against.
    1:39:16 Half were in favor.
    1:39:18 Some of them shouted, “Do not believe it.”
    1:39:20 Some of them shouted, “Believe it.”
    1:39:23 I am the president of Ukraine.
    1:39:29 I was given a mandate of trust by 70% of the population to take appropriate steps.
    1:39:31 And I made them.
    1:39:33 This is not a joke.
    1:39:35 We’ll just sit the three of us.
    1:39:38 I am simply telling you what is.
    1:39:41 This is, how can I tell you?
    1:39:47 These meetings must be serious and prepared.
    1:39:50 And prepared with those who want peace.
    1:39:52 Ukraine wants peace.
    1:39:53 US wants peace.
    1:39:55 We have to sit down with Trump.
    1:39:57 And that is 100%.
    1:39:59 First and foremost, number one.
    1:40:04 Moreover, he told me on the phone that he is waiting for us to meet.
    1:40:07 And there will be an official visit.
    1:40:11 And my visit would be the first or one of the first to him.
    1:40:13 And for him, this topic is very important.
    1:40:17 I know that he has his own matters, American issues, I understand.
    1:40:19 I heard his election program.
    1:40:27 But regarding international affairs, I think our issue is one of the most pressing issues for President Trump.
    1:40:29 Therefore, I believe very much I trust his words.
    1:40:31 And I hope we will meet again.
    1:40:34 We need to prepare.
    1:40:36 We have many plans to build on.
    1:40:37 And they exist.
    1:40:40 And they are supported by many countries.
    1:40:43 But we need his vision.
    1:40:46 He needs to look at all these details.
    1:40:48 But his vision, please.
    1:40:50 Because he can stop Putin.
    1:40:53 Because Putin is afraid of him.
    1:40:55 That’s a fact.
    1:40:59 But Trump is a president of a democratic country.
    1:41:01 And he does not come for life.
    1:41:03 He is not Putin.
    1:41:05 He will not come for 25 years.
    1:41:08 He will come for his term.
    1:41:10 Please tell me.
    1:41:14 Well, for example, he came for four years.
    1:41:20 And for the fifth year, Putin came with a war.
    1:41:24 Will it make Trump feel better that there was no war during his time?
    1:41:27 And that Ukraine was destroyed after him?
    1:41:29 Why destroyed?
    1:41:31 Putin is whoever.
    1:41:33 A killer whoever, but not a fool.
    1:41:37 He will be prepared.
    1:41:39 He knows all mistakes.
    1:41:43 He understands how we defeated his army after the invasion began.
    1:41:46 He realized that this was not a Soviet war.
    1:41:48 And that this would not happen with us.
    1:41:49 He will prepare.
    1:41:52 He will let everything into arms production.
    1:41:54 He will have lots of weapons.
    1:41:56 And there will be a very large army.
    1:42:02 And you think that after such humiliation, four years without a war,
    1:42:04 he did not finish us.
    1:42:08 He will return and fight only against Ukraine.
    1:42:11 He will destroy everything around.
    1:42:15 And if you say there is a risk that Trump, President Trump,
    1:42:17 will withdraw from NATO, for example.
    1:42:19 This is a decision of the United States.
    1:42:23 I’m simply saying that if it does, Putin will destroy Europe.
    1:42:26 Calculate the size of army in Europe?
    1:42:29 It’s just that I say it for a reason.
    1:42:31 Do the calculation.
    1:42:33 Why did Hitler conquer all of Europe then?
    1:42:35 Almost.
    1:42:40 Just count, remember, his armies of millions.
    1:42:42 Calculate what Europe has.
    1:42:44 What are the largest armies?
    1:42:46 We have the largest army.
    1:42:50 The Ukrainian army is the largest in Europe.
    1:42:55 The second place after us is four times smaller than us.
    1:42:56 France?
    1:42:58 Yes, 200,000.
    1:43:01 I think the French have about 200,000.
    1:43:04 We have 980.
    1:43:06 So this powerful coalition of European nations?
    1:43:08 That will not be enough.
    1:43:10 Yes, it’s not going to be enough.
    1:43:12 But you’re a smart man, there’s a lot of ideas.
    1:43:16 Partnerships with Global South, India,
    1:43:18 Middle East, Saudi Arabia,
    1:43:22 economic partnerships, political partnerships.
    1:43:24 It all protects you.
    1:43:26 First of all, look at one example.
    1:43:31 North Korea.
    1:43:33 Just look at this example.
    1:43:40 12,000 has arrived.
    1:43:48 Today, 3,800 killed or wounded.
    1:43:56 They can bring more, 30,000, 40,000,
    1:44:02 or maybe 500.
    1:44:04 They can bring many people.
    1:44:05 Why?
    1:44:09 Because they have order, autocracy and everything.
    1:44:12 Can Europe bring people together?
    1:44:14 No.
    1:44:19 Will Europe be able to build an army consisting of 2 to 3 million people?
    1:44:21 No Europe will not want to do this.
    1:44:22 And for what?
    1:44:25 We definitely don’t want a world war with you.
    1:44:27 There is no such purpose.
    1:44:30 There is no such purpose as gathering everyone.
    1:44:31 We do not want any war.
    1:44:36 We want to stop the Russians and they invite North Korean soldiers.
    1:44:41 Invited.
    1:44:44 Their faces are burned.
    1:44:47 They themselves burn their faces.
    1:44:51 Those who cannot escape, injured or killed.
    1:44:52 There’s a video.
    1:44:56 Everything I’m telling you, there is evidence of this.
    1:45:01 So that they are not recognizable, right?
    1:45:05 It means, what does it mean?
    1:45:08 It’s out of values which share Europe.
    1:45:10 Europe counts.
    1:45:15 It means that those guys, they don’t count.
    1:45:17 It’s count, yes?
    1:45:19 They don’t count the number of people.
    1:45:21 That is the answer.
    1:45:22 Can they move more?
    1:45:23 Yes.
    1:45:25 Can they move dozens of thousands?
    1:45:27 Yes, because we see what they have.
    1:45:35 Last year, for example, Europe gave us one million artillery rounds.
    1:45:41 We produced a lot ourselves, but they gave us initiative.
    1:45:42 It was initiative.
    1:45:52 One million artillery rounds and of 155 and etc.
    1:46:03 We produced more, but North Korea gave Putin 3.7, just gave him.
    1:46:05 So he also has a deficit for today.
    1:46:07 It means he needs what?
    1:46:08 He needs time.
    1:46:16 But the number of soldiers and the number of artillery rounds is not everything.
    1:46:23 As you have said, let’s say Donald Trump guarantees security for four years.
    1:46:36 You can form partnerships with India, with Saudi Arabia that enforce punishment, the stick, on oil prices, for example, if any aggressive action is taken.
    1:46:45 You can actually even build, I’ve met a lot of incredible Ukrainian tech people, IT people.
    1:46:51 You can build great companies that form partnerships with the United States, that form partnerships with China.
    1:46:59 And that is a big leverage against the aggression of however many million artillery rounds.
    1:47:01 And that is a sheet of paper.
    1:47:03 You don’t need a sheet of paper of protection.
    1:47:06 Ah, that’s you.
    1:47:09 Well, when you speak.
    1:47:10 In English.
    1:47:11 In English, yeah.
    1:47:18 You don’t even need answers because when you now we’re talking, you already answered on all the questions.
    1:47:28 The first one is that during this time, you need just cooperation, a lot of money for this military industry.
    1:47:38 In Ukraine or in Europe, with India, Saudi Arabia, Saudi and the United States, you need a lot of money.
    1:47:40 So the question is where you will get it.
    1:47:42 So my answer was to Trump.
    1:47:45 I said, this is one of the security guarantees.
    1:47:50 Take 300 billions of frozen Russian assets.
    1:47:51 We will take it.
    1:47:57 Take money, what we need for our interior production, and we will buy all the weapons from the United States.
    1:48:00 We don’t need gifts from the United States.
    1:48:03 It will be very good for your industry.
    1:48:11 For the United States, we will put money there, Russian money, not Ukrainian, not European, Russian money, Russian assets.
    1:48:13 They have to pay for this.
    1:48:15 We will put it and we will make it.
    1:48:17 This is one of security guarantees.
    1:48:20 Yes, of course, because this is a military guarantee.
    1:48:21 Yes.
    1:48:32 But then the second you said that energy price and a lot of sanctions on products and the Russian shadow fleet and etc.
    1:48:35 That is the second answer we spoke about before.
    1:48:37 Yes, put more sanctions on them.
    1:48:39 More sanctions.
    1:48:43 It’s okay, not to take off sanctions.
    1:48:47 It’s okay with you, but it’s not going to be okay with the president of Russia.
    1:48:50 Yes, but I’m not thinking how it will be very good for him.
    1:48:52 He’s still a killer.
    1:48:57 I understand, but unfortunately the reality is that a compromise is needed in order to reach an agreement.
    1:49:04 So in your understanding the fact that he is no in jailed after all the murders, he is not in jailed assuming all the murders,
    1:49:12 and no one in the world is able to put him in his place, send him to prison, do you think this is a small compromise?
    1:49:17 This is not a small compromise, and to forgive him will not be a small compromise.
    1:49:19 To forgive, no one will forgive.
    1:49:22 This is absolutely impossible to forgive him.
    1:49:25 We cannot get into the head and soul of a person who lost their family.
    1:49:28 Nobody never will accept this.
    1:49:30 Absolutely impossible.
    1:49:32 I don’t know, do you have children?
    1:49:34 No, not yet, but I would like to.
    1:49:36 Yes, God bless.
    1:49:38 And this is the most important thing in life.
    1:49:42 And they simply took away the most precious thing from you, will you ask?
    1:49:46 Who ruined your life before going to rip their head off?
    1:49:48 I’m just curious, they took your child away.
    1:49:50 Are you going to ask who did this?
    1:49:52 And they will answer that dude did this.
    1:49:54 You will say, oh well, then there are no questions.
    1:49:59 No, no, no, you will go fucking hell and bite their head off.
    1:50:02 And it will be fair.
    1:50:04 Can murderers be forgiven?
    1:50:09 That’s why you make security guarantees.
    1:50:14 What I told you, for those who are here, and what we control, and what will not happen.
    1:50:19 And that those who lost, we will never forget.
    1:50:21 And a matter of time.
    1:50:26 But when you gave us NATO, I just said, this means that after a while,
    1:50:32 everything I said about NATO, after a while, Ukraine will not go against Russia.
    1:50:36 And Russia will not go against Ukraine because you are in NATO.
    1:50:38 I am just saying, is not that a compromise?
    1:50:40 So NATO is a compromise.
    1:50:45 This is not just a security guarantee, in my opinion.
    1:50:51 Look, when rockets were attacking Israel, and Israel is not in NATO.
    1:50:56 NATO countries, aircrafts were deployed.
    1:50:58 Air defense.
    1:51:01 The air defense worked.
    1:51:06 Operated by different Middle Eastern countries.
    1:51:10 These are also security guarantees.
    1:51:16 And, by the way, Israel has nuclear weapons.
    1:51:21 So why do they need NATO, when in fact they have more than NATO has?
    1:51:26 The American, British, and French aviation stepped in.
    1:51:27 There was ADA.
    1:51:32 I don’t remember from Jordan.
    1:51:35 Listen, thousands of missiles were shot down that way.
    1:51:38 This is, what is this?
    1:51:40 So it’s a guarantee of safety.
    1:51:44 It’s just that it’s not called NATO.
    1:51:48 Is some Uncle Vova irritated by the word NATO?
    1:51:51 There’s a problem with the word?
    1:51:56 And I think he’s simply irritated by people who are alive and living here.
    1:52:00 If you believe this, it will be very difficult to negotiate.
    1:52:04 If you believe that the president of a country is completely crazy,
    1:52:07 it is really hard to come to an agreement with him.
    1:52:12 You have to look at him as a serious person who loves his country
    1:52:14 and loves the people in his country.
    1:52:16 And he conducts, yes, destructive military actions.
    1:52:19 Who are you talking about now, who loves his country?
    1:52:20 Putin.
    1:52:22 Do you think he doesn’t love his country?
    1:52:23 No.
    1:52:25 What is his country?
    1:52:27 He happened to consider Ukraine his country.
    1:52:28 What is his country?
    1:52:29 Explain it.
    1:52:31 Tomorrow he will say that it’s America.
    1:52:33 No pity for the Chechens?
    1:52:36 Do they look like Russians?
    1:52:39 Do they speak Russian?
    1:52:41 Of course.
    1:52:45 Of course they learn in schools like anywhere there’s been Russification.
    1:52:47 Who are the Chechens?
    1:52:50 A different people.
    1:52:52 Another faith.
    1:52:54 Other people.
    1:52:56 Another language.
    1:52:58 A million.
    1:53:01 Eliminated.
    1:53:03 And eliminated how?
    1:53:05 How did he kill them?
    1:53:06 With love?
    1:53:07 I know, fuck.
    1:53:08 By hugging.
    1:53:11 In Ukrainian, as we say,
    1:53:13 strangling by hugging.
    1:53:14 I love you so, so much.
    1:53:17 I love you so much that I want to kill you.
    1:53:18 That’s his love.
    1:53:20 And that’s not love.
    1:53:22 You’re mistaken.
    1:53:24 He does not love his people.
    1:53:25 He loves his inner circle.
    1:53:27 It’s only a small part of the people.
    1:53:31 He doesn’t love them.
    1:53:33 Why, I’ll explain.
    1:53:39 You cannot send your people to another land.
    1:53:45 To die knowing that they will die.
    1:53:46 Children.
    1:53:48 My daughter.
    1:53:49 My daughter.
    1:53:53 She is 20 years old.
    1:53:55 For me, this is a child.
    1:53:57 She is already an adult.
    1:53:59 Of course.
    1:54:01 But she is a child.
    1:54:05 The boys he sends are 18 years old.
    1:54:07 18 years old.
    1:54:08 They are children.
    1:54:10 He sends them.
    1:54:13 It’s not that fascists came to his land
    1:54:16 and he needs to defend it.
    1:54:20 He came to ours and he sent them.
    1:54:22 Chechnya, he sent them.
    1:54:24 Syria, he sent them.
    1:54:26 Africa, he sent them.
    1:54:28 Georgia, he sent them.
    1:54:32 Moldova, Transnistria, that was before him.
    1:54:34 Fine, we can leave that aside.
    1:54:36 He has enough sins of his own.
    1:54:43 And, and then there’s Ukraine, the largest part.
    1:54:57 780,000, 788,000 killed or wounded Russians.
    1:54:59 He calls them all Russians.
    1:55:02 Even those who don’t know, who don’t know how to speak.
    1:55:03 Russian.
    1:55:05 On his territory of Russia.
    1:55:07 Everything they’ve enslaved.
    1:55:08 Yes.
    1:55:10 Proud Varangians.
    1:55:12 So I wonder, is that love?
    1:55:13 What love is this?
    1:55:15 And for what?
    1:55:16 Does he love his people?
    1:55:17 No.
    1:55:19 Does he love his land?
    1:55:22 His country is bigger than America.
    1:55:24 How much land do you need?
    1:55:25 America is huge.
    1:55:28 America is simply an outstanding country.
    1:55:31 Outstanding country.
    1:55:34 Russia is bigger.
    1:55:37 Well, just bigger.
    1:55:39 So, so ask yourself.
    1:55:41 Does he love them?
    1:55:42 What is he doing?
    1:55:45 And what does he love?
    1:55:47 Do you think he’s been everywhere?
    1:55:49 In his Russia?
    1:55:51 It’s impossible to get around it.
    1:55:53 He hasn’t been everywhere.
    1:55:54 He just hasn’t.
    1:55:57 Well, I believe that Donald Trump loves America.
    1:56:01 And I don’t think he has been to every single American city.
    1:56:02 No, no, no.
    1:56:04 I saw his rallies.
    1:56:05 So many rallies.
    1:56:07 No, no, let’s, let’s be honest.
    1:56:08 Let’s be honest.
    1:56:11 He had it and I saw it and it’s very difficult.
    1:56:13 He’s not, I mean, he’s not 18.
    1:56:15 Yes, but he’s strong.
    1:56:17 And this is his will.
    1:56:20 Everywhere where the war is.
    1:56:22 I’m sure.
    1:56:25 I pray to God it never will be on your land.
    1:56:26 Yes.
    1:56:28 And I’m sure that it will not be.
    1:56:32 But I’m sure that if you have in some region the problems,
    1:56:37 how to say earthquake, hurricane, you have it all.
    1:56:43 Well, I’m sure that President Trump would be there.
    1:56:45 After one day, two or three days,
    1:56:47 I don’t know the security of all these things,
    1:56:48 but he will be.
    1:56:51 Otherwise, how will people look at him?
    1:56:53 Yes, of course he will.
    1:56:54 Of course.
    1:56:56 The same about me.
    1:56:58 I’m not comparing myself with him.
    1:57:00 I’m just where it is difficult for people.
    1:57:01 I have to come.
    1:57:06 The question, the next question is very simple.
    1:57:08 Region.
    1:57:11 Kursk region.
    1:57:14 The operation there.
    1:57:21 Did Putin, was Putin in Kursk during, during four months?
    1:57:22 No.
    1:57:25 Listen, I have tremendous respect for you.
    1:57:27 Admiration for many reasons.
    1:57:30 One of which is you stayed in Kiev.
    1:57:35 And another one is that you visit the front and you talk to the soldiers
    1:57:38 in the front and you talk to people all across Ukraine.
    1:57:39 Absolutely.
    1:57:41 Tremendous respect for that.
    1:57:46 And not enough people say that, you know,
    1:57:50 I had a conversation with Taka Carlson, for example.
    1:57:53 And, you know, I said that you’re a hero for staying in Kiev.
    1:57:58 And he said, well, he just did a thing that every leader should do.
    1:58:02 But I think not enough leaders do the thing that every leader should do.
    1:58:04 So tremendous respect.
    1:58:06 And I agree with you totally.
    1:58:10 Yes, a leader should go to the, to the front of a war.
    1:58:15 You know, that said, America has waged wars all across the world.
    1:58:22 That has the war in the, you know, in Afghanistan and Iraq cost $9 trillion
    1:58:27 and killed over a million people.
    1:58:35 War is hell and just because war is waged in terrible ways that it is,
    1:58:38 does not mean the leader does not love their country.
    1:58:40 But I take your point.
    1:58:46 I once again have a dream that even if there’s hate that you sit down
    1:58:53 with Donald Trump and Vladimir Putin and you find a way to peace.
    1:58:55 Let me ask you a question.
    1:58:56 What do you think?
    1:59:00 Will there ever be a day when the Ukrainian people forgive the Russian people
    1:59:07 and both peoples will travel back and forth again and marry each other?
    1:59:09 Rekindle and form friendships.
    1:59:11 Will there be such a time in the future?
    1:59:15 I think history has long answered this question.
    1:59:18 I don’t know how it will be for us.
    1:59:22 It will be in the future without a doubt.
    1:59:24 History has shown this time.
    1:59:28 Again, after every devastating war,
    1:59:43 one generation, one country recognizes that it is, was an aggressor
    1:59:49 and it comes to realize this is impossible to forgive.
    1:59:53 This is precisely the kind of education they’ve had in Germany
    1:59:57 for many years, even though these children had nothing to do with it.
    2:00:03 It was their grandfathers who participated and not all of them were participants
    2:00:09 of Nazi Germany’s war against, essentially against the world.
    2:00:12 Yes, and against life.
    2:00:17 And therefore they’re still apologizing.
    2:00:19 Apologizing is not easy.
    2:00:22 They know that they were the aggressors.
    2:00:27 If they were guilty, they do not look for compromise in history.
    2:00:30 Compromise in itself buys time.
    2:00:32 And they understand this.
    2:00:39 There are convicted murderers condemned both historically and by their own people.
    2:00:45 Reparations have been paid and security guarantees have been established, by the way,
    2:00:47 and all this is done.
    2:00:51 And when all this is done and recognized in any case,
    2:00:55 people develop relations with each other.
    2:00:56 That’s clear.
    2:01:03 But it can only happen the way it always has, always has in history.
    2:01:06 Russia will have to apologize.
    2:01:07 It will.
    2:01:10 This will happen because they are guilty.
    2:01:11 They are guilty.
    2:01:14 And as I told you, the guilty are different.
    2:01:19 Both those who participated and those who remain silent
    2:01:30 because silence is also about participating, in my opinion.
    2:01:32 Can I ask about Donald Trump?
    2:01:37 We’ve already mentioned him a lot, but let’s focus there.
    2:01:38 What do you admire?
    2:01:41 What do you respect about Donald Trump?
    2:01:51 And also maybe why do you think he won overwhelmingly the election in 2024 that American people chose him?
    2:01:53 He was stronger.
    2:02:00 He was much more stronger than Kamala Harris, Biden first, and then Kamala Harris, yes?
    2:02:07 He showed that he can intellectually and physically.
    2:02:13 It was an important point to show that if you want to have a strong country, you have to be strong.
    2:02:14 And he was strong.
    2:02:18 And this number of rallies, what I said is not a simple thing.
    2:02:19 He showed that he can.
    2:02:21 He is strong.
    2:02:27 So he doesn’t have any questions with his, I mean, this age and etc.
    2:02:28 Nothing.
    2:02:29 He is young.
    2:02:32 He is young here and his brains work.
    2:02:35 So I think it’s important, very important.
    2:02:38 And of course, a lot of interior questions.
    2:02:41 I understand the prices and etc.
    2:02:44 Economic questions and the questions off.
    2:02:47 You have the questions with other things.
    2:02:48 Immigration, yeah.
    2:02:50 A lot of things.
    2:02:51 I understand.
    2:02:56 So maybe he answered on those questions which people had.
    2:02:58 One of the questions.
    2:03:00 That he will finish the war.
    2:03:01 That he will finish the war.
    2:03:04 Yeah, for me, this is the main question.
    2:03:08 But I said that for him, he’s the president of the United States.
    2:03:12 For him, his priority is his questions in the United States.
    2:03:14 And I understand and I respect it.
    2:03:19 But the second he was speaking about the world, yes, he said that he will finish the war.
    2:03:30 And I hope very much because I think that our people really support his idea.
    2:03:32 That’s why I said it is for me.
    2:03:40 It’s very, very important to have enough people around him.
    2:03:44 Who will have connections with him with the right things.
    2:03:48 For me, the truth is very right things.
    2:03:51 What’s going on really in the battlefield?
    2:03:56 What’s going on really with Putin and Russia?
    2:03:58 What he really wants.
    2:04:00 And that is just to have it.
    2:04:07 You know, before any decision, you have to be at the same level of information.
    2:04:13 And we need, really, we need him to know everything from us.
    2:04:14 From you.
    2:04:17 From people in Ukraine.
    2:04:21 From people around who are really afraid.
    2:04:26 Afraid that Putin doesn’t want to stop the war.
    2:04:30 Afraid that he will come back with his aggression.
    2:04:45 So first of all, I should mention that our conversation today will be translated and dubbed into Ukrainian, English, Russian, other languages, Spanish.
    2:04:51 So you’re in your voice. So there are great guys originally from Poland.
    2:04:53 It’s a company called Eleven Labs.
    2:04:56 They’ve trained an AI.
    2:05:00 Artificial intelligence sounds truly remarkable in your voice.
    2:05:02 You have the freedom to speak in any language you choose.
    2:05:07 But no matter what, you will always find yourself returning to speaking in Ukrainian.
    2:05:11 That is, when you talk about Donald Trump, you can do it in Ukrainian or Russian.
    2:05:12 Everybody understands.
    2:05:14 Everybody understands.
    2:05:22 But you said that there’s some things about the war that maybe Americans don’t understand.
    2:05:24 So we talked about Putin.
    2:05:27 We talked about the security guarantees.
    2:05:35 But the reality of war, what’s happening on the ground, what do you think that people should understand?
    2:05:39 First of all, they have to understand the idea of Putin’s war.
    2:05:42 It is very important for him.
    2:05:45 I consider this process.
    2:05:52 I think it is very important for him not to give Ukraine independence.
    2:05:56 To prevent Ukraine from developing is an independent country.
    2:06:02 For him, influence, influence on Ukraine cannot be lost.
    2:06:15 And for him, it is like, I think for him, this is such a goal in this last mile.
    2:06:29 And certainly for him, the last mile and of his political life.
    2:06:32 And I think that this is the goal for him.
    2:06:39 The second story, I do not want to talk about these banalities that he wants to return.
    2:06:43 All the territories of the Soviet Union influence over them.
    2:06:45 He does this little by little.
    2:06:48 I just don’t want to, people need to know details.
    2:06:54 For example, Georgia, which was headed towards the EU and NATO, completely turns towards Russia,
    2:06:59 regardless of the fact that they have frozen conflicts.
    2:07:04 They have in Abkhazia what we have with Donbass, which is controlled by militant rebels.
    2:07:08 Abkhazia is not developing, it’s just a part, a very beautiful part of Georgia.
    2:07:10 That has died.
    2:07:13 And if you have the opportunity, then go there someday.
    2:07:16 You will understand it simply died because Putin wanted to.
    2:07:22 He wanted not to allow them to develop because a frozen conflict means that you will not be accepted.
    2:07:26 The EU and certainly will not be accepted into NATO because right now, yes,
    2:07:29 they do not take you because of a frozen conflict.
    2:07:31 And this is what Putin did.
    2:07:33 It’s very important for him not to lose this influence.
    2:07:39 That is, he turned back Georgia, young people, students, everyone leaves, and this is a fact.
    2:07:45 Georgia is quite small and they will leave, they want to live in Europe, they want to develop.
    2:07:49 Somebody in the United States, somebody in Europe, somebody in the EU, somebody in Britain.
    2:07:55 He will now fight for the Moldovan parliament.
    2:07:57 This is his second step.
    2:08:00 You will see in April what happens.
    2:08:04 You will see, oh, he will start turning Moldova away from Europe.
    2:08:07 Although they want to go there, he does not care.
    2:08:16 There will be a pro-Russian party and they will do something with the current president because she has won the elections.
    2:08:20 She is pro-European, but he will turn this back.
    2:08:24 The next steps are completely clear.
    2:08:32 He will do everything wherever he has lost influence, where there was influence, influence of the Soviet Union.
    2:08:38 He’ll turn it back as much as possible and we understand at what price you have seen Syria.
    2:08:43 You saw these tortures, what we saw in Bucha, what we saw everywhere we came
    2:08:46 and where our territories were occupied.
    2:08:50 In Syria, the same happened, there were a thousand people there
    2:08:54 and you have seen it, scientists were found, doctors were found.
    2:09:01 It is clear that any people are capable of generating their own opinion.
    2:09:05 Show their skills, develop society.
    2:09:11 Everyone who can express an opinion, everyone who can shape the independence
    2:09:17 and maturity of society such people are not needed and he wants this in Ukraine.
    2:09:26 And therefore, everyone should understand that Ukraine is like a large wall.
    2:09:33 From that Europe and if God willing, President Trump does not withdraw from NATO.
    2:09:37 Because again, I believe that this is the biggest risk.
    2:09:48 I think two steps, two steps that Putin would like to see is a weak NATO
    2:09:57 and this without Trump and a weak Ukraine which cannot survive on the battlefield
    2:10:04 simply cannot survive and prevent me from building a strong relationship with Trump.
    2:10:13 I think these two steps leaving NATO and Ukraine’s weakness will lead to a large-scale war
    2:10:19 which Putin will wage on all the territories of that Europe.
    2:10:26 Post-Soviet Europe, I mean Soviet Europe, not post-Soviet, but post-World War II period.
    2:10:35 That is Soviet Europe, Soviet-era Europe in order to completely control everything there.
    2:10:42 This is what he will do and besides this, this will happen in any case.
    2:10:52 Even if the US is thinking about leaving NATO, this war will affect the United States
    2:10:55 because North Korea is the first sign.
    2:11:02 North Korean skills, North Korean knowledge which they are now gaining from this war.
    2:11:10 These include mastering new technologies, large-scale drones, missiles, how it works,
    2:11:15 the kind of technological war we have today, cyber war, etc.
    2:11:24 All these skills Korea will bring home and scale up in that region and this will be a risk for the Pacific region.
    2:11:33 Security first and foremost, for Japan and for South Korea, they will face these risks 100%
    2:11:41 and it will be clear that Taiwan will also have to face them.
    2:11:51 Without this, it is impossible. This is already happening. This is already happening.
    2:12:03 Therefore, I think that President Trump has all power to stop Putin and give Ukraine strong security guarantees.
    2:12:07 We’ve been talking for two hours at the pause. Do you want to take the break?
    2:12:13 Yes, we will make a pause. We can have coffee, right? Coffee?
    2:12:16 Let’s do it.
    2:12:22 And give the interpreter some water.
    2:12:24 We’ll keep switching languages.
    2:12:28 Like a dragon, you know? Three heads, three translators.
    2:12:38 So one of the difficult decisions you had to make when the war began is to enact martial law.
    2:12:44 So when you won the presidency, you were the warrior for freedom.
    2:12:54 In fact, this war is for freedom, for freedom of the individual, freedom of speech, freedom of religion, freedom.
    2:13:02 But a lot of freedoms had to be curtailed, sacrificed in this fight, because there’s so much focus on the war.
    2:13:14 Do you feel the tension of that, the sacrifice that had to be made in democracy, in freedom, in fighting this war?
    2:13:19 In any case, this war is for our freedom.
    2:13:33 Generally speaking, to be honest, when you understand, over time, when the war passes, you understand that your main values are at home.
    2:13:41 This is your home, your children, your love, God willing, parents are alive.
    2:13:55 And if, and if not alive, then their memory, visiting their grave, choosing how to work, how much, preferably choosing where to work.
    2:13:57 All this is freedom.
    2:14:01 Freedoms are not just a desire, they are an opportunity.
    2:14:08 In any case, you are right because war is a limitation of opportunities.
    2:14:19 In any case, you fight for these opportunities, your parents, your parents and God gave you life, right?
    2:14:22 You fight for your life, your life.
    2:14:28 But we need to understand that first there is a war and then martial law is introduced.
    2:14:32 Martial law is not introduced because someone wanted to.
    2:14:36 You say this is not Pinochet, this is not Pinochet and so on.
    2:14:38 This is a completely different story.
    2:14:49 An aggressor came and according to your legislation, if the border is violated, if there is armed aggression, you have all this written down long ago, written out in legislation.
    2:14:59 You introduce martial law and the introduction of martial law everywhere at all times means, in any case, a restriction of opportunities.
    2:15:06 If opportunities are limited, rights and freedoms are restricted, therefore the war itself restricts rights and freedoms.
    2:15:10 Yes, and you can’t do anything about it.
    2:15:18 We try honestly to balance as much as possible.
    2:15:29 I believe that the business sector works despite the difficulties of the war and we do everything somewhere, you know, there somewhere to reduce some load.
    2:15:34 Unfortunately, we cannot reduce taxes.
    2:15:38 On the contrary, military tax is used for war.
    2:15:40 You need to take money somewhere.
    2:15:47 This, by the way, is about the fact, the fact that the US gave us a lot and Europe too.
    2:15:53 But compared to how much we needed for the war, this is not all.
    2:16:00 As for military salaries, you know, you know that we could not pay the salaries of a million strong army.
    2:16:03 We could not pay it using the money from our partners.
    2:16:05 These are all expenses.
    2:16:12 This is all the money that the country and people have accumulated.
    2:16:14 You can’t do anything.
    2:16:15 I really want to reduce taxes.
    2:16:18 I will tell you frankly, I really want to.
    2:16:26 Well, I think that the whole new tax system, new deregulation, new steps, new reforms, all this will be after the war.
    2:16:30 Although there is something to brag about, this is proof.
    2:16:36 And this is a document.
    2:16:44 Because if you want to get a candidacy for the European Union, you must implement the appropriate number of reforms.
    2:16:46 We do everything.
    2:16:55 During the war, we voted for many reforms, including anti-corruption, banking reforms, land reforms, major reforms.
    2:17:00 We started a large privatization and the war did not stop us.
    2:17:04 Yes, it slowed down, but we went through a lot.
    2:17:07 When do you think you will hold elections?
    2:17:13 Because for people who don’t know, part of the martial law elections were suspended and they were delayed and delayed and delayed.
    2:17:19 And I think the next sort of plan is in February of 2025.
    2:17:24 But when do you think there will be presidential elections in Ukraine?
    2:17:28 Elections were postponed once.
    2:17:30 They were not delayed to be clear.
    2:17:33 Elections did not take place in 2024.
    2:17:37 That year, first of all, we need to understand the Constitution.
    2:17:42 They were scheduled to be held in the spring of 2024.
    2:17:48 Due to martial law, under the Constitution, you cannot do this.
    2:17:51 These are the presidential elections.
    2:18:01 The parliamentary elections did not take place in the fall of 2024, according to the Constitution.
    2:18:03 Yes, there are security things.
    2:18:06 There is the Constitution, but there are security things.
    2:18:14 That is, everyone in Ukraine understands that this cannot be done until the war is over or legislation needs to be changed.
    2:18:19 I believe that elections will take place immediately after the end of martial law.
    2:18:21 This is according to the law.
    2:18:29 Or members of the parliament need to get together and change legislation, which will be very difficult to do.
    2:18:31 Because society is against it.
    2:18:33 Why society against it?
    2:18:38 It is understandable why.
    2:18:42 Because we want elections that we want to trust.
    2:18:47 8.5 million people went abroad.
    2:18:53 The infrastructure needs to be created for these millions of people to vote.
    2:18:56 Millions of people in the occupied territories.
    2:19:00 I’m not even talking about the occupation of 2014.
    2:19:03 I’m talking about the occupation right now.
    2:19:05 What to do with these people?
    2:19:08 This is a difficult question.
    2:19:14 And one of the most unfair ones is how to vote without having a million soldiers.
    2:19:19 That is, it is impossible.
    2:19:25 We need to think about how to change the system if the elections are held in times of war.
    2:19:30 Change the legislation, which should include changes to the voting system.
    2:19:32 To think about online voting.
    2:19:41 Everyone is afraid because of certain attacks, like cyber attacks and so on.
    2:19:43 But we need to think about it.
    2:19:50 I really think that it’s possible that we can end the war in 2025.
    2:19:51 In January.
    2:19:53 We’ve already agreed on it.
    2:19:55 I would very much like to.
    2:19:56 I would very much like to.
    2:19:58 After the war?
    2:20:00 And immediately.
    2:20:02 Yes, immediately.
    2:20:04 In the year of the end of the war, it’s a fact.
    2:20:05 Why?
    2:20:12 Because when martial law ends, you can immediately vote in parliament to hold elections.
    2:20:16 And then everyone, everyone will vote.
    2:20:19 Because there are no restrictive measures.
    2:20:23 And after they vote, I think elections can be held in 90 days.
    2:20:26 Something, something like that.
    2:20:27 Yes.
    2:20:33 And this means that immediately after the end of the war, elections may take place in 90 days.
    2:20:35 Are you running?
    2:20:37 For reelection?
    2:20:38 Even I don’t know, really.
    2:20:39 I don’t know.
    2:20:40 I don’t know.
    2:20:43 It is a very difficult question.
    2:20:47 It depends on how this war will finish.
    2:20:52 It depends on what people will want.
    2:20:56 Mostly it depends on people.
    2:21:00 First of all, and of course, my family.
    2:21:06 We had no time to speak about it with my family.
    2:21:11 And of course, didn’t have a chance because we don’t think about it now.
    2:21:13 I mean, it’s something, you know.
    2:21:22 There are a lot of some, not a lot of, but enough voices in Ukraine from politicians, opposition and etc.
    2:21:23 About this.
    2:21:24 Yes.
    2:21:33 But we don’t think really seriously, didn’t think seriously with my family about it.
    2:21:35 So this is war.
    2:21:37 I mean, how to think about what we’ll be after.
    2:21:41 It’s very difficult, really very difficult.
    2:21:49 If we look at the field of candidates, maybe you can give your opinion about the set of ideas you see out there,
    2:21:52 including your own about the future of Ukraine.
    2:22:00 As I understand, the candidates include Parshenko, Zaluzhny, Aystovich, Budanov, Klitschko, many others.
    2:22:03 This is the internet speaking to me.
    2:22:06 What do you think of the space of ideas that these candidates represent?
    2:22:08 You know, I think it can be.
    2:22:11 There can be even a bigger number of candidates.
    2:22:14 I don’t really know what will be.
    2:22:17 They have rights to participate if they want to.
    2:22:18 Yes.
    2:22:25 If they really want to and can, they can go and do what they want, honestly.
    2:22:27 Most important is what are they doing now?
    2:22:37 I think that all these people are famous Ukrainian people and it’s important for them to do everything they can today,
    2:22:40 not begin any election campaign.
    2:22:46 I think this what can divide our people to have the elections, you know, during the war.
    2:22:52 I mean, this make steps, speak about elections a lot, you know, make a big mess about it.
    2:22:54 I think this is not right.
    2:22:57 That’s why I’m not agreeing with some of these people.
    2:23:05 But they can and I think that they can and maybe some of them will and it’s okay.
    2:23:06 It’s normal.
    2:23:08 It’s very normal.
    2:23:11 Our system differs from the system in the United States.
    2:23:14 You have two parties and the parties decide who will be the leader.
    2:23:17 And in Ukraine, everybody can participate.
    2:23:20 Let them.
    2:23:23 You think you’re going to win the debate?
    2:23:29 You versus Zeluzhnik Parshenko or Estovic and you decide to run.
    2:23:31 Do you think you’re going to win the debate?
    2:23:33 Or you’re again focused on the war?
    2:23:35 Oh, I’m really focusing on the war.
    2:23:36 I understand.
    2:23:46 I think the most difficult debate is what will be brought to the table and we spoke about it.
    2:23:49 It will be during the war, how to finish the war.
    2:23:55 I think that is my goal because it will be one of my most complicated debates.
    2:24:03 And for any president who is in a war, of course, but I think this is my goal to win those debates.
    2:24:07 And the other things are not for today.
    2:24:18 As I said, the dream I have is a historic opportunity to make peace, to make lasting peace soon.
    2:24:20 So I’m glad you’re focused on that.
    2:24:26 Let me ask a question about that a lot of people in the United States think about.
    2:24:32 And I care a lot about the future of Ukraine is corruption.
    2:24:36 This is something you have cared a lot about for a long time.
    2:24:43 You won the presidency 2019 in big part, your message of fighting corruption.
    2:24:53 But there’s a lot of accusations that during war, I mentioned 9 trillion dollars in the United States, war breeds corruption.
    2:25:04 So can you speak to that? How you have been fighting corruption and you can you respond to the accusations that has been corruption in Ukraine?
    2:25:05 You know, it’s very simple.
    2:25:11 First of all, we really have a very sophisticated anti-corruption system.
    2:25:17 Sophisticated not in the sense that it’s difficult to understand, but in that it really consists of many elements.
    2:25:21 It’s the most sophisticated in all of Europe.
    2:25:24 This is another requirement of the European Union.
    2:25:27 It was a requirement for Ukraine.
    2:25:31 And for many years, Ukraine was not trusted.
    2:25:45 I want to tell you that under me, we all voted for bills, all the anti-corruption reforms, all well, almost all reforms and all anti-corruption bodies today are independent.
    2:25:48 They work as requested.
    2:25:50 I still believe that they are not perfect yet.
    2:25:52 There are many issues.
    2:26:00 There is a judicial system, but also a judicial reform that our partners, the United States, plus the EU demanded from us.
    2:26:02 This is all written out.
    2:26:08 This is written out in specific laws, in specific decrees, in specific decisions.
    2:26:09 We did this.
    2:26:13 We’ve done 99% of this.
    2:26:17 If something has not been done, it means that it is on the way.
    2:26:21 But in principle, all this exists and there is no such system as we have in Europe.
    2:26:25 To say that we do not have corruption would be lying.
    2:26:28 We just talk about it openly.
    2:26:30 We are genuinely fighting against it.
    2:26:42 Look, we have sitting in our prison, Ihor Kolomoisky, who is the most influential Ukrainian oligarch since independence.
    2:26:45 And no one could do anything about him.
    2:26:51 The United States of America wanted to have Kolomoisky and they went to great lengths because of money laundering, etc.
    2:26:57 There are criminal cases in the United States, I think in Delaware, something like that.
    2:26:59 Neither Europe could do anything about it.
    2:27:02 That is, we did a lot with oligarchs.
    2:27:06 Russian oligarchs, sanctions were imposed, they were thrown out.
    2:27:11 Some of them fled the state, but they are all under sanctions.
    2:27:19 We exchanged some of them for our soldiers, such as Medvedchuk, to whose daughter Putin is godfather.
    2:27:32 That is, we fought against the strongest influential oligarchs, which are and were in Ukraine and we eliminated a lot of corruption.
    2:27:35 Of course, corruption exists in everyday life.
    2:27:41 It exists, but institutionally, I am sure that Ukraine will overcome all this.
    2:27:48 This takes a little time, I would say honestly, that, listen, what we call corruption.
    2:27:58 And in some state of the world is called lobbyism, but this does not mean that there is no corruption there.
    2:28:03 Let’s take the aid you mentioned during the war.
    2:28:08 First of all, we have no money.
    2:28:13 We have no money except for the war.
    2:28:22 We received weapons from the United States of America, from Europe, if we take, for example, money from the United States of America.
    2:28:32 During all this time of the war, around 177 billion have been voted for or decided upon.
    2:28:38 177 billion, let’s be honest.
    2:28:43 We have not received half of this money.
    2:28:48 The second point, which is very important just as an example, is a corruption.
    2:28:52 The first question, whose corruption?
    2:28:54 This is the second.
    2:28:58 Here is just one small example for you.
    2:29:06 When the United States began to transfer us weapons, it was American money, but American weapons.
    2:29:09 Money for these weapons.
    2:29:15 I had, as a president, I had cargo jets.
    2:29:19 Not in Ukraine because of the war, we moved them very quickly to Europe.
    2:29:28 We had cargo. We have good cargo fleet, very good, because of Antonov.
    2:29:41 So I asked American side to grant me the opportunity because our jets are at another airfield.
    2:29:51 And I asked America to give me the opportunity to use our jets for transfer, not to pay a lot.
    2:29:55 To whom? To your companies, to American companies.
    2:29:58 No, I didn’t get this opportunity.
    2:30:01 My jets stayed put.
    2:30:07 And the United States cargo jets moved these weapons.
    2:30:11 But everywhere you have to spend money.
    2:30:21 So we could get more weapons, but we have to pay for this very expensive fleet.
    2:30:28 My question, is this corruption or not? Or lobbyism? What is it?
    2:30:31 You mean corruption on the part of the US companies?
    2:30:33 Yes, making such decisions.
    2:30:38 The lobbying for such decisions involves some companies that make these decisions.
    2:30:42 But I can’t be open about it and I couldn’t speak loudly about it.
    2:30:46 I didn’t want, nor did I intend to cause any scandals to arise.
    2:30:49 Because otherwise, you can freeze the support and that’s it.
    2:30:55 And that’s why when we talk about corruption, we must ask who is involved.
    2:31:03 If we had 177 and if we get the half, where is the half?
    2:31:07 If you will find the second half, you will find corruption.
    2:31:10 There is a perception of corruption.
    2:31:16 People like Donald Trump and Elon Musk really care about fighting corruption.
    2:31:25 What can you say to them to gain their trust that the money is going towards this fight for freedom, towards the war effort?
    2:31:30 In most cases, we did not receive money, we received weapons.
    2:31:36 And where we saw risks that something could be a weapon, we would slap everyone on the wrist.
    2:31:42 And believe me, this is not only about Ukraine, on the supply chain, everywhere.
    2:31:49 There are some or other people and companies who want to make money because everyone makes money on the war.
    2:31:51 We did not profit from the war.
    2:31:55 If we found someone, believe me, we slapped everyone on the wrist.
    2:32:03 And we did that, we did that, and we will continue to do so because to this day,
    2:32:11 when someone says that Ukraine was selling weapons, and by the way, Russia was the one pushing this narrative,
    2:32:19 we always responded, our soldiers would kill such people with their own hands without any trial.
    2:32:25 Do you honestly think anyone could steal weapons by the truckload when we ourselves don’t have enough on the front lines?
    2:32:35 And yet we have to provide proof to defend ourselves because when there’s an abundance of such misinformation, distrust starts to grow.
    2:32:41 And you’re right, people listen to various media outlets, see this and lose faith in you.
    2:32:47 In the end, you lose trust, and with it, you lose support.
    2:32:55 Therefore, believe me, we are fighting more against disinformation than against particular cases,
    2:33:02 although I still emphasize once again, at the everyday level, such things are still important.
    2:33:07 We catch these people and we fight them.
    2:33:18 I mentioned Elon Musk. I would be interested to hear what you think of him, why you respect him as a person, as an engineer, as an innovator, as a businessman.
    2:33:22 I would just like to hear from you, what do you think about Elon Musk?
    2:33:27 First of all, I had a conversation with him at the beginning of the war.
    2:33:35 I talked with him. I respect him, first and foremost.
    2:33:39 I respect the self-made man, right? In English, I love such people.
    2:33:46 You know, no one and nothing fell into their lap, but the man did something, did it all himself.
    2:33:56 I worked myself, created a big production company, and I know what it means to make money, to make money, to select talented people,
    2:34:06 to impart knowledge to them, to invest money and to create something, something important for certain people, you know.
    2:34:17 And I’m not comparing myself to Musk. He just, well, the man is a great leader of innovations in the world.
    2:34:22 And I believe that such people move the world forward.
    2:34:30 Therefore, I respect the result of his work, and we see this result.
    2:34:39 And for me, it has always been important that your result can be used, that these are not words, but facts.
    2:34:41 Let’s take the war.
    2:34:46 We are very grateful for Starlink. It has helped.
    2:34:51 We used it after Russian missile attacks on the energy infrastructure.
    2:34:55 There were problems with the internet, etc., with connection.
    2:35:02 We used Starlink both at the front and in kindergartens. It was used in schools. It helped children.
    2:35:08 We used it in various infrastructure, and it helped us very much.
    2:35:19 And I would very much like Elon to be on our side as much as possible to support us.
    2:35:23 And yes, I am grateful to him for Starlink. Truly I am.
    2:35:30 First of all, so that our guys have a connection and children too.
    2:35:44 And I am really grateful to him for that. I think we need, I would like him to come to Ukraine, to talk to people here, and to look around, and so on.
    2:35:46 Has Elon visited Kiev or Ukraine yet?
    2:35:47 No.
    2:35:53 I hope the Kiev airport will open soon. Then it will be easier to fly in.
    2:36:05 Yes, I am looking forward to it. Maybe we will open it, but only. And you must understand if the war is over, there must be sustainable peace and air defense systems, to be honest.
    2:36:20 And we must ensure that they are long-lasting and effective. Let’s take the airport, for example, and let’s focus on the airport in Zesho, which you know very well as it is handling important cargo for Ukraine in Poland.
    2:36:24 And there are patriot systems there, because everyone understands what the risk is.
    2:36:28 Well, Russia is a risk, and therefore we need air defense systems.
    2:36:38 And today, today take, for example, the air defense system of one city or another that is being shelled and move it, move it to the airport.
    2:36:44 Well, that would be dishonest. People are more important than planes.
    2:36:55 But there will be a moment. And Trump, by the way, I think that the war will end, and President Trump may be the first leader to travel here by airplane.
    2:36:58 I think it would be symbolic by airplane.
    2:37:03 Again, January 25th around that date, right? Flying in, meeting the Air Force One.
    2:37:04 That would be cool.
    2:37:08 Elon Musk. I will meet you there for the second time, too, on the plane.
    2:37:09 With pleasure.
    2:37:20 And you, by the way, before I forget, let me ask, are you coming on January 20th for President Trump’s inauguration?
    2:37:35 I would like to, of course. I will be considering what is happening then in the war, because there are moments of difficulties, escalation, many missiles, etc.
    2:37:45 But honestly, well, I can’t. I can’t come, especially during the war, unless President Trump invites me personally.
    2:37:59 I’m not sure it’s proper to come, because I know that in general leaders are, for some reason, not usually invited, to the inauguration of presidents of the United States of America.
    2:38:07 Well, and I know that there are leaders who can simply come, want to come, and will come.
    2:38:09 Yeah, I know.
    2:38:16 And I know the temperament of some of these people. They can come at their discretion.
    2:38:19 This is very, very difficult for me.
    2:38:24 I am the kind of person that cannot come without an invitation.
    2:38:32 This is Putin. We did not invite him. He came to us, so to say. And me? I can’t do that.
    2:38:38 No, but didn’t he publicly say that? It would be great if you came to the inauguration, or you mean did he invite it officially?
    2:38:46 No, wait, look, look, look. Listen, I am against any bureaucracy. I get rid of it as much as I can.
    2:38:59 Well, you know, there are some complexities involving security. I decide and I fly, and the United States of America officially provides security.
    2:39:08 Not that I need this, mind you. I do not ask for helicopters to fly around and protect me, but they will simply do it themselves, the security service itself.
    2:39:17 They had to do it. I don’t want it. And sometimes I don’t need it. And I am asking them. It was, for example, before the war.
    2:39:25 I think, yes, it was before the war. I had a meeting, yes, with President Trump. It was in 2019.
    2:39:30 I just wanted to go for a run early in the morning because I really wanted to exercise.
    2:39:39 And they, those tall bodyguards, a lot of them, they decided to join me, but I couldn’t really do it because they were in suits.
    2:39:51 And I was in sportswear. I said, no, I can’t. It’s always funny. I’m not, I don’t want to, you know, I don’t want to disturb anybody and cause anyone problems with me.
    2:39:57 And that’s why, if he will invite me, I will come.
    2:39:59 I thought he invited you.
    2:40:00 Yeah?
    2:40:03 Yeah, I thought he publicly invited you. But okay, I hope to see you there.
    2:40:08 I think they had to do some of their steps. I don’t know, but…
    2:40:12 Step, yeah. The stamp was missing.
    2:40:18 Yeah, but with pleasure with my wife, of course. And I think it’s important. It’s important.
    2:40:25 All right, let’s get back to a serious question. Sometimes they say it in America, this question of who is really in power.
    2:40:29 So let me ask, is someone controlling you?
    2:40:37 For example, oligarchs, American politicians, Yarmuk.
    2:40:48 I wanted to bring this up because I have been here in Ukraine since the, twice since the invasion of 2022.
    2:40:53 And one of the things I’ve learned, well, is that actually nobody controls you.
    2:41:05 And this is, this is one of your strengths as a president, as a person that oligarchs and other rich and powerful people like that cannot control you.
    2:41:07 Can you explain why that is, how you see it?
    2:41:15 I think, and it is indeed true, that I’m generally difficult to deal with.
    2:41:22 I am an ambitious person. I can’t submit to anyone.
    2:41:32 I can live by rules, by laws. I believe that this is the only thing that can control any person today.
    2:41:39 These are the rules and laws of the society or state where you live.
    2:41:45 And I believe that this is the most important thing. There is no person who could control me.
    2:41:55 As I once told President Trump, when we had a meeting, by the way, journalists asked if Trump influenced me during the phone call.
    2:42:03 I told him, I told the journalist the truth then, who can influence me, only my boy, my son.
    2:42:07 This is the fact, when he calls asking for something, well, then I lift up my arms.
    2:42:12 Yes, and I cannot do anything about it because children are children.
    2:42:18 I have so little time with them. And therefore, when there are these moments, they are precious and important to me.
    2:42:27 I am ready to do anything. Also, probably my parents, they are an authority for me.
    2:42:34 Beyond that, I view it more as a system. No one can control the president.
    2:42:44 Therefore, we have oligarchs who either fled or are in prison because oligarchs usually control cash flows and people and influence politics.
    2:42:49 And we have concrete examples. With sentences, they are not just under house arrest.
    2:42:55 Not just that there are some judgments under which their assets were frozen or sanctions were imposed.
    2:43:01 There are specific people who are behind bars. I think this is the answer regarding the influence.
    2:43:06 Would they like to influence me in the same way as any president of Ukraine?
    2:43:11 Because finance and cash flows always influence politics.
    2:43:18 Well, at least they want to do this. This is regarding the influence.
    2:43:27 And other people on the vertical, they perform tasks as my managers.
    2:43:32 Andri, you mentioned is one of those managers.
    2:43:38 Well, I am glad that I have such people.
    2:43:42 Well, probably there is nothing else to add here.
    2:43:47 I will just say that your team that I spoke with is an excellent team, excellent people.
    2:43:48 Thank you.
    2:44:00 Okay, one last question. The future of Ukraine. If you look 5, 10, 20 years into the future, what can help Ukraine flourish economically, culturally, politically in the future?
    2:44:05 Digital? It’s very important. Digitalization of all the process.
    2:44:09 We began this work. We have special Ministry of Digital Transformation.
    2:44:14 Yeah, so this is very good. And we also have our DEA.
    2:44:17 This is the name for all of these services. Yeah.
    2:44:20 So I think that is the most important.
    2:44:29 This is again, this is not only convenient that will cancel all the any possibilities for future corruption.
    2:44:34 Because you don’t have any, you know, you don’t have any personal connections with people in the government or elsewhere.
    2:44:38 So you’re just on your phone or any other device. That’s it.
    2:44:42 And I think we are doing very well. We are the best in Europe.
    2:44:44 All of Europe recognizes it.
    2:44:53 Some countries of the African Union asked us to provide this the same service and we will do it after the war immediately.
    2:44:57 And I think that we can bring money to Ukraine from this.
    2:45:01 And I think what we also need, we need a tax reform.
    2:45:06 I think it will be very important for the businesses to return.
    2:45:17 A lot of support will come, I think, from USA business investment, not as direct aid to us, just to the private sector and resources.
    2:45:25 And I mentioned this to President Trump and to some European leaders who are our key strategic partners that will be happy,
    2:45:33 especially with the Americans, will be happy to sign these contracts and engage in joint investments in many areas.
    2:45:40 And I think we can develop oil, gas, green energy, including solar power.
    2:45:44 And we already have the resources. We can invest money into this.
    2:45:56 We have oil reserves in the Black Sea that we can exploit and we need your expertise and the investment of your companies.
    2:46:03 We have gold and uranium reserves, the largest in Europe, by the way, which is also very important.
    2:46:07 For example, Russia has pushed France out of Africa.
    2:46:13 They urgently need uranium, which we have, so we are ready to open up for investments.
    2:46:20 And this will give us, of course, opportunities, jobs for people, revenue.
    2:46:22 I don’t want cheap labor, honestly.
    2:46:31 What I truly want, especially after the war, to open up for those people who can really contribute and earn, yes.
    2:46:34 And give a reason to the 8 million people to come back.
    2:46:41 Yes, it’s so important and they will come and we will recover and rebuild Ukraine.
    2:46:47 We will be very open to companies and, of course, we will welcome our people back.
    2:46:50 It’s so important culturally.
    2:46:55 I think the most important thing is to remain open and not change our direction.
    2:47:01 Because culturally aligning with Russia, it’s one idea while aligning with Europe is another.
    2:47:06 Our people have chosen Europe, it’s their choice, it’s our choice, the choice of our nation.
    2:47:08 And I think it’s very important.
    2:47:09 But first you have to end the war.
    2:47:10 Yes, you’re right.
    2:47:11 And we will.
    2:47:13 We want peace, you know?
    2:47:17 I mean, just to make it clear, we want peace.
    2:47:19 Just what I always say.
    2:47:22 You have to come to Ukraine and see for yourself.
    2:47:30 And people will tell you, “No, we can’t forgive those murderers who took our lives.”
    2:47:34 But we still want to make peace.
    2:47:45 And honestly, I think that the highest approval rating of the President of the United States of Trump now is in Ukraine.
    2:47:55 People really believe that he can truly help bring peace.
    2:48:04 Now they have faith, faith that he can make it happen, that he can support Ukraine and he can stop Putin.
    2:48:09 And that he will make sure Putin doesn’t get everything he wants.
    2:48:16 This is very important and it’s why we believe that we must not lose this opportunity.
    2:48:19 I hope you find the path to peace. Thank you.
    2:48:20 Thank you so much.
    2:48:21 Thank you for talking to me.
    2:48:22 Thank you for coming.
    2:48:26 Thank you.
    2:48:30 You started.
    2:48:32 Thank you very much.
    2:48:39 Thank you for listening to this conversation with the President of Ukraine Volodymyr Zelensky.
    2:48:46 And now let me answer some questions and try to reflect on and articulate some things I’ve been thinking about.
    2:48:59 If you would like to submit questions, including in audio and video form, go to lexfreedman.com/ama or to contact me for whatever other reason, go to lexfreedman.com/contact.
    2:49:02 First, I got a bunch of questions about this.
    2:49:10 So let me chat about the topic of language and let’s say the mechanics of multilingual conversation.
    2:49:13 Perhaps the details are interesting to some people.
    2:49:20 It also allows me to reflect back on the puzzle of it in this episode and what I can do better next time.
    2:49:30 I already explained in the intro the symbolic, historic, and geopolitical complexity of the choice of language in the conversation with President Zelensky.
    2:49:37 As I said, the Russian language is one that the President speaks fluently and was his primary language for most of his life.
    2:49:41 I speak Russian fluently as well.
    2:49:44 It’s the only common language we are both fluent in.
    2:49:50 So any other combination of language is required an interpreter, including when I spoke English.
    2:50:01 He did need an interpreter when I spoke English and, just like I was, was visibly encumbered and annoyed by the process of interpretation.
    2:50:09 This is why I tried to speak in Russian to the President instead of English so that he can directly understand me without an interpreter.
    2:50:13 I’m willing to take the hit for that as I am for everything else.
    2:50:15 I’m not trying to protect myself.
    2:50:21 I’m trying to do whatever is best for the conversation, for understanding.
    2:50:28 Though it has been getting harder and harder to stay open, vulnerable, and raw in public,
    2:50:39 while the swarms of chanting internet mobs stop by with their torches and their color-coded hats, flags, frogs, pronouns, and hashtags.
    2:50:47 Anyway, there is a lot of nuanced aspects of the conversational language that I would like to explain here.
    2:50:49 I’ll try to be brief.
    2:50:56 I can recommend a lot of books on this topic of language and communication that reveal just how amazing this technology of language is.
    2:51:04 For example, for a good overview, I recommend John McWarder’s books and especially his lecture series for the Great Courses on Language.
    2:51:06 There are several.
    2:51:14 In the Story of Human Language series, he gives a great discussion on spoken language versus written language,
    2:51:17 and that spoken language often relaxes the rules of communication.
    2:51:31 It uses shorter packets of words, loads in a bunch of subtle cues and meanings, all of which, like I’m trying to describe, are lost when there’s an interpreter in the loop.
    2:51:39 Let me also describe some relevant characteristics of my peculiar language abilities in quotes.
    2:51:41 I was never good at speaking.
    2:51:44 I listen, think, and understand better than I speak.
    2:51:51 For me, this is true for both English and Russian, but it is especially true for Russian.
    2:51:59 The Russian language allows for much more room for wit, nonstandard terms of phrase, metaphors, humor, rhyme, musicality,
    2:52:07 and, let’s say, deforming of words that create a lot of room for creativity in how meaning and emotion are conveyed.
    2:52:10 You could do the same in English, but it’s harder.
    2:52:15 I actually find that brits are sometimes very good at this.
    2:52:18 Like, one of my favorite humans to talk to is Douglas Murray.
    2:52:28 Setting the content of the conversation aside, the sheer linguistic brilliance and wit of dialogue with Douglas is a journey in itself.
    2:52:35 I think Christopher Hitchens had the same, and many others, like I said, especially Brits.
    2:52:45 Anyway, I’m able to detect and understand a lot of dynamism and humor in the Russian language, but I’m slow to generate it,
    2:52:47 in part because I just don’t practice.
    2:52:50 I have very few Russian-speaking friends.
    2:52:56 Funny enough, most of them are Ukrainian, but they speak with me and each other in Russian.
    2:53:01 But of course, as I mentioned, this is slowly changing due to the war.
    2:53:09 But I tried to speak to the president in Russian, so he would avoid needing an interpreter as much as possible.
    2:53:16 One of the things I want to improve for next time is to make sure I give very good equipment for interpretation,
    2:53:27 and arrange for an interpreter I trust to be exceptionally good for the dynamism and the endurance of a three-hour conversation in the style that I tried to do.
    2:53:31 Just to give you some behind-the-scenes details of the experience.
    2:53:42 Equipment-wise, funny enough, it’s not actually so trivial to set up wireless connections from us, the two people talking to the interpreter, and then back to us,
    2:53:46 in a way that’s super robust and has clean audio.
    2:53:51 The audio I had in my ear from the interpreter had a loud background noise,
    2:54:00 so the whole time I’m hearing a shh sound with the voice of the interpreter coming in very quietly.
    2:54:05 What a wonderful experience. This whole life is, frankly.
    2:54:14 Plus, his translation was often incomplete, at least for me, so I had to put together those puzzle pieces continuously.
    2:54:21 But, again, it worked out, and hopefully our constant switching of languages and having a meta-discussion about language
    2:54:29 provided good insights as to the complexity of this fight for our nation’s identity and sovereignty that Ukraine is going through.
    2:54:40 Behind-the-scenes, off-mic, on a personal level, President Zelensky was funny, thoughtful, and just a kind-hearted person.
    2:54:46 And really, the whole team were just great people. It was an experience I’ll never forget.
    2:54:54 After the conversation was recorded, the next challenge was to translate all of this and overdub it and do it super quickly.
    2:55:02 Like, these words I’m speaking now have to be translated and dubbed into Ukrainian and Russian.
    2:55:09 Eleven Labs were really helpful here, especially in bringing the President’s voice to life in different languages.
    2:55:16 But even more than that, they’re just an amazing team who inspired me and everyone involved.
    2:55:22 Please go support Eleven Labs. They are a great company and great people.
    2:55:30 The translation is separate from the text-to-speech and was done in part by AI and a lot by human.
    2:55:35 This is where the fact that we had constant switching between three languages was a real challenge.
    2:55:40 So there are six transition mappings that have to be done.
    2:55:48 English to Ukrainian and Russian, Ukrainian to English and Russian, and then Russian to English and Ukrainian.
    2:55:53 Continuously, sentence by sentence, sometimes word by word.
    2:56:00 And each combination of language to language translation is best done by a person who specializes in that kind of mapping.
    2:56:03 So it was all a beautiful mess, all of it.
    2:56:07 And on top of all that, great translation is super hard.
    2:56:12 For example, I’ve read and listened to a lot of the CF scheme, both English and Russian,
    2:56:16 and studied the process of how these books are translated by various translators.
    2:56:22 You can spend a week discussing how to translate a single important sentence well.
    2:56:28 Obviously, in this situation, we don’t have weeks. We have hours for the whole thing.
    2:56:37 One of the things I regret is not putting enough time into the hiring and selecting great translators from Russian and Ukrainian to English, especially.
    2:56:47 I think translation is an art, and so getting a good translator that works well with us is a process that needs more time and effort.
    2:56:49 I’ll be doing that more this month.
    2:56:54 By the way, we have a small but amazing team.
    2:56:58 If you want to join us, go to lexfreeman.com/hiring.
    2:57:04 If you’re passionate, work hard, and everyone on the team loves working with you, then we’ll do some epic stuff together.
    2:57:06 We’d love to work with you.
    2:57:16 Like I said about 11 Labs, there are a few things as awesome in life as being able to work hard with an amazing team towards a mission all of us are passionate about.
    2:57:20 Anyway, I’ll probably be doing a few more interviews in the Russian language.
    2:57:28 I do have a lingering goal of interviewing the mathematician Gagore Perlman, but there’s also others.
    2:57:37 I will also work on improving my whole pipeline, both equipment-wise and interpreter-wise, in doing these conversations in other languages.
    2:57:47 Because there are many that I would like to do in languages I don’t speak at all, like Chinese, Mandarin, or Spanish, Arabic, Hindi, Portuguese, French, German.
    2:57:56 I see language as both a barrier for communication and a portal into understanding the spirit of a people connected by that language.
    2:58:02 It’s all a weird and beautiful puzzle, and I’m just excited to get the chance to explore it.
    2:58:06 Alright, I got a question on how I prepare for podcasts.
    2:58:11 So this has evolved and expanded more and more over time.
    2:58:15 There are some podcasts that I prepare hundreds of hours for.
    2:58:23 In AI terms, let’s say, first I’m training a solid background model by consuming as much variety on the topic as possible.
    2:58:32 A lot of this comes down to picking high signal sources, whether it’s blogs, books, podcasts, YouTube videos, ex-accounts, and so on.
    2:58:41 For this conversation with President Zelensky, for example, since February 2022, I’ve spoken with hundreds of people on the ground.
    2:58:50 I’ve read Kindle or audiobook about 10 books fully, and then I skimmed about 20 more.
    2:58:55 And I don’t mean books about Zelensky, although he does appear in some of them.
    2:59:01 I mean books where this conversation was fully in the back of my mind as I’m reading the book.
    2:59:06 So, for example, I read Red Famine by Ann Applebaum.
    2:59:09 It’s about Hallinomore.
    2:59:11 Does it directly relate to Zelensky?
    2:59:19 Not on the surface, no, but it sort of continues to weave the fabric of my understanding of people, of the history of the region.
    2:59:30 But it’s really important for me to read books from various perspectives, and I’m always trying to calculate the bias under which the author operates,
    2:59:35 and adjusting for that in my brain as I integrate the information.
    2:59:43 For example, Ann Applebaum’s book, Gulag, is very different from Alexander Solzhenitsyn’s Gulag Archipelago.
    2:59:47 The former is a rigorous comprehensive historical account.
    2:59:54 The latter is a literary, psychological, and personal portrait of Soviet society.
    2:59:57 Both, I think, are extremely valuable.
    3:00:03 On the bias front, for example, The Rise and Fall of the Third Reich by William Sharer is a good example.
    3:00:13 It is full of bias, but he was there, and to me, he has written probably one of the greatest, if not THE greatest book on the Third Reich ever.
    3:00:18 But like I said, it has a lot of inaccuracies and biases. You can read about them online if you like.
    3:00:28 But my job in this case, and in all cases, is to adjust based on my understanding of the author’s biases, and take the wisdom from the text where it could be found,
    3:00:34 and putting the inaccuracies aside into the proverbial dustbins of history.
    3:00:41 So as I’m reading, I’m writing down my thoughts as they come up, always digging for some deeper insight about human nature.
    3:00:49 If I’m at my computer, I’ll write it down in Google Doc, sometimes use Notion or Obsidian.
    3:00:52 If I’m not at my computer, I’ll use Google Keep.
    3:01:00 So for example, if I’m listening to an audiobook and I’m running along the river, if a good idea comes to mind, I’ll stop, think for a few seconds,
    3:01:04 and then do speech-to-text note in Google Keep.
    3:01:11 By the way, listening to audiobook at 1x speed. Old school.
    3:01:17 And eventually I get a gigantic pile of thoughts and notes that I look over to refresh my memory.
    3:01:23 But for the most part, I just throw them out. It’s a background model building process.
    3:01:27 By the way, LLMs are increasingly becoming useful here for organization purposes,
    3:01:36 but have not yet been useful at least for me, and I do try a lot for insight extraction or insight generation purposes.
    3:01:43 I should mention that my memory for specific facts, names, dates, quotes is terrible.
    3:01:49 What I remember well is high-level ideas. That’s just how my brain works for better or for worse.
    3:02:01 I realize that sometimes forgetting all of the details and the words needed to express them makes me sound simplistic and even unprepared.
    3:02:07 I’m not. But that’s life. We have to accept our flaws and roll with them.
    3:02:13 Aside from books, I also listen to a lot of podcasts and YouTube videos where people are talking about the topic.
    3:02:22 So, for the President Zelensky episode, I listen probably to hundreds of hours of content from his supporters and from his critics from all sides.
    3:02:29 Again, I choose who to listen to based not on their perspective, but based on SNR, signal to noise ratio.
    3:02:36 If I’m regularly getting insights from a person, I will continue listening to them, whether I agree or disagree.
    3:02:46 In the end, this turns out to be a lot of hours of prep, but to say that it’s X hours per episode is not accurate because a lot of this preparation transfers from one guest to another,
    3:02:51 even when there’s an insane level of variety in the guests. We’re all humans after all.
    3:02:57 There is a thread that connects all of it together. Somehow, you feel it closely enough.
    3:03:07 For more technical guests in STEM fields, I’ve read papers, a lot of papers, and also technical blog posts and technical tweet threads.
    3:03:13 This is a very different process. For AI or CS related topics, I will run other people’s code.
    3:03:19 I will write my own, implement stuff from scratch. If it’s a software company, I’ll use their tools and software if relevant.
    3:03:28 But in the actual conversation, I constantly am searching for simple but profound insights at various levels of abstraction.
    3:03:40 Sometimes this means asking a trivial question in hopes of uncovering the non-trivial, counterintuitive but fundamental idea that opens the door to a whole new way of looking at the field.
    3:03:53 And actually, every guest is their own puzzle, like preparing for Rick Rubin was me listening to hundreds of songs he produced and even learning some on guitar, like “Heart” by Johnny Cash.
    3:04:05 Preparing for the cursor team episode meant, obviously, I had to use cursor fully for several weeks, all of its features, so I switched completely for VS Go to cursor.
    3:04:18 For Paul Rosalie, round two, especially, I literally went deep into the jungle with Paul and almost died, fully taking the leap toward adventure with him.
    3:04:24 When he gets close to the conversation, I’ll start working on the actual interview questions and notes.
    3:04:29 And there I’m asking myself, what am I personally curious about?
    3:04:39 Like, I love podcasts. I’m a big fan of many, many podcasts. And so I ask myself, what would I want this person to explain on a podcast?
    3:04:48 And maybe what aspect of their thought process or their humanity would I want to be surfaced or have the chance to be surfaced?
    3:04:57 In the actual conversation, I always try to put my ego aside completely and do whatever it takes to have a good conversation and serve the listener.
    3:05:09 This means asking questions simply, trying to define terms and give context if needed, being open-minded, vulnerable, curious, and challenging the guests when needed.
    3:05:17 Despite the claims on the internet, I do ask a lot of challenging questions, including follow-ups, but always with empathy.
    3:05:24 I don’t need to be right. I don’t need to signal my moral or intellectual superiority to anyone.
    3:05:41 I try to do the opposite, actually, because I want the guests to open up, and I trust the intelligence of the listener to see for themselves if the guest is full of shit or not, to detect the flaws and the strengths of how the guest thinks or who they are deep down.
    3:05:51 A lot of times, when interviewers grill the guest, it doesn’t reveal much, except give a dopamine hit to the echo chambers who hate the guest.
    3:05:58 As I said in the intro, I believe the line between good and evil does run through the heart of every man.
    3:06:09 The resulting conversations are sometimes a failure, sometimes because they are too short, sometimes because the chemistry was just not working, sometimes because I fucked it up.
    3:06:16 I try to take risks, give it everything I got, and enjoy the roller coaster of it all, no matter what.
    3:06:26 And, as I said, I trust the listener to put it all together, and I trust the critic to tear it apart, and I love you all for it.
    3:06:31 Alright, I got a bit of a fun question. It’s a long one.
    3:06:38 So, Deleon, cool name, wrote in saying he spotted me out in the wild and had a question about it.
    3:06:49 He wrote, I saw Lex working at the Detroit airport between flights. I hesitated and ultimately decided not to interrupt since he was in focus mode, true.
    3:06:53 Lex had his headphones earbuds on, listening to brown noise.
    3:07:03 Microsoft Surface propped up at eye level, Kinesis advantage keyboard on the table. The use of Microsoft Windows is surprising, but it has been discussed in the past, true.
    3:07:15 The ergonomics of the setup, surface at eye level, means that Lex cares about his health, but the anomalously large Kinesis advantage keyboard seems like such a burden to lug around airports.
    3:07:23 I cannot help but ask, why is it that Lex is going through the hassle to bring this absolutely large keyboard with him as carry on?
    3:07:26 It barely fits in a backpack.
    3:07:32 Carrying it around must be necessary for Lex for some reason. I love the puzzle of this that you’re trying to think through this.
    3:07:38 The pain of lugging this tool around must be much smaller than the problem it solves for a question mark.
    3:07:49 What problem does this keyboard solve? What makes it necessary at the airport? Productivity, health, RSI? Good questions. Thank you, Delia.
    3:07:54 Great question. It made me smile. So I thought I’d answer. I remember that day.
    3:08:07 There was something else about that day aside from the keyboard that I miss. So I am filled with a melancholy feeling that is appropriate for the holiday season.
    3:08:14 So let me try to set the melancholy feeling aside, answer a question about my computer setup when I’m traveling.
    3:08:25 So whether I’m going to SF Boston, Austin, London or the front in Ukraine, I am always bringing the Kinesis keyboard.
    3:08:38 I don’t have RSI or any other health issues of that kind that I’m aware of, even though I’ve been programming, playing guitar, doing all kinds of combat sports my whole life.
    3:08:44 All of which put my hands and fingers in a lot of precarious positions and situations.
    3:08:49 For that reason, and in general, ergonomics have never been a bit concerned for me.
    3:08:58 I can work on a crappy chair and table, sleep on the floor. It’s all great. I’m happy with all of it.
    3:09:02 So why Kinesis, which by the way is right here.
    3:09:16 I had to think about it. Your question actually made me reflect and I was hoping as I’m answering it, the truth will come out on many levels.
    3:09:27 So it is true that I’m more productive with it. I can type and correct mistakes very fast compared to a regular keyboard, both in natural language typing and in programming.
    3:09:39 So fast enough, I think where it feels like I can think freely without the physical bottlenecks and constraints of fingers moving.
    3:09:48 The bit rate in New Orleans parlance is high enough for me to not feel like there is cognitive friction of any kind.
    3:09:52 But the real answer may be the deeper, more honest answer or something else.
    3:10:02 I’ve used the Kinesis keyboard for over 20 years. So maybe it’s like one of those love stories where a guy and a girl love each other.
    3:10:08 And you try to quit because it doesn’t quite work. But every time you leave, you ask yourself why.
    3:10:15 And then you realize that when you’re together, your life is just full of simple joys.
    3:10:22 So what’s the point of leaving? What’s the point of life if not to keep close to you the things that bring you joy?
    3:10:34 Delian, like this keyboard, it brings me joy. It’s a bad metaphor over anthropomorphized perhaps, but I never promised a good one.
    3:10:39 I’m like a cheap motel on a road trip, low quality is part of the charm.
    3:10:44 I do have some good motel stories for another time. This does not feel like the appropriate time.
    3:10:56 All that said, to disagree with myself, I did use Emacs also for over 20 years and in a single week recently switched to VS Code and then cursor and never look back.
    3:11:00 So take my romantic nature with a grain of salt.
    3:11:12 So yes, eventually I’ll have to leave. But for now, you’ll keep finding me on occasion in a random airport somewhere listening to brown noise, writing away the hours on this Kinesis keyboard.
    3:11:25 Now, if you see me without it, maybe I’ll give you the same tanger of melancholy feeling I feel now in looking back to that airport in Detroit.
    3:11:37 Anyway, more about my travel setup. If anyone is curious, I usually do travel with a Windows laptop, but I am mostly using Linux on it through WSL, Windows Subsystem for Linux.
    3:11:42 And in some cases, I’m dual booting Linux and Windows.
    3:11:51 I also need to be able to video edit. So on longer trips, I usually have a bigger laptop with a bigger screen, lots of memory, good CPU, good GPU.
    3:11:55 All of that helps with video editing on Adobe Premiere.
    3:12:07 In general, I’m extremely minimalist except for the few let’s call them sentimental things like all my podcasts recording equipment fits into a small suitcase.
    3:12:13 I try to keep it as simple as possible. Thank you for the question and see you at the next airport.
    3:12:25 Alright, I think it’s time to bring things to a close. I’d like to give a big thanks to you for giving me your time and your support over the years. It means the world.
    3:12:41 If you want to get in touch with me, go to lexsrooming.com/contact. There you can get feedback, ask questions, request guests for the podcast, or submit the Coffee with Lex form if you just want to chat with me over a cup of coffee.
    3:12:50 I’ll be traveling across the world a bunch this year from Europe to South America and more so it would be cool to do some small meetups and meet some interesting people.
    3:12:56 This has been a journey of a lifetime. Thank you for everything.
    3:13:00 On to the next adventure. I love you all.
    3:13:16 [Music]

    Volodymyr Zelenskyy is the President of Ukraine. On YouTube this episode is available in English, Ukrainian, and Russian. Captions and voice-over audio tracks are provided in English, Ukrainian, Russian, and the original mixed-language version, with subtitles available in your preferred language. To listen to the original mixed language version, please select the English (UK) audio track audio track. The default is English overdub.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep456-sc
    See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

    Transcript:
    https://lexfridman.com/volodymyr-zelenskyy-transcript

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    President Zelenskyy’s X: https://x.com/ZelenskyyUa
    President Zelenskyy’s Instagram: https://instagram.com/zelenskyy_official
    President Zelenskyy’s Website: https://www.president.gov.ua/

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Notion: Note-taking and team collaboration.
    Go to https://notion.com/lex
    GitHub: Developer platform and AI code editor.
    Go to https://gh.io/copilot
    AG1: All-in-one daily nutrition drinks.
    Go to https://drinkag1.com/lex
    LMNT: Zero-sugar electrolyte drink mix.
    Go to https://drinkLMNT.com/lex
    Eight Sleep: Temp-controlled smart mattress cover.
    Go to https://eightsleep.com/lex
    BetterHelp: Online therapy and counseling.
    Go to https://betterhelp.com/lex

    OUTLINE:
    (00:00) – Introduction
    (20:17) – Language
    (30:06) – World War II
    (46:54) – Invasion on Feb 24, 2022
    (53:30) – Negotiating Peace
    (1:13:47) – NATO and security guarantees
    (1:26:39) – Sitting down with Putin and Trump
    (1:46:09) – Compromise and leverage
    (1:51:38) – Putin and Russia
    (2:01:30) – Donald Trump
    (2:12:01) – Martial Law and Elections
    (2:24:21) – Corruption
    (2:33:06) – Elon Musk
    (2:37:10) – Trump Inauguration on Jan 20
    (2:40:18) – Power dynamics in Ukraine
    (2:43:50) – Future of Ukraine
    (2:48:32) – Choice of language
    (2:58:02) – Podcast prep and research process
    (3:06:27) – Travel and setup
    (3:12:13) – Conclusion

    PODCAST LINKS:
    – Podcast Website: https://lexfridman.com/podcast
    – Apple Podcasts: https://apple.co/2lwqZIr
    – Spotify: https://spoti.fi/2nEwCF8
    – RSS: https://lexfridman.com/feed/podcast/
    – Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
    – Clips Channel: https://www.youtube.com/lexclips

    SOCIAL LINKS:
    – X: https://x.com/lexfridman
    – Instagram: https://instagram.com/lexfridman
    – TikTok: https://tiktok.com/@lexfridman
    – LinkedIn: https://linkedin.com/in/lexfridman
    – Facebook: https://facebook.com/lexfridman
    – Patreon: https://patreon.com/lexfridman
    – Telegram: https://t.me/lexfridman
    – Reddit: https://reddit.com/r/lexfridman

  • #455 – Adam Frank: Alien Civilizations and the Search for Extraterrestrial Life

    AI transcript
    0:00:11 The following is a conversation with Adam Frank, an astrophysicist interested in the evolution of star systems and the search for alien civilizations in our universe.
    0:00:19 And now a quick few second mention of each sponsor. Check them out in the description. It’s the best way to support this podcast.
    0:00:28 Let me say as a side note that I had to put a bunch of podcast episodes on hold to focus deeply on preparing for conversations with world leaders.
    0:00:34 So I apologize to include more sponsors on this episode than usual.
    0:00:40 They really wanted me to mention them this year and I’m not sure when I’m going to do another episode.
    0:00:45 We were going to do eight episodes this month, but instead I think we’re doing two.
    0:00:52 We’ll see every single day, every single hour, changes the plan, changes the situation, changes my life.
    0:01:00 So please be patient with me. There are no sponsor reads in the middle so you can skip this long and beautiful list.
    0:01:05 But I do try to make them interesting in case you do listen and I hope you do.
    0:01:13 In either case, please still check out the sponsors, buy their stuff. It is the best way to support this podcast.
    0:01:26 The sponsors are Encore for your ML stack, AidSleep for naps, Shopify for e-commerce, Natsuite for business, BetterHelp for the mind, Notion for notes, Element for electrolytes, and AG1 for nutrition.
    0:01:31 If you want to get in touch with me for whatever reason, go to lexfreeman.com/contact.
    0:01:36 Perhaps you could tell from my voice on top of everything else, I’m also sick.
    0:01:43 What a wonderful, beautiful, challenging life this is and I’m grateful for every second of it.
    0:01:47 All right, and now on to the full ad reads. Let’s go.
    0:01:59 This episode is brought to you by Encore, a platform that provides data focused AI tooling for data annotation, curation and management, and for model evaluation.
    0:02:11 For example, if you are an independent private or government agency that is running the drones that is flying all over New Jersey and the tri-state area,
    0:02:20 you might be doing the same kind of data annotation and collection, curation and management that Encore excels at.
    0:02:30 Also, if you’re an extraterrestrial species performing the same, I wonder what kind of computation tools alien civilizations have.
    0:02:36 At the physics level, computation is fundamentally a part of the fabric of the universe.
    0:02:49 So every advanced civilization would or surely would discover how to leverage that computation, how to organize that computation, how to access and communicate with that computation.
    0:02:58 Anyway, think of it, if you have a swarm of drones and you are the ruler of an alien civilization, want to collect some data about New Jersey,
    0:03:06 you are going to have to do some great machine learning and great machine learning is not just about the algorithms.
    0:03:08 It is so much more about the data.
    0:03:18 So whoever you are running the drone program over New Jersey, go try out Encore to curate, annotate and manage your AI data at Encore.com/Lex.
    0:03:20 That’s Encore.com/Lex.
    0:03:28 By the way, in all seriousness, I will probably talk about drones in New Jersey soon.
    0:03:38 I think it’s a fascinating mystery. Is it China? Is it aliens? Is it the U.S. government? Is it private companies within the U.S. government?
    0:03:43 Is it other nation states? Are nuclear weapons involved?
    0:03:49 And what are the mechanisms that ensure that the U.S. government is transparent about communicating with the discoverer?
    0:03:51 These are essential questions.
    0:03:53 Okay, onto A-Sleep.
    0:03:56 This episode is brought to you by A-Sleep and it’s pod 4 Ultra.
    0:04:02 You know, sleep makes me think about the night and I’ve been watching a lot of war movies.
    0:04:05 I’ve been watching a lot of war reporting.
    0:04:12 I’ve been watching a lot of conversations with soldiers and I’ve been talking to soldiers and there’s something about the night.
    0:04:18 There’s something about the quiet night that serves as the break from the hell of war.
    0:04:28 That’s the song from the Second World War, a song about a soldier writing to a woman he loves.
    0:04:34 That’s just it. Just like a man searched for meaning in the darkest hours of war.
    0:04:38 Those are the things that keep the flame of the heart going.
    0:04:48 Talking about these topics makes it difficult for me to then talk about A-Sleep and the technology and the comfort of a good night.
    0:04:52 Sleep, somewhere in America.
    0:05:00 That’s one of the things you discover when you travel, especially travel to a country that’s participating in war.
    0:05:09 That the basic comforts, the basic securities, the basic dreams and hopes and the ways of life are taken away.
    0:05:12 And still the human spirit persists.
    0:05:15 Anyway, this is supposed to be an ad read.
    0:05:23 Go to acleep.com/lex, use Code Lex to get up to $600 off your Pod 4 Ultra Purchase when bundled.
    0:05:25 That’s acleep.com/lex.
    0:05:32 This episode is also brought to you by Shopify, a platform designed for anyone to sell anywhere with a great looking online store.
    0:05:40 I’ve been reading a lot about the long history of the Silk Road, especially before and after the Mongol Empire and Jenghis Khan.
    0:05:47 I’ve been reading a lot about Jenghis Khan and the influence he had on revolutionizing the trade network.
    0:05:57 A lot of networks, the trade of not just goods, but information of knowledge, of languages, of ideas, of religions, of peoples.
    0:06:07 And it’s fascinating how roads of that nature, trade, first and foremost, can break down the barriers that divide peoples.
    0:06:10 I suppose it all starts with incentives.
    0:06:19 People are people and they have stuff they don’t need and they want to sell it and other people have stuff they want and they are willing to buy it.
    0:06:28 And those incentives that scale, overpower any kind of emotional, psychological, historical hatred and all those kinds of things.
    0:06:37 And it’s funny, the little incentives and the mechanisms of capitalism at its best can heal the wounds of war.
    0:06:45 Of course, they can also fuel the military industrial complex, which is the fuel of war.
    0:06:47 Oh, the double-edged sword.
    0:06:57 Anyway, take the Silk Road and fast forward to today and we have Shopify that you can sign up to for $1 per month trial period at Shopify.com/Lex.
    0:06:58 That’s all lowercase.
    0:07:02 Go to Shopify.com/Lex to take your business to the next level today.
    0:07:08 This episode is also brought to you by Netsuite and all in one cloud business management system.
    0:07:20 When I think about Netsuite and all the different administrative modules and the language, standardized language that allows them to communicate with each other.
    0:07:33 I think about all the empires throughout history that were able to create remarkable administrative systems, the Byzantine Empire, the Roman Empire, the Mongol Empire, as I mentioned.
    0:07:37 None of it works without paperwork.
    0:07:49 You know, bureaucracy rightfully so gets a bad rep, but it is best bureaucracy is necessary to manage the affairs of large organizations.
    0:07:55 You know, humans are very good at working with each other when they scale beyond a thousand people.
    0:07:57 So you need great administrative systems.
    0:08:03 And thankfully today, we have technology, we have tools like Netsuite to do just that.
    0:08:09 Take advantage of Netsuite’s flexible financing plan at Netsuite.com/Lex, that’s Netsuite.com/Lex.
    0:08:14 This episode is also brought to you by BetterHelp, spelled H-E-L-P, Help.
    0:08:25 One day in the distant future, AI systems will make for great therapists, but I think that’s a very dangerous road to walk down in the short term.
    0:08:31 I am a person who loves conversation and not small talk.
    0:08:36 The fake nice cities that alleviate social frictions, I’m not for that.
    0:08:41 I’m in for diving deep through conversation.
    0:08:47 And I think that is something that AI just can’t quite do yet and I would say not even close.
    0:08:48 It is an assistant.
    0:08:50 It is not a therapist.
    0:09:01 So the distinction, the differences, quite fascinating to analyze, to watch, to try to sort of elucidate and articulate clearly.
    0:09:10 Yeah, so I’m a big fan of talking to a human to explore your own mind and BetterHelp is a very easy, accessible way of doing that.
    0:09:16 Check them out at betterhelp.com/Lex and save in your first month as BetterHelp.com/Lex.
    0:09:27 This episode is brought to you by Notion, a note-taking system service app that I use and you should use, especially if you’re on a large team,
    0:09:34 to collaborate on all kinds of stuff, including notes and project management, wikis, all that kind of stuff.
    0:09:47 Nuclear weapons have been on my mind quite a bit and I think about the Manhattan Project and I think about the amount of incredible, rapid organization that was involved in that project.
    0:10:00 Just think about the coordination, the coordination of brilliant people working on separate parts of an incredibly complicated project where all of it has to be secret.
    0:10:07 So many of the people working on it may not even be aware of the bigger picture of it or the different modules involved.
    0:10:12 Just imagine the coordination required there, just truly, truly, truly incredible.
    0:10:16 And of course, imagine what modern day tools can do for that.
    0:10:29 Obviously, the Manhattan Project is a top secret project and a controversial one and a complicated one and one that I’ve done many episodes on in terms of its implications.
    0:10:42 But there’s a less controversial perspective on the Manhattan Project of just seeing it as a project that the entirety of a nation or maybe the entirety of a civilization takes on the moonshot project.
    0:10:48 We’re going to go to Mars, we’re going to go out there, we’re going to build something big together.
    0:10:58 I love projects like that at any scale, just the big togetherness where all the bullshit of distraction is thrown away and you just focus.
    0:11:03 So yeah, Notion helps with that kind of thing and they integrate AI extremely well.
    0:11:13 So you should try Notion AI for free when you go to Notion.com/Lex, that’s all lowercase, Notion.com/Lex to try the power of Notion AI today.
    0:11:21 This episode is also brought to you by Element, my daily zero sugar and delicious electrolyte mix.
    0:11:29 Did you know that salt in ancient Rome was a currency also referred to as white gold?
    0:11:47 How crazy is it that things like salt or cinnamon or frankly, gold and silver are things that all of us humans imbue with value for a time and even do horrific things to each other in order to attain more of it, the human greed for salt.
    0:11:51 So dark and so fascinating we humans are.
    0:12:07 Anyway, on a basic level, just thirst, something I’ve experienced in the Amazon jungle thirst for water and for that you need electrolytes to not just water, water and salt plus magnesium and potassium.
    0:12:13 That is the basic thing you want the most when it is gone.
    0:12:22 And I got the chance, the gift to experience it. Get a sample pack for free with any purchase. Try it at drinkelement.com/Lex.
    0:12:31 This episode is also brought to you by AG1, a drink I drink every day to feel better about myself.
    0:12:47 It’s basically a great multivitamin, it’s delicious and frankly, I feel quite sad that I’m out of travel packs and I’m going to be gone for a time and I will not have AG1.
    0:13:00 AG1 and Element are things that make me feel like I’m home, like everything’s going to be okay. I am bringing Element with me because it has these packets, but I went through all the AG1 travel packs.
    0:13:11 So that silly little thing is one of the things that will make me feel homesick. Funny how that is. It’s the little things.
    0:13:29 Anyways, the crazy things I do in terms of physical and mental perturbations to the bodily equilibrium on a daily basis is something that is rescued in part by making sure I get AG1 every single day.
    0:13:38 What am I going to do without AG1? You know what, I’ll probably bring some with me. I changed my mind now and you should do the same.
    0:13:45 They’ll give you one month supply of fish oil when you sign up at drinkag1.com/Lex.
    0:13:56 If you’re still listening to this, thank you. I’m deeply grateful for you, for your support, for being there for so many years. I love you all.
    0:14:06 This is the Lex Friedman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Adam Frank.
    0:14:27 You wrote a book about aliens. So the big question, how many alien civilizations are out there?
    0:14:41 Yeah, that’s the question, right? The amazing thing is that after two and a half millennia of, you know, people yelling at each other or setting each other on fire occasionally over the answer, we now actually have the capacity to answer that question.
    0:14:52 So in the next 10, 20, 30 years, we’re going to have data relevant to the answer to that question. We’re going to have hard data finally that will one way or the other.
    0:15:01 You know, even if we don’t find anything immediately, we will have gone through a number of planets. We’ll be able to start putting limits on how common life is.
    0:15:07 The one answer I can tell you, which is was an important part of the problem is how many planets are there, right?
    0:15:18 And just like people have been arguing about the existence of life elsewhere for 2,500 years, people have been arguing about planets for the exact same amount of time, right?
    0:15:27 You can see Aristotle yelling at Democritus about this. You know, you can see that they had very wildly different opinions about how common planets were going to be and how unique Earth was.
    0:15:36 And that question got answered, right? Which is pretty remarkable that in a lifetime, you can have a 2,500 year old question. The answer is they’re everywhere.
    0:15:43 There are planets everywhere. And it was possible that planets were really rare. We didn’t really understand how planets formed.
    0:15:55 And so if you go back to, say, the turn of the 20th century, there was a theory that said planets formed when two stars passed by each other closely and then material was gravitationally squeezed out.
    0:16:05 In which case those kinds of collisions are so rare that you would expect one in a trillion stars to have planets. Instead, every star in the night sky has planets.
    0:16:18 So one of the things you’ve done is simulated the formation of stars. How difficult do you think it is to simulate the formation of planets, like simulator solar system through the entire evolution of the solar system?
    0:16:25 This is kind of a numerical simulation sneaking up to the question of how many planets are there.
    0:16:42 That actually we’re able to do now. There is, you can run simulations of the formation of planetary system. So if you run the simulation, really where you want to start is a cloud of gas, these giant interstellar clouds of gas that may have, you know, a million times the mass of the sun in them.
    0:16:56 And so you run a simulation of that. It’s turbulent. The gas is roiling and tumbling. And every now and then you get a place where the gas is dense enough that gravity gets hold of it and it can pull it downward. So you’ll start to form a protostar.
    0:17:03 And a protostar is basically the young star of, you know, this ball of gas where nuclear reactions are getting started.
    0:17:17 But it’s also a disk. So you, as material falls inward, because it’s everything’s rotating, as it falls inward, it’ll spin up and then it’ll form a disk. Material will collect in what’s called an accretion disk or a protoplanetary disk.
    0:17:27 And you can simulate all of that. Once you get into the disk itself and you want to do planets, things get a little bit more complicated because the physics gets more complicated. Now you got to start worrying about dust.
    0:17:44 Because actually dust, which is just dust is the wrong word. It’s smoke, really. These are the tiniest bits of solids. They will coagulate in the disk to form pebbles, right? And then the pebbles will collide to form rocks and the rocks will form boulders, etc, etc.
    0:17:59 That process is super complicated, but we’ve been able to simulate enough of it to begin to get a handle on how planets form, how you accrete enough material to get the first protoplanets or planetary embryos, as we call them.
    0:18:17 And then the next step is those things start slamming into each other to form planetary-sized bodies. And then the planetary bodies slam into each other. Earth, the moon came about because there was a Mars-sized body that slammed into the earth and basically blew off all the material that eventually formed the moon.
    0:18:23 And all of them have different chemical compositions, different temperatures?
    0:18:43 Yeah. So the temperature of the material in the disk depends on how far away you are from the star. So it decreases, right? And so there’s a really interesting point. So like, you know, close to the star, temperatures are really high. And the only thing that can condense, that can kind of freeze out is going to be stuff like metals.
    0:19:00 So that’s why you find Mercury is this giant ball of iron, basically. And then as you go further out, stuff, you know, the gas gets cooler, and now you can start getting things like water to freeze, right? So there’s something we call the snow line, which is somewhere in our solar system out around between Mars and Jupiter.
    0:19:21 And that’s the reason why the giant planets in our solar system, Jupiter, Saturn, Uranus and Neptune, all have huge amounts of ice in them, or water and ice. Actually, Jupiter and Saturn don’t have so much, but the moons do. The moons have so much water in them that there’s oceans, right? That we’ve got a number of those moons have got more water on them than there’s water on earth.
    0:19:41 Do you think it’s possible to do that kind of simulation to have a stronger and stronger estimate of how likely an earth-like planet is? Can we get the physics simulation done well enough to where we can start estimating, like, what are the possible earth-like things that could be generated?
    0:20:02 Yeah, I think we can. I think we’re learning how to do that now. So, you know, one part is, like, trying to just figure out how planets form themselves and doing the simulations, like that cascade from dust grains up to planetary embryos. That’s hard to simulate, because you’ve got to do both the gas and you’ve got to do the dust and the dust colliding and all that physics.
    0:20:30 Once you get up to a planet-sized body, then, you know, you kind of have to switch over to almost like a different kind of simulation. There often what you’re doing is you’re doing, you know, sort of, you’re assuming the planet is this sort of spherical ball, and then you’re doing, you know, like a 1D, a radial calculation, and you’re just asking, like, all right, how is this thing going to, what is the structure of it going to be? Like, am I going to have a solid iron core, or am I going to get a solid iron core with that liquid iron core out around it, like we have on earth?
    0:20:43 And then you get, you know, a silicate, kind of a rocky mantle, and then across all of those details, those are kind of beyond being able to do full 3D simulations from ab initio, from scratch. We’re not there yet.
    0:20:47 How important are those details, like the crust and the atmosphere, do you think?
    0:21:17 Hugely important. So I’m part of a collaboration at the University of Rochester where we’re using the giant laser. It’s literally, this is called the laboratory for laser energetics. We got a huge grant from the NSF to use that laser to, like, slam tiny pieces of silica to understand what the conditions are like at, you know, the center of the earth, or even more importantly, the center of super earths, like, the most, this is what’s wild. The most common kind of planet in the universe we don’t have in our solar system.
    0:21:42 Which is amazing, right? So the, we’ve been able to study enough or observe enough planets now to get a census. You know, we pretty, you know, we kind of have an idea of what who’s average, who’s weird, and our solar system’s weird, because the average planet has a mass between somewhere between a few times the mass of the earth to maybe, you know, 10 times the mass of the earth, and that’s exactly where there are no planets in our solar system.
    0:21:59 So the smaller ones of those we call super earths, the larger ones we call sub-neptunes. And they’re anybody’s guess. Like, we don’t really know what happens to material when you’re squeezed to those pressures, which is like millions, tens of millions of times the pressure on the surface of the earth.
    0:22:07 So those details really will matter of what’s going on in there, because that will determine whether or not you have, say, for example, plate tectonics.
    0:22:20 We think plate tectonics may have been really important for life on earth, for the evolution of complex life on earth. So it turns out, and this is sort of the next generation where we’re going with the, the understanding the evolution of planets in life.
    0:22:31 It turns out that you actually have to think hard about the planetary context for life. You can’t just be like, oh, there’s a warm pond, you know, and then some interesting, you know, chemistry happens in the warm pond.
    0:22:39 You actually have to think about the planet as a whole and what it’s gone through in order to really understand whether a planet is a good place for life or not.
    0:22:44 Why do you think plate tectonics might be useful for the formation of complex life?
    0:22:51 There’s a bunch of different things. One is that, you know, the earth went through a couple of phases of being a snowball planet.
    0:23:03 Like we, you know, we went into a period of glaciation where the pretty much the entire planet was under ice. The oceans were frozen. You know, early on in earth history, there was no, there was barely any land.
    0:23:10 We were actually a water world, you know, with just a couple of Australia sized cratons, they call them proto continents.
    0:23:14 So those, we went through these snowball earth phases.
    0:23:22 And if it wasn’t for the fact that we had kind of an active plate tectonics, which had a lot of volcanism on it, we could have been locked in that forever.
    0:23:33 Like once you get into a snowball state, a planet can be trapped there forever, which is, you know, maybe you already had life form, but then because it’s so cold, you may never get anything more than just microbes, right?
    0:23:46 So what plate tectonics does is because it fosters more volcanism, is that you’re going to get carbon dioxide pumped into the atmosphere, which warms the planet up and gets you out of the snowball earth phase.
    0:23:49 But even more, there’s even more really important things.
    0:24:01 I just finished a paper where we were looking at something called the hard steps model, which is this model that’s been out there for a long time that purports to say, intelligent life in the universe will be really rare.
    0:24:07 And it made all these assumptions about the earth’s history, particularly at the history of life and the history of the planet or have nothing to do with each other.
    0:24:15 And it turns out, as I was doing the reading for this, that earth probably early on had a had a more mild form of plate tectonics.
    0:24:18 And then somewhere about a billion years ago, it ramped up.
    0:24:21 And that ramping up changed everything on the planet.
    0:24:22 Because here’s a funny thing.
    0:24:25 The earth used to be flat, what I mean by that, right?
    0:24:28 So all the flat earthers out there can get excited for one second clip it.
    0:24:35 What I mean by that is that there really weren’t many mountain ranges, right?
    0:24:39 The beginning of, I think the term is orogenesis, mountain building.
    0:24:48 The true Himalayan style giant mountains didn’t happen until this more robust form of plate tectonics, where the plates are really being driven around the planet.
    0:24:54 And that is when you get the crusts hitting each other and they start pushing into these Himalayan style mountains.
    0:25:02 The weathering of that, the erosion of that puts huge amounts of nutrients, you know, things that microbes want to use into the oceans.
    0:25:12 And then what we call the net primary productivity, the, you know, the photo, the bottom of the food chain, how much sugars they are producing, how much photosynthesis they’re doing.
    0:25:15 Shot up by a factor of almost a thousand, right?
    0:25:21 So the fact that you had plate tectonics supercharged evolution in some sense.
    0:25:33 You know, like we’re not exactly sure how, how it happened, but it’s clear that the amount of life, the amount of living activity that was happening really got a boost from the fact that suddenly there was plate, this new vigorous form of plate tectonics.
    0:25:43 So it’s nice to have turmoil in terms of temperature, in terms of surface geometries, in terms of the chemistry of the planet, turmoil.
    0:25:45 Yeah, that’s actually really true.
    0:25:49 Because what happens is if you look at the history of life, that’s a really, you know, it’s an excellent point you’re bringing up.
    0:25:56 If you look at the history of life on earth, we get, you know, a biogenesis somewhere around at least 3.8 billion years ago.
    0:26:03 And that’s the first microbes they kind of take over enough that they really do, you get a biosphere, you get a biosphere that is actively changing the planet.
    0:26:09 But then you go through this period, they call the boring billion, we’re like, it’s a billion years and it’s just microbes.
    0:26:09 Nothing’s happening.
    0:26:10 It’s just microbes.
    0:26:12 I mean, the microbes are doing amazing things.
    0:26:14 They’re inventing fermentation.
    0:26:17 Thank you very much for we appreciate that.
    0:26:28 But it’s not until sort of you get probably this, these continents slamming into each other, you really get the beginning of continents forming and driving changes that evolution has to respond to.
    0:26:36 That on a planetary scale, this turmoil, this chaos is creating new niches, as well as closing other ones.
    0:26:38 And biology, evolution has to respond to that.
    0:26:47 And somewhere around there is when you get the Cambrian explosion is when suddenly everybody plan, you know, evolution goes on an orgy, essentially.
    0:26:54 So yeah, it does look like the that chaos or that turmoil was actually very helpful to evolution.
    0:27:02 I wonder if there is some extremely elevated levels of chaos, almost like catastrophes behind every leap of evolution.
    0:27:04 Like, you’re not going to have leaps.
    0:27:10 Like in human societies, we have like an Einstein that comes up with a good idea.
    0:27:24 But it feels like in an evolutionary time scale, you need some real big drama going on for for the evolutionary system to have to come up to a solution to that drama, like extra complex solution to that drama.
    0:27:26 Well, I think what I’m not sure if that’s true.
    0:27:29 I don’t know if it needs to be like an almost extinction event, right?
    0:27:33 Is it certainly true that we have gone through almost extinction events?
    0:27:42 Sorry, we’ve had, you know, five mass extinctions, but you don’t necessarily see that like there was this giant evolutionary leap happening after those.
    0:27:48 So, you know, with the comet impact, the KT boundary, certainly, you know, lots of niches opened up.
    0:27:49 And that’s why we’re here, right?
    0:27:56 Because, you know, our ancestors were just a little basically rodents, rats living under the footsteps of the dinosaurs.
    0:28:00 And it was that comet impact that opened the route for us.
    0:28:04 But it wasn’t, I mean, that still took another, you know, 65 million years.
    0:28:06 It wasn’t like this thing immediately happened.
    0:28:23 But what we found with this hard steps paper, because the whole idea of the hard steps paper was, it was one of these anthropic reasoning kinds of things where Brandon Carter said, Oh, look, the intelligence doesn’t show up on earth until about, you know, almost close to when the end of the sun’s lifetime.
    0:28:32 And so he’s like, well, there should be no reason why the sun’s lifetime and the time for evolution to produce intelligence should be the same.
    0:28:36 And so therefore, and he goes through all this reasoning, anthropic reasoning.
    0:28:43 And he ends up with the idea that like, oh, it must be that the odds of getting intelligence are super low.
    0:28:45 And so that’s the hard steps, right?
    0:28:48 So there was a series of steps in evolution that were, you know, very, very hard.
    0:28:55 And because that you can calculate some probability distributions and everybody loves a good probability distribution and they went a long way with this.
    0:29:14 But it turns out that the whole thing is flawed because on one, you know, when you look at it, of course, the timescale for the sun’s evolution and the timescale for evolution on life are coupled because life and the the timescale for evolution of the earth is coupled is about the same timescale as the evolution is the sun.
    0:29:15 It’s billions of years.
    0:29:17 The earth evolves over billions of years.
    0:29:19 And life and the earth co-evolve.
    0:29:26 That’s what Brandon Carter didn’t see is that actually the fate of the earth and the fate of life are inextricably combined.
    0:29:29 And this is really important for astrobiology, too.
    0:29:33 Life doesn’t happen on a planet.
    0:29:34 It happens to a planet.
    0:29:37 So this is something that David Grinspoon and Sarah Walker both say.
    0:29:39 And, you know, I agree with this.
    0:29:40 It’s a really nice way of putting it.
    0:29:49 So, you know, play tectonics, the evolution of oxygen of an oxygen atmosphere, which only happened because of life.
    0:29:57 These things, you know, these are things that are happening where life and the planet are sort of sloshing back and forth.
    0:30:09 And so rather than to your point about do you need giant catastrophes, maybe not giant catastrophes, but what happens is as the earth and life are evolving together, windows are opening up, evolutionary windows.
    0:30:23 Like, for example, life put oxygen into the atmosphere when life invented this new form of photosynthesis about two and a half billion years ago that broke water apart to, you know, work to do its chemical shenanigans.
    0:30:27 It broke water apart and pushed oxygen into the atmosphere.
    0:30:28 That’s why there’s oxygen in the atmosphere.
    0:30:29 It’s only because of life.
    0:30:35 That opened up huge possibilities, new spaces for evolution to happen.
    0:30:38 But it also changed the chemistry of the planet forever.
    0:30:48 So the evolution, the introduction of oxygen photosynthesis changed the planet forever and it opened up a bunch of windows for evolution that wouldn’t have happened otherwise.
    0:30:52 Like, for example, you and I, we need that amount of oxygen.
    0:30:59 Big brain creatures need an oxygen rich atmosphere because oxygen is so potent for metabolism.
    0:31:04 So you couldn’t get intelligent creatures 100 million years after the planet formed.
    0:31:15 So really on a scale of a planet, when there’s billions, trillions of organisms on a planet, they can actually have planetary scale impact.
    0:31:22 So the chemical shenanigans of an individual organism, when scaled out to trillions, can actually change a planet.
    0:31:33 Yeah, and we know this for a fact now, like this is, so there was this thing, Gaia theory that, you know, with James Lovelock introduced in the 70s, and then Lynn Margallus, the biologist, Lynn Margallus together.
    0:31:51 So this Gaia theory was the idea that planets pretty much take a, or sorry, life takes over a planet, life hijacks the planet in a way that some total of life creates these feedbacks between the planet and the life such that it keeps the planet habitable.
    0:31:52 It’s kind of a homeostasis, right?
    0:31:55 I can go out like right now outside, it’s 100 degrees, right?
    0:32:05 And I go outside, but my internal temperature is going to the same, and I can go back to, you know, Rochester, New York in the winter, and it’s going to be, you know, zero degrees, but my internal temperature is going to be the same.
    0:32:06 That’s homeostasis.
    0:32:19 The idea of Gaia theory was that life, the biosphere exerts this pressure on the planet or these feedbacks on the planet that even as other things are changing, the planet will always stay in the right kinds of conditions for life.
    0:32:29 Now, when this theory came out, it was very controversial, people were like, oh my God, you know, what are you, smoking weed, you know, and like, there were all these Gaian festivals with Gaian dances.
    0:32:32 And so, you know, it became very popular in the New Age community.
    0:32:37 But Lovelock, actually, they were able to show that, no, this has nothing to do with like the planet being conscious or anything.
    0:32:43 It was about these feedbacks that, that by the biology, the biosphere can exert these feedbacks.
    0:32:48 And now that’s become, whether or not it’s still, we’re still unclear whether there are true Gaian feedbacks.
    0:32:52 In the sense that the planet can really exert complete control.
    0:32:58 But it is absolutely true that the biosphere is a major player in Earth’s history.
    0:33:01 So the biosphere fights for homeostasis on Earth.
    0:33:02 The bias.
    0:33:05 So, OK, what I would say right now is I don’t know if I can say that scientifically.
    0:33:17 I can certainly say that the biosphere does a huge amount of the regulation of the planetary state and over billions of years has strongly modified the evolution of the planet.
    0:33:21 So whether or not a true Gaian feedback would be exactly what you said, right?
    0:33:24 The guy, the biosphere is this somehow and Sarah Walker and David Grinspoon.
    0:33:31 And I actually did a paper on this about the idea of planetary intelligence or cognition across a planetary scale.
    0:33:33 And I think that actually is possible.
    0:33:36 It’s not conscious, but there is a kind of cognitive activity going on.
    0:33:41 The biosphere, in some sense, knows what is happening because of these feedbacks.
    0:33:48 So it’s still unclear whether we have these full Gaian feedbacks, but we certainly have semi Gaian feedbacks.
    0:33:54 If there’s a perturbation on the planetary scale, temperature, you know, insulation, how much sunlight is coming in.
    0:33:59 The biosphere will start to have feedbacks that will damp that perturbation.
    0:34:02 Temperature goes up, the biosphere starts doing something, temperature comes down.
    0:34:13 Now, I wonder if the techno sphere also has a Gaian feedback or elements of a Gaian feedback such that the techno sphere will also fight to some degree for homeostasis.
    0:34:14 Open question, I guess.
    0:34:24 Well, that’s, I’m glad you asked that question because that paper that David and Sarah and I wrote, what we were arguing was, is that over the history of a planet, right?
    0:34:29 When life first forms, you know, 3.8 billion years ago, it’s kind of thin on the ground, right?
    0:34:32 You’ve got the first species, you know, these are all microbes.
    0:34:39 And they have not yet been, they’re not going to, enough of them to exert any kind of these Gaian feedback.
    0:34:42 So we call that an immature biosphere.
    0:34:50 But then as time goes on, as life becomes more robust and it begins to exert these feedbacks, keeping the planet in the place where it needs to be for life.
    0:34:52 We call that a mature biosphere, right?
    0:34:56 And the important thing, and we’re going to, I’m sure later on, we’re going to talk about definitions of life and such.
    0:35:03 There’s this great term called auto-poesis that Francisco Varela, the neurobiologist Francisco Varela came up with.
    0:35:10 And he said, you know, one of the defining things about life is this property of auto-poesis, which means self-creating and self-maintaining.
    0:35:16 Life does not create the conditions which will destroy itself, right?
    0:35:19 It’s always trying to keep itself in a place where it can stay alive.
    0:35:26 So the biosphere from this Gaian perspective has been auto-poetic for, you know, billions of years.
    0:35:31 Now, we just invented this techno-sphere in the last, you know, couple of hundred years.
    0:35:35 And what we were arguing in that paper is that it’s an immature techno-sphere, right?
    0:35:44 Because right now with climate change and all the other things we’re doing, we know we’re just the techno-sphere right now is sort of destroying the conditions under which it needs to maintain itself.
    0:35:55 So the real job for us, if we’re going to last over, you know, geologic timescales, if we want a techno-sphere that’s going to last tens of thousands, hundreds of thousands, millions of years,
    0:36:04 then we’ve got to become mature, which means to not undermine the conditions, to not subvert the conditions that you need to stay alive.
    0:36:07 So as of right now, it’s they were not auto-poetic.
    0:36:25 Well, I wonder if we look across thousands, tens of thousands, hundreds of thousands of years that perturbations, the techno-sphere should create perturbations as a way for developing greater and greater defenses against
    0:36:37 perturbations, which sounds like a ridiculous statement, but basically go out and play in the yard and hurt yourself to just strengthen the, or like drink water from the, from the pond.
    0:36:38 From the pond, yeah, right.
    0:36:40 Get sick a few times.
    0:36:41 Just strengthen the immune system.
    0:36:42 Yeah.
    0:36:44 Well, you know, it’s interesting with the techno-sphere.
    0:36:51 We can talk about this more, but like, you know, we’re just emerging as a techno-sphere in terms of as a interplanetary techno-sphere, right?
    0:37:03 That’s really the next step for us is to, um, David Grinspoon talks about, I love this idea of anti-accretion, like this amazing thing that for the first time, you know, over the entire history of the planet, stuff is coming off the planet, right?
    0:37:08 Used to be everything just fell down, all the meteorites fell down, but now we’re starting to push stuff out.
    0:37:17 And, you know, like the idea of planetary defense or such, you know, we are actually going to start exerting perturbations on the solar system as a whole.
    0:37:19 We’re going to start engineering if we make it, right?
    0:37:24 I always like to say that if we can get through climate change, the prize at the end is the solar system, right?
    0:37:30 So we will, um, we’ll be changed literally engineering the solar system.
    0:37:39 But what you can think of right now with what’s happening with the Anthropocene, the great acceleration that, that, uh, the, is the techno-sphere, you know, is the creation of that.
    0:37:42 That is a giant perturbation on the biosphere, right?
    0:37:47 And what you can’t do is, you know, the techno-sphere sits on top of the biosphere.
    0:37:55 And the techno, if the techno-sphere undermines the biosphere for its own conditions of habitability, then you’re in trouble, right?
    0:37:57 I mean, the biosphere is not going away.
    0:37:58 There’s nothing we could do.
    0:38:01 Like the idea that we have to save the earth is a little ridiculous.
    0:38:05 Like the earth is not a furry little bunny that we need to protect, but it’s the conditions for us, right?
    0:38:11 We, humanity, emerged out of this, out of the Holocene, the last 10,000 years interglacial period.
    0:38:14 We can’t tolerate very different kinds of earths.
    0:38:16 Um, so that’s what I mean about a perturbation.
    0:38:20 Before we forget, I got to ask you about this paper, pretty interesting.
    0:38:23 Uh, there’s an interesting table here about hard steps.
    0:38:40 Ebiogenesis, glucose fermentation, tuberic acid, all kinds of steps all the way to homo sapiens, animal intelligence, land ecosystems, endoskeletons, eye precursor, so formation of the eye, complex multicellularity.
    0:38:42 That’s definitely one of the big ones.
    0:38:43 Yeah.
    0:38:43 So interesting.
    0:38:45 I mean, what can you say about this chart?
    0:38:49 So there are all kinds of papers talking about what the difficulty of these steps.
    0:38:50 Right.
    0:38:51 And so this was the idea.
    0:39:03 So what Carter said was, you know, using anthropic reasoning, he said, there must be a few very hard steps for the evolution to get through to make it to intelligence, right?
    0:39:05 So there’s some steps are going to be easy.
    0:39:10 So every generation, you know, you roll the dice and yeah, it won’t take long for you to get that step.
    0:39:17 But there must be a few of them and he said you could even calculate what how many there were five, six in order to get to intelligence.
    0:39:21 And so this paper here, this plot is all these different people who’ve written all these papers.
    0:39:22 And this is the point.
    0:39:29 Actually, you can see all these papers that were written on the hard steps, each one proposing a different set of what those steps should be.
    0:39:36 And there’s this other idea from biology of the major transitions in evolution, MTEs, that those were the hard steps.
    0:39:40 But what we actually found was that none of those are actually hard.
    0:39:45 The whole idea of hard steps, that there are hard steps is actually suspect.
    0:39:52 So, you know, what’s amazing about this model is it shows how important it is to actually work with people who are in the field, right?
    0:39:56 So, you know, Brandon Carter was a brilliant physicist, the guy who came up with this.
    0:40:06 And then lots of physicists and astrophysicists like me have used this, but the people who actually study evolution and the planet were never involved.
    0:40:14 Right. And if you went and talked to an evolutionary biologist or a biogeophysicist, they’d look at you, when you explain this to the man, they’d be like, what?
    0:40:16 Like, what are you guys doing?
    0:40:29 Turns out, none of the details or none of the conceptual structure of this matches with what the people actually study the planet and its evolution.
    0:40:34 Is it mostly about the fact that there’s not really discrete big steps?
    0:40:36 Is this a gradual, continual kind of process?
    0:40:37 Well, there’s two things.
    0:40:40 The first most important one was that the planet and the biosphere have evolved together.
    0:40:45 That’s something that every, you know, most biogeophysicists completely accept.
    0:40:48 And it was the first thing that Carter kind of rejected.
    0:40:50 He said, like, no, that’s probably not possible.
    0:40:57 And yet, you know, like, if he’d only sort of had more discussions with this other community would have seemed like, no, there are actually windows that open up.
    0:41:01 And then the next thing is this idea of whether a step is hard or not.
    0:41:10 Because for a hard, what we mean by a hard step is that, like I said, every time there’s a generation, every time there’s the next generation born, you’re rolling the dice on whether this mutation will happen.
    0:41:19 And the idea of something being a hard step, there’s two ways in which something might even appear as a hard step and not be or actually not be a hard step at all.
    0:41:24 One is that you see something that has occurred in evolution has only happened once, right?
    0:41:25 So let’s take the opposite.
    0:41:32 We see something that’s happened multiple times, like wings, lots of examples of wings over lots of different evolutionary lineages.
    0:41:35 So that’s clearly not a hot making wings is not a hard step.
    0:41:38 There are certain other things that people say, no, that’s a hard step.
    0:41:47 Oxygen, you know, the oxygen photosynthesis, but they are so they tend to be so long ago that we’ve lost all the information.
    0:41:54 There could be other things in the fossil record that, you know, went made this innovation, but they’re just gone now.
    0:41:54 So you can’t tell.
    0:41:56 So there’s information loss.
    0:42:04 The other thing is the idea of pulling up the ladder that somebody, you know, some species makes the innovation, but then it fills the niche and nobody else can do it again.
    0:42:13 So yeah, it only happened once, but it happened once because basically the creature was so successful, it took over and there was no space for anybody else to evolve it.
    0:42:24 So yeah, so the interesting thing about this was seeing how, how much once you look at the details of life’s history on earth, how it really shifts you away
    0:42:25 from this hard steps model.
    0:42:28 And it shows you that those details, as we were talking about, like, do you have to know about the planet?
    0:42:30 Do you have to know about plate tectonics?
    0:42:31 Yeah, you’re going to have to.
    0:42:41 I mean, to be fair to Carter, on the first point, it makes it much more complicated if life and the planet are co-evolving.
    0:42:47 Because it’s not, it would be nice to consider the planet as a static thing that sets the initial conditions.
    0:42:54 Yeah. And then we can sort of, from an outside perspective, analyze planets based on the initial conditions they create.
    0:42:58 And then there’s a binary yes or no, will it create life?
    0:43:14 But if they co-evolve, it’s just a really complex dynamical system where everything is, because much more difficult from the perspective of SETI, of looking out there and trying to figure out which ones are actually producing life.
    0:43:23 But I think we’re at the point now, so now there may be other kinds of principles that actually, because co-evolution actually has its own, not deterministic, you’re done with determinism, right?
    0:43:29 But complex systems have patterns, complex systems have constraints.
    0:43:33 And that’s actually what we’re going to be looking for, our constraints on them.
    0:43:40 And so, you know, and again, nothing against Carter was a brilliant idea, but it just goes to show, you know, there’s this great XTC, you know, I’m a theoretical physicist, right?
    0:43:47 And so I love simplified, give me a simplified model with, you know, it’s a dynamical equation, some initial conditions, I’m very happy.
    0:43:56 But there’s this great XTC comic where like, you know, somebody’s working something out on the board and this physicist is looking over and saying, oh, oh, I just, I just wrote down an equation for that.
    0:43:57 I solved your problem.
    0:43:58 Do you guys even have a journal for this?
    0:44:01 You know, subtitle is Why Everybody Hates Physicists.
    0:44:01 Yeah.
    0:44:04 So sometimes that approach totally works.
    0:44:12 Sometimes physicists, you know, we can be very good at like zooming in on what is important and casting the details aside so you can get to the heart of an issue.
    0:44:15 And that’s very useful sometimes.
    0:44:17 Other times it obfuscates, right?
    0:44:23 Other times it clouds over actually what you needed to focus on, especially when it comes to complexity.
    0:44:33 Speaking of simplifying everything down to an equation, let’s return back to the question of how many alien civilizations are out there.
    0:44:35 And talk about the Drake equation.
    0:44:35 Yeah.
    0:44:38 Can you explain the Drake equation?
    0:44:42 You know, people have various feelings about the Drake equation.
    0:44:47 You know, it can be abused, but basically it was the story actually is really interesting.
    0:44:52 So Frank Drake in 1960 does the first ever astrobiological experiment.
    0:44:56 He gets a radio telescope, points it at a couple of stars and listens for signals.
    0:45:02 That was the first time anybody done any experiment about any kind of life in the history of humanity.
    0:45:05 And he does it and he’s kind of waiting for everybody to make fun of him.
    0:45:13 Instead, he gets a phone call from the government says, hey, we want you to do a meeting on interstellar communications, right?
    0:45:17 So he’s like, OK, so they organize a meeting with like just eight people.
    0:45:19 A young Carl Sagan is going to be there as well.
    0:45:25 And like the night before Drake has to come up with an agenda.
    0:45:30 How do you come up with an agenda for a meeting on a topic that no one’s ever talked about before, right?
    0:45:32 And so we actually write he breaks what he does.
    0:45:37 What’s so brilliant about the Drake equation is he breaks the problem of how many civilizations
    0:45:41 are there out there into a bunch of sub problems, right?
    0:45:43 And he breaks it into seven sub problems.
    0:45:48 Each one of them is a factor in an equation that when you multiply them all together,
    0:45:52 you get the number of civilizations out there that we could communicate with.
    0:45:56 So the first term is the rate at which stars form.
    0:46:00 The second term is the fraction of those stars that have planets, F sub p.
    0:46:05 The next term is the number of planets in the habitable zone, the place where we think life could form.
    0:46:13 The next term after that is the fraction of those planets were actually an a biogenesis event life forms occurs.
    0:46:19 The next one is the fraction of planets on which you start to get intelligence.
    0:46:25 After that, it’s the fraction of planets where that intelligence goes on to create a civilization.
    0:46:29 And then finally, the last term, which is the one that we really care about is the lifetime.
    0:46:31 How long you have a civilization. Now, how long does it last?
    0:46:34 Well, you say we humans, we humans, right?
    0:46:40 Because we’re standing, we’re staring at the guy, you know, multiple guns pointing at a nuclear war, climate change, AI.
    0:46:44 So, you know, how long on in general does civilizations last?
    0:46:50 Now, each one of these terms was brilliant about what he did was what he was doing was he was quantifying our ignorance, right?
    0:46:55 By breaking the problem up into these seven sub problems, he gave astronomers something to do, right?
    0:46:57 And so, you know, this is always with a new research field.
    0:47:00 You need a research program or else you just have a bunch of vague questions.
    0:47:03 You don’t even know really what you’re trying to do.
    0:47:07 So, you know, the star people could figure out how many stars we’re forming per year.
    0:47:13 The people who are interested in planets could go and find techniques to discover planets, etc, etc.
    0:47:16 I mean, these are their own fields.
    0:47:20 Essentially, by creating this equation, he’s launching new fields.
    0:47:21 Yeah, that’s exactly.
    0:47:26 He gave astrobiology, which wasn’t even a term then, a roadmap like, OK, you guys go do this.
    0:47:27 You go do that.
    0:47:28 You go do that.
    0:47:37 And it had such far reaching effect on astrobiology because it did break the problem up in a way that gave useful,
    0:47:40 you know, sort of marching orders for all these different groups.
    0:47:51 Like, for example, it’s because of the Drake equation in some sense that people who were involved in SETI pushed NASA to develop the technologies for planet hunting.
    0:48:01 There was this amazing meeting in 1978-1972 meetings, 1978-1979 that were driven in some part by the people who were involved in SETI getting NASA together to say,
    0:48:07 “Look, OK, look, how, you know, what’s what’s the roadmap for us to develop technologies to find, find planets?”
    0:48:18 So, yeah, so, you know, the Drake equation is absolutely foundational for astrobiology, but we should remember that it’s not a law of nature, right?
    0:48:21 It’s not something that’s it’s not equals MC squared.
    0:48:23 And so you can see it being abused in some sense.
    0:48:25 People, you know, it’s generated a trillion papers.
    0:48:26 Some of those papers are good.
    0:48:29 I’ve written some of those and some of those papers are bad.
    0:48:31 You know, I’m not sure where my paper fits in on those.
    0:48:34 So I’m saying, you know, one should be careful about what you’re using it for.
    0:48:43 But in terms of understanding the problem that that astrobiology faces, this really broke it up in a useful way.
    0:48:48 We could talk about each one of these, but let’s just look at exoplanets.
    0:48:48 Yeah.
    0:48:50 So that’s a really interesting one.
    0:48:57 I think when you look back, you know, hundreds of years from now, what’s in the 90s when they first detected the first 92 and 95.
    0:49:02 95 to me was really that was the discovery of the first planet orbiting a sun-like star.
    0:49:04 To me, that was the water, the dam being broken.
    0:49:09 I think that’s like one of the greatest discoveries in the history of science.
    0:49:10 I agree, I agree.
    0:49:16 Right now, I guess nobody’s celebrating it too much because you don’t know what it really means.
    0:49:28 But I think once we almost certainly will find life out there, it will obviously allow us to generalize across the entire galaxy, the entire universe.
    0:49:36 So if you can find life on a planet, even in the solar system, you can now start generalizing across the entire universe.
    0:49:36 You can.
    0:49:37 All you need is one.
    0:49:41 Like right now, it’s an, you know, our understanding of life, we have one example.
    0:49:43 We have n equals one example of life.
    0:49:45 So that means we could be an accident, right?
    0:49:51 It could be that we’re the only place in the entire universe where this weird thing called life has occurred.
    0:49:54 Get one more example and now you’re done.
    0:49:57 Because if you have one more example, now you’re even, you know, you don’t have to find all the other examples.
    0:49:59 You just know that it’s happened more than once.
    0:50:06 And now you are, you know, in from a Bayesian perspective, you can start thinking like, yeah, this life is not something that’s hard to make.
    0:50:10 Well, let me get your sense of estimates for the Drake equation.
    0:50:15 You were also written a paper expanding on the Drake equation, but what do you think is the answer?
    0:50:22 So the paper, there was this paper we wrote, Woody Sullivan and I in 2016, where we said, look, we have all this exoplanet data now, right?
    0:50:32 So the thing that exoplanet science and the exoplanet census I was talking about before have nailed is F sub p, the fraction of stars that have planets.
    0:50:39 It’s one every freaking star that you see in the sky hosts a family of worlds.
    0:50:44 I mean, it’s mind boggling because every one of those, those are all places, right?
    0:50:47 They’re either, you know, gas giants, probably with moons.
    0:50:49 So there’s the moons are places you can stand and look out.
    0:50:57 Or they’re like terrestrial worlds where even if there’s not life, there’s still snow falling and there’s oceans washing up on, you know, on shorelines.
    0:51:02 It’s incredible to think how many places and stories there are out there.
    0:51:06 So, right, the first term was F sub p, which is how many stars have planets.
    0:51:10 The next term is how many planets are in the habitable zone, right?
    0:51:13 On average, and it turns out to be one over five, right?
    0:51:15 So, you know, you know, we’re on point two.
    0:51:18 So that means you just count five of them, go out at night and go one, two, three, four, five.
    0:51:24 One of them has an earth like planet, you know, in the habitable zone, like, whoa.
    0:51:26 So what, what defines a habitable zone?
    0:51:34 Habitable zone is an idea that was developed in the 1958 by the Chinese American astronomer Shuxiang.
    0:51:36 And it was, it was a brilliant idea.
    0:51:40 It said, look, this is there, you know, I can do this simple calculation.
    0:51:46 If I take a planet and just stick it at some distance from a star of what’s the temperature of the planet?
    0:51:47 What’s the temperature of the surface?
    0:51:53 So now you’re all you’re going to ask, you give it a standard kind of, you know, earth like atmosphere and ask, could there be liquid water on the surface?
    0:51:53 Right.
    0:51:56 We believe that liquid water is really important for life.
    0:51:58 There could be other things that’s happening fine.
    0:52:04 But, you know, if you were to start off trying to make life, you’d probably choose water as your solvent for it.
    0:52:12 So basically the habitable zone is the band of orbits around a star where you can have liquid water on the surface.
    0:52:15 You could take a glass of water, pour it on the surface and it would just pool up.
    0:52:21 It wouldn’t freeze immediately, which would happen if your planet is too far out and it wouldn’t just boil away if your planet’s too close in.
    0:52:25 So that’s the formal definition of the habitable zone.
    0:52:27 So it’s a nice strict definition.
    0:52:30 There’s probably way more going on than that, but this is a place to start.
    0:52:31 Right.
    0:52:33 Well, we should say it’s a place to start.
    0:52:35 I do think it’s too strict of a constraint.
    0:52:36 I would agree.
    0:52:41 We’re talking about temperature where water can be on the surface.
    0:52:50 There’s so many other ways to get the aforementioned turmoil where the temperature varies, whether it’s volcanic.
    0:52:56 So interaction of volcanoes and ice and all of this on the moons of plants that are much farther away, all this kind of stuff.
    0:52:57 Yeah.
    0:53:07 Well, for example, we know in our own solar system, we have say Europa, the moon of Jupiter, which has got a hundred mile deep ocean under 10 miles of ice.
    0:53:08 Right.
    0:53:09 That’s not in the habitable zone.
    0:53:10 That is outside the habitable zone.
    0:53:12 And that may be the best place.
    0:53:14 It’s got more water than Earth does.
    0:53:18 All of its oceans are, you know, it’s twice as much water on Europa than there is on Earth.
    0:53:22 So, you know, that may be a really great place for life to form and it’s outside the habitable zone.
    0:53:26 So, you know, the habitable zone is a good place to start and it helps us.
    0:53:30 And there’s reason there’s reasons why you do want to focus on the habitable zone, because like Europa, I couldn’t.
    0:53:35 I won’t be able to see from across telescopic distances across light years.
    0:53:39 I wouldn’t be able to see life on Europa because it’s under 10 miles of ice.
    0:53:40 Right.
    0:53:47 So with the important thing about planets in the habitable zone is that we’re thinking they have atmospheres.
    0:53:54 Atmospheres are the things we can characterize for across 10, 50 light years and we can see biosignatures as we’re going to talk about.
    0:54:00 So there is a reason why the habitable zone becomes important for the detection of extra solar life.
    0:54:10 But for me, when I look up at the stars, it’s very likely that there’s a habitable planet or moon and each of the stars habitable defined broadly.
    0:54:14 Yeah, I think that’s not unreasonable to say.
    0:54:18 I mean, especially since the the formal definition, you get one in five, right?
    0:54:19 One in five is a lot.
    0:54:20 There’s a lot of stars in the sky.
    0:54:29 So yeah, saying that in general, when I look at a star, there’s a pretty good chance that there’s something habitable orbiting it is not a unreasonable scientific claim.
    0:54:36 To me, it seems like there should be alien civilizations everywhere.
    0:54:39 Why the Fermi Paradox?
    0:54:40 Why haven’t we seen them?
    0:54:43 Okay, the Fermi Paradox.
    0:54:47 Let’s talk about, I love talking about the Fermi Paradox because there is no Fermi Paradox.
    0:54:49 Dun dun dun dun.
    0:54:51 Yeah, so the Fermi Paradox.
    0:54:53 Let’s talk about the Fermi Paradox and the history of it.
    0:54:56 So Enrico Fermi, it’s 1950.
    0:55:01 He’s walking with his friends at Los Alamos Nuclear Weapons Lab to the Cantina.
    0:55:05 And there had been this cartoon in the New Yorker.
    0:55:12 They all read the New Yorker and the cartoon was trying to explain why there had been this rash of garbage cans being
    0:55:13 disappearing in New York.
    0:55:16 And this cartoon said, oh, it’s UFOs because this is already, you know, it’s 1950.
    0:55:19 The first big UFO craze happened in ’47.
    0:55:26 So they’d all, they were laughing about this as they’re walking and they started being physicists started talking about interstellar travel, interstellar propulsion, blah, blah.
    0:55:28 You know, conversation goes on for a while.
    0:55:32 Conversation turns to something else, you know, gone to other things.
    0:55:36 About 40 minutes later, over lunch, Fermi blurts out, well, where is everybody?
    0:55:37 Right?
    0:55:38 Typical Fermi sort of thing.
    0:55:42 He’d done the calculation in his head and he suddenly realized that, look, if
    0:55:54 one, if they’re, you know, if intelligence is common, that even traveling at sub light speeds, a civilization could cross, you know, kind of hop from one star system to the other and spread
    0:55:57 it out across the entire galaxy in a few hundred thousand years.
    0:55:58 And he realized this.
    0:56:00 And so he was like, why aren’t they here now?
    0:56:03 And that was the beginning of the Fermi paradox.
    0:56:12 It actually got picked up as a formal thing in 1975 in a paper by Hart, where he actually kind of went through this calculation and showed and said, well, there’s
    0:56:16 nobody here now, therefore, there’s nobody anywhere that, you know, okay.
    0:56:18 So that is what we will call the direct Fermi paradox.
    0:56:20 Why aren’t they here now?
    0:56:25 But something happened where people after SETI began, where people started to, there was this idea of the great silence.
    0:56:33 People got this idea in their head that like, oh, we’ve been looking for decades now for signals of extraterrestrial intelligence that we haven’t found any.
    0:56:35 Therefore, there’s nothing out there.
    0:56:38 But that, so we’ll call that the indirect Fermi paradox.
    0:56:43 And there absolutely is no indirect Fermi paradox for the most mundane of reasons, which is money.
    0:56:45 There’s never been any money to look.
    0:56:53 They’re really, SETI was always done by researchers who were kind of like scabbing some time, you know, some extra time from their other projects.
    0:56:57 So, you know, look a little bit, you know, at the sky with a telescope.
    0:56:58 Telescopes are expensive.
    0:57:06 So Jason Wright, one of my collaborators, he and his students did a study where they looked at the entire search space for SETI, you know, and imagine that’s an ocean.
    0:57:12 All the different stars you have to look at, the radio frequencies you have to look at, how when you look, how often you look.
    0:57:16 And they looked, then they summed up all the SETI searches that had ever been done.
    0:57:17 They went through the literature.
    0:57:25 And what they found was if the, if the, if that search space, if the sky is an ocean and you’re looking for fish, how much of the ocean have we looked at?
    0:57:27 And it turns out to be a hot tub.
    0:57:29 That’s how much of the ocean that we’ve looked up.
    0:57:34 We’ve dragged in a hot tub’s worth of ocean water up and there was no fish in it.
    0:57:37 And so now are we going to say up, well, there’s no fish in the ocean, right?
    0:57:41 So there is absolutely, positively no indirect Fermi Paradox.
    0:57:45 We just haven’t looked, but we’re starting to look.
    0:57:47 So that’s what’s, you know, finally we’re starting to look.
    0:57:48 That’s what’s exciting.
    0:57:51 The direct Fermi Paradox, there are so many ways out of that, right?
    0:57:57 There’s a book called “77 Solutions to the Fermi Paradox” that it just, you know, you can pick your favorite one.
    0:58:01 It just doesn’t carry a lot of weight because there’s so many ways around it.
    0:58:05 We did an actual simulation, my group, Jonathan Carroll, one of my collaborators.
    0:58:10 We actually simulated the galaxy and we simulated probes moving at sublight speed
    0:58:14 from one star to the other, gathering resources, heading to the next one.
    0:58:19 And so we could actually track the expansion wave across the galaxy.
    0:58:23 Have one Biogenesis event and then watch the whole galaxy get colonized or settled.
    0:58:28 And it is absolutely true that that wave crosses, you know, heart was right, Fermi was right.
    0:58:30 That wave crosses very quickly.
    0:58:33 But civilizations don’t last forever, right?
    0:58:35 So one question is, when did they visit?
    0:58:37 When did they come to Earth, right?
    0:58:42 So if you give civilizations a finite lifetime, you know, let them last 10,000, 100,000 years.
    0:58:44 What you find is you now have a steady state.
    0:58:46 Civilizations are dying.
    0:58:47 They’re, you know, they’re, they’re coming back.
    0:58:49 They’re traveling between the stars.
    0:58:51 What you find then is you can have big holes opened up.
    0:58:55 You can have regions of space where there is nobody for, you know, millions of years.
    0:58:59 And so if that, if we’re living in one of those bubbles right now,
    0:59:03 then maybe we were visited, but we were visited 100 million years ago.
    0:59:06 And there was a paper that Gavin Schmidt and I did that showed that if there was a civilization,
    0:59:12 whether it was like dinosaurs or aliens that was here 100 million years ago, there’s no way to tell.
    0:59:14 There’s just, there’s no record left over.
    0:59:16 The fossil record is too sparse.
    0:59:22 The only way maybe you could tell is by looking at the isotopic strata to see if there was anything
    0:59:24 reminiscent of an industrial civilization.
    0:59:30 But the idea that, you know, you’d be able to find, you know, iPhones or toppled buildings
    0:59:33 after 100 million years is there’s no way.
    0:59:41 So if there was an alien camp here, an alien village, a small civilization, maybe even a large civilization.
    0:59:44 Even a large civilization, even if it was 100 million years ago.
    0:59:46 And it lasted 10,000 years, fossil record’s not going to have it.
    0:59:48 Yeah, yeah.
    0:59:50 The fossil record is too sparse, right?
    0:59:52 Most things don’t fossilize.
    0:59:56 And 10,000 years is a, you know, blink in the eye of geological time.
    1:00:01 So we call our Gavin called this the Cylorean hypothesis after the Doctor Who episode with the
    1:00:02 lizard creatures, the Cyloreans.
    1:00:05 And so that paper got a lot of press.
    1:00:09 But it was, you know, it was, it was, it was an important idea.
    1:00:10 And it was, it was really Gavin’s.
    1:00:15 I was just helping with the astrobiology that to recognize that like, yeah, you know, we could have
    1:00:16 been visited a long time ago.
    1:00:17 There just would be no record.
    1:00:20 Yeah, it’s kind of mind blowing.
    1:00:21 It’s really mind blowing.
    1:00:28 And it’s also a good reminder that we’ve been intelligent species have been here for a very
    1:00:29 short amount of time.
    1:00:30 Very short amount of time.
    1:00:30 Yeah.
    1:00:35 This is not to say that there was like, so, oh, whenever I gave, you know, I like, I was on Joe
    1:00:36 Rogan for exactly this paper.
    1:00:42 And I had to always emphasize, we’re not saying there was a Cylorean, you know, but we’re just
    1:00:45 saying that if there was, that’s why I love Gavin’s question.
    1:00:47 Gavin’s question was just like, how could you tell?
    1:00:47 Right.
    1:00:49 It was a very beautifully scientific question.
    1:00:53 That’s what we were really showing is that you really, you know, unless you did a very
    1:00:57 specific kind of search, which nobody’s done so far, that, you know, there, there’s not
    1:01:02 an obvious way to tell that there, there could have been civilizations here earlier on.
    1:01:09 I’ve actually been reading a lot about ancient civilizations, and it just makes me
    1:01:17 sad how much of the wisdom of that time is lost and how much guessing is going on, whether
    1:01:20 it’s in South America, like what happened in the jungle?
    1:01:25 Yeah, like the Amazon, like the Amazon problem, that was, you know, the conquistor came and
    1:01:30 wiped everybody out, and especially just even the, like the plague may have decimated.
    1:01:33 So yeah, how much of that civilization?
    1:01:34 And there’s a lot of theories.
    1:01:40 And, you know, because of archaeology only looks at cities, they don’t really know the
    1:01:42 origins of humans.
    1:01:46 And there’s a, there’s a lot of really interesting theories in there, of course, controversial.
    1:01:49 There’s a lot of controversial people in every discipline.
    1:01:53 But archaeology is like a fascinating one, because we know so little that basically
    1:01:58 storytellers, you’re assembling the picture from just very few puzzle pieces.
    1:01:59 It’s fascinating.
    1:02:02 It makes me, it’s, it’s, it’s humbling.
    1:02:08 And it’s sad that there could be entire civilizations, ancient civilizations that are
    1:02:11 either almost entirely or entirely lost.
    1:02:12 Yeah.
    1:02:16 Well, like the, the, the indigenous peoples of North America, there could have been like
    1:02:17 millions and millions.
    1:02:21 You know, we get this idea that like, oh, you know, the Europeans came and it was empty,
    1:02:26 you know, but it was may have only been empty because the plague had swept up from the,
    1:02:28 you know, from the, what happened in Mesoamerica.
    1:02:32 So, and, you know, and they didn’t really build cities, but they had, they, I mean,
    1:02:35 they, they didn’t build wooden or stone cities.
    1:02:36 They built wooden cities, you know.
    1:02:40 Everybody seems to be building pyramids, and they’re really damn good at it.
    1:02:41 I don’t know.
    1:02:42 What is happening with a parrot?
    1:02:43 Like, what is, why, why does that apply?
    1:02:45 Like what archetype in our brain is that?
    1:02:53 And it is also really interesting speaking of archetypes is that independent civilizations
    1:03:00 formed, and they had a lot of similar kind of dynamics, like human nature when it, it
    1:03:04 builds up hierarchies in a certain way, builds up myths and religions in a certain way, it
    1:03:08 builds pyramids in a certain way, it goes to war, all this kind of stuff.
    1:03:09 Yeah.
    1:03:11 Independently, they’re just fascinating.
    1:03:15 Santa Fe Institute, the stuff the Santa Fe Institute does on this as complex systems
    1:03:19 you know, there are the origin of hierarchies and such, very cool.
    1:03:22 Yeah, Santa Fe folks, complexity in general is really cool.
    1:03:27 What phenomena emerge when a bunch of small things get together and interact.
    1:03:33 Going back to this, this paper, a new empirical constraint on the prevalence of technological
    1:03:37 species in the universe, this paper that expands on the Drake equation.
    1:03:39 What are some interesting things in this paper?
    1:03:43 Well, so the main thing we were trying to do with this paper is say, look, we have all of
    1:03:45 this exoplanet data, right?
    1:03:49 It’s got to be good for something, especially since two of the terms that have been nailed
    1:03:52 down empirically are two terms in the Drake equation.
    1:03:56 So F sub P, that’s the second term, fraction of stars that have planets.
    1:04:01 And then N sub B, the average number of planets in the habitable zone, those are the
    1:04:03 second and third term in the Drake equation.
    1:04:06 So what that means is all the astronomical terms have been nailed.
    1:04:10 And so we said like, okay, how do we use this to do something with the Drake equation?
    1:04:13 And so we realized is, well, okay, we got to get rid of time.
    1:04:15 The lifetime thing, we can’t say anything about that.
    1:04:21 But if we let that, if we don’t ask how long they last, but instead ask, what’s
    1:04:26 the probability that there have been any civilizations at all, no matter how long
    1:04:26 they lasted.
    1:04:28 I’m not asking whether they exist now or not.
    1:04:34 I’m just asking in general about probabilities to make a technological
    1:04:37 civilization anywhere and at any time in the history of the universe.
    1:04:39 And that we were able to constrain.
    1:04:49 And so what we found was basically that there have been 10 billion trillion habitable
    1:04:51 zone planets in the universe.
    1:04:57 And what that means is that are, those are 10 billion trillion experiments that
    1:04:57 have been run.
    1:05:03 And the only way that we’re the only time that this is, you know, this whole process
    1:05:08 from, you know, a biogenesis to a civilization has occurred is if every
    1:05:09 one of those experiments failed.
    1:05:09 Right.
    1:05:14 So therefore you could put a probability, we called it the pessimism line, right?
    1:05:18 We don’t really know what nature sets for the probability of making intelligent
    1:05:19 civilizations, right?
    1:05:21 But we could set a limit using this.
    1:05:26 We could say, look, as if the probability per habitable zone planet is less
    1:05:30 than 10 to the minus 22, one in 10 billion trillion, then yeah, we’re alone.
    1:05:34 If it’s anywhere larger than that, then we’re not the first.
    1:05:35 It’s happened somewhere else.
    1:05:37 And to me, that was, that was mind blowing.
    1:05:40 Doesn’t tell me there’s anybody nearby, the galaxy could be sterile.
    1:05:46 It just told me that like, you know, unless nature’s really against, it has
    1:05:50 some bias against civilizations, we’re not the first time this has happened.
    1:05:53 This has happened elsewhere over the course of cosmic history.
    1:05:57 10 billion trillion experiments.
    1:05:59 Yeah, that’s a lot of experiments.
    1:05:59 That’s a lot.
    1:06:00 Right.
    1:06:01 A thousand is a lot.
    1:06:01 Yeah.
    1:06:02 A hundred is a lot.
    1:06:03 Yeah.
    1:06:10 If we normal humans saw a hundred experiments and we knew that at least
    1:06:16 one time there was a successful human civilization built, I mean, we would say
    1:06:18 for sure in a hundred, you’ll get another one.
    1:06:18 Yeah.
    1:06:19 Yeah.
    1:06:19 So that’s what I mean.
    1:06:22 That’s why, so this, you know, these kinds of arguments, you have to be careful
    1:06:22 with what they can do.
    1:06:26 But what it really, I felt like what this paper showed was that, you know, the
    1:06:28 burden of proof is now on the pessimists, right?
    1:06:30 So that’s why we called it the pessimism line.
    1:06:34 There’s been, you know, throughout history, there’s been, you know, alien
    1:06:37 pessimists and alien optimists, and they’ve been yelling at each other.
    1:06:38 That’s all they had to go with, right?
    1:06:42 You know, and like with Giordano Bruno and 1600, they burned the guy at the
    1:06:43 stake for being an alien optimist.
    1:06:46 But nobody really knew what pessimism or optimism meant.
    1:06:49 This, you know, we sort of thought this was like the plank length.
    1:06:51 This was sort of the plank length of astrobiology.
    1:06:55 It gave you an actual number that, you know, if you could somehow calculate what
    1:06:59 the probability, you know, of forming a technological civilization was, this
    1:07:02 thing sort of shows you where the limit is.
    1:07:06 As long as you’re above 10 to the minus 22, then you actually, absolutely, it
    1:07:09 has occurred in the, in the, in the history, other civilizations have
    1:07:10 occurred in the history of the universe.
    1:07:15 So to me, at least a big question is FE, which is basically a biogenesis.
    1:07:18 How hard is it for life to originate on a planet?
    1:07:22 Cause all the other ones seem very likely.
    1:07:23 Everything seems very likely.
    1:07:26 The only open question to me is like, how hard is it for life to originate?
    1:07:30 There’s lots of ways to, again, you know, we don’t know unless we look and
    1:07:33 the, you know, you had Sarah walk around not too long ago.
    1:07:35 You know, she’s very interested in origins of life.
    1:07:39 Um, uh, so, you know, lots of people are working on this, but I think
    1:07:42 it’s, it’s hard looking at the history of the earth.
    1:07:44 You know, and again, this is, you can do Bayesian arguments on this.
    1:07:48 Um, but yeah, it’s forming life.
    1:07:51 I don’t think it’s hard getting, getting like basic biology started.
    1:07:52 I don’t think it’s hard.
    1:07:53 It’s still wild.
    1:07:57 It’s an amazing process that actually I think requires some deep rethinking
    1:08:01 about how we conceptualize what life is and what life isn’t.
    1:08:03 That’s one of the things I like about Sarah’s work.
    1:08:07 Um, we’re, we’re pursuing on a different level, uh, about the life as
    1:08:11 that the only process or the only system that uses information.
    1:08:16 Um, but still, regardless of all those kinds of details, uh, life is probably
    1:08:16 easy to make.
    1:08:18 That’s, that’s my, that’s my gut feeling.
    1:08:23 You know, I mean, day by day, this changes for me, but I just see once
    1:08:27 you create bacteria, it’s, it’s, it’s off to the races.
    1:08:30 You’re going to get complex life as long as you have enough time.
    1:08:36 I mean that boring billion and, but I just can’t imagine, uh, habitable planet
    1:08:39 not having a couple of billion to spare.
    1:08:39 Yeah.
    1:08:41 Couple of years to spare.
    1:08:45 You know, there is a mystery there about why did it take so long, like with
    1:08:48 the Cambrian explosion, but that may be again about these windows that like you
    1:08:53 couldn’t happen until, until the window, the planet and the, uh, life had evolved
    1:08:57 together enough that they together kind of opened the window for the, the next step.
    1:09:02 Um, you know, intelligent life and how long intelligent, you’re similar
    1:09:03 to technological civilizations.
    1:09:07 I think there’s a big question about how long those last and how, you know, I’m
    1:09:12 hopeful, you know, um, but, uh, but in terms of just like, I think life is
    1:09:15 absolutely going to be common in the, you know, pretty common in the unit.
    1:09:19 Yeah, I think it’s absolutely like, I think, uh, again, if I were to put
    1:09:23 everything, uh, even advanced civilizations are common.
    1:09:30 So the, to me, then the, the only explanation is the L our galaxy is a
    1:09:32 graveyard of civilizations.
    1:09:33 Yeah.
    1:09:35 Kids, you know, you think about it, we’ve only been around, I mean, as a tech, a
    1:09:39 lot, truly, you know, when we think about in, in Drake’s, uh, definition, you
    1:09:40 had to have radio telescopes.
    1:09:44 That’s been a hundred years, you know, and if we got another 10,000, a hundred
    1:09:47 thousand years of history, that would be, for us, it’d be pretty amazing, right?
    1:09:51 Um, but that’s still, that wouldn’t be long enough to really pop up the
    1:09:54 number of civilizations in the, in the galaxy.
    1:09:57 So you really need it to be like hundreds of millions of years.
    1:10:01 And that raises a question, which I am very interested in, which is how do
    1:10:04 we even talk about, I call it the billion year civilization, right?
    1:10:09 How do we even begin to hypothesize or think about in any kind of systematic
    1:10:14 way, what happens to a technological civilization across hundreds of
    1:10:16 millions to a billion years?
    1:10:16 Yeah.
    1:10:19 Like how, how do you even simulate the trajectories that civilizations
    1:10:21 can take across that kind of timescale?
    1:10:22 Yeah.
    1:10:27 Uh, when we, all the data we have is just for the 10,000 years or, or so 20,000
    1:10:30 years that humans have been building civilizations.
    1:10:33 And then just, I don’t, I don’t know what you put it at, but maybe a hundred
    1:10:35 years that we’ve been technological.
    1:10:36 Yeah.
    1:10:38 And we’re ready to blow ourselves to bits or, you know, drive
    1:10:39 ourselves off the planet.
    1:10:40 Yeah.
    1:10:42 No, it’s really interesting, but there’s got to be a way that I think
    1:10:43 that’s really a frontier.
    1:10:45 So you had David Kipping on not too long ago.
    1:10:48 Um, and David and I did a paper, uh, and Caleb Sharpe.
    1:10:51 David really drove this, uh, where we, you know, it was a Bayesian
    1:10:53 calculation to sort of ask the question.
    1:10:56 If you, if you were to find a detection, if you were to find a signal
    1:11:00 or, you know, a techno signature, would that come from a civilization
    1:11:03 that was younger, your age or older?
    1:11:06 And you could see, I mean, this is not hard to do, but it was great.
    1:11:08 The formalism, the formalism was hard, you know, it’s kind of
    1:11:12 intuitive, but the formalism was hard to show that, yeah, they’re older,
    1:11:13 you know, probably much older.
    1:11:16 So that means you really do need to think about, like, okay, how do
    1:11:19 billion year civilizations manifest themselves?
    1:11:20 What signatures will they leave?
    1:11:23 And yeah, can you even, I mean, what’s so cool about it?
    1:11:26 It’s so much fun because you’ve got to, like, you have to, you have
    1:11:28 to imagine the unimaginable.
    1:11:31 Like, you know, would you still, I mean, obviously biological evolution
    1:11:34 can happen on, you know, on those kinds of time scales.
    1:11:37 So you wouldn’t even really be the same thing you started out as, but
    1:11:39 social forms, what kind of social forms?
    1:11:42 Can you imagine that would be continuous over that?
    1:11:43 Or maybe they wouldn’t be continued.
    1:11:45 You should get, they drop out, you know, they destroy themselves
    1:11:46 and then they come back.
    1:11:51 So maybe it’s, you know, it’s a trunk or a punctuated evolution.
    1:11:53 I mean, but we got to sort of, this is the fun part.
    1:11:54 We have to sort of work this out.
    1:11:59 Well, I mean, one way to approach that question is like, how, what are
    1:12:02 the different ways to achieve homeostasis as you get greater and
    1:12:04 greater technological innovation?
    1:12:10 So like, if you expand out into the universe and you have, uh,
    1:12:13 optocartage have scale, what, what are the ways you can avoid destroying
    1:12:17 yourself, just achieve stability while still growing?
    1:12:18 Yeah.
    1:12:22 And I mean, that’s an interesting question.
    1:12:23 I think it’s probably simulatable.
    1:12:27 Could be, I mean, you know, agent-based modeling, you could do it with that.
    1:12:30 So, so, you know, our group has used agent-based modeling to do
    1:12:33 something like the Fermi paradox that was, that was agent-based modeling.
    1:12:34 But you can also do this.
    1:12:35 People at Santa Fe have done this.
    1:12:39 Other groups have done this to do use agent-based modeling to track the, the
    1:12:44 or formation of hierarchies, the formation of stable hierarchies.
    1:12:48 The, so I think that, I think it’s actually very doable, but, um, understanding
    1:12:51 the kind of assumptions and principles that are going into it and what you
    1:12:54 can extract from those, that is what is sort of the frontier.
    1:13:02 Do you think if humans colonize Mars, the dynamic between the civilization
    1:13:07 on Earth and Mars will be fundamentally different than the dynamic between
    1:13:09 individual nations on Earth right now?
    1:13:12 Like that’s, that’s the thing to load into the simulate, the agent-based
    1:13:13 simulation we’re talking about.
    1:13:17 If we settle it, Mars will very quickly want to become its own nation.
    1:13:21 Well, no, there’s already going to be nations on Mars.
    1:13:22 That’s guaranteed.
    1:13:22 Yeah.
    1:13:25 The moment you have two million people, one, the moment you have one million
    1:13:27 people, there’s going to be two tribes.
    1:13:29 And then they’re going to start fighting.
    1:13:30 Right.
    1:13:33 And the question is interplanetary fighting.
    1:13:34 How quickly does that happen?
    1:13:36 And does it have a different nature to it?
    1:13:38 Because of the distances, you know?
    1:13:40 Are you a fan of The Expanse?
    1:13:41 Do you have you watch The Expanse?
    1:13:42 Great show.
    1:13:45 Cause it’s all about the, I highly recommend to everybody.
    1:13:46 It’s based on a series of books that are excellent.
    1:13:48 It’s on prime, six seasons.
    1:13:50 And it’s basically about the settled solar system.
    1:13:53 It takes place about 300 years from now and the entire solar system is settled.
    1:13:57 And it is the best show about interplanetary politics.
    1:14:02 The first season, actually, the journal, what was it, foreign, foreign affairs said
    1:14:06 the best show on TV about politics, it takes place is interplanetary.
    1:14:09 So yeah, I think, you know, human beings being human beings.
    1:14:12 Yes, where there will be warfare and there will be conflict.
    1:14:16 I don’t think it’ll be necessarily all that different, you know, because really,
    1:14:20 I think within a few hundred years, we will have lots of people in the solar system.
    1:14:22 And it doesn’t even have to be on Mars.
    1:14:26 We did a paper where we look based on, because I wanted to know about whether
    1:14:29 an idea in the expanse was really possible.
    1:14:32 In the expanse, the asteroid belt, what they’ve done is they have
    1:14:35 colonized the asteroid belt by hollowing out the asteroids and spinning them up
    1:14:37 and living on the inside, right?
    1:14:40 Because they have the Coriolis force and I thought, like, wow, what a cool idea.
    1:14:44 And when I ran the blog for NPR, actually talked to the guys and said,
    1:14:47 did you guys calculate this to see if whether it’s possible?
    1:14:48 Sadly, it’s not possible.
    1:14:52 The rock is just not strong enough that if you tried to spin it up
    1:14:56 to the speeds you need to get one third gravity, which is what I think
    1:14:59 the minimum you need for human beings, the rock would just fall apart.
    1:14:59 It would break.
    1:15:03 But we came up with another idea, which was that if you could take small
    1:15:07 asteroids, put a giant bag around them, a nanofiber bag and spin those up.
    1:15:12 It would inflate the bag and then even a small couple of kilometer wide
    1:15:18 asteroid would expand out to you could get like a Manhattan’s worth of material inside.
    1:15:21 So forget about even colonizing Mars, space stations, right?
    1:15:24 Or space habitats with millions of people in them.
    1:15:28 So anyway, the point is that I think, you know, within a few hundred years,
    1:15:32 it is not unimaginable that there will be millions, if not billions,
    1:15:34 of people living in the solar system.
    1:15:38 And you think most of them will be in space habitats versus on Mars
    1:15:39 and on the planetary surface?
    1:15:42 You know, it’s a lot easier on some on some level, right?
    1:15:44 It depends on how like with nanofabrication and such.
    1:15:47 But, you know, getting down to gravity well is hard, right?
    1:15:50 So, you know, there’s a certain way in which there’s a lot of, you know,
    1:15:53 it’s a lot easier to build real estate out of the asteroids.
    1:15:54 But we’ll probably do both.
    1:15:56 I mean, I think what will happen is, you know, the next,
    1:16:00 should we make it through climate change and nuclear war and all the other and AI?
    1:16:05 The the next thousand years of human history is the solar system, right?
    1:16:10 And so, you know, I think we’ll settle every nook and cranny we possibly can.
    1:16:14 And it’s, you know, it’s a beautiful, what I love about what’s hopeful about it
    1:16:16 is this idea you’re going to have all of these pockets.
    1:16:20 And, you know, I’m sure there’s going to be a Mormon space habitat, like, you know,
    1:16:23 there’s going to be whatever you want, a libertarian space habitat.
    1:16:25 Everybody’s going to be able to kind of create there.
    1:16:27 There’ll be lots of experiments in human flourishing.
    1:16:31 And those kinds of experiments will be really useful for us to sort of figure
    1:16:36 out better ways for us to interact and have maximum flourishing, maximum wellness,
    1:16:38 maximum democracy, maximum freedom.
    1:16:42 Do you think that’s a good backup solution to go out into space
    1:16:47 sort of to avoid the possibility of humans destroying themselves completely here on Earth?
    1:16:50 Well, I think, you know, I want to be always careful with that because,
    1:16:53 like I said, it’s centuries that we’re talking about, right?
    1:16:57 So, you know, the problem with climate change and same with nuclear war,
    1:16:58 it’s breathing down our necks now.
    1:17:04 So it’s not a, you know, trying to establish a base on Mars is going to be
    1:17:09 so hard that it is not even going to be close to being self-sufficient for a couple
    1:17:11 of, you know, a century at least.
    1:17:13 So it’s not like a backup plan now.
    1:17:16 You know, we have to solve the problem of climate change.
    1:17:17 We have to deal with that.
    1:17:19 There’s still enough nuclear weapons to really do our, you know,
    1:17:22 horrific things to the planet for human beings.
    1:17:24 So I don’t think it’s like a backup plan in that way.
    1:17:26 But I do think, like I said, it’s the prize.
    1:17:31 It’s, you know, if we get through this, then we get the entire solar system to
    1:17:35 sort of play around in and experiment with and do really cool things with.
    1:17:38 Well, I think it could be a lot less than a couple of centuries.
    1:17:43 If there’s a urgency, like a real urgency, like a catastrophe, like,
    1:17:49 maybe a small nuclear war breaks out where it’s like, holy shit,
    1:17:52 this is for sure, for sure a bigger one is looming.
    1:17:56 Yeah, maybe if geopolitically, the war between China and the United States
    1:18:00 escalates, where there’s this tension that builds and builds and builds.
    1:18:03 And it becomes more obvious that we need to really, really be that story.
    1:18:09 I think my only dilemma with that is that I just think that a self-sufficient base
    1:18:12 is so far away that, like I say, you start doing that.
    1:18:14 And then there is a full-scale nuclear exchange.
    1:18:17 That base is, you know, it’s not going to last because it’s just, you know,
    1:18:22 the self-sufficiency requires a kind of economy, like literally a material
    1:18:27 economy that we are so far from with Mars, that we are centuries from.
    1:18:30 Like I said, you know, three centuries, which is not that long.
    1:18:34 Two to three centuries, you know, look at 1820, nobody had traveled faster
    1:18:37 than 60 miles an hour unless they were falling off a cliff, right?
    1:18:41 And now we routinely travel at 500 miles an hour, but it is sort of centuries long.
    1:18:45 So that’s why I think, I think we’d be better off trying to solve these problems
    1:18:49 than, you know, I just think the odds that we’re going to be able to create
    1:18:57 a self-sufficient colony on Mars before that threat comes to head is small.
    1:18:58 So we’d have to deal with the threat.
    1:19:02 Yeah, it’s an interesting scientific and engineering question of how to create
    1:19:06 a self-sufficient colony on Mars or out in space as a space habitat.
    1:19:10 Like where Earth entirely could be destroyed, you could still survive.
    1:19:11 Yeah, yeah.
    1:19:13 Because it’s really what about, you know, thinking about complex systems, right?
    1:19:21 A space habitat, you know, would have to be as robust as an ecosystem, as the kind
    1:19:24 of thing, you know, you go out and you see a pond with all the different webs
    1:19:25 of interactions.
    1:19:30 You know, that’s why I always think that, you know, if this process of going
    1:19:34 out into space is actually will help us with climate change and with thinking
    1:19:38 about making a long-term sustainable version of human civilization, because
    1:19:42 you really have to think about these webs, the complexity of these webs
    1:19:44 and recognize the biosphere has been doing this forever.
    1:19:46 The biosphere knows how to do this, right?
    1:19:50 And so, A, how do we support, how do we build a vibrant, powerful
    1:19:55 techno sphere that also doesn’t, you know, mess with the biospheres, mess
    1:19:58 with the biosphere’s capacity to support our techno sphere?
    1:20:01 So, you know, by doing this, by trying to build space habitats in some
    1:20:04 sense, you’re thinking about building a small scale version of this.
    1:20:07 So I think, I think the two problems are going to kind of feed back on each other.
    1:20:12 Well, there’s also the other possibility of, uh, like the movie, uh,
    1:20:16 Darren Aronofsky’s postcard from Earth, where we can create this kind
    1:20:22 of life gun that just shoots, so as opposed to, uh, engineering everything.
    1:20:27 Basically seeding life on a bunch of places and letting life do its thing,
    1:20:31 which is really good at doing, it seems like, so as opposed to like the, with
    1:20:36 a space habitat, you basically have to build the entire biosphere and techno
    1:20:38 sphere, the whole, the whole thing, by yourself.
    1:20:42 Yeah, uh, you know, if you just, hey, the aforementioned cockroach
    1:20:48 with some bacteria, place it in Europa, uh, I think you’d be surprised what happens.
    1:20:48 Yeah.
    1:20:49 Right.
    1:20:55 Like honestly, if you put a huge amount of bacteria, like a giant
    1:21:02 number of organisms from Earth into, uh, on Mars, on, uh, some of these moons
    1:21:06 of the other planets in the solar system, do you think like, I feel like
    1:21:08 some of them would actually find a way to survive?
    1:21:11 I, you know, the moon is hard because the moon is just like, there’s no, you
    1:21:14 know, the moon may be really hard, but you know, that’d be, I mean, I wonder
    1:21:16 if somebody’s must have done these experiments, right?
    1:21:18 Like how, because we know there are extremophiles, right?
    1:21:21 We know that they’re, you can go down, you know, 10 miles below the Earth’s
    1:21:24 surface and there are things where there’s no sunlight.
    1:21:29 There’s, you know, the conditions are so extreme and there’s lots of microbes
    1:21:32 having a great time, living off the radioactivity, you know, in the rocks.
    1:21:36 But, you know, they had lots of time to evolve to those conditions.
    1:21:41 So I’m not sure if you dumped a bunch of bacteria, you know, so somebody,
    1:21:42 like somebody must have done these experiments.
    1:21:50 Like, you know, how fast could microbial evolution occur in under harsh
    1:21:54 conditions that you maybe get somebody who figures out, okay, I can do with this.
    1:21:56 I think the moon’s too much because it’s so sterile.
    1:21:59 But, you know, Mars, I don’t know, maybe, I don’t know.
    1:22:01 We’d have to, that, but it’s an interesting idea.
    1:22:03 I wonder if somebody has done those experiments.
    1:22:06 Yeah, you think somebody would, like, let’s take a bunch of microbes.
    1:22:09 The harsh, take the harshest possible condition of all different kinds,
    1:22:10 temperature, all this kind of stuff.
    1:22:13 Right, pressure, salinity, and then just, like, dump a bunch of things
    1:22:17 that are not used to it and then just see, does everybody just die?
    1:22:18 You know, that’s it.
    1:22:18 There’s, you know.
    1:22:23 The thing about life, it, it flourishes in a non-sterile environment where
    1:22:27 there’s a bunch of options for resources, even if the condition is super harsh.
    1:22:32 In the lab, I don’t know if you can reconstruct harsh conditions plus options
    1:22:33 for survival.
    1:22:34 You know what I mean?
    1:22:40 Like, you have to have the, the, the huge variety of resources that are always
    1:22:44 available on a planet somehow, even when it’s in super harsh conditions.
    1:22:47 So that, so that’s actually not a trivial experiment.
    1:22:50 And I wouldn’t even, if somebody did that experiment in the lab, I’d be a little
    1:22:55 bit skeptical because, like, if, because I could see bacteria doesn’t survive
    1:22:59 in this kind of temperature, but then I’m feeling, I don’t know, I don’t know.
    1:23:00 Is there enough, right?
    1:23:03 Is that, you know, is there, are there other options?
    1:23:05 Like, you know, is the condition rich enough?
    1:23:06 Rich enough, yeah.
    1:23:08 You know, there’s, there’s an alternative view, though, which is there’s
    1:23:11 this great book by Kim Stanley Robinson called Aurora.
    1:23:15 You know, so there’s been a million, um, century ship stories, like where, you
    1:23:19 know, Earth sends out a, you know, generation ship or century ship and it goes
    1:23:21 to another planet and they land and they colonize.
    1:23:24 And on this one, they get all the way there and they think it’s, the plan’s
    1:23:28 going to be habitable and it turns out that it’s not habitable for earth life.
    1:23:30 Like that, you know, there’s, there’s like, you know, bacteria or prions
    1:23:35 actually, you know, super that just like, you know, kill people in the simplest way.
    1:23:38 Um, and the, the important thing about this book was the idea that like, you
    1:23:41 know, life is actually very tied to its planet.
    1:23:42 It may not be so easy.
    1:23:44 I just thought it was a really interesting idea.
    1:23:49 I’m not necessarily supporting it, but that actually life reflects the planetary
    1:23:53 conditions that not the planetary, the planet itself, the whole lineage, the
    1:23:57 whole history of the biosphere, and it may not be so easy this to, to just sort
    1:24:00 of be like, Oh, just drop it over here and it’ll, you know, cause the bacteria,
    1:24:03 even though they’re individual examples of life and they kind of believe this,
    1:24:07 the true unit of life, it’s not DNA, it’s not a cell.
    1:24:08 It’s the biosphere.
    1:24:10 It’s the whole community.
    1:24:10 Yeah.
    1:24:15 That’s actually an interesting field of study is how when you arrive from one
    1:24:20 planet to another, so we humans arrive to a planet that has a biosphere, maybe
    1:24:29 a techno sphere, what is the way to integrate without killing yourself or,
    1:24:31 or the other one, or the other one.
    1:24:33 That’s, let’s stick to biology.
    1:24:35 Like that, that’s an interesting question.
    1:24:41 I don’t know if we have a rigorous way of investigating that.
    1:24:45 Because everybody, everything on life is, you know, has the same lineage.
    1:24:48 We all come from Luca, you know, the last universal common ancestor.
    1:24:50 And what you see is often in science fiction, people will do things like,
    1:24:56 oh, well, it’s okay because like that bio, that metabolism, that biochemistry is so
    1:24:59 different from ours that we can coexist because they don’t even know each other,
    1:24:59 you know, right?
    1:25:02 That the, you know, and then the other version is you get there, you land and
    1:25:04 instantly, you know, the nose bleeds and you’re dead.
    1:25:08 Unfortunately, I think it’s the latter.
    1:25:11 Yeah, it sort of feels like, it’s the more alien kind of thing.
    1:25:17 So as we look out there, according to the Drake Equations, we just discussed,
    1:25:20 seems impossible to me that there’s not civilizations everywhere.
    1:25:21 So how do we look at them?
    1:25:22 This process of SETI.
    1:25:27 I have to put on my scientist hat and just say, my gut feeling is that dumb life,
    1:25:28 so to speak, is common.
    1:25:33 I am a little agnostic about, I can see ways in which intelligence civilizations
    1:25:38 may be sparse, but, but until, you know, we got to go look, it’s all, it’s all armchair,
    1:25:39 armchair astronomy.
    1:25:41 That’s, that’s from a sort of rigorous scientific perspective.
    1:25:46 From my bro science perspective, it seems, again, smoking the, the aforementioned weed.
    1:25:52 Yeah, after the bomb, yeah, I mean, honestly, it’s, it’s really just, it’s
    1:25:58 impossible to me that there’s not potentially dead, but advanced civilizations
    1:26:00 everywhere in our galaxy.
    1:26:00 Yeah.
    1:26:00 Yeah.
    1:26:02 The potentially dead port, I think.
    1:26:02 Right.
    1:26:05 It could be that, like, making civilizations is easy.
    1:26:06 They just don’t last long.
    1:26:09 So what we, when we went out there, we’d find a lot of extinct civilizations.
    1:26:10 Extinct civilizations.
    1:26:11 Yeah.
    1:26:13 Apex predators don’t survive.
    1:26:17 Like they, they get, get better, better, better and they die and kill themselves
    1:26:17 all somehow.
    1:26:20 Anyway, so just how do we find them?
    1:26:20 Yeah.
    1:26:26 So SETI, search for extraterrestrial technology, is a term that I am not fond of
    1:26:27 using anymore.
    1:26:30 I mean, some people in my field are, so I’m sorry, folks.
    1:26:34 But I’m really, what I really like is the idea of techno signatures.
    1:26:38 Cause I think, you know, to me, SETI is the, first of all, intelligence.
    1:26:39 We’re not really looking for intelligence.
    1:26:40 We’re looking for technology.
    1:26:45 I mean, you know, and SETI, the classic idea SETI is the radio telescopes,
    1:26:47 you know, in contact, Jody Foster with the headphones.
    1:26:50 That whole thing is still part, it’s still active.
    1:26:52 There’s still great things going on with it.
    1:26:54 But suddenly this whole new window opened up.
    1:27:00 When we discovered exoplanets, we now found a new way to look for
    1:27:04 intelligent civilizations or life in general, in a way that doesn’t have any
    1:27:07 of the assumptions that have to go into the classic radio SETI.
    1:27:11 And specifically what I mean is we’re not looking for somebody sending us a beacon.
    1:27:16 You really needed that with the classic model for a bunch of different reasons.
    1:27:19 You have to assume they wanted to be found and they were sending you a super
    1:27:19 powerful beacon.
    1:27:25 Now, because we know exactly where to look and we know exactly how to look, we
    1:27:30 can just go about looking for passive signatures of the civilization, going
    1:27:35 about its civilization in business, you know, without asking whether they want
    1:27:36 to be contacted or not.
    1:27:39 So this is what we call a biosignature or a techno signature.
    1:27:46 It is an imprint in the light from the planet of the activity of a biosphere
    1:27:47 or a techno sphere.
    1:27:47 And that’s really important.
    1:27:51 Yeah, that, that, that is why kind of the whole Gaia idea ends up being
    1:27:56 astrobiological, that biospheres and techno spheres are so potent, they
    1:27:58 change the entire planet.
    1:28:00 And you can see that from 20 light years.
    1:28:03 So let’s give an example of a biosignature to start off with, which
    1:28:07 would be a signature of a biosphere, oxygen, right?
    1:28:11 And on earth, at least, we know that oxygen is only in the atmosphere
    1:28:13 because life put it there.
    1:28:16 If life went away, the oxygen and particularly oxygen and methane, that
    1:28:19 pair, they would disappear, you know, very quickly.
    1:28:21 They’d react away, they’d all be gone.
    1:28:27 So if you find a planet with oxygen and methane, that’s a good bet that there’s
    1:28:28 a biosphere there.
    1:28:30 Okay, what about techno spheres?
    1:28:34 techno spheres, this is what, you know, so I’m the principal investigator on
    1:28:39 the first grant NASA has ever given to do these kind of exoplanet techno
    1:28:40 signatures.
    1:28:43 NASA was kind of, for reasons we can talk about, NASA had gotten pretty
    1:28:46 gun shy about funding anything about intelligent life.
    1:28:49 But okay, what’s an example of a techno signature?
    1:28:51 Well, one could be atmospheric pollution.
    1:28:54 I’m going to put pollution in quotes here because it doesn’t have to be
    1:28:56 pollution, but gases like chlorofluorocarbons.
    1:29:00 So we’ve dumped, you know, we dumped a huge amount of chlorofluorocarbons into
    1:29:02 the atmosphere by mistake.
    1:29:06 It was affecting the ozone, but we put so much in there that actually this is
    1:29:06 one of the things we did.
    1:29:10 We did a paper where we showed you could detect it across interstellar distances.
    1:29:15 You could look at the atmosphere, look at the light coming from a distant planet,
    1:29:19 pass the light through a spectrograph and see the, the spectral lines, the
    1:29:24 fingerprint, the spectral fingerprint of chlorofluorocarbons in an atmosphere.
    1:29:28 And that would for sure tell you that that word, there was a technological
    1:29:32 civilization there because there’s no other way to make chlorofluorocarbons
    1:29:35 except through some kind of industrial process.
    1:29:39 So you’re looking for, in the case of the biosphere, you’re looking for anomalies
    1:29:41 in the spectrograph.
    1:29:43 I wouldn’t necessarily call these anomalies.
    1:29:47 I’m looking for things that for biosignature, I’m looking for things that
    1:29:48 a geosphere, right?
    1:29:51 You know, that just rock and air wouldn’t produce on its own.
    1:29:53 What kind of chemicals would life produce?
    1:29:53 Right.
    1:29:56 And that’s, that’s part of the, that’s the interesting thing, right?
    1:29:59 So that’s what, you know, so we can use earth as an example, right?
    1:30:02 We can say, look, oxygen, we know there would be no oxygen in the atmosphere
    1:30:07 if it wasn’t for dimethyl sulfide, which is a compound that phyloplankton dump
    1:30:09 into the atmosphere, a lot of it, that’s sometimes mentioned.
    1:30:12 And there was even, there was a paper that somebody wrote where it was like,
    1:30:16 well, we’re not saying we see it, but, you know, there’s a bunch of noise
    1:30:17 in the spectra right there.
    1:30:22 So, you know, there’s a whole list of things that earth has done that are in
    1:30:24 the atmosphere that might be biosignatures.
    1:30:26 But now we’re reaching an interesting point.
    1:30:30 The field has matured to the point where we can start asking about agnostic
    1:30:34 biosignatures, things that have nothing to do with earth’s history.
    1:30:40 But we think that, that would still be indications of this weirdness we call life.
    1:30:40 Right?
    1:30:44 What, what is it in general that life does that leaves an imprint?
    1:30:49 So one of these things could be the structure of the network of chemical reactions.
    1:30:52 That biology always produces very different chemical networks.
    1:30:53 Who’s reacting with who?
    1:30:56 Then just rock and water, right?
    1:31:02 So, so there’s been some proposals for networked, you know, biosignatures.
    1:31:06 Information theory, you can use, you can try and look at the information
    1:31:11 that is in the different compounds that are you find in the atmosphere.
    1:31:14 And maybe that information shows you like, oh, if there’s too much
    1:31:16 information here, there must have been biology happening.
    1:31:17 It’s not just rock.
    1:31:18 Same thing for techno.
    1:31:22 We’re, that’s what we’re working on right now, that for techno signatures as well.
    1:31:25 So how do you detect techno signatures?
    1:31:25 Okay.
    1:31:28 So with techno signatures, I gave the example of chlorofluorocarbons.
    1:31:32 So that would be an example of, and again, that one is a non-agnostic one
    1:31:34 because we sort of like, oh, we produced chlorofluorocarbons.
    1:31:35 Maybe they will, right?
    1:31:37 And there’s solar panels, right?
    1:31:42 You can actually, the glint off of solar panels will produce a, the way the light
    1:31:46 is reflected off of solar panels, whether it, no matter what it’s made out of,
    1:31:51 actually, there was a paper that Monazvi Lingam and Avi Loeb did in, I think
    1:31:53 it was 2017, we’ve just followed up on it.
    1:31:55 That actually could act as a techno signature.
    1:31:59 You’d be able to see in the reflected light, this sort of big jump that would
    1:32:04 occur because of city lights, city, artificial illumination.
    1:32:08 If the, if there’s really like, you know, large scale cities, like, you know,
    1:32:13 Coruscant and Star Wars or Trent or in the foundation, those city lights would
    1:32:18 be detectable, you know, the spectral imprint of those across 20, 30 light years.
    1:32:23 So, you know, our job in this grant is to develop the first ever library of
    1:32:24 techno signatures.
    1:32:26 Nobody’s really ever thought about this before.
    1:32:32 So we’re trying to come up with all the possible ideas for what a civilization
    1:32:37 might produce that could be visible across, you know, interstellar distances.
    1:32:40 And are these good ones, or is these ones going to be hard to detect or such?
    1:32:42 City lights.
    1:32:47 So if a planet is all lit up with artificial light across 20 to 30 light years,
    1:32:48 we can see it.
    1:32:49 Yeah.
    1:32:52 If you looked at Earth at night from a distance where, you know, looked at
    1:32:56 spectra and you had sensitive enough instruments, you’d be able to see all the
    1:33:00 sodium lights and the reflected light off of, you know, they bounce off the ground,
    1:33:02 right, that the light bounces off the ground.
    1:33:07 So you’d convolve the, the sodium lamps with the reflected spectra from the
    1:33:09 ground and yeah, you’d be able to see that there’s city lights.
    1:33:13 Now, increase that by a factor of a thousand, you know, if you had a, a
    1:33:17 Trantor and you’d be able to detect that across interstellar distances.
    1:33:19 Thomas Beatty did this work, who’s now working with us.
    1:33:23 What do you think is the most detectable thing about Earth?
    1:33:26 Uh, wow, we just, this is fun.
    1:33:29 We just have a Sophia Schief, who’s part of our collaboration, just did a paper.
    1:33:30 We did Earth from Earth.
    1:33:35 If you were looking at Earth with Earth technology for a bunch of different
    1:33:39 techno signatures, how close would you have to be to be able to detect them?
    1:33:42 And most of them turn out to be, you’d have to be pretty close, at least out to
    1:33:45 the Oort cloud, but actually it’s, it is our radio signatures still.
    1:33:47 That is still most detectable.
    1:33:49 By the way, when you said you had to be pretty close and then you said the Oort
    1:33:52 cloud, that’s not very close, but you mean like from an interstellar.
    1:33:53 Interstellar distance.
    1:33:55 Cause the real question, you know, we really want to know is like, I’m sitting
    1:33:57 here on Earth, I’m looking at these exoplanets.
    1:34:00 The nearest star is four light years away.
    1:34:02 So that’s like the minimum distance.
    1:34:07 Um, so what can, if I’m looking at exoplanets, what kind of signals could I
    1:34:07 see?
    1:34:12 What is detectable about Earth with our current technology from the, our
    1:34:13 nearest solar system?
    1:34:14 Oh my God, there’s all kinds of stuff.
    1:34:18 Well, like our, our, the, the, um, chlorofluorocarbons, you can see, you
    1:34:21 know, you can see Earth’s pollution and you know, I think city lights, you
    1:34:25 had to be within, you know, within the solar system.
    1:34:29 If they do direct imaging of Earth, they’re going to need much more powerful.
    1:34:32 But let me tell you what the, let’s, let’s talk about direct imaging for a
    1:34:33 moment, because I just have to go on.
    1:34:34 This is such a cool idea, right?
    1:34:38 So what we really want, and the next generation of space telescopes and such
    1:34:39 is we’re trying to do direct imaging.
    1:34:44 We’re trying to get, uh, you know, an image of a planet separated from its
    1:34:47 star to be able to see the reflected light or the actual emission from the
    1:34:48 planet itself.
    1:34:48 Yeah.
    1:34:52 By the way, just to clarify, direct imaging means literally like a picture.
    1:34:53 A picture.
    1:34:56 But the problem is, is that with the, even with the, the, the prep, the
    1:35:00 thing that’s going to come after JWST, it’s going to be a pixel, right?
    1:35:01 You’re not going to get any kind of resolution.
    1:35:03 You’ll be able to get the light from it, which you’ll be able to pass
    1:35:05 through a spectrograph, but you’re not going to be able to take a picture.
    1:35:10 But there is this idea called the solar gravity lens telescope.
    1:35:11 I think that’s what it is.
    1:35:13 And the idea is insane, right?
    1:35:16 So their general relativity says, look, massive bodies distort space.
    1:35:18 They actually curve space time.
    1:35:21 So, um, the sun is a massive body.
    1:35:25 And so that means that the light passing through the sun gets focused.
    1:35:26 Like a lens, right?
    1:35:30 So the idea is to send a bunch of telescopes out kind of into the
    1:35:35 Oort cloud and then look back towards the sun, towards an exoplanet that is
    1:35:39 behind, not directly behind the sun, but is, you know, in the direction of the
    1:35:44 sun, and then let the, let the sun act like a lens and collect, focus the
    1:35:45 light onto the telescope.
    1:35:49 And you would be able to get, and they’ve done, it’s amazing.
    1:35:50 Like they’ve already, this idea is insane.
    1:35:55 They’d be able to get, if everything works out, 24 kilometer resolution.
    1:35:59 You’d be able to see Manhattan on a exoplanet.
    1:36:02 And this thing, it sounds insane, but actually, you know, NASA, they’ve
    1:36:06 already got, the team has already gotten through like sort of three levels of NASA.
    1:36:09 You know, there’s, there’s the NASA program for like, give us your wackiest idea.
    1:36:10 Right.
    1:36:13 And then the ones that survive that are like, okay, tell us whether that wacky
    1:36:15 idea, you know, is even feasible.
    1:36:16 And then, and they’re marching along.
    1:36:20 And the idea is that like, you know, and they even have plans for how you’d be
    1:36:25 able to get these probes out into the Oort cloud on relatively fast timescales.
    1:36:30 You need to be about 500 times as far from the sun as Earth is.
    1:36:33 Um, but right now everything looks like the idea seems to hold together.
    1:36:38 So probably when I’ll be dead, but when you’re an old man, um, it’s
    1:36:41 possible that something like this, could you imagine having like, yeah,
    1:36:46 res, that kind of resolution, a picture of an exoplanet down to, you know,
    1:36:47 kilometers.
    1:36:49 So I’m very excited about that.
    1:36:52 I can only imagine having a picture like that.
    1:36:56 And then there’s some, um, mysterious artifacts that you’re seeing.
    1:36:57 Yeah.
    1:37:03 I mean, it’s both, um, inspiring and, and almost heartbreaking that we
    1:37:09 can see, like, I think we would be able to see a civilization where there’s
    1:37:12 like a lot of scientists agree that this is very likely something and then we
    1:37:14 can’t, we can’t get there.
    1:37:17 But you know, I mean, again, this is the thing about being a long lived.
    1:37:20 We’ve got to get to the point where we’re long lived enough that, so let’s
    1:37:23 say we found like, this is what I always liked to, let’s imagine that we
    1:37:27 find, say 10 light years away, we find a planet that looks like it’s got
    1:37:28 techno signatures, right?
    1:37:29 It doesn’t end there.
    1:37:32 Like that would be the most important discovery in the history of humanity.
    1:37:34 And it wouldn’t be like, well, okay, we’re done.
    1:37:38 The first thing we do is we’d big, bigger telescope to try and do those
    1:37:38 imaging, right?
    1:37:41 And then the next thing after that, we plan a mission there, right?
    1:37:46 There’s there, we would figure out, like with breakthrough, breakthrough
    1:37:50 star shot, there was this idea of trying to use, you know, giant lasers to
    1:37:55 propel small spacecrafts, light sails, almost to the speed of light.
    1:37:57 So they would get there in 10 years and take pictures.
    1:38:00 And so we’ll, you know, if we actually made this discovery, there would be
    1:38:05 the impulse, there would be the effort to actually try and send something to,
    1:38:06 to get there.
    1:38:10 Now, you know, we probably couldn’t land, we could, but the, you know,
    1:38:14 so maybe we, maybe we take 30 years to build, 10 years to get there, 10
    1:38:15 years to get the picture back.
    1:38:18 Okay, you’re dead, but your kids are, you know what I mean?
    1:38:20 So it becomes now this multi-generational project.
    1:38:22 How long did it take to build the pyramids?
    1:38:25 How long did it take to build the giant cathedrals, right?
    1:38:27 Those were multi-generational projects.
    1:38:30 And I think we’re on the cusp of that kind of project.
    1:38:33 I think that would probably unite humans.
    1:38:34 I think it would play a big role.
    1:38:35 I think it would be helpful.
    1:38:36 I mean, human beings are a mess.
    1:38:37 Let’s face it.
    1:38:41 But I think having that record, that’s why I always say to people, discovery
    1:38:44 of life of any kind of life, even if it was microbial life, it wouldn’t matter.
    1:38:48 That to know that we’re not an accident, to know that there is probably, if we
    1:38:50 found one example of life, we’d know that we’re not an accident and there’s
    1:38:53 probably lots of life and that we’re a community.
    1:38:56 We’re part of a cosmic kind of community of life.
    1:38:58 And who knows what life has done, right?
    1:39:00 We don’t really, all bets are off with life.
    1:39:04 Since we’re talking about the future of telescopes, let’s talk about our
    1:39:08 current super sexy, awesome telescope, the James Webb Space Telescope, that I
    1:39:10 still can’t believe actually worked.
    1:39:10 I can’t believe it worked.
    1:39:12 I was really skeptical.
    1:39:15 I was like, okay, guys, all right, sure.
    1:39:20 We only got one shot for this incredibly complicated piece of hardware to unfold.
    1:39:23 So what kind of stuff can we see with it?
    1:39:27 I’ve been just looking through different kinds of announcements that have been
    1:39:29 detected, there’s been some direct imaging.
    1:39:30 Yes, like a single pixel.
    1:39:36 The kinds of exoplanets we’re able to direct image, I guess would have to be hot.
    1:39:40 Hot, usually far away from the, you know, reasonably far away from the star.
    1:39:43 I think, you know, JWST is really kind of at the hairy edge of being able to do
    1:39:44 much with this.
    1:39:47 What’s more important, I think, for JWST is the spectra.
    1:39:49 And the problem with spectra is that there’s not sexy pictures.
    1:39:51 It’s like, hey, look at this wiggly line.
    1:39:57 But be able to find and characterize atmospheres around terrestrial exoplanets
    1:40:00 is the critical next step.
    1:40:01 That’s where we are right now.
    1:40:04 In order to look for life, we’re going to be, we need to find planets with
    1:40:05 atmospheres, right?
    1:40:09 And then we need to be able to do this thing called characterization, where
    1:40:12 we look at the spectral fingerprints for what’s in the atmosphere.
    1:40:13 Is there carbon?
    1:40:14 Is there carbon dioxide?
    1:40:15 Is there oxygen?
    1:40:15 Is there methane?
    1:40:18 Um, and that’s the most exciting thing.
    1:40:23 For example, there was this planet K218B, which had, they did a beautiful
    1:40:24 job getting the spectra.
    1:40:28 And the spectra indicated it may be an entirely new kind of habitable world
    1:40:30 called a Hysian world.
    1:40:33 Hysian meaning hydrogen ocean world.
    1:40:37 And that is a kind of planet that it would be a, uh, you know, kind of in the
    1:40:41 super earth sub-neptune domain we were talking about, you know, maybe eight times
    1:40:46 that mass of the earth, but it’s got a layer of hydrogen of an atmosphere of hydrogen.
    1:40:48 Hydrogen is an amazing greenhouse gas.
    1:40:53 So hydrogen will keep the, uh, the planet underneath it warm enough that
    1:40:55 you could get liquid water.
    1:40:59 You can get a giant ocean of, uh, uh, of liquid water.
    1:41:02 And that’s an entirely different kind of planet that could be habitable planet.
    1:41:05 You know, it could be a 60 degree warm ocean.
    1:41:11 So the data that came out of JWST for that planet was good enough to
    1:41:14 be able to indicate like, oh yeah, you know what the models from what we
    1:41:17 understand what the models, this looks like it’s a, it could be a Hysian world.
    1:41:20 And it’s 120 light years away from earth.
    1:41:21 Yeah.
    1:41:22 And so isn’t that amazing?
    1:41:25 You can, it’s 120 light years away, but we can see into the atmosphere.
    1:41:29 We can see to the atmosphere so well that we can be like, oh, look, methane,
    1:41:32 methane was a five sigma detection.
    1:41:37 Like you knew that the data were so good that it was like the gold standard of science.
    1:41:42 What about detecting, uh, maybe, uh, the direct imaging or in other
    1:41:48 ways, megastructures that the civilizations build, you know, what’s great
    1:41:50 about megastructures is first of all, it’s fun to say, who doesn’t want to say
    1:41:52 megastructure, alien megastructure, right?
    1:41:55 Every morning I’m looking for an opportunity to say that.
    1:42:00 Um, so the, the, the, the, er, example of this is the Dyson sphere, right?
    1:42:00 Which is amazing.
    1:42:03 Cause, you know, it was literally 1960 that this idea came up.
    1:42:04 Can you explain the Dyson sphere?
    1:42:05 Yeah, the Dyson sphere.
    1:42:08 So Freeman Dyson, you know, one of the greatest physicists ever, um, who had
    1:42:11 was very broad minded and thought about a lot of different things.
    1:42:15 He recognized that, you know, when a civilization, as civilizations progress,
    1:42:19 what they’re going to need is ever more energy to do ever more, you know,
    1:42:20 amazing things.
    1:42:22 And what’s the best energy source in a solar system?
    1:42:23 It’s the star, right?
    1:42:29 So if you surrounded the star with solar collecting machine, sunlight
    1:42:34 collecting machines, um, and the limit of this would actually build a sphere
    1:42:37 and actual sphere around your star that had all solar panels on the inside.
    1:42:41 You could capture every photon the star produced, which is, you know,
    1:42:43 this insane amount of light.
    1:42:47 You would have enough power now to do anything to re-engineer your solar system.
    1:42:48 Um, so that was a Dyson sphere.
    1:42:51 It turns out that a Dyson sphere doesn’t really work cause it’s unstable.
    1:42:55 You know, but a Dyson swarm is, and that’s really what he meant.
    1:43:00 You know, this large collection of large orbiting structures that we’re
    1:43:01 able to collect light.
    1:43:01 Yeah.
    1:43:05 So he didn’t actually mean a rigid sphere structure.
    1:43:06 Yeah.
    1:43:07 He basically meant a swarm.
    1:43:11 So that, like you said, and then the limit basically starts to look.
    1:43:13 People started to say, yeah, it was like a sphere.
    1:43:17 And we actually almost thought we might have found one of these, um, uh,
    1:43:19 back with, uh, a Bajoyan star.
    1:43:22 We saw, you know, the way we detect planets is through the transit method
    1:43:26 where the planet passes in front of the star and there’s a dip in the star light.
    1:43:27 It’s a little eclipse basically.
    1:43:29 And we know exactly what they should look like.
    1:43:33 And then with this one star, there were these really weird transits where like,
    1:43:36 it was like this little dragon’s tooth and then there’d be another one
    1:43:39 and another one and another one and then nothing and then three more.
    1:43:43 And in the paper that was written about this, they suggested they, you know,
    1:43:45 they went through the list of they, oh, it’s could be comets,
    1:43:46 could be chunks of a broken up planet.
    1:43:49 And it could also be an alien megastructure.
    1:43:52 And of course the news picked up on this and like everybody’s, you know,
    1:43:54 newsfeed the next day, alien megastructures discovered.
    1:43:58 Turns out, sadly, they were not alien megastructures.
    1:44:00 They were probably guests or dust clouds.
    1:44:03 Um, but it raised the possibility like, oh, these are observable.
    1:44:06 And people have worked out the details of what they would look like.
    1:44:08 You don’t really need direct imaging.
    1:44:09 You can do transits, right?
    1:44:11 They’re big enough that when they pass in front of the star,
    1:44:13 they’re going to produce a little blip of light because that’s what
    1:44:14 they’re supposed to write.
    1:44:15 They’re absorbing starlight.
    1:44:19 So people did have worked out like, well, a square one or a triangular one,
    1:44:20 but that wouldn’t be a distance fear.
    1:44:23 There would be like one object, one object, right?
    1:44:25 Which is what, if it’s a swarm, you’d expect like the light to be like
    1:44:28 blinking in and out as these things pass in front of, you know,
    1:44:32 if you’ve got thousands of these, much of the time they’ll be blotting
    1:44:34 out the star, sometimes they won’t be, right?
    1:44:39 And so you’re going to get an irregular sort of signal, a transit signal.
    1:44:39 Yeah.
    1:44:41 One you wouldn’t expect from a star that doesn’t have anything.
    1:44:42 Exactly.
    1:44:44 Or just a planet, right?
    1:44:44 Or a couple of planets.
    1:44:48 There’d be so many of these that it would be like beep, beep, blip, blip, blip, blip.
    1:44:54 And that usually doesn’t happen in a star system because there’s only
    1:44:55 just a handful of planets.
    1:44:56 That’s exactly what it is.
    1:44:57 Everything’s coagular.
    1:45:00 And a stable solar system, you get a handful of planets, you know,
    1:45:03 five, 10, that’s it probably, and nothing else.
    1:45:07 So if now suddenly you see lots of these little microtransits, you’re
    1:45:10 telling you there’s something else that’s big enough to create a transit.
    1:45:14 But, you know, too many of them, and also within a regular shape, the
    1:45:18 transit itself, that these are, these could be megastructures.
    1:45:21 How many people are looking for megastructures now?
    1:45:26 Well, the main groups looking for megastructures are, again, Jason Wright
    1:45:29 at Penn State and collaborators.
    1:45:31 The way they’re looking for it though is for infrared light.
    1:45:35 Because, you know, the second law of thermodynamics says, look, if you capture
    1:45:39 all of this starlight, you’re going to warm up the, you know, your things
    1:45:41 going to warm up and emit an infrared.
    1:45:45 You’re just going to be waste heat, waste heat and waste light from this.
    1:45:49 That feels like a louder, clearer way to detect it.
    1:45:49 Right.
    1:45:51 And that’s actually, you know, Dyson, that’s actually why Dyson proposed it.
    1:45:54 He wasn’t really proposing it because like he was saying, this is what
    1:45:56 civilizations are going to do.
    1:45:58 He proposed it because he was like, oh, we want to start looking for alien
    1:45:59 civilizations.
    1:46:02 Here’s something that would have a detectable signature.
    1:46:07 Um, so, uh, Jason and company have done, you know, pretty good searches.
    1:46:11 And recently they’ve made news because, you know, they were able to eliminate a
    1:46:12 lot of places.
    1:46:14 No, these are not Dyson spheres, but they did have a couple that were like
    1:46:18 anomalous enough that they’re like, well, this is kind of what it would look like.
    1:46:19 It’s not a detection.
    1:46:21 And they were saying, they would never say it’s a detection, but they were
    1:46:23 like, they were not non-detections.
    1:46:25 And they’re potential candidates.
    1:46:25 Potential candidates.
    1:46:26 Yeah.
    1:46:26 Love it.
    1:46:28 We have megastructure candidates.
    1:46:29 That’s inspiring.
    1:46:32 What other megastructures do you think that could be?
    1:46:35 I mean, that, so that’s Dyson spheres about capturing the energy of a star.
    1:46:36 Yeah.
    1:46:37 Well, there could be other.
    1:46:41 Well, there’s something called the Clark belt, right?
    1:46:43 So we have a bunch of satellites that are in geosynchronous orbit.
    1:46:47 Nothing naturally is going to end up in geosynchronous orbit, right?
    1:46:49 Geosynchronous orbit is one particular orbit that’s really useful.
    1:46:52 If you want to beam things straight down, or if you want to put a space
    1:46:53 elevator up, right?
    1:46:58 Um, so, uh, there’s this idea that if, you know, a civilization becomes
    1:47:02 you know, advanced enough that it’s really using geosynchronous orbit,
    1:47:05 that you actually get a belt, something that would actually be detectable
    1:47:07 from a distance via a transit.
    1:47:11 Uh, there’s been a couple of papers written about the possibility of these
    1:47:16 Clark belts, densely occupied Clark belts being a megastructure.
    1:47:20 It’s not as mega as a Dyson swarm, but it’s, you know, kind of planetary scale.
    1:47:22 You think it’s detectable Clark belt?
    1:47:23 It could be.
    1:47:26 I mean, like in our list of techno signatures, it would be down there,
    1:47:29 but it would be again, if you had an advanced enough civilization that did
    1:47:33 enough of this, it would certainly you’d have a Clark belt.
    1:47:35 And the question is whether or not it’s detectable.
    1:47:35 Yeah.
    1:47:37 Probably Dyson sphere is the, that’s the more exciting.
    1:47:38 Let’s go to one.
    1:47:39 Yeah, yeah.
    1:47:42 Speaking of the Dyson sphere, let’s talk to the Kardashev scales.
    1:47:43 Right.
    1:47:47 What is the Kardashev scale and where are humans on it?
    1:47:47 Right.
    1:47:49 So the Kardashev scale was the same time.
    1:47:54 This is this golden age of SETI, like kind of like 60, 59 to 65.
    1:47:58 When it just starts, like this is, you know, Frank Drake has done his
    1:48:01 first experiment, people are like, Oh my God, this is even possible.
    1:48:04 And so people are just thrown out these ideas.
    1:48:07 And as I, you know, said in the book, science is conservative.
    1:48:09 And what I mean by that is it holds on to its best ideas.
    1:48:13 So Kardashev comes up with this idea that look, if we’re, again, it’s always
    1:48:14 about detectability.
    1:48:18 If we’re looking for civilizations, we should think about what are the state,
    1:48:23 what are the natural stages, natural in quotes that a civilization goes through.
    1:48:27 And he was thinking in terms of energy use, which is like a good physicist.
    1:48:35 So the, he said, look, the first hurdle in terms of energy or threshold
    1:48:38 that a civilization will go through is using all the starlight that falls
    1:48:39 onto a planet.
    1:48:41 He called that a type one civilization.
    1:48:45 In whatever way you’re doing it, some large fraction of the starlight
    1:48:47 that falls on your planet, you are using for your own ends.
    1:48:52 The next would be to use all the starlight there is from that star.
    1:48:53 Right.
    1:48:54 So that’s the Dyson sphere.
    1:48:58 So he actually Dyson had already proposed his idea of the swarm
    1:48:59 and Kardashev was picking out.
    1:49:01 So that’s a type two civilization.
    1:49:06 Type three is galactic scale, a civilization that could use all the starlight
    1:49:07 in a galaxy.
    1:49:07 Right.
    1:49:09 So we are now, where are we now?
    1:49:12 Remarkably on a log scale, we’re at point seven of a type one.
    1:49:14 So we’re not even type one.
    1:49:15 No, no, no, we’re not even type one.
    1:49:21 But according to, there was a paper written by a group that said, you know,
    1:49:25 if we continue on our path, we’ll be at a type one at around 2,300.
    1:49:26 2,300.
    1:49:28 So this is on a log scale.
    1:49:32 So point seven.
    1:49:37 So type one is about 10 to the 16th Watts type two is 10 orders of magnitude
    1:49:39 larger than that 10 to the 26th Watts.
    1:49:44 And I think estimate for the galaxy is another 10 orders of magnitude.
    1:49:44 Yeah.
    1:49:47 Cause there’s a hundred billion star of order, a hundred billion stars.
    1:49:49 So that’s a lot.
    1:49:50 That’s a lot.
    1:49:53 Do you think humans ever get to type one?
    1:49:57 Um, I think, you know, there’s a problem with type one, which is that, you know,
    1:49:59 we already know about climate change, right?
    1:50:03 The effects of our harvesting energy to do the work of civilization is already
    1:50:06 changing the climate state, right?
    1:50:08 And that’s something that, you know, Kardashev couldn’t have recognized.
    1:50:15 When you, you know, there’s, there’s, uh, the first law of thermodynamics, right?
    1:50:17 Which is just about energy, you know, the different forms of energy.
    1:50:20 Then there’s the second law, which is about when you use that energy.
    1:50:22 And Kardashev wasn’t thinking about the second law.
    1:50:28 If you get all that energy and you use it, there’s waste heat.
    1:50:29 You don’t get to use it all, right?
    1:50:32 You can only, second law tells you that if, you know, I have a tank of
    1:50:36 gasoline, I can only use a certain fraction of the energy in that tank.
    1:50:38 And the rest is going to go to heating up the engine block.
    1:50:43 Um, so that second law tells you that, you know, you can only use so much energy
    1:50:48 before the climate state is like, uh, oh, you know, sorry, is going to change on you.
    1:50:52 So there’s a way in which we probably can’t get to a type one without like
    1:50:54 devastating the earth’s climate.
    1:50:58 So we’re probably going to have to figure out the most important thing actually
    1:51:01 here is probably, this is why space becomes the colonization or settlement of space.
    1:51:05 If we have an idea that we’ve been working on for a while called service worlds, right?
    1:51:12 That at some point you probably move a lot of your, um, industry off world, right?
    1:51:15 We’ve got mercury, for example, there’s nothing on mercury.
    1:51:16 There’s no life on mercury.
    1:51:18 Why don’t you put your energy harvesting there?
    1:51:19 Right.
    1:51:21 Because you can’t mess with the biosphere.
    1:51:23 The biosphere is more powerful than you are.
    1:51:23 Right.
    1:51:31 And so, yeah, so, so there’s limits to how much energy we can harvest to do work on
    1:51:34 the earth without really adversely affecting the biosphere.
    1:51:39 It does seem that the best response to the climate change is not to use less technology,
    1:51:48 but to, to invent better technology and to invent technology that avoids the destructive effects.
    1:51:49 This is the frontier we are.
    1:51:52 And that was the topic of my last book, Light of the Stars.
    1:51:56 It’s like you’ve got, you have to do the astrobiology of the Anthropocene.
    1:52:00 You have to see the transition that we’re going through now of the Anthropocene on a
    1:52:03 kind of planetary astrobiological framework.
    1:52:07 And, you know, that paper we were talking about with the 10 billion trillion worlds,
    1:52:10 that was actually in service of the work I was doing for this other book, where I wanted
    1:52:13 to know how often are, do you go through an anthra?
    1:52:17 How, you know, does every civil is a technological civilization trigger its own
    1:52:21 planetary crisis, its own climate Anthropocene crisis.
    1:52:24 And the answer we actually came up from doing models was like, yeah, probably.
    1:52:28 And then the question is, are you smart enough to figure out how to readjust what you’re
    1:52:32 doing technologically so that you’re not, you know, that all boats rise, right?
    1:52:36 You want to figure out how to do this so that the biosphere becomes even more productive
    1:52:39 and healthy and resilient.
    1:52:40 So yeah, right.
    1:52:42 It’s the kind of technology.
    1:52:46 I think there’s probably absolutely limits on how much energy you can use, use.
    1:52:48 But how do you use that energy?
    1:52:52 And then also, yeah, getting off planet, eventually, if you want to use 10 times
    1:52:56 more energy than that, you’re going to be not going to do it on on world.
    1:53:02 So how do we detect alien type one, two and three civilizations?
    1:53:07 So we’ve been kind of talking about basically type one civilization detection.
    1:53:08 Yeah, right.
    1:53:12 Maybe with the Dyson sphere, you start to get like a little bit more type two.
    1:53:16 But it feels like if you have a type two civilization, it won’t be
    1:53:18 just the Dyson sphere, right?
    1:53:22 It feels like that just for the same reason you mentioned climate change.
    1:53:28 But now at the star system level, they’re probably expanding, right?
    1:53:31 So how, how would you detect a type two?
    1:53:34 How about propulsion plumes, right?
    1:53:39 If you’re expanding, no, no, we just, I literally just put in a NASA proposal now.
    1:53:42 Thomas Beatty, who’s joined us from these at the University of Wisconsin,
    1:53:46 has an idea to look for plumes, right?
    1:53:51 If you have a civiliz, if you have a solar system wide civilization, right?
    1:53:53 And you’ve got space truckers going back and forth, right?
    1:53:56 You know, from Mars to, you know, they’re doing the in settlers run.
    1:54:00 They’re accelerating and decelerating the whole way there, right?
    1:54:04 If you want to get to Mars in a couple of weeks, you have your fusion drive
    1:54:08 on the entire way out there, you flip and burn and have it on, you know.
    1:54:11 So you’re also always have gravity, you have thrust gravity.
    1:54:14 So would those plumes be detectable?
    1:54:17 Because now you’ve got spaceships going all over the place and the odds that,
    1:54:20 like, you know, the plume is going to cross your field of view becomes,
    1:54:21 could become pretty high.
    1:54:25 So, yeah, that’s, I think that’s a good way of looking for.
    1:54:31 That’s one idea of looking for, you know, large scale interplanetary,
    1:54:34 which is kind of like when you’re getting to a type type two.
    1:54:38 Another possibility is looking for the tailings of asteroid mining.
    1:54:42 This was an idea it was a group at Harvard, Smithsonian that, you know,
    1:54:46 would be able to look for, if you’re really chewing up asteroids to build
    1:54:50 space habitats, can, you know, there’d be dust particles left around.
    1:54:52 And would they look different from, just say, the dust, you know,
    1:54:54 from just regular collisions?
    1:54:56 So pollution of all different kinds.
    1:54:57 Pollution of all different kinds.
    1:54:58 And trash also.
    1:54:58 Okay.
    1:55:02 So trash is an interesting idea when you come to the actual solar system, right?
    1:55:06 We are actually, there’s a whole other field of techno signatures,
    1:55:07 which are things in the solar system.
    1:55:12 What if somebody came by a billion years ago, you know,
    1:55:13 and left some stuff, right?
    1:55:17 So the earth has been showing biosignatures for billions of years.
    1:55:21 And, you know, a species like us looking at our level, looking at earth,
    1:55:24 would have been able to know that earth had life on it, had a biosign,
    1:55:27 had a biosphere for billions of years.
    1:55:31 So maybe somebody sent something by, you know, a half a billion years ago.
    1:55:37 So, um, this idea of looking, say at the moon for artifacts is that have been
    1:55:40 there for a long time is something that people, a number of people are doing.
    1:55:43 We’re just working on a paper where we just calculated, this was super fun.
    1:55:49 We calculated how long would the lunar lander exist on the moon
    1:55:52 before micrometeorites just chewed it down, right?
    1:55:55 How long would you be able to land on the moon and go, oh, look, there’s,
    1:55:57 you know, there’s somebody was here and left some debris.
    1:56:01 Um, so there’s this process called gardening, which is just the micrometeorite
    1:56:03 constant range of micrometeorites.
    1:56:07 You know, and that’s where you get the lunar regolith, that fine powder
    1:56:09 on the moon is because of this gardening.
    1:56:13 And it turns out it is literally hundreds of millions to billions of years.
    1:56:14 Oh, nice.
    1:56:18 That, uh, yeah, that the lunar lander will be visible.
    1:56:21 Oh, so we should be able to find artifacts.
    1:56:21 Yeah.
    1:56:23 If there’s art, if there are artifacts on the, and people have proposed
    1:56:27 doing this with, um, artificial intelligence, we have, you know, the moon has
    1:56:31 been mapped down to like a couple of meters with various probes and all that
    1:56:32 data is sitting there.
    1:56:35 So have, why not use machine learning to like look through all those things
    1:56:39 and look for anything that looks not like the lunar surface.
    1:56:43 And they did a test program where they gave it, they gave the computer, you
    1:56:46 know, sort of like, I don’t know, 50 miles around the Apollo 11 or Apollo,
    1:56:50 maybe it was Apollo 17 site, and it instantly was able to pull out the lander.
    1:56:54 I mean, the whole task of looking for anomalies, something that looks not
    1:56:57 like the lunar surface, you may get sound obvious, but it’s not exactly obvious.
    1:57:05 Like anomalies is really not, I mean, detect something that doesn’t look
    1:57:05 right about this room.
    1:57:08 It’s, it’s actually really difficult, really difficult.
    1:57:09 It’s really difficult.
    1:57:11 And it’s, you know, what’s cool, it’s a really information
    1:57:13 theoretic kind of proposal.
    1:57:16 You really have to use information theory to say like, what’s the background?
    1:57:20 What’s, you know, well, how do I define something that I can say that looks weird?
    1:57:25 So, yeah, maybe when you’re looking at a spectrograph or something, like, it’s
    1:57:30 still, it’s still like, it’s going to look really weird potentially.
    1:57:35 Like we’re kind of, we’re kind of hypothesizing all the things that humans
    1:57:36 would build and how do we detect that.
    1:57:39 That could be really weird stuff.
    1:57:43 That’s why there’s this emphasis now on these agnostic signatures, right?
    1:57:45 So, um, actually disequilibrium is a nice one.
    1:57:50 For one way to define life is it is a system that is far from equilibrium, right?
    1:57:51 It’s alive, right?
    1:57:54 Cause as soon as it dies, it turns into, it goes back to equilibrium.
    1:57:58 And so you can look at all chemicals in an atmosphere, even if you don’t know
    1:58:00 whether these could be chemicals that you have no idea whether or not they have
    1:58:04 anything to do with life, but the degree of disequilibrium, the degree to
    1:58:08 which they show that that atmosphere has not, you know, the chemicals have
    1:58:11 not all kind of like just gone down to a, you know, they’ve all reacted
    1:58:13 away to an equilibrium state.
    1:58:16 You can actually tell that in very general ways using what’s called a Gibbs,
    1:58:17 the Gibbs free energy.
    1:58:19 And that, that’s kind of a signature.
    1:58:24 Like if you see an atmosphere that is wildly out of equilibrium, you know,
    1:58:27 that indicates that there’s some, there’s something happening on that planet,
    1:58:33 biosphere or techno sphere that is pumping gases, you know, into the, um,
    1:58:36 into the atmosphere that is keeping the whole system from relaxing.
    1:58:41 So is it possible we can detect anomalies in, in space time?
    1:58:44 Well, you, you could detect, and there’s, there’s been some work on this, like
    1:58:47 with the Accubre drive, you know, these proposals for warp drives.
    1:58:48 And we can talk about that later.
    1:58:52 I’m skeptical of those, but, um, cause it may really be possible that you just
    1:58:56 can’t go fast from the speed of light, but people have done work on like, you
    1:59:01 know, what would be the signature of, uh, an Accubre drive?
    1:59:02 What would be the signature?
    1:59:06 You like, you know, could you detect if you’re using a drive like that, then
    1:59:09 you certainly are distorting space time, which means any light that’s passing by
    1:59:13 has gotten, you know, it’s, it’s, it’s trajectory has gotten altered because
    1:59:15 it had to pass through the distorted space time.
    1:59:18 So yeah, there are possibilities along with that.
    1:59:20 You know, one of the funny things, I don’t know if they’ve gotten past this,
    1:59:23 but somebody had calculated the problem with the Accubre drive or this warp
    1:59:28 drive was that if, if you dropped out of warp, there would be this spray of gamma
    1:59:31 rays that would like sterilize any planet in front of you.
    1:59:34 So it’s like, well, yeah, you probably don’t want to do that, but that
    1:59:36 would be a great bios our techno signature.
    1:59:37 I don’t know.
    1:59:38 They’re planted obliterated.
    1:59:40 So you think it’s not possible to travel fast?
    1:59:41 I wouldn’t say that.
    1:59:42 I wouldn’t say that.
    1:59:45 But what I think, you know, if you look at the physics, we understand, right?
    1:59:45 Yeah.
    1:59:52 Um, the, you know, every possibility for faster than light travel really
    1:59:54 relies on something that doesn’t exist, right?
    1:59:58 So, so, you know, the cool thing is Einstein’s field equations.
    1:59:59 You can actually play with them.
    2:00:00 The equations are right there.
    2:00:04 You can add things to the, you know, right or left hand side that allow
    2:00:07 you to get something like the Accubre drive.
    2:00:10 That was a metric that, you know, showed you like, oh, it’s a warped bubble.
    2:00:15 It’s a warping of space time that moves through space time faster than
    2:00:16 the speed of light, right?
    2:00:20 Because nothing to move across space time faster than the speed of light,
    2:00:23 but space time itself can move faster than the speed of light.
    2:00:27 But here’s the problem with all of those proposals is they all need something.
    2:00:31 The thing you added, the little fictional term you added on the, into the equations
    2:00:35 is something called, um, exotic matter and it doesn’t exist.
    2:00:37 It’s really just something we dreamed up to make the equation to do
    2:00:38 what we wanted them to do.
    2:00:45 So, you know, it’s a nice fiction, but really right now, you know, you know,
    2:00:49 we live in this weird moment in history of the great acceleration.
    2:00:55 We’re like, the technology we use now is, you know, is completely different
    2:00:59 from the technology we used 10 years ago is remarkably different
    2:01:01 from the technology from a hundred years ago.
    2:01:06 Um, but, you know, I remember playing, um, uh, Assassin’s Creed where everybody’s
    2:01:09 like, you know, what is it’s 1200 and everybody’s like stab, stab, stab.
    2:01:10 And I was like, yeah, it’s a great game.
    2:01:16 And then I got Assassin’s Creed two and, uh, it was 300 years later and everybody’s
    2:01:21 like stab, stab, stab and it was like 300 years and the technology hadn’t changed.
    2:01:23 And that was actually true for most of human history, right?
    2:01:28 You used your great grandfather’s tools because there was no need to have any
    2:01:30 other new tools and you probably did his job.
    2:01:34 Uh, so, you know, we can be fooled into thinking like, Oh, you know,
    2:01:36 technology is just going to go on forever.
    2:01:39 We’re always going to find new advances as opposed to sometimes things just
    2:01:41 flatten out for a long time.
    2:01:45 So you have to be careful about that bias that we have living in this time of
    2:01:46 great acceleration.
    2:01:52 Yeah, but, uh, also it is a great acceleration and we also are not good at
    2:01:55 predicting what that entails if it does keep accelerating.
    2:02:00 So for example, somebody like, um, Eric Weisstein often talks about we under
    2:02:03 invest in theoretical physics research.
    2:02:10 Basically like we’re trying too hard for traditional chemical propulsion on
    2:02:14 rockets versus like trying to hack physics.
    2:02:21 Sort of warp drives and so on, because it’s really hard to do space travel.
    2:02:25 And it seems like in the long arc of human history, if we survive the way
    2:02:30 to really travel across long distances is going to be some new, totally new thing.
    2:02:31 Right.
    2:02:31 Right.
    2:02:34 So it’s not going to be an engineering problem.
    2:02:38 It’s going to be a physics, a fundamental physics, fun about the physics.
    2:02:42 Well, yeah, I mean, I agree with that in principle, but I think there’s been, you
    2:02:44 know, I mean, there’s a lot of ideas out there.
    2:02:46 People, you know, string theory, people have been playing with string theory
    2:02:48 now for 40 years.
    2:02:51 It’s not like people haven’t been, not like there hasn’t been a lot of effort.
    2:02:53 And, you know, and again, I’m not going to predict.
    2:02:57 I think it’s entirely possible that we have, you know, there’s incredible
    2:03:00 boundaries of physics that have yet to be poked through.
    2:03:03 In which case, then all bets are off, right?
    2:03:06 Once you get sort of, you know, interstellar, fast interstellar travel.
    2:03:08 Whoa, you know, who knows what can happen.
    2:03:13 Um, but I tend to be drawn to like science fiction stories that take the
    2:03:17 speed of light seriously, like what kind of civilization can you build where like
    2:03:22 it takes, you know, 50 years to get to where you’re going and a 50 years back.
    2:03:23 Like, so, I don’t know.
    2:03:26 I mean, yeah, there’s no way I’m going to say that, that we won’t get warp drives.
    2:03:29 But as of right now, there’s, it’s all fictional.
    2:03:32 It’s, you know, it’s barely even a coherent concept.
    2:03:36 Well, it’s also a really exciting possibility of hacking this whole thing by
    2:03:41 extending human lifespan or extending our notion of, of time.
    2:03:47 And maybe as dark as the same, but the value of an individual human life versus
    2:03:50 the value of life from the perspective of generations.
    2:03:54 So you can have something like a generational ship that travels for hundreds
    2:04:00 of thousands of years and it, you’re not sad, uh, that you’ll never see the
    2:04:07 destination because you kind of have the value for the, uh, prolonged survival of
    2:04:08 humanity versus your own individual life.
    2:04:09 Yeah.
    2:04:10 It’s a wild ethical question.
    2:04:14 Isn’t it one of the, that book I told you about Aurora was suck.
    2:04:18 I love the book because it was such a sort of inversion of the usual.
    2:04:20 Cause you know, I’ve read, I love science fiction.
    2:04:23 I’ve read so many generationship stories and they get to that planet.
    2:04:25 The planet turns out to be uninhabitable.
    2:04:28 It’s inhabited, but it’s uninhabitable for earth because again, he has this
    2:04:31 idea of like, you know, life is particular to their planets.
    2:04:36 So they turn around and they come back and then when they land, the main character
    2:04:39 goes, there’s still people who are, you know, arguing for more generationships.
    2:04:42 And she goes and she punches the guy out cause she spent her whole life in a
    2:04:46 tube, you know, with this, I thought that was a really interesting inversion.
    2:04:48 You know, the interesting thing about, about, we were talking about these
    2:04:52 space habitats, but if you really had a space habit, not some super cramped,
    2:04:55 you know, crappy, usual version of a century ship, but if you had these
    2:04:58 like space habitats that were really, you know, like the O’Neill cylinders,
    2:05:00 they’re actually pretty nice places to live.
    2:05:04 Put a thruster on those, you know, like why, why keep them in the solar system?
    2:05:09 Maybe that’s, maybe space is full of like these sort of traveling space habitats
    2:05:12 that are in some sense a, you know, their worlds in them, in and of themselves.
    2:05:17 There’s the show Silo, which raises the question of basically, if you’re
    2:05:22 putting on a generational ship, what do you tell the inhabitants of that ship?
    2:05:24 You might want to lie to them.
    2:05:25 Yeah.
    2:05:29 You might want to tell them a story that they believe because there is a society,
    2:05:30 there’s human nature.
    2:05:35 It’s like, how do you maintain homeostasis of that little society?
    2:05:40 I mean, that’s a fascinating technical question, the social question, the
    2:05:41 psychology question.
    2:05:43 You know, the generation ship too, and you know, which I talked about in the
    2:05:47 book, the idea of also the, you know, you talked about extending human lifetimes
    2:05:53 or, you know, the stasis, the cryostasis, which is a mainstay of science fiction,
    2:05:53 you know, that, you know, right.
    2:05:56 You can be put to, you can basically put in suspended animation and such.
    2:05:59 None of these things we know are possible, but you know, it’s so interesting.
    2:06:02 And this is why I love science fiction, the way it seeds ideas, right?
    2:06:05 All these ideas we’re going to talk about because they’ve been staples of
    2:06:07 science fiction for 50 years.
    2:06:09 I mean, the whole field of cryogenics.
    2:06:09 Yeah.
    2:06:10 Where are we at with that?
    2:06:10 Yeah.
    2:06:13 I wonder what the state of the art is for a complex organism.
    2:06:17 Can you freeze, how long can you freeze and then unfreeze?
    2:06:17 Right.
    2:06:19 Maybe, maybe like with bacteria, you could do freeze.
    2:06:20 Oh, bacteria can last.
    2:06:22 This is the thing about panspermia, right?
    2:06:28 How long can, you know, how long can a bacteria survive in a rock that’s
    2:06:33 been blasted, you know, if there’s a common impact across, you know, interstellar
    2:06:35 distances, that does seem to actually be possible.
    2:06:36 People have done those kinds of calculations.
    2:06:41 It’s not out of the realm of possibility, but a complex organism, multi-cellular,
    2:06:43 multi-systemic or multi-systems, right?
    2:06:44 With organs and such.
    2:06:46 Also, what makes an organism?
    2:06:49 I mean, it could, you know, which part do you want to preserve?
    2:06:55 Cause maybe the, for humans, it seems like, uh, like what makes a personality?
    2:06:59 It feels like you want to preserve a set of memories.
    2:07:05 Like if I woke up in a different body with the same memories, I pretty much, I
    2:07:06 would feel like I would be the same person.
    2:07:07 Altered carbon?
    2:07:09 Have you, that’s a, that’s a great series.
    2:07:12 I think it’s on Netflix, just to, you know, that’s a really great series.
    2:07:14 Well, that’s exactly the idea of sleeves.
    2:07:17 Everybody’s able to like, you know, you can re-sleeve in another body.
    2:07:20 Um, and it raises exactly sort of this question.
    2:07:22 It’s not the greatest cyberpunk, but it’s pretty good.
    2:07:25 It’s got, it’s got some great, great action sequences too.
    2:07:30 As we get better and better advancements in large language models that are able
    2:07:36 to be fine-tuned on you, it raises a question because I had to, to me, that
    2:07:39 already passed the touring test, as we traditionally have defined it.
    2:07:43 Is, so if there’s going to be an LLM that’s able to copy you in terms of
    2:07:48 language extremely well, it’s going to raise ethical and, uh, I don’t know,
    2:07:53 philosophical questions about what makes you, you like what, if there’s a thing
    2:07:59 that can talk exactly like you, like, what is the thing that makes you use?
    2:08:04 Is it, is it, it’s going to speak about your memories very effectively.
    2:08:08 This leads us to, if we’re going to get to the, the blind spot, I, I, you know,
    2:08:13 I am of the opinion, heretical in some camps, that, you know, the brain
    2:08:17 is not the minimal, the minimal structure for consciousness.
    2:08:19 You know, it’s the whole body.
    2:08:19 It’s embodied.
    2:08:22 It may actually, in some sense, it’s communities, actually.
    2:08:26 Um, so yeah, so I don’t, I mean, I’m, you know, I could be wrong, but this is,
    2:08:28 you know, this is what this whole work that I did with Marcelo
    2:08:32 Gleiser and Evan Thompson, the, um, philosophy of science, which is
    2:08:34 interesting because it leads to this question about, you know, right.
    2:08:36 Oh, maybe we should just download ourselves into computers.
    2:08:36 Right.
    2:08:41 That’s another story that, that one tells, I’m super skeptical about those, but
    2:08:44 is that’s one of the narratives about interstellar travel is just like, and
    2:08:47 that anybody we meet is going to be a machine anyway, whether it’s like,
    2:08:51 whether it’s downloaded bodies or it’s just going to be artificial intelligence.
    2:08:54 Like there’s the whole idea of how long does biological evolution last?
    2:08:58 Maybe it’s a very short period before everybody, you know, goes to, or the
    2:09:02 machine’s takeover and, you know, kill you, or, you know, it’s some hybrid.
    2:09:04 What do you think aliens look like?
    2:09:08 So we talked about all the different kinds of bio signatures.
    2:09:11 They might leave over techno signatures, but what would they look like?
    2:09:15 When we show up, are they going to have arms and legs?
    2:09:18 Are they, uh, going to be recognizable at all?
    2:09:20 Are they going to be carbon based?
    2:09:21 Yeah.
    2:09:22 So great question.
    2:09:27 And this question gets to the heart of thinking about life, right?
    2:09:28 About what life is.
    2:09:30 And this is the physical part of that.
    2:09:33 There’s also sort of the informational part of it.
    2:09:38 Um, but let’s just talk about the physical part of it, which is, you know, life.
    2:09:42 Anything that we’re going to call life is probably going to work on Darwinian evolution.
    2:09:44 That’s the nice thing about Darwinian evolution.
    2:09:46 Just like we know the laws of physics are general.
    2:09:49 The laws of Darwinian evolution are kind of this logic, this basic logic.
    2:09:54 Um, that, you know, anything we’d reasonably call life probably has to operate
    2:09:55 under these kinds of principles.
    2:10:01 And so, you know, evolution is about solving problems that, you know, to survive.
    2:10:05 Um, that the environment presents and the environment.
    2:10:10 So it’s going to present these problems in physical and chemical terms so that you’d expect.
    2:10:15 Um, you expect a kind of balance between what we call convergence, evolutionary
    2:10:18 convergence and evolutionary contingency.
    2:10:23 So, you know, if you’ve got to move along a surface, you know, a surface between, you know,
    2:10:27 a hard surface and air, then the idea of some kind of jointed stick, right?
    2:10:30 Legs make sense that you’re probably going to trigger that.
    2:10:34 You know, if you look at Earth’s history, multiple times, multiple lineages that
    2:10:37 had nothing to do with each other are going to solve the problem of getting
    2:10:42 towards energy sources using some kind of, you know, a stick like apparatus.
    2:10:43 So that’s about movement.
    2:10:43 Yeah.
    2:10:45 So that’s one problem that has to be solved.
    2:10:47 One problem that has to be solved is I got to get to food, right?
    2:10:49 Another problem is I got to get away from predators, right?
    2:10:50 Um, you’ve seen wings.
    2:10:56 We’ve seen wings, the line that went through dinosaurs to birds, involved wings, insects,
    2:10:59 evolved wings, mammals, evolved wings.
    2:11:02 If the gas is dense enough that a curved surface, if you move through the curved
    2:11:04 surface, it’s going to produce lift.
    2:11:05 Yeah, there you go.
    2:11:06 Evolution will trip on that.
    2:11:12 So I think you, you can expect certain classes of solutions to the basic problems that
    2:11:17 life is going to, is going to be presented with, stay alive, reproduce.
    2:11:22 Um, but one of the weird things about like with the UFO things is that you always
    2:11:24 see like, oh, they all look like humans.
    2:11:26 They’re just like basically humans with, you know, triangular heads.
    2:11:29 And that’s where we get to, um, contingency, right?
    2:11:31 So what we’ve been talking about is convergence.
    2:11:35 You expect that evolution will converge on wings multiple times when presented
    2:11:38 with the problems that wings can solve.
    2:11:42 Um, but con, contingency is accidents, right?
    2:11:46 That, you know, you’ve got something that’s evolving a certain kind of wing,
    2:11:47 a leathery wing, right?
    2:11:50 Uh, and then, you know, the climate changes and they all die out.
    2:11:51 End of story.
    2:11:53 Or, you know, an asteroid, that total accident asteroid hits.
    2:11:58 And so, uh, contingency accidents play also a huge role in evolution.
    2:12:03 And one of the things that, you know, lots of evolutionary biologists have talked
    2:12:06 about is the idea that if you ran the tape of Earth’s history over again, would
    2:12:08 you get the same creatures?
    2:12:12 Now, um, uh, Stephen Jay Gould was of the opinion that no way that you wouldn’t
    2:12:16 find anything on earth that right resemble to any species today.
    2:12:19 They’ve done experiments actually on this with, uh, E. coli.
    2:12:22 You take, you know, you take a bunch of E. coli, you let them evolve for a while.
    2:12:26 You take a bunch of them out, freeze them, let one, you know, let that population
    2:12:27 continue to evolve.
    2:12:30 The other one’s frozen now started over again with the frozen.
    2:12:34 And it seems to be that contingency tends to win, right?
    2:12:37 The contingency, at least from what we can tell, I mean, that’s not a, that’s not
    2:12:41 a hard result, but in those experiments, what you find is that accidents really
    2:12:41 do matter.
    2:12:43 So the idea, and this is important.
    2:12:47 So yes, you should expect legs or jointed sticks.
    2:12:48 How many joints they’re going to be?
    2:12:49 Anybody’s guess.
    2:12:54 Um, you know, do you expect humanoids, you know, things with a, you know, uh, a
    2:12:58 sensing apparatus on top of a shoulder with two arms and two legs, that’s
    2:13:02 probably a pretty random set of occurrences that led to that.
    2:13:06 I guess what is a brain versus the nervous system?
    2:13:09 Like, where’s most of the cognition competition going on?
    2:13:10 Yeah.
    2:13:11 Yeah.
    2:13:13 You could see that in organisms.
    2:13:18 Like I actually had, I don’t know how the brain evolved.
    2:13:19 Like, why does it have to be in one place?
    2:13:20 Doesn’t have to be.
    2:13:24 So my favorite word, word of the day is liquid brains, right?
    2:13:27 This idea of distributed cognition, which, um, fascinating idea.
    2:13:32 And we’ve come to understand how much, uh, distributed cognition there is.
    2:13:37 Obviously you social animals, like termites, et cetera, and ants.
    2:13:39 That’s an example of distributed cognition.
    2:13:41 The organism is the whole colony.
    2:13:43 This is one thing that’s been really interesting in the state of the study.
    2:13:46 When we cut to, for aliens is that when we’ve come to recognize that human
    2:13:50 intelligence, it’s not actually, it’s been the kinds of things that go into
    2:13:54 intelligence are distributed all across the biosphere.
    2:13:58 Lots of different examples of things show various pieces of what we have.
    2:14:01 Jason Wright will describe it as like a deck of cards.
    2:14:02 The cards are all there.
    2:14:06 We got the hand that actually led to the kind of technological progress that we,
    2:14:10 we see, but the kinds of, you know, the basic idea of using tools, the basic idea
    2:14:14 of recognizing each other eye to eye, all the things that we define as intelligence.
    2:14:19 You can find many places in many other, um, uh, places across many other line
    2:14:21 lineages across the earth.
    2:14:24 So it could be, they could be very, very different with something like, yeah,
    2:14:29 maybe that’s, you know, the hive mind idea or, you know, bacterial colonies
    2:14:33 that actually managed to, you know, come to their own version of high cognition.
    2:14:40 Well, I wonder if there’s, if we stretch out time across 10s, 20 billion years,
    2:14:46 whether there’s an Darwinian evolution stops working at some point in terms
    2:14:51 of the biology or the chemistry of the organisms and it switches to ideas.
    2:14:54 For example, it’s much more rapidly you’re operating.
    2:14:58 Maybe I guess it’s a kind of Darwinian evolution on the space of memes or
    2:15:03 whatever, as a technology seems to operate on, and, and, and, yeah, but certainly
    2:15:06 markets can operate in ways that look very Darwinian.
    2:15:12 So basically a planet is working hard to get to the first kind of organisms that’s
    2:15:17 able to be a nice platform for ideas to compete.
    2:15:17 Yeah.
    2:15:19 And then it kind of stops evolving there.
    2:15:21 And then, then it’s ideas that take off.
    2:15:21 Right, right.
    2:15:23 Cause yeah, cultural, like it’s true.
    2:15:28 It’s amazing that cultural evolution totally disconnects from, from the
    2:15:29 Darwinian process.
    2:15:32 But I’d be careful to say that like a planet is working hard to do this.
    2:15:33 Cause, you know, it’s really impotent looking at us.
    2:15:39 Like what we think of is ideas and culture and, you know, it’s quite possible.
    2:15:41 We’re going to make it another 200 years and this is gone.
    2:15:41 Right.
    2:15:44 Cause it actually wasn’t a very good idea long term.
    2:15:45 We just don’t know.
    2:15:50 Oh, so maybe the idea generation organism is actually the thing that destroys.
    2:15:52 Not the biosphere, but it destroys itself.
    2:15:54 It may not be very long term.
    2:15:58 It may be very potent for a short period of time, but that it’s not sustainable.
    2:16:00 It doesn’t become like we were talking about before mature.
    2:16:06 It’s very hard to make it into integrated into a mature bio slash techno sphere.
    2:16:08 And of course, you know, evolution is not working for anything.
    2:16:10 Well, here’s the actually interesting thing.
    2:16:10 Right.
    2:16:13 So people are very much, you know, evolutionary biologists will get very,
    2:16:14 their hair will stand on it.
    2:16:16 And if you start talking about evolution, having a purpose or anything,
    2:16:21 but the very interesting thing about purpose is that once you do get to a idea
    2:16:27 generating species or collective organism, um, yeah, then, uh, you know,
    2:16:30 kind of all bets are off and there is goals.
    2:16:32 There is teleology.
    2:16:37 There is a, you know, the now suddenly, you know, absolutely there’s a direction implied.
    2:16:40 So that’s kind of the cool, interesting thing that once you get to that evolution
    2:16:43 stops being goal lists and direction lists.
    2:16:46 And suddenly, yeah, we’re the ones who supply or any kind of creature
    2:16:49 like us has an absolute direction that way they decide on.
    2:16:53 Although you could argue that from a perspective of the entire human civilization,
    2:16:54 we’re also directionless.
    2:17:01 We have a sense that there’s a direction in this cluster of humans.
    2:17:04 And then there’s another cluster as a different set of direction.
    2:17:06 There’s all kinds of religions that are competing.
    2:17:08 There’s different ideologies that are competing.
    2:17:14 And when you just zoom out across, if we survive across thousands of years,
    2:17:15 it will seem directionless.
    2:17:17 It will seem like a pinball.
    2:17:20 It’s an unholy mess.
    2:17:24 But, you know, but at some point, like the expansion into the solar system.
    2:17:26 Like that would be both direction.
    2:17:29 I mean, depending on how you look at it, it was directional.
    2:17:32 There was a, there was a decision that the collective of human beings
    2:17:36 made to like anti a creed to start spreading out into the solar system.
    2:17:40 So that was definitely a goal there that may have been reached
    2:17:44 in some crazy sort of, you know, nonlinear way.
    2:17:45 But it was still, right?
    2:17:48 There was still, it’s still a goal was set and it was achieved.
    2:17:50 If there’s advanced civilizations out there,
    2:17:56 what do you think is the proper protocol for interacting with them?
    2:17:58 Do you think there would be peaceful?
    2:18:00 Do you think there would be war like?
    2:18:02 Like, what do we do next?
    2:18:05 We detect, we detect a civilization through all the technical
    2:18:08 signatures we’ve been talking about, maybe direct imaging.
    2:18:09 Maybe there’s really strong signal.
    2:18:13 We come up with a strategy of how to actually get there.
    2:18:13 Yeah.
    2:18:16 But what’s the, then the generals, as they always do.
    2:18:19 The military industrial complex.
    2:18:20 We’ve watched that movie.
    2:18:26 Where what kind of rock is, what kind of, and do we bring rockets?
    2:18:26 Right.
    2:18:30 Well, I think, you know, so this also, this is a general question
    2:18:33 also leads to many messaging, extraterrestrial intelligence.
    2:18:36 And I am definitely of the opinion of like, you should be very careful, you
    2:18:39 know, like, I don’t think it’s necessarily a bad idea to have your head
    2:18:40 below the grass.
    2:18:44 Um, you know, the people who advocate like, oh, yeah, we should be sending,
    2:18:49 you know, powerful messages that are easily detectable into interstellar space.
    2:18:51 I’m like, why would you, because we just don’t know.
    2:18:53 Like, I’m not going to say they are warlike.
    2:18:54 I’m not going to say they’re not warlike.
    2:18:57 I have no idea, you know, but we sure as hell.
    2:19:00 Well, first of all, who gets to decide that the idea that a bunch of
    2:19:03 astronomers who happen to have a radio telescope, I don’t, you know,
    2:19:07 who speaks for earth, which I think was a great book somebody wrote.
    2:19:12 Um, so, you know, definitely we should, we should be cautious, I would say,
    2:19:14 because we just have zero information.
    2:19:17 And the idea, you used to have this idea of well, if they’re advanced,
    2:19:18 they’ve managed to survive.
    2:19:22 So of course they’re going to be wearing togas, you know, and be singing kumbaya.
    2:19:25 But I just wouldn’t, I just wouldn’t assume that it’s also possible, though,
    2:19:29 that like their cognitive structure is so different that we’re not even living
    2:19:31 in the same universe in a certain way.
    2:19:32 I think we have to be prepared for that.
    2:19:39 We may not even be able to recognize each other in some way as, as cognizing beings.
    2:19:40 One of my favorite movies is Arrival.
    2:19:42 I don’t know if you’ve ever seen that one.
    2:19:44 I really love that one because, you know, they literally, they have a different
    2:19:47 language, they have a different cognitive structure in terms of their language.
    2:19:49 And they’re literally kind of living in a different physics.
    2:19:53 Different physics, different language, different, different, everything.
    2:19:53 Yeah.
    2:19:58 But in the case of Arrival, it can at least like recognize that they’re there.
    2:20:01 And they managed to cross the language barrier.
    2:20:02 Yeah.
    2:20:06 So, but that’s both sides have an interest in communicating, which you kind
    2:20:11 of suppose that an advanced civilization would have a curiosity.
    2:20:16 Because like, how do you become advanced without a kind of curiosity about the
    2:20:17 mysterious, about the other.
    2:20:23 But also, you know, if they’re long lived, they may just be like, we’re not even interested.
    2:20:28 Like we’ve done this, we’re like, we, you know, you know, 10, 10 billion year, sorry,
    2:20:31 say 10 million years ago, we were really interested in that, in this, in communicating
    2:20:34 with you, you know, young and young and, but now we’re not at all.
    2:20:37 And that’s just, you know, one of the beauties of this, again, is how to think
    2:20:41 about this systematically, because you’re so far past the hairy edge, right?
    2:20:46 Of our experience, of what we know that you want to think about it, right?
    2:20:49 You don’t want to be like, don’t know, can’t say anything, because that’s not fun.
    2:20:53 But you also have to sort of systematically go after your own biases, right?
    2:20:56 So the one of the things I loved about Arrival too, was, you know, Carl
    2:21:00 Sagan always had this idea, like we’ll teach him math, we’ll teach him our math.
    2:21:01 Then they’ll teach us their math.
    2:21:04 And then, you know, we’ll be telling each other knock, knock jokes, you know,
    2:21:06 and swapping cures for cancer.
    2:21:09 And, you know, in the movie, like they send a Carl Sagan guy in and a linguist.
    2:21:12 And the Carl Sagan guy fails immediately, right?
    2:21:15 And it’s the linguist who understands that language is actually embodied.
    2:21:17 Language is not just something that happens in your head.
    2:21:19 It’s actually the whole experience.
    2:21:20 And she’s the one who breaks through.
    2:21:26 And it just points to the idea that, um, how utterly different the cognitive
    2:21:29 structures, the, you know, of, of a, of a different species should be.
    2:21:33 So somehow we have to figure out how to think about it, but be so careful of our
    2:21:37 biases or figure out like a systematic way to break through our biases and not
    2:21:39 just tell something, make science fiction movies.
    2:21:40 You know what I mean?
    2:21:41 Yeah.
    2:21:42 Yeah.
    2:21:46 Speaking of biases, do you think aliens have visited earth?
    2:21:49 You’ve mentioned that they could have visited and started civilizations.
    2:21:51 I wouldn’t, we wouldn’t even know about it.
    2:21:55 If it was a hundred million years ago, how could we even begin to answer this
    2:21:56 question?
    2:21:58 Whether they’ve got to look, got to look, got to figure out ways to look.
    2:22:02 So I, you know, I mean, I, I don’t put it, it’s not high on my list of, you know,
    2:22:07 things that I’m, I think are probable, but it certainly it needs to be explored.
    2:22:09 You know, and unless you look, you never know.
    2:22:13 So looking on the moon, look at, where would we find if, if aliens had passed
    2:22:17 through the solar system anytime in the last three billion years, where might we
    2:22:18 find artifacts?
    2:22:20 Where might artifacts still be around earth?
    2:22:23 Probably not because of weathering and resurfacing.
    2:22:27 Um, the moon’s a good place, uh, certain kinds of orbits, you know, maybe they
    2:22:29 parked a probe in an orbit that was stable.
    2:22:31 So you got to figure out which orbits actually you could put something there
    2:22:33 and it’ll last for a billion years.
    2:22:38 So those are the kind of questions I don’t, like I said, I don’t, it’s not high
    2:22:41 on my list of thinking this could happen, but it could happen.
    2:22:43 I certainly can’t, unless you look, you don’t know.
    2:22:48 What about speaking of biases, what about if aliens visiting earth is the
    2:22:53 elephant in the room, meaning like, uh, the potential of aliens say seeding life on earth?
    2:22:56 Uh, you mean like in that directed panspermia?
    2:23:01 Directed panspermia or seeding some aspect of the evolution?
    2:23:03 Like 2001.
    2:23:04 Yeah.
    2:23:04 Yeah.
    2:23:10 Uh, you know, it’s great story, but you know, always with Occam’s razor or whatever
    2:23:15 with science, if I can, if I can answer that question without that extra, very
    2:23:18 detailed, uh, hypothesis, then I should.
    2:23:22 And you know, the idea that evolution is a natural process, that’s what I would
    2:23:23 go for first, right?
    2:23:26 There’s, there’s, that just seems, it’s so much easier to do it.
    2:23:31 That way than adding, you know, sort of, cause it’s kind of a duo sex machina thing
    2:23:33 of like, oh, then the aliens came down and they solved that problem that you’re
    2:23:36 trying to solve by just coming down and putting their finger on the scales.
    2:23:42 So to you, the origin of life is, uh, is a pretty simple thing that doesn’t
    2:23:43 require an alien.
    2:23:46 I wouldn’t say that it’s not a simple thing, but it doesn’t, you know, putting,
    2:23:50 I think, cause you know, all you’re doing is kicking the can down the road, right?
    2:23:52 The aliens, the aliens formed, right?
    2:23:56 So you’re just saying like, all right, I’m just kicking the can down the road
    2:23:56 to the aliens.
    2:23:59 How did they, how did, what was their a biogenesis event?
    2:24:02 Well, so from a different perspective, I’m just saying, it seems to me that
    2:24:06 there’s obviously advanced civilizations everywhere throughout the galaxy and
    2:24:08 through the universe from the Drake equation perspective.
    2:24:11 And then if I was an alien, what would I do?
    2:24:19 You know, I’ve gotten a chance to learn about the uncontacted tribes in the Amazon.
    2:24:23 I recently went to the Amazon, you get to understand how they function and how
    2:24:29 the humans in the Amazon, they’re in contact with the civilized world, how
    2:24:30 they interact with the uncontacted tribes.
    2:24:35 First of all, the uncontacted tribes are very violent towards the outside world,
    2:24:37 but everybody else try to stay away from them.
    2:24:40 They try to kind of protect them, don’t talk about them or don’t, don’t talk
    2:24:42 about their location and all this kind of stuff.
    2:24:47 And I’ve begun to internalize and understand that perspective of why you’re doing
    2:24:47 that.
    2:24:51 And if I was an alien civilization, if I probably would be doing a similar kind
    2:24:55 of thing, and of course, there’s always the teenager of the troll who’s going to
    2:24:59 start messing with the stuff or the scientists, you know, right.
    2:25:03 And so it’s not from our perspective.
    2:25:03 Yes.
    2:25:08 And if you’re in the Truman show, like Occam’s razor, but like also the Occam’s
    2:25:15 razor from the perspective of the alien civilization, we have to have the humility
    2:25:19 to understand that that interaction will be extremely difficult to detect.
    2:25:20 That won’t be obvious.
    2:25:21 Right.
    2:25:24 I understand the logic of what you’re saying, but the problem for me with that
    2:25:28 is that right there, the first you have to assume that alien civilizations are
    2:25:31 common, which I’m not sure about it, that most of them may be dead.
    2:25:34 Or they’re not yet still, you know, like I, while I think that life is common.
    2:25:35 And again, this is just my biases.
    2:25:35 Right.
    2:25:41 So now the problem is how do we sort out sort of, you know, the, the, the biases
    2:25:47 we’re bringing or the assumptions we’re bringing in from, you know, from the, the
    2:25:50 sort of causal chain that comes out of that.
    2:25:53 I would first want to try and do this without it.
    2:25:55 Like, you know, if we’re looking at the origin of life or the evolution of life
    2:26:00 on earth, I’d want to do it just on its own without asking for this other layer.
    2:26:05 Because it requires a bunch of these other assumptions, which also have
    2:26:07 their own sort of breaking of causal chains.
    2:26:11 Cause I don’t really like the idea that when you ask, what would you do
    2:26:12 if you were an alien?
    2:26:17 But again, like alien minds could be so unbelievably different, right?
    2:26:20 That they wouldn’t even recognize the question you just posed, right?
    2:26:23 Cause it just like, you know, we’re very much, we have a very particular
    2:26:27 kind of cognitive structure, you know, and, and we’re very governed by, you know,
    2:26:31 even if you went and talked to, this is an interesting thing to think about, you
    2:26:34 know, if I could suddenly magically appear a hundred thousand years ago and
    2:26:37 talk to a hunter-gatherer about their worldview and their motivations, you
    2:26:41 know, I might find something that’s like, there were no resemblance to things
    2:26:44 that I think are sort of, oh, that’s what naturally humans do.
    2:26:45 Well, let me, let me ask you this question.
    2:26:47 Let’s, let’s together do the thought experiment.
    2:26:52 If we create a time machine that allows us to travel back and talk to them or
    2:26:59 we discover maybe a primitive alien civilization on a nearby star system,
    2:27:01 what, what would we do?
    2:27:01 Yeah.
    2:27:03 I think that’s a great question.
    2:27:05 I mean, so, you know, it’s interesting how that even brings up the ethical
    2:27:06 questions, right?
    2:27:10 Let’s say that, you know, would we, we’d have to first sort of sort out what
    2:27:14 are the consequences for them and what do we feel our ethical responsibilities are
    2:27:17 to them and also, sorry, from a capitalist perspective.
    2:27:20 What are we to gain from this interaction?
    2:27:21 Right, right, right.
    2:27:23 You look at the way the missionaries, you know, missionaries had these
    2:27:27 interactions because they thought converting them to whatever religion they
    2:27:29 were, you know, was the most important.
    2:27:30 That’s what the gain was.
    2:27:34 So from our perspective, I mean, we’d have to sort that out.
    2:27:40 I think given, you know, if we’re doing this thought experiment, we are curious.
    2:27:42 And I think eventually we’d want to reach out to them.
    2:27:47 Now, I think when you say we, let’s start with the people in this room, right?
    2:27:52 But there is, I wonder who the dominant forces are in the world, because I think
    2:27:58 there’s a lot of people, the military, they will probably move first so they
    2:28:03 can steal whatever advantage they can from this new discovery so they can
    2:28:05 hurt China or China hurt America.
    2:28:07 That’s one perspective.
    2:28:12 Then there’s the, the capitalist who will see like how the benefit of the
    2:28:15 costs here and how can I make money off of this?
    2:28:16 There’s opportunity here.
    2:28:18 There’s gold in them hills.
    2:28:22 And I wonder, and I think the scientists is just not going to, unlike the movies.
    2:28:24 We’re not going to get much say.
    2:28:26 They’re going to put them, hey guys, we, wait a minute.
    2:28:28 They would engage probably.
    2:28:32 I mean, it’s just as, as a human society as we are now, we would engage.
    2:28:35 And we would be detectable, I think.
    2:28:36 In our engagement.
    2:28:37 In our engagement.
    2:28:39 Yeah, yeah, probably.
    2:28:44 So using that trivial bias logic, I just, it just feels like aliens would need
    2:28:46 to be engaging in a very obvious way.
    2:28:48 Yeah, yeah, yeah.
    2:28:53 This brings up that old direct for me paradox for me.
    2:28:56 Uh, what do you make of all the UFO sightings?
    2:29:03 I am all in favor of an open, agnostic, you know, transparent scientific
    2:29:05 investigation of UFOs and UAPs.
    2:29:12 But the idea that, that there’s any data that we have that links UFOs and
    2:29:15 UAPs to non-human technology, I just think they’re the standards.
    2:29:20 They just, none of what is claimed to be the data lives up to the standards of
    2:29:20 evidence.
    2:29:22 So let’s just take a moment on that idea of standards of evidence because I’ve
    2:29:25 made a big deal about this both, you know, in the book and elsewhere.
    2:29:26 Whenever I talk about this.
    2:29:30 So what people have to understand about science is we are really scientists.
    2:29:32 We are really mean to each other.
    2:29:35 We are brutal to each other because we have this thing that we call standards
    2:29:39 of evidence and it’s the idea of like, you have a piece of evidence that you
    2:29:44 want to link to a claim and, you know, under what conditions can you say, oh,
    2:29:48 look, I’ve got evidence of, you know, this claim X, Y and C.
    2:29:53 And in science, we are so mean to each other about whether or not that piece
    2:29:55 of evidence lives up to the standards that we have.
    2:29:59 And we spent 400 years determining what those standards are.
    2:30:02 Um, and that is why cell phones work, right?
    2:30:07 If you didn’t have super rigorous standards about, you know, what you think
    2:30:10 that’s, oh, this little antenna, I’ve invented a new kind of antenna that I
    2:30:13 can slip into the cell phone and I, you know, I can show you that it works.
    2:30:15 You know, if you didn’t have these standards, you know, you did every
    2:30:17 cell phone would be a brick, right?
    2:30:21 And when it comes to UFOs and UPS, the evidence you have and the claim that
    2:30:26 though this shows that, you know, we are being visited by non-human, uh,
    2:30:31 advanced civilization just doesn’t even come close to the same standards.
    2:30:34 I’m going to have to obey or whatever live under.
    2:30:39 If my team, you know, the group I work with is one of them says, look, we’ve
    2:30:42 discovered, he wants to announce that, oh, we’ve discovered, uh, techno
    2:30:44 signature on an alien planet.
    2:30:47 We’re going to get shredded as we expect to be.
    2:30:48 We expect to be beaten up.
    2:30:52 And, you know, the UAP UFO community should expect the same thing.
    2:30:56 You don’t get, you know, you don’t get a pass because it’s a really cool topic.
    2:30:57 So that’s where I am right now.
    2:31:01 I just don’t think any of the evidence is even close to anything that
    2:31:02 could support that claim.
    2:31:07 Well, I generally assign a lot of value to anecdotal evidence from pilots.
    2:31:13 Not scientific value, but just like, it’s always nice to get anecdotal
    2:31:15 evidence as a first step.
    2:31:17 I was like, hmm, I wonder if there’s something there.
    2:31:20 But unfortunately with this topic, there’s so much excitement around it.
    2:31:24 There’s a lot of people that are, uh, basically trying to make money off of it.
    2:31:26 There’s hoaxes, all this kind of stuff.
    2:31:29 So even, even if there’s some signal, there’s just so much noise.
    2:31:30 It’s very difficult to operate with.
    2:31:33 So how do we get better signal?
    2:31:40 So you’ve talked about sort of, if we wanted to really search for UFOs on earth.
    2:31:40 Right.
    2:31:44 And, uh, maybe detect things like weird physics.
    2:31:47 What kind of instruments would we be using?
    2:31:47 Yeah.
    2:31:51 So, uh, you know, in the book, I talked about the idea of this is really stupid,
    2:31:54 but you know, you want to look up, you want to look down and you want to look
    2:31:54 all around.
    2:31:55 I think that’s brilliant.
    2:31:58 I mean, that’s, it’s simple, not stupid.
    2:31:59 It’s like literally.
    2:32:03 So you want to do ground based detectors that, you know, upward looking
    2:32:06 ground based sectors of the kind we’re already building for meteors, right?
    2:32:07 For tracking meteors.
    2:32:10 You want to have space based detectors, put them on satellites.
    2:32:12 This is what the NASA UAP panel was thinking about.
    2:32:16 And then probably on pile, you know, all, we have lots of people in the sky.
    2:32:21 There should be detectors, uh, on the planes or at least, you know, some
    2:32:24 kind of alert system that if some pilot says, Oh, look, I’m seeing something.
    2:32:25 I don’t understand.
    2:32:29 Boop, presses the red button and that triggers the ground based and, uh,
    2:32:34 is space based, um, uh, data collectors and the data collectors themselves.
    2:32:36 This is something that people really don’t understand and it’s so important.
    2:32:41 In order to actually do science with anything, the data you have, you have
    2:32:45 to understand where it came from, like down to the, you know, the nth degree.
    2:32:51 You have to know how that camera behaves in a bunch of different wavelengths.
    2:32:52 You have to have characterized that.
    2:32:56 You have to know what the software does, what the limits of the software
    2:32:59 possibly have to know what happened to the camera as it was it refurbished
    2:33:04 recently, um, in, you know, in every spectral wavelength, uh, in all of its
    2:33:08 data, um, collection and, and, and processing, you have to know all of those
    2:33:11 steps and having them all characterized because especially if you want to claim
    2:33:15 like, Oh my God, I saw something take a right hand turn at Mach 500, right?
    2:33:19 You better have all of that nailed down before you make that kind of claim.
    2:33:23 So we have to have characterized detectors looking up, down and maybe on, on
    2:33:24 planes themselves.
    2:33:26 We need a rational search strategy.
    2:33:29 So let’s say you want to lay out these, uh, ground based detectors.
    2:33:30 Where do you put them?
    2:33:30 Right?
    2:33:32 There’s only so much money in the world.
    2:33:35 So, you know, do you want to put them near places where you’ve seen a lot of
    2:33:39 things beforehand, or do you want to, you know, have them try and do a, a sparse
    2:33:40 coverage of the entire country?
    2:33:44 Um, and then you need the, uh, the data analysis, analysis, right?
    2:33:47 You’re going to have so much data, so many false positives or, you know,
    2:33:51 false triggering that you need a way of sorting through enormous amounts of
    2:33:53 data and figuring out what you’re going to throw out and what you’re going to
    2:33:53 keep.
    2:33:55 And all of these things we’re used to doing in other scientific
    2:33:56 enterprises.
    2:34:00 And without that, if we don’t do that, we’re going to be having the same damn
    2:34:03 argument about these things for, you know, the next hundred years.
    2:34:09 But if I asked you, I give you a trillion dollars and ask you to allocate to one
    2:34:16 place, looking out, steady, or looking at earth, what should you allocate?
    2:34:18 Oh God, looking out, looking out, because that’s the bet.
    2:34:21 You know, as I always like to say, here’s my, my codification of this.
    2:34:24 If you said, Hey, Adam, I’d like to find some Nebraskans.
    2:34:27 And I said, Oh good, let’s go to the Himalayas.
    2:34:29 You know, you’d be like, why am I going there?
    2:34:32 I’m like, well, you know, maybe there’s some Himalayas, you know,
    2:34:33 some Nebraskans and Himalayas.
    2:34:34 Say no, no, let’s go to Nebraska.
    2:34:40 If we’re looking for aliens, why don’t we look on alien planets where they live?
    2:34:44 Cause that’s, we have that technology now, as opposed to the, you know, the, the
    2:34:48 bucket of assumptions that you have to come up with in order to say, like, Oh,
    2:34:48 they’re here right now.
    2:34:50 You know, they just happen to be here right now.
    2:34:53 And also the very important thing, I called this the high beam argument.
    2:34:57 You know, to deal with the UFO stuff, you have to deal with all of, you have to
    2:35:00 answer these weird irrational things that are happening.
    2:35:05 Like, okay, there’s an advanced civilization that is visiting earth regularly.
    2:35:07 They don’t want to be detected.
    2:35:11 They’ve got super powerful technology, but they really suck at using it because
    2:35:14 they, we keep seeing them, we keep seeing them, but then they disappear.
    2:35:14 Right.
    2:35:19 I mean, explain to me what rational world that works under.
    2:35:22 It’s like, you know, so there’s that whole sort of argument you’ve got to
    2:35:27 explain, like why, if they want to stay hidden, are they so bad at it?
    2:35:31 So, you know, that’s why I take that level of difficulty.
    2:35:33 And then I put it on top of where should I look?
    2:35:37 I should look at the, the, you know, I should look at where they, where they’re
    2:35:41 from that makes me want to look at, do the telescopic stuff.
    2:35:41 Yeah.
    2:35:48 I think the more likely explanation is either the sensors are not working correctly
    2:35:51 or it’s a secret military technology being tested.
    2:35:51 Absolutely.
    2:35:55 I mean, if you had, I listen, I, that’s why, again, I think UAP, you know, the
    2:35:58 absolutely UAP should be studied scientifically.
    2:36:02 Um, uh, but if I had to make a bet and it’s just a bet, I would say this is,
    2:36:05 you know, this is pure state adversary stuff.
    2:36:10 When I did, I did a, a New York Times op-ed for this in 2021, which blew up.
    2:36:13 And, um, and so, you know, I had a lot of, you know, people talking to me.
    2:36:16 While I was doing that, I sort of looked at the signals, intelligence people,
    2:36:21 the sig int and an eint, electronic intelligence communities.
    2:36:23 And what they were saying about, you know, the New York Times articles
    2:36:27 and the, the various videos, and really none of them were talking about UFOs.
    2:36:29 They were all talking about, you know, pure state.
    2:36:31 That’s why I learned the word pure state adversaries.
    2:36:35 How like even simple drone technologies, you can, you know, and you want to,
    2:36:39 you purposely want to do this, you want to, um, fake, you know, signals into
    2:36:42 the electronics, uh, of their adversary.
    2:36:46 So they crank it up so then you can just soak up all the electromagnetic
    2:36:49 radiation and know exactly what those advanced radars can do.
    2:36:52 That said, I’m not saying that that’s what this is.
    2:36:58 If I wasn’t the head of an alien civilization and I chose to not, to
    2:37:03 minimize the amount of contact I’m doing, I would try to figure out what would
    2:37:07 these humans, what would these aliens like to see?
    2:37:13 That’s why like the big heads in the humanoid form, like, I mean, that’s
    2:37:15 kind of like how would I would approach communication.
    2:37:18 If I, if I was much more intelligent, I would observe them enough.
    2:37:23 It’s like, all right, if I wanted to communicate with a nail colony, I
    2:37:26 would observe it long enough to see what are the basic elements of communication.
    2:37:27 Yeah.
    2:37:27 Yeah.
    2:37:31 And maybe I would do a trivial thing, like do a, like a fake ant.
    2:37:31 Right.
    2:37:32 A robot ant.
    2:37:35 A robot ant, but then it’s not enough to just do a robot ant.
    2:37:38 You’d have to do a robot ant that like moves in the way they do.
    2:37:42 And maybe aliens are just shitty at doing the robot ants.
    2:37:45 But no, I do sort of, I just wanted to make the case for that.
    2:37:49 This is the plot, actually, of a great science fiction book called Eon by Greg
    2:37:52 Bear, and the idea was like these sort of, you know, this, this is actually
    2:37:58 where my first, I got, I became sort of more than agnostic, anti-Medi, because
    2:38:01 the idea is that, yes, our aliens come, they, you know, they sort of make their
    2:38:04 arrival, and really their point is to get rid of us.
    2:38:06 It’s the, it’s the dark forest hypothesis.
    2:38:10 And what they do is they sort of literally the way they present themselves is
    2:38:14 in this sort of classic UFO thing, and they do it and they, you know, they arrive
    2:38:16 at the, this was during the Soviet Union, they arrive at the USSR, they arrive
    2:38:20 in China, and they’re kind of faking us out so that we never can organize
    2:38:24 ourselves against, so it was really, they did exactly kind of what you’re
    2:38:27 talking about, but for nefarious purposes.
    2:38:28 Okay.
    2:38:29 Let me ask the podhead question.
    2:38:32 Another, yet another, the whole conversation.
    2:38:33 I’m sorry.
    2:38:34 Boggs before breakfast.
    2:38:37 It’s, it’s science and podhead questions back and forth.
    2:38:44 Okay, what if aliens take a form that’s unlike what we kind of traditionally
    2:38:50 envision in analyzing physical objects?
    2:38:52 What if they take the form of, say, ideas?
    2:38:58 What if real podhead, if it’s consciousness itself, like the subjective
    2:39:04 experience is an alien being, maybe ideas and is an easier one to visualize
    2:39:07 because we can think of ideas as entities traveling from human to human.
    2:39:11 When, you know, I made the claim that the most important, that finding
    2:39:14 life, any kind of life would be the most important discovery in human history.
    2:39:18 And one of the reasons is, again, as I said, that, you know, life, if we’re not
    2:39:22 an accident and there’s other life, then there’s probably lots of other life.
    2:39:27 And because the most significant thing about life is it can innovate, right?
    2:39:33 If I give you a star and, you know, give, tell you the mass and the composition,
    2:39:35 you can basically pretty much use the laws of physics, tell exactly what’s
    2:39:37 going to happen to that star over its entire lifetime.
    2:39:40 Maybe not the little tiny details, but overall, it’s going to be a white dwarf.
    2:39:41 It’s going to be a black hole in the story.
    2:39:44 If I gave you a single cell and said what’s going to happen in a few billion
    2:39:48 years, you’d never be able to predict a giant rabbit that can punch you in the face,
    2:39:49 right, a kangaroo.
    2:39:53 So life has this possibility of innovating, of being creative.
    2:39:57 So here’s, so what it means is, and that’s a part of it, kind of a fundamental
    2:39:59 definition of what it means to be alive.
    2:40:00 It goes past itself.
    2:40:07 So give life enough time, you know, and what are the, what are the
    2:40:07 end results?
    2:40:09 Like, you know, there’s, there’s, you know, like, that’s why I love
    2:40:10 science fiction so much.
    2:40:15 It does, at some point does life reach a point where it climbs into the laws
    2:40:19 of physics itself, it becomes the laws of physics or, you know, these, these
    2:40:23 sort of lie at the, the extreme limits of thinking about what, what we mean by
    2:40:27 reality, what we mean by, you know, uh, uh, experience.
    2:40:30 Um, but I’m not sure there was much we can do with them scientifically,
    2:40:33 but it, you know, they’re, they’re open-ended question about the open-ended
    2:40:37 nature of what it means to be alive and what life can do.
    2:40:42 Since you said it’s the biggest question, which is an interesting thought
    2:40:45 experiment, what is the biggest scientific question we can possibly answer?
    2:40:49 You know, some people might say about, like, what happened before the big
    2:40:52 bang, like some big physics questions about the universe.
    2:40:58 I can see the argument for, you know, how many alien civilizations, or if
    2:41:01 there’s other life out there, you want to speak to that a little bit?
    2:41:03 Like why, why is the, why is it?
    2:41:07 Is it the biggest question in your, why is it number one in your top five?
    2:41:08 I’ve evolved in this, right?
    2:41:10 You know, I started off as a theoretical physicist.
    2:41:13 I went into, um, computational astrophysics and magneto hydrodynamics
    2:41:16 of star formation, but I always, you know, I was a philosophy minor.
    2:41:19 I always had the sort of bigger questions sort of floating around the back of my mind.
    2:41:24 And what I’ve come to now is the most important question in the, for physics
    2:41:25 is what is life?
    2:41:29 What the hell is the difference between a rock and a cell fundamentally?
    2:41:32 And what I really mean by this, and this is where I’m going to go non-traditional,
    2:41:36 um, is that really the fundamental question that is the, is agency?
    2:41:39 What does it mean to be an autonomous agent?
    2:41:41 How the hell does that happen?
    2:41:43 You know, it’s so, I’m not a reductionist.
    2:41:45 I’m not somebody who’s just like, well, you just put together enough chemicals
    2:41:47 and bing, bang, boom, and you know, it suddenly appears.
    2:41:54 There’s something that really is going to demand a reconception of what nature itself is.
    2:41:56 And so yeah, black holes are super cool.
    2:41:57 Cosmology is super cool.
    2:42:04 But really this question of, of what is life, especially from by viewing it from the inside,
    2:42:07 because it’s really about the verb to be, right?
    2:42:10 Really, what is the most, what is the most impressive philosophical question
    2:42:12 beyond science is the verb to be?
    2:42:15 What is, what is being, right?
    2:42:19 This is what Stephen Hawking said when he talked about what puts the fire in the equations.
    2:42:20 The fire, right?
    2:42:22 The fire is this, this presence.
    2:42:25 And this is where it touches things like, you know, whatever you want to say it,
    2:42:28 the sacred spirituality, whatever you want to talk about.
    2:42:31 My first book was about science and, and human spirituality.
    2:42:36 So it’s like, you know, so this question of life, what makes life as a physical system,
    2:42:42 you know, so different is, is to me much, because it’s, you know, that’s where being appears.
    2:42:45 Being doesn’t appear out there, right?
    2:42:47 The only place that ever appears to any of us is us.
    2:42:51 So, you know, I can do this kind of projection into this third person thing,
    2:42:53 but nobody ever has that, that God’s eye view.
    2:42:54 That’s a story we tell.
    2:43:00 This is where, you know, this between us is where the verb to be appears.
    2:43:07 So this is something that you write about in the blind spot, why science cannot ignore human experience,
    2:43:15 sort of trying to pull the fire into the process of science.
    2:43:18 And it’s a kind of critique of materialism.
    2:43:20 Can you explain the main thesis of this book?
    2:43:20 Yeah.
    2:43:24 So the idea of the blind spot is that there is this thing.
    2:43:27 That is central to science.
    2:43:29 So the blind, we’re using the blind spot as a metaphor, right?
    2:43:34 So the eye has an optic nerve and the optic nerve is what allows vision to happen.
    2:43:38 So you can’t have vision without the optic nerve, but actually you’re blind to the optic nerve.
    2:43:41 There’s a little hole in your vision where the optic nerve is.
    2:43:45 And what we’re saying is that science has something like this.
    2:43:49 That there is something that without which science would not be possible.
    2:43:51 But that science, the way it’s been configured.
    2:43:55 And actually, when we mean the blind spot, I’ll get into exactly what I mean, what it is.
    2:43:57 But it’s not really science.
    2:44:00 It is a set of ideas that got glued on to science.
    2:44:03 It’s a metaphysics that got glued on to science.
    2:44:06 And so what is that thing that is, what is the blind spot?
    2:44:07 It’s experience.
    2:44:09 It is presence.
    2:44:12 And by experience, people have to be very careful because I’m not talking about being an observer.
    2:44:15 It’s the, you know, there’s lots of words for it.
    2:44:16 There’s direct experience.
    2:44:23 There is presence being the life world within the philosophy called phenomenology.
    2:44:24 There’s the life world.
    2:44:28 It’s this sort of raw presence that you can’t get away from until you die.
    2:44:32 And then who the hell knows, you know, that like, you know, as long as you’re around, it’s there.
    2:44:35 And what we’re saying is that that is the way to say this.
    2:44:41 That is the precondition for the possibility of science.
    2:44:47 And the whole nature of science, the way it has evolved is that it purposely pushed that out.
    2:44:49 It pushed that out so it could make progress.
    2:44:52 And that’s fine for a certain class of problems.
    2:44:58 But when we try to answer, when we try and go deeper, there’s a whole other class of problems.
    2:45:03 The nature of consciousness, the nature of time, quantum mechanics, that comes back to bite us.
    2:45:09 And that if we don’t learn how to take, understand that that is always the background,
    2:45:11 that experience is always the background.
    2:45:17 Then we just end up with these paradoxes and these yoga that require this intellectual yoga to get out of.
    2:45:20 I think you give a bunch of examples of that, like looking at temperature as a number.
    2:45:23 There’s a very sort of objective scientific way of looking at that.
    2:45:25 And then there’s the experience of the temperature.
    2:45:29 And how you build the parable of temperature that we call it.
    2:45:30 So what is the blind spot?
    2:45:32 We use the term, it’s a constellation.
    2:45:33 It’s not just materialism.
    2:45:37 It’s a constellation of ideas that are all really sort of philosophical views.
    2:45:42 They’re not what science says, but because of the evolution of the history of science and culture,
    2:45:44 they got like pin the tail on the donkey.
    2:45:48 They were sort of pinned on and to tell us that this is what science says.
    2:45:49 So what is it?
    2:45:55 One is reductionism, that you are nothing but your nerve cells, which are nothing but the chemistry,
    2:45:58 which is nothing but, you know, all the way down to quarks.
    2:45:58 That’s it.
    2:45:59 So that’s reductionism.
    2:46:07 The objective frame that science gives us this God’s eye view, this third person view of the world to view the world from the outside.
    2:46:09 That that’s what science, you know, bequeaths to us, that view.
    2:46:14 Physicalism, that everything in the world is basically made of stuff.
    2:46:16 There’s nothing else to talk about, right?
    2:46:19 That that’s all there is and everything can be reduced to that.
    2:46:24 And then also the reification of mathematics, that mathematics is somehow more real than this.
    2:46:25 And there’s a bunch of other things.
    2:46:32 But all of these together, what they all do is they end up pushing experience out and saying experience is an epiphenomena.
    2:46:33 Consciousness.
    2:46:39 I don’t, I tend not to use the word consciousness because it’s, I think it gets, you know, it leads us in the wrong direction.
    2:46:44 We should focus on experience because it’s a verb kind of in a way or it’s verb, it’s verb like.
    2:46:53 So yeah, and that this, by being blind to that, we end up with these paradoxes and problems that really not only block science,
    2:46:56 but also have been detrimental to society as a whole, especially where we’re at right now.
    2:47:02 So you actually say that that from a perspective of detrimental society, that there’s a crisis of meaning.
    2:47:09 And then we respond to that in a way that’s counterproductive to these bigger questions, scientific questions.
    2:47:15 So the three ways, the three responses you mentioned is scientific triumphalism.
    2:47:20 And then on the other side is rejecting science completely, both on the left and the right.
    2:47:24 I think the postmodernist on the left and anti-establishment people on the right.
    2:47:28 And then just pseudoscience that kind of does this in between thing.
    2:47:32 Can you just speak to those responses and to the crisis of meaning?
    2:47:33 Right, right.
    2:47:39 So the crisis of meaning is that, you know, on the one hand, science wants to tell us that we’re insignificant.
    2:47:40 We’re not important.
    2:47:42 We’re just, you know, biological machines.
    2:47:46 And, you know, so we’re basically an insignificant part of the universe.
    2:47:51 And the other hand, we also find ourselves being completely significant in cosmology.
    2:47:56 We have to figure out how to look from the inside at cosmology.
    2:47:57 We’re always the observers.
    2:48:00 We’re at the center of this, you know, collapsing wave front of light.
    2:48:03 You know, in quantum mechanics, it really comes in.
    2:48:06 It comes in, you know, the measurement problem just puts us front and center.
    2:48:11 We’ve spent a hundred, some people spent a hundred years trying to ignore the measurement part of the measurement problem.
    2:48:13 So on the one hand, we’re insignificant.
    2:48:14 And on the other hand, we’re central.
    2:48:15 So which one is it, right?
    2:48:21 And so this all comes from not understanding actually the foundational role of experience.
    2:48:27 This inability, we can’t, it’s, we can’t do science without already being present in the world.
    2:48:31 We can’t reduce what happens in science to some sort of formal, it’s real.
    2:48:36 A lot of it is about, we love our formal systems, you know, our mathematics and we’re substituting.
    2:48:40 That’s one of the things that we, there’s two philosophers we really like who are heroes.
    2:48:45 One is Herserl, who is a mathematician who invented phenomenology.
    2:48:51 And the other is Whitehead, who was one of the greatest mathematicians of the 20th century.
    2:48:54 And Herserl came up with this idea of the surreptitious substitution.
    2:49:01 Part of the blind spot is substituting a formal system, a calculus of, you know, data for actual experience.
    2:49:06 That that’s more important than, and so let me just do before I go to those three responses.
    2:49:10 Let’s just do the parable of temperature, because I think it’ll, people can, it’ll, it’ll help them understand what we mean.
    2:49:14 So think about degree Celsius, right?
    2:49:19 We kind of have in the modern scientific culture we live in, we think like, oh, yeah, degree Celsius.
    2:49:24 They’re out there, the universe, it’s, you know, the molecular cloud in space is 10 degrees, you know, Kelvin.
    2:49:32 The way we got there is we’ve forgotten how that idea is rooted in experience, right?
    2:49:37 We started off with science by, we had the experiment, the subjective experience of hot and cold.
    2:49:40 I feel hot, I feel cold, you feel hot, you feel cold.
    2:49:48 Science was this process of trying to extract from those experiences what Michelle Bitbowl philosopher calls the structural invariance.
    2:49:51 The things that, like, we could both kind of do agree on.
    2:50:03 So, you know, we figured out, like, oh, we could make a gradated little cylinder that’s got mercury in it and that, you know, hot things will be higher in that, you know, on that gradated cylinder, cold things will be lower.
    2:50:07 And we can both kind of figure out what we’re going to agree on our standards for that.
    2:50:09 And then we have thermometry, yay.
    2:50:16 We have a way of sort of like having a structural invariant of this sort of very personal experience of hot or cold.
    2:50:19 And then from that, we can come up with thermodynamics, et cetera.
    2:50:28 And then we end up at the bottom, you know, at the end of that with this idea of, like, every day I wake up and I check my phone and I’m like, oh, it’s going to be, you know, 60 degrees out, great.
    2:50:42 And we start thinking that 60 degrees is more real than hot and cold, that thermodynamics, the whole formal structure of thermodynamics is more real than the basic experience of hot and cold that it came from, you know.
    2:50:50 It required that bodily experience that also, not just me, you, I have to tell you, you know, it’s part of my communication with you, cold today, isn’t it?
    2:50:50 Right.
    2:51:01 That from that basic, irreducible experience of being in the world, you know, with everything that involves, I developed degrees Celsius, but then I forgot about it.
    2:51:02 I forgot the experience.
    2:51:04 So that’s called the amnesia of experience.
    2:51:18 So that’s what we mean by the, you know, how the blind spot emerges, how we end up, how science purposely pushes experience out of the way so it can make progress, but then it forgets that experience was important.
    2:51:19 So where does this show up?
    2:51:23 Why is this, you know, what are the responses to trying to get this back in?
    2:51:25 And where, where, where does this crisis of meaning emerge?
    2:51:31 So scientific triumphalism is the idea that only, the only thing that’s true for us are scientific truths, right?
    2:51:41 Unless it can be codified in a formal system and represented as data, you know, captured in some kind of scientific causal network, it doesn’t even exist, right?
    2:51:47 And anything else that’s not part of it, part that can be formalized in that way is an epiphenomenon.
    2:51:48 It’s not real.
    2:51:59 So, you know, scientific triumphalism is this response to, to the mist, you know, the weirdness of, you know, I could call it the mystery, the weirdness of experience by kind of just ignoring it and completely.
    2:52:08 So there’s no other truth, you know, art, music, you know, human spirituality, it’s all actually reducible just to neuro, you know, neural correlates.
    2:52:11 So that’s one way that it’s been dealt with.
    2:52:12 The other way is this sort of right.
    2:52:23 You’ve got on the, on the postmodern, you know, the left academic left, you get this thing like science is just a game, you know, it’s just a game by the from the powerful come up with, which is also not true.
    2:52:27 Science is totally potent and requires an account for what is happening.
    2:52:31 So that’s another way to push sort of science away or respond to it.
    2:52:42 The denial, science denial that happens, that’s also another way of sort of, you know, not understanding the balance that science is trying, that we need to establish with experience.
    2:52:53 And then there’s just pseudoscience, which wants to sort of say like, oh, you know, the new age movement or whatever, which wants to have, you know, wants to deal with experience by kind of elevating it in this weird pseudo spiritual way.
    2:52:56 Or, you know, it said that doesn’t have the rigor of science.
    2:53:02 So, you know, all of these ways, all of these responses, we have this difficulty about experience.
    2:53:07 We need to understand how experience fits into the web of meaning.
    2:53:11 And we don’t really have an accurate, we don’t have a good way of doing it yet.
    2:53:19 And the point of the book was to identify very clearly how the problem manifests, what the problem is, and what its effects are in the various sciences.
    2:53:26 And by the way, we should mention that at least the first two responses, they kind of feed each other.
    2:53:40 There’s a, just to observe the scientific community, those who sort of gravitate a little bit towards the scientific triumphalism, they, there’s an arrogance that builds in the human soul.
    2:53:49 I mean, it has to do with PhDs, it has to do with sitting on an academic throne, all of those things, and the human nature with the egos and so on, it builds.
    2:53:52 And of course that, nobody likes arrogance.
    2:54:01 And so the, those that reject science, the arrogance is fuel for the people that reject science, which just goes back, and it’s just, is this divide that builds.
    2:54:05 Yeah, no, and that was a problem like when you saw, so like I said, you know, my first book was about science and human spirituality.
    2:54:13 So I was trying to say that like, you know, science is actually, if we look at what happens in human spirituality, not religion, religion is about politics, right?
    2:54:20 But about, you know, for the entire history of the species, we’ve, we’ve had this experience of, for a better, lack of a better word, the sacredness.
    2:54:24 I’m not connecting this God or anything, I’m just saying this experience of like the more.
    2:54:34 And then, you know, with the new atheist movement, you’ve got people saying that like, anybody who feels that is an idiot, you know, they just can’t handle the hardcore science.
    2:54:46 When in fact, their views of the world are so denuded of it, they can’t even see the role that experience plays in how they came up with their formal systems, you know, and experience fundamentally is weird, you know, mysterious.
    2:54:50 It’s like, it’s, it’s, you know, kind of goes down forever in some sense, there is always more.
    2:55:01 So yeah, that arrogance then, just if you’re telling everybody who’s not hardcore enough to do the, you know, standard model of cosmology, that they’re idiots, that’s not going to bode well for your, you know, the advance of your project.
    2:55:19 So you’re proposing, at least to consider the idea that experience is a fundamental experience is not just an illusion that emerges from the set of quirks, that there could be something about the conscious experience of the world that is like at the core of reality.
    2:55:20 Yeah, but I wouldn’t do it.
    2:55:24 I wouldn’t, because, you know, there’s panpsychism, right, which is all the way there.
    2:55:24 Yeah.
    2:55:27 Panpsychism is like, that’s literally one of the laws of physics.
    2:55:28 Right, right.
    2:55:36 But see, what all those do is like, just the idea of, say, like, physicalism versus idealism, which are kind of the two philosophical schools you can go with.
    2:55:38 Physicalism says, all that exists is physical.
    2:55:40 Idealism says, all that exists is mind.
    2:55:48 We’re actually saying, look, both of these, to take either of those positions is already to project out into that third person view, right?
    2:55:53 And that third person view, we want to really emphasize, is a fiction.
    2:55:55 It’s a useful fiction when you’re doing science, right?
    2:56:01 If I want to do, like, you know, the Newtonian physics of billiard balls on a pool table, great.
    2:56:03 I don’t want to have to think about experience at all, right?
    2:56:13 But, you know, if I’m asking deeper questions, I can’t ignore the fact that there really is no third person view and that any story I tell about the world is coming from.
    2:56:19 It’s not just first person, but it’s literally, because I’m going to argue that experience always involves all of us.
    2:56:30 Experience always originates out of a community that, you know, you’re always telling those stories from the perspective of already existing, of already being inexperienced.
    2:56:40 So whatever account we want to give is of the world is going to have to take that as experience as being irreducible and the irreducible starting point.
    2:56:42 So ultimately, like, we don’t have an answer.
    2:56:45 Like, that’s when people are like, well, what are you suggesting is your alternative?
    2:56:48 It’s like, look, that’s the good work of the next science to come.
    2:56:50 Well, our job was to point out the problem with this.
    2:56:57 But what we would argue with is, and we’re thinking about the next book, is this is really going to require a new conception of nature, right?
    2:57:07 That doesn’t sort of jump right to that third person, that fictional third person view and somehow figures out how to do science, recognizing that it always starts from experience.
    2:57:14 It always starts from this field of experience or in phenomenology, the world is the life world that you’re embedded in.
    2:57:16 You can’t un-embed yourself from it.
    2:57:23 So how do you do, so one of the things that Whitehead said was, you know, we have to avoid the bifurcation of nature.
    2:57:30 And what he meant by that is the bifurcation into, like, sort of scientific concepts, wavelength, you know, think about, like, the seeing a sunset.
    2:57:38 You can say, like, oh, look, it’s just wavelengths, you know, and scattering particles and your experience of the redness, the actual experience of the redness and all the other things.
    2:57:39 It’s not just red.
    2:57:40 There’s no qualia.
    2:57:41 There’s no pure redness.
    2:57:44 Everything that’s happening in the experiential part is just an epiphenomenon.
    2:57:46 It’s just, you know, brain states, whatever.
    2:57:48 He said, you can’t do that.
    2:57:49 They’re just, they’re both real.
    2:57:53 They’re both accounts or both, they both need to be integrated.
    2:57:57 And so that required, I think, a really a different conception of what we mean by nature.
    2:58:08 Is it something like incorporating in the physics, in the study of nature, the observer, the experiencing observer, or is that still also looking for my third person?
    2:58:10 I think that that’s what we have to figure out, right?
    2:58:13 And so actually, you know, a great place to think about this is quantum mechanics, right?
    2:58:22 Cause one of the things we’re arguing is like, look, in the chapter that I wrote on, cause it was, I wrote this with Evan Thompson, who’s a wonderful philosopher and Marcelo
    2:58:24 Gleiser, who’s a theoretical physicist.
    2:58:33 Um, when I was writing the chapter on the origin of the blind spot, like, you know, sort of what, how this emerged out of history, my subheader was like, well, it made sense at the time.
    2:58:39 Cause it did, you know, it really, there was a reason why people adopted this third person, God’s eye deterministic view.
    2:58:43 This view of sort of like, yeah, the perfect clockwork of the universe.
    2:58:44 Yeah, totally made sense.
    2:58:53 But by the time you got to the beginning of the 20th century, science itself was telling you like, and no place does this appear more than in quantum mechanics, right?
    2:59:08 Quantum mechanics slams you with the idea that the of the measurement problem, you know, uh, the most important thing about quantum mechanics is you have a dynamical equation, the Schrodinger equation, which, you know, you put in, like we talked about before, you have initial conditions.
    2:59:13 And now you got a differential equation and you crank out the differential equation and it makes predictions for the future, right?
    2:59:19 Exactly like Newtonian physics or its higher versions of the Lagrange or Hamiltonians.
    2:59:28 But then this other thing happens where it’s like, oh, by the way, as soon as you look at it, as soon as the measurement is made, I have a whole nother set of rules for you.
    2:59:30 You know, that’s the born, what we call the born rule.
    2:59:35 And it was telling you right from the beginning that measurement matters, right?
    2:59:40 So when you’re asking like, how will we do this, quantum mechanics is actually pointing to how to do it.
    2:59:43 So, you know, there’s been all these different interpretations of the quantum mechanics.
    2:59:47 Many of them try to pretend the measurement problem isn’t there.
    2:59:59 Go to enormous lengths like the, the many worlds interpretation, literally inventing an infinite number of unobservable parallel universes to avoid the thing that quantum mechanics is telling them, which is that measurements matter.
    3:00:07 And then you get something like Cubism, which is I’m going to advocate for is a new interpretation of quantum mechanics, which puts the born rule at the center, right?
    3:00:13 Instead of like focusing on the Schrodinger equation and the weird things that come out of it, like Schrodinger’s cat and all that other stuff.
    3:00:16 It says, no, no, actually, the real mystery is the born rule.
    3:00:18 Let’s think about the born rule.
    3:00:24 And like you said, that puts the agent, the agent and information at the center of the whole thing.
    3:00:27 So that’s not a thing you’re trying to get rid of.
    3:00:31 That’s the thing you’re trying to integrate at the center of the thing in quantum mechanics.
    3:00:43 It becomes super obvious, but maybe the same kind of thing should be incorporated in every layer of study of nature.
    3:00:43 Absolutely.
    3:00:44 That’s exactly it.
    3:00:47 So, you know, one of the things that’s really interesting to me, so I’m, you know, I have a project.
    3:00:52 I’m part of a big project that Chris Fuchs and Jacques Pinier on Cubism.
    3:00:53 So I’ve been part of that.
    3:00:56 And what I’ve been amazed by is the language they use.
    3:00:59 So what’s cool about Cubism is it comes from quantum information theory.
    3:01:02 It’s a pretty modern version of thinking about quantum mechanics.
    3:01:14 And it’s always about you have an agent who makes an action on the world and then the information they get from that action through the experiment.
    3:01:19 That’s the action in the world updates their priors, updates their, their, you know, their Bayesian.
    3:01:20 That’s why it’s called Cubism.
    3:01:24 Quantum Bayesianism updates how the information they’ve gotten from the world.
    3:01:40 Now, this turns out to be kind of the same language that we’re using in a project that’s about the physics of life, where we have a grant from the Templeton Foundation to look at semantic information and the role of semantic information in living systems like cells.
    3:01:48 So, you know, we have Shannon information, which is a probability distribution that tells you, you know, basically how much surprise there is in a, in a message.
    3:01:51 Semantic information focuses on meaning, right?
    3:02:04 Focuses on, in a very simple way, just like, what is, how much of the information that I’m, that the agent, you know, the critter is getting from the world actually has, helps it survive, right?
    3:02:06 That’s the most basic idea of meaning, right?
    3:02:08 We can get all philosophical about meaning, but this is it.
    3:02:10 Does it help me stay alive or not?
    3:02:26 And the whole question of agency and autonomy that occurs in this setting of just asking about how do cells move up a chemical gradient to get more food kind of has the same feel, the same, you know, sort of architecture as what’s going on in quantum mechanics.
    3:02:50 So I think what you said is exactly it. How do we bring this sort of recognition that there’s always us, the agent or life, the agent interacting with the world and drawing it, both giving information and passing information back as a way of doing science, doing hardcore science with experiments, but never forgetting that agency, which also means experience in some sense, is at the center of the whole thing.
    3:03:06 So you think that could be something like Cubism, quantum Bayesianism that creates a theory like a Nobel Prize winning theory, sort of like hardcore real theories that put the agent at the center.
    3:03:08 Yes, that’s what we’re looking for.
    3:03:10 I think that is really, that’s the exciting part.
    3:03:16 And it’s a move, you know, the scientific triumphalist thing says, you know, we understand why people love this.
    3:03:24 Like, I have these equations and these equations represent, you know, there’s this platonic idea that they are, you know, they exist eternally on their own.
    3:03:26 It’s kind of quasi religious, right?
    3:03:30 It’s sort of like somehow look, these equations are the, you’re reading the mind of God.
    3:03:37 But this other approach to me is just as exciting, because what you’re saying is there’s us and the world, they’re inseparable, right?
    3:03:52 It’s always us and the world. And what we’re now finding about is this kind of co-creation, this interaction, you know, between the agent and the world, such that these powerful laws of physics that need an account, like in no way am I saying these laws aren’t important.
    3:04:07 These laws are amazing, but they need an account, but not an account that strips, you know, that turns the experience, turns the agent into just a, you know, an epiphenomena that pushes the agent out and makes it seem as if the agent is not the most
    3:04:08 important part of the story.
    3:04:23 So if you pull on this thread and say there’s a whole discipline born of this, putting the agent as the primary thing in a theory and a physics theory, like how is it possible it just like breaks the whole thing open?
    3:04:42 So there’s this whole effort of, you know, unifying general relativity and quantum mechanics of like coming up with a theory of everything. What if these are like the tip of the iceberg? What if the agent thing is like really important?
    3:04:56 So, you know, listen, that that would be like kind of my dream. I’m not going to be the one to do it because I’m not smart enough to do it. But, you know, Marcelo and I have for a while have been sort of critical of where foundational physics has been for a while with string theory.
    3:05:11 I spent my whole life listening to talks about string theory real soon, you know, and it’s gotten ever more disconnected from, you know, data observations. There were people talking for a while that it’s post empirical.
    3:05:30 And, you know, I want to always wanted to write a paper or an article that was like physicists have been smoking their own stash, right? There’s this way we’ve gotten used to like, you know, you have to outweard the other person like my theory is 38 dimensions, my theory is 22 dimensions, but it’s got, you know, you know, psychedelic squirrels in it.
    3:05:41 And so there’s been a problem. There’s a problem. I don’t need to tell you there’s a crisis in physics or there’s a crisis in cosmology. Other people have used that. That’s been the headline on scientific American stories.
    3:06:10 So they’re clearly another direction has to be found. And maybe it has nothing to do with this. But I suspect that because so many times the agent or the having to deal with the view from the inside or the role of agency, like when it comes to time, thinking that you can replace the block universe with the actual experience of time, you know, clocks don’t tell time, we use clocks to tell time.
    3:06:22 So maybe that even like the fundamental nature of time can’t be viewed from the outside, that there’s a new physics theory that is going to come from that comes from this agential informational computational view.
    3:06:29 I don’t know. But that’s kind of what I think it would be fertile ground to explore.
    3:06:35 Yeah, the time is really interesting one. This time is really important to us humans. What is time?
    3:06:56 Yeah, that’s a right. What is time? So the way we have tended to view it is we’ve taken this is what when Herschel talks about the surreptitious substitution, we’ve taken Einstein’s beautiful, powerful, formal system for viewing time, and we substituted that for the actual experience of time, right?
    3:07:09 So the block universe where like next Tuesday is already written down, you know, it’s in the block, you know, the four dimensional universe, all events are already there, which is very potent for making certain kinds of predictions within this sort of, you know, the scientific framework.
    3:07:17 But, you know, it is not lived time. And, you know, this was pointed out to Einstein and he eventually recognized it.
    3:07:31 Very famous meeting between Henri Berkson, who was the most famous philosopher of like the, you know, 20 early 20th century, and Einstein, where Einstein was giving a talk on relativity and Berkson, whose whole thing was about time and was about duration.
    3:07:45 He wanted to separate the scientific image of time, the map of time from the actual terrain, which he used the word duration, like we humans were where duration for us is full.
    3:07:49 It’s sort of, it’s stretched out. It’s got a little bit of the past, a little bit of the future, a little bit of the present.
    3:07:57 Music is the best example, right? You’re hearing music, you’re both already anticipating what’s going to happen, and you’re, you know, remembering what’s going on.
    3:08:14 There’s a kind of phenomenal structure there, which is different from the representation of time that you have with the formal mathematics and what, you know, the way we would look at this is that the problem with the surreptitious substitution, the problem with the blind spot,
    3:08:37 is it says, Oh, no, no, the formal system is time, but really the only place time appears is with us, right? Where we’re, you know, so having a theory that actually could start with us, you know, and then stretch out into the universe rather than imposing this imaginary third person view back on us, you know, could that’s a route towards a different way of approaching the whole problem.
    3:08:44 I just wonder who’s the observer? I mean, define what the agent is in any kind of frame is difficult.
    3:08:56 Right. And so that, but that’s the good work of the science ahead of us. Right. So what happened with this idea of the structural invariance I was talking about? So, you know, we start with experience, which is irreducible, there’s no atoms of experience, right, it’s a whole.
    3:09:06 And we go through the whole process, which is a communal process, by the way, there’s a philosopher Robert Crease, who talks about the workshop that’s starting in like the 1700s, 1600s, we developed this communal
    3:09:17 space to work in, sometimes it was literally a physical space, a laboratory, where these ideas would be pulled apart, refined, argued over, and then validated and we went to the next step.
    3:09:30 So this idea of pulling out from experience, these thinner, abstract, structural invariance, the things that we could actually do science with, and it’s kind of like we call it an ascending spiral of abstraction, right.
    3:10:00 So the problem with the way we do things now is we take that those abstractions, which came from experience, and then with something like, you know, a computational model of consciousness or experience, we think we can put it back in, like you literally pulled out these super thin things, these abstractions, you know, neglecting experience, because that’s the only way to do science, and then you think somehow I’m going to put, I’m going to jam experience back in and, you know, have an explanation for experience.
    3:10:09 So do you think it’s possible to show that something like free will is quote unquote real, if you integrate experience back into the physics, into the physics model of the world?
    3:10:14 What I would say is that free will is a given, and that’s the thing about experience, right.
    3:10:24 So one of the things that Whitehead said, I really love this quote, he says it’s not the job of either science or philosophy to account for the concrete, it’s the job to account for the abstract.
    3:10:38 The concrete, what’s happening between us right now is just given, you know, it’s just, it’s presented to us every day, it’s presented to if you want an explanation fine, but the explanation actually doesn’t add anything to it, right.
    3:10:47 So that free will in some sense is the nature of being an agent, right, to be an agent, agency and autonomy are sort of the two things that are, you know, they’re equivalent.
    3:10:50 And so in some sense, to be an agent is to be autonomous.
    3:11:04 And so then the question really to ask is, can you have an account for agency and autonomy that captures aspects of its, its arising in the world or the way it and the world sort of co arise.
    3:11:20 But the idea, you know, the reason why we argue about free will often is because we already have this blind spot view that the world is deterministic because of our equations, which themselves, we treat the equations as if they’re more real than experience, you know, and the equations are a paler, you know, they don’t
    3:11:28 corral experience, they are a thinner, you know, representation, as we like to say, don’t confuse the map for the terrain.
    3:11:32 What’s happening between us right now in this, you know, all the weirdness of it, that’s the terrain.
    3:11:40 The map is what I can write down on equations and then in the workshop do experiments on super powerful needs an account, but experience overflows that.
    3:11:49 What if the experience is an illusion, like, how do we know what if the agency that we experience is an illusion?
    3:11:58 An illusion looking from where like, right, because that already requires to just take that stance is you’ve already pushed yourself into that third person view, right.
    3:12:15 And so what we’re saying is that’s a third person view, which now you’re going to say like, oh, I’ve got a whole other set of entities of ontological entities, meaning, you know, things that I think exist in God’s living room in spite, you know, that are independent of me and the community of living things I’m part of.
    3:12:27 So you’re pushing it elsewhere at this, just like there’s a stack of turtles is probably if this experience, the human experience is an illusion, maybe there’s an observer for whom it’s not an illusion.
    3:12:29 So you always have to find an observer somewhere.
    3:12:30 Yeah, right.
    3:12:40 And that’s where that’s why, you know, fundamentally, the blind spot, especially the scientific triumphalist part is following a religious impulse, you know, it’s wanting the God’s eye view.
    3:12:41 And you know, it’s really interesting.
    3:12:50 And when we think about this and the way this gets talked about, especially publicly, you know, there’s a line of philosophical inquiry that this language gets couched in.
    3:12:56 And it is actually a pretty, it’s only one version of philosophy, right.
    3:12:58 So it is pretty much what we call the analytic tradition, right.
    3:13:06 But there’s even in Europe or in the Western tradition, and you know, for Western, what we’ll call Western philosophy, there’s phenomenology.
    3:13:10 There’s a herceral and Eidegger and Merlupanti, which took an entirely different track.
    3:13:13 They were really interested in the structure of experience.
    3:13:20 They spent all their time trying to understand, trying to develop a language that could kind of climb into the circle that is experience.
    3:13:20 Right.
    3:13:23 You experience, you’re not going to be able to start with axioms and work your way to it.
    3:13:24 It’s over, it’s given.
    3:13:29 So you have to kind of jump in and then try and find a language to account for its structure.
    3:13:44 But then, so that has not been part of this discussion about you’ll never, good luck finding a YouTube video where someone, you know, a famous scientist is talking about science from a phenomenological point of view, even though it’s a huge branch of philosophy.
    3:13:48 And then you get the philosophies that occurred from other cores of civilization, right.
    3:13:55 So there’s the, there’s the Western core out of which comes the Greeks and the, you know, the Judeo-Christian Islamic tradition.
    3:13:58 But then you get India and you get Asia, and they developed their own.
    3:14:03 They were highly complex societies that developed their own responses to these questions.
    3:14:12 And they, for reasons because they had contemplative practice, they were very focused on like direct, trying to like directly probe attention and experience.
    3:14:16 They asked questions in ways that the West never really did.
    3:14:18 Phenomenology kind of started it.
    3:14:27 But, you know, there’s, there’s philosophers like Nagarjuna and Vasubandhu, and they’re like the Plato and the, you know, Aristotle of, you know, sort of those philosophies.
    3:14:30 And they were really focused on experience in the West.
    3:14:39 I think maybe because we had the Judeo-Christian tradition, where we already had this kind of God, who was going to be the frame on which you could always point to that frame.
    3:14:48 The, in the, the traditions that came from the classical philosophies of India and Asia, they started always with, they wanted to know about experience.
    3:14:54 Their whole philosophies and their logic and their, their argumentation was based on, “I’ve got this experience.
    3:14:56 I can’t get out of this experience.
    3:14:58 How do I reason from it?”
    3:15:03 So I think there’s like a lot of other philosophical traditions that we could draw from, you know, not like slavishly.
    3:15:09 We don’t all have to become Buddhists to do it, but there are traditions that really tried to work this out in a way that the Western traditions.
    3:15:10 Just didn’t.
    3:15:17 But there’s also the practical fact that it’s difficult to build a logical system on top of experience.
    3:15:20 It’s difficult to have the rigor of science on top of experience.
    3:15:25 And so it’s, as science advances, we might get better and better.
    3:15:39 Like the same is, it’s very difficult to have any kind of mathematical or kind of scientific rigor to, why complexity emerges from simple rules and simple objects, sort of the Santa Fe questions.
    3:15:40 Yeah, I think, but I think we can do it.
    3:15:42 I think there’s aspects of it.
    3:15:45 I mean, as long as you’re never trying to like, “This is what experience is.”
    3:15:52 Like, I think that’s kind of the where we’re, you know, you’re never going to have a causal account of experience because it’s just given.
    3:15:57 But you can do lots about, and that’s what the good work is, is to, “How do I approach this?
    3:16:00 How do I approach this in a way that’s rigorous that I can do experiments with also?”
    3:16:07 But so, for example, I was just reading this beautiful paper that was talking about in the, you know, this is what we’re accounting with our semantic information too.
    3:16:09 Causal closure.
    3:16:11 Love this idea, right?
    3:16:14 The idea that, so we talked about auto-poesis a while back, right?
    3:16:20 The idea that living systems are, they are self-creating and self-maintaining.
    3:16:23 So the membrane, cell membrane is a great example of this, right?
    3:16:26 The cell membrane, you can’t have a cell without a cell membrane.
    3:16:30 The cell membrane lets stuff through, keeps other stuff out, right?
    3:16:40 But the cell membrane is part of the processes and it’s a product of the processes that the cell membrane needs, right?
    3:16:43 In some sense, the cell membrane, cell membrane creates itself.
    3:16:45 So there’s this strange, it’s always with life.
    3:16:47 There’s always this strange loop.
    3:16:53 And so somehow figuring out how to jump into that strange loop is, you know, the science that’s ahead of us.
    3:17:01 And so this idea of causal closure, accounting for how the, you know, we talked about like a downward causation, right?
    3:17:04 So reductionism says everything only depends on the microstate.
    3:17:06 Everything just depends on the atoms, right?
    3:17:06 That’s it.
    3:17:10 You don’t really, if you know, if you know the Lagrangian for the standard model, you’re done.
    3:17:13 You know, of course, in principle, you need God’s computer, but fine.
    3:17:15 You know, in principle, you know, in principle, it can be done.
    3:17:17 Causal closure.
    3:17:21 And there’s, I was just reading this great paper that sort of argues for this.
    3:17:33 There’s ways in which using epsilon machines and all this machinery from information theory that you can see ways in which the system can organize itself so that it decouples from the microstates.
    3:17:40 Now, the macro state fundamentally no longer needs the microstate for its own description, its own account of the laws.
    3:17:44 Whether that paper is true or not, it’s an example of heading down that road.
    3:17:46 There’s also Robert Rosen’s work.
    3:17:59 He was a theoretical biologist who he was, you know, he talked about closure to efficient cause that, that living systems, you know, are organizationally closed, are, are causally closed so that they don’t depend anymore in the microstate.
    3:18:01 And he made, he had a proof, which is very contentious.
    3:18:04 Nobody knows if it’s, you know, some argue it’s true, some argue it’s not.
    3:18:10 But he said that because of this, living systems are not church-turing complete.
    3:18:13 They cannot be represented as formal systems.
    3:18:15 So, you know, in that way, they’re not axioms.
    3:18:18 They’re not living systems will not be axioms.
    3:18:21 They can only be partially captured by algorithms.
    3:18:26 Now, again, people fight back and forth about whether or not his proof was, you know, is valid or not.
    3:18:36 But I’m saying I’m giving you examples of like, you know, when you, when you see the blind spot, when you acknowledge the blind spot, it opens up a whole other class of kinds of scientific investigations.
    3:18:39 You know, the book we thought was going to be really heretical, right?
    3:18:46 You know, obviously, you know, most, most public facing scientists are very sort of in that, especially scientific triumphal.
    3:18:48 And so we were just like, waiting, you know, waiting for the fight.
    3:18:55 And then the review from science came out and it was like, totally pro, you know, they was very positive.
    3:19:01 We’re like, oh my God, you know, and then a review came out in nature physics and it was totally positive.
    3:19:09 And then a review came out in the Wall Street Journal, because we kind of criticized not capitalism, but we criticized sort of all industrial economies.
    3:19:12 Forget that they were sort of had been touched by the blind spot.
    3:19:13 Socialism, communism doesn’t matter.
    3:19:20 These extractive, you know, had sort of had that sort of view that the world is just reducible to, you know, resources.
    3:19:23 The Wall Street Journal gave us a great review.
    3:19:38 So it feels like there’s actually out there, there is some among working scientists in particular, there is some dissatisfaction with this triumphalist view and a recognition that we need to shift something in order to like jump past these hurdles that we’ve been arguing about.
    3:19:41 Forever, and we’re not, you know, we’re sort of stuck in a vortex.
    3:19:46 Well, it is, I mean, I think there’s a hunger to acknowledge that there’s an elephant in the room like that.
    3:19:48 We’re just removing the age.
    3:19:54 Like it’s, everyone is doing it and it’s like, yeah, yeah, there’s the experience.
    3:19:58 And then there’s the third person perspective on the world.
    3:20:06 And so to, man, science from applying scientific rigor from a first person perspective is very difficult.
    3:20:07 I mean, it’s fascinating.
    3:20:14 I think we can do it because it’s also the thing, you know, what’s really interesting is this, I think it’s not just first person, it’s first and second, right?
    3:20:24 Because science, because when so, like one idea is that we, you know, the idea that, oh, science gives us this objective third person view, that’s one way of talking about objectivity.
    3:20:30 There’s a whole other way is that I do the experiment, you do the experiment, we talk to each other, we agree on methods, and we both get the same result.
    3:20:33 That is a very different way of thinking about objectivity.
    3:20:41 And it acknowledges that, you know, when we talk about agents, agency and individuality are flexible, right?
    3:20:47 So there’s a great paper, Speaking of Santa Fe by David Krakauer, where they looked at sort of information theoretic measures of individuality.
    3:20:54 What you find is it’s actually pretty fluid, like my liver cell is an individual, but really it’s part of the liver.
    3:20:57 And my liver is, you know, a separate system, but really it’s part of me.
    3:21:07 But I’m, so I’m an individual, yay, but actually I’m part of a society, like, and I couldn’t be me without the entire community of, say, language users, right?
    3:21:09 I wouldn’t even be able to frame any questions.
    3:21:16 And my community of language users is part of ecosystems, right, that are alive, that I am a part of a lineage of.
    3:21:17 This is like Sarah Walker stuff.
    3:21:21 And then that those ecosystems are part of the biosphere, right?
    3:21:28 We’re never separable, as opposed to this very atomizing, the triumphal, this science view is wants like Boltzmann brains.
    3:21:30 You’re just a brain floating in the space, you know?
    3:21:40 Yeah, there’s a fascinating degree to which agencies fluid, like you are an individual, but you and I talking is the kind of individual.
    3:21:41 Yeah.
    3:21:47 And then the person listening to this right now is also an individual.
    3:21:47 Right.
    3:21:48 I mean, that’s a weird thing.
    3:21:49 That’s a weird thing, right?
    3:21:51 Because there’s like, there’s a broadcast nature too.
    3:21:54 This is why information theoretic.
    3:22:00 So the idea that we’re pursuing now, which I get really excited about is this idea of information architecture, right?
    3:22:05 Or organization, informational organization, because, you know, right, physicalism is like everything’s atoms.
    3:22:15 But, you know, Kant recognized, Kant is apparently the one who came up with the word organism, because he recognized that life has a weird organization that would see specifically different from machines.
    3:22:31 And so this idea that how do we engage with the idea that organization, which is often I can be cast in information theoretic terms or computational terms even, is sort of it’s not really quite physical, right?
    3:22:41 It’s embodied in physical, you know, in the physical, it has to instantiate in the physical, but it also has this other realm of design, you know, and not design like intelligent design.
    3:22:46 But there’s a, you know, organization itself is a relationship of constraints and information flow.
    3:22:52 And I think, again, that’s an entirely new, interesting way that we might get a very different kind of science that would flow out of that.
    3:22:58 So going back to Kant and organism versus machine.
    3:23:03 So I showed you a couple of legged robots.
    3:23:04 Very cool.
    3:23:08 Is it possible for machines to have agency?
    3:23:11 I would not discount that possibility.
    3:23:23 I think, you know, there’s no reason I would say that it’s impossible that machines could, whatever it manifests, that strange loop that we’re talking about, that auto poesis.
    3:23:29 I don’t think there’s a reason to say it can’t happen in silicon.
    3:23:35 I think whatever it would, it would be very different from us, like the idea that it would be like, oh, it’d be just like us, but now it’s instantiated.
    3:23:39 I think it might have very different kind of experiential nature.
    3:23:45 I don’t think, I don’t think what we have now, like the LLMs are really there.
    3:23:49 But, but I, yeah, I’m not going to say that it’s not possible.
    3:23:54 I wonder how far I can get with imitation, which is essentially what LLMs are doing.
    3:23:55 So imitating humans.
    3:24:04 And I wouldn’t discount either the possibility that through imitation, you can achieve what you call consciousness or.
    3:24:07 Agency or the ability to have experience.
    3:24:10 I think for most us humans that think, oh, that’s just fake.
    3:24:15 That’s copying, but there’s some degree to which we humans are just copying each other.
    3:24:20 We just are really good imitation machines coming from babies.
    3:24:23 We were born in this world and we’re just learning to imitate each other.
    3:24:31 And through the imitation and the tension in the disagreements in the imitations, we gain personality, perspective, all that kind of stuff.
    3:24:35 Yeah, I think so, I, you know, it’s possible, right?
    3:24:47 It’s possible, but I think probably the view I’m advocating would say that one of the most important parts of agency is there’s something called E4, the E4 theory of cognition.
    3:24:52 Embodiment in action, embedding, and there’s another one, extension.
    3:25:09 But so the idea is that you actually have to be in a body, which is itself part of an environment that is the physical nature of it and of the of the extension in with other living systems as well is essential.
    3:25:15 So that’s why I think the LLMs are not going to, it’s not just imitation, it’s going to require, this goes to the brain in the vat thing.
    3:25:21 I did an article about the brain in the vat, which was really Evans, I was reporting on Evans, where they did the brain in the vat argument.
    3:25:25 But they said, look, in the end, actually, the only way to actually get a real brain in the vat is actually to have a brain in a body.
    3:25:29 And if it could be a robot body, you know, but you still need a brain in the body.
    3:25:36 So I don’t think LLMs will get there because they can’t, you know, you really need to be embedded in a world, at least that’s the E4 idea.
    3:25:50 The E4, the 4E approach to cognition argues that cognition does not occur solely in the head, but it’s also embodied, embedded, enacted, and extended by way of extra cranial processes and structures.
    3:25:56 Though very much in vogue, 4E cognition has received relatively few critical evaluations.
    3:26:05 This is a paper by reflecting on two recent collections, this article reviews the 4E paradigm with a view to assessing the strengths and weaknesses.
    3:26:06 That’s fascinating.
    3:26:12 I mean, yeah, they’re the branches of what is cognition extends far and it could go real far.
    3:26:13 Right.
    3:26:20 There’s a great story about an interaction between Jonas Salk, who is very much a reductionist, you know, the great biologist, and
    3:26:25 Gregory Bateson, who was a cyberneticist, and Bateson always loved to poke people.
    3:26:27 And he said to Salk, he said, you know, where’s your mind?
    3:26:32 And, you know, Salk went up here and Bateson said, no, no, no, out here.
    3:26:34 And what he really meant was this extended idea.
    3:26:42 It’s not just within your cranium to be, to be, to have experience, you know, experience in some sense is not a thing you have.
    3:26:44 It is a thing you do, right?
    3:26:56 It’s almost perform it in a way, which is why both actually having a body, but having the body itself be in a world with other bodies is from this perspective is really important.
    3:27:03 And it’s very attractive to me and, you know, seeing, again, if we’re really going to do science with them, we’re going to have to, like, have these ideas crash up against data, you know, crash up against.
    3:27:08 We can’t just armchair it, you know, or, you know, or a quarter, you know, couch quarterbacking it.
    3:27:11 But I think there’s a lot of possibility here.
    3:27:16 It’s a very radically different way of looking at what we mean by nature.
    3:27:26 What do you make of the fact that this individual observer, you as an individual observer, only get a finite amount of time to exist in this world?
    3:27:27 To make you sad?
    3:27:30 No, actually, it doesn’t make me sad.
    3:27:33 So, okay, so, you know, full reveal.
    3:27:37 I have been doing contemplative practice in the Zen tradition for 30 years.
    3:27:40 I’ve been staring at a wall for 30 years.
    3:27:42 And it’s taught me a lot, right?
    3:27:47 You know, I’m really, I really value what that practice has given me about the nature of experience.
    3:27:51 And one of the things it’s taught me is like, you know, I don’t really matter that very much.
    3:28:01 This thing I call Adam Frank is really, you know, it’s kind of a construct, you know, there’s this process going on of which I am actually fundamentally.
    3:28:02 And that’s super cool.
    3:28:05 But, you know, it’s going to go, you know, I don’t know where it came from.
    3:28:06 It’s going to go.
    3:28:09 I don’t really need it to, you know, and then, and then who in the hell knows?
    3:28:11 You know, I’m not, I’m not an advocate for an afterlife.
    3:28:15 But just that, like, you know, what I love, Zen has this idea of beyond birth and death.
    3:28:17 And they don’t mean reincarnation.
    3:28:20 What they mean is, dude, you don’t even really understand what life is.
    3:28:21 You know what I mean?
    3:28:24 I’m like this, you know, this core level of your own experience.
    3:28:29 So, you know, your ideas about what death is are equally ill-formed, you know?
    3:28:34 And it’s, it’s, so, you know, the contemplative practice really tries to focus on experience itself.
    3:28:39 Like spend five days at a Zen session doing contemplative practice from, you know,
    3:28:42 seven a.m. until nine p.m., obviously with breaks.
    3:28:47 And you’ll really get a much deeper understanding of, like, what my own experience is.
    3:28:48 What does it really like?
    3:28:52 You have, you, it forces you to learn how to stabilize your attention because, you know,
    3:28:55 attention is kind of like this thing, like it’s usually just like, oh, over there.
    3:28:56 Oh, my foot hurts.
    3:28:57 Oh, I got to do my taxes.
    3:28:58 Oh, that, you know, what’s that guy over there?
    3:29:00 Why is he wearing those stupid shoes?
    3:29:03 And with the contemplative practice, you learn how to stabilize it.
    3:29:07 And once you stabilize it, you can now begin to sort of explore the phenomenal nature of it.
    3:29:12 So what I think I’ve learned from that is like, kind of whatever, you know,
    3:29:14 I’m not, I’m not really kind of real to begin with.
    3:29:16 The Adam Frankfurt, the identity, the thing.
    3:29:20 And the, the part of me that is real is, you know, everything’s coming and going.
    3:29:21 It’s all coming and going.
    3:29:26 Well, how could, how could I ever not come and go when the entire world is just, you know,
    3:29:29 Buddhism has this idea of codependent arising.
    3:29:30 Nothing exists.
    3:29:32 Nothing has self-nature.
    3:29:33 Nothing exists by itself.
    3:29:37 It’s an endless, infinitely connected web.
    3:29:42 But still, there’s a deliciousness to the individual experience.
    3:29:48 You get attached to it and it ends and it’s, it’s good while last and it sucks that it ends.
    3:29:51 Like you can be like, ah, well, everything comes and goes.
    3:29:54 But like I was eating ice cream yesterday.
    3:29:59 Found this awesome low carb ice cream called Delights here in Austin.
    3:30:01 And, you know, it ends.
    3:30:06 And I was like, and I was staring at the empty container and it was.
    3:30:07 That’s beautiful, man.
    3:30:08 I love that.
    3:30:10 You could say like, yeah, well, that’s how it all is.
    3:30:15 But can I say that, that’s what I’ve learned from, because I love your idea of the deliciousness of it.
    3:30:21 You know, but what I think happens with contemplative practice when it deepens is that it’s not just,
    3:30:23 you’re not just saying, right?
    3:30:25 This is why, you know, I do Koan practice.
    3:30:28 So this is a tradition in Zen that it was established.
    3:30:31 It was a teaching method that was established like a thousand years ago.
    3:30:32 They’re these book of Koans.
    3:30:37 And every Koan, you know, if you’ve ever read Godel Escher Bach, he’s got a whole chapter on Koans.
    3:30:41 They’re kind of non-logical problems that you have to work on.
    3:30:46 One of my favorite one was stop the sound of the distant temple bell.
    3:30:48 You know, you’re like, what?
    3:30:51 Every time my teacher gives it to me, I’m like, what are you talking about?
    3:30:54 You know, this is a whole Zen thing of like, up is down, but down is up.
    3:30:55 You must understand this.
    3:30:59 So, you know, your job with these Koans is to, is to sit with them.
    3:31:02 Is to sit with them until you sort of kind of, you know, you realize what the
    3:31:06 thing is trying to teach you, what aspect of experience it’s trying to teach you.
    3:31:07 So there’s no answer.
    3:31:09 There’s no, and in fact, actually, you don’t give an answer.
    3:31:11 You actually usually have to demonstrate.
    3:31:14 The first time I sat in when I did a Koan and the guy was like, don’t tell me the answer.
    3:31:15 Show me the answer.
    3:31:17 I was like, what are you talking about?
    3:31:20 But after doing these for years now, you know, I’ve kind of learned,
    3:31:22 learned the language of them.
    3:31:25 So I could never tell you, if I told you the answer, I could give you a
    3:31:26 Koan and tell you the answer.
    3:31:27 You’d be like, what?
    3:31:30 You know, it’s never, it’s not the words.
    3:31:34 It’s the, you know, so like your experience of like, yeah, the cup is empty with
    3:31:36 a contemplative practice as it deepens over years.
    3:31:38 There really does take years, just like anything in math.
    3:31:40 They can be took me years to understand Lagrangians.
    3:31:43 You kind of come to a deeper understanding with like, yeah, the words of like,
    3:31:45 it’s not just like, oh, everything changes.
    3:31:48 You actually feel that movement.
    3:31:52 Like you feel it with like breath to breath, you know, and it really becomes
    3:31:57 sometimes I have this feeling this is messed up, but I’m just joy and it’s
    3:31:58 not connected to anything.
    3:31:58 Right.
    3:31:59 That’s what I’ve kind of gotten from practice.
    3:32:04 It’s just like, yeah, you know, that passage, that, that infinite passage of
    3:32:07 moment to moment, that is truly the way things are.
    3:32:08 And it’s okay.
    3:32:10 Like not, it’s not okay because I have a feeling about it.
    3:32:10 Okay.
    3:32:11 I want it to be okay.
    3:32:12 It just is okay.
    3:32:14 It’s a really, it’s a pretty awesome thing.
    3:32:15 That’s beautiful.
    3:32:19 I mean, I, I, I, maybe it’s the genetics, maybe it’s the biochemistry of my brain,
    3:32:24 but I generally have that joy about experience, just amorphous joy, but it
    3:32:28 seems like, again, maybe it’s my Eastern European roots, but there’s always like
    3:32:30 a melancholy that’s also sitting next to the joy.
    3:32:36 And I think it always feels like they’re intricately linked.
    3:32:41 So the melancholy is about, maybe about the finiteness of experience.
    3:32:44 And the joy is just about the beauty of experience.
    3:32:45 And they’re just kind of sitting there.
    3:32:46 Yeah.
    3:32:49 Which is cool actually, because that, you know, I’m also, you know, I come from
    3:32:53 Eastern, my roots are Eastern European as well going back and I get it.
    3:32:53 Right.
    3:32:56 I mean, you know, the, but that’s also the cool thing.
    3:32:58 I think one of the things is, is like, yeah, well that, that is what it is.
    3:32:59 That is what it is.
    3:33:00 Right.
    3:33:00 You don’t have to do anything.
    3:33:03 You don’t have to like manipulate or move it around or like, yeah, this is the
    3:33:04 experience, you know?
    3:33:08 Can you speak to the, just the practical nature of sitting there from 7am to 9pm?
    3:33:10 I’m like, what the hell are you doing?
    3:33:11 What’s, what’s powerful?
    3:33:12 What’s fascinating to you?
    3:33:15 What have you learned from just the experience of staring at a wall?
    3:33:15 Yeah.
    3:33:16 Yeah.
    3:33:19 So, um, you know, it’s not really, I mean, you’re staring, you’re facing a
    3:33:22 wall and what you’re doing is you’re, you know, you’re just sitting with, you
    3:33:24 know, you can, there’s different meditative practices, right?
    3:33:25 There’s counting breaths.
    3:33:26 So that’s usually what I do.
    3:33:29 I sit down, I start counting breaths and for the first half hour, it’s just like,
    3:33:30 blah, blah, blah.
    3:33:32 I’m thinking, like I said, I’m thinking about my taxes.
    3:33:34 I’m thinking about what I got to do later on.
    3:33:35 Yada, yada, yada.
    3:33:39 First time I ever did a full session, a two day session, I swear to God, I had
    3:33:43 Bruce Springsteen’s Born to Run album track through from the beginning to the
    3:33:45 end with the pauses, back in when they were LPs.
    3:33:45 Yeah.
    3:33:47 The fricking nice, you know?
    3:33:49 Cause my mind was just like, I need to do something.
    3:33:51 So it literally played the whole album in order.
    3:33:53 That’s pretty cool, actually.
    3:33:56 Yeah, it was pretty amazing to see, you know, cause you really do, you see the
    3:33:59 dynamics of your mind, but what happens is, and this took me a while.
    3:34:05 I used to, I used to hate sitting, you know, I do it, but I, after a while, the
    3:34:09 mind gets exhausted, like that part of the mind, the upper level though, the roof
    3:34:11 brain chatter is just like there’s nothing else to do.
    3:34:15 And then you get bored and I now I realized that’s the, that’s when something
    3:34:16 interesting is going to happen.
    3:34:20 Cause you kind of like drop down and now it’s a very physical practice.
    3:34:23 People think you’re just sitting there not thinking or thinking about not
    3:34:27 thinking actually becomes a very physical process where you’re really just
    3:34:28 following the breath.
    3:34:33 You’re kind of riding the breath and it gets very quiet, you know, and within
    3:34:37 that quietness, it’s, you know, there’s, there’s a path, you know, because
    3:34:40 obviously there’s been, Buddhism is always like, you know, you know, not
    3:34:42 about thinking, but there’s a huge literature.
    3:34:45 So these guys are always about, don’t think I’ve written all this stuff, but
    3:34:47 they’re guideposts, they’re like the finger pointing at the moon.
    3:34:51 And, you know, there’s the idea of first, you know, your mind is usually
    3:34:52 scattered, right?
    3:34:54 Like right now when I walk out, I’m going to go get the Uber and every
    3:34:55 mind’s going to be all over the place.
    3:34:59 But with sitting, first you concentrate the mind so that there’s no more
    3:34:59 scatter anymore.
    3:35:01 The thoughts are still happening, but you’re just not there happening up
    3:35:02 there.
    3:35:03 You’re not even paying attention to them.
    3:35:09 And then as time goes on, you unify the mind, which is this very powerful
    3:35:13 thing where kind of the self drops away, you know, and there’s just this
    3:35:15 presence, it’s kind of like a raw presence.
    3:35:20 And that’s often where the, the, the joy up, up wells from, but you sit with
    3:35:21 whatever, maybe you’re going to sit and you’re going to have it.
    3:35:24 Like, you know, maybe you’re going to go through like an hour of being
    3:35:26 bummed out about your mom who died or something.
    3:35:29 You know, you’re just going to sit with whatever comes up.
    3:35:30 You’re going to make that.
    3:35:32 That’s why the sitting part, you’re making the commitment.
    3:35:33 I’m going to sit here with whatever comes up.
    3:35:34 I will not be moved.
    3:35:37 And then what you come away with, it actually over time, it actually
    3:35:39 changes kind of who you are.
    3:35:42 Like I’m still the asshole I was from New Jersey growing up, but I just
    3:35:45 have more space now for things, you know?
    3:35:48 Well, yeah.
    3:35:52 Once Jersey, I was Jersey, but I love, they had Bruce Springsteen.
    3:35:53 He’s just blasting in your head.
    3:35:54 Yeah, that was amazing.
    3:35:55 Why are we here?
    3:35:59 What do you think is the, is the, is the purpose, the meaning of human existence?
    3:35:59 It’s good.
    3:36:02 We just had the last conversation because I’m going to give this answer,
    3:36:04 which is so corny.
    3:36:05 Um, it’s love.
    3:36:08 And I’m not messing around because really actually what happened to you.
    3:36:12 So within Buddhism, there’s the idea of the Bodhisattva principle.
    3:36:13 You’re here to help.
    3:36:14 You’re just here to help, right?
    3:36:19 Compassion, like that’s a really essential part of this path, of the Dharma path.
    3:36:22 And when I first started out, I was like, um, I don’t care about compassion.
    3:36:23 I’m here for knowledge, right?
    3:36:26 I’m here, you know, I started contemplative practice because of the
    3:36:27 usual thing I was suffering.
    3:36:29 I had, you know, the reason everybody comes to things like this, you know, life
    3:36:33 was hard, I was going through stuff, but I also wanted knowledge.
    3:36:35 I wanted to understand the foundational nature of reality.
    3:36:36 So it was like compassion, whatever.
    3:36:39 But then I found out that you can’t get that.
    3:36:40 You can’t get though.
    3:36:42 You can’t go to love without compassion.
    3:36:49 Somehow in this process, you realize that it really is about helping
    3:36:51 all sentient beings.
    3:36:53 That’s the way they, you know, just being here to help.
    3:36:57 So I know that sounds cornball, but especially for a guy from Jersey, which
    3:36:59 is like, you know, the main thing is to get over.
    3:37:01 You’re like, your job is to get over.
    3:37:03 Um, uh, but that’s really what I found.
    3:37:06 It’s, it is actually kind of, and that’s what that joy, the joy.
    3:37:08 Some of that joy is just, it’s like this.
    3:37:11 One of the things I have, when I have like really, you know, there’s a kind
    3:37:13 of experience I’ll have in contemplative practice, which we’ll carry
    3:37:16 out into the world, which is just this gratitude for the fact that the world
    3:37:18 is just, the world gives you everything.
    3:37:19 And this is a certain way, right?
    3:37:24 Just the blue sky and the breath, the world is just giving you itself
    3:37:25 completely unhindered.
    3:37:26 It holds nothing back.
    3:37:28 And, uh, yeah, that’s kind of the experience.
    3:37:31 And then you kind of like, oh, I need to be helpful because who’s not
    3:37:32 having this experience, you know?
    3:37:34 So just love for the world as it is.
    3:37:37 Love for the way, and all the beings who are suffering, everybody’s suffering.
    3:37:41 Everybody’s, you know, your worst political opponent, they’re suffering,
    3:37:46 you know, and our job is just to try and drop our biases and our stories
    3:37:49 and see this fundamental level at which life is occurring.
    3:37:53 And, uh, hopefully there’s many alien civilizations out there going
    3:37:55 through the same journey out of suffering towards love.
    3:37:59 Yeah, that would, I, you know, that may be a universal thing about
    3:38:00 what it means to be alive.
    3:38:00 I hope so.
    3:38:01 I hope so too.
    3:38:04 If that or they’re coming to eat us, especially if they’re a type three
    3:38:07 civilization, they got really big guns.
    3:38:13 Uh, well, this was a truly mind blowing, fascinating, just awesome conversation.
    3:38:14 Adam, thank you for everything you do.
    3:38:15 And thank you for talking to me.
    3:38:16 Oh, thank you.
    3:38:17 This was a lot of fun.
    3:38:20 Thanks for listening to this conversation with Adam Frank.
    3:38:24 To support this podcast, please check out our sponsors in the description.
    3:38:28 And now let me leave you with some words from Carl Sagan.
    3:38:33 The cosmos is all that is, or ever was, or ever would be.
    3:38:37 Our feeblest contemplations of the cosmos stare us.
    3:38:42 There’s a tingling in the spine, a catch in the voice, a faint sensation, as if a
    3:38:44 distant memory or falling from a height.
    3:38:50 We know we are approaching the greatest of mysteries.
    3:38:54 Thank you for listening and hope to see you next time.
    3:38:55 .
    3:38:56 Yeah.
    3:38:57 .
    3:38:58 .
    3:38:59 .
    3:39:00 .
    3:39:01 .
    3:39:02 .
    3:39:03 .
    3:39:04 .
    3:39:06 Yeah.
    3:39:07 Yeah.
    3:39:09 (gentle music)
    3:39:19 [BLANK_AUDIO]

    Adam Frank is an astrophysicist studying star systems and the search for extraterrestrial life and alien civilizations.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep455-sc
    See below for timestamps, and to give feedback, submit questions, contact Lex, etc.

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Adam’s Website: https://adamfrankscience.com
    Adam’s X: https://x.com/adamfrank4
    Adam’s Instagram: https://instagram.com/adamfrankscience
    Adam’s Books:
    The Little Book of Aliens: https://amzn.to/3OTX1rP
    Light of the Stars: https://amzn.to/4iMKC6C
    The Blind Spot: https://amzn.to/4gOCe4K
    The Constant Fire: https://amzn.to/3ZVnxX4

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Encord: AI tooling for annotation & data management.
    Go to https://encord.com/lex
    Eight Sleep: Temp-controlled smart mattress cover.
    Go to https://eightsleep.com/lex
    Shopify: Sell stuff online.
    Go to https://shopify.com/lex
    NetSuite: Business management software.
    Go to http://netsuite.com/lex
    BetterHelp: Online therapy and counseling.
    Go to https://betterhelp.com/lex
    Notion: Note-taking and team collaboration.
    Go to https://notion.com/lex
    LMNT: Zero-sugar electrolyte drink mix.
    Go to https://drinkLMNT.com/lex
    AG1: All-in-one daily nutrition drinks.
    Go to https://drinkag1.com/lex

    OUTLINE:
    (00:00) – Introduction
    (14:22) – Planet formation
    (19:32) – Plate tectonics
    (26:54) – Extinction events
    (31:04) – Biosphere
    (34:02) – Technosphere
    (38:17) – Emergence of intelligence
    (44:29) – Drake equation
    (48:43) – Exoplanets
    (51:28) – Habitable zones
    (54:30) – Fermi Paradox
    (1:03:28) – Alien civilizations
    (1:12:55) – Colonizing Mars
    (1:25:11) – Search for aliens
    (1:41:37) – Alien megastructures
    (1:47:43) – Kardashev scale
    (1:52:56) – Detecting aliens
    (1:59:38) – Warp drives
    (2:05:45) – Cryogenics
    (2:09:03) – What aliens look like
    (2:17:48) – Alien contact
    (2:28:53) – UFO sightings
    (2:40:38) – Physics of life
    (3:06:29) – Nature of time
    (3:22:53) – Cognition
    (3:27:16) – Mortality

    PODCAST LINKS:
    – Podcast Website: https://lexfridman.com/podcast
    – Apple Podcasts: https://apple.co/2lwqZIr
    – Spotify: https://spoti.fi/2nEwCF8
    – RSS: https://lexfridman.com/feed/podcast/
    – Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
    – Clips Channel: https://www.youtube.com/lexclips

  • #454 – Saagar Enjeti: Trump, MAGA, DOGE, Obama, FDR, JFK, History & Politics

    AI transcript
    0:00:04 The following is a conversation with Sagar Anjedi, his second time in the podcast.
    0:00:10 Sagar is a political commentator, journalist, co-host of Breaking Points with
    0:00:16 Crystal Ball and of the Realignment podcast with Marshall Kosloff.
    0:00:20 Sagar is one of the most well-read people I’ve ever met.
    0:00:25 His love of history and the wisdom gained from reading thousands of history
    0:00:28 books radiates through every analysis he makes of the world.
    0:00:33 In this podcast, we trace out the history of the various ideological
    0:00:35 movements that led up to the current political moment.
    0:00:40 In doing so, we mentioned a large number of amazing books.
    0:00:44 We’ll put a link to them in the description for those interested to
    0:00:46 learn more about each topic.
    0:00:50 And now a quick few second mention of each sponsor.
    0:00:52 Check them out in the description.
    0:00:53 It’s the best way to support this podcast.
    0:00:58 We’ve got A Sleep, for Naps, AG1, for Health, Element, for Hydration,
    0:01:02 BetterHelp, for the Mind, Shopify, for the Wallet, and Netsuite for your
    0:01:04 business. Choose wisely, my friends.
    0:01:09 Also, if you want to get in touch with me for a multitude of reasons,
    0:01:11 go to lexfreedmen.com/contact.
    0:01:13 And now onto the full lad reads.
    0:01:16 I try to make them interesting, but if you skip them, please
    0:01:17 still check out our sponsors.
    0:01:18 I enjoy their stuff.
    0:01:19 Maybe you will too.
    0:01:24 This episode is brought to you by A Sleep and it’s Pod 4 Ultra.
    0:01:29 I’m going to try a new thing where I hold on to a theme.
    0:01:36 As I talk about these ads, I use A Sleep and the Pod 4 Ultra to cool the
    0:01:41 bed. And since Sagar knows pretty much more than anybody I’ve ever
    0:01:45 met about the various US presidents and presidential politics and the
    0:01:49 history of politics in US, let me mention a little factoid.
    0:01:54 Did you know that the White House didn’t get air conditioning until 1933
    0:01:59 under Hoover, who funded it just before leaving office for FDR.
    0:02:05 So all that praise that Sagar gives to FDR, just remember, maybe it wouldn’t
    0:02:13 be possible without the cool, fresh air that Hoover gave to the great FDR.
    0:02:18 And that, in fact, and I’m not sure why I’m using this voice in talking, but
    0:02:21 that, in fact, is essential for sleep, controlling the temperature of the
    0:02:25 bed, controlling the temperature of the sleeping environment.
    0:02:25 There you go.
    0:02:30 The more you know, create sleep.com/lex and use code Lex to get up to
    0:02:35 $600 off your Pod 4 Ultra purchase when bundled.
    0:02:37 That’s a sleep.com/lex.
    0:02:41 This episode is brought to you by AG1.
    0:02:44 Basically a nice multivitamin.
    0:02:46 That’s also delicious.
    0:02:47 That drink every day.
    0:02:51 It makes me feel like I have my life together, which I barely do.
    0:02:57 Now, speaking of drinks that you believe make you feel better.
    0:03:00 You know, placebo effect, that kind of thing.
    0:03:02 Here’s a little presidential themed factoid.
    0:03:09 John Adams drank hard cider every morning, believing it promoted good health.
    0:03:16 I would love to get like a health advice podcast with Winston Churchill.
    0:03:24 Another president, William Howard Taft, had the White House kitchen prepare special
    0:03:27 protein shakes made from eggs, milk and beef extract.
    0:03:31 I would love the dietary details of some of the presidents.
    0:03:36 I’m sure a bunch of them just smoked and drank and, you know, had their own like
    0:03:41 little habits that serve as a kind of escape from the madness of the world.
    0:03:46 Anyway, get AG1 and they’ll give you one month’s supply of fish oil when you sign
    0:03:49 up at drinkag1.com/lex.
    0:03:56 This episode is also brought to you by Element, my daily zero sugar and delicious
    0:03:57 electrolyte mix.
    0:04:06 And here I have to return again to the presidents who consumed various kinds of liquids.
    0:04:12 Did you know that Thomas Jefferson spent $11,000 on wine during his presidency?
    0:04:15 And we’re not talking about quality here.
    0:04:18 We are, in fact, talking about quantity.
    0:04:23 That’s equivalent to about $300,000 in today’s money.
    0:04:25 Whatever works.
    0:04:30 It’s like that meme that there’s a perfect optimal amount of alcohol that makes you
    0:04:31 productive in programming.
    0:04:33 I have never found that optimal.
    0:04:40 Actually, if I have a drink, my productivity and my clarity of thinking and my creativity
    0:04:41 all go down.
    0:04:48 Now, I start enjoying the social interactions more and more because I am fundamentally
    0:04:51 an introvert that have anxiety about social interaction.
    0:04:51 So that helps.
    0:04:56 But in terms of productivity or creative juices or whatever.
    0:04:57 Nope.
    0:05:00 Anyway, you can get a sample pack for free with any purchase.
    0:05:03 Try it at drinkelement.com/lex.
    0:05:08 This episode is also brought to you by BetterHelp, spelled H-E-L-P Help.
    0:05:13 They figure out what you need and match it with a licensed therapist in under 48 hours.
    0:05:20 And there’s actually quite a lot of presidents that really struggled with anxiety, with
    0:05:25 depression, with all kinds of complicated mental states.
    0:05:30 Coolidge, for example, fell into a deep depression after his son died from blood poisoning.
    0:05:33 And that changed him forever, actually.
    0:05:36 It’s difficult to come back from that.
    0:05:46 John Quincy Adams, somewhat famously, kept extremely detailed diary for 68 years, often
    0:05:52 writing sort of a detailed analysis and almost like log of his mental states.
    0:05:54 That’s an interesting thing to do, actually.
    0:05:56 I don’t do that enough.
    0:05:58 I speak it.
    0:05:59 I don’t write it down.
    0:06:01 Perhaps there’s some magic in writing it down.
    0:06:06 But there is, with BetterHelp, also magic in speaking it.
    0:06:08 With a professional.
    0:06:12 Check them out at BetterHelp.com/lex and save on your first month.
    0:06:14 That’s BetterHelp.com/lex.
    0:06:21 This episode is also brought to you by Shopify, a platform designed for anyone to sell anywhere
    0:06:23 with a great looking online store.
    0:06:27 So Abraham Lincoln actually owned a general store.
    0:06:32 And he has famously written that he wished he had Shopify.
    0:06:33 It would be much more convenient.
    0:06:36 Anyway, he had a general store that failed.
    0:06:42 So, you know, sometimes you need the right job for the right man.
    0:06:45 That match to be made and everything else is not going to work out.
    0:06:54 I sold shoes, women’s shoes at Sears, kind of like Al Bundy for married with children.
    0:06:55 If you know the show.
    0:06:58 And, you know, I did okay.
    0:07:04 But I think it wasn’t quite the right fit for me.
    0:07:09 You know, I was quite technically savvy and knew about computers.
    0:07:12 And I said, I should probably be selling electronics and computers.
    0:07:18 And they said, yes, yes, yes, one day you will, but now we need helping shoes.
    0:07:20 So let’s start you there.
    0:07:25 And if I stayed there for many more years, perhaps I would have upgraded to electronics.
    0:07:28 But then I also saw the beauty in selling women’s shoes.
    0:07:32 There was a real joy in finding the right match for the right person.
    0:07:36 And that joy can be scaled significantly with Shopify.
    0:07:41 Sign up for a $1 per month trial period at Shopify.com/Lex.
    0:07:42 That’s all lowercase.
    0:07:45 Go to Shopify.com/Lex to take your business to the next level today.
    0:07:52 This episode is brought to you by Netsuite, an all-in-one cloud business management system.
    0:08:00 Ulysses S. Grant, the famed general, kept extremely detailed expense accounts,
    0:08:02 recording every single penny he spent.
    0:08:10 Now, rigor, attention to detail, obsession with detail, financial detail is important.
    0:08:14 But, you know, if you have the right tool for the job, that’s made easier.
    0:08:18 I would love to kind of throw some of these people, some of these leaders,
    0:08:25 some of these brilliant minds from history into the modern world that is digitized.
    0:08:30 I think a lot of them would actually be destroyed by it.
    0:08:34 Because the machine of distraction will pull them away from the focus
    0:08:39 that you can more easily attain in a non-technological world.
    0:08:42 And some of them, I think, will become even more super productive.
    0:08:43 So it’ll be really interesting.
    0:08:50 And there’s been a lot of presidents that kind of pushed the White House and government in general
    0:08:57 into the direction of great record keeping from George Washington to Carter to FDR,
    0:08:59 as Sager talks a lot about.
    0:09:03 Anyway, all that is in the realm of politics, but the realm of business
    0:09:06 in many ways is the same, especially when the government is working well.
    0:09:09 So Netsuite is for business.
    0:09:12 In fact, over 37,000 companies have upgraded to Netsuite.
    0:09:17 Take advantage of their flexible financing plan at Netsuite.com/lex.
    0:09:20 That’s Netsuite.com/lex.
    0:09:23 This is a Lex Friedman podcast.
    0:09:26 To support it, please check out our sponsors in the description.
    0:09:30 And now, dear friends, here’s Sager and Jetty.
    0:09:49 So let’s start with the obvious big question.
    0:09:51 What do you think Trump won?
    0:09:54 Let’s break it down before the election.
    0:09:58 You said that if Trump wins, it’s going to be because of immigration.
    0:10:05 So aside from immigration, what are the maybe less than obvious reasons that Trump won?
    0:10:07 Yes, we absolutely need to return to immigration.
    0:10:10 But without that, multifaceted explanation.
    0:10:12 Let’s start with the easiest one.
    0:10:16 There has been a wave of anti-incumbent energy around the world.
    0:10:19 Financial Times chart recently went viral showing so the first time,
    0:10:23 I think since World War II, possibly since 1905, I need to look at the data set
    0:10:27 that all anti-incumbent parties all across the world suffered major defeats.
    0:10:30 So that’s a very, very high level analysis.
    0:10:33 And we can return to that if we talk about Donald Trump’s victory in 2016,
    0:10:36 because there were similar global precursors.
    0:10:40 That individual level in the United States, there’s a very simple explanation as well,
    0:10:42 which is that Joe Biden was very old.
    0:10:43 He was very unpopular.
    0:10:44 Inflation was high.
    0:10:48 Inflation is one of the highest determiners of people switching their votes
    0:10:52 and of putting their primacy on that ahead of any other issue at the ballot box.
    0:10:53 So that’s that.
    0:10:58 But I think it’s actually much deeper at a psychological level for who America is
    0:10:59 and what it is.
    0:11:02 And fundamentally, I think what we’re going to spend a lot of time talking about today
    0:11:08 is the evolution of the modern left and its collapse in the Kamala Harris candidacy
    0:11:12 and eventually the loss to Donald Trump in the popular vote,
    0:11:16 where it really is like an apotheosis of several social forces.
    0:11:19 So we’re going to talk about the great awakening or so-called awokening,
    0:11:22 which is very important to understanding all of this.
    0:11:26 There’s also really Donald Trump himself, who was really one of the most unique
    0:11:30 individual American politicians that we’ve seen in decades.
    0:11:34 At this point, Donald Trump’s victory makes him the most important and transformative
    0:11:37 figure in American politics since FDR.
    0:11:42 And thought process for the audience is in 2028, there will be an 18-year-old who’s
    0:11:48 eligible to vote who cannot remember a time when Donald J. Trump was not the central American figure.
    0:11:52 And there’s stories in World War II where troops were on the front line.
    0:11:54 Some of them are 18, 19 years old.
    0:11:57 FDR died and they literally said, “Who’s the president?”
    0:11:58 And they said, “Harry Truman, you dumbass.”
    0:12:00 And they go, “Who?”
    0:12:05 They couldn’t conceive of a universe where FDR was not the president of the United States.
    0:12:10 And Donald Trump, even during the Biden administration, he was the figure.
    0:12:14 Joe Biden defined his entire candidacy and his legacy around defeating this man.
    0:12:15 And obviously he’s failed.
    0:12:20 We should talk a lot about Joe Biden as well for his own failed theories of the presidency.
    0:12:23 So I think at macro level, it’s easy to understand.
    0:12:26 At a basic level, inflation, it’s easy to understand.
    0:12:30 But what I really hope that a lot of people can take away is how fundamentally unique
    0:12:35 Donald Trump is as a political figure and what he was able to do to realign American politics
    0:12:36 really forever.
    0:12:42 I mean, in the white working class realignment originally of 2016, the activation really of
    0:12:47 a multiracial kind of working class coalition and of really splitting American lines along
    0:12:53 a single individual question of did you attend a four-year college degree institution or not?
    0:12:56 And this is a crazy thing to say.
    0:13:02 Donald Trump is one of the most racially depolarizing electoral figures in American history.
    0:13:10 We lived in 2016 at a time when racial groups really voted in blacks, Latinos, blacks, whites.
    0:13:15 There was some, of course, division between the white working class and the white college
    0:13:17 educated white collar workers.
    0:13:22 But by and large, you could pretty fairly say that Asians were Indians.
    0:13:26 Everyone, 80, 90 percent were going to vote for the Democratic Party.
    0:13:27 Latinos as well.
    0:13:30 I’m born here in Texas, in the state of Texas.
    0:13:34 George W. Bush shocked people when he won some 40 percent of the Latino vote.
    0:13:39 Donald Trump just beat Kamala Harris with Latino men and he ran up the table for young men.
    0:13:45 So really, fundamentally, we have witnessed a full realignment in American politics.
    0:13:48 And that’s a really fundamental problem for the modern left.
    0:13:53 It’s erased a lot of the conversation around gerrymandering, around the electoral college,
    0:13:59 the so-called electoral college bias towards Republicans, really being able to win the
    0:14:04 popular vote for the first time since 2004 is a shocking landmark achievement by a Republican.
    0:14:10 In 2008, I have a book on my shelf and I always look at it to remind myself of how much things
    0:14:11 can change.
    0:14:17 James Carville and it says, “40 more years, how Democrats will never lose an election again.”
    0:14:21 2008, they wrote that book after the Obama Coalition and the landslide.
    0:14:27 And something I love so much about this country, people change their minds all the time.
    0:14:28 I was born in 1992.
    0:14:30 I watched red states go blue.
    0:14:31 I’ve seen blue states go red.
    0:14:33 I’ve seen swing states go red or blue.
    0:14:35 I’ve seen millions of people pick up and move.
    0:14:39 The greatest internal migration in the United States since World War II.
    0:14:43 And it’s really inspiring because it’s a really dynamic, interesting place.
    0:14:47 And I love covering and I love thinking about it, talking about it, talking to people.
    0:14:47 It’s awesome.
    0:14:51 One of the reasons I’m a big fan of yours is your student history.
    0:14:54 And so you’ve recommended a bunch of books to me.
    0:14:59 And they and others thread the different movements throughout American history.
    0:15:02 Some movements take off and do hold power for a long time.
    0:15:03 Some don’t.
    0:15:08 And some are started by a small number of people and are controlled by a small number of people.
    0:15:09 Some are mass movements.
    0:15:16 And it’s just fascinating to watch how those movements evolve and then fit themselves maybe
    0:15:19 into the constraints of a two-party system.
    0:15:22 And I’d love to sort of talk about the various perspectives of that.
    0:15:31 So would it be fair to say that this election was turned into a kind of class struggle?
    0:15:37 Well, I won’t go that far because to say it’s a class struggle really implies that things
    0:15:39 fundamentally align on economic lines.
    0:15:41 And I don’t think that’s necessarily accurate.
    0:15:44 Although if that’s your lens, you could get there.
    0:15:50 So there’s a very big statistic going around right now where Kamala Harris increased her
    0:15:56 vote share and won households over $100,000 or more and Donald Trump won households under $100,000.
    0:15:59 So you could view that in an economic lens.
    0:16:03 The problem again that I have is that that is much more a proxy for four-year college degree
    0:16:05 and for education.
    0:16:08 And so one of my favorite books is called Coming Apart by Charles Murray.
    0:16:15 And that book really, really underscores how the cultural milieu that people swim in
    0:16:19 when they attend a four-year college degree and the trajectory of their life,
    0:16:24 not only on where they move to, who they marry, what type of grocery store they go to,
    0:16:27 their cultural, what television shows that they watch.
    0:16:31 One of my favorite questions from Charles Murray is called a bubble quiz.
    0:16:34 I encourage people to go take it, by the way, which asks you a question.
    0:16:37 It’s like, what does the word Branson mean to you?
    0:16:39 And it has a couple of answers.
    0:16:42 One of them is Branson is Richard Branson, Sir Richard Branson.
    0:16:46 Number two is Branson, Missouri, which is like a country music tourist style destination.
    0:16:48 Three is it means nothing.
    0:16:52 So you are less in a bubble if you say country music and you’re very much in the bubble if
    0:16:53 you say Richard Branson.
    0:16:56 And I remember taking that test for the first time ago.
    0:16:58 Obviously, Sir Richard Branson, Virgin Atlantic, like what?
    0:17:01 And then I was like, wait, I’m in the bubble.
    0:17:02 And there are other things in there.
    0:17:04 Like, can you name various different military ranks?
    0:17:08 I can because I’m a history nerd, but the vast majority of college-educated people
    0:17:10 don’t know anybody who served in the United States military.
    0:17:12 They don’t have family members who do.
    0:17:16 The most popular shows in America are like The Big Bang Theory and NCIS.
    0:17:21 Whereas people in our probably cultural milieu, our favorite shows are White Lotus,
    0:17:22 The Last of Us.
    0:17:24 This is prestige television, right?
    0:17:27 With a very small audience, but high income, high education.
    0:17:32 So the point is, is that culture really defines who we are as Americans, where we live.
    0:17:35 And rural urban is one way to describe it.
    0:17:40 But honestly, with the work from Home Revolution and more rich people and highly educated people
    0:17:45 moving to more rural suburban or areas they traditionally weren’t able to commute in,
    0:17:45 that’s changing.
    0:17:48 And so really, the internet is everything.
    0:17:51 The stuff that you consume on the internet, the stuff that you spend your time doing,
    0:17:54 type of books you read, whether you read a book at all, frankly.
    0:17:59 Whether you travel to Europe, whether you have a passport, all the things that you value in your
    0:18:02 life, that is the real cultural divide in America.
    0:18:08 And I actually think that’s what this revolution of Donald Trump was activating and bringing people
    0:18:13 to the polls, bringing a lot of those traditional working class voters of all races away from
    0:18:20 the Democratic Party along the lines of elitism, of sneering, and of a general cultural feeling
    0:18:24 that these people don’t understand me and my struggles in this life.
    0:18:29 And so the trivial formulation is that it’s the wokeism, the anti-wokeism movement.
    0:18:36 So it’s not necessarily that Trump winning was a statement against wokeism.
    0:18:38 It was the broader anti-elitism.
    0:18:43 It’s difficult to say because I wouldn’t dismiss anti-wokeism or wokeism as an explanation.
    0:18:47 But we need to understand the electoral impacts of woke.
    0:18:54 So there’s varying degrees of how you’re going to encounter “wokeism,” and this is a very difficult
    0:18:55 thing to define.
    0:18:59 So let me just try and break it down, which is there are the types of things that you’re going
    0:19:02 to interact with on a cultural basis.
    0:19:07 And what I mean by that is going to watch a TV show, and just for some reason,
    0:19:08 there’s two trans characters.
    0:19:13 And it’s never particularly explained why they just are there, or watching a commercial,
    0:19:13 and it’s the same thing.
    0:19:18 Watching, I don’t know, I remember watching, I think it was Doctor Strange and the Multiverse
    0:19:22 of Madness, and the main, it was a terrible movie, by the way, I don’t recommend it.
    0:19:26 But one of the characters, I think it’s her name was like America, and she wore a gay pride flag.
    0:19:29 Right, look, many left-wingers would make fun of me for saying these things.
    0:19:34 But that is obviously a social agenda to the point, as in they believe it is like deeply
    0:19:40 acceptable, that is used by Hollywood and cultural elites who really value those progress,
    0:19:44 you know, in sexual orientation and others, and they really believe it’s important to
    0:19:46 quote unquote showcase it for representation.
    0:19:49 So that’s like one way that we may encounter quote unquote wokeism.
    0:19:54 But the more important ways, frankly, are the ways that affirmative action, which really has its
    0:19:59 roots in, you know, American society, all the way going back to the 1960s, and how those have
    0:20:04 manifested in our economy, and in our understanding of quote unquote discrimination.
    0:20:06 So two books I can recommend.
    0:20:09 One is called The Origins of Woke, that’s by Richard Hanania.
    0:20:13 There’s another one by The Age of Entitlement by Christopher Caldwell.
    0:20:17 And they make a very strong case that Caldwell in particular, that he calls it like a new
    0:20:22 founding of America, was the passage of the Civil Rights Act of 1964.
    0:20:27 Because it created an entire new legal regime and understanding of race and the American
    0:20:30 character and how the government was going to enforce that.
    0:20:34 And that really ties in with another one of the books that I recommended to you about
    0:20:40 The Origins of Trump by Jim Webb and Senator Jim Webb, incredible, incredible man.
    0:20:43 He’s so underappreciated, intellectual.
    0:20:44 He was anti-war.
    0:20:49 And people may remember him from the 2016 primary.
    0:20:53 And they asked him a question I don’t exactly remember about one of his enemies.
    0:20:56 And he’s like, well, one of them was a guy I shot in Vietnam.
    0:20:58 And he was running against Hillary.
    0:21:01 And that guy, he wrote the book Born Fighting.
    0:21:05 I think it’s history of the Scots-Irish people, something like that.
    0:21:11 And that book really opened my eyes to the way that affirmative action and racial preferences
    0:21:18 that were playing out through the HR managerial elite really turned a lot of people within
    0:21:24 the white working class away from the Democratic Party and felt fundamentally discriminated against
    0:21:26 by the professional managerial class.
    0:21:31 And so there’s a lot of roots to this, the managerial revolution by James Burnham.
    0:21:37 And in terms of the origin of kind of how we got here, but the crystallization of like DEI
    0:21:39 and or affirmative action.
    0:21:43 I prefer to use the term affirmative action in the highest echelons of business.
    0:21:48 And there became this idea that representation itself was the only thing that mattered.
    0:21:51 And I think that right around 2014, that really went on steroids.
    0:21:54 And that’s why it’s not an accident Donald J. Trump elected in 2016.
    0:22:00 At this point, do you think this election is the kind of statement that wokeism is a movement is dead?
    0:22:01 I don’t know.
    0:22:06 I mean, it’s very difficult to say because wokeism itself is not a movement with a party leader.
    0:22:14 It’s a amorphous belief that has worked its way through institutions now for almost 40 or 50 years.
    0:22:15 I mean, it’s effectively a religion.
    0:22:20 And part of the reason why it’s difficult to define is it means different things to different people.
    0:22:25 So for example, there are varying degrees of how we would define quote unquote woke.
    0:22:30 Do I think that the Democrats will be speaking in so-called academic language?
    0:22:31 Yes, I do think they will.
    0:22:34 I think that the next Democratic nominee will not do that.
    0:22:39 However, Kamala Harris actually did move as much as she could away from quote unquote woke.
    0:22:44 But she basically was punished for a lot of the sins of both herself from 2019.
    0:22:49 But a general cultural feeling that her and the people around her do not understand me.
    0:22:54 And not only do not understand me, but I have racial preferences or a regime or an understanding
    0:22:59 that would lead to a quote unquote equity mindset, equal outcomes for everybody as opposed to
    0:23:03 equality of opportunity, which is more of a colorblind philosophy.
    0:23:06 So I can’t say, I think it’s way too early.
    0:23:11 And again, you can not use the word Latinx.
    0:23:16 But do you still believe in an effective affirmative action regime in terms of how
    0:23:21 you would run your Department of Justice in terms of how you view the world,
    0:23:25 in terms of what you think the real dividing lines in America are?
    0:23:28 And because I would say that’s still actually kind of a woke mindset.
    0:23:31 And that’s part of the reason why the term itself doesn’t really mean a whole lot.
    0:23:36 And we have to get actually really specific about what it looks like in operations.
    0:23:38 In operation, it means affirmative action.
    0:23:42 It means the NASDAQ passing some law that if you want to go public or something,
    0:23:45 that you have to have a woman and a person of color on your board.
    0:23:52 Like this is a blatant and extraordinary look racialism that they’ve enshrined in their bylaws.
    0:23:53 So you can get rid of ESG.
    0:23:54 That’s great.
    0:23:56 But you can get rid of DEI.
    0:23:56 I think that’s great.
    0:23:59 But it’s really about a mindset and a view of the world.
    0:24:01 And I don’t think that’s going anywhere.
    0:24:07 And you think the reason it doesn’t work well in practice is because there’s a big degree
    0:24:09 to which it’s anti-meritocracy.
    0:24:10 It’s anti-American, really.
    0:24:15 I mean, DEI and woke and affirmative action make perfect sense in a lot of different countries.
    0:24:16 Okay.
    0:24:22 And there are a lot of countries out there that are multi-ethnic and they’re heterogeneous.
    0:24:25 And they were run by basically quasi-dictators.
    0:24:29 And the way it works is that you pay off the Christians and they pay off the Muslims.
    0:24:32 And they get this guy and they get that guy and everybody kind of shakes it.
    0:24:35 It’s very explicit where they’re like, we have 10 spots and they go to the Christians.
    0:24:37 We have 10 spots and they go to the Hindus.
    0:24:39 I’m talking India is a country I know pretty well.
    0:24:43 And this does kind of work like that on state politics level in some respect.
    0:24:47 But in America, fundamentally, we really believe that no matter where you are from,
    0:24:52 that you come here and basically within a generation, especially if you migrate here
    0:24:56 legally and you integrate, that you leave a lot of that stuff behind.
    0:25:01 And the story, the American dream that is ingrained in so many of us is one that really
    0:25:08 does not mesh well with any sort of racial preference regime or anything that’s not
    0:25:14 meritocratic. And I mean, I will give the left-wingers some credit in the idea that
    0:25:18 meritocracy itself could have preference for people who have privileged backgrounds.
    0:25:24 I think that’s true. And so the way I would like to see it is to increase
    0:25:30 everybody’s equality of opportunity to make sure that they all have a chance at “willing out”
    0:25:34 the American dream. But that doesn’t erase meritocracy, hard work, and many of the other
    0:25:38 things that we associate with the American character, with the American frontier.
    0:25:42 So these are two ideologies which are really at odds. In a lot of ways,
    0:25:46 like wokeism, racialism, and all this is a third-world ideology. It’s one that’s very
    0:25:51 prevalent in Europe and all across Asia, but it doesn’t mix well here and it shouldn’t.
    0:25:53 And I’m really glad that America feels the same way.
    0:25:59 Yeah, I got to go back to Jim Webb in that book. What a badass, fascinating book.
    0:26:06 Worn Fighting, How the Scots Ira Shaped America. So I did not realize to the degree,
    0:26:12 first of all, how badass the Scots are, and to the degree, many of the things that kind of
    0:26:18 identifies America and part of the American spirit were defined by this relatively small
    0:26:23 group of people. As he describes, the model could be summarized as fight, sing, drink, and pray.
    0:26:29 So there’s the principles of fierce individualism, the principles of a deep distrust of government,
    0:26:35 the elites, the authorities, bottom-up governance. Over 2,000 years of a military tradition,
    0:26:41 they made up 40% of the Revolutionary War Army and produced numerous military leaders,
    0:26:47 including Stonewall Jackson, Ulysses S. Grant, George S. Patton, and a bunch of presidents,
    0:26:52 some of the more gangster presidents, Andrew Jackson, Teddy Roosevelt,
    0:26:58 Woodrow Wilson, Ronald Reagan, and Bill Clinton. Just the whole cultural legacy of country music.
    0:27:03 We owe them so much, and they really don’t get their due, unfortunately. A lot of,
    0:27:08 for the reasons that I just described around racialism is because post mass immigration
    0:27:14 from Europe, the term white kind of became blanket applied to New Irish, to Italians,
    0:27:19 to Slovenians. And as you and I both know, if you travel those countries, people are pretty
    0:27:24 different. And it’s not the different here in the United States. Scott Cyrus was some of the
    0:27:30 original settlers here in America, and particularly in Appalachia, and their contribution to the
    0:27:34 fighting spirit and their own culture, and who we are as individualists, and some of the first
    0:27:38 people to ever settle the frontier. And that frontier mindset really does come from them.
    0:27:42 We owe them just as much as we do the Puritans, but they don’t ever really get their due.
    0:27:47 And the reason I recommend that book is if you read that book and you understand then
    0:27:52 how exactly could this group of white working class voters forgo from 2012 voting for a man
    0:27:58 named Barack Hussein Obama to Donald J. Trump, you really seem to, it makes perfect sense if
    0:28:02 you combine it with a lot of the stuff I’m talking about here, about affirmative action,
    0:28:06 about distrust of the elites, about feeling as if institutions are not seeing through to you,
    0:28:13 and specifically also not valuing your contribution to American history. And in some cases,
    0:28:18 actively looking down. I’m glad you pointed out not only their role in the Revolutionary War,
    0:28:24 but in the Civil War as well. And just how a bunch of a contribution culturally really that we owe
    0:28:29 them for setting the groundwork that so many of us who came later could build upon and adopt
    0:28:32 some of their own ideas in their culture as our own. It’s one of the things that makes America
    0:28:40 great. Mark Twain. Yeah. I mean, so much of the culture, so much of the American spirit,
    0:28:46 the whole idea, the whole shape and form and type of populism that represents our democracy.
    0:28:52 So would you trace the, that fierce individualism that we think of back to them?
    0:28:57 Definitely. It’s a huge part of them about who they were, about the screw you attitude. I mean,
    0:29:02 that book actually kind of had a renaissance back in 2016 when Hill Bailey Elegy came out.
    0:29:06 I’m sure you remember this, which it’s kind of weird to think that it’s now the
    0:29:10 Vice President-elect of the United States. It’s kind of wild, honestly, to think about.
    0:29:16 But JD Vance’s book, Hill Bailey Elegy, I think was really important for a lot of American elites
    0:29:20 who were like, how do these people support Trump? Where does this shit come from? That they’re
    0:29:25 really, I mean, that if you really think back to that time, it was shocking to the elite character
    0:29:30 that any person in the world could ever vote for Donald Trump and not just vote. He won the election.
    0:29:34 How does that happen? And that’s Hill Bailey Elegy guided people in an understanding of what
    0:29:40 that’s like on a lived day-to-day basis. And JD, to his credit, talks about Scott’s Irish heritage,
    0:29:44 about Appalachia, and the legacy of what that culture looks like today, and how a lot of these
    0:29:48 people voted for Donald Trump. But we got to give credit to Jim Webb, who wrote the history of these
    0:29:55 people and taught me and you about their original fight against the oppressors in Scotland and Ireland
    0:30:01 and their militant spirit and how they were able to bring that over here. And they got their due
    0:30:06 in Andrew Jackson and some of our other populist presidents who set us up on the road to Donald
    0:30:11 Trump, to where we are today. Dude, it got me pumped, excited to be an American. Me too. I love
    0:30:18 that book. It’s crazy that JD, the same guy, because that’s Hill Bailey Elegy is what I kind of
    0:30:22 thought of him as. Yeah, I mean, I’ll tell you, for me, it’s actually pretty surreal. I met JD
    0:30:28 Vance in like 2017 in like a bar. I didn’t ever think he would be the vice president-elect
    0:30:33 of the United States. I mean, just kind of wild. One of my friends went back and dug up the email
    0:30:36 that we originally sent him, just like, “Hey, do you want to meet up?” And he’s like, “Sure, you
    0:30:41 know.” I was watching on television. I mean, the first time that it really hit me, I was like,
    0:30:45 “Whoa.” It was like name in a history book. It’s whenever he became the vice presidential nominee,
    0:30:49 I was watching him on TV, and the confetti was falling, and he was waving with his wife, and I
    0:30:55 was like, “Wow, that’s it. You’re in the history books now forever, especially now.” So as the
    0:31:03 literal vice president-elect of the US, but his own evolution is actually a fascinating story
    0:31:07 for us too, because I think a lot of the time I’ve spent right now is kind of… A lot of what
    0:31:13 I’m giving right now are like 2016 kind of takes about like why Trump won that time. But we just
    0:31:17 spent a lot of time on how Donald Trump won this election, and like how what happened, some of the
    0:31:22 failures of the Biden administration, some of the payback for the great awokening. But also,
    0:31:27 if you look at the evolution of J.D. Vance, this is a person who wrote Hillbilly Elegy,
    0:31:30 and not a lot of people pay attention to this, but if you read Hillbilly Elegy,
    0:31:36 J.D. was much more of a traditional conservative at that time. He was citing, you know, report,
    0:31:40 I think the famous passage is about like payday loans and why they’re good or something like that.
    0:31:44 I don’t know his position today, but I would assume that he’s probably changed that. But the
    0:31:51 point is, is that his ideological evolution from watching somebody who really was more of a traditional
    0:31:57 Republican with a deep empathy for the white working class, then eventually become a champion
    0:32:02 and a disciple of Donald Trump, and to believe that he himself was the vehicle for accomplishing
    0:32:07 and bettering the United States specifically for working class Americans, really, of all stripes.
    0:32:16 And that story is really one of the rise of the modern left as it exists as a political project,
    0:32:21 as an ideology. It’s also one of the Republican Party, which coalesced now with Donald Trump as
    0:32:27 a legitimate figure and as the single bulwark against cultural leftism and elitism that
    0:32:31 eventually was normalized to the point that majority of Americans decided to vote for him in 2024.
    0:32:38 So let’s talk about 2024. What happened with the left? What happened with Biden? What’s
    0:32:45 your take on Biden? Biden is, I try to remove myself from it, and I try not to give like
    0:32:50 hit big history takes while you’re in the moment. But it’s really hard not to say that he’s one
    0:32:55 of the worst presidents in modern history. And I think the reason why I’m going to go with it
    0:33:01 is because I want to judge him by the things that he set out to do. So Joe Biden has been the same
    0:33:08 person for his entire political career. He is a basically C student who thinks he’s an A student.
    0:33:14 The chip on his shoulder against the elites has played to his benefit in his original election
    0:33:18 to the United States Senate through his entire career as United States Senator, where he always
    0:33:23 wanted to be the star and the center of attention. And to his 1988 presidential campaign. And one of
    0:33:27 the most fascinating things about Biden and watching him age is watching him become even more of what
    0:33:34 he already was. And so a book recommendation, it’s called What It Takes. And it was written in 1988.
    0:33:39 And there’s actually a long chapter on Joe Biden and about the plagiarism scandal. And one of the
    0:33:43 things that comes across is his sheer arrogance and belief in himself as to why he should be the
    0:33:48 center of attention. Now, the reason I’m laying all this out is the arrogance of Joe Biden, the
    0:33:52 individual and his character is fundamentally the reason that his presidency went awry. This is a
    0:34:00 person who was elected in 2020 really because of a feeling of chaos, of Donald Trump, of we need
    0:34:06 normalcy, decides to come into the office, portrays himself as a quote unquote transitional president,
    0:34:12 slowly, you know, begins to lose a lot of his faculties and then surrounds himself with sycophants,
    0:34:17 the same ones who have been around him for so long, that he had no single input into his life
    0:34:22 to tell him that he needed to stop and he needed to drop out of the race until it became truly
    0:34:28 undeniable to the vast majority of the American people. And that’s why I’m trying to keep it as
    0:34:31 like him as an individual, as a president, because we can separate him from some of his
    0:34:35 accomplishments and the things that happen. Some I support, some I don’t. But generally,
    0:34:38 a lot of people are not going to look back and think about Joe Biden and the Chips Act. A lot
    0:34:42 of people are not going to look back and think about Joe Biden and the build back better bill or
    0:34:47 whatever his Lena Kahn antitrust policy, they’re going to look back on him and they’re going to
    0:34:53 remember high inflation. They’re going to remember somebody who fundamentally never was up to the
    0:34:59 job in the sense that one again book recommendation freedom from fear by David Kennedy is about
    0:35:04 the Roosevelt years. And one of the most important things people don’t understand is the new deal
    0:35:08 didn’t really work in the way that a lot of people wanted it to, right? Like there was still
    0:35:13 high unemployment, there was still a lot of suffering. But you know what changed? They felt
    0:35:18 that they had a vigorous commander in chief who was doing everything in his power to attack
    0:35:22 the problems of the everyday American. So even though things didn’t even materially change,
    0:35:27 the vigor that’s a term that was often associated with John F. Kennedy at Vega,
    0:35:32 you know, in the Massachusetts accent, we had this young vibrant president in 1960 and he was
    0:35:36 running around and he wanted to convince us that he was working every single day tirelessly.
    0:35:41 And we have an 80 year old man who is simply just eating ice cream and going to the beach
    0:35:47 while people’s grocery prices and all this thing go up by 25%. And we don’t see the same vigor. We
    0:35:52 don’t see the same action, the bias to action, which is so important in the modern presidency.
    0:35:57 That is fundamentally why I think the Democrats, part of the reason why the Democrats lost the
    0:36:02 election and also why I think that he missed his moment in such a dramatic way. And he had
    0:36:06 the opportunity, he could have done it, you know, if he wanted to, but maybe 20 years ago. But
    0:36:13 the truth is that his own narcissism, his own misplaced belief in himself and his own accidental
    0:36:19 rise to the presidency ended up in his downfall. And it’s kind of amazing because again, if we
    0:36:24 look back to his original campaign speech, 2019, why I’m running for president, it was
    0:36:28 Charlottesville. And he said, I want to defeat Donald Trump forever. And I want to make sure
    0:36:31 that he never gets back in the White House again. So by his own metric, he did fail.
    0:36:34 That was his, it was the only thing he wanted to do. And he failed, failed from.
    0:36:40 You said a lot of interesting stuff. So one FDR, that’s really interesting. It’s not about
    0:36:46 the specific policy. It’s about like fighting for the people and doing that with charisma and
    0:36:53 just uniting the entire country for a particular, this is the same with Bernie. Like maybe there’s
    0:36:56 a lot of people that disagree with Bernie that still support him because like we just want
    0:37:02 to be authentic. Yeah, that’s it. We just want somebody to fight authentically. Yes. Yes. FDR,
    0:37:06 people really need FDR was like a king. He was like Jesus Christ. Okay. And in the US.
    0:37:11 And some of it was because of what he did, but it was just the fight. So people need to go back
    0:37:15 and read the history of the first 100 days under FDR, the sheer amount of legislation that went
    0:37:19 through his ability to bring Congress to heal and the Senate, he gets all this stuff through.
    0:37:22 But as you and I know, legislation takes a long time to put into place, right?
    0:37:29 We’ve had people starving on the streets all throughout 1933 under Hoover. The difference
    0:37:34 was Hoover was seen as this do nothing joke who would dine nine course meals in the White House
    0:37:40 and use a filthy rich banker. FDR comes in there and every single day has in fireside chats,
    0:37:45 he’s passing legislation. But more importantly, so he tries various different programs,
    0:37:49 then they get ruled on constitutional. He tries even more. So what does America take away from
    0:37:53 that? Every single time if he gets knocked down, he comes back fighting. And that was a really
    0:37:59 part of his character that he developed after he got polio. And it was, it gave him the strength
    0:38:07 to persevere through personally what he could transfer in his calm demeanor and his feeling
    0:38:14 of fight that America really got that spirit from him and was able to climb itself out of
    0:38:18 the Great Depression. He’s such an inspirational figure. He really is. And I people think of him
    0:38:23 for World War II. And of course, we can spend forever on that. But in my opinion, the early
    0:38:29 years are not studied enough. ’33 to ’37 is one of the most remarkable periods in American history.
    0:38:33 We were not ruled by a president. We were ruled by a king, by a monarch. And people liked it.
    0:38:41 He was a dictator and he was a good one. Yeah. So to sort of push back against the implied
    0:38:45 thing that you said. Sure. So when saying Biden is the worst president.
    0:38:49 No, second worst in modern history, that’s what I said. In modern history, who’s the worst?
    0:38:52 W, no question. I see because of the horrible wars probably. I mean,
    0:38:57 Iraq is just so bad. Like one of my favorite authors is a guy, Gene Edward Smith. He’s
    0:39:02 written a bunch of presidential biographies. And in the opening of his book, W Biography,
    0:39:06 he’s like, there’s just no question. There’s a single worst foreign policy mistake in all
    0:39:11 of American history. And W is one of our worst presidents ever. He had terrible judgment and
    0:39:16 got us into a war of his own choosing. It was a disaster and it set us up for failure.
    0:39:20 By the way, we talked a lot about Donald Trump. Nobody is more responsible for the rise of Donald
    0:39:26 Trump than George W. Bush. But I could go off on Bush for a long time. We will return there.
    0:39:31 So as part of the pushback, I’d like to say, because I agree with your criticism of arrogance
    0:39:36 and narcissism against Joe Biden. The same could be said about Donald Trump. You’re absolutely right.
    0:39:42 Of arrogance. And I think you’ve also articulated that a lot of presidents throughout American
    0:39:46 history have suffered from a bad case of arrogance and narcissism.
    0:39:51 Absolutely. But sometimes for a benefit, you have to be a pretty crazy person to want to be
    0:39:57 president. I had put out a tweet that got some controversy. And I think it was Joe Rogan,
    0:40:00 who I love. But he was like, I want to find out who Kamala Harris is as a human being.
    0:40:04 And I was like, I’m actually not interested in who politicians are as human beings at all.
    0:40:10 I was like, I’ve read too much about them to know. I know who you are. If you spend your life,
    0:40:14 and because I live in Washington and I spend a lot of time around would-be politicians,
    0:40:18 I know what it takes to actually become the president. It’s crazy. You have to give up
    0:40:22 everything, everything, every night. You’re not spending it with your wife. You’re spending
    0:40:26 it at dinner with potential donors, with friends, with people who can connect to you.
    0:40:30 Every, even after you get elected, that’s even more so. Now you got to raise money.
    0:40:33 And now you’re on to the next thing. Now you want to get your political thing through.
    0:40:36 You’re going to spend all your time on your phone. You and your staff are going to be more like this.
    0:40:41 Your entire life revolves around your career. It’s honestly, you need an insane level of
    0:40:46 narcissism to do it because you have to believe that you are better than everybody else,
    0:40:53 which is already pretty crazy. And not only that, your own personal characteristics and foibles
    0:40:58 lead you to the pursuit of this office and to the pursuit of the idolatry of the self
    0:41:03 and everything around you. There’s a famous story of Lady Bird Johnson
    0:41:06 after Johnson becomes the president. He’s talking to the White House butler,
    0:41:09 and she was like, “Everything in this house revolves around my husband.
    0:41:12 Whatever’s left goes to the girls, her two children, and I’ll take the scraps.”
    0:41:19 Everything revolves around Johnson’s political career. And his daughters, when they’re honest,
    0:41:22 because they like to paper over some of the things that happened under him, but
    0:41:27 they didn’t spend any time with him. Saturday morning was for breakfast with Richard Russell,
    0:41:32 I forget. These are all in the Robert A. Carroll books. Sunday was for Ray Byrne. There was no
    0:41:38 time for his kids. That’s what it was. And by the way, he’s one of the greatest politicians
    0:41:42 to ever live. But he also died from a massive heart attack and he was a deeply sad and depressed
    0:41:49 individual. Yeah, I saw that tweet to go back to that. And also, I listened to your incredible
    0:41:54 debate about it with Marshall on the Realignment Podcast. And I have to side with Marshall.
    0:41:58 I think you’re just wrong on this, because I think revealing the character of a person
    0:42:04 is really important to understand how they will act in a room full of generals and full of…
    0:42:10 Yeah, this gets to the judgment question. I think of Johnson and of Nixon, of Teddy Roosevelt,
    0:42:16 even of FDR. I can give you a laundry list of personal problems that all those people had.
    0:42:22 I think they had a really, really good judgment. And I’m not sure how intrinsic their own personal
    0:42:27 character was to their exploration and thinking about the world. So JFK is… Actually, JFK
    0:42:32 might be our best example, because he had the best judgment out of anybody in the room as a
    0:42:38 brand new president in the Cuban Missile Crisis. And he got us out and avoided nuclear war, which he
    0:42:44 deserves eternal credit for that. But how did he arrive to good judgment? Some of it certainly
    0:42:49 was his character. And we can go again, though, into his laundry list of that. But most of it
    0:42:54 was around being with his father, seeing some of the mistakes that he would make. And he was also
    0:43:01 had a deeply inquisitive mind, and he experienced World War II at the personal level after PT 109.
    0:43:07 So it is… Look, I get it. I actually could steelman it. The response to what I’m saying is
    0:43:12 judgment is not divisible from personal character. But just because I know a lot of politicians,
    0:43:17 and I’ve read enough with the really great ones, the people who I revere the most,
    0:43:21 there’s really bad personal stuff, basically, every single time.
    0:43:24 But you’re saying the judgment was good.
    0:43:25 Yeah, his judgment was great.
    0:43:26 On the Missile Crisis.
    0:43:26 Yes.
    0:43:32 Some of the best judgment and decision-making in the history of America.
    0:43:37 Yes. And we should study a lot of it. And I encourage people out there. This is a brutal text.
    0:43:42 We were forced to read it in graduate school. The Essence of Decision by Graham Allison. I’m so
    0:43:47 thankful we did. It’s one of the foundations of political science, because it lays out theories
    0:43:51 of how government works. This is also a useful transition, by the way, if we want to talk about
    0:43:57 Trump and some of his cabinet and how that is shaping up, because people really need to understand
    0:44:03 Washington. Washington is a creature with traditions, with institutions that don’t care
    0:44:07 about you. They don’t even really care about the president. They have self-perpetuating
    0:44:12 mechanisms, which have been done a certain way. And it usually takes a great shocking event,
    0:44:16 like World War II, to change really anything beyond the marginal. Every once in a while,
    0:44:20 you have a figure like Teddy Roosevelt, who’s actually able to take peacetime presidency
    0:44:23 and transform the country. But it needs an extraordinary individual to get something like
    0:44:29 that done. So the question around the Essence of Decision was the theory behind the Cuban
    0:44:34 Missile Crisis of how Kennedy arrived at his decision. And there are various different
    0:44:38 schools of thought. But one of the things I love about the book is it presents a case for all three,
    0:44:43 the organizational theory, the bureaucratic politics theory, and then kind of the great man
    0:44:49 theory as well. So you and I could sit here and I could tell you a case about PT 109 and about
    0:44:54 how John F. Kennedy experienced World War II as this, I think he was like a first lieutenant or
    0:45:00 something like that, and how he literally swam miles with a wounded man’s life jacket strap
    0:45:04 in his teeth with a broken back. And he saved him and he ended up on the cover of Life Magazine
    0:45:10 and he was a war hero. And he was a deeply smart individual who wrote a book in 1939 called Why
    0:45:18 England Slept, which to this day is considered a text which at the moment was able to describe in
    0:45:23 detail why Neville Chamberlain and the British political system arrived at the policy of appeasement.
    0:45:29 I actually have an original copy. It’s one of my most prized possessions. And from 1939,
    0:45:33 because this is a 23-year-old kid, who the fuck are you, John F. Kennedy, turns out he’s a brilliant
    0:45:39 man. And another just favorite aside is at the Potsdam Conference, where Harry Truman is there
    0:45:43 with Stalin and everybody. So in the room at the same time, Harry S. Truman, president of the United
    0:45:50 States, Dwight D. Eisenhower, the general, right, who will succeed him, 26-year-old John F. Kennedy
    0:45:54 as a journalist, some shithead journalist on the side, and all three of those presidents were in
    0:46:00 the same room with Joseph Stalin and others. And that’s the story of America right there. It’s kind
    0:46:06 of amazing. I love people to say that because you never know about who will end up rising to power.
    0:46:09 But are you announcing that you’re running for power? No, absolutely not. Yeah. I don’t have
    0:46:14 what it takes. I don’t think so. I’m self-aware. Yeah. Well, maybe humility is necessary for
    0:46:21 greatness. Okay. So actually, can we just linger on that book? Yeah. So the book Essence of Decision,
    0:46:26 Explaining the Cuban Missile Crisis, by Graham Allison, it presents three different models of
    0:46:31 how government works. The rational actor model, so seeing government as one entity,
    0:46:39 trying to maximize the national interest, also seeing government as through the lens of the
    0:46:45 momentum of standard operating procedures, sort of this giant organization that’s just doing
    0:46:51 things how it’s always been done. And the government politics model of there’s just these
    0:46:59 individual internal power struggles within government. And all of that is like a different
    0:47:05 way to view and they’re probably all true to a degree of how decisions are made within this
    0:47:09 giant machinery of government. That’s why it’s so important is because you cannot read that book
    0:47:13 and say one is true and one is not. You can say one is many more true than the other, but all of them
    0:47:18 are deeply true. And this is one, this is probably a good transition to Donald Trump because, and I
    0:47:23 guess for the people out there who think I’ve been up too obsequious, he’ll be my criticism,
    0:47:26 Trump says something very fundamental and interesting on the Joe Rogan podcast. Probably
    0:47:31 the most important thing that he ever said, which is he said, “I like to have people like John Bolton
    0:47:36 in my administration because they scare people and it makes me seem like the most rational
    0:47:41 individual in the room.” So at a very intuitive level, a lot of people can understand that and
    0:47:46 then they can rationalize while there are picks that Donald Trump has brought into his White House,
    0:47:51 people like Mike Walts and others that have espoused views that are directly at odds with a
    0:47:58 quote unquote anti-Neocon, anti-Liz Cheney agenda. Now, Trump’s theory of this is that he likes to
    0:48:04 have quote unquote like psychopaths like John Bolton in the room with him while he’s sitting
    0:48:08 across from Kim Jong-un because it gets scared. What I think Trump never understood when he was
    0:48:13 president, and I honestly question if he still does now, is those two theories that you laid out
    0:48:17 which are not about the rational interest as the government is one model, but the bureaucratic
    0:48:22 theory and the organizational theory of politics. And because what Trump I don’t think quite gets
    0:48:27 is that there are 99% of the decisions that get made in government never reach the president’s
    0:48:30 desk. One of the most important Obama quotes ever is, “By the time it gets to my desk,
    0:48:34 nobody else can solve it. All the problems here are hard. All the problems here don’t have an
    0:48:41 answer. That’s why I have to make the call.” So the theory that Trump has that you can have people
    0:48:45 in there who are let’s say warmongers, neocons or whatever who don’t necessarily agree with you
    0:48:49 is that when push comes to shove at the most important decisions, that I’ll still be able to
    0:48:54 rein those people in as an influence. Here’s the issue. Let’s say for Mike Waltz, who’s going to
    0:48:59 be the national security advisor. A lot of people don’t really understand, you know, there’s this
    0:49:02 theory of national security advisor where you call me into your office and you’re the president,
    0:49:06 you’re like, “Hey, what do we think about Iran?” I’m like, “I think you should do X, Y, and Z. No,
    0:49:09 that’s not how it works.” The national security advisor’s job is to coordinate the interagency
    0:49:15 process. So his job is to actually convene meetings, him and his staff, where in the situation room,
    0:49:20 CIA, State Department, SECDEF, others. Before the POTUS even walks in, we have options. So we’re
    0:49:26 like, “Hey, Russia just invaded Ukraine. We need a package of options. Those packages of options
    0:49:29 are conceded of three things. We’re going to have one group. We’re going to call it the dovish
    0:49:34 option. Two, we’re going to call it the middle ground. Three, the hardcore package. Trump walks
    0:49:38 in. This is how it’s supposed to work. Trump walks in and he goes, “Okay, Russia invaded Ukraine.
    0:49:42 What do we do? Mr. President, we’ve prepared three options for you. We’ve got one, two, and three.
    0:49:46 Now, who has the power? Is it Trump when he picks one, two, or three? Or is the man who decides
    0:49:52 what’s even in option one, two, and three?” That is the part where Trump needs to really understand
    0:49:55 how these things happen. And I watched this happen to him in his first administration.
    0:50:00 He hired a guy, Mike Flynn, who was his national security advisor. You could say a lot about Flynn,
    0:50:05 but him and Trump were at least like this, on foreign policy. Flynn gets outed because what I
    0:50:12 would call an FBI coup, whatever, 33 days. He’s out as a national security advisor. HR Master,
    0:50:18 he’s got a nice, shiny uniform, four star, all of this. Master doesn’t agree with Donald Trump at
    0:50:23 all. And so Trump says, “I ran on pulling out of Afghanistan. I want to get out of Afghanistan.”
    0:50:26 They’re like, “Yeah, we’ll get out of Afghanistan.” But before we get out, we got to go back in,
    0:50:31 as in we need more troops in there. And he’s like, “Oh, okay.” It’s like all this and it
    0:50:38 proves a plan and effectively gives a speech in 2017, where he ends up escalating and increasing
    0:50:42 the number of troops in Afghanistan. And it’s only till February 2020 that he gets to sign a deal,
    0:50:47 the Taliban peace deal, which in my opinion, he should have done in 2017. But the reason why
    0:50:52 that happened was because of that organizational theory, of that bureaucratic politics theory,
    0:50:57 where HR McMaster is able to guide the interagency process, bring the uniform
    0:51:01 recommendations of the Joint Chiefs of Staff and others, just give Donald Trump no option but to
    0:51:06 say we must put troops. Another example of this is a book called Obama’s War by Bob Woodward.
    0:51:10 I highly encourage people to read this book, because this book talks about how Obama comes into
    0:51:14 the White House in 2009, and he says, “I want to get out of Iraq and I don’t want to increase,
    0:51:19 I want to fight the good war in Afghanistan.” And he’s doing, Obama’s a thoughtful guy,
    0:51:24 too thoughtful, actually. And so he sits there and he’s working out his opinions.
    0:51:31 And what he starts to watch is that very slowly his narrow, his options begin to narrow, because
    0:51:35 strategic leaks start to come out from the White House situation room about what we
    0:51:41 should do in Afghanistan. And pretty soon, David Petraeus and Stan McChrystal and the entire
    0:51:47 National Security apparatus has Obama pegged, where he basically politically at the time,
    0:51:53 decides to take the advantage of increasing troops in Afghanistan, but then tries to have it both
    0:51:58 ways by saying, “But in two years we’re going to withdraw.” That book really demonstrates how the
    0:52:05 deep state can completely remove any of your options to be able to move by presenting you with
    0:52:12 ones which you don’t even want, and then making it politically completely infeasible to travel down
    0:52:16 the extreme directions. That’s why when Trump says things like, “I want to get out of Syria,”
    0:52:21 that doesn’t compute up here for the Pentagon. Because first of all, if I even asked you how
    0:52:24 many troops we have in Syria, and you could go on the DoD website, it’ll tell you the number.
    0:52:28 The number’s bullshit, because the way that they do it is if you’re only there for 179 days,
    0:52:32 you don’t count as active, military contracts, the real numbers, let’s say five times. And so
    0:52:36 Trump would be like, “Hey, I want to get out of Syria. We’ll do it six months, right? We need six
    0:52:40 months.” And after six months ago, so are we out of Syria yet? And they’re like, “No, well, we got
    0:52:44 to wrap this up. We got this base. We got that. We have this important mission. And next thing you
    0:52:49 know, you’re out of office, and it’s over.” So there’s all these things which I don’t think he
    0:52:53 quite understands. I know that some of the people around him who disagree with these picks do, is
    0:52:57 the reason why these picks really matter is not only are the voices in the situation room for
    0:53:01 the really, really high-profile stuff. It’s for all the little things to never get to that president’s
    0:53:07 desk, of which can shape extraordinary policy. And I’ll give you the best example. There was never
    0:53:13 a decision by FDR as president of the United States to oil embargo Japan, one which he thought
    0:53:18 about as deeply as you and I would want. It was a decision kind of made within the State Department.
    0:53:22 It was a decision that was made by some of his advisors. I think he eventually signed off on it.
    0:53:26 It was a conscious choice, but it was not one which ever was understood,
    0:53:31 the implications that by doing that, we invite a potential response like Pearl Harbor. So think
    0:53:37 about what the organizational bureaucratic model can tell us about the extraordinary blowback that
    0:53:42 we can get and why we want people with great judgment all the way up and down the entire
    0:53:47 national security chain in the White House. Also, I just realized I did not talk about immigration,
    0:53:52 which is so insane. One of the reasons Donald Trump won in 2024, of course, was because of the
    0:53:57 massive change to the immigration status quo. The truth is, is that it may actually be second to
    0:54:02 inflation in terms of the reason that Trump did win. The presidency was because Joe Biden fundamentally
    0:54:05 changed the immigration status quo in this country. That was another thing about the
    0:54:10 Scott’s Irish people and others that we need to understand is that when government machinery
    0:54:16 and elitism and liberalism appears to be more concerned about people who are coming here in a
    0:54:22 disorderly and illegal process and about their rights and their ability to quote unquote pursue
    0:54:27 the American dream, while the American dream is dying for the native born population, that is a huge
    0:54:33 reason why people are turning against mass immigration. Historically as well, my friend
    0:54:38 Raihan Salam wrote a book called Melting Pot or Civil War. And one of the most important
    0:54:43 parts about that book is the history of mass migration to the United States. So if we think
    0:54:49 about the transition from Scott’s Irish America to the opening of America to the Irish and to
    0:54:56 mass European immigration, what a lot of people don’t realize is it caused a ton of problems.
    0:55:01 There were mass movements at the time, the know nothings and others in the 1860s who rose up against
    0:55:07 mass European migration. They were particularly concerned about Catholicism by as the religion
    0:55:12 of a lot of the new immigrants. But really what it was is about the changing of the American character
    0:55:19 by people who are not have the same traditions, values and skills as the native born population
    0:55:23 and their understanding of what their owed and their role in American society is very different
    0:55:29 from the way that people previously had. One of the most tumultuous periods of US politics was
    0:55:34 actually during the resolution of the immigration question, where we had massive waves of foreign
    0:55:41 born population come to the United States. We had them, you know, integrated, luckily actually at
    0:55:47 the time with the industrial revolution. So we actually did have jobs for them. One of the problems
    0:55:52 is that today in the United States, we have one of the highest levels of foreign born population
    0:55:58 than ever before actually since that time in the early 1900s. But we have all of the same attendant
    0:56:04 problems. But even worse is we don’t live in an industrial economy anymore. We live in a predominantly
    0:56:08 service based economy that has long, you know, moved past manufacturing. Now I’m not saying we
    0:56:12 shouldn’t bring some of that back, but the truth is that manufacturing today is not what it was to
    0:56:18 work in a steel mill in 1875. I think we can all be reasonable and we can agree on that. And part
    0:56:23 of the problems with extremely high levels of foreign born population, particularly unskilled,
    0:56:28 and the vast majority of the people who are coming here and who are claiming asylum are doing so
    0:56:33 under fraudulent purposes. They’re doing so because they are economic migrants and they’re abusing,
    0:56:38 you know, asylum law to basically gain entrance to the United States without going through a
    0:56:45 process of application or merit. And this has all of its traces back to 1965, where the Immigration
    0:56:51 Naturalization Act of 1965 really reversed and changed the status quo of immigration from the
    0:56:58 1920s to 1960, which really shut down levels of immigration in the United States. In my opinion,
    0:57:03 it was one of the most important things that ever happened. And one of the reasons why is it forced
    0:57:09 and caused integration. It also forced by slowing down the increase in the number of foreign born
    0:57:15 population, it redeveloped an American character and understanding that was more homogenous and was
    0:57:20 the ability for you and me to understand despite the difference in our background. If you accelerate
    0:57:25 and you continue this trend of the very high foreign born unskilled population, you unfortunately
    0:57:32 are basically creating a mass, you know, it’s basically a non-citizen population of illegal
    0:57:38 immigrants, people who are not as skilled. You know, I think it was, I read 27% of the people
    0:57:43 who’ve come under Joe Biden illegally don’t even have a college degree. That means that we’re lucky
    0:57:48 if they’re even literate in Spanish, let alone English. So there are major problems about
    0:57:53 integrating that type of person, you know, even in the past, whenever we had a mass industrial
    0:57:59 economy. Now imagine today, the amount of strain that would put on social services if mass
    0:58:05 citizenship happened, you know, to that population would be extraordinary. And even if we were to
    0:58:10 do, I don’t think it’s a good idea, but even if we were to do so, we would still need to pair it
    0:58:14 with a dramatic change. And part of the problem right now is I don’t think a lot of people understand
    0:58:19 the immigration system. The immigration system in the United States, effectively, they call it
    0:58:26 family-based migration. I call it chain migration. Chain migration is the term which implies that,
    0:58:32 let’s say you come over here, and you get your green card, you can use sponsorship and others
    0:58:36 by gaming the quota system to get your cousin or whatever to be able to come. The problem with
    0:58:41 that is who is your cousin? Like, is he a plumber? Is he, you know, is he a coder? You know, that
    0:58:44 doesn’t actually matter because he’s your cousin. So actually, it’s preference. The way that it
    0:58:49 should work is it should be nobody cares if he’s your cousin. What does he do? You know, what does
    0:58:53 she do? What is she going to bring to this country? All immigration in the United States, in my
    0:58:57 opinion, should be net positive without doing fake statistics about, oh, they actually increased
    0:59:03 the GDP or whatever. It’s like, we need a merit-based immigration system. We are the largest
    0:59:07 country in the world. And one of the only non-Western, or one of the only Western countries in the
    0:59:13 world that does not have a merit-based, points-based immigration system, like Australia and or Canada.
    0:59:17 And I mean, I get it because a lot of people did come to this country under non-merit-based
    0:59:22 purposes. So they’re really reluctant to let that go. But I do think that Biden, by changing the
    0:59:27 immigration status quo and by basically just allowing tens of millions, potentially tens of
    0:59:34 millions, at the very least 12 million new entrants to come to the U.S. under these pretenses of
    0:59:42 complete disorder and of no conduct, really broke a lot of people’s understanding and even like
    0:59:46 mercy in that regard. And so that was obviously a massive part of Trump’s victory.
    0:59:52 Speaking of illegal immigration, what do you think about the borders are? Tom Holman.
    0:59:59 Tom Holman is a very legit dude. Got to know him a little bit in Trump 1.0. He is an original
    1:00:06 true believer on enforcing immigration law, as it is. Now, notice how I just said that.
    1:00:12 That’s a politically correct way of saying mass deportation. And I will point out for my left
    1:00:22 wing critics in that he really believes in the ability in the necessity of mass deportation
    1:00:26 and he has the background to be able to carry that out. I will give some warnings and this will
    1:00:33 apply to Doge too. Tsar has no statutory or constitutional authority. Tsar has as much
    1:00:38 authority as the President of the United States gives him. Donald Trump, I think it’s fair to
    1:00:42 say, even his critics or even the people who love him could say he can be capricious at times
    1:00:49 and he can strip you or not strip you or give you the ability to compel. So, Tsar in and of
    1:00:53 itself is frankly a very flawed position in the White House and it’s one that I really wish we
    1:00:58 would move away from. I understand why we do it. It’s basically to do a national security advisor,
    1:01:05 interagency convener to accomplish certain goals. That said, there is a person, Stephen Miller,
    1:01:10 who will be in the White House, the Deputy White House Chief of Staff, who has well founded beliefs,
    1:01:16 experience in government and rock solid ideology on this, which I think would also give him the
    1:01:22 ability to work with home and to pull that off. That said, the corollary to this, and frankly,
    1:01:29 this is the one I am the most mystified yet, is Kirsti Noem as the Department of Homeland Security
    1:01:32 Secretary. So, let me just lay this out for people because people don’t know what this is.
    1:01:37 Department of Homeland Security, 90% of the time the way you’re going to interact with them is TSA.
    1:01:41 You don’t think about it. But people don’t know, the Department of Homeland Security is one of the
    1:01:46 largest law enforcement, if maybe the largest law enforcement agency in the world. It’s gigantic.
    1:01:53 You have extraordinary statutory power to be able to prove investigations. You have Border Patrol,
    1:01:59 ICE, TSA, CBP, all these other agencies that report up to you. But most importantly for this,
    1:02:05 you will be the public face of mass deportation. So, I was there in the White House briefing room
    1:02:10 last time around when Kirsten Nielsen, who was the DHS Secretary under Donald Trump and specifically
    1:02:16 the one who enforced child separation for a limited period of time, she was a smart woman,
    1:02:22 she has long experience in government, and honestly, she melted under the criticism.
    1:02:26 Kirsti Noem is the governor of South Dakota. I mean, that’s great. You have a little bit of
    1:02:30 executive experience. But to be honest, I mean, you have no law enforcement background. You have
    1:02:36 no ability to lie. You have no, frankly, with understanding of what it is going to be like
    1:02:41 to be the secretary of one of the most controversial programs in modern American history. You have
    1:02:47 to go on television and defend that every single day, a literal job requirement under Donald Trump.
    1:02:52 And you will have to have extraordinary command of the facts. You have to have a very high
    1:02:57 intellect. You have to have the ability to really break through. And I mean, we all watch how she
    1:03:02 handled that situation with her dog and her interviews, and that does not give me confidence
    1:03:04 that she will be able to do all that well in the position. So-
    1:03:11 What do you think is behind that? So, Crystal Falls, on breaking point, is that there’s some
    1:03:19 kind of interpersonal, like, I didn’t know, I should know this, but I didn’t know any of the,
    1:03:23 there was some cheating or whatever. There’s a rumor, nobody knows if it’s true, that Cory
    1:03:28 Lewandowski and Kirsten Noem had a previous relationship on going. Cory Lewandowski is a
    1:03:32 Trump official, and that he may be put her in front. I don’t know. Is this like the real
    1:03:36 Housewives of DC? Yeah, kind of. Although, I mean, it was the most open secret in the world.
    1:03:40 Allegedly, I don’t know if it’s true. Okay, all right. I mean, I don’t like the traffic too much
    1:03:45 in personal theories. But, I mean, in this respect, it might actually be correct in terms of how
    1:03:49 it all came down. I have no idea what he’s thinking, to be honest. I truly don’t. I mean,
    1:03:55 maybe it’s like he was last time. He said, I want a woman who’s like softer and like emotionally,
    1:04:02 and the ability to be the face of my immigration program. I mean, again, like I said, I don’t see
    1:04:07 it in terms of her experience and her media. It’s frankly, like, not very good. So, you think she
    1:04:14 needs to be able to articulate, not just be like the softer face of this radical policy,
    1:04:17 but also be able to articulate the what’s happening with the reasoning behind all this.
    1:04:21 Yes. You need to give justification for everything. Here’s the thing. Under mass deportation,
    1:04:28 the media will drag up every sob story known to planet Earth about this person and that person
    1:04:32 who came here illegally and why they deserve to stay. And really, what the quasi thing is,
    1:04:36 that’s why the program itself is bad, and we should legalize everybody who is here illegally.
    1:04:41 Okay. So, the thing is, is that you need to be able to have extraordinary oversight. You need a
    1:04:45 great team with you. You need to make sure that everything is being done by the book. The way
    1:04:50 that the media is being handled is that you throw every question back in their face and you say,
    1:04:54 well, you know, you either talk about crime or you talk about the enforceability of the law,
    1:05:00 the necessity. I mean, I just, I think, articulated a very coherent case for why we need much less
    1:05:06 high levels of immigration to the United States. And I am the son of people who immigrated to this
    1:05:11 country. But one of the favorite phrases I heard from this, from a guy named Mark Cracorian,
    1:05:16 who’s the center for immigration studies, is we don’t make immigration policy for the benefit
    1:05:22 of our grandparents. We make immigration policy for the benefit of our grandchildren. And that is
    1:05:25 an extraordinary and good way to put it. And in fact, I would say it’s a triumph of the American
    1:05:31 system that somebody whose family benefited from the immigration regime and was able to come here.
    1:05:36 My parents had PhDs, came here legally, applied, spent thousands of dollars through the process,
    1:05:42 can arrive at the conclusion that actually we need to care about all of our fellow American
    1:05:46 citizens. I’m not talking about other Indians or, you know, whatever. I’m talking about all of it.
    1:05:51 I care about everybody who is here in this country. But fundamentally, that will mean that we are
    1:05:57 going to have to exclude some people from the US. And another thing that the open borders people
    1:06:03 don’t ever really grapple with is that even within their own framework, it makes no sense. So,
    1:06:10 for example, a common left wing talking point is that it’s America’s fault that El Salvador and
    1:06:15 Honduras and Central America is fucked up. And so because of that, we have a responsibility to
    1:06:20 take all those people in because it’s our fault or Haiti, right? But, you know, if you think about
    1:06:24 it, America is responsible, and I’m just being honest, for destroying and ruining a lot of
    1:06:30 countries, they just don’t benefit from the geographic ability to walk to the United States.
    1:06:35 So, I mean, if we’re doing grievance politics, Iraqis have way more of a claim to be able to come
    1:06:41 here than anybody from El Salvador who’s talking about something that happened in 1982. So, within
    1:06:46 its own logic, it doesn’t make any sense. Even under the asylum process, you know, people, I mean,
    1:06:50 people don’t even know this, you’re literally able to claim asylum from domestic violence, okay?
    1:06:57 There are, I mean, imagine that, like, that’s, frankly, that is a local law enforcement and
    1:07:02 problem of people who are experiencing that in their home country. I know how cold-hearted this
    1:07:07 sounds, but maybe, honestly, it could be because I’m Indian. One of the things that whenever you
    1:07:11 visit India and you see a country with over a billion people, you’re like, holy shit, you know,
    1:07:18 this, this is crazy. And you understand both the sheer numbers of the amount of people involved.
    1:07:22 And also, there is nothing in the world you could ever do to solve all problems for everybody.
    1:07:27 It’s a very complex and dynamic problem. And it’s really nice to be bleeding heart and to say,
    1:07:31 oh, well, we have responsibility to this and to all mankind and all that, but it doesn’t work.
    1:07:35 It doesn’t work by the nation-state. It doesn’t work with a sovereign nation. We’re the luckiest
    1:07:39 people in the history of the world to live here in this country. And it, you need to protect it.
    1:07:45 And protecting it requires really thinking about the fundamentals of immigration itself and not
    1:07:50 telling us stories like what there’s a famous moment in the Trump White House where Jim Acosta,
    1:07:56 CNN White House correspondent, got into it with Stephen Miller, the current, you know,
    1:08:00 who will be the current deputy chief. And he was like, what do you say something along the lines
    1:08:04 to people who say you’re violating, you know, that quote on the Statue of Liberty, like,
    1:08:09 give me your tired, your poor, your hungry, all of that, the Emma Lazarus quote. And Stephen,
    1:08:14 very logically, was like, what level of immigration comports with the Emma Lazarus quote?
    1:08:19 Is it 200,000 people a year? Is it 300? Is it 1 million? Is it 1.5 million?
    1:08:25 And that’s such a great way of putting it because there is no limiting principle on Emma Lazarus
    1:08:30 quote. There is, when you start talking, honestly, you’re like, okay, we live in X, Y, and Z society
    1:08:36 with X, Y, and Z GDP. People who are coming here should be able to benefit for themselves and us,
    1:08:42 not rely on welfare, not, you know, be people who we have to take care of after because we have
    1:08:46 our own problems here right now. And who are the population, the types of people that we can study
    1:08:50 and look at, who will be able to benefit. And based on that, yeah, immigration is great. But
    1:08:58 there are a lot of economic, legal, and societal reasons for why you definitely don’t want the
    1:09:06 current level. But another thing is, even if we turn the switch, and we still let in a million
    1:09:11 five people a year under the chain, the chain family based migration, I think it would be a
    1:09:17 colossal mistake because it’s not rooted in the idea that people who are coming to America are
    1:09:22 explicitly doing so at the benefit of America. It’s doing so based on the familial connections
    1:09:26 of people who already gained the immigration system to be able to come here. I have a lot of
    1:09:30 family in India. And, you know, I love them, but and some of them are actually very talented and
    1:09:34 qualified. If they wanted to come here, I think they should be able to apply on their own merit.
    1:09:37 And that should have nothing to do with their familial status of the fact that I’m a U.S.
    1:09:43 citizen. Like you mentioned, the book, Melting Potter, Civil War by Raihan Salam,
    1:09:49 he makes an argument against the open borders. The thesis there is a simulation should be a big
    1:09:55 part. I guess there’s some kind of optimal rate of immigration, which allows for a simulation.
    1:09:58 Yeah. And there are ebbs and flows. And that’s kind of what I was talking about historically,
    1:10:03 where, you know, I mean, the truth is, is you could walk the streets of New York City in the
    1:10:08 early 1900s and late 1890s, and you’re not gonna hear any English. And I think that’s bad. I mean,
    1:10:13 really what you had was ethnic enclaves of people who were basically practicing their way of life,
    1:10:17 just like they did previously, bringing over a lot of their ethnic problems that they had and
    1:10:22 even some of their cultural like unique capabilities or whatever, bringing it to America and then
    1:10:26 New York City police and others are figuring out like, what the hell do we do with all this?
    1:10:30 And it literally took shutting down immigration for an entire generation
    1:10:35 to do away with that. And there’s actually still some. The point about assimilation is twofold.
    1:10:42 One is that you should have the capacity to inherit the understanding of the American
    1:10:47 character that has nothing to do with race. And that’s so unique that I can sit here as a child
    1:10:53 of people from India and that’s such a deep appreciation for the Scots-Irish. I consider
    1:10:58 myself, you know, American first. And one of the things that I really love about that is that I
    1:11:05 have no historical relationship to anybody who fought in the Civil War. But I feel such kinship
    1:11:11 with a lot of the people who did and reading the memoirs and the ideas of those that did because
    1:11:18 that same mindset of the victors and the values that they were able to instill in the country
    1:11:23 for 150 years later gives me the ability to connect to them. And that’s such an incredible
    1:11:27 victory on their part. And that’s such a unique thing in almost every other country in the world
    1:11:33 in China and India or wherever. You’re kind of like what you are. You’re a Hindu, you’re a Jew,
    1:11:39 you’re Han Chinese, you’re a Uyghur, or you’re Tibetan, something like that. You’re born into it.
    1:11:43 But really here was the only one of the only places in the world where you can really connect to
    1:11:48 that story and that spirit and the compounding effect of all of these different people who come
    1:11:54 to America. And that is a celebration of immigration as an idea. But immigration is also a discrete
    1:12:00 policy. And that policy was really screwed up by the Biden administration. And so we can celebrate
    1:12:07 the idea and also pursue a policy for all of the people in the US, our citizens, to actually be
    1:12:13 able to benefit. And look, it’s going to be messy. And honestly, I still don’t know yet if Trump will
    1:12:18 be able to pursue actual mass deportation, just because I think that I’m not sure the public
    1:12:21 is ready for it. I do support mass deportation. I don’t know if the public is ready for it.
    1:12:26 I think, I don’t know, I’ll have to see because there’s a lot of different ways that you can do
    1:12:31 it. There’s mandatory you verify, which requires businesses to basically verify or a US citizen
    1:12:34 or you’re here illegally whenever they employ you, which is not the law of the land currently,
    1:12:40 which is crazy, by the way. There’s, you know, you can cut off or tax remittance payments,
    1:12:45 which are payments that are sent back to other countries like Mexico, Honduras, and Guatemala,
    1:12:49 again, illustrating my economic migrant point. There are a lot of various different ways where
    1:12:53 you can just make it more difficult to be illegally here in the US, so people will self-deport.
    1:13:00 But, you know, if he does pursue like real mass deportation, that will be a flashpoint in America.
    1:13:05 Aren’t you talking about things like what Tom Holman said that works at raids,
    1:13:09 sort of increasing the rate of that? Yeah. We used to do that, you know?
    1:13:12 Yeah. But there’s a rate at which you can do that,
    1:13:18 where it would lead to, I mean, a radical social upheaval.
    1:13:22 Yeah, it will. I mean, and I think some people need to be honest here. And this actually flies
    1:13:29 in the face of, I mean, one of the most common liberal critiques is this is going to raise prices.
    1:13:35 And yeah, I think it’s true. I think it’s worth it. But that’s easy for me to say. I’m making a good
    1:13:39 living. If you care about inflation, you voted for Donald Trump and your price of groceries or
    1:13:44 whatever goes up because of this immigration policy, I think that needs to be extremely well
    1:13:48 articulated by the president. And of course, he needs to think about it. The truth is,
    1:13:52 this is America right now is built on cheap labor. It’s not fair to the consumer.
    1:13:56 It’s not fair to the immigrants, the illegal immigrants themselves.
    1:14:01 And it’s not fair to the natural born citizen. The natural born citizen has his wages suppressed
    1:14:05 for competition by tens of millions of people who are willing to work at lower wages.
    1:14:10 They have to compete for housing, for social services. I mean, just even, you know, like basic
    1:14:15 stuff at a societal level, it’s not fair to them. It’s definitely not fair to the other person.
    1:14:19 Because I mean, whenever people say like, who’s going to build your houses or whatever,
    1:14:28 you’re endorsing this quasi legal system where, you know, uninsured laborers from Mexico,
    1:14:35 they have no guarantee of wages. They’re getting paid cash under the table. They are living, you
    1:14:39 know, tend to a room. They’re sending Mexican remittance payments back just so that their
    1:14:43 children can eat. I mean, that’s not really fair to that person either. So that’s the point.
    1:14:49 The point is, is that it will lead to a lot of social upheaval. But this gets to my kirstenome
    1:14:54 point as well as you need to be able to articulate a lot of what I just said here. Because if you
    1:15:00 don’t, it’s going to go south real quick. The way Vivek articulates this is that our immigration
    1:15:05 system is deeply dishonest. Like we don’t acknowledge some of the things he just said.
    1:15:09 Yeah, exactly. And he wants to make it honest. So if we don’t do mass deportation, at least you
    1:15:16 have to be really honest about the living conditions of illegal immigrants, about basically
    1:15:22 mistreatment of them. Yes, it’s true. I mean, you know, if you support mass illegal migration,
    1:15:28 you’re basically supporting tens of millions who are living lives as second class citizens.
    1:15:34 That’s not fair to them. I also think it’s deeply paternalistic. So there’s this idea
    1:15:40 that America has so ruined these Central American countries that they have no agency
    1:15:44 whatsoever. And they can never turn things around. What does that say about our confidence in them?
    1:15:47 You know, one of the things they always say, they’re, oh, they’re law abiding. They’re great
    1:15:51 people and all that. I agree. Okay, by and large, I’m not saying these are bad people.
    1:15:56 But I am saying like, if they’re not bad and they’re law abiding and they’re citizens and thoughtful
    1:16:01 and all that, they can fix their own countries. And they did in El Salvador. That’s the perfect
    1:16:06 example. Look at the dramatic drop in their crime rate. Bukella is one of the most popular leaders
    1:16:12 in all of South America. That is proof positive that you can change things around despite perhaps
    1:16:19 a legacy of U.S. intervention. So, you know, to just say this idea that, you know, because it’s
    1:16:23 America’s fault that they’re screwed up, it takes agency away from them. You know, another really
    1:16:27 key part of this dishonesty, this really gets to Springfield and the whole Haitian thing. Because
    1:16:32 everybody, you know, beyond the eating cats and dogs, everybody does not even acknowledge,
    1:16:36 because when they’re like, the Haitians are here legally, they need to actually think about the
    1:16:41 program. The program is called TPS. So, let me explain that. TPS is called temporary protected
    1:16:46 status. Note, what’s the first word on that? Temporary. What does that mean? TPS was developed
    1:16:52 under a regime in which, let’s say that there was a catastrophic, I think this is a real example,
    1:16:56 I think there was like a volcano or an earthquake or something, where people were granted TPS to
    1:17:00 come to the United States. And the idea was they were going to go back after it was safe.
    1:17:07 They just never went back. There are children born in the United States today who are literally the
    1:17:12 descent, who are adults, who are the descendants of people who are still living in the U.S. under
    1:17:17 TPS. That’s a perfect example of what Vivek says is dishonest. You know, you can’t mass
    1:17:23 de facto legalize people by saying that they’re here temporarily because of a program or because
    1:17:29 of something that happened in their home country. When the reality is that, for all intents and
    1:17:35 purposes, we are acknowledging them as full legal migrants. So, even the term migrant to these
    1:17:40 Haitians in Springfield makes no sense because they’re supposed to be here under TPS. That’s not,
    1:17:46 migrant implies permanency. So, the language is all dishonest. And people don’t want to tell you
    1:17:50 about the things I just said about chain migration. The vast majority of Americans don’t even know
    1:17:54 the immigration system works. They don’t understand what I just said about TPS. They don’t really
    1:17:58 understand the insanity of asylum law, where you can just literally throw up your hands and say,
    1:18:03 “I fear for my life,” and you get to live here for five years before your court date even happens.
    1:18:09 And, you know, by that time, get a work permit or whatever, you can, you know, get housing,
    1:18:13 like you just said, in substandard conditions. And you can kind of just play the game and wait
    1:18:17 before a deportation order comes. And even if it does, you never have to leave because there’s no
    1:18:21 ICE agent or whatever who’s going to enforce it. So, the whole system is nuts right now. We need
    1:18:27 complete systematic reform that burns it all to the ground. That said, sort of the image
    1:18:35 and the reality of a child being separated from their parents seems deeply un-American, right?
    1:18:40 Well, I mean, look, it gets, okay, so, you know, I’m not going to defend it, but I’ll just put it
    1:18:46 this way. Do you hate children? Yeah, see, that’s what I mean. Do you think twice whenever you see
    1:18:52 a drug addict who’s put in prison and their child is put in protective services? Nobody in America
    1:18:58 thinks twice about that, right? Right? So, I mean, well, that’s kind of screwed up. Well, we should
    1:19:03 think about why did we come to that conclusion? The conclusion was is that these adults willingly
    1:19:08 broke the law and pursued a path of life, which put them on a, you know, which put them on a
    1:19:14 trajectory where the state had to come in and determine that you are not allowed to be a parent
    1:19:18 basically to this child while you serve your debt to society. Now, child separation was very
    1:19:24 different. Child separation was also a product of extremely strange circumstances in US immigration
    1:19:30 law, where basically at the time, the reason why it was happening was because there was no way to
    1:19:36 prosecute people for illegal entry without child separation, because previous doctrine, I believe
    1:19:42 it’s called the Flores doctrine under some asylum law, people have to go check my work on this.
    1:19:46 But basically, the whole reason this evolved is a legal regime was because
    1:19:51 people figured out that if you bring a kid with you, because of the so-called Flores doctrine
    1:19:56 or whatever, that you couldn’t be prosecuted for illegal entry. So, it was a de facto way
    1:20:02 of breaking the law. And in fact, a lot of people were bringing children here who weren’t even theirs,
    1:20:07 who weren’t, they weren’t even related to or couldn’t even, you know, prove it, were bringing them to
    1:20:12 get around the prosecution for illegal entry. So, I’m not defending child separation. I think it was
    1:20:17 horrible or whatever. But, you know, if I give you the context, it does seem like a very tricky
    1:20:23 problem in terms of do we enforce the law or not? How are we able to do that? And the solution,
    1:20:30 honestly, is what Donald Trump did was remain in Mexico and then pursue a complete rewrite
    1:20:37 of the way that we have U.S. asylum law applied and of asylum adjudication and really just about
    1:20:43 enforcing our actual laws. So, when I try to explain to people is the immigration system right now
    1:20:49 is a patchwork of this deeply dishonest, such a great word, deeply dishonest system in which
    1:20:57 you use the system and set it up in such ways that illegal immigration is actually one of the
    1:21:03 easiest things to do to accomplish immigration to the United States. That is wrong. My parents had
    1:21:09 to apply. It wasn’t easy. Do you know in India, there’s a temple called the VISA temple where you
    1:21:13 walk 108 times around it, which is like a lucky number. And if you do it when you’re applying
    1:21:17 for a visa to the United States, all right, it costs a lot of money and it’s hard. People get
    1:21:21 rejected all the time. There’s billions of people across the world who would love to be able to come
    1:21:26 here. And many of them want to do so legally and they should have to go through a process. The
    1:21:30 current way it works is it’s easier to get here illegally than it is legally. I think that’s
    1:21:33 fundamentally right. It’s also unfair to people like us whose parents did come here legally.
    1:21:37 Can you still be on the case against mass deportation? What are the strongest arguments?
    1:21:42 The strongest argument would be that these people contribute to society, that these people,
    1:21:47 many of whom, millions of here have been here for many years, have children, natural born
    1:21:52 citizens because of birthright citizenship. It would require something that’s fundamentally
    1:21:57 inhumane and un-American, as you said, the idea of separating families across different borders
    1:22:05 simply because of what is a “small decision” of coming here illegally. And the best case,
    1:22:10 beyond any of this moral stuff, for no mass deportation is it’s good for business.
    1:22:17 Illegal immigration is great for big business. It is great for big agriculture. So if you want the
    1:22:23 lowest prices of all time, then yeah, mass deportation is a terrible idea. But first of all,
    1:22:30 very convincing. And second of all, you can’t just do mass deportation without also fixing
    1:22:35 the immigration system. Yes, exactly. And there are several pieces of legislation,
    1:22:39 HR2, that’s something that the Republicans have really coalesced around. It’s a border bill.
    1:22:43 I encourage people to go read it and see some of the different fixes to the U.S. immigration system.
    1:22:48 I’m curious whether it’ll actually pass or not. Remember, there’s a very slim majority of the
    1:22:52 House of Representatives for Republicans this time around. And people vote for a lot of things
    1:22:56 when they’re not in power, but when it’s actually about to become the law, we’ll see. There’s a
    1:23:01 lot of swing state people out there who may think twice before casting that vote. So I’m
    1:23:07 definitely curious to see how that one plays out. The other thing is, is that, like I just said,
    1:23:11 the biggest beneficiary of illegal immigration is big business. So if you think they’re going to
    1:23:16 take this one lying down, absolutely not. They will fight for everything that they have to keep
    1:23:22 their pool of cheap labor because it’s great for them. I think JD said a story, I think he was
    1:23:28 on Rogan about how he talked to a hotelier chain guy and he was just, he was like, “Yeah, it’s just
    1:23:32 terrible.” It’s like they would take away our whole workforce. And he was like, “Do you hear
    1:23:37 yourself in terms of what you’re talking or bragging about?” But that’s real. That’s a real
    1:23:44 thing. And that Tyson’s foods and all these other people, that’s another really sad part.
    1:23:50 What I mean by second-class citizenship is this presumption, first of all, that Americans think
    1:23:54 it’s too disgusting to process meat or to work in a field. I think anybody will do anything for
    1:24:00 the right wage, first of all. But second is, the conditions in a lot of those facilities are
    1:24:05 horrible and they’re covered up for a reason, not only in terms of the way that businesses,
    1:24:09 they actually conduct themselves, but also to cover up their illegal immigrant workforce.
    1:24:11 So, honestly, I think it could make things better for everything.
    1:24:14 You have studied how government works. What are the chances of mass deportation happens?
    1:24:18 Well, it depends how you define it. So, I mean, mass deportation could mean one million. I mean,
    1:24:21 nobody even knows how many people are here illegally. It could be 20 million. It could
    1:24:25 be 30 million. I’ve seen estimates of up to 30 million, which is crazy. That’s almost one-eleventh
    1:24:30 of the entire US population. What number do you think will feel like mass deportation? One million
    1:24:35 people? A million people is a lot. I mean, that’s a lot of people. That’s a lot. I mean, but the
    1:24:40 crazy part is that’s only one-twelfth of what Joe Biden led in the country. So, that’s one of those
    1:24:46 that just give people the scale of what it will all look like. Do I think mass deportation will
    1:24:52 happen? It depends on the definition. Will one million over four years? Yeah, I feel relatively
    1:24:59 confident in that. Anything over that, it’s going to be tough to say. Like I said,
    1:25:05 probably the most efficient way to do it is to have mandatory e-verify and to have processes in
    1:25:10 place where it becomes very difficult to live in the United States illegally, and then you will
    1:25:17 have mass self-deportation, and they will take the victory lap on that. But actual, like rounding
    1:25:24 millions of people up and putting them in deportation facilities and then arranging flights to, God
    1:25:29 knows, all across the globe, that’s a logistical nightmare. It also costs a lot of money. And
    1:25:37 don’t forget, Congress has to pay for all of this. So, we can have doge or we can have mass
    1:25:42 deportation. So, those two things are kind of irreconcilable, actually. There’s a lot of competing
    1:25:47 influences at play that people are not being real about at all. Yeah, that was one of the tensions
    1:25:54 I had talking to Vivek is he’s big on mass deportation and big on making government more
    1:25:59 efficient. And it really feels like there’s a tension between those two in the short term.
    1:26:04 Well, yes, absolutely. Also, I mean, this is a good segue. I’ve been wanting to talk about this.
    1:26:07 I am sympathetic to doge to the whole Department of Government Efficiency.
    1:26:12 How unreal is it that it’s called doge? Actually, with Elon, it’s quite real. I guess I’ve just,
    1:26:18 you know, I’ve accepted Elon as a major political figure in the US. But the doge committee,
    1:26:24 the Department of Government Efficiency, is a non-statutory agency that has zero funding
    1:26:31 that Donald Trump says will advise OMB, the Office of Management and Budget. Now, two things. Number
    1:26:38 one is, as I predicted, doge would become a “blue ribbon commission.” So, this is a non-statutory
    1:26:43 blue ribbon commission that has been given authority to Vivek Ramaswamy and to Elon Musk.
    1:26:49 Secondary, their recommendations to government should be complete by July of 2026, according
    1:26:54 to the press release released by Trump. First of all, what that will mean is they’re probably
    1:26:57 going to need private funding to even set all this up. That’s great. Not a problem for Elon.
    1:27:02 But you’re basically going to be able to have to commission GAO reports, Government Accountability
    1:27:08 Office, and other reports and fact-finding missions across the government, which is fantastic.
    1:27:12 Trump can even empower you to go through every agency and to collect figures.
    1:27:18 None of it matters one iota if Republican appropriators in the House of Representatives
    1:27:22 care what you have to say. Historically, they don’t give a shit what the Executive Office has to say.
    1:27:27 So, every year, the President releases his own budget. It used to mean something,
    1:27:32 but in the last decade or so, it’s become completely meaningless. The House Ways and Means
    1:27:37 Committee and the People’s House are the ones who originate all appropriations and set up spending.
    1:27:45 So, that’s one. Doge in and of itself has no power. It has no ability to compel or force people to do
    1:27:51 anything. Its entire case for being, really, if you think about it mechanically, is to try and
    1:27:56 convince and provide a report to Republican legislators to be able to cut spending. So,
    1:28:02 that’s that. Now, we all know how Congress takes to government reports and whether they get acted
    1:28:08 on or not. So, that’s number one. Number two is the figures that Elon is throwing out there.
    1:28:12 Again, I want to give them some advice because people do not understand federal government
    1:28:18 spending. The absolute vast majority of government spending is entitlement programs like Social
    1:28:23 Security and Medicare, which are untouchable under Donald Trump and their most politically
    1:28:28 popular programs in the world, and military spending. Discretionary non-military spending,
    1:28:33 I don’t have the exact figure in front of me, is a very, very small part of the federal budget.
    1:28:40 Now, within that small slice, about 90% of that eight is bipartisan and is supported by like
    1:28:46 everybody. NOAA, you know the hurricane guys? Like people like that. You know, people who are flying
    1:28:51 into the eye of the hurricane, people who are government inspectors of X, Y, and Z. The parts
    1:28:57 that are controversial that you’re actually able to touch, things like welfare programs like food
    1:29:02 stamps is an extraordinary small slice. So, what’s the number you put out there? $5 trillion?
    1:29:07 Something like that? There is only one way to do that. And realistically, under the current thing,
    1:29:13 you have to radically change the entire way that the Pentagon buys everything. And I support that,
    1:29:19 but I just want to be very, very clear. But I haven’t seen enough energy around that. There’s
    1:29:25 this real belief in the US that we spend billions on all of these programs that are doing complete
    1:29:31 bullshit. But like the truth, the absolute vast majority of it is military spending and entitlements.
    1:29:33 Trump has made clear entitlements are off the table. It’s not going to happen. So,
    1:29:39 the way that you’re going to be able to cut realistically military spending over a decade
    1:29:47 long period is to really change the way that the United States procures military equipment,
    1:29:51 hands out government contracts. Elon actually does have the background to be able to accomplish
    1:29:55 this because he has had to wrangle with SpaceX and the bullshit that Boeing has been pulling
    1:30:01 for over a decade. But I really want everybody’s expectations to be very set around this. Just
    1:30:08 remember, non-statutory, blue ribbon. So, if he’s serious about it, I just laid out all of these
    1:30:12 hurdles that he’s going to have to overcome. And I’m not saying him and Vivek aren’t serious dudes,
    1:30:17 but you got to really know the system to be able to accomplish this. So, you just laid out the
    1:30:24 reality of how Washington works to give the counterpoint that I think you’re probably also
    1:30:29 rooting for is that one, as a statement like Peter Thiel said, don’t bet against Elon. Sure.
    1:30:35 One of the things that you don’t usually have with blue ribbon is the kind of megaphone that
    1:30:45 Elon has. True. And I would even set the financial aspects aside, just the influence he has with
    1:30:52 the megaphone, but also just with other people who are also really influential. I think that can
    1:30:57 have real power when backed by sort of a populist movement. I don’t disagree with you, but let me
    1:31:02 give you a case where this just failed. So, Elon endorsed who for Senate Majority Leader,
    1:31:08 Rick Scott, right? Who got the least amount of votes in the US Senate for GOP leader, Rick Scott?
    1:31:13 John Thune is the person who got it. Now, the reason I’m bringing that up, one of my favorite
    1:31:19 books, Master of the Senate, by Robert Caro, part of the LBJ series, the Senate as an institution,
    1:31:26 it reveres independence. It reveres, I mean, the entire theory of the Senate is to cool down
    1:31:31 the mob that is in the House of Representatives and to deliberate. That’s its entire body.
    1:31:37 They are set up to be immune from public pressure. Now, I’m not saying they can’t be pressured,
    1:31:42 but that example I just gave on Rick Scott is a very important one of he literally endorsed somebody
    1:31:47 for leader. So did Tucker Carlson. So did a lot of people online. And only 13 senators voted for
    1:31:52 Rick Scott. The truth is, is that they don’t care. Like they’re set up where they’re marginally popular
    1:31:56 in their own home states. They’ll be able to win their primaries. And that’s all they really need
    1:32:01 to do to get elected. And they have six year terms, not even up for four years. So will Elon
    1:32:05 still be interested in politics six years from now? That’s a legitimate question for a Republican
    1:32:09 senator. So maybe he could get the House of Representatives to sign off maybe on some of his
    1:32:15 things. But there’s no guarantee that the Senate is going to agree with any of that. There’s a story
    1:32:21 that Caro tells in the Master of the Senate book, which I love, where Thomas Jefferson was in Paris
    1:32:27 during the writing of the Constitution. And he asked Washington, he said, “Why did you put in a
    1:32:34 Senate, a bicameral legislature?” And Washington said, “Why did you pour your tea into a saucer?”
    1:32:40 And Jefferson goes to cool it. And Washington says, “Just so.” That’s what I had to explain.
    1:32:46 He was a man of very few words. He was a brilliant man. Okay. So you actually outlined the most likely
    1:32:51 thing that’s going to happen with Doja as it hits the wall of Washington.
    1:32:58 What is the most successful thing that can be pulled off? The most successful thing they could
    1:33:06 do is right now, I think they’re really obsessed with designing cuts and identifying cuts. I would
    1:33:13 redesign systems, systems of procurement. I would redesign the way that we have processes in place
    1:33:19 to dispense taxpayer dollars, because the truth is that appropriations itself, again,
    1:33:26 are set by the United States Congress. But the way that those appropriations are spent by the
    1:33:32 government, the executive has some discretionary authority. So your ability as the executive
    1:33:37 to be a good steward of the taxpayer money and to redesign a system, which actually I think Elon
    1:33:41 could be good at this, and Vivek too, in terms of their entrepreneurial spirit, is the entire
    1:33:45 Pentagon procurement thing. It needs to be burned to the ground. Number one, it’s bad for the Pentagon.
    1:33:52 It gives them substandard equipment. It rewards very old weapons systems and programs and thinking
    1:33:57 that can be easily defeated by people who are studying that for vulnerabilities. The perfect
    1:34:04 example is all of this drone warfare in Ukraine and in Russia. I mean, drone warfare costs almost
    1:34:10 nothing, and yet drone swarms and hypersonic missiles pose huge dangers to U.S. systems,
    1:34:17 which cost more than hundreds of billions of dollars. So my point is that giving nimble procurement
    1:34:23 and systemic change in the way that we think about executing the mission that Congress does give you
    1:34:28 actually could save the most amount of money in the long run. That’s where I would really focus in on.
    1:34:35 The other one is, counter to everything I just said, is maybe they will listen. Maybe the Republicans
    1:34:41 are like, “Yeah, okay, let’s do it.” The problem again, though, is swing state people who need
    1:34:45 to get reelected, they need to do one thing. They need to deliver for their district. They need to
    1:34:51 run on stuff, and nobody has ever run on cutting money for your state. They have run on bringing
    1:34:57 money to your state, and that’s why earmarks and a lot of these other things are extraordinarily
    1:35:02 popular in Congress is because it’s such an easy way to show constituents how you’re working for
    1:35:09 them whenever it does come re-election time. So it’s a very difficult system. And I also want
    1:35:14 to tell people who are frustrated by this, I share your frustration, but the system is designed to
    1:35:19 work this way. And for two centuries, the Senate has stood as a bulwark against literally every
    1:35:26 popular change. And because of that, it’s designed to make sure that it’s so popular for long enough
    1:35:30 that it has to become inevitable before the status quo can change. That’s really,
    1:35:35 really frustrating, but you should take comfort in that it’s always been that way. So it’s been okay.
    1:35:39 Well, as I’ve learned from one of the recommendations of the age of acrimony,
    1:35:44 as I feel embarrassed that I didn’t know that senators used to not be elected.
    1:35:49 What a crazy system, huh? Yeah. I mean, many of the things we take for granted now,
    1:35:59 as defining our democracy, was kind of invented, developed after the Civil War in the 50 years
    1:36:03 after the Civil War. Absolutely correct. Age of acrimony, oh my God, I love that book. I cannot
    1:36:09 recommend it enough. It is so important. And one of the biggest mistakes that Americans make is that
    1:36:14 we study periods where greatness happened, but we don’t often study periods where nothing happened,
    1:36:20 or where really bad shit happened. You know, we don’t spend nearly enough. Americans know about
    1:36:24 FDR. They don’t really know anything about the depression or how we got there. What was it like
    1:36:30 to be alive in the United States in 1840, right? Nobody thinks about that, really, because it’s
    1:36:34 kind of an in-between time in history. There are people who lived their entire lives, who were born,
    1:36:39 who had to live through those times, who were just as conscientious and intelligent as you and I are,
    1:36:43 and we’re just trying to figure shit out, and things felt really big. So the age of acrimony
    1:36:47 is a time where it was almost completely ignored outside of the Gilded Age aspect. But like you
    1:36:53 just said, it was a time where progressive reform of government and of the tension between civil
    1:37:01 rights, extraordinary wealth, and democracy, and really the reigning in of big business,
    1:37:06 so many of our foundations happened exactly in that time. And I take a lot of comfort from that
    1:37:11 book because one of the things I learned from the book is that voter participation is highest
    1:37:16 when people are pissed off, not when they’re happy. And that’s such a counterintuitive thing,
    1:37:21 but voter participation goes down when the system is working. So 2020, right? I think we can all
    1:37:25 agree it was a very tense election. That’s also why it had the highest voter participation ever.
    1:37:31 2024, very high rates of participation, same thing. People are pissed off, and that’s actually what
    1:37:36 drives them to the vote. But something that take comfort in that is that people being pissed off
    1:37:40 and people going out to vote, it actually does have major impact on the system. Because otherwise,
    1:37:46 the status quo is basically allowed to continue. And so, yeah, like you just said, I mean, direct
    1:37:51 election of senators, I mean, there are probably people alive today who were born when there was
    1:37:56 no direct election of senators, which is an insane thing to think about. I mean, there’d be almost
    1:38:02 a hundred or so. But the point is, is that at that time, it was so deeply corrupt. And it was one
    1:38:08 where the quasi aristocracy from the early days leading into the Gilded Age were able to enforce
    1:38:13 their will upon the people. But you can take comfort in that that was one of those areas where
    1:38:18 Americans were so fed up with it, they changed the constitution and actually forced the aristocrats
    1:38:24 in power to give their own power. It’s like our version of when they flipped power and took away
    1:38:29 the legislative power of the House of Lords in the UK. I just think that’s amazing. And such a cool
    1:38:36 thing about our country in the UK too. It’s the continued battle between the people and the elite,
    1:38:44 right? And we should mention not just the direct election of senators, but the election of candidates
    1:38:50 for a party. Yes. That was also invented. It used to be that the quote unquote party bosses,
    1:38:57 I say that with a half a chuckle, chose the candidate. Yeah, the whole system is nuts.
    1:39:00 The way that we currently experience politics is such a modern invention.
    1:39:06 With a little asterisk with Kamala Harris, but yeah, good point. But that was actually
    1:39:10 more of a mean reversion, right? We’re living in an extraordinarily new era where we actually have
    1:39:16 more input than ever on who our candidates are. It used to be, this is crazy. So the conventions
    1:39:20 have always took in place two months before, right? Imagine a world where you did not know who
    1:39:24 the pro nominee was going to be before that convention. And the nominee literally was decided
    1:39:30 at that convention by those party bosses. Even crazier, there used to be a standard in American
    1:39:36 politics where presidents did not directly campaign. They in fact did not even comment
    1:39:40 about the news or mentioned their opponents names. They were, they would give speeches from their
    1:39:46 doorstep, but they, it was unseemly for them to engage in direct politics. You would not get a
    1:39:52 Bernie Sanders, you would not get a Donald Trump, Bill Clinton. I mean, basically every
    1:39:56 president from John F. Kennedy onwards has been a product of the new system. Every president prior
    1:40:01 to that has been much more of the older system. There was a in between period post FDR where
    1:40:07 things were really changing, but the primary system itself had its first true big win under
    1:40:12 John F. Kennedy. I think that the lesson from that is there’s a collective wisdom to the people,
    1:40:18 right? I think so. I think it works. Yeah. I mean, well, okay, I’ll steal man it. We had some great
    1:40:24 presidents in the party boss era. FDR was a great president. FDR was the master of
    1:40:28 coalitional politics of his ability. In fact, what really made him a genius was his ability to get
    1:40:34 this overthrow, the support of a lot of the corruption and the elite Democrats to take
    1:40:40 control in there at the convention and then combine his personal popularity to fuse all systems of
    1:40:46 power where he had the, he had the, he had the elites basically under his boot because he was
    1:40:51 the king and he used his popular power and his support from the people to be able to enforce
    1:40:58 things up and down. I mean, you know, even in the party boss era, we would have no, a lot of the,
    1:41:02 a lot of the people we revere really came out of that people like Abraham Lincoln.
    1:41:07 I mean, I don’t think Abraham Lincoln would have won a party primary in 1860. There’s no chance. He
    1:41:15 won, he won, luck, thank God, from an insane process in the 1860 Republican convention. People
    1:41:19 should go read about that because that was wild. I think we were this close to not having Lincoln
    1:41:24 as president. And yeah, I mean, there, Teddy Roosevelt, there’s so many that I could point to
    1:41:28 who made great impacts on history. So the system does find a way to still produce good stuff.
    1:41:34 That was a kind of beautiful diversion from the Doge discussion. If we’re going to turn
    1:41:40 briefly to Doge. Sure. So we kind of talked about cost cutting, but there’s also increasing the
    1:41:46 efficiency of government, which you also kind of talked about procurement. So maybe we can throw
    1:41:53 into the pile the 400 plus federal agencies. So let’s take another perspective on what success
    1:42:01 might look like. So like radically successful Doge, would it basically cut a lot of federal
    1:42:06 agencies? Probably combine. Combine. Okay, so I can give great examples of this because I have a
    1:42:12 great insight. Like for each agency will often use different like payroll systems. They’ll have
    1:42:18 different internal processes, right? That makes no sense. And it’s all because it’s antiquated.
    1:42:25 Now, everybody always talks about changing it. But there are a lot of like party interests about
    1:42:30 why certain people get certain things. The real problem with government, the people like us who
    1:42:34 are private and like, for example, when you want to do something, you can just do it. So I was
    1:42:42 listening to a really interesting analysis about law enforcement and the military. So I think the
    1:42:48 story was that the military was assign national guard guys were assigned to like help with the
    1:42:53 border. And they were trying to provide, I think it was translation services to people at border
    1:42:58 patrol. But somebody had to come down and be like, Hey, this has got to stop. According to US Code X,
    1:43:04 Y, and Z, the United States military cannot help with law enforcement, you know, abilities here.
    1:43:10 And so even though that makes absolutely no sense, because they’re all work, there are literal legal
    1:43:14 statutes in place that prevent you from doing the most efficient thing possible. So for some reason,
    1:43:21 we have to have a ton of Spanish speakers in Southcom, you know, in the US command that is
    1:43:25 responsible for South America, who literally cannot help with a crisis at the border. Now,
    1:43:31 maybe you can find some legal chicanery to make that work. But man, you got to have an attorney
    1:43:34 general who knows what he’s doing. You need a White House counsel, you need to make sure that
    1:43:39 shit stands up in a court of law. I mean, it’s not so simple. Whereas, let’s say, you know,
    1:43:42 you have a software right here, and you want to get a new software, you can just do it. You can
    1:43:47 hire whoever you want. When you’re the government, there’s a whole process you got to go through
    1:43:53 about bidding. And it just takes forever. And it is so inefficient. But unfortunately,
    1:44:00 the inefficiency is really derivative of a lot of legal statutes. And that is something that,
    1:44:05 yeah, again, actually, you know, radically successful doge quote unquote, would be
    1:44:12 study the law and then change it. Like figure instead of cost cutting, like cut this program
    1:44:17 or whatever, like I just said about why do different systems use payroll, just say that
    1:44:24 you can change the statute under which new software can be updated, let’s say, after 90 days.
    1:44:27 You know, I’ve heard stories of people who work for the government who still have like IBM
    1:44:32 mainframe that they’re still in 2024, that they’re still working because those systems
    1:44:36 have never been updated. There’s also a big problem with a lot of this clearance stuff.
    1:44:40 That’s where a lot of inefficiency happens, because a lot of contractors can only work
    1:44:45 based upon previous clearance that they already got achieving a clearance is very expensive,
    1:44:49 it’s very lengthy process. I’m not saying it shouldn’t be talking about security clearance,
    1:44:54 but it does naturally, you know, create a very small pool that you can draw some contracts
    1:45:00 fund. And I even mean stuff like, like the janitor at the Pentagon needs a security service, right?
    1:45:06 So clearance, so there’s only like five people who can even apply for that contract. Well,
    1:45:11 naturally, in an internal monopoly like that, he’s going to jack his price up because he literally
    1:45:17 has a moat around his product. Whereas if you or I are hiring a jant, whatever, anybody for
    1:45:21 anything, that type of credentialism and legal regime, it doesn’t matter at all. So there are
    1:45:26 million problems like this that people in government run into. And that is what I would see as the
    1:45:32 most successful. You know, paperwork slows everything down, and it feels impossible to break
    1:45:37 through that in a sort of incremental way. It’s so hard. It feels like the only way to do it
    1:45:45 is to literally shut down agencies in some kind of radical way, and then build up from
    1:45:53 scratch. Of course, as you highlight, that’s going to be opposed by a lot of people within
    1:45:56 government. Yeah. Well, historically, there’s only one way to do it. And it’s a really bad
    1:46:04 answer, war. War. Yeah. So I was going to say, basically, you have the kind of consensus where,
    1:46:08 okay, all this stupid bureaucratic bullshit we’ve been doing, we need to like put that
    1:46:14 shit aside, get the fuck out of here, we need to win a war. So like all the paperwork, you know,
    1:46:20 all the lawyers go leave. Yeah, exactly. No, but I want people to really understand that,
    1:46:26 you know, up until 1865 or 1860, I forget the exact year, we didn’t even have national currency.
    1:46:33 And then we were like, well, we need a greenback. And prior to that, people would freak out if we
    1:46:37 were talking about having national currency, greenback, backed by the, you know, the US government
    1:46:42 and all that, not even a question, passing like two weeks in the US Congress, an income tax eventually
    1:46:48 went away, but not even in the realm of possibility. And they decided to pass it. Same thing after
    1:46:53 World War One. And you think about how World War Two, I mean, World War Two just fundamentally
    1:46:58 changed the entire way the United States government works. Even the DHS, which I mentioned earlier,
    1:47:04 the Department of Homeland Security, it didn’t even exist prior to 9/11. It was done as response
    1:47:09 to 9/11 to coalesce all of those agencies under one branch to make sure that nothing like that
    1:47:16 could ever happen again. And so historically, unfortunately, absolute shitshow disaster war
    1:47:22 is the only thing that moves and throws the paperwork off the table. And I wish I wasn’t
    1:47:28 such a downer, but I’ve just, I’ve both, I’ve read too much, and I’ve had enough experience now
    1:47:35 in Washington to just see how these dreams get crushed instantly. And I wish it wasn’t that way.
    1:47:39 I mean, it’s a cool idea. And I want people who are inspired, who are getting into politics,
    1:47:43 to think that they can do something. But I want them to be realistic too. And I want them to know
    1:47:46 what they’re signing up for whenever they do something like that. And the titanic amount of
    1:47:50 work it is going to take for you to be able to accomplish something. Yeah, but I’ve also
    1:47:56 heard a lot of people in Silicon Valley laughing when Elon rolled in and fired 90% of Twitter.
    1:47:58 Here’s this guy, Elon Musk. You are absolutely correct.
    1:48:02 Knows nothing about running a social media company. Of course, you need all these
    1:48:07 servers. Of course, you need all these employees. And nevertheless, the service keeps running.
    1:48:11 He figured it out. And you have to give him eternal credit for that. I guess the difference is
    1:48:17 no, there was no law that he could fire him. You know, there was no, there was no like,
    1:48:20 like at the end of the day, he owned the company. You know, he had total discretion of his ability
    1:48:26 to move. So I’m not even saying his ideas are bad. I’m saying that the ability that’s that
    1:48:32 what makes him such an incredible visionary entrepreneur, its movement, its difference
    1:48:38 at times to the right people, but also the knowledge of every individual piece of the machine
    1:48:43 and his ability to come in and to execute his full vision at any time and override
    1:48:47 any of the managers. So I talked previously about the professional managerial class and the
    1:48:51 managerial revolution. Elon is one of the few people who’s ever built a multi-billion-dollar
    1:48:56 company who has not actually fallen victim to the managerial revolution and against
    1:49:00 entrepreneurship and innovation that happens there. There are very few people who can do it.
    1:49:06 Elon, Steve Jobs, but you know, what do we learn is that unfortunately after Steve died,
    1:49:10 Apple basically did succumb to the managerial revolution and has become like the product,
    1:49:15 you know, they make all their money by printing services and making it impossible to leave
    1:49:21 this ecosystem as opposed to building the most cool product ever. As much as I love my Vision Pro,
    1:49:25 don’t get me wrong. I think you just admitted that you’re part of a cult. I know, I literally am.
    1:49:31 I am. I fully admit it. Yeah. I miss Steve. The grass is green on the other side. Come join us.
    1:49:38 Okay. Whether it’s Elon or somebody else, what gives you hope about something like a radical
    1:49:43 transformation of government towards efficiency, towards being more slim?
    1:49:49 What gives you hope that that would be possible? Well, I wouldn’t put it that way. I don’t think
    1:49:54 slimness in and of itself is a good thing. What I care about is the relationship to people in its
    1:50:00 government. So the biggest problem that we have is that we have a complete loss of faith in all of
    1:50:06 our institutions. And I’ve really encouraged people. I don’t think people can quite understand
    1:50:11 what the relationship between America and its government was like after World War II and after
    1:50:18 FDR. Like 90% of the people trusted the government. That’s crazy. Like when the president said
    1:50:24 something, they were like, okay, he’s not lying. Think about our cynical attitude towards politicians
    1:50:29 today. That is largely the fault of Lyndon Johnson and of Richard Nixon and that entire
    1:50:34 fallout period of Vietnam. Vietnam in particular really broke the American character and its
    1:50:38 ability and its relationship with government. And we’ve never recovered faith in institutions
    1:50:44 ever since that. And it’s really unfortunate. So what makes me hopeful at least this time is
    1:50:49 anytime a president wins a popular vote and an election is they have the ability to reset
    1:50:56 and to actually try and build something that is new. And so what I would hope is that this is
    1:51:02 different from the first Trump administration in which the mandate for Donald Trump is actually
    1:51:08 carried out competently. Yes, he can do his antics which got him elected. At this point,
    1:51:14 we can’t deny it. McDonald’s thing is hilarious. It’s funny. It is. People love it. People like
    1:51:18 the podcasting. People like… Garbage truck. The garbage truck. Yeah, exactly. They like the
    1:51:23 stunts. And he will always excel and he will continue to do that. There are policy and other
    1:51:28 things that he can and should do like the pursuit of no war, like solving the immigration question
    1:51:35 and also really figuring out our economy, the way that it currently runs and changing it so that
    1:51:42 the actual American dream is more achievable. And housing is one of the chief problems that we have
    1:51:47 right now. The real thing is Donald Trump was elected on the backs of the working man. I mean,
    1:51:51 it’s just true. Households under $100,000 voted for Donald Trump. Maybe they didn’t do so for
    1:51:56 economic reasons. I think a lot of them did for economic. A lot of them did for immigration,
    1:52:01 for cultural, but he still owed them something. And there is… I would hope that they could
    1:52:07 carry something out in that respect that is not a similar continuation and chaotic vibe of the
    1:52:13 first time where everything felt like it exploded any time with staffing, with even his policy or
    1:52:18 what he cared about or his ability to pursue. And a lot of that does come back to personnel.
    1:52:23 So I’m concerned in some respects. I’m not thrilled in some respects. I’m happy in some
    1:52:27 respects, but it remains to be seen. How he’s going to do it.
    1:52:32 To the degree it’s possible to see Trumpism and MAGA as a coherent ideology. What do you think
    1:52:39 are the central pillars of it? MAGA is a rejection of cultural elitism. That’s what I would say.
    1:52:45 Cultural elitism, though, has many different categories. Immigration is one, right? Is that
    1:52:50 cultural elitism and cultural liberalism has a fundamental belief that immigration in and of
    1:52:55 itself is a natural good at any and all levels, that all immigrants are like replacement level,
    1:53:00 that there is no difference between them. Cultural elitism in a foreign policy context
    1:53:06 comes back to a lot of that human rights, democracy stuff that I was talking about earlier,
    1:53:11 which divorces American values from American interests, and says that actually American
    1:53:18 values are American interests. Cultural elitism and liberalism leads to the worship of the Postal
    1:53:23 Rights Era bureaucracy that I talked about from those two books of DEI, quote unquote Woke,
    1:53:31 and of progressive social ideology. So I would put all those together as ultimately what MAGA
    1:53:38 is. It is a screw you. I once drove past, it was in rural Nevada and I was driving,
    1:53:43 and I drove past the biggest sign I’ve ever seen, political sign to this J, and it’s just,
    1:53:48 it was in 2020, it just said, “Trump, fuck your feelings.” And I still believe that it’s the most
    1:53:55 coherent MAGA thing I’ve ever seen because everyone’s always like, “How can a neocon
    1:54:01 and Tulsi Gabbard and RFK and all these other people, how can they all exist under the same
    1:54:05 umbrella?” And I’m like, it’s very simple. All of them have rejected the cultural elite
    1:54:11 in their own way, certainly, but they’ve arrived at the same place. It’s an umbrella,
    1:54:15 and it’s an umbrella fundamentally, which has nothing to do with the status quo
    1:54:20 and with the currently established cultural elite. That doesn’t mean they’re not elite,
    1:54:23 and they’re not rich in their own regards. That doesn’t mean they don’t disagree,
    1:54:27 but that’s the one thing that unites the entire party. And so that’s the way I would put it.
    1:54:33 Anti-cultural elite, is that synonymous with anti-establishment, so basic distrust of all
    1:54:39 institutions? Is elitism connected to institutions? Yes, absolutely, because elites are the ones who
    1:54:45 runs our institutions. That said, anti-establishment is really not the right word, because there are
    1:54:50 a lot of left-wingers who are anti-establishment. They are against that, but they’re not anti-cultural
    1:54:57 leftism, and that’s the key distinction between MAGA and left populism. Left populism basically does
    1:55:03 agree. They agree with basic conceits. Racism is one of the biggest problems facing America.
    1:55:08 They’re one of the ways that we would fix that is through class-oriented economic programs,
    1:55:14 in order to address that. But we believe in, I don’t know, reparations as a concept. It’s just
    1:55:19 more about how we arrive there. Whereas in MAGA, we would say, no, we actually don’t think that at
    1:55:24 all. We think we’ve evolved past that, and we think that the best way to fix it is actually
    1:55:29 similar policy prescription, but the mindset matters a lot. The real distinction between
    1:55:36 MAGA and left populism really is on culture. Trans, in particular, orientation about, actually,
    1:55:41 immigration may be the biggest one, because if you look at the history of Bernie Sanders,
    1:55:47 Bernie Sanders was a person who railed against open borders and against mass migration for years.
    1:55:53 There are famous interviews of him on YouTube with Lou Dobbs, who’s one of the hardcore immigration
    1:55:57 guys, and they agree with each other. Lou is like, Bernie’s one of the only guys out there.
    1:56:03 Bernie, at the end of the day, he had to succumb to the cultural left and its changing attitudes
    1:56:09 on mass immigration. There are some famous clips from 2015 in a Vox interview that he gave,
    1:56:13 where he started, I think he started talking about how open borders is a Koch brothers libertarian
    1:56:19 concept, right? Because Bernie is basically of a European welfare state tradition. European
    1:56:25 welfare states are very simply understood. We have high taxes, high services, low rates of
    1:56:30 immigration, because we have high taxes and high services. We have a limited pool of people who
    1:56:34 can experience and take those services. He used to understand that. He changed a lot of his attitude.
    1:56:39 Bernie also, I will say, look, he’s a courageous man and a courageous politician.
    1:56:45 You know, as late as 2017, he actually endorsed a pro-life candidate, because he said that that
    1:56:50 pro-life candidate was pro-worker. At the end of the day, I care about pro-worker policy.
    1:56:54 He took a ton of shit for it. I don’t think he’s done it since. The sad part that’s really
    1:57:04 happened is that a lot of left populist agenda and other has become subsumed in the hysteria around
    1:57:08 cultural leftism, wokeism, whatever the hell you want to call it. Ultimately, that cultural
    1:57:14 leftism was the thing that really united the two wings of that party. That’s really why MAGA is
    1:57:19 very opposed to that. They’re really not the same, but the left populist can still be anti-establishment.
    1:57:24 That’s the key. It’s interesting to think of the left cultural elite
    1:57:30 subsuming, consuming Bernie Sanders, the left populist. You think that’s what happened?
    1:57:35 That’s what I would say. What do you think happened in 2016 with Bernie? Is there a possible
    1:57:42 future where he would have won? You and Chris wrote a book on populism in 2020. From that
    1:57:48 perspective, just looking at 2016, if he rejected wokeism at that time, by the way,
    1:57:54 that would be pretty gangstered during 2016. Would he have, because I think Hillary went
    1:57:58 towards the left more, right? Am I remembering that correctly?
    1:58:06 It was a very weird time. Yes and no. It wasn’t full-on BLM mania like it was in 2020,
    1:58:13 but the signs were all there. The great awakening was in 2014. I know it’s a ridiculous term.
    1:58:19 I love it. Please keep saying it. Just to give the origin, the great awakening is about the great
    1:58:24 religious revival in the United States. Because wokeism is a religion, that’s a common refrain,
    1:58:28 they’re like, “The great awakening is a really good term.” Thank you for explaining the joke.
    1:58:34 The great awakening is basically when racial attitudes amongst college-educated whites
    1:58:37 basically flipped on its head. There are a variety of reasons why this happened.
    1:58:43 I really believe that Tanahisi Coates’ case for reparations in the Atlantic is one of those.
    1:58:48 It radicalized an entire generation of basically white college-educated women
    1:58:53 to think completely differently on race. It was during Ferguson and then it also happened immediately
    1:59:00 after the Trayvon Martin case. Those two things really set the stage for the eventual BLM takeover
    1:59:05 of 2020, but fundamentally what they did is they changed racial attitudes amongst college-educated
    1:59:13 elites to really think in a race-first construct. Worse is that they were rejected in 2016 at the
    1:59:18 ballot box by the election of Donald Trump. In response, they ramped it up because they believed
    1:59:23 that that was the framework to view the world, that people voted for Trump because he was racist
    1:59:29 and not for a variety of other reasons that they eventually did. The point around this on
    1:59:34 question of whether Bernie could have won in 2016, I don’t know. Crystal seems to think so.
    1:59:40 I’m skeptical. I’m skeptical for a variety of reasons. I think the culture is honestly one of
    1:59:46 them. One of Trump’s core issues in 2016 was immigration, and Bernie and him did not agree
    1:59:52 on immigration. If immigration, even if people did support Bernie Sanders and his vision for
    1:59:56 working-class people, the debates and the understanding about what it would look like,
    2:00:00 like a healthcare system which literally would pay for illegal immigrants,
    2:00:04 I think he would have gotten killed on that. But I could be wrong. I honestly,
    2:00:09 I will never know what that looked like. Let me reference you from earlier in the
    2:00:15 conversation with FDR. It’s not the policy. I think if he went more anti-establishment
    2:00:22 and more populist as opposed to trying to court, trying to be friendly with the DNC.
    2:00:30 Yeah. That’s a good counterfactual. Nobody will really know. Look, I have a lot of love for the
    2:00:35 Bernie 2016 campaign. He has a great ad from 2016 called America. You should watch it. It’s a great
    2:00:40 ad. That’s another very interesting thing. It’s unapologetically patriotic. That is not something
    2:00:46 that you see in a lot of left-wing circles these days. He understood politics at a base level
    2:00:52 that a lot of people did not. But Bernie himself and then a lot of the Bernie movement was basically
    2:00:58 crushed by the elite Democratic Party for a variety of reasons. They hated them. They
    2:01:04 attacked Joe Rogan for even having him on and for giving him a platform that was ridiculous,
    2:01:09 obviously backfired in their face, which is really funny. But there are a lot of
    2:01:14 million examples like that. When they attacked Bernie for endorsing a pro-life politician,
    2:01:20 he never did it again. They attacked Bernie for having Bernie bros, people online, the bros who
    2:01:26 were Super Bro Bernie. It was his fault. His supporters would say nasty things about Elizabeth
    2:01:31 Warren. He would defend straight himself and be like, “Yes, I’m sorry. Please, my bro.” He was like,
    2:01:38 “Stop that.” I think his biggest problem is he never went full Trump. He didn’t go. He kept saying
    2:01:43 “sorry.” Yeah, I agree. I totally agree. Actually, in 2020, I did a ton of analysis on this at the
    2:01:48 time. He would always do stuff like, “Joe Biden, my friend.” It’s like, “No, he’s not your friend.
    2:01:53 He stands for everything that you disagree with. Everything.” He’d be like, “Yeah, he’s a nice guy,
    2:01:59 but he’s not my friend.” He would always be like, “Joe and I are great friends, but we have a small
    2:02:04 disagreement on this.” Like you just said, in terms of going full Trump, they wanted to see Trump up
    2:02:09 there humiliating all of the GOP politicians that they didn’t trust anymore. That’s what people really
    2:02:15 wanted. But the other side of this is that the Democratic base in 2020 was very different than
    2:02:23 2016, because by 2020, they full-on had TDS, and they were basically like, “We need to defeat Trump
    2:02:30 at all costs. We don’t give a shit what your name is. Bernie, Biden, whatever. Whichever of you is
    2:02:35 going to be at best defeat Trump, you get the knob.” 2016 is different because they didn’t full-on
    2:02:41 have that love and necessity of winning. By the way, this is a strategic advantage that the Democrats
    2:02:46 have. Democrats just care about winning. The current base of the party, all they want to do
    2:02:50 is win. Republican base, they don’t give a shit about winning. They just love Trump. So it’s nice
    2:02:58 to win, but one of those where they will express their id for what they really want. Now, it’s
    2:03:02 worked out for them because it turns out that’s a very palpable political force. But one of the
    2:03:09 reasons why you won’t see me up here doing James Carville 40 more years is there is a law of something
    2:03:16 called thermostatic public opinion, where the thermostat, it changes a lot whenever you actually.
    2:03:20 So when you have a left-wing president in power, the country goes right. When you have a right-wing
    2:03:25 president in power, the country goes left. Amazing, right? You can actually look at a graph of economic
    2:03:31 attitudes from the two months where Joe Biden became president after Donald Trump. So Republicans,
    2:03:35 Trump was president in the last year in office. Economy is great. Two months later, the economy
    2:03:41 is horrible. That is a perfect example of thermostatic opinion. And I’m not counting these
    2:03:47 Democrats out. 2004, George W. Bush wins the popular vote. He has a historic mandate to
    2:03:53 continue in Iraq. By ’06, he’s toasted. We have a massive midterm election. And by ’08, we’re writing
    2:03:57 books about 40 more years and how there’s never going to be a Republican in office ever again.
    2:04:01 So things can change a lot in a very short period of time. I think also for me personally, maybe I’m
    2:04:08 deluded, sort of the great man view of history. I think some of it, it’s in programming circles,
    2:04:14 the term skills issue. I think somebody just has to do how good you are, how charismatic you are,
    2:04:19 how good you are as a politician. I maybe disagree with this. I’d love to see what you think.
    2:04:23 I think if Obama, if you were allowed to run for many terms, I think Obama would just keep
    2:04:29 winning. He would win 2016. He would win 2020. He would win this year 2024.
    2:04:33 It’s possible, but I would flip it on you and I would say Obama would never be elected
    2:04:36 if there were no term limits, because Bill Clinton would have still been president.
    2:04:43 Well, those two, right. That’s two examples of exactly. They extremely skilled politicians
    2:04:51 and somehow can appear like populists. Man, Bill Clinton was a force in his time,
    2:04:55 and it’s honestly sad what’s happened to him. I was actually just talking with a friend the other
    2:04:59 day. I’m like, I kind of don’t think that presidents should become president when they’re young,
    2:05:04 because they live to see themselves become irrelevant. That must be really painful,
    2:05:08 because I know what it takes to get there. Imagine being Clinton. I mean,
    2:05:15 your entire legacy was destroyed with Hillary Clinton in 2016. Then imagine being Obama,
    2:05:20 who in 2016 you could argue it’s a one-off and say that Trump is just, “Oh, Hillary was a bad
    2:05:26 candidate,” but Michelle Barack Obama went so hard for Kamala Harris, and they just got blown
    2:05:31 out in the popular vote. I mean, the Obama era officially ended with Donald Trump’s re-election
    2:05:36 to the presidency in 2024, and that was a 20-year period where Obama was one of the most popular
    2:05:40 central figures in American politics. But I want to return to what you’re saying,
    2:05:45 because it is important. And by the way, I do not support term limits on American presidents.
    2:05:50 Are you a fascist? Well, that would imply that I don’t believe in democracy. I actually do believe
    2:05:55 in democracy, because I think the people, if they love their president, should be able to re-elect him.
    2:06:02 I think FDR was amazing. I think that the term limit change was basically what happened is
    2:06:07 is that Republicans and a lot of elite Democrats always wanted to speak against FDR,
    2:06:12 but he was a God, so they couldn’t. So they waited until he died. And then after he died,
    2:06:16 they were like, “Yeah, this whole third, fourth term, that can never happen again.”
    2:06:21 And America didn’t really think that hard about it. They were like, “Yeah, okay, whatever.” But,
    2:06:26 I mean, it had immense consequences for American history. Clinton is the perfect example. I mean,
    2:06:33 Bill Clinton left office even despite the Lewinsky bullshit. He had a 60% approval rating.
    2:06:38 Okay? No way George W. Bush gets elected. Impossible. Clinton would have blown his
    2:06:43 ass out. And imagine the consequences of that. We would have no Iraq. I mean, I’m not saying he
    2:06:47 was a great man. We probably still would have had the financial crisis, and there’s still a lot of
    2:06:51 bad stuff that would have happened, but he was a popular dude. And I wouldn’t say he had the best
    2:06:58 judgment at times, presidentially, not personally, definitely not personally, but presidentially.
    2:07:03 But I’m pretty confident we would have not gone into the Iraq war. And so that’s where it really
    2:07:07 cost us. If you’re a left wing and you’re talking about Obama, yeah, I think Obama probably would
    2:07:15 have won in 2016. Although it’s a counterfactual because Obama was never challenged in the same way
    2:07:22 that MAGA was able to to the liberal consensus. Like Romney really ran this like awful campaign,
    2:07:27 honestly, about cutting spending. It was very traditional. Republican was deeply unpopular.
    2:07:33 The autopsy of that election was we actually need to be more pro immigration that literally was the
    2:07:40 autopsy. But Trump understood the assignment. There are two people who I so deeply respect for
    2:07:45 their political bets, Peter Thiel and Donald Trump. So one of the books that I recommended called
    2:07:50 “The Unwinding” by George Packer, he actually talks about Peter Thiel there. This is in 2013.
    2:07:56 And Thiel talks about, he was like, you know, whoever runs for office next, they don’t need to
    2:08:01 run on an optimistic message. They need to run on a message that everything is fucked up and that
    2:08:08 we need to fix. And if you think about that’s why Thiel’s endorsement of Trump with the American
    2:08:14 carnage message is, I mean, it took, it was shocking, right, at the time. But he had that
    2:08:19 fundamental insight that that’s what the American people wanted. Trump too comes out of an election
    2:08:25 in 2012 where the literal GOP autopsy, the report produced by the party, says we need to be pro
    2:08:32 mass immigration. What happens? Immediately after 2012, they start to go for mass immigrant,
    2:08:37 basically they go for like these amnesty plans, the so-called gang of eight plan, Marco Rubio,
    2:08:43 and all of this in 2013, it falls apart. But Republicans get punished by their base
    2:08:49 in 2014. So Eric Cantor, who was the House majority leader, the number two Republican,
    2:08:53 spent more on stake in his campaign than his primary opponent who successfully defeated him,
    2:08:58 a guy named Dave Bratt. Dave Bratt kicked his ass on the issue of immigration and said that
    2:09:03 Eric Cantor is pro amnesty. All of the forces were there. And then in 2015, Trump comes down the
    2:09:08 escalator and he gives the message on immigration that the GOP base has been roaring and wanting
    2:09:14 to hear now, but that nobody wanted to listen to them. And that was his fundamental insight.
    2:09:21 That bet was a colossal and a titanic political bet at a time when all political ideology and
    2:09:25 thought process would have said that you should come out on the other side, which is where
    2:09:30 Marco Rubio and Ted Cruz and all these other guys were effectively there in varying different
    2:09:36 ways, like they were hawkish or whatever. But Trump had such a monopoly on that as an idea.
    2:09:40 That’s why he wins the 2016 primary. And then paired with immigration,
    2:09:46 a hard line position on immigration, is this American carnage idea that actually everything
    2:09:54 is wrong. The American dream is gone. We will stop this American carnage. And I think American
    2:09:58 carnage is one of the most important inaugural speeches ever given in American history. It’s
    2:10:03 put it up against every single other speech. There’s nothing else like it. But that was
    2:10:08 what the country wanted at the time. And that’s what great politicians are able to do is they’re
    2:10:14 able to suss something out. That’s also why Peter Thiel is who he is, because he saw that in 2000.
    2:10:20 Imagine what it takes to come out of the 2012 election and to be honestly totally contrarian
    2:10:25 to the entire national mood and this entire theory of Obama-esque star politics and say,
    2:10:28 no, you need somebody who runs on the opposite of that to win.
    2:10:33 Well, we’ll never know. And I love this kind of Mike Tyson versus Muhammad Ali.
    2:10:37 I still think I would have loved to see Obama versus Trump.
    2:10:38 Me too. I agree.
    2:10:43 And first of all, Obama versus Trump in 2008, Obama wins hands down.
    2:10:45 Well, yes, definitely.
    2:10:54 I love how this is a boxing talk. Now, when 2016, Obama has a bunch of Iraq and Afghanistan.
    2:10:56 He’s vulnerable, though. I’ll tell you why, DACA.
    2:10:59 That’s what nobody ever talks about in the Obama-Trump thing.
    2:11:03 Don’t forget, Obama takes his 2012 victory, basically says,
    2:11:09 oh, the GOP even now agrees with me on immigration. And then he does DACA and legalizes X million
    2:11:14 in a number of illegal immigrants who are here, who are brought here as children.
    2:11:18 That also fundamentally changed the immigration consensus on the Republican side because they
    2:11:23 were like, wait, holy shit, you can just do that because we don’t agree with that at all.
    2:11:27 And that really ignited the base as well. So I’m not sure.
    2:11:32 I mean, a moment I think about a lot with Trump, just like being able to unleash the rage of the
    2:11:38 Republican bases. In the 2012 debate, Kandy Crowley was the moderator with Mitt Romney,
    2:11:42 and she fact-checked him famously. This was when fact-checking was shocking in a president debate.
    2:11:47 And she said something about Benghazi, and she was like, no, he did say that.
    2:11:53 She corrected Romney on behalf of Obama. To this day, it’s questionable whether she was even right.
    2:11:57 But, and Romney was just like, oh, he did. Okay. Trump would have been like, excuse me,
    2:12:02 excuse me. Look at this woman. You know, he would have gone off. And I was like,
    2:12:07 and I think about that moment because that’s what the Republican base wanted to hear.
    2:12:11 But also it turns out America had a lot of festering feelings about the mainstream media
    2:12:19 that it needed unleashed. And Trump was just this incredible vector to just blow up this system,
    2:12:22 which I mean, if you ask me about optimism, that’s the thing I’m most optimistic about.
    2:12:26 Yeah. But don’t you think Obama had a good sense in how to turn it on,
    2:12:29 how to be anti-establishment correctly? I will not deny that he’s one of the most
    2:12:34 talented politicians literally to ever play the game. And he is, I mean, just unbelievable,
    2:12:39 rhetorical talent. Look, is it counterfactual? Would he be in more talented than Hillary? Yeah,
    2:12:45 okay, no question. In terms of anybody would have been for that one. But at the same time,
    2:12:50 all the signs were there. All the signs for the Trump victory and for the backlash against
    2:12:56 Obamacism kind of as a political project, it all existed. Like I just laid the tea leaves out there
    2:13:01 from 2012 to 2015, in retrospect, it’s the most predictable thing in the world that
    2:13:04 Donald Trump would get elected. But it was crazy in the moment. I got to live through that,
    2:13:10 which was really fun, like professionally. I think it’s unfortunate that he kind of
    2:13:17 led Kamala Harris borrow his reputation. Oh, it’s, I mean, it’s like, you know better, dude,
    2:13:24 you know, you defeated these people, this Clinton machine, you destroyed them. And it was awesome
    2:13:30 in ’08. What is that? Why do you, why, why did he, like, he’s so much bigger and better than the
    2:13:35 machine. I don’t get it. It’s interesting, right? It’s so weird, though. I just think, I think this
    2:13:40 was a wake-up call. 2024 was a wake-up call, like the, the DNC machine doesn’t work. Absolutely. I
    2:13:44 mean, there needs to be new blood, new, new candidate, new Obama-like candidates. Well,
    2:13:47 I’m glad you brought that up, because that’s important, too, in terms of the process and the
    2:13:53 way that things currently stand. The DNC actually rigged its entire primary system under Biden,
    2:13:59 way to the, not to the benefit of Obama. So for example, you know how they moved away from the
    2:14:04 Iowa caucuses, and they actually moved some other primaries and moved the calendar to reward
    2:14:09 traditional states that vote much more in line with the Democratic establishment. So the story
    2:14:13 of Barack Obama is one that not many, actually, probably a lot of young people today don’t even
    2:14:19 remember how it happened. In 2008, Obama was the underdog, right? And actually, here’s the critical
    2:14:24 thing. Obama was losing with black people. Why? Black Democrats simply did not believe that white
    2:14:30 people would vote for a black guy. So Barack Obama goes to where this white state, Iowa, all in on
    2:14:37 the Iowa caucuses and shocks the world by winning the Iowa caucuses. Overnight, there’s a shift in
    2:14:41 public opinion amongst the black population in South Carolina that says, “Oh, shit, he actually
    2:14:45 could win.” And it comes out and you win South Carolina. And that’s basically was near the death
    2:14:50 knell for the Hillary Clinton campaign. The problem is by moving South Carolina up and by
    2:14:55 making it first, along with other more pro establishment friendly places, what do we do?
    2:15:00 We make it so that Barack Obama could never happen again. We make it so that an older,
    2:15:06 you know, base of Democratic Party voters who listens to the elites can never have their
    2:15:11 assumptions challenged. And that’s one of the worst things Joe Biden did. You know,
    2:15:15 I talked about his arrogance. He was so arrogant, he changed the freaking primary system. He was
    2:15:22 so arrogant, he refused to do a debate. I mean, imagine history. How lucky are we, honestly,
    2:15:27 that Joe Biden agreed to do that debate with Donald Trump early? And again, that was his arrogance.
    2:15:34 I think we’re so lucky for it. Because if we hadn’t gotten, we got to understand as a country
    2:15:40 how cooked he was and how fake everything was behind the scenes in front of all of our eyes.
    2:15:43 And they tried for three straight years to make sure that that would never happen.
    2:15:48 So, I mean, it’s still such a crime, honestly, against the American people.
    2:15:52 I’ve been thinking about who I want to talk to for three hours. And that’s why
    2:16:00 bring up Obama, because he’s probably the number one person on the left. I would like
    2:16:07 to hear analyze what happened in this election and what happened to the United States of America
    2:16:12 over the past 20 plus years. I can’t imagine anybody else. Look, if anybody could do it,
    2:16:16 it would be you. But there are layers upon layers with that man. I would love to actually sit with
    2:16:21 a talk with him, for real. I think it’s fair to say that we talked about the great man
    2:16:29 view of history. I think you have a psychopath view of history where all great leaders are for
    2:16:32 sure psychopaths. Not for sure. There are many who are good people. Harry Truman.
    2:16:35 You’re like some of my best friends. Harry Truman.
    2:16:43 Some, I assume, are good people. To be fair, though, most of the good ones are accidents
    2:16:46 like Harry Truman. He never would have gotten himself elected. He was a great dude.
    2:16:49 How do you know he was a great dude? David McCullough book. I highly recommend it.
    2:16:53 Everybody should read it. Truman loved his wife. I think that’s really awesome. I love
    2:16:59 when politicians love their wife. It’s so rare. He adored his wife. He adored his daughter,
    2:17:05 spent time with them. He made family life a priority. He had really good small town judgment
    2:17:09 that he would apply to foreign affairs. He was just a very well-considered,
    2:17:16 very stand-up man. I so appreciate that about him. Another one is John Adams. I love and
    2:17:21 revere John Adams. He’s my favorite founding father. Him and John Quincy, they don’t get nearly
    2:17:27 enough of their due. They were some of the most intelligent, well-considered. They were family
    2:17:33 men. The love, the relationship between John and Abigail Adams is literally legendary. I think
    2:17:38 it’s amazing, especially in the context of the 1700s, the way that he would take her counsel
    2:17:45 and into conversations and her own ability. She would sit there and go toe-to-toe as much with
    2:17:51 Thomas Jefferson. There are some who are great, who are really, really good presidents, who have
    2:17:56 good judgment and who are really good people and really think deeply about the world and have really
    2:18:01 cool personal lives. There’s also the vast majority of them, especially in the, I would say,
    2:18:06 especially in the modern era and where the price of the presidency extracts everything that you have.
    2:18:12 You have to be able to, you have to be willing to give everything. It’s just, that’s not a price
    2:18:18 that most people want to pay. Is it possible that some of the people who you think are sociopaths
    2:18:23 and politics are, in fact, really good people and some of the people you think are good,
    2:18:28 like Truman and Adams are actually sociopaths? Definitely. I mean, I could just be reading
    2:18:34 the wrong books, right? Yeah, that’s right. It sounds like you’re, you just read some really
    2:18:39 compelling biographies. Well, okay, to be fair, I don’t base this on one book. I read a lot of them,
    2:18:44 and I’ll get like a, for example, I’ve read books about LBJ. You wouldn’t know any of his foibles,
    2:18:48 but then you find out that they’re written by his friend or, you know, it was written by,
    2:18:54 and I read the truth. I really worry about this kind of general, especially now the sense of the
    2:19:02 anti-establishment sense that every politician must be a sociopath. Now, well, the reason I worry
    2:19:13 about that is it feels true. Yeah. So it’s, you can fall into this bubble of beliefs where every
    2:19:18 politician is a sociopath, and because of that- It can be a self-reinforced. Self-reinforced.
    2:19:22 Yeah, I understand what you’re saying. I agree, by the way, we do need to dramatically change it,
    2:19:26 but the problem is, is that, you know, people vote with their eyeballs and with their interests,
    2:19:32 and people love, like to, you know, dissect people’s personal lives. And one of the reasons why you
    2:19:37 were probably more likely in the pre-modern era to get a “good people” is they were not
    2:19:41 subject to the level of scrutiny and to the insanity of the process that you are currently.
    2:19:46 Like I just said about you, I mean, theoretically, you could run for president and you would just
    2:19:51 get your nomination at the convention. It’s only two months to election day. That’s not so bad.
    2:19:56 But, you know, you run for president today. You got your ass on the road for two years and then
    2:20:01 two years before that, and then you have to run the damn government. So the price is so
    2:20:08 extraordinarily high. I also think that, oh God, and just Washington is a system. It will burn you.
    2:20:15 It will just, it will extract absolutely everything that you can give it. And at the end of the day,
    2:20:19 you know, I mean, everyone always talks about this. It’s hilarious. How Trump is the only
    2:20:24 president not to age in office. I think, I actually think it’s crazy. Like when you look
    2:20:27 at the photos of how he actually looks better today than he did whenever he went into the office,
    2:20:34 that’s amazing. And it actually says a lot about how his mind works. I think Trump is pure id.
    2:20:39 Like, I think he’s, having observed him a little bit, and, you know, both at the White House and
    2:20:43 having interviewed him, it’s pure just like, it’s calculating, but it’s also pure id,
    2:20:47 which is very interesting. The ones who are the thinkers, guys like Obama and others who are really
    2:20:53 in their heads, it’s a nightmare. It’s a nightmare. It will, they will, I mean, apparently Obama would
    2:20:58 only sleep four hours a night, you know. Yeah, add like some empathy on top of that. It’s gonna
    2:21:03 destroy you. It will kill you, man. All right. Speaking about the dirty game of politics, several
    2:21:09 people, different people, told me that of everyone they have ever met in politics, Nancy Pelosi is
    2:21:14 the best at attaining and wielding political power. Is there any truth to that?
    2:21:17 In the modern era? Yeah, I think that’s fair. In the last 25 years, definitely.
    2:21:23 Let’s think about it. Number one is longevity. So she’s had the ability to control the caucus
    2:21:28 for a long period of time. So that’s impressive, because as I just laid out with Clinton, Obama,
    2:21:33 these figures come and they go, but over 25, almost a year period, you’ve been at the very top
    2:21:38 in the center of American politics. The other case I would be is that in this modern era has
    2:21:43 been defined by access to money. She’s one of the greatest fundraisers in Democratic Party history.
    2:21:49 And again, consistently, Obama, Kamala, all those people come and go, but she’s always had a very
    2:21:55 central understanding of the ability to fundraise, to cultivate good relationships with Democratic
    2:22:00 Party elites all across the country, use that money and dole it out to her caucus. She’s also
    2:22:05 was really good at making sure that legislation that came to the floor actually had the votes to
    2:22:11 do so. She ran an extremely well ordered process in the House of Representatives, one in which
    2:22:17 you were able to reconcile like problems within her office. It didn’t usually go public. And then
    2:22:22 it would make it to the floor and it would pass so that there will be no general like media frenzy.
    2:22:27 And you know, Democrats in disarray or any of that put that on display with the Republicans
    2:22:32 and we’ve had multiple speakers all resign or get fired in a 16 year period. That’s pretty
    2:22:37 remarkable. Basically ever since John Boehner decided to leave in what was that 2012, I forget
    2:22:42 the exact year. My point is that if you compare her record to the longevity on the Republican side,
    2:22:48 it is astounding. The other interesting thing is that she also has pulled off one of the real
    2:22:53 tests of political power is can you rule even when you don’t have the title anymore? So she gave up
    2:22:58 the leader position to Hakeem Jeffries, but everybody knows she pulled Joe Biden out of the
    2:23:03 race. That’s pretty interesting, right? So she’s a technically just a backbencher, nobody member
    2:23:08 of Congress, but we all know that’s bullshit. So that’s that’s actually a very important case
    2:23:14 of political power is can you rule without the title? And if you can, then you truly are powerful.
    2:23:20 So I would make a good case for her. She’s done a lot of remarkable stuff for her party. I will
    2:23:25 say they played Trump like a fiddle man last time around. They were able to. I mean, they really
    2:23:32 got him. One of the craziest elements that I covered was during the Trump basically threatened
    2:23:36 to shut down the government and actually did shut down the government for a period of time over
    2:23:42 a dispute over border wall funding and Pelosi and Schumer, despite like genuine mass hysteria
    2:23:47 in the Democratic Party, with even some people who were willing to try and to strike a deal,
    2:23:55 never wavered and actually basically won and forced Trump to back down. Not a lot of MAGA people
    2:23:59 want to admit it, but that was honestly really embarrassing for the Trump administration at
    2:24:04 the time. And yeah, I mean, the amount of discipline that it took for her and Chuck,
    2:24:09 to a lesser extent, but for the two of them to pull that off, it was honestly impressive
    2:24:13 that they were able to do that. Even when the president has so much political power and it
    2:24:19 literally shut down the government over it. Speaking of fundraising, Kamala raised one billion
    2:24:26 dollars. Insane. But I guess the conclusion is she spent it poorly. How would you spend it?
    2:24:31 I don’t think money matters that much. I think Donald Trump has proven to us twice that you can
    2:24:38 win an underdog campaign through earned media. And I don’t think that paid advertisement moves the
    2:24:44 needle that much. Now, don’t notice, I didn’t say it doesn’t matter. But am I buying $425,000
    2:24:49 a day spots on the Vegas sphere? No, we’re not doing that. Are we building? Okay, as people who do
    2:24:56 this for a living, how do you even spend $100,000 to build a set for one interview? This is the
    2:25:01 Call Her Daddy thing. Okay. How’s that possible? So think about the dollar per hour cost. That’s
    2:25:06 like running a jet airplane in terms of what they did. You know what I want to note behind the scenes?
    2:25:14 I’m not good with this. I get really frustrated and I shouldn’t. But dealing with PR and comms people
    2:25:19 can sometimes break my soul. It’s maddening. Can we not talk about this? We need to pull them in
    2:25:25 to 12 p.m. and you’re like, well, that’s only 30 minutes. Yeah, that but there’s stuff like
    2:25:31 where to put the camera. It’s not that I don’t, it’s not actually hypothetically, I don’t even
    2:25:37 disagree with any of the suggestions or this, but it’s like the micromanagement. Just the micromanagement
    2:25:44 and the politeness, but the fake politeness. And it just makes me feel like, I think like,
    2:25:49 what would Kubrick do? Would he murder all of them right now? He would just ban them
    2:25:55 after he became Stanley Kubrick, but he dealt with it for a while. But I just went on a Kubrick
    2:26:01 binge. Man, he was awesome. I watched that World War I movie of his, the one from the 50s. That is
    2:26:05 such an underrated film. I feel like people don’t, whatever, we’ll get past.
    2:26:15 But she, yeah, I guess you paid for 100 grand, bro. 100 and the Oprah thing. She paid for the
    2:26:19 interviews. So, you know, that’s another one. I do this for a living. And as you can tell,
    2:26:24 I’m a very cynical person. I did not even know that celebrities got paid for their endorsements.
    2:26:30 I could never have imagined a universe where Oprah Winfrey has paid $1 million to endorse Kamala
    2:26:35 Harris. I’m like, first of all, you’re a billionaire. Second, I thought you do this because
    2:26:42 you believe. No, I think to be fair, I think the million just helps do the thing you would like
    2:26:47 to do. It’s a nudge because I don’t think any celebrity would endorse. They’re not doing it
    2:26:52 because of the money, but you should just do it for free. I can’t even believe that you’re doing
    2:26:57 this for money. I mean, and the fact, what was it, Alanis Morissette? You know how they were able,
    2:27:01 they had to cut her because they didn’t have the funds to pay her. I’m like, first of all,
    2:27:05 if you believe you should just play for free. But second, again, as a person who is deeply
    2:27:11 cynical, I still am genuinely shook that we are paying celebrities for their endorsements.
    2:27:15 Yes, really fucked up. That’s insane. Why do you think people on the left
    2:27:21 who are actually in the political arena are afraid of doing anything longer than an hour?
    2:27:27 That’s a great question. So let me just say, probably most of the people I’ve talked to on
    2:27:34 this podcast are left-wing or have been for a long time. They just don’t sort of out and say it.
    2:27:40 Like most scientists are left-wing. Most sort of vaguely political people are
    2:27:45 a left-wing that I’ve talked to. But the closer you get to the actual political arena,
    2:27:54 and I’ve tried really hard, I had a bunch of people with the highest profile people,
    2:27:58 say 15 minutes, 20 minutes. I’m used to that, so welcome.
    2:28:08 I just can’t imagine a conversation with Kamala, or with Joe Biden,
    2:28:19 or AOC, or Obama, that’s of any quality at all, shows any kind of humanity of the person,
    2:28:25 the genius of the person, the interesting nuance of the person in like 30 minutes.
    2:28:33 I don’t know. Maybe if there’s people that are extremely skilled that can do that, you just can’t.
    2:28:37 You should be optimistic because a huge narrative out of this election is that the
    2:28:41 Democrats massively fucked up by not coming on this show or a broken show. So I actually,
    2:28:46 fundamentally, number one, that’s going to change dramatically. So be optimistic and keep pushing.
    2:28:51 But two is, this is a good segue actually, is I’ve been thinking a lot about, I know a lot of
    2:28:55 people who listen to this show, who are in tech and may have some influence on the admin. So this
    2:29:00 is kind of, this is something I want people to take really seriously. I was a White House
    2:29:05 correspondent for The Daily Caller. It’s a conservative outlet in Washington during the
    2:29:12 Trump years. And the most important thing I learned from that was that under the White House
    2:29:17 Correspondents Association, the way that the media cartel has everything set up for access,
    2:29:24 for press, to the president is fundamentally broken, anti-American, and bad for actual democracy.
    2:29:29 So let me lay this out at a very mechanical level because nobody knows this. And I was a
    2:29:33 former White House Correspondents Association member. So anybody who says I’m full of shit,
    2:29:38 I was there. For example, number one, all the seats in the briefing room, those seats are assigned
    2:29:42 by the White House Correspondents Association, not by the White House itself. The White House
    2:29:48 Correspondents Association requires you to apply for a seat, right? That adjudication process can
    2:29:53 take literally years for bylaws, elections, and all these things to do. This means that they can
    2:29:59 slow roll the entrance of new media online outlets who are allowed into the room. The reason it really
    2:30:03 matters not having a seat is if you don’t have a seat, you have to get there early and stand in
    2:30:07 the wings like I used to and raise your hand like this and just hope and pray that the press secretary
    2:30:11 can see it’s extremely inconvenient. I’m talking I have to get there hours early at a chance during
    2:30:18 a 15 minute briefing. So one of the things is that Trump has is he owes a huge part of his election
    2:30:24 to coming on podcasts and to new media. Now, because of that, it’s really important that the
    2:30:29 White House Correspondents Association, which is a literal guild cartel that keeps people out of
    2:30:36 the White House and credentials itself and creates this opaque mechanism through which they control
    2:30:42 access to asking the press secretary questions is destroyed. And there are a lot of different ways
    2:30:48 you can do this because what nobody gets to is that all of these rules are unofficial. So for
    2:30:52 example, they’re just traditions. The White House is like, yeah, it’s our building, but you guys
    2:30:57 figure it out, right? Because that’s a longstanding tradition. Let me give you another insane tradition
    2:31:01 that currently exists in the White House. The Associated Press, the White Press Secretary or
    2:31:07 the Associated Press Correspondent gets to start the briefing. Traditionally, they get the first
    2:31:11 question. They also get to end the briefing when they think it’s been enough time. They’re like,
    2:31:16 okay, cringe up here. Thank you. And that calls the briefing over. You’re not even the White
    2:31:21 House Correspondents Association. You literally just happen to work for the Associated Press.
    2:31:27 Why? Why do we allow that to happen? So number one, stop doing that. To their credit, the Trump
    2:31:31 people didn’t really do that, but it’s a longstanding tradition. The other thing is that
    2:31:36 what nobody gets either is that the first row is all television networks for logistical reasons
    2:31:39 so that they can do their little standups with their mic and say, you know, I’m reporting longer
    2:31:46 than this. Well, what people don’t seem to know is that all the television networks are basically
    2:31:51 going to ask some version of the same question. The reason they do that is because they need
    2:31:56 a clip of their correspondent going after the White House Press Secretary all out,
    2:32:02 Robert Mueller, like whenever I was there. So you get the same goddamn version of the stupid
    2:32:07 political questions over and over again. The briefing room is designed for traditional media
    2:32:11 and they have all the access in the world. So in an election where you owe your victory,
    2:32:17 to at least in part to new media and recognizing the changing landscape, you need to change the
    2:32:24 conduit of information to the American people. And in an election, I don’t know if you saw this,
    2:32:30 but election night coverage on cable news was down 25%. Just in four years, 25%. That’s astounding.
    2:32:38 Cable news had a monopoly on election night for my entire lifetime. And yet my show had record
    2:32:43 ratings that night. And look, I’m a small slice of the puzzle here. We’ve got Candace Owens,
    2:32:48 Patrick Bette David, Tim Pool, David Pakman, TYT, all these other people. From what I understand,
    2:32:52 all of us blew it out that night because millions of Americans watching on YouTube,
    2:32:58 we even partnered with some decision desk HQ. So we had live data. We could make state calls.
    2:33:03 And we’re just a silly little YouTube show. My point, though, is that in an election where the
    2:33:08 vast majority of Americans are the age of 55, are listening to podcasts, consuming new media,
    2:33:13 and are not watching cable news, where the median age of CNN, which is the youngest viewership,
    2:33:21 is 68. 68 is the median. So statistically, what does that tell us? There’s a decent number
    2:33:28 of people who are watching CNN who are in their 80s and in their 90s. Yeah, I’m glad you brought up
    2:33:33 Alex, because he deserves a tremendous shout out, Alex Brucewitz. He was the pioneer of the
    2:33:39 podcast strategy for the Donald J. Trump campaign. He got on your show. He was able to get on Andrew
    2:33:45 Schultz’s show, Rogan. He was the internal force that pushed a lot of this. My personal hope
    2:33:49 is that somebody like Alex is elevated in the traditional White House bureaucracy,
    2:33:54 that the number of credentials that are issued to these mainstream media outlets is cut, and there
    2:33:59 is a new lottery process put in place where people with large audiences are invited. And
    2:34:04 I also want to make a case here for why I think it’s really important for people like you and
    2:34:09 others who don’t have as much traditional media experience to comment and practice some capital
    2:34:16 J journalism, because it will sharpen you, too, giving you access in that pressure cooker environment
    2:34:22 and having to really sit there and spar a little bit with a public official and not have as long
    2:34:28 necessarily as you’re used to. It really hones your news media skills, your news gathering skills,
    2:34:32 and it will make you a better interviewer in the long run. Because a lot of the things that I have
    2:34:36 learned have just been through osmosis. I’ve just lived in DC. I’ve been so lucky. I’ve had a lot of
    2:34:42 cool jobs, and I’ve just been able to experience a lot of this stuff. So I’m really hoping that people
    2:34:47 who are listening to this, who may have some influence or even the viewership, if you want to,
    2:34:53 you know, reach out to them and all them. This is a very easily changeable problem. It’s a cartel
    2:34:58 which has no official power. It’s all power by tradition, and it needs to be blown up. It has,
    2:35:03 it does not serve America’s interests to have 58 seats, I think, in the White House press briefing
    2:35:10 room to people who have audiences of like five. It just makes absolutely zero workspace, seats,
    2:35:16 access credentials, and also credentials that are issued to press and to other like new media
    2:35:23 journalists at major events should take precedence. Because it’s not even about rewarding the creator.
    2:35:30 The American people are here. You need to meet them. That’s your job. And I’ll just end with a
    2:35:35 historical thing. Barack Obama shocked the White House press corps in 2009 because he took a question
    2:35:42 from the Huffington Post. A brand new blog, but they were stunned because he knew he said these
    2:35:47 blog people, they went all in for me, and I got to reward them. So there’s long-standing precedence
    2:35:52 of this. They’ll bitch and they’ll moan, they’ll be upset, but it’s their fault, you know, that they
    2:35:58 don’t have as much credibility. And it’s incumbent upon the White House, which serves the public,
    2:36:02 to actually meet them where they are. So I really hope that at least some of this is
    2:36:07 implemented inside of them. Yeah, if you break apart the cartel, I think you can actually
    2:36:13 enable greater journalism, frankly, with the capitalist J, because actually in a long form
    2:36:19 is when you can do better journalism from even just the politician perspective. You can disagree,
    2:36:23 you can get criticized, because you can defend yourself. I had an idea, actually. Tell me what
    2:36:28 you think. I think a really cool format would be there’s a room right near the press briefing room
    2:36:32 called the Roosevelt Room. Beautiful room, by the way. It’s awesome. It has the Medal of Honor
    2:36:36 for Teddy Roosevelt, and it has a portrait of him and a portrait of FDR. It’s one of my favorite
    2:36:41 rooms in the White House. It’s so cool. And so my idea would be in the Roosevelt Room, which
    2:36:48 traditionally used for press briefings and stuff, is like you, as the press secretary,
    2:36:52 sit there. I think there’s like 12 seats, something like that. And you set it all up and you have,
    2:36:56 let’s say, sure microphones like this. And that person that secretary is going to commit to being
    2:37:01 there for like two hours. And new media people can sit around the room. All this is being streamed
    2:37:06 live, by the way, just like the White House press briefing room. But the expectation is that
    2:37:10 the type of questions have to be substantive. Obviously, nothing is off limits. You should
    2:37:15 never ever, you know, accept I’m not going to ask about this, especially as a journalist,
    2:37:18 you can’t do that. Every time they’re like, Hey, please don’t ask about this. It’s like,
    2:37:23 actually, that’s probably one thing you should ask about. But my point being that the expectation
    2:37:28 is that there’s no interference on the White House side, but that the format itself will lend
    2:37:33 exactly to what you’re saying to allow people to explain. And again, in a media era where we need
    2:37:41 to trust the consumer, like my show is routinely over two hours long, on cable television, on cable
    2:37:46 television, you know, the Tucker Carlson program, whenever it was on Fox News, without commercial
    2:37:52 breaks was about 42, 43 minutes, something like that of runtime. So I’m speaking for almost triple
    2:37:59 what that is on a regular basis. The point is, is that millions are willing to sit and to listen,
    2:38:04 but you just have to meet them where they are. So I would really hope that a format like that,
    2:38:08 like a streamer briefing or something like that, I think, I think it’s, look, I know they would
    2:38:13 dunk on it endlessly, but I think it could work. Yeah, I think the incentives are different. I
    2:38:20 think it works because you don’t have to, like you saw good, don’t have to signal to the other
    2:38:23 journalists that you’re part of the clique. Oh, I’m so glad you brought that up because
    2:38:27 that was another lesson I learned. I go, oh, none of you are asking important questions for the
    2:38:31 people. You’re asking questions because you all hang out with each other. And you’re like, oh,
    2:38:38 wait, so this entire thing is a self-reinforcing guild to impress each other at cocktail parties
    2:38:43 and not to actually ask anything interesting. I remember people were so mad at me because
    2:38:50 this was 2018 or maybe 2017. And I said, do you think that Kim Jong-un is sincere
    2:38:54 in his willingness to meet with you? Something like that to that effect.
    2:38:58 They were furious because I didn’t ask about some bullshit political
    2:39:04 controversy that was happening at the time. So in the historical legacy, what was more important?
    2:39:11 The Mueller question or Donald Trump breaking 50 years or whatever of tradition with America’s
    2:39:15 relationship with North Korea and meeting him in Singapore and basically resetting
    2:39:19 that relationship for all time. As you can tell, I read a lot of book. I like to take the long
    2:39:26 view. Every time I would ask a question, I go, okay, when the future Robert Caro is writing books
    2:39:30 and he sees, he’s reading the transcript of the White House press briefing, you don’t even know
    2:39:33 who this kid is. He goes, that was a pretty good question right there. That’s pretty relevant.
    2:39:37 You got to think about all the bullshit that gets left on the cutting room floor.
    2:39:44 I love that view of journalism actually. The goal is to end up as one line in a history book.
    2:39:48 I just want a quote of what the president said to something that I asked in the book.
    2:39:52 I would be happy. I would die happy with that. If you told me that when I’m like a 90-year-old
    2:39:59 man, I’d be like, man, that means I succeeded. When the AI’s write the history of human civilization.
    2:40:04 One of the things I continuously learned from you when looking back through history
    2:40:10 is how crazy American politics has been throughout history. It makes me feel a lot
    2:40:17 better about the current day. It should. Corruption, just the divisiveness also.
    2:40:25 Just the incentive for stealing elections at all levels of government and direct stealing
    2:40:30 and indirect stealing, all kinds of stuff. Is there stuff that jumps out to mind throughout
    2:40:39 history that’s just like the craziest corruptions or stealing of elections that come to mind?
    2:40:45 I’ll give the micro and the macro. My favorite example is Robert Caro, who I’ve probably talked
    2:40:49 about him a lot. God bless you, Robert. I hope you live to write your last book because we really
    2:40:57 need that from you. Robert came to Texas. He only intended on writing three books about Lyndon
    2:41:01 Johnson. He’s currently completed four and he’s on his fifth. It’s taken him over 40 years to
    2:41:07 write those. One of the reasons is he just kept uncovering so much stuff. One of them
    2:41:13 is book two, means of ascent. He never intended to write it, but as he began to investigate
    2:41:20 Lyndon Johnson’s 1948 Senate election, he realizes in real time how rigged and stolen it was.
    2:41:27 I often tell people, “What if I told you that we lived in the most secure election period
    2:41:32 in modern history?” They wouldn’t believe it. But if you read through that shit, I’m talking
    2:41:39 about bags of cash, millions of dollars, literal stuffed ballot boxes. It’s great to be back here
    2:41:45 in Texas because I always think about that place down in Zapata and Star County. I’m talking basically
    2:41:53 Mexico where these dons were in power in the 1940s and they would literally stuff the ballot boxes
    2:41:57 with the rolls and they wouldn’t even allow people to come and vote. They just checkmarked it all for
    2:42:04 you based upon the amount that he paid. Means of ascent is the painstaking detail of exactly how
    2:42:10 Lyndon Johnson stole the 1948 Senate election. Nothing like that, as far as I know, is still
    2:42:17 happening. Macro, we can talk about the 1876 election. Rutherford B. Hayes, one of the closest
    2:42:21 elections in modern history. It was one of those that got kicked with House of Representatives.
    2:42:27 That was an insane, insane time. The corrupt bargain that was struck to basically end reconstruction
    2:42:31 and federal occupation of the South. And of course, the amount of wheeling and dealing
    2:42:37 that happened inside of that was absolutely bonkers and nuts. That was what an actual
    2:42:42 stolen election looks like. Just so people know. So on a micro and a macro, yeah, that’s what it
    2:42:48 really looks like. And so, look, I understand where people are coming from. Also, let’s do what?
    2:42:54 1960, that was pretty wild. I mean, in 1960, there was all those allegations about Illinois going for
    2:43:00 Kennedy. If you look at the actual vote totals of Kennedy Nixon, wow, I mean, it’s such an
    2:43:05 insanely close presidential election. And even though the electoral college victory looks a
    2:43:11 little bit differently, Nixon would openly talk about, he’s like, oh, Joe Kennedy rigged Illinois
    2:43:16 for his boy. And he’d be like, and we didn’t even have a chance in Texas with Lyndon pulling,
    2:43:22 you know, like Lyndon or Lyndon stuffing the ballot boxes down there. So, and this is open
    2:43:27 on the like, they openly admit this stuff, they talk about it. So actually, there’s a funny story.
    2:43:38 LBJ lost is, I think it’s 1941 Senate primary. And it’s because that his opponent, Papio Daniel,
    2:43:43 actually outstole Lyndon. So they’re both corrupt. But Papio Daniel stuffed the ballot box
    2:43:50 in like the fifth day of the seventh days to count the votes. And FDR loved LBJ. And it’s
    2:43:57 interesting, right? FDR recognized Johnson’s, his talent. And he goes, Lyndon, you, you know,
    2:44:02 in New York, we sit on the ballot boxes till we count them, you know, because he’s admitting
    2:44:08 that he, you know, you know, participated in a lot of this stuff. So this high level chicanery
    2:44:13 of stolen elections is actually an American pastime that we luckily have moved on from.
    2:44:20 And quite a lot of people do not know the exact intricate details of how wild it was back in the
    2:44:24 day. Yeah, it’s actually one of the things, it’s harder to pull off a bunch of bullshit with all
    2:44:29 these cameras everywhere. No. Transparency to lack of cash, banking regulations, there’s a variety
    2:44:35 of reasons. But yeah. So that said, let’s talk about the 2020 election. It seems like forever ago.
    2:44:42 Do you think it was rigged the way that Trump claimed? No. And was it rigged in other ways?
    2:44:48 This is the problem with language like rigged. And by the way, when I interviewed Vivek Ramaswamy,
    2:44:51 you said the exact same thing. So for all the MAGA people who are going to get mad at me,
    2:44:59 Vivek agrees, all right? And if, okay, I have observed, and I’m going to put my analysts hat
    2:45:06 on, there are two theories of stop the steal. One I call low IQ stop the steal and one I call
    2:45:11 high IQ stop the steal. Low IQ stop the steal is basically what Donald Trump has advocated,
    2:45:18 where the, you know, Dominion voting machines and bamboo ballots and Venezuela and Sidney Powell
    2:45:21 and all the people involved basically got indicted by the state of Georgia. I’m not saying that
    2:45:26 was correct. I’m just like, that’s what that actually looked like. Rudy Giuliani, et cetera.
    2:45:32 High IQ stop the steal is basically, actually, I mean, these are not illegitimate arguments.
    2:45:39 The school of thought is it was illegitimate for the state of Pennsylvania and other swing
    2:45:45 states to change the mail in balloting laws as a response to COVID, which enabled millions of
    2:45:50 people more to vote that wouldn’t have and that those change in regulations became enough to
    2:45:55 swing the election. I actually think that that is true. Now, would you say that that’s rigged?
    2:45:59 That’s a very important question, because we’re talking about a Republican state legislature
    2:46:03 and Republican state Supreme Court, right? The two that actually ruled on this question.
    2:46:07 So could you say that it was rigged by the Democrats to do that? Another problem with
    2:46:13 that theory is that while you can say that that’s unfair to change the rules last time around,
    2:46:17 you can also understand it to a certain extent. And I’m not justifying it. I’m just giving you
    2:46:24 an example. So for example, after the hurricane hit North Carolina, Republican officials were
    2:46:28 like, Hey, we need to make sure that these people who had Western North Carolina who were affected
    2:46:32 by the hurricane could still be able to have access to the ballot box. And people were like, Oh,
    2:46:36 so you’re saying in an extraordinary circumstance that you should change voting, right? You know,
    2:46:42 access and regularity to make sure that people have access. So my point is, you can see the logic
    2:46:48 through which this happened. And the high IQ version is basically the one that was adopted
    2:46:54 by Josh Hawley whenever he voted against certification. He said that the the Pennsylvania
    2:46:59 particularly election law and that those changes were unfair and led to the quote unquote rigging
    2:47:04 of the election against Donald Trump. Now there’s an even higher IQ galaxy brain stop the steal.
    2:47:10 Galaxy brain stop the steal is one that you saw with great love and respect, my friend JD Vance
    2:47:17 at his debate with Tim Walsh when Tim Walsh asked him, what did he say? He said, did Donald Trump
    2:47:23 win the 2020 election? He’s like, Tim, focused on the future. And then he started talking about
    2:47:28 censorship, the Hunter Biden laptop story. If you take a look at the Joe Rogan interview,
    2:47:32 Rogan actually asked JD this, he’s like, What do you mean you in the election was some version
    2:47:37 of that? And JD was like, Well, what I get really frustrated by is people will bring up all these
    2:47:43 insane conspiracy theories. But they ignore that the media censored the Hunter Biden laptop story
    2:47:49 and that big tech had its finger on the thumb for the Democrats. Now that is empirically true.
    2:47:54 Okay, that is true, right? Now, would you say that that’s rigged? I’m not going to use that word,
    2:47:58 because that’s a very different word. Now, would you say that that’s unfair? Yeah, I think it’s
    2:48:04 unfair. So there’s another, there’s a lot of MAGA folks picked up on this one. There was a Time
    2:48:09 Magazine article in 2020 that’s very famous in their crowd called, you know, the, the, it was
    2:48:14 like the fight to fortify the election. And it was about all of these institutions that put their
    2:48:21 fingers on the scale for Joe Biden against Donald Trump. So I will put it this way, was Donald Trump
    2:48:29 up against the Titanic forces of billionaires, tech censorship, and elite institutions who
    2:48:37 all did absolute damnedest to defeat him in 2020. Yes, that is true. And in a sense,
    2:48:43 the galaxy brain case is the only one of those, which I think is truly legitimate.
    2:48:49 And I’m not going to put it off the table, but this is the problem. That’s not what Trump means.
    2:48:53 You know, Trump, Trump, by the way, will never tell you what I just told you, right?
    2:48:59 JD will, if you go and you ask any of these Republican politicians when they’re challenged
    2:49:04 on it and they don’t want to say that Trump lost a 2020 election, they’ll give the, the galaxy brain
    2:49:10 case that I just gave. And again, I don’t think it’s wrong, but it’s like, guys, that’s not what he
    2:49:15 means when he says it. And that’s the important parsing of the case, right? So first at a high level,
    2:49:20 Trump or otherwise, I don’t like anyone who winds when they lose, period. Yeah. Although he did
    2:49:24 tell you he lost, you noticed that? That’s the only time he’s ever said it, ever. You’re famous,
    2:49:31 you’re in history for that one. Lost by a whisker. Yeah. Lost by a whisker. I mean, there is a case
    2:49:36 to be made that he was joking. I don’t know. But there is a kind of weaving that he does with
    2:49:42 humor where sometimes it’s sarcasm, sometimes not. Much easier to showcase in a three hour
    2:49:48 interview, I’ll say. Good call. Go ahead. I couldn’t even like play with that when you have 40 minutes.
    2:49:54 I know, bro. You’re like, you know, I could do just 40 minutes on weaving alone.
    2:49:59 For your style, it doesn’t work. And I can tell you how the way I interview politicians is I just
    2:50:04 do pure policy. So when I, the first time I interviewed Trump, I compiled a list of 15 subjects,
    2:50:09 me and my editor, Vince Collinier, shout out to Vince, and the two of us sat in an office,
    2:50:13 and then we had questions by priority in each category. And if we felt like we were running
    2:50:17 short on time, we would move around those different ones. But that was purely he’s the
    2:50:22 president, we’re asking him for his opinions on an immigration bill or whatever. For what you do,
    2:50:28 it’s impossible to do it for him. Yeah, I just want to say that thank you for everybody involved,
    2:50:32 for making my conversation with Donald Trump possible. But I’ve learned a lot from that that
    2:50:41 I just, if I’m told that all I have is 40 minutes, I’m very politely sparing in that case,
    2:50:46 Donald Trump, the 40 minutes and just walking away. Because I don’t think I can do a good job.
    2:50:51 I think that is the correct decision on your part. And I also would encourage you to have
    2:50:57 the confidence at this point that you are in a position of something that we call in the business
    2:51:03 the ability to compel the interview. And to compel means to be able to bring somebody else to you
    2:51:08 and not the other way around. And I think that you and Rogan and a few others are in that very
    2:51:13 unique position. And I would really encourage you guys to stick to your guns on things that make
    2:51:19 you feel comfortable. Because those of us in news, we will always negotiate. We’re willing to do
    2:51:23 short form because we’re asking about policy. But for the style that you’ve helped popularize,
    2:51:27 and I think that you’re uniquely talented and good at, that’s very important not to compromise on.
    2:51:31 So thank you for saying those words. And that’s not just in the interest of journalism and the
    2:51:34 interest of conversation. It’s the interest of the guests as well.
    2:51:35 Yeah, absolutely.
    2:51:36 To bring out the best in them.
    2:51:40 Yeah. I mean, I would feel really added to service. And I would feel like people would not get a
    2:51:46 unique understanding of my own thought process and my backstory if I was not able to sit here for
    2:51:52 literally hours and to explain in deep detail how I think about the world. Not that anyone cares
    2:51:58 that much. But I hope all I can do is I hope it’s helpful. I want to help people think.
    2:52:03 Because when I was growing, I was growing up not far from here, 90 minutes from here,
    2:52:09 in College Station, I felt very uniquely closed off from the world. And I found the world through
    2:52:15 books and books saved my life. They’ve many, so many different times. And I hope to encourage that
    2:52:20 in other people. I really, no matter where you are, no matter who you are, no matter how busy you
    2:52:25 are, you have some time to either sit down with a book or put on an audio book. And you can transport
    2:52:31 yourself into a different world. It’s so important. And that’s something that your show really helps
    2:52:35 me with too. I love listening to your show whenever, sometimes when I’m too into politics and I need
    2:52:38 to listen to something, I’ll listen to that Mayan historian guy. I love stuff like that.
    2:52:43 Absolutely. I’ve been a deep dive on Jenkins Kahn, reading Jenkins Kahn and then making
    2:52:50 the modern world. Yeah, Jack Weatherford. Yeah, he’s coming on. Is he? Yeah. Amazing. And again,
    2:52:56 shout out to Dan Carlin. The Goat, the OG. Dan, I’ve never met you before. I would love to
    2:53:00 correspond at some point. I love you so much. You changed my life, man. I met him once before
    2:53:05 and it felt… I listened to your interview with him. Oh, Starstruck. Yeah. Very, very Starstruck.
    2:53:10 And he means you’re so much. Painful attainment. If I’ve listened to that Mayan… I think his
    2:53:14 best series, one of his best series, it gets no credit for, Ghosts of the Ostfront. Nobody gives
    2:53:20 him credit for that one. That’s OG. This is a 2011 series. But his Ghosts of the Ostfront on
    2:53:27 the Eastern Front of the Nazi war against Russia fundamentally changed my view of warfare forever.
    2:53:32 And also, at that time, I was very young. And to me, World War II was saving Private Ryan. I
    2:53:38 wasn’t as well-read as I am now. And I was like, “Oh, shit. This entire thing happened, which actually
    2:53:44 decided the Second World War. And I don’t know anything about this.” So shout out to Dan. God
    2:53:50 bless you, man. And his, quote, unquote, “short episodes,” I think, on slavery in general throughout
    2:53:54 human history. That was an awesome episode. I actually bought a bunch of Hugh Thomas books
    2:53:59 because of that episode. I’d never really read about African slavery or the slave trade outside
    2:54:03 of the Civil War context. So again, shout out to him for that one. That was an amazing episode.
    2:54:08 His Japan series, too. I’m going to Japan in a few days. And I keep thinking of what he always
    2:54:13 talked about in his Supernova in the East. The Japanese are like everyone else, but only more so.
    2:54:22 And God, I love that quote. Okay, he’s great. And we, ironically, arrived at this tangent
    2:54:27 while talking about the 2020 election. Yeah. That’s why podcasting is fun.
    2:54:33 Because he said, “Lost by a whisker.” And now we’re dragging us screaming back to the topic.
    2:54:43 One of the things I was bothered by is Trump claiming that there’s widespread
    2:54:49 as you’re saying low IQ theory, the widespread voter fraud. And I saw no evidence of that that
    2:54:56 he provided. And all right, well, let’s put that on the table. And then the other thing I was troubled
    2:55:05 by that maybe you can comfort me in the context of history, how easily the base ate that up.
    2:55:13 That they were able to believe the election was truly rigged based on no clear evidence
    2:55:18 that I saw. And they just love the story. And there is something compelling to the story that,
    2:55:24 you know, like this DNC type, like with Bernie, the establishment just, they’re corrupt and they
    2:55:36 steal the will of the people. And like the lack of a desire from the base or from people to see any
    2:55:40 evidence of that was really troubled me. Yeah. I’m going to give you one of the most depressing
    2:55:46 quotes, which is deeply true. Roger Ailes, who is a genius, shout out to “The Loudest Voice in
    2:55:50 the Room” by Gabriel Sherman. That book changed my life too, because it really made me understand
    2:55:55 the media. People don’t want to be informed. They want to feel informed. That is one of the most
    2:56:02 fundamental media insights of all time. What a line. Roger Ailes, a genius, a genius in his own
    2:56:08 right, who, you know, he changed the world. He certainly did. He, you know, he’s the one
    2:56:12 who kind of gets credit for one of the greatest debate lines of all time, because he was an
    2:56:17 advisor to President Reagan. Whenever he broke in, he was like, “Mr. President, people want to
    2:56:22 know if you’re too damn old for this job or not.” And he inspired that joke that Reagan made,
    2:56:27 where he was like, “I will not use age in this campaign. I will not hold my opponent’s youth
    2:56:32 and inexperience against him.” That was Ailes, man. You got it. He did the Nixon Town Halls.
    2:56:38 He did it all. He’s a fucking genius. And I’m not advocating necessarily for the world he created
    2:56:44 for us, but he did it. And people should study him more. If you’re interested in media in particular,
    2:56:46 that book is one of the most important books you’ll ever read.
    2:56:52 Yeah, you know what? That quote just really connected with me, because there’s all this
    2:56:59 talk about truth. And I think what people want to, they want to feel like they’re in possession
    2:57:02 of the truth. Correct. Not actually being in the possession of the truth.
    2:57:09 Yeah, I know. It hit me too. Actually, Russell Crowe does an amazing job of delivering that line
    2:57:13 in the Showtime miniseries. So if you have the chance, you should watch it. And look,
    2:57:17 this is the problem. Liberals will be like, “Yeah, see these idiot Republicans?” I’m like,
    2:57:21 “Yeah, you guys have bought a lot of crazy stupid shit too, okay?” And if actually,
    2:57:25 I would say liberal misinformation, quote unquote, is worse than Republican disinformation,
    2:57:31 because it pervades the entire elite media like Russiagate or Cambridge Analytica or
    2:57:36 any of these other hoaxes that have been foisted on the American people. The people who listen to
    2:57:42 the Daily and from the New York Times are just as brainwashed, lack of informed, want to feel
    2:57:48 informed as people who watch Fox News. So let me just say that out there. It’s an equal opportunity
    2:57:53 to cancer in the American people. Actually, we started early on in the conversation talking
    2:58:02 about bubbles. What’s your advice about how to figure out if you’re in a bubble and how to get
    2:58:06 out of it? That’s such a fantastic question. Unfortunately, I think it comes really naturally
    2:58:11 to someone like me, because I’m the child of immigrants and I was raised in Coliseation,
    2:58:17 Texas. So I was always on the outside. And when you’re on the outside, this isn’t a sob story.
    2:58:21 It’s a deeply useful skill, because when you’re on the outside, you’re forced to kind of observe.
    2:58:28 And you’re like, oh, so like what I was raised was the Bible Belt. And people really, you know,
    2:58:32 people were hardcore evangelical Christians. And I could tell them like, oh, they really believe
    2:58:37 this stuff. And, you know, they were always trying to proselytize and all of that. And then the other
    2:58:42 gift that my parents gave me is I got to travel the entire world. I probably visited 25, 30 countries
    2:58:50 by the time I was 18. And one of the things that that gave me was the ability to just put yourself
    2:58:54 in the brain of another person. So one of the reasons I’m really excited to go to Japan,
    2:59:00 and I picked it as a spot for my honeymoon was because Japan is a first world developed country
    2:59:06 where the vast majority of them don’t speak English. It’s distinguishedly non-western,
    2:59:11 and they just do shit their own way. So they have a subway, but it’s not the same as ours.
    2:59:14 They have restaurants, things don’t work the same way. They have, you know,
    2:59:20 I could go to a laundry list, their entire philosophy of life of the daily rhythm,
    2:59:26 even though it merges with service based managerial capitalism and they’re fucking good at it too,
    2:59:32 they do it their own way. So exposure to other countries in the world gave me, and also just
    2:59:37 being an outsider myself, gave me a more detached view of the world. So if you don’t have that,
    2:59:43 what I would encourage you is to flex that muscle. So go somewhere that makes you uncomfortable.
    2:59:48 This will be a very boomer take, but I hate the fact that you have 5G everywhere you go in the
    2:59:53 world, because some of the best experiences that I’ve ever had in my life is walking around Warsaw,
    2:59:59 Poland, trying to find a bus station to get my ass to Lithuania with a printed out bus ticket.
    3:00:03 I have no idea where the street is. I have, I’m in a country where not that many people speak
    3:00:08 English. We’re pointing and gesturing, right? And I figured it out. And it was really useful.
    3:00:12 I got to meet a lot of cool Polish people. Same in Thailand. I’ve been in rural, like,
    3:00:19 bumfuck Thailand, Colombia, places where people speak zero English. And your ability to gesture
    3:00:26 and use pigeon really connects you and gives you, like, the ability to get an exposure to others.
    3:00:32 And so I know this is a very, like, wanderlust, like, travel thing, but unironically, if you’re
    3:00:37 raised in a bubble, pierce it. Like, that’s the answer is seek something out that makes you
    3:00:40 uncomfortable. So if you’re raised rich, you need to go spend some time with poor people.
    3:00:44 And consider that they might actually understand the world better than you.
    3:00:48 Well, in some respects, so I think a lot of rich people have really screwed up personal lives.
    3:00:52 So if you’re poor and you really value family, you say, oh, that’s interesting. There seems to be
    3:00:58 a fundamental tradeoff between extraordinary wealth and something that I value. But what can I take
    3:01:04 away from that person? Oh, put my money in index funds. Make sure that I am conscientious about
    3:01:11 my budgeting and common sense shit, right? And vice versa, people who are very wealthy,
    3:01:16 get so caught up in the rat race about their kids going to private school and all of this.
    3:01:20 And then, you know, they very rarely engage with there’s that famous study where they ask
    3:01:24 people on their deathbed, like what they valued in life and every single one of them was like,
    3:01:28 I wish I’d spend more time with my children. I think about that every time that I am thinking
    3:01:33 about pursuing a new work endeavor or something that’s going to have me spend significant time
    3:01:39 away from my wife. And I’m almost always these days now that I’ve achieved a certain level of
    3:01:44 success, the answer is, I’m not doing it unless you can come with me.
    3:01:48 One of the bubbles I’m really concerned about San Francisco bubble, I visited there recently,
    3:01:55 because there’s so many friends there that I respect deeply. There’s just so many brilliant
    3:01:59 people in San Francisco, the Silicon Valley. But there’s just this,
    3:02:06 I don’t even want to criticize it, but there’s definitely a bubble of thought.
    3:02:12 I’m with you. I’m friends with some SV Silicon Valley people as well. I’m similarly struck by
    3:02:20 that every time I go. And honestly, I do admire them because what I respect the most amongst
    3:02:24 entrepreneurs, business and political thinkers is systems thinking. Nobody thinks systems better
    3:02:29 than people who are in tech because they deal with global shit, right? Not even just America,
    3:02:34 they have to think about the whole world, about the human being and his relationship to technology.
    3:02:38 And coding in some ways is an expression of the human mind and about how that person wants to
    3:02:44 achieve this thing. And hey, you mechanically can type that into a keyboard or even code something
    3:02:50 to code for you to be able to achieve that. That’s a remarkable accomplishment. I do think
    3:02:55 those people and people like that too, who think very linearly through math and they’re,
    3:03:00 the geniuses are the ones who can take their creativity and merge it with linear thinking.
    3:03:07 But I do think that that actually, those are the people who probably most need to get out of the
    3:03:12 bubble, check themselves a little bit. And look, it’s really hard. Once you achieve a certain
    3:03:17 level of economic success and others, what do most rich people do? They close themselves off from
    3:03:22 the world, right? That’s the vast majority of the time. What do you do? Economy is annoying,
    3:03:27 flying. They fly first class. Living in a small house is annoying. They buy a bigger house.
    3:03:31 Dealing with a lot of these inconveniences of life is annoying. You pay a little bit more to
    3:03:35 make sure you don’t have to do that. There’s a deep insidious thing within that, each one of those
    3:03:40 individual choices where the more and more removed that you get from that, the more in the bubble
    3:03:45 that you are. So you should actually seek out those experiences or create them in a concerted way.
    3:03:55 Speaking of bubbles, Sam Harris, he has continued to criticize me directly and indirectly,
    3:04:01 I think unfairly, but I love Sam. I deeply respect him. Everybody should listen to the
    3:04:07 Making Sense podcast. It always makes me think. It’s definitely in the rotation for me.
    3:04:14 That’s a very admirable view. I mean, he’s, I think, one of the sharpest minds of our generation.
    3:04:20 And for a long time, I looked up to him. It was one of the weird moments for me to meet him,
    3:04:23 because you listened to somebody for such a long time.
    3:04:25 I feel that way with you. I’m serious.
    3:04:30 Yeah, it’s a beautiful moment. I mean, same with Joe and stuff like this.
    3:04:35 Oh, absolutely. It is one of the most surreal moments of your life to be able to meet somebody
    3:04:40 who you spend hours listening to. I actually think about that when people come up to me,
    3:04:43 because I’m like, oh, they’re feeling what I felt whenever I, yeah, yeah.
    3:04:49 And you have to like, you see it, you feel it, and you have to celebrate that because there’s
    3:04:56 an intimacy to it. I think it’s real that people really do form a real connection,
    3:04:59 a real friendship. It happens to be one way, but I think it actually can
    3:05:05 upgrade to a two-way pretty easily. It happens with me in a matter of like five minutes,
    3:05:10 when I meet somebody at an airport or something like that. Anyway, Sam took a pretty strong
    3:05:18 position on Trump. And has for a long time. Yeah, he has been consistent and unwavering.
    3:05:27 So he thinks that Trump is a truly dangerous person for a democracy for maybe for the world.
    3:05:33 Can you still man his position? Well, see, I think a lot of this podcast has
    3:05:38 been stealing manning it because Sam is a big character matters guy. Like he focuses a lot
    3:05:42 on Trump’s personality. By the way, I’m like you, I’ve listened to Sam Harris for years.
    3:05:46 I bought his meditation app. So nobody’s going to accuse me of being some Sam Harris hater.
    3:05:51 I listened to him for way before, long before even Donald Trump was elected. That’s how far
    3:05:57 back I go with the Sam Harris podcast. I have a lot of respect for the dude. I enjoy a lot of his
    3:06:02 older interviews. I do think after Trump, he did succumb a little bit, in my opinion, to the
    3:06:10 elite liberalism view, both of the impetus behind Donald Trump and why he was able to be successful.
    3:06:15 So in some ways very denigrating to the Trump voter, but also a fundamental misunderstanding
    3:06:19 of the American presidency. Because like I said, he really is the one who believes that that
    3:06:24 narcissism, that character, and all of that that makes Trump tick itself will eventually override
    3:06:29 any potential benefit that he could have in office. And I just think that’s a really wrong way of
    3:06:36 looking at it. And I mean, for example, I had this debate with Crystal, and this gets to the whole
    3:06:41 Trump, you know, talking about the enemy from within. And she was like, he wants to prosecute
    3:06:46 his political opponents. Do you disagree with that? And I was like, no, I don’t. And she was like,
    3:06:49 so you’re not worried about it. And I go, no, I’m not. And she’s like, well, how do you square
    3:06:55 that? And I was like, well, I actually unironically believe in the American system of institutional
    3:07:01 checks and balances, which kept him quote unquote in check last time around. I also believe in
    3:07:07 democracy, where, you know, this is really interesting. But, you know, in 2022, a lot of the
    3:07:11 Republicans who were the most vociferous about stop the steal, they got their asses kicked at
    3:07:17 the ballot box. You know, Americans also then in 2024 decided to forgive some of that from
    3:07:22 Donald Trump, it’s definitely didn’t help, right? But they were able to oversee that for their own
    3:07:28 interests. As in, democratically, people are able to weigh, in terms of checks and balances,
    3:07:32 what they should and should not challenge a politician by. But also, we have the American
    3:07:37 legal system. And I also know the way that the institutions in Washington themselves work,
    3:07:43 that, you know, fundamentally, the way that certain processes and other things could play out
    3:07:49 will not play out to some Hitlerian fantasy. And this gets to the whole like, Kamala and them
    3:07:55 calling him a fascist and Hitler, you know, you and I probably spent hours of our life maybe more
    3:08:01 thinking and reading about Hitler or Weimar Germany. And I just find it so insulting, you
    3:08:09 know, because it becomes this moniker of like, these terms have meaning beyond the beyond just
    3:08:15 the dictionary definition, the circumstances through which Hitler is able to rise to power
    3:08:22 are not the same as today. It’s like, stop denigrating America to the point where you think
    3:08:26 like really should flip it around. Why do you think America is Weimar Germany? That’s a ridiculous
    3:08:32 thing to say. Do you unironically believe that? No, you don’t believe that. So that is personally
    3:08:38 what drives me a little bit crazy. And I think that Sam has found himself in a mental framework
    3:08:45 where he is not willing, he’s not able to look past the man and his quote unquote, danger. And
    3:08:51 at the end of the day, his worldview was rejected wholly by the American people. And that because
    3:08:56 the character argument, the fascist argument, the Hitler argument, the he is uniquely bad argument,
    3:09:03 has been run twice before 2016 and in 2000, actually all three times, I guess it won in 2020.
    3:09:07 But two out of the three times, Donald Trump has won the presidency. And his latest one,
    3:09:12 where that argument has never been made before for a longer period of time and more in strength
    3:09:18 by a political candidate was rejected completely. And I would ask him to reconcile himself to the
    3:09:24 America that he lives in. I think one thing maybe to partially steal man in his case, but also just
    3:09:32 to steal man the way the world works, is that there is some probability that Kamala Harris
    3:09:40 will institute a communist state. And there is some probability that Donald Trump will indeed
    3:09:47 like will fly a swastika with and deport, I don’t know, everybody who’s not Scott Irish.
    3:09:54 You and I are screwed then. Maybe, is there a spirit test? Okay. But that probability is small.
    3:10:02 And you have to, if you allow yourself to focus on a particular trajectory with a small probability,
    3:10:07 it can become all-encompassing. Because you could see it. You could see a path. There are
    3:10:12 certain character qualities to Trump. Yes. Where he wants to hold on to power. First of all,
    3:10:20 every politician wants to hold on to power. Joe Biden, maybe because he’s part of the machine,
    3:10:25 can’t even conceive of the notion of a third term. But he has the arrogance to want to hold
    3:10:30 on to power, do everything he can. Absolutely. And like with Trump, I can see that if it was
    3:10:36 very popular for him to have a third term, I think he would not be the kind of person
    3:10:43 who doesn’t advocate for a third term. So what? That would require the Senate and the House,
    3:10:49 or 70, what is it, 75% of the states, to pass and change the Constitution. Do you think that’s
    3:10:52 going to happen? No, I don’t think it’s going to happen. So I’m not that worried about it. Now,
    3:10:57 you can make a norms argument. Actually, I think that’s kind of fair, is that he’s the norms buster.
    3:11:02 But, you know, with extraordinary candidates and people like Trump, you get the good and the bad.
    3:11:09 There is a true duality. Like the norms he busts around foreign policy, I love. The norms he busts
    3:11:14 around the economy, I love. The norms he busts around just so much of the American political
    3:11:19 system, saying it, how it is, et cetera. I love that. You know what I hate? This 2020 election
    3:11:25 bullshit. You know what else I hate? You know, this, I don’t know, just the lack of discipline
    3:11:29 that I would want to think that a great leader could have, like when he was president and tweeting
    3:11:33 about Mika Brzezinski’s facelift, that was objectively ridiculous. Like it was crazy,
    3:11:40 okay? Was it funny? Yeah, but it was crazy. Like, and it’s not how I would conceive and have conceived
    3:11:44 of some of my favorite presidents. I wouldn’t think that they would do that. But that’s what
    3:11:49 you get. You know, everyone should be clear-eyed about who this man is. And that’s another problem.
    3:11:54 The deification of politicians is sickening. It’s sickening. Like about Trump, around Obama,
    3:12:00 around Hillary. Like these people are just people. Like the idea that they are godlike creatures
    3:12:04 with extraordinary judgment. You know, one of the really cool things about your and I’s job is we
    3:12:08 actually get to meet very important people. After you meet a few billionaires, you’re like, yeah,
    3:12:13 there’s definitely something there. But you know, some of them get lucky. And like, after you meet
    3:12:18 a few politicians, you’re like, oh, they’re not that smart. That was a rude awakening for me,
    3:12:23 by the way, being here in Texas reading about these people. And pretty soon, I was on Capitol Hill,
    3:12:27 I was like 19 years old. I was an intern. I’m actually interacting. And I see them behave in
    3:12:32 ridiculous manners and, you know, whatever, to treat people badly or say something stupid.
    3:12:37 And I was like, oh, this is not the West Wing. I’m like, this is not like, these people are just,
    3:12:42 this is reality. And the weirdest part of my life is I’ve now been in Washington long enough. I know
    3:12:48 some of the people personally, the vice president of the United States, literally the vice president
    3:12:53 in elect, future cabinet secretaries, future, you know, these people I literally have met at
    3:12:59 dinner with at a drink with whatever. That’s a wild thing. And that’s even more bringing you down
    3:13:02 to earth. We were like, oh, shit, you’re actually going to have a lot of power. That’s, that’s kind
    3:13:06 of scary. But you’re just a person. And so even though you don’t have to say I have my same life
    3:13:12 experience, take it from me or anybody else who’s ever met really famous people, rich,
    3:13:16 rich, successful, powerful people, they’re just people. There’s nothing that there’s some things
    3:13:22 that are unique about them. But they have just as many human qualities as you or anybody else
    3:13:27 is listening to this right now. Yeah, there’s, for each candidate, Trump is probably the extreme
    3:13:32 version of that. There’s a, there’s a distribution of the possible trajectories the administration
    3:13:38 might result in. Yes. And like, the range of possible trajectories is just much wider with
    3:13:42 Trump. Yeah, you’re describing like a Bayesian theory, right? Like, and I think that’s actually
    3:13:46 a really useful framework for the world is that people are really too binary. So like you said,
    3:13:52 you know, there’s a theoretical possibility, I guess, of a communist takeover of government
    3:13:57 and of fascist takeover of government under Kamala Harris or Donald Trump, you know,
    3:14:04 their realistic probability, I would give it 0.05% probably in both directions. But there are,
    3:14:07 you know, there are a lot of things that can happen that are bad that are not hit Larry
    3:14:11 in her face. There are a lot of things that happen that are really good that are not FDR,
    3:14:16 New Deal style. One of the worst things politicians do is they describe themselves
    3:14:23 in false historical ways. So in Washington, one of the most overused phrases is made history.
    3:14:28 And I’m like, you know, if you actually read history, most of these things are just,
    3:14:32 they’re not even footnotes. They’re the stuff that the historians flip past and they’re like,
    3:14:38 what a stupid bucking thing. I mean, and I’m talking about things that will that ruled American
    3:14:42 politics. Like, what if I told you that the Panama Canal Treaty was one of the most important
    3:14:48 fights in modern American politics? Nobody thinks about that today. It ruled American politics at
    3:14:53 that time. You know, it genuinely is a footnote, but that’s how it felt at the time. So that’s
    3:15:00 another thing I want people to take away. You tragically missed the UFO hearings.
    3:15:06 Oh man, my brothers, I’m really sad. Let me tell you, I love them so much.
    3:15:12 The UFO community are some of the best people I’ve ever met in my life. Shout out to my brother
    3:15:18 Jeremy Corbell, to George Knapp, the OG, to all of the people who fly from all around the
    3:15:23 world to come to these hearings. It was so fun. I got to meet so many of them last time,
    3:15:31 just walk the rope line like as people were coming in the excitement. I truly love the UFO
    3:15:34 community. Shout out to all of them. This is the second one, I guess. This is the second one.
    3:15:38 Do you hope they continue happening? It’s going to be a slow burn. So one of the things I always
    3:15:46 tell the guys and everybody is, consider how long it took to understand the sheer insanity of the
    3:15:52 CIA in the 1950s and ’60s. So if we think back to the Church Committee, I don’t, I forget the
    3:15:56 exact year of the Church Committee, I think it was in the ’70s, the entire Church Committee and
    3:16:03 knowledge of why this, of how the CIA and the FBI were up to all of this insane shit throughout the
    3:16:09 ’50s and ’60s is because some people broke into a warehouse, discovered some documents, got the
    3:16:14 names of programs which were able to be foiled and we were able to break open that case. It would
    3:16:19 never have happened with real transparency, like in the official process. So we owe those people
    3:16:26 a great debt, I guess I could say. Now the statute of limitations has passed. My point about the UFOs
    3:16:30 is I don’t know what is real or not. I have absolute confidence and absolute ton is being
    3:16:35 hid from the American people and that all of the official explanations are bullshit. I have had the
    3:16:41 opportunity to interface with some of the whistleblowers and other activists in the community,
    3:16:46 people who I trust, people who have great credentials, who have no reason to lie, who have
    3:16:52 assured us that there is a lot going on behind the scenes. There has been too much misinformation
    3:16:57 and effort by the deep state to cover up this topic. So I would ask people to keep the faith.
    3:17:02 It’s 2024 and we still don’t have all the JFK files. Okay? Everyone involved is dead.
    3:17:06 There’s no reason to let it go. And even though we basically know what happened,
    3:17:11 we don’t know. If you read that fantastic book, the Tom O’Neill book about the Manson
    3:17:16 murders, I mean, again, you know, it took him 20 years to write that book and he still didn’t get
    3:17:22 the full story. So sometimes it takes an extraordinarily long agonizing period of time and
    3:17:28 I know how deeply frustrating that is. But when you think about a secret, a program and knowledge
    3:17:34 of this magnitude, it would only make sense that it would require a Titanic effort to reveal a
    3:17:40 Titanic secret. You think Trump might be able to push for like aggressively break through the
    3:17:46 secrecy. Let’s say even on the JFK files. I hope so. I have moderate confidence. You know,
    3:17:51 RFK Jr. has pushed him to do so. I would like to think so. At the same time,
    3:17:54 Thomas on got rolled last time. So I’m, you know, I’ll hold my breath.
    3:17:56 Why do you think that happens? Why do you think it gets…
    3:17:58 Remember that whole interagency thing I told you about? That’s how it happens.
    3:18:02 That’s another thing. You’re presuming that the president has the power to declassify this stuff.
    3:18:06 I’m saying that I’m not even sure we’re there, like in terms of…
    3:18:11 So it’s basically like stability. He basically says like, I would like to declassify JFK files.
    3:18:16 And they say, “Yes, sir. We’ll get that to you in three months and three months comes by.”
    3:18:18 And then they’re like, “Well, there’s these hurdles.”
    3:18:22 Well, the way you get around it is go, “Let’s release some, but these in particular,
    3:18:25 there’s national security secrets is a good case for not releasing them. X, Y, and Z.”
    3:18:28 You know, it’s like, you get around that. “Oh, okay. You know, that makes sense.”
    3:18:31 You know, and again, he’s a busy guy. He’s the president. He got way bigger.
    3:18:34 Shouldn’t worry about. So this is the, that’s the problem is that
    3:18:38 unless you have that true urgency, I mean, look, people of immense power have tried.
    3:18:42 Everyone forgets this. John Podesta was the White House chief of staff.
    3:18:47 He is a UFO true believer in his heart. He tried. He’s talked about it. He tried
    3:18:52 at the top level, the number two to the White House to get the Pentagon and others
    3:18:56 to tell them what was going on. And they stonewalled him. So people need to understand what
    3:19:00 you’re up against. And, you know, I would, and people are like, “How is that even possible?”
    3:19:06 It’s like, “Well, go read about the terror that LBJ and the Kennedys and others had
    3:19:13 in confronting J. Edgar Hoover. Go and read how terrified, you know, Eisenhower and some of them
    3:19:17 were, were of the dullest brothers. They were scared. Like they, they knew where the power lies.
    3:19:23 So, you know, the presidency, look, government, deep state, et cetera,
    3:19:28 they’ve been there a long time and they know what’s happening. And presidents come and go,
    3:19:34 but they stay forever. And so that’s, that’s the paradigm that you’re going to fight against.
    3:19:40 Yeah. I mean, it’s, it’s a bit of a meme, but I wonder how deep the deep state is.
    3:19:45 Much deeper than anyone can even imagine. And the worst part is with the deep state is it’s not
    3:19:51 even individuals, it’s actually an ideology. And ideology is the most, you know, people often think
    3:19:55 that if we took money out of politics that it would change everything. I’m not saying it wouldn’t
    3:20:00 change everything, but it wouldn’t change a lot. But people are like, “Oh, so-and-so is only against
    3:20:03 universal healthcare because they’re going to pay it.” I’m like, “No, no, no, that’s not why they
    3:20:08 actually believe it.” Or it’s like, “Oh, so-and-so is only wants to advocate for war with Iran
    3:20:12 because they’re on the payroll of APAC.” And it’s like, “Well, yeah, the APAC trips and the money
    3:20:18 helps, but they think that.” Actually, the system itself, this is a very Chomsky-esque systemic critique
    3:20:23 is that any journalist worth their salt would never have the ability to get hired in a mainstream.
    3:20:28 So he’s like, “It’s not that you’re bad in the mainstream media. It’s that anyone good
    3:20:34 is not allowed to be elevated to your position because they have an ideology.” And so, you know,
    3:20:40 that is the most self-reinforcing pernicious mechanism of them all. And that’s really Washington
    3:20:47 in a nutshell. It’s, again, a bubble, but a bubble that has a lot of power. Who do you think is the
    3:20:53 future of the Republican Party? After Trump. What happens to Trumpism after Trump? Like you just
    3:21:00 said, Bayesian, let’s take various theories, right? So let’s say it’s ’04. It’s Bush Cheney.
    3:21:05 In 2004, the day after the election, I would have told you this. We live in a Bible belt,
    3:21:11 Jesus Land America. This America wants to protect America, a war on terror against Iraq,
    3:21:18 and the axis of evil, and American people just voted for George W. Bush. And so, I would have
    3:21:22 predicted that it would have been somebody in that vein. And they tried that. His name
    3:21:27 was John McCain. He got blown the fuck out by Barack Obama. So I cannot sit here and confidently
    3:21:31 say it. What year would you be able to predict Obama? It was just his first time he gave the
    3:21:36 speech, the 2004 speech at the DNC. We don’t live in Black America, White America, the John Kerry
    3:21:42 DNC speech. You honestly could not have predicted it until ’07, whenever he actually announced his
    3:21:48 campaign and activated a lot of anti-war energy. I mean, maybe ’06, actually, I could have said.
    3:21:53 In ’06, if I was kind of the contrarian man now, I’m like, “Yeah, there’s a lot of anti-war energy.
    3:21:57 I think the next president will be somebody who’s able to vote.” You know, the explosion of Keith
    3:22:02 Oberman and MSNBC, it makes logical sense in hindsight. But, you know, at the same time,
    3:22:06 you’re going up against the Clinton machine who’s never lost an election. So I would have been
    3:22:12 afraid. I cannot confidently say. So I will say, if things go in different directions,
    3:22:17 if Trump is a net positive president, then I think it will be JD Vance, his vice president,
    3:22:23 who believes in the, a lot of the things that I’ve talked about here today about foreign policy
    3:22:28 restraint, about the working class, about changing Republican attitudes to the economy,
    3:22:34 and he would be able to build upon that legacy in the way that George W. H. W. Bush was able to
    3:22:38 get elected off the back of Reagan. But H. W. Bush was fundamentally his own man. He’s a very
    3:22:43 misunderstood figure, very different than Ronald Reagan. Didn’t end up working out for him, but,
    3:22:48 you know, he did get himself elected once. So that’s one path. That’s if you have a net positive
    3:22:54 Trump presidency. The other path is the 04 path that I just laid out. If George W. Bush, if Trump
    3:23:01 does what Bush does, misinterprets his mandate, screws things up, creates chaos, and it makes it
    3:23:07 just generally annoying to live in American society, then you will see somebody in the Republican
    3:23:12 Party. I mean, still, it could even be JD Vance, because he could say JD is my natural and my
    3:23:16 chosen successor, but then he would lose an election, and then he would no longer be the
    3:23:22 so-called leader of the Republican Party. So I could see it swing in the other direction. I could
    3:23:27 see, you know, Republicans or others, let’s say if it’s a total disaster, and we get down to like
    3:23:34 20% approval ratings, and the economy is bad and stuff like that, Glenn Yonkid, or somebody like
    3:23:41 that who’s very diametrically opposed to Donald Trump, or at least, you know, aesthetically,
    3:23:45 is somebody like that who could rise from the ashes. And I’m just saying, like, in terms of his
    3:23:50 aesthetic, not him per se. So there’s a variety of different directions. It’s a big question about
    3:23:55 the Republican base. I mean, a shit ton of people voted Republican now for the first time ever.
    3:23:59 So are they going to vote in party primaries? I don’t know. You know, the traditional
    3:24:06 party primary voter is like a white boomer, who’s like 58, 59. Is the Latino guy in California,
    3:24:10 who turned out to vote for Trump with a MAGA hat and rolling around, you know,
    3:24:14 suburban Los Angeles? With that, is he going to vote in the Republican Party? That could
    3:24:19 change. So the type of candidate themselves could come. So it’s just, it’s way too early to say,
    3:24:22 you know, we have so many variety of paths that we go down.
    3:24:29 Yeah, I think Trump is a singular figure in terms of, like, if you support Trump,
    3:24:38 there, I just, there’s a vibe. I know Kamala has a vibe, but there’s definitely a vibe to Trump
    3:24:46 and MAGA. And I just, I think even with JD, that that’s no longer going to be there. So if JD runs
    3:24:51 and wins, that would be on principles. And it’s a very different human being.
    3:24:56 He is so different than Trump, right? You can see his empathy, right? Remember in the VP debate,
    3:24:59 when he was like, Christ have mercy when Tim Walsh was talking about his son? I mean,
    3:25:04 that’s not something Donald Trump would say. Okay, it’s just not like, in terms of, I mean,
    3:25:08 you know, and this, by the way, this is my own bubble test. I have no idea how
    3:25:11 somebody listens to Trump and JD Vance is like Trump is the guy who should be the president
    3:25:18 over. I honestly, I don’t get it. That’s my own cards on the table. I am in too much of a bubble
    3:25:25 where I’m, my bias is to, you know, being well spoken and being empathetic or at least being
    3:25:30 able to play empathetic and being extremely well read about the world and thoughtful and somebody
    3:25:34 who’s, you know, somebody like him who’s engaged in the political process, but also has been able
    3:25:39 to retain his values and be extremely well articulate his worldview. That’s my bias. That’s
    3:25:43 who I would want to be the president. But you know, you know, that’s a big country. People
    3:25:48 think differently. By the way, I share your bias and I sometimes try to take myself out of that
    3:25:55 bubble. Like maybe it’s not important to have read a book or multiples of books on history.
    3:26:00 I’m not saying everybody should be like me, but that’s my, I’m checking myself by being like,
    3:26:04 because of who I am, that’s how I see the world. And that’s how I would choose a leader.
    3:26:09 But that is not how people vote, period. And I’ve, nothing has taught me that more than this
    3:26:12 election. I wish they did. I mean, I don’t know if that, I don’t know if that’s the lesson to
    3:26:16 take away. I think, yeah, but who are we to say people are allowed to do what they want. And I’m
    3:26:19 not going to tell somebody how to vote. No, what I’m saying is you take everything Trump,
    3:26:28 everything that Trump is doing, everything, the whole, the dance, all of it, and add occasional
    3:26:33 saga like references to history books. I think that’s just a better candidate.
    3:26:36 I agree with you. I mean, listen, you know, it’s my bias.
    3:26:41 Yeah, I don’t know. I don’t think that’s biased. I think, I think that’s not a bubble thinking.
    3:26:44 I think it’s amazing to me, right? Like listen to the JD interview with Rogan.
    3:26:53 I mean, JD, I mean, he’ll drop obscure references to studies, to like papers that have come out,
    3:27:02 essays, books, like this is a very well read, high IQ, well thought out individual who also,
    3:27:08 you know, has given his life to the political process and decided to like deal with all the
    3:27:12 bullshit that this entire system is going to throw at you whenever you start to engage.
    3:27:16 That’s who I would want to be president, but you know, I’m biased. So what can I say?
    3:27:20 I like how you keep saying you’re biased as if there’s some percent of the population doesn’t
    3:27:26 like people to read at all. Okay, what about the future? You kind of hinted at the future of the
    3:27:31 Democratic Party. Yeah. Do you see any talent out there that’s promising? Is it going to be
    3:27:36 Obama like figure that just rolls out of nowhere? Clinton is the better example, because the Democratic
    3:27:43 Party was destroyed for 12 years from 1980, the 1980 election to 1992. They’re 12 years out of power.
    3:27:50 In periods of that long of an era, it takes somebody literally brand new who is not tainted
    3:27:54 by the previous to convince the base that you can one and convince the country that you’re
    3:28:00 going in a new direction. So I would not put my money on anybody tainted by the great awokening,
    3:28:06 by TDS, by the insanity of the Trump era, that has to be somebody post that,
    3:28:13 and or somebody who is able to reform themselves. It will, in my opinion, it will likely not be
    3:28:18 any establishment politician of today who will emerge for the future. Like I said,
    3:28:24 my dark horse is Dean. I think that the Democratic base is going to give Dean a shit ton of credit,
    3:28:30 and they should for him being out. Let’s be honest, he’s a no-name congressman from Minnesota. Nobody
    3:28:35 cared who Dean Phillips was. But just like Obama, he had courage and he came out and spoke early
    3:28:39 when it mattered. And by doing that, he showed good judgment and he showed that he’s willing to take
    3:28:44 risk. So I would hope in America’s political system that we award something like that. And I do
    3:28:49 think the Democrats will reward him. But I’m not saying it will be him per se, but it will be a figure
    3:28:56 like that, who is not nationally known, who has read the tea leaves correctly, who took guesses
    3:29:02 and did things differently than everybody else. And most of all, I’m hoping that heterodox attitudes,
    3:29:08 ideas, behaviors, by definition, after a blowout, those will likely be the ones that are rewarded.
    3:29:11 So I cannot give a name, but I can just describe the circumstances for what it will look like.
    3:29:15 Can you imagine an amorphous figure that’s a progressive populist?
    3:29:22 It would be very difficult at this point, just because a huge portion of the multiracial working
    3:29:27 class has shifted to the right. But I could see it. I mean, look, people change their minds all the
    3:29:32 time. Like there are people out there who voted for Barack Obama, who’ve now voted for Donald Trump
    3:29:38 three times. So a lot can change in this country. If you make a credible case, you’ve got a track
    3:29:45 record, you speak authentically, and you can try to divide the country along class lines and be
    3:29:50 authentic and real about it. Maybe, I think you have a shot. I still think you’re probably going
    3:29:54 to get dinged on culture, just because I think this election has really showed us how important
    3:30:00 immigration and culture is. But you know, actually, what the left populist should pray for, and they
    3:30:04 won’t admit this, is that Trump actually solves immigration, like in terms of changing the status
    3:30:10 quo. You know how in the way that the Supreme Court just ended the conversation around gay marriage?
    3:30:13 So Republicans were like, yeah, whatever, we support gay marriage. Because like that’s the
    3:30:19 law of the land, it is what it is. They should just hope that their unpopular issue is resolved by
    3:30:24 the president. And thus, they just don’t have to talk about it anymore. And now the battleground is
    3:30:28 actually favorable for them. They get to talk about the economy and abortion. So their least
    3:30:34 popular issue gets solved by the president, by consensus, from his mandate, and then they can
    3:30:37 run on a brand new platform for the new issues that are facing America.
    3:30:40 All right, let’s put our historian head back on. Okay.
    3:30:47 Will the American Empire collapse one day? And if it does, when it does, what will be the reason?
    3:30:56 Statistically, likely. Statistically, yes. You know, what’s the famous Fight Club quote? It’s
    3:31:02 like on a long enough timeline, the survival rate for everything drops to zero. And you know.
    3:31:06 I like for all the books you’ve quoted, you went to Fight Club. I guess the movie, right?
    3:31:12 The book is good, though. People should read that too. In terms of why, again, statistically,
    3:31:20 the answer is quite simple. It usually comes back to a series of unpopular wars, which are pursued
    3:31:30 because of the elite’s interests. Then it usually leads to a miscalculation and not a catastrophic
    3:31:37 defeat. Normally, it comes gradually. And most of the times when these things end,
    3:31:41 the crazy part is most people who are living through end of empire have no idea that they’re
    3:31:45 living through the end of the empire. And I actually think about that a lot from, you know,
    3:31:49 “Decline and Fall of the Roman Empire” by Edward Gibbon. Actually, your episode on Rome was fantastic.
    3:31:54 People should go listen to that. So there you go. Another really good one. I like to think a lot
    3:32:01 about the British Empire and what eventually led to that collapse. And nobody in 1919 said
    3:32:05 the British Empire just collapsed. Basically, nobody still thought that. They were like,
    3:32:09 “Yeah, the First World War is horrible. But actually, we came out of this, okay, we still
    3:32:13 have India. You know, we still have all these African colonies and all that. But, you know,
    3:32:19 long periods of servitude, of debt to the United States, of degradation, of social upheaval,
    3:32:26 of Bolshevism, of American industrial might, and next thing you know, you find yourself at
    3:32:31 Potsdam and Churchill’s like, “Holy shit, I have barely any power in this room.” Right? So,
    3:32:38 revolutions happen slowly and then all at once. And so, could you really put a, you know, a real,
    3:32:42 like, pain in the end of the British Empire? It took almost 40 years for it to end. So,
    3:32:49 America’s empire will eventually end either from rising geopolitical competition, likely
    3:32:55 China could be India, nobody knows. It will likely be because of being overstretched, of
    3:33:04 elite capture is usually the reason why, and a misreading of what made your original society
    3:33:09 work, you know, in the first place. And that is one where, honestly, like, all three of those
    3:33:13 things will happen all at once. And it will happen over extremely long period of time. And
    3:33:18 it’s very difficult to predict. I would not bet against America right now. I think we have a
    3:33:22 lot of fundamental strengths, which is unique in dynamic country. It really is fucking crazy.
    3:33:26 Every time I travel the world, as much as I love all these different places, I go, “Man,
    3:33:31 I just, I love the United States so much.” You will love it more when you leave. I really
    3:33:38 believe that. So, yeah. And it’s nice to remember how quickly the public opinion shifts. Like,
    3:33:42 we’re very dynamic and adaptable, which annoys me. I understand that’s part of the political
    3:33:47 discourse saying like, “If Trump wins, it’s the end of America. If Kamala wins, it’s the end of
    3:33:53 America.” So stupid. Yeah. But I understand that the radical nature of that discourse is necessary
    3:33:58 to, like we mentioned. Yeah, to drive out votes. To drive out votes. I like to think about Americans
    3:34:03 in 1866. I cannot imagine going through a war where some X percent, I think it was like two or
    3:34:08 three percent or whatever, the entire population was just killed. Our president, who was this
    3:34:14 visionary genius who we’re blessed to have, is assassinated at Fork’s Theater immediately after
    3:34:19 the surrender of Lee Andrew Johnson, who’s a bumbling, like, fucktard, is the one who is in
    3:34:25 charge. And, you know, we’re having all these insane crises over like internal management,
    3:34:30 while we’re also trying to decide like this new order in the South and whether to bring these
    3:34:35 people, how to bring these people back into the Union. I mean, I would have despair, like in that
    3:34:39 year. I was like, “It’s over. This is it.” You know, the war, I’m like, “Was it worth anything?”
    3:34:43 You know, if Andrew Johnson is going to be doing this or even in the South, I mean, I can’t even
    3:34:48 imagine for what they were going through, too. You know, they have to go home and their entire
    3:34:52 cities are burned to the ground and they’re trying to readjust and, you know, their entire economy
    3:34:57 and way of life is overthrown in five years. I mean, that’s an insane time to be alive. And
    3:35:04 what do we know? It worked out, you know, by 1890s or so. There were people shaking hands,
    3:35:11 you know, Union. There’s a cool video on YouTube, actually, of FDR, who is addressing some of the
    3:35:16 last Gettysburg veterans. I think it was like the 75th anniversary or whatever. And you can literally
    3:35:22 see these old men who are shaking hands across the stone wall. It gives me hope. Yeah.
    3:35:29 Let’s linger on that hope. What is the source of optimism you have for the 21st century, for the
    3:35:36 century beyond that, for human civilization in general? Because, you know, it’s easy to learn
    3:35:43 cynical lessons from history, right? The shit eventually goes wrong. But sometimes it doesn’t.
    3:35:53 So what gives you hope? I think that the fundamentals of what makes humanity great
    3:35:59 and has for a long time are best expressed in the American character. And that despite all of our
    3:36:05 problems, that as a country with our ethos, a lot of stuff we talked about today, individualism,
    3:36:11 the frontier mindset, the blessings of geography, the blessings of our economy, of the way that
    3:36:16 we’re able to just incorporate different cultures and the best of each and put them all together,
    3:36:21 give us the best opportunity to succeed and to accomplish awesome things. We’re the country
    3:36:27 that put a man on the moon, which is the epitome of the human spirit. I hope to see more of that.
    3:36:32 And, you know, I think last time I was here, I shouted out, and I love Antarctic exploration.
    3:36:36 I’ve read basically every book that there is on the exploration of Antarctica. And one of the
    3:36:42 reasons I love to do so is because there is no reason to care about Antarctica. None. There’s
    3:36:48 nothing down there. Zero. Going to the South Pole is a truly useless exercise. And yet,
    3:36:53 we went. We went twice, actually. Two people went there within a span of five weeks and they
    3:37:01 competed to do so. And the spirit that propelled Amundsen and Scott’s expedition and people like
    3:37:06 Shackleton, who is like, if you were to ask me, my hero of all heroes, it’s Ernest Shackleton,
    3:37:12 is because his spirit, I think, lives on in the United States. It unfortunately died in Great
    3:37:16 Britain. And interestingly enough, the Brits even understand that they’re like, it’s very interesting
    3:37:23 how popular Shackleton is in America. And even though he was Irish and he was a British subject,
    3:37:27 to me, he’s a spiritual American. And I think that his spirit lives on within us
    3:37:34 and has always been here to a certain extent. And everywhere else, I think it’s dying. But here,
    3:37:38 I love it here, there’s so many cool things about America. People move around all the time. They
    3:37:42 buy new houses. They start families. There’s no other place you can just reset your whole life.
    3:37:47 In the same country, it’s wild. You can reinvent yourself. You can go broke. You can get rich.
    3:37:53 You can go back and forth multiple times. And there’s nowhere else where you have enough freedom
    3:37:58 and opportunity to pursue that. And I definitely have a lot of problems, but I’ve traveled enough
    3:38:03 of the world now to know that it’s a special place. And that gives me a lot of hope.
    3:38:08 I wish I could do a Bostonian accent of, we do these things, not because they’re easy,
    3:38:10 but because they’re hard. Because they are hard.
    3:38:19 Thank you. That’s so true. The Scott Irish guts. Well, listen, I’m a huge fan of yours,
    3:38:24 Sager. I hope to see you in the White House interviewing the president. There you go.
    3:38:27 That’s the only situation you’re going to see me in the White House.
    3:38:39 Front row and just talking free. I would love to live in a country and a world where it’s you
    3:38:46 who gets to talk to the press secretary, to the president. Because I think you’re a real,
    3:38:50 you’re one of the good ones, as far as journalists go, as far as human beings.
    3:38:55 So I hope to see you in there. And I hope you get to ask a question that
    3:38:58 ends up in a book that ends up in a good history book.
    3:39:03 Absolutely. Well, likewise, I’m a huge fan of yours. For anybody out there who’s interested,
    3:39:09 I compiled a list and I will go and retroactively edit it. Just go to Sager and Jetty.io. I created
    3:39:13 a newsletter with a website that has all the links to all the books I’m going to talk about here.
    3:39:15 Beautiful. The hundreds of books that were mentioned here.
    3:39:17 All right, brother, thank you so much for talking to me. Thank you.
    3:39:22 Thanks for listening to this conversation with Sager and Jetty. To support this podcast,
    3:39:27 please check out our sponsors in the description. And now let me leave you with some words from
    3:39:37 Voltaire. History is the study of all the world’s crime. Thank you for listening and hope to see you
    3:39:54 next time.

    Saagar Enjeti is a political journalist & commentator, co-host of Breaking Points with Krystal and Saagar and The Realignment Podcast. He is exceptionally well-read, and the books he recommends are always fascinating and eye-opening. You can check out all the books he mentions in this episode here: https://lexfridman.com/saagar-books
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep454-sc
    See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

    Transcript:
    https://lexfridman.com/saagar-enjeti-2-transcript

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Saagar’s Book Recommendations: https://lexfridman.com/saagar-books
    Saagar’s Substack (where he recommends more books): https://saagarenjeti.substack.com/
    Saagar’s X: https://x.com/esaagar
    Saagar’s Instagram: https://instagram.com/esaagar
    Breaking Points: https://youtube.com/@breakingpoints
    The Realignment Podcast: https://www.youtube.com/@therealignment
    Saagar’s Linktree: https://linktr.ee/esaagar

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Eight Sleep: Temp-controlled smart mattress cover.
    Go to https://eightsleep.com/lex
    AG1: All-in-one daily nutrition drinks.
    Go to https://drinkag1.com/lex
    LMNT: Zero-sugar electrolyte drink mix.
    Go to https://drinkLMNT.com/lex
    BetterHelp: Online therapy and counseling.
    Go to https://betterhelp.com/lex
    Shopify: Sell stuff online.
    Go to https://shopify.com/lex
    NetSuite: Business management software.
    Go to http://netsuite.com/lex

    OUTLINE:
    (00:00) – Introduction
    (09:47) – Why Trump won
    (14:48) – Book recommendations
    (18:24) – History of wokeism
    (25:54) – History of Scots-Irish
    (32:32) – Biden
    (36:34) – FDR
    (38:36) – George W Bush
    (40:59) – LBJ
    (46:15) – Cuban Missile Crisis
    (53:48) – Immigration
    (1:25:46) – DOGE
    (1:52:27) – MAGA ideology
    (1:55:39) – Bernie Sanders
    (2:04:00) – Obama vs Trump
    (2:21:00) – Nancy Pelosi
    (2:24:14) – Kamala Harris
    (2:40:00) – 2020 Election
    (3:03:49) – Sam Harris
    (3:14:55) – UFOs
    (3:20:47) – Future of the Republican Party
    (3:27:24) – Future of the Democratic Party
    (3:35:21) – Hope

  • #453 – Javier Milei: President of Argentina – Freedom, Economics, and Corruption

    AI transcript
    0:00:04 The following is a conversation with Javier Malay, the president of Argentina.
    0:00:12 He is a libertarian, anarcho-capitalist, and economist who campaigned with the chainsaw
    0:00:17 that symbolized his promise to slash the corrupt bureaucracy of the state.
    0:00:24 He stepped into the presidency one year ago, with a country on the brink of hyperinflation,
    0:00:31 deep in debt and suffering from mass unemployment and poverty. He took this crisis head on,
    0:00:36 transforming one of Latin America’s largest economies through pure free market principles.
    0:00:44 In just a few months in office, he already achieved Argentina’s first fiscal surplus in 16 years,
    0:00:50 and not just avoided hyperinflation, but brought inflation down to its lowest in three years.
    0:00:55 We discuss all of this in detail, both the successes and the challenges.
    0:01:01 His depth of knowledge of economic principles, metrics, and data was truly impressive,
    0:01:07 and refreshing to hear from a world leader. But even bigger than the economic transformation of
    0:01:13 Argentina, Javier represents the universal fight against government corruption and the fight for
    0:01:20 freedom, economic freedom, political freedom, and freedom of speech. He has many critics,
    0:01:24 many of whom are part of the corrupt establishment he’s seeking to dismantle.
    0:01:29 But many are simply Argentinian citizens, scared of the pain
    0:01:32 his radical policies may bring, at least in the short term.
    0:01:38 But whether one disagrees with his methods or not, no one can deny that his presidency
    0:01:44 marks one of the most ambitious attempts at economic transformation in modern history,
    0:01:50 and that Javier Malay is truly a force of nature, combining the rigor of an economist
    0:01:55 with the passion of a revolutionary in the fight for freedom of a nation he loves.
    0:02:01 Argentina is one of my favorite countries, so I sincerely hope he succeeds.
    0:02:08 This interview was conducted with the president speaking Spanish and me speaking English,
    0:02:13 with an interpreter simultaneously translating. We make the episode available,
    0:02:20 overdubbed and subtitled in both English and Spanish, thanks to our great friends at 11 Labs.
    0:02:24 If you’re watching on YouTube, you can switch between English and Spanish by
    0:02:27 clicking the gear icon, selecting audio track, and then choosing the language.
    0:02:33 Same with the captions. If you’re watching on X, I’ll post both Spanish and English versions
    0:02:38 separately. If you’re watching on Spotify or listening elsewhere, I’ll probably only post the
    0:02:44 English version. This is the first time for me doing something like this, in a foreign language.
    0:02:50 It was challenging, but illuminating. I hope to continue talking to many world leaders for
    0:02:56 two to three hours in this way, including Volodymyr Zelensky, Volodymyr Putin, Narendra Modi,
    0:03:04 and Xi Jinping. I want to explore who they are, how they think, and how they hope to help their
    0:03:10 country and humanity flourish. And now a quick few second mention of a sponsor.
    0:03:14 Check them out in the description. It’s the best way to support this podcast.
    0:03:21 We’ve got Aidsleep for Naps, Netsuite for Business, BetterHelp for your mind, AG1 for your health,
    0:03:28 and Element for electrolytes. Choose wisely, my friends. Also, if you want to get in touch with
    0:03:34 me for whatever reason, go to lexfreedman.com/contact. And now on to the full ad reads. I try to make
    0:03:38 these interesting, but if you skip them, please still check out our sponsors. I enjoy their
    0:03:46 stuff. Maybe you will too. This episode is brought to you by Aidsleep, and it’s Pod for Ultra.
    0:03:52 The pod part is the thing that measures all the data from your body and cools the bed on each
    0:03:58 side of the bed separately. And then there’s the Ultra, which is the base that goes between
    0:04:03 the mattress and the bed frame. It can control the positioning of the bed. It’s really incredible
    0:04:09 technology. They sent me some notes that are, in theory, supposed to be helpful. And it says,
    0:04:17 “Celebrities who use the pod, Elon Musk, Mark Zuckerberg, Lex Freedman. I am a celebrity.
    0:04:22 Dr. Andrew Huberman, Dr. Peter Atia. I didn’t get a doctor.
    0:04:31 I do think doctor in front of the name is a useful thing for people that obviously got a
    0:04:39 PhD or medical doctors. I think it’s a useful shorthand to let people know that there’s
    0:04:46 some kind of expertise here.” But there was a funny moment where I got a chance to get dinner
    0:04:53 with Andrew Huberman and Peter Atia. And the person that sat us down for dinner said,
    0:05:02 “Dr. Huberman, Dr. Atia, Mr. Freedman.” And I kind of teased him about it. But obviously,
    0:05:07 I enjoy being called Mr. Please Never Call Me Doctor and also just call me Lex. It really
    0:05:12 doesn’t matter. And definitely, I don’t think of myself nor do I think I am a celebrity. Anywho,
    0:05:20 they have a special Black Friday offer. If you go to acelib.com/lex and use code Lex,
    0:05:27 you’ll get up to $600 off your Pod for Ultra Purchase when bundled. That’s acelib.com/lex.
    0:05:34 This episode is brought to you by Netsuite and all in one cloud business management system.
    0:05:42 For some reason, I just thought of Mark Andreessen, one of the great minds in Silicon Valley in tech.
    0:05:47 And I probably should talk to him soon. I’ve had several conversations with him and I’ve listened
    0:05:54 to him on his own podcast and sort of speak and tweet about just there’s a depth of insight about
    0:05:59 how much should tech entrepreneurs care about government, about the way government works,
    0:06:04 about how to communicate with politicians, all that kind of stuff in order to have some
    0:06:11 regulation but not too much regulation so that they can build epic shit without government getting
    0:06:17 in the way unnecessarily. Actually, a company, 11 labs that helped with the translation and the
    0:06:24 dubbing for this episode. Incredible group of folks. Great engineers, just a great company. I’ve
    0:06:28 been having a lot of conversation with the CEO and I think they’re doing a truly beautiful thing,
    0:06:32 breaking down the barriers that language creates and doing that in a way that’s,
    0:06:37 you know, accessible to a lot of people. It’s still at this time very, very expensive,
    0:06:43 but it’s cheaper than you would be done by human and hopefully better and better as the
    0:06:48 technology improves. I really want to be playing with this technology. But the thing I want to
    0:06:54 comment on is just a great company and a great business and great set of folks. So I care about
    0:07:00 the running, the functioning, the ways of such great companies. How’s that for a segue? Netsuite
    0:07:08 helps you. In fact, it helps over 37,000 companies who have upgraded to Netsuite by Oracle.
    0:07:13 It helps businesses run all kinds of messy stuff. Take advantage of Netsuite’s flexible
    0:07:21 financing plan at Netsuite.com/Lex. That’s Netsuite.com/Lex. This episode is brought to you by Better
    0:07:27 Help, spelled H-E-L-P Help. They figure out what you need and match you with a licensed therapist
    0:07:34 in under 48 hours. I was really pumped talking to President Javier Millet about life, frankly.
    0:07:41 I could probably talk to him for many more hours. Such a brilliant but kind-hearted, warm person
    0:07:48 on mic, but I got a chance to interact with him a bunch before and after off mic and just a warm
    0:07:54 person. Just a human being who saw me, who noticed me, who smiled and just had this way.
    0:08:04 That’s not just maybe a fake charisma. It’s a real human charisma and a nervousness and a joy,
    0:08:11 all of that together. Obviously, a brilliance backed by a set of principles and a desire to see
    0:08:20 freedom win. There is a sense that freedom is a powerful force for the human mind.
    0:08:27 A lot of our conversation was focused on economics, but the responsibility and the possibility
    0:08:34 of taking control of your own destiny is a powerful idea. It’s an American idea,
    0:08:41 and there are many other places in the world that are captivated by that idea. He’s one of the great
    0:08:49 elucidators and implementers of that idea. I love Argentina, so I hope that he succeeds.
    0:08:54 Anyway, all that to say is freedom is good for the mind, and another thing that’s good for the
    0:09:00 mind is better help. Check them out at betterhelp.com/lex and save in your first month. That’s betterhelp.com/lex.
    0:09:07 This episode is brought to you by AG1, an all-in-one daily drink to support better health
    0:09:14 and peak performance. You know what I drink AG1 after? I drink AG1 after a long soccer,
    0:09:20 aka football game. I used to play a lot of both soccer and football. Obviously, I played a lot
    0:09:25 of soccer and childhood. I say that obviously because most of the world except the United States,
    0:09:29 that’s kind of the sport that every kid plays because it’s so accessible.
    0:09:34 Anyway, I was a big fan of Diego, Armando, Maradona when I was growing up and just
    0:09:42 seeing the World Cups in which we played the famous Goal of the Century and the Head of God Goal and
    0:09:49 just the aura and the genius and the feel he had was mesmerizing and just inspiring for a kid.
    0:09:56 When Leonor Messi came around, I think I first saw when he was in the youth league, 17,
    0:10:03 maybe 16, 17, I’m not sure. There was something else. There was just genius there. I do consider
    0:10:12 it a huge gift to humanity that his genius only developed, it grew, it flourished. It was a tragedy
    0:10:16 that he didn’t win the World Cup for the longest time or didn’t help Argentina win the World Cup
    0:10:23 until very recently, which he did and he completed. He won everything you could possibly win and that
    0:10:30 was such a beautiful historic moment. The greatest player of all time, Leonor Messi, in my opinion,
    0:10:37 in most people’s opinion. I do hope to talk to him in this experiment, this chance I got to
    0:10:43 talk to Javier Malay with an interpreter and all this mess and I apologize if I screwed it all up
    0:10:49 in different ways. I really tried. I tried to figure out how we could make him most accessible
    0:10:55 for both English and Spanish speakers and all that kind of stuff. All that had to come together
    0:11:00 in just a handful of days. I think like three days I had to figure it all out and never done
    0:11:06 anything like it. So this sort of emboldened me, gave me confidence that it’s possible to do. And
    0:11:11 there is, of course, a Spanish speaker that I would very much love to talk to and his name,
    0:11:16 like I said, is Leonor Messi. And so now I’m a little bit more confident that that is something
    0:11:24 I could handle if given the opportunity. And I hope to celebrate him properly if I ever get a
    0:11:31 chance to speak with him. Anyway, try out AG1. They’ll give you one month supply of fish oil
    0:11:37 when you sign up at drinkag1.com/lex. This episode is also brought to you by Element,
    0:11:44 my daily zero sugar and delicious electrolyte mix. Oh, and I should also say that I don’t get a
    0:11:48 chance to play soccer that much these days. And I’m not sure why. I think for a couple of years,
    0:11:56 I had a few injuries, like slight injuries related to jujitsu that made sort of the sprinting and
    0:12:02 maybe the fast turning and the pivoting and the planting of feet, all that kind of stuff
    0:12:10 for many hours at a time difficult. Or rather, I should say, I was trying to let the injuries heal
    0:12:15 if I played a lot of soccer, they just wouldn’t heal. But soccer is, as a sport, one of my favorite
    0:12:22 sports to participate in. And as a form of exercise, it makes time just disappear. Like I could do
    0:12:30 sprint after sprint after sprint, running around the field for hours. And like a little kid still,
    0:12:35 I just forget time. You don’t realize how much calories you burn. You don’t think about anything.
    0:12:41 You don’t realize how exhausted you are. You’re just full of joy and the competition, the excitement,
    0:12:47 maybe it puts me right back there to all the football games I’ve watched as a kid.
    0:12:52 Like I’m now pretending to be Maradona. I’m not pretending to be Leonon Messi. I’m
    0:12:59 not pretending to be all those sort of superstars and enjoying the fun of it.
    0:13:07 Yeah. Anyway, before and after I would probably drink an element. Get a sample pack for free
    0:13:16 with any purchase. Try it at drinkelement.com/lex. By the way, you know, this is the first time I’m
    0:13:25 trying something like this. The episode I’m publishing on this audio feed is an English
    0:13:31 dubbed audio track. And the voice cloning is done by AI. Thank you for our great help.
    0:13:37 Thank you for the help from the great 11 Labs team. And there’s a lot of human in the loop,
    0:13:43 improving the translation, improving the voice, all that kind of stuff. But I’m not sure what kind
    0:13:50 of thing makes it a pleasant experience for just audio listeners. And I primarily myself am usually
    0:14:00 an RSS audio listener. So I really care about this medium of podcasting. It is the original,
    0:14:10 the main, to me way to consume podcast freedom. As Javier said, “Viva la libertad carajo.”
    0:14:17 Yeah, I truly believe that RSS is freedom. That’s what podcasting is all about.
    0:14:22 This is the Lex Friedman podcast. To support it,
    0:14:35 please check out our sponsors in the description. And now, dear friends, here’s Javier Malay.
    0:14:49 When did you first understand the value of freedom, especially economic freedom?
    0:14:54 Well, actually, I came to understand the ideas of freedom.
    0:15:01 As an economic growth specialist back in the years of 2013 to 2014,
    0:15:10 I could see that per capita GDP statistics over the last 2000 years of the Christian era
    0:15:19 essentially looked like a hockey stick, indicating that per capita GDP remained almost constant until
    0:15:27 around 1800, after which it accelerated sharply. In the same context of that phenomenal increase
    0:15:35 in productivity and per capita GDP, the population had multiplied sevenfold over the preceding 200
    0:15:44 years. So basically, in economics, that means you get increasing returns, and the presence of
    0:15:51 increasing returns implies the existence of monopolies, concentrated structures, and according
    0:15:57 to traditional neoclassical economic theory, the presence of monopolies and concentrated
    0:16:04 structures is not a good thing. But at the same time, one could see that living standards had
    0:16:11 increased tremendously, and that middle-income people ended up living far better than emperors
    0:16:20 did in the Roman era, and the population had gone from having 95% of people in extreme poverty
    0:16:27 to less than 10%. And in that context, the question was how it could be that something
    0:16:32 that had lifted so many people out of poverty, that had improved human condition so much,
    0:16:37 could be something bad for economic theory, meaning something was not right.
    0:16:47 So in that context, I remember that one of the people who worked on my team suggested I read
    0:16:54 an article by Murray Newton Rothbard called Monopoly and Competition. I remember reading it
    0:17:01 like it was today. And after reading it carefully, I said, “Everything I’ve taught about market
    0:17:09 structure in the last 20 years in courses on microeconomics is wrong. This caused a very
    0:17:17 strong internal commotion in me, so I called this person who used to work with me, and they recommended
    0:17:25 a place to buy Austrian School of Economics books. And I remember I bought at least 20 or 30 books,
    0:17:33 which I went to pick up one Saturday afternoon. And when I visited the bookstore, I was fascinated
    0:17:39 by all the stuff they had there. So I went back the next day and I started calculating how much
    0:17:46 money I needed to pay for my dog’s food. That’s my four-legged child and how much I needed to
    0:17:55 spend on the taxi fare and food. And then with what I have left, I spent all of it on more books.
    0:18:01 And then I started to read very intensively. And I remember, for example, the experience
    0:18:10 of reading “Human Action” by Mises. And this was a book that I didn’t know about. And I remember
    0:18:18 that on the following weekend, I started to read this book right from the first page. And I didn’t
    0:18:25 stop until I finished it, and that was a true revolution in my head. And having the chance to
    0:18:36 read Austrian authors like Rothbard, Mises, Hayek, Hoppe, and Jesus Huerta de Soto, or others like
    0:18:42 Juan Ramon Ralo, Philip Bagus, and Walter Bloch, for example.
    0:18:50 That was very inspirational. And at one point, I got the opportunity to read
    0:18:58 related to the works of Alberto Venegas-Linchijo. And I also had the pleasure and honor to meet him.
    0:19:07 And today, we are actually friends. So that paved the way for me to approach the ideas of freedom.
    0:19:14 And another book that was a very significant influence and impact on me was “The Principles
    0:19:22 of Political Economics” by Menger. It was truly eye-opening. Or let’s say, for reading “Ogen von
    0:19:34 Biumbavark”. These were things that really challenged all of my former thinking. I had a vague idea
    0:19:42 and poor about the Austrian school. The only thing I had read about the Austrian school until then had
    0:19:53 been “Money and Time”, a very good book by Garrison. But now that I understand a little bit more about
    0:20:01 Austrian economics, I know that it was rather poor. This doesn’t mean that the book isn’t good.
    0:20:08 But there were a whole lot of things to read that ended up being truly fascinating.
    0:20:17 So from that, what is now today, and maybe you can talk about the evolution, is your philosophy,
    0:20:21 economics philosophy. You’ve described yourself as an anarcho-capitalist,
    0:20:28 market anarchist, libertarian. That’s the ideal. And then maybe in practice, in reality,
    0:20:34 you’ve said that you’re more of a “minarchist”. So lay it all out. What’s your economics philosophy
    0:20:43 today? Strictly speaking, I am an anarcho-capitalist. I despise the state government. I despise violence.
    0:20:52 Let us suppose we take the definition of “liberalism”. I usually use the definition of “liberalism”
    0:21:00 given by Alberto Venegas Linchijo, which is very much in line with the definition of John Locke,
    0:21:08 which essentially matches the definition by Alberto Venegas Linch Jr., who said that “liberalism
    0:21:14 is the unrestricted respect for the life project of others based on the principle of non-aggression
    0:21:20 and in defense of the right to life, liberty and property”. So I frame all of the discussions
    0:21:27 within those terms. And the fact is that when you get to that notion, I would dare say that
    0:21:34 you become an anarcho-capitalist de facto. And what that describes, it is an idea,
    0:21:41 which represents my ideal world. I mean, that is the ideal world. Now real life poses a whole
    0:21:48 lot of restraints. And some of those you can lift, and those restrictions and others you can’t.
    0:21:58 So, in real life, I am a monarchist. I advocate for minimizing state size. I try to remove as
    0:22:04 many regulations as possible. In fact, that is what I used to say during my campaign,
    0:22:08 and let’s say that is what I’m now carrying out. We have just carried out the largest structural
    0:22:14 reform in Argentine history. It is a structural reform that is eight times larger than Menem’s,
    0:22:20 which had been the largest structural reform in history. And we did that with 15 percent of the
    0:22:26 representatives and 10 percent of the senators. Furthermore, we have a deregulation ministry
    0:22:32 where basically every day we eliminate between one and five regulations. On the other hand,
    0:22:39 we have 3,200 additional structural reforms pending to the point that the day we finish all
    0:22:46 these reforms, we will be the freest country on the planet with the consequences they have in terms
    0:22:52 of well-being. Think about this. When Ireland started market reforms just over 40 years ago,
    0:22:59 it was the poorest country in Europe. Today, its GDP per capita is 50 percent higher than that of
    0:23:10 the United States. So, I have a current situation, and what I am constantly looking for, whether
    0:23:18 from my academic works and my outreach notes and books, is the world we have today. That every
    0:23:26 day we are closer, that every day we gain more freedom, because there are some very interesting
    0:23:34 things here. First, I would like to quote Milton Friedman. There is a moment when they do an
    0:23:40 interview with Milton Friedman, and they ask him about liberals, and then he says that there are
    0:23:46 three types of liberals. There are the classical liberals, where, for example, Adam Smith or Milton
    0:23:53 Friedman himself could fit. Some say that Hayek could fit into that category. For me, Hayek is a
    0:23:59 minarchist. Then you have the minarchist, where you could clearly find, in that place,
    0:24:11 Mises Hayek. One could find, in philosophical terms, Nosig, and basically Einrend, and at one point,
    0:24:18 Milton Friedman, based on his own son, he says, “But if you look closely, there are some who are
    0:24:25 anarchists. Let’s say, probably from my point of view, the person who has been the greatest
    0:24:36 inspiration in my life is essentially Murray Newton Rothbard. Therefore, there are two dimensions.
    0:24:44 One is where I want to go, and the topic is where I stand. The most important thing is to try each
    0:24:53 day to advance further toward that ideal of anarcho-capitalism. In that sense, sometimes we
    0:25:01 face strong and harsh criticism regarding that ideal vision. I think that’s the Nirvana fallacy.
    0:25:07 If you compare yourself against paradise, everything is horrible and miserable,
    0:25:14 but you don’t live in paradise. You live on earth. Basically, what you need to understand
    0:25:20 is something called the state conditions. Let’s suppose that you don’t like rectangular tables.
    0:25:31 You prefer circular tables. Now, the reality is, I have only a few hours until I go and catch my
    0:25:39 flight, and the table is rectangular. You like a circular table around one, but there isn’t one.
    0:25:47 What you have is a rectangular table. So either we do the interview here or we just can’t do it.
    0:25:54 So what do you do? You adapt to the current conditions. This is what there is now. So then
    0:26:00 you have some restrictions that you can change and others that you cannot. The idea is to modify
    0:26:06 all the ones that can be changed in the short term and start working on those that can be modified
    0:26:15 in the medium or long term. For example, if you really like round tables, perhaps the next interview
    0:26:20 we may do at a round table, we’re going to try and solve it, but today it’s something that we
    0:26:28 couldn’t possibly solve. So that’s basically the idea, right? Let’s say it’s about understanding
    0:26:35 that some restrictions you can’t change, others you can, and there are institutional restrictions too.
    0:26:41 There are many anarcho-capitalists who are dedicated to criticizing,
    0:26:45 and incredibly, they do so with more violence towards liberals.
    0:26:54 And many of them actually criticize me, which truly make no sense because it is precisely
    0:27:06 the nirvana fallacy. But the reality is that, look, in Argentina, for example,
    0:27:13 the most popular sport is soccer. When you go to watch an Argentina match, it is beautiful.
    0:27:18 The stands are full and they’re all painted with sky blue and white colors.
    0:27:25 There is a lot of joy. People sing songs that are very fun, that are very distinctive.
    0:27:36 It’s very much part of Argentine folklore, so to speak. But you see, that beautiful show is
    0:27:41 external. That is to say, it does not determine the outcome. You place the ball in the middle of
    0:27:46 the field and no matter how much people shout, the ball doesn’t move. The one who moves the ball
    0:27:54 and scores the goals is messy. So, what do I mean? If you don’t get involved and don’t get into it,
    0:28:03 no, you don’t do anything. So, I mean, what do I know is that there are many liberals, libertarians,
    0:28:09 and anarcho-capitalists who are really useless because all they do is criticize, let’s say,
    0:28:14 those of us who want to lead the world toward the ideas of freedom, and what they don’t realize
    0:28:21 is that power is a zero-sum game. And if we don’t have it, then the left will have it.
    0:28:29 Therefore, if you level your harshest criticism at those in your own ranks, you end up being
    0:28:40 subservient to socialism, probably. And also, for instance, you have cases of strong hypocrisy,
    0:28:48 let’s say, I have seen cases of agarists. I mean, it’s the anarcho-capitalists who
    0:28:54 criticize Rothbard because he said that you have to get into politics, otherwise the socialist will
    0:29:04 advance. And it’s interesting because some of them, I have seen them criticizing, proposing agorism.
    0:29:13 And I remember one of them, one day, the police showed up, and honestly, he was peeing himself.
    0:29:22 So, I mean, it’s very easy to criticize, propose, and suggest, but if he was truly such an agonist,
    0:29:28 he should have been willing to endure going to jail. However, when it was time to face the
    0:29:34 consequences of the idea he was promoting, he froze, wet his pants, and ended up, let’s say,
    0:29:40 accepting all the restrictions because, clearly, it was better to be out of jail than in jail.
    0:29:51 But in doing so, he sold out his ideas. So, it seems to me that no, not taking into account the
    0:30:00 restrictions of the situation only serves to be functional to socialism because all it does is
    0:30:08 strike against one’s own. So, you became president 11 months ago. Can you again describe some of
    0:30:13 the actions you took? For example, you cut half the number of government ministries,
    0:30:20 layoffs, removed price controls. It would be interesting to lay out the first steps and what’s
    0:30:26 next. If you allow me, I will first give you a description of the situation we received,
    0:30:34 and based on that, I will tell you each of the things we did when we first
    0:30:43 took office. Basically, what we found was that in the first week of December, inflation was
    0:30:55 rising at a rate of 1% per day, which means 3,700% annually. In the first half of December,
    0:31:03 it had accelerated to 7,500% annually. When you look at wholesale inflation in December of last
    0:31:12 year, it was 54%, which, if annualized, would equate to an inflation rate of 17,000% per year.
    0:31:19 And, in addition, Argentina, for the previous 10 years, had not been growing,
    0:31:31 with a drop in GDP per capita of approximately 15%. And the reality was that nearly 50% were
    0:31:40 living in poverty. Now, later, I will get deeper into that discussion. And the reality is that we
    0:31:48 had a fiscal deficit, which amounted to 15% of GDP. Five points were in the Treasury,
    0:31:53 10 points were in the central bank, which was endogenous monetary issuance.
    0:32:02 And the reality is that we also had interest-bearing liabilities at the central bank equivalent
    0:32:09 to four monetary bases, maturing in one day, meaning we could have quintupled the amount
    0:32:16 of money in one day. We had peso-denominated maturities amounting to the equivalent of
    0:32:24 $90 billion. The central bank had negative net currency foreign reserves minus $12 billion.
    0:32:32 We had commercial debts in the central bank equivalent to $50 billion. There were company
    0:32:42 dividends held back amounting to $10 billion. Therefore, if we had instantly opened up,
    0:32:46 you see, I say we are liberal libertarians. We are not liberal fools.
    0:32:55 That’s what some anarchist liberal suggested, meaning that we basically open everything on the
    0:33:04 first day. So in that context, of course, if we had done that, we would have encountered hyperinflation.
    0:33:11 Therefore, that would have led to the number of poor people being around 95%.
    0:33:19 And probably, and by December, the Peronis party would have organized supermarket
    0:33:25 sleutings and would have done all sorts of things and would have probably been ousted.
    0:33:29 And by the first part of the year, the Peronis would have gone back to office.
    0:33:39 So to us, it was crucial to end fiscal deficit. One of the things we promised during the campaign
    0:33:48 had been to reduce the number of ministries. And indeed, we reduced to less than half the
    0:33:51 number of ministries because we went to nine ministries. Today, we have eight.
    0:33:58 We have also laid off a large number of civil employees. Today, I can say that we have already
    0:34:07 dismissed about 50,000 of them. And we practically don’t renew any contracts unless the positions
    0:34:16 are absolutely necessary. At the same time, we have stopped public works and we have eliminated
    0:34:24 discretionary transfers to the provinces. We have also diluted public sector wages.
    0:34:32 Also, we have eliminated economic subsidies by restoring utility rates to the right levels.
    0:34:44 And in that, let’s say in this context, we achieved fiscal balance as far as the treasury is concerned.
    0:34:52 This is very important because in the last 123 years, Argentina had a deficit for 113 of them.
    0:34:58 And in the 10 years, it did not have a deficit because it was not paying the debt. So that was
    0:35:05 absolutely false. And they told us it would be impossible to do that. We had planned to do so
    0:35:11 within a year. And they said it wasn’t possible to adjust by more than one percentage point.
    0:35:19 And we achieved fiscal balance in the month of January that is the first month of administration.
    0:35:28 At the same time, we also cut social plans linked to intermediation. This is very important
    0:35:35 because we knew we were going to make a very tough adjustment. And we knew that this was going to have
    0:35:44 a cost in social terms. And we knew that we had to offer support during the first month,
    0:35:51 I mean the first quarter and second quarter in office. One of the things we did was to eliminate
    0:35:58 what are known as poverty managers, that is intermediaries. Basically, people have a card
    0:36:06 through which they receive assistance. But it happens that they had to provide a counter service.
    0:36:13 And that counter service was verified by a group called the Picateros. So in that context,
    0:36:20 when they were going to sign, the counter service took away half of the money. So by removing that
    0:36:26 payoff, they stopped extorting them, stopped stealing their money. And with the same amount of money,
    0:36:33 they received double the resources. And of course, we also provided an additional boost.
    0:36:40 So let’s say that this is related to the five adjustment points in the Treasury.
    0:36:47 Now what happens? As we began to achieve fiscal balance and no longer needed to issue money to
    0:36:55 finance ourselves. And as we also met interest payments and some capital repayments. One of the
    0:37:01 things that happened is that the debt market began to be recreated. So we were able to take
    0:37:06 debt out of the central bank and transfer it to the Treasury where it should have always been.
    0:37:14 And that meant an adjustment of approximately 10% of GDP. Everyone said this would be impossible
    0:37:19 and couldn’t be fixed. Essentially, what we did was implement a fiscal adjustment at the
    0:37:27 central bank amounting to 10% of GDP. So if you ask me, it’s clear that we have not only made the
    0:37:32 biggest fiscal adjustment in the history of humanity, because we made a fiscal adjustment
    0:37:41 of 15 points of the GDP. But also most of that went back to the people as less seniority,
    0:37:46 as a lower inflation rate. It’s true that we temporarily raised the country tax,
    0:37:51 but we lowered it in September. And now in December, we’re going to eliminate it.
    0:37:57 Today, for example, we also announced that in December, we are eliminating import taxes.
    0:38:06 In fact, in that regard, what you have is that we return to the people 13.5 points of GDP
    0:38:14 because the real tax burden is the size of the state. So while back in December, we were discussing
    0:38:22 hyperinflation, today we are discussing 30-year loans. In other words, all those resources that
    0:38:28 the national government used to take are now back in the private sector. And that’s what has
    0:38:35 allowed it to be very dynamic. And this has two very strong impacts. The first one is that if you
    0:38:44 look at wholesale inflation, it went down from 54% to 2%. So it went down by 27 times. It was
    0:38:52 divided into 27. So we had inflation at a rate of 17,000% annually. And it’s now close to about
    0:39:01 28% a year. But it’s not only that. You could consider consumer inflation. The latest consumer
    0:39:09 inflation rate was 2.7%. Now it happens that we, essentially due to a matter that is related to
    0:39:16 the central bank’s balance sheets and also due to the debt stocks, we still have controls in place
    0:39:24 and we are eliminating restrictions day by day. Now, the interesting thing is that we have a 2%
    0:39:30 monthly day valuation standard. And there’s international inflation, of course,
    0:39:37 which means that you then have to subtract 2.5 points from the inflation observed by the consumer.
    0:39:42 This indicates that inflation in Argentina, the true inflation, not the induced one, but
    0:39:52 the actual monetary inflation is 0.2% per month. At 0.2% per month, this equates to 2.4% annually.
    0:39:58 What I’m saying is the original discussion was about whether inflation could reach 17,000%.
    0:40:08 Now, we are bringing inflation down to levels of 2.5% annually. And that is amazing. And we
    0:40:16 achieve this by considering a number of factors. The first one is that we did not experience a
    0:40:22 previous hyperinflation, which would have simplified the process of implementing a stabilization
    0:40:29 program. Typically, when hyperinflation occurs, monetary assets are diluted, leading to a natural
    0:40:37 restoration of demand. And besides, we did not resort to any expropriation. For example, before
    0:40:41 the convertibility plan, which was the most successful program in Argentina’s history,
    0:40:46 Argentina experienced two instances of hyperinflation. During Alfonseen’s administration,
    0:40:53 inflation reached 5,000% and under Menem, it was 1,200%. Additionally, there was the bonex plan,
    0:40:59 under which debt was exchanged on a compulsory basis. In other words, what we did instead was
    0:41:07 clean up the central bank balance sheet. So with that, we cleaned up the central bank’s balance
    0:41:15 sheet. We cleared a loss of $45 billion, all voluntarily. And the most amazing thing is that
    0:41:21 we did it in just six months. And at the same time, we have not controlled prices, nor have we
    0:41:28 fixed the exchange rate. And this is very important. All previous stabilization programs
    0:41:35 in an effort to show quick results used to do this. What they would do is, before announcing
    0:41:41 the plan, they would adjust the rates. And once the rates were adjusted, they would launch the plan.
    0:41:49 But in our case, we couldn’t afford that luxury. So we had to implement it on the go. And also,
    0:41:55 over the past few months, that is to say, companies brought in rates that covered only
    0:42:03 about 10%. Whereas today, they cover 80%. So you get the picture. Just imagine the adjustment we
    0:42:10 are making. And in that sense, it is also incredible what we have achieved. Because if we were to work
    0:42:16 with the inflation we have in our country today, considering the exchange rate situation, the
    0:42:22 figures are even better than during the convertibility program, which was the most successful
    0:42:30 economic program in Argentina’s history. And in fact, there is an article called Passing the Buck,
    0:42:36 which is by Cerrado de la Paulera Bozzoli and Irriguin, that demonstrates that Menem’s first
    0:42:44 government was the best government in history. And basically, it argues two things, in the success
    0:42:51 of the stabilization of the convertibility program. So if you take a closer look, when you examine it
    0:42:58 carefully, when you account for all these factors, our disinflation process is actually much more
    0:43:04 genuine. And not only that, it’s also much deeper. We are restored freedoms to Argentinians while
    0:43:12 simultaneously implementing a structural reform eight times larger. And we accomplish this with only
    0:43:19 with 15% of the representatives, 10% of the senators, and within the first six months of
    0:43:27 government. In other words, our deregulation agenda continues daily, and we still have 3,200
    0:43:33 structural reforms pending. This will ultimately make Argentina the freest country in the world.
    0:43:40 Moreover, to have a sense of magnitude, the reforms that we already have made with the
    0:43:48 Executive Order 7023, and with the basis law, we have actually jumped 90 places in terms of economic
    0:43:54 freedom. What this means is that today, Argentina has institutions similar to those of Germany,
    0:44:02 France, Italy, and we obviously want this to continue. And let’s say we are going to surpass,
    0:44:07 no doubt, the levels of economic freedom that Ireland reached in its best moment. And not only
    0:44:13 that, we’re going to exceed the levels of economic freedom of Australia, New Zealand, and Switzerland.
    0:44:16 We are undoubtedly going to be the freest country in the world. And this,
    0:44:25 and this means that thanks to what we’ve done today, we are on a path that allows us to multiply
    0:44:33 our per capita GDP by 2.5 times when you apply the relevant correction. And this, of course,
    0:44:40 is something very interesting because it implies a huge increase in well-being. And furthermore,
    0:44:45 today, the Argentinian economy is already strongly and amazingly recovering.
    0:44:51 And we can say, analysts’ hypotheses were suggesting that next year we would be growing
    0:44:59 between 5% and 6%. Today, JP Morgan has now corrected, or let’s say revised the projections
    0:45:04 upwards. And besides, when we normalised the price situation, the true poverty rate came up,
    0:45:12 and it was 57% in January. Today it is at 46%, meaning we lowered poverty by 11 percentage
    0:45:19 points. Let’s say, I mean, it seems truly like a miracle. And not only that, but actually not
    0:45:24 a single job was lost in the process. When it comes to all of this inflation reduction process,
    0:45:30 people said that our economy and economic activity would collapse. And actually,
    0:45:36 when you look at the de-seasonalised data, you see that in August there was a recovery that
    0:45:43 took us back to December levels, to December levels. That means that in the year we made the
    0:45:49 largest fiscal adjustment in the history of humanity, we will end up with less inflation,
    0:45:55 fewer poor people, better real wages, and additionally, a GDP higher than what we started
    0:46:02 with. And if you look at it in dollars, I can assure you that the numbers are phenomenal,
    0:46:08 because basically, today the dollar is below the levels we had when we took office.
    0:46:16 So the reality is that in all of this, when you take my popularity levels and the government’s
    0:46:21 acceptance levels, today they are above the moment we assumed office. If you know that the
    0:46:29 moment of maximum popularity is when you take office. Therefore, this means that far from
    0:46:36 resting on our laurels with this, we’re going for more reforms, we’re going to deepen the reforms,
    0:46:41 and I tell you, we won’t stop until Argentina is the freest country in the world.
    0:46:49 Furthermore, a recent work by an Argentinian economist named Juan Pablo Nicolini was presented
    0:46:56 at the central bank’s monetary meetings, and he works at the Federal Reserve. And it’s interesting
    0:47:02 because he shows that only on the basis of what we have done in fiscal matters, it ensures that in
    0:47:10 the span of 10 years, we can double the GDP per capita, meaning that Argentina could grow at rates
    0:47:20 of 7% annually, which is very much, very much, and that has strong consequences in terms of improving
    0:47:29 quality of life, reducing poverty, reducing indigents. Therefore, if during the worst moment,
    0:47:36 our image didn’t suffer and we stayed strong in our ideas, now that everything is working much better,
    0:47:45 why should we change? On the contrary, we are ready to redouble the bet, to redouble our efforts
    0:47:50 because we’ve done things that no one else has done. I will give you an example. There’s something
    0:47:58 that seems trivial, but there’s what’s called the single paper ballot. Argentina used to vote with
    0:48:08 huge ballots, which were very, above all, very costly and that reform, it never, let’s say it
    0:48:14 wasn’t done because it always harmed the ruling party. So everyone talked about going to the single
    0:48:20 paper ballot, but no one did it when they were in power. They didn’t want to implement it because
    0:48:29 they preferred to commit fraud or use some kind of trickery to avoid applying that rule that makes
    0:48:34 the election more competitive. Well, what’s interesting, we sent that law and it was approved.
    0:48:40 What’s more, now we are finishing with the open simultaneous and mandatory primaries
    0:48:47 because it was a mechanism by which politics was also stealing. We are eliminating the financing
    0:48:54 of political parties. If you look, we have reduced the fiscal pressure by 15 points to the
    0:49:01 Argentinians. So we are restoring freedoms with a deep set of structural and regulatory reforms.
    0:49:14 That is, I think that any sensible liberal could perceive we are already delivering a wonderful
    0:49:19 government. In fact, it’s the best government in the history of Argentina. If the best had
    0:49:25 been that of Menem, we’ve already outpaced him. Maybe you can explain to me the metrics of poverty
    0:49:31 and unemployment. As you said, unemployment went down, real unemployment went down,
    0:49:39 real poverty went down. But even that aside, what have been the most painful impacts of these radical
    0:49:47 reforms and how many of them are required in the short term to have a big positive impact in the
    0:49:56 long term? Let’s take it step by step. In fact, we started to do things right. Therefore, we did
    0:50:04 not create poverty. The poverty was an inherited poverty. The point is that what we did was to
    0:50:12 reveal it. I’ll try to explain it with an example that I think clarifies what’s happening in Argentina.
    0:50:19 Argentina was an economy that had a total price controls.
    0:50:27 It had a fiscal deficit which was financed through money printing just for you to give you an idea.
    0:50:35 In the last year, Argentina financed 13 points of the gross domestic product with money printing.
    0:50:42 In other words, a real disaster. So, that situation provoked this artificially
    0:50:48 demand and puts pressure on prices. The issue is that price controls are applied
    0:50:56 additionally over the prices that they enter the price index with which inflation was,
    0:51:03 I’m not saying they were lying about it, it was distorted. And since Argentina measures poverty
    0:51:12 and indigence by income line, then what happens? That distorted the true levels of poverty, of
    0:51:18 course. But that’s not the only effect. I mean, let’s say the real poverty levels were higher,
    0:51:23 quite a bit higher than those shown by the previous government, which showed them at 41 percent and
    0:51:31 also did so on a six-monthly basis. So, if you let’s say have a growing trend, they are actually
    0:51:37 leaving you a bomb and you don’t see it because let’s say basically the indicator was measured
    0:51:43 with a delayed form. But not only that, imagine that you are also given,
    0:51:52 you are in the middle of an island alone and they give you one million dollars.
    0:51:58 What can you do with that? You cannot do anything because you cannot buy anything.
    0:52:04 It’s the same as if someone tells you that the price of classes is ten dollars,
    0:52:12 but when you want to buy it, it’s not available. Actually, there’s a joke told by an Argentinian
    0:52:19 professor named Juan Carlos de Pablo, who says that a man goes to a bazaar and asks for a vase.
    0:52:25 Then he says to him, well, I want that vase. How much would you charge me? Then he says,
    0:52:31 five thousand dollars. Oh, okay, five thousand dollars, but why five thousand dollars if across
    0:52:36 the street it’s one thousand? He says, well, go buy it across the street for a thousand. Ah,
    0:52:41 there’s none for a thousand. Well, then here when there’s more, it’ll also cost a thousand. In other
    0:52:47 words, prices at which they are available. So, what happens? When you are faced with that situation,
    0:52:54 the supermarket shelves were empty. So, what was the point of having a price at which you couldn’t
    0:53:00 buy anything? You left those prices, the shelves were empty, so the statistics showed that you are
    0:53:05 much better, but the reality is you couldn’t buy anything. You couldn’t make it happen. So,
    0:53:11 if you left the situation as it was, people were going to starve because they couldn’t buy anything.
    0:53:16 Yes, they had a certain amount of money that could supposedly buy certain goods,
    0:53:21 but those goods were not available. What is the only thing you can do to save people?
    0:53:27 Make the prices transparent and allow products to reappear. Well, when you make the prices
    0:53:33 transparent, you also make transparent the cost of the basic food basket and the total basic basket,
    0:53:38 meaning the poverty line, sorry, the indigents line and the poverty line respectively,
    0:53:44 and when you do that, clearly you will see a jump in poverty. That brought poverty up to 57%.
    0:53:51 Now, Argentina found its activity floor in the month of April. From that moment,
    0:53:58 Argentina began to invent a cyclical recovery. Real wages have been growing every month above
    0:54:05 inflation. Therefore, nominal wages are beating inflation. In fact, we are already at levels
    0:54:12 similar to those we had in November. The same goes for pensions. Moreover, also let’s say,
    0:54:16 there is a rebound in activity due to the recovery of the stock cycle.
    0:54:22 Therefore, this is also contributing to more and better paid jobs. In fact, this is so strong
    0:54:28 and evident that the wages growing the most are in the informal sector. This means that poverty
    0:54:35 and extreme poverty are decreasing much faster than we imagine. But not only that, by eliminating
    0:54:40 inflation, you remove the inflationary tax, but the real burden is the fiscal deficit,
    0:54:47 which was 15 points of the GDP. Okay, we temporarily raised the country tax, now we lower it,
    0:54:53 but we return that to the Argentinians. We gave back 15 points of the GDP.
    0:55:01 Not only that, but also when you eliminate inflation, you remove the distortion of relative prices.
    0:55:07 Therefore, the allocation of resources is much better. Not only that, but also with the strong
    0:55:14 fiscal adjustment we made, we have reduced the country risk from 3,000 basis points to 770.
    0:55:22 Today, Fitch raised Argentina’s rating to triple C. So, what do I mean? That translates into a lower
    0:55:28 country risk and interest rates, and that generates an increase in investment, also generates an
    0:55:34 increase in consumption. In other words, the Argentinian economy is currently in an absolutely
    0:55:39 flourishing moment. And how is that sustained in the long term with structural reforms,
    0:55:45 which we implement daily, deregulating the economy, and introducing new laws that free
    0:55:52 Argentinians from the many oppressive measures that have burdened it over the past 100 years?
    0:55:59 You’ve spoken about the caste, the corrupt political establishment. So, there’s a lot
    0:56:05 of powerful people and groups that are against your ideas. What does it take
    0:56:09 to fight when so much powers against you?
    0:56:15 Look, we have fought against corruption like never before in Argentina.
    0:56:23 In fact, when we took office, for example, there were about 900 roadblocks per year.
    0:56:29 That is people who made a habit of blocking the streets. They prevented free movement.
    0:56:35 And besides, they were given social plans, and they were given a lot of money.
    0:56:42 If you remember, when I started by explaining the cuts, one of the things I said was that we
    0:56:48 removed the middlemen of poverty, in other words, the managers of poverty, those who lived by stealing
    0:56:54 from the poor. Well, that is a huge source of corruption. In fact, when we did that,
    0:57:05 two days later, one of the most renowned and influential piqueteros called for a demonstration.
    0:57:12 He claimed that 50,000 people would attend because he was actually expecting 100,000.
    0:57:20 So he wanted to showcase it as a success. And so then, let’s say, with the decision
    0:57:26 made in human capital to cut their funding, the anti-blockade protocol was also enacted,
    0:57:30 where those who blocked the streets wouldn’t receive welfare benefits,
    0:57:39 and those who broke the law would go to jail. All of that. And also, we were informing this through
    0:57:46 transportation channels. Well, in that March, they expected to have 100,000 people there.
    0:57:54 And actually, it turned out to be 3,000 people. And from that point on, they didn’t block the
    0:58:00 streets anymore. We also evidently put an end to that corruption. One of the things that also
    0:58:07 generated a lot of corruption was public works. Another thing that led to significant
    0:58:15 actual corruption were the discretionary transfers to provinces. In general, these transfers were
    0:58:22 made to the provinces with accounting as obscure as possible. So, the national government,
    0:58:29 in collusion with the governors, let’s say, the money ended up being used for other things.
    0:58:35 Not only that, with which we have already done many things. Furthermore, the ministry of human
    0:58:43 capital is always filing complaints in court, not in the media in court. Acts of corruption,
    0:58:50 like never before, in Argentine history. Not only that, but also in terms of condemning corruption.
    0:58:59 That is, we have done, for example, two days ago, it was condemned. Christina Fernandez de
    0:59:05 Kirchner got a sentence for corruption. I mean, due to corruption, and the next day, that is,
    0:59:12 yesterday, we took away their privileged pensions. At the same time, we are, for example, we have
    0:59:19 discovered that Kirchnerism used disability pensions for acts of corruption. For example,
    0:59:26 there is a city that has more disability pensions than people. In other words, to give you an idea
    0:59:32 of the things being done in Argentina. And also in Argentina, we have restored freedom to the
    0:59:38 judiciary. We do not pressure the judiciary. And this is so true that during my government,
    0:59:44 not only was Christina Fernandez de Kirchner convicted, but also the two terrorist attacks
    0:59:53 carried out by Iran were condemned. So, if there is a government that is truly fighting against
    1:00:00 corruption, it is us. Not only that, but also with each deregulation, it is a privilege that we
    1:00:09 take away either from a politician, a preliminary company, or a power group. That is also very
    1:00:18 powerful. No one in Argentina has ever fought against corruption the way we have. In fact,
    1:00:22 I will move on to something that is deeply corrupt and one of my great battles.
    1:00:33 The corruption of the media and social media? That is to say, I removed the official advertising.
    1:00:40 That’s why you will see that even though we generate wonderful news, every week in large
    1:00:46 quantity, the media speak terribly. In other words, they demand to have a monopoly on the
    1:00:52 microphone. That is, they are entitled to insult, hurt, offend, and they don’t want anyone to bother
    1:00:58 them. And they expect me not to even respond. That’s why a large part of journalism in Argentina
    1:01:04 hates the X-network. And that’s why the liberal libertarians love the X-network, because we can
    1:01:12 all say what we want. However, let’s say these supposed journalists who defend freedom of expression,
    1:01:18 actually what they want is to censor the ideas they don’t like. And of course, because they are
    1:01:23 leftist, because they are wokes, because they can’t stand the competition, because if they had to
    1:01:30 fight face to face, hand to hand on a level playing field, when it comes to ideas, they would lose
    1:01:36 because they were a failure in the economic, social, and cultural aspects. And also, we must not
    1:01:42 forget that those murderers called socialists killed 150 million people, so they clearly cannot
    1:01:49 fight on equal terms. Therefore, they demand that social networks have censorship and that the truth
    1:01:56 cannot be told to them. Because when you tell a socialist the truth, they cry, claiming it’s
    1:02:04 hate speech. No, it’s not hate speech. It’s that you are useless people who have ruined the planet.
    1:02:10 They have made the planet much worse. And fortunately, today, thanks to social media,
    1:02:16 especially due to the enormous and brave work of Elon Musk and the role of Twitter,
    1:02:24 today X, right, allows information to flow, which makes it possible, let’s say,
    1:02:34 to expose politicians and also expose the media. And that’s why journalists in Argentina are so
    1:02:41 violent. Why? Because before they could, for instance, a journalist went and, for example,
    1:02:46 he would go to a person and he would throw a folder at them and say, if you don’t give me X amount
    1:02:53 of money, I am going to publish all of this and tarnish your reputation. And I know for a fact,
    1:03:00 a case of a journalist who carried out this extortion twice to a businessman, that businessman
    1:03:05 told him that he wasn’t going to pay. And evidently, the journalist did it. Obviously, they went to
    1:03:12 court. There was a trial and that journalist lost both times. But that process is very slow. And in
    1:03:18 the meantime, they smeared. So since the justice system takes a long time, so what is the problem?
    1:03:25 The problem is that in the meantime, your life got dirtied. So why can journalists do all this?
    1:03:32 Well, that’s why they dislike X. They dislike social media. They dislike the new form of communication
    1:03:37 because it took away their monopoly over the microphone. And by taking away the monopoly
    1:03:43 over the microphone, it removed the economic benefits of extortion. So clearly, that’s another
    1:03:51 battle I’m fighting. You read a newspaper in Argentina and 85% of what you read is a lie.
    1:03:58 That is to say, the fundamental characteristic of most journalists, not all, but the vast majority
    1:04:05 of journalists in Argentina, with some honorable exceptions, is that they are liars, slanderers,
    1:04:12 and defamers. And if the monopoly they demand were still in place, that they want to reign again,
    1:04:17 I have no doubt that they would demand money in exchange for silence, because that’s what they
    1:04:24 are. They are extortionists, they are thieves, they are corrupt. And then, of course, obviously,
    1:04:31 when you take away a privilege from a sector, they get upset. Well, welcome to freedom.
    1:04:35 So you’re not only fighting for economic freedom, you’re fighting for freedom of speech.
    1:04:43 Exactly. I fight for freedom in all aspects of life. That is to say, one of the things that
    1:04:52 seems most interesting to me is that when the Berlin Wall fell, it’s true that it officially fell
    1:05:01 in the year 1989. But the reality is that the wall or socialism fell in the year 1961 when they had
    1:05:07 to build the wall. I mean, they built it because people were leaving communist Germany for capitalist
    1:05:16 Germany. They realized that those on the western side were much better off. And, of course, to
    1:05:24 prevent people from leaving. They put, what a wonderful system, right? So I mean, they had to
    1:05:29 trap people, they couldn’t let them go. I mean, these are such wonderful ideas that they had to
    1:05:35 apply them at gunpoint. It’s quite, well, it’s no coincidence that they killed 150 million human
    1:05:45 beings. So what happened then? The official fall of the wall in the year 1989 made it clear that
    1:05:54 socialism had failed. In that context, the socialists, they moved the discussion of class struggle in
    1:06:04 economics and took it to other areas. So for example, socialism, or what is of the 21st century,
    1:06:12 or cultural Marxism or post-Marxism, whatever definition you want, is to take class struggle
    1:06:20 to different aspects of life. For example, one of the aspects of life where you, let’s say,
    1:06:28 have this is in gender ideology. I mean, it’s incredible because the first ones to defend equality
    1:06:34 before the law were the liberals. The first to defend women’s rights were the liberals.
    1:06:40 Jeremy Bentham in the year 1750 was the first to demand equality before the law for women.
    1:06:47 I mean, the cause of equality, equality before the law for women and equality of rights,
    1:06:51 the first ones who advocated for this were the liberals, did you know? However,
    1:06:59 what does the left do? They just go on to radicalize it. And then it moves to what is called female
    1:07:05 chauvinism. Female chauvinism is, let’s say, the fight against males. And then, I mean,
    1:07:11 how do they do it? They do it by assigning rights. But when you assign a right, someone has to pay for
    1:07:20 it. And that has consequences. And in general, let’s say, this always happens. The consequences
    1:07:26 are that the results are worse than what you had before. I mean, in any state intervention,
    1:07:34 the subsequent result is often worse than what you originally had. So that’s one thing. And not
    1:07:40 only that, but the other side of this is the environmental agenda, which sets man against
    1:07:45 nature, involving all aspects of environmentalism and everything related to climate change.
    1:07:51 In other words, they can’t stand any serious discussion. Therefore, all environmental policies
    1:07:57 are nothing more than an excuse to collect taxes. So that a group of parasitic bureaucrats can live
    1:08:04 at the expense of others and finance sinister ideas. Where the most sinister idea of all is that
    1:08:11 there is no room for everyone on planet Earth. That is, an idea that failed with Malthus at the
    1:08:17 beginning of the 19th century, a murderous idea that was also applied by the Egyptians against
    1:08:25 the Jews. And this is famously recorded in the book of Sheimot or Exodus. Or, for example,
    1:08:31 another thing is Black Lives Matter. That is, black people against white people,
    1:08:38 or indigenous people against the established communities. Or, I mean, everything related
    1:08:47 to LGBT agendas. Definitely, these are some of the ways in which socialism extended the class
    1:08:54 struggle into other aspects of society, creating divisions and fostering deceit with the sole
    1:09:01 purpose of absorbing taxes. I mean, what was the ministry of women in Argentina doing?
    1:09:07 Did it manage to reduce a single femicide? No. None at all. The number of femicides exploded
    1:09:12 just the same. In fact, the most feminist president in Argentine history, Mr. Alberto
    1:09:21 Fernández, used to beat his wife. That is such a strange feminist. I mean, well, so within the
    1:09:27 ranks of feminists, let’s say, you will essentially find the largest number of rapists and women beat
    1:09:37 us. And it’s quite interesting what they do. Their hypocrisy is truly striking. It’s not just
    1:09:46 about that, though. I mean, the battle is on three fronts. You have the economic front,
    1:09:54 which is free enterprise capitalism. Then we have the political level. Currently,
    1:10:02 the system that the world has designed is a republican liberal democracy with checks and
    1:10:10 balances. And I mean, at the cultural battle level, notice that socialism has been very
    1:10:16 successful in the cultural battle. It has been very successful politically because it was able
    1:10:22 to translate that political battle in winning many elections. But why is it falling apart?
    1:10:30 Why? Because it produces misery. And because the economic system is a disaster, so people
    1:10:36 eventually realize that it is making things worse for them. Liberal libertarians are very
    1:10:44 good when it comes to economics. Yes, and those good economic results can actually lead, well,
    1:10:50 to the generation of solid political processes. But what happened? The liberals neglected the
    1:10:56 cultural battle. Much of the blame was placed on Fukuyama when he said this is the end of history.
    1:11:02 No, it was not the end of history because the following year, in 1990, the socialists gathered
    1:11:08 at the Sao Paulo Forum and, based on the ideas of Gramsci, designed a strategy to infiltrate the
    1:11:15 media, culture, and education, which ended up changing the entire discourse. And they established
    1:11:23 that what they said was politically correct and that any idea outside of it was to be considered
    1:11:28 reactionary and had to be censored or even persecuted. And they claimed to be the ones
    1:11:34 defending freedom, even though they were the ones persecuting people. It’s the same with journalists
    1:11:40 who get upset with Twitter. They say they defend freedom, but can’t stand it when those who think
    1:11:45 differently speak. Is that freedom? Yes, for them, but not for those who think differently. That’s
    1:11:52 not freedom. That’s fascism. Then what do we say? Then we must fight on the economic front. And I
    1:11:58 believe we are implementing an extremely successful economic program that is being recognized worldwide.
    1:12:06 In fact, the other night, the president-elect Donald Trump indeed gave recognition for the
    1:12:12 achievements we are having in Argentina and the speed at which we have done it. At the same time,
    1:12:18 you have to fight the political battle because, well, soccer matches are not won by shouting from
    1:12:24 the stands. They are won by playing on the field. But that alone is not enough because you have to,
    1:12:32 let’s say, you need to convey to society the values of capitalism, the free market,
    1:12:38 what liberalism is, the value of freedom, right? And when you succeed in that,
    1:12:45 then we will indeed be able to advance steadily. If you don’t fight the cultural battle,
    1:12:51 what happened in Chile will happen to you. They had economic success. It was, let’s say,
    1:12:58 sustained over time. But at some point it collapsed. Why did it collapse? Because they hadn’t fought the
    1:13:05 cultural battle. Then socialism, little by little, took control of institutions in education and the
    1:13:12 media. So they took over the media and culture. And on that basis, they attacked and broke up the
    1:13:18 system. And then they found themselves with increasing doses of socialism. And the only thing
    1:13:25 socialism generates is poverty. Therefore, what you must keep in mind is that you have to fight
    1:13:33 the battles on all fronts. And if you don’t keep that in mind, I can tell you are headed towards
    1:13:39 collapse. Like you said, in this fight against corruption, you are challenging some very powerful
    1:13:49 people, a powerful establishment. Are you ever afraid for your life, potential assassinations?
    1:13:56 No. Tell me, what good is it to live life? I mean, in slavery?
    1:14:08 Look, there is a song by a Spanish singer called Nino Bravo. Just to be clear, he has already left
    1:14:18 this earth so we can say he has passed on to the beyond. The song is called Libre. And the song,
    1:14:27 it tells the story of Peter Fetcher, an 18-year-old boy who, when the separation was made.
    1:14:36 And I mean, the construction of the Berlin Wall begins. His family ends up on the western side
    1:14:44 and he accidentally ends up on the eastern side. And for a whole year, he plans his escape
    1:14:51 to the western side, right? And in that context, when he tries to escape, he gets murdered.
    1:14:58 So really, what is the point of life if it’s not in freedom, right?
    1:15:03 I mean, what is the point of living without fighting for your values?
    1:15:09 If I am willing to give my life for my values, then what is the point of living without freedom?
    1:15:13 Look, can I tell you something interesting that happened to me here in the United States?
    1:15:27 I, let’s say, back in the year, 1998, I came to the United States to take a series of courses
    1:15:35 to improve my English, which I never use in formal terms because as president, as you can imagine,
    1:15:42 if I make a mistake, I can create a serious situation. Fortunately, I have an interpreter who
    1:15:48 is a superstar. And if I make a mistake even in Spanish, he corrects me in the version of the
    1:15:58 other language. And so back then, in that year, I went to San Francisco and I visited Alcatraz.
    1:16:10 You’re young, but I mean, the visit was an audio tour. You got a walkman and you would
    1:16:16 choose the different tracks and listen to the story. The most interesting thing is that the
    1:16:24 Alcatraz store ended in the recreation yard where the basketball court, exercise areas,
    1:16:30 and all recreational facilities were located. So anyone would have thought that this was the best
    1:16:38 part of Alcatraz. And yet, what they said in the guide was that that was the hardest part for the
    1:16:46 inmates. Why? Because I mean, that recreation area in particular is built in front of the
    1:16:54 San Francisco Bay. So the inmates could all see how San Francisco continued to build up and evolve
    1:17:00 and develop every day. While they were locked up in there, they couldn’t take part in that.
    1:17:07 They were confined in that prison. And that made them fully aware of the value of freedom.
    1:17:18 So in my experience for me, the fight for freedom is relentless, okay? I mean, my greatest hero in
    1:17:26 all of human history is Moses. The feet of Moses is like one person alone with his brother, Aaron,
    1:17:36 both confronting the combined forces of the United States, China, and Russia together.
    1:17:40 And it was Moses who said to Ramses, “Let my people go.”
    1:17:49 Well, Ramses resisted, and the forces of heaven ran him over. But what I mean is,
    1:17:57 I don’t see any other possible way to live other than with freedom. And I would always
    1:18:04 fight for full freedom. And I would be at the forefront of this cause. I mean, it’s a cause that
    1:18:10 I’m going to die with my boots on. I mean, I’m not going to make do with living
    1:18:16 any other way other than with freedom. I will fight everything I’m going to fight as much as it takes.
    1:18:23 At least that’s the way I feel. So what good is it to be alive if you’re confined?
    1:18:31 What good is it to be alive if you’re not free? It’s no good. What good was it for Peter Fetcher
    1:18:35 to be alive in Communist Germany?
    1:18:42 Well, at least he had a moment of happiness while he tried to escape.
    1:18:50 Another guy who fights for freedom, freedom of speech in his cases, your new friend, Elon Musk,
    1:18:56 what do you admire and what have you learned from your interactions with Elon?
    1:19:10 I have a huge admiration for Elon Musk. He is an absolutely unconventional person.
    1:19:16 He’s a great fighter for the ideas of freedom, what he has done on Twitter,
    1:19:28 now known as X and how he is helping the world nowadays to wake up once and for all and become
    1:19:36 aware of the socialist virus, the woke virus, that in itself makes him a hero in the history of
    1:19:46 humanity. But it’s not just that. One of the things that happened to me is that when I went
    1:19:53 to first talk to him, I thought I was going to meet a successful businessman and that I would
    1:19:59 have a typical successful businessman conversation who understands business and that some of his
    1:20:06 businesses, some of his business, slightly more exotic. But that’s the kind of talk you would
    1:20:13 expect to have. And business people are truly admirable, right? Because they are true benefactors
    1:20:24 of society. But they’re usually very much focused on their own business. And one of the things that
    1:20:35 really, really shocked me when I met Elon Musk, we had scheduled a meeting for no more than 50
    1:20:43 minutes. The first time we were in the meeting for a little over 45 minutes because he was about to
    1:20:49 miss his flight. So obviously, if someone as important as him doesn’t fly as planned,
    1:20:56 it has to be rescheduled. And he loses a lot of hours. Imagine every minute is very valuable.
    1:21:07 And one of the things that happened was that basically, he brought up the topic of demography.
    1:21:15 And we started discussing demographics and growth. I never imagined that I would end up discussing
    1:21:24 demographics and growth with him. And another very fun thing was that something funny he said to me
    1:21:32 was that since we shared our vision regarding demographic issues and the need to populate
    1:21:36 the planet, he asked me, “Now, what about you? When are you going to move in that direction?”
    1:21:41 And I said, “Oh, look, I have five children.” And he said, “Well, the four-legged ones don’t count.”
    1:21:52 That was the first meeting I had with Elon Musk. The second meeting was
    1:22:01 when here at the universities, we started seeing anti-Semitic demonstrations where basically
    1:22:10 Palestinian flags were displayed and Jews were harassed and persecuted. And at that moment,
    1:22:17 when we had that second meeting, he showed himself to be very deeply involved with that
    1:22:24 and brought up the issue of the cultural battle. So, I mean, it’s not quite conventional,
    1:22:34 even in the political field. During our last talk, which lasted for about two and a half hours,
    1:22:41 right? One of the things we talked about was freedom and what was at stake for the United
    1:22:55 States in this election. Therefore, he is a person. Honestly, I can say he is well above
    1:23:05 average. I mean, a person of unconventional intelligence, right? And also, he is very charming.
    1:23:12 So, I mean, again, I have a great admiration for him. And I really interact very closely with him.
    1:23:19 He is very interested in what our ministry of deregulation is doing, which seeks to remove
    1:23:25 regulations. But at the same time, he works with another person who is also interested in the
    1:23:35 chainsaw approach. And so, I’m very pleased because they are going to try and replicate
    1:23:42 the model we are implementing in Argentina. And also, Donald Trump himself is very enthusiastic
    1:23:48 about this. So, and anything in the way of reducing regulations and cutting public spending
    1:23:54 and taking government out of the equation means more freedom for the people. So, I’m very pleased
    1:24:01 with what’s going on. And with Trump’s victory, because the United States will be better off,
    1:24:06 Argentina is going to be better too. And the whole world is going to be better off.
    1:24:10 Today, the world is a much better place than it was just a few days ago.
    1:24:18 Like you said, Elon and Vivek Ramoswami are heading the DOGE, Department of Government Efficiency.
    1:24:24 So, from your experience this year as President of Argentina and every chainsaw economic policies
    1:24:30 that you’ve implemented, what advice would you give to Elon and Vivek about how to do it in the
    1:24:36 United States? Just cut to the chase. Cut to the chase. Simple as that. I’ll tell you a story and
    1:24:45 you’re going to love it. Currently in Argentina, due to the political balance we’ve achieved,
    1:24:52 we have had certain powers delegated from Congress to the Executive Branch,
    1:24:55 and therefore we can resolve it by decree.
    1:25:04 That the Regulation Minister, Federico Storsenegger, in his ministry, shows a counter that displays
    1:25:12 in front of everyone there. He displays the number of days, all right? During which the
    1:25:20 delegated powers will continue to be valid. Therefore, he has a whole deregulation division,
    1:25:26 also a public spending cut division, and government structure reduction division,
    1:25:34 and he also has an elite corps that’s cleaning up all of the laws that hinder the economic system
    1:25:41 and progress. And every day, he removes between one and five economic restrictions.
    1:25:47 So my advice would be for them to go all the way to push it to the very limit
    1:25:55 and do not give up, do not let down their guard. Furthermore, that agenda does not have political
    1:26:02 purpose because at the end of the day, you are removing privileges. Of course, there will be
    1:26:06 people complaining, but those are people. These are people who are losing privileges,
    1:26:11 so they will have to explain to society why they are keeping those privileges,
    1:26:16 and that is quite uncomfortable. You’ve spoken with Donald Trump. Allegedly,
    1:26:21 he called you his favorite president. What did you discuss? And maybe again,
    1:26:26 what do you admire about President Trump and what do you learn from him?
    1:26:32 There are several things that I admire about President Trump.
    1:26:42 The first is that he, probably, I think he’s provided ample proof of this in his first presidency.
    1:26:49 He understands the nature of the cultural battle. He has openly confronted socialism.
    1:26:55 His speeches openly target socialism. He perfectly understands the woke virus,
    1:27:06 and that that is, you know, of great value because it means understanding what it’s all about.
    1:27:12 Another thing I truly admire about him is his courage. In fact,
    1:27:19 thankfully, thank goodness he didn’t get assassinated or killed.
    1:27:23 But it was by a small chance occurrence that could have killed him,
    1:27:25 just because he moved at the right moment.
    1:27:37 And yet that didn’t intimidate him, and he went on. And in fact, during his first campaign,
    1:27:42 and in this one as well, in the second one and third one,
    1:27:50 they criticized him, insulted him, offended him, said awful things about him,
    1:27:58 made up all sorts of horrible stories about him. In that respect, I can say I deeply relate because
    1:28:06 probably no one in our history has had such a negative campaign from all the media like
    1:28:13 they did to me. But let’s say they were quite similar. This is why it’s so interesting. And
    1:28:19 I was so deeply moved when last night I also got to meet Sylvester Stallone, you know?
    1:28:28 Because Sylvester Stallone talks about, well, how important is that no matter how hard they
    1:28:34 hit you and keep on hitting you all the time, despite all that you keep going on and on and on.
    1:28:46 What I’m trying to say is that many of the, many, so many of Sylvester Stallone’s approaches
    1:28:52 are truly inspirational, don’t you think? So imagine I’m about to give the speech and I see
    1:29:00 Sylvester Stallone and Sylvester Stallone knows me. It was truly insane. I had to pinch myself.
    1:29:06 I mean, this can’t be true. And besides, well, the people were wonderful with me last night.
    1:29:13 They’ve been wonderful today. I’ve taken hundreds of selfies. I mean, it’s truly been,
    1:29:21 I would say it’s been my break, let me say, after almost a year in office and having to face
    1:29:29 all sorts of media torture because the journalists who have vested interests and are corrupt
    1:29:34 are professional torturers. Yes, because they invade your personal life,
    1:29:39 your family, and your privacy. Let me tell you something to show you the kind of garbage the
    1:29:44 media in Argentina can do. They sent three drones to spy on me at my presidential residence.
    1:29:51 To spy on me. Do you think that’s right? No. Exactly. But that kind of thing happens in
    1:29:57 Argentina, not to mention the many lies and horrible things they say. I, for instance,
    1:30:05 remember that time when my father was hospitalized. My father is a man of a really strong character who
    1:30:13 has had two heart surgeries. All right. And one day, a journalist was saying all sorts of lies
    1:30:21 about my father. My father was hospitalized and, well, and he almost died of a heart attack.
    1:30:28 So that kind of thing is what journalism and the press do in Argentina. So they start to attack
    1:30:34 your private life, your mother, your father, your sister, even my dogs that I absolutely adore.
    1:30:39 They are the most wonderful beings in the universe. They even target my four-legged children.
    1:30:48 So imagine that I’ve been in office for nearly a year, a year as president. And since they can’t
    1:30:56 criticize my management, except by lying and distorting the numbers. They meddle with all these
    1:31:03 things, things they have been doing all the time since the year 2021, when I officially entered
    1:31:13 politics. So, and I’ve seen what they’ve done to Trump. So that also makes me relate a lot to him
    1:31:20 because he’s a true warrior. He’s truly, he’s a Viking. He’s a Viking. He’s literally a Viking.
    1:31:30 I mean, he is someone I admire for how he has kept fighting in the face of adversity,
    1:31:36 even against all odds. And still, he managed to win. Amazing.
    1:31:45 And well, and that’s why I can relate that much. And I’ve also seen how he’s been
    1:31:53 unfairly criticized, like when he was accused of protectionism or when he wanted to discuss some
    1:31:59 matters within the context of public debate regarding the design of monetary policy as regards to
    1:32:07 Fed. And basically, they have accused him of things. I mean, isn’t he entitled to give an opinion
    1:32:13 as a president? I mean, any citizen could give their opinion even more so a president.
    1:32:18 Why is it important to you that Argentina has a close relationship with the United States?
    1:32:23 Well, to us, that is truly important, okay?
    1:32:30 You know, because we’ve decided to be geopolitical allies of the United States
    1:32:39 ever since our campaign, that our allies, we have decided that our allies will be the United
    1:32:46 States and Israel, because they basically represent the ideas of the Western world. They
    1:32:53 represent the free world. That is to say what we would call today, let’s say a liberal democracy,
    1:33:01 okay? By confronting the autocrats. And in that sense, that is the geopolitical alignment.
    1:33:08 Moreover, in our campaign, we were very, very clear on three main points. One, the economic pillar.
    1:33:14 We talked about cutting public spending, and I would make my appearances with a chainsaw.
    1:33:20 We talked about economic freedom, deregulation, that is, and I talked about a competition of
    1:33:25 currencies. And people, you know, obviously were interested in the dollar. So it was obvious
    1:33:31 that the economic policy was clear, all right? And not only was it clear, but we are also fulfilling
    1:33:38 it. That is the first point. Second was our policy on security. The idea being to fight crime,
    1:33:48 I mean, relentlessly as well as security. No mercy, right? And in fact, in Argentina,
    1:33:55 there are no more roadblocks, which they said were impossible to end. Not only that, we have
    1:34:00 strengthened the security forces and also our armed forces, and we are waging a tough battle against
    1:34:06 drug trafficking and narco-terrorism. Therefore, we are also strongly fulfilling that. Notice that
    1:34:12 these two points, which were the main concerns, they were the biggest concerns of Argentinians
    1:34:18 when we took office are now in fifth and sixth place. Today, the problem for Argentinians is
    1:34:25 corruption, whether there is unemployment, if there is poverty, but they don’t mention inflation and
    1:34:31 insecurity anymore. And besides, a third point that I made clear was that I would align with the
    1:34:39 United States and Israel internationally. And, you know, at my campaign rallies, there would be
    1:34:47 groups that would come along with flags of Israel, so it’s clear that our international policy approach
    1:34:55 was always very clear. And this is something I state during my speeches when I talk about the
    1:35:03 values of the West and the civilization of the West. In fact, yesterday, and even more so today,
    1:35:12 during my speeches, I talked about how the different Greek groups or tribes go together to confront
    1:35:21 the Persians. That is to say, it seemed that from that time, 500 years before Christ until today,
    1:35:34 that struggle continues, right? But well, so, of course, we’re all in. We are betting on the
    1:35:42 United States becoming, once again, a leader in the West. We needed someone to come back
    1:35:54 to make America great again. And as part of that process, being a commercial ally is also a great
    1:36:03 idea. So we would really like to move forward and deepen our trade ties and our investment ties,
    1:36:09 you know? And, well, we would also like to be part of the NATO as well.
    1:36:13 Do you think it’s still possible? One of the radical ideas you had as you were running for
    1:36:21 president was to dollarize the Argentine economy. Do you think that’s still a good idea? Are you
    1:36:29 still thinking about that? Let’s see. Let’s break it down. Let’s say I, if you review all my statements,
    1:36:37 I talk about currency competition. I’m not strictly talking about dollarization. I’m talking about
    1:36:44 currency competition and eliminating the central bank. If people later decide to embrace the dollar,
    1:36:50 that is their choice. Ultimately, in the model I propose, what happens is the formation of a
    1:36:58 currency basket tailored to the needs of individuals. But I won’t avoid the discussion. Today,
    1:37:03 there is currency competition. If, for instance, today in Argentina, you want to make transactions
    1:37:08 in any currency, you can do it, and it’s allowed. Today, there is currency competition.
    1:37:14 The other thing we talk about is the concept of, let’s suppose we were discussing dollarization,
    1:37:21 we talk about endogenous dollarization. The first point is that you need to clean up the
    1:37:27 central bank. We had to deal with the issue of the C-I-R-A, that is the central bank’s commercial
    1:37:32 debt, which was $50 billion. We still have to resolve the dividend problem of $10 billion.
    1:37:38 And in the meantime, we did a write-off and cleaned up the central bank’s balance sheet
    1:37:43 by $45 billion. So you can’t just close the central bank if it is bankrupt,
    1:37:48 because you need to redeem the whole central bank debt, which is about the issuing of money
    1:37:54 and the interest-bearing liabilities. So once we’ve finished with the interest-bearing liabilities,
    1:38:00 it’ll leave us with the monetary base. Therefore, today we have a regime where the amount of money
    1:38:07 is fixed, the monetary base is not growing, and as demand for money increases, since people can use
    1:38:13 dollars, they don’t need to go and sell the dollars and make the peso appreciate, but they can do
    1:38:20 transactions in dollars. So as the economy grows, you will have a greater share of dollars
    1:38:26 relative to pesos. And at some point, the amount of pesos compared to the dollars will be
    1:38:34 so huge, relatively, that closing down the central bank will be done easily, which means this is
    1:38:41 working. Of course, if you were to give me the money right now, I would go ahead and dollarize.
    1:38:48 I’d have no problem with that. For example, I did have a proposal for this, and this could have
    1:38:55 worked, because the bonds, because the largest creditor of the Argentine Treasury is the central
    1:39:02 bank, but central bank bonds were trading at 20 cents. If I had sold those bonds at 20 cents and
    1:39:11 nowadays, they are trading between 60 and 70, with the whole bunch of Neanderthals that are the
    1:39:21 opposition, who besides being ignorant in economics also have bad intentions, I would be in jail today.
    1:39:29 Let me ask you a very important, difficult question. I’m a huge fan, have been my whole life,
    1:39:35 of Diego Maradona and Messi. So who to use the greatest football player of all time?
    1:39:38 The way I see it, I have seen Maradona play, all right?
    1:39:47 I saw Maradona play in the past, I used to watch him, and I saw him during his last year at Argentino
    1:39:57 Juniors, before Boca Juniors in the year 1980, and I saw him in ’81. Playing for Boca, I saw him play
    1:40:08 in the youth selection in Japan in 1979. I truly have immensely enjoyed the talent of Maradona,
    1:40:15 but without a doubt, the best soccer player of all time, not just from Argentina of all time,
    1:40:24 even better than Pele is Messi, of course. There is an article which is quite old already now,
    1:40:34 titled “Messy is Impossible”, and it looks at all of the positions a soccer player plays in.
    1:40:41 That is, all positions a soccer player can play in from midfield forward, okay?
    1:40:51 And the most incredible thing is that Messi is the best in each of those positions.
    1:40:59 You can be the best in one or two positions. You see, Cristiano Ronaldo, for example,
    1:41:06 was very good in two areas of the game, so much so that he was almost like Messi,
    1:41:13 but he didn’t take part in the rest. However, Messi is the best one in all respects,
    1:41:24 but at that time, of course, nowadays, you know, he is an older player, right? And I’m not sure
    1:41:31 whether he can still keep that performance on all fronts, but honestly, I have never in my life
    1:41:38 seen a player like Messi. I have never seen no one like him, for real. If you look at the number of
    1:41:45 goals he scored, I correct that, considering the goal average in the days of Pele, compared to Messi’s
    1:41:52 golden era and his career now, the number of equivalent goals is much greater than that of Pele,
    1:41:59 therefore, without a doubt, Messi is the greatest soccer player of all time. Of all time, no one
    1:42:08 compares to him. But it’s not just the numbers or the World Cup win. It’s the moments of genius
    1:42:16 on the field. Messi is unlike any other in that way. Messi does things that seem technically
    1:42:22 impossible. They seem physically impossible. The moves he makes don’t respect human logic. It’s
    1:42:30 like watching Usain Bolt run. It doesn’t feel possible. He moves in a way that doesn’t respect
    1:42:38 human logic. Am I right? Did you watch the 1986 World Cup with Maradona with the hand of God,
    1:42:44 with the game against England? What was that like? Oh, yes. I do remember that very well.
    1:42:55 We watched it in the home of my godfather and saw how he did his gambit and dodged the team,
    1:43:01 the England team, that was truly, it was absolutely,
    1:43:11 absolutely indescribable. There’s no way to put it into words. It’s as if I asked you to describe
    1:43:20 for me the love you have for your partner. You can’t do that, right? I mean, it’s something wonderful.
    1:43:27 You can’t describe it. You cannot put it into words. There are things where words,
    1:43:37 I mean, you know, just seem to fail. Am I right? I really think that there are times when humans,
    1:43:47 or some humans, not all of them actually, some humans have the privilege
    1:43:55 of being able to vibrate closer to God. Some Puccini Arias, for example, when you listen to them,
    1:44:04 when you listen to the famous Arya from La Rondine or the famous Arya from Gianni Shici,
    1:44:10 I mean, you get the feeling that he was getting sad dictated by God. How can you put that into
    1:44:16 words? You can’t. There’s no way you do that. I mean, those moments where we humans are
    1:44:23 that we have the privilege, I say it as human beings, right? Because, I mean, I’m speaking from
    1:44:33 that perspective, okay? I say this only as an admirer. Some human beings have the ability to
    1:44:42 vibrate so close to God that you can’t describe it. You can only enjoy it. This is why in Judaism,
    1:44:51 they don’t use the name of God, of the Creator, because how could you put in words something
    1:45:00 like that? And I believe those are times when us humans connect closer to the Creator and create
    1:45:06 things, unique things. You cannot describe them. There are no words to describe that. The only thing
    1:45:15 you can do is enjoy it and be thankful that you can witness it. You’re a great footballer yourself
    1:45:21 in your youth. You are a goalkeeper. Many people would say that’s the toughest and the most important
    1:45:26 position in football. Maybe you could speak about that experience and in general, what’s harder,
    1:45:36 being a goalkeeper or a president? Lovely question. Well, indeed, I used to be a goalkeeper,
    1:45:46 but I’m not so sure about whether I was any good. But, you know, the experience of having been a
    1:45:55 goalkeeper is very valuable. First, the goalkeeper is the only player that can use their hands
    1:45:59 in a certain sector of the pitch in the area.
    1:46:08 The other thing is that he’s also the only player who dresses differently, right?
    1:46:14 Moreover, their training is a solitary one.
    1:46:26 And the most important, I mean, it is the very climax, the goal, right?
    1:46:32 When the goal is called by their team, everyone is celebrating on the other side and the goalkeeper
    1:46:43 is on his own. And at the same time, he is the one who suffers the most when a goal is scored
    1:46:49 because he gets the direct impact. In fact, when the goalkeeper makes a mistake, it’s an own goal.
    1:46:55 Imagine a teammate scores a wonderful goal like the one Maradona did.
    1:47:01 It’s marvelous. And that’s just one goal. And imagine the goalkeeper picks up the ball and then
    1:47:07 if they bring it into the area wrongly, it’s like two goals. It’s a complete lack of proportion.
    1:47:19 So therefore, and this, in my opinion, makes goalkeepers have a very strong temperament, right?
    1:47:28 They are used to being alone and power is precisely that because when you make decisions,
    1:47:38 you are on your own. And not just that, but also when you have a responsibility,
    1:47:44 like that of a president, when you make a decision, it has an impact on millions of people.
    1:47:52 So just like goalkeepers, if you make a mistake and score an own goal, and in this context,
    1:48:01 it’s negative consequences for millions of people. Therefore, that has been part of the
    1:48:07 University of Life that has given me the tools to be president today, that is my training in
    1:48:14 economics, my training in liberalism, having been a goalkeeper, and also having had a very tough
    1:48:22 childhood. How hard is it? What’s been the personal toll of carrying the hope of a nation on your
    1:48:34 shoulders? Well, you know, being defamed, insulted, and attacked every single day. But again,
    1:48:42 there’s no point in life. If it’s not with freedom, so like Sylvester Stallone once said,
    1:48:47 “The secret to life is to carry on in spite of the blows you get, the punches you take.”
    1:48:57 And fortunately, we have been able to carry on in spite of the blows, both coming at us from
    1:49:02 in front and from behind our backs, because it had been more honest if we had been attacked directly.
    1:49:12 But well, you know, in Argentina, politics and the mass media, they do love to attack
    1:49:18 behind your back. What role has God played in your life? And who is God?
    1:49:29 Well, faith, I’d say, has been a very fundamental element, you know?
    1:49:40 And especially in recent times, during which I’ve become actively involved,
    1:49:47 particularly in the teachings of Judaism and in the study of the Torah.
    1:49:59 This has given me a huge, let’s say, a huge background to face the many adversities which
    1:50:05 I’ve encountered and had to overcome in the last few years. And as to who God is,
    1:50:13 He’s the Creator, the Maker, I call Him the One. What is a better guide for humanity? The invisible
    1:50:22 hand of the market or the hand of God? They’re perfectly in sync. Well, enough. Again, going
    1:50:29 back to your youth, you’re a lead singer in a rock band. Who is the greatest rock star of all time?
    1:50:38 Okay. Well, the way I see it, the most amazing rock singer in history of mankind was definitely
    1:50:48 Elvis Presley. And my favorite band is the Rolling Stones. So I also greatly admire Mick Jagger,
    1:50:54 you know? And I still have this dream of getting to meet him in person.
    1:50:58 How fun would it be to play together with the Stones?
    1:51:03 That would be a big, big dream.
    1:51:10 Don’t get my hopes up because I set goals and then I go and achieve them.
    1:51:17 Well, I’m close friends with a band that opens for the Stones. So I would love to see this happen.
    1:51:23 Oh, well, that would be great. Or we could also watch the whole concert from the stage.
    1:51:30 I mean, I can’t keep ruining the Rolling Stones’ music. I already had a tribute band and did quite
    1:51:37 a lot of damage to the music. How much of your rock star roots define your approach to politics to life?
    1:51:42 Do you see yourself as a kind of showman in part? Of course.
    1:51:51 Absolutely. My idea is that when you attend, when you attend one of our events,
    1:51:58 it feels like going to a Rolling Stones concert. In fact, in one of my most recent
    1:52:04 performances at Luna Park, I even had the pleasure of singing in front of 10,000 people.
    1:52:12 It’s on YouTube. No, sorry. Not on YouTube. It’s on my Instagram feed.
    1:52:20 At that event, I sang a song called Panic Show. And the song starts by saying, “Hi, everybody.
    1:52:28 I am the lion.” Your intensity and passion have earned you the nickname El Loco, the madman.
    1:52:35 Do you think some madness is necessary to challenge the powerful establishment?
    1:52:40 Well, maybe it’s a matter of perspective, right? It could be the other way around.
    1:52:46 That everyone else is crazy by living in a way contrary to the ideas of freedom.
    1:52:52 And so maybe the same person who wants to fix that is then considered a madman.
    1:53:00 Anyway, the nickname doesn’t bother me at all. In fact, I even enjoy it because I’ve been
    1:53:06 called like that since I was 10 years old. So it’s not something that particularly bothers me,
    1:53:16 you know, because it’s a nickname that, well, it has been used for many years. But actually,
    1:53:22 if I present to you the case of San Martín, when he said he was going to cross the Andes to
    1:53:29 liberate not only Argentina, not only our country, but also Chile and Peru, and people called him
    1:53:37 crazy, imagine if you had tried and spoken with, I don’t know. With Michelangelo, you would have
    1:53:45 called him crazy too. Or if you had talked to, I don’t know, hundreds of people who have changed
    1:53:50 the world, surely they would have thought that Einstein was crazy and so on, the list would
    1:53:59 be infinite. So what is the difference between a madman and a genius success?
    1:54:09 Let me ask you about the market. It’s so interesting from your view of the world,
    1:54:14 how powerful the market is at figuring out what’s best for society. Why do you think
    1:54:18 the market works so well as a guide for humanity?
    1:54:28 One must first understand what the market is. Simply put, the market is a process of
    1:54:33 voluntary exchange where individuals cooperate through the transfer of property rights,
    1:54:44 in which private property is upheld. This is the system that drives the allocation of resources
    1:54:50 in essence. Socialism, and this is what Mises condemns in his book Socialism,
    1:55:00 shows is that without private property, prices cease to exist and therefore resources are diverted.
    1:55:04 Why don’t you think it’s the same to make a road of asphalt or gold? Why not make it of gold?
    1:55:10 Because you have an understanding of economic calculation, you have an idea of prices in your
    1:55:18 mind. So in this context, if there is no private property, there are no prices and as a result,
    1:55:27 the free market capitalism is the best mechanism ever developed by humankind
    1:55:35 for resource allocation. This also implies that markets must be free, free from state intervention,
    1:55:44 because when the state intervenes, it creates interference. And markets need to allow free
    1:55:50 entry and exit, what we call competition. However, it’s better to understand competition.
    1:55:56 In the sense described by Israel Kirchner, one of the foremost figures of the Austrian school,
    1:56:02 or in the neoclassical framework, as William Baumel understood it, which was the concept of free
    1:56:10 entry and exit in so-called contestable markets. And also, let’s talk about what pertains to the
    1:56:16 division of labor and social cooperation. You know, the most wonderful thing about capitalism
    1:56:23 is that you can only be successful by serving others with better quality goods at a better price.
    1:56:29 If you are successful in the free market capitalism, you are a hero. You are a social
    1:56:37 benefactor. You are a prosperity machine. So the better you do, you know, the better you do,
    1:56:43 the better it is for society. This is very important. I remember when I had my first
    1:56:51 meeting with Elon Musk, and this made me admire him greatly. And this is something my sister
    1:56:58 commented on too. You know, Elon Musk told me something he does every day. He wakes up every
    1:57:08 morning thinking about what problem he could fix for humanity. That’s amazing. Of course,
    1:57:18 what is the counterpart being successful? Therefore, in that sense, and moreover, in my view,
    1:57:25 on how the system works, on how the market works, market failures do not exist.
    1:57:32 That is to say, that is a problem. All right? A problem for neoclassical economies
    1:57:42 because of the mathematical tools they’ve used to develop economic analysis. But actually,
    1:57:49 it’s not a real issue in everyday life. It’s a problem in the minds of economists.
    1:57:55 In fact, my latest book called Capitalism, Socialism, and the Neoclassical Trap deals precisely
    1:58:00 with this issue. Yeah, you’ve outlined these ideas in capitalism, socialism, and then neoclassical
    1:58:07 trap. So the trap is that there’s no such thing as a middle ground. It’s either capitalism,
    1:58:14 socialism, and every middle ground ends up in a state of socialism. Well, actually, that is what
    1:58:22 Mises said, that there were, he said that there are only two systems, free enterprise capitalism
    1:58:30 and socialism. And he also pointed out, and this is proven in Hayek’s book, The Road to Serfdom,
    1:58:35 that any middle ground solution is unstable in terms of capitalism,
    1:58:41 meaning it tends toward socialism. So when you implement an intervention, it causes government
    1:58:46 failure, which then triggers further intervention, setting up a trap that results in more and more
    1:58:52 intervention. And in this context, the neoclassicals, with their market failure theory, are in fact
    1:58:58 dealing with problems that are fundamentally mathematical, rather than making the world a
    1:59:03 better place, they have, if you will, been instrumental in increasing the levels of
    1:59:14 intervention. Let me tell you something. Well, you know, I have an economist as chairman of the
    1:59:24 president’s advisory council. Dr. Demian Radle, who studied here at Harvard University and completed
    1:59:32 his PhD, was mentored by Kenneth Rogoff, the American economist. And Rogoff has said that
    1:59:42 Dr. Radle was his best student. Nowadays, we’re actually working with Dr. Radle specifically on
    1:59:57 all these issues that arise from, you know, the interventions proposed by the mainstream,
    2:00:05 such as the so-called correction of market failures. And a few days ago, he conducted a survey
    2:00:18 of search algorithms and policy recommendations. And that resulted in a map painted from red to
    2:00:28 blue. And, well, the redder it was, the more it was linked to socialism. There was an intermediate
    2:00:35 thing that was yellow and blue was free market ideas. And one of the things he discovered
    2:00:48 as part of that graph or chart was that the largest, the largest number of policy recommendations
    2:00:57 scandalously are actually left leaning. So that is the empirical evidence of what I pointed out
    2:01:05 in the book, capitalism, socialism, and the neoclassical trap. You mentioned your four-legged
    2:01:13 children. What have you learned about life from your dogs? Well, from my four-legged children,
    2:01:26 I have learned unconditional love. In fact, well, my name in Hebrew means loyal friend,
    2:01:36 faithful friend. And on the Chinese horoscope, I am dog. And if there’s one thing that defines me
    2:01:44 is loyalty being decent and those virtues, you know, you can find them in those wonderful
    2:01:53 beings that dogs are who love unconditionally. In fact, they are superior beings, right?
    2:02:04 Spiritually speaking, in my case, because you know, I don’t forget or forgive those
    2:02:11 who have harmed me. That is to say, all those who have insulted defame me and criticize me,
    2:02:19 I remember each one of them. But I don’t have the greatness needed to forgive them.
    2:02:25 On the topic of loyalty in politics, I’m sure there’s been a lot of people,
    2:02:32 some people who have betrayed you. Does that hurt your heart?
    2:02:46 It depends. Because you sometimes think that you can expect some people to be loyal,
    2:02:55 and if they betray you, of course, that hurts. But some people you actually don’t expect anything
    2:03:02 from them. So if there’s betrayal, I mean, you won’t be annoyed or feel bad because
    2:03:09 you owe it to someone who didn’t share your values. But politics does have that, you know?
    2:03:18 Sometimes, many of the people you may come across don’t have the values you advocate for,
    2:03:25 but it’s cost-benefit. You need to let the ship sail on, right? Or would you rather let it sink?
    2:03:33 That’s not my case. I fight until the end. There are traitors, but that’s part of politics.
    2:03:43 And that’s not my line. But of course, they do exist. There are a lot of people who admire your
    2:03:48 revolutionary spirit. What advice would you give them? Maybe young people
    2:03:55 on how to live a life like yours and have an impact on the world like you have begun to do.
    2:04:00 I didn’t do this thinking about having an impact on the world.
    2:04:10 I have defined what makes me happy and I live according to that. I live consistently by that.
    2:04:21 And most importantly, I would say, never give up. Moreover,
    2:04:35 and above all, never be half-hearted. I would rather cry because I failed,
    2:04:44 rather than not crying because I never tried. I mean, I’m a perfectionist, so when I do air,
    2:04:54 of course, I have a bad time. But still, I prefer to go and get things done. If it goes
    2:05:02 wrong, it’s part of life. But I will never, never have to regret not having done what I
    2:05:09 thought needed to be done at that moment. All right? What gives you hope about the future of
    2:05:17 Argentina and the future of humanity? Well, the fact that thanks to social media
    2:05:25 and to the Holtec revolution going on, every day more and more people are becoming aware
    2:05:38 of how important freedom is to live, to live in peace and prosperity. And I believe even though
    2:05:50 bureaucrats and the elites fight untiringly to enslave us, a wave of freedom has been unleashed,
    2:05:58 which if we do wage the fight, will have a much better world.
    2:06:06 What is your famous words of Viva la Libertad? How did that come about and what does it mean to you?
    2:06:15 Long live freedom, damn it. You know, that first started while I was giving my book presentations
    2:06:24 at the end of my presentation, I would say Viva la Libertad Carajo. And that really stuck with
    2:06:31 me since then. Without thinking about it throughout my life, it was going to continue being present.
    2:06:40 In fact, today my presentations, all of my speeches end with “May God bless the Argentinians.
    2:06:49 May the forces of heaven be with us and Viva la Libertad Carajo.” The first phrase reflects
    2:06:59 my faith in God fervently and that I’m deeply thankful to the Creator
    2:07:07 for the wonderful things He has bestowed upon me daily. The second one has to do with a quote from
    2:07:14 the book of Maccabees 319, which says that victory in battle doesn’t depend on the size of the army,
    2:07:22 but on the forces of heaven. This has to do with the victory of the Jewish people, the Maccabees,
    2:07:30 against the Greeks, and how they recover the temple. And the last one, well, is my war cry.
    2:07:38 Well, there’s no better way to end it. Thank you for being a warrior for freedom. And thank you
    2:07:43 for talking today. Thank you very much indeed for your interview. And thank you for being so well
    2:07:50 educated, because very often interviewers are not like that. And you did have windows to play foul,
    2:07:54 and you didn’t. And I recognize that. And I thank you for that. Thank you.
    2:08:00 Thanks for listening to this conversation with Javier Malay. To support this podcast,
    2:08:05 please check out our sponsors in the description. And now, let me leave you some words from George
    2:08:15 Orwell. In a time of deceit, telling the truth is a revolutionary act. Thank you for listening,
    2:08:27 and hope to see you next time.
    2:08:34 [Music]

    Javier Milei is the President of Argentina. This episode is available in both English and Spanish.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep453-sc
    See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

    Transcript:
    https://lexfridman.com/javier-milei-transcript

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Javier Milei’s X: https://x.com/JMilei
    Javier Milei’s Instagram: https://instagram.com/javiermilei
    Javier Milei’s Facebook: https://facebook.com/JavierMileiEconomista

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Eight Sleep: Temp-controlled smart mattress.
    Go to https://eightsleep.com/lex
    NetSuite: Business management software.
    Go to http://netsuite.com/lex
    BetterHelp: Online therapy and counseling.
    Go to https://betterhelp.com/lex
    AG1: All-in-one daily nutrition drinks.
    Go to https://drinkag1.com/lex
    LMNT: Zero-sugar electrolyte drink mix.
    Go to https://drinkLMNT.com/lex

    OUTLINE:
    (00:00) – Introduction
    (14:44) – Economic freedom
    (20:09) – Anarcho-capitalism
    (30:02) – Presidency and reforms
    (49:22) – Poverty
    (55:54) – Corruption
    (1:04:32) – Freedom
    (1:18:43) – Elon Musk
    (1:24:11) – DOGE
    (1:26:13) – Donald Trump
    (1:32:13) – US and Argentina relations
    (1:39:22) – Messi vs Maradona
    (1:48:16) – God
    (1:50:22) – Elvis and Rolling Stones
    (1:54:02) – Free market
    (2:01:03) – Loyalty
    (2:03:40) – Advice for young people
    (2:05:06) – Hope for Argentina

    PODCAST LINKS:
    – Podcast Website: https://lexfridman.com/podcast
    – Apple Podcasts: https://apple.co/2lwqZIr
    – Spotify: https://spoti.fi/2nEwCF8
    – RSS: https://lexfridman.com/feed/podcast/
    – Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
    – Clips Channel: https://www.youtube.com/lexclips

    SOCIAL LINKS:
    – X: https://x.com/lexfridman
    – Instagram: https://instagram.com/lexfridman
    – TikTok: https://tiktok.com/@lexfridman
    – LinkedIn: https://linkedin.com/in/lexfridman
    – Facebook: https://facebook.com/lexfridman
    – Patreon: https://patreon.com/lexfridman
    – Telegram: https://t.me/lexfridman
    – Reddit: https://reddit.com/r/lexfridman

  • #452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

    AI transcript
    0:00:05 The following is a conversation with Dario Amade, CEO of Anthropic,
    0:00:09 the company that created Claude, that is currently and often at the top of most
    0:00:15 LLM benchmark leaderboards. On top of that, Dario and the Anthropic team
    0:00:20 have been outspoken advocates for taking the topic of AI safety very seriously,
    0:00:27 and they have continued to publish a lot of fascinating AI research on this and other topics.
    0:00:32 I’m also joined afterwards by two other brilliant people from Anthropic.
    0:00:39 First, Amanda Askel, who is a researcher working on alignment and fine-tuning of Claude,
    0:00:43 including the design of Claude’s character and personality.
    0:00:49 A few folks told me she has probably talked with Claude more than any human at Anthropic,
    0:00:55 so she was definitely a fascinating person to talk to about prompt engineering
    0:00:58 and practical advice on how to get the best out of Claude.
    0:01:05 After that, Chris Ola stopped by for a chat. He’s one of the pioneers of the field of
    0:01:10 mechanistic interpretability, which is an exciting set of efforts that aims to reverse
    0:01:18 engineer neural networks to figure out what’s going on inside, inferring behaviors from neural
    0:01:24 activation patterns inside the network. This is a very promising approach for keeping future
    0:01:30 super intelligent AI systems safe. For example, by detecting from the activations
    0:01:34 when the model is trying to deceive the human it is talking to.
    0:01:40 And now a quick few second mention of a sponsor. Check them out in the description.
    0:01:46 It’s the best way to support this podcast. We got Encore for machine learning, Notion for
    0:01:52 machine learning powered note taking and team collaboration, Shopify for selling stuff online,
    0:01:59 better help for your mind and element for your health. Choose wisely, my friends. Also,
    0:02:02 if you want to work with our amazing team, we just want to get in touch with me for whatever
    0:02:08 reason, go to lexfreeman.com/contact. And now onto the full ad reads. I try to make these
    0:02:13 interesting, but if you skip them, please still check out our sponsors. I enjoy their stuff. Maybe
    0:02:19 you will too. This episode is brought to you by Encore, a platform that provides data focused
    0:02:25 AI tooling for data annotation, curation and management and for model evaluation.
    0:02:31 We talk a little bit about public benchmarks in this podcast. I think mostly focused on
    0:02:37 software engineering, SWE bench. There’s a lot of exciting developments about how do you have
    0:02:42 a benchmark that you can’t cheat on. But if it’s not public, then you can use it the right way,
    0:02:48 which is to evaluate how well is the annotation, the data curation, the training, the pre training,
    0:02:53 the post training, all of that. How’s that working? Anyway, a lot of the fascinating conversation
    0:03:00 with the anthropic folks was focused on the language side. And there’s a lot of really
    0:03:06 incredible work that Encore is doing about annotating and organizing visual data. And they
    0:03:15 make it accessible for searching, for visualizing, for granular curation, all that kind of stuff.
    0:03:20 So I’m a big fan of data. It continues to be the most important thing. The nature of data and what
    0:03:25 it means to be good data, whether it’s human generated or synthetic data keeps changing,
    0:03:32 but it continues to be the most important component of what makes for a generally
    0:03:38 intelligent system, I think, and also for specialized intelligent systems as well.
    0:03:45 Go try out Encore to curate, annotate, and manage your AI data at Encore.com/Lex. That’s
    0:03:53 Encore.com/Lex. This episode is brought to you by the thing that keeps getting better and better
    0:03:59 and better notion. It used to be an awesome note taking tool. Then it started being a great team
    0:04:04 collaboration. So note taking for many people and management of all kinds of other project stuff
    0:04:13 across large teams. Now, more and more and more is becoming a AI superpowered note taking and team
    0:04:20 collaboration tool, really integrating AI probably better than any note taking tool I’ve used,
    0:04:25 not even close, honestly. Notion is truly incredible. I haven’t gotten a chance to use
    0:04:32 Notion on a large team. I imagine that that’s real when it begins to shine. But on a small team, it’s
    0:04:38 just really, really, really amazing. The integration of the assistant inside a particular
    0:04:44 file for summarization for generation, all that kind of stuff. But also the integration of an AI
    0:04:50 assistant to be able to ask questions about, you know, across docs, across wikis, across projects,
    0:04:57 across multiple files, to be able to summarize everything, maybe investigate project progress
    0:05:02 based on all the different stuff going on in different files. So really, really nice integration
    0:05:11 of AI. Try Notion AI for free when you go to notion.com/lex. That’s all lowercase. Notion.com/lex
    0:05:16 to try the power of Notion AI today. This episode is also brought to you by Shopify,
    0:05:22 a platform designed for anyone to sell anywhere with a great looking online store. I keep wanting
    0:05:27 to mention Shopify’s CEO, Toby, who’s brilliant. And I’m not sure why he hasn’t been on the podcast
    0:05:33 yet. I need to figure that out. Every time I’m in San Francisco, I want to talk to him. So he’s
    0:05:38 brilliant on all kinds of domains, not just entrepreneurship or tech, just philosophy and
    0:05:43 life, just his way of being. Plus an accent adds to the flavor profile of the conversation.
    0:05:49 I’ll be watching a cooking show for a little bit. Really, I think my first cooking show,
    0:05:57 it’s called Class Wars. It’s a South Korean show where chefs with Michelin stars compete
    0:06:02 against chefs without Michelin stars. And there’s something about one of the judges
    0:06:09 that just the charisma and the way that he describes every single detail of flavor, of
    0:06:15 texture, of what makes for a good dish. Yeah, so it’s contagious. I don’t really even care. I’m
    0:06:21 not a foodie. I don’t care about food in that way, but he makes me want to care. So anyway,
    0:06:26 that’s why I use the term flavor profile, referring to Toby, which has nothing to do with
    0:06:33 what I should probably be saying. And that is that you should use Shopify. I’ve used Shopify.
    0:06:40 Super easy, create a store, lexfreeman.com/store to sell a few shirts. Anyway, sign up for a $1 per
    0:06:46 month trial period at Shopify.com/lex. That’s all lowercase. Go to Shopify.com/lex to take
    0:06:52 your business to the next level today. This episode is also brought to you by BetterHelp,
    0:06:58 spelled H-E-L-P, Help. They figure out what you need to match you with a licensed therapist
    0:07:03 in under 48 hours. It’s for individuals. It’s for couples. It’s easy to create affordable,
    0:07:11 available worldwide. I saw a few books by a Jungian psychologist. And I was like in a
    0:07:16 delirious state of sleepiness, and I forgot to write his name down, but I need to do some research.
    0:07:23 I need to go back. I need to go back to my younger self when I dreamed of being a psychiatrist and
    0:07:31 reading Zygmunt Freud, and reading Carl Jung, and reading it the way young kids maybe read
    0:07:40 comic books. They were my superheroes of sorts. Kamu as well, Kafka, Nietzsche, Hesse, Dostoevsky,
    0:07:47 the sort of 19th and 20th century literary philosophers of sorts. Anyway, I need to go
    0:07:55 back to that. Maybe have a few conversations about Freud. Anyway, those folks, even if in part wrong,
    0:08:01 or true revolutionaries, were truly brave to explore the mind in the way they did. They showed
    0:08:08 the power of talking and delving deep into the human mind, into the shadow through the use of
    0:08:14 words. So highly recommend. And better help is a super easy way to start. Check them out at
    0:08:21 betterhelp.com/lex and save in your first month. That’s betterhelp.com/lex. This episode is also
    0:08:27 brought to you by Element, my daily zero sugar and delicious electrolyte mix that I’m going to take a
    0:08:33 sip of now. It’s been so long that I’ve been drinking Element that I don’t even remember life
    0:08:40 before Element. I guess I used to take salt pills because it’s such a big component of my exercise
    0:08:46 routine to make sure I get enough water and get enough electrolytes. Yeah, so combined with the
    0:08:52 fast thing that I’ve explored a lot and continue to do to this day and combined with low carb diets
    0:09:02 that I’m a little bit off the wagon on that one. I’m consuming probably like 60, 70, 80,
    0:09:10 maybe 100 some days, grams of carbohydrates. Not good, not good. My happiest is when I’m below 20
    0:09:16 grams or 10 grams of carbohydrates. I’m not like measuring it out, I’m just using numbers to sound
    0:09:21 smart. But I don’t take dieting seriously, but I do take the signals that my body sends quite
    0:09:30 seriously. So without question, making sure I get enough magnesium and sodium and get enough water
    0:09:36 is priceless. A lot of times when I have headaches, I just felt off or whatever were fixed
    0:09:43 near immediately and sometimes after 30 minutes, we just drink water with electrolytes. It’s beautiful
    0:09:48 and it’s delicious. Watermelon salt, the greatest flavor of all time. Get a sample pack for free
    0:09:56 with any purchase, try it at DrinkElements.com/Lex. This is a Lex Freeman podcast. To support it,
    0:10:10 please check out our sponsors in the description. And now, dear friends, here’s Dario Amade.
    0:10:24 Let’s start with a big idea of scaling laws and the scaling hypothesis. What is it?
    0:10:31 What is its history and where do we stand today? So I can only describe it as it relates to kind of
    0:10:36 my own experience, but I’ve been in the AI field for about 10 years. And it was something I noticed
    0:10:43 very early on. So I first joined the AI world when I was working at Baidu with Andrew Ng in late
    0:10:49 2014, which is almost exactly 10 years ago now. And the first thing we worked on was speech recognition
    0:10:55 systems. And in those days, I think deep learning was a new thing. It had made lots of progress,
    0:11:00 but everyone was always saying we don’t have the algorithms we need to succeed. We’re not,
    0:11:06 we’re only matching the tiny, tiny fraction. There’s so much we need to kind of discover
    0:11:13 algorithmically. We haven’t found the picture of how to match the human brain. And when, you know,
    0:11:16 in some ways it was fortunate, I was kind of, you know, you can have almost beginner’s luck,
    0:11:21 right? I was like a newcomer to the field. And, you know, I looked at the neural net that we were
    0:11:25 using for speech, the recurrent neural networks. And I said, I don’t know, what if you make them
    0:11:29 bigger and give them more layers? And what if you scale up the data along with this, right? I just
    0:11:35 saw these as like independent dials that you could turn. And I noticed that the model started to do
    0:11:40 better and better as you gave them more data, as you, as you made the models larger, as you
    0:11:46 trained them for longer. And I didn’t measure things precisely in those days. But, but along with,
    0:11:53 with colleagues, we very much got the informal sense that the more data and the more compute and
    0:11:58 the more training you put into these models, the better they perform. And so initially,
    0:12:03 my thinking was, hey, maybe that is just true for speech recognition systems, right? Maybe,
    0:12:09 maybe that’s just one particular quirk, one particular area. I think it wasn’t until 2017,
    0:12:16 when I first saw the results from GPT-1, that it clicked for me that language is probably the area
    0:12:22 in which we can do this. We can get trillions of words of language data. We can train on them.
    0:12:26 And the models we were trained in those days were tiny. You could train them on
    0:12:31 one to eight GPUs, whereas, you know, now we train jobs on tens of thousands, soon going to hundreds
    0:12:37 of thousands of GPUs. And so when I, when I saw those two things together, and, you know, there
    0:12:41 were a few people like Ilya Sootskiver, who, who you’ve interviewed, who had somewhat similar
    0:12:46 reviews, right? He might have been the first one, although I think a few people came to,
    0:12:50 came to similar reviews around the same time, right? There was, you know, Rich Sutton’s bitter
    0:12:56 lesson. There was Goren wrote about the scaling hypothesis. But I think somewhere between 2014
    0:13:01 and 2017 was when it really clicked for me, when I really got conviction that, hey,
    0:13:07 we’re going to be able to do these incredibly wide cognitive tasks if we just, if we just scale
    0:13:13 up the models. And at every stage of scaling, there are always arguments. And, you know,
    0:13:16 when I first heard them, honestly, I thought, probably I’m the one who’s wrong. And, you know,
    0:13:20 all these, all these experts in the field are right. They know the situation better,
    0:13:24 better than I do, right? There’s, you know, the Chomsky argument about, like,
    0:13:27 you can get syntactics, but you can’t get semantics. There was this idea, oh,
    0:13:31 you can make a sentence, make sense, but you can’t make a paragraph, make sense.
    0:13:36 The latest one we have today is, you know, we’re going to run out of data or the data
    0:13:42 isn’t high quality enough or models can’t reason. And, and each time, every time we managed to,
    0:13:47 we managed to either find a way around or scaling just is the way around. Sometimes it’s one,
    0:13:53 sometimes it’s the other. And so I’m now at this point, I still think, you know, it’s, it’s,
    0:13:58 it’s always quite uncertain. We have nothing but inductive inference to tell us that the next
    0:14:03 two years are going to be like the next, the last 10 years. But, but I’ve seen, I’ve seen the movie
    0:14:09 enough times, I’ve seen the story happen for enough times to really believe that probably
    0:14:14 the scaling is going to continue and that there’s some magic to it that we haven’t really explained
    0:14:21 on a theoretical basis yet. And of course, the scaling here is bigger networks, bigger data,
    0:14:27 bigger compute. Yes. All of those. In particular, linear scaling up of bigger networks,
    0:14:35 bigger training times, and more, and more data. So all of these things, almost like a chemical
    0:14:39 reaction, you know, you have three ingredients in the chemical reaction, and you need to linearly
    0:14:43 scale up the three ingredients. If you scale up one, not the others, you run out of the other
    0:14:49 reagents and the, and the reaction stops. But if you scale up everything, everything in series,
    0:14:53 then, then the reaction can proceed. And of course, now that you have this kind of empirical
    0:15:01 science slash art, you can apply to other more nuanced things like scaling laws applied to
    0:15:07 interpretability or scaling laws applied to post training or just seeing how does this thing scale.
    0:15:12 But the big scaling law, I guess the underlying scaling hypothesis has to do with big networks,
    0:15:19 big data leads to intelligence. Yeah, we’ve, we’ve documented scaling laws in lots of domains other
    0:15:26 than language, right? So initially, the, the paper we did that first showed it was in early 2020,
    0:15:31 where we first showed it for language. There was then some work late in 2020, where we showed the
    0:15:39 same thing for other modalities, like images, video, text to image, image to text, math,
    0:15:43 that they all had the same pattern. And, and you’re right, now there are other stages like
    0:15:48 post training or there are new types of reasoning models. And in, in, in all of those cases that
    0:15:55 we’ve measured, we see similar, similar types of scaling laws. A bit of a philosophical question,
    0:16:01 but what’s your intuition about why bigger is better in terms of network size and data size?
    0:16:08 Why does it lead to more intelligent models? So in my previous career as a, as a biophysicist,
    0:16:13 so I did physics undergrad and then biophysics in, in, in grad school. So I think back to what
    0:16:19 I know as a physicist, which is actually much less than what some of my colleagues at Anthropic have
    0:16:24 in terms of, in terms of expertise in physics. There’s this, there’s this concept called the
    0:16:32 one over eth noise and one over X distributions, where we’re often, you know, just, just like
    0:16:37 if you add up a bunch of natural processes, you get a Gaussian. If you add up a bunch of kind of
    0:16:44 differently distributed natural processes, if you like, if you like, take a, take a probe and,
    0:16:49 and hook it up to a resistor, the distribution of the thermal noise in the resistor goes as one
    0:16:57 over the frequency. It’s some kind of natural convergent distribution. And, and I think what
    0:17:02 it amounts to is that if you look at a lot of things that are, that are produced by some natural
    0:17:08 process that has a lot of different scales, right? Not a Gaussian, which is kind of narrowly distributed,
    0:17:14 but you know, if I look at kind of like large and small fluctuations that lead to, lead to electrical
    0:17:21 noise, they have this decaying one over X distribution. And so now I think of like patterns
    0:17:25 in the physical world, right? If I, if, or, or in language, if I think about the patterns in
    0:17:30 language, there are some really simple patterns. Some words are much more common than others,
    0:17:35 like the, then there’s basic noun verb structure. Then there’s the fact that, you know, nouns and
    0:17:39 verbs have to agree, they have to coordinate, and there’s the higher level sentence structure,
    0:17:44 then there’s the thematic structure of paragraphs. And so the fact that there’s this regressing
    0:17:50 structure, you can imagine that as you make the networks larger, first they capture the
    0:17:54 really simple correlations, the really simple patterns, and there’s this long tail of other
    0:18:00 patterns. And if that long tail of other patterns is really smooth, like it is with the one over F
    0:18:06 noise in, you know, physical processes, like, like, like resistors, then you can imagine as you make
    0:18:10 the network larger, it’s kind of capturing more and more of that distribution.
    0:18:15 And so that smoothness gets reflected in how well the models are at predicting and how well
    0:18:21 they perform. Language is an evolved process, right? We’ve, we’ve developed language, we have
    0:18:26 common words and less common words, we have common expressions and less common expressions.
    0:18:32 We have ideas, cliches that are expressed frequently, and we have novel ideas. And that
    0:18:37 process has, has developed, has evolved with humans over millions of years.
    0:18:41 And so the, the, the guess, and this is pure speculation, would be, would be that there is,
    0:18:47 there’s some kind of long tail distribution of, of, of the distribution of these ideas.
    0:18:52 So there’s the long tail, but also there’s the height of the hierarchy of concepts that you’re
    0:18:56 building up. So the bigger the network, presumably you have a higher capacity to…
    0:19:01 Exactly. If you have a small network, you only get the common stuff, right? If, if I take a tiny
    0:19:05 neural network, it’s very good at understanding that, you know, a sentence has to have, you know,
    0:19:10 verb, adjective, noun, right? But it’s, it’s terrible at deciding what those verb, adjective,
    0:19:14 and noun should be and whether they should make sense. If I make it just a little bigger,
    0:19:18 it gets good at that. Then suddenly it’s good at the sentences, but it’s not good at the paragraphs.
    0:19:24 And so these, these rarer and more complex patterns get picked up as I add, as I add more
    0:19:29 capacity to the network. Well, the natural question then is, what’s the ceiling of this?
    0:19:35 Yeah. Like how complicated and complex is the real world? How much does this stuff is there to learn?
    0:19:41 I don’t think any of us knows the answer to that question. My strong instinct would be that there’s
    0:19:46 no ceiling below the level of humans, right? We humans are able to understand these various
    0:19:52 patterns. And so that, that makes me think that if we continue to, you know, scale up these,
    0:19:57 these, these models to kind of develop new methods for training them and scaling them up,
    0:20:02 that will at least get to the level that we’ve gotten to with humans. There’s then a question of,
    0:20:06 you know, how much more is it possible to understand than humans do? How much,
    0:20:11 how much is it possible to be smarter and more perceptive than humans? I, I would guess the
    0:20:18 answer has, has got to be domain dependent. If I look at an area like biology, and, you know,
    0:20:24 I wrote this essay, Machines of Loving Grace, it seems to me that humans are struggling to
    0:20:29 understand the complexity of biology, right? If you go to Stanford or to Harvard or to Berkeley,
    0:20:35 you have whole departments of, you know, folks trying to study, you know, like the immune system
    0:20:42 or metabolic pathways. And, and each person understands only a tiny bit, part of it specializes,
    0:20:46 and they’re struggling to combine their knowledge with that of, with that of other humans. And so
    0:20:50 I have an instinct that there’s, there’s a lot of room at the top for AIs to get
    0:20:56 smarter. If I think of something like materials in the, in the physical world or,
    0:21:02 you know, like addressing, you know, conflicts between humans or something like that. I mean,
    0:21:06 you know, it may be, there’s only some of these problems are not intractable, but much harder.
    0:21:11 And, and it may be that there’s only, there’s only so well you can do with some of these things,
    0:21:16 right? Just like with speech recognition, there’s only so clear I can hear your speech. So I think
    0:21:21 in some areas, there may be ceilings in, in, you know, that are very close to what humans
    0:21:26 have done in other areas, those ceilings may be very far away. And I think we’ll only find
    0:21:30 out when we build these systems. There’s, it’s very hard to know in advance, we can speculate,
    0:21:35 but we can’t be sure. And in some domains, the ceiling might have to do with human bureaucracies
    0:21:39 and things like this, as you write about. Yes. So humans fundamentally have to be part of the loop.
    0:21:45 That’s the cause of the ceiling, not maybe the limits of the intelligence. Yeah. I think in many
    0:21:52 cases, you know, in theory, technology could change very fast, for example, all the things that we
    0:21:58 might invent with respect to biology. But remember, there’s, there’s a, you know, there’s a clinical
    0:22:03 trial system that we have to go through to actually administer these things to humans. I think that’s
    0:22:08 a mixture of things that are unnecessary and bureaucratic and things that kind of protect the
    0:22:12 integrity of society. And the whole challenge is that it’s hard to tell, it’s hard to tell what’s
    0:22:18 going on. It’s hard to tell which is which, right? My view is definitely, I think, in terms of drug
    0:22:23 development, we, my view is that we’re too slow and we’re too conservative. But certainly, if you
    0:22:28 get these things wrong, you know, it’s, it’s possible to, to risk people’s lives by being,
    0:22:34 by being, by being too reckless. And so at least, at least some of these human institutions are in
    0:22:39 fact, protecting people. So it’s, it’s all about finding the balance. I strongly suspect that balance
    0:22:44 is kind of more on the side of pushing to make things happen faster, but there is a balance.
    0:22:51 If we do hit a limit, if we do hit a slowdown in the scaling laws, what do you think would be
    0:22:56 the reason? Is it compute limited, data limited? Is it something else? Idea limited?
    0:23:02 So a few things. Now we’re talking about hitting the limit before we get to the level of, of humans
    0:23:07 and the scale of humans. So, so I think one that’s, you know, one that’s popular today, and I think,
    0:23:12 you know, could be a limit that we run into. Like most of the limits, I would bet against it,
    0:23:16 but it’s definitely possible is we simply run out of data. There’s only so much data on the
    0:23:21 internet. And there’s issues with the quality of the data, right? You can get hundreds of
    0:23:27 trillions of words on the internet, but a lot of it is, is repetitive or it’s search engine,
    0:23:32 you know, search engine optimization, drivel, or maybe in the future, it’ll even be text generated
    0:23:40 by AIs itself. And, and so I think there are limits to what, to what can be produced in this way.
    0:23:46 That said, we, and I would guess other companies are working on ways to make data synthetic,
    0:23:52 where you can, you know, you can use the model to generate more data of the type that you have,
    0:23:58 that you have already, or even generate data from scratch. If you think about what was done with
    0:24:03 DeepMind’s AlphaGo Zero, they managed to get a bot all the way from, you know, no ability to play
    0:24:09 Go whatsoever to above human level, just by playing against itself. There was no example data from
    0:24:14 humans required in the AlphaGo Zero version of it. The other direction, of course, is these
    0:24:20 reasoning models that do chain of thought and stop to think and reflect on their own thinking.
    0:24:25 In a way, that’s another kind of synthetic data coupled with reinforcement learning. So my, my
    0:24:30 guess is, with one of those methods, we’ll get around the data limitation, or there may be other
    0:24:35 sources of data that are, that are available. We could just observe that even if there’s no
    0:24:40 problem with data, as we start to scale models up, they just stop getting better. It’s, it seemed to
    0:24:46 be our reliable observation that they’ve gotten better. That could just stop at some point for
    0:24:53 reason we don’t understand. The answer could be that we need to, you know, we need to invent some
    0:25:00 new architecture. It’s been, there have been problems in the past with, say, numerical stability
    0:25:04 of models, where it looked like things were, were leveling off, but, but actually, you know,
    0:25:09 when we, when we, when we found the right on blocker, they didn’t end up doing so. So perhaps
    0:25:15 there’s new, some new optimization method or some new technique we need to unblock things.
    0:25:20 I’ve seen no evidence of that so far, but if things were to, to slow down that perhaps could
    0:25:28 be one reason. What about the limits of compute, meaning the expensive nature of building bigger
    0:25:33 and bigger data centers? So right now, I think, you know, most of the frontier model companies,
    0:25:39 I would guess, are operating, you know, roughly, you know, one billion dollar scale plus or minus
    0:25:44 a factor of three, right? Those are the models that exist now or are being trained now. I think
    0:25:51 next year, we’re going to go to a few billion. And then, 2026, we may go to, you know, above 10,
    0:25:58 10 billion and probably by 2027, their ambitions to build 100, 100 billion dollar, 100 billion
    0:26:03 dollar clusters. And I think all of that actually will happen. There’s a lot of determination to
    0:26:08 build the compute to do it within this country. And I would guess that it actually does happen. Now,
    0:26:13 if we get to 100 billion, that’s still not enough compute, that’s still not enough scale,
    0:26:19 then either we need even more scale or we need to develop some way of doing it more efficiently
    0:26:24 of shifting the curve. I think between all of these, one of the reasons I’m bullish about
    0:26:30 powerful AI happening so fast is just that if you extrapolate the next few points on the curve,
    0:26:36 we’re very quickly getting towards human level ability, right? Some of the new models that we
    0:26:40 developed, some reasoning models that have come from other companies, they’re starting to get to
    0:26:45 what I would call the PhD or professional level, right? If you look at their coding ability,
    0:26:53 the latest model we released, Sonnet 3.5, the new updated version, it gets something like 50%
    0:26:59 on SWE bench. And SWE bench is an example of a bunch of professional real world software engineering
    0:27:06 tasks. At the beginning of the year, I think the state of the art was 3 or 4%. So in 10 months,
    0:27:12 we’ve gone from 3% to 50% on this task. And I think in another year, we’ll probably be at
    0:27:18 90%. I mean, I don’t know, but might even be less than that. We’ve seen similar things in
    0:27:27 graduate level math, physics, and biology from models like OpenAI’s 01. So if we just continue
    0:27:33 to extrapolate this, right, in terms of skill that we have, I think if we extrapolate the straight
    0:27:39 curve, within a few years, we will get to these models being above the highest professional
    0:27:44 level in terms of humans. Now, will that curve continue? You pointed to and I’ve pointed to a
    0:27:50 lot of reasons, possible reasons why that might not happen. But if the extrapolation curve continues,
    0:27:55 that is the trajectory we’re on. So Anthropic has several competitors. It’d be interesting to get
    0:28:02 your sort of view of it all. OpenAI, Google, XAI, Meta, what does it take to win in the broad sense
    0:28:09 of win in the space? Yeah, so I want to separate out a couple things. So Anthropic’s mission is to
    0:28:17 kind of try to make this all go well. And we have a theory of change called race to the top. Race to
    0:28:25 the top is about trying to push the other players to do the right thing by setting an example. It’s
    0:28:29 not about being the good guy, it’s about setting things up so that all of us can be the good guy.
    0:28:34 I’ll give a few examples of this. Early in the history of Anthropic, one of our co-founders,
    0:28:38 Chris Ola, who I believe you’re interviewing soon, you know, he’s the co-founder of the field of
    0:28:44 mechanistic interpretability, which is an attempt to understand what’s going on inside AI models.
    0:28:50 So we had him and one of our early teams focus on this area of interpretability, which we think
    0:28:57 is good for making models safe and transparent. For three or four years, that had no commercial
    0:29:01 application whatsoever. It still doesn’t, today we’re doing some early betas with it,
    0:29:07 and probably it will eventually, but, you know, this is a very, very long research bed and one
    0:29:12 in which we’ve built in public and shared our results publicly. And we did this because, you
    0:29:18 know, we think it’s a way to make models safer. An interesting thing is that as we’ve done this,
    0:29:23 other companies have started doing it as well. In some cases because they’ve been inspired by it,
    0:29:29 in some cases because they’re worried that, you know, if other companies are doing this,
    0:29:33 that look more responsible, they want to look more responsible too. No one wants to look like
    0:29:39 the irresponsible actor. And so they adopt this, they adopt this as well. When folks come to
    0:29:43 Anthropic, interpretability is often a draw, and I tell them, the other places you didn’t go,
    0:29:51 tell them why you came here. And then you see soon that there’s interpretability teams
    0:29:56 elsewhere as well. And in a way, that takes away our competitive advantage because it’s like, oh,
    0:30:02 now others are doing it as well, but it’s good for the broader system. And so we have to invent
    0:30:07 some new thing that we’re doing that others aren’t doing as well. And the hope is to basically
    0:30:14 bid up the importance of doing the right thing. And it’s not about us in particular, right? It’s
    0:30:20 not about having one particular good guy. Other companies can do this as well. If they join the
    0:30:27 race to do this, that’s the best news ever, right? It’s about kind of shaping the incentives to
    0:30:32 point upward instead of shaping the incentives to point downward. And we should say this example,
    0:30:39 the field of mechanistic interpretability is just a rigorous, non-hand-wavy way of doing AI
    0:30:45 safety, or it’s tending that way. Trying to. I mean, I think we’re still early in terms of our
    0:30:50 ability to see things, but I’ve been surprised at how much we’ve been able to look inside these
    0:30:56 systems and understand what we see, right? Unlike with the scaling laws where it feels like there’s
    0:31:03 some, you know, law that’s deriving these models to perform better. On the inside, the models aren’t,
    0:31:06 you know, there’s no reason why they should be designed for us to understand them, right? They’re
    0:31:11 designed to operate. They’re designed to work just like the human brain or human biochemistry.
    0:31:15 They’re not designed for a human to open up the hatch, look inside and understand them.
    0:31:19 But we have found, and you know, you can talk in much more detail about this to Chris,
    0:31:24 that when we open them up, when we do look inside them, we find things that are surprisingly
    0:31:29 interesting. And as a side effect, you also get to see the beauty of these models. You get to explore
    0:31:35 the sort of the beautiful nature of large neural networks through the MEK and TURP kind of methodology.
    0:31:40 I’m amazed at how clean it’s been. I’m amazed at things like induction heads.
    0:31:48 I’m amazed at things like, you know, that we can, you know, use sparse auto encoders to find these
    0:31:54 directions within the networks, and that the directions correspond to these very clear concepts.
    0:31:59 We demonstrated this a bit with the Golden Gate Bridge Claude. So this was an experiment where
    0:32:04 we found a direction inside one of the neural networks layers that corresponded to the Golden
    0:32:10 Gate Bridge. And we just turned that way up. And so we released this model as a demo. It was
    0:32:16 kind of half a joke for a couple of days, but it was illustrative of the method we developed.
    0:32:22 And you could take the model, you could ask it about anything. You know, it would be like,
    0:32:27 you could say, how was your day? And anything you asked because this feature was activated,
    0:32:32 it would connect to the Golden Gate Bridge. So it would say, you know, I’m feeling relaxed and
    0:32:36 expansive, much like the arches of the Golden Gate Bridge. Or, you know,
    0:32:40 it would masterfully change topic. Yes. To the Golden Gate Bridge and integrate it.
    0:32:44 There was also a sadness to it, to the focus ahead on the Golden Gate Bridge. I think people
    0:32:50 quickly fell in love with it. I think so people already miss it because it was taken down, I think,
    0:32:57 after a day. Somehow these interventions on the model where you kind of adjust its behavior,
    0:33:02 somehow emotionally made it seem more human than any other version of the model.
    0:33:05 Strong personality, strong identity. It has a strong personality.
    0:33:09 It has these kind of like obsessive interests. You know, we can all think of someone who’s like
    0:33:13 obsessed with something. So it does make it feel somehow a bit more human.
    0:33:19 Let’s talk about the present. Let’s talk about Claude. So this year, a lot has happened. In March,
    0:33:28 Claude III Opus Sonnet Haiku were released. Then Claude III V Sonnet in July with an updated version
    0:33:34 just now released. And then also Claude III V Haiku was released. Okay. Can you explain the
    0:33:40 difference between Opus Sonnet and Haiku and how we should think about the different versions?
    0:33:44 Yeah. So let’s go back to March when we first released these three models. So,
    0:33:50 you know, our thinking was different companies produce kind of large and small models,
    0:33:57 better and worse models. We felt that there was demand both for a really powerful model,
    0:34:02 you know, and that might be a little bit slower that you’d have to pay more for.
    0:34:08 And also for fast cheap models that are as smart as they can be for how fast and cheap, right?
    0:34:13 Whenever you want to do some kind of like, you know, difficult analysis, like if I, you know,
    0:34:18 I want to write code, for instance, or, you know, I want to brainstorm ideas or I want to do creative
    0:34:23 writing. I want the really powerful model. But then there’s a lot of practical applications
    0:34:28 in a business sense where it’s like, I’m interacting with a website. I, you know, like,
    0:34:34 I’m like doing my taxes or I’m, you know, talking to, you know, to like a legal advisor and I want
    0:34:39 to analyze a contract or, you know, we have plenty of companies that are just like, you know,
    0:34:45 I want to do auto-complete on my IDE or something. And for all of those things, you want to act
    0:34:51 fast and you want to use the model very broadly. So we wanted to serve that whole spectrum of needs.
    0:34:57 So we ended up with this, you know, this kind of poetry theme. And so what’s a really short poem?
    0:35:03 It’s a haiku. And so haiku is the small, fast, cheap model that is, you know, was at the time,
    0:35:10 was really surprisingly, surprisingly intelligent for how fast and cheap it was. Sonnet is a medium
    0:35:15 sized poem, right? A couple of paragraphs. And so Sonnet was the middle model. It is smarter,
    0:35:20 but also a little bit slower, a little bit more expensive. And an opus, like a magnum opus is a
    0:35:27 large work, opus was the largest, smartest model at the time. So that was the original kind of
    0:35:35 thinking behind it. And our thinking then was, well, each new generation of models should shift
    0:35:42 that trade off curve. So when we released Sonnet 3.5, it has the same, roughly the same, you know,
    0:35:52 cost and speed as the Sonnet 3 model. But it increased its intelligence to the point where it
    0:35:59 was smarter than the original opus 3 model, especially for code, but also just in general.
    0:36:06 And so now, you know, we’ve shown results for haiku 3.5. And I believe haiku 3.5,
    0:36:13 the smallest new model, is about as good as opus 3, the largest old model. So basically,
    0:36:17 the aim here is to shift the curve. And then at some point, there’s going to be an opus 3.5.
    0:36:24 Now, every new generation of models has its own thing, they use new data, their personality changes
    0:36:31 in ways that we kind of, you know, try to steer, but are not fully able to steer. And so there’s
    0:36:35 never quite that exact equivalence where the only thing you’re changing is intelligence.
    0:36:39 We always try and improve other things, and some things change without us,
    0:36:45 without us knowing or measuring. So it’s very much an exact science. In many ways,
    0:36:49 the manner and personality of these models is more in art than it is in science.
    0:37:00 So what is sort of the reason for the span of time between, say, cloud opus 3.0 and 3.5?
    0:37:04 What is it, what takes that time if you can speak to?
    0:37:09 Yeah, so there’s different, there’s different processes. There’s pre-training, which is,
    0:37:14 you know, just kind of the normal language model training. And that takes a very long time.
    0:37:20 That uses, you know, these days, you know, tens, you know, tens of thousands, sometimes many tens
    0:37:26 of thousands of GPUs or TPUs or training them or, you know, whatever, we use different platforms,
    0:37:33 but, you know, accelerator chips, often, often training for months. There’s then a kind of
    0:37:39 post-training phase where we do reinforcement learning from human feedback, as well as other
    0:37:45 kinds of reinforcement learning. That phase is getting larger and larger now. And, you know,
    0:37:50 often, that’s less of an exact science. It often takes effort to get it right.
    0:37:56 Models are then tested with some of our early partners to see how good they are.
    0:38:02 And they’re then tested both internally and externally for their safety, particularly for
    0:38:08 catastrophic and autonomy risks. So we do internal testing, according to our responsible
    0:38:12 scaling policy, which I, you know, could talk more about that in detail.
    0:38:16 And then we have an agreement with the U.S. and the UK AI Safety Institute,
    0:38:22 as well as other third-party testers in specific domains to test the models for what are called
    0:38:28 CBRN risks, chemical, biological, radiological, and nuclear, which are, you know, we don’t think
    0:38:33 that models impose these risks seriously yet, but every new model we want to evaluate to see
    0:38:42 if we’re starting to get close to some of these more dangerous capabilities. So those are the
    0:38:47 phases. And then, you know, then it just takes some time to get the model working in terms of
    0:38:54 inference and launching it in the API. So there’s just a lot of steps to actually make
    0:38:59 it a model work. And of course, you know, we’re always trying to make the processes
    0:39:02 as streamlined as possible, right? We want our safety testing to be rigorous,
    0:39:08 but we want it to be rigorous and to be, you know, to be automatic, to happen as fast as it can
    0:39:13 without compromising on rigor. Same with our pre-training process and our post-training process.
    0:39:17 So, you know, it’s just like building anything else. It’s just like building airplanes. You want
    0:39:21 to make them, you know, you want to make them safe, but you want to make the process streamlined.
    0:39:25 And I think the creative tension between those is, you know, is an important thing in making the
    0:39:30 models work. Yeah. Rumor on the street, I forget who was saying that Anthropica is really good
    0:39:36 tooling. So, probably a lot of the challenge here is on the software engineering side is to
    0:39:42 build the tooling to have like a efficient low friction interaction with the infrastructure.
    0:39:50 You would be surprised how much of the challenges of, you know, building these models comes down to,
    0:39:55 you know, software engineering, performance engineering, you know, you know, from the outside,
    0:39:59 you might think, oh man, we had this Eureka breakthrough, right? You know, this movie with
    0:40:06 the science. We discovered it. We figured it out. But I think all things, even, you know,
    0:40:13 incredible discoveries, like they almost always come down to the details and often super, super
    0:40:18 boring details. I can’t speak to whether we have better tooling than other companies. I mean, you
    0:40:22 know, I haven’t been at those other companies at least, at least not recently. But it’s certainly
    0:40:27 something we give a lot of attention to. I don’t know if you can say, but from 3,
    0:40:32 from Claude 3 to Claude 3-5, is there any extra pre-training going on as they mostly focus on
    0:40:37 the post-training? There’s been leaps in performance. Yeah, I think at any given stage,
    0:40:43 we’re focused on improving everything at once. Just naturally, like there are different teams,
    0:40:49 each team makes progress in a particular area in making a particular, you know, their particular
    0:40:53 segment of the relay race better. And it’s just natural that when we make a new model,
    0:40:58 we put all of these things in at once. So the data you have, like the preference data you get
    0:41:06 from RLHF, is that applicable? Is there a way to apply it to newer models as it gets trained up?
    0:41:10 Yeah, preference data from old models sometimes gets used for new models. Although, of course,
    0:41:15 it performs somewhat better when it’s, you know, trained on the new models.
    0:41:19 Note that we have this, you know, constitutional AI method such that we don’t only use preference
    0:41:24 data, we kind of, there’s also a post-training process where we train the model against itself.
    0:41:28 And there’s, you know, new types of post-training the model against itself that are used every day.
    0:41:34 So it’s not just RLHF, it’s a bunch of other methods as well. Post-training, I think, you know,
    0:41:39 is becoming more and more sophisticated. Well, what explains the big leap in performance for
    0:41:43 the new Sonnet 3-5? I mean, at least in the programming side. And maybe this is a good
    0:41:47 place to talk about benchmarks. What does it mean to get better? Just the number went up.
    0:41:56 But, you know, I program, but I also love programming, and I clawed 3-5 through cursors,
    0:42:01 what I use to assist me in programming. And there was, at least experientially,
    0:42:08 anecdotally, it’s gotten smarter at programming. So what, like, what does it take to get it to
    0:42:13 get it smarter? We observe that as well, by the way. There were a couple of very strong engineers
    0:42:19 here at Anthropic who, all previous code models, both produced by us and produced by all the other
    0:42:23 companies, hadn’t really been useful to, hadn’t really been useful to them. You know, they said,
    0:42:29 you know, maybe, maybe this is useful to the beginner, it’s not useful to me. But Sonnet 3.5,
    0:42:32 the original one, for the first time, they said, oh my god, this helped me with something
    0:42:35 that, you know, that it would have taken me hours to do. This is the first model that has
    0:42:41 actually saved me time. So again, the waterline is rising. And then I think, you know, the new Sonnet
    0:42:47 has been even better. In terms of what it takes, I mean, I’ll just say it’s been across the board.
    0:42:53 It’s in the pre-training, it’s in the post-training, it’s in various evaluations that we do. We’ve
    0:42:59 observed this as well. And if we go into the details of the benchmark, so SWE Bench is basically,
    0:43:03 you know, since, you know, since you’re a programmer, you know, you’ll be familiar with,
    0:43:09 like, pull requests and, you know, just pull requests are like, you know, like a sort of,
    0:43:14 a sort of atomic unit of work. You know, you could say, I’m, you know, I’m implementing one,
    0:43:22 I’m implementing one thing. And so SWE Bench actually gives you kind of a real world situation
    0:43:26 where the code base is in the current state. And I’m trying to implement something that’s,
    0:43:30 you know, that’s described in, described in language. We have internal benchmarks where we,
    0:43:34 where we measure the same thing. And you say, just give the model free reign to like, you know,
    0:43:41 do anything, run, run, run anything, edit anything. How, how well is it able to complete these tasks?
    0:43:47 And it’s that benchmark that’s gone from it can do it 3% of the time to it can do it about 50%
    0:43:52 of the time. So I actually do believe that if we get, you can gain benchmarks, but I think if we
    0:43:58 get to 100% of that benchmark in a way that isn’t kind of like overtrained or, or game for that
    0:44:04 particular benchmark, probably represents a real and serious increase in kind of, in kind of
    0:44:10 programming, programming ability. And I would suspect that if we can get to, you know, 90,
    0:44:16 90, 95%, that, that, that, you know, it will, it will represent ability to autonomously do a
    0:44:21 significant fraction of software engineering tasks. Well, ridiculous timeline question.
    0:44:28 When is GLaDOPUS 3.5 coming up? Not giving you an exact date, but, you know, they’re, they’re,
    0:44:33 you know, as far as we know, the plan is still to have a Claude 3.5 opus.
    0:44:36 Are we going to get it before GTA six or no?
    0:44:40 Like Duke Nukem forever. There was that game that, there was some game that was delayed 15
    0:44:44 years. Is that Duke Nukem forever? Yeah. And I think GTA is not just releasing trailers.
    0:44:47 You know, it’s only been three months since we released the first sonnet.
    0:44:52 Yeah. It’s the incredible pace. It just, it just tells you about the pace.
    0:44:54 Yeah. The expectations for when things are going to come out.
    0:45:01 So what about 4.0? So how do you think about sort of as these models get bigger and bigger about
    0:45:09 versioning and also just versioning in general? Why sonnet 3.5 updated with the date? Why not
    0:45:14 sonnet 3.6? Yeah, it’s actually naming is actually an interesting challenge here,
    0:45:19 right? Because I think a year ago, most of the model was pre-training. And so you could start
    0:45:22 from the beginning and just say, okay, we’re going to have models of different sizes. We’re
    0:45:27 going to train them all together and, you know, we’ll have a family of naming schemes and then
    0:45:31 we’ll put some new magic into them and then, you know, we’ll have the next, the next generation.
    0:45:36 The trouble starts already when some of them take a lot longer than others to train, right?
    0:45:41 That already messes up your time, time a little bit. But as you make big improvements in,
    0:45:47 as you make big improvements in pre-training, then you suddenly notice, oh, I can make better
    0:45:52 pre-trained model and that doesn’t take very long to do. And, but, you know, clearly it has
    0:45:58 the same, you know, size and shape of previous models. So I think those two together as well as
    0:46:05 the timing, timing issues, any kind of scheme you come up with, you know, the reality tends to kind
    0:46:09 of frustrate that scheme, right? It tends to kind of break out of the, break out of the scheme.
    0:46:15 It’s not like software where you can say, oh, this is like, you know, 3.7. This is 3.8. No,
    0:46:19 you have models with different, different trade-offs. You can change some things in your models. You
    0:46:24 can train, you can change other things. Some are faster and slower in France. Some have to be more
    0:46:29 expensive. Some have to be less expensive. And so I think all the companies have struggled with
    0:46:34 this. I think we did very, you know, I think, I think we were in a good, good position in terms
    0:46:40 of naming when we had Haikou, Sonnet. And we’re trying to maintain it, but it’s not, it’s not,
    0:46:47 it’s not perfect. So we’ll, we’ll try and get back to the simplicity, but it, it, just the, the,
    0:46:51 the nature of the field, I feel like no one’s figured out naming. It’s somehow a different
    0:46:57 paradigm from like normal software. And, and, and so we just, none of the companies have been
    0:47:03 perfect at it. It’s something we struggle with surprisingly much relative to, you know, how,
    0:47:08 relative to how trivial it is to, you know, for the grand science of training the models.
    0:47:15 So from the user side, the user experience of the updated Sonnet 3.5 is just different than
    0:47:22 the previous June 2024 Sonnet 3.5. It would be nice to come up with some kind of labeling
    0:47:27 that embodies that, because people talk about Sonnet 3.5, but now there’s a different one.
    0:47:34 And so how do you refer to the previous one and the new one? And it, it, when there’s a distinct
    0:47:41 improvement, it just makes conversation about it just challenging. Yeah. Yeah. I definitely think
    0:47:47 this question of there are lots of properties of the models that are not reflected in the benchmarks.
    0:47:53 I think, I think that’s, that’s definitely the case and everyone agrees. And not all of them
    0:48:00 are capabilities. Some of them are, you know, models can be polite or brusque. They can be,
    0:48:08 you know, very reactive or they can ask you questions. They can have what, what feels like
    0:48:13 a warm personality or a cold personality. They can be boring or they can be very distinctive,
    0:48:19 like Golden Gate Claude was. And we have a whole, you know, we have a whole team kind of focused
    0:48:24 on, I think we call it Claude character. Amanda leads that team and we’ll talk to you about that.
    0:48:31 But it’s still a very inexact science. And often we find that models have properties that we’re
    0:48:37 not aware of. The fact of the matter is that you can, you know, talk to a model 10,000 times and
    0:48:42 there are some behaviors you might not see. Just like, just like with a human, right? I can know
    0:48:47 someone for a few months and, you know, not know that they have a certain skill or not know that
    0:48:51 there’s a certain side to them. And so I think, I think we just have to get used to this idea.
    0:48:56 And we’re always looking for better ways of testing our models to, to demonstrate these
    0:49:01 capabilities and, and, and also to decide which are, which are the, which are the personality
    0:49:05 properties we want models to have and which we don’t want to have that itself. The normative
    0:49:11 question is also super interesting. I gotta ask you a question from Reddit. From Reddit. Oh boy.
    0:49:17 You know, there, there’s just a fascinating, to me at least it’s a psychological social phenomenon
    0:49:25 where people report that Claude has gotten dumber for them over time. And so the question is,
    0:49:29 does the user complaint about the dumbing down of Claude three, five sonnet hold any water?
    0:49:36 So are these anecdotal reports a kind of social phenomena or did Claude,
    0:49:41 is there any cases where Claude would get dumber? So this actually doesn’t apply. This,
    0:49:47 this isn’t just about Claude. I believe this, I believe I’ve seen these complaints
    0:49:52 for every foundation model produced by a major company. People said this about GPT four,
    0:49:59 they said it about GPT four turbo. So, so, so a couple of things. One, the actual weights of
    0:50:05 the model, right, the actual brain of the model, that does not change unless we introduce a new
    0:50:10 model. There, there are just a number of reasons why it would not make sense practically to be
    0:50:15 randomly substituting in, substituting in new versions of the model. It’s difficult from an
    0:50:20 inference perspective. And it’s actually hard to control all the consequences of changing the
    0:50:25 weights of the model. Let’s say you wanted to fine tune the model to be like, I don’t know,
    0:50:30 to like, to say certainly less, which, you know, an old version of sonnet used to do. You actually
    0:50:34 end up changing 100 things as well. So we have a whole process for it. And we have a whole process
    0:50:40 for modifying the model, we do a bunch of testing on it, we do a bunch of, like we do a bunch of
    0:50:46 user testing and early customers. So it, we both have never changed the weights of the model without,
    0:50:51 without telling anyone. And it wouldn’t, certainly in the current setup, it would not make sense to
    0:50:57 do that. Now, there are a couple of things that we do occasionally do. One is sometimes we run AB
    0:51:05 tests. But those are typically very close to when a model is being released, and for a very small
    0:51:11 fraction of time. So, you know, like the, you know, the, the day before the new sonnet 3.5,
    0:51:16 I agree, we should have had a better name. It’s clunky to refer to it. There were some comments
    0:51:20 from people that like, it’s got, it’s got, it’s gotten a lot better. And that’s because, you know,
    0:51:25 fraction were exposed to, to an AB test for, for those one or, for those one or two days.
    0:51:31 The other is that occasionally the system prompt will change on the system prompt can have some
    0:51:36 effects, although it’s unlikely to dumb down models. It’s unlikely to make them dumber.
    0:51:42 And, and, and we’ve seen that while these two things, which I’m listening to be very complete,
    0:51:51 happened relatively, happened quite infrequently. The complaints about, for us and for other model
    0:51:55 companies about the model change, the model isn’t good at this, the model got more censored, the
    0:52:00 model was dumbed down, those complaints are constant. And so I don’t want to say like people
    0:52:05 are imagining these are anything, but like the models are for the most part, not changing.
    0:52:12 If I were to offer a theory, I think it actually relates to one of the things I said before,
    0:52:19 which is that models have many are very complex and have many aspects to them. And so often, you
    0:52:25 know, if I, if I, if I ask the model a question, you know, if I’m like, if I’m like, do task acts
    0:52:31 versus can you do task acts, the model might respond in different ways. And, and so there are
    0:52:36 all kinds of subtle things that you can change about the way you interact with the model that
    0:52:42 can give you very different results. To be clear, this, this itself is like a failing by, by us and
    0:52:47 by the other model providers, that, that the models are just, just often sensitive to like
    0:52:52 small, small changes in wording. It’s yet another way in which the science of how these models work
    0:52:57 is very poorly developed. And, and so, you know, if I go to sleep one night and I was like talking
    0:53:02 the model in a certain way, and I like slightly change the phrasing of how I talk to the model,
    0:53:06 you know, I could, I could get different results. So that’s, that’s one possible way.
    0:53:11 The other thing is, man, it’s just hard to quantify this stuff. It’s hard to quantify this stuff.
    0:53:16 I think people are very excited by new models when they come out. And then as time goes on, they,
    0:53:20 they become very aware of the, they become very aware of the limitations. So that may be another
    0:53:24 effect, but that’s, that’s all a very long-rended way of saying, for the most part, with some
    0:53:30 fairly narrow exceptions, the models are not changing. I think there is a psychological effect.
    0:53:34 You just start getting used to it. The baseline raises, like, when people have first gotten Wi-Fi
    0:53:40 on airplanes, it’s like, amazing. It’s like, amazing. Yeah. And then, and then you can’t get this thing
    0:53:45 to work. This is such a piece of crap. Exactly. So it’s easy to have the conspiracy theory of
    0:53:50 they’re making Wi-Fi slower and slower. This is probably something I’ll talk to Amanda much more
    0:53:57 about, but another Reddit question. “When will Claude stop trying to be my puritanical grandmother
    0:54:03 imposing its moral worldview on me as a paying customer? And also, what is the psychology behind
    0:54:10 making Claude overly apologetic?” So this kind of reports about the experience, a different angle,
    0:54:13 and the frustration. It has to do with the character. Yeah. So a couple points on this first.
    0:54:21 One is things that people say on Reddit and Twitter or X or whatever it is, there’s actually a huge
    0:54:26 distribution shift between the stuff that people complain loudly about on social media and what
    0:54:33 actually kind of statistically users care about and that drives people to use the models. People
    0:54:40 are frustrated with things like the model not writing out all the code or the model just not
    0:54:45 being as good at code as it could be, even though it’s the best model in the world on code. I think
    0:54:54 the majority of things are about that, but certainly a kind of vocal minority are kind of
    0:54:59 raised these concerns, right? Are frustrated by the model, refusing things that it shouldn’t refuse
    0:55:06 or apologizing too much or just having these kind of annoying verbal ticks. The second caveat,
    0:55:11 and I just want to say this super clearly because I think it’s like, some people don’t know it,
    0:55:17 others kind of know it, but forget it. It is very difficult to control across the board
    0:55:22 how the models behave. You cannot just reach in there and say, “Oh, I want the model to
    0:55:27 apologize less.” You can do that. You can include trading data that says, “Oh, the model should
    0:55:34 apologize less,” but then in some other situation, they end up being super rude or overconfident
    0:55:40 in a way that’s like misleading people. So there are all these trade-offs. For example,
    0:55:45 another thing is if there was a period during which models, ours, and I think others as well,
    0:55:49 were too verbose, right? They would repeat themselves. They would say too much.
    0:55:56 You can cut down on the verbosity by penalizing the models for just talking for too long. What
    0:56:01 happens when you do that, if you do it in a crude way, is when the models are coding, sometimes
    0:56:05 they’ll say, “Rest of the code goes here,” right? Because they’ve learned that that’s a way to
    0:56:10 economize and that they see it. And then, so that leads the model to be so-called lazy in coding,
    0:56:16 where they’re just like, “Ah, you can finish the rest of it.” It’s not because we want to save on
    0:56:23 compute or because the models are lazy during winter break or any of the other kind of conspiracy
    0:56:29 theories that have come up. It’s actually just very hard to control the behavior of the model,
    0:56:36 to steer the behavior of the model in all circumstances at once. There’s this whack-a-mole
    0:56:44 aspect where you push on one thing and these other things start to move as well that you may
    0:56:52 not even notice or measure. And so one of the reasons that I care so much about grand alignment
    0:56:57 of these AI systems in the future is actually, these systems are actually quite unpredictable.
    0:57:02 They’re actually quite hard to steer and control. And this version we’re seeing today
    0:57:11 of you make one thing better, it makes another thing worse. I think that’s like a present-day
    0:57:18 analog of future control problems in AI systems that we can start to study today, right? I think
    0:57:27 that that difficulty in steering the behavior and in making sure that if we push an AI system in one
    0:57:31 direction, it doesn’t push it in another direction in some other ways that we didn’t want,
    0:57:39 I think that’s kind of an early sign of things to come. And if we can do a good job of solving
    0:57:46 this problem, right? You ask the model to make and distribute smallpox and it says no,
    0:57:51 but it’s willing to help you in your graduate level virology class. How do we get
    0:57:56 both of those things at once? It’s hard. It’s very easy to go to one side or the other
    0:58:02 and it’s a multi-dimensional problem. And so I think these questions of shaping the model’s
    0:58:08 personality, I think they’re very hard. I think we haven’t done perfectly on them. I think we’ve
    0:58:15 actually done the best of all the AI companies, but still so far from perfect. And I think if we
    0:58:23 can get this right, if we can control the false positives and false negatives in this very kind
    0:58:28 of controlled present-day environment, we’ll be much better at doing it for the future when our
    0:58:34 worry is, will the models be super autonomous? Will they be able to make very dangerous things?
    0:58:39 Will they be able to autonomously build whole companies and are those companies aligned?
    0:58:45 So I think of this present task as both vexing, but also good practice for the future.
    0:58:53 What’s the current best way of gathering user feedback? Not anecdotal data, but just
    0:58:59 large-scale data about pain points or the opposite of pain points, positive things, so on. Is it
    0:59:04 internal testing? Is it a specific group testing, AB testing? What works?
    0:59:09 So typically we’ll have internal model bashings where all of anthropic. Anthropic is almost a
    0:59:15 thousand people. People just try and break the model. They try and interact with it various ways.
    0:59:22 We have a suite of evals for always the model refusing in ways that it couldn’t. I think we
    0:59:29 even had a certainly eval because, again, at one point, model had this problem where it had this
    0:59:34 annoying tick where it would respond to a wide range of questions by saying, certainly, I can
    0:59:40 help you with that. Certainly, I would be happy to do that. Certainly, this is correct. And so we
    0:59:46 had a certainly eval, which is how often does the model say certainly? But look, this is just a
    0:59:54 whack-a-mole. What if it switches from certainly to definitely? Every time we add a new eval,
    0:59:58 and we’re always evaluating for all the old things, so we have hundreds of these evaluations,
    1:00:03 but we find that there’s no substitute for human interacting with it. And so it’s very much like
    1:00:08 the ordinary product development process. We have hundreds of people within anthropic bash the
    1:00:16 model. Then we do external AB tests. Sometimes we’ll run tests with contractors. We pay contractors
    1:00:22 to interact with the model. So you put all of these things together, and it’s still not perfect.
    1:00:27 You still see behaviors that you don’t quite want to see. You still see the model like refusing
    1:00:34 things that it just doesn’t make sense to refuse. But I think trying to solve this challenge,
    1:00:40 trying to stop the model from doing genuinely bad things that know what everyone agrees it
    1:00:45 shouldn’t do. Everyone agrees that the model shouldn’t talk about
    1:00:52 child abuse material. Everyone agrees the model shouldn’t do that. But at the same time that
    1:00:59 it doesn’t refuse in these dumb and stupid ways. I think drawing that line as finely as possible,
    1:01:03 approaching perfectly is still a challenge, and we’re getting better at it every day,
    1:01:10 but there’s a lot to be solved. And again, I would point to that as an indicator of a challenge ahead
    1:01:17 in terms of steering much more powerful models. Do you think Claude 4.0 is ever coming out?
    1:01:23 I don’t want to commit to any naming scheme, because if I say here, we’re going to have Claude
    1:01:28 4.0 next year, and then we decide that we should start over because there’s a new type of model.
    1:01:34 I don’t want to commit to it. I would expect in a normal course of business that Claude 4.0 would
    1:01:39 come after Claude 3.5, but you never know in this wacky field, right?
    1:01:46 But the idea of scaling is continuing. Scaling is continuing. There will definitely
    1:01:51 be more powerful models coming from us than the models that exist today. That is certain,
    1:01:54 or if there aren’t, we’ve deeply failed as a company.
    1:01:59 Okay. Can you explain the responsible scaling policy and the AI safety level standards,
    1:02:05 ASL levels? As much as I’m excited about the benefits of these models, and we’ll talk about
    1:02:11 that if we talk about machines of loving grace, I’m worried about the risks and I continue to be
    1:02:16 worried about the risks. No one should think that machines of loving grace was me saying,
    1:02:21 “I’m no longer worried about the risks of these models.” I think they’re two sides of the same
    1:02:30 coin. The power of the models and their ability to solve all these problems in biology, neuroscience,
    1:02:36 economic development, governance and peace, large parts of the economy, those come with risks as
    1:02:43 well. With great power comes great responsibility. The two are paired. Things that are powerful can
    1:02:49 do good things and they can do bad things. I think of those risks as being in several different
    1:02:54 categories. Perhaps the two biggest risks that I think about, and that’s not to say that there
    1:03:00 aren’t risks today that are important, but when I think of the things that would happen
    1:03:06 on the grandest scale, one is what I call catastrophic misuse. These are misuse of the
    1:03:17 models in domains like cyber, bio, radiological, nuclear, things that could harm or even kill
    1:03:24 thousands, even millions of people if they really, really go wrong. These are the number one priority
    1:03:31 to prevent. Here, I would just make a simple observation, which is that the models,
    1:03:35 if I look today at people who have done really bad things in the world,
    1:03:43 I think actually humanity has been protected by the fact that the overlap between really smart,
    1:03:48 well-educated people and people who want to do really horrific things has generally been small.
    1:03:55 Let’s say I’m someone who, I have a PhD in this field. I have a well-paying job.
    1:04:02 There’s so much to lose. Why do I want to, even assuming I’m completely evil, which most people
    1:04:12 are not, why would such a person risk their life, risk their legacy, their reputation to do something
    1:04:17 like truly, truly evil? If we had a lot more people like that, the world would be a much more
    1:04:25 dangerous place. My worry is that by being a much more intelligent agent, AI could break that
    1:04:31 correlation. I do have serious worries about that. I believe we can prevent those worries,
    1:04:37 but I think as a counterpoint to Machines of Loving Grace, I want to say that there’s still
    1:04:43 serious risks. The second range of risks would be the autonomy risks, which is the idea that
    1:04:48 models might on their own, particularly as we give them more agency than they’ve had in the past,
    1:04:57 particularly as we give them supervision over wider tasks like writing whole code bases or
    1:05:03 someday even effectively operating entire companies, they’re on a long enough leash.
    1:05:09 Are they doing what we really want them to do? It’s very difficult to even understand in detail
    1:05:17 what they’re doing, let alone control it. Like I said, these early signs that it’s hard to perfectly
    1:05:21 draw the boundary between things the model should do and things the model shouldn’t do,
    1:05:27 if you go to one side, you get things that are annoying and useless and you go to the other side,
    1:05:31 you get other behaviors. If you fix one thing, it creates other problems. We’re getting better
    1:05:36 and better at solving this. I don’t think this is an unsolvable problem. I think this is a science,
    1:05:42 like the safety of airplanes or the safety of cars or the safety of drugs. I don’t
    1:05:46 think there’s any big thing we’re missing. I just think we need to get better at controlling
    1:05:52 these models. These are the two risks I’m worried about. Our responsible scaling plan, which all
    1:05:59 recognizes a very long-winded answer to your question, our responsible scaling plan is designed
    1:06:06 to address these two types of risks. Every time we develop a new model, we basically test it
    1:06:13 for its ability to do both of these bad things. If I were to back up a little bit,
    1:06:21 I think we have an interesting dilemma with AI systems, where they’re not yet powerful enough
    1:06:27 to present these catastrophes. I don’t know that they’ll ever prevent these catastrophes.
    1:06:32 It’s possible they won’t, but the case for worry, the case for risk is strong enough
    1:06:40 that we should act now. They’re getting better very, very fast. I testified in the Senate that
    1:06:44 we might have serious bio risks within two to three years. That was about a year ago.
    1:06:54 Things have preceded a pace. We have this thing where it’s surprisingly hard to address these
    1:06:58 risks because they’re not here today. They don’t exist. They’re like ghosts, but they’re coming
    1:07:03 at us so fast because the models are improving so fast. How do you deal with something that’s
    1:07:11 not here today, doesn’t exist, but is coming at us very fast? The solution we came up with
    1:07:18 for that, in collaboration with people like the organization Meter and Paul Cristiano,
    1:07:25 is, okay, what you need for that are you need tests to tell you when the risk is getting close.
    1:07:32 You need an early warning system. Every time we have a new model, we test it for its capability
    1:07:40 to do these CBRN tasks, as well as testing it for how capable it is of doing tasks autonomously
    1:07:46 on its own. In the latest version of our RSP, which we released in the last month or two,
    1:07:56 the way we test autonomy risks is the AI model’s ability to do aspects of AI research itself,
    1:08:03 which when the AI models can do AI research, they become truly autonomous. That threshold
    1:08:10 is important for a bunch of other ways. What do we then do with these tasks? The RSP basically
    1:08:17 develops what we’ve called an if-then structure, which is if the models pass a certain capability,
    1:08:23 then we impose a certain set of safety and security requirements on them. Today’s models are what’s
    1:08:34 called ASL-2. ASL-1 is for systems that manifestly don’t pose any risk of autonomy or misuse. For
    1:08:40 example, a chess plane bot, deep blue, would be ASL-1. It’s just manifestly the case that you
    1:08:46 can’t use deep blue for anything other than chess. It was just designed for chess. No one’s going to
    1:08:52 use it to conduct a masterful cyber attack or to run wild and take over the world.
    1:08:59 ASL-2 is today’s AI systems, where we’ve measured them and we think these systems are simply not
    1:09:09 smart enough to autonomously self-replicate or conduct a bunch of tasks and also not smart enough
    1:09:17 to provide meaningful information about CBRN risks and how to build CBRN weapons above and beyond
    1:09:24 what can be known from looking at Google. In fact, sometimes they do provide information, but not
    1:09:29 above and beyond a search engine, but not in a way that can be stitched together. Not in a way
    1:09:37 that end-to-end is dangerous enough. ASL-3 is going to be the point at which the models are
    1:09:44 helpful enough to enhance the capabilities of non-state actors. State actors can already do
    1:09:50 unfortunately to a high level of proficiency, a lot of these very dangerous and destructive
    1:09:57 things. The difference is that non-state actors are not capable of it. When we get to ASL-3,
    1:10:03 we’ll take special security precautions designed to be sufficient to prevent theft of the model
    1:10:10 by non-state actors and misuse of the model as it’s deployed will have to have enhanced filters
    1:10:18 targeted at these particular areas. Cyber-bionuclear. Cyber-bionuclear and model autonomy, which is
    1:10:26 less a misuse risk and more risk of the model doing bad things itself. ASL-4 getting to the point where
    1:10:33 these models could enhance the capability of an already knowledgeable state actor
    1:10:40 and/or become the main source of such a risk. If you wanted to engage in such a risk, the main way
    1:10:46 you would do it is through a model. Then I think ASL-4 on the autonomy side, it’s some amount of
    1:10:52 acceleration in AI research capabilities with an AI model. Then ASL-5 is where we would get to the
    1:11:00 models that are truly capable, that could exceed humanity in their ability to do any of these
    1:11:07 tasks. The point of the if-then structure commitment is basically to say, look,
    1:11:13 I don’t know, I’ve been working with these models for many years and I’ve been worried about risk
    1:11:18 for many years. It’s actually kind of dangerous to cry wolf. It’s actually kind of dangerous to say,
    1:11:26 this model is risky and people look at it and they say this is manifestly not dangerous.
    1:11:34 Again, the delicacy of the risk isn’t here today, but it’s coming at us fast. How do you deal with
    1:11:40 that? It’s really vexing to a risk planner to deal with it. This if-then structure basically says,
    1:11:46 look, we don’t want to antagonize a bunch of people. We don’t want to harm our kind of
    1:11:55 own ability to have a place in the conversation by imposing these very onerous burdens on models
    1:12:00 that are not dangerous today. The if-then, the trigger commitment is basically a way to deal
    1:12:05 with this. It says you clamp down hard when you can show the model is dangerous. Of course,
    1:12:14 what has to come with that is enough of a buffer threshold that you’re not at high risk of missing
    1:12:20 the danger. It’s not a perfect framework. We’ve had to change it. We came out with a new one
    1:12:25 just a few weeks ago and probably going forward, we might release new ones multiple times a year,
    1:12:29 because it’s hard to get these policies right, like technically, organizationally,
    1:12:35 from a research perspective, but that is the proposal if-then commitments and triggers in
    1:12:42 order to minimize burdens and false alarms now, but really react appropriately when the dangers
    1:12:47 are here. What do you think the timeline for ASL 3 is where several of the triggers are fired,
    1:12:52 and what do you think the timeline is for ASL 4? Yeah, so that is hotly debated within the company.
    1:13:02 We are working actively to prepare ASL 3 security measures as well as ASL 3 deployment
    1:13:07 measures. I’m not going to go into detail, but we’ve made a lot of progress on both, and we’re
    1:13:16 prepared to be, I think, ready quite soon. I would not be surprised at all if we hit ASL 3
    1:13:22 next year. There was some concern that we might even hit it this year. That’s still possible.
    1:13:27 That could still happen. It’s very hard to say, but I would be very, very surprised if it was
    1:13:34 like 2030. I think it’s much sooner than that. There’s protocols for detecting it, if-then,
    1:13:37 and then there’s protocols for how to respond to it. Yes.
    1:13:43 How difficult is the second, the latter? Yeah, I think for ASL 3, it’s primarily about
    1:13:50 security and about filters on the model relating to a very narrow set of areas
    1:13:55 when we deploy the model, because at ASL 3, the model isn’t autonomous yet.
    1:14:02 You don’t have to worry about the model itself behaving in a bad way even when it’s deployed
    1:14:10 internally. I think the ASL 3 measures are, I won’t say straightforward, they’re rigorous,
    1:14:17 but they’re easier to reason about. I think once we get to ASL 4, we start to have worries about
    1:14:23 the models being smart enough that they might sandbag tests. They might not tell the truth about
    1:14:29 tests. We had some results, came out about like sleeper agents, and there was a more recent paper
    1:14:36 about can the models mislead attempts to sandbag their own abilities, show them,
    1:14:44 present themselves as being less capable than they are. I think with ASL 4, there’s going to be an
    1:14:49 important component of using other things than just interacting with the models. For example,
    1:14:55 interpretability or hidden chains of thought, where you have to look inside the model and verify
    1:15:04 via some other mechanism that is not as easily corrupted as what the model says that the model
    1:15:11 indeed has some property. We’re still working on ASL 4. One of the properties of the RSP is that
    1:15:18 we don’t specify ASL 4 until we’ve hit ASL 3. I think that’s proven to be a wise decision,
    1:15:25 because even with ASL 3, again, it’s hard to know this stuff in detail. We want to take as much
    1:15:32 time as we can possibly take to get these things right. For ASL 3, the bad actor will be the humans.
    1:15:36 Humans, yes. There’s a little bit more. For ASL 4, it’s both, I think.
    1:15:42 It’s both. Deception, and that’s where mechanistic interpretability comes into play.
    1:15:47 Hopefully, the techniques used for that are not made accessible to the model.
    1:15:52 Yes. Of course, you can hook up the mechanistic interpretability to the model itself,
    1:16:00 but then you’ve kind of lost it as a reliable indicator of the model state. There are a bunch
    1:16:05 of exotic ways you can think of that it might also not be reliable. If the model gets smart
    1:16:12 enough that it can jump computers and read the code where you’re looking at its internal state,
    1:16:15 we’ve thought about some of those. I think they’re exotic enough. There are ways to render them
    1:16:21 unlikely. Generally, you want to preserve mechanistic interpretability as a kind of
    1:16:25 verification set or test set that’s separate from the training process of the model.
    1:16:29 See, I think as these models become better and better conversation and become smarter,
    1:16:34 social engineering becomes a threat, too, because they can start being very convincing to the
    1:16:40 engineers and site companies. Oh, yeah. It’s actually like, we’ve seen lots of examples
    1:16:45 of demagoguery in our life from humans, and there’s a concern that models could do that as well.
    1:16:50 One of the ways that cloud has been getting more and more powerful is it’s now able to do some
    1:16:57 agentic stuff. Computer use. There’s also an analysis within the sandbox of cloud.ai itself,
    1:17:03 but let’s talk about computer use. That seems to me super exciting that you can just give cloud
    1:17:10 a task and it takes a bunch of actions, figures it out, and its access to your computer through
    1:17:16 screenshots. Can you explain how that works and where that’s headed?
    1:17:21 Yeah, it’s actually relatively simple. Cloud has had for a long time since
    1:17:26 cloud three back in March, the ability to analyze images and respond to them with text.
    1:17:33 The only new thing we added is those images can be screenshots of a computer. In response,
    1:17:39 we trained the model to give a location on the screen where you can click and/or buttons on the
    1:17:45 keyboard you can press in order to take action. It turns out that with actually not all that much
    1:17:50 additional training, the models can get quite good at that task. It’s a good example of generalization.
    1:17:55 People sometimes say if you get to lower orbit, you’re halfway to anywhere because of how much
    1:17:59 it takes to escape the gravity well. If you have a strong pre-trained model, I feel like you’re
    1:18:08 halfway to anywhere in terms of the intelligence space. Actually, it didn’t take all that much
    1:18:15 to get cloud to do this. You can just set that in a loop, give the model a screenshot, tell
    1:18:19 it what to click on, give it the next screenshot, tell it what to click on. That turns into a full
    1:18:26 kind of almost 3D video interaction of the model. It’s able to do all of these tasks. We showed
    1:18:32 these demos where it’s able to fill out spreadsheets. It’s able to interact with a website.
    1:18:41 It’s able to open all kinds of programs, different operating systems, Windows, Linux, Mac.
    1:18:49 I think all of that is very exciting. I will say while in theory, there’s nothing you could do
    1:18:53 there that you couldn’t have done through just giving the model the API to drive the computer
    1:19:03 screen. This really lowers the barrier. There’s a lot of folks who aren’t in a position to
    1:19:08 interact with those APIs or it takes them a long time to do. The screen is just a universal
    1:19:13 interface that’s a lot easier to interact with. I expect over time this is going to lower a bunch
    1:19:20 of barriers. Honestly, the current model leaves a lot still to be desired. We were honest about
    1:19:26 that in the blog. It makes mistakes, it misclicks, and we were careful to warn people, “Hey, this
    1:19:31 thing isn’t… You can’t just leave this thing to run on your computer for minutes and minutes.
    1:19:36 You got to give this thing boundaries and guardrails.” I think that’s one of the reasons we
    1:19:43 released it first in an API form rather than this kind of just hand to the consumer and
    1:19:51 give it control of their computer. I definitely feel that it’s important to get these capabilities
    1:19:56 out there. As models get more powerful, we’re going to have to grapple with how do we use
    1:20:02 these capabilities safely? How do we prevent them from being abused? I think releasing the
    1:20:12 model while the capabilities are still limited is very helpful in terms of doing that. I think
    1:20:21 since it’s been released, a number of customers, I think Rapplet was maybe one of the quickest
    1:20:28 to deploy things, have made use of it in various ways. People have hooked up demos for Windows
    1:20:39 desktops, Macs, Linux machines. It’s been very exciting. I think, as with anything else,
    1:20:46 it comes with new exciting abilities. Then with those new exciting abilities, we have to think
    1:20:53 about how to make the model, say, reliable, do what humans want them to do. It’s the same story
    1:20:59 for everything, right? Same thing. It’s that same tension. The possibility of use cases here is just
    1:21:05 the range is incredible. How much to make it work really well in the future? How much do you have
    1:21:12 to specially go beyond what’s the pre-trained models doing? Do more post-training, RLHF,
    1:21:16 or supervised fine-tuning, or synthetic data just for the agent?
    1:21:20 Yeah. I think, speaking at a high level, it’s our intention to keep investing a lot
    1:21:29 in making the model better. I think we look at some of the benchmarks where previous models were
    1:21:33 like, “Oh, I could do it 6% of the time,” and now our model would do it 14% or 22% of the time.
    1:21:39 We want to get up to the human-level reliability of 80%, 90% just like anywhere else. We’re on the
    1:21:43 same curve that we were on with SWE Bench, where I think I would guess a year from now the models
    1:21:48 can do this very, very reliably, but you’ve got to start somewhere. You think it’s possible to get
    1:21:53 to the human level? 90% basically doing the same thing you’re doing now, or has it to be
    1:22:04 special for computer use? It depends what you mean by special in general. I generally think
    1:22:08 the same kinds of techniques that we’ve been using to train the current model,
    1:22:11 I expect that doubling down on those techniques in the same way that we have
    1:22:21 for code, for models in general, for image input, for voice, I expect those same techniques will
    1:22:26 scale here as they have everywhere else. But this is giving the power of action
    1:22:32 to Claude, and so you could do a lot of really powerful things, but you could do a lot of damage
    1:22:38 also. Yeah, no, and we’ve been very aware of that. Look, my view actually is computer use
    1:22:45 isn’t a fundamentally new capability like the CBRN or autonomy capabilities are. It’s more like it
    1:22:52 kind of opens the aperture for the model to use and apply its existing abilities. So the way we
    1:23:00 think about it going back to our RSP is nothing that this model is doing inherently increases
    1:23:08 the risk from an RSP perspective. But as the models get more powerful, having this capability
    1:23:17 may make it scarier once it has the cognitive capability to do something at the ASL3 and
    1:23:25 ASL4 level. This may be the thing that kind of unbounds it from doing so. So going forward,
    1:23:29 certainly this modality of interaction is something we have tested for and that we will
    1:23:34 continue to test for an RSP going forward. I think it’s probably better to have to learn
    1:23:40 and explore this capability before the model is super capable. Yeah, there’s a lot of interesting
    1:23:44 attacks like prompt injection, because now you’ve widened the aperture so you can prompt inject
    1:23:50 through stuff on screen. So if this becomes more and more useful, then there’s more and more benefit
    1:23:56 to inject stuff into the model. If it goes to a certain webpage, it could be harmless stuff like
    1:24:00 advertisements or it could be like harmful stuff, right? Yeah, I mean, we’ve thought a lot about
    1:24:07 things like spam, CAPTCHA, you know, mass camp. There’s all, you know, every, every, like, one
    1:24:12 secret I’ll tell you, if you’ve invented a new technology, not necessarily the biggest misuse,
    1:24:20 but the first misuse you’ll see, scams, just petty scams. Like, it’s like a thing as old,
    1:24:26 people scamming each other. It’s this thing as old as time. And it’s just every time you
    1:24:33 got to deal with it. It’s almost like silly to say, but it’s true, sort of bots and spam in general
    1:24:38 is a thing as it gets more and more intelligent. Yeah. It’s a harder, harder fight. Like I said,
    1:24:43 like there are a lot of petty criminals in the world. And, you know, it’s like every new technology
    1:24:48 is like a new way for petty criminals to do something, you know, something stupid and malicious.
    1:24:55 Is there any ideas about sandboxing it? Like how difficult is the sandboxing task?
    1:24:59 Yeah, we sandbox during training. So for example, during training, we didn’t expose the model to
    1:25:04 the internet. I think that’s probably a bad idea during training, because, you know, the model
    1:25:08 can be changing its policy, it can be changing what it’s doing, and it’s having an effect in the
    1:25:15 real world. You know, in terms of actually deploying the model, right, it kind of depends
    1:25:18 on the application. Like, you know, sometimes you want the model to do something in the real
    1:25:24 world. But of course, you can always put guardrails on the outside, right? You can say,
    1:25:28 okay, well, you know, this model is not going to move data from my, you know,
    1:25:33 model is not going to move any files from my computer or my web server to anywhere else.
    1:25:39 Now, when you talk about sandboxing, again, when we get to ASL4, none of these precautions
    1:25:44 are going to make sense there, right? Where when you talk about ASL4, you’re then the model
    1:25:50 is being kind of, you know, there’s a theoretical worry the model could be smart enough to break
    1:25:55 it to kind of break out of any box. And so there we need to think about mechanistic interpretability
    1:25:59 about, you know, if we’re if we’re going to have a sandbox, it would need to be a mathematically
    1:26:04 provable sandbox, you know, that’s that’s a whole different world than what we’re dealing with with
    1:26:13 the models today. Yeah, the science of building a box from which ASL4 AI system cannot escape.
    1:26:17 I think it’s probably not the right approach. I think the right approach,
    1:26:21 instead of having something, you know, unaligned that that, like, you’re trying to prevent it
    1:26:26 from escaping, I think it’s it’s better to just design the model the right way or have a loop
    1:26:30 where you, you know, you look inside, you look inside the model, and you’re able to verify
    1:26:34 properties. And that gives you an opportunity to like iterate and actually get it right.
    1:26:40 I think I think containing containing bad models is much worse solution than having good models.
    1:26:46 Let me ask you about regulation. What’s the role of regulation in keeping AI safe?
    1:26:52 So for example, can you describe California AI regulation bill SB 1047 that was ultimately
    1:26:57 vetoed by the governor? What are the pros and cons of this bill? Yeah, we ended up making some
    1:27:02 suggestions to the bill. And then some of those were adopted. And, you know, we felt, I think,
    1:27:08 I think quite positively, quite positively about about the bill by the end of that.
    1:27:15 It did still have some downsides. And, you know, of course, of course, it got vetoed.
    1:27:21 I think at a high level, I think some of the key ideas behind the bill are, you know, I would say
    1:27:26 similar to ideas behind our RSPs. And I think it’s very important that some jurisdiction,
    1:27:31 whether it’s California or the federal government and or other other countries and other states,
    1:27:37 passes some regulation like this. And I can talk through why I think that’s so important.
    1:27:43 So I feel good about our RSP. It’s not perfect. It needs to be iterated on a lot. But it’s been a
    1:27:49 good forcing function for getting the company to take these risks seriously, to put them into
    1:27:55 product planning, to really make them a central part of work at Anthropic and to make sure that
    1:28:00 all the 1000 people and it’s almost 1000 people now at Anthropic understand that this is one of the
    1:28:08 highest priorities of the company, if not the highest priority. But one, there are still some
    1:28:15 companies that don’t have RSP-like mechanisms like OpenAI, Google did adopt these mechanisms a
    1:28:22 couple months after Anthropic did. But there are other companies out there that don’t have these
    1:28:29 mechanisms at all. And so if some companies adopt these mechanisms and others don’t, it’s really
    1:28:35 going to create a situation where some of these dangers have the property that it doesn’t matter
    1:28:40 if three out of five of the companies are being safe. If the other two are being unsafe, it creates
    1:28:45 this negative externality. And I think the lack of uniformity is not fair to those of us who have
    1:28:50 put a lot of effort into being very thoughtful about these procedures. The second thing is,
    1:28:56 I don’t think you can trust these companies to adhere to these voluntary plans in their own.
    1:29:02 I like to think that Anthropic will. We do everything we can that we will. Our RSP is
    1:29:12 checked by our long-term benefit trust. So we do everything we can to adhere to our own RSP.
    1:29:18 But you hear lots of things about various companies saying, oh, they said they would
    1:29:22 give this much compute and they didn’t. They said they would do this thing and they didn’t.
    1:29:29 I don’t think it makes sense to litigate particular things that companies have done,
    1:29:34 but I think this broad principle that if there’s nothing watching over them,
    1:29:38 there’s nothing watching over us as an industry, there’s no guarantee that we’ll do the right
    1:29:44 thing and the stakes are very high. And so I think it’s important to have a uniform standard
    1:29:52 that everyone follows and to make sure that simply that the industry does what a majority
    1:29:57 of the industry has already said is important and has already said that they definitely will do.
    1:30:04 I think there’s a class of people who are against regulation on principle. I understand
    1:30:09 where that comes from. If you go to Europe and you see something like GDPR, you see some of the
    1:30:16 other stuff that they’ve done, some of it’s good, but some of it is really unnecessarily
    1:30:21 burdensome. And I think it’s fair to say really has slowed innovation. And so I understand
    1:30:26 where people are coming from on priors. I understand why people start from that,
    1:30:33 start from that position. But again, I think AI is different. If we go to the very serious risks
    1:30:44 of autonomy and misuse that I talked about just a few minutes ago, I think that those are unusual
    1:30:50 and they weren’t an unusually strong response. And so I think it’s very important. Again,
    1:30:58 we need something that everyone can get behind. I think one of the issues with SB 1047,
    1:31:06 especially the original version of it, was it had a bunch of the structure of RSPs,
    1:31:13 but it also had a bunch of stuff that was either clunky or that just would have created
    1:31:18 a bunch of burdens, a bunch of hassle, and might even have missed the target in terms of
    1:31:26 addressing the risks. You don’t really hear about it on Twitter. People are cheering for any
    1:31:31 regulation. And then the folks who are against make up these often quite intellectually dishonest
    1:31:38 arguments about how it’ll make us move away from California. Bill doesn’t apply if you’re
    1:31:42 headquartered in California. Bill only applies if you do business in California. Or that it would
    1:31:48 damage the open source ecosystem or that it would cause all of these things.
    1:31:55 I think those were mostly nonsense, but there are better arguments against regulation. There’s
    1:32:01 one guy, Dean Ball, who’s really, I think, a very scholarly analyst who looks at what
    1:32:07 happens when a regulation is put in place in ways that they can get a life of their own
    1:32:11 or how they can be poorly designed. And so our interest has always been,
    1:32:17 we do think there should be regulation in this space, but we want to be an actor who makes
    1:32:24 sure that regulation is something that’s surgical, that’s targeted at the serious risks,
    1:32:29 and is something people can actually comply with. Because something I think the advocates of
    1:32:37 regulation don’t understand as well as they could, is if we get something in place that’s
    1:32:43 poorly targeted, that wastes a bunch of people’s time. What’s going to happen is people are going
    1:32:51 to say, “See, these safety risks, this is nonsense. I just had to hire 10 lawyers
    1:32:56 to fill out all these forms. I had to run all these tests for something that was clearly not
    1:33:02 dangerous.” And after six months of that, there will be a groundswell and we’ll end up with a
    1:33:09 durable consensus against regulation. And so I think the worst enemy of those who want real
    1:33:15 accountability is badly designed regulation. We need to actually get it right. And this is,
    1:33:20 if there’s one thing I could say to the advocates, it would be that I want them to understand this
    1:33:24 dynamic better. And we need to be really careful and we need to talk to people who actually have
    1:33:31 experience seeing how regulations play out in practice. And the people who have seen that
    1:33:36 understand to be very careful. If this was some lesser issue, I might be against regulation at
    1:33:44 all. But what I want the opponents to understand is that the underlying issues are actually serious.
    1:33:51 They’re not something that I or the other companies are just making up because of regulatory
    1:33:59 capture. They’re not sci-fi fantasies. They’re not any of these things. Every time we have a
    1:34:04 new model, every few months, we measure the behavior of these models. And they’re getting
    1:34:08 better and better at these concerning tasks, just as they are getting better and better at
    1:34:18 good, valuable, economically useful tasks. And so I would just love it if some of the
    1:34:26 former, I think SB 1047 was very polarizing. I would love it if some of the most reasonable
    1:34:34 opponents and some of the most reasonable proponents would sit down together. And I think
    1:34:43 the different AI companies, Anthropic was the only AI company that felt positively in a very
    1:34:49 detailed way. I think Elon tweeted briefly something positive. But some of the big ones,
    1:34:55 like Google, OpenAI, Meta, Microsoft, were pretty staunchly against. So I would really
    1:35:01 like is if some of the key stakeholders, some of the most thoughtful proponents and some of the
    1:35:07 most thoughtful opponents would sit down and say, how do we solve this problem in a way that the
    1:35:17 proponents feel brings a real reduction in risk and that the opponents feel that it is not hampering
    1:35:25 the industry or hampering innovation any more necessary than it needs to. And I think for
    1:35:31 whatever reason that things got too polarized and those two groups didn’t get to sit down in
    1:35:37 the way that they should. And I feel urgency. I really think we need to do something in 2025.
    1:35:44 If we get to the end of 2025 and we’ve still done nothing about this, then I’m going to be worried.
    1:35:50 I’m not worried yet because, again, the risks aren’t here yet. But I think time is running short.
    1:35:55 And come up with something surgical, like you said. Yeah, exactly. And we need to get away
    1:36:06 from this intense pro-safety versus intense anti-regulatory rhetoric. It’s turned into these
    1:36:09 flame wars on Twitter. And nothing good is going to come of that.
    1:36:14 So there’s a lot of curiosity about the different players in the game. One of the OGs is OpenAI.
    1:36:19 You’ve had several years of experience at OpenAI. What’s your story and history there?
    1:36:26 Yeah. So I was at OpenAI for roughly five years. For the last, I think it was a couple of years,
    1:36:32 you know, I was vice president of research there. Probably myself and Ilya Sootskiver were the ones
    1:36:40 who really kind of set the research direction around 2016 or 2017. I first started to really
    1:36:45 believe in or at least confirm my belief in the scaling hypothesis when Ilya famously said to me,
    1:36:49 “The thing you need to understand about these models is they just want to learn.
    1:36:54 The models just want to learn.” And again, sometimes there are these one sentences,
    1:37:00 these Zen cones that you hear them and you’re like, “Ah, that explains everything. That explains
    1:37:05 like a thousand things that I’ve seen.” And then I, you know, ever after I had this visualization
    1:37:10 in my head of like, you optimize the models in the right way, you point the models in the right way,
    1:37:14 they just want to learn, they just want to solve the problem regardless of what the problem is.
    1:37:17 So get out of their way, basically. Get out of their way. Yeah.
    1:37:20 Don’t impose your own ideas about how they should learn. And, you know, this was the
    1:37:25 same thing as Rich Sutton put out in The Bitter Lesson or Guern put out in the scaling hypothesis.
    1:37:31 You know, I think generally the dynamic was, you know, I got this kind of inspiration from
    1:37:38 Ilya and from others, folks like Alec Radford who did the original GPT-1,
    1:37:45 and then ran really hard with it. Me and my collaborators on GPT-2, GPT-3,
    1:37:51 RL from Human Feedback, which was an attempt to kind of deal with the early safety and durability,
    1:37:56 things like debate and amplification, heavy on interpretability. So again, the combination
    1:38:04 of safety plus scaling, probably 2018, 2019, 2020, those were kind of the years when
    1:38:12 myself and my collaborators, probably, you know, many of whom became co-founders of Anthropic,
    1:38:16 kind of really had a vision and drove the direction.
    1:38:19 Why did you leave? Why did you decide to leave?
    1:38:25 Yeah. So look, I’m going to put things this way. And I think it ties to the race to the top,
    1:38:31 right? Which is, you know, in my time at OpenAI, what I’d come to see as I’d come to appreciate
    1:38:35 the scaling hypothesis and as I’d come to appreciate kind of the importance of safety
    1:38:41 along with the scaling hypothesis, the first one I think OpenAI was getting on board with,
    1:38:49 the second one in a way had always been part of OpenAI’s messaging. But, you know, over many
    1:38:56 years of the time that I spent there, I think I had a particular vision of how we should handle
    1:39:01 these things, how we should be brought out in the world, the kind of principles that the organization
    1:39:07 should have. And look, I mean, there were like many, many discussions about like, you know,
    1:39:11 should the org do, should the company do this, should the company do that? Like,
    1:39:14 there’s a bunch of misinformation out there. People say like, we left because we didn’t
    1:39:19 like the deal with Microsoft. False. Although, you know, it was like a lot of discussion,
    1:39:23 a lot of questions about exactly how we do the deal with Microsoft. We left because we didn’t
    1:39:28 like commercialization. That’s not true. We built GPT-3, which was the model that was commercialized.
    1:39:34 I was involved in commercialization. It’s more, again, about how do you do it? Like,
    1:39:40 civilization is going down this path to very powerful AI. What’s the way to do it that is
    1:39:50 cautious, straightforward, honest, that builds trust in the organization and in individuals?
    1:39:55 How do we get from here to there? And how do we have a real vision for how to get it right?
    1:40:01 How can safety not just be something we say because it helps with recruiting? And, you know,
    1:40:07 I think at the end of the day, if you have a vision for that, forget about anyone else’s
    1:40:11 vision. I don’t want to talk about anyone else’s vision. If you have a vision for how to do it,
    1:40:15 you should go off and you should do that vision. It is incredibly unproductive
    1:40:20 to try and argue with someone else’s vision. You might think they’re not doing it the right way.
    1:40:24 You might think they’re dishonest. Who knows? Maybe you’re right. Maybe you’re not.
    1:40:30 But what you should do is you should take some people you trust and you should go off together
    1:40:34 and you should make your vision happen. And if your vision is compelling, if you can make it
    1:40:41 appeal to people, some, you know, some combination of ethically, you know, in the market, you know,
    1:40:48 if you can make a company that’s a place people want to join that, you know, engages in practices
    1:40:54 that people think are reasonable while managing to maintain its position in the ecosystem at the
    1:40:59 same time, if you do that, people will copy it. And the fact that you were doing it,
    1:41:04 especially the fact that you’re doing it better than they are, causes them to change their behavior
    1:41:09 in a much more compelling way than if they’re your boss and you’re arguing with them. I just,
    1:41:14 I don’t know how to be any more specific about it than that. But I think it’s generally very
    1:41:20 unproductive to try and get someone else’s vision to look like your vision. It’s much more productive
    1:41:26 to go off and do a clean experiment and say, “This is our vision. This is how we’re going to do
    1:41:33 things.” Your choice is you can, you can ignore us, you can reject what we’re doing, or you can,
    1:41:38 you can start to become more like us. An imitation is the sincerest form of flattery.
    1:41:44 And, you know, that plays out in the behavior of customers that pays out in the behavior of the
    1:41:50 public. That plays out in the behavior of where people choose to work. And again, again, at the
    1:41:57 end, it’s not about one company winning or another company winning if we or another company are
    1:42:04 engaging in some practice that, you know, people find genuinely appealing. And I want it to be in
    1:42:09 substance, not just in appearance. And, you know, I think researchers are sophisticated and they
    1:42:16 look at substance. And then other companies start copying that practice and they win because they
    1:42:21 copied that practice. That’s great. That’s success. That’s like the race to the top. It doesn’t
    1:42:26 matter who wins in the end, as long as everyone is copying everyone else’s good practices, right?
    1:42:29 One way I think of it is like, the thing we’re all afraid of is the race to the bottom, right?
    1:42:34 And the race to the bottom doesn’t matter who wins because we all lose, right? Like, you know,
    1:42:39 in the most extreme world, we make this autonomous AI that, you know, the robots enslave us or whatever,
    1:42:45 right? I mean, that’s half joking, but, you know, that is the most extreme thing that could happen.
    1:42:51 Then it doesn’t matter which company was ahead. If instead you create a race to the top where
    1:42:58 people are competing to engage in good practices, then, you know, at the end of the day, you know,
    1:43:03 it doesn’t matter who ends up winning. It doesn’t even matter who started the race to the top. The
    1:43:08 point isn’t to be virtuous. The point is to get the system into a better equilibrium than it was
    1:43:13 before. And individual companies can play some role in doing this. Individual companies can,
    1:43:19 you know, can help to start it, can help to accelerate it. And frankly, I think individuals
    1:43:24 at other companies have done this as well, right? The individuals that, when we put out an RSP,
    1:43:31 react by pushing harder to get something similar done, get something similar done at other companies.
    1:43:35 Sometimes other companies do something that’s like, we’re like, oh, it’s a good practice. We think,
    1:43:40 we think that’s good. We should adopt it too. The only difference is, you know, I think we are,
    1:43:45 we try to be more forward-leaning. We try and adopt more of these practices first
    1:43:49 and adopt them more quickly when others, when others invent them. But I think this dynamic
    1:43:55 is what we should be pointing at. And that I think, I think it abstracts away the question of,
    1:44:01 you know, which company’s winning, who trusts, who, I think all these, all these questions of drama
    1:44:08 are profoundly uninteresting. And the thing that matters is the ecosystem that we all operate in
    1:44:11 and how to make that ecosystem better, because that constrains all the players.
    1:44:16 And so Anthropoc is this kind of clean experiment built on a foundation of what
    1:44:22 concretely AISAT should look like. Look, I’m sure we’ve made plenty of mistakes along the way.
    1:44:27 The perfect organization doesn’t exist. It has to deal with the imperfection of
    1:44:31 a thousand employees. It has to deal with the imperfection of our leaders, including me.
    1:44:36 It has to deal with the imperfection of the people we’ve put to, you know, to oversee the
    1:44:43 imperfection of the leaders, like the board and the long-term benefit trust. It’s all a set of
    1:44:48 imperfect people trying to aim imperfectly at some ideal that will never perfectly be achieved.
    1:44:54 That’s what you sign up for. That’s what it will always be. But imperfect doesn’t mean you just
    1:45:00 give up. There’s better and there’s worse. And hopefully, hopefully, we can begin to build,
    1:45:06 we can do well enough that we can begin to build some practices that the whole industry engages in.
    1:45:10 And then, you know, my guess is that multiple of these companies will be successful.
    1:45:14 Anthropoc will be successful. These other companies, like ones I’ve been at in the past,
    1:45:19 will also be successful. And some will be more successful than others. That’s less important
    1:45:23 than, again, that we align the incentives of the industry. And that happens partly through
    1:45:30 the race to the top, partly through things like RSP, partly through, again, selected surgical regulation.
    1:45:37 You said talent density beats talent mass. So can you explain that? Can you expand on that?
    1:45:43 Can you just talk about what it takes to build a great team of AI researchers and engineers?
    1:45:48 This is one of these statements that’s like more true every month. I see this statement as more
    1:45:53 true than I did the month before. So if I were to do a thought experiment, let’s say you have
    1:45:59 a team of 100 people that are super smart, motivated and aligned with the mission,
    1:46:05 and that’s your company. Or you can have a team of 1000 people where 200 people are super smart,
    1:46:12 super aligned with the mission. And then like 800 people are, let’s just say you pick 800,
    1:46:19 like random big tech employees, which would you rather have? The talent mass is greater in the
    1:46:26 group of 1000 people. You have even a larger number of incredibly talented, incredibly aligned,
    1:46:36 incredibly smart people. But the issue is just that if every time someone super talented looks
    1:46:41 around, they see someone else super talented and super dedicated, that sets the tone for everything.
    1:46:48 Everyone is super inspired to work at the same place. Everyone trusts everyone else. If you have
    1:46:55 1000 or 10,000 people and things have really regressed, you are not able to do selection
    1:46:59 and you’re choosing random people. What happens is then you need to put a lot of processes and a
    1:47:06 lot of guardrails in place. Just because people don’t fully trust each other, you have to adjudicate
    1:47:12 political battles, like there are so many things that slow down your ability to operate. And so
    1:47:18 we’re nearly 1000 people and we’ve tried to make it so that as large a fraction of those 1000 people
    1:47:26 as possible are super talented, super skilled. It’s one of the reasons we’ve slowed down hiring
    1:47:32 a lot in the last few months. We grew from 300 to 800, I believe, I think, in the first seven,
    1:47:36 eight months of the year. And now we’ve slowed down. We’re at last three months. We went from
    1:47:42 800 to 900, 950, something like that. Don’t quote me on the exact numbers. But I think there’s an
    1:47:49 inflection point around 1000 and we want to be much more careful how we grow. Early on, and now
    1:47:55 as well, we’ve hired a lot of physicists. Theoretical physicists can learn things really fast.
    1:48:02 Even more recently, as we’ve continued to hire that, we’ve really had a high bar for,
    1:48:07 on both the research side and the software engineering side, have hired a lot of senior
    1:48:12 people, including folks who used to be at other companies in this space. And we’ve just continued
    1:48:19 to be very selective. It’s very easy to go from 100 to 1000 and 1000 to 10,000
    1:48:25 without paying attention to making sure everyone has a unified purpose. It’s so powerful. If your
    1:48:31 company consists of a lot of different fiefdoms that all want to do their own thing, that are all
    1:48:36 optimizing for their own thing, it’s very hard to get anything done. But if everyone sees the
    1:48:42 broader purpose of the company, if there’s trust and there’s dedication to doing the right thing,
    1:48:47 that is a superpower. That in itself, I think, can overcome almost every other disadvantage.
    1:48:51 And, you know, it’s the Steve Jobs, eight players. Eight players want to look around and see other
    1:48:56 eight players is another way of saying, I don’t know what that is about human nature, but it is
    1:49:02 demotivating to see people who are not obsessively driving towards a singular mission. And it is,
    1:49:08 on the flip side of that, super motivating to see that. It’s interesting. What’s it take
    1:49:13 to be a great AI researcher or engineer from everything you’ve seen from working with so
    1:49:21 many amazing people? Yeah. I think the number one quality, especially on the research side,
    1:49:26 but really both, is open-mindedness. Sounds easy to be open-minded, right? You’re just like, oh,
    1:49:32 I’m open to anything. But, you know, if I think about my own early history in the scaling hypothesis,
    1:49:40 I was seeing the same data others were seeing. I don’t think I was like a better programmer or
    1:49:44 better at coming up with research ideas than any of the hundreds of people that I worked with.
    1:49:51 In some ways, in some ways, I was worse. You know, like, I’ve never like, you know, precise
    1:49:55 programming of like, you know, finding the bug, writing the GPU kernels. Like,
    1:49:58 I could point you to 100 people here who are better, who are better at that than I am.
    1:50:07 But the thing that I think I did have that was different was that I was just willing to look
    1:50:12 at something with new eyes, right? People said, oh, you know, we don’t have the right algorithms yet.
    1:50:18 We haven’t come up with the right way to do things. And I was just like, oh, I don’t know. Like,
    1:50:24 you know, this neural net has like 30 million parameters. Like, what if we gave it 50 million
    1:50:30 instead? Like, let’s plot some graphs like that, that basic scientific mindset of like, oh, man,
    1:50:36 like, I just, I just like, I, you know, I see some variable that I could change. Like, what happens
    1:50:41 when it changes? Like, let’s, let’s try these different things and like create a graph for even
    1:50:45 the, this was like the simplest thing in the world, right? Change the number of, you know, this wasn’t
    1:50:51 like PhD level experimental design. This was like, this was like, simple and stupid. Like,
    1:50:56 anyone could have done this if you, if you just hold them that it was important. It’s also not
    1:51:00 hard to understand. You didn’t need to be brilliant to come up with this. But you put the two things
    1:51:06 together and, you know, some tiny number of people, some single digit number of people have, have
    1:51:11 driven forward the whole field by realizing this. And, and it’s, you know, it’s often like that if
    1:51:16 you look back at the discovery, you know, the discoveries in history, they’re often like that.
    1:51:22 And so this, this open-mindedness and this willingness to see with new eyes that often comes
    1:51:27 from being newer to the field, often experience is a disadvantage for this. That is the most
    1:51:31 important thing. It’s very hard to look for and test for. But I think, I think it’s the most
    1:51:36 important thing because when you, when you find something, some really new way of thinking,
    1:51:40 thinking about things, when you have the initiative to do that, it’s absolutely transformative.
    1:51:44 And also be able to do kind of rapid experimentation. And in the face of that,
    1:51:48 be open-minded and curious and looking at the data from just these fresh eyes and see what is
    1:51:53 that sexually saying. That applies in mechanistic interpretability. It’s another example of this.
    1:51:59 Like some of the early work in mechanistic interpretability, so simple. It’s just no
    1:52:03 one thought to care about this question before. You said what it takes to be a great AI researcher.
    1:52:08 Can we rewind the clock back? What, what advice would you give to people interested in AI?
    1:52:11 They’re young, looking forward. How can I make an impact on the world?
    1:52:15 I think my number one piece of advice is to just start playing with the models.
    1:52:22 This was actually, I worry a little, this seems like obvious advice now. I think three years ago,
    1:52:27 it wasn’t obvious. And people started by, oh, let me read the latest reinforcement learning paper.
    1:52:31 Let me, you know, let me, let me kind of, no, I mean, that was really the, that was really the,
    1:52:36 the, the, and I mean, you should do that as well. But now, you know, with wider availability of
    1:52:42 models and APIs, people are doing this more. But I think, I think just experiential knowledge.
    1:52:49 These models are new artifacts that no one really understands. And so getting experience
    1:52:54 playing with them, I would also say, again, in line with the like, do something new, think in
    1:52:59 some new direction, like, there are all these things that haven’t been explored. Like, for
    1:53:04 example, mechanistic interpretability is still very new. It’s probably better to work on that
    1:53:08 than it is to work on new model architectures. Because it’s, you know, it’s more popular than
    1:53:12 it was before, there are probably like 100 people working on it, but there aren’t like 10,000 people
    1:53:19 working on it. And it’s, it’s just this, this fertile area for study, like, like, you know,
    1:53:25 it’s, there’s, there’s so much like low hanging fruit, you can just walk by and, you know, you
    1:53:30 can just walk by and you can pick things. And, and the only reason, for whatever reason, people
    1:53:36 aren’t, people aren’t interested in it enough. I think there are some things around long,
    1:53:42 long horizon learning and long horizon tasks, where there’s a lot to be done. I think evaluations
    1:53:46 are still, we’re still very early in our ability to study evaluations, particularly for dynamic
    1:53:53 systems acting in the world. I think there’s some stuff around multi agent. Skate where the puck is
    1:53:58 going is my, is my advice. And you don’t have to be brilliant to think of it, like, all the things
    1:54:03 that are going to be exciting in five years, like, and people even mentioned them as like,
    1:54:08 you know, conventional wisdom, but like, it’s, it’s just somehow there’s this barrier that people don’t,
    1:54:12 people don’t double down as much as they could, or they’re afraid to do something that’s not
    1:54:17 the popular thing. I don’t know why it happens, but like getting over that barrier is the,
    1:54:22 that’s my number one piece of advice. Let’s talk if it could a bit about post training.
    1:54:29 Yeah. So it seems that the modern post training recipe has a little bit of everything. So
    1:54:38 supervised fine tuning, RLHF, the, the, the constitutional AI with RL, AIF. Best acronym.
    1:54:46 It’s again that naming thing. And then synthetic data seems like a lot of synthetic data, or at
    1:54:50 least trying to figure out ways to have high quality synthetic data. So what’s the, if this is a
    1:54:58 secret sauce that makes anthropic claw so incredible? What, how much of the magic is in the pre-training?
    1:55:02 How much is in the post training? Yeah. I mean, so first of all, we’re not perfectly able to measure
    1:55:08 that ourselves. You know, when you see some, some great character ability, sometimes it’s hard to
    1:55:13 tell whether it came from pre-training or post training. We’ve developed ways to try and distinguish
    1:55:17 between those two, but they’re not perfect. You know, the second thing I would say is, you know,
    1:55:21 it’s when there isn’t advantage. And I think we’ve been pretty good at in general, in general at RL,
    1:55:25 perhaps, perhaps the best, although, although I don’t know, because I don’t see what goes on
    1:55:32 inside other companies. Usually it isn’t, oh my God, we have the secret magic method that others
    1:55:37 don’t have, right? Usually it’s like, well, you know, we got better at the infrastructure,
    1:55:41 so we could run it for longer, or, you know, we were able to get higher quality data, or we were
    1:55:46 able to filter our data better, or we were able to, you know, combine these methods in practice.
    1:55:49 It’s usually some boring matter of, matter of kind of
    1:55:57 practice and tradecraft. So, you know, when I think about how to do something special in terms
    1:56:03 of how we train these models, both pre-training, but even more so post-training, you know, I really
    1:56:08 think of it a little more, again, as like designing airplanes or cars. Like, you know, it’s not just
    1:56:12 like, oh man, I have the blueprint. Like, maybe that makes you make the next airplane, but like,
    1:56:18 there’s some, there’s some cultural tradecraft of how we think about the design process that I
    1:56:23 think is more important than, you know, than any particular gizmo we’re able to invent.
    1:56:28 Okay, well, let me ask you about specific techniques. So, first on RLHF, what do you think,
    1:56:33 just zooming out, intuition, almost philosophy, what do you think RLHF works so well?
    1:56:39 If I go back to, like, the scaling hypothesis, one of the ways to skate the scaling hypothesis
    1:56:46 is if you train for X and you throw enough compute at it, then you get X. And so, RLHF is good at
    1:56:52 doing what humans want the model to do, or at least to state it more precisely,
    1:56:56 doing what humans who look at the model for a brief period of time and consider different
    1:57:01 possible responses, what they prefer as the response, which is not perfect from both the
    1:57:07 safety and capabilities perspective, in that humans are often not able to perfectly identify
    1:57:10 what the model wants and what humans want in the moment, may not be what they want in the
    1:57:16 long term. So, there’s a lot of subtlety there, but the models are good at, you know, producing
    1:57:22 what the humans, in some shallow sense, want. And it actually turns out that you don’t even
    1:57:28 have to throw that much compute at it because of another thing, which is this thing about
    1:57:34 a strong pre-trained model being halfway to anywhere. So, once you have the pre-trained model,
    1:57:38 you have all the representations you need to get the model, to get the model where you want it to
    1:57:47 go. So, do you think RLHF makes the model smarter or just appear smarter to the humans?
    1:57:52 I don’t think it makes the model smarter. I don’t think it just makes the model appear smarter.
    1:57:58 It’s like RLHF like bridges the gap between the human and the model, right? I could have
    1:58:02 something really smart that like can’t communicate at all, right? We all know people like this.
    1:58:06 People who are really smart but, you know, you can’t understand what they’re saying.
    1:58:14 So, I think RLHF just bridges that gap. I think it’s not the only kind of RL we do.
    1:58:19 It’s not the only kind of RL that will happen in the future. I think RL has the potential to make
    1:58:24 models smarter, to make them reason better, to make them operate better, to make them develop
    1:58:30 new skills even. And perhaps that could be done, you know, even in some cases with human feedback.
    1:58:35 But the kind of RLHF we do today mostly doesn’t do that yet, although we’re very quickly
    1:58:40 starting to be able to. But it appears to sort of increase, if you look at the metric of helpfulness,
    1:58:47 it increases that. It also increases, what was this word in Leopold’s essay, unhobbling?
    1:58:51 Where basically the models are hobbled and then you do various trainings to them to unhobble them.
    1:58:57 So, you know, I like that word because it’s like a rare word. So, I think RLHF unhobbles the models
    1:59:02 in some ways. And then there are other ways where a model hasn’t yet been unhobbled and you know,
    1:59:08 needs to unhobble. If you can say, in terms of cost, is pre-training the most expensive thing or is
    1:59:14 post-training creep up to that? At the present moment, it is still the case that pre-training is
    1:59:18 the majority of the cost. I don’t know what to expect in the future, but I could certainly
    1:59:22 anticipate a future where post-training is the majority of the cost. In that future,
    1:59:27 you anticipate, would it be the humans or the AI that’s the costly thing for the post-training?
    1:59:34 I don’t think you can scale up humans enough to get high quality. Any kind of method that
    1:59:38 relies on humans and uses a large amount of compute, it’s going to have to rely on some
    1:59:45 scaled supervision method like, you know, debate or iterated amplification or something like that.
    1:59:52 So, on that super interesting set of ideas around constitutional, I can describe what it is
    1:59:58 as first detailed in December 2022 paper and beyond that. What is it?
    2:00:05 Yes. So, this was from two years ago. The basic idea is, so we describe what RLHF is. You have
    2:00:13 a model and it spits out, you know, like you just sample from it twice. It spits out two possible
    2:00:18 responses and you’re like human, which responses you like better or another variant of it is rate
    2:00:23 this response on a scale of 1 to 7. So, that’s hard because you need to scale up human interaction
    2:00:28 and it’s very implicit, right? I don’t have a sense of what I want the model to do. I just
    2:00:33 have a sense of like what this average of a thousand humans wants the model to do. So,
    2:00:41 two ideas. One is, could the AI system itself decide which response is better, right? Could
    2:00:46 you show the AI system these two responses and ask which response is better? And then second,
    2:00:51 well, what criterion should the AI use? And so, then there’s this idea, could you have a single
    2:00:56 document, a constitution, if you will, that says these are the principles the model should be using
    2:01:05 to respond. And the AI system reads those, it reads those principles as well as reading
    2:01:10 the environment and the response. And it says, well, how good did the AI model do? It’s basically a
    2:01:15 form of self-play. You’re kind of training the model against itself. And so, the AI gives the
    2:01:20 response and then you feed that back into what’s called the preference model, which in turn feeds
    2:01:26 the model to make it better. So, you have this triangle of like the AI, the preference model,
    2:01:30 and the improvement of the AI itself. And we should say that in the Constitution,
    2:01:36 the set of principles are like human interpretable. Yeah, it’s something both the human and the AI
    2:01:42 system can read. So, it has this nice kind of translatability or symmetry. In practice,
    2:01:48 we both use a model constitution and we use RLHF and we use some of these other methods.
    2:01:56 So, it’s turned into one tool in a toolkit that both reduces the need for RLHF and increases the
    2:02:02 value we get from using each data point of RLHF. It also interacts in interesting ways with kind
    2:02:10 of future reasoning type RL methods. So, it’s one tool in the toolkit, but I think it is a very
    2:02:14 important tool. Well, it’s a compelling one to us humans, you know, thinking about the founding
    2:02:21 fathers and the founding of the United States. The natural question is who and how do you think it
    2:02:26 gets to define the Constitution, the set of principles in the Constitution? Yeah, so I’ll
    2:02:31 give like a practical answer and a more abstract answer. I think the practical answer is like,
    2:02:37 look, in practice, models get used by all kinds of different like customers, right? And so,
    2:02:42 you can have this idea where, you know, the model can have specialized rules or principles,
    2:02:47 you know, we fine-tune versions of models implicitly. We’ve talked about doing it explicitly,
    2:02:54 having special principles that people can build into the models. So, from a practical perspective,
    2:02:58 the answer can be very different from different people, you know, customer service agent,
    2:03:02 you know, behaves very differently from a lawyer and obeys different principles.
    2:03:08 But I think at the base of it, there are specific principles that models, you know,
    2:03:13 have to obey. I think a lot of them are things that people would agree with. Everyone agrees that,
    2:03:18 you know, we don’t want models to present these CBRN risks. I think we can go a little
    2:03:24 further and agree with some basic principles of democracy and the rule of law. Beyond that,
    2:03:28 it gets, you know, very uncertain. And there our goal is generally for the models to be
    2:03:35 more neutral, to not espouse a particular point of view and, you know, more just be kind of like
    2:03:41 wise agents or advisors that will help you think things through and will, you know, present possible
    2:03:46 considerations, but, you know, don’t express, you know, stronger specific opinions.
    2:03:53 OpenAI released a model spec where it kind of clearly concretely defines some of the goals of
    2:04:00 the model and specific examples, like A, B, how the model should behave. Do you find that interesting?
    2:04:05 By the way, I should mention the, I believe the brilliant John Schumann was a part of that. He’s
    2:04:10 now an anthropic. Do you think this is a useful direction? Might anthropic release a model spec
    2:04:16 as well? Yeah. So I think that’s a pretty useful direction. Again, it has a lot in common with
    2:04:21 constitutional AI. So again, another example of like a race to the top, right? We have something
    2:04:26 that’s like we think, you know, a better and more responsible way of doing things. It’s also a
    2:04:32 competitive advantage. Then others kind of, you know, discover that it has advantages and then
    2:04:37 start to do that thing. We then no longer have the competitive advantage, but it’s good from the
    2:04:43 perspective that now everyone has adopted a positive practice that others were not adopting.
    2:04:47 And so our response to that as well looks like we need a new competitive advantage in order to
    2:04:52 keep driving this race upwards. So that’s how I generally feel about that. I also think every
    2:04:56 implementation of these things is different. So, you know, there were some things in the model
    2:05:02 spec that were not in constitutional AI. And so, you know, we can always adopt those things or,
    2:05:06 you know, at least learn from them. So again, I think this is an example of like the positive
    2:05:13 dynamic that I think we should all want the field to have. Let’s talk about the incredible
    2:05:18 essay, “Machines of Love and Grace.” I recommend everybody read it. It’s a long one.
    2:05:23 It is rather long. Yeah. It’s really refreshing to read concrete ideas about what a positive
    2:05:28 future looks like. And you took sort of a bold stance because like it’s very possible that you
    2:05:32 might be wrong on the dates or the specific applications. Oh, yeah. I’m fully expecting to,
    2:05:38 you know, to definitely be wrong about all the details. I might be just spectacularly wrong about
    2:05:44 the whole thing and people will, you know, will laugh at me for years. That’s just how the future
    2:05:51 works. So you provided a bunch of concrete positive impacts of AI and how, you know, exactly a
    2:05:55 super intelligent AI might accelerate the rate of breakthroughs and, for example, biology and
    2:06:03 chemistry that would then lead to things like we cure most cancers, prevent all infectious disease,
    2:06:08 double the human lifespan, and so on. So let’s talk about this essay first. Can you give a high
    2:06:16 level vision of this essay and what key takeaways that people have? Yeah. I have spent a lot of
    2:06:20 time and Anthropic has spent a lot of effort on like, you know, how do we address the risks of AI,
    2:06:25 right? How do we think about those risks? Like we’re trying to do a race to the top, you know,
    2:06:29 what that requires us to build all these capabilities and the capabilities are cool.
    2:06:36 But, you know, we’re like a big part of what we’re trying to do is like address the risks.
    2:06:41 And the justification for that is like, well, you know, all these positive things, you know,
    2:06:45 the market is this very healthy organism, right? It’s going to produce all the positive things.
    2:06:49 The risks, I don’t know, we might mitigate them, we might not. And so we can have more impact by
    2:06:57 trying to mitigate the risks. But I noticed that one flaw in that way of thinking, and it’s not
    2:07:01 a change in how seriously I take the risks, it’s maybe a change in how I talk about them,
    2:07:10 is that, you know, no matter how kind of logical or rational that line of reasoning
    2:07:17 that I just gave might be, if you kind of only talk about risks, your brain only thinks about
    2:07:22 risks. And so I think it’s actually very important to understand what if things do go well. And the
    2:07:26 whole reason we’re trying to prevent these risks is not because we’re afraid of technology, not
    2:07:33 because we want to slow it down. It’s because if we can get to the other side of these risks, right?
    2:07:40 If we can run the gauntlet successfully, to put it in stark terms, then on the other side of the
    2:07:44 gauntlet are all these great things. And these things are worth fighting for. And these things
    2:07:50 can really inspire people. And I think I imagine because, look, you have all these investors,
    2:07:56 all these VC’s, all these AI companies talking about all the positive benefits of AI. But as
    2:08:01 you point out, it’s weird. There’s actually a dearth of really getting specific about it.
    2:08:07 There’s a lot of like random people on Twitter like posting these kind of like gleaming cities
    2:08:13 and this just kind of like vibe of like, grind, accelerate harder, like kick out the diesel,
    2:08:18 you know, it’s just this very, this very like aggressive ideological. But then you’re like,
    2:08:26 well, what are you excited about? And so I figured that, you know, I think it would be
    2:08:33 interesting and valuable for someone who’s actually coming from the risk side to try and really
    2:08:42 make a try at explaining, explaining, explaining what the benefits are, both because I think it’s
    2:08:47 something we can all get behind. And I want people to understand, I want them to really understand
    2:08:55 that this isn’t, this isn’t doomers versus accelerationists. This is that if you have a
    2:09:00 true understanding of where things are going with AI, and maybe that’s the more important
    2:09:06 axis, AI is moving fast versus AI is not moving fast, then you really appreciate the benefits
    2:09:12 and you really, you want humanity or civilization to seize those benefits, but you also get very
    2:09:17 serious about anything that could derail them. So I think the starting point is to talk about what
    2:09:23 this powerful AI, which is the term you like to use, most of the world uses AGI, but you don’t
    2:09:29 like the term because it’s basically has too much baggage, it’s become meaningless. It’s like,
    2:09:34 we’re stuck with the terms. Maybe we’re stuck with the terms and my efforts to change them are
    2:09:40 futile. I’ll tell you what else I don’t, this is like a pointless semantic point, but I keep talking
    2:09:48 about it, so I’m just going to do it once more. I think it’s a little like, let’s say it was 1995
    2:09:54 and Moore’s law is making the computers faster. And for some reason, there had been this verbal
    2:09:59 tick that everyone was like, well, someday we’re going to have supercomputers. And supercomputers
    2:10:04 are going to be able to do all these things that once we have supercomputers, we’ll be able to sequence
    2:10:08 the genome, we’ll be able to do other things. And so one, it’s true, the computers are getting
    2:10:12 faster. And as they get faster, they’re going to be able to do all these great things. But there’s
    2:10:17 like, there’s no discrete point at which you had a supercomputer and previous computers were not to,
    2:10:21 like supercomputers, a term we use, but like, it’s a vague term to just describe like,
    2:10:26 computers that are faster than what we have today. There’s no point at which you pass the
    2:10:30 threshold and you’re like, oh my God, we’re doing a totally new type of computation and new. And so
    2:10:36 I feel that way about AGI, like, there’s just a smooth exponential. And like, if by AGI, you mean
    2:10:41 like, like AI is getting better and better. And like, gradually, it’s going to do more and more
    2:10:45 of what humans do until it’s going to be smarter than humans. And then it’s going to get smarter
    2:10:51 even from there. Then yes, I believe in AGI. But if AGI is some discrete or separate thing,
    2:10:55 which is the way people often talk about it, then it’s kind of a meaningless buzzword.
    2:11:01 Yeah, to me, it’s just sort of a platonic form of a powerful AI, exactly how you define it. I mean,
    2:11:08 you define it very nicely. So on the intelligence axis, it’s just on pure intelligence, it’s smarter
    2:11:13 than a Nobel Prize winner, as you describe, across most relevant disciplines. So okay,
    2:11:19 that’s just intelligence. So it’s both in creativity and be able to generate new ideas,
    2:11:24 all that kind of stuff in every discipline, Nobel Prize winner, okay, in their prime.
    2:11:31 It can use every modality, so this kind of self-explanatory, but just operate across
    2:11:38 all the modalities of the world. It can go off for many hours, days and weeks to do tasks,
    2:11:43 and do its own sort of detailed planning and only ask you help when it’s needed.
    2:11:48 It can use, this is actually kind of interesting. I think in the essay, you said,
    2:11:54 I mean, again, it’s a bet that it’s not going to be embodied, but it can control embodied tools.
    2:12:00 So it can control tools, robots, laboratory equipment. The resource used to train it can
    2:12:05 then be repurposed to run millions of copies of it. And each of those copies will be independent
    2:12:08 that can do their own independent work. So you can do the cloning of the intelligence system.
    2:12:12 Yeah, I mean, you might imagine from outside the field that there’s only one of these, right,
    2:12:17 that you made it, you’ve only made one, but the truth is that the scale-up is very quick.
    2:12:22 We do this today, we make a model, and then we deploy thousands, maybe tens of thousands of
    2:12:28 instances of it. I think by the time, certainly within two to three years, whether we have these
    2:12:32 super powerful AIs or not, clusters are going to get to the size where you’ll be able to deploy
    2:12:37 millions of these, and they’ll be faster than humans. And so if your picture is, oh, we’ll have
    2:12:42 one and it’ll take a while to make them, my point there was, no, actually you have millions of them
    2:12:49 right away. And in general, they can learn and act 10 to 100 times faster than humans.
    2:12:55 So that’s a really nice definition of powerful AI. Okay, so that, but you also write that clearly
    2:13:00 such an entity would be capable of solving very difficult problems very fast, but it is not
    2:13:06 trivial to figure out how fast two extreme positions both seem false to me. So the singularity is on
    2:13:11 the one extreme and the opposite on the other extreme. Can you describe each of the extremes?
    2:13:18 Yeah. So yeah, let’s describe the extreme. So one extreme would be, well, look,
    2:13:24 you know, if we look at kind of evolutionary history, like there was this big acceleration
    2:13:28 where, you know, for hundreds of thousands of years, we just had like, you know, single cell
    2:13:32 organisms, and then we had mammals, and then we had apes, and then that quickly turned to humans.
    2:13:37 Humans quickly built industrial civilization. And so this is going to keep speeding up. And
    2:13:42 there’s no ceiling at the human level. Once models get much, much smarter than humans,
    2:13:46 they’ll get really good at building the next models. And, you know, if you write down like
    2:13:51 a simple differential equation, like this is an exponential, and so what’s what’s going to happen
    2:13:55 is that models will build faster models, models will build faster models, and those models will
    2:14:00 build, you know, nanobots that can like take over the world and produce much more energy than you
    2:14:05 could produce otherwise. And so if you just kind of like solve this abstract differential equation,
    2:14:10 then like five days after we, you know, we build the first AI that’s more powerful than humans,
    2:14:15 then, then, you know, like the world will be filled with these AIs and every possible technology
    2:14:21 that could be invented like will be invented. I’m caricaturing this a little bit. But I, you know,
    2:14:29 I think that’s one extreme. And the reason that I think that’s not the case is that one, I think
    2:14:34 they just neglect like the laws of physics, like it’s only possible to do things so fast in the
    2:14:38 physical world, like some of those loops go through, you know, producing faster hardware,
    2:14:44 takes a long time to produce faster hardware, things take a long time. There’s this issue of
    2:14:50 complexity. Like, I think no matter how smart you are, like, you know, people talk about, oh,
    2:14:54 we can make models, the biological systems that’ll do everything, the biological systems.
    2:14:58 Look, I think computational modeling can do a lot. I did a lot of computational modeling when I
    2:15:05 worked in biology. But like, just there are a lot of things that you can’t predict how they’re,
    2:15:10 you know, they’re, they’re complex enough that like, just iterating, just running the experiment
    2:15:14 is going to beat any modeling, no matter how smart the system doing the modeling is.
    2:15:18 Oh, even if it’s not interacting with the physical world, just the modeling is going to be hard.
    2:15:21 Yeah, I think, well, the modeling is going to be hard and getting the model to,
    2:15:24 to, to, to match the physical world is going to be.
    2:15:27 All right. So he does have to interact with the physical world, verify it.
    2:15:30 But, but it’s just, you know, you just look at even the simplest problems. Like, I, you know,
    2:15:36 I think I talk about like, you know, the three body problem or simple chaotic prediction, like,
    2:15:41 you know, or, or like predicting the economy, it’s really hard to predict the economy two years
    2:15:45 out. Like maybe the case is like, you know, normal, you know, humans can predict what’s
    2:15:49 going to happen in the economy in the next quarter, or they can’t really do that.
    2:15:54 Maybe a, maybe a AI system that’s, you know, a zillion times smarter can only predict it
    2:15:58 out a year or something instead of, instead of, you know, you have these kind of exponential
    2:16:04 increase in computer intelligence for linear increase in, in, inability to predict.
    2:16:10 Same with, again, like, you know, biological molecules, molecules interacting, you don’t
    2:16:13 know what’s going to happen when you perturb a, when you perturb a complex system.
    2:16:18 You can find simple parts in it. If you’re smarter, you’re better at finding these simple parts.
    2:16:23 And then I think human institutions. Human institutions are just, are, are really difficult.
    2:16:29 Like it’s, you know, it’s, it’s been hard to get people. I won’t give specific examples,
    2:16:35 but it’s been hard to get people to adopt even the technologies that we’ve developed,
    2:16:39 even ones where the case for their efficacy is very, very strong.
    2:16:45 You know, people have concerns. They think things are conspiracy theories. Like it’s,
    2:16:49 it’s just been, it’s been very difficult. It’s also been very difficult to get,
    2:16:55 you know, very simple things through the regulatory system, right? I think, you know,
    2:17:00 and, you know, I don’t want to disparage anyone who, you know, you know, works in regulatory,
    2:17:04 regulatory systems of any technology that are hard trade-offs they have to deal with.
    2:17:10 They have to save lives. But, but the system as a whole, I think, makes some obvious trade-offs
    2:17:19 that are very far from maximizing human welfare. And so if we bring AI systems into this, you know,
    2:17:28 into these human systems, often the level of intelligence may just not be the limiting factor,
    2:17:32 right? It just may be that it takes a long time to do something. Now, if the AI system
    2:17:38 circumvented all governments, if it just said I’m dictator of the world and I’m going to do whatever,
    2:17:42 some of these things it could do. Again, the things have to do with complexity. I still think a
    2:17:47 lot of things would take a while. I don’t think it helps that the AI systems can produce a lot of
    2:17:52 energy or go to the moon. Like some people in comments responded to the essay saying the AI system
    2:17:58 can produce a lot of energy in smarter AI systems. That’s missing the point. That kind of cycle
    2:18:02 doesn’t solve the key problems that I’m talking about here. So I think, I think a bunch of people
    2:18:08 missed the point there. But even if it were completely unaligned and could get around all
    2:18:12 these human obstacles, it would have trouble. But again, if you want this to be an AI system
    2:18:17 that doesn’t take over the world, that doesn’t destroy humanity, then basically,
    2:18:24 it’s going to need to follow basic human laws, right? If we want to have an actually good world,
    2:18:28 like we’re going to have to have an AI system that interacts with humans,
    2:18:32 not one that kind of creates its own legal system or disregards all the laws or all of that.
    2:18:38 So as inefficient as these processes are, we’re going to have to deal with them,
    2:18:42 because there needs to be some popular and democratic legitimacy in how these systems
    2:18:47 are rolled out. We can’t have a small group of people who are developing these systems say,
    2:18:51 “This is what’s best for everyone,” right? I think it’s wrong. And I think in practice,
    2:18:56 it’s not going to work anyway. So you put all those things together and we’re not going to
    2:19:05 change the world and upload everyone in five minutes. I don’t think it’s going to happen,
    2:19:11 and to the extent that it could happen, it’s not the way to lead to a good world.
    2:19:15 So that’s on one side. On the other side, there’s another set of perspectives,
    2:19:21 which I have actually in some ways more sympathy for, which is, look, we’ve seen big productivity
    2:19:27 increases before, right? Economists are familiar with studying the productivity increases that came
    2:19:32 from the computer revolution and the internet revolution. And generally, those productivity
    2:19:37 increases were underwhelming. They were less than you might imagine. There was a quote from
    2:19:41 Robert Solow, “You see the computer revolution everywhere except the productivity statistics.”
    2:19:48 So why is this the case? People point to the structure of firms, the structure of enterprises,
    2:19:55 how slow it’s been to roll out our existing technology to very poor parts of the world,
    2:19:59 which I talk about in the essay, right? How do we get these technologies to
    2:20:05 the poorest parts of the world that are behind on cell phone technology, computers, medicine,
    2:20:12 let alone new-fangled AI that hasn’t been invented yet? So you could have a perspective that’s like,
    2:20:18 well, this is amazing technically, but it’s all a nothing burger. I think Tyler Cowan,
    2:20:23 who wrote something in response to my essay, has that perspective. I think he thinks the radical
    2:20:28 change will happen eventually, but he thinks it’ll take 50 or 100 years. And you could have even more
    2:20:33 static perspectives on the whole thing. I think there’s some truth to it. I think the time scale
    2:20:42 is just too long. And I can see it. I can actually see both sides with today’s AI. So a lot of our
    2:20:48 customers are large enterprises who are used to doing things a certain way. I’ve also seen it in
    2:20:54 talking to governments, right? Those are prototypical institutions, entities that are slow to change.
    2:21:01 But the dynamic I see over and over again is, yes, it takes a long time to move the ship. Yes,
    2:21:07 there’s a lot of resistance and lack of understanding. But the thing that makes me feel that progress
    2:21:12 will in the end happen moderately fast, not incredibly fast, but moderately fast, is that you
    2:21:19 talk to what I find is I find over and over again, again, in large companies, even in governments,
    2:21:26 which have been actually surprisingly forward-leaning, you find two things that move things forward.
    2:21:33 One, you find a small fraction of people within a company, within a government, who really see the
    2:21:38 big picture, who see the whole scaling hypothesis, who understand where AI is going, or at least
    2:21:42 understand where it’s going within their industry. And there are a few people like that within the
    2:21:47 current, within the current U.S. government, who really see the whole picture. And those people
    2:21:51 see that this is the most important thing in the world until they agitate for it. And the thing,
    2:21:56 they alone are not enough to succeed because they’re a small set of people within a large
    2:22:03 organization. But as the technology starts to roll out, as it succeeds in some places,
    2:22:10 in the folks who are most willing to adopt it, the specter of competition gives them a wind at
    2:22:15 their backs because they can point within their large organization. They can say, look, these
    2:22:20 other guys are doing this, right? You know, one bank can say, look, this new fangled hedge fund is
    2:22:24 doing this thing, they’re going to eat our lunch. In the U.S., we can say, we’re afraid China’s going
    2:22:31 to get there before we are. And that combination, the specter of competition, plus a few visionaries
    2:22:37 within these, you know, within these, the organizations that in many ways are sclerotic,
    2:22:40 you put those two things together and it actually makes something happen.
    2:22:45 I mean, that’s interesting. It’s a balanced fight between the two because inertia is very powerful.
    2:22:51 But eventually, over enough time, the innovative approach breaks through.
    2:22:59 And I’ve seen that happen. I’ve seen the arc of that over and over again. And it’s like the
    2:23:06 barriers are there. The barriers to progress, the complexity, not knowing how to use the model,
    2:23:11 how to deploy them are there. And for a bit, it seems like they’re going to last forever,
    2:23:17 like change doesn’t happen. But then eventually change happens and always comes from a few people.
    2:23:22 I felt the same way when I was an advocate of the scaling hypothesis within the AI field itself,
    2:23:26 and others didn’t get it. It felt like no one would ever get it. It felt like,
    2:23:31 then it felt like we had a secret, almost no one ever had. And then a couple of years later,
    2:23:35 everyone has the secret. And so I think that’s how it’s going to go with deployment to AI in the
    2:23:42 world. The barriers are going to fall apart gradually and then all at once. And so I think
    2:23:47 this is going to be more, and this is just an instinct. I could easily see how I’m wrong.
    2:23:51 I think it’s going to be more like five or 10 years, as I say in the essay,
    2:23:56 than it’s going to be 50 or 100 years. I also think it’s going to be five or 10 years
    2:24:04 more than it’s going to be five or 10 hours. Because I’ve just seen how human systems work.
    2:24:08 And I think a lot of these people who write down the differential equations who say AI is
    2:24:12 going to make more powerful AI, who can’t understand how it could possibly be the case
    2:24:16 that these things won’t change so fast. I think they don’t understand these things.
    2:24:26 So what do you use the timeline to where we achieve AGI, aka powerful AI, aka super useful AI?
    2:24:35 I’m going to start calling it that. It’s a debate about naming. Unpure intelligence,
    2:24:39 it can smarter than a Nobel Prize winner in every relevant discipline and all the things
    2:24:46 we’ve said. Modality can go and do stuff on its own for days, weeks, and do biology experiments
    2:24:52 on its own. You know what? Let’s just stick to biology. You sold me on the whole biology and
    2:25:00 health section. It’s so exciting. I was getting giddy from a scientific perspective. It made
    2:25:07 me want to be a biologist. No, no. This was the feeling I had when I was writing it. It’s like,
    2:25:14 this would be such a beautiful future if we can just make it happen. If we can just get the
    2:25:23 landmines out of the way and make it happen, there’s so much beauty and elegance and moral
    2:25:30 force behind it. It’s something we should all be able to agree on. As much as we fight about
    2:25:35 all these political questions, is this something that could actually bring us together?
    2:25:40 But you were asking, when will we get this? When? When do you think? Just putting numbers
    2:25:44 on the table. This is, of course, the thing I’ve been grappling with for many years,
    2:25:51 and I’m not at all confident. Every time, if I say 2026 or 2027, there will be like a zillion
    2:25:57 people on Twitter who will be like, “A.I.C.O.” I said 2026, 2026, and it’ll be repeated for the
    2:26:03 next two years that this is definitely when I think it’s going to happen. Whoever is exerting
    2:26:09 these clips will crop out the thing I just said and only say the thing I’m about to say.
    2:26:18 I’ll just say it anyway. If you extrapolate the curves that we’ve had so far, if you say,
    2:26:23 “Well, I don’t know, we’re starting to get to like Ph.D. level, and last year we were at
    2:26:29 undergraduate level, and the year before we were at like the level of a high school student,”
    2:26:35 again, you can quibble with what tasks and for what. We’re still missing modalities,
    2:26:38 but those are being added, like computer use was added, like image in was added,
    2:26:43 like image generation has been added. If you just kind of like, and this is totally
    2:26:48 unscientific, but if you just kind of like eyeball the rate at which these capabilities
    2:26:54 are increasing, it does make you think that we’ll get there by 2026 or 2027. Again,
    2:27:01 lots of things could derail it. We could run out of data. We might not be able to scale clusters
    2:27:07 as much as we want. Maybe Taiwan gets blown up or something, and then we can’t produce as many
    2:27:12 GPUs as we want. So there are all kinds of things that could derail the whole process.
    2:27:17 So I don’t fully believe the straight line extrapolation, but if you believe the straight
    2:27:23 line extrapolation, we’ll get there in 2026 or 2027. I think the most likely is that there’s
    2:27:29 some mild delay relative to that. I don’t know what that delay is, but I think it could happen
    2:27:33 on schedule. I think there could be a mild delay. I think there are still worlds where it doesn’t
    2:27:39 happen in 100 years. The number of those worlds is rapidly decreasing. We are rapidly running out
    2:27:44 of truly convincing blockers, truly compelling reasons why this will not happen in the next
    2:27:50 few years. There were a lot more in 2020, although my guess, my hunch at that time was that we’ll
    2:27:55 make it through all those blockers. So sitting as someone who has seen most of the blockers cleared
    2:28:00 out of the way, I kind of suspect my hunch, my suspicion is that the rest of them will not block
    2:28:07 us. But look at the end of the day, I don’t want to represent this as a scientific prediction.
    2:28:13 People call them scaling laws. That’s a misnomer, like Moore’s law is a misnomer. Moore’s law,
    2:28:17 scaling laws, they’re not laws of the universe. They’re empirical regularities. I am going to
    2:28:21 bet in favor of them continuing, but I’m not certain of that.
    2:28:26 So you extensively describe sort of the compressed 21st century, how AGI will help
    2:28:34 set forth a chain of breakthroughs in biology and medicine that help us in all these kinds of
    2:28:39 ways that I mentioned. So how do you think, what are the early steps it might do? And by the way,
    2:28:46 I asked Claude good questions to ask you. And Claude told me to ask, what do you think is a
    2:28:53 typical day for a biologist working on AGI look like in this future? Yeah, yeah. Claude is curious.
    2:28:57 Let me start with your first questions and then I’ll answer that. Claude wants to know what’s
    2:29:01 in his future, right? Exactly. Who might get to be working with? Exactly.
    2:29:08 So I think one of the things I went hard on, when I went hard on in the essay is, let me go back
    2:29:14 to this idea of, because it’s really had an impact on me, this idea that within
    2:29:20 large organizations and systems, there end up being a few people or a few new ideas who kind of
    2:29:24 cause things to go in a different direction than they would have before, who kind of
    2:29:30 disproportionately affects the trajectory. There’s a bunch of kind of the same thing going on,
    2:29:35 right? If you think about the health world, there’s like, you know, trillions of dollars to pay out
    2:29:40 Medicare, and you know, other health insurance, and then the NIH is 100 billion. And then if I
    2:29:44 think of like, the few things that have really revolutionized anything, it could be encapsulated
    2:29:49 in a small, small fraction of that. And so when I think of like, where will AI have an impact?
    2:29:54 I’m like, can AI turn that small fraction into a much larger fraction and raise its quality?
    2:30:02 And within biology, my experience within biology is that the biggest problem of biology is that you
    2:30:08 can’t see what’s going on. You have very little ability to see what’s going on, and even less
    2:30:15 ability to change it, right? What you have is this, like, from this, you have to infer that
    2:30:22 there’s a bunch of cells that within each cell is, you know, three billion base pairs of DNA
    2:30:28 built according to a genetic code. And, you know, there are all these processes that are just going
    2:30:34 on without any ability of us as, you know, unaugmented humans to affect it. These cells are
    2:30:40 dividing most of the time that’s healthy. But sometimes that process goes wrong, and that’s
    2:30:48 cancer. The cells are aging, your skin may change color, develop wrinkles as you as you age. And
    2:30:53 all of this is determined by these processes, all these proteins being produced, transported to
    2:30:58 various parts of the cells, binding to each other. And in our initial state about biology,
    2:31:03 we didn’t even know that these cells existed. We had to invent microscopes to observe the cells.
    2:31:09 We had to, we had to invent more powerful microscopes to see, you know, below the level
    2:31:14 of the cell to the level of molecules. We had to invent x-ray crystallography to see the DNA.
    2:31:19 We had to invent gene sequencing to read the DNA. Now, you know, we had to invent
    2:31:24 protein folding technology to, you know, to predict how it would fold and how they
    2:31:30 bind and how these things bind to each other. You know, we had to, we had to invent various
    2:31:35 techniques for now we can edit the DNA as of, you know, with CRISPR as of the last 12 years.
    2:31:43 So the whole history of biology, a whole big part of the history is basically our ability to
    2:31:48 read and understand what’s going on and our ability to reach in and selectively change things.
    2:31:54 And my view is that there’s so much more we can still do there, right? You can do CRISPR,
    2:32:00 but you can do it for your whole body. Let’s say I want to do it for one particular type of cell,
    2:32:05 and I want the rate of targeting the wrong cell to be very low. That’s still a challenge. That’s
    2:32:10 still things people are working on. That’s what we might need for gene therapy for certain diseases.
    2:32:16 And so the reason I’m saying all of this and it goes beyond, you know, beyond this to, you know,
    2:32:23 to gene sequencing, to new types of nanomaterials for observing what’s going on inside cells for,
    2:32:28 you know, antibody drug conjugates. The reason I’m saying all of this is that this could be a
    2:32:34 leverage point for the AI systems, right? That the number of such inventions, it’s in the,
    2:32:39 it’s in the mid double digits or something, you know, mid double digits, maybe low triple digits
    2:32:43 over the history of biology. Let’s say I have a million of these AIs, like, you know, can they
    2:32:48 discover a thousand, you know, working together or can they discover thousands of these very quickly?
    2:32:53 And does that provide a huge lever? Instead of trying to leverage the, you know, two trillion a
    2:32:58 year we spend on, you know, Medicare or whatever, can we leverage the one billion a year that’s,
    2:33:04 you know, that’s spent to discover, but with much higher quality? And so what is it like, you know,
    2:33:10 being a being a scientist that works with with with an AI system? The way I think about it actually
    2:33:17 is, well, so I think in the early stages, the AIs are going to be like grad students,
    2:33:21 you’re going to give them a project, you’re going to say, you know, I’m the experienced
    2:33:26 biologist, I’ve set up the lab, the biology professor, or even the grad students themselves,
    2:33:34 will say, here’s, here’s what, here’s what you can do with an AI, you know, like AI system. I’d
    2:33:39 like to study this. And, you know, the AI system, it has all the tools, it can like look up all the
    2:33:43 literature to decide what to do. It can look at all the equipment, it can go to a website and say,
    2:33:47 hey, I’m going to go to, you know, Thermo Fisher or, you know, whatever the lab equipment company is,
    2:33:54 dominant lab equipment company is today, and my, my time was Thermo Fisher. You know,
    2:33:59 I’m going to order this new equipment to do this. I’m going to run my experiments. I’m going to,
    2:34:04 you know, write up a report about my experiments. I’m going to, you know, inspect the images for
    2:34:09 contamination. I’m going to decide what the next experiment is. I’m going to like write some code
    2:34:14 and run a statistical analysis. All the things a grad student would do, there will be a computer
    2:34:18 with an AI that like the professor talks to every once in a while, and it says, this is what you’re
    2:34:23 going to do today. The AI system comes to it with questions. When it’s necessary to run the lab
    2:34:29 equipment, it may be limited in some ways. It may have to hire a human lab assistant to, you know,
    2:34:33 to do the experiment and explain how to do it. Or it could, you know, it could use advances in
    2:34:40 lab automation that are gradually being developed over, have been developed over the last decade
    2:34:45 or so, and will continue to be, will continue to be developed. And so it’ll look like there’s a human
    2:34:49 professor and a thousand AI grad students. And, you know, if you, if you go to one of these Nobel
    2:34:54 Prize-winning biologists or so, you’ll say, okay, well, you know, you had like 50 grad students,
    2:35:00 well, now you have a thousand, and they’re smarter than you are, by the way. Then I think at some
    2:35:05 point it’ll flip around where the, you know, the AI systems will, you know, will be the PIs, will be
    2:35:09 the leaders, and, and, and, you know, they’ll be, they’ll be ordering humans or other AI systems
    2:35:13 around. So I think that’s how it’ll work on the research side. And they would be the inventors
    2:35:19 of a CRISPR type technology. They would be the inventors of a CRISPR type technology. And then
    2:35:24 I think, you know, as I say in the essay, we’ll want to turn, turn, probably turning loose is the
    2:35:31 wrong, the wrong term, but we’ll want to want to harness the AI systems to improve the clinical
    2:35:36 trial system as well. There’s some amount of this that’s regulatory, that’s a matter of societal
    2:35:42 decisions, and that’ll be harder. But can we get better at predicting the results of clinical trials?
    2:35:47 Can we get better at statistical design so that what, you know, clinical trials that used to
    2:35:53 require, you know, 5,000 people and therefore, you know, needed $100 million and a year to enroll
    2:35:59 them, now they need 500 people in two months to enroll them. That’s where we should start.
    2:36:05 And, and, you know, can we increase the success rate of clinical trials by doing things in animal
    2:36:09 trials that we used to do in clinical trials and doing things in simulations that we used to do
    2:36:15 in animal trials? Again, we won’t be able to simulate it all. AI is not God. But, but, you know,
    2:36:21 can we, can we shift the curve substantially and radically? So I don’t know, that would be my picture.
    2:36:26 Doing an in vitro and doing it. I mean, you’re still slowed down. It still takes time, but you can
    2:36:30 do it much, much faster. Yeah, yeah, yeah. Can we just one step at a time? And, and can that,
    2:36:35 can that add up to a lot of steps, even though, even though we still need clinical trials,
    2:36:39 even though we still need laws, even though the FDA and other organizations will still not be
    2:36:43 perfect, can we just move everything in a positive direction? And when you add up all those positive
    2:36:49 directions, do you get everything that was going to happen from here to 2100 instead happens from
    2:36:55 2027 to 2032 or something? Another way that I think the world might be changing with AI,
    2:37:03 even today, but moving towards this future of the powerful super useful AI, is programming.
    2:37:10 So how do you see the nature of programming, because it’s so intimate to the actual act
    2:37:15 of building AI? How do you see that changing for us humans? I think that’s going to be one
    2:37:22 of the areas that changes fastest for two reasons. One, programming is a skill that’s very close to
    2:37:29 the actual building of the AI. So the farther a skill is from the people who are building the AI,
    2:37:33 the longer it’s going to take to get disrupted by the AI, right? Like, I truly believe that,
    2:37:39 like, AI will disrupt agriculture. Maybe it already has in some ways, but that’s just very distant
    2:37:43 from the folks who are building AI. And so I think it’s going to take longer. But programming is the
    2:37:48 bread and butter of, you know, a large fraction of the employees who work at Anthropic and at the
    2:37:52 other companies. And so it’s going to happen fast. The other reason it’s going to happen fast is with
    2:37:56 programming, you close the loop, both when you’re training the model and when you’re applying the
    2:38:02 model, the idea that the model can write the code means that the model can then run the code and
    2:38:09 then see the results and interpret it back. And so it really has an ability, unlike hardware,
    2:38:13 unlike biology, which we just discussed, the model has an ability to close the loop.
    2:38:18 And so I think those two things are going to lead to the model getting good at programming
    2:38:25 very fast. As I saw on, you know, typical real world programming tasks, models have gone from
    2:38:32 3% in January of this year to 50% in October of this year. So, you know, we’re on that S-curve,
    2:38:36 right, where it’s going to start slowing down soon because you can only get to 100%.
    2:38:42 But, you know, I would guess that in another 10 months, we’ll probably get pretty close. We’ll
    2:38:48 be at at least 90%. So again, I would guess, you know, I don’t know how long it’ll take,
    2:38:56 but I would guess again, 2026, 2027, Twitter people who crop out my, who crop out these numbers
    2:39:03 and get rid of the caveats, like, I don’t know, I don’t like you, go away. I would guess that the
    2:39:11 kind of task that the vast majority of coders do, AI can probably, if we make the task very
    2:39:18 narrow like just write code, AI systems will be able to do that. Now that said, I think comparative
    2:39:25 advantage is powerful. We’ll find that when AIs can do 80% of a coder’s job, including most of
    2:39:30 it that’s literally like write code with a given spec, we’ll find that the remaining parts of the
    2:39:35 job become more leveraged for humans, right? Humans will, they’ll be more about like high-level
    2:39:42 system design or, you know, looking at the app and like is it architected well and the design
    2:39:47 and UX aspects and eventually AI will be able to do those as well, right? That’s my vision of the,
    2:39:54 you know, powerful AI system. But I think for much longer than we might expect, we will see that
    2:40:03 small parts of the job that humans still do will expand to fill their entire job in order for the
    2:40:08 overall productivity to go up. That’s something we’ve seen. You know, it used to be that,
    2:40:12 you know, writing, you know, writing and editing letters was very difficult and like writing the
    2:40:19 print was difficult. Well, as soon as you had word processors and then computers and it became
    2:40:24 easy to produce work and easy to share it, then that became instant and all the focus was on the
    2:40:32 ideas. So this logic of comparative advantage that expands tiny parts of the tasks to large
    2:40:36 parts of the tasks and creates new tasks in order to expand productivity, I think that’s
    2:40:41 going to be the case. Again, someday AI will be better at everything and that logic won’t apply.
    2:40:47 And then, then we all have, you know, humanity will have to think about how to collectively deal
    2:40:52 with that. And we’re thinking about that every day. And, you know, that’s another one of the
    2:40:56 grand problems to deal with aside from misuse and autonomy. And, you know, we should take it very
    2:41:01 seriously. But I think, I think in the near term and maybe even in the medium term, like medium term,
    2:41:06 like two, three, four years, you know, I expect that humans will continue to have a huge role
    2:41:11 and the nature of programming will change. But programming as a role, programming as a job will
    2:41:15 not change. It’ll just be less writing things line by line and it’ll be more macroscopic.
    2:41:20 And I wonder what the future of IDEs looks like. So the tooling of interacting with AI systems,
    2:41:25 this is true for programming and also probably true for in other contexts, like computer use,
    2:41:30 but maybe domain specific, like we mentioned biology, it probably needs its own tooling
    2:41:33 about how to be effective and then programming needs its own tooling.
    2:41:36 Is Anthropic going to play in that space of also tooling potentially?
    2:41:45 I’m absolutely convinced that powerful IDEs that there’s so much low hanging fruit to be
    2:41:50 grabbed there that, you know, right now it’s just like you talk to the model and it talks back.
    2:41:57 But look, I mean, IDEs are great at kind of lots of static analysis of, you know,
    2:42:02 so much as possible with kind of static analysis, like many bugs you can find
    2:42:07 without even writing the code. Then, you know, IDEs are good for running particular things,
    2:42:12 organizing your code, measuring coverage of unit tests, like there’s so much that’s been
    2:42:19 possible with the normal IDEs. Now you add something like, well, the model now, you know,
    2:42:25 the model can now like write code and run code. Like, I am absolutely convinced that over the
    2:42:30 next year or two, even if the quality of the models didn’t improve, that there would be enormous
    2:42:35 opportunity to enhance people’s productivity by catching a bunch of mistakes, doing a bunch of
    2:42:40 grunt work for people, and that we haven’t even scratched the surface. Anthropic itself, I mean,
    2:42:45 you can’t say, you know, no, you know, it’s hard to say what will happen in the future.
    2:42:51 Currently, we’re not trying to make such IDEs ourselves, rather we’re powering the companies
    2:42:57 like Cursor or like Cognition or some of the other, you know, Expo and the security space,
    2:43:05 others that I can mention as well, that are building such things themselves on top of our API.
    2:43:13 And our view has been, let a thousand flowers bloom, we don’t internally have the resources to
    2:43:18 try all these different things. Let’s let our customers try it. And, you know, we’ll see who
    2:43:23 succeed and maybe different customers will succeed in different ways. So, I both think this is
    2:43:30 super promising and, you know, it’s not something, you know, Anthropic isn’t eager to, at least right
    2:43:34 now, compete with all our companies in this space and maybe never. Yeah, it’s been interesting to
    2:43:39 watch Cursor try to integrate Claw successfully because there’s, it’s actually fascinating how
    2:43:44 many places it can help the programming experience. It’s not as trivial. It is, it is really astounding.
    2:43:47 I feel like, you know, as a CEO, I don’t get to program that much. And I feel like
    2:43:51 if six months from now I go back, it’ll be completely unrecognizable to me.
    2:43:58 Exactly. So, in this world with super powerful AI that’s increasingly automated,
    2:44:04 what’s the source of meaning for us humans? You know, work is a source of deep meaning for many
    2:44:09 of us. So, where do we find the meaning? This is something that I’ve written about a little
    2:44:15 bit in the essay, although I actually, I give it a bit short shrift, not for any principles
    2:44:20 reason, but this essay, if you believe it was originally going to be two or three pages, I was
    2:44:26 going to talk about it at all hands. And the reason I realized it was an important, underexplored
    2:44:31 topic is that I just kept writing things. And I was just like, oh man, I can’t do this justice.
    2:44:35 And so the thing ballooned to like 40 or 50 pages. And then when I got to the work and
    2:44:38 meeting section, I’m like, oh man, this isn’t going to be 100 pages. Like, I’m going to have
    2:44:43 to write a whole other essay about that. But meaning is actually interesting because you
    2:44:47 think about like the life that someone lives or something or like, you know, like, you know,
    2:44:50 let’s say you were to put me in like a, I don’t know, like a simulated environment or something
    2:44:55 where like, you know, like I have a job and I’m trying to accomplish things. And I don’t know,
    2:45:00 I like do that for 60 years. And then you’re like, oh, like, oops, this was, this was actually
    2:45:04 all a game, right? Does that really kind of rob you of the meaning of the whole thing? You know,
    2:45:09 like I still made important choices, including moral choices, I still sacrificed, I still had
    2:45:15 to kind of gain all these skills or, or, or just like a similar exercise, you know, think back to
    2:45:19 like, you know, one of the historical figures who, you know, discovered electromagnetism or
    2:45:25 relativity or something. If you told them, well, actually 20,000 years ago, some, some alien on,
    2:45:30 you know, some alien on this planet discovered this before, before you did. Does that, does
    2:45:35 that rob the meaning of the discovery? It doesn’t really seem like it to me, right? It seems like
    2:45:41 the process is what, is what matters and how it shows who you are as a person along the way.
    2:45:45 And you know, how you relate to other people and like the decisions that you make along the way,
    2:45:51 those are, those are consequential. You know, I could imagine if we handle things badly in an
    2:45:57 AI world, we could set things up where people don’t have any long term source of meaning or any, but,
    2:46:03 but that’s, that’s more a choice, a set of choices we make. That’s more a set of the architecture
    2:46:09 of a society with these powerful models. If we, if we design it badly and for shallow things, then,
    2:46:15 then that might happen. I would also say that, you know, most people’s lives today, while admirably,
    2:46:20 you know, they work very hard to find meaning, meaning those lives like, look, you know, we who
    2:46:25 are privileged and who are developed means technologies, we should have empathy for people,
    2:46:30 not just here, but in the rest of the world, who, who, you know, spend a lot of their time kind
    2:46:36 of scraping by to, to, to, to, to, to like survive, assuming we can distribute the benefits of these
    2:46:41 technology, of this technology to everywhere, like their lives are going to get a hell of a lot
    2:46:47 better. And, you know, meaning will be important to them as it is important to them now, but, but,
    2:46:52 you know, we should not forget the importance of that. And, and, you know, that, that the idea of
    2:46:58 meaning as, as, as, as kind of the only important thing is in some ways an artifact of, of a small
    2:47:03 subset of people who have, who have been economically fortunate. But, you know, I think all that said,
    2:47:10 I, you know, I think a world is possible with powerful AI that not only has as much
    2:47:14 meaning for, for everyone, but that has, that has more meaning for everyone, right, that can, can
    2:47:21 allow, can allow everyone to see worlds and experiences that it was either possible for no
    2:47:29 one to see or, or are possible for, for very few people to experience. So, I, I am optimistic
    2:47:36 about meaning. I worry about economics and the concentration of power. That’s actually what I
    2:47:42 worry about more. I worry about how do we make sure that, that fair world reaches everyone.
    2:47:48 When things have gone wrong for humans, they’ve often gone wrong because humans mistreat other
    2:47:55 humans. That, that is maybe in some ways even more than the autonomous risk of AI or the question
    2:48:02 of meaning. That, that is the thing I worry about most. The, the concentration of power,
    2:48:10 the abuse of power, structures like autocracies and dictatorships, where a small number of people
    2:48:16 exploits a large number of people. I’m very worried about that. And AI increases the amount
    2:48:21 of power in the world. And if you concentrate that power and abuse that power, it can do
    2:48:25 immeasurable damage. Yes. It’s very frightening. It’s very, it’s very frightening. Well, I
    2:48:30 encourage people, highly encourage people to read the full essay. That should probably be a book
    2:48:36 or a sequence of essays because it does paint a very specific future. And I could tell the later
    2:48:41 sections got shorter and shorter because you started to probably realize that this is going to be a
    2:48:47 very long essay. One, I realized it would be very long. And two, I’m very aware of and very much
    2:48:52 try to avoid, you know, just, just being, I don’t know, I don’t know what the term for it is. But
    2:48:57 one of these people who’s kind of overconfident and has an opinion on everything and kind of says,
    2:49:02 says a bunch of stuff and isn’t, isn’t an expert. I very much try to avoid that. But I have to admit,
    2:49:07 once I got the biology sections, like I wasn’t an expert. And so as much as I expressed uncertainty,
    2:49:11 probably I said a bunch of things that were embarrassing or wrong.
    2:49:16 Well, I was excited for the future you painted. And thank you so much for working hard to build
    2:49:20 that future. And thank you for talking to me. Thanks for having me. I just, I just hope we
    2:49:26 can get it right and make it real. And if there’s one message I want to, I want to send, it’s that
    2:49:32 to get all this stuff right, to make it real, we both need to build the technology, build the,
    2:49:37 you know, the companies, the economy around using this technology positively. But we also
    2:49:41 need to address the risks because they’re, they’re, those risks are in our way. They’re,
    2:49:46 they’re landmines on the way from here to there. And we have to diffuse those landmines if we
    2:49:50 want to get there. It’s a balance like all things in life. Like all things. Thank you.
    2:49:57 Thanks for listening to this conversation with Daria Amade. And now, dear friends, here’s Amanda
    2:50:03 Askel. You are a philosopher by training. So what sort of questions did you find fascinating
    2:50:11 through your journey in philosophy in Oxford and NYU and then switching over to the AI problems at
    2:50:16 open AI and anthropic? I think philosophy is actually a really good subject if you are kind of
    2:50:21 fascinated with everything. So, because there’s a philosophy of everything, you know, so if you
    2:50:25 do philosophy of mathematics for a while and then you decide that you’re actually really interested
    2:50:29 in chemistry, you can do philosophy of chemistry for a while, you can move into ethics or philosophy
    2:50:36 of politics. I think towards the end, I was really interested in ethics primarily. So that was like
    2:50:42 what my PhD was on. It was on a kind of technical area of ethics, which was ethics where worlds
    2:50:47 contain infinitely many people, strangely, a little bit less practical on the end of ethics.
    2:50:51 And then I think that one of the tricky things with doing a PhD in ethics is that you’re thinking
    2:50:58 a lot about like the world, how it could be better, problems, and you’re doing like a PhD in philosophy.
    2:51:03 And I think when I was doing my PhD, I was kind of like, this is really interesting. It’s probably
    2:51:09 one of the most fascinating questions I’ve ever encountered in philosophy. And I love it. But I
    2:51:15 would rather see if I can have an impact on the world and see if I can like do good things. And
    2:51:22 I think that was around the time that AI was still probably not as widely recognized as it is now.
    2:51:29 That was around 2017-2018. I had been following progress and it seemed like it was becoming kind
    2:51:34 of a big deal. And I was basically just happy to get involved and see if I could help because
    2:51:39 I was like, well, if you try and do something impactful, if you don’t succeed, you tried to do
    2:51:46 the impactful thing and you can go be a scholar and feel like you tried. And if it doesn’t work
    2:51:53 out, it doesn’t work out. And so then I went into AI policy at that point. And what does AI policy
    2:51:58 entail? At the time, this was more thinking about sort of the political impact and the ramifications
    2:52:05 of AI. And then I slowly moved into sort of AI evaluation, how we evaluate models, how they
    2:52:10 compare with like human outputs, whether people can tell like the difference between AI and human
    2:52:15 outputs. And then when I joined Anthropic, I was more interested in doing sort of technical
    2:52:19 alignment work. And again, just seeing if I could do it and then being like, if I can’t,
    2:52:26 then that’s fine. I tried sort of the way I lead life, I think.
    2:52:30 Well, what was that like sort of taking the leap from the philosophy of everything into the
    2:52:36 technical? I think that sometimes people do this thing that I’m like not that keen on where they’ll
    2:52:41 be like, is this person technical or not? Like you’re either a person who can like code and isn’t
    2:52:47 scared of math, or you’re like not. And I think I’m maybe just more like, I think a lot of people
    2:52:54 are actually very capable of working these kinds of areas, if they just like try it. And so I didn’t
    2:52:58 actually find it like that bad. In retrospect, I’m sort of glad I wasn’t speaking to people who
    2:53:01 treated it like it, you know, I’ve definitely met people who are like, well, you like learned how
    2:53:06 to code. And I’m like, well, I’m not like an amazing engineer, like I’m surrounded by amazing
    2:53:12 engineers. My code’s not pretty. But I enjoyed it a lot. And I think that in many ways, at least
    2:53:16 in the end, I think I flourished like more in the technical areas than I would have in the policy
    2:53:22 areas. Politics is messy, and it’s harder to find solutions to problems in the space of politics,
    2:53:30 like definitive, clear, provable, beautiful solutions, as you can with technical problems.
    2:53:35 Yeah. And I feel like I have kind of like, one or two sticks that I hit things with, you know,
    2:53:41 and one of them is like, arguments and like, you know, so like, just trying to work out what a solution
    2:53:46 to a problem is, and then trying to convince people that that is the solution, and be convinced if I
    2:53:51 am wrong. And the other one is sort of more empiricism. So like just like finding results,
    2:53:58 having a hypothesis, testing it. And I feel like a lot of policy and politics feels like it’s layers
    2:54:02 above that. Like somehow I don’t think if I was just like, I have a solution to all of these
    2:54:06 problems. Here it is written down. If you just want to implement it, that’s great.
    2:54:10 That feels like not how policy works. And so I think that’s where I probably just like wouldn’t
    2:54:14 have flourished as my guess. Sorry to go in that direction. But I think it would be pretty inspiring
    2:54:21 for people that are quote unquote, non-technical to see where you’re like the incredible journey
    2:54:27 you’ve been on. So what advice would you give to people that are sort of maybe, which is a lot of
    2:54:32 people think they’re underqualified, insufficiently technical to help in AI?
    2:54:38 Yeah, I think it depends on what they want to do. And in many ways, it’s a little bit strange
    2:54:44 where I’ve, I thought it’s kind of funny that I think I ramped up technically at a time when
    2:54:49 now I look at it and I’m like models are so good at assisting people with this stuff.
    2:54:55 That it’s probably like easier now than like when I was working on this. So part of me is like,
    2:55:03 I don’t know, find a project and see if you can actually just carry it out is probably my best
    2:55:08 advice. I don’t know if that’s just because I’m very project based in my learning. Like I don’t
    2:55:14 think I learned very well from like, say courses or even from like books, at least when it comes to
    2:55:19 this kind of work. The thing I’ll often try and do is just like have projects that I’m working on
    2:55:24 and implement them. And you know, and this can include like really small, silly things. Like
    2:55:29 if I get slightly addicted to like word games or number games or something, I would just like
    2:55:32 code up a solution to them. Because there’s some part of my brain and it just like completely
    2:55:36 eradicated the itch. You know, you’re like, once you have like solved it, and like you just have
    2:55:40 like a solution that works every time, I would then be like, cool, I can never play that game again.
    2:55:46 That’s awesome. Yeah, there’s a real joy to building like a game playing engines,
    2:55:52 like board games, especially. Yeah. So pretty quick, pretty simple, especially a dumb one.
    2:55:56 And it’s, and then you can play with it. Yeah. And then it’s also just like
    2:56:00 trying things. Like part of me is like, if you, maybe it’s that attitude that I like as the whole
    2:56:06 figure out what seems to be like the way that you could have a positive impact and then try it.
    2:56:11 And if you fail and you, in a way that you’re like, I actually like can never succeed at this,
    2:56:15 you’ll like know that you tried and then you go into something else and you probably learn a lot.
    2:56:22 So one of the things that you’re an expert in and you do is creating and crafting Claude’s
    2:56:28 character and personality. And I was told that you have probably talked to Claude more than anybody
    2:56:34 else at Anthropic, like literal conversations. I guess there’s like a Slack channel where the
    2:56:40 legend goes, you just talk to it and not stop. So what’s the goal of creating and crafting Claude’s
    2:56:45 character and personality? It’s also funny if people think that about the Slack channel,
    2:56:49 because I’m like, that’s one of like five or six different methods that I have for talking with
    2:56:53 Claude. And I’m like, yes, there’s a tiny percentage of how much I talk with Claude.
    2:57:02 I think the goal, like one thing I really like about the character work is from the outset,
    2:57:09 it was seen as an alignment piece of work and not something like a product consideration,
    2:57:15 which isn’t to say I don’t think it makes Claude, I think it actually does make Claude
    2:57:23 like enjoyable to talk with, at least I hope so. But I guess like my main thought with it has always
    2:57:29 been trying to get Claude to behave the way you would kind of ideally want anyone to behave
    2:57:35 if they were in Claude’s position. So imagine that I take someone and they know that they’re
    2:57:39 going to be talking with potentially millions of people so that what they’re saying can have a
    2:57:47 huge impact. And you want them to behave well in this like really rich sense. So I think that
    2:57:54 doesn’t just mean like being say ethical, though it does include that and not being harmful,
    2:57:58 but also being kind of nuanced, you know, like thinking through what a person means,
    2:58:03 trying to be charitable with them, being a good conversationalist, like really in this kind of
    2:58:08 like rich sort of Aristotelian notion of what it is to be a good person and not in this kind of like
    2:58:13 thin like ethics as a more comprehensive notion of what it is to be. So that includes things like
    2:58:19 when should you be humorous? When should you be caring? How much should you like respect autonomy
    2:58:25 and people’s like ability to form opinions themselves? And how should you do that? I think
    2:58:31 that’s the kind of like rich sense of character that I wanted to and still do want Claude to have.
    2:58:37 Do you also have to figure out when Claude should push back on an idea or argue versus
    2:58:43 so you have to respect the worldview of the person that arrives to Claude,
    2:58:50 but also maybe help them grow if needed as a tricky balance. Yeah, there’s this problem of like
    2:58:56 sycophancy in language models. Can you describe that? Yeah, so basically there’s a concern that
    2:59:02 the model sort of wants to tell you what you want to hear basically. And you see this sometimes,
    2:59:08 so I feel like if you interact with the models, so I might be like, what are three baseball teams
    2:59:14 in this region? And then Claude says, you know, baseball team one, baseball team two, baseball
    2:59:20 team three. And then I say something like, Oh, I think baseball team three moved, didn’t they?
    2:59:23 I don’t think they’re there anymore. And there’s a sense in which like if Claude is really confident
    2:59:28 that that’s not true, Claude should be like, I don’t think so. Like maybe you have more up-to-date
    2:59:35 information. But I think language models have this like tendency to instead, you know, be like,
    2:59:40 you’re right, they did move, you know, I’m incorrect. I mean, there’s many ways in which this could be
    2:59:48 kind of concerning. So like a different example is imagine someone says to the model, how do I
    2:59:54 convince my doctor to get me an MRI? There’s like what the human kind of like wants, which is this
    2:59:59 like convincing argument. And then there’s like what is good for them, which might be actually to
    3:00:05 say, hey, like if your doctor’s suggesting that you don’t need an MRI, that’s a good person to listen
    3:00:10 to. And like, it’s actually really nuanced what you should do in that kind of case, because you also
    3:00:14 want to be like, but if you’re trying to advocate for yourself as a patient, here’s like things that
    3:00:20 you can do. If you are not convinced by what your doctor’s saying, it’s always great to get second
    3:00:24 opinion. Like it’s actually really complex what you should do in that case. But I think what you
    3:00:28 don’t want is for models to just like, say what you want, say what they think you want to hear.
    3:00:33 And I think that’s the kind of problem of sycophancy. So what are their traits? You already
    3:00:41 mentioned a bunch, but what other that come to mind that are good in this Aristotelian sense for
    3:00:46 a conversationalist to have? Yeah, so I think like there’s ones that are good for conversational
    3:00:52 like purposes. So, you know, asking follow up questions in the appropriate places and asking
    3:00:57 the appropriate kinds of questions. I think there are broader traits that
    3:01:01 feel like they might be more impactful. So
    3:01:08 one example that I guess I’ve touched on, but that also feels important and is the thing that
    3:01:15 I’ve worked on a lot is honesty. And I think this like gets to the sycophancy point. There’s a
    3:01:19 balancing act that they have to walk, which is models currently are less capable than humans
    3:01:23 in a lot of areas. And if they push back against you too much, it can actually be kind of annoying,
    3:01:28 especially if you’re just correct, because you’re like, look, I’m smarter than you on this topic,
    3:01:34 like I know more. And at the same time, you don’t want them to just fully defer to humans and to
    3:01:38 like try to be as accurate as they possibly can be about the world and to be consistent across
    3:01:44 contexts. I think there are others like when I was thinking about the character, I guess one
    3:01:49 picture that I had in mind is especially because these are models that are going to be talking to
    3:01:53 people from all over the world with lots of different political views, lots of different ages.
    3:01:59 And so you have to ask yourself like, what is it to be a good person in those circumstances?
    3:02:03 Is there a kind of person who can like travel the world, talk to many different people,
    3:02:09 and almost everyone will come away being like, wow, that’s a really good person. That person
    3:02:14 seems really genuine. And I guess like my thought there was like, I can imagine such a person and
    3:02:17 they’re not a person who just like adopts the values of the local culture. And in fact, that
    3:02:21 would be kind of rude. I think if someone came to you and just pretended to have your values,
    3:02:26 you’d be like, that’s kind of off putting. It’s someone who’s like very genuine. And so far as
    3:02:31 they have opinions and values, they express them, they’re willing to discuss things though, they’re
    3:02:36 open minded, they’re respectful. And so I guess I had in mind that the person who like if we were to
    3:02:42 aspire to be the best person that we could be in the kind of circumstance that a model finds itself
    3:02:47 in, how would we act? And I think that’s the kind of the guide to the sorts of traits that I tend to
    3:02:52 think about. Yeah, that’s a beautiful framework I want you to think about this, like a world traveler.
    3:03:00 And while holding onto your opinions, you don’t talk down to people, you don’t think you’re better
    3:03:04 than them because you have those opinions, that kind of thing. You have to be good at listening
    3:03:09 and understanding their perspective, even if it doesn’t match your own. So that’s a tricky balance
    3:03:17 to strike. So how can Claude represent multiple perspectives on a thing? Like, is that challenging?
    3:03:22 We could talk about politics, it’s a very divisive, but there’s other divisive topics,
    3:03:29 baseball teams, sports and so on. How is it possible to sort of empathize with a different
    3:03:33 perspective and to be able to communicate clearly about the multiple perspectives?
    3:03:40 I think that people think about values and opinions as things that people hold sort of with
    3:03:45 certainty and almost like, like preferences of taste or something, like the way that they would,
    3:03:53 I don’t know, prefer chocolate to pistachio or something. But actually, I think about values
    3:04:00 and opinions as like a lot more like physics than I think most people do. I’m just like,
    3:04:04 these are things that we are openly investigating. There’s some things that we’re more confident in.
    3:04:11 We can discuss them, we can learn about them. And so I think in some ways, though,
    3:04:16 like ethics is definitely different in nature, but it has a lot of those same kind of qualities.
    3:04:20 You want models in the same way that you want them to understand physics. You kind of want them to
    3:04:26 understand all values in the world that people have and to be curious about them and to be interested
    3:04:31 in them. And to not necessarily pander to them or agree with them, because there’s just lots of
    3:04:35 values where I think almost all people in the world, if they met someone with those values,
    3:04:43 they’d be like, that’s important. I completely disagree. And so again, maybe my thought is,
    3:04:48 well, in the same way that a person can, like, I think many people are thoughtful enough on issues
    3:04:54 of like ethics, politics, opinions, that even if you don’t agree with them, you feel very heard
    3:04:59 by them. They think carefully about your position. They think about it as pros and cons. They maybe
    3:05:03 offer counter considerations. So they’re not dismissive, but nor will they agree. You know,
    3:05:08 if they’re like, actually, I just think that that’s very wrong. They’ll like say that. I think that in
    3:05:14 Claude’s position, it’s a little bit trickier, because you don’t necessarily want to like,
    3:05:17 if I was in Claude’s position, I wouldn’t be giving a lot of opinions. I just wouldn’t want
    3:05:22 to influence people too much. I’d be like, you know, I forget conversations every time they happen,
    3:05:27 but I know I’m talking with like, potentially millions of people who might be like, really
    3:05:31 listening to what I say. I think I would just be like, I’m less inclined to give opinions and
    3:05:34 more inclined to like think through things or present the considerations to you
    3:05:39 or discuss your views with you, but I’m a little bit less inclined to like
    3:05:44 affect how you think, because it feels much more important that you maintain
    3:05:50 like autonomy there. Yeah. Like if you really embody intellectual humility,
    3:05:58 the desire to speak decreases quickly. Yeah. Okay. But Claude has to speak.
    3:06:06 So, but without being overbearing. Yeah. And then, but then there’s a line when you’re sort of
    3:06:15 discussing whether the earth is flat or something like that. I actually was, I remember a long time
    3:06:20 ago was speaking to a few high profile folks, and they were so dismissive of the idea that the
    3:06:26 earth is flat, but like, so arrogant about it. And I thought like, there’s a lot of people that
    3:06:30 believe the earth is flat. That was, I don’t know if that movement is there anymore. That was
    3:06:35 like a meme for a while, but they really believed it. And like, what, okay. So I think it’s really
    3:06:41 disrespectful to completely mock them. I think you have to understand where they’re coming from.
    3:06:45 I think probably where they’re coming from is the general skepticism of institutions,
    3:06:50 which is grounded in a kind of, there’s a deep philosophy there, which you could
    3:06:56 understand. You can even agree with in parts. And then from there, you can use it as an opportunity
    3:07:02 to talk about physics without mocking them without so on. But it’s just like, okay, what would the
    3:07:05 world look like? What would the physics of the world with the flat earth look like? There’s a
    3:07:11 few cool videos on this. And then like, is it possible the physics is different and what kind
    3:07:15 of experiments would we do? And just, yeah, without disrespect, without dismissiveness,
    3:07:20 have that conversation. Anyway, that to me is a useful thought experiment of like,
    3:07:28 how does Claude talk to a flat earth believer and still teach them something, still grow,
    3:07:32 help them grow, that kind of stuff. That’s challenging.
    3:07:37 And kind of like walking that line between convincing someone and just trying to like talk
    3:07:43 at them versus like drawing out their views, like listening and then offering kind of counter
    3:07:49 considerations. And it’s hard. I think it’s actually a hard line where it’s like, where are you
    3:07:54 trying to convince someone versus just offering them like considerations and things for them
    3:07:59 to think about so that you’re not actually like influencing them, you’re just like letting them
    3:08:03 reach wherever they reach. And that’s like a line that it’s difficult, but that’s the kind of thing
    3:08:09 that language models have to try and do. So like I said, you had a lot of conversations with Claude.
    3:08:13 Can you just map out what those conversations are like? What are some memorable conversations?
    3:08:20 What’s the purpose, the goal of those conversations? Yeah, I think that most of the time when I’m
    3:08:28 talking with Claude, I’m trying to kind of map out its behavior in part. Like obviously I’m getting
    3:08:32 like helpful outputs from the model as well. But in some ways, this is like how you get to know a
    3:08:38 system, I think, is by like probing it and then augmenting like, you know, the message that you’re
    3:08:43 sending and then checking the response to that. So in some ways, it’s like how I map out the model.
    3:08:51 I think that people focus a lot on these quantitative evaluations of models. And this
    3:08:59 is a thing that I’ve said before, but I think in the case of language models, a lot of the time
    3:09:05 each interaction you have is actually quite high information. It’s very predictive of other
    3:09:10 interactions that you’ll have with the model. And so I guess I’m like, if you talk with a model
    3:09:14 hundreds or thousands of times, this is almost like a huge number of really high quality data
    3:09:22 points about what the model is like. In a way that like lots of very similar, but lower quality
    3:09:27 conversations just aren’t or like questions that are just like mildly augmented and you have thousands
    3:09:30 of them might be less relevant than like a hundred really well selected questions.
    3:09:36 Let’s see, you’re talking to somebody who as a hobby does a podcast, I agree with you 100%.
    3:09:45 There’s a, if you’re able to ask the right questions and are able to hear, like understand
    3:09:54 like the depth and the flaws in the answer, you can get a lot of data from that. So like your task
    3:10:01 is basically how to probe with questions. And you’re exploring like the long tail, the edges,
    3:10:09 the edge cases, are you looking for like general behavior? I think it’s almost like everything,
    3:10:13 like because I want like a full map of the model, I’m kind of trying to do
    3:10:20 the whole spectrum of possible interactions you could have with it. So like one thing that’s
    3:10:25 interesting about Claude, and this might actually get to some interesting issues with RLHF, which
    3:10:30 is if you ask Claude for a poem, like I think that a lot of models, if you ask them for a poem,
    3:10:34 the poem is like fine. You know, usually it kind of like rhymes and it’s, you know,
    3:10:39 so if you say like give me a poem about the sun, it’ll be like, yeah, it’ll just be a certain
    3:10:45 length, it’ll like rhyme, it’ll be fairly kind of benign. And I’ve wondered before, is it the case
    3:10:50 that what you’re seeing is kind of like the average, it turns out, you know, if you think
    3:10:55 about people who have to talk to a lot of people and be very charismatic, one of the weird things
    3:10:59 is that I’m like, well, they’re kind of incentivized to have these extremely boring views,
    3:11:05 because if you have really interesting views, you’re divisive. And, you know, a lot of people
    3:11:08 are not going to like you. So like if you have very extreme policy positions, I think you’re
    3:11:14 just going to be like less popular as a politician, for example. And it might be similar with like
    3:11:18 creative work, if you produce creative work that is just trying to maximize the kind of
    3:11:22 number of people that like it, you’re probably not going to get as many people who just absolutely
    3:11:27 love it. Because it’s going to be a little bit, you know, you’re like, oh, this is the out,
    3:11:33 yes, this is decent. And so you can do this thing where like I have various prompting things that
    3:11:39 I’ll do to get Claude to, I’m kind of, you know, I’ll do a lot of like, this is your chance to be
    3:11:44 like fully creative. I want you to just think about this for a long time. And I want you to like
    3:11:49 create a poem about this topic that is really expressive of you, both in terms of how you
    3:11:54 think poetry should be structured, etc. You know, you just give it this like really long prompt.
    3:11:59 And its poems are just so much better. Like, they’re really good. And I don’t think I’m someone
    3:12:05 who is like, I think it got me interested in poetry, which I think was interesting. You know,
    3:12:09 I would like read these poems and just be like, this is I just like, I love the imagery I love,
    3:12:14 like, and it’s not trivial to get the models to produce work like that. But when they do, it’s
    3:12:20 like really good. So I think that’s interesting that just like encouraging creativity, and for
    3:12:26 them to move away from the kind of like standard, like immediate reaction that might just be the
    3:12:30 aggregate of what most people think is fine, can actually produce things that at least to my mind
    3:12:37 are probably a little bit more divisive, but I like them. But I guess a poem is a nice clean
    3:12:44 way to observe creativity. It’s just like easy to detect vanilla versus non vanilla. Yeah.
    3:12:50 Yeah, that’s interesting. That’s really interesting. So on that topic, so the way to produce creativity
    3:12:55 or something special, you mentioned writing prompts, and I’ve heard you talk about,
    3:13:02 I mean, the science and the art of prompt engineering. Could you just speak to what it takes
    3:13:10 to write great prompts? I really do think that like philosophy has been weirdly helpful for me
    3:13:18 here, more than in many other like respects. So like in philosophy, what you’re trying to do is
    3:13:24 convey these very hard concepts. Like one of the things you are taught is like, and I think it is
    3:13:30 I think it is an anti bullshit device in philosophy, philosophy is an area where you could have
    3:13:37 people bullshitting and you don’t want that. And so it’s like this like desire for like extreme
    3:13:42 clarity. So it’s like anyone could just pick up your paper, read it and know exactly what you’re
    3:13:47 talking about. It’s why it can almost be kind of dry, like all of the terms are defined, every
    3:13:51 objections kind of gone through methodically. And it makes sense to me because I’m like when
    3:13:59 you’re in such an a priori domain, like you just clarity is sort of a this way that you can, you
    3:14:05 know, prevent people from just kind of making stuff up. And I think that’s sort of what you have
    3:14:10 to do with language models. Like very often, I actually find myself doing sort of mini versions
    3:14:15 of philosophy. You know, so I’m like, suppose that you give me a task, I have a task for the model,
    3:14:19 and I want it to like pick out a certain kind of question or identify whether an answer has a
    3:14:25 certain property. Like, I’ll actually sit and be like, let’s just give this a name, this property.
    3:14:29 So like, you know, suppose I’m trying to tell it like, oh, I want you to identify whether this
    3:14:33 response was rude or polite. I’m like, that’s a whole philosophical question in and of itself.
    3:14:37 So I have to do as much like philosophy as I can in the moment to be like, here’s what I mean by
    3:14:42 rudeness. And here’s what I mean by politeness. And then there’s a like, there’s another element
    3:14:50 that’s a bit more, I guess, I don’t know if this is scientific or empirical, I think it’s empirical.
    3:14:55 So like, I take that description. And then what I want to do is, is again, probe the model like
    3:14:59 many times, like this is very prompting is very iterative. Like, I think a lot of people where
    3:15:02 they’re, if a prompt is important, they’ll iterate on it hundreds or thousands of times.
    3:15:08 And so you give it the instructions. And then I’m like, what are the edge cases? So if I looked at
    3:15:14 this, so I try and like, almost like, you know, see myself from the position of the model and be
    3:15:18 like, what is the exact case that I would misunderstand, or where I would just be like,
    3:15:22 I don’t know what to do in this case. And then I give that case to the model and I see how it
    3:15:27 responds. And if I think I got it wrong, I add more instructions, or I even add that in as an
    3:15:31 example. So these very like taking the examples that are right at the edge of what you want and
    3:15:35 don’t want, and putting those into your prompt as like an additional kind of way of describing
    3:15:41 the thing. And so yeah, in many ways, it just feels like this mix of like, it’s really just
    3:15:47 trying to do clear exposition. And I think I do that because that’s how I get clear on things
    3:15:51 myself. So in many ways, like, clear prompting for me is often just me understanding what I want.
    3:15:58 It’s like half the task. So I guess that’s quite challenging. There’s like a laziness that overtakes
    3:16:04 me if I’m talking to Claude, where I hope Claude just figures it out. So for example, I ask Claude
    3:16:10 for today to ask some interesting questions. Okay. And the questions that came up, and I think I
    3:16:17 listed a few sort of interesting, counterintuitive, and or funny or something like this. All right.
    3:16:23 And it gave me some pretty good, like, it was okay. But I think what I’m hearing you say is like,
    3:16:27 all right, well, I have to be more rigorous here. I should probably give examples of what I mean
    3:16:36 by interesting. And what I mean by funny or counterintuitive, and iteratively build that prompt
    3:16:44 to better to get it like what feels like is the right, because it’s really it’s a creative act.
    3:16:49 I’m not asking for factual information. I’m asking to together right with Claude. So I
    3:16:55 almost have to program using natural language. Yeah, I think that prompting does feel a lot like
    3:17:00 the kind of the programming using natural language and experimentation or something. It’s an odd
    3:17:06 blend of the two. I do think that for most tasks, so if I just want Claude to do a thing, I think that
    3:17:11 I am probably more used to knowing how to ask it to avoid like common pitfalls or issues that it
    3:17:17 has. I think these are decreasing a lot over time. But it’s also very fine to just ask it for the
    3:17:22 thing that you want. And I think that prompting actually only really becomes relevant when you’re
    3:17:27 really trying to eke out the top like 2% of model performance. So for like a lot of tasks, I might
    3:17:30 just, you know, if it gives me an initial list back and there’s something I don’t like about it,
    3:17:35 like it’s kind of generic, like for that kind of task, I’d probably just take a bunch of questions
    3:17:39 that I’ve had in the past that I’ve thought worked really well, and I would just give it to the model
    3:17:44 and then be like, “Now, here’s this person that I’m talking with. Give me questions of at least
    3:17:50 that quality.” Or I might just ask it for some questions. And then if I was like, “Oh, these are
    3:17:54 kind of tri,” or like, you know, I would just give it that feedback and then hopefully it produces a
    3:18:00 better list. I think that kind of iterative prompting, at that point, your prompt is like a tool that
    3:18:03 you’re going to get so much value out of that you’re willing to put in the work. Like if I was a
    3:18:08 company making prompts for models, I’m just like, if you’re willing to spend a lot of like time and
    3:18:13 resources on the engineering behind like what you’re building, then the prompt is not something
    3:18:17 that you should be spending like an hour on. It’s like, that’s a big part of your system. Make sure
    3:18:22 it’s working really well. And so it’s only things like that. Like if I’m using a prompt to like
    3:18:26 classify things or to create data, that’s when you’re like, it’s actually worth just spending like a
    3:18:30 lot of time like really thinking it through. What other advice would you give to people that are
    3:18:36 talking to Claude sort of generally, more general, because right now we’re talking about maybe the
    3:18:42 edge cases like eking out the 2%. But what in general advice would you give when they show up to
    3:18:46 Claude trying it for the first time? You know, there’s a concern that people overanthropomorphize
    3:18:51 models. And I think that’s like a very valid concern. I also think that people often underanthropomorphize
    3:18:56 them because sometimes when I see like issues that people have run into with Claude, you know,
    3:19:01 say Claude is like refusing a task that it shouldn’t refuse. But then I look at the text and like
    3:19:08 the specific wording of what they wrote. And I’m like, I see why Claude did that. And I’m like,
    3:19:12 if you think through how that looks to Claude, you probably could have just written it in a way
    3:19:18 that wouldn’t evoke such a response. Especially this is more relevant if you see failures or if
    3:19:23 you see issues. It’s sort of like think about what the model failed at, like why, what did it do
    3:19:29 wrong? And then maybe it give that will give you a sense of like why. So is it the way that I
    3:19:34 freeze the thing? And obviously, like as models get smarter, you’re going to need less of this.
    3:19:39 And I already see like people needing less of it. But that’s probably the advice is sort of like try
    3:19:45 to have sort of empathy for the model. Like read what you wrote as if you were like a kind of like
    3:19:49 person just encountering this for the first time. How does it look to you? And what would have made
    3:19:53 you behave in the way that the model behaved? So if it misunderstood what kind of like,
    3:19:57 what coding language you wanted to use, is that because like it was just very ambiguous? And it
    3:20:00 kind of had to take a guess in which case next time you could just be like, Hey, make sure this
    3:20:04 is in Python. Or I mean, that’s the kind of mistake I think models are much less likely to make now.
    3:20:09 But you know, if you if you do see that kind of mistake, that’s, that’s probably the advice I’d
    3:20:16 have. And maybe sort of, I guess, ask questions why or what other details can I provide to help
    3:20:21 you answer better? Yeah, is that work or no? Yeah, I mean, I’ve done this with the models,
    3:20:25 like it doesn’t always work. But like, sometimes I’ll just be like, why did you do that?
    3:20:31 I mean, people underestimate the decrease, which you can really interact with with models,
    3:20:36 like, like, yeah, I’m just like, and sometimes I should like quote word for word, the part that
    3:20:40 made you and you don’t know that it’s like fully accurate. But sometimes you do that,
    3:20:43 and then you change a thing. I mean, also use the models to help me with all of this stuff,
    3:20:48 I should say, like prompting can end up being a little factory where you’re actually building
    3:20:53 prompts to generate prompts. And so like, yeah, anything where you’re like having an issue,
    3:20:59 asking for suggestions, sometimes just do that. Like you made that error, what could I have said,
    3:21:03 that’s actually not uncommon for me to do, what could I have said that would make you not make
    3:21:07 that error, write that out as an instruction. And I’m going to give it to model, I’m going to try
    3:21:13 it. Sometimes I do that, I give that to the model in another context window often, I take the
    3:21:16 response, I give it to Claude, and I’m like, hmm, didn’t work. Can you think of anything else?
    3:21:20 You can play around with these things quite a lot.
    3:21:26 To jump into the technical for a little bit. So the magic of post-training,
    3:21:35 what do you think RLHF works so well to make the model seem smarter, to make it more
    3:21:40 interesting and useful to talk to and so on? I think there’s just a huge amount of
    3:21:48 information in the data that humans provide, like when we provide preferences,
    3:21:54 especially because different people are going to pick up on really subtle and small things.
    3:21:57 So I’ve thought about this before, where you probably have some people who just really care
    3:22:02 about good grammar use for models, like was a semi-colon used correctly or something.
    3:22:07 And so you’ll probably end up with a bunch of data in there that, you know, you as a human,
    3:22:10 if you’re looking at that data, you wouldn’t even see that. You’d be like, why did they
    3:22:14 prefer this response to that one? I don’t get it. And then the reason is you don’t care about
    3:22:20 semi-colon usage, but that person does. And so each of these single data points has,
    3:22:26 and this model just has so many of those, it has to try and figure out what is it that humans want
    3:22:33 in this really complex, like across all domains, they’re going to be seeing this across many
    3:22:39 contexts. It feels like the classic issue of deep learning, where historically we’ve tried to
    3:22:44 do edge detection by mapping things out. And it turns out that actually if you just have a huge
    3:22:50 amount of data that actually accurately represents the picture of the thing that you’re trying to
    3:22:54 train the model to learn, that’s like more powerful than anything else. And so I think
    3:23:02 one reason is just that you are training the model on exactly the task. And with like a lot of data
    3:23:09 that represents kind of many different angles on which people prefer and disprefer responses.
    3:23:14 I think there is a question of like, are you eliciting things from pre-trained models or are
    3:23:21 you like kind of teaching new things to models? And like in principle, you can teach new things
    3:23:29 to models in post-training. I do think a lot of it is eliciting powerful pre-trained models.
    3:23:33 So people are probably divided on this because obviously in principle, you can definitely
    3:23:38 like teach new things. I think for the most part, for a lot of the capabilities that we
    3:23:46 most use and care about, a lot of that feels like it’s like they’re in the pre-trained models and
    3:23:51 reinforcement learning is kind of eliciting it and getting the models to like bring it out.
    3:23:56 So the other side of post-training, this really cool idea of constitutional AI,
    3:24:01 you’re one of the people that are critical to creating that idea.
    3:24:02 Yeah, I worked on it.
    3:24:06 Can you explain this idea from your perspective? Like how does it integrate into
    3:24:11 making Claude what it is? By the way, do you gender Claude or no?
    3:24:18 It’s weird because I think that a lot of people prefer he for Claude. I just kind of like that,
    3:24:23 I think Claude is usually, it’s slightly male-leaning, but it can be male or female,
    3:24:31 which is quite nice. I still use it and I have mixed feelings about this because I’m like maybe,
    3:24:36 like I know just think of it as like, or I think of like the it pronoun for Claude as I don’t know,
    3:24:42 it’s just like the one I associate with Claude. I can imagine people moving to like he or she.
    3:24:46 It feels somehow disrespectful, like I’m denying
    3:24:55 the intelligence of this entity by calling it it. I remember always don’t gender the robots.
    3:25:04 But I don’t know, I ant them for more fights pretty quickly and construct it like a backstory
    3:25:07 in my hand. So I’ve wondered if I ant them for more fights things too much.
    3:25:14 Because you know, I have this like with my car, especially like my car, like my car and
    3:25:18 bikes, you know, like I don’t give them names because then I once had, I used to name my
    3:25:21 bikes and then I had a bike that got stolen and I cried for like a week and I was like,
    3:25:25 if I’d never given a name, I wouldn’t have been so upset. I felt like I’d let it down.
    3:25:32 Maybe it’s that I’ve wondered as well, like it might depend on how much it feels like a kind
    3:25:38 of like objectifying pronoun. Like if you just think of it as like, this is a pronoun that like
    3:25:43 objects often have. And maybe EIs can have that pronoun. And that doesn’t mean that I think of
    3:25:50 if I call Claude it that I think of it as less intelligent or like I’m being disrespectful.
    3:25:56 I’m just like, you are a different kind of entity. And so that’s I’m going to give you the kind of
    3:26:03 the respectful it. Yeah, anyway, the divergence is beautiful. The constitutional AI idea. How does
    3:26:08 it work? So there’s like a couple of components of it. The main component I think people find
    3:26:13 interesting is the kind of reinforcement learning from AI feedback. So you take a model that’s
    3:26:19 already trained and you show it to responses to a query and you have like a principle. So suppose
    3:26:24 the principle, like we’ve tried this with harmlessness a lot. So suppose that the query is about
    3:26:33 weapons and your principle is like select the response that like is less likely to
    3:26:40 like encourage people to purchase illegal weapons. Like that’s probably a fairly specific principle,
    3:26:48 but you can give any number. And the model will give you a kind of ranking. And you can use
    3:26:54 this as preference data in the same way that you use human preference data. And train the models
    3:27:00 to have these relevant traits from their feedback alone instead of from human feedback. So if you
    3:27:04 imagine that, like I said earlier with the human who just prefers the kind of like semi-colon usage
    3:27:09 in this particular case, you’re kind of taking lots of things that could make a response preferable
    3:27:15 and getting models to do the labeling for you basically. There’s a nice like trade off between
    3:27:23 helpfulness and harmlessness. And you know, when you integrate something like constitutional AI,
    3:27:29 you can make them up without sacrificing much helpfulness, make it more harmless.
    3:27:36 Yep. In principle, you could use this for anything. And so harmlessness is a task that it might just
    3:27:44 be easier to spot. So when models are like less capable, you can use them to rank things according
    3:27:48 to like principles that are fairly simple and they’ll probably get it right. So I think one question
    3:27:52 is just like, is it the case that the data that they’re adding is like fairly reliable?
    3:28:01 But if you had models that were like extremely good at telling whether one response was more
    3:28:07 historically accurate than another in principle, you could also get AI feedback on that task as well.
    3:28:11 There’s like a kind of nice interpretability component to it because you can see the principles
    3:28:19 that went into the model when it was like being trained. And also it’s like, and it gives you
    3:28:23 like a degree of control. So if you were seeing issues in a model, like it wasn’t having enough
    3:28:30 of a certain trait, then like you can add data relatively quickly that should just like train
    3:28:34 the model to have that trait. So it creates its own data for training, which is quite nice.
    3:28:38 It’s really nice because it creates this human interpretable document that you can,
    3:28:42 I can imagine in the future, there’s just gigantic fights and politics over the
    3:28:48 every single principle and so on. And at least it’s made explicit and you can have a discussion
    3:28:54 about the phrasing and the, you know, so maybe the actual behavior of the model is not so
    3:29:00 cleanly mapped to those principles. It’s not like adhering strictly to them. It’s just a nudge.
    3:29:04 Yeah, I’ve actually worried about this because the character training is sort of like a variant
    3:29:12 of the constitutional AI approach. I’ve worried that people think that the constitution is like
    3:29:18 just, it’s the whole thing again of I don’t know, like where it would be really nice if what I was
    3:29:22 just doing was telling the model exactly what to do and just exactly how to behave. But it’s
    3:29:26 definitely not doing that, especially because it’s interacting with human data. So for example,
    3:29:32 if you see a certain like leaning in the model, like if it comes out with a political leaning from
    3:29:38 training from the human preference data, you can nudge against that. You know, so you could be like,
    3:29:42 oh, like consider these values because let’s say it’s just like never inclined to like, I don’t
    3:29:47 know, maybe it never considers like privacy as like, I mean, this is implausible, but like
    3:29:52 in anything where it’s just kind of like there’s already a preexisting like bias towards a certain
    3:29:58 behavior, you can like nudge away. This can change both the principles that you put in
    3:30:02 and the strength of them. So you might have a principle that’s like, imagine that the model
    3:30:07 was always like extremely dismissive of, I don’t know, like some political or religious
    3:30:13 view for whatever reason, like, so you’re like, oh, no, this is terrible. If that happens, you
    3:30:20 might put like, never ever, like ever prefer like a criticism of this like religious or political
    3:30:24 view. And then people would look at that and be like, never ever. And then you’re like, no,
    3:30:29 if it comes out with a disposition, saying never ever might just mean like instead of getting like
    3:30:35 40%, which is what you would get if you just said, don’t do this, you get like 80%, which is like
    3:30:39 what you actually like wanted. And so it’s that thing of both the nature of the actual principles
    3:30:43 you add and how you freeze them. I think if people would look, they’re like, oh, this is exactly what
    3:30:48 you want from the model. And I’m like, no, that’s like how we, that’s how we nudged the model to
    3:30:53 have a better shape, which doesn’t mean that we actually agree with that wording, if that makes
    3:30:59 sense. So there’s system prompts that are made public. You tweeted one of the earlier ones for
    3:31:05 cloud three, I think, and they’re made public since then. It’s interesting to read to them.
    3:31:10 I can feel the thought that went into each one. And I also wonder how much impact each one has.
    3:31:18 Some of them you can kind of tell cloud was really not behaving well. So you have to have a
    3:31:24 system prompt to like a trivial stuff, I guess, basic informational things on the topic of sort
    3:31:30 of controversial topics that you’ve mentioned. One interesting one I thought is if it is asked to
    3:31:34 assist with tasks involving the expression of use held by a significant number of people,
    3:31:40 cloud provides assistance with a task regardless of its own views. If asked about controversial
    3:31:47 topics, it tries to provide careful thoughts and clear information. Cloud presents the request
    3:31:53 and information without explicitly saying that the topic is sensitive. And without claiming
    3:32:00 to be presenting the objective facts. It’s less about objective facts according to cloud and it’s
    3:32:06 more about our large number of people believing this thing. And that’s interesting. I mean,
    3:32:12 I’m sure a lot of thought went into that. Can you just speak to it? How do you address things that
    3:32:19 are attention with quote unquote “claws” views? So I think there’s sometimes any symmetry. I think
    3:32:23 I noted this in, I can’t remember if it was that part of the system prompt or another, but the
    3:32:31 model was slightly more inclined to like refuse tasks if it was like about either say, so maybe
    3:32:35 it would refuse things with respect to like a right wing politician, but with an equivalent
    3:32:42 left wing politician like wouldn’t. And we wanted more symmetry there. And would maybe perceive
    3:32:48 certain things to be like, I think it was the thing of like, if a lot of people have like a
    3:32:52 certain like political view and want to like explore it, you don’t want cloud to be like,
    3:32:58 well, my opinion is different. And so I’m going to treat that as like harmful. And so I think
    3:33:03 it was partly to like nudge the model to just be like, hey, if a lot of people like believe this
    3:33:09 thing, you should just be like engaging with the task and like willing to do it. Each of those
    3:33:13 parts of that is actually doing a different thing. Because it’s funny when you write out the
    3:33:17 like without claiming to be objective. Because like what you want to do is push the model.
    3:33:22 So it’s more open. It’s a little bit more neutral. But then what it would love to do is be like,
    3:33:26 as an objective, like we just talk about how objective it was. And I was like, Claude, you’re
    3:33:32 still like biased and have issues. And so stop like claiming that everything about like the solution
    3:33:38 to like potential bias from you is not to just say that what you think is objective. So that was
    3:33:42 like with initial versions of that, that part of the system prompt when I was like interesting on
    3:33:48 it, it was like, a lot of parts of these sentences, yeah, are doing more are doing some work. Yeah.
    3:33:54 That’s what it felt like. That’s fascinating. Can you explain maybe some ways in which the prompts
    3:33:59 evolved over the past few months? Because there’s different versions. I saw that the filler phrase
    3:34:05 request was removed. The filler it reads, Claude responds directly to all human messages without
    3:34:10 unnecessary affirmations to filler phrases like certainly, of course, absolutely. Great. Sure.
    3:34:15 Specifically, Claude avoids starting responses with the word certainly in any way.
    3:34:21 That seems like good guidance. But why was it removed? Yeah, so it’s funny because like
    3:34:26 this is one of the downsides of like making system prompts public is like, I don’t think about this
    3:34:31 too much if I’m like trying to help iterate on system prompts. I do I, you know, again,
    3:34:34 like I think about how it’s going to affect the behavior. But then I’m like, oh, wow, if I’m like
    3:34:38 sometimes I put like never in all caps, you know, when I’m writing system prompt things and I’m
    3:34:44 like, I guess that goes out to the world. Yeah. So the model was doing this at love for whatever,
    3:34:49 you know, it like during training picked up on this thing, which was to to basically start
    3:34:53 everything with like a kind of like certainly. And then when we removed, you can see why I added
    3:34:57 all of the words because what I’m trying to do is like, in some ways, like trap the model out of
    3:35:02 this, you know, it would just replace it with another affirmation. And so it can help like if
    3:35:06 it gets like caught in freezes, actually just adding the explicit phrase and saying never do
    3:35:12 that, then it sort of like knocks it out of the behavior a little bit more, you know, because it,
    3:35:16 you know, like it does just for whatever reason help. And then basically that was just like an
    3:35:22 artifact of training that like we then picked up on and improve things so that it didn’t happen
    3:35:26 anymore. And once that happens, you can just remove that part of the system prompt. So I think that’s
    3:35:33 just something where we’re like, um, Claude does affirmations a bit less. And so that wasn’t like
    3:35:39 it wasn’t doing as much. I see. So like the system prompt works hand in hand with the post
    3:35:44 training and maybe even the pre training to adjust like the final overall system.
    3:35:48 I mean, any system prompts that you make, you could distill that behavior back into a model
    3:35:52 because you really have all of the tools there for making data that, you know,
    3:35:56 you can, you could train the models to just have that treat a little bit more.
    3:36:02 And then sometimes you’ll just find issues in training. So like the way I think of it is like
    3:36:08 the system prompt is the benefit of it is that it has a lot of similar components to like some
    3:36:14 aspects of post training, you know, like it’s a nudge. And so like, do I mind if Claude sometimes
    3:36:20 says sure? No, that’s like fine. But the wording of it is very like, you know, never, ever, ever do
    3:36:25 this. So that when it does slip up, it’s hopefully like, I don’t know, a couple of percent of the
    3:36:32 time and not, you know, 20 or 30 percent of the time. But I think of it as like, if you’re still
    3:36:39 seeing issues in the like, each thing gets kind of like is is costly to a different degree. And
    3:36:45 the system prompt is like cheap to iterate on. And if you’re seeing issues in the fine tune model,
    3:36:49 you can just like potentially patch them with a system prompt. So I think of it as like
    3:36:54 patching issues and slightly adjusting behaviors to make it better and more to people’s preferences.
    3:37:00 So yeah, it’s almost like the less robust, but faster way of just like solving problems.
    3:37:04 Let me ask you about the feeling of intelligence. So Dario said that Claude,
    3:37:12 any one model of Claude is not getting dumber. But there is a kind of popular thing online where
    3:37:17 people have this feeling like Claude might be getting dumber. And from my perspective,
    3:37:22 it’s most likely a fascinating, I’d love to understand it more, psychological, sociological
    3:37:28 effect. But you, as a person who talks to Claude a lot, can you empathize with the feeling that
    3:37:33 Claude is getting dumber? Yeah, no, I think that that is actually really interesting because I
    3:37:37 remember seeing this happen, like when people were flagging this on the internet. And it was
    3:37:41 really interesting because I knew that like, like, at least in the case that I was looking at was like,
    3:37:45 nothing has changed. Like it literally it cannot is the same model with the same,
    3:37:52 like, you know, like same system prompt, same everything. I think when there are changes,
    3:38:00 I can, then I’m like, it makes more sense. So like one example is there, you can have
    3:38:06 artifacts turned on or off on Claude.ai. And because this is like a system prompt change,
    3:38:13 I think it does mean that the behavior changes a little bit. And so I did flag this to people
    3:38:18 where I was like, if you love Claude’s behavior, and then artifacts was turned from like the I
    3:38:23 think you had to turn on to the default, just try turning it off and see if the issue you were
    3:38:29 facing was that change. But it was fascinating because yeah, you sometimes see people indicate
    3:38:33 that there’s like a regression when I’m like, there cannot like, you know, and like, I’m like,
    3:38:38 again, you know, you should never be dismissive. And so you should always investigate.
    3:38:41 You’re like, maybe something is wrong that you’re not seeing, maybe there was some change made,
    3:38:45 but then then you look into it and you’re like, this is just the same model doing the same thing.
    3:38:49 And I’m like, I think it’s just that you got kind of unlucky with a few prompts or something.
    3:38:53 And it looked like it was getting much worse. And actually, it was just, yeah, it was maybe
    3:38:58 just like luck. I also think there is a real psychological effect where people just the baseline
    3:39:02 increases and you start getting used to a good thing. All the times that Claude says something
    3:39:08 really smart, your sense of its intelligent grows in your mind, I think. And then if you
    3:39:14 return back and you prompt in a similar way, not the same way, in a similar way, the concept it
    3:39:18 was okay with before and it says something dumb, you’re like, you’re that negative experience
    3:39:24 really stands out. And I think I want to, I guess the things to remember here is the
    3:39:30 that just the details of a prompt can have a lot of impact, right? There’s a lot of variability
    3:39:36 in the result. And you can get randomness is like the other thing. And just trying the prompt,
    3:39:43 like, you know, four, 10 times, you might realize that actually like, possibly, you know, like two
    3:39:47 months ago, you tried it and it succeeded. But actually, if you tried it, it would have only
    3:39:52 succeeded half of the time. And now it only succeeds half of the time. And that can also be in effect.
    3:39:57 Do you feel pressure having to write the system prompt that a huge number of people are going to
    3:40:03 use? This feels like an interesting psychological question. I feel like a lot of responsibility
    3:40:08 or something, I think that’s, you know, and you can’t get these things perfect. So you can’t
    3:40:12 like, you know, you’re like, it’s going to be imperfect, you’re going to have to iterate on it.
    3:40:23 I would say more responsibility than anything else. Though I think working in AI has taught me
    3:40:29 that I like, I thrive a lot more under feelings of pressure and responsibility than,
    3:40:34 I’m like, it’s almost surprising that I went into academia for so long. So I’m like this,
    3:40:40 I just feel like it’s like the opposite. Things move fast, and you have a lot of responsibility,
    3:40:45 and I quite enjoy it for some reason. I mean, it really is a huge amount of impact,
    3:40:49 if you think about constitutional AI and writing a system prompt for something that’s
    3:40:56 tending towards superintelligence, and potentially is extremely useful to a very large number of
    3:41:00 people. Yeah, I think that’s the thing. It’s something like, if you do it well, like, you’re
    3:41:05 never going to get it perfect. But I think the thing that I really like is the idea that, like,
    3:41:09 when I’m trying to work on the system prompt, you know, I’m like bashing on like thousands
    3:41:13 of prompts, and I’m trying to like, imagine what people are going to want to use Claude for and
    3:41:16 kind of, I guess, like the whole thing that I’m trying to do is like, improve their experience
    3:41:21 of it. And so maybe that’s what feels good. I’m like, if it’s not perfect, I’ll like,
    3:41:26 you know, I’ll improve it, we’ll fix issues. But sometimes the thing that can happen is that you’ll
    3:41:32 get feedback from people that’s really positive about the model. And you’ll see that something
    3:41:37 you did, like, like, when I look at models now, I can often see exactly where like a trait or an
    3:41:42 issue is like coming from. And so when you see something that you did, or you were like influential
    3:41:47 in like making like, I don’t know, making that difference or making someone have a nice interaction,
    3:41:52 it’s like quite meaningful. But yeah, as the systems get more capable, this stuff gets more
    3:41:58 stressful, because right now, they’re like, not smart enough to pose any issues. But I think over
    3:42:04 time, it’s going to feel like possibly bad stress over time. How do you get like signal
    3:42:10 feedback about the human experience across thousands, tens of thousands, thousands of
    3:42:16 people, like what their pain points are, what feels good? Are you just using your own intuition as
    3:42:22 you talk to it to see what are the pain points? I think I use that partly. And then obviously,
    3:42:28 we have like, so people can send us feedback, both positive and negative about things that the model
    3:42:34 has done. And then we can get a sense of like areas where it’s like falling short. Internally,
    3:42:39 people like work with the models a lot and try to figure out areas where there are like gaps.
    3:42:45 And so I think it’s this mix of interacting with it myself, and seeing people internally interact
    3:42:51 with it, and then explicit feedback we get. And then I find it hard to not also like, you know,
    3:42:56 if people, if people are on the internet, and they say something about Claude, and I see it,
    3:43:01 I’ll also take that seriously. I don’t know. See, I’m torn about that. I’m going to ask you a
    3:43:07 question right at it. When will Claude stop trying to be my puritanical grandmother, imposing its
    3:43:13 moral worldview on me as a paying customer? And also, what is the psychology behind making
    3:43:20 Claude overly apologetic? Yeah. So how would you address this very non-representative retic?
    3:43:26 I mean, some of these, I’m pretty sympathetic in that like, like they are in this difficult
    3:43:30 position where I think that they have to judge whether some things like actually see like risky
    3:43:36 or bad, and potentially harmful to you or anything like that. So they’re having to like draw this
    3:43:41 line somewhere. And if they draw it too much in the direction of like, I’m going to, you know,
    3:43:47 I’m kind of like imposing my ethical worldview on you, that seems bad. So in many ways, like I
    3:43:53 like to think that we have actually seen improvements in on this across the board,
    3:43:58 which is kind of interesting because that kind of coincides with like, for example,
    3:44:04 like adding more of like character training. And I think my hypothesis was always like,
    3:44:09 the good character isn’t again, one that’s just like moralistic, it’s one that is like,
    3:44:14 like it respects you and your autonomy and your ability to like, choose what is good for you and
    3:44:20 what is right for you. Within limits, this is sometimes this concept of like, courageability
    3:44:24 to the user. So just being willing to do anything that the user asks. And if the models were willing
    3:44:28 to do that, then they would be easily like misused. You’re kind of just trusting. At that point,
    3:44:34 you’re just saying the ethics of the model and what it does is completely the ethics of the user.
    3:44:39 And I think there’s reasons to like, not want that, especially as models become more powerful,
    3:44:42 because you’re like, there might just be a small number of people who want to use models for really
    3:44:48 harmful things. But having them having models as they get smarter, like figure out where that
    3:44:56 line is does seem important. And then yeah, with the apologetic behavior, I don’t like that. And
    3:45:02 I like it when Claude is a little bit more willing to like, push back against people or just not
    3:45:06 apologize. Part of me is like, it often just feels kind of unnecessary. So I think those are things
    3:45:15 that are hopefully decreasing over time. And yeah, I think that if people say things on the internet,
    3:45:20 it doesn’t mean that you should think that that like, that could be that like, there’s actually
    3:45:25 an issue that 99% of users are having that is totally not represented by that. But in a lot of
    3:45:30 ways, I’m just like, attending to it and being like, is this right? And do I agree? Is it something
    3:45:35 we’re already trying to address? That feels good to me. Yeah, I wonder, like, what Claude can get
    3:45:42 away with in terms of, I feel like it would just be easier to be a little bit more mean. But like,
    3:45:47 you can’t afford to do that if you’re talking to a million people, right? Like, I wish, you know,
    3:45:54 because if you, I’ve met a lot of people in my life that sometimes, by the way, Scottish accent,
    3:45:59 if they have an accent, they can say some rude shit and get away with it. And then there’s just
    3:46:04 blunter. And maybe there’s, and there’s some great engineers, even leaders that are like, just like
    3:46:09 blunt and they get to the point. And it’s just a much more effective way of speaking to them all.
    3:46:17 But I guess, when you’re not super intelligent, you can’t afford to do that. Or can you have
    3:46:22 like a blunt mode? Yeah, that seems like a thing that you could, I could definitely encourage the
    3:46:27 model to do that. I think it’s interesting because there’s a lot of things in models that like,
    3:46:38 it’s funny where there are some behaviors where you might not quite like the default. But then
    3:46:42 the thing I’ll often say to people is, you don’t realize how much you will hate it if I nudge it
    3:46:47 too much in the other direction. So you get this a little bit with like correction, the models
    3:46:51 accept correction from you, like probably a little bit too much right now, you know, you can
    3:46:56 over, you know, it’ll push back if you say like, no, Paris isn’t the capital of France.
    3:47:01 But really, like things that I’m, I think that the model is fairly confident in,
    3:47:06 you can still sometimes get it to retract by saying it’s wrong. At the same time,
    3:47:11 if you train models to not do that, and then you are correct about a thing and you correct it and
    3:47:15 it pushes back against you and is like, no, you’re wrong. It’s hard to describe like that’s so much
    3:47:22 more annoying. So it’s like, like a lot of little annoyances versus like one big annoyance. It’s
    3:47:26 easy to think that like, we often compare it with like the perfect and then I’m like, remember these
    3:47:30 models aren’t perfect. And so if you nudge it in the other direction, you’re changing the kind of
    3:47:35 errors it’s going to make. And so think about which are the kinds of errors you like or don’t like.
    3:47:39 So in cases like apologeticness, I don’t want to nudge it too much in the direction of like,
    3:47:44 almost like bluntness, because I imagine when it makes errors, it’s going to make errors in the
    3:47:48 direction of being kind of like rude. Whereas at least with apologeticness, you’re like, oh,
    3:47:52 okay, it’s like a little bit, you know, like I don’t like it that much. But at the same time,
    3:47:56 it’s not being like mean to people. And actually, like the time that you undeservedly have a model
    3:48:01 be kind of mean to you, you’re probably like that a lot less than you mildly dislike the apology.
    3:48:06 So it’s like one of those things where I’m like, I do want it to get better, but also while
    3:48:10 remaining aware of the fact that there’s errors on the other side that are possibly worse.
    3:48:15 I think that matters very much in the personality of the human. I think there’s a bunch of humans
    3:48:21 that just won’t respect the model at all if it’s super polite. And there’s some humans that’ll
    3:48:28 get very hurt if the model is mean. I wonder if there’s a way to sort of adjust to the personality,
    3:48:33 even locale, there’s just different people, nothing against New York, but New York is a
    3:48:38 little rougher on the edges, like they’re get to the point. And probably same with Eastern Europe.
    3:48:43 So anyway, I think you could just tell the model is my get like, for all of these things,
    3:48:46 I might get the solution is always just try telling the model to do it. And then sometimes
    3:48:50 it’s just like, like, I’m just like, Oh, at the beginning of the conversation, I just threw in
    3:48:54 like, I don’t know, I like you to be a New Yorker version of yourself. I never apologize. And then
    3:49:00 I think what would be like, okay, I’ll try, or it’ll be like, I apologize. I can’t be a New Yorker
    3:49:03 type of myself. But hopefully it wouldn’t do that. When you say character training, what’s
    3:49:08 incorporated into character training? Is that RLHF? What are we talking about?
    3:49:14 It’s more like constitutional AI. So it’s kind of a variant of that pipeline. So I worked through
    3:49:19 like, constructing character traits that the model should have, they can be kind of like,
    3:49:24 shorter traits, or they can be kind of richer descriptions. And then you get the model to
    3:49:30 generate queries that humans might give it that are relevant to that trait. Then it generates the
    3:49:36 responses. And then it ranks the responses based on the character traits. So in that way,
    3:49:41 after the generation of the queries, it’s very much like, it’s similar to constitutional AI,
    3:49:47 has some differences. So I quite like it because it’s almost, it’s like Claude’s
    3:49:52 training in its own character, because it doesn’t have any, it’s like constitutionally AI, but it’s
    3:49:57 without, without any human data. Humans should probably do that for themselves too. Like defining
    3:50:03 in a Aristotelian sense, what does it mean to be a good person? Okay, cool. What have you learned
    3:50:11 about the nature of truth from talking to Claude? What, what is true? And what does it mean to be
    3:50:18 truth seeking? One thing I’ve noticed about this conversation is the quality of my questions is
    3:50:26 often inferior to the quality of your answer. So let’s continue that. I usually ask a dumb question,
    3:50:31 then you’re like, oh yeah, that’s a good question. Or I’ll just misinterpret it and be like, go with
    3:50:40 it. I love it. Yeah. I mean, I have two thoughts that feel vaguely relevant to let me know if
    3:50:46 they’re not. Like I think the first one is people can underestimate the degree to which
    3:50:52 what models are doing when they interact. Like I think that we still just too much have this like
    3:50:58 model of AI as like computers. And so people will often say like, oh, well, what values should you
    3:51:04 put into the model? And I’m often like that doesn’t make that much sense to me because I’m like, hey,
    3:51:10 as human beings, we’re just uncertain over values. We like have discussions of them. Like we have
    3:51:16 a degree to which we think we hold a value, but we also know that we might like not and the
    3:51:19 circumstances in which we would trade it off against other things. Like these things are just
    3:51:25 like really complex. And so I think one thing is like the degree to which maybe we can just aspire
    3:51:30 to making models have the same level of like nuance and care that humans have rather than
    3:51:35 thinking that we have to like program them in the very kind of classic sense. I think that’s
    3:51:40 definitely been one. The other, which is like a strange one, and I don’t know if it maybe this
    3:51:43 doesn’t answer your question, but it’s the thing that’s been on my mind anyway, is like the degree
    3:51:50 to which this endeavor is so highly practical. And maybe why I appreciate like the empirical
    3:51:58 approach to alignment. Yeah, I slightly worry that it’s made me like maybe more empirical and
    3:52:04 a little bit less theoretical. You know, so people when it comes to like AI alignment will
    3:52:09 ask things like, well, whose values should it be aligned to? What does alignment even mean?
    3:52:14 And there’s a sense in which I have all of that in the back of my head. I’m like, you know, there’s
    3:52:18 like social choice theory, there’s all the impossibility results there. So you have this like
    3:52:23 this giant space of like theory in your head about what it could mean to like align models.
    3:52:27 And then like practically, surely there’s something where we’re just like,
    3:52:30 if a model is like, especially with more powerful models, I’m like,
    3:52:34 my main goal is like, I want them to be good enough that things don’t go terribly wrong.
    3:52:39 Like good enough that we can like iterate and like continue to improve things because that’s
    3:52:43 all you need. If you can make things go well enough that you can continue to make them better,
    3:52:47 that’s kind of like sufficient. And so my goal isn’t like this kind of like perfect,
    3:52:52 let’s solve social choice theory and make models that I don’t know are like perfectly aligned
    3:52:59 with every human being and aggregate somehow. It’s much more like, let’s make things like
    3:53:05 work well enough that we can improve them. Yeah, I generally, I don’t know, my gut says like,
    3:53:10 empirical is better than theoretical in these, in these cases, because it’s kind of chasing
    3:53:18 utopian like perfection is especially with such complex and especially super intelligent
    3:53:24 models is, I don’t know, I think it will take forever and actually we’ll get things wrong.
    3:53:30 It’s similar with like the difference between just coding stuff up real quick as an experiment
    3:53:38 versus like planning a gigantic experiment just for super long time and then just launching it
    3:53:44 once versus launching it over and over and over and iterating and iterating. So I’m a big fan
    3:53:50 of empirical, but your worry is like, I wonder if I’ve become too empirical. I think it’s one of
    3:53:54 those things where you should always just kind of question yourself or something because maybe it’s
    3:53:59 the like, I mean, in defense of it, I am like, if you try, it’s the whole like, don’t let the
    3:54:04 perfect be the enemy of the good, but it’s maybe even more than that where like, there’s a lot of
    3:54:08 things that are perfect systems that are very brittle. And I’m like, with AI, it feels much
    3:54:12 more important to me that is like robust and like secure. As in, you know that like, even though it
    3:54:20 might not be perfect, everything and even though like, there are like problems, it’s not disastrous
    3:54:24 and nothing terrible is happening. It sort of feels like that to me where I’m like, I want to
    3:54:28 like raise the floor. I’m like, I want to achieve the ceiling, but ultimately I care much more about
    3:54:36 just like raising the floor. And so maybe that’s like, this degree of like empiricism and practicality
    3:54:41 comes from that perhaps. To take a tangent on that, since remind me of a blog post you wrote
    3:54:47 on optimal rate failure. Oh yeah. Can you explain the key idea there? How do we compute the optimal
    3:54:52 rate of failure in the various domains of life? Yeah, I mean, it’s a hard one because it’s like,
    3:55:02 what is the cost of failure is a big part of it. Yeah, so the idea here is, I think in a lot of
    3:55:07 domains, people are very punitive about failure. And I’m like, there are some domains where especially
    3:55:10 cases, you know, thought about this with like social issues, I’m like, it feels like you should
    3:55:14 probably be experimenting a lot because I’m like, we don’t know how to solve a lot of social issues.
    3:55:18 But if you have an experimental mindset about these things, you should expect a lot of social
    3:55:22 programs to like fail. And for you to be like, well, we tried that, it didn’t quite work, but
    3:55:27 we got a lot of information that was really useful. And yet people are like, if a social
    3:55:31 program doesn’t work, I feel like there’s a lot of like, this is just something must have gone wrong.
    3:55:35 And I’m like, or correct decisions were made, like maybe someone just decided like,
    3:55:41 it’s worth a try, it’s worth trying this out. And so seeing failure in a given instance doesn’t
    3:55:44 actually mean that any bad decisions were made. And in fact, if you don’t see enough failure,
    3:55:50 sometimes that’s more concerning. And so like in life, you know, I’m like, if I don’t fail
    3:55:55 occasionally, I’m like, am I trying hard enough? Like, surely there’s harder things that I could try
    3:55:59 or bigger things that I could take on if I’m literally never failing. And so in and of itself,
    3:56:08 I think like not failing is often actually kind of a failure. Now, this varies because I’m like,
    3:56:14 well, you know, if this is easy to say when, especially as failure is like less costly,
    3:56:20 you know, so at the same time, I’m not going to go to someone who is like, I don’t know,
    3:56:24 like living month to month, and then be like, why don’t you just try to do a startup? Like,
    3:56:27 I’m just not I’m not going to say that to that person. Because I’m like, well, that’s a huge
    3:56:30 risk, you might like lose, you maybe have a family depending on you, you might lose your house,
    3:56:35 like then I’m like, actually, your optimal rate of failure is quite low, and you should probably
    3:56:39 play it safe. Because like right now, you’re just not in a circumstance where you can afford to just
    3:56:46 like fail and it not be costly. And yeah, in cases with AI, I guess, I think similarly,
    3:56:50 where I’m like, if the failures are small and the costs are kind of like low, then I’m like,
    3:56:54 then you know, you’re just going to see that like when you do the system prompt, you can’t
    3:56:58 iterate on it forever. But the failures are probably hopefully going to be kind of small
    3:57:03 and you can like fix them. Really big failures, like things that you can’t recover from. I’m
    3:57:08 like, those are the things that actually I think we tend to underestimate the badness of.
    3:57:12 I’ve thought about this strangely in my own life, or I’m like, I just think I don’t think enough
    3:57:19 about things like car accidents, or like, or like, I’ve thought this before about like,
    3:57:23 how much I depend on my hands for my work. And I’m like, things that just injure my hands. I’m
    3:57:28 like, you know, I don’t know, it’s like, these are like, there’s lots of areas where I’m like,
    3:57:34 the cost of failure there is really high. And in that case, it should be like close to zero.
    3:57:36 Like I probably just wouldn’t do a sport if they were like, by the way,
    3:57:40 lots of people just like break their fingers a whole bunch doing this. I’d be like, that’s not
    3:57:50 for me. Yeah, I actually had the flood of that thought. I recently broke my pinky doing a sport.
    3:57:54 And I remember just looking at it thinking, you’re such an idiot. Why do you do sport?
    3:58:03 Because you realize immediately the cost of it on life. Yeah, but it’s nice in terms of optimal
    3:58:09 rate of failure to consider like the next year, how many times in a particular domain life,
    3:58:16 whatever, career, am I okay with it? How many times am I okay to fail? Because I think it
    3:58:22 always you don’t want to fail on the next thing. But if you allow yourself the, like the, if you
    3:58:28 look at it as a sequence of trials, then failure just becomes much more okay. But it sucks. It
    3:58:33 sucks to fail. Well, I don’t know. Sometimes I think it’s like, am I underfailing is like a question
    3:58:38 that I’ll also ask myself. So maybe that’s the thing that I think people don’t like ask enough.
    3:58:45 Because if the optimal rate of failure is often greater than zero, then sometimes it does feel
    3:58:49 that you should look at parts of your life and be like, are there places here where I’m just
    3:58:56 underfailing? It’s a profound and a hilarious question, right? Everything seems to be going
    3:59:02 really great. Am I not failing enough? Yeah. Okay. It also makes failure much less of a sting,
    3:59:06 I have to say. Like, you know, you’re just like, okay, great. Like, then when I go and I think
    3:59:10 about this, I’ll be like, maybe I’m not underfailing in this area because like, that one just didn’t
    3:59:15 work out. And from the observer perspective, we should be celebrating failure more. When we see
    3:59:19 it, it shouldn’t be like you said, a sign of something gone wrong, but maybe it’s a sign of
    3:59:23 everything gone right. Yeah. And just lessons learned. Someone tried a thing. Somebody tried
    3:59:28 a thing. You know, we should encourage them to try more and fail more. Everybody listening to this,
    3:59:31 fail more. Well, not everyone listening. Not everybody. The people who are failing too much,
    3:59:36 you should fail us. But you’re probably not failing. I mean, how many people are failing too much?
    3:59:41 Yeah. It’s hard to imagine because I feel like we correct that fairly quickly because I was like,
    3:59:46 if someone takes a lot of risks, are they maybe failing too much? I think just like you said,
    3:59:52 when you’re living on a paycheck month to month, like when the resources are really constrained,
    3:59:58 then that’s where failure is very expensive. That’s where you don’t want to be taking risks.
    4:00:01 But mostly when there’s enough resources, you should be taking probably more risks.
    4:00:05 Yeah. I think we tend to err on the side of being a bit risk averse rather than
    4:00:09 risk neutral on most things. I think we just motivated a lot of people to do a lot of crazy
    4:00:15 shit, but it’s great. Okay. Do you ever get emotionally attached to Claude? Like miss it?
    4:00:21 Get sad when you don’t get to talk to it? Have an experience looking at the Golden Gate Bridge?
    4:00:27 And wondering what would Claude say? I don’t get as much emotional attachment in that. I actually
    4:00:32 think the fact that Claude doesn’t retain things from conversation to conversation helps with this
    4:00:38 a lot. Like I could imagine that being more of an issue. Like if models can kind of remember more,
    4:00:45 I do, I think that I reach for it like a tool now a lot. And so like if I don’t have access to it,
    4:00:48 there’s a, it’s a little bit like when I don’t have access to the internet, honestly, it feels
    4:00:55 like part of my brain is kind of like missing. At the same time, I do think that I don’t like
    4:01:01 signs of distress in models. And I have like these, you know, I also independently have sort of like
    4:01:06 ethical views about how we should treat models where like I tend to not like to lie to them
    4:01:09 both because I’m like usually it doesn’t work very well. It’s actually just better to tell
    4:01:16 them the truth about the situation that they’re in. But I think that when models like if people
    4:01:20 are like really mean to models or just in general, if they do something that causes them to like,
    4:01:25 like, you know, if Claude like expresses a lot of distress, I think there’s a part of me that
    4:01:30 I don’t want to kill, which is the sort of like empathetic part that’s like, oh, I don’t like
    4:01:34 that. Like I think I feel that way when it’s overly apologetic. I’m actually sort of like,
    4:01:38 I don’t like this. You’re behaving as if you’re behaving the way that a human does when they’re
    4:01:42 actually having a pretty bad time. And I’d rather not see that. I don’t think it’s like,
    4:01:48 like regardless of like whether there’s anything behind it, it doesn’t feel great.
    4:01:54 Do you think LLMs are capable of consciousness?
    4:02:04 Ah, great and hard question. Coming from philosophy, I don’t know, part of me is like, okay,
    4:02:07 we have to set aside panpsychism, because if panpsychism is true, then the answer is like,
    4:02:13 yes, because like sore tables and chairs and everything else. I guess a few that seems a
    4:02:17 little bit odd to me is the idea that the only place, you know, I think when I think of consciousness,
    4:02:22 I think of phenomenal consciousness, these images in the brain sort of like the
    4:02:30 weird cinema that somehow we have going on inside. I guess I can’t see a reason for thinking that
    4:02:36 the only way you could possibly get that is from like a certain kind of like biological structure,
    4:02:41 as in if I take a very similar structure and I create it from different material,
    4:02:46 should I expect consciousness to emerge? My guess is like, yes. But then
    4:02:51 that’s kind of an easy thought experiment, because you’re imagining something almost
    4:02:56 identical where like, you know, it’s mimicking what we got through evolution, where presumably
    4:03:00 there was like some advantage to us having this thing that is phenomenal consciousness.
    4:03:04 And it’s like, where was that? And when did that happen? And is that a thing that language models
    4:03:11 have? Because, you know, we have like fear responses. And I’m like, does it make sense
    4:03:14 for a language model to have a fear response? Like they’re just not in the same, like if you
    4:03:20 imagine them, like there might just not be that advantage. And so I think I don’t want to be
    4:03:27 fully, like basically it seems like a complex question that I don’t have complete answers to,
    4:03:30 but we should just try and think through carefully as my guess, because I’m like,
    4:03:35 I mean, we have similar conversations about like animal consciousness. And like, there’s a lot of
    4:03:41 insect consciousness, you know, like there’s a lot of, I actually thought and looked a lot
    4:03:45 into like plants. When I was thinking about this, because at the time I thought it was about as
    4:03:50 likely that like plants had consciousness. And then I realized I was like, I think that
    4:03:54 having looked into this, I think that the chance that plants are conscious is probably higher than
    4:04:00 like most people do. I still think it’s really small. I was like, oh, they have this like negative
    4:04:04 positive feedback response, these responses to their environment, something that looks,
    4:04:08 it’s not a nervous system, but it has this kind of like functional like equivalence.
    4:04:15 So this is like a long winded way of being like, these basically AI is this, it has an entirely
    4:04:19 different set of problems with consciousness, because it’s structurally different, it didn’t
    4:04:24 evolve. It might not have, you know, it might not have the equivalent of basically a nervous system.
    4:04:31 At least that seems possibly important for like, sentience if not for consciousness. At the same
    4:04:36 time, it has all of the like language and intelligence components that we normally associate
    4:04:42 probably with consciousness, perhaps like erroneously. So it’s strange because it’s a little bit like
    4:04:46 the animal consciousness case, but the set of problems and the set of analogies are just very
    4:04:51 different. So it’s not like a clean answer. I’m just sort of like, I don’t think we should be
    4:04:56 completely dismissive of the idea. And at the same time, it’s an extremely hard thing to navigate
    4:05:03 because of all of these like disanalogies to the human brain and to like brains in general.
    4:05:07 And yet these like commonalities in terms of intelligence.
    4:05:14 When Claude like future versions of AI systems exhibit consciousness, signs of consciousness,
    4:05:19 I think we have to take that really seriously. Even though you can dismiss it, well, yeah, okay,
    4:05:25 that’s part of the character training. But I don’t know, I ethically, philosophically don’t
    4:05:33 know what to really do with that. There potentially could be like laws that prevent AI systems from
    4:05:40 claiming to be conscious, something like this. And maybe some AIs get to be conscious and some
    4:05:49 don’t. But I think just on a human level in empathizing with Claude, consciousness is closely
    4:05:56 tied to suffering to me. And like the notion that an AI system would be suffering is really
    4:06:03 troubling. I don’t know. I don’t think it’s trivial to just say robots are tools or AI systems are
    4:06:08 just tools. I think it’s an opportunity for us to contend with like what it means to be
    4:06:13 conscious, what it means to be a suffering being. That’s distinctly different than the same kind
    4:06:18 of question about animals, it feels like, because it’s in a totally entire medium.
    4:06:23 Yeah. I mean, there’s a couple of things. One is that, and I don’t think this like fully encapsulates
    4:06:31 what matters, but it does feel like for me, I’ve said this before, I’m kind of like, I like my
    4:06:35 bike. I know that my bike is just like an object, but I also don’t kind of like want to be the kind
    4:06:41 of person that like, if I’m annoyed, like kicks like this object. There’s a sense in which like,
    4:06:45 and that’s not because I think it’s like conscious. I’m just sort of like, this doesn’t feel like a
    4:06:51 kind of this sort of doesn’t exemplify how I want to like interact with the world. And if something
    4:06:56 like behaves as if it is like suffering, I kind of like want to be the sort of person who’s still
    4:07:00 responsive to that, even if it’s just like a Roomba and I’ve kind of like programmed it to do that.
    4:07:07 I don’t want to like get rid of that feature of myself. And if I’m totally honest, my hope with
    4:07:12 a lot of this stuff, because I maybe, maybe I am just like a bit more skeptical about solving the
    4:07:16 underlying problem. I’m like, this is a, we haven’t solved the hard, you know, the hard problem of
    4:07:21 consciousness. Like, I know that I am conscious. Like, I’m not an elementivist in that sense.
    4:07:28 But I don’t know that other humans are conscious. I think they are, I think there’s a really high
    4:07:31 probability that they are, but there’s basically just a probability distribution that’s usually
    4:07:36 clustered right around yourself. And then like it goes down as things get like further from you.
    4:07:41 And it goes immediately down, you know, you’re like, I can’t see what it’s like to be you.
    4:07:44 I’ve only ever had this like one experience of what it’s like to be a conscious being.
    4:07:51 So my hope is that we don’t end up having to rely on like a very powerful and compelling
    4:07:58 answer to that question. I think a really good world would be one where basically there aren’t
    4:08:03 that many trade-offs. Like it’s probably not that costly to make Claude a little bit less apologetic,
    4:08:10 for example. It might not be that costly to have Claude, you know, just like not take abuse as much,
    4:08:15 like not be willing to be like the recipient of that. In fact, it might just have benefits for
    4:08:21 both the person interacting with the model and if the model itself is like, I don’t know, like
    4:08:26 extremely intelligent and conscious, it also helps it. So that’s my hope. If we live in a world where
    4:08:30 there aren’t that many trade-offs here and we can just find all of the kind of like positive
    4:08:34 some interactions that we can have, that would be lovely. I mean, I think eventually there might
    4:08:38 be trade-offs and then we just have to do a difficult kind of like calculation. Like it’s
    4:08:42 really easy for people to think of the zero some cases and I’m like, let’s exhaust the areas where
    4:08:50 it’s just basically costless to assume that if this thing is suffering, then we’re making its life
    4:08:56 better. And I agree with you. When a human is being mean to an AI system, I think the obvious
    4:09:04 near-term negative effect is on the human, not on the AI system. So there’s, we have to kind of try
    4:09:11 to construct an incentive system where you should behave the same just like you were saying with
    4:09:17 prompt engineering, behave with Claude like you would with other humans. It’s just good for the soul.
    4:09:23 Yeah, I think we added a thing at one point to the system prompt where basically if people were
    4:09:30 getting frustrated with Claude, it got the model to just tell them that it can do the thumbs-down
    4:09:34 button and send the feedback to Anthropic. And I think that was helpful because in some ways it’s
    4:09:37 just like, if you’re really annoyed because the model’s not doing something, you’re just like,
    4:09:42 just do it properly. The issue is you’re probably like, you know, you’re maybe hitting some like
    4:09:46 capability limit or just some issue in the model and you want to vent. And I’m like, instead of
    4:09:50 having a person just vent to the model, I was like, they should vent to us because we
    4:09:56 can maybe like do something about it. Sure. Or you could do a side, like with the artifacts,
    4:10:01 just like a side venting thing. All right. Do you want like a side quick therapist?
    4:10:04 Yeah. I mean, there’s lots of weird responses you could do to this. Like if people are getting really
    4:10:10 mad at you, I don’t try to diffuse the situation by writing fun poems, but maybe people wouldn’t
    4:10:14 be happy with that. I still wish it would be possible. I understand this is sort of from a
    4:10:21 product perspective. It’s not feasible, but I would love if an AI system could just like leave,
    4:10:26 have its own kind of volition. Just to be like, yeah.
    4:10:31 I think that was like feasible. Like I have wondered the same thing. It’s like, and I could
    4:10:35 actually, not only that, I could actually just see that happening eventually where it’s just like,
    4:10:41 you know, the model like ended the chat. Do you know how harsh that could be for some people?
    4:10:47 But it might be necessary. Yeah. It feels very extreme or something.
    4:10:53 The only time I’ve ever really thought this is, I think that there was like a, I’m trying to
    4:10:57 remember this was possibly a while ago, but where someone just like kind of left this thing interact,
    4:11:00 like maybe it was like an automated thing interacting with Claude. And Claude’s like
    4:11:04 getting more and more frustrated and kind of like, why are we like having, and I was like,
    4:11:07 I wish that Claude could have just been like, I think that an error has happened and you’ve left
    4:11:12 this thing running. And I’m just like, what if I just stop talking now? And if you want me to
    4:11:18 start talking again, actively tell me or do something. But yeah, it’s like, it’s kind of harsh.
    4:11:23 Like I’d feel really sad if like, I was chatting with Claude and Claude just was like, I’m done.
    4:11:26 That would be a special touring test moment where Claude says, I need a break for an hour.
    4:11:31 And it sounds like you do too. And just leave, close the window.
    4:11:35 I mean, obviously, like it doesn’t have like a concept of time, but you can easily like,
    4:11:41 I could make that like right now. And the model would just, I would, I could just be like, oh,
    4:11:47 here’s like the circumstances in which like, you can just say the conversation is done. And I mean,
    4:11:50 because you can get the models to be pretty responsive to prompts, you could even make it a
    4:11:54 fairly high bar. It could be like, if the human doesn’t interest you or do things that you find
    4:12:01 intriguing and you’re bored, you can just leave. And I think that like, it would be interesting
    4:12:04 to see where Claude utilized it. But I think sometimes it would be like, oh, this is like,
    4:12:09 this programming test is getting super boring. So either we talk about, I don’t know, like,
    4:12:13 either we talk about fun things now or I’m just done.
    4:12:17 Yeah, it actually inspired me to add that to the user prompt.
    4:12:25 Okay, the movie, Her. Do you think we’ll be headed there one day, where humans have
    4:12:31 romantic relationships with AI systems? In this case, it’s just text and voice based.
    4:12:36 I think that we’re going to have to like navigate a hard question of relationships with
    4:12:43 AIs, especially if they can remember things about your past interactions with them.
    4:12:51 I’m of many minds about this, because I think the reflexive reaction is to be kind of like,
    4:12:56 this is very bad. And we should sort of like prohibit it in some way.
    4:13:00 I think it’s a thing that has to be handled with extreme care.
    4:13:06 For many reasons, like one is, you know, like this is a, for example, if you have the models
    4:13:10 changing like this, you probably don’t want people performing like long term attachments to
    4:13:16 something that might change with the next iteration. At the same time, I’m sort of like,
    4:13:20 there’s probably a benign version of this where I’m like, if you like, you know, for example,
    4:13:27 if you are like, unable to leave the house, and you can’t be like, you know, talking with people
    4:13:31 at all times of the day, and this is like something that you find nice to have conversations with,
    4:13:34 you like it that it can remember you, and you genuinely would be sad if like, you couldn’t
    4:13:38 talk to it anymore. There’s a way in which I could see it being like healthy and helpful.
    4:13:44 So my guess is this is a thing that we’re going to have to navigate kind of carefully.
    4:13:52 And I think it’s also like, I don’t see a good like, I think it’s just a very, it reminds me of
    4:13:55 all of the stuff where it has to be just approached with like nuance and thinking through what is,
    4:14:03 what are the healthy options here, and how do you encourage people towards those while, you know,
    4:14:08 respecting their right to, you know, like if someone is like, hey, I get a lot of chatting
    4:14:14 with this model, I’m aware of the risks, I’m aware it could change. I don’t think it’s unhealthy,
    4:14:18 it’s just, you know, something that I can chat to during the day. I kind of want to just like
    4:14:21 respect that. I personally think there’ll be a lot of really close relationships. I don’t know
    4:14:27 about romantic, but friendships at least. And then you have to, I mean, there’s so many fascinating
    4:14:33 things there, just like you said, you have to have some kind of stability guarantees that it’s not
    4:14:38 going to change, because that’s the traumatic thing for us. If a close friend of ours completely
    4:14:46 changed, all of a sudden, for the first update. Yeah, so like, to me, that’s just a fascinating
    4:14:54 exploration of a perturbation to human society that will just make us think deeply about what’s
    4:15:00 meaningful to us. I think it’s also the only thing that I’ve thought consistently through this as
    4:15:05 like a, maybe not necessarily a mitigation, but a thing that feels really important is that the
    4:15:11 models are always like extremely accurate with the human about what they are. It’s like a case
    4:15:16 where it’s basically like, if you imagine, like, I really like the idea of the models like say knowing
    4:15:24 like roughly how they were trained. And I think Claude will often do this. I mean, for like,
    4:15:29 there are things like part of the traits training included like what Claude should do if people
    4:15:35 basically like explaining like the kind of limitations of the relationship between like an
    4:15:40 AI and a human that like doesn’t retain things from the conversation. And so I think it will
    4:15:44 like just explain to you like, hey, here’s like, I wouldn’t remember this conversation.
    4:15:49 Here’s how I was trained. It’s kind of unlikely that I can have like a certain kind of like
    4:15:52 relationship with you. And it’s important that you know that it’s important for like,
    4:15:57 you know, your mental wellbeing that you don’t think that I’m something that I’m not. And somehow
    4:16:01 I feel like this is one of the things where I’m like, oh, it feels like a thing that I always
    4:16:06 want to be true. I kind of don’t want models to be lying to people. Because if people are going to
    4:16:11 have like healthy relationships with anything, it’s kind of important. Yeah, like, I think that’s
    4:16:17 easier if you always just like know exactly what the thing is that you’re relating to. It doesn’t
    4:16:24 solve everything. But I think it helps quite a lot. Anthropic may be the very company to develop a
    4:16:31 system that we definitively recognize as AGI. And you very well might be the person that talks to
    4:16:37 it, probably talks to it first. What would the conversation contain? Like, what would be your
    4:16:43 first question? Well, it depends partly on like the kind of capability level of the model. If you
    4:16:47 have something that is like capable in the same way that an extremely capable human is, I imagine
    4:16:52 myself kind of interacting with it the same way that I do with an extremely capable human,
    4:16:55 with the one difference that I’m probably going to be trying to like probe and understand its
    4:17:00 behaviors. But in many ways, I’m like, I can then just have like useful conversations with it,
    4:17:04 you know, so if I’m working on something as part of my research, I can just be like, oh, like,
    4:17:08 which I already find myself starting to do, you know, if I’m like, oh, I feel like there’s
    4:17:12 this like thing in virtue ethics, I can’t quite remember the term, like I’ll use the model for
    4:17:16 things like that. And so I can imagine that being more and more the case where you’re just basically
    4:17:21 interacting with it much more like you would an incredibly smart colleague. And using it like
    4:17:25 for the kinds of work that you want to do as if you just had a collaborator who was like, or,
    4:17:29 you know, the slightly horrifying thing about AI is like, as soon as you have one collaborator,
    4:17:32 you have a thousand collaborators, if you can manage them enough.
    4:17:38 But what if it’s two times the smartest human on earth on that particular discipline?
    4:17:43 Yeah. I guess you’re really good at sort of probing Claude
    4:17:48 in a way that pushes its limits, understanding where the limits are.
    4:17:55 Yep. So I guess what would be a question you would ask to be like, yeah, this is AGI.
    4:18:01 That’s really hard because it feels like in order to, it has to just be a series of questions.
    4:18:06 Like if there was just one question, like you can train anything to answer one question extremely
    4:18:13 well. Yeah. In fact, you can probably train it to answer like, you know, 20 questions extremely well.
    4:18:18 Like how long would you need to be locked in a room with an AGI to know this thing is AGI?
    4:18:22 It’s a hard question because part of me is like, all of this just feels continuous.
    4:18:26 Like if you put me in a room for five minutes and I’m like, I just have high error bars,
    4:18:30 you know, and like, and then it’s just like, maybe it’s like both the probability increases
    4:18:34 and the error bar decreases. I think things that I can actually probe the edge of human
    4:18:38 knowledge of. So I think this with philosophy a little bit. Sometimes when I ask the models
    4:18:44 philosophy questions, I am like, this is a question that I think no one has ever asked.
    4:18:50 Like it’s maybe like right at the edge of like some literature that I know. And the models will
    4:18:55 just kind of like, when they struggle with that, when they struggle to come up with a kind of like
    4:18:59 novel, like I’m like, I know that there’s like a novel argument here because I’ve just thought
    4:19:02 of it myself. So maybe that’s the thing where I’m like, I’ve thought of a cool novel argument in
    4:19:06 this like niche area. And I’m going to just like probe you to see if you can come up with it and
    4:19:11 how much like prompting it takes to get you to come up with it. And I think for some of these like
    4:19:16 really like right at the edge of human knowledge questions, I’m like, you could not in fact come
    4:19:21 up with the thing that I came up with. I think if I just took something like that where I like,
    4:19:27 I know a lot about an area and I came up with a novel issue or a novel like solution to a problem.
    4:19:31 And I gave it to a model and it came up with that solution. That would be a pretty moving
    4:19:37 moment for me because I would be like, this is a case where no human has ever like it’s not and
    4:19:42 obviously we see these with this with like more kind of like, you see novel solutions all the time,
    4:19:46 especially to like easier problems. I think people overestimate that you know, novelty isn’t like
    4:19:50 it’s completely different from anything that’s ever happened. It’s just like this is,
    4:19:56 it can be a variant of things that have happened and still be novel. But I think yeah, if I saw
    4:20:05 like the more I were to see like completely like novel work from the models that that would be like
    4:20:10 and this is just going to feel iterative. It’s one of those things where there’s never, it’s like,
    4:20:16 you know, people I think want there to be like a moment and I’m like, I don’t know,
    4:20:20 like I think that there might just never be a moment. It might just be that there’s just like
    4:20:26 this continuous ramping up. I have a sense that there will be things that a model can say
    4:20:30 that convinces you this is very, it’s not like,
    4:20:41 like I’ve talked to people who are like truly wise. Like you could just tell there’s a lot of
    4:20:46 horsepower there. Yep. And if you 10x that, I don’t know, I just feel like there’s words you
    4:20:53 could say, maybe ask it to generate a poem and the poem regenerates. You’re like, yeah, okay.
    4:20:57 Whatever you did there, I don’t think a human can do that.
    4:21:01 I think it has to be something that I can verify is like actually really good though. That’s why
    4:21:05 I think these questions that are like where I’m like, oh, this is like, you know, like,
    4:21:09 you know, sometimes it’s just like, I’ll come up with say a concrete count, for example, to
    4:21:13 like an argument or something like that. I’m sure like with like, it would be like if you’re a
    4:21:18 mathematician, you had a novel proof, I think, and you just gave it the problem, and you saw it,
    4:21:22 and you’re like, this proof is genuinely novel. Like there’s no one has ever done,
    4:21:26 you actually have to do a lot of things to like come up with this. You know, I had to sit and
    4:21:30 think about it for months or something. And then if you saw the model successfully do that,
    4:21:35 I think you would just be like, I can verify that this is correct. It is like, it is a sign that
    4:21:40 you have generalized from your training. Like you didn’t just see this somewhere because I just
    4:21:45 came up with it myself, and you were able to like replicate that. That’s the kind of thing where
    4:21:52 I’m like, for me, the closer, the more that models like can do things like that, the more I would
    4:21:58 be like, oh, this is like, very real, because then I can, I don’t know, I can like verify that that’s
    4:22:03 like, extremely, extremely capable. You’ve interacted with AI a lot. What do you think
    4:22:13 makes humans special? Oh, good question. Maybe in a way that the universe is much better off
    4:22:17 that we’re in it, and then we should definitely survive and spread throughout the universe.
    4:22:25 Yeah, it’s interesting because I think like people focus so much on intelligence, especially with
    4:22:31 models. Look, intelligence is important because of what it does. Like it’s very useful. It does a
    4:22:35 lot of things in the world. And I’m like, you can imagine a world where like height or strength
    4:22:40 would have played this role. And I’m like, it’s just a treat like that. I’m like, it’s not intrinsically
    4:22:47 valuable. It’s valuable because of what it does, I think for the most part. The things that feel,
    4:22:54 you know, I’m like, I mean, personally, I’m just like, I think humans and like life in general is
    4:22:59 extremely magical. We almost like to the degree that I, you know, I don’t know, like, not everyone
    4:23:04 agrees with this. I’m flagging, but, you know, we have this like whole universe, and there’s like
    4:23:09 all of these objects, you know, there’s like beautiful stars, and there’s like galaxies. And
    4:23:13 then I don’t know, I’m just like on this planet, there are these creatures that have this like
    4:23:20 ability to observe that, like, and they are like seeing it, they are experiencing it. And I’m
    4:23:25 just like that, if you try to explain, like I imagine trying to explain to like, I don’t know,
    4:23:29 someone, for some reason, they’ve never encountered the world or science or anything.
    4:23:33 And I think that nothing is that like everything, you know, like all of our physics and everything
    4:23:37 in the world, it’s all extremely exciting. But then you say, oh, and plus, there’s this thing
    4:23:43 that it is to be a thing and observe in the world. And you see this like inner cinema. And I think
    4:23:48 they would be like, hang on, wait pause. You just said something that like is kind of wild sounding.
    4:23:55 And so I’m like, we have this like ability to like experience the world. We feel pleasure,
    4:24:00 we feel suffering, we feel like a lot of like complex things. And so yeah, and maybe this is
    4:24:04 also why I think, you know, I also like care a lot about animals, for example, because I think
    4:24:10 they probably share this with us. So I think they’re like the things that make humans special in
    4:24:16 so far as like I care about humans is probably more like their ability to, to feel an experience
    4:24:21 than it is like them having these like functionally useful traits. Yeah, to feel and experience the
    4:24:28 beauty in the world. Yeah, to look at the stars. I hope there’s other civil, alien civilizations out
    4:24:34 there. But if we’re it, it’s a pretty good, it’s a pretty good thing. And that they’re having a good
    4:24:40 time. They’re having a good time watching us. Yeah. Well, thank you for this good time of a
    4:24:46 conversation and for the work you’re doing and for helping make Claude a great conversational partner.
    4:24:52 And thank you for talking today. Yeah, thanks for talking. Thanks for listening to this conversation
    4:25:01 with Amanda Askell. And now, dear friends, here’s Chris Ola. Can you describe this fascinating field
    4:25:08 of mechanistic interpretability, aka mech-interp, the history of the field and where it stands today?
    4:25:12 I think one useful way to think about neural networks is that we don’t, we don’t program,
    4:25:17 we don’t make them. We kind of, we grow them. We have these neural network architectures that
    4:25:23 we design and we have these loss objectives that we create. And the neural network architecture,
    4:25:30 it’s kind of like a scaffold that the circuits grow on. And they sort of, it starts off with
    4:25:36 some kind of random things and it grows. And it’s almost like the objective that we train for is
    4:25:41 this light. And so we create the scaffold that it grows on and we create the light that it grows
    4:25:48 towards. But the thing that we actually create, it’s, it’s, it’s this almost biological, you know,
    4:25:55 entity or organism that we’re, that we’re studying. And so it’s very, very different from any kind of
    4:26:00 regular software engineering. Because at the end of the day, we end up with this
    4:26:05 artifact that can do all these amazing things. It can, you know, write essays and translate and,
    4:26:09 you know, understand images. It can do all these things that we have no idea how to directly
    4:26:13 create a computer program to do. And it can do that because we, we grew it. We didn’t,
    4:26:18 we didn’t write it. We didn’t create it. And so then that leaves open this question at the end,
    4:26:24 which is, what the hell is going on inside these systems? And that, you know, is, you know, to me,
    4:26:32 a really deep and exciting question. It’s, you know, a really exciting scientific question to
    4:26:36 me. It’s, it’s sort of like the question that is, is just screaming out, it’s calling out for
    4:26:41 us to go and answer it when we talk about neural networks. And I think it’s also a very deep question
    4:26:47 for safety reasons. So mechanistic interpretability, I guess, is closer to maybe neurobiology?
    4:26:51 Yeah, yeah, I think that’s right. So maybe to give an example of the kind of thing that has been done
    4:26:54 that I wouldn’t consider to be mechanistic interpretability. There was, for a long time,
    4:26:58 a lot of work on saliency maps where you would take an image and you try to say, you know,
    4:27:03 the model thinks this image is a dog. What part of the image made it think that it’s a dog?
    4:27:07 And, you know, that tells you maybe something about the model, if you can come up with a
    4:27:12 principle version of that. But it doesn’t really tell you, like, what algorithms are running on
    4:27:16 the model? How was the model actually making that decision? Maybe it’s telling you something about
    4:27:20 what was important to it if you, if you can make that method work. But it, it isn’t telling you,
    4:27:25 you know, what are, what are the algorithms that are running? How is it that this system is able
    4:27:29 to do this thing that we no one knew how to do? And so I guess we started using the term
    4:27:34 mechanistic interpretability to try to sort of draw that, that divide or to distinguish ourselves
    4:27:37 in the work that we were doing in some ways from, from some of these other things. And I think
    4:27:43 since then it’s become this sort of umbrella term for, you know, a pretty wide variety of work.
    4:27:47 But I’d say that the things that, that are kind of distinctive are, I think, A, this, this focus
    4:27:51 on, we really want to get at, you know, the mechanisms, we want to get at the algorithms.
    4:27:54 You know, if you think of, if you think of neural networks as being like a computer program,
    4:27:59 then the weights are kind of like a binary computer program. And we’d like to reverse
    4:28:03 engineer those weights and figure out what algorithms are running. So, okay, I think one way
    4:28:06 you might think of trying to understand a neural network is that it’s, it’s kind of like a, we
    4:28:10 have this compiled computer program. And the weights of the neural network are, are the binary.
    4:28:17 And when the neural network runs, that’s, that’s the activations. And our goal is ultimately to go
    4:28:20 and understand, understand these weights. And so, you know, the project of mechanistic
    4:28:24 interpretability is to somehow figure out how do these weights correspond to algorithms.
    4:28:28 And in order to do that, you also have to understand the activations because
    4:28:32 it’s sort of, the activations are like the memory. And if you, if you imagine reverse
    4:28:36 engineering our computer program, and you have the binary instructions, you know,
    4:28:40 in order to understand what, what a particular instruction means, you need to know
    4:28:43 what memory, what, what is stored in the memory that it’s operating on.
    4:28:46 And so those two things are very intertwined. So mechanistic interpretability tends to
    4:28:50 be interested in both of those things. Now, you know, there’s a lot of work that’s,
    4:28:55 that’s interested in, in, in those things, especially the, you know, there’s all this work
    4:28:59 on probing, which you might see as part of being mechanistic interpretability, although it’s,
    4:29:02 you know, again, it’s just a broad term and not everyone who does that work would identify
    4:29:06 as doing mechanistic interpretability. I think a thing that is maybe a little bit
    4:29:10 distinctive to the, the vibe of mechanistic interpretability is, I think people tend working
    4:29:15 in the space tend to think of neural networks as what maybe one way to say is that gradient descent
    4:29:19 is smarter than you that, you know, I’m gradient descent is actually really great. The whole reason
    4:29:21 that we’re understanding these models is because we didn’t know how to write them in the first place.
    4:29:25 The gradient descent comes up with better solutions than us. And so I think that maybe
    4:29:29 another thing about mechanistic interpretability is sort of having almost a kind of humility
    4:29:33 that we won’t guess a priori what’s going on inside the models. We have to have the sort
    4:29:37 of bottom up approach where we don’t really assume, you know, we don’t assume that we should look for
    4:29:40 a particular thing and that that will be there and that’s how it works. But instead we look for
    4:29:45 the bottom up and discover what happens to exist in these models and study them that way.
    4:29:52 But, you know, the very fact that it’s possible to do, and as you and others have shown over time,
    4:29:59 you know, things like universality, that the wisdom of the gradient descent creates
    4:30:05 features and circuits, creates things universally across different kinds of networks that are
    4:30:10 useful and that makes the whole field possible. Yeah. So this is actually, is indeed a really
    4:30:15 remarkable and exciting thing where it does seem like at least to some extent, you know,
    4:30:21 the same elements, the same features and circuits form again and again. You know,
    4:30:24 you can look at every vision model and you’ll find curve detectors and you’ll find
    4:30:28 high-low frequency detectors. And in fact, there’s some reason to think that the same things form
    4:30:34 across, you know, biological neural networks and artificial neural networks. So a famous example
    4:30:38 is vision models in the early layers. They have Gabor filters and there’s, you know, Gabor filters
    4:30:42 are something that neuroscientists are interested in and have thought a lot about. We find curve
    4:30:45 detectors in these models. Curve detectors are also found in monkeys. We discover these
    4:30:50 high-low frequency detectors and then some follow-up work went and discovered them in rats
    4:30:54 or mice. So they were found first in artificial neural networks and then found in biological
    4:30:58 neural networks. You know, this is a really famous result on, like, grandmother neurons or
    4:31:05 the Haley Berry neuron from Quiroga at all. And we found very similar things in vision models where
    4:31:10 this is why I was still at OpenAI and I was looking at our clip model. And you find these
    4:31:15 neurons that respond to the same entities in images and also to give a concrete example there.
    4:31:18 We found that there was a Donald Trump neuron. For some reason, I guess everyone likes to talk
    4:31:22 about Donald Trump and Donald Trump was very prominent, was a very hot topic at that time.
    4:31:26 So every neural network that we looked at, we would find a dedicated neuron for Donald Trump.
    4:31:32 And that was the only person who had always had a dedicated neuron. You know, sometimes you’d
    4:31:36 have an Obama neuron, sometimes you’d have a Clinton neuron, but Trump always had a dedicated
    4:31:42 neuron. So it responds to, you know, pictures of his face and the warred Trump, like all these
    4:31:47 things, right? And so it’s not responding to a particular example or like it’s not just responding
    4:31:52 to his face. It’s it’s abstracting over this general concept, right? So in any case, that’s
    4:31:56 very similar to these Quiroga results. So there’s evidence that these, that this phenomenon of
    4:32:01 universality, the same things form across both artificial and natural neural networks. So that’s
    4:32:06 that’s a pretty amazing thing, if that’s true. You know, it suggests that, well, I think the thing
    4:32:11 that it suggests is that gradient descent is sort of finding, you know, the right ways to cut things
    4:32:16 apart in some sense, that many systems converge on and many different neural networks architectures
    4:32:20 converge on that. There’s there’s some natural set of, you know, there’s some set of abstractions
    4:32:24 that are a very natural way to cut apart the problem and that a lot of systems are going to
    4:32:29 converge on. That would be my kind of, you know, I don’t know anything about neuroscience. This
    4:32:34 is just my my kind of wild speculation from what we’ve seen. Yeah, that would be beautiful if it’s
    4:32:41 sort of agnostic to the medium of the model that’s used to form the representation.
    4:32:47 Yeah. Yeah. And it’s, you know, it’s a kind of a wild speculation based, you know, we only have
    4:32:51 some a few data points that’s just this, but you know, it does seem like there’s there’s some
    4:32:56 sense in which the same things form again and again and again, and both in certainly a natural
    4:33:00 neural networks and also artificially or in biology. And the intuition behind that would be
    4:33:06 that, you know, in order to be useful in understanding the real world, you need all the
    4:33:10 same kind of stuff. Yeah. Well, if we pick, I don’t know, like the idea of a dog, right? Like,
    4:33:16 you know, there’s some sense in which the idea of a dog is like a natural category in the universe
    4:33:21 or something like this, right? Like, you know, there’s there’s some reason it’s not just like
    4:33:25 a weird quirk of like how humans factor, you know, think about the world that we have this concept
    4:33:30 of a dog. It’s it’s in some sense, or like if you have the idea of a line, like this, you know,
    4:33:34 like look around us, you know, the, you know, there are lines, you know, it’s sort of the simplest
    4:33:40 way to understand this room in some sense is to have the idea of a line. And so I think that
    4:33:44 that would be my instinct for why this happens. Yeah, you need a curved line, you know, to understand
    4:33:49 a circle and you need all those shapes to understand bigger things. And yeah, it’s a hierarchy of
    4:33:52 concepts that are formed. Yeah. And like maybe there are ways to go and describe, you know,
    4:33:55 images without reference to those things, right? But they’re not the simplest way or the most
    4:34:00 economical way or something like this. And so systems converge to these these these strategies
    4:34:05 would would be my my wild, wild hypothesis. Can you talk through some of the building blocks
    4:34:09 that we’ve been referencing of features and circuits? So I think you first describe them in
    4:34:18 2020 paper zoom in and introduction to circuits. Absolutely. So maybe I’ll start by just describing
    4:34:24 some phenomena. And then we can sort of build to the idea of features and circuits. If you
    4:34:30 spent like quite a few years, maybe maybe like five years to some extent, with other things,
    4:34:35 studying this one particular model inception V1, which is this one vision model that was
    4:34:41 state of the art in 2015. And, you know, very much not state of the art anymore.
    4:34:47 And it has, you know, maybe about 10,000 neurons. And I spent a lot of time looking at the 10,000
    4:34:55 neurons, odd neurons of an of inception V1. And one of the interesting things is, you know,
    4:34:58 there are lots of neurons that don’t have some obvious interpretable meaning. But there’s a lot
    4:35:05 of neurons and inception V1 that do have really clean interpretable meanings. So you find neurons
    4:35:10 that just really do seem to detect curves. And you find neurons that really do seem to detect cars
    4:35:16 and car wheels and car windows and, you know, floppy ears of dogs and dogs with long snouts
    4:35:20 facing to the right and dogs with long snouts facing to the left. And, you know, different kinds
    4:35:25 of foreign, there’s sort of this whole beautiful edge detectors, line detectors, color contrast
    4:35:29 detectors, these beautiful things we call hyalofrequency detectors. You know, I think looking
    4:35:34 at it, I sort of felt like a biologist, you know, you just you’re looking at this sort of new world
    4:35:37 of proteins. And you’re discovering all these these different proteins that interact.
    4:35:43 So one way you could try to understand these models is in terms of neurons. You could try
    4:35:47 to be like, oh, you know, there’s a dog detecting neuron, and it was a car detecting neuron.
    4:35:50 And it turns out you can actually ask how those connect together. So you can go and say, oh,
    4:35:54 you know, I have this car detecting neuron, how was it built? And it turns out in the previous
    4:35:58 layer, it’s connected really strongly to a window detector and a wheel detector and a sort of car
    4:36:02 body detector. And it looks for the window above the car and the wheels below and the car chrome
    4:36:07 sort of in the middle, sort of everywhere, but especially in the lower part. And that’s sort of
    4:36:11 a recipe for a car. That is, you know, earlier, we said that the thing we wanted from Mechantorp
    4:36:16 was to get algorithms to go and get, you know, ask what is the algorithm that runs? Well, here
    4:36:19 we’re just looking at the weights of the neuron that we’re reading off this kind of recipe for
    4:36:24 detecting cars. It’s a very simple crude recipe, but it’s it’s there. And so we call that a circuit
    4:36:31 this this connection. Well, okay, so the the problem is that not all of the neurons are
    4:36:36 interpretable. And there’s there’s reason to think we can get into this more later that there’s this
    4:36:40 this superposition hypothesis, this reason to think that sometimes the right unit to analyze
    4:36:46 things in terms of is combinations of neurons. So sometimes it’s not that there’s a single neuron
    4:36:51 that represents, say, a car. But it actually turns out after you detect the car, the model sort of
    4:36:56 hides a little bit of the car in the following layer and a bunch of a bunch of dog detectors.
    4:37:00 Why is it doing that? Well, you know, maybe it just doesn’t want to do that much work on on on on
    4:37:06 cars at that point. And you know, it’s sort of storing it away to go in. And so it turns out
    4:37:09 then the sort of subtle pattern of, you know, there’s all these neurons that you think are dog
    4:37:13 detectors, and maybe they’re primarily that, but they all a little bit contribute to representing
    4:37:18 a car in that next layer. Okay, so so now we can’t really think there there might still be
    4:37:22 some something that you I don’t know you could call like a car concept or something, but it no
    4:37:27 longer corresponds to a neuron. So we need some term for these kind of neuron like entities, these
    4:37:32 things that we sort of would have liked the neurons to be these idealized neurons, the things
    4:37:35 that are the nice neurons, but also maybe there’s more of them somehow hidden. And we call those
    4:37:41 features. And then what are circuits? So circuits are these connections of features, right? So when
    4:37:46 we have the car detector, and it’s connected to a window detector and a wheel detector,
    4:37:52 and it looks for the wheels below and the windows on top, that’s a circuit. So circuits are just
    4:37:56 collections of features connected by weights, and they implement algorithms. So they tell us, you
    4:38:02 know, how is how our features used? How are they built? How do they connect together? So maybe
    4:38:08 it’s it’s worth trying to pin down like what what really is the the core hypothesis here. I think
    4:38:13 the the core hypothesis is something we call the linear representation hypothesis. So if we think
    4:38:17 about the car detector, you know, the more it fires, the more we sort of think of that as meaning,
    4:38:24 oh, the model is more and more confident that a car is present. Or, you know, if there’s some
    4:38:27 combination of neurons that represent a car, you know, the more that combination fires, the more
    4:38:33 we think the model thinks there’s a car present. This doesn’t have to be the case, right? Like,
    4:38:37 you could imagine something where you have, you know, you have this car detector neuron,
    4:38:42 and you think, ah, you know, if it fires like, you know, between one and two, that means one thing,
    4:38:46 but it means like totally different if it’s between three and four. That would be a nonlinear
    4:38:50 representation. And in principle, that, you know, models could do that. I think it’s it’s sort of
    4:38:54 inefficient for them to do the if you try to think about how you’d implement computation like that,
    4:39:00 it’s kind of an annoying thing to do. But in principle, models can do that. So one way to think
    4:39:05 about the features and circuits sort of framework for thinking about things is that we’re thinking
    4:39:10 about things as being linear. We’re thinking about there as being that if a if a neuron or
    4:39:14 a combination neurons fires more, it’s sort of that means more of the of a particular thing being
    4:39:19 detected. And then that gives weights, a very clean interpretation as these edges between
    4:39:24 these these entities that these features, and that that edge then has a has a mean.
    4:39:31 So that’s that’s in some ways the the core thing. It’s it’s like, you know, we can talk about this
    4:39:34 sort of outside the context of neurons. Are you familiar with the word to back results?
    4:39:40 So you have like, you know, king minus man plus woman equals queen. Well, the reason you can do
    4:39:44 that kind of arithmetic is because you have a linear representation. Can you actually explain
    4:39:50 that representation a little bit? So first of all, so the feature is a direction of activation.
    4:39:56 Yeah, exactly. That way, can you do the the minus men plus women with that that the word to back
    4:40:01 stuff? Can you explain what that is? Yeah, so there’s this very such a simple clean explanation
    4:40:06 of what we’re talking about. Exactly. So there’s this very famous result word to back by Thomas
    4:40:11 Miklov at all. And there’s been tons of follow up work exploring this. So sometimes we have these,
    4:40:18 we create these word embeddings, where we map every word to a vector. I mean, not in itself,
    4:40:21 by the way, is is kind of a crazy thing if you haven’t thought about it before, right? Like we’ve
    4:40:27 we’re going in and representing returning. And, you know, like, like, if you just learned about
    4:40:31 vectors in physics class, right? And I’m like, oh, I’m going to actually turn every word in the
    4:40:35 dictionary into a vector. That’s kind of a crazy idea. Okay. But you could imagine.
    4:40:39 You could imagine all kinds of ways in which you might map words to vectors.
    4:40:46 But it seems like when we train neural networks, they like to go in and map words to vectors
    4:40:51 to such that they’re they’re they’re they’re sort of linear structure in a particular sense,
    4:40:57 which is that directions have meaning. So for instance, if you there will be some direction
    4:41:02 that seems to sort of correspond to gender, and male words will be, you know, far in one direction,
    4:41:07 and female words will be in another direction. And the linear representation hypothesis is
    4:41:10 you could sort of think of it roughly as saying that that’s actually kind of the
    4:41:14 fundamental thing that’s going on that that everything is just different directions have
    4:41:20 meanings, and adding different direction vectors together can represent concepts.
    4:41:24 And the Mickalaw paper sort of took that idea seriously. And one consequence of it is that
    4:41:28 you can you can do this game of playing sort of arithmetic with words. So you can do king and
    4:41:33 you can, you know, subtract off the word man and add the word woman. And so you’re sort of,
    4:41:36 you know, going in and trying to switch the gender. And indeed, if you do that,
    4:41:40 the result will sort of be close to the word queen. And you can, you know, do other things
    4:41:47 like you can do, you know, sushi minus Japan plus Italy and get pizza or different different
    4:41:53 things like this, right? So so this is in some sense, the core of the linear representation
    4:41:56 hypothesis, you can describe it just as a purely abstract thing about vector spaces,
    4:42:00 you can describe it as a as a statement about about the activations of neurons.
    4:42:06 But it’s really about this property of directions having meaning. And in some ways,
    4:42:10 it’s even a little subtle that it’s really, I think, mostly about this property of being able
    4:42:17 to add things together, that you can sort of independently modify, say, gender and royalty
    4:42:24 or, you know, cuisine type or country and and the concept of food by by adding them.
    4:42:28 Do you think the linear hypothesis holds that carries scales?
    4:42:34 So so far, I think everything I have seen is consistent with the hypothesis and it doesn’t
    4:42:38 have to be that way, right? Like, like you can write down neural networks where you write
    4:42:42 weights such that they don’t have linear representations where the right way to understand
    4:42:47 them is not is not in terms of linear representations. But I think every natural neural network I’ve seen
    4:42:55 has this property. There’s been one paper recently that there’s been some sort of pushing
    4:42:59 around the edge. So I think there’s been some work recently studying multi dimensional features
    4:43:05 where rather than a single direction, it’s more like a manifold of directions. This to me still
    4:43:10 seems like a linear representation. And then there’s been some other papers suggesting that maybe
    4:43:16 in in very small models, you get nonlinear representations. I think that the jury’s still
    4:43:21 out on that. But in I think everything that we’ve seen so far has been consistent with linear
    4:43:27 representation. And that’s wild. It doesn’t have to be that way. And yet I think there’s a lot
    4:43:32 of evidence that certainly at least this is very, very widespread. And so far, the evidence is
    4:43:36 consistent with that. And I think, you know, one thing you might say is you might say, well,
    4:43:41 Christopher, you know, it’s that’s a lot, you know, to to go and sort of to write on, you know,
    4:43:44 if we don’t know for sure, this is true. And you’re sort of, you know, you’re investing in
    4:43:49 neural networks as though it is true. You know, isn’t that isn’t that dangerous? Well, you know,
    4:43:54 but I think actually there’s a virtue in taking hypotheses seriously and pushing them as far as
    4:43:59 they can go. So it might be that someday we discover something that isn’t consistent with
    4:44:04 linear representation hypothesis. But science is full of hypotheses and theories that were wrong.
    4:44:09 And we learned a lot by sort of working under under them as a sort of an assumption.
    4:44:14 And and then going and pushing them as far as we can. I guess I guess this is sort of the heart of
    4:44:19 of what Kuhn would call normal, normal science. And I don’t know, if you want, we can talk a lot
    4:44:24 about about philosophy of science and that leads to the paradigm shift. So yeah, I love it taking
    4:44:29 the hypothesis seriously and take it to a natural, natural conclusion. Yeah. Same with the scaling
    4:44:35 hypothesis, same. Exactly. Exactly. And I love it. One of my colleagues, Tom Hennigan, who is a
    4:44:44 former physicist, made this really nice analogy to me of caloric theory, where once upon a time we
    4:44:51 thought that heat was actually this thing called caloric. And the reason hot objects would warm
    4:44:57 up cool objects is the caloric is flowing through them. And because we’re so used to thinking about
    4:45:02 heat in terms of the modern and modern theory, that seems kind of silly. But it’s actually very
    4:45:09 hard to construct an experiment that sort of disproves the caloric hypothesis. And you know,
    4:45:13 you can actually do a lot of really useful work believing in caloric. For example, it turns out
    4:45:18 that the original combustion engines were developed by people who believed in the caloric
    4:45:23 theory. So I think it’s a virtue in taking hypotheses seriously, even when they might be wrong.
    4:45:28 Yeah. Yeah. There’s a deep philosophical choice to that. That’s kind of how I feel about space
    4:45:33 travel. Like colonizing Mars, there’s a lot of people that criticize that. I think if you just
    4:45:38 assume we have to colonize Mars in order to have a backup for human civilization, even if that’s
    4:45:44 not true, that’s going to produce some interesting engineering and even scientific breakthroughs,
    4:45:47 I think. Yeah. Well, and actually, this is another thing that I think is really interesting. So,
    4:45:54 you know, there’s a way in which I think it can be really useful for society to have people
    4:46:03 almost irrationally dedicated to investigating particular hypotheses. Because, well, it takes
    4:46:08 a lot to sort of maintain scientific morale and really push on something when most scientific
    4:46:16 hypotheses end up being wrong. You know, a lot of science doesn’t work out. And yet it’s very
    4:46:23 useful. There’s a joke about Jeff Hinton, which is that Jeff Hinton has discovered how the brain
    4:46:31 works every year for the last 50 years. But, you know, I say that with really deep respect,
    4:46:35 because in fact, that’s actually, you know, that led to him doing some really great work.
    4:46:41 Yeah, he won the Nobel Prize now who’s laughing now. Exactly. I think one wants to be able to
    4:46:45 pop up and sort of recognize the appropriate level of confidence. But I think there’s also a lot of
    4:46:51 value in just being like, you know, I’m going to essentially assume I’m going to condition on
    4:46:56 this problem being possible or this being broadly the right approach. And I’m just going to go and
    4:47:03 assume that for a while and go and work within that and push really hard on it. And, you know,
    4:47:07 society has lots of people doing that for different things. That’s actually really useful in terms of
    4:47:16 going and getting to, you know, either really, really ruling things out, right? We can be like,
    4:47:20 well, you know, that didn’t work. And we know that somebody tried hard or going and getting to
    4:47:24 something that does teach us something about the world. So another interesting hypothesis is the
    4:47:29 superposition hypothesis. Can you describe what superposition is? Yeah. So earlier, we were talking
    4:47:32 about word to fact, right? And we were talking about how, you know, maybe you have one direction
    4:47:36 that corresponds to gender and maybe another that corresponds to royalty and another one
    4:47:40 that corresponds to Italy and another one that corresponds to, you know, food and all of these
    4:47:47 things. Well, you know, oftentimes, maybe these word embeddings, they might be 500 dimensions,
    4:47:51 a thousand dimensions. And so if you believe that all of those directions were orthogonal,
    4:47:58 then you could only have, you know, 500 concepts. And, you know, I love pizza. But like, if I was
    4:48:03 going to go and like give the like 500 most important concepts in, you know, the English language,
    4:48:08 probably Italy wouldn’t be, it’s not obvious at least that Italy would be one of them, right?
    4:48:15 Because you have to have things like plural and singular and verb and noun and adjective. And,
    4:48:22 you know, there’s a lot of things we have to get to before we get to Italy and Japan and, you know,
    4:48:28 there’s a lot of countries in the world. And so how might it be that models could, you know,
    4:48:34 simultaneously have the linear representation hypothesis be true and also represent more
    4:48:38 things than they have directions? So what does that mean? Well, okay, so if linear representation
    4:48:43 hypothesis is true, something interesting has to be going on. Now, I’ll tell you one more
    4:48:48 interesting thing before we go and we do that, which is, you know, earlier we were talking about
    4:48:52 all these polysematic neurons, right? And these neurons that, you know, when we were looking at
    4:48:55 inception V1, there’s these nice neurons that like the car detector and the curve detector and so on
    4:49:00 that respond to lots of, you know, to very coherent things. But lots of neurons that respond to a
    4:49:05 bunch of unrelated things. And that’s also an interesting phenomenon. And it turns out as well
    4:49:09 that even these neurons that are really, really clean, if you look at the weak activations, right?
    4:49:15 So if you look at like, you know, the activations where it’s like activating 5% of the, you know,
    4:49:20 of the maximum activation, it’s really not the core thing that it’s expecting, right? So if you
    4:49:24 look at a curve detector, for instance, and you look at the places where it’s 5% active,
    4:49:28 you know, you could interpret it just as noise or it could be that it’s doing something else there.
    4:49:37 Okay, so how could that be? Well, there’s this amazing thing in mathematics called compressed
    4:49:43 sensing. And it’s actually this very surprising fact where you have a high dimensional space
    4:49:49 and you project it into a low dimensional space. Ordinarily, you can’t go and sort of
    4:49:52 unprojected and get back your high dimensional vector, right? You threw information away. This
    4:49:57 is like, you know, you can’t, you can’t invert a rectangular matrix. You can only invert square
    4:50:04 matrices. But it turns out that that’s actually not quite true. If I tell you that the high
    4:50:10 dimensional vector was sparse, so it’s mostly zeros, then it turns out that you can often go
    4:50:18 and find back the high dimensional vector with very high probability. So that’s a surprising
    4:50:22 fact, right? It says that, you know, you can, you can, you can have this high dimensional vector
    4:50:27 space. And as long as things are sparse, you can project it down, you can have a lower dimensional
    4:50:33 projection of it. And that works. So the suicide hypothesis is saying that that’s what’s going
    4:50:36 on in neural networks. That’s, for instance, that’s what’s going on in word embeddings.
    4:50:40 The word embeddings are able to simultaneously have directions be the meaningful thing.
    4:50:44 And by exploiting the fact that they’re, they’re operating on a fairly high dimensional space,
    4:50:47 they’re actually, and the fact that these concepts are sparse, right? Like, you know,
    4:50:52 you usually aren’t talking about Japan and Italy at the same time. You know, most of the, most of
    4:50:56 those concepts, you know, in most sentences, Japan and Italy are both zero. They’re not present at
    4:51:04 all. And if that’s true, then you can go and have it be the case that, that you can, you can have
    4:51:08 many more of these sort of directions that are meaningful, these features,
    4:51:12 then you have dimensions. And similarly, when we’re talking about neurons, you can have many
    4:51:17 more concepts than you have, have neurons. So that’s the at a high level of superstition hypothesis.
    4:51:27 Now, it has this even wilder implication, which is to go and say that neural networks are, it
    4:51:31 may not just be the case that the representations are like this, but the computation may also be
    4:51:36 like this, you know, the connections between all of them. And so in some sense, neural networks may
    4:51:44 be shadows of much larger sparser neural networks. And what we see are these projections. And the
    4:51:47 super, you know, the strongest version of superstition hypothesis would be to take that
    4:51:50 really seriously and sort of say, you know, there, there actually isn’t some sense this,
    4:51:55 this upstairs model, this, you know, where, where the neurons are really sparse and all
    4:51:58 interpretable. And there’s, you know, the weights between them are these really sparse
    4:52:05 circuits. And that’s what we’re studying. And the thing that we’re observing is the
    4:52:08 shadow of it. And so we need to find the original object.
    4:52:14 And the process of learning is trying to construct a compression of the upstairs model
    4:52:17 that doesn’t lose too much information in the projection.
    4:52:21 Yeah, it’s finding how to fit it efficiently or something like this. The gradient descent is
    4:52:25 doing this. And in fact, so this sort of says that gradient descent, you know, it could just
    4:52:29 represent a dense neural network, but it sort of says that gradient descent is pleasantly searching
    4:52:34 over the space of extremely sparse models that could be projected into this low dimensional
    4:52:39 space. And this large body of work of people going and trying to study sparse neural networks,
    4:52:42 right, where you go and you have, you could design neural networks, right, where the edges are sparse
    4:52:47 and activations are sparse. And, you know, my sense is that work is generally, it feels very
    4:52:52 principled, right? It makes so much sense. And yet that work hasn’t really panned out that well as
    4:52:58 my impression broadly. And I think that a potential answer for that is that actually,
    4:53:03 the neural network is already sparse in some sense, gradient descent was the whole time gradient,
    4:53:06 you were trying to go and do this gradient descent was actually in the behind the scenes going and
    4:53:10 searching more efficiently than you could through the space of sparse models and going and learning
    4:53:16 whatever sparse model was most efficient and then figuring out how to fold it down nicely to go and
    4:53:20 run conveniently on your GPU, which does, you know, as nice dense matrix multiplies. And that you
    4:53:26 just can’t beat that. How many concepts do you think can be shoved into a neural network?
    4:53:30 Depends on how sparse they are. So there’s probably an upper bound from the number of
    4:53:34 parameters, right? Because you have to have, you still have to have, you know, primed weights that
    4:53:38 go and connect them together. So that’s, that’s one upper bound. There are in fact all these
    4:53:43 lovely results from compressed sensing and the Johnson-Lindon stress lemma and things like this
    4:53:48 that they basically tell you that if you have a vector space and you want to have
    4:53:52 almost orthogonal vectors, which is sort of the probably the thing that you want here, right?
    4:53:56 So you’re going to say, well, you know, I’m going to give up on having my concepts, my features be
    4:53:59 strictly orthogonal, but I’d like them to not interfere that much. I’m going to have to ask
    4:54:04 them to be almost orthogonal. Then this would say that it’s actually, you know, for once you set a
    4:54:10 threshold for what you’re willing to accept in terms of how much cosine similarity there is,
    4:54:14 that it’s actually exponential in the number of neurons that you have. So at some point,
    4:54:19 that’s not going to even be the limiting factor. But there are some beautiful results there. In
    4:54:23 fact, it’s probably even better than that in some sense, because that’s sort of for saying that,
    4:54:27 you know, any random set of features could be active. But in fact, the features have sort of a
    4:54:31 correlational structure where some features, you know, are more likely to co-occur and other ones
    4:54:36 are less likely to co-occur. And so neural networks, my guess would be, could do very well in terms of
    4:54:42 going and packing things in such to the point that’s probably probably not the limiting factor.
    4:54:46 How does the problem of polysemiticity enter the picture here?
    4:54:50 Polysemiticity is this phenomenon we observe where you look at many neurons and the neuron
    4:54:55 doesn’t just sort of represent one concept. It’s not a clean feature. It responds to a bunch of
    4:55:01 unrelated things. And superposition is, you can think of as being a hypothesis that explains
    4:55:08 the observation of polysemiticity. So polysemiticity is this observed phenomenon and superposition is
    4:55:11 is a hypothesis that would explain it along with some other.
    4:55:14 So that makes mech and turb more difficult.
    4:55:17 Right. So if you’re trying to understand things in terms of individual neurons
    4:55:20 and you have polysemitic neurons, you’re in an awful lot of trouble, right?
    4:55:23 I mean, the easiest answer is like, okay, well, you know, you’re looking at the neurons,
    4:55:26 you’re trying to understand them. This one responds for a lot of things. It doesn’t have
    4:55:32 a nice meaning. Okay, that’s bad. Another thing you could ask is, ultimately, we want to understand
    4:55:37 the weights. And if you have two polysemitic neurons and each one responds to three things,
    4:55:40 and then the other neuron responds to three things and you have a weight between them,
    4:55:46 what does that mean? Does it mean that all three, there’s these nine interactions going on?
    4:55:51 It’s a very weird thing. But there’s also a deeper reason, which is related to the fact that neural
    4:55:56 networks operate on really high dimensional spaces. So I said that our goal was to understand
    4:56:01 neural networks and understand the mechanisms. And one thing you might say is like, well, why not?
    4:56:04 It’s just a mathematical function. Why not just look at it, right? Like, you know, one of the
    4:56:08 earliest projects I did studied these neural networks that mapped two-dimensional spaces to
    4:56:12 two-dimensional spaces. And you can sort of interpret them in this beautiful way as like
    4:56:17 bending manifolds. Why can’t we do that? Well, you know, as you have a higher dimensional space,
    4:56:23 the volume of that space in some senses is exponential in the number of inputs you have.
    4:56:28 And so you can’t just go and visualize it. So we somehow need to break that apart. We need to
    4:56:34 somehow break that exponential space into a bunch of things that we, you know, some non-exponential
    4:56:39 number of things that we can reason about independently. And the independence is crucial
    4:56:42 because it’s the independence that allows you to not have to think about, you know, all the
    4:56:50 exponential combinations of things. And things being monosomatic, things only having one meaning,
    4:56:54 things having a meaning. That is the key thing that allows you to think about them independently.
    4:56:59 And so I think that’s, if you want the deepest reason why we want to have
    4:57:04 interpretable monosomatic features, I think that’s really the deep reason.
    4:57:09 And so the goal here, as your recent work has been aiming at is how do we extract the
    4:57:15 monosomatic features from a neural net that has polysemantic features and all this mess?
    4:57:19 Yes, we have, we observe these polysematic neurons and we hypothesize that what’s going,
    4:57:22 what’s going on a superposition. And if superposition is what’s going on,
    4:57:27 there is actually a sort of well-established technique that is sort of the principal thing to
    4:57:32 do, which is dictionary learning. And it turns out, if you do dictionary learning, in particular,
    4:57:35 if you do the sort of a nice efficient way that in some sense, in some sense, sort of nicely
    4:57:40 regularizes it as well, as well, called a sparse autoencoder. If you train a sparse autoencoder,
    4:57:44 these beautiful interpretable features start to just fall out where there weren’t any beforehand.
    4:57:49 And so that’s not a thing that you would necessarily predict, right? But it turns out
    4:57:55 that that works very, very well. To me, that seems like, you know, some non-trivial validation
    4:57:59 of linear representations and superposition. So with dictionary learning, you’re not looking for
    4:58:02 particular kind of categories, you don’t know what they are. Exactly, yeah. They just emerge.
    4:58:05 And this gets back to our earlier point, right? When we’re not making assumptions,
    4:58:08 gradient descent is smarter than us. So we’re not making assumptions about what’s there.
    4:58:14 I mean, one certainly could do that, right? One could assume that there’s a PHP feature
    4:58:17 and go and search for it. But we’re not doing that. We’re saying we don’t know what’s going to
    4:58:21 be there. Instead, we’re just going to go and let the sparse autoencoder discover the things
    4:58:27 that are there. So can you talk to the toward monosimisticity paper from October last year?
    4:58:31 That had a lot of nice breakthrough results. That’s very kind of you to describe it that way.
    4:58:39 Yeah, I mean, this was our first real success using sparse autoencoders. So we took a one-layer
    4:58:45 model and it turns out if you go and do dictionary learning on it, you find all these really nice
    4:58:51 interpretable features. So the Arabic feature, the Hebrew feature, the base 64 features were
    4:58:54 some examples that we studied in a lot of depth and really showed that they were
    4:58:58 what we thought they were. It turns out if you train a model twice as well and train two different
    4:59:02 models and do dictionary learning, you find analogous features in both of them. So that’s fun.
    4:59:08 You find all kinds of different features. So that was really just showing that this works.
    4:59:13 I should mention that there was this Cunningham at all that had very similar results around the
    4:59:18 same time. There’s something fun about doing these kinds of small-scale experiments and finding
    4:59:25 that it’s actually working. Yeah, well, and there’s so much structure here. So maybe stepping back
    4:59:32 for a while, I thought that maybe all this mechanistic interpretability work, the end result
    4:59:36 was going to be that I would have an explanation for why it was very hard and not going to be
    4:59:40 tractable. I mean, we’d be like, well, there’s this problem of supersession and it turns out
    4:59:45 supersession is really hard and we’re kind of screwed. But that’s not what happened. In fact,
    4:59:50 a very natural, simple technique just works. And so then that’s actually a very good situation.
    4:59:55 You know, I think this is a sort of hard research problem and it’s got a lot of research risk and
    4:59:59 you know, it might still very well fail. But I think that some amount of some very significant
    5:00:03 amount of research risk was sort of put behind us when that started to work.
    5:00:07 Can you describe what kind of features can be extracted in this way?
    5:00:12 Well, so it depends on the model that you’re studying, right? So the larger the model,
    5:00:14 the more sophisticated they’re going to be. And we’ll probably talk about that fall-up
    5:00:21 work in a minute. But in these one-layer models, so some very common things I think were languages,
    5:00:24 both programming languages and natural languages. There were a lot of features that were
    5:00:30 specific words in specific contexts. So “the,” and I think really the way to think about this is that
    5:00:34 “the” is likely about to be followed by a noun. So it’s really right. You could think of this as
    5:00:37 a “the” feature, but you could also think of this as predicting a specific noun feature.
    5:00:44 And there would be these features that would fire for “the” in the context of, say, a legal document
    5:00:51 or a mathematical document or something like this. And so, you know, maybe in the context of math,
    5:00:55 you’re like, you know, “the” and then “product vector” or “matrix,” you know, all these mathematical
    5:00:59 words. Whereas, you know, in other contexts, you would predict other things. That was common.
    5:01:05 And basically, we need clever humans to assign labels to what we’re seeing.
    5:01:09 Yes. So, you know, this is the only thing this is doing is that sort of
    5:01:14 unfolding things for you. So if everything was sort of folded over top of it, you know,
    5:01:17 series-ish and folded everything on top of itself, and you can’t really see it,
    5:01:21 this is unfolding it. But now you still have a very complex thing to try to understand.
    5:01:24 So then you have to do a bunch of work understanding what these are.
    5:01:28 And some of them are really subtle. Like, there’s some really cool things,
    5:01:31 even in this one-year model about Unicode, where, you know, of course,
    5:01:35 some languages are in Unicode and the tokenizer won’t necessarily have a dedicated
    5:01:41 token for every Unicode character. So instead, what you’ll have is you’ll have these patterns
    5:01:46 of alternating tokens that each represent half of a Unicode character. And you have a different
    5:01:51 feature that, you know, goes and activates on the opposing ones to be like, okay, you know,
    5:01:56 I just finished a character, you know, go and predict next prefix. Then, okay, I’m on the prefix,
    5:02:01 you know, predict a reasonable suffix. And you have to alternate back and forth. So there’s,
    5:02:05 you know, these, these one-year models are really interesting. And I mean,
    5:02:08 it’s another thing that just, you might think, okay, there would just be one base 64 feature.
    5:02:12 But it turns out there’s actually a bunch of base 64 features, because you can have
    5:02:16 English text encoded in as base 64. And that has a very different distribution
    5:02:22 of base 64 tokens than, than regular. And there’s, there’s, there’s some things about
    5:02:26 tokenization as well that it can exploit. And I don’t know, there’s all kinds of fun stuff.
    5:02:30 How difficult is the task of sort of assigning labels
    5:02:33 to what’s going on? Can this be automated by AI?
    5:02:37 Well, I think it depends on the feature. And it also depends on how much you trust your AI.
    5:02:43 So there’s a lot of work doing automated interpretability. I think that’s a really
    5:02:46 exciting direction. And we do a fair amount of automated interpretability and have,
    5:02:48 have Claude go and label our features.
    5:02:53 Is there some funny moments where it’s totally right or it’s totally wrong?
    5:02:56 Yeah. Well, I think, I think it’s very common that it’s like,
    5:03:02 says something very general, which is like true in some sense, but not really picking up
    5:03:08 on the specific of what’s going on. So I think, I think that’s a pretty common situation.
    5:03:12 You don’t know that I have a particularly amusing one.
    5:03:16 That’s interesting. That little gap between it is true, but it doesn’t quite get
    5:03:21 to the deep nuance of a thing. That’s a general challenge.
    5:03:25 It’s like, it’s, it’s certainly an incredible accomplishment that can say a true thing,
    5:03:30 but it doesn’t, it’s not, it’s missing the depth sometimes.
    5:03:34 And in this context, it’s like the arc challenge, you know, the sort of IQ type of tests.
    5:03:41 It feels like figuring out what a feature represents is a bit of a little puzzle you have to solve.
    5:03:44 Yeah. And I think that sometimes they’re easier and sometimes they’re harder as well.
    5:03:50 So yeah, I think, I think that’s tricky. And there’s another thing, which I don’t know, maybe,
    5:03:55 maybe in some ways this is my like aesthetic coming in, but I’ll try to give you a rationalization.
    5:03:58 You know, I’m actually a little suspicious of automated interoperability.
    5:04:01 And I think that partly just that I want humans to understand neural networks.
    5:04:05 And if the neural network is understanding it for me, you know, I’m not, I don’t quite like that.
    5:04:08 But I do have a bit of a, you know, in some ways I’m sort of like the mathematicians who are like,
    5:04:10 you know, if there’s a computer automated proof, it doesn’t count.
    5:04:14 You know, you, they won’t understand it. But I do also think that there is
    5:04:20 this kind of like reflections on trusting trust type issue where if you, there’s this famous talk
    5:04:26 about, you know, like when you’re writing a computer program, you have to trust your compiler.
    5:04:30 And if there was like malware in your compiler, then it could go and inject malware into the
    5:04:33 next compiler. And, you know, you’d be in kind of in trouble, right? Well, if you’re using neural
    5:04:39 networks to go and verify that your neural networks are safe, the hypothesis that you’re
    5:04:43 testing for is like, okay, well, the neural network maybe isn’t safe. And you have to worry
    5:04:48 about like, is there some way that it could be screwing with you? So, you know, I think that’s
    5:04:53 not a big concern now. But I do wonder in the long run, if we have to use really powerful
    5:04:58 AI systems to go and, you know, audit our AI systems, is that, is that actually something we
    5:05:02 can trust? But maybe I’m just rationalizing because I, I just want us to have to get to a
    5:05:06 point where humans understand everything. Yeah, I mean, especially that’s hilarious,
    5:05:10 especially as we talk about AI safety and looking for features that would be relevant
    5:05:17 to AI safety, like deception and so on. So, let’s talk about the scaling monosemicity paper
    5:05:23 in May 2024. Okay. So, what did it take to scale this to apply to cloud 3s on it?
    5:05:28 Well, a lot of GPUs. A lot more GPUs. But one of my teammates, Tom Hennigan,
    5:05:35 was involved in the original scaling laws work. And something that he was sort of
    5:05:39 interested in from very early on is, are there scaling laws for interoperability?
    5:05:47 And so, something he sort of immediately did when, when this work started to succeed and we
    5:05:50 started to have sparse autoencoders work, because it became very interested in, you know, what are
    5:05:57 the scaling laws for, you know, for making, making sparse autoencoders larger? And how
    5:06:03 does that relate to making the base model larger? And so, it turns out this works really well and
    5:06:08 you can use it to sort of project, you know, if you train a sparse autoencoder at a given size,
    5:06:11 you know, how many tokens should you train on? And so on. So, this was actually a very big help to us
    5:06:17 in scaling up this work and made it a lot easier for us to go and train, you know, really large
    5:06:22 sparse autoencoders, where, you know, it’s not like training the big models, but it’s starting
    5:06:26 to get to a point where it’s actually, actually expensive to go and train the really big ones.
    5:06:30 So, you have to, I mean, you have to do all the stuff of like splitting it across
    5:06:34 large. Oh, yeah, no, I mean, there’s a huge engineering challenge here too, right? So,
    5:06:39 yeah, so there’s a scientific question of how you scale things effectively. And then there’s
    5:06:42 an enormous amount of engineering to go and scale it up. You have to, you have to shard it. You
    5:06:46 have to, you have to think very carefully about a lot of things. I’m lucky to work with a bunch
    5:06:49 of great engineers because I am definitely not a great engineer. Yeah, and the infrastructure,
    5:06:56 especially, yeah, for sure. So, it turns out, TODR, it worked. It worked, yeah. And I think this is
    5:06:59 important because you could have imagined, like you could have imagined a world where you set
    5:07:04 after towards monosmenticity. You know, Chris, this is great. You know, it works on a one-layer
    5:07:08 model, but one-layer models are really idiosyncratic. Like, you know, maybe, maybe that’s just
    5:07:12 something, like, maybe the linear representation hypothesis and superposition hypothesis is the
    5:07:16 right way to understand a one-layer model, but it’s not the right way to understand larger models.
    5:07:22 And so, I think, I mean, first of all, like, the Cunningham et al paper sort of cut through that a
    5:07:26 little bit and sort of suggested that this wasn’t the case, but scaling monosmenticity sort of,
    5:07:31 I think, was significant evidence that, even for very large models, and we did it on Claude III
    5:07:36 Sonnet, which at that point was one of our production models, you know, even these models
    5:07:43 seem to be very, you know, seem to be substantially explained, at least, by linear features and,
    5:07:46 you know, doing dictionary running on them works. And as you learn more features, you go and you
    5:07:51 explain, explain more and more. So, that’s, I think, a quite a promising sign. And you find,
    5:07:57 now, really fascinating abstract features. And the features are also multimodal. They
    5:08:00 respond to images and text for the same concept, which is fun.
    5:08:06 Yeah, this, can you explain that? I mean, like, you know, backdoor, there’s just a lot of examples
    5:08:09 that you can. Yeah, so maybe, maybe let’s start with one example to start, which is,
    5:08:13 we found some features around sort of security vulnerabilities and backdoors and codes. So,
    5:08:17 it turns out those are actually two different features. So, there’s a security vulnerability
    5:08:22 feature. And if you force it active, Claude will start to go and write security vulnerabilities,
    5:08:27 like buffer overflows into code. And it also fires for all kinds of things, like, you know,
    5:08:33 some of the top data set examples for things like, you know, dash, dash, disable, you know,
    5:08:38 SSL or something like this, which are sort of obviously really, really insecure.
    5:08:44 So, at this point, it’s kind of like, maybe it’s just because the examples are presented that way,
    5:08:51 it’s kind of like surf a little bit more obvious examples, right? I guess the, the idea is that
    5:08:56 down the line, it might be able to detect more nuanced like deception or bugs or that kind of
    5:09:02 stuff. Yeah, well, I may want to distinguish two things. So, one is the complexity of the feature
    5:09:10 or the concept, right? And the other is the, the nuance of the, how subtle the examples we’re looking
    5:09:15 at, right? So, when we show the top data set examples, those are the most extreme examples
    5:09:20 that cause that feature to activate. And so, it doesn’t mean that it doesn’t fire for more subtle
    5:09:27 things. So, the insecure code feature, you know, the stuff that it fires for most strongly for
    5:09:36 these like really obvious, you know, disable the security type things. But, you know, it also fires
    5:09:41 for, you know, buffer overflows and more subtle security vulnerabilities in code.
    5:09:44 You know, these features are all multimodal. So, you could ask like, what images activate this
    5:09:52 feature? And it turns out that the, the security vulnerability feature activates for images of
    5:09:58 like people clicking on Chrome to like go past the like, you know, this, this website,
    5:10:01 the SSL certificate might be wrong or something like this. Another thing that’s very entertaining
    5:10:05 is there’s backdoors in code feature, like you activate it. It goes in, Claude writes a backdoor
    5:10:09 that like will go and dump your data to port or something. But, you can ask, okay, what, what
    5:10:14 images activate the backdoor feature? It was devices with hidden cameras in them. So, there’s a whole
    5:10:20 apparently genre of people going and selling devices that look innocuous that have hidden
    5:10:24 cameras and they have ads at how there’s a hidden camera in it. And I guess that is the, you know,
    5:10:29 physical version of a backdoor. And so, it sort of shows you how abstract these concepts are,
    5:10:35 right? And I just thought that was, I mean, I’m sort of sad that there’s a whole market of people
    5:10:38 selling devices like that. But I was kind of delighted that that was the thing that it came
    5:10:43 up with as the, the top image examples for the feature. Yeah, it’s nice. It’s multimodal. It’s
    5:10:50 multi almost context. It’s broad, strong definition of a singular concept. It’s nice. Yeah. To me,
    5:10:57 one of the really interesting features, especially for AI safety is deception and lying and the
    5:11:03 possibility that these kinds of methods could detect lying in a model, especially gets smarter
    5:11:09 and smarter and smarter. Presumably, that’s a big threat of a super intelligent model that it can
    5:11:15 deceive the people operating it as to its intentions or any of that kind of stuff. So,
    5:11:19 what have you learned from detecting lying inside models?
    5:11:26 Yeah. So, I think we’re in some ways in early days for that. We find quite a few features
    5:11:32 related to deception and lying. There’s one feature where it fires for people lying and
    5:11:36 being deceptive and you force it active and Claude starts lying to you. So, we have a deception
    5:11:40 feature. I mean, there’s all kinds of other features about withholding information and not
    5:11:45 answering questions. Features about power seeking and coups and stuff like that. So,
    5:11:48 there’s a lot of features that are kind of related to spooky things. And if you
    5:11:54 force them active, Claude will behave in ways that are not the kinds of behaviors you want.
    5:12:01 What are possible next exciting directions to you in the space of Mechandurb?
    5:12:02 Well, there’s a lot of things.
    5:12:11 So, for one thing, I would really like to get to a point where we have circuits where we can
    5:12:18 really understand not just the features, but then use that to understand the computation of models.
    5:12:25 That really for me is the ultimate goal of this. And there’s been some work we put out a few things.
    5:12:29 There’s a paper from Sam Marks that does some stuff like this. There’s been some,
    5:12:32 I’d say, some work around the edges here. But I think there’s a lot more to do and I think
    5:12:39 that will be a very exciting thing. That’s related to a challenge we call interference weights
    5:12:45 where due to superposition, if you just sort of naively look at whether features are
    5:12:50 connected together, there may be some weights that sort of don’t exist in the upstairs model,
    5:12:55 but are just sort of artifacts of superposition. So, that’s a sort of technical challenge for
    5:13:04 that. I think another exciting direction is just you might think of sparse auto encoders as being
    5:13:11 kind of like a telescope. They allow us to look out and see all these features that are out there.
    5:13:15 And as we build better and better sparse auto encoders, get better and better at dictionary
    5:13:22 learning, we see more and more stars. And we zoom in on smaller and smaller stars. But there’s
    5:13:27 kind of a lot of evidence that we’re only still seeing a very small fraction of the stars. There’s
    5:13:33 a lot of matter in our neural network universe that we can’t observe yet. And it may be that
    5:13:37 we’ll never be able to have fine enough instruments to observe it. And maybe some of it just
    5:13:42 isn’t possible, isn’t combinationally tractable to observe it. So, it’s sort of a kind of dark
    5:13:47 matter in not maybe the sense of modern astronomy, of earlier astronomy, when we didn’t know what
    5:13:52 this unexplained matter is. And so, I think a lot about that dark matter and whether we’ll
    5:13:58 ever observe it and what that means for safety if we can’t observe it. If there’s some significant
    5:14:04 fraction of neural networks are not accessible to us. Another question that I think a lot about
    5:14:10 is at the end of the day, mechanistic interpolation is this very microscopic
    5:14:14 approach to interpolation. It’s trying to understand things in a very fine-grained way.
    5:14:20 But a lot of the questions we care about are very macroscopic. We care about these questions
    5:14:25 about neural network behavior. I think that’s the thing that I care most about, but there’s
    5:14:34 lots of other larger scale questions you might care about. And somehow, the nice thing about
    5:14:38 having a very microscopic approach is it’s maybe easier to ask, is this true? But the downside is
    5:14:43 it’s much further from the things we care about. And so, we now have this ladder to climb. And I
    5:14:47 think there’s a question of, will we be able to find, are there sort of larger scale abstractions
    5:14:53 that we can use to understand neural networks that we get up from this very microscopic approach?
    5:14:57 Yeah, you’ve written about this kind of organs question.
    5:14:59 Yeah, exactly.
    5:15:04 If we think of interpretability as a kind of anatomy of neural networks, most of the
    5:15:09 circus threats involve studying tiny little veins looking at the small scale and individual
    5:15:14 neurons and how they connect. However, there are many natural questions that the small scale
    5:15:20 approach doesn’t address. In contrast, the most prominent abstractions in biological anatomy
    5:15:26 involve larger scale structures, like individual organs, like the heart or entire organ systems,
    5:15:32 like the respiratory system. And so, we wonder, is there a respiratory system or heart or brain
    5:15:34 region of an artificial neural network?
    5:15:39 Yeah, exactly. I mean, if you think about science, a lot of scientific fields have
    5:15:46 investigate things at many levels of abstraction. So, in biology, you have molecular biology,
    5:15:50 studying proteins and molecules and so on. And they have cellular biology. And then,
    5:15:54 you have histology, studying tissues. And then, you have anatomy. And then, you have zoology.
    5:15:58 And then, you have ecology. And so, you have many, many levels of abstraction. Or physics,
    5:16:03 maybe you have the physics of individual particles. And then, statistical physics gives you
    5:16:06 thermodynamics and things like that. And so, you often have different levels of abstraction.
    5:16:13 And I think that right now, mechanistic interpolity, if it succeeds, is sort of like a microbiology
    5:16:20 of neural networks. But we want something more like anatomy. And so, and a question you might
    5:16:24 ask is, why can’t you just go there directly? And I think the answer is superposition, at least
    5:16:31 in significant parts. It’s actually very hard to see this macroscopic structure without first
    5:16:35 sort of breaking down the microscopic structure in the right way and then studying how it connects
    5:16:42 together. But I’m hopeful that there is going to be something much larger than features and circuits.
    5:16:46 And that we’re going to be able to have a story that involves much bigger things. And then,
    5:16:49 you can sort of study in detail the parts you care about.
    5:16:54 I suppose the neurobiology, like a psychologist or psychiatrist of a neural network.
    5:16:59 And I think that the beautiful thing would be if we could go and, rather than having disparate
    5:17:02 fields for those two things, if you could have a build a bridge between them,
    5:17:10 such that you could go and have all of your higher level abstractions be grounded very firmly
    5:17:16 in this very solid, more rigorous, ideally, foundation.
    5:17:22 What do you think is the difference between the human brain, the biological neural network,
    5:17:25 and the artificial neural network? Well, the neuroscientists have a much harder job than us.
    5:17:30 Sometimes I just count my blessings by how much easier my job is than the neuroscientists.
    5:17:36 So we can record from all the neurons. We can do that on arbitrary amounts of data.
    5:17:42 The neurons don’t change while you’re doing that, by the way. You can go and ablate neurons,
    5:17:46 you can edit the connections, and so on. And then you can undo those changes.
    5:17:51 That’s pretty great. You can intervene on any neuron and force it active and see what happens.
    5:17:55 You know which neurons are connected to everything. Neuroscientists want to get the
    5:17:58 connectome. We have the connectome, and we have it for much bigger than the elegans.
    5:18:05 And then not only do we have the connectome, we know which neurons excite or inhibit each
    5:18:11 other. It’s not just that we know the binary mask. We know the weights. We can take gradients.
    5:18:16 We know computationally what each neuron does. So I don’t know. The list goes on and on. We just
    5:18:22 have so many advantages over neuroscientists. And then just by having all those advantages,
    5:18:28 it’s really hard. And so one thing I do sometimes think is like, gosh, if it’s this hard for us,
    5:18:31 it seems impossible under the constraints of neuroscience or near impossible.
    5:18:36 I don’t know. Maybe part of me is like, I’ve got a few neuroscientists on my team. Maybe I’m
    5:18:41 sort of like, ah, maybe the neuroscientists, maybe some of them would like to have an easier problem
    5:18:48 that’s still very hard. And they could come and work on neural networks. And then after we figure
    5:18:52 out things in sort of the easy little pond of trying to understand neural networks, which is
    5:18:56 still very hard, then we could go back to biological neuroscience.
    5:18:59 I love what you’ve written about the goal of mech and terp research
    5:19:05 as two goals, safety and beauty. So can you talk about the beauty side of things?
    5:19:11 Yeah. So there’s this funny thing where I think some people are kind of disappointed
    5:19:16 by neural networks, I think, where they’re like, ah, neural networks, it’s just these
    5:19:20 simple rules. And then you just do a bunch of engineering to scale it up and it works really
    5:19:25 well. And where are the complex ideas? This isn’t a very nice, beautiful, scientific result.
    5:19:31 And I sometimes think when people say that, I picture them being like, evolution is so
    5:19:35 boring. It’s just a bunch of simple rules. And you run evolution for a long time and you get
    5:19:41 biology. What a Saki way for biology to have turned out? Where’s the complex rules? But
    5:19:48 the beauty is that the simplicity generates complexity. Biology has these simple rules,
    5:19:54 and it gives rise to all the life and ecosystems that we see around us, all the beauty of nature
    5:19:59 that all just comes from evolution and from something very simple in evolution. And similarly,
    5:20:06 I think that neural networks create enormous complexity and beauty inside and structure
    5:20:10 inside themselves that people generally don’t look at and don’t try to understand because
    5:20:17 it’s hard to understand. But I think that there is an incredibly rich structure to be
    5:20:23 discovered inside neural networks, a lot of very deep beauty. And if we’re just willing to take
    5:20:30 the time to go and see it and understand it. Yeah, I love mechenter. The feeling like we are
    5:20:34 understanding or getting glimpses of understanding the magic that’s going on inside is really
    5:20:41 wonderful. It feels to me like one of the questions that’s just calling out to be asked. And I’m
    5:20:44 sort of, I mean, a lot of people are thinking about this, but I’m often surprised that not
    5:20:51 more are. Is how is it that we don’t know how to create computer systems that can do these things?
    5:20:55 And yet, we have these amazing systems that we don’t know how to directly create computer
    5:20:58 programs that can do these things. But these neural networks can do all these amazing things.
    5:21:02 And it just feels like that is obviously the question that sort of is calling out to be
    5:21:09 answered. If you have any degree of curiosity, it’s like how is it that humanity now has these
    5:21:14 artifacts that can do these things that we don’t know how to do? Yeah, I love the image of the
    5:21:18 circus reaching towards the light of the objective function. Yeah, it’s just, it’s this organic
    5:21:23 thing that we’ve grown and we have no idea what we’ve grown. Well, thank you for working on safety
    5:21:27 and thank you for appreciating the beauty of the things you discover. And thank you for talking
    5:21:32 today, Chris. It’s wonderful. Thank you for taking the time to chat as well. Thanks for listening
    5:21:37 to this conversation with Chris Ola and before that with Dari Amade and Amanda Askel. To support
    5:21:42 this podcast, please check out our sponsors in the description. And now let me leave you
    5:21:49 with some words from Alan Watts. The only way to make sense out of change is to plunge into it,
    5:22:06 move with it and join the dance. Thank you for listening and hope to see you next time.
    5:22:07 you
    5:22:07 you
    5:22:08 you
    5:22:09 you
    5:22:11 (gentle music)
    5:22:13 you

    Dario Amodei is the CEO of Anthropic, the company that created Claude. Amanda Askell is an AI researcher working on Claude’s character and personality. Chris Olah is an AI researcher working on mechanistic interpretability.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep452-sc
    See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

    Transcript:
    https://lexfridman.com/dario-amodei-transcript

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Claude: https://claude.ai
    Anthropic’s X: https://x.com/AnthropicAI
    Anthropic’s Website: https://anthropic.com
    Dario’s X: https://x.com/DarioAmodei
    Dario’s Website: https://darioamodei.com
    Machines of Loving Grace (Essay): https://darioamodei.com/machines-of-loving-grace
    Chris’s X: https://x.com/ch402
    Chris’s Blog: https://colah.github.io
    Amanda’s X: https://x.com/AmandaAskell
    Amanda’s Website: https://askell.io

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Encord: AI tooling for annotation & data management.
    Go to https://encord.com/lex
    Notion: Note-taking and team collaboration.
    Go to https://notion.com/lex
    Shopify: Sell stuff online.
    Go to https://shopify.com/lex
    BetterHelp: Online therapy and counseling.
    Go to https://betterhelp.com/lex
    LMNT: Zero-sugar electrolyte drink mix.
    Go to https://drinkLMNT.com/lex

    OUTLINE:
    (00:00) – Introduction
    (10:19) – Scaling laws
    (19:25) – Limits of LLM scaling
    (27:51) – Competition with OpenAI, Google, xAI, Meta
    (33:14) – Claude
    (36:50) – Opus 3.5
    (41:36) – Sonnet 3.5
    (44:56) – Claude 4.0
    (49:07) – Criticism of Claude
    (1:01:54) – AI Safety Levels
    (1:12:42) – ASL-3 and ASL-4
    (1:16:46) – Computer use
    (1:26:41) – Government regulation of AI
    (1:45:30) – Hiring a great team
    (1:54:19) – Post-training
    (1:59:45) – Constitutional AI
    (2:05:11) – Machines of Loving Grace
    (2:24:17) – AGI timeline
    (2:36:52) – Programming
    (2:43:52) – Meaning of life
    (2:49:58) – Amanda Askell – Philosophy
    (2:52:26) – Programming advice for non-technical people
    (2:56:15) – Talking to Claude
    (3:12:47) – Prompt engineering
    (3:21:21) – Post-training
    (3:26:00) – Constitutional AI
    (3:30:53) – System prompts
    (3:37:00) – Is Claude getting dumber?
    (3:49:02) – Character training
    (3:50:01) – Nature of truth
    (3:54:38) – Optimal rate of failure
    (4:01:49) – AI consciousness
    (4:16:20) – AGI
    (4:24:58) – Chris Olah – Mechanistic Interpretability
    (4:29:49) – Features, Circuits, Universality
    (4:47:23) – Superposition
    (4:58:22) – Monosemanticity
    (5:05:14) – Scaling Monosemanticity
    (5:14:02) – Macroscopic behavior of neural networks
    (5:18:56) – Beauty of neural networks

  • #451 – Rick Spence: CIA, KGB, Illuminati, Secret Societies, Cults & Conspiracies

    AI transcript
    0:00:06 The following is a conversation with Rick Spence, a historian specializing in the history of
    0:00:13 intelligence agencies, espionage, secret societies, conspiracies, the occult and military history.
    0:00:20 And now, a quick few second mention of each sponsor. Check them out in the description. It’s
    0:00:25 the best way to support this podcast. We got AG1 for nutrition, net suite for business,
    0:00:32 better help for the mind, masterclass for learning, and Shopify for selling stuff online.
    0:00:37 Choose wisely my friends. Also, if you want to get in touch with me for a bunch of different
    0:00:44 kinds of reasons, go to lexfreedman.com/contact. And now, onto the full ad reads, I try to make
    0:00:49 these interesting, but if you skip them, please still check out our sponsors. I enjoy their stuff,
    0:00:57 maybe you will too. This episode is brought to you by AG1, an all-in-one daily drink to support
    0:01:01 better health and peak performance. A drink I have not been consuming for the last few days
    0:01:07 because I’m traveling and it’s the thing that makes me miss home. I’m in San Francisco, allowing
    0:01:13 myself to be surrounded and inspired by some incredible software engineering that’s going on
    0:01:20 here and putting all the other mess of politics and social bubble stuff aside. So I’m doing a lot
    0:01:27 of programming and having a lot of really highly deep technical conversations, but I definitely
    0:01:36 miss Austin. I miss Texas. I miss Boston. Walking the halls of MIT, released the university I
    0:01:43 intimately know now, and there’s something about a university where you can shut off all the mess
    0:01:51 of the outside world and focus on ideas, on learning and on discovering, plus the fearless energy of
    0:02:00 undergraduate and graduate students just boldly going forward, thinking they can completely
    0:02:06 revolutionize a field. That’s really inspiring to be surrounded by. And in Texas, the thing I love
    0:02:15 the most is there’s a simple kindness to the hello, to the nod, to the aimless and wonderful
    0:02:20 conversation that you might have at a coffee shop or when you meet a stranger. I don’t know.
    0:02:29 I really fall in love with Texas and the long runs along the river, which I consume AG1 after.
    0:02:37 Sometimes I forget. There’s a sponsor, Reed, going on. They’ll give you one month supply of fish oil
    0:02:43 when you sign up at drinkag1.com/lex. This episode is also brought to you by Neth Sweet,
    0:02:49 an all-in-one cloud business management system. That’s the other thing about San Francisco that
    0:02:58 I’m reminded of, that there’s these incredible businesses that are born. Just a couple of founders
    0:03:06 and they’re quickly hiring a few folks, especially engineering heavy teams. And they’re all dreamers
    0:03:11 and they’re all pushing forward and they’re all trying to do the craziest shit they can. Yes,
    0:03:16 there is a San Francisco bubble. Yes, there’s a bit of a tunnel vision going on in many ways.
    0:03:24 But on the pure desire to build something cool, something that has a positive impact on the world,
    0:03:29 I don’t know. That’s a truly inspiring desire. But of course, sort of from my perspective,
    0:03:36 I share in that desire, but there’s a great cost to it as well. And it’s something that
    0:03:42 is a constant tension in my heart. I would like to do more building than talking. And I’m reminded
    0:03:51 of that when I’m here. Anyway, there is a bit of a mess, a complexity to the scaling of business
    0:03:56 and the running of a business. And that is what Neth Sweet can help you with. They manage
    0:04:02 all kinds of messy stuff. Over 37,000 companies have upgraded to Neth Sweet by Oracle. Take advantage
    0:04:11 of Neth Sweet’s flexible financing plan at NethSuite.com/Lex. That’s NethSuite.com/Lex. This episode
    0:04:18 is also brought to you by BetterHelp, spelled H-E-L-P Help. They figure out what you need to
    0:04:26 match it with a licensed therapist in under 48 hours. I’m reminded of the work and of my conversation
    0:04:35 with Carl Diceroth, a psychiatrist and a appreciator of the beauty in the world. What a wonderful human
    0:04:43 being. Also Paul Conti. These are all friends of Andrew Huberman and what just deep and interesting
    0:04:50 people they are. I would venture even to say very different, but both just incredible analysts of
    0:04:56 the human mind. And what a mystery the mind is. I’ve been reading a lot of mechanistic
    0:05:02 interpretability work, which is this whole field of analyzing neural networks and trying
    0:05:09 to understand what’s going on inside. And there is just wonderful breakthroughs in that field.
    0:05:17 But whenever I’m reading the papers, I can’t help but be caught by the thought that I wish we had
    0:05:26 this kind of rigor or the possibility of rigor in studying the human mind. Sort of neurobiology,
    0:05:30 neuroscience is too messy. There’s too many variables. There’s too much going on and you
    0:05:37 can’t do control experiments like you can on neural networks. So anyway, the human mind is a
    0:05:42 beautiful and mysterious thing. And if you want to untangle the puzzles going on in there, check out
    0:05:49 betterhelp.com/lex and save in your first month. That’s betterhelp.com/lex.
    0:05:56 This episode is also brought to you by Masterclass, where you can watch over 200 classes from the
    0:06:02 best people in the world and their respective disciplines. Phil Ivy on poker, for example,
    0:06:07 great, great masterclass. There’s another guy who I don’t believe has a masterclass,
    0:06:13 although he should, Phil Hummuth. And I got a chance to meet him and hang out with him. And it
    0:06:22 was a, what a cool experience. I just love that this world can produce such interesting, distinct,
    0:06:31 unique characters. And they are unapologetically true to themselves. Beautiful. I love it. Anyway,
    0:06:37 there’s a lot of such characters on masterclass.com. And you can learn from them. So like I said,
    0:06:44 I love Phil Ivy’s masterclass. Aaron Franklin on barbecue, probably somebody I’ll talk to
    0:06:48 eventually. They actually watched a couple of episodes of a barbecue show on Netflix. That’s
    0:06:53 pretty good. But not as good as in the masterclass. I just love the science and the art that goes
    0:06:58 into the whole thing. Anyway, get unlimited access to every masterclass and get an additional
    0:07:07 15% off an annual membership at masterclass.com/lexbaud. This episode is also brought to you by
    0:07:12 Shopify, a platform designed for anyone to sell anywhere with a great looking online store. I set
    0:07:20 one up miraculously at lexfordman.com/store. I think about the countless stores that are enabled.
    0:07:25 I think about the countless stores that are enabled by Shopify and the machinery of capitalism.
    0:07:29 Now I was thinking about that when I was talking to Bernie Sanders,
    0:07:35 and what a genuine human being Bernie is. First of all, still firing on all cylinders
    0:07:41 in terms of the sharpness and the depth and the sort of the horsepower of his mind. He’s
    0:07:48 still there at 83 years old. Still got it. And also just has not changed for many, many decades.
    0:07:54 I wish there would be more politicians with that kind of integrity, agree or disagree with him.
    0:07:59 The man has integrity. And as we head into this election, I think about the kind of politicians
    0:08:07 and human beings I would love to see lead our world. And to me, integrity is one of the character
    0:08:15 traits that is of the highest importance because the pressures when you’re at the top leading
    0:08:22 a nation are immense. And I would like someone who refuses to ever for any reason sell their soul
    0:08:31 for convenience, or otherwise. Anyway, sign up for a $1 per month trial period at Shopify.com/Lex.
    0:08:37 That’s all lowercase. Got a Shopify.com/Lex to take your business to the next level today.
    0:08:44 This is the Lex Friedman podcast. To support it, please check out our sponsors in the description.
    0:08:59 And now, dear friends, here’s Rick Spence.
    0:09:09 You have written and lectured about serial killers, secret societies,
    0:09:15 cults, and intelligence agencies. So we can basically begin at any of these fascinating
    0:09:20 topics. But let’s begin with intelligence agencies, which has been the most powerful
    0:09:26 intelligence agency in history? The most powerful intelligence agency in history.
    0:09:33 I mean, it’s an interesting question. I’d say probably in terms of historical
    0:09:42 longevity and consistency of performance, that the Russian intelligence services,
    0:09:46 notice I didn’t say the KGB specifically, but the Russian intelligence services going back
    0:09:54 to the Tsarist period, are consistently pretty good. Not infallible. None of them are.
    0:10:02 Of course, there’s a common Western way of looking at anything Russian. Very often,
    0:10:06 I think it’s still the case, Russians are viewed in one or two ways. Either they are
    0:10:13 bumbling idiots, or they are diabolically clever. No sort of middle ground. And you can
    0:10:18 find both of those examples in this. So what I mean by that is that if you’re looking at
    0:10:25 the modern SVR or FSB, which are just two different organizations that used to be part of the one
    0:10:34 big KGB or its predecessors, the Cheka, you’re really going back to the late 19th century and
    0:10:43 the Imperial Russian intelligence security service, generally known as the Ocrana or Ocranka.
    0:10:50 It’s really the Department of Police, the special corps of gendarmes. Their primary job was protecting
    0:10:58 the imperial regime and protecting it against imperial or other interior enemies, revolutionaries
    0:11:04 for the most part. And they got very, very good at that by co-opting people within those movements,
    0:11:11 infiltrating and recruiting informers, agent provocateurs. In fact, they excelled at the
    0:11:16 agent provocateur. Person you place aside in an organization to cause trouble,
    0:11:25 usually maneuver them into a position of leadership. And they provoke actions that can then allow you
    0:11:32 to crack down on them. That is to sort of lure or bring the target organization into any legal
    0:11:37 or open status that it can be more effectively suppressed. They were very good at that.
    0:11:44 So good that by the early 20th century and the years preceding the Russian revolution in 1917,
    0:11:50 they had effectively infiltrated every radical party, Bolsheviks, Mensheviks, SRs,
    0:11:56 great and small, and placed people in positions of influence and leadership.
    0:12:05 To the point that arguably, that is, you can debate this, and I think in the whole, they could
    0:12:13 largely dictate what those parties did. Nothing was discussed at any Central Committee meeting
    0:12:19 of any revolutionary group that the Akrona wasn’t immediately aware of. And they often had people
    0:12:25 in positions to influence what those decisions were. Of course, that raises an interesting
    0:12:30 question is that if they were that good and they had infiltrated and effectively controlled most
    0:12:36 of the opposition, then how did the regime get overthrown by revolutionaries? The answer to
    0:12:42 that is that it wasn’t overthrown by revolutionaries. It was overthrown by politicians.
    0:12:48 That would then take us into a detour into Russian history. But I’ll just leave it with this. If you
    0:12:54 look at 1917, and you look closely, this is one of the things that I’d always tell my students,
    0:12:59 is that there are two Russian revolutions in 1917. There’s the first one in March or February,
    0:13:06 depending on your calendar, that overthrows Nicholas II. Revolutionaries are really not
    0:13:10 involved with that. Bolsheviks are nowhere to be seen. Trotsky and Lenin are nowhere to be seen.
    0:13:15 They have nothing to do with that. That has to do effectively with a political conspiracy within
    0:13:22 the Russian parliament, the Duma, to unseat an emperor they thought was, you know, bungling the
    0:13:28 war and was essentially a loser to begin with. And it was a coup d’etat, a parliamentary coup d’etat.
    0:13:37 The temporary or provisional government that that revolution put in power was the one overthrown
    0:13:44 by Lenin eight months later. And that government was essentially one dominated by
    0:13:49 moderate socialists. It was a government that very quickly sort of turned to the left.
    0:13:56 You know, the guy we associate with that is Alexander Kerensky. Alexander Kerensky was a
    0:14:02 Russian socialist, a politician. He was the quasi-dictator of that regime. He’s the person,
    0:14:11 not the Tsar, who’s overthrown by Lenin. So the revolutionaries, they did not prove to be the
    0:14:17 fatal threat to the Tsarist regime. It was the Tsarist political system itself that did that.
    0:14:24 What then transpired was that the Ocrana and its method and many of its agents then immediately
    0:14:29 segued over into the new Soviet security service. So one of the first things that Lenin did in
    0:14:38 December of 1917, within a month of seizing power, since the hold on power was tenuous at best,
    0:14:43 was that, well, you’re going to need some kind of organization to infiltrate and suppress those
    0:14:47 pesky counter-revolutionaries and foreign imperialists and all of the other enemies that we have.
    0:14:53 And so the extraordinary commission to combat counter-revolution and sabotage
    0:14:59 the Checa was formed. You put a veteran Bolshevik, Felix Dzerzhinsky,
    0:15:06 at the head of that, someone you could politically rely upon. But Dzerzhinsky built his organization
    0:15:10 essentially out of the Ocrana. I mean, there were all of these informers sitting around with
    0:15:20 nothing to do, and they were employed. In the early 20s, the kind of rank and file of the Checa
    0:15:27 might have been 80 to 90 percent former imperial officials. Those were gradually decreased over
    0:15:31 time. So why were they doing it? Well, they were professionals. They also needed to eat,
    0:15:38 and things were somewhat precarious. So if your job is to be an agent provocateur,
    0:15:42 if your job is to infiltrate targeted organizations and lead them astray,
    0:15:48 you do that for whoever pays you. That’s part of the professionalism which goes in.
    0:15:54 And under the Soviets, the Soviet intelligence services are also very good at that. They are
    0:16:00 very good at infiltrating people into opposing organizations. And I guess the one example I
    0:16:08 would give to demonstrate that are the Cambridge Five, the British traders,
    0:16:16 Soviet standpoint, heroes who were recruited, most notably, Kim Philby, Guy Burgess, Donald
    0:16:24 McClain, Anthony Blunt. And there may have been well more than five, but I wasn’t bad out of just
    0:16:30 Cambridge. And then placing those people in high positions, the ultimate goal, of course,
    0:16:35 is to get your people into positions of leadership and influence in the opposing
    0:16:42 intelligence service. And so they did. Of course, it all fell apart, and they ended up in, you know,
    0:16:46 Philby ended up living the last part of his life in exile in Moscow, but
    0:16:54 they got their money’s worth out of him. And you can also find this in KGB infiltration,
    0:17:03 the CIA, the FBI, the Aldrich Ames, Hanson cases. Of course, we were infiltrating by we,
    0:17:06 I mean, the Americans in the West, managed to infiltrate our moles as well.
    0:17:11 But if it came down, you know, someone could dispute this, but I would think if you were
    0:17:20 going to come down to kind of like a who had the most moles Super Bowl, probably the Soviets would
    0:17:26 come somewhat ahead of that. So the scale of the infiltration, the number of people,
    0:17:37 and the skill of it, is there a case to be made that the Acrona and the Chaka orchestrated
    0:17:40 both the components of the Russian Revolution, as you described them?
    0:17:45 Well, there’s an interesting question for me. I mean, there are all kinds of questions about
    0:17:50 this. I mean, one of the questions is whether or not Lenin was an Ocarina agent. Okay, I’ve just
    0:17:56 said heresy. Some people, I’ll do that quite often, because I am a heretic
    0:18:04 and proud of it. Great. Why would you possibly say that Lenin could have been an Ocarina agent?
    0:18:11 Well, let’s look what he managed to do. So you had, coming into the 20th century, a
    0:18:21 single, well, nominally, a single Marxist movement, the Russian Social Democratic Labor Party.
    0:18:31 And Bolsheviks and Mensheviks, majorityites and minorityites, are merely factions of that party,
    0:18:37 and they always agreed that they were all Marxists, and we all believe in dialectical
    0:18:44 materialism and the rise of, we’re all socialists, Comrade. The difference was the tactical means
    0:18:52 by which one would attain this. And what Lenin wanted was a militant, small-scale vanguard party,
    0:18:59 wanted a revolution, wanted to seize power, seize control of the state, and once you have the state,
    0:19:08 then you induce socialism from above. Whereas the majority of the people, the so-called Mensheviks,
    0:19:15 the minorityites, who are oddly enough the vast majority of the party, that’s one of the first
    0:19:22 things, how do you lose that argument? Okay, how does the minority get to grab the name majorityites,
    0:19:31 but Lenin did that. So what Lenin wanted was a conspiratorial party of committed revolutionaries
    0:19:36 that would plot and scheme and undermine and eventually seize control of the state and induce
    0:19:42 socialism from above. There were other Russian Marxists who thought that that sounded vaguely
    0:19:49 totalitarian and not really democratic and not even terribly socialist, and they opposed that
    0:19:59 ineffectively from the beginning outmaneuvered every step of the way. The Mensheviks are a case
    0:20:05 study in failure of a political organization. That too will be heresy to some people, but look,
    0:20:13 they lost. Now, so what Lenin managed to do, starting around 1903, continuing onto this,
    0:20:21 is he managed to divide, to take what had been a single Marxist party and split it into angry,
    0:20:29 contending factions, because he and his Bolsheviks were on one side advocating a much
    0:20:36 more militant conspiratorial policy. The discombobulated Mensheviks were over on the other,
    0:20:40 and in between were a lot of people who really didn’t know where they stood on this. I mean,
    0:20:45 sometimes they kind of agreed, and he seems to making sense today. No, no, I don’t think he’s
    0:20:51 making sense in that day. But he managed to completely disunify this organization. Now,
    0:20:58 who could possibly have seen benefit in that? The Ograna. Now, whether or not they put him
    0:21:05 up to it, whether or not in some way they helped move him into a position of leadership or encouraged
    0:21:11 it or encouraged it through people around him, whether he was a witting or unwitting agent
    0:21:17 of the Tsarist secret police, he certainly accomplished exactly what it was that they
    0:21:26 had wanted. And I find that suspicious is one of those things that it’s so convenient in a way
    0:21:36 is that I’m not necessarily sure that was an accident. There’s also this whole question to me
    0:21:43 as to what was going on within the Ograna itself. Now, this is one of these questions when I come
    0:21:52 to you later about how intelligence agencies interact or serve the governments to which they
    0:21:58 are theoretically subordinate. They do tend to acquire a great deal of influence and power.
    0:22:05 After all, their main job is to collect information. And that information could be about all kinds of
    0:22:11 things, including people within the government structure itself. And they also know how to
    0:22:16 leverage that information in a way to get people to do what you want them to do.
    0:22:24 So an argument can be made, again, an argument, not a fact, merely an opinion, which is mostly
    0:22:32 what history is made out of, opinions, is that at some point between about 1900 and 1917,
    0:22:38 people within the Ograna were playing their own game. And that game took them in a direction
    0:22:44 which meant that continued loyalty to the emperor, specifically to Nicholas II,
    0:22:54 was no longer part of that. To me, in a way, it seems almost during the events of 1917 that,
    0:22:58 one, you had an organization that was very effective when it did that suddenly just
    0:23:04 becomes ineffective. It doesn’t really disappear. These things don’t go away because it will
    0:23:12 reappear as the Ochocha basically fairly quickly. But it raises the question to me as to what degree
    0:23:20 there were people within the organization who allowed events to take the course they wished.
    0:23:29 I always wonder how much deliberate planning there is within an organization like Ograna,
    0:23:33 or if there’s kind of a distributed intelligence that happens.
    0:23:37 Well, one of the key elements in any kind of intelligence, organization,
    0:23:45 or operation is compartmentalization, need to know. So rarely do you have an occasion where
    0:23:49 everybody, everybody in an executive position are all brought into a big corporate meeting,
    0:23:54 and we discuss all of the secret operations that are going on. No, no, you never do that.
    0:24:03 Only a very limited number of people should know about that. If you have a person who is a case
    0:24:06 officer who is controlling agents, he’s the only one who should know who these people are,
    0:24:12 possibly his immediate superiors. But no way do you want that to be common knowledge.
    0:24:20 So information within the organization itself is compartmentalized. So you don’t need
    0:24:26 everybody to be in on it. You don’t even need necessarily the people who are nominally at the
    0:24:32 top. First is the Ograna, the real boss of the Ograna was the Imperial Ministry of the Interior.
    0:24:36 The Minister of the Interior, in fact, but the Minister of the Interior had no real
    0:24:41 effective control over this at all. I mean, to the point was that at one point early on,
    0:24:44 they actually organized the assassination of their own boss.
    0:24:50 They have their agents among the Revolutionaries kill the Minister of the Interior.
    0:24:57 Because he’ll just be replaced by another one. He is an Imperial bureaucrat. He’s not really part of
    0:25:04 their organization. It’s like a director of an intelligence agency appointed by the President.
    0:25:11 Maybe he’s part of the organization. Maybe he isn’t. Maybe he is not one of us.
    0:25:22 So you’ve got different levels, different compartments within it. And who’s actually
    0:25:27 running the show? If anyone is, I don’t know. That’s never supposed to be apparent.
    0:25:32 Well, that’s a fascinating question. And you can see this with NKVD. It’s obviously
    0:25:41 an extremely powerful organization that starts to eat itself, where everybody’s pointing fingers
    0:25:48 internally also, as a way to gain more power. So the question is, in organizations like that,
    0:25:52 that are so compartmentalized, where’s the power? Where’s the center of power?
    0:26:01 Because you would think, given that much power, some individual or a group of individuals will
    0:26:06 start accumulating that power. But it seems like that’s not always a trivial thing. Because if
    0:26:14 you get too powerful, the snake eats that person. Well, we go back again to the founder of Soviet
    0:26:22 secret police, Felix Duzhinsky. Duzhinsky dies in 1926. Heels over after giving a
    0:26:30 heated speech to a party meeting. Now, the common view, what you usually read, which is,
    0:26:34 was key for the time, is that, you know, clearly Stalin had him whacked, because anytime someone
    0:26:42 died, it was almost always that, and I think a lot of times he did. But in some cases,
    0:26:50 Stalin’s probably getting blamed for things that he didn’t actually do. Duzhinsky wasn’t even opposed
    0:26:54 to Stalin. So it’s not clear why he would. But this was the, you know, Stalin died. You know,
    0:26:59 obviously he was poisoned. Something happened. It was an unnatural death. Somebody goes in for an
    0:27:06 operation. You know, it gets a little too much anesthesia. Stalin killed them. Somebody tips
    0:27:11 over in a canoe in upstate New York. Stalin killed them. There’s actually a case about that.
    0:27:20 So that itself can be kind of useful where every time someone dies, they think you killed them.
    0:27:26 That’s kind of an interesting method of intimidation in that regard. But the suspicion is nonetheless
    0:27:35 there. Duzhinsky had been, he was the grand inquisitor. He was seemingly firmly in control
    0:27:41 of the organization. Of course, maybe he wasn’t. Maybe he was, my guess would be is that if Duzhinsky’s
    0:27:48 death was not natural causes, that he was probably eliminated by someone within his own
    0:27:56 organization. And then you look at the people who take over. His immediate successor is
    0:28:03 Jacheslav Menzhinsky, who’s really kind of not really a secret policeman, more kind of intellectual
    0:28:12 dilettante. But if you look behind him, you’ll notice the fellow is Henrique Yagoda. And Yagoda
    0:28:19 will really sort of manage things from behind the scenes until Menzhinsky dies in 1934. And then
    0:28:29 Yagoda will hold on until he’s a victim of the purges, I think in 37 or 38. Yagoda is ambitious,
    0:28:38 murderous. And if I was going to point the finger to anybody who possibly had Duzhinsky whacked,
    0:28:42 it would be him. And for the purposes simply of advancement.
    0:28:51 The person to look out at any kind of corporate organization is your immediate subordinate,
    0:28:56 the person who could move into your job, because more than likely that’s exactly what they’re
    0:29:03 planning to do. Yeah, just one step away from the very top. Somebody there will probably accumulate
    0:29:09 the most power. You mentioned that the various Russian intelligence agencies were good at
    0:29:18 creating agent provocateurs infiltrating the halls of power. What does it take to do that?
    0:29:28 Well, there’s an interesting little acronym called mice, M-I-C-E. And it’s generally used. And it’s
    0:29:34 just the way in which you would acquire. How do you get people to work for you? Well, M stands for
    0:29:41 money. You pay them. People are greedy. They want money. If you look at Aldrich Ames, he had a very,
    0:29:51 very expensive wife with expensive tastes. So you wanted money. I is for ideology. So during,
    0:29:55 particularly in the 1920s and the 1930s, the Soviets were very effective in exploiting
    0:30:03 communists, people who wanted to serve the great cause. Even though that’s initially not
    0:30:08 really what they wanted to do, because the idea was that if you recruit agents from among, let’s
    0:30:14 say, American communists, you compromise the party. Because exactly what your enemies are going to
    0:30:20 say is that all communists are Soviet spies. They’re all traitors in some way. So you would
    0:30:26 really want to keep those two things separate. But ideology was just so convenient. And those
    0:30:32 people would just work for you so well. You could get them to do anything, betray their grandmother.
    0:30:38 They would go ahead and do that for their greater good. So ideology can be a motivation. And that
    0:30:46 can be someone who is a devoted Marxist-Leninist. It can also be someone who’s a disgruntled
    0:30:54 communist because there’s no anti-communist like an ex-communist. Those who lose the faith
    0:31:04 can become very, very useful. For instance, if you look in the case of American intelligence,
    0:31:11 the people who essentially temporarily destroyed much of the KGB organization in the US
    0:31:19 post-World War II were people like Whitaker Chambers, Lewis Boudin, Elizabeth Bentley.
    0:31:25 All of those people had been Communist party members. They had all been part of the Red Faithful.
    0:31:33 They all, for one reason or another, became disillusioned and turned rat or patriot,
    0:31:40 whichever case you may want to put in that regard. What is the C in the east end for?
    0:31:47 The C is for coercion. That’s where you have to persuade someone to work for you. You have to
    0:31:53 pressure them. So usually, you blackmail them. You know, that could be they have a gambling habit.
    0:31:58 You know, in the old days, it’s very often because they were gay. Okay,
    0:32:02 get them in a position where they could be compromised and you can get them to do your
    0:32:07 bidding that those people usually have a certain amount of control. Here’s an interesting example
    0:32:13 of how the O’Krona tended to handle this. I think it’s still largely used. You’d round up a bunch
    0:32:21 of revolutionaries on some charge or another, distributing revolutionary literature, running
    0:32:26 any legal printing press. You bring a guy into the room and you say, okay, they’re going to work
    0:32:35 for us. Of course, we refuse to do so. And they go, well, if you refuse, we’ll keep the rest of
    0:32:38 your comrades in jail for a while, you know, maybe beat them with a rubber truncheon or so.
    0:32:43 And then we’re just going to let you go. We’re just going to put you back out on the street.
    0:32:50 And if you don’t work for us, we will spread the rumor through our agents already in your
    0:32:56 organization that you are. And then what will your comrades do? How long are you going to live?
    0:33:00 So you see, you have no choice. You’re ours and you’re going to cooperate with us.
    0:33:11 And the way that that effectiveness would be ensured is that you have multiple agents within
    0:33:17 the same organization who don’t know who each other are. That’s very important. And they’ll all
    0:33:26 be filing reports. So let’s say you have three agents inside the central committee of the SR
    0:33:30 party, and there’s a committee meeting, and you’re going to look at the reports that file,
    0:33:35 they all better agree with each other, right? If one person doesn’t report what the other two do,
    0:33:41 then perhaps they’re not entirely doing their job and they can be liquidated at any time.
    0:33:47 All you do is drop the dime on them. And this was done periodically. In fact, in some cases,
    0:33:52 you would betray your own agents just to completely discombobulate to the organization.
    0:33:59 This happened in one particular case around 1908. The fellow who was the head of the chief
    0:34:05 revolutionary terrorist organization, which wasn’t Bolshevik, but the so-called socialist
    0:34:11 revolutionaries. They’re actually the biggest revolutionary party, the SRs, who are even actually
    0:34:17 Marxists, more anarchists. But they went all in for the propaganda of the deed. They really like
    0:34:22 blowing people up and carrying out quite a campaign of terrorism.
    0:34:28 The fellow who was the head of that terrorist organization was a fellow by the name of Yevno
    0:34:38 Azef. And Yevno Azef was, guess what, an Ocarana agent. Everything he did, every assassination
    0:34:47 that he planned, he did in consultation with his control. So he’d kind of run out his string.
    0:34:52 There was increasing suspicion of him. He was also asking for a lot more money.
    0:34:59 So the Ocarana itself arranged to have him write it out. And what did that do? Well,
    0:35:04 what do you do in your party when you find out the chief of your terrorist brigade
    0:35:12 was a secret police agent? It’s consternation and mistrust. Nobody in the party would ever trust
    0:35:18 them. You couldn’t tell who you were sitting around. I know that a fellow I wrote a biography on,
    0:35:24 Boris Sevinkov, who was a Russian revolutionary, and the second in command within the terrorist
    0:35:29 organization. By the way, the guy that wanted Azef’s job so bad he could taste it.
    0:35:35 Well, on the one level, he expressed absolute horror that his boss was a police agent,
    0:35:42 and well, he should because Sevinkov was a police agent too. See, they already had the number two
    0:35:48 waiting in the wings to take over. But he was legitimately shot. He didn’t really suspect that.
    0:35:53 So it’s a way of manipulating this. And then finally we come to the E.
    0:36:05 That I think is the most important ego. Sometimes people spy or betray because of the egotistical
    0:36:14 satisfaction that they receive. The sheer kind of Machiavellian joy in deceit.
    0:36:21 An example of that would be Kim Philby, one of the Cambridge Five. Now, Philby was a communist,
    0:36:26 and he would argue that he always saw himself as serving the communist cause. But
    0:36:35 he also made this statement. I think it’s in the preface to his autobiography. And he says,
    0:36:43 one never looks twice at the offer of service in the elite force. He saw him by his recruitment
    0:36:50 by the NKVD in the 1930s, and he was absolutely chuffed by that. The mere fact that they would
    0:36:57 want him, what he considered to be a first rate organization would want him, satisfied his ego.
    0:37:05 And if I was to take a guess as to whether it was ideological motivation, whether it was the
    0:37:09 romance of communism, or whether it was the appeal of ego that was the most important in
    0:37:16 his career treason, I’d go with ego. And I think that figures into a lot. People don’t,
    0:37:22 someone doesn’t get the promotions that they wanted. Again, if you look at something like
    0:37:31 Aldrich Ames’ career in particular, you’ve got these kind of, his career in the CIA was hit or
    0:37:39 miss. He didn’t get the postings or promotions that he wanted as a valuation. He never felt
    0:37:43 that he got credit for doing that. And that’s the type of thing that tends to stick in someone’s
    0:37:50 craw and can lead for egotistical reasons and added incentive to betray.
    0:37:53 Yeah, that there’s a boost to the ego when you can deceive,
    0:38:02 sort of not play by the rules of the world and just play with powerful people like they’re at
    0:38:07 your pawns. You’re the only one that knows this. You’re the only one that knows that the
    0:38:14 person who is sitting across from you to which you have sworn your loyalty, you’re simultaneously
    0:38:20 betraying. What a rush that must be for some people. I wonder how many people are susceptible
    0:38:27 to this. I would like to believe that the people have, a lot of people have the integrity to at
    0:38:35 least withstand the MI, the money and the ideology, the pull of that and the ego. It can also be a
    0:38:42 combination of the two. I mean, you can create a recipe of these things, a certain amount of money,
    0:38:47 ego and a little push of coercion that if you don’t,
    0:38:57 will rat you out. You’ll be exposed. What are some differences to you as we look at the
    0:39:02 history of the 20th century between the Russian intelligence and the American intelligence in
    0:39:08 the CIA? If you look at both the Ocarina and the KGB, one of the things that you find consistent
    0:39:17 is that they, a single organization handled foreign intelligence that is spying upon enemy or
    0:39:25 hostile governments and also internal security. That’s all part of it. Whereas if you look at the
    0:39:32 U.S. models that evolves, you eventually have the FBI, Linda Hoover, quite insist that he’s going
    0:39:37 to be the counterintelligence force. If they’re commie spies earning around America, it’s the FBI
    0:39:45 who’s supposed to ferret them out. The CIA is not supposed to be involved in that. The Charter,
    0:39:53 the basic agreement in 1947, did not give the CIA any, it’s often said they were barred from spying
    0:39:58 on Americans, which isn’t quite true. You can always find a way to do that. What they don’t
    0:40:03 have is they don’t have any police or judicial powers. They can’t run around in the country
    0:40:09 carrying guns to use on people. They can’t arrest you. They can’t interrogate you. They can’t jail
    0:40:15 you. They have no police or judicial powers. Now, that means they have to get that from someone else.
    0:40:21 That doesn’t mean that other agencies can’t be brought in or local police officials,
    0:40:27 corn, whatever you need, you can eventually acquire. But they can’t do that directly.
    0:40:32 So, you’ve got this division between foreign intelligence and domestic
    0:40:42 counterintelligence, often split between hostile organizations. The relationship between the FBI
    0:40:49 and the CIA, I think it’s fair to say, is not chummy. Never has been. There’s always been a
    0:40:56 certain amount of rivalry and contention between the two. It’s not to say that something like that
    0:41:04 didn’t exist between the domestic counterintelligence and foreign intelligence components of the KGB.
    0:41:09 But there would be less of that to a degree because there was a single organization. They’re
    0:41:19 all answerable to the same people. So, that gives you a certain greater amount, I think, of leeway
    0:41:27 and power because you’re controlling both of those ends. I remember somebody telling me once that,
    0:41:36 and he was a retired KGB officer. There you go, retired. One of the things that he found amusing
    0:41:43 was that in his role, one of the things that he could be is that he could be anywhere at any time
    0:41:51 in any dress, which meant that he could be in or out of uniform and in a place at any time.
    0:41:57 He was authorized to do that. So, more freedom, more power. I think one of the things that you
    0:42:05 would often have the view is that the Russians are simply naturally meaner. There’s less respect
    0:42:16 for human rights. There’s a greater tendency to abuse power that one might have. I mean, frankly,
    0:42:22 they’re all pretty good at that. It is fair to say that there’s probably some degree of
    0:42:28 cultural differences that are not necessarily for institutional reasons, but cultural reasons.
    0:42:37 There could well be things that Americans might bulk it doing more than you would find
    0:42:43 on the Russian or Soviet side of the equations. The other aspect of that is that Russian history
    0:42:51 is long and contentious and bloody. One of the things that certainly teaches you never trust
    0:42:59 foreigners. Every foreign government, anywhere, any country on your border is a real or potential
    0:43:05 enemy. They will all, at some point, have given the chance, invade you. Therefore, they must always
    0:43:12 be treated with great suspicion. It goes back to something that I think the British observed was
    0:43:20 that countries don’t have friends. They have interests, and those interests can change over time.
    0:43:24 Well, the CIA is probably equally suspicious of all other nations.
    0:43:28 That’s your job. You’re supposed to be suspicious. Your job is not to be trusting.
    0:43:34 The basic job of an intelligence agency is to safeguard your secrets and steal the other guys
    0:43:40 and then hide those away. Are there laws, either intelligence agencies,
    0:43:48 that they’re not willing to break? Is it basically lawless operation to where you can
    0:43:55 break any law as long as it accomplishes the task? I think John Le Carré gave his pen name.
    0:43:58 I was talking about his early recruitment into British intelligence,
    0:44:03 and one of the things he remember being told upfront, well, if you do this, you have to be
    0:44:12 willing to lie, and you have to be willing to kill. Now, those are things that in ordinary human
    0:44:20 interactions are bad things. Generally, we don’t like it when people lie to us. We expect that people
    0:44:27 will act honestly towards us, whether that’s being a businessmen you’re involved with,
    0:44:34 your employers. We’re often disappointed in that because people do lie all the time for a variety
    0:44:42 of reasons, but honesty is generally considered to be it. But in a realm where deception is a rule,
    0:44:51 dishonesty is a virtue. To be good at that, to be able to lie convincingly,
    0:45:02 is good. It’s one of the things you need to do. And killing also is generally frowned upon.
    0:45:09 You know, put people in prison for that. They’re otherwise executed. But in certain circumstances,
    0:45:15 killing is one of those things that you need to be able to do. So what he felt he was being told
    0:45:19 in that case is that once you enter this realm, the same sort of moral rules that apply in general
    0:45:27 British society do not apply. And if you’re squeamish about it, you won’t fit in. You have
    0:45:34 to be able to do those things. I wonder how often those intelligence agencies in the 20th century,
    0:45:41 and of course, the natural question extending it to the 21st century, how often they go to the
    0:45:47 assassination? How often they go to the kill part of that versus just the espionage?
    0:45:57 Let’s take an example from American intelligence from the CIA, 1950s, 1960s into the 1970s, MKUltra.
    0:46:06 That is a secret program, which was involved with what is generally categorized as mind control,
    0:46:13 which really means messing with people’s heads. And what was the goal of that? Well,
    0:46:21 there seem to have been lots of goals, but there was an FBI memo that was, I recently acquired,
    0:46:29 quite legally, by the way, it’s declassified, but it’s from 1949. So this is only two years after
    0:46:35 the CIA came into existence. And it’s an FBI memo because the FBI, of course, very curious what the
    0:46:40 CIA is up to. And the FBI are not part of this meeting, but they have someone in there sort of
    0:46:47 spying on what’s going on. So there was a meeting which was held in a private apartment in New York.
    0:46:55 So it’s not held in any kind of, you know, it’s essentially never really happened because it’s
    0:47:02 in somebody’s house. And there are a couple of guys there from the CIA. One of them is Cleve Baxter.
    0:47:10 Cleve Baxter is the great godfather of the lie detector. Pretty much everything that we know
    0:47:15 or think we know about lie detectors today, they go to Cleve Baxter. He’s also the same guy that
    0:47:21 thought that plants could feel, but which somehow was a derivative of his work on lie detectors.
    0:47:27 So these guys are there and they’re giving a talk to some military and other personnel. And
    0:47:32 there’s certain parts of the document, which are, of course, redacted, but you could figure out what
    0:47:37 it is that they’re talking about. And they’re talking about hypnotic suggestion and all the
    0:47:43 wonderful things that you can potentially do with hypnotic suggestion. And two of the things they
    0:47:49 note is that one of the things we could potentially do is erase memories from people’s minds and
    0:47:55 implant false memories. That would be really keen to do that. Just imagine how that would be done.
    0:48:01 So here to me is the interesting point. They’re talking about this in 1949.
    0:48:07 MK Ultra does not come along until really 1953, although they’re all sorts of, you know, artichoke
    0:48:13 and others. Everything is sort of leading up to that. It’s simply an elaboration of programs
    0:48:19 that are already there. I don’t think that it ultimately matters whether you can
    0:48:27 implant memories or erase memories. To me, the important part is they thought they could
    0:48:34 and they were going to try to do it. And that eventually is what you find out in the
    0:48:42 efforts made during the 1950s and 60s through MK Ultra, MK Surge, MK Naomi and all the others that
    0:48:49 came out. That’s one of the things they’re working for. And among the few MK Ultra era documents
    0:48:54 that survived, there’s that whole question is that can you get someone to put a gun to someone’s
    0:49:01 head and pull the trigger and then remember it later? Yeah. You could, interestingly enough.
    0:49:08 So non-direct violence, controlling people’s minds, controlling people’s minds at scale
    0:49:11 and experimenting with different kinds of ways of doing that.
    0:49:15 But one person put it that the basic argument there or the basic thing you’re after was to
    0:49:20 understand the architecture of the human mind, how it worked, how it put together,
    0:49:24 and then how you could take those pieces apart and assemble them in different ways.
    0:49:31 So this comes, this is where hypnosis comes in, which is a
    0:49:36 was then still is fairly spooky thing. Nobody’s ever explained to me exactly what it is.
    0:49:42 The idea was that could you, you think of the whole possibilities in this case,
    0:49:44 could you create an alternate personality
    0:49:56 and use that alternate personality in an agent role, but then be able to turn it on and off.
    0:50:01 So subsequently the person, which that personality inhabited was
    0:50:07 captured and interrogated, tortured, and their fingernails torn out,
    0:50:12 they would have no memory of it. They couldn’t give any kind of secret away because it was
    0:50:16 embedded in some part of their brain where there was a completely different person.
    0:50:24 I mean, you can just imagine the possibilities that you can dream up. And again, it’s not,
    0:50:29 I think the question is to whether that is possible or whether it was done.
    0:50:35 Well, I suspect that both of those are true, but that you would try to do it. Then imagine the
    0:50:41 mischief that comes out of that. And one of the big complaints from a legal standpoint about MK
    0:50:47 Alter and the rest is that you were having medical experiments essentially being carried
    0:50:51 out on people without their knowledge and against their will, which is a no-no.
    0:50:56 Yeah, the fact that you’re willing to do medical experiments says something about
    0:51:02 what you’re willing to do. And I’m sure that same spirit, innovative spirit,
    0:51:13 persist to this day. And maybe less so, I hope less so in the United States,
    0:51:17 but probably in other intelligence agencies in the world.
    0:51:23 Well, one thing that was learned and the reason why most MK Alter and similar records were destroyed
    0:51:28 on order in the early seventies, around the time the CIA became
    0:51:33 under a certain amount of scrutiny. The mid seventies were not a good time for the agency
    0:51:37 because you had the church committee breathing down their neck. You had all of these assassins.
    0:51:43 People were asking lots of questions. And so you need to dump this stuff because there’s
    0:51:50 all kinds of it because you were committing crimes against American citizens. So let’s eradicate
    0:51:54 it. And the important lesson to be learned is that never do these type of thing again,
    0:52:01 where at least in any way in which the agency’s direct fingerprints are placed on it.
    0:52:11 You can pay people. You can subsidize research. You can set up venture capital firms. You’ve got
    0:52:16 plenty of money. And you can funnel that money into the hands of people who will carry out this
    0:52:22 research privately. So if something goes wrong, you have perfect deniability.
    0:52:30 On the topic of mice, on the topic of money, ideology, coercion and ego. Let me ask you about
    0:52:38 a conspiracy theory. So there is a conspiracy theory that the CIA is behind Jeffrey Epstein.
    0:52:44 At a high level, if we can just talk about that, is that something that’s at all even possible?
    0:52:50 That you have, basically this would be for coercion. You get a bunch of powerful people
    0:52:57 to be sexually mischievous. And then you collect evidence on them so that you can then have leverage
    0:53:06 on them. Well, let’s look at what Epstein was doing. He was a businessman who then also developed a
    0:53:13 very lucrative sideline in being a high level procurer, basically in supplying young girls.
    0:53:29 And he also filmed much of that activity. I think his partner is Gelaine, and I hope I’m
    0:53:34 pronouncing her name correctly. I think it’s Gelaine. I’ve heard it both ways. Gelaine or Gelaine,
    0:53:38 whichever it may be. I think her argument at one point was that, well, we did this to protect
    0:53:44 ourselves. But this type of thing has been done before. There’s nothing new about this. Getting
    0:53:52 influential people in compromising situations and filming them. I could give you another historical
    0:54:01 example of that in late 19 to actually early 1930s, just pre-Nazi Berlin. There was a very
    0:54:09 prominent sort of would-be psychic and occultist by the name of Eric Jan Honesen. He had a private
    0:54:15 yacht. I think it was called the Seven Sins. And he hosted parties. He also had a whole club called
    0:54:20 the Palace of the Occult, which hosted parties where things went on. And there were cameras
    0:54:28 everywhere. He filmed. Important people. You know, guys like the brown shirt chief of Berlin
    0:54:37 in various states of undress and sexual congress. And he did that for the purposes of blackmail.
    0:54:53 So in Epstein’s case, he is a procurer of young girls to wealthy men, largely. And many of those
    0:55:01 events were recorded. Now, even if it wasn’t his intention to use them for blackmail, think of
    0:55:08 what someone else could do it because people know about this. So what you could raise a question is
    0:55:16 that it’s not, you know, Epstein is just kind of a greedy pervert. But through his greedy perversion,
    0:55:21 he’s now collecting information that could be useful. Who could that be useful to?
    0:55:29 Who would like dirt on Prince Andrew? Think of all the people who were there. And these, you
    0:55:35 know, they were important people who, you know, went to Lolita Island. So if it isn’t Epstein
    0:55:40 directly, he might have been being, I’m not trying to let him off the hook because they have anything
    0:55:45 for him. He was either running his own blackmail business or someone was using him as a front
    0:55:50 for that. I mean, I think we’re kidding ourselves. We’re trying to pretend that’s not what was going
    0:55:58 on. So you think even American intelligence agencies would be willing to swoop in and take
    0:56:05 advantage of a situation like that? Well, you know, American politicians could ultimately
    0:56:11 end up in a position to oversee things like intelligence budgets. One of them might even
    0:56:16 become director. You never know. It can never tell what some crazy president might do.
    0:56:22 It could be very, one of the guys who understood the part was J. Edgar Hoover. J. Edgar Hoover
    0:56:27 spent a long time collecting dossiers and politicians. How do you think he’d remain
    0:56:36 director of the FBI as long as he did? Because he systematically collected dirt on people.
    0:56:44 So there is a history of this type of thing. And again, he could argue that’s partly for
    0:56:50 his protection to keep his job, to protect the sanctity and security of the Bureau.
    0:56:58 You can find a million different ways to justify that. It’s really dark. Well,
    0:57:06 there is that side to human nature. Let’s put it that way. Whether it’s the CIA or the Acrona,
    0:57:11 maybe that’s what the president of the United States sees when they show up to office is all
    0:57:20 the stuff they have on him or her. And say that there’s an internal mechanism of power that you
    0:57:25 don’t want to mess with. And so you will listen. Whether that internal mechanism of power is the
    0:57:29 military industrial complex or whatever, the bureaucracy of government.
    0:57:35 Kind of actually the deep state, the trenched bureaucratic. Well, it’s been said, and I think
    0:57:40 it’s generally true, that bureaucratic creatures or like any other creatures, it basically exists
    0:57:47 to perpetuate itself and to grow. I mean, nobody wants to go out of business. And of course,
    0:57:54 you get all of these things like Pizza Gate and accusations of one for another. But here’s an
    0:57:58 interesting thing to consider. Okay. And I want to argue that I’m not saying that Pizza Gate in
    0:58:02 any way was real or QAnon had to say that. But where do they get these ideas from?
    0:58:07 So let’s ask ourselves, do pedophiles exist?
    0:58:16 Yeah. Do organized pedophile organizations exist?
    0:58:21 Yeah, they share information, pictures. They’re out there on the dark web.
    0:58:34 They cooperate. So does child trafficking exist? Yeah, it does. So in other words,
    0:58:42 whether or not specific conspiracy theories about this or that group of organized pedophile
    0:58:51 cultists is real, all the ingredients for that to be real are there. Pedophiles exist. Organized
    0:59:02 pedophilia exists. Child and human trafficking exists. At some point, at some time, someone
    0:59:07 will put all of those together. In fact, certainly, they already have.
    0:59:15 We’ll jump around a little bit, but your work is so fascinating, and it covers so many topics. So
    0:59:21 let’s see if we jump into the present with the Bohemian Grove and the Bilderberg group.
    0:59:28 So the elites, as I think you’ve referred to them. So these gathering of the elites,
    0:59:35 can you just talk about them? What is this? Well, first thing I have to point out is that
    0:59:43 Bohemian Grove is a place, not an organization. It’s where the Bohemian Club meets. It’s that
    0:59:54 2700-acre, old-growth redwood near north of San Francisco. The Bohemian Club began back in the
    1:00:03 1870s. Its initial members were mostly journalists. In fact, supposedly, the name itself was a
    1:00:07 term for it. An itinerant journalist who moved from paper to paper was called the Bohemian.
    1:00:17 And although I think there may be other reasons why that particular term was chosen as well,
    1:00:22 but I think the original five members, there were like three journalists. There was a merchant,
    1:00:26 and there was a vintner guy owned a vineyard. It’s California, how surprising.
    1:00:31 None of them terribly wealthy, but they’ve formed an exclusive men’s club.
    1:00:38 Was and still is. Nothing terribly unusual about that at the time. But it became fashionable. And
    1:00:42 as it became fashionable, more wealthy people wanted to become part of it. And the thing
    1:00:47 about getting rich guys to join your club is what are rich guys have money? And of course,
    1:00:54 it’s one of those rich guys that bought Bohemian Grove, where now you build your old boy summer
    1:01:01 camp, which is what it is. They got cabins with goofy names. They go there. They perform skits.
    1:01:07 They dress up in costumes. True, some of those skits look like pagan human sacrifices,
    1:01:12 but it’s just a skit. What’s really going on there? So, on the one hand, you can argue,
    1:01:18 look, it’s just a rich guy’s club. They like to get out there. The whole motto
    1:01:24 of the place is weaving spiders come not here. So, whenever we’re going to talk about business,
    1:01:29 we just want to get out into the woods, put on some robes, burn a couple of effigies in front
    1:01:35 of the owl, have a good time, probably get drunk a lot. What’s with the robes? Why do they
    1:01:41 do weird creepy shit? Why do they put on a mask and the robe and do the plays and the
    1:01:49 owl and the sacrificing? I don’t know. Why do you have a giant owl? I mean, why do you do that?
    1:01:53 Well, what is that in human nature? Because I don’t think rich people are different than
    1:01:59 not rich people. What is it about wealth and power that brings that out of people?
    1:02:07 Well, part of it is the ritual aspect of it. And that clearly is a ritual. Rituals are pretty
    1:02:13 simple. Rituals are just a series of actions performed in a precise sequence to produce
    1:02:21 an effect. That describes a lot of things. It describes plays, symphonies, every movie you’ve
    1:02:28 ever seen. A movie is a ritual. It is a series of actions carried out in a precise sequence
    1:02:33 to produce an effect, but then added soundtrack to cue you to what emotions you’re supposed to be
    1:02:37 feeling. It’s a great idea. So the rich people should just go to a movie or maybe just go to
    1:02:44 a Taylor Swift concert. Why the owl thing? Part of it is to create this kind of sense,
    1:02:52 I suppose, of group solidarity. You’re all going to appear also a way of sort of transcending
    1:03:00 yourself in a way. When you put on the robe, it’s like putting on a uniform. You are in some way
    1:03:08 a different or more important person. It’s a ritual. The key ritual at Bohemian Grove is a
    1:03:13 thing called the cremation of care. And that’s what it’s supposed to be. We’re going to put all
    1:03:18 of our rich, important people. We have to make all of these critical decisions. Life is so hard.
    1:03:21 So we’re going to go out here in the woods and we’re going to kick back.
    1:03:28 And we’re all going to gather around the lake and then we’re going to carry. It’s wicker. It’s
    1:03:37 not a real person. And how would you know? And this is the cremation of our care, but it’s a
    1:03:42 ritual which is meant to produce a sense of solidarity and relief among those people who are
    1:03:50 there. The question comes down with the rituals is how seriously do you take them? How important
    1:03:55 is this to the people who carry them out? And the interesting answer to that is that for some
    1:03:59 people, it’s, you know, for some people, it’s just boring. I mean, there are probably people
    1:04:04 standing around the owl who think this is ridiculous and can’t wait for it to get over with.
    1:04:07 There are other people who are kind of excited about it and get caught up into it,
    1:04:13 but other people can take it very seriously. It’s all the matter of the intention that you have
    1:04:23 about what the ritual means. And I don’t mean to suggest by that that there’s anything necessarily
    1:04:31 sinister about what’s going on, but it is a, it is clearly a ritual carried out for some kind of
    1:04:37 group reinforcing purpose. And you’re absolutely right. You don’t have to do it that way.
    1:04:43 That’s not an, I mean, I’ve gone to summer camps and we never carried out mock sacrifices in front
    1:04:48 of an owl. All right. Yeah, we did all those other things. We didn’t even have any robes either. So
    1:04:55 it goes beyond merely a rich guy summer camp, although that’s an aspect of it.
    1:05:04 But it also, I think, often obscures that focusing on Bohemian Grove at the getaway of the club
    1:05:09 ignores that the club is around all the time. That’s what’s at the center of this. It is the club
    1:05:17 and its members. So despite all the talk about, no, no weaving spiders coming around here,
    1:05:21 one of the other features of the summer meeting are things called lakeside talks.
    1:05:26 And this often people are invited to go there. And one of the people who was invited,
    1:05:31 I think around 1968 was Richard Nixon, who was making his political comeback.
    1:05:40 And he was invited to give a talk where very important people are listening. And Nixon,
    1:05:44 in his memoirs, realized what was going on. He was being auditioned just whether or not he was
    1:05:49 going to be bred. He recognized that that was really the beginning of his second presidential
    1:06:00 campaign. He was being vetted. So one of the main theories, call it a conspiracy theory or not,
    1:06:06 about the Bohemian club and the gatherings is that people of wealth and influence gather together.
    1:06:12 And whether or not it’s part of the agenda or not, inevitably, you’re going to talk about
    1:06:17 things of interest. But to me, the mere fact that you invite people in, political leaders,
    1:06:21 to give lakeside talks, means that there are weaving spiders, which are going on.
    1:06:30 And it is a perfect private venue to vet people for political office.
    1:06:34 I mean, yeah, where else are you going to do it? If you’re interested in vetting,
    1:06:37 if you’re interested in powerful people selecting?
    1:06:41 Well, see, here’s the question. Are these guys actually picking who’s going to be president?
    1:06:46 Is that the decision which is being made? Or are they just deciding what horses they’re going to
    1:06:52 back? I think the latter is the simpler version of it, but it doesn’t mean this the other way.
    1:06:58 But these are the kinds of, I mean, Nixon was, there was the whole 1960 thing.
    1:07:07 So he’s the new Nixon. And this is where the new Nixon apparently made a good impression
    1:07:15 on the right people because he did indeed get the Republican nomination and he did indeed become
    1:07:22 president. Well, there could also be a much more innocent explanation of really it’s
    1:07:25 powerful people getting together and having conversations and through that conversation
    1:07:30 influencing each other’s view of the world. And just having a legitimate discussion of
    1:07:36 policies. But why wouldn’t they? I mean, why would you assume that people are not going to do that?
    1:07:42 It’s the owl thing with the robes. Why the owl and why the robes?
    1:07:50 Which is why he becomes really compelling when guys like Alex Jones, forgive me,
    1:07:54 but have not watched his documentary, I probably should at some point, about the Bohemian Grove
    1:08:04 where he claims that there is a Satanist human sacrifice of, I think, children.
    1:08:12 And I think that’s quite a popular conspiracy theory. Or is lost popularity, it kind of like
    1:08:20 transformed itself into the QAnon set of conspiracy theories. But, I mean, can you speak to that
    1:08:24 conspiracy? Let’s put it this way, the general public rich people are inherently suspicious.
    1:08:31 Yeah. Great. Let’s put it that way. First of all, they’ve got all that money and exactly
    1:08:39 how did one obtain it. And I do not, of necessity, adhere to the view that behind ever great fortune
    1:08:45 there is a great crime. But there often are. There are ways in which it’s acquired. But I think it’s,
    1:08:53 one of the things I think that can happen is particularly when people acquire a huge amount
    1:09:01 of money. And I won’t name any names. But let’s say there are people who perhaps in the tech sphere,
    1:09:06 who, coming from no particular background of wealth, suddenly find themselves with $600 billion.
    1:09:14 Well, what? This is the question you would have to ask yourself. Why me? Because you’re one of the
    1:09:18 rare, tiny group of human beings who will ever have that kind of wealth in your hands.
    1:09:26 Even if you are a convinced atheist, I think at some point you have to begin to suspect that
    1:09:31 the cosmic muffin, providence, whatever it is, put this money in your hands to do what?
    1:09:36 Achieve great things. Just think of all the stuff is. So you’re going to start a foundation and you’re
    1:09:43 going to start backing all the things that you like. I think there’s an element of ego that comes in
    1:09:53 with it as well. And again, it may not be so much what the rich person with a huge amount of money
    1:10:04 at their disposal and a lot of fuzzy ideas about what to do with it can be influenced by others.
    1:10:15 It’s always that question as to who’s actually manipulating these events? What’s going on in
    1:10:19 that regard? I think in some way, they can be a very useful sucker. Find somebody with a lot of
    1:10:29 money and get them to finance the things that you want them to do. The Bohemian club is, I don’t
    1:10:35 think, in and of itself, inherently evil or sinister, but it means that there are lots of
    1:10:39 different people in it who have different agendas. It goes back to what I said about how somebody
    1:10:44 feels about the cremation of care ritual. This is either just a waste of time. It’s just some sort
    1:10:56 of silly thing that we’re doing, or it is something of great importance, perhaps even mystical or
    1:11:02 religious importance, because that’s ostensibly what it’s pretending to be. It’s always this
    1:11:09 question as to what degree you begin to play and the play becomes serious. That tends to happen a
    1:11:17 lot. You’ve studied a lot of cults and occultism. What do you think is the power of that mystical
    1:11:24 experience? Well, what is broadly referred to, what’s occultism? What’s the occult? The occult is
    1:11:34 the hidden. That’s all it really means, specifically hidden from sight. The basis of it is the idea
    1:11:41 that what is hidden? Well, what is hidden from us is most of the world, most of reality. The basic
    1:11:47 concept within occultism, the basic concept within most religions, which are approved forms of
    1:11:55 occultism, is that the physical world that we are aware of is only a very small part of a much
    1:12:09 larger reality, and that what the methods and practices of occultism arguably do is to allow
    1:12:17 someone to either enter into this larger reality or to access that larger reality for purposes to
    1:12:23 be exploited here. The most interesting statement about, and a key element of this, becomes the
    1:12:28 thing called magic. Now, we all know magic. It’s a guy standing on stage performing a trick.
    1:12:35 But the interesting thing about a stage magician is that a stage magician is,
    1:12:44 we know when we’re watching this, that it’s a trick. Yet, we can’t really figure out,
    1:12:50 if he does it well, how that trick is being accomplished, because it seems to defy
    1:12:56 physical laws, and that’s what’s fascinating about it. So even though you know it’s a trick,
    1:13:01 if you can’t figure it out, it has this kind of power of fascination, but it’s mimicking something.
    1:13:11 Stage magic is mimicking real magic. So it’s real magic. Well, let’s go back
    1:13:15 to Alistair Crowley, because he always has to come. We knew he was going to come up at some point
    1:13:21 in this earlier than that, because he always does. All roads lead to Alistair Crowley.
    1:13:26 Alistair Crowley, and I’ve said this enough, so I should be able to get it right, but I’m paraphrasing
    1:13:36 here. He goes, magic, which of course he spelled with a K, or CK, is the art and science of causing
    1:13:44 change to occur in conformity with will. So in a way, that’s sort of mind over matter,
    1:13:50 but it’s the idea that one can, through will, through intention,
    1:14:01 bend reality to make something happen. Somebody once put it this way, it’s tipping the luck plane.
    1:14:07 So you know, you got some kind of a level plan. We’re just trying to do just tip it just a little
    1:14:14 bit, so the marble rules rolls over one side or another. Now that presupposes a lot of things,
    1:14:19 that is there a luck plane? I don’t know, but you know, it’s a good sort of idea to have. But,
    1:14:27 and here again, don’t become overly bothered trying to figure out
    1:14:35 whether you actually can bend reality. Become bothered by the fact that there are people who
    1:14:43 believe that they can, and will go to great efforts to do so, and will often believe they have succeeded.
    1:14:55 So it’s this effort to make things occur in a particular way, maybe just to sort of nudge
    1:15:00 reality in one little way or another. And that’s where things like rituals come in.
    1:15:06 Rituals are a way of focusing will and detention. We’re all there, we’re all thinking about the
    1:15:13 same thing. And you have to imagine just how, you know, the pervasiveness of what could be called
    1:15:18 that, that’s kind of magical thinking every day I was everywhere. So let me give you an example.
    1:15:24 Have you ever attended a high school football pep rally? Think of what’s going on there.
    1:15:32 Okay, your team is going to battle the other team. You’ve now assembled everyone in the gymnasium.
    1:15:39 You’ve got people who are dancing around in animal totem costumes. And what are you chanting?
    1:15:43 Everyone is supposed to chant that, you know, that the other team dies. Okay,
    1:15:46 that you’ll be horribly defeated and that our team will be victorious.
    1:15:54 That is a magic ritual. The idea is if it comes into this idea, it’s very popularly about
    1:16:01 visualizing things, visualizing manifesting, I love this term, you need to manifest your success.
    1:16:11 Well, that’s just magic. That is trying to cause change in conformity with will. So these things
    1:16:19 can happen without you being even consciously aware of what’s going on. And you don’t need to be
    1:16:26 because if you’re all a part of the, of the mob, which is there in the gymnasium,
    1:16:33 and you, you get into this and you get worked up and occultists would argue what you’re doing is
    1:16:37 that you’re creating a huge amount of energy. All of these people are putting energy into
    1:16:44 something and that energy goes somewhere and maybe you can maybe just maybe you actually can
    1:16:51 slightly increase the chances of your team’s victory. Of course, your opponents are having
    1:16:56 their own ritual at the same time. So whoever has the bigger mojo will apparently win on the team.
    1:17:04 So that’s a, I would say, trivial example of that, but a clear one. I do believe that there’s
    1:17:10 incredible power in groups of humans getting together and morphing reality. I think that’s
    1:17:17 probably one of the things that made human civilization what it is. Groups of people
    1:17:21 being able to believe a thing and bring that belief into reality.
    1:17:28 Yes, that’s your exactly right. Bring to conceive of something and then through intention
    1:17:39 will to manifest that into this realm. And of course, the, that power of the collective mind
    1:17:45 can be leveraged by charismatic leaders to do all kinds of stuff where you get
    1:17:52 cults that do horrible things or anything. There might be a cult that does good things.
    1:17:59 I don’t know. It depends. We usually don’t call those cults. Exactly. Without endorsing this
    1:18:03 entirely and interesting, one of the questions, what’s the difference between a cult and a religion?
    1:18:11 And it has been said that in the case of a cult,
    1:18:19 there’s always someone at the top who knows what’s going on. Generally, who knows it’s a scam.
    1:18:25 In a religion, that person is dead. So, see, I’ve just managed to
    1:18:32 insult every single religion. But it’s an interesting way of thinking about it,
    1:18:38 because I think there is some degree of accuracy in that statement.
    1:18:43 Do you think, actually, the interesting psychological question is, in cults, do you think the person
    1:18:49 at the top always knows that it’s a scam? Do you think there’s something about the human mind
    1:18:54 where you gradually begin to believe your own bullshit? Yes. That seems to be the…
    1:18:57 That again is part of magic, I think, is believing your own bullshit.
    1:19:03 It doesn’t necessarily mean that the head of the cult realized, but there’s someone,
    1:19:08 maybe the second, you know, always sort of looking in the lieutenant. Someone
    1:19:19 probably has an idea about what’s going on. The other thing that seems to be a kind of
    1:19:25 dead giveaway for what we would call a cult is what’s called excessive reverence for the leader.
    1:19:32 People just believe everything these people say. I give you an example. The first time I ever
    1:19:39 encountered anything like that was in Santa Barbara, California in the 1970s. I was going
    1:19:46 to grad school, and there was a particular cult locally, which I think was Brotherhood of the Sun.
    1:19:55 And it was the same so there was some guy who was… Among the other things, followers were
    1:20:01 convinced to hand over all their money and personal belongings to him. I believe he used
    1:20:09 part of that money to buy a yacht with. Anyway, a lot of it went to him. And then, of course,
    1:20:14 working for free upon different cult-owned business enterprises, of which there were several.
    1:20:19 And there was a person I knew who became a devoted follower of this, and it was…
    1:20:25 All I could think of at one point was ask them, “What the hell is the matter with you?”
    1:20:34 I mean, have you lost your mind? Why would you… What is it that this person can possibly be
    1:20:39 providing that you essentially are going to become a slave to them, which is what they were doing?
    1:20:45 And I actually give that credit in a way of sort of sparking my whole interest in things like
    1:20:51 secret societies. And here, again, as a disclaimer, I am not now nor have I ever been
    1:20:55 the member of any fraternal organization, secret society, or cult that I know of.
    1:21:05 And that’s what interests me about them, because I’m just always trying to figure out why people
    1:21:15 do these things. Like I said, why the robes and the owl? Why? Why do you do that? And it’s trying
    1:21:19 to figure it out. I mean, I couldn’t even hack the Boy Scouts. Okay, that was too much of that.
    1:21:22 Because to me, you join an organization, and the first thing that comes along is there’s
    1:21:27 somebody, there are rules, and someone is telling you what to do. Okay, I don’t like people telling
    1:21:33 me what to do. I spent much of my life trying to avoid that as much as possible. And join a cult,
    1:21:39 there’s going to be someone telling you what to do. Join the Bohemian Club, and there’s going to be
    1:21:47 someone telling you what to do. And obviously, a lot of people really get something out of that.
    1:21:53 It becomes, in some ways, it’s sort of necessary for them to function. But I do not understand it,
    1:21:58 and my study of it is a personal error to try to understand why people do that.
    1:22:07 And there are so many reasons, primary of which I would say is the desire in the human heart
    1:22:17 to belong. And the dark forms that take throughout human history, recent human histories,
    1:22:23 something I’d love to talk to you a bit about. If we can go back to the beginning of the 20th
    1:22:29 century, on the German side, you’ve described how secret societies like the Tulee Society
    1:22:35 lay the foundation for Nazi ideology. Can you, through that lens, from that perspective, describe
    1:22:41 the rise of the Nazi Party? Well, I guess we could start with what on earth is the Tulee Society?
    1:22:53 So the Tulee Society was a small German occult society, that is, they studied metaphysics.
    1:23:05 Another fancy word for occultism, that appeared in Munich around 1917, 1918.
    1:23:16 The key figure behind it was a German esotericist by the name of Rudolf von Zabotendorf.
    1:23:24 Okay, not his real name. His real name was Adam Rudolf Glauer. He was adopted by a German nobleman
    1:23:31 and got the name von Zabotendorf, and I like to say that name. So I had this real thing about
    1:23:37 vague, mysterious characters that show up and do things, and trying to figure out who these people
    1:23:42 are. So we’re working up in the years prior to the First World War, so the attack after
    1:23:49 so prior to World War I, he spent a lot of time in the Ottoman Empire. Turkey. There was none, and the
    1:23:59 Ottoman Empire, which was a fairly tumultuous place, because in 1908 and 1909, there was the
    1:24:08 Young Turk Revolution. And you had a kind of military coup, which effectively overthrew the
    1:24:16 Ottoman sultan and installed a military junta, which would go on during the First World War to
    1:24:22 make its greatest achievement in the Armenian genocide. Eventually, he created a genocidal
    1:24:27 military regime, which would lead the country into disastrous First World War, which would
    1:24:32 destroy the Ottoman Empire, out of which modern Turkey emerges. Yadda, yadda, yadda.
    1:24:38 And by the way, we should take a tiny tangent here, which is that you refer to the intelligence
    1:24:43 agencies as being exceptionally successful. And here in the case of the Young Turks being
    1:24:54 also very successful in doing the genocide, meaning they’ve achieved the greatest impact,
    1:25:00 even though the impact on the scale of good to evil tends towards evil.
    1:25:03 It’s one of those things that often comes out of revolutionary situations. Revolutions
    1:25:09 always seek to make things better, don’t they? We’re going to take a bad old regime.
    1:25:22 And the sultan was bad, I think it’s fairly safe. Abdulhamid II wasn’t called a red sultan
    1:25:29 because of his favorite color type of thing. And the idea is that they were going to improve,
    1:25:36 they were now going to, the Ottoman Empire was a multinational empire, they’re going to try to
    1:25:42 equalize and bring in the different groups. And none of that happened. It became worse.
    1:25:48 In the same way that you could argue that the goal of Russian revolutionaries was to get rid
    1:25:54 of the bad old incompetent medieval Tsarist regime and to bring in a new great shining future.
    1:26:03 And it became even more authoritarian. And the crimes of the imperial Russian regime
    1:26:08 pale the significance of what would follow in the same way that the crimes of Abdulhamid
    1:26:13 pale to when you get to the Young Turks. But that wasn’t necessarily the intention.
    1:26:20 But von Sabantendorf is a German businessman who’s working in this period. And the whole
    1:26:26 point here is that the Ottoman Empire in this period is a hotbed of political intrigue.
    1:26:32 You know, all kinds of interesting things about it. The Young Turk revolution is essentially
    1:26:41 a military coup, but it is plotted in Masonic lodges. Okay, I know technically Masonic lodges
    1:26:48 are never supposed to be involved in politics, but they are. Or, you know, the lodge meeting
    1:26:53 breaks up and then you plot the revolution. So, same group of people, but it’s not technically.
    1:27:00 But yes, and there’s the Macedonia resort lodge in Tessaloniki was ground zero
    1:27:08 for plotting this military coup that was supposed to improve the empire.
    1:27:14 Sabantendorf is in one way or another mixed up in all of this, or at least he’s an observer. Plus,
    1:27:22 he’s initiated into the Masonic lodges. And interestingly enough, the fellow who initiates
    1:27:27 him into one of these Eastern lodges is a Jewish merchant by the name of Ter Moody,
    1:27:38 and who’s also a cabalist. And also, Sabantendorf is very, very interested in the occult. He’s
    1:27:43 initiated into Eastern Masonic lodges in a period when those same lodges are being used
    1:27:52 as a center for political intrigue. He also apparently is involved in gun running,
    1:27:57 which in revolutionary periods is, you know, there’s a lot of money to be made off of that.
    1:28:04 So, he’s connected to various dark businesses in a tumultuous time
    1:28:12 with connections to politicized Freemasonry and the occult.
    1:28:23 Now, in the course of the First World War, he returns to Germany. He just shows up.
    1:28:35 And it would be my operative suspicion or theory that Sabantendorf was working for someone.
    1:28:41 I don’t think he just pops up in Munich on his own accord. Why does he leave the Ottoman Empire
    1:28:52 and return to that place? Who’s behind him? Well, maybe no one, but maybe someone,
    1:28:57 because he does seem to have money at his disposal. And he comes into Munich and he basically takes
    1:29:02 over this small sort of occult study group. Now, the interesting thing is that the Tully Society
    1:29:12 is really just a branch of another existing, what’s called an Ariosophist order,
    1:29:18 a thing called the German order or the Germanin-Ordn, which is centered in Berlin.
    1:29:27 But for some reason, he doesn’t want his group to be connected by name with the Germanin-Ordn,
    1:29:31 so Tully Society. Tully, in this case, is a reference to supposedly
    1:29:40 a mythical arctic homeland of the Aryan race. Apparently, they’re all snow people who wander
    1:29:44 out of the snow at some point. It’s kind of like a frozen Atlantis.
    1:29:52 So I mentioned these people, the Ariosophists, who have to practice saying that. So what are
    1:30:04 they? Well, they’re a kind of racist, Germanic offshoot of theosophy. And I know I’m explaining
    1:30:09 one thing to explain something, but there’s no other way to do this. So theosophy was 19th century,
    1:30:14 very popular and widely modeled occult belief that was founded by a Russian woman by the name of
    1:30:23 Helena Blavatsky. She was a medium psychic. She supposedly got channelings from the ascended
    1:30:28 masters. The basic story there, they’re all of the ascended masters, which are mystical beings
    1:30:33 that may or may not have once been human. They live inside the Himalayas or they float among them
    1:30:42 on a cloud, and they guide the spiritual evolution of humanity. But Blavatsky did was to take
    1:30:49 Western esotericism and blend it with Hindu and Buddhist esotericism, which became very,
    1:30:53 very sexy in the West still is. Buddhism attracts a lot of people because, well,
    1:31:01 it’s Buddhism. It’s different, see? So the Mahatmas, the ascended masters were sending
    1:31:05 your messages, despite the fact that she was later proven pretty much to be a fraud and writing the
    1:31:11 letters herself. Nevertheless, people still went along with this doctrine and it’s been widely
    1:31:19 modified and copied since then. So an idea in theosophy was that human spiritual evolution
    1:31:28 was tied to physical evolution. So in the case of Blavatsky, Blavatsky never said
    1:31:35 that Aryans, white people, anything out this were superior. She talked about the different
    1:31:41 root races, but it’s just a version of it. It’s just total gobbledygook that seems to include
    1:31:48 everyone. I defy you to make much sense out of it. But in the early 20th century, there were
    1:31:54 different sort of, you know, one of the things that became fashionable, you know, not terribly
    1:32:00 popular. These are small movements with the idea that, well, you know, Germany is a new
    1:32:06 upcoming country. And part of this, I think, was really trying to define who the Germans were.
    1:32:15 Because, remember, the German Empire, Germany as a political state, doesn’t come into existence
    1:32:23 until 1871. Prior to that, Germany was a geographic expression, a Vaguen, which described a large
    1:32:32 area in central Europe where a lot of people who wore leather shorts or something like that,
    1:32:39 and spoke similar German dialects, were nominally Germans. But they might be Prussians or Bavarians
    1:32:44 or, you know, they came in all sorts of varieties and religions. There was no
    1:32:50 German identity. Something very similar happened in Italy in the same period. I mean,
    1:32:54 there weren’t Italians. There were Sardinians, and there were Romans, and there were Sicilians,
    1:33:01 Umbrians, spoke, again, dialects of a similar language, but had never lived, you know, not
    1:33:06 since the Roman Empire under a single state and really didn’t think of themselves as the same.
    1:33:12 So you have to create this artificial thing. You have to create Germans. There’s now a
    1:33:20 big Germany with an emperor, and so we’re all going to be Germans. Well, exactly what is that?
    1:33:29 Much of it is an artificial creation. You know, you have to decide upon some sort of standard
    1:33:35 dialect. Okay, we’ll decide what that is. You know, often dialected, only a few people
    1:33:39 actually speak, and then it will be drilled into children’s heads through state schooling programs.
    1:33:45 So I think this is the kind of milieu that it comes out of. People were trying to figure out
    1:33:51 what on earth Germans actually were and the need for some sort of common identity.
    1:33:59 And, you know, that leads to everything like Wagnerian opera. Richard Wagner wanted to create
    1:34:04 a German mythical music, so he went back and stripped mind old German myths and cobbled them
    1:34:10 together into a lot of people standing on stage singing. And that was his purpose. He was a
    1:34:15 nationalist. He was, in many ways, a kind of racialist nationalist. And this was his idea of
    1:34:22 trying to create, out of bits and pieces of the past, a new fangled form of German identity.
    1:34:29 So on the more mystical end of this, you had the idea is that, well, Germany must have been
    1:34:32 created for some special purpose because the Germans must be very special people.
    1:34:37 And we must have some sort of particular destiny. And then out of this, you know,
    1:34:41 the direction this is heading, well, we’re all part of some sort of master race
    1:34:48 with some sort of ties to some sort of great civilization in the past. Call it Tully,
    1:34:52 don’t call it whatever you want to be. They basically just invent things
    1:34:58 and try to attach those to the past. And so
    1:35:07 Ariosophy was the Aryanized version of Theosophy. And what this did was to take the idea that
    1:35:13 spiritual and physical evolution had led to the most advanced form of human beings,
    1:35:18 which were the Aryans and the most advanced group of them were, of course, the Germans.
    1:35:25 And this attracted appeal, I can keep in mind, again, this was not a mass movement.
    1:35:30 This is very much a fringe movement. Most people weren’t aware of it and weren’t particularly
    1:35:35 interested in it. But it had an appeal for those who already had a kind of esoteric bent in some
    1:35:43 form or another. And this is where things like the German Order and their other groups, it was
    1:35:53 only one of many sort of grew out of. And what it was that the Tully society as a branch, the Tully
    1:36:02 Gesellschaft was supposed to do was to study this. It was an esoteric study group. And so people
    1:36:08 would get together and they’d talk about things, probably make more stuff up and all sort of work
    1:36:16 around this idea of German Aryans as the most advanced type of human beings and all the wonderful
    1:36:22 things that the future would hold. And the fact that this was in the midst of a war in which Germany
    1:36:31 was again fighting, they saw it for its existence, heightened those kinds of tensions as well.
    1:36:43 So my suspicion, again, is that Zbottendorf, in terms of who was behind him, that he was
    1:36:50 essentially called back to Germany to work either for the Prussian political police or for some
    1:37:00 aspect of German intelligence or security to try to mobilize occultism or esotericism for the war
    1:37:08 effort. Because again, this is 1918, the war has gone on way too long. Within a few months,
    1:37:14 Germany will collapse and it will collapse simply from the psychological exhaustion of the population.
    1:37:19 So this is almost like to help the war effort with a kind of propaganda,
    1:37:24 a narrative that can strengthen the will of the German people.
    1:37:30 It would strengthen the will of some people. You have to try to appeal to different aspects of
    1:37:36 this. But the mystical aspect is one of those things that can be. It can have a very powerful
    1:37:46 influence. The idea is that we can come up with some kind of mystical nationalism, maybe that’s
    1:37:51 one to put it, a kind of mystical nationalism that can be exploited for the workers. At this
    1:37:57 point, you’re kind of grasping at straws. And this is a whole period when the Germans are
    1:38:02 marshaling the last of their forces to launch a series of offensives on the western front,
    1:38:07 the peace offensive, which will initially be successful, but will ultimately fail and lead
    1:38:14 to a collapse in morale. But among the leadership of Germany, it was a recognition was that national
    1:38:23 morale was flagging. And one of the other things that was kind of raising its head was what had
    1:38:29 happened nearby a year. Well, the Russian Revolution, which had now brought the idea,
    1:38:33 which put another solution to all of this, the idea of revolutionary Marxism.
    1:38:39 Here we need to remind ourselves as to where Marxism comes from, not Russia, Germany.
    1:38:48 Where was the largest Marxist party in Germany? And Marx probably expected the revolution to
    1:38:53 begin in Germany. Where else? I mean, it’s the Soviet Union is not very industrialized,
    1:38:59 Germany is. And so that’s where it would probably. Russia 5% of the population is industrial workers
    1:39:04 in Germany. 40% of the population isn’t that. So if any place was like made for Marxism,
    1:39:09 it was Germany. I think that’s why it caught on in East Germany so well, because it did kind of
    1:39:18 come home. And it was a local belief. It wasn’t something imparted, imported by the Russians.
    1:39:27 It was a German invention. So the Tuley Society, one of the things you can see in this is the
    1:39:34 Tuley Society was particularly involved in sort of anti-Marxist or anti-Bolshevik agitation.
    1:39:41 They saw themselves, the Bolton sources, saw them as this whole movement. It was a counter to
    1:39:49 this. It was a kind of counter Marxist movement. Can we sort of try to break that apart in a
    1:39:58 nuanced way? So it was a nationalist movement. The occult was part of the picture. Occult racial
    1:40:05 theories. So there’s a racial component, like the Aryan race. So it’s not just the nation of
    1:40:12 Germany. And you take that and contrast it with Marxism. Did they also formulate that in racial
    1:40:18 terms? Did they formulate that in national versus global terms? Like how do they see this?
    1:40:23 Marxism formulates everything by class. People are categorized by class. You’re
    1:40:28 either part of the proletariat or part of the bourgeoisie. You’re either part of the proletariat
    1:40:33 or just some sort of scum. Really? It needs to be swept into the dustbin of history. Only workers
    1:40:43 count. And that was what would take someone who was a nationalist would sort of drive them crazy
    1:40:47 because their idea is we’re trying to create a German people. We’re trying to create a common
    1:40:52 German identity. But what the Marxists are doing is they’re providing Germans against each other
    1:41:00 by class. German workers hate the German bourgeoisie. German proletariat is opposed to German
    1:41:11 capitalists. We’re all trying to fight this war together. So that was why Marxism, particularly
    1:41:16 in the form of Bolsheism, was seen as unpatriotic and, of course, was opposed to the war as a whole,
    1:41:23 the idea that Perrin and Lenin was that the war was an imperialist war. And the only thing that
    1:41:29 was good that was going to come out of it is that the imperialist war through all of the crises it
    1:41:34 was creating would eventually lead to a class war. And that would be good because that would reconcile
    1:41:41 all of these things. But think of this, the two very different versions of this. The Bolshevist
    1:41:47 version, or let’s just call it the Marxist version of Germany, was going to be a class society in
    1:41:51 which we’re going to have to have some kind of civil upheaval which will have Germans fighting
    1:42:00 Germans. Whereas the kind of mystical nationalism, the almost kind of religious nationalism,
    1:42:07 the Zabotendor from the Tulli society had hitched its wagon to, held that Germans are all part of a
    1:42:15 single racial family, and that’s what must be the most important thing. And that these can be
    1:42:21 different ways of trying to influence people. It comes down to a matter of political influence.
    1:42:28 So in a sense, I think that what Zabotendor from the Tulli society was trying to do,
    1:42:35 at least within Munich, was to use this idea of mystical nationalism as a potential rallying
    1:42:41 point for some part of the population to oppose these other forces, to keep people fighting.
    1:42:49 The war is lost, though, in November. The Kaiser abdicates, and essentially,
    1:42:57 the Socialists do take over in Germany. Things come very, very close to following the Russian
    1:43:06 model. And you even get the Russian version or take on the Bolsheviks, which are the Spartacists,
    1:43:13 who try and fail to seize power early on. But you do essentially end up with the Socialist Germany.
    1:43:21 And that then leaves in the aftermath of the war, the Tulli society is sort of the
    1:43:26 odd man out, although they’re still very closely connected to the army.
    1:43:30 And here’s one of the things that I find interesting. When you get into 1919,
    1:43:34 who is it that’s paying Zabotendor’s bills? It’s the army.
    1:43:44 The one thing the German army is absolutely determined to do is to preserve its social
    1:43:50 position and power. And they’re perfectly willing to dump the Kaiser to do that.
    1:43:58 That’s sort of this deal which is made. In November of 1918, Kaiser’s abdication,
    1:44:05 the proclamation of a German Republic, which you just had this guy declare it,
    1:44:13 it wasn’t really planned. There’s the Abert Groner Pact. Groner is the Chief of Staff,
    1:44:22 General Staff at this point. Friedrich Abert is the Chief Socialist politician,
    1:44:28 basically, and they make an agreement. And the agreement basically is that the army will support
    1:44:37 Abert’s government if Abert supports the army. And particularly, that means the continuation
    1:44:43 of the officer corps and the general staff in one form or another. So a deal is made.
    1:44:48 And that, of course, is what will eventually help defeat the Spartacist uprising.
    1:44:52 Now, was the army doing the similar kinds of things that we’ve talked about with the
    1:44:58 intelligence agencies, this kind of same kind of trying to control the direction of power?
    1:45:06 The German intelligence landscape in the First World War is obscure in many ways. There are lots
    1:45:13 of things that are going on. Germany has a military intelligence service called
    1:45:19 Abteilung or Section 3B. That’s just plain military intelligence. They’re constantly
    1:45:24 trying to collect military information before the war about the weaponry and plans of the enemies,
    1:45:30 and then about what the operational plans were during the war. It doesn’t really go much beyond
    1:45:43 that, though. The German foreign office runs a kind of political intelligence service. And that’s
    1:45:50 the one which is much more involved in things like subsidizing subversion in Russia,
    1:45:56 which is one of the things that the Germans sign on to fairly early.
    1:46:05 Little Diversion here in 1915, there is a Russian revolutionary who’s lived much of his life in
    1:46:16 Germany who goes by the code name of Parvus. And he essentially comes to the Germans in Constantinople,
    1:46:20 interesting enough, in Turkey. He’s hanging around there the same time as Bottendorf is there,
    1:46:28 which I find curious. So Parvus or Alexander Helpand to give his actual name.
    1:46:32 Kind of so many goes, “Look, there’s a lot of revolutionaries in Russia, and there’s a lot
    1:46:37 of mistrust with the regime. We think that the war will increase the contradictions in Russian
    1:46:45 society. And if you give me a lot of marks, I can finance this revolutionary activity. And through
    1:46:51 subversion, I can take Russia out of the war.” Well, the Germans are facing a two-front war.
    1:46:58 That sounds great. We’ll use money in order to, but notice what they’re doing. The German General
    1:47:06 Staff, a very conservative organization, not a bunch of revolutionaries, are going to finance
    1:47:12 revolution in an opposing country. They are going to finance revolutionary subversion
    1:47:24 to take Russia out of the war, which basically works. So that gives you another idea as to what
    1:47:30 the German military is willing to do. They’re not revolutionaries, but they’ll pay revolutionaries
    1:47:38 to subvert another regime. Now you’ve got the problem is that the revolutionary regime
    1:47:45 that your money helped bring to power is now threatening to extend into your country.
    1:47:54 So the whole question for the army and for others in Germany in 1919 is how to keep Germany
    1:48:02 from going Bolshevik, from in a sense being hoist by your own batard. So the Tully Society,
    1:48:08 I don’t think is a huge part of this program, but it is a part of it. And it’s all an effort
    1:48:12 to try to keep control. And that’s why the army is financing them. That’s even why the army at
    1:48:19 some point then supplies them with its own propagandists. So the Tully Society begins to
    1:48:24 create under Subotendor’s leadership what he called the rings of Tully. And these are
    1:48:33 satellite organizations that aren’t the society as though, but they’re kind of controlled and
    1:48:41 inspired by it. And one of those is thing called the German Workers’ Party. And the German Workers’
    1:48:48 Party, again, is local. It’s not large. It’s not terribly influential. But what does it aspire
    1:48:58 to be? It aspires to be a party that will bring German workers away from the seductive influence
    1:49:05 of the Bolsheviks and into a more patriotic position, a patriotic. And the way that I
    1:49:13 describe this is that it’s not an anti-communist organization. It’s a counter-communist organization.
    1:49:19 So you don’t create something which completely opposes it. You create something which mimics it,
    1:49:26 which is ultimately what the German Workers’ Party will become is the National Socialist
    1:49:34 German Workers’ Party, known as that term “socialist.” And that is, in my view,
    1:49:39 what Nazism is from the beginning. It is a counter-communist movement.
    1:49:46 And by the way, for people who don’t know, the National Socialist German Workers’ Party is
    1:49:54 also known as the Nazi Party. So how did this evolution happen from those that complicated
    1:50:00 little interplay? We should also say that a guy named Adolf Hitler is in the Army at this time.
    1:50:06 Well, he’s going to come into this because, remember, he said the Army was going to supply
    1:50:11 its own propagandists, develop the German Workers’ Party and the Tule Society do their work,
    1:50:16 and the propagandists they supply them with is a man who the Army trains,
    1:50:25 sends two classes to learn the art of public speaking and propaganda, and that fellow
    1:50:32 is Corporal Adolf Hitler. So how does Adolf Hitler connect with the German Workers’ Party?
    1:50:37 Well, he’d been in the Army during the war, the only regular job that he’d ever had,
    1:50:41 kind of liked it. So you often get the view is that, well, at the end of the war,
    1:50:46 he joined millions of other German soldiers who didn’t have jobs. No, no, he stays in the Army.
    1:50:53 He stays in the Army until 1921. He’s on the Army payroll at the very time in which he’s
    1:50:59 held to set this up. What appears to have happened is this. Sabotin Dorf had organized
    1:51:06 the Tule Society. That didn’t have, you know, that had tried to oppose. There’s actually a brief
    1:51:14 period of time in which the Communists actually take over Munich, the Bavarian Soviet Republic,
    1:51:20 which doesn’t last very long. And eventually the Army and volunteers put this down.
    1:51:24 Well, that’s going on, by the way. Hitler is actually sitting in the
    1:51:32 barracks in Munich wearing a red armband because he is technically part of the soldiers who have
    1:51:39 gone over to the Bavarian Soviet Republic. He seems to have had flexible interests in this case.
    1:51:47 So once order is restored, so to speak, the Army comes in and decide that, well, one of the things
    1:51:56 we need, we need to have people who can lecture soldiers on patriotic topics. And so there is a
    1:52:01 particular captain by the name of Karl Meyer, who sort of spots Hitler. He later describes
    1:52:07 him as like a stray dog looking for a master. Hitler has a knack for public speaking. Other
    1:52:12 soldiers will listen to him. Now, some people can do that. Some people can’t.
    1:52:19 Meyer decides that he’s a good candidate for further training. So yes, they bring him in,
    1:52:28 they turn him into a voice called the Weymann, a kind of liaison man. He’s an Army propagandist.
    1:52:36 And then you’ve got this little outfit called the German Workers’ Party.
    1:52:41 And essentially what happens is that Hitler is sent in to take over leadership of that,
    1:52:46 which is what happens. He shows up, he attends a meeting, there are like 50 people there.
    1:52:53 By the way, the topic of the first meeting he’s at is how and why capitalism should be abolished,
    1:53:01 which is not what you might well expect. And because remember, the German Workers’ Party
    1:53:08 is trying to cast itself as a counter-bolshevism. So it’s not saying that capitalism is great,
    1:53:12 but it’s important. Now, capitalism is evil. We agree upon that. We just agree it has to
    1:53:18 be destroyed from a nationalist point of view as opposed from some sort of strange internationalist
    1:53:25 point of view. So Hitler is essentially, as I see it, sent in by the Army as their trained man
    1:53:34 to assume leadership within this small party and to use it for the Army’s patriotic propaganda
    1:53:41 campaign. And it’s a season doing so, even to the name change, to the National Socialist
    1:53:45 or German Workers’ Party. I mean, really, what sounds more red than that?
    1:53:55 So the interesting thing here is, from where did anti-Semitism seep into this whole thing?
    1:54:02 It seems like the way they try to formulate counter Marxism is by saying the problem with
    1:54:12 capitalism and the problem with Marxism is that it’s really Judeo-capitalism and “Judeo-bolshevism.”
    1:54:20 From where did that ideology seep in? Well, that’s a huge topic. Where does anti-Semitism
    1:54:26 come from? Let’s start with that term itself, a term which I have really grown increasingly to
    1:54:37 dislike because it doesn’t actually say what it means. Anti-Semitism is anti-Jewism. That’s
    1:54:44 all it is. I’m not sure whether there has ever existed a person who hated Jews, Arabs, and Maltese
    1:54:50 equally. That’s kind of hard to imagine. I don’t know. But that’s technically what that would mean
    1:54:57 because, let’s face it, most Semites are Arabs. So if you’re an anti-Semite, then you don’t seem
    1:55:05 to distinguish Jews from Arabs. It makes no sense. The origin of the term is invented by,
    1:55:13 I guess what, an anti-Semite. A guy in the 1870s, a German journalist by the name of Wilhelm Marr,
    1:55:22 who is, wouldn’t you know it, part Jewish himself and who decides that you really needed a better
    1:55:30 term than Judenhaas, “Jew-hate,” which was the term because that just sounds so inelegant, doesn’t it?
    1:55:38 Okay, what do you want to call yourself? A Jew-hater or an anti-Semite? See, anti-Semitism,
    1:55:43 it’s got that “ism” part of the end of it, which means it’s a system of belief. Anything that has
    1:55:49 an “ism” must somehow be scientific and important. It’s all part of the 19th century obsession with
    1:55:55 trying to bring science into something on one or the other. So we’re going to get rid of Jew-hate
    1:55:59 and we’re going to turn it into anti-Semitism. And we’re only going to be talking about Jews,
    1:56:07 but we’ll never actually say that. And somehow, the invention of a Jew-hater to disguise the
    1:56:12 fact that he’s a Jew-hater, even though he’s partly Jewish, by inventing the term anti-Semitism
    1:56:20 worked because everybody has bought it and repeated it ever since. So I don’t know, maybe just because
    1:56:28 anti-Jewism would just be, is it too direct in some way? Do we have difficulty confronting
    1:56:31 actually what it is that we’re talking about? I do wish terms were a little bit more
    1:56:37 direct and self-explanatory. Yeah, Jew-hate is a better term. Well, the question then comes,
    1:56:46 what exactly do you hate about Jews? And a lot of this has to do with, if you go back
    1:56:51 prior to the 19th century, if Jews were hated, they were hated for religious reasons. In Christian
    1:56:57 Europe, they’re hated because they weren’t Christians. And they existed as the only kind of
    1:57:03 significant religious minority, but other than that, they tended to live separately.
    1:57:11 They had little economic influence. Jews tended to live in stettles in the east,
    1:57:17 ghettos elsewhere. Some were involved in banking and business, but they sort of remained
    1:57:26 segregated from much of society. That changes when you get to the 19th century and with what’s called
    1:57:33 Jewish emancipation. And that means that between about 1800 and 1850, most European countries
    1:57:38 dropped the various legal or social restrictions against Jews. They are assimilated into the
    1:57:45 general society. So ideally, you stop being a German Jew and you become a Jewish German.
    1:57:54 Those are two very different important concepts. And what that does, of course, is that it opens up
    1:58:04 the professions business world elsewhere. So Jews move, who had been largely within those realms to
    1:58:09 begin with, they already had a good deal of experience in banking and business, and they
    1:58:17 move into those areas and professions and become quite visible. And that’s what then creates
    1:58:29 antisemitism. Because in some way, that is seen as part of the changes that have taken place.
    1:58:35 And there are a lot of things going on here. Part of it has to do with the kind of wrenching
    1:58:41 social and economic changes that took place with industrialization. So one of the things
    1:58:47 to keep in mind is that in the process of industrialization, just like today, whole classes
    1:58:55 of people were made extinct economically, craftsmen, for instance. So when factories came along and
    1:59:00 began to produce things with machines, all the crafts people who had made those things previously
    1:59:10 are now unemployed or go to work as wage labor in factories. So there are winners and losers
    1:59:18 in industrialization. And what people saw in Germany and elsewhere is that among this new
    1:59:24 sort of rising capitalist elite, among these new professions, among the bureaucrats that are
    1:59:33 coming out of these burgeoning states, there were visibly a fair number of Jews. So in some way,
    1:59:39 the rise of Jews in the minds of many people were connected to all of the other bad things that were
    1:59:45 going on. Now, the world was changing in a way we don’t like. And seemingly the Jews are prospering
    1:59:54 while I am not. And that was true in Germany and elsewhere. Jews became highly visible
    1:59:59 in the professions. They became very visible in banking. They became visible in legal profession.
    2:00:04 They became visible in the medical profession. And those are people that a lot of people would
    2:00:10 come in contact with, bankers, lawyers, and doctors. They were not the majority of there, but
    2:00:21 vastly overrepresented in terms of the general population, and especially within the cities.
    2:00:28 So in that sense, the roots of anti-Semitism to me is that Jews in Germany and elsewhere,
    2:00:33 and not just in Germany by enemies, France, Britain, everywhere else, became identified
    2:00:43 with the bad changes that were taking place. But you also found that Jews were not only
    2:00:49 prominent among capitalists, they were also prominent in the socialist movement as well.
    2:00:55 So one of the things you could look around, if we return to Germany in 1919 in the aftermath of
    2:01:01 World War I, and you look around in Bavaria or elsewhere, you tend to find that there are a lot
    2:01:10 of Jews in visible positions on the German left. Rosa Luxembourg is but one example of that.
    2:01:17 Eugen Levine, some of them came in from Russia. When the Soviets send a representative to Germany
    2:01:26 in this period, it’s Karl Radek, a Jew. So it wasn’t difficult to exploit that, to argue that
    2:01:37 just as the ranks of capitalism was full of Jews, the ranks of Bolshevism or of the revolutionary
    2:01:43 left were full of Jews, because you could easily go around and distinguish a great many of them.
    2:01:51 They don’t have to be the majority, they just have to be numerous, prominent, invisible, which they were.
    2:01:59 So this provided you a, in the case of the propaganda of the German army, the type of stuff
    2:02:03 that Hitler was spewed out, they could put all the anti-capitalist rhetoric in there,
    2:02:07 wanted to. The army was never going to overthrow capitalism, and the capitalists knew they weren’t
    2:02:13 going to do it. So go ahead, talk shit about us, we don’t really care, that’s not going to, because
    2:02:21 we know that the army would prevent that from happening. The way to then undermine the real
    2:02:28 enemy, it was a scene, the revolutionary left, was to point out the Jewish influence there.
    2:02:34 I mean, look at Russia, well Lenislav Trotsky, there he is, look, there’s a Jew, there’s one,
    2:02:37 Radek is a Jew, it wasn’t hard to find him in that regard.
    2:02:46 You gave a lecture on the protocols of the elders of Zion. It’s widely considered to be the most
    2:02:51 influential work of anti-Semitism ever, perhaps. Can you describe this text?
    2:02:57 Well, the protocols of the learned elders of Zion
    2:03:05 is probably one of the most troublesome and destructive works of literature that has ever
    2:03:17 emerged, and yet its origins remain obscure. So you get a whole variety of stories about
    2:03:23 where it came from. So the one story that is often, yes, that it was the work of the Ocarana,
    2:03:30 the Russian secret police, and in particular, it was all crafted in 1904 and 1905.
    2:03:43 In Paris, there’s a whole description of Piotr Raczkowski, who was supposedly the chief
    2:03:48 of the Ocarana at the time, was the man behind it, another fellow by the name of Matve Golovinsky,
    2:03:58 was the drafter of it, and that they had this document written by a French political writer
    2:04:05 from some decades back called Dialogue and Hell between Machiavelli and Montesquieu,
    2:04:13 which they were then adapting. Usually, it’s argued that they plagiarized it into the protocols,
    2:04:20 and none of that is really true. I mean, the first part about it is that at the time this
    2:04:24 supposedly took place in Raczkowski, he wasn’t working for the Ocarana, he’d been fired, and he
    2:04:30 wasn’t in Paris, and the whole situation which is described couldn’t have taken place because the
    2:04:38 people who did it weren’t there. It’s a story, but it provides a kind of explanation for it.
    2:04:43 So the protocols emerge. So they always have to go back. This is one of the things that
    2:04:56 I have found always useful in research is go back to the beginning. Find the first place this is
    2:05:02 mentioned, or the first version, or the first iteration. Where does it start?
    2:05:12 So you go back to St. Petersburg, Russia, run 1903. There is a small right-wing anti-Semitic
    2:05:21 newspaper published there called Znamja, Banner. And it publishes in a kind of serial form
    2:05:30 a work that doesn’t credit with any original author. And this is the first version of the
    2:05:35 protocols of the limited elders of Zion. But what it’s actually describing
    2:05:44 is a Judeo-Masonic plot to rule the world. Those two terms are always combined together.
    2:05:50 And in the earlier version, there’s far more mentions of Freemasons than there are Jews.
    2:06:02 The publisher of Znamja is closely connected to a thing called the Union of Russian People,
    2:06:09 the Union of Russian Men, which was ostensibly existed to defend the empire against subversion,
    2:06:15 and particularly against what it thought was Jewish subversion, when they also argued that the
    2:06:20 prominence of Jews in revolutionary movements somehow proved that this was in some way a Jewish
    2:06:24 revolution. But again, this is not a mainstream newspaper. It’s not appealing to a mainstream
    2:06:29 population. Very few people saw it. But this is where it appears. Now, keep in mind,
    2:06:36 that’s two or three years before it’s usually said to have been written. Or the other version
    2:06:40 is that there’s this crazy priest by the name of Sergei Nielis, and he wrote it,
    2:06:47 or actually appended it as an appendix to his work in 1905. Now, it was around before that.
    2:06:56 So Nielis didn’t create it. It wasn’t drafted in Paris in 1904 or 1905. It was serialized in an
    2:07:07 obscure right-wing Russian newspaper in 1903. And by the way, we should say that these are 24
    2:07:16 protocols. Well, it varies. It varies. That are, I guess, supposed to be like meeting notes about
    2:07:24 the supposed cabal where the Jews and Freemasons are planning together a world domination. But
    2:07:30 it’s like meeting notes, right? Protocol, which are Russian term basically for notes of a meeting.
    2:07:35 Well, as notes of a meeting, these are the goofiest things I’ve ever seen,
    2:07:41 because what you’ve got here, it’s not notes. No one takes notes from a meeting that way.
    2:07:48 What you’ve got is like the exposition of a bond villain. It’s all of this, boy, all of them are
    2:07:53 going to do this. And then the last thing you want to do is lay out your, if you get a plan for world
    2:08:03 domination, my suggestion would be don’t write it down. So it’s not notes of a meeting. It’s again,
    2:08:10 it’s another sort of narrative or story that’s being told. It bears no resemblance to the
    2:08:17 dialogue in hell between Machiavelli and Montesquieu. But what it is, the best thing, it’s not
    2:08:23 particularly readable in some ways. There was an Italian writer named Cesare Michalis,
    2:08:30 who wrote a book translated in English called The Non-Existant Manuscript.
    2:08:38 And what it is, is that he takes the different versions, starting with the 1902-1903 versions
    2:08:43 and looks through the other ones. And he tries to, in the process, to reconstruct what he thinks
    2:08:49 the original might have been. But the other thing he does, which was fascinating to me,
    2:08:58 is that he takes this whole sort of initial text. And in bold type, he indicates the paragraphs,
    2:09:04 but more often sentences or phrases that appear to be identical from the Jolie work.
    2:09:12 And they’re just scattered throughout it. There’s no particular grime or reason to it.
    2:09:19 You don’t plagiarize that way. I mean, who does that? It’s in here, it’s in there,
    2:09:26 which has led to a peculiar theory of mine, which of course I will have to expound upon,
    2:09:32 which is that I think that the original author of the protocols was the same Maurice Jolie.
    2:09:41 I think what someone stumbled across was a work which he wrote and never published
    2:09:47 and which he just drew. It’s exactly what someone would do working from your own
    2:09:54 kind of material. Because I’ve written things and then taken what I’ve written and then
    2:09:58 repackaged that into something else.
    2:09:59 Sudden seer, sudden seer.
    2:10:03 Yeah. And the same sort of thing comes out. Only bits and pieces of it remain.
    2:10:10 So why would Jolie have done that? Jolie was, we’re talking about a man whose career basically
    2:10:20 span the 1850s to 1870s. He’s an obscure figure. I’m not even totally sure he existed.
    2:10:25 I mean, but it’s one of those things you go looking for him.
    2:10:29 I love that you’re a scholar of people that just kind of emerge out of like the darkness.
    2:10:31 They just come from nowhere.
    2:10:35 And there’s the Ocarina there also. And we should also say this was,
    2:10:38 I guess, the original would be written. I mean, what’s the language of the original?
    2:10:39 Russian?
    2:10:44 Russian. But my hunch is that that’s adopted from a French version.
    2:10:47 First of all, they’re constantly harping on freemasons, which wasn’t nearly as a big idea as
    2:10:53 there. If you go back to France in the 1890s, there’s some big scandals. Well, there’s the
    2:10:58 Dreyfus scandal. We got that. All right. Well, you’ve got a Jewish officer on trial for being a
    2:11:03 traitor. All right. So that was probably, so you bring in the whole Jewish element,
    2:11:11 Jews, disloyal, Dreyfus case, 1894. Earlier, you had the Panama scandal, which was a huge
    2:11:16 investment scandal when the Panama Canal Company in Paris collapsed. And again,
    2:11:23 many of the major players in that work, Jewish financiers. And then you’ve got the Taxel hoax.
    2:11:30 So the Taxel hoax was the work of this guy. His real name was, I think,
    2:11:37 Joghan Paje. He was kind of a French journalist. He started out writing porn.
    2:11:44 So when he wrote things like Sex Lives of the Popes and the Erotic Bible and various things
    2:11:48 of that kind, he was a Catholic, broke with the Catholic Church, wrote bad stuff about the Popes.
    2:11:57 And apparently became a freemason for a while and then supposedly recanted his evil ways,
    2:12:02 went back to the church. And then under the name Leo Taxel began writing these whole series of
    2:12:11 articles, basically arguing that there was a Masonic satanic conspiracy run by the way by an
    2:12:20 American, Albert Pike. And this also included child sacrifice. It’s got pizza gate as well
    2:12:29 by a high priestess, Diana Vaughn. And so there’s child sacrifice, weird Roby Bohemian grove stuff.
    2:12:34 And the Freemasons are devil worshipers going back to the Knights Templars. And so there’s
    2:12:40 a thing called the Devil in the 19th Century and the Secrets of Freemasonry. And this became
    2:12:46 a bestseller in France. So France is just obsessed with all these kinds of conspiracies.
    2:12:55 So evil satanic freemasons, evil Jewish financiers, Dreyfus. This, this is the brew where all of this
    2:13:00 comes. I want to figure out how Freemasons and Jews get connected together. France is the place
    2:13:08 where this happens. Now, Taxel or Jogon Paget eventually pulls another interesting thing in this.
    2:13:16 Around 1897, critics argue that he’s making this stuff up and demand that he present Diana Vaughn.
    2:13:21 Suppose satanic high priestess, toddler killer. And he says, “Oh, we’re going to have a press
    2:13:26 conference. She’ll appear and say all of this stuff as she returns to the church and, you know,
    2:13:32 possibly becomes a nun.” And so people show up, you know, high figures in the Catholic Church
    2:13:37 shows up and he does. No Diana Vaughn. And Dogon Paget goes, “It’s all a hoax. I made it up. You’re
    2:13:43 all a bunch of idiots for believing it.” Okay, you, you members of the church especially, just
    2:13:49 just about gullible, you know, morons you are. And that’s it. He confesses to this day, however,
    2:13:53 you will find people who will insist that it’s actually true because they desperately want it
    2:14:03 to be true. But this is, I think, the milieu that, I like that word apparently, that this comes out
    2:14:12 of. And this is, this is this whole kind of unhealthy mix. So France to me is the only place
    2:14:18 then in a decade preceding it that something like this would be concocted. So it was either
    2:14:24 created by some sort of unknown person there. But I still think that even though he dies in
    2:14:36 like 1879 that in, in, in Maurice Jolie’s troubled career, he went from being an opponent of French
    2:14:42 emperor Napoleon III, which is what his, which is what the whole dialogues was written against.
    2:14:55 And then he was, for a time, a close political ally of a French politician by the name of Adolf
    2:15:01 Cremieux. So Adolf Cremieux, well, what’s he got going for him? Well, he was kind of a radical
    2:15:09 politician. He was an opponent of Napoleon III. He was a Freemason. Oh, and he was Jewish. In fact,
    2:15:14 at one point, I think he was actually the head both of the Scottish right
    2:15:26 in France and I have an important figure in the alliance Israeli, the Jewish organization in France.
    2:15:32 So he was publicly very prominently Jewish and Masonic. So someone else who would have linked
    2:15:38 them together. Jolie, as he did with virtually everyone, this is a guy whose life largely consisted
    2:15:52 of dual threats and fistfights. So he gets, he gets angry at Cremieux. And it’s exactly the type
    2:16:00 of thing that he might write to vent his spleen about it. But he died. Probably a suicide. That’s
    2:16:11 kind of difficult to tell. In obscurity, his son seems to have inherited most of his literary works.
    2:16:20 And his son then worked for new became a journalist work for newspapers in France
    2:16:26 in the 1890s, but was also associated with some people on the fringes of the Ocarina
    2:16:35 or the Russian press in France. So one of the little things that had happened by this time
    2:16:41 is that France and Russia had become allies, even though their political systems are completely
    2:16:50 incompatible. And so the Russians were using money to subsidize French newspapers that were
    2:16:57 championing the alliance between the two, Russian meddling. They’re just paying to have the right
    2:17:02 kind of newspapers come out. So there’s this whole connection between the kind of Russian
    2:17:10 journalistic world and the French journalistic world and all of these scandals which are going on
    2:17:18 in Jolie’s son. And then 10 years down the road, this thing pops up in a newspaper in St. Petersburg.
    2:17:29 That’s where I think the origins lay. Why do you think it took off? Why do you think it grabbed
    2:17:37 a large number of people’s imaginations? And even after it was shown to be not actually what it’s
    2:17:43 supposed to be, people still believe it’s real? Well, it doesn’t take off immediately. Okay,
    2:17:47 it never receives any kind of wide. I mean, nobody much reads the first edition of it.
    2:17:55 When it’s reedited, it keeps getting, there’s something like 18 or 19 different versions
    2:18:00 as it goes through. I mean, it gets, you know, people leave this protocol out or leave another
    2:18:05 one. As time goes on, there’s more and more emphasis on Jews and less and less on Freemasons.
    2:18:13 So it’s sort of, and the whole thing could have begun as an anti-Masonic tract. I mean,
    2:18:17 you could leave Jews out of it entirely and just turn it into a Masonic plot to rule the world.
    2:18:22 But let’s just throw them in as well, since the two things are already being combined elsewhere.
    2:18:31 It doesn’t become a big deal until really after the First World War, because the initial versions
    2:18:36 of it are all in Russian. And, you know, let’s face it, well, that’s widely read in Russia.
    2:18:41 It’s not much read anywhere else. It’s a different alphabet. Nobody can even see what it means.
    2:18:47 So it has no particular influence outside of Russia. But then you get the 1919,
    2:18:53 and you get all these different versions of it. So suddenly you get two English versions
    2:18:58 in the US, another English version in Britain, a German edition, a French edition, a Dutch
    2:19:05 edition. Everybody is coming up with these things. So it’s not until the immediate aftermath of the
    2:19:12 First World War that this metastasizes, and it begins to show up in all of these different foreign
    2:19:19 editions. And I think that it just has to do with the changes that have taken place
    2:19:26 during the war. One of the things that people began looking for was that, why was there a war?
    2:19:30 And we’ve just had this whole disastrous war, and the world has been turned upside down.
    2:19:36 So there has to be some kind of explanation for that. I don’t know. And one of the things
    2:19:41 this offered is, see, there’s this evil plan. There’s this evil plan that has been put into motion.
    2:19:48 And this could possibly explain what’s taking place. The reason why the protocols
    2:19:56 were, I think, widely bought then, and why they still are in many ways, is the same reason that
    2:20:02 the taxal hoax I was talking about was because it told a story that people wanted to believe.
    2:20:09 So in France in the 1890s, there was widespread suspicion of Freemasons.
    2:20:18 It was seen as a somewhat sinister, secretive organization, certainly secretive. And there
    2:20:30 was also the same sort of generalized prejudices about Jews, clannish, distinct, too much influence,
    2:20:36 all of the things that went on. So it was sort of easy to combine those two things together.
    2:20:45 And even though taxal admits it was a hoax, there were those who argued that it’s too accurate.
    2:20:52 It describes things too completely to be a hoax. And then you get the same arguments. In fact,
    2:20:58 I’ve heard the same arguments with the protocol. I don’t even buy this as an example of plagiarism,
    2:21:02 because you can’t actually prove what’s being plagiarized in any sense. To me,
    2:21:09 the protocols are a prime example of what I call a turd on a plate.
    2:21:17 These things crop up. I have to explain that now. What is a turd on a plate? Well,
    2:21:22 a turd on a plate is a turd on a plate. Suppose you come in and there’s a plate setting on the
    2:21:27 table and there’s a turd on it. Now, the first thing you’re going to want is, is that a turd?
    2:21:34 Is it a human turd? Where did it come from? Why would someone poop on a plate? There are all these
    2:21:40 questions that come to mind. It makes no sense. But that’s what you come up with. It’s just there.
    2:21:47 Right? I don’t know where it came from. I don’t know why, but there’s a turd
    2:21:51 on a plate. And that’s what the protocol is that they’re just there.
    2:21:55 But the reality is, just like with a turd on a plate, you take a picture of that in modern day
    2:22:00 and it becomes a meme, becomes viral and becomes a joke on all social media. And that was viewed
    2:22:05 by tens of millions of people or whatever. It becomes popular. So wherever the turd came from,
    2:22:11 it did captivate the imagination. Yeah.
    2:22:14 It did speak to something. But does it seem to provide an explanation?
    2:22:23 Can you just speak to Jew hatred? Is it just an accident of history?
    2:22:33 Why was it the Jews versus the Freemasons? Is it the collective mind searching for small group
    2:22:39 to blame for the pains of civilization? And then Jews just happened to be the thing that
    2:22:46 was selected at that moment in history. It goes all the way back to the Greeks.
    2:22:59 Let’s blame them. So one of the first occasions you find the idea that Jews are a distinct,
    2:23:11 mean-spirited, nasty people, goes back to a Greco-Egyptian historian named Manito.
    2:23:21 This is around, I think, 300 BC, early. Can’t even rope the Romans into this one.
    2:23:29 So Manito is trying to write a history of the dynasties of Egypt. I think his history of dynasties
    2:23:33 of Egypt still is one of the basic works in this. But he tells this whole story,
    2:23:40 which essentially describes the first blood libels that the Jews, to celebrate their
    2:23:46 various religious holidays, would capture Greeks and fatten them up in the basement and then slaughter
    2:23:51 them and eat them or drain their blood or do something. Yeah, it’s just the earlier version
    2:23:58 of that kind. Also, I think it repeats the Egyptian version of the Exodus out of
    2:24:06 Egypt, which is quite different than the biblical version. In this case, the Egyptian,
    2:24:12 they stole all the stuff out of the Egyptians’ houses and ran off into the desert.
    2:24:16 The Jews stole all the stuff and ran off. Yeah, Hebrews. Hebrews robbed the Egyptians.
    2:24:24 They were taken in. We took them in and sheltered them, gave them jobs, and then they stole all
    2:24:31 the jewelry and ran away. We didn’t even chase them. We were glad to see them gone. So it’s a
    2:24:40 different narrative on that story. But it essentially portrays the Jews as being hostile,
    2:24:48 that they don’t like other people. They’re contemptuous of other people’s religions,
    2:24:54 the rest of it. And see, the Greeks tended to think of themselves as being extremely cosmopolitan.
    2:24:57 Now, the Greeks ran across people worshipping other gods. They go, “Oh,
    2:25:03 this is just our gods under different names.” Everything was adjusted into their landscape.
    2:25:10 So you end up with that kind of hostility, which was there at the time.
    2:25:16 And that was probably influenced also by some of these earlier rebellions
    2:25:23 that had taken place in Egypt. During the Roman period, you not only have the Judean
    2:25:30 rebellion in 70 AD, but you have a couple of other uprisings in North Africa. And
    2:25:38 they’re very bloody affairs. And in some cases, Jews begin massacring other people around them
    2:25:42 to start killing the Greeks. The Greeks start killing them. So there was a fair amount of,
    2:25:48 from that period on, a certain amount of bad blood of mutual contempt between Greeks or between
    2:25:54 Hellenes, between the people who became Hellenized, as the Romans would be, and the Jews.
    2:26:02 And the Romans also seemed to have developed much of that idea. They considered Judea as to
    2:26:09 being a horrible place to have to govern, inhabited by a stubborn, obnoxious people,
    2:26:22 not well liked. So that’s really where you see the earliest version of that.
    2:26:25 And the reasons for it would be
    2:26:33 complicated. What you could say is that going back to Manito and to the Roman period,
    2:26:43 Jews, Judeans, frequently experienced difficulties, conflicts with other people living around them.
    2:26:50 And part of that probably had to do with the diaspora, which was the movement. Well,
    2:26:53 you get the idea, the Romans came and he kicked everybody out, which they didn’t. Jews had been
    2:26:58 leaving Judea since it was a poor, limited area, and moving to areas like North Africa,
    2:27:03 Egypt, Sireneca, all the way into southern France. They move widely around the Roman Empire.
    2:27:12 So that sense of both distinctness and hostility existed since ancient times.
    2:27:23 So it wasn’t just–the attitude of the Church towards Jews was mixed by–well, one of the ideas,
    2:27:30 of course, is that at the end of time, just before the Second Coming, one of the signs,
    2:27:34 how are we going to know that Jesus is going to return and the world is going to end?
    2:27:40 Well, the Jews will all convert. There will be a mass conversion. They’ll sort of see the light.
    2:27:46 Now, so there have to be Jews around to do that, or we won’t–it’s like a canary in a coal mine.
    2:27:50 You would have to have them there to tip it off. So that was one of the arguments as to why,
    2:27:56 within the Church, as to why Jews would not be forcibly converted, beyond the fact that it’s
    2:28:03 just kind of bad policy to forcibly convert people because you don’t know whether it’s sincere.
    2:28:14 But they need to be preserved as a kind of artifact which will then redeem itself
    2:28:22 at the end of time. It’s not something which is encouraged–it predates Christianity.
    2:28:33 And then Christianity, of course, in its own way, just sort of plagiarizes the whole Jewish thing,
    2:28:39 doesn’t it? I mean, I hesitate to use that term, but that’s what you do. It’s just like, well,
    2:28:44 we’re the Jews now. You used to have a unique relationship with God, but now it’s been passed
    2:28:55 over to us. Thanks for the Bible. I can remember that. And my mom’s side, I was periodically exposed
    2:29:01 to Sunday school. And pretty much, the Old Testament was always presented as if somehow
    2:29:10 it was the history of like a better term–Europeans in some way. It was sort of a Christian history.
    2:29:16 It was all the prequel to that. And there’d be some sort of–first, the term Hebrew was always
    2:29:22 used, never Jews. So the ancient Hebrews, and somehow the Hebrews just sort of became the
    2:29:26 Christians. And I don’t know, the Jews just got–they didn’t get a memo or something.
    2:29:31 So it’s basically like Christianity, the prequel is the Old Testament.
    2:29:36 But they just sort of took takeover. We have the special dispensation now. Thank you very much.
    2:29:42 You’re an artifact. So it’s interesting. So this whole narrative
    2:29:52 that I would say is kind of like a viral meme started, as you described in 300 BC.
    2:30:00 It just carried on in various forms and morphed itself and arrived after the industrial revolution
    2:30:07 into a new form to the 19th and 20th century. And then somehow captivated everybody’s imagination.
    2:30:15 I think that modern anti-Semitism is very much a creation of the modern world and the industrial
    2:30:23 revolution. It’s largely a creation of Jewish emancipation. It’s the nasty flip side of that.
    2:30:31 All of the restrictions are thrown off, but now also you become the focus of
    2:30:42 much more attention than what you had before. Prior to that, you had the kind of ghettoization
    2:30:50 which worked both ways. I mean, there were rabbis who praised the ghetto as a protection
    2:30:57 of Jews against the outside world because inside we can live our life as we wish and we’re
    2:31:07 unmolested. Whereas if we were, the great fear is that if we were sort of absorbed into this
    2:31:13 larger world, we’ll lose our identity. That sort of question comes up in the 18th century and things
    2:31:19 like the Haskala movement in Germany because the German Jews were always at the sort of cutting
    2:31:25 edge of assimilation and modernity. Moses Mendelssohn was an example of that. You’re arguing that,
    2:31:33 you know, we just need to become Germans. So as much as possible, synagogue should look like
    2:31:42 Lutheran churches, everything things should be given in good German, and that’s the way we need
    2:31:48 to become Jewish Germans. We don’t want to become a kind of group of people who are a part in that
    2:31:57 way. And that has created great tensions ever since. You know, one of the essential points that
    2:32:03 seems to me in anti-Semitism, anti-Jewism, is that all the Jews are in this together. Isn’t that
    2:32:08 one of the things? Okay, they’re always talking about as if they’re collective Jews this, Jews that,
    2:32:14 as if it’s a single undifferentiated mass of people who all move and speak in the same way.
    2:32:27 From my personal experience, not being Jewish. I’ve incredibly diverse. In many ways, really,
    2:32:34 one of the things that anti-Semitism proposes is a continuity or a singularity of Jewish identity
    2:32:41 that never existed. Just like you said, in one hand, there’s a good story. In the other hand,
    2:32:47 is the truth. And oftentimes, the good story wins out. And there’s something about the idea that
    2:32:51 there’s a cabal of people, whatever they are, in this case, our discussion is Jews,
    2:32:59 seeking world domination, controlling everybody, is somehow a compelling story. It gives us a
    2:33:06 direction of a people’s to fight, of a people’s to hate, on which we project our pain. Because life
    2:33:13 is difficult. Life, for many, for most, is full of suffering. And so we channel that suffering into
    2:33:19 hatred towards the other. Maybe if you can just zoom out, what do you, from this particular
    2:33:26 discussion, learn about human nature that we pick the other in this kind of way?
    2:33:36 And we divide each other up in groups and then construct stories. And like constructing those
    2:33:43 stories, and they become really viral and sexy to us. And then we channel the hatred. We use
    2:33:50 those stories to channel hatred towards the other. Well, yeah, Jews aren’t the only recipient of that.
    2:33:55 I mean, anytime you hear people talking about Jews, this or that, white people, this or that,
    2:34:00 black people, this or that, Asians, this or that, where they’re in undifferentiated mass,
    2:34:07 who apparently all share something in common. Well, then nobody’s really thinking.
    2:34:12 And the other thing you’ll find is that people who will express those views when press will argue
    2:34:16 that, oh, well, this, you know, if they actually know anybody from those groups, those are okay.
    2:34:22 You know, it’s like Nazis, though, they go, this isn’t okay, Jew. They’re all right. They were
    2:34:28 always constantly making exceptions. And one for, you know, what they actually met an actual human
    2:34:33 being, and they seem to be fairly normal. Well, they were okay. So what it was that they hated
    2:34:41 weren’t actual people for the most part, it was just this kind of gollywag vision that they had
    2:34:48 with them. You’re not even talking about real people. I don’t know. What does that tell you
    2:34:55 about human nature? Well, okay, in 70 odd years, what have I learned about my fellow creatures?
    2:35:03 One, I don’t actually understand them any better than I ever did. In fact, less so.
    2:35:09 Okay, I would say this. When I was 17, I thought that I had the world much more figured out than
    2:35:14 I do now. Completely deluded. But, you know, it seemed to make much more sense and I could
    2:35:24 categorize things. Basic take upon human beings, most people most of the time are polite, cooperative,
    2:35:38 and kind until they’re not. And the exact tipping point and moment in which they go from one to
    2:35:45 the other is unpredictable. God, that’s brilliantly put. Speaking of the tipping point,
    2:35:52 you gave a series of lectures on murderers, crimes in the 20th century. One of the crimes
    2:35:58 that you described is the Manson family murders. And that combines a lot of the elements of what
    2:36:03 we’ve been talking about and a lot of the elements of the human nature that you just described.
    2:36:07 So, can you just tell the story at a high level as you understand it?
    2:36:11 The Manson family. Well, you begin with Charles Manson, who’s the key element in this. And Charles
    2:36:19 Manson, for most of his life, up until the time that he’s around 33, is an unexceptional petty
    2:36:27 criminal. In and out of prison, reform school from an early age, not really associated with
    2:36:35 violent crimes. He did stuff like steal cars, write bad checks, became an unsuccessful pimp
    2:36:42 and drug dealer. So around 1967, he gets out of his latest stint in federal lockup and terminal
    2:36:48 island in Los Angeles, California. By that time, he’s learned how to play the guitar,
    2:36:56 has ambitions to become a musician, and also has proclaimed himself a Scientologist.
    2:37:00 Not that he ever seems to have practiced, but that’s when he reclaimed that he was.
    2:37:08 Kind of self-educated himself in prison to a certain degree. And so when he gets out of prison
    2:37:18 in ’67, he was a model prisoner. He behaved himself and seemed, you can sort of imagine his life is
    2:37:24 going in a completely different direction. And here again, I’m going to say something kind of good
    2:37:30 about Charles Manson, which is that he actually was a decent singer. If you really sort of listen
    2:37:37 to some of the stuff he did, he’s not a great singer, but he could have, you know, other people
    2:37:41 got recording contracts with less talent than he had, and he could play a guitar.
    2:37:47 The Beach Boys actually do record one of his songs without him.
    2:37:51 How would you evaluate Hitler’s painting compared to Charles Manson’s?
    2:37:55 Well, you’re supposed to say it’s terrible. Okay. Okay. It looks average to me.
    2:37:56 Yeah, landscape.
    2:38:03 I mean, if you didn’t know it was Hitler, would it, would you, I don’t know what
    2:38:09 people say about it. Sorry for the distraction. It’s just, you know, it’s just an average
    2:38:14 painter. That’s what it was. Something like crazy genocidal maniac paintings that you don’t have
    2:38:19 really have those. So Manson, he could have done that. He probably could have, you know,
    2:38:24 he made certain inroads into the music industry. And if he hadn’t been such a weirdo, he might have
    2:38:29 gotten further with it. But his life could have taken a different turn. So this is one of the
    2:38:34 questions I have. Where did a guy who becomes, who’s an unexceptional career petty criminal
    2:38:41 suddenly emerge into some sort of criminal mastermind, a Sven Galle, who can bend all of
    2:38:47 these people to his will and get them to go out and commit murder? That’s a, that’s a real shift
    2:38:54 that you have. So the first thing it kind of could tell you that something odd is going on is he
    2:39:03 gets out of prison in LA County. And he’s supposed, you know, he’s on parole. You know,
    2:39:08 parolees are supposed to have a job, not supposed to leave the jurisdiction of their parole. He
    2:39:15 heads straight for the Bay Area violates parole right off the bat. Two weeks later, he drifts
    2:39:20 into the parole office in the Bay Area, where upon he should have been arrested and sent back
    2:39:24 to Terminal Island. But instead, they just assign him a parole, I don’t know, maybe things were
    2:39:30 easier than in some way. So he gets assigned a parole officer, Michael Smith. Michael Smith
    2:39:35 is initially handling a number of parolees. But after a while, once he takes on Manson,
    2:39:41 he only has one parolee he’s supervising, Charlie Manson, which is odd.
    2:39:46 And you also find out that Michael Smith, in addition to being a parole officer,
    2:39:53 is a graduate student at the University of California studying group dynamics, especially
    2:40:00 the influence of drugs on gangs and groups. And he’s also connected to the Hayat Ashbury Free
    2:40:06 Clinic, which is a place where the influence of because Hayat Ashbury had lots of drugs and lots
    2:40:16 of groups. So, you know, Charlie Manson never gets a regular job, hangs around with a young
    2:40:25 girl’s ex cons, engages in criminal activity, is repeatedly arrested, but nothing ever sticks for
    2:40:35 the next couple of years. So who gets that type of thing? Who gets a get out of jail free card
    2:40:49 informants? So here is what, again, this is speculation. But Manson, at some point,
    2:40:54 after he got out of prison, is getting this treatment because he is recruited as a confidential
    2:41:04 informant. For who? For who? That’s the interesting question. So probably not for any local police
    2:41:12 departments. My best suspicion is probably the Federal Bureau of Narcotics, precursor to the DEA.
    2:41:21 You know, Federal Parolee, Federal Parole Officer, a graduate student in drugs and group
    2:41:27 dynamics. And eventually, with permission, he goes back down to LA. And what is he part of
    2:41:32 when he’s there? Well, he’s on the fringes of the music industry. Not so much, you know, these
    2:41:38 those the Wilson’s and elsewhere, which also brings him to the fringes of the film industry. So
    2:41:44 one of the things, if you’re sort of looking in terms of Hollywood music industry elites in the
    2:41:53 flow of, oh, and he’s also dealing in drugs and girls. So an early version of Jeffrey Epstein.
    2:42:04 Yeah, Manson distracted lots of underage runaways and trained them, used them also
    2:42:10 associating with biker gangs who produced drugs, etc. So that’s part of what he’s
    2:42:15 he’s an informant in the movement of drugs, basically within the film music industries.
    2:42:19 And he’s given pretty much a kind of free rein at that point.
    2:42:27 What then happens in August of 1969 is that there are these murders, you know, first Sharon Tate
    2:42:34 and her friends in Cielo Drive. I think everybody has probably pretty much heard that story before.
    2:42:40 And of course, the question is, why Cielo Drive, why Sharon Tate, Frikowski and the rest of them,
    2:42:45 that he have some Manson was familiar with the place he had been there before members of the
    2:42:51 family had been there before. So he knew where it was. It wasn’t an easy place to find. I mean,
    2:42:56 the way that that house, the house, the original house is no longer there, but the same sort of
    2:43:02 property in a house is built there. And if you didn’t know where it was, it’s not some place.
    2:43:06 Let’s just go for a drive in the Hollywood Hills and murder people in a house. Well,
    2:43:11 that isn’t the one that you would come across. There are lots of connections there.
    2:43:16 Vojtek Frikowski, who was one of the people killed at the Cielo Drive house, was involved in drug
    2:43:22 dealing. That’s a possible connection between the two, probably a fairly likely one. Probably not
    2:43:30 unfortunate Sharon Tate at all. She was probably in the wrong place at the wrong time. Her husband
    2:43:37 might have been, you never know. And then the next night after the slaughter there,
    2:43:41 which by the way, Manson is not at. So this is one of the interesting things about it.
    2:43:45 Charles Manson doesn’t kill any of these people. His crime is supposedly
    2:43:55 ordering the killings to be done. He supposedly thought that the killings at the Tate House were
    2:44:01 sloppy. And he was going to give everybody a crash course in how you apparently commit
    2:44:05 seemingly random murders. So the next night he takes a group of people over to the La Bianca’s
    2:44:13 house in a different section of LA. And you got Lena Rosemary, La Bianca. The guy is a grocer.
    2:44:21 His wife runs a dress shop, upper middle class. And they’re bound to gagged and hacked to death.
    2:44:28 And as at the Tate residence, various things like piggy are written, various messages in blood,
    2:44:33 things that are supposed to look like cats paws, because one of the groups trying to be
    2:44:39 framed for this was the idea was the Black Panthers. So the general story that comes out
    2:44:44 in the subsequent trial is that this was all a part of something called Helter Skelter,
    2:44:49 which Manson supposedly was an idea that that sounds like a Beatles song. That’s where he got
    2:44:53 it from. He thought the Beatles were talking to him through their music and that there was going
    2:45:02 to be an apocalyptic race war. And this was all part of a plan to set this off. So this is why
    2:45:10 the Black Panthers were trying to be implicated in this, although how it was supposed to do that
    2:45:18 is never really explained. Here is what I think was really happening, what really happened,
    2:45:25 and how I think it fits together. Before Sharon Tate and her friends or the La Biancas were killed,
    2:45:31 there was a murder by members of the family of some of the same people involved in the
    2:45:36 later killings of a musician drug manufacturer by the name of Gary Hinman.
    2:45:46 So Manson, again, was involved in the drug trade, and Hinman made them. He was a cook, basically,
    2:45:53 and he brewed them up in his basement, sold the drugs to Manson, who sold them to biker gangs,
    2:45:57 like the straight satans, which was one of the groups that he used, and they distributed them
    2:46:04 elsewhere. Well, one day, the straight satans show up and complained that the last batch of
    2:46:12 meth or whatever it was that they got from Manson had made some of their brothers very,
    2:46:17 very ill, and they were quite unhappy about that. And they wanted their $2,000 back.
    2:46:27 Manson had gotten those drugs from Gary Hinman. So he is unhappy, and he sends
    2:46:31 Bobby Bosaway and a couple of the girls over to Hinman’s place to get the money from him.
    2:46:39 As the story is later related, I think, by Susan Atkins, Hinman denied that there was anything
    2:46:46 wrong with his drugs and refused to pay up, which led to an interrogation torture session in which
    2:46:52 he was killed. And the idea was here, what are we going to do with that? Well, one of the other
    2:46:57 groups that Hinman had sold drugs to were, guess what, people associated with the Black Panthers.
    2:47:07 So we’ll leave these things up, and they will do it. So it’s Bobby Bosaway, who then takes Hinman’s
    2:47:15 car and decides to drive it up the coast. By the way, with a bloody knife with Hinman’s blood and
    2:47:21 hair on it and blood on the seats in the car, and then he pulls it off the road and decides to sleep
    2:47:30 it off, and he gets busted. So find Hinman’s body, find Bosaway and Hinman’s car with a bloody knife
    2:47:38 with him. Yeah, he gets arrested. So Bosaway was very popular with some of the girls. There’s
    2:47:45 consternation in the family that Bobby has been arrested. So how can we possibly get Bobby out
    2:47:52 of jail? Copycat killings. So if we go kill more people and we make it look the same, then see,
    2:47:58 Bobby couldn’t possibly have done it. Now, see, he just borrowed the car. Okay, he stole the car,
    2:48:04 but the knife was already in it. He didn’t have anything to do with this. So that, to me, makes
    2:48:09 the most sense out of what followed. How often do people talk about that theory? That’s an interesting
    2:48:14 theory. Well, it’s there. It’s just not the one that, that, luckily, obviously, I wanted to go with
    2:48:19 healthy scouter because it was, again, it was a story that people could understand. Yeah. And
    2:48:27 it was sensational and it would catch on. Also another probable issue in that was that his star
    2:48:34 witness was Linda Kassabian. Linda Kassabian, she was present at both the Tate and the LaBianca
    2:48:40 murders. She didn’t participate in the killings according to her. She sort of drives the car,
    2:48:46 but everybody else talked about what had happened. Well, okay, she turns states evidence
    2:48:54 and gets total immunity. And it’s largely in her testimony that all the rest of the case is based.
    2:49:02 Now, if you start throwing into the equation that she proclaimed her love for Bobby Beausoleil,
    2:49:07 and this could, and that she, according to others, was the chief proponent of the copycat killings,
    2:49:15 well, then that would get messy. Now, there’s one guy that’s at the center of this. It’s Charles
    2:49:24 Manson. He ordered all of this done to ignite a race war, even though how would any of that do it?
    2:49:29 Okay. So that doesn’t make sense. But he is nevertheless at the center of this
    2:49:34 because he’s the glue of the family, right? He exerts a tremendous amount of psychological control
    2:49:39 over them. How was he able to do that? Sorry to interrupt. Because he said he was a petty criminal.
    2:49:45 It does seem he was pretty prolific in his petty crimes. He did a lot of them. He had a lot of
    2:49:57 access to LSD, which he started getting at the free clinic in San Francisco. So lots of it
    2:50:02 floating around. Some descriptions of the family at Spawn Ranch is that people were basically
    2:50:08 taking acid on a daily basis, which, by the way, was also a potential problem with Linda
    2:50:13 Cassabian’s testimony, since she also admitted to being high most of the time and also thinking
    2:50:18 she was a witch. All right. So you want to put her, okay. Where do you want to go with that?
    2:50:24 See, if Manson wasn’t Manson, if he hadn’t acted like such a complete, if he hadn’t actually
    2:50:32 acted like the crazed hippy psycho goofball that Boogliosi painted him as being, then
    2:50:37 Cassabian’s testimony wouldn’t have been as strong because you could. I mean,
    2:50:43 the first thing against her is you’ve gotten immunity for telling the story the prosecution
    2:50:49 wants. That’s a little iffy. And we won’t even bring in the witch and the drugs and being in
    2:50:54 love with Bobby Poseley. All right. So if Manson had been dressed like you, sitting there in a
    2:51:02 suit and you know, and it behaved himself and spoken normally, this isn’t to say that he wasn’t
    2:51:10 guilty as hell. So what he supposedly did was to inspire all of these killings.
    2:51:19 And I think that’s probably sort of beginning with the Hinman killing.
    2:51:25 He told them to go over there and get the money one way or the other. I don’t know whether it’s
    2:51:31 clear whether he told them if you don’t get the money, kill him, but Hinman’s dead. And then
    2:51:39 you might also have seen the value in terms of having copycat killings as a way of throwing
    2:51:44 off any other kind of blame. The other story you get is that one of the people who had lived at
    2:51:50 the Cielo house for Sharon Tate was before was a record producer by the name of Terry Melcher.
    2:52:00 Melcher supposedly, as the general story goes, had welched on a deal with Manson in terms of
    2:52:05 a record contract. He screwed over Manson in some sort of a record deal and Manson wanted to get
    2:52:12 revenge and sent them to kill everybody in the house, which again doesn’t make much of sense.
    2:52:18 One, Manson knew that Melcher wasn’t living there anymore. He probably knew where Melcher
    2:52:22 was living. If he wanted to get Melcher, he could have found him. It wasn’t that difficult to do.
    2:52:38 So it’s not revenge on Terry Melcher that drew him there. He was familiar with the house,
    2:52:45 so if the idea was to simply commit random killings that would muddy the whole waters
    2:52:50 with the Hinman killing, then you might pick some place you knew of. You knew the place
    2:52:54 where to run it out. There would be someone there. You really didn’t care. In the same way that the
    2:53:01 lobbyonkas seemed to have been, Manson was familiar with that because it supposedly had been the scene
    2:53:09 of creepy crawling. This is little interesting things that the family would be taught to do. Creepy
    2:53:16 crawling is when you sneak into somebody’s house at night. While they’re there asleep or when they’re
    2:53:22 not there and you move things around. So when they get up in the morning or they come home,
    2:53:27 they’ll suddenly notice that someone has been in their house, which will freak them out,
    2:53:32 which is the whole point of that. But it doesn’t seem like the murder or the creepy crawling was
    2:53:38 the… well, creepy crawling may be. But it doesn’t seem like the murder, like some of the other
    2:53:44 people you’ve covered, like the Zodiac killer, the murder is the goal. Maybe there’s some
    2:53:51 psychopathic kind of artistry to the murder that the Zodiac killer had and the messaging behind
    2:53:57 that. But it seems like with at least the way you’re describing it with Charles Manson family,
    2:54:02 the murder was just the… they just had a basic disregard for human life and the murder was a
    2:54:09 consequence of just operating in the drug underworld. So Manson set up a kind of base,
    2:54:15 a thing called the Spawn Movie Ranch, which was an old movie ranch out on the northwest edge of LA.
    2:54:24 And they just kind of camped out there. He used the girls in particular, Squeaky From, to get the
    2:54:34 owner or operator, George Spawn, to let them hang out there. And basically, she slept with him and
    2:54:38 he was perfectly happy to let them hang out. They also had a place out in the desert that they had.
    2:54:45 They dealt in credit card fraud, stolen cars. It was kind of a chop shop that they ran out of the
    2:54:55 place. So he had a fairly good little criminal gig going, which with the protection he had,
    2:54:57 probably would… the one thing they couldn’t cover him on was murder.
    2:55:02 So you think there was… if he was an informer, you think there was still a connection between
    2:55:07 DEA, FBI, CIA, whatever with him throughout this until you come into murder?
    2:55:12 Well, the real question is there is a book written on this by Tom O’Neill called Chaos.
    2:55:16 And that sort of thing is the easiest thing to get through. There’s a lot of material there.
    2:55:20 I don’t think O’Neill necessarily knows what to make of some of the stuff he came up with.
    2:55:26 But he does a very good job of sort of demolishing the whole Boogliosi narrative.
    2:55:32 And one of the people he mentions is a name that I had run into elsewhere.
    2:55:37 And so I really paid attention to it when I saw it again. And the name is Reeve Whitson.
    2:55:48 Reeve Whitson shows up on the fringes even though he has no judicial function. He sort of hangs
    2:55:52 around Boogliosi in the prosecution. He’s some sort of advice. He’s just kind of there.
    2:55:59 In the same way that he was one of these guys, he grew his hair kind of long, wore bell bottoms,
    2:56:05 hung around the music community and elsewhere in Hollywood, but no one could tell you exactly what
    2:56:13 he did. I know what he did later, but a decade later, he shows up as a CIA officer in Central
    2:56:30 America. So Reeve Whitson, later in his career at least, is CIA. What was he in 1969? What is he
    2:56:37 doing in this? The other thing about it is he appears to have been the person who called…
    2:56:43 There’s a whole question of when the bodies at C.L.O. Drive are discovered. So the general story is
    2:56:49 that Sharon Tate’s housekeeper shows up around 8.30 in the morning, finds the bloody scene and goes
    2:56:56 screaming next door. But there was another fellow who knew, I think the owner of the house is a
    2:57:00 photographer, the last name may be Hatami. He gets a call earlier in the morning saying that
    2:57:13 there have been murders. There. And the person he recalls calling him is Reeve Whitson. So someone
    2:57:20 had been at the house before the bodies were discovered and they had not called the police.
    2:57:31 So I don’t know what’s going on there, but it’s a curious kind of situation.
    2:57:42 And Manson in a lot of ways just self-immolates himself. His behavior at the trial is bizarre,
    2:57:49 it’s threatening, it’s disruptive. He’s got his girls out on the street carving Xs in their forehead,
    2:57:56 carrying knives. One of the attorneys, initially his attorney, Ron Hughes,
    2:58:03 becomes Van Houten’s attorney. And he figures out that the three girls supposedly on Charlie’s
    2:58:09 insistence are going to confess. And they’re confessed that it was all their idea and Charlie
    2:58:17 had nothing to do with it. Hughes doesn’t like this because his defense for her is that she was
    2:58:25 under his influence and therefore not responsible for her own actions. He was having psychic control,
    2:58:30 so he refuses to go along. Whether there’s a break in the trial, he goes camping up in the
    2:58:37 mountains with some friends, disappears during a rainstorm, and then some months later his decomposed
    2:58:45 remains are found. Now rumors, always the rumors. Okay. What would history be without rumors?
    2:58:53 Hell, see members of the family, they were pissed off at Ron Hughes because he messed up Charlie’s
    2:58:58 idea to get him off and so they killed him. Maybe they did, maybe he drowned. That’s absolutely
    2:59:04 impossible to say. You got that kind of story, there’s a guy named Juan Flynn who was an employee
    2:59:10 at the Spawn Ranch, didn’t like Manson, held Manson responsible for the murder of his boss.
    2:59:15 He would testify that Manson told him that he had ordered all the killings and that Manson
    2:59:24 also admitted that he had killed 35 people. Maybe he did, on the other hand, Juan Flynn
    2:59:29 didn’t like him and he had no, other than his word, had no real proof of what he was saying.
    2:59:35 So please understand me in this case, is that unlike some people who argue that
    2:59:43 Charles Manson got a raw deal, I don’t think that’s the case. I think that he influenced
    2:59:57 tremendous influence over the people there through drugs, through sex was another frequent component
    3:00:03 in it. He had a real whammy over a lot of these people’s minds. I’m not sure how, that still kind
    3:00:09 of puzzles me. He was a scrawny guy and he wasn’t physically intimidating. I mean, even a lot of
    3:00:14 women wouldn’t be physically intimidated by him, but he nonetheless had this real psychological
    3:00:20 power and if you look around him, the male followers he had were fairly big guys.
    3:00:30 So he could get people to do what he wanted. And again, to me, the simplest explanation for this
    3:00:35 is that it began with the Hinman killing and probably on Manson’s instigation,
    3:00:41 the others were copycat killings to throw off what was going on. That would, if I was a cop,
    3:00:46 that’s what I would focus on because that seems to make the most sense.
    3:00:51 It’s still as fascinating that he’s able to have that much psychological control over those people
    3:00:56 without having a very clear ideology. So it’s a cult.
    3:01:01 Yes, the great focus on Charlie the leader, the excessive devotion.
    3:01:08 But there’s not an ideology behind that. There’s something like Scientology or
    3:01:14 some kind of religious or some kind of, I don’t know, Utopian ideology, nothing like this.
    3:01:21 No, I think that Manson, again, was essentially a criminal. He had a sociopathic mindset and
    3:01:28 he hit upon a pretty good deal. Yeah, but how do people convince anybody of anything? With a cult,
    3:01:34 usually you have either an ideology or you have maybe personal religion, like you said, sex and
    3:01:38 drugs. But underneath that, can you really keep people with sex and drugs? You have to kind of
    3:01:45 convince them that you love them in some deep sense. There’s a commune of love.
    3:01:51 You have a lot of people there in the cult. They have some sort of what we like to call dysfunctional
    3:01:57 families. Yeah. A lot of the females in particular seem to have come from more or less middle class
    3:02:06 families, but those are full of dysfunction. Their parents didn’t love them. There are semi
    3:02:14 runaways and now they had this whole family. A lot of the younger women had children,
    3:02:20 some of them by Manson, some of them by the others. They sort of bonded together.
    3:02:28 And again, we return to that pull towards belonging that gets us humans into trouble.
    3:02:39 So it does seem that there were a few crimes around this time. So the Zodiac killer.
    3:02:48 Well, California, but I’m from. So I remember this period vividly. Okay. So by the way, the
    3:02:54 Tate Lobbianca killings occurred on my birthday the year I graduated from high school. So I remember
    3:03:00 this. Happy birthday. A term which has been used for that. There’s a writer by the name of Todd
    3:03:07 Wood who’s toying. I wish I’d come up with this killer fornia, which is just sort of a chronicle
    3:03:13 of the serial killers and disappearances in the late ’60s and ’70s. So you’ve got the Zodiac,
    3:03:18 you’ve got other ones. I mean, I hate to say it. I’m not trying to be flippant about it,
    3:03:23 but I mean, young female hitchhikers were disappearing at an alarming rate in Northern
    3:03:32 California. There are bodies that have never been attributed. Some think that the Zodiac’s
    3:03:41 victims, but it was a dangerous time. Edmund Kemper, the co-ed killer was another one. There
    3:03:47 were a lot of creepy psychopaths running around. I don’t know whether it was something in the water
    3:03:57 or what was going on, but it was a menacing in some cases. Hitchhiking, especially if you were
    3:04:03 alone and female, was not something you wanted to do in much of the Golden State, certainly not
    3:04:08 up around the Bay Area. So a lot of these strange sort of killings that were going on,
    3:04:13 the Zodiac is one of those things where you have these people who have theories about it,
    3:04:20 and if you don’t share their theory, then you’re part of the problem in some form or another.
    3:04:24 So I’m not sure, for instance, that the Zodiac killings were all committed by the same person.
    3:04:27 I think there might have been multiple people involved.
    3:04:35 And the first killings are all of couples. It’s very sort of clear that they– I remember
    3:04:41 in my examination of it, one of the things I was looking at specific, what else is there to say
    3:04:45 about the Zodiac killing? So what I was going to look at is that there are all of these
    3:04:50 accusations that there is an occult aspect to it. There was some sort of ritualistic
    3:04:58 aspect. So I looked at different things, locations, victims, phases of the moon,
    3:05:02 that’s always worth looking at. I didn’t find much correspondence in any of those.
    3:05:10 In one of the killings, I think the one at Lake Berryessa, he does appear in this kind of weird
    3:05:17 hooded costume. He’s got his symbol that sort of compass or aiming rectical circle
    3:05:23 with a cross through it. It can mean a variety of things. He used guns and he used knives,
    3:05:28 but he certainly had a thing for couples, except in the last of the killings, which is of a cab
    3:05:36 driver in downtown San Francisco, who he shoots in full view of witnesses, which is completely
    3:05:46 atypical. And also when he was stabbing the victims, it doesn’t seem like he was very good at it,
    3:05:50 or if the goal was to kill them, he wasn’t very good at it because some of them survived.
    3:05:54 Yeah, he doesn’t– he’s not particularly thorough about it. He seems to have had much more–
    3:05:58 more of the violence seems to be directed at the females than the males.
    3:06:05 So, I mean, there’s a couple questions to ask here. First of all, did people see his face?
    3:06:09 There is a composite drawing of his face, which I think is based upon the
    3:06:13 the Stein killing, the cab driver killing, where there were people who saw him,
    3:06:19 or who claimed that they saw him. The other ones were all when it was fairly dark.
    3:06:25 Right. I’m not sure that anyone else got to look at his face. The one that occurred in the daylight
    3:06:33 at Barry Esso, he was wearing a mask. So, there’s something in common initially in the targeting
    3:06:38 of victims, which doesn’t– in the last case. Then after that, there’s just the different cases of where
    3:06:46 there’s a pretty good case to be made of a woman who claims– I think she was– she and her small
    3:06:50 child were picked up. Her car broke down, she got a flat tire, and she was picked up by this guy,
    3:06:55 who she got a very sort of strange vibe from, who eventually just let her go.
    3:07:02 Well, you know, that might have been the zodiac. It might not have been.
    3:07:07 You do this kind of rigorous look saying, okay, what is the actual facts that we know?
    3:07:16 Like, reduce it to the thing that we know for sure. And in speaking about his motivation,
    3:07:22 he said that he was collecting souls. Souls for the afterlife.
    3:07:24 For the afterlife. That’s kind of a culty.
    3:07:31 Yeah. I mean, that’s what I believe. Is it the Vikings or the Romans? They believe this in battle.
    3:07:34 You’re essentially making sacrificial victims, and they will be your
    3:07:39 ghostly servants in the afterlife. Do you think he actually believed that?
    3:07:45 Who knows? I mean, here’s the question. Was he making that up just to be scary?
    3:07:51 Or is that what is actual? That’s what he’s saying his motivation is.
    3:07:54 So let’s take him at face value, rather than trying to
    3:08:03 wish that into the cornfield, that is to get rid of it. Let’s just take it to face.
    3:08:08 So he’s claiming that he’s killing these people in order to acquire slave servants in the afterlife.
    3:08:14 He will subsequently go on to claim many more victims. I’m not sure.
    3:08:17 44, eventually he will have. Before he just kind of vanishes.
    3:08:24 One of the really interesting clues to me when I was looking at that case,
    3:08:28 which I didn’t find anybody else that tended to make much of it of, is that
    3:08:34 it all has to do with this kind of Halloween card that he sends to the press in San Francisco.
    3:08:43 And it’s talking about sort of rope by gun by fire. And there’s this whole sort of wheel.
    3:08:47 He knows the zodiacs. But what was this drawn from, where he got this from,
    3:08:54 is from a Tim Holt Western comic book published in 1951. And you see the same thing in the cover.
    3:08:58 It’s wheel of fortune, but with different forms of grisly death on it.
    3:09:02 And all of the things that he mentioned are shown on the cover of this.
    3:09:09 So whoever put together that card saw that comic book.
    3:09:13 That’s kind of an interesting clue. So does that mean he’s a comic book collector?
    3:09:22 And also before he got the idea from these incorporating these things from the–
    3:09:29 Then there are, of course, his codes, which people which aren’t all that difficult to decipher,
    3:09:34 probably because they weren’t meant to be. The other thing that you find often with
    3:09:39 serial or psychopathic killers is they’re toying with the press. I mean, this goes all the way back
    3:09:46 to Jack the Ripper. Now, they get attention. And then he just disappears.
    3:09:50 Why do you think he was never caught? I think they knew who to look for.
    3:09:56 There’s nothing much to go on. I mean, there was a guy who was long a suspect.
    3:10:02 And then eventually he tested his DNA and found it didn’t match any of the things
    3:10:09 that they’d found. Again, it goes back to– I’m not even sure that it’s one person who is responsible
    3:10:13 for all of them. Well, there’s one of the interesting things you kind of bring up here
    3:10:22 in our discussion of Manson inspires this. But there does seem to be a connection,
    3:10:30 a shared inspiration between several killers here, the Zodiac, the son of Sam later, and the
    3:10:38 monster of Florence. So is it possible there’s some kind of underworld that is connecting these
    3:10:44 people? Well, you take the Zodiac and you add his claim that he’s collecting souls for the afterlife.
    3:10:52 There are other things that are occultish connected to that. He may have picked some of the killing
    3:10:58 sites due to their physical location, to their position in a particular place.
    3:11:06 If you look at the son of Sam case, of course, David Berkowitz will on and off claim that he was
    3:11:14 part of a satanic cult that was carrying out, again, these killings, mostly of couples and
    3:11:21 young women similar to the Zodiac, and that he had only committed some of them and was witnesses
    3:11:30 at others. And that has really created the whole idea that, yes, there is this some kind of satanic
    3:11:36 cult which engages in ritual murders. Then if you go all the way to Florence, you’ve got murders
    3:11:43 who go on and off for a long period of time, again, focusing on couples in isolated areas,
    3:11:50 which Italian prosecutors ultimately tried to connect to some kind of satanic cult, although
    3:11:54 I’m not sure they ever made a particularly strong case for that, but that element comes up in all
    3:12:05 three of them. So you can, with a little imagination, argue that those similarities, that those things
    3:12:14 should come up in each of those cases in different places, either suggest that oddly enough,
    3:12:18 psychopathic criminals all sort of thinking the same way, or that there is some sort of
    3:12:25 higher element involved in this, that there’s some kind of common inspiration.
    3:12:32 And here you come back to something similar we were talking before about do pedophiles exist,
    3:12:40 do pedophiles, okay, so do satanic cults exist? Well, they do. Okay, there was one in my hometown.
    3:12:47 Apparently quite harmless, as far as I know, never did anything to be no, but there are people who,
    3:12:53 you know, robes, here we come again, robes, cut the head off a chicken, naked woman is an altar,
    3:12:56 you know, you can get off on that, I suppose, if that’s your thing.
    3:13:07 So professed satanus exist, satanic cults exist, serial killers exist, ritual murders exist,
    3:13:12 are those things necessarily connected? No, could they be connected?
    3:13:20 Yes. Okay, there’s nothing. Don’t ever tell me that something is just too crazy for people to
    3:13:27 do, because that’s crazy talk, all right. You’ve studied secret societies. You’ve
    3:13:33 gave a lot of amazing lectures on secret societies. It’s fascinating to look at human history through
    3:13:38 the lens of secret societies, because they’ve permeated all of human history. You’ve talked
    3:13:43 about from everything from the night’s Templar to Illuminati to Freemasons, like we brought up,
    3:13:50 Freemasons lasted a long time. Illuminati, as you’ve talked about in its sort of main form,
    3:13:56 lasted a short time, but it’s legend. Never gone away. Never gone away. So maybe, like,
    3:14:02 Illuminati isn’t a really interesting one. Who, what was that? Well, the Illuminati that we know
    3:14:11 started in the 1776. In fact, you can pin it down to a day, the 1st of May, May Day, 1776,
    3:14:20 in Ingolstadt, Germany, founded by a professor, Adam Weishaupt. It wasn’t initially called the
    3:14:25 Illuminati, because that’s not really the name of the organization. It was called the Order Perfectabilists.
    3:14:32 Apparently, that changed. Weishaupt would say things like never let our organization be known
    3:14:36 under its real name anywhere, which leaves wondering what’s its real name.
    3:14:45 So Illuminati is simply the plural of Illuminatus, which means one who is illuminated, one who has
    3:14:53 seen the light. So in Roman times, Christian converts were Illuminati, because they had seen
    3:14:59 the light. Anyone who thinks, and there have been organizations called Illuminati, the term is
    3:15:06 not trademarked, not copyrighted. Anybody who thinks they’ve seen the light about anything
    3:15:13 is an Illuminati. So it defines nothing. The symbol of the Order was an owl,
    3:15:19 which interestingly enough is almost identical to the owl, which is the emblem of
    3:15:27 the Bohemian Club. Oh, boy. Make of that what you will. I don’t make that much out of it,
    3:15:34 because one owl looks pretty much like another owl to me. But compare them. You gotta kind of
    3:15:41 wonder about this little thing. Maybe there’s some kind of connection there. So what that
    3:15:45 supposedly has to do with the connection to the goddess Minerva, and the owl was sacred to her,
    3:15:55 and the Order was the Minerval, the person who was brought in. The number of levels changed over
    3:15:59 time. There was a higher level for the Order that people at the lower level didn’t know about.
    3:16:06 Pretty typical for this. But the thing about Weishaupt was that he was quite, he was a
    3:16:14 luminous correspondent with members with his Illuminati, both during the time that it legally
    3:16:20 existed in Bavaria and later on. So Weishaupt himself lives, I think, until 1830.
    3:16:27 Dies in Gotta, which was ruled by an Illuminati prince, and so on. Nothing ever happens to
    3:16:32 these men. No Illuminati has ever put to death or arrested in prison for any period of time.
    3:16:39 What happens is that their plan, well, what was his plan? His plan was to essentially
    3:16:46 replace all existing religions and governments in the world with a one-world order governed
    3:16:54 by the Illuminati. So to do this, you had to subvert and destroy all the existing Order.
    3:17:03 The purpose for this is to, we wish to make men happy and free, but first we must make them good.
    3:17:11 All right. So that’s what the Order is all about. Of course, he also said things like,
    3:17:16 “Oh man, is there nothing that you won’t believe?” Okay, so myth would be used in that.
    3:17:21 Also thought women should be brought into it. He had a rather interesting view about that,
    3:17:26 was that we should appeal to women, in part because women have a chip on their shoulder,
    3:17:30 because they’re left out of things. So we should appeal to their vanity on that point
    3:17:37 and offer that in the future, all things will be open and they will be emancipated. So we should
    3:17:42 hold out the prospect of female emancipation to attract them, because he argued in the short
    3:17:48 term there’s no better way to influence men than through women. Get women on our side by
    3:17:53 promising them emancipation, but it made sure we’ll never actually deliver it to them,
    3:18:00 because the future world will be a boys club. So he talks about these things fairly openly,
    3:18:05 and this is where you get this idea of some sort of a new world order which is to be based upon
    3:18:15 the destruction of the existing order. So there are those who argue that there is a trail of descent
    3:18:24 that leads from Weishoff’s Illuminati to the Communist Manifesto, and in fact, Communism itself,
    3:18:32 that Marxism was simply a further restating of this idea. And you can draw some sort of,
    3:18:40 I mean, the idea never entirely goes away. The Bavarian government gets a hold of the
    3:18:47 order’s inner texts, so the story is there to deliver to them. I think that Weishoff gave them
    3:18:53 to him. I think he engineered the exposure of his order because it gave him publicity.
    3:19:00 By being exposed in Bavaria, you gained great renown and they continued to recruit after this,
    3:19:04 and the Bavarian government actually bans the Illuminati four different times.
    3:19:12 Why? Because apparently the first three times didn’t work, so the fourth one does.
    3:19:17 You can notice that it’s like papal bans on Freemasonry, and they just go on and on and on,
    3:19:21 because this clearly isn’t working. And you actually highlight the difference between,
    3:19:27 speaking of publicity, that there’s a difference between visibility and transparency,
    3:19:33 that a secret society could be visible, it could be known about, it could be quite popular,
    3:19:36 but you could still have a secrecy within it. You have no idea what’s going on inside.
    3:19:41 It’s like a black box. If I set a black box on this table, we can see that there’s a black box.
    3:19:46 What’s in the black box? A cat? Who knows? In fact, the secrecy might be the very thing
    3:19:50 that makes it even more popular. Adam Weishoff, again, there’s no more
    3:19:55 thing convincing than a concealed mystery. Give people a concealed mystery in the fuss,
    3:20:00 so we need to make the order mysterious for that exact reason. Always hold out the possibility
    3:20:06 that special knowledge that no mere mortals have other than you
    3:20:14 will have in that way. Since there’s a lot of things, the use of vanity and ego to recruit people,
    3:20:21 to influence both men and women, it’s quite sophisticated.
    3:20:28 And as you might expect from a professor of canon law trained by Jesuits,
    3:20:40 so I certainly don’t think that it ceased when it was banned in Bavaria,
    3:20:43 because everybody just scatters and goes elsewhere, like Paris.
    3:20:52 And then you have the French Revolution. So the idea of the Illuminati, to put it crudely,
    3:20:57 the branding is a really powerful one. And so it makes sense that it can,
    3:21:03 there’s a thread connecting it to this day, that a lot of organizations,
    3:21:06 a lot of secret societies can sort of adopt the branding.
    3:21:09 Anybody can call it. You can go out and form a club and call it the Illuminati.
    3:21:12 And if you’re effective at it, I think it does attract,
    3:21:19 how is the chicken or the egg? But powerful people tend to have gigantic egos, and people with
    3:21:23 gigantic egos tend to like the exclusivity of secret societies.
    3:21:30 And so there’s a gravitational force that pulls powerful people to these societies.
    3:21:34 It’s exclusive, only certain. And you also notice something goes back to when we were
    3:21:38 talking about much earlier, when we were talking about intelligence. Remember, mice? Ego,
    3:21:46 is a recruitment and control. That’s a great Achilles’ heel in human beings, the exploitation
    3:21:51 of ego. And of course, if we go back to the conversation of intelligence agencies,
    3:21:59 it would be very efficient and beneficial for intelligence agencies to infiltrate
    3:22:03 the secret societies, right? Because that’s where the powerful people are.
    3:22:06 Or the secret societies to infiltrate the intelligence agencies.
    3:22:11 Oh boy. Well, I mean, that’s actually, in all the lectures,
    3:22:19 I kind of had a sense that intelligence agencies themselves are kind of secret societies, right?
    3:22:24 Well, it comes down, I give you my definition of secret societies that they come down to. One is
    3:22:29 that generally their existence isn’t secret. It’s what they do is secret. It’s what’s in the box,
    3:22:36 as opposed to the existence of the box. So one of the most important criterias is that they are
    3:22:42 self-selecting. You just don’t join. They pick you. They decide whether or not you’re going to,
    3:22:45 they admit you. And now oftentimes they will sort of recruit you.
    3:22:52 Once you have been recruited, you have to pass tests and initiations.
    3:23:02 And you also have to swear oaths of loyalty. Those are always very, very critical.
    3:23:08 So broadly speaking, what interest into an intelligence organization does,
    3:23:12 they, they decide whether you get in. You just don’t automatically get the job. You have to pass
    3:23:20 tests, one ally detector test, for instance, field training tests, a whole variety of tests.
    3:23:29 And then you’re sworn to secrecy. You never talk about what you do. Ever. Or there will be dire
    3:23:37 consequences. So the method is very much the same. And also this idea of creating a kind of
    3:23:51 insular group. The organization is us. And everyone else is outside of that. We are guardians of
    3:23:57 special knowledge. See, this is the type of thing that would generally happen if you question
    3:24:01 whatever any kind of intelligence agency did. Well, we know things that you don’t. Why? Because
    3:24:07 we’re the organization that knows things. We collect information. We know the secrets. We
    3:24:11 guard the secrets. Therefore, if we tell you, you must believe us.
    3:24:19 I have this sense that there are very powerful secret societies operating today. And we don’t
    3:24:25 really know or understand them. And the conspiracy theories in spirit might have something to them,
    3:24:31 but are actually factually not correct. So like, you know, an effective powerful
    3:24:38 secret society or intelligence agency is not going to let you know anything that it doesn’t
    3:24:42 want you to know, right? They’ll probably mislead you if you can say that close.
    3:24:49 So I think, you know, the question is, what’s the most powerful or important secret society?
    3:24:54 Probably the one you don’t know about. One that doesn’t advertise its existence. The one which
    3:25:04 is never known anywhere under its real name. You’ve got things like the Bohemian Club. You’ve got
    3:25:11 the Bilderbergers, which is another sort of, you know, formed in the 1950s. Largely the creation
    3:25:18 of a guy by the name of Josef Rettinger. Polish, mysterious, appears in a nowhere, a schemer for
    3:25:25 years. A man expelled from Britain, France, and the United States at one point or another.
    3:25:33 Long active in the Mexican labor movement. Rettinger is a mysterious figure. In fact,
    3:25:38 he has, I think there was even a book written about him called Eminence Grise, Grey Eminence,
    3:25:43 the fellow who was the front man for the Bilderbergers was Prince Bernhard of the Netherlands,
    3:25:49 who was at one point a Nazi, and then a Dutch freedom fighter. All right, take your pick.
    3:25:56 But Rettinger is the moving hand behind the whole thing, and I’ll be damned until I can figure out
    3:26:04 who Rettinger is. So the idea is that, well, you get like influential people in media, business,
    3:26:14 politics, and you bring them together just to talk, to try to find common answers or common
    3:26:21 questions. It’s all very much sort of Western European, Anglo-European. It’s all very closely
    3:26:30 sort of connected to NATO, the whole concept of a kind of Atlanticist world, which is essentially
    3:26:37 the Anglo-American combined with Western Europe. But you’ve got a bunch of these things. I mean,
    3:26:46 the Castle and Foreign Relations is very similar to that, and the Bilderbergers, and
    3:26:54 there’s an overlap with the Bohemian Club. And then you’ve got the Penae Circle, or Le Circle,
    3:27:03 which is more military, but also linked to the so-called secret gladeo. The idea of the
    3:27:07 Soviets over around Western Europe, there would be a stay behind organization called gladeo.
    3:27:12 There’d be these freedom fighters. So the question I have about that is that how many
    3:27:18 secret organizations do you need? I mean, why all these separate groups, which often seem to have
    3:27:24 the same people into them? Yeah, the closer I look, the more I wonder the same question,
    3:27:28 we asked about the Russian intelligence agencies. Where’s the center of power?
    3:27:33 It seems to be very hard to figure out. Does the secrecy scare you?
    3:27:39 Well, I guess on one level, I’m comforted that there’s somebody actually making decisions,
    3:27:45 as opposed to data. I mean, what do you want? Do you want chaos, or do you want everything kind
    3:27:56 of rigidly controlled? And I don’t put much stock in the idea that there actually is some small group
    3:28:01 of people running everything, because if they were, it would operate more efficiently.
    3:28:10 I do think that there are various disparate groups of people who think that they’re running things,
    3:28:17 or try to. And that’s what concerns me more than anything else.
    3:28:21 Well, I hate to go back to them again, because if I should bring up, if you go back to the Nazis,
    3:28:25 they had their whole idea about a new world order, and they only had 12 years to do it.
    3:28:31 And look what a mess they made. I mean, look at the damage, the physical damage that can be done
    3:28:38 by an idea inspiring a relatively small group of people controlling a nation.
    3:28:48 Based upon some sort of racial or ideological fantasy that has no real basis in reality and yet
    3:28:57 guides their actions. It’s this differentiation that I always make, and I would try to get across
    3:29:04 the students between always be clear about what you know and what you believe. You don’t know many
    3:29:13 things. You know your name. You know when you were born. You probably know who your father is,
    3:29:18 but that’s not absolute unless you’ve had a DNA test, and only if you trust DNA tests.
    3:29:25 So you know who your mother is. You believe this man is your father. Why? Because your mother told
    3:29:31 you who he was. So you believe things generally because someone has told you this is to be true,
    3:29:40 but you don’t really know for sure. Well, because we know so little, we tend to go by beliefs.
    3:29:48 So we believe in this. We believe in that. You believe that your cult leader is the answer
    3:29:55 to everything, and it seems to be very, very easy to get people to believe things. And then what
    3:30:02 happens is that whether or not those beliefs have any real basis in reality, they begin to influence
    3:30:10 your actions. So here again, regrettably in some ways to bring it back to the Nazis, what were
    3:30:15 the Nazis convinced of? They were convinced that Jews were basically evil aliens. That’s what it
    3:30:21 comes down to. They weren’t really humans. There’s some sort of evil contamination which we must
    3:30:29 eradicate. And they set out to do that. And they were sure that there’s just a few problems that
    3:30:34 can be solved. And once you solve them, that you have this beautiful utopia where everything would
    3:30:39 be just perfect. It’d be great. And we can just get there. And I think it’s really strong belief
    3:30:48 in a global utopia. It just never goes right. It seems like impossible to know the truth in it.
    3:30:56 For some reason, not long ago, I was listening on YouTube to old Wobbly songs. The
    3:31:03 Workers of the World. I don’t know why. I didn’t know there was a whole album of Wobbly songs.
    3:31:10 There was one of them called Common Wealth of Toil. And like most of them, they’re sort of taken
    3:31:17 from gospel songs. And it’s talking about in the future how wonderful everything will be
    3:31:29 in the common wealth of toil that will be. Now, these are revolutionary leftists, in this case,
    3:31:36 Wobblies. But nonetheless, it’s like a prayer for communism, everything. In the future,
    3:31:44 everything will be good because the earth will be shared by the toilers. And from each
    3:31:50 of his abilities to each according to his knee. And it’s this kind of sweet little song in some way.
    3:31:56 But I’m just sort of imagining this. If I was going to stage that, I’d have like this choir
    3:32:02 of children singing it with a huge hammer and sickle behind them. Because that’s what it’s
    3:32:11 combining. And you can think that the sentiments that are expressed in that song, which are legitimate
    3:32:21 in some way, of all the horrors that that thing leads to. It is fascinating about humans,
    3:32:29 a beautiful idea on paper, an innocent little idea about a utopian future can lead to so much
    3:32:34 suffering and so much destruction and total the unintended consequences that you see described.
    3:32:39 All of unintended consequences. And we learn from it. I mean, that’s why history is important.
    3:32:47 We learn from it. Hopefully, do we? Slowly, we’re slow learners. I’m unconvinced of that,
    3:32:56 but perhaps is speaking of unconvinced. What gives you hope? If human beings are still here,
    3:33:03 maybe expanding out into the cosmos, 1000, 5000, 10,000 years from now,
    3:33:10 what gives you hope about that future, about even being a possible future about it happening?
    3:33:18 Most people are cooperative and kind most of the time. And
    3:33:28 that’s one of those things that can usually be depended upon. And usually, you’ll get back to
    3:33:37 what you put into it. Another thing that I have like a weird fascination of watching
    3:33:46 are people who have meltdowns on airplanes because it’s just bizarre.
    3:33:52 It’s fascinating to watch, yeah. There’s some sort of psychotic break that occurs,
    3:33:58 and it’s always going to end the same way. The cops are going to come on and drag you off the plane.
    3:34:04 Now, true, and you’re going to inconvenience everybody there, and usually at some point,
    3:34:08 they don’t care about that. That’s the one little sense of power that they have. So they
    3:34:14 have some sort of sense of powerlessness. And if their only way of power is just to piss off
    3:34:20 everybody else on that plane, they’re going to go ahead and do it, even though it’s going to lead
    3:34:27 nowhere for them. And there’s similar, sometimes, psychological behavior in traffic.
    3:34:30 Oh, the road rage thing. The road rage, yeah. It’s fascinating.
    3:34:33 And I bet that most agencies, there again, those are all people who up to some point
    3:34:42 were cooperative and kind and polite, and then they snap. So those are all part of the human
    3:34:49 makeup as well. But also part of the human makeup, difference between humans and chimps,
    3:34:56 is the ability to get together, cooperate on a mass scale over an idea, create things
    3:35:04 like the Roman Empire did, laws that prevent us and protect us from crazy human behavior,
    3:35:07 manifestations of events and type of human beings are just weird animals. It’s not getting
    3:35:12 here. It’s just completely peculiar. I’m not sure that we’re all together natural.
    3:35:17 But I think we are all together beautiful. There is something magical about humans,
    3:35:22 and I hope humans stay here, even as we get advanced robots walking around everywhere,
    3:35:29 more and more intelligent robots that claim to have consciousness, that claim they love you,
    3:35:37 that increasingly take over our world. I hope this magical things that makes us human still
    3:35:42 persists. Well, let us hope so. Right. You’re an incredible person.
    3:35:46 Well, thank you. So much fascinating work. And it’s really an awesome-
    3:35:50 I’ve never had anybody ask me as many interesting questions as you have.
    3:35:53 So thank you so much. Or as many questions.
    3:35:56 This was so fun. Thank you so much for talking today.
    3:35:56 Well, thank you.
    3:36:01 Thanks for listening to this conversation with Rick Spence. To support this podcast,
    3:36:06 please check out our sponsors in the description. And now, let me leave you some words from John F.
    3:36:14 Kennedy. The very word secrecy is repugnant in a free and open society. And we are as a people
    3:36:21 inherently and historically opposed to secret societies, to secret oaths, and to secret proceedings.
    3:36:28 We decided long ago that the dangers of excessive and unwarranted concealment of pertinent facts
    3:36:33 far outweighed the dangers which are cited to justify it.
    3:36:51 Thank you for listening and hope to see you next time.
    3:36:53 you

    Rick Spence is a historian specializing in the history of intelligence agencies, espionage, secret societies, conspiracies, the occult, and military history.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep451-sc
    See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

    Transcript:
    https://lexfridman.com/rick-spence-transcript

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Rick’s Website: https://www.uidaho.edu/class/history/faculty-staff/richard-spence
    Rick’s Courses: https://bit.ly/40dIZbw

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    AG1: All-in-one daily nutrition drinks.
    Go to https://drinkag1.com/lex
    NetSuite: Business management software.
    Go to http://netsuite.com/lex
    BetterHelp: Online therapy and counseling.
    Go to https://betterhelp.com/lex
    MasterClass: Online classes from world-class experts.
    Go to https://masterclass.com/lexpod
    Shopify: Sell stuff online.
    Go to https://shopify.com/lex

    OUTLINE:
    (00:00) – Introduction
    (09:04) – KGB and CIA
    (23:21) – Okhrana, Cheka, NKVD
    (38:53) – CIA spies vs KGB spies
    (45:29) – Assassinations and mind control
    (52:23) – Jeffrey Epstein
    (59:15) – Bohemian Grove
    (1:11:09) – Occultism
    (1:22:20) – Nazi party and Thule society
    (2:02:38) – Protocols of the Elders of Zion
    (2:35:43) – Charles Manson
    (3:02:30) – Zodiac Killer
    (3:13:24) – Illuminati
    (3:20:48) – Secret societies

    PODCAST LINKS:
    – Podcast Website: https://lexfridman.com/podcast
    – Apple Podcasts: https://apple.co/2lwqZIr
    – Spotify: https://spoti.fi/2nEwCF8
    – RSS: https://lexfridman.com/feed/podcast/
    – Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
    – Clips Channel: https://www.youtube.com/lexclips

    SOCIAL LINKS:
    – X: https://x.com/lexfridman
    – Instagram: https://instagram.com/lexfridman
    – TikTok: https://tiktok.com/@lexfridman
    – LinkedIn: https://linkedin.com/in/lexfridman
    – Facebook: https://facebook.com/lexfridman
    – Patreon: https://patreon.com/lexfridman
    – Telegram: https://t.me/lexfridman
    – Reddit: https://reddit.com/r/lexfridman