Author: The Gray Area with Sean Illing

  • What “near death” feels like

    AI transcript
    0:00:08 Support for this show comes from ServiceNow, a company that helps people do more fulfilling work, the work they actually want to do.
    0:00:11 You know what people don’t want to do? Boring, busy work.
    0:00:20 But ServiceNow says that with their AI agents built into the ServiceNow platform, you can automate millions of repetitive tasks in every corner of a business.
    0:00:24 IT, HR, customer service, and more.
    0:00:29 And the company says that means your people can focus on the work that they want to do.
    0:00:32 That’s putting AI agents to work for people.
    0:00:37 It’s your turn. You can get started at ServiceNow.com slash AI dash agents.
    0:00:45 When does fast grocery delivery through Instacart matter most?
    0:00:50 When your famous grainy mustard potato salad isn’t so famous without the grainy mustard.
    0:00:52 When the barbecue’s lit, but there’s nothing to grill.
    0:00:56 When the in-laws decide that, actually, they will stay for dinner.
    0:00:59 Instacart has all your groceries covered this summer.
    0:01:02 So download the app and get delivery in as fast as 60 minutes.
    0:01:06 Plus, enjoy $0 delivery fees on your first three orders.
    0:01:08 Service fees, exclusions, and terms apply.
    0:01:11 Instacart. Groceries that over-deliver.
    0:01:14 What happens when we die?
    0:01:20 I’ve always been a cold, hard materialist on this one.
    0:01:30 Our brain shuts down, consciousness fades away, and the lights go out.
    0:01:36 And beyond that, what else is there to say?
    0:01:44 I had no experience of life before I was born, and I expect to have no experience of life after I die.
    0:01:51 As best I can tell, that’s the most reasonable assumption we can make about death.
    0:01:55 But most reasonable does not mean definitely true.
    0:02:06 There’s the conventional view taken by major religions that the shape of your afterlife depends on the quality of your actual life.
    0:02:08 I have my issues with that.
    0:02:10 But it’s a widely held belief.
    0:02:16 The point, in any case, is that this is one of the oldest questions we have.
    0:02:23 Which means there are all sorts of theories about how consciousness, in some form, might survive the death of the body.
    0:02:30 However unlikely these possibilities might be, they’re not impossible.
    0:02:35 And if they’re not impossible, how seriously should we take them?
    0:02:41 I’m Sean Illing, and this is The Gray Area.
    0:02:54 Today’s guest is Sebastian Junger.
    0:03:04 He’s a former war reporter, a documentarian, and the author of several books, including his most recent one called In My Time of Dying.
    0:03:09 Junger’s not the religious or superstitious type.
    0:03:14 He’s a self-described atheist and a science-minded rationalist.
    0:03:22 And I suspect he would have given a very confident response to that question about life after death.
    0:03:25 Until the day he almost died.
    0:03:30 An experience that didn’t necessarily transform his worldview.
    0:03:33 But it did shake it up.
    0:03:42 I wouldn’t say my answer to the what happens when we die question is all that different after reading the book.
    0:03:46 But I would say that I’m less certain about it.
    0:03:48 And that’s sort of the point.
    0:04:00 Sebastian Junger, welcome to the show.
    0:04:01 Very nice to be here.
    0:04:02 Thanks for having me.
    0:04:14 Before we get to the strangeness of your near-death experience, can you just describe what happened to you the day you almost died?
    0:04:16 Just to set the scene here a little bit.
    0:04:16 Yeah.
    0:04:18 So I was 58 years old.
    0:04:20 I’ve been a lifelong athlete.
    0:04:22 My health is, like, very good.
    0:04:32 And so it never occurred to me that I would have a sudden medical issue that would send me to the ER or kill me, you know, sort of drop me in my boots, as it were.
    0:04:37 So I just had no thoughts like that about myself.
    0:04:44 And so one afternoon, it was during COVID, my family and I were living in a house in the woods in Massachusetts that has no cell phone coverage.
    0:04:46 It’s at the end of a dead-end dirt road.
    0:04:51 On the property is a cabin, no electricity or anything like that.
    0:04:54 And we went out there to spend a couple of hours.
    0:04:58 And literally, in mid-sentence, I felt this sort of bolt of pain in my abdomen.
    0:05:00 And I couldn’t make it go away.
    0:05:02 I sort of twisted and turned.
    0:05:03 I thought it was indigestion.
    0:05:06 And I stood up and almost fell over.
    0:05:07 And so I sat back down.
    0:05:09 I said to my wife, I’m going to need help.
    0:05:10 I don’t know what’s wrong.
    0:05:11 I’ve never felt anything like this.
    0:05:22 What was happening, I later found out, was that I had an undiagnosed aneurysm in my pancreatic artery. There are several arteries that go through the pancreas,
    0:05:25 and one of them had a bulge in it from a weak spot.
    0:05:28 And aneurysms are widowmakers.
    0:05:34 I mean, they’re really, really deadly, particularly in the abdomen, because it’s hard for the doctors to find them.
    0:05:40 And if you’re stabbed in the stomach and an artery is severed, the doctors sort of know where to put their finger, as it were, to plug the leak.
    0:05:44 But if it’s just internal hemorrhage, your abdomen’s basically a big bowl of spaghetti.
    0:05:46 It’s very, very hard to find it.
    0:05:50 So I was losing probably a pint of blood every 10 or 15 minutes.
    0:05:54 And, you know, there’s like 10 pints in the human body, 10 or 12 pints.
    0:05:55 So you can do the math.
    0:05:58 And I was a one-hour drive from the nearest hospital.
    0:06:01 I was a human hourglass, basically.
    0:06:04 So by the time they got me there, I’d probably lost two-thirds of my blood.
    0:06:06 My blood pressure was 60 over 40.
    0:06:09 And I was in end-stage hemorrhagic shock.
    0:06:10 I was probably 10 minutes from dead.
    0:06:11 But I was still conscious.
    0:06:15 Blessedly, I had no idea that I was dying.
    0:06:17 I was enormously confused by what was happening.
    0:06:20 And I had no clue about the seriousness of it.
    0:06:21 60 over 40.
    0:06:24 My God, how are you even still alive at that point?
    0:06:28 That’s sort of where you cross over into a place you can’t recover from,
    0:06:31 even if you get a massive blood transfusion, which I got.
    0:06:37 I mean, if you need that much blood, receiving that much blood causes other problems that can also kill you.
    0:06:47 So you can die in the hospital from blood loss with plenty of blood in your veins because other things happen chemically in your bloodstream that will kill you.
    0:06:47 It’s deadly.
    0:06:51 And I was sort of right on the cusp of when that could reasonably have started to happen.
    0:06:58 And I’d actually had sort of intermittent pain in my abdomen for about six months, which just being an idiot dude, I just ignored.
    0:06:59 Right.
    0:07:01 And, you know, it was bad enough to make me sit down at times.
    0:07:02 I was like, oh, what’s that?
    0:07:04 And then it would go away and I’d forget about it.
    0:07:12 And that was probably the aneurysm getting to a kind of critical point where it was starting to leak a little bit, starting to bleed a little bit or something.
    0:07:16 You know, if I’d gone to the doctor, I could have avoided a lot of drama, but I didn’t.
    0:07:21 Yeah, note to everyone in the audience, if you know something’s wrong with your body, don’t fuck around.
    0:07:22 Go get it checked out.
    0:07:25 Yeah, I mean, pain’s an indicator and persistent pain’s an indicator.
    0:07:34 And frankly, your unconscious mind, listen, you know, I’m an atheist, I’m a rationalist, I’m an anti-mystic, I hate woo-woo stuff.
    0:07:38 My dad was a physicist and an atheist, just like that’s who I am.
    0:07:42 But the unconscious mind actually has access to a lot of information about the body.
    0:07:47 It communicates with your conscious mind in these strange signals and intuitions and feelings.
    0:07:54 And one of the stranger things about this was the first time I felt this pain in my abdomen, I had this bizarre thought.
    0:07:59 I thought, huh, that’s the kind of pain where you later find out, oh my God, I have terminal cancer.
    0:08:07 Like, I immediately thought this was a mortal threat and then immediately dismissed it as, you know, listen, you just have a pain in your abdomen, like don’t worry about it.
    0:08:11 And what was the survival rate for your condition that day?
    0:08:20 The survival rate is as low as 30%, but I assume that that’s for a reasonable transport time to the hospital.
    0:08:23 It took me 90 minutes to get to a doctor.
    0:08:28 My survival chances were extremely low.
    0:08:33 The brain does such strange things in these moments.
    0:08:37 You knew on some level that something was really wrong here.
    0:08:45 But even at the hospital, you write about not having any grand thoughts about life or mortality or even about your family.
    0:08:49 You wrote, I had all the introspection of a gut shot coyote, which is a great line.
    0:08:52 But what the hell is that about?
    0:08:58 You think it’s just a kind of defense mechanism in the brain or is it just plain old fashioned shock?
    0:09:04 I was in hemorrhagic shock and deep into hypothermia, which comes with hemorrhagic shock.
    0:09:06 I was in an enormous amount of pain.
    0:09:12 So blood in your abdomen, outside of your vascular system, is extremely irritating to the organs.
    0:09:15 I was in and out of consciousness, which I didn’t know.
    0:09:19 I mean, if you go in and out of consciousness, you don’t know it.
    0:09:25 You think it’s all one stream of consciousness, but actually what drops out is the parts where you’re unconscious.
    0:09:27 You have no idea you’re in and out of consciousness.
    0:09:29 So I didn’t know that about the situation.
    0:09:30 And it was belly pain.
    0:09:37 And I had this sort of distant thought, you know, it may turn out you’re going to wake up in the hospital tomorrow morning with really grim news that you have a tumor in your abdomen.
    0:09:42 And, you know, I mean, I sort of was aware that that might happen, but I didn’t know it was going down right now.
    0:09:43 Like I had no idea.
    0:09:50 And, you know, I had the level of sort of situational awareness that like someone who’s really, really drunk might have.
    0:09:53 And I was an animal, you know, pain turns you into an animal.
    0:09:54 I was an animal.
    0:09:55 I was a wounded animal.
    0:10:04 So when this happened, if your wife, Barbara, wasn’t with you, if you were out running or something like that, you’re probably dead right now.
    0:10:05 And we’re not talking.
    0:10:08 I mean, how much did that thought rack your brain in the aftermath?
    0:10:11 Oh, afterwards, I was tormented by that.
    0:10:21 I mean, any other situation, I mean, a traffic jam on the Cross Bronx Expressway, if I was on an airplane, hiking in the woods, running, I mean, anything, like anything.
    0:10:23 And as it was, I barely made it.
    0:10:27 Another strange thing that I should mention about the unconscious.
    0:10:40 So two nights prior at dawn, so about 36 hours before the aneurysm ruptured, I was woken by this terrible dream, a nightmare, and it was that I was dead.
    0:10:42 Not that I was dying or going to die.
    0:10:43 I was dead.
    0:10:44 I was a spirit.
    0:10:48 And I was looking down on my family, and they were grieving.
    0:10:49 They were sobbing.
    0:10:53 And I was trying to yell to them and wave my arms, like, I’m here.
    0:10:54 It’s okay.
    0:10:54 I’m right here.
    0:10:55 It’s all right.
    0:10:56 Everything’s okay.
    0:10:59 And then I was made to understand that I had died.
    0:11:01 I was beyond their reach.
    0:11:03 And there was no going back.
    0:11:04 And this was just how it is.
    0:11:07 And I was headed out into the darkness.
    0:11:09 And I was so bereft.
    0:11:12 I was so anguished by this that it woke me up.
    0:11:13 I mean, I was just like, oh, my God.
    0:11:16 Thank God that was just a dream.
    0:11:24 As a rationalist, I have to sort of think, all right, your unconscious mind has some mechanism of knowing if there’s a mortal threat going on.
    0:11:28 And it doesn’t know how to communicate with dumbass up there who’s, you know, okay, six months of pain.
    0:11:29 He’s still not taking notice.
    0:11:31 All right, now what do we do?
    0:11:34 All right, well, let’s give him a really bad nightmare, right?
    0:11:35 Oh, he’s still not listening?
    0:11:37 Well, we tried.
    0:11:41 You know, I feel like the unconscious mind is sort of like a little bit in that place with us.
    0:11:44 Yeah, we’re about to careen into some potentially woo-woo stuff here.
    0:11:50 So let me pause, back up just a hair, and then we’ll ease into it.
    0:11:54 Because I want to actually get to the near-death experience itself.
    0:11:59 The way you write about it in the book is so unbelievably vivid.
    0:12:02 I mean, I really feel like I experienced it just reading it.
    0:12:10 There’s a moment when the surgeons and the nurses are working on you, and they’re on your right side.
    0:12:14 And then on your left side, there’s this pit of blackness.
    0:12:15 It’s scary as hell.
    0:12:23 And your father, who I think has been dead eight years at this point, appears before you or above you.
    0:12:25 Tell me about that.
    0:12:26 Right, yeah.
    0:12:32 So the doctor was busy trying to put a large-gauge needle into my jugular vein, you know, through my neck.
    0:12:34 It sounds a lot worse than it actually is.
    0:12:36 It didn’t particularly hurt.
    0:12:36 It sounds bad.
    0:12:38 It sounds bad, yeah.
    0:12:40 I mean, I think they numb you with lidocaine.
    0:12:42 So actually, I didn’t feel much except the kind of pressure.
    0:12:45 But at any rate, so they were working on that,
    0:12:51 and it seemed to take a long time, and suddenly this black pit opened up underneath me that I started getting pulled into.
    0:12:55 You know, again, think of me as extremely drunk, right?
    0:12:56 Like, I’m like, whoa, what’s that?
    0:12:59 Like, it didn’t occur to me, like, black pit, that makes no sense.
    0:13:02 Like, I was like, oh, there’s the pit.
    0:13:04 Like, why am I getting pulled into it?
    0:13:12 And I didn’t know I was dying, but I sort of had this animal sense that if you—you don’t want to go into the infinitely black pit that just opened up underneath you.
    0:13:15 Like, that’s just a bad idea, and if you get sucked in there, you’re probably not coming back.
    0:13:17 Like, that was the feeling I had about it.
    0:13:18 And I started to panic.
    0:13:23 And that’s when my dead father appeared above me in this sort of energy form.
    0:13:24 It’s hard to describe.
    0:13:26 I can’t describe what it was like.
    0:13:27 I just perceived him.
    0:13:30 It’s not like there was a poster board of him floating above me.
    0:13:31 It wasn’t quite that tangible.
    0:13:35 And he was communicating this incredible benevolence and love.
    0:13:37 He’s like, listen, you don’t have to fight it.
    0:13:38 You can come with me.
    0:13:38 I’ll take care of you.
    0:13:39 It’s going to be okay.
    0:13:42 I was horrified.
    0:13:44 I was like, go with you.
    0:13:44 You’re dead.
    0:13:46 I’m not going anywhere with you.
    0:13:48 Like, what are you talking about?
    0:13:48 Get out of here.
    0:13:50 Like, I was horrified.
    0:13:54 And I said to the doctor, because I was conversant, you’ve got to hurry.
    0:13:55 You’re losing me.
    0:13:56 I’m going right now.
    0:14:00 And I didn’t know where I was going, but I was very clear I was headed out, and I did not want to.
    0:14:02 And I knew he had to hurry.
    0:14:03 So you say communicating.
    0:14:04 What does that mean?
    0:14:05 Is he actually talking to you?
    0:14:10 Is he gesturing, or is it just a feeling, or is it telepathic, or what?
    0:14:12 I didn’t hear words, right?
    0:14:16 But his communication to me, I guess you would have to classify it as telepathic.
    0:14:17 But it was very specific.
    0:14:19 You don’t have to fight this.
    0:14:21 I’m here.
    0:14:21 I’ll take care of you.
    0:14:22 You can come with me.
    0:14:30 And so, you know, again, now I’m a rationalist, but I’m a rationalist with questions.
    0:14:35 Like, I’m a rationalist with a serious question of, like, what was that?
    0:14:36 Is it just neurochemistry?
    0:14:42 I mean, when I woke up the next morning in the ICU and the nurse came in, and I was in a lot of distress.
    0:14:43 I was throwing up blood.
    0:14:44 I was a freaking mess.
    0:14:45 I was still not.
    0:14:46 I could have still died at that point.
    0:14:48 I mean, I was not out of the woods at all.
    0:14:52 And the nurse came in and said, wow, congratulations, Mr. Junger.
    0:14:53 You made it.
    0:14:54 We almost lost you last night.
    0:14:55 You almost died.
    0:14:59 And when she said that, that’s when I remembered my father.
    0:15:01 I was like, oh, my God, I saw my father.
    0:15:03 And I saw the pit.
    0:15:05 And it all came rushing back to me.
    0:15:07 A rationalist with questions.
    0:15:07 I love that.
    0:15:09 That may be my religion.
    0:15:10 Yeah, right.
    0:15:11 If I have one.
    0:15:19 I mean, given what I know about your dad from this book, that he would appear to you almost like an angel.
    0:15:29 Seems like exactly the kind of thing he and you, hyper-rationalists and whatnot, would have dismissed as supernatural nonsense before this.
    0:15:33 He would have said, as I’m sort of inclined to say, but not entirely.
    0:15:37 I think he would have said, well, you know, I’m sure there’s certain neurochemical explanations.
    0:15:39 It’s the brain in distress.
    0:15:47 There’s probably all kinds of things going on neurochemically, high cortisol levels, this and that, like dopamine, whatever.
    0:15:50 I mean, you know, you can make the brain hallucinate.
    0:15:52 You can, you know, epileptics have visions.
    0:15:55 You know, I mean, there’s analogous phenomena in life with people.
    0:15:57 And so I think he probably would have ascribed it to that.
    0:16:07 And I’m inclined to as well, you know, sort of, except there’s one thing that sort of stuck in my mind that the doctors and the rationalists couldn’t quite explain.
    0:16:11 And let me just say, reiterate again, I’m an atheist.
    0:16:13 Now, I still do not believe in God.
    0:16:15 Atheist means that you do not believe in God.
    0:16:17 I do not believe in God.
    0:16:32 There’s something you describe in the book that was maybe the most holy shit moment for me.
    0:16:35 And there are several holy shit moments in this story.
    0:16:41 So a few days before your dad died of heart failure, you had an intense dream.
    0:16:43 He was in Boston.
    0:16:44 You were in New York.
    0:16:50 But you woke up in the middle of the night as though he was screaming your name from the next room.
    0:16:51 You look at the clock.
    0:16:54 And it was 3:15 a.m.
    0:17:02 And then a few hours later, your mom calls, tells you to go to Boston as soon as you can because your dad tried to throw himself out of bed in a panic.
    0:17:08 And when you asked her what time that happened, she said 3:15 a.m.
    0:17:10 I mean, come on, Sebastian.
    0:17:12 What the hell is that?
    0:17:13 That’s crazy.
    0:17:14 It is crazy.
    0:17:18 And again, the rationalist in me is like, okay, does that prove there’s a God?
    0:17:19 No, not really.
    0:17:23 It means that humans can communicate in ways that science doesn’t understand.
    0:17:25 And even communicate across distance.
    0:17:38 And there’s, at the quantum level, at the subatomic level, there actually is instantaneous communication between particles across vast distances, even across the entire universe.
    0:17:39 And that’s known to be true.
    0:17:40 And we don’t know why.
    0:17:41 We can’t explain how that works.
    0:17:43 But we know that it does work.
    0:17:48 So if that’s possible, can human minds communicate with, quote, telepathy?
    0:17:53 That seems to be something that almost everyone experiences with people they love.
    0:17:55 So to me, it stands to reason that it’s possible.
    0:18:00 Well, you talk to plenty of doctors and scientists about this.
    0:18:04 You even tried talking to some of your own doctors about your experience.
    0:18:06 What do they make of it?
    0:18:08 I’m sure they take you seriously.
    0:18:15 But how seriously do they take this story and stories like this, near-death experiences, that is?
    0:18:18 Well, it depends on the doctor who you’re talking to.
    0:18:19 It depends on the researcher.
    0:18:28 And there’s a whole body of research conducted by doctors and neurobiologists and all kinds of very accomplished, educated people.
    0:18:32 There’s a lot of documentation of what are called NDEs, near-death experiences.
    0:18:43 And sort of hovering above loved ones, as I did in my dream, or seeing a dead person show up to escort you over the threshold are very, very common in NDEs.
    0:18:46 Now, I didn’t know this, so I wasn’t projecting something that I knew.
    0:18:55 So some researchers have concluded that this is sort of verifiable proof that there is some kind of afterlife that we don’t understand.
    0:19:02 And they do use the word afterlife, which is, of course, on a semantic level is kind of a problem, because death is the end of life.
    0:19:06 So afterlife, I don’t even know quite what that means.
    0:19:07 It’s clearly not life.
    0:19:09 But they do come to that conclusion.
    0:19:13 And then there are a lot of other scientists and doctors who say, like, nonsense, it’s neurobiology.
    0:19:15 We can explain all of this.
    0:19:20 And after I came home from the hospital, it was not a sort of joyful party.
    0:19:22 I was enormously traumatized.
    0:19:27 The fact that I’d almost left my children fatherless was devastating to me.
    0:19:36 I became very sort of paranoid, now that I had looked over the precipice and realized that at any moment of any day, you can suddenly find yourself dying,
    0:19:39 in entirely unpredictable ways.
    0:19:40 Like, that really rattled me.
    0:19:53 And then I got into this other existential bind, which was, I started to worry that maybe I had died, and that I was a ghost, and that I was sort of haunting my family, and they couldn’t see me.
    0:19:57 And I just thought they could see me and were interacting with me, but actually, I wasn’t really there.
    0:20:01 And I know that sounds totally silly, but it was a real fear.
    0:20:04 And at one point, I went to my wife, and I was like, tell me I’m here.
    0:20:07 They just tell me that I’m, you know, she said, of course you’re here.
    0:20:08 And she sort of reassured me.
    0:20:12 But in my mind, I’m like, this is exactly what a hallucination would say to you, right?
    0:20:19 Like, I was in a real, very, very difficult place, which is not uncommon for someone who survived something like this.
    0:20:27 So I started researching, and eventually I tracked down research on NDEs and quantum physics and all this stuff, trying to explain what happened to me.
    0:20:32 And part of me was kind of rooting that maybe, wow, maybe there is an afterlife.
    0:20:34 Maybe we don’t need to be scared of death.
    0:20:37 You know, like, ooh, wow, these stories are pretty hard to refute.
    0:20:43 And then I’d read the rationalists, and I was like, oh, well, like, nice try, but this clearly is just nonsense.
    0:20:50 So I called on some colleagues of my father who were younger than him who were really fond of my dad.
    0:20:55 And I invited them for lunch, and I told them what happened to me, and I said, what do you think my dad would have thought of this?
    0:21:05 And at one point I asked, what would the odds be of my father reappearing above me, reconstituting himself on some level above me as I was dying?
    0:21:07 Are there odds for such a thing?
    0:21:11 And he said, well, this is how scientists think, right?
    0:21:13 He took me totally literally.
    0:21:14 He was like, all right, well, let’s see.
    0:21:19 He’s like, well, I would say probably about 10 to the minus 60.
    0:21:20 Very specific.
    0:21:21 Very specific.
    0:21:28 It’s one chance in a number that has 60 zeros following the 1, roughly.
    0:21:30 I was like, what?
    0:21:31 What are you talking?
    0:21:32 How did you come to that number?
    0:21:40 He said, well, it’s roughly the odds of all the oxygen molecules converging in one corner of the room and suffocating us.
    0:21:42 The odds are not zero.
    0:21:49 They’re almost infinitely small, but they’re roughly, according to statistical mechanics, they’re roughly 10 to the minus 60.
    0:22:04 And so those are the odds of the molecules that made up your father or the subatomic particles that made up your father randomly and kind of miraculously having a sort of like reunion in the corner of the room.
    0:22:06 Like, there are numbers for this.
    0:22:11 And so at that point, I realized the infinite rationality of the scientific mind.
    0:22:19 I think when I got to that part of the book, I was reminded that I most definitely do not have the brain of a physicist, for better or worse.
    0:22:21 Yeah, for better or worse.
    0:22:28 You know, that sort of focus of thought makes human relationships hard because my father missed a lot of the sort of the human element, right?
    0:22:30 The sort of emotional element.
    0:22:38 He was a very sweet man, but very distant and had no idea how to relate to children or really had sometimes a tough time with adults.
    0:22:47 So when he appeared above me, it struck me as the most overtly loving, generous, big hearted thing he’d ever done.
    0:22:57 When we get back from the break, what can science tell us about near-death experiences?
    0:22:59 Stay with us.
    0:23:12 Support for the gray area comes from Mint Mobile.
    0:23:15 As someone who runs a lot, I’m pretty familiar with sweat.
    0:23:19 And while I’m used to it at this point, it doesn’t make it any less gross.
    0:23:22 But there’s good sweat and bad sweat.
    0:23:27 And the bad sweat comes when you open your phone bill and see all the fees they’re charging you.
    0:23:32 Thankfully, Mint Mobile wants to help you keep cool this summer with a phone bill you’ll never have to sweat.
    0:23:40 At Mint Mobile, all plans come with high-speed data and unlimited talk and text delivered on the nation’s largest 5G network.
    0:23:46 You can use your own phone with any Mint Mobile plan and bring your phone number along with all your existing contacts.
    0:23:54 And if you ditch your overpriced wireless, you can get three months of premium wireless service from Mint Mobile for $15 a month.
    0:23:57 This year, you can skip breaking a sweat and breaking the bank.
    0:24:03 You can get your summer savings and shop premium wireless plans at mintmobile.com slash gray area.
    0:24:06 That’s mintmobile.com slash gray area.
    0:24:13 Upfront payment of $45 for a three-month, five-gigabyte plan required, equivalent to $15 a month.
    0:24:18 New customer offer for first three months only, then full-price plan options available.
    0:24:19 Taxes and fees extra.
    0:24:21 See Mint Mobile for details.
    0:24:28 Support for the show comes from Shopify.
    0:24:32 Starting a business is all about turning your ideas into reality.
    0:24:35 And to see it through, you need the right tools.
    0:24:37 Tools like Shopify.
    0:24:42 Shopify is a commerce platform behind millions of businesses around the world
    0:24:46 and, they say, 10% of all e-commerce in the U.S.,
    0:24:52 from household names like Mattel and Gymshark to brands just getting started,
    0:24:54 like Dr. Sean Illing’s drillings and fillings.
    0:25:00 And the gray area sole patch dye for hip cats who want to look as young as they feel.
    0:25:06 Shopify’s design studio lets you build a big, beautiful online store that matches your brand style.
    0:25:10 You can also use their AI tools to step up your content creation.
    0:25:15 Plus, you can easily create email and social media campaigns to meet your customers wherever they are.
    0:25:19 If you’re ready to sell, you can be ready for Shopify.
    0:25:24 You can turn your big business idea into reality with Shopify on your side.
    0:25:30 You can sign up for your $1 per month trial period and start selling today at Shopify.com slash Fox.
    0:25:33 Go to Shopify.com slash Fox.
    0:25:35 Shopify.com slash Fox.
    0:25:40 I’m Claire Parker.
    0:25:42 And I’m Ashley Hamilton.
    0:25:44 And this week, we’re discussing Hilaria Baldwin.
    0:25:46 Why does she have so many kids?
    0:25:50 She will not answer that question for you in a way that you want it answered,
    0:25:55 but she will respond to every single thing ever written about her in a tabloid in a deeply cryptic way.
    0:25:57 She’s taking on the tough questions like,
    0:26:00 does ADD make you speak with a Spanish accent?
    0:26:03 Does an older man guarantee happiness in a marriage?
    0:26:08 We talked to Eliza McLamb and Julia Hava from the Binchtopia podcast.
    0:26:11 They are Hilaria Baldwin experts,
    0:26:15 and they dove deep with us on Hilaria’s latest memoir, Manual Not Included.
    0:26:20 You can listen to new episodes of Celebrity Memoir Book Club every Tuesday on Amazon Music.
    0:26:32 Getting back to the science,
    0:26:37 do we really understand what happens in the brain during these experiences?
    0:26:40 Does science have a firm grasp of this?
    0:26:41 Yes and no.
    0:26:44 I mean, there was a case where a man was dying.
    0:26:46 I think he’d had a stroke.
    0:26:53 And they had electrodes attached to his skull to monitor different kinds of brain activity, to know how to treat him.
    0:26:56 And he passed some point of no return.
    0:26:57 And the doctor said,
    0:26:58 OK, it’s OK.
    0:27:00 You can sort of turn the machines off, basically.
    0:27:03 But the sensors were still in place on his skull.
    0:27:13 And so they had the chance to watch what was happening to the brainwaves in real time as a person died.
    0:27:22 And what they found was that in the 30 seconds before and after the moment of death—and, of course, death isn’t just confined to a single moment.
    0:27:23 It’s a spectrum.
    0:27:30 But there was a surge in brain activity related to dreaming and memories and all kinds of other things.
    0:27:37 And so one of the things that might happen when people die is that they experience this sort of flood of sensations from their life.
    0:27:38 Why would they?
    0:27:39 Who knows?
    0:27:41 Like, it’s hard to come up with a sort of Darwinian reason.
    0:27:42 Like, how would that be adaptive?
    0:27:43 The person’s dying.
    0:27:45 It’s not a question of survival and procreation.
    0:27:48 And Darwinism is not concerned with emotional comfort.
    0:27:51 It doesn’t matter in that sort of Darwinian arithmetic.
    0:27:52 So it’s hard to know what to make of that.
    0:27:54 But they did have one chance to do that.
    0:27:57 Science is reductionist by design.
    0:28:09 You can study near-death experiences, and you can map the neurochemical changes, and you can give a purely materialist explanation for them.
    0:28:11 But do you think it’s wise to leave it there?
    0:28:19 Or do you think there’s something just inherently mysterious about this that we just can’t quite understand?
    0:28:23 At one point, someone said to me, you know, you couldn’t explain what happened to you in rational terms.
    0:28:27 Why didn’t you turn to mystical terms?
    0:28:32 And I said, because rational terms is what an explanation is.
    0:28:38 And the alternative is a story, right?
    0:28:43 And humans use stories to comfort themselves about things they can’t explain.
    0:28:50 I don’t choose to use the God story or the afterlife story to comfort myself about the unexplainable, which is like what’s going to happen when I die.
    0:29:01 But let me say that the one thing that really stood out, I mean, I sort of bought all the neurochemical explanations, all of the sort of hard-boiled rationalists, like we’re biological beings.
    0:29:03 When we die, that’s it.
    0:29:12 And the flurry of experiences that dying people have is just the dying brain frantically bombarding us with signals, like what’s going on?
    0:29:13 Like, stop, stop, stop, stop, stop.
    0:29:16 Like, you know, that kind of sort of neurological confusion.
    0:29:18 Except for one thing.
    0:29:20 And what I don’t understand is this.
    0:29:27 Like, if you give a room full of people LSD, we know that 100% of those people will have hallucinations.
    0:29:28 We know why.
    0:29:29 We know how that works.
    0:29:30 There’s no mystery there.
    0:29:33 You don’t need God to explain that.
    0:29:36 But they’ll all hallucinate different things, right?
    0:29:41 And what’s strange about dying is that only the dying seem to see the dead.
    0:29:45 And they do that in societies all around the world and have for ages.
    0:29:48 I mean, there’s many historical accounts of this as well.
    0:29:51 And the people who aren’t dying do not see the dead.
    0:29:55 And often the dead are unwelcome and they’re a shock.
    0:29:58 It’s not some reassuring vision of Aunt Betty, right?
    0:30:00 And it’s just like, Dad, what are you doing here?
    0:30:06 Or my mother, as she died, she saw her dead brother, who she was not on speaking terms with.
    0:30:08 And when she saw him, she was horrified.
    0:30:10 She was like, what’s he doing here?
    0:30:12 And I said, Mom, it’s your brother.
    0:30:14 I mean, I just took a guess, right?
    0:30:15 I said, Mom, it’s your brother George.
    0:30:17 You have to be nice to him.
    0:30:18 He’s come a long way to see you.
    0:30:22 And she just frowned and said, we’ll see about that.
    0:30:23 You know, she died a day later.
    0:30:26 So it’s not like these are comforting visions or projections.
    0:30:34 And the fact that only the dying see the dead is the one thing that science can’t quite explain.
    0:30:40 It’s the one thing that really does make me wonder, you know, maybe we don’t understand everything in scientific terms.
    0:30:50 Maybe there is something missing here that is very significant about how reality works, how life and death work, what consciousness is, and ultimately what the universe is.
    0:30:54 I don’t want to fetishize doubt or make a virtue of doubt.
    0:31:02 But this is the kind of stuff that just leaves me in that same place that just the position of, man, I don’t really know.
    0:31:02 Yeah.
    0:31:04 And I’m not sure it’s knowable.
    0:31:05 And that’s okay.
    0:31:06 Yeah.
    0:31:09 I mean, like I said, some people rush in with stories to fill that gap.
    0:31:10 A lot can go wrong there.
    0:31:23 One of the theories about consciousness, a theory that Schrodinger, one of the pioneers of quantum physics, subscribed to, is that consciousness actually suffuses the entire universe.
    0:31:34 And there’s a kind of colossus of consciousness in the universe, which is 93 billion light years wide at the moment, just so that you understand the scale of the universe.
    0:31:43 And that our individual consciousness is sort of a very, very limited experience of the universal consciousness.
    0:31:46 It’s sort of scaled down to sort of the puny human size.
    0:31:49 But actually, there is a universal consciousness.
    0:32:01 And there’s a theory called biocentrism, which holds that this consciousness completely affects how the universe is constructed physically, that there’s a symbiotic relationship between physical reality and consciousness where they actually depend on each other.
    0:32:03 And you can’t prove it.
    0:32:04 You can’t disprove it.
    0:32:05 It’s a fascinating theory.
    0:32:09 But it’s where, for me, there’s a little bit of comfort.
    0:32:21 Like, no, I do not believe in God, and I certainly don’t believe in an afterlife where I, as Sebastian Junger, sort of continue on without the need to eat or sleep, and I can kind of float around talking to all the people I miss.
    0:32:34 But it’s possible that when we die, the sort of quantum information that constituted our identity and our consciousness is reunited with the grand consciousness, the colossus.
    0:32:40 There is something there that I find a little comforting and scientifically possible, right?
    0:32:43 It’s just we’re never going to prove it because I think we just don’t have the tools.
    0:32:49 And even to say that there’s an afterlife is not to say that there’s a God, necessarily.
    0:32:55 There could be some post-life reality that we just don’t understand or one that’s far weirder than we can imagine.
    0:33:00 But that would not mean that any of our religious stories are true.
    0:33:02 It would just mean shit’s a lot weirder than we thought.
    0:33:09 Yeah, I mean, as I say in the book, you know, our understanding of reality might be akin to a dog’s understanding of a television set.
    0:33:16 The dog has no concept that what it’s watching is a product of the screen and the wider context that produced the screen.
    0:33:21 I mean, religious people, and, you know, I obviously have a number of friends who are religious.
    0:33:27 Like, when they hear this story of mine, they’re very fond of saying, so, are you still an atheist?
    0:33:29 Like, you saw your dead father while you were dying.
    0:33:30 Are you still an atheist?
    0:33:34 And, of course, my pat little answer is, look, I saw my dad, not God.
    0:33:36 Like, if I’d seen God, there might be a conversation to have.
    0:33:52 But I saw my dad, and as you point out, it’s entirely possible that there could be some kind of creator God that created biological life in the universe, and that when that life dies, it dies absolutely and completely, and there’s no, quote, afterlife.
    0:34:01 Or there could be a post-death existence at some quantum level that we don’t and can’t understand in a completely physical universe that has no God.
    0:34:06 The two things don’t require each other, and you could have one or the other or neither or both.
    0:34:07 It’s all possible.
    0:34:18 One of the medical paradoxes here is that people who are dying experience near-total brain function collapse,
    0:34:26 and yet their awareness seems to crystallize, which seems impossible on its face.
    0:34:28 Do scientists have an explanation for this?
    0:34:32 Is it even a paradox at all, or does it just seem that way to someone on the outside who doesn’t understand it?
    0:34:34 I don’t think anyone knows.
    0:34:39 You know, ultimately, no one even knows if what we perceive during life is true.
    0:34:47 I mean, it’s known at the quantum level that observing a particle, a subatomic particle, changes its behavior.
    0:34:50 Now, of course, when you observe something, it’s a totally passive act.
    0:34:52 You’re not bombarding it with something, right?
    0:34:53 You’re just watching.
    0:35:01 If a particle, a photon, is sent through two slits in an impassable barrier, and it’s unobserved by a conscious mind,
    0:35:04 it will go through both slits simultaneously.
    0:35:09 And once you observe it, it’s forced to pick one slit.
    0:35:18 So, as the early physicists said, observation creates the reality that’s being observed, and then the snake starts to swallow its tail.
    0:35:27 And it’s been proposed that the universe is one massive wave function of all possibilities, of all things.
    0:35:39 And that the arrival of conscious thought, conscious perception, forced the entire observable universe to collapse into one single thing, which is the universe that we know.
    0:35:56 I will say this, I mean, if there is a heaven or afterlife, I don’t think it’s what most people think it is, which is a projection of our earthly wishes, and a rather transparent one at that.
    0:36:10 But it might be some bizarre quantum reality that I can’t even pretend to understand, because I don’t know the first thing about physics or quantum mechanics, other than that great line from Einstein calling it spooky action at a distance.
    0:36:12 This is sort of where you land, too, right?
    0:36:17 That reality is just very strange, and who the hell knows what’s really going on, or what’s really possible, for that matter.
    0:36:24 Yeah, I mean, at the quantum level, things happen that contradict everything we understand about the macroscopic level.
    0:36:27 So you can’t walk through two doorways at the same time.
    0:36:28 You can’t be in two places at once.
    0:36:30 But at the quantum level, you can.
    0:36:40 And so that opens the possibility of things that are extremely strange in the macroscopic world being absolutely ordinary in the quantum world.
    0:36:46 But the granddaddy of them all is the universe.
    0:37:01 The universe came from nothing and expanded from nothing to hundreds of millions of light years across in an amount of time that is too small to measure.
    0:37:08 So if that’s possible, and we know it’s possible because it happened, we can prove that it happened.
    0:37:10 We are proof that it happened.
    0:37:15 If that’s possible, in some ways, what isn’t possible?
    0:37:22 It’s just a question of, like, how limited our brains are, our amazing brains, but how limited are they in what we can perceive and explain?
    0:37:26 You use the phrase the other side a lot in the book.
    0:37:31 And, you know, someone was clinically dead, they glimpsed the other side, and then they came back.
    0:37:36 I mean, on some level, this is just the only language we have to describe such things.
    0:37:42 But what is your understanding of the other side as you sit here now?
    0:37:42 Is it a place?
    0:37:44 Is it more like an awareness?
    0:37:47 Or is it just neurochemicals detonating in our brains?
    0:37:57 Well, I mean, my direct experience of it was it was an infinitely black, deep pit that would swallow you and never let you back.
    0:38:01 And where you would become part of the nothingness that’s in it.
    0:38:06 Whatever you want to say about this, I did have a dream where I experienced being dead.
    0:38:08 Whatever you want to make of that, I did have that dream.
    0:38:14 And the experience of that dream, for whatever it’s worth, is that I was a spirit.
    0:38:18 I didn’t exist physically, but I existed as a collection of thoughts.
    0:38:28 And that that entity that was thinking was being pulled away from everything I knew and loved out into the nothingness forever.
    0:38:40 And there was a sense of the nothingness being an enormous circle that I was going to start sort of like proceeding around.
    0:38:42 And an infinitely huge circle.
    0:38:44 There was a sort of circularity to it.
    0:38:46 A kind of orbit to it.
    0:38:48 And I was getting pulled into this orbit of nothingness.
    0:38:51 And it made me panic, right?
    0:38:52 I was horrified.
    0:38:53 Like, there are my children.
    0:38:54 There’s my wife.
    0:38:58 So for me, the other side is nothing.
    0:39:00 I mean, it’s not like, oh, it’s the other bank of the river.
    0:39:03 You know, as the joke goes, like, how do I get to the other side of the river?
    0:39:04 You’re on the other side.
    0:39:05 It’s not like that.
    0:39:07 And that’s a kind of comforting vision.
    0:39:10 And it’s one that religions seem fond of.
    0:39:11 But it’s not at all how I see it.
    0:39:17 And, you know, if it were that way, you’d be looking at an eternity of consciousness with no escape.
    0:39:20 Which is its own hell, right?
    0:39:23 I mean, I could barely get through math class in high school.
    0:39:24 50 minutes, right?
    0:39:25 That felt like an eternity.
    0:39:28 Really, you want to be conscious for eternity with no way out?
    0:39:31 I mean, at least with life, if you need a way out, you can kill yourself.
    0:39:34 There’s no way out of an eternity of consciousness.
    0:39:38 And suppose that includes unbearable pain or grief.
    0:39:39 Suppose it’s unpleasant.
    0:39:46 People often talk about the near-death experience as though it’s a gift.
    0:39:53 To get that close to death and survive, the story goes, is supposed to bring clarity and peace or something like that.
    0:39:55 Do you find this to be true?
    0:40:01 It brought an enormous amount of trauma and anxiety and depression afterwards that I eventually worked through.
    0:40:03 And I mean work.
    0:40:05 I mean, it was work to climb out of that.
    0:40:10 The ICU nurse who told me that I’d almost died, she came back an hour later and said,
    0:40:11 How are you doing?
    0:40:13 And I said, Not that well.
    0:40:14 And she said, Try this.
    0:40:18 Instead of thinking about it like something scary, think about it like something sacred.
    0:40:20 And then she walked out.
    0:40:26 And so, you know, as an atheist, I’m happy to use the word sacred for its other wonderful meanings.
    0:40:29 You don’t need God to understand that some things are sacred.
    0:40:41 So for me, that word means, what’s the information that people need to lead lives with greater dignity and courage and less pain?
    0:40:43 That’s sacred knowledge.
    0:40:47 So did I come back from that precipice with any sacred knowledge?
    0:40:51 And it took me a long time to sort of answer that question.
    0:40:54 And I read about Dostoevsky.
    0:40:57 He sort of provided the final answer in some ways for me.
    0:41:01 So when he was a young man, before he was a writer, he was a little bit of a political agitator.
    0:41:05 And this is the 1840s during the times of the Tsar and serfdom.
    0:41:13 And he and his sort of like his woke brothers were agitating for freeing the serfs, you know, much like in the United States, there was talk about ending slavery.
    0:41:20 And the Tsar didn’t take kindly to the intelligentsia talking about such nonsense.
    0:41:24 So he threw these kids in jail, but no one thought it was a particularly serious situation.
    0:41:25 Right.
    0:41:33 And then finally, they were released and, you know, they were sort of put into a wagon and they assumed they were going to be released to their families after eight months.
    0:41:45 And instead, they were driven to a city square and tied to posts and a firing squad was arrayed against them.
    0:41:52 And the rifles were leveled and the rifles were cocked and the men waited for the order to fire.
    0:42:04 And what happened, and we know what Dostoevsky was thinking in that moment, was that a rider galloped into the square and said, the Tsar forgives them.
    0:42:06 It was all theater, but they didn’t know that, of course.
    0:42:08 The Tsar forgives them.
    0:42:10 You know, stand down.
    0:42:11 Like, do not kill them.
    0:42:23 So Dostoevsky, through a character that is widely thought to be a substitute for himself in a book called The Idiot, notices sunlight glinting off a roof and thinks to himself,
    0:42:24 in moments, I’m going to join the sunlight.
    0:42:26 I’ll be part of all things.
    0:42:34 And that if I should survive this somehow by some miracle, I will treat every moment as an infinity.
    0:42:38 I’ll treat every moment like the miracle that it actually is.
    0:42:50 And, of course, that’s an almost zen appreciation for reality that’s impossible to maintain while you’re changing the baby’s diapers and the smoke alarm’s going off because you burned the dinner and blah, blah, blah.
    0:42:52 Of course, we’re humans and we get sucked into our drama.
    0:43:10 But if you can have some awareness at some point that life happens only in moments and that those moments are sacred and miraculous, if you can get there once in a while, if you can understand that the sunlight glinting off the roof, that you’re part of it and it’s part of you.
    0:43:11 And one day it’s all going to be the same thing.
    0:43:16 If you can do that, you will have reached a place of real enlightenment.
    0:43:18 And I think it deepens your life.
    0:43:20 You had a great line in the book.
    0:43:28 You wrote, it’s an open question whether a full and unaverted look at death crushes the human psyche or liberates it.
    0:43:29 And it really is, isn’t it?
    0:43:35 I mean, we all know that death is inevitable and that it can come on any day.
    0:43:42 And living in constant contact with that reality is supposed to be motivation for being more present, for living in the moment, as they say.
    0:43:49 But no matter how hard we think about it, our death remains an abstraction until it arrives.
    0:43:51 And I just don’t know how you can be prepared for that.
    0:44:02 And I love what your wife, Barbara, says about that in the book, to the effect that that attitude toward life, where you feel like you’re always at risk of losing everything,
    0:44:06 doesn’t seem to be healthy, to be in that space all the time.
    0:44:16 That’s the needle we have to thread: be aware of our mortality, but don’t be taken hostage by that awareness, which is what happened to me in the immediate aftermath of almost dying.
    0:44:25 So I should say that two of the young men who were with Dostoevsky, by his account, were insane for the rest of their lives.
    0:44:27 They never psychologically recovered from the shock.
    0:44:31 Dostoevsky went in another direction.
    0:44:32 He went towards, you know, a kind of enlightenment.
    0:44:33 I don’t know.
    0:44:38 I guess never thinking about death seems as unwise as obsessing over it.
    0:44:40 So maybe there’s some sweet spot in between.
    0:44:41 That’s where we’re supposed to toggle.
    0:44:46 You know, one of the definitions of consciousness is to be able to imagine yourself in the future.
    0:44:51 Well, if you can imagine yourself in the future, you’re going to have to imagine yourself dead because that’s what the future holds.
    0:45:00 And once we’re neurologically complex enough to have that thought, it would be paralyzing for the puny efforts of our lives
    0:45:03 if we weren’t able to use an enormous amount of denial.
    0:45:07 So we have this abstract knowledge that, you know, all is for naught, right?
    0:45:08 And we’re going to die.
    0:45:13 But we have to keep it out of our daily awareness because otherwise it would demotivate us.
    0:45:15 It would keep us apathetic and crazy.
    0:45:19 And so it’s a balancing act that the human mind does.
    0:45:34 And so the trick, I think, in terms of a kind of healthy enlightenment is to allow in that awareness of death only to the extent where it makes life seem precious, but not to the extent where it makes life seem so fleeting that why bother?
    0:45:40 And maybe that’s just our fate as finite, painfully self-aware creatures.
    0:45:43 We live, we keep rolling our boulders up the hill until the lights go out.
    0:45:47 And as Camus says, we must imagine Sisyphus happy.
    0:45:48 Oh, wonderful.
    0:45:49 I didn’t know that quote.
    0:45:50 That’s a wonderful quote.
    0:46:06 After one more short break, we talk about how confronting death changes the way you live.
    0:46:07 Stay with us.
    0:46:21 Hey, guys, it’s Andy Roddick, former world number one tennis player and now a podcaster.
    0:46:25 It’s clay season in pro tennis, and that means the French Open.
    0:46:31 On our show, Served, with me, Andy Roddick, we have wall-to-wall coverage for the entire two weeks.
    0:46:39 We kick things off with a draw special presented by Amazon Prime, breaking down both the men’s and women’s brackets, making picks, and yeah, probably getting most of them wrong.
    0:46:44 Plus, on June 3rd, my idol, Andre Agassi, is joining Served.
    0:46:45 Be sure to tune in.
    0:46:50 After that, we wrap all things French Open with a full recap show, also presented by Amazon Prime.
    0:46:51 That’s June 10th.
    0:46:57 So be sure to find the show, Served, with me, Andy Roddick, on YouTube or wherever you get your podcasts.
    0:47:06 This week on Prof G Markets, we speak with Aswath Damodaran, Professor of Finance at NYU’s Stern School of Business.
    0:47:12 He shares his take on the recent tariff turmoil and what he’s watching as we head into second quarter earnings.
    0:47:21 This is going to be a contest between market resilience and economic resilience as to whether, in fact, the markets are overestimating the resilience of the economy.
    0:47:29 And what the actual numbers are going to deliver is maybe the economy and markets are a lot more resilient than we gave them credit for.
    0:47:38 In which case, we’ll come out of this year just like we came out of 2020 and 2022 with much less damage than we thought would be created.
    0:47:42 You can find that conversation exclusively on the Prof G Markets feed.
    0:48:01 You spent so much of your life taking risks, calculated risks, I would say. Now that you’ve almost died, now that you’re a parent, the game has changed.
    0:48:04 I imagine the calculus for you is much different as well.
    0:48:08 Oh, I stopped war reporting after my buddy Tim was killed in 2011.
    0:48:17 I saw what his death did to everyone who loved him, and I just realized that going off to war suddenly looked like a selfish act, not a noble one.
    0:48:19 And so I stopped doing it.
    0:48:26 And then six years later, I had my first child, you know, and I’m an older dad, so I feel extremely lucky, extremely lucky to be a father.
    0:48:31 And I’m the most risk-averse person you’ll ever meet now.
    0:48:33 I won’t cross Houston Street against the walk light.
    0:48:34 I mean, you know, it’s ridiculous.
    0:48:46 Being a parent is emancipatory in the sense that you’re not living for yourself anymore, which I do believe, I’ve come to believe, is a happier, more fulfilling existence.
    0:48:54 But it makes the prospect of death even worse because of what you leave behind, because the people you love need you.
    0:48:56 That is what terrifies me.
    0:49:02 I had a recent scare with a mole, a funky-looking mole on my arm, and I was so worried about it.
    0:49:05 And my wife was like, you’re fine, you’re fine.
    0:49:10 But I mean, I was Googling, what does melanoma look like and all this shit?
    0:49:12 Oh, Bob Marley had a melanoma on his foot?
    0:49:14 Oh, shit, it can happen to him.
    0:49:17 Those are the thoughts running through my mind.
    0:49:22 Not that I would cease to be, but that my son would not have a father.
    0:49:24 And that is the most terrifying thought I’ve ever had.
    0:49:30 I talked to a fireman, a father of four, I think, a fairly young man who was trapped in a burning building.
    0:49:31 He couldn’t get out.
    0:49:33 I mean, he was so desperate.
    0:49:37 He started, it was a brick exterior wall, and he started trying to punch his way through it.
    0:49:38 He obviously couldn’t.
    0:49:41 And he finally got to a window.
    0:49:42 There was zero visibility.
    0:49:43 It was so filled with smoke.
    0:49:47 And he finally got to a window and threw himself out headfirst and survived.
    0:49:48 And another guy didn’t survive.
    0:49:53 But in those terrible moments, he kept thinking, my son’s going to grow up without a father.
    0:49:56 Once you’re a parent, like, it’s foremost in your mind.
    0:50:07 And if you’re a parent when you’re young, you know, that’s the point in your life when you’re enormously driven by your own desires and curiosity and juggling that with the responsibilities of parenthood is extremely hard.
    0:50:11 And frankly, it’s pretty easy to resent the obligations, right?
    0:50:13 I mean, I’m glad I wasn’t a parent at 25.
    0:50:15 I think I would have been a selfish parent.
    0:50:15 Same.
    0:50:18 Like, I became a parent at 55.
    0:50:21 And by that point, I didn’t interest me anymore.
    0:50:22 Like, I wanted to be a father.
    0:50:27 In that sense, as long as I live a long life, it will have been a very good choice for me.
    0:50:28 I didn’t interest me anymore.
    0:50:30 That’s a good line.
    0:50:31 I may have to steal that.
    0:50:37 There’s a beautiful passage at the end of the book that I’d like to read, if you don’t mind.
    0:50:37 Yeah.
    0:50:40 Because it feels like an appropriate way to wrap this up.
    0:50:41 So now I’m quoting you.
    0:50:50 One might allow the quick thought that it is odd that so many religions, so many dying people, so many ecstatics,
    0:50:57 and so many quantum physicists, believe that death is not a final severing, but an ultimate merging.
    0:51:05 And that the reality we take to be life is, in fact, a passing distraction from something so profound, so real, so all-encompassing,
    0:51:13 that many return to their paltry bodies on the battlefield or hospital gurney, only with great reluctance and a kind of embarrassment.
    0:51:16 How can I pass up the truth for an illusion?
    0:51:18 That’s the end of the quote.
    0:51:29 What I would say to that is that there’s something in me that revolts against any ideology that thinks of life itself as an illusion.
    0:51:35 I mean, this is why I didn’t care for Christianity, the religion of my community, when I was younger.
    0:51:43 Because I didn’t like the idea that this life is some kind of way station en route to the next life, which is supposed to be the more important life.
    0:51:49 But hearing these accounts of NDEs, your account, it gives me pause.
    0:51:50 I don’t know how else to say it.
    0:51:51 I don’t know what to think.
    0:51:52 I don’t know what’s true.
    0:51:55 There’s something here, something worth taking seriously.
    0:51:56 I guess that’s all I know.
    0:51:59 I guess I’ll stop there and let you close this out with your own thoughts on that.
    0:52:00 Yeah.
    0:52:10 So I’m a journalist, and I try to keep my biases out of my work, and I do not come to assertions, to conclusions that aren’t backed up by fact.
    0:52:27 So what I found in my research is that there was an extraordinary number of people who, on the threshold of death, like I was, looked back and thought, that’s not the real thing.
    0:52:29 Life’s not the real thing.
    0:52:30 I’m entering the real thing now.
    0:52:41 And then I was surprised that there were some extremely smart people and non-religious people, like Schrodinger, like the physicists, who had a sort of similar thought.
    0:52:50 And so I put that in there not because I’m trying to convince anyone of anything, and I don’t even know what I believe particularly, but it’s good information.
    0:52:51 It’s important.
    0:52:52 It’s interesting information.
    0:53:02 It either says something profound about the human brain’s capacity for self-delusion, or it contains something profound about the nature of physical reality.
    0:53:13 And I doubt we’ll ever know which it is, but it’s important to keep both in mind and to take all the information we can from these extraordinary experiences and to take them at face value, to take them literally.
    0:53:15 Like, these people really did experience this.
    0:53:16 What does it mean?
    0:53:17 I’m going to leave it right there.
    0:53:22 Once again, the book is called In My Time of Dying.
    0:53:24 I read it cover to cover in a day.
    0:53:27 Just a sublime and honest book.
    0:53:28 I can’t recommend it enough.
    0:53:31 Sebastian Junger, this was a pleasure.
    0:53:32 Thank you.
    0:53:32 Thank you.
    0:53:34 I really enjoyed the conversation.
    0:53:46 All right.
    0:53:48 Another episode about death.
    0:53:49 How about that?
    0:53:54 As you can tell, it’s a recent favorite of mine.
    0:53:59 I just, I love the intensity of it, and I love the honesty.
    0:54:10 And for a show that prides itself on leaning into the questions and not needing final answers, this one felt pretty on brand.
    0:54:12 What did you think?
    0:54:17 You can drop us a line at thegrayarea@vox.com and let us know.
    0:54:21 And if you don’t have time for that, rate, review, subscribe.
    0:54:23 That stuff really helps, and we appreciate it.
    0:54:35 This episode was produced by John Ahrens, edited by Jorge Just, engineered by Patrick Boyd, and Alex Overington wrote our theme music.
    0:54:38 New episodes of The Gray Area drop on Mondays.
    0:54:40 Listen and subscribe.

    Sebastian Junger came as close as you possibly can to dying. While his doctors struggled to revive him, the veteran reporter and avowed rationalist experienced things that shocked and shook him, leaving him with profound questions and unexpected revelations. In his book, In My Time of Dying, he explores the mysteries and commonalities of people’s near-death experiences.

    In this episode, which originally aired in May 2024, he joins Sean to talk about what it’s like to almost die and what quantum physics can tell us about the afterlife.

    Host: Sean Illing (@SeanIlling)

    Guest: Sebastian Junger, journalist and author of In My Time of Dying: How I Came Face to Face With the Idea of an Afterlife

    Listen to The Gray Area ad-free by becoming a Vox Member: vox.com/members

    Help us plan for the future of The Gray Area by filling out a brief survey: voxmedia.com/survey. Thank you!

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • Machiavelli on how democracies die

    AI transcript
    0:00:07 The Tribeca Festival is back June 4th through 15th and it’s packed with can’t miss experiences.
    0:00:12 Catch Sandra Oh on a live podcast recording of The Interview from the New York Times.
    0:00:17 Cheer on track and field superstar Alison Felix in the documentary She Runs the World.
    0:00:24 Or catch My Mom Jayne, Mariska Hargitay’s moving documentary feature directorial debut about her
    0:00:29 mother, Hollywood icon Jayne Mansfield. There’s something for everyone. Get your tickets now at
    0:00:30 Tribecafilm.com.
    0:01:02 And that means your people can focus on the work that they want to do. That’s putting
    0:01:08 AI agents to work for people. It’s your turn. You can get started at servicenow.com slash
    0:01:10 AI-agents.
    0:01:22 Very few ideas stand the test of time. And very few works of literature or philosophy are remembered
    0:01:30 even 50 or 100 years after they were written. What about 500? How many 16th century philosophers do you
    think you could name? Thomas More? Sure.
    0:01:39 Francis Bacon. Yeah, but it’s high school biology textbooks that are keeping him alive.
    0:01:43 Montaigne. Can you name a single thing he wrote?
    0:01:48 If you’re a philosophy sicko that listens to this show, maybe.
    0:02:03 But normies? I doubt it. But then there’s Machiavelli. What’s up with him? Why, after 500 years, is Machiavelli so famous?
    0:02:09 Why does his writing, especially the prince, still resonate so much today?
    0:02:22 And why, after 500 years of being dissected, analyzed, dissertated, read, and re-read, is Machiavelli so often misunderstood?
    0:02:33 What was he really up to? What have we missed? And what can Machiavelli tell us about our world right now?
    0:02:38 I’m Sean Illing, and this is The Gray Area.
    0:02:48 Today’s guest is Erika Benner. She’s a political philosopher and the author of numerous books about Machiavelli,
    0:02:56 including my favorite, Be Like the Fox, which offers a new interpretation of Machiavelli’s most famous work,
    0:03:06 The Prince, which Machiavelli wrote in exile after the Medici family overthrew Florence’s fledgling Republican government.
    0:03:14 For centuries, The Prince has been widely viewed as a how-to manual for tyrants.
    0:03:16 But Benner disagrees.
    0:03:24 She says it’s actually a veiled, almost satirical critique of authoritarian power.
    0:03:29 And she argues that Machiavelli is more timely than you might imagine.
    0:03:37 He wrote about why democracies get sick and die, about the dangers of inequality and partisanship,
    0:03:43 and even about why appearance and perception matter far more than truth and facts.
    0:03:50 If she’s right, Machiavelli is very much a philosopher for our times,
    0:03:54 with something to say about this moment.
    0:03:57 So I invited her on the show to talk about it.
    0:04:01 Erika Benner, welcome to the show.
    0:04:02 Hi, Sean. Thanks for having me.
    0:04:12 There is the popular caricature of Machiavelli, with which I think most people are familiar, you know, the conniving, manipulative, sneaky figure.
    0:04:15 And then there’s the real Machiavelli.
    0:04:17 Tell me about the gap between those two.
    0:04:24 It’s massive, because if you go and you read, like, Machiavelli’s correspondence, they’re hilarious.
    0:04:26 Like, he’s the funniest guy on earth.
    0:04:33 The first kind of literary piece that we’ve got notes about, but we don’t actually have anymore,
    0:04:35 because it was kind of concealed and then the Popes banned everything he wrote.
    0:04:42 But one of the first things he wrote was a satirical play about, like, the powers that be in Republican Florence.
    0:04:44 So he’s a satirist.
    0:04:51 I mean, one of the things I just always want to kind of bring out about the gap between this cold, calculating advisor to princes saying, you know,
    0:04:55 better to be feared than loved is this guy is hilarious all the time.
    0:04:56 And he’s a dramatist.
    0:04:59 Like, he’s a brilliant writer of plays and dramas.
    0:05:04 And so that’s one gap that when you read, when you come to The Prince,
    0:05:10 I kind of urge people always to kind of bear in mind that before he wrote The Prince and then after he wrote The Prince,
    0:05:12 he was writing political satires.
    0:05:18 And I think we’re in a kind of atmosphere now in the world where that might be easier to see for a lot of people.
    0:05:25 Because if you imagine somebody who doesn’t want to be too direct and preach to people in criticizing the great leaders,
    0:05:28 but still wants to kind of take the piss out of them.
    0:05:32 He just does that in a very, very subtle Florentine way.
    0:05:34 I don’t know if that kind of answers your question.
    0:05:36 No, no, it does.
    0:05:40 So, you know, when I was in graduate school for political theory,
    0:05:49 Machiavelli is introduced as kind of like the first truly modern political scientist,
    0:05:53 sort of like the Galileo of politics.
    0:05:55 Is that how you think of him?
    0:05:57 Is that how we should think of him?
    0:05:58 No.
    0:06:00 I absolutely don’t think of him that way.
    0:06:01 Say more.
    0:06:03 Perfectly, no.
    0:06:18 Machiavelli was somebody whose main examples and main interests when it comes to, like, you know, thinking about how politics should work.
    0:06:20 His main interest is in ancient history.
    0:06:21 That’s clear.
    0:06:27 He’s somebody who’s really, really grounded in ancient history, like most of the people who are educated in his times.
    0:06:33 His, you know, second big, big book is called The Discourses on Livy.
    0:06:46 And that is a commentary on ancient Rome, which is trying to draw from history examples that can, you know, serve as cautionary, you know, warnings to people of his own times and for the future.
    0:06:51 But also help us to think about what is actually, you know, what is a good leader in politics?
    0:06:52 What is prudent?
    0:06:58 And what kind of sometimes seems like a prudent policy, like something to actually achieve some good?
    0:07:06 But then if you really, really think ahead and look at history as well and realize what people have done before along those lines, you kind of see the problems.
    0:07:13 I don’t think he’s a scientist in a political science way at all, thank goodness.
    0:07:16 Maybe you could replace political scientist with just political thinker.
    0:07:18 Maybe that’s a little broader.
    0:07:21 But you’re right, he is very interested in the past.
    0:07:31 And he seems to have a bit of a beef with a lot of these ancient philosophers, you know, the Platos and the Aristotles.
    0:07:40 He seemed to think that they were naive, if that’s the right word, that they weren’t looking at the political world the way it actually is.
    0:07:50 But instead, they were projecting their own ideals and fantasies onto the political world and then dreaming up a kind of politics in light of that.
    0:07:54 What was his beef with the ancients?
    0:08:00 What did he think they misunderstood or got wrong or ignored about actual life?
    0:08:03 The kind of view that you just outlined.
    0:08:09 I don’t want to get nerdy and too academic about this, but that is talked out.
    0:08:10 You’re about to tell me why I’m wrong, aren’t you?
    0:08:18 Out of a couple of lines where he says things like, oh, you know, philosophers have imagined ideal republics.
    0:08:18 Yes.
    0:08:21 But in reality, that’s just one quote.
    0:08:26 And then there’s some other little quotes you can pluck out and you can turn into a big modernist system,
    0:08:31 which quite a few of my very esteemed Machiavelli scholar colleagues have done.
    0:08:48 But actually, you know, in context, it’s a lot more complicated than that because he also has many quotes elsewhere where he said the greatest thing you can do if you can’t actually create a republic is to imagine one.
    0:09:04 He has that in one of his pieces that’s called the discourses on government of Florence, where he’s trying to advise them on how to save their flailing, fragile new non-republic and saying, maybe you should turn this non-republic back into a republic.
    0:09:11 And he says, I’m not in a position anymore to help do that in practice like I used to be.
    0:09:16 So I’m trying to go in the footsteps of Plato and Aristotle and imagine one where you can’t have one.
    0:09:24 He is not, there is no sharp contrast in Machiavelli properly understood between his ideals and his realism.
    0:09:30 And realism, you know, realism and idealism don’t have to be opposite.
    0:09:45 That idea in itself is something I think people are really waking up to more and more that if we try to go, you know, if you imagine that you can follow a realistic path towards a better world without having an ideal or two to guide it.
    0:09:57 You know, he sort of has the same problem Nietzsche has in that there’s so much irony in so many different voices in his work that it practically begs you to misinterpret it.
    0:10:02 Or at the very least, it makes it very easy for the reader to project whatever they want onto the work.
    0:10:05 And so it’s his fault, I guess is what I’m saying.
    0:10:07 Yeah, but this is what they did.
    0:10:20 I mean, this is another thing that I hope we will, I mean, I was just reading something today about AI and why, you know, somebody like a professor saying, don’t we still want students to learn how to read difficult, ambiguous works?
    0:10:36 And that reading, one of the reasons we want students to keep reading and not to filter everything to interpret it through a machine is that reading is like a practice in listening and a practice in hearing things that are subtly off or that you could work with.
    0:10:38 And that’s, and that’s what you need in politics, right?
    0:10:47 Especially in a democracy or a republic, you need people not just to go by hard rules, but to hear the subtleties and be able to judge for themselves.
    0:10:49 That’s what he’s trying to get us to do.
    0:11:00 Like as, you know, part of recovering the republic is readers have to see, you know, he’s telling you all these shocking things that you ought to do and that princes ought to do.
    0:11:05 And readers are supposed to be kind of saying, hang on, hang on, let me judge that for myself.
    0:11:11 All right, look, let’s, let’s set about the work of, of cutting through some of these.
    0:11:12 You don’t believe me, Sean, do you?
    0:11:13 No, I, I do.
    0:11:14 I do.
    0:11:23 I mean, I, I think, I think once you get into the business of trying to distinguish, you know, what is the wink wink and what is meant to be taken literally, it’s very difficult.
    0:11:27 But, but you are, you are a much closer reader of Machiavelli than I am.
    0:11:29 So, um, I’m not going to challenge you on that.
    0:11:33 And I think your reading is actually very interesting and very persuasive.
    0:11:42 Um, part of what I’m doing here is because the popular image of him is so cemented as this, you know, deceptive figure.
    0:11:47 Um, I’m really trying to set that up so that you can, you can challenge it kind of, you know, piece by piece.
    0:11:50 So, uh, let’s start, right?
    0:12:03 I mean, I think one of the, certainly the, the conventional popular view of Machiavelli is that he is someone who wanted to draw this neat, clear line between morality and politics.
    0:12:04 They wanted to sever these things.
    0:12:08 Um, but you write in the book that that’s not true, right?
    0:12:15 That he simply wanted to put, and now I’m quoting, he simply wanted to put morality on firmer, purely human foundations.
    0:12:17 So, what does that mean?
    0:12:20 How is it different from what people think he’s doing?
    0:12:39 Well, what is true is that he often criticizes the morality of, let’s say, the hyper-Christianity or spirituality that, you know, puts morals, judgments of right and wrong, into the hands of priests and popes.
    0:12:48 And some abstract kind of God that, you know, he may or may not believe in, but doesn’t think is something we can totally access as humans.
    0:12:58 We can’t, you know, so if we want to think about morality, both on a personal level, but certainly in politics, we’ve got to kind of go back to basics.
    0:13:01 Think about what is the behavior of human beings?
    0:13:02 What is human nature?
    0:13:10 What are the drives that kind of propel human beings to do the stuff that we call good or bad?
    0:13:11 And that’s one of the fundamentals.
    0:13:16 I think he wants to say, we should see human beings not as fundamentally good or evil.
    0:13:23 We shouldn’t think that human beings can ever be angels and we shouldn’t see them as devils when they behave badly until they really behave badly.
    0:13:24 And then we can call them evil.
    0:13:25 And sometimes he does.
    0:13:29 He calls people cruel, inhuman and evil sometimes, but very seldom.
    0:13:36 But the basic is if you want to develop a human morality, you, you study yourself, you study other humans.
    0:13:39 You don’t put yourself above other humans because you’re just one, too.
    0:13:57 And, and then you kind of start from there and say, right, what kind of politics is going to make such people coexist in ways that are not going to aim at some like divine order, you know, something that’s going to bring higher and higher kind of godly ethics into human life.
    0:14:01 We’ve got to be more modest and just talk about, we’re all going to be arguing.
    0:14:03 We’re all going to be difficult.
    0:14:05 Let’s have rules and laws that help us coexist.
    0:14:08 Well, let’s talk about the prince.
    0:14:08 I take it.
    0:14:13 You do not think this book is very well understood in the popular imagination.
    0:14:14 Is that about right?
    0:14:16 Do you think most people have this book wrong?
    0:14:25 And if they do, tell me what you think is the, the most glaring, obvious misinterpretation of what he’s actually up to there.
    0:14:30 Because I think what people think he’s up to is giving this handbook to tyrants.
    0:14:33 I think that is what most people think.
    0:14:34 Tell me how that’s wrong.
    0:14:36 I mean, I used to think that too.
    0:14:40 I used to have to teach Machiavelli as part of lots of different thinkers.
    0:14:42 And I would just say, well, it’s a handbook for tyrants.
    0:14:47 But then he wrote the discourses, which is a very, very Republican book, very openly.
    0:14:58 So, so there’s first, that’s the first thing that sets people off and makes you think, well, how could he have switched so quickly from being a super Republican as a political actor to writing The Prince to suddenly writing the Discourses?
    0:14:59 So that’s a kind of warning sign.
    0:15:01 And then that got me thinking.
    0:15:06 And then I kept coming across earlier authors who I trust deeply, like Rousseau.
    0:15:18 I mean, Jean-Jacques Rousseau, in the 18th century, a great, I think a very great philosopher and a deep Republican, has a footnote in his social contract saying the prince has been totally misunderstood.
    0:15:20 This was Machiavelli.
    0:15:20 Is that right?
    0:15:21 Yeah.
    0:15:23 I didn’t, I didn’t, I mean, I’ve read that book, but.
    0:15:23 Yeah, yeah, yeah.
    0:15:25 This is what set me off.
    0:15:26 So this isn’t just me making it up.
    0:15:33 I mean, I have to say, I’m not like a, I mean, I’m a very, like, I was very uncertain about this too.
    0:15:43 But then when I started seeing that some of the earliest readers of Machiavelli and the earliest comments you get from Republican authors, they all see Machiavelli as an ally.
    0:15:45 And they say it, they say he’s a moral writer.
    0:15:50 Rousseau says he has only had superficial and corrupt readers until now.
    0:15:57 You pick up The Prince and you read the first four chapters, and most people don’t read them that carefully because they’re kind of boring to modern readers.
    0:16:04 The exciting ones are the ones in the middle about morality and immorality, but like the first ones you go, and then you come to chapter five, which is about freedom.
    0:16:11 And up to chapter four, it sounds like a pretty cool, cold analysis of this is what you should do.
    0:16:13 Chapter five, wow.
    0:16:16 It’s like how republics fight back.
    0:16:20 And the whole tone, and remember, he’s a literary guy and he’s a dramatist.
    0:16:22 The whole tone changes.
    0:16:34 There’s suddenly fire, republics are fighting back, and the prince has to be on his toes because he’s probably not going to survive the wrath of these fiery republics that do not give up.
    0:16:35 So who is he talking to?
    0:16:37 Who is he really talking to in the prince?
    0:16:40 Is he talking to the people or is he talking to future princes?
    0:16:51 I mean, I see it as, you know, imagine somebody who’s been kicked out of his job and has a big family to support.
    0:16:55 He had a lot of kids and who loved his job and was passionate about the republic.
    0:16:57 He’s been tortured.
    0:16:59 He doesn’t know what’s going to happen next.
    0:17:07 And he’s absolutely gutted that Florence’s republican experiment, new, renewing the republic experiment has failed.
    0:17:10 And he can’t speak freely.
    0:17:17 So what does a guy with a history of writing dramas and satire do to make himself feel better?
    0:17:19 So number one motivation is it makes you feel better.
    0:17:31 You know, I mean, you’re just like taking the piss out of the people who have made you and a lot of your friends very miserable in a low-key, you know, way, because you can’t be too brutally satirical about it.
    0:17:33 It makes you feel a little bit better.
    0:17:40 But I think he’s really writing it in a way to kind of expose the ways of tyrants.
    0:17:53 Support for this show comes from Shopify.
    0:17:57 When you’re creating your own business, you have to wear too many hats.
    0:18:06 You have to be on top of marketing and sales and outreach and sales and designs and sales and finances and definitely sales.
    0:18:10 Finding the right tool that simplifies everything can be a game changer.
    0:18:14 For millions of businesses, that tool is Shopify.
    0:18:18 Shopify is a commerce platform behind millions of businesses around the world.
    0:18:23 And, according to the company, it’s behind 10% of all e-commerce in the U.S.,
    0:18:28 from household names like Mattel and Gymshark to brands just getting started.
    0:18:33 They say they have hundreds of ready-to-use templates to help design your brand style.
    0:18:36 If you’re ready to sell, you’re ready for Shopify.
    0:18:42 You can turn your big business idea into reality with Shopify on your side.
    0:18:48 You can sign up for your $1 per month trial period and start selling today at Shopify.com slash Fox.
    0:18:51 You can go to Shopify.com slash Fox.
    0:18:54 That’s Shopify.com slash Fox.
    0:19:03 Support for this show comes from NPR’s Planet Money.
    0:19:08 Sometimes the way we discuss the economy can feel completely removed from our lives.
    0:19:14 but economics is everywhere and it’s in everything fueling our lives, even where we least expect it.
    0:19:18 The Planet Money hosts go to great lengths to help explain the economy.
    0:19:23 They’ve done things for stories that you wouldn’t initially connect with economics.
    0:19:29 Things like shooting a satellite into space, becoming a record label, making a comic book,
    0:19:31 and shorting the entire stock market.
    0:19:35 All to help you better understand the world around you.
    0:19:41 Tune into Planet Money every week for entertaining stories and insights about how money shapes our world.
    0:19:44 Stories that can’t be found anywhere else.
    0:19:48 If you’re curious to learn something new and exciting about economics every week,
    0:19:52 you can listen to the Planet Money podcast from NPR.
    0:19:59 The great thing about this show is that it’s really smart and it’s really entertaining and it works whether you’re an expert
    0:20:03 or someone who barely understands the economy, like me.
    0:20:11 Anyway, you can tune into Planet Money every week for entertaining stories and insights about how money shapes our world.
    0:20:13 Stories that can’t be found anywhere else.
    0:20:15 Listen now to Planet Money from NPR.
    0:20:22 Support for this show comes from Quince.
    0:20:24 It’s Quince season.
    0:20:30 And I’m not talking about the delicious yellow pear-like fruit that the Portuguese turn into marmalade.
    0:20:32 That would be late fall.
    0:20:33 And I am looking forward to it.
    0:20:35 I’m talking about summer.
    0:20:37 And I’m talking about clothes.
    0:20:38 Clothes from Quince.
    0:20:42 Quince has things you actually want to wear in the summer.
    0:20:46 Like organic cotton silk polos, European linen beach shorts,
    0:20:51 and comfortable pants that work for everything from backyard hangs to nice dinners.
    0:20:57 And Quince says that everything is priced 50 to 80 percent less than what you’d find at similar brands.
    0:21:01 My colleague here at Vox, Claire, tried Quince for herself.
    0:21:05 Every piece that I got from Quince is perfect for the summertime.
    0:21:09 The sunglasses have been great to go with a ton of different outfits, and they’re really high quality.
    0:21:13 I feel like I can just carry them around with me all day for any situation.
    0:21:16 I recommend Quince to everyone.
    0:21:21 They’re great pieces, they’re affordable, and they’re going to last a long time in your wardrobe.
    0:21:24 You can elevate your closet with Quince.
    0:21:31 You can go to quince.com slash gray area for free shipping on your order and 365-day returns.
    0:21:40 That’s Q-U-I-N-C-E dot com slash gray area to get free shipping and 365-day returns.
    0:21:42 Quince.com slash gray area.
    0:22:11 I think one of the more famous sentiments in The Prince is this idea that fear is more powerful than love.
    0:22:15 that from the perspective of a ruler, fear is more dependable than love.
    0:22:17 What do you make of that?
    0:22:18 Do you think it’s true?
    0:22:20 Did he actually believe that?
    0:22:27 He says, it’s good to be feared and loved, but if you have to choose, it’s better to be feared than loved.
    0:22:28 So that’s the context.
    0:22:35 And then he gives you examples of what kind of fear should be used.
    0:22:43 And what he really means when you look at his examples of the best kind of fear is just, you know, like fear of the laws.
    0:22:49 You know, he gives examples that actually relate to transparency and legality.
    0:22:51 I’m sorry, but this is true.
    0:23:01 If you go to the chapter where he says that, you know, he’s talking, he says, it’s better to be transparent and regular and not to do things in an irregular, arbitrary way.
    0:23:03 Do not arbitrarily take people’s property.
    0:23:05 Do not take their wives.
    0:23:06 Do not do this.
    0:23:07 He gives you these lists.
    0:23:11 That’s the kind of fear you want people to have.
    0:23:17 And it could be fear of you, the ruler, or it could be fear of a legal constitutional ruler as well.
    0:23:21 So what he’s not saying is you should just use random terror.
    0:23:23 You know, arbitrarily scaring people.
    0:23:25 That is a disaster.
    0:23:27 He’s really clear about that in The Prince.
    0:23:33 Yeah, he says, you know, if you do have to be feared, do not be feared in such a way as to produce hatred.
    0:23:35 That’s a very important qualification.
    0:23:37 Yeah, yeah.
    0:23:51 You know, so, and look, I know part of what you do in the book is you’re driving this, you’re resisting this idea that Machiavelli is very simplistically driving a wedge between politics and morality.
    0:23:57 But God, there are these incredible lines, you know, like you’ve been pointing out, right?
    0:23:58 And here’s one.
    0:24:04 He says, therefore, it is necessary for a prince who wishes to maintain himself to learn how not to be good.
    0:24:10 And to use knowledge and not use it according to the necessity of the case.
    0:24:13 So what is the meaning of a line like that for you?
    0:24:19 Is he just saying politics is a dirty business and you can’t survive it without getting your hands dirty?
    0:24:22 Or is it more complicated than that?
    0:24:23 Yeah.
    0:24:32 I mean, unfortunately, this is what the thing about Machiavelli that makes him so susceptible to the kind of reading that’s become popular is he’s got these amazing lines.
    0:24:33 Oh, wow.
    0:24:34 They’re so good, Erika.
    0:24:35 They’re so good.
    0:24:36 And they’re so cool.
    0:24:37 I know.
    0:24:38 They’re so cool.
    0:24:50 And if you’re a teacher or professor who teaches political theory, it is kind of sad to kind of think about my Machiavelli being the right one instead of the one that we’ve all grown to kind of hate, love hate.
    0:24:57 Because, you know, he’s such a different view of politics than what we’re used to and of ethics.
    0:25:07 But I’m sorry to tell you that he’s fantastic because he really is spelling out how human beings really behave and how leaders often behave.
    0:25:15 But what he does in that sentence is he’s setting the stage for a series of chapters about what do people think is good?
    0:25:17 And are they right?
    0:25:19 That’s what the next few chapters are about.
    0:25:23 And he has discussion of cruelty.
    0:25:25 What do people think is cruel or harsh?
    0:25:27 And are they right?
    0:25:29 And using money.
    0:25:32 When is kind of using money to get ahead okay?
    0:25:35 And when is it not okay for you?
    0:25:37 Because it’s actually going to get you in deep trouble.
    0:25:43 So what he’s doing with that is trying to get you to sort of think there’s good and there’s good.
    0:25:49 Is what is conventionally thought of as good the way to go?
    0:25:57 And sometimes he says that we have this angelic idea of how leaders should behave, which isn’t suitable.
    0:26:07 But that’s not saying that you should compromise, like, your basic standards of transparency and decent, you know, decent rule.
    0:26:13 It’s just, it’s very hard sometimes to know when he’s merely describing something and when he’s endorsing it.
    0:26:15 He’s deliberately ambiguous.
    0:26:17 He’s ambiguous on purpose.
    0:26:20 A lot of ancient writers were deliberately ambiguous.
    0:26:22 We are the ones who are kind of aberrations.
    0:26:27 Modern people who think that everything has to be straightforward, blunt, and clear.
    0:26:37 In ancient writers, you find loads of writers, including Plato and all the historians who were deliberately ambiguous because they’re trying to get us to think.
    0:26:51 Well, and part of what is very persuasive about your book is that you really get to understand what he’s up to when you see some of his correspondence, some of his private correspondence with, you know, letters to friends and that sort of thing,
    0:26:53 where he’s being much more honest.
    0:27:01 And with that context, it gives you a much better insight into what he’s actually doing in his work.
    0:27:04 And that’s the work that you do.
    0:27:10 One of the things he says in The Prince is that, you know, the ruler must imitate the lion and the fox.
    0:27:14 And the book of yours we’re talking about is called Be Like the Fox.
    0:27:15 I have it right here.
    0:27:18 Why that title?
    0:27:20 What do you mean?
    0:27:22 What does it mean to be like the fox?
    0:27:23 Yeah.
    0:27:36 I mean, fox is, again, he’s playing with us because we think of the classical image of, you know, the trope of the fox is associated with cunning, sly, sneaky, Machiavellian.
    0:27:38 Hard to pin down.
    0:27:39 Hard to pin down.
    0:27:43 But if you look at the context again, you look at what he says.
    0:27:47 He says, the fox recognizes snares.
    0:27:51 He said, people, the ruler should imitate the fox and the lion.
    0:27:54 The lion, because the lion can scare wolves.
    0:27:57 And the fox, because the fox recognizes traps.
    0:28:01 So the skill of the fox he’s highlighting isn’t being shrewd and cunning.
    0:28:14 It’s recognizing when someone’s being shrewd and cunning towards you and building up defenses, cognitive and physical, whatever you need, so that that person doesn’t pull you in.
    0:28:26 Tell me if I’m wrong, I remember, and I don’t know where I read this, it was a long time ago, but I recall reading about this story that Machiavelli loved, and apparently referenced quite a bit.
    0:28:39 And the story was of some ruler, I don’t know who, who sends a man to some principality to put down an insurgency with just brutal force, right?
    0:28:46 And the guy does the job, but the people left behind in that principality are really pissed off, and they resent him.
    0:28:54 So the prince has the guy who did it, the guy that he ordered to do the job, killed, and then strung up in the public square.
    0:29:01 And then he makes a big show of how outraged he is by this man’s criminal act of defiance.
    0:29:14 Machiavelli apparently is said to have loved that story, because it demonstrates how flexible and cunning a prince can and should be, and how effective it can be if he’s doing that well.
    0:29:22 I think what you’re talking about is chapter seven of The Prince, where he talks about Cesare Borgia, who was the son of Pope Alexander VI, who was a brilliant deceiver.
    0:29:24 And Cesare Borgia, he’s the prince of fortune.
    0:29:29 Chapter seven is about how to become a prince, not by virtue, but by fortune.
    0:29:36 And Cesare Borgia is a really ambivalent figure in The Prince, but he does something along the lines you suggested.
    0:29:46 He’s got a guy called Ramiro de Orco, and he sets him up in this small town to kind of be the police guy, the big sheriff on the block.
    0:29:48 And then he scapegoats him, basically.
    0:29:51 He gets Ramiro to be brutal and kind of suppress all discontent.
    0:30:01 And then when all the people start getting upset about this, he goes, hey, I’m just going to like—and so he doesn’t even—Machiavelli just describes it.
    0:30:02 He did love telling this story.
    0:30:04 He told it in several different places, actually.
    0:30:16 But he says, one day the people go into the plaza at dawn, and there’s the pieces of Ramiro de Orco, like in pieces, with a coltello, a knife by his side.
    0:30:19 And, you know, this is the image.
    0:30:25 And then it says the people were so stupefied that they didn’t dare rebel anymore.
    0:30:33 And then if you end it there, and you say, that’s the end of the story, you say, okay, Machiavellian in that.
    0:30:35 That’s pretty cynical.
    0:30:36 It’s pretty cynical, Erika.
    0:30:37 It’s pretty cynical.
    0:30:37 Exactly.
    0:30:38 But read on.
    0:30:39 It is.
    0:30:39 It’s very cynical.
    0:30:41 Read on.
    0:30:42 Don’t get stuck.
    0:30:45 My one advice, don’t get stuck on one thing.
    0:30:46 Keep reading on.
    0:30:48 What were the consequences for Cesare Borgia?
    0:30:49 What happened to him after that?
    0:30:56 Well, everyone starts leaving Cesare, like all the people who are giving him troops and who supported him, all his allies.
    0:30:58 They all, like, say, okay, this guy’s crazy.
    0:30:59 He’s out of control.
    0:31:00 They start dropping out.
    0:31:03 The French pull back their troops that they were giving him.
    0:31:06 All his closest mates, they conspire against him.
    0:31:07 He finds this out.
    0:31:10 He brutally slaughters them, and then everyone else hates him.
    0:31:14 So within a few months, he’s, like, really in trouble, and then Cesare’s dead.
    0:31:16 He doesn’t actually die immediately, but he’s out.
    0:31:18 And that’s it.
    0:31:25 So if you go to the end of the story, you don’t just stop and say, wow, what a cool thing.
    0:31:29 Because, I mean, we can see examples of this all over the world today.
    0:31:31 Wow, that was a cool thing.
    0:31:32 Somebody did.
    0:31:32 That was tough.
    0:31:39 Wait till the story continues, because doing that is going to make you feared and hated.
    0:31:41 And that’s what happened.
    0:31:43 You’re mad at me for not finishing the reading, aren’t you?
    0:31:44 Yeah.
    0:31:46 I can read that chapter.
    0:31:49 Oh, you didn’t even read that chapter in my book.
    0:31:51 That was such a good, that is such a good chapter.
    0:31:58 I read your book all the way through when I read it the first time, and I didn’t reread
    0:31:59 it cover to cover this time.
    0:32:02 I did revisit it, but I didn’t reread the whole thing.
    0:32:05 But I did, at one point, read the whole thing.
    0:32:10 There’s clearly a pragmatism to Machiavelli.
    0:32:16 Would you say that he has something like an ideology, or is he just a clear-eyed realist?
    0:32:18 Yeah, he’s a Republican.
    0:32:22 And again, this is something that if you just read The Prince, you’re not going to get that.
    0:32:26 But if you even just read The Discourses, which, as I say, was written around the same time as
    0:32:33 The Prince, it’s very, very similar in almost every way, except that it praises republics
    0:16:39 and criticizes tyrants very openly, whereas The Prince never once uses the word tyrant or the word
    0:32:39 tyranny.
    0:32:45 So if there’s a guiding set of political views, whether you call it ideology or not,
    0:32:46 it’s Republican.
    0:32:50 A Republican ideology, if you like, is shared power.
    0:32:55 It’s all the people in a city, all the male people in this case.
    0:32:57 In Machiavelli’s case, he was quite egalitarian.
    0:33:03 He clearly wanted as broad a section of the male population to be citizens as possible.
    0:33:08 He says very clearly, the key to stabilizing your power is to change the Constitution and to
    0:33:10 give everyone their share.
    0:33:12 Everyone has to have their share.
    0:33:16 You might want, in the first instance, a little bit more for yourself and the rich guys, but
    0:33:18 in the end, everyone’s got to have a share.
    0:33:20 I know you just said he’s a Republican.
    0:33:21 He’s defending republicanism.
    0:33:25 But do you think of him as a democratic theorist?
    0:33:30 Do you think of him as someone who would defend what we call democracy today?
    0:33:36 If you see the main principle of democracy is also sharing power among all the people equally,
    0:33:42 which is how I understand democracy, yeah, he’d totally agree with that.
    0:33:45 What kind of institutions would he say a democracy has to have?
    0:33:47 He’s pretty clear in the discourses.
    0:33:50 He tells you, you don’t want a long-term executive.
    0:33:52 You need to always check power.
    0:33:59 So anyone who’s in a position of like a magistracy, you know, a political office of any kind needs
    0:34:04 to have very strict limits, needs to be under very strict laws, even stricter laws when they’re
    0:34:06 in the office than they would be as private citizens.
    0:34:07 Can I pause you for a second?
    0:34:11 Why is he a critic of people being in power for a long time?
    0:34:13 Why does he want limits, term limits?
    0:34:14 Power corrupts.
    0:34:15 Simple.
    0:34:18 He looks at any, and he’s doing this all through Roman history.
    0:34:21 So he makes his arguments not by just kind of abstract setting that out.
    0:34:24 These are the kind of constitutional principles you need.
    0:34:28 He’s saying, this is what the Romans did when they got rid of the kings and they started building
    0:34:28 a republic.
    0:34:30 They did some really good things.
    0:34:34 And then they did some things not so well, and they had to then kind of go back to the
    0:34:40 drawing board and rewrite some of the institutions and add some laws that were especially strict
    0:34:44 for, against people trying to come back and create a dictatorship.
    0:34:51 So he goes through lots of different, you know, kind of things that the Romans did that are now
    0:34:55 kind of reflected also in the U.S. Constitution or in, you know, other democracies around the
    0:35:00 world because the founding fathers drew on Machiavelli and others had built on him.
    0:35:03 The rule of law is really super important.
    0:35:07 If you don’t have laws that are kind of constraining everybody and institutions that make sure that
    0:35:14 the more powerful are held in check, then you’re going to have trouble soon because people
    0:35:15 are always, always in conflict.
    0:35:20 This is another thing I think is really more interesting about Machiavelli’s view of democracy
    0:35:22 than a lot of democratic theories you get today.
    0:35:25 He stresses how much democracy is turbulent.
    0:35:30 Even in a stable democracy, people are going to be fighting all the time about what they’re
    0:35:32 kind of, you know, where do they want to go?
    0:35:34 What kind of values do you want in there?
    0:35:37 Rich and poor, you know, how much should people get taxed?
    0:35:39 That’s an eternal problem of democracy, eternal.
    0:35:46 And he says, you need to have institutions where everyone can debate that and, you know, checks
    0:35:48 on people getting too powerful, also economically.
    0:35:53 Why did he think rule of law was so perilously fragile?
    0:35:59 Because people don’t want to be equal all the time.
    0:36:01 And that’s just a thing.
    0:36:01 That’s what he said.
    0:36:04 This is what I would say Machiavelli is a realist.
    0:36:10 It’s this kind of human nature realism that isn’t, it’s not, you know, good and evil.
    0:36:13 It’s, that’s, that’s the wrong lens to read Machiavelli.
    0:36:18 He’s going back to this old ancient pre-Christian, you know, traditions that say, look, human
    0:36:19 beings are bloody messy.
    0:36:26 We’re always doing these things that upset the orders we create with all our great ideas.
    0:36:28 And people don’t all want to be equal.
    0:36:32 You know, you’re not going to turn people into angels who are happy, just saying, let’s
    0:36:33 all just share power.
    0:36:37 You’ve got to have institutions and laws that do that for them.
    0:36:42 And if you’re going to talk about what kind of democracy would help us, you know, get
    0:36:46 more stability, he thinks it would make sense just to have it up front.
    0:36:49 But we’re not idealizing human nature.
    0:36:51 We’re not idealizing what a democracy is.
    0:36:53 Democracy is like hard work.
    0:36:55 It’s hard work.
    0:37:01 And it means that some people are not going to be happy all the time, but fight for it
    0:37:03 because it’s a lot better than the alternatives.
    0:37:22 Support for the show comes from Bombas.
    0:37:28 If you’re a regular listener of the show, you know, I love to run, but the summers down
    0:37:33 here in the deep South can make the whole thing unbearably hot and ridiculously sweaty.
    0:37:38 This season, Bombas wants to help make your outdoor exercises a little more comfortable
    0:37:42 with thoughtfully designed, blister-fighting, sweat-wicking athletic socks that are perfect
    0:37:45 for your next marathon or just tackling your first mile.
    0:37:47 I’ve tried Bombas for myself.
    0:37:54 They sent me a few pairs of their athletic socks last year, and I’ve been running in them ever
    0:37:54 since.
    0:37:58 And I’ve tried all the brands, the cheap ones, the pricey ones.
    0:38:02 All of them feel like swamp rags after a summer run.
    0:38:05 I don’t know how they do it, but these socks are different.
    0:38:05 They’re comfortable.
    0:38:06 They’re durable.
    0:38:09 Pretty much all I run in at this point.
    0:38:14 Bombas says they started making socks when they learned that they’re the number one most
    0:38:16 requested clothing item in homeless shelters.
    0:38:20 So Bombas would like to thank you for shopping with them.
    0:38:25 They say you’ve helped donate over 150 million essential items.
    0:38:28 Now that’s a lot of socks and a lot of kindness.
    0:38:35 You can head over to bombas.com slash gray area and use code gray area for 20% off your
    0:38:36 first purchase.
    0:38:40 That’s B-O-M-B-A-S dot com slash gray area.
    0:38:42 Code gray area at checkout.
    0:38:52 Whether you’re a startup founder navigating your first audit or a seasoned security professional
    0:38:58 scaling your GRC program, proving your commitment to security has never been more critical or
    0:38:58 more complex.
    0:39:01 That’s where Vanta comes in.
    0:39:08 Businesses use Vanta to build trust by automating compliance for in-demand frameworks like SOC 2,
    0:39:12 ISO 27001, HIPAA, GDPR, and more.
    0:39:18 And with automation and AI throughout the platform, you can proactively manage vendor risk and complete
    0:39:22 security questionnaires up to five times faster, getting valuable time back.
    0:39:26 Vanta not only saves you time, it can also save you money.
    0:39:34 A new IDC white paper found that Vanta customers achieve $535,000 per year in benefits, and the
    0:39:36 platform pays for itself in just three months.
    0:39:40 For any business, establishing trust is essential.
    0:39:43 Vanta can help your business with exactly that.
    0:39:49 Go to vanta.com slash vox to meet with a Vanta expert about your business needs.
    0:39:52 That’s vanta.com slash vox.
    0:40:09 When the barbecue’s lit, but there’s nothing to grill, when the in-laws decide that, actually, they
    0:40:10 will stay for dinner.
    0:40:15 Instacart has all your groceries covered this summer, so download the app and get delivery
    0:40:16 in as fast as 60 minutes.
    0:40:20 Plus, enjoy $0 delivery fees on your first three orders.
    0:40:22 Service fees exclusions and terms apply.
    0:40:23 Instacart.
    0:40:25 Groceries that over-deliver.
    0:40:49 If we were looking at Machiavelli for insights into, well, now, where do we start?
    0:40:55 What do you think makes him a useful, relevant guide to understanding contemporary politics,
    0:40:57 particularly American politics?
    0:40:59 This is a really Machiavellian moment.
    0:41:05 If you read The Prince, kind of looking not just for those outstanding, great quotes, you
    0:41:08 know, but look for the criticisms.
    0:41:09 And sometimes they’re subtle.
    0:41:15 You start to see that he’s often, like, exposing a lot of the stuff that we’re seeing today.
    0:41:22 And chapter nine in The Prince, where he talks about how you can rise to be the kind of ruler
    0:41:26 of a republic and how much resistance you might face.
    0:41:30 And he says that the resistance that you’re going to get, people might be kind of quite
    0:41:32 passive at first and not do very much.
    0:41:39 But at some point, when they see you start to attack the law, the courts especially, and
    0:41:41 the magistrates, that’s when you’re going to clash.
    0:41:46 And he says, that’s when you as leader, he’s playing like I’m on your side, leader.
    0:41:50 That’s when you’ve got to decide, are you going to get really, really tough?
    0:41:55 Or are you going to have to kind of find other ways to kind of soften things up a bit?
    0:41:56 What would he make of Trump?
    0:42:00 He would put Trump in two categories.
    0:42:02 He’s got different classifications of princes.
    0:42:08 He’s got the Prince of Fortune, who’s somebody who relies on wealth, money, and big impressions
    0:42:11 to get ahead and on other people’s arms.
    0:42:15 He would say Trump has a lot of qualities of that because of the wealth question, relying
    0:42:18 on a massive wealth to help him campaign.
    0:42:24 But he’d also call him, what Machiavelli has this word, astutia, astuteness, which doesn’t
    0:42:27 really translate in English because we think of that as a good quality.
    0:42:29 But he means like calculating shrewdness.
    0:42:36 Somebody whose great talent is being able to kind of shrewdly manipulate and find little
    0:42:41 holes where he can kind of get at people’s weaknesses and dissatisfactions and exploit them.
    0:42:44 And that’s what he also thought the Medici were good at.
    0:42:49 And his analysis of that is that it can cover you for a long time.
    0:42:56 People will kind of see this, the good appearances and hope that you would be able to achieve the
    0:43:01 things that you can, but in the long term, people who do that don’t know how to build
    0:43:02 a solid state.
    0:43:04 That’s what he would say on a domestic front.
    0:43:10 Let’s also say like, if people are interested, chapter 21 is the most Machiavellian chapter
    0:43:14 in a way for our times because it’s about foreign alliances and people’s behavior on the foreign,
    0:43:16 on the international stage.
    0:43:22 And he’s got this example of Ferdinand of Aragon of Spain, who was like super hyperactive.
    0:43:26 Like he comes to power and he’s immediately going out and like doing things that shock
    0:43:31 and horrify everyone, beating up Jews, beating up Arabs, beating up doing this and that, and
    0:43:34 taking neighboring countries around him.
    0:43:39 And Machiavelli kind of is very, very funny in the way he describes this behavior.
    0:43:45 But then he ends up in the chapter saying, look, if you don’t have stable alliances, you’re
    0:43:45 dead.
    0:43:52 You know, stable alliances, thick and thin, transparency, that is the key to kind of steady
    0:43:54 long-term government.
    0:43:58 Well, just going back to Trump.
    0:43:59 Okay.
    0:44:00 I didn’t mention him directly.
    0:44:04 We’re not running away from this, Erika.
    0:44:04 I’m sorry.
    0:44:12 No, look, I think there’s an unsophisticated way to look at the Trump administration as Machiavelli.
    0:44:17 There are these lines in The Prince about knowing how to deploy cruelty and knowing when to be
    0:44:18 ruthless.
    0:44:27 But to your point, I don’t think Machiavelli ever endorses cruelty for cruelty’s sake.
    0:44:30 And this is my personal opinion.
    0:44:33 But I think with Trump, cruelty is often the point.
    0:44:35 And that’s not really Machiavellian.
    0:44:36 It’s just cruel.
    0:44:39 I wouldn’t say Trump is Machiavellian.
    0:44:45 I mean, quite honestly, since the beginning of the Trump administration, I’ve often felt
    0:44:49 like he’s getting advice from a lot of young people who haven’t really read Machiavelli or,
    0:44:53 you know, put Machiavelli into ChatGPT and got some pointers and got all the wrong ones
    0:44:58 because the ones that they’re picking out that he and his guys are acting on, especially
    0:45:01 at the beginning, were just so crude.
    0:45:06 You know, they’re just, yeah, they’re crude, but they sounded Machiavellian.
    0:45:07 But cruelty, you’re absolutely right.
    0:45:14 Cruelty is, I think, for me too, it’s been the thing that made me most, wow.
    0:45:17 This is something that’s very hard to process.
    0:45:22 And Machiavelli is very, very clear in The Prince that cruelty is not going to get you anywhere.
    0:45:23 You’re going to get pure hate.
    0:45:28 So if you think it’s ever instrumentally useful to be super cruel, think again.
    0:45:35 Again, I’m being a little American-centric here, but obviously one of the problems of our time
    0:45:37 is polarization and negative partisanship.
    0:45:43 Did he have a lot to say about the dangers of partisanship in democracies?
    0:45:44 Oh, yeah.
    0:45:45 Oh, yeah.
    0:45:48 And that was, again, something that the Romans talked about a lot.
    0:45:51 So he’s drawing on a whole history of talking about partisanship.
    0:45:59 He talks about divisions developing to such a point that it doesn’t even really matter
    0:46:05 that much if the other side is telling the truth or introducing a specific policy that
    0:46:08 is, you know, justifiably going to annoy the other side.
    0:46:14 It’s just that there’s so little trust that conflict is bound to escalate.
    0:46:19 And he calls this kind of thing a sickness that you’ve got to catch as early as possible
    0:46:24 because if you let it grow too big, it’s going to be really, really hard to pull back.
    0:46:29 Something I hear a lot in my life and from people around me is some version of the argument that,
    0:46:32 you know, the system is so broken.
    0:46:33 Things are so messed up.
    0:46:38 We need someone to come in here and smash the system in order to save it.
    0:46:40 We need political dynamite.
    0:46:47 And I bring that up because Machiavelli says repeatedly that politics requires flexibility
    0:46:54 and maybe even a little practical ruthlessness in order to get done what has to get done
    0:46:56 in order to preserve the republic.
    0:47:03 Do you think he would say that there’s real danger in clinging to procedural purity?
    0:47:05 Yeah, this is a great question.
    0:47:09 I mean, again, this is one he does address in the discourses quite a lot.
    0:47:14 And he talks about how the Romans, when their republic started kind of slippery, slidey,
    0:47:17 you know, going in a wrong way and great men were coming up and saying,
    0:47:18 I’ll save you, I’ll save you.
    0:47:22 And there were a lot before Julius Caesar finally “saved” it, and then it all fell apart.
    0:47:29 He really says that, you know, there are procedures that have to sometimes be wiped out.
    0:47:32 You have to reform institutions and add new ones.
    0:47:33 The Romans added new ones.
    0:47:34 They subtracted some.
    0:47:36 They changed the terms.
    0:47:42 He was very, very keen on shortening the terms of various long, excessively long offices,
    0:47:47 but also creating some emergency institutions where if you really face an emergency,
    0:47:53 that institution gives somebody more power to take executive action to solve the problem.
    0:47:59 But that institution, the dictatorship, it was called in Rome, it wasn’t like a random
    0:48:01 dictator can come and then do whatever he wants.
    0:48:06 It’s like this dictator has executive special powers, but he is under strict oversight, very
    0:48:10 strict oversight by the Senate and the plebs.
    0:48:15 So that if he, you know, takes one step wrong, he’s out, and maybe faces serious punishment.
    0:48:21 So he was really into like being very severe and punishing leaders who took these responsibilities
    0:48:22 and then abused them.
    0:48:29 Anytime we do these sorts of, you know, philosopher episodes or looking back on some important thinker,
    0:48:38 I try to close with some sense of the legacy and what they left us and why they’re important
    0:48:38 and still matter.
    0:48:44 And, you know, Machiavelli is such a unique case because his influence is everywhere.
    0:48:49 I mean, he’s really one of the few philosophers that have sort of seeped into the mainstream
    0:48:49 culture.
    0:48:55 And, you know, whatever you think of him and whatever he may have believed privately, he
    0:49:01 did lay out a vision of politics that is easily recognizable today.
    0:49:03 It’s our politics in lots of ways.
    0:49:09 Did he help make the world that way or did he just see it clearly before most others?
    0:49:10 I don’t know, maybe a bit of both.
    0:49:14 How do you think about his ultimate legacy?
    0:49:21 I mean, you know, obviously, because I think that what he was really trying to do was to
    0:49:27 criticize exactly the kinds of actions and leaders that we often see as his children, you
    0:49:28 know, his brain children.
    0:49:34 And a lot of politicians have cited him as their kind of intellectual grandfather and given
    0:49:41 intellectual respectability to a lot of positions which I think he would consider really, really
    0:49:44 cheap and amateurish and bad.
    0:49:47 So I have mixed feelings about this legacy.
    0:49:54 And I think it would be great if more people would, maybe in the times we’re living in, start
    0:50:01 to kind of think, hang on, now I’m kind of getting this idea that maybe Machiavelli was being kind
    0:50:08 of cynically funny, but also trying to kind of steer people to criticize what’s going on and
    0:50:14 maybe pick up The Prince and find some of these passages and realize that maybe this is a kind
    0:50:20 of satirical warning signal, a serious satire that’s saying, you know, wake up, people.
    0:50:22 This is what they’re doing.
    0:50:23 These are the tricks.
    0:50:29 But in a way, he’s also empowering the citizens, I think, who read The Prince,
    0:50:33 because he’s saying, these guys are actually vulnerable.
    0:50:35 You know, I’m spelling out what they do.
    0:50:40 And I’m also, if you read properly to the end of their story, I’m showing you where they
    0:50:44 ended up by using these so-called hardcore, you know, realist methods.
    0:50:48 So that means that, you know, it’s not lost.
    0:50:49 All is not lost.
    0:50:50 They are vulnerable.
    0:50:52 Recognize that.
    0:50:56 Find ways to build up your own power and do it.
    0:51:00 Well, we’re doing the important work here of setting the record straight.
    0:51:03 And look, Machiavelli is endlessly interesting.
    0:51:07 And your book, Be Like the Fox, is fantastic.
    0:51:08 Thanks very much for coming in.
    0:51:09 Thank you so much for having me.
    0:51:09 Thank you.
    0:51:17 All right.
    0:51:19 I hope you enjoyed this episode.
    0:51:20 You know I did.
    0:51:22 As always, we want to know what you think.
    0:51:25 So drop us a line at thegrayarea@vox.com.
    0:51:33 Or you can leave us a message on our new voicemail line at 1-800-214-5749.
    0:51:38 And if you have some time, please go ahead, rate, review, subscribe to the show.
    0:51:45 This episode was produced by Beth Morrissey, edited by Jorge Just, engineered by Christian
    0:51:49 Ayala, fact-checked by Melissa Hirsch, and Alex Overington wrote our theme music.
    0:51:53 New episodes of The Gray Area drop on Mondays.
    0:51:54 Listen and subscribe.
    0:51:56 The show is part of Vox.
    0:52:00 Support Vox’s journalism by joining our membership program today.
    0:52:03 Go to vox.com slash members to sign up.
    0:52:06 And if you decide to sign up because of this show, let us know.
    0:52:17 Vox.com slash members to sign up.

    Almost nothing stands the test of time. Machiavelli’s writings are a rare exception.

    Why are we still talking about Machiavelli, nearly 500 years after his death? What is it about his political philosophy that feels so important, prescient, or maybe chilling today?

    In this episode, Sean speaks with political philosopher and writer Erica Benner about Niccolo Machiavelli’s legacy. The two discuss The Prince, Machiavelli’s views on democracy, and what he might say about the Trump administration were he alive today.

    Host: Sean Illing (@SeanIlling)
    Guest: Erica Benner, political philosopher, historian, and author of Be Like the Fox

    Listen to The Gray Area ad-free by becoming a Vox Member: vox.com/members

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • Do you have moral ambition?

    AI transcript
    0:00:01 Sue Bird here.
    0:00:04 I am thrilled to announce I’m launching a brand new show,
    0:00:07 Bird’s Eye View, the definitive WNBA podcast.
    0:00:10 Every week, we’ll dig into the WNBA stories
    0:00:11 that actually matter with guest interviews,
    0:00:14 candid takes, and in-depth analysis from around the league.
    0:00:16 It’s a show I’ve wanted to make for a while,
    0:00:18 and I’m so excited it’s finally happening.
    0:00:20 Whether you’re new to the WNBA or a longtime fan,
    0:00:21 pull up.
    0:00:22 This show is for you.
    0:00:24 Bird’s Eye View is coming May 16th.
    0:00:25 Follow the show on YouTube
    0:00:27 or wherever you listen to your podcasts.
    0:00:33 Support for this show comes from ServiceNow,
    0:00:36 a company that helps people do more fulfilling work,
    0:00:38 the work they actually want to do.
    0:00:40 You know what people don’t want to do?
    0:00:41 Boring, busy work.
    0:00:44 But ServiceNow says that with their AI agents
    0:00:46 built into the ServiceNow platform,
    0:00:49 you can automate millions of repetitive tasks
    0:00:50 in every corner of a business.
    0:00:54 IT, HR, customer service, and more.
    0:00:57 And the company says that means your people
    0:00:59 can focus on the work that they want to do.
    0:01:02 That’s putting AI agents to work for people.
    0:01:03 It’s your turn.
    0:01:08 You can get started at servicenow.com slash AI dash agents.
    0:01:16 We’re told from a young age to achieve.
    0:01:17 Get good grades.
    0:01:19 Get into a good school.
    0:01:20 Get a good job.
    0:01:22 Be ambitious about earning a high salary
    0:01:24 or a high status position.
    0:01:28 Some of us love this endless climb.
    0:01:31 But lots of us, at least once in our lives,
    0:01:33 find ourselves asking,
    0:01:35 what’s the point of all this ambition?
    0:01:37 The fat salary or the fancy title?
    0:01:40 Aren’t those pretty meaningless measures of success?
    0:01:46 One proposed solution is to stop being ambitious
    0:01:48 and start being idealistic instead.
    0:01:51 You hear this from a lot of influencers.
    0:01:52 Follow your passion.
    0:01:54 Small is beautiful.
    0:01:57 The idea is that you should drop out of the capitalist rat race
    0:01:58 and do what you love.
    0:02:00 Yoga, maybe.
    0:02:01 Or watercolor painting.
    0:02:05 Even if it makes very little positive impact on the world.
    0:02:09 But what if instead of trying to be less ambitious,
    0:02:14 we try to be more ambitious about the things that really matter?
    0:02:16 Like helping others.
    0:02:19 In an era when there’s so much chaos, injustice,
    0:02:22 and frankly, a feeling of widespread despair,
    0:02:24 it’s worth asking.
    0:02:27 What would the world look like if we start measuring our success,
    0:02:29 not in terms of fame or fortune,
    0:02:32 but in terms of how much good we do?
    0:02:38 I’m Sigal Samuel, and this is The Gray Area.
    0:02:45 Today’s guest is historian and author Rutger Bregman.
    0:02:50 He’s probably best known for what he yelled at policymakers at Davos a few years ago.
    0:02:52 Taxes, taxes, taxes.
    0:02:55 He’s tried to get billionaires to pay their fair share in taxes,
    0:02:59 and he’s also argued for other policies that could make life better for everyone,
    0:03:02 like a universal basic income.
    0:03:07 Now, he’s written a new book called Moral Ambition,
    0:03:10 which urges us to stop wasting our talents on meaningless work
    0:03:13 and start trying to do more good for the world.
    0:03:17 He wants us to be both ambitious and idealistic,
    0:03:21 to devote ourselves to solving the world’s biggest problems,
    0:03:25 like malaria and pandemics and climate change.
    0:03:29 I invited Rutger on the show because I find his message inspiring.
    0:03:34 And, to be honest, I also have some questions about it.
    0:03:37 I want to dedicate myself to work that feels meaningful,
    0:03:41 but I’m not sure that work that helps the greatest number of people
    0:03:43 is the only way to do that.
    0:03:47 So in this conversation, we’ll explore all the different things
    0:03:48 that can make our lives feel meaningful
    0:03:52 and ask, are some objectively better than others?
    0:03:58 Hey, Rutger, welcome to the show.
    0:04:00 Thanks for having me. Good to see you.
    0:04:02 Your book is called Moral Ambition.
    0:04:05 Why should people be morally ambitious?
    0:04:10 My whole career, I’ve been fascinated with the waste of talent
    0:04:13 that is going on in modern economies.
    0:04:17 There’s this one study from two Dutch economists
    0:04:18 done a couple of years ago,
    0:04:23 and they estimate that around 25% of all workers
    0:04:27 think that their own job is socially meaningless,
    0:04:30 or at least doubt the value of their job.
    0:04:33 That is just insane to me.
    0:04:35 I mean, this is five times the unemployment rate.
    0:04:39 And we’re talking about people who have often excellent resumes,
    0:04:41 you know, who went to very nice universities.
    0:04:45 I’m going to Harvard tomorrow to speak to students there.
    0:04:47 And, well, it’s an interesting case in point.
    0:04:51 45% of Harvard graduates end up in consultancy or finance.
    0:04:54 Not saying all of that is totally socially useless,
    0:04:58 but I do wonder whether that is the best allocation of talent.
    0:05:01 And as you know, we face some pretty big,
    0:05:03 obvious problems out there,
    0:05:05 whether it’s, you know, the threat of the next pandemic
    0:05:07 that may be just around the corner.
    0:05:10 Terrible diseases like malaria and tuberculosis
    0:05:11 killing millions of people.
    0:05:15 The problem with democracy breaking down.
    0:05:17 I mean, the list goes on and on and on.
    0:05:20 And so I’ve always been frustrated
    0:05:23 by this enormous waste of talent.
    0:05:27 Now, I’m not saying that morality should suck up everything.
    0:05:29 I’m personally a pluralist.
    0:05:32 I think that there are many things that are important in life,
    0:05:34 you know, family, friends, music, art.
    0:05:37 And you don’t want to let morality dominate everything.
    0:05:40 But I think in a rich, well-rounded life,
    0:05:42 it does play an important role.
    0:05:44 And if we’re going to have a career anyway,
    0:05:46 we might as well do a lot of good with it.
    0:05:49 What about that question specifically about,
    0:05:51 you know, someone comes to you and says,
    0:05:52 I’m a third grade teacher.
    0:05:54 I’m a social worker.
    0:05:57 Am I not being morally ambitious enough?
    0:06:00 So half of the country already works
    0:06:02 in these so-called essential jobs.
    0:06:04 We discover that during the pandemic,
    0:06:07 that, you know, when some people go on strike,
    0:06:07 we’re in real trouble.
    0:06:10 So my point here is that half of the country
    0:06:12 doesn’t need a lecture from me
    0:06:14 about being morally ambitious.
    0:06:15 They’re already working in essential jobs.
    0:06:18 I’m indeed more interested in preaching
    0:06:20 to my own people,
    0:06:24 to honestly quite a few of my friends.
    0:06:26 We used to have big ideals and dreams
    0:06:28 when we were still in university.
    0:06:31 You know, we wrote these beautiful application essays
    0:06:33 about how we were going to fix
    0:06:35 tax avoidance and tax evasion,
    0:06:37 how we were going to tackle global hunger
    0:06:39 and work at the United Nations
    0:06:40 and look at us.
    0:06:41 What has happened?
    0:06:43 It’s pretty sad, isn’t it?
    0:06:45 Now we’re old and wrinkled and complacent.
    0:06:47 Yeah, yeah, yeah.
    0:06:50 Something has gone wrong, I would say.
    0:06:53 So that doesn’t mean that I don’t think
    0:06:54 anyone can be morally ambitious.
    0:06:57 Rosa Parks was a seamstress.
    0:06:59 Lech Wałęsa, you know,
    0:07:01 the great social revolutionary in Poland.
    0:07:04 He was an electrician.
    0:07:06 So, I mean, history is littered with examples
    0:07:08 of people who weren’t very privileged
    0:07:10 and still did a lot of good.
    0:07:13 But they don’t need a lecture from me, I think.
    0:07:16 I’m mainly talking to people
    0:07:18 who shouldn’t just check their privilege,
    0:07:21 but also use that privilege
    0:07:22 to make a massive difference.
    0:07:26 What role does personal passion play in that?
    0:07:27 You write in the book,
    0:07:29 don’t start out by asking,
    0:07:30 what’s my passion?
    0:07:32 Ask instead, how can I contribute most?
    0:07:34 And then choose the role that suits you best.
    0:07:35 Don’t forget,
    0:07:37 your talents are but a means to an end.
    0:07:39 Yeah, I think follow your passion
    0:07:41 is probably the worst career advice out there.
    0:07:44 We, at the School for Moral Ambition,
    0:07:45 an organization I co-founded,
    0:07:48 we deeply believe in the Gandalf-Frodo model
    0:07:49 of changing the world.
    0:07:51 So I always like to say that
    0:07:52 Frodo, you know,
    0:07:54 he didn’t follow his passion.
    0:07:55 Gandalf never asked him,
    0:07:57 oh, what’s your passion, Frodo?
    0:07:58 He said, look,
    0:08:00 this really needs to be done.
    0:08:01 This needs to be fixed.
    0:08:02 You got to throw the ring into the mountain.
    0:08:05 If Frodo would have followed his passion,
    0:08:08 he would have probably, you know,
    0:08:09 been a gardener,
    0:08:10 having a life, you know,
    0:08:11 full of second breakfasts,
    0:08:13 pretty comfortable in the Shire,
    0:08:15 and then the orcs would have turned up
    0:08:16 and murdered everyone he ever loved.
    0:08:19 So I think the point here is pretty simple.
    0:08:22 Find yourself some wise old wizard,
    0:08:23 a Gandalf.
    0:08:27 Figure out what are some of the most pressing issues
    0:08:28 that we face as a species
    0:08:29 and ask yourself,
    0:08:30 how can I make a difference?
    0:08:32 And then you will find out
    0:08:33 that you can become
    0:08:34 very passionate about it.
    0:08:37 It’s just don’t start
    0:08:39 with looking at your navel
    0:08:39 and thinking,
    0:08:41 oh, what is it for me?
    0:08:44 Just ask smart people out there
    0:08:45 and become passionate
    0:08:46 about what they say.
    0:08:47 So you’re saying,
    0:08:49 do the work first,
    0:08:51 trust that the passion will come later?
    0:08:52 Absolutely, yeah.
    0:08:54 And I’ve got a couple of examples
    0:08:55 of that in the book.
    0:08:59 One school I’ve got a whole chapter on
    0:09:01 is called Charity Entrepreneurship.
    0:09:03 They’ve since rebranded
    0:09:04 as Ambitious Impact,
    0:09:06 but it’s a school that I like to describe
    0:09:08 as the Hogwarts for do-gooders.
    0:09:10 So they recruit
    0:09:14 really driven entrepreneurial people
    0:09:15 who want to start
    0:09:17 a highly effective nonprofit.
    0:09:19 And they continuously
    0:09:21 research this question.
    0:09:22 It’s called prioritization research,
    0:09:23 thinking about,
    0:09:23 yeah,
    0:09:25 what are some of the most pressing issues
    0:09:25 we face?
    0:09:28 And then they find these founders
    0:09:29 of these nonprofits,
    0:09:31 and they basically match the founders
    0:09:32 not only with each other
    0:09:34 so that you have a co-founder,
    0:09:36 but also with these tasks, right?
    0:09:38 You basically get a mission.
    0:09:40 And one of the most successful charities
    0:09:41 they’ve launched
    0:09:41 is called
    0:09:43 the Lead Exposure Elimination Project.
    0:09:44 I believe you guys
    0:09:45 have also written about them.
    0:09:45 That’s right.
    0:09:47 one of the co-founders
    0:09:47 is Lucia Coulter.
    0:09:49 She used to be a doctor
    0:09:49 at the NHS.
    0:09:51 Loved her work.
    0:09:52 But at the same time,
    0:09:53 she was like,
    0:09:55 can’t I do more good, right?
    0:09:57 I’m currently working as a doctor
    0:09:58 in a very rich country,
    0:10:00 mostly treating patients
    0:10:02 who are already relatively old.
    0:10:03 It’s beautiful work,
    0:10:04 but I want to do more good.
    0:10:06 And you should talk to her now.
    0:10:06 I mean,
    0:10:08 she’s incredibly passionate
    0:10:09 about the work she does.
    0:10:10 OK, but so that’s a good example.
    0:10:11 So it’s not that
    0:10:13 she completely ditched
    0:10:14 what she was already doing
    0:10:16 and her existing passions, right?
    0:10:17 She found a way
    0:10:17 to take her passion
    0:10:19 for health care
    0:10:20 or for global health
    0:10:21 and sort of
    0:10:23 put it on a different scale,
    0:10:24 but still using
    0:10:26 her existing core passion
    0:10:27 and skill set.
    0:10:28 That’s a good point.
    0:10:31 Maybe we got to be passionate
    0:10:32 on a meta level,
    0:10:32 you know,
    0:10:34 about our higher level goals.
    0:10:36 You can be really passionate
    0:10:37 about making the world
    0:10:37 a better place,
    0:10:38 helping a lot of people,
    0:10:39 improving,
    0:10:40 global health,
    0:10:41 something like that.
    0:10:43 But it’s quite risky
    0:10:44 if you get too attached
    0:10:46 to a certain intervention
    0:10:47 or something like that.
    0:10:49 I think that’s a very sure way
    0:10:51 of massively limiting your impact.
    0:10:52 And you see it a lot, sadly.
    0:10:54 I’ve been walking around
    0:10:55 in the world of philanthropy
    0:10:56 for the past two years
    0:10:58 and it just drives me nuts
    0:11:00 how many of these rich people
    0:11:01 are all the time,
    0:11:02 you know,
    0:11:03 they’re gazing at their navel.
    0:11:03 And like,
    0:11:05 you don’t have to come up
    0:11:06 with the answer yourself.
    0:11:07 The research has already been done,
    0:11:08 right?
    0:11:12 Why do you have to be the one,
    0:11:12 you know,
    0:11:14 who needs to have this epiphany
    0:11:14 about,
    0:11:15 oh!
    0:11:16 Right.
    0:11:17 It’s the pandas
    0:11:18 in this specific region
    0:11:19 that really need our help.
    0:11:22 There are already Gandalfs
    0:11:22 and Dumbledores
    0:11:24 working on it for you,
    0:11:24 figuring it out.
    0:11:25 Exactly, exactly.
    0:11:27 And it takes a team
    0:11:28 to make a big difference.
    0:11:31 I think it can be
    0:11:32 quite liberating as well
    0:11:33 to not have to fight
    0:11:34 your passion anymore.
    0:11:35 I speak to quite a few
    0:11:38 teenagers and people
    0:11:39 in their 20s
    0:11:41 about what they should do
    0:11:41 with their career
    0:11:43 and a lot of them
    0:11:45 find a lot of relief
    0:11:46 in this message
    0:11:47 that they don’t have
    0:11:48 to find their passion.
    0:11:49 That there are other people
    0:11:50 out there
    0:11:51 who have a job
    0:11:52 for them to do, right?
    0:11:53 That they can just
    0:11:53 sign up for it.
    0:11:55 Interesting.
    0:11:56 In your book,
    0:11:59 there is one Venn diagram
    0:12:00 that caught my eye.
    0:12:01 It’s, you know,
    0:12:02 these three circles.
    0:12:04 The first is labeled sizable,
    0:12:06 the second is solvable,
    0:12:08 and the third is sorely overlooked.
    0:12:09 And in the middle
    0:12:10 where they all overlap,
    0:12:12 it says moral ambition.
    0:12:13 Explain that to me.
    0:12:14 What does that mean?
    0:12:15 Yeah, so this is
    0:12:16 the triple S framework
    0:12:17 of making the world
    0:12:18 a wildly better place.
    0:12:20 And it’s connected
    0:12:22 to this simple point
    0:12:23 that choosing the cause
    0:12:25 you work on
    0:12:26 is probably
    0:12:27 the most important question
    0:12:28 you’ve got to answer.
    0:12:29 And so,
    0:12:30 at the School for Moral Ambition,
    0:12:32 we work with this framework
    0:12:35 in selecting these causes.
    0:12:37 Take something like
    0:12:38 climate change, for example.
    0:12:39 Climate change is obviously
    0:12:41 a very sizable problem.
    0:12:41 It’s very big.
    0:12:44 Threatens a lot of people.
    0:12:46 It’s also very solvable, right?
    0:12:47 We know what we can do.
    0:12:49 We’ve got a huge toolbox,
    0:12:50 a lot of solutions out there
    0:12:51 that are waiting
    0:12:52 to be implemented.
    0:12:54 And then the question is,
    0:12:56 is it also sorely neglected?
    0:12:57 And the good news here
    0:12:59 is less and less so.
    0:12:59 You could ask yourself,
    0:13:01 what was the best time
    0:13:02 to be a climate activist?
    0:13:03 And the answer is not now.
    0:13:05 30 years ago.
    0:13:05 Exactly.
    0:13:06 That was the moment.
    0:13:08 So if you, again,
    0:13:09 want to maximize your impact,
    0:13:10 if you want to ask
    0:13:11 the morally ambitious question,
    0:13:12 then the question is,
    0:13:13 okay,
    0:13:15 what would the climate activists
    0:13:16 of the 70s
    0:13:17 have done today, right?
    0:13:19 Or what is the problem
    0:13:20 that’s currently
    0:13:20 where climate change
    0:13:22 was in the 1970s?
    0:13:23 You see what I mean?
    0:13:27 That is an entrepreneurial way
    0:13:29 of looking at doing good.
    0:13:31 You are really looking
    0:13:31 for the gap in the market.
    0:13:32 You could also do that
    0:13:34 within a cost area,
    0:13:34 by the way.
    0:13:36 So if you look at climate change,
    0:13:37 then you can think,
    0:13:38 okay,
    0:13:40 what is the part of the problem
    0:13:40 that is currently
    0:13:41 most neglected?
    0:13:42 Okay,
    0:13:43 so looking at the neglected
    0:13:44 or sorely overlooked,
    0:13:45 looking at the solvable
    0:13:47 and looking at the sizable.
    0:13:48 I do wonder about
    0:13:50 the sizable part of that.
    0:13:51 Does moral ambition
    0:13:53 always have to be about scale?
    0:13:55 Yeah,
    0:13:55 I think so.
    0:13:55 Yeah.
    0:13:56 Yeah.
    0:13:57 It’s about making
    0:13:58 the biggest possible impact.
    0:14:00 And if you can achieve
    0:14:01 your goals
    0:14:02 during your lifetime,
    0:14:02 then you’re probably
    0:14:04 not thinking big enough.
    0:14:05 Look,
    0:14:06 I’m not saying
    0:14:06 that everyone
    0:14:07 has to be morally ambitious
    0:14:08 or something like that.
    0:14:09 I’m not like
    0:14:11 preaching with my finger
    0:14:11 and saying,
    0:14:11 oh,
    0:14:12 if you don’t live
    0:14:13 this kind of life,
    0:14:14 you’re a bad person.
    0:14:15 I am saying,
    0:14:18 if you are ambitious anyway,
    0:14:19 you know,
    0:14:21 why not redirect that energy
    0:14:22 to do a lot of good?
    0:14:23 I think it will make your life
    0:14:24 much more meaningful.
    0:14:25 If you’re going to have
    0:14:26 a burnout anyway,
    0:14:27 you know,
    0:14:27 you might as well
    0:14:28 get that burnout
    0:14:30 while you help
    0:14:30 a lot of people,
    0:14:31 right?
    0:14:33 And the same is true
    0:14:34 for some people
    0:14:36 who are very idealistic
    0:14:36 but not very ambitious.
    0:14:37 Like,
    0:14:38 wouldn’t it be nice
    0:14:39 to actually achieve a lot?
    0:14:40 I mean,
    0:14:41 I personally come
    0:14:42 from the political left
    0:14:43 and,
    0:14:45 yeah,
    0:14:45 there’s this weird
    0:14:46 leftist obsession
    0:14:48 with being pure
    0:14:48 and irrelevant,
    0:14:50 right?
    0:14:52 Calling out a lot of people,
    0:14:53 winning the debate
    0:14:54 in the group chat,
    0:14:55 but not actually
    0:14:55 making a difference
    0:14:57 for the people you say
    0:14:57 you care so much about.
    0:14:58 I think that’s
    0:14:59 what you call in the book
    0:15:00 the noble loser,
    0:15:01 right?
    0:15:01 Yeah,
    0:15:01 yeah,
    0:15:02 yeah,
    0:15:02 yeah.
    0:15:04 But I guess
    0:15:04 what I’m wondering is,
    0:15:05 do you believe
    0:15:06 that there is sort of
    0:15:07 a moral imperative
    0:15:09 to do the most good
    0:15:10 you possibly can do
    0:15:11 to have the most impact,
    0:15:12 the most scale?
    0:15:14 Well,
    0:15:15 obviously at some point
    0:15:17 you’ve done enough.
    0:15:19 I talk about
    0:15:20 Thomas Clarkson,
    0:15:21 my favorite abolitionist.
    0:15:23 He was
    0:15:25 a British writer
    0:15:25 and activist
    0:15:28 and when he was 25
    0:15:29 he had this epiphany
    0:15:30 that slavery
    0:15:31 was probably
    0:15:31 the greatest moral
    0:15:33 atrocity of his time
    0:15:33 and he was like,
    0:15:34 you know what,
    0:15:34 maybe I can make
    0:15:35 a difference.
    0:15:36 Maybe I can
    0:15:38 spend my life
    0:15:39 fighting this
    0:15:40 horrible institution
    0:15:42 and that’s basically
    0:15:42 what he did.
    0:15:43 The first seven years
    0:15:44 he traveled across
    0:15:45 the United Kingdom
    0:15:46 35,000 miles
    0:15:47 spreading his abolitionist
    0:15:48 propaganda everywhere
    0:15:49 and then
    0:15:50 he had a total
    0:15:51 nervous breakdown.
    0:15:53 Utter burnout.
    0:15:53 He couldn’t walk
    0:15:54 the stairs anymore.
    0:15:55 He couldn’t speak.
    0:15:56 He started sweating
    0:15:57 profusely whenever
    0:15:59 he wanted to say something
    0:16:00 and I read that
    0:16:00 in his memoirs
    0:16:01 and I was like,
    0:16:02 Thomas, Thomas, Thomas.
    0:16:04 Remember your
    0:16:05 breathing exercises.
    0:16:05 You can take things
    0:16:06 too far.
    0:16:07 Now, the reason I say
    0:16:08 that only at the end
    0:16:08 of the book
    0:15:10 is because, you know,
    0:16:11 most of us first
    0:16:12 deserve a kick in the butt.
    0:16:13 So, yeah,
    0:16:14 there are some
    0:16:15 do-gooders out there.
    0:16:17 I think they, you know,
    0:16:18 take morality
    0:16:19 a little bit too seriously.
    0:16:20 As I said,
    0:16:22 I’m personally a pluralist.
    0:16:22 I’m a father
    0:16:23 of two young children.
    0:16:24 I think they’re
    0:16:25 way more important
    0:16:26 than, you know,
    0:16:27 my career.
    0:16:29 But I am
    0:16:31 pretty ambitious, right?
    0:16:32 I do want to make
    0:16:33 a mark on this world
    0:16:34 and I think there are
    0:16:35 a lot of people out there.
    0:17:36 We are all,
    0:17:37 or most of us are,
    0:17:38 scared of death.
    0:16:40 And what do you want
    0:16:41 to look back on
    0:16:42 when you lie on your deathbed?
    0:16:44 All the PowerPoints,
    0:16:44 you know,
    0:16:46 you hated to make
    0:16:47 or all the reports
    0:16:48 you wrote
    0:16:48 that no one ever
    0:16:49 wanted to read,
    0:16:50 all the products
    0:16:51 that you didn’t believe in
    0:16:52 that you still spend
    0:16:53 a lifetime selling?
    0:16:54 Seems pretty sad to me.
    0:16:56 I think this is touching
    0:16:57 on something really honest,
    0:16:58 which is that
    0:16:59 I think a lot of
    0:17:01 the desire
    0:17:02 for this sort of
    0:17:03 big impact
    0:17:04 may actually come
    0:17:05 from our fear
    0:17:07 of our own mortality
    0:17:08 and this desire
    0:17:09 to leave a legacy
    0:17:10 that will outlast us
    0:17:11 so that we feel like
    0:17:11 in some sense
    0:17:12 it actually mattered
    0:17:13 that we lived at all.
    0:17:15 And I remember
    0:17:17 dealing with this myself.
    0:17:20 I’m a journalist now
    0:17:20 but before that
    0:17:21 I was a novelist
    0:17:23 and I didn’t care
    0:17:25 how many people
    0:17:26 my work impacted, right?
    0:17:27 It was for me
    0:17:28 really not about scale.
    0:17:29 My feeling was,
    0:17:30 look, if my novel
    0:17:31 deeply moves
    0:17:32 just one reader
    0:17:34 and helps them feel
    0:17:35 less alone in the world,
    0:17:36 helps them feel
    0:17:36 more understood,
    0:17:38 I will be happy.
    0:17:40 So I guess
    0:17:42 my question for you
    0:17:42 as someone who has
    0:17:43 personally struggled
    0:17:44 with this issue of scale
    0:17:45 is, you know,
    0:17:46 are you telling me
    0:17:47 I shouldn’t be happy
    0:17:47 with that?
    0:17:49 The title of chapter one
    0:17:50 in your book
    0:17:50 is literally
    0:17:52 no, you’re not fine
    0:17:52 just the way you are.
    0:17:55 So I think
    0:17:56 there is absolutely
    0:17:57 a place for
    0:17:59 as the French say
    0:18:00 l’art pour l’art,
    0:18:01 right?
    0:18:03 It’s just music
    0:18:04 or art
    0:18:04 for the sake
    0:18:05 of art itself.
    0:18:07 I don’t want to,
    0:18:07 you know,
    0:18:09 let everything succumb
    0:18:09 to kind of
    0:18:12 utilitarian calculus.
    0:18:14 I think
    0:18:15 it’s better
    0:18:16 to help a lot of people
    0:18:17 than just a few people.
    0:18:20 So, and as I said,
    0:18:21 in any rich life,
    0:18:22 morality does play
    0:18:23 a big role.
    0:18:25 I wouldn’t want
    0:18:26 to live in a society
    0:18:26 where everyone
    0:18:27 is like Thomas Clarkson,
    0:18:27 you know,
    0:18:28 running around
    0:18:29 on his horseback
    0:18:31 doing morally
    0:18:31 ambitious work.
    0:18:33 But on the margins,
    0:18:35 I think in the world
    0:18:35 today,
    0:18:36 we need a lot
    0:18:37 more ambition.
    0:18:38 We need much more
    0:18:39 moral ambition
    0:18:39 than we currently have.
    0:18:41 Yeah, I mean,
    0:18:41 I personally
    0:18:42 would not want
    0:18:42 to end up
    0:18:42 in a world
    0:18:43 where everyone
    0:18:44 is so focused
    0:18:45 on moral ambition
    0:18:46 and scale
    0:18:46 that we,
    0:18:47 like,
    0:18:47 that no one
    0:18:48 ever writes a novel
    0:18:49 because they worry
    0:18:49 it won’t impact
    0:18:50 enough people.
    0:18:51 You know,
    0:18:52 when I was reading
    0:18:52 your book,
    0:18:53 I kept thinking
    0:18:54 of the philosopher
    0:18:55 Susan Wolfe,
    0:18:57 who has this great
    0:18:57 essay called
    0:18:58 Moral Saints,
    0:18:58 and I know you
    0:18:59 mention it
    0:19:00 in a footnote,
    0:19:00 but I think her ideas
    0:19:01 are very,
    0:19:01 very important
    0:19:02 in this context,
    0:19:02 so I want
    0:19:03 to talk about them.
    0:19:05 Wolfe,
    0:19:05 in that essay
    0:19:06 Moral Saints,
    0:19:07 she says,
    0:19:08 if the moral saint
    0:19:10 is devoting all his time
    0:19:10 to feeding the hungry
    0:19:11 or healing the sick
    0:19:12 or raising money
    0:19:13 for Oxfam,
    0:19:14 then necessarily
    0:19:15 he is not reading
    0:19:15 Victorian novels,
    0:19:17 playing the oboe,
    0:19:18 or improving his backhand.
    0:19:19 A life in which
    0:19:20 none of these possible
    0:19:22 aspects of character
    0:19:22 are developed
    0:19:23 may seem to be
    0:19:24 a life strangely barren.
    0:19:27 Quite an elitist idea
    0:19:28 of how to spend
    0:19:29 your life,
    0:19:29 by the way,
    0:19:30 reading a novel
    0:19:31 and improving
    0:19:32 your backhand,
    0:19:33 or maybe just
    0:19:34 watching Netflix all day.
    0:19:35 Fair, fair,
    0:19:36 but you could
    0:19:36 swap that out
    0:19:38 with reading
    0:19:39 your favorite book
    0:19:42 and any hobby,
    0:19:43 playing soccer,
    0:19:44 whatever it might be.
    0:19:45 But basically
    0:19:46 what she’s saying
    0:19:47 is if you try
    0:19:47 to make all
    0:19:48 of your actions
    0:19:49 as morally good
    0:19:49 as possible,
    0:19:50 you kind of end up
    0:19:51 living a life
    0:19:52 that’s bereft
    0:19:52 of hobbies
    0:19:54 or relationships
    0:19:55 or all the other
    0:19:55 experiences
    0:19:56 that make life meaningful.
    0:19:58 Talk a little more
    0:19:59 about how you square
    0:19:59 that with your urge
    0:20:00 to be morally ambitious.
    0:20:02 There is some tension,
    0:20:03 but I think
    0:20:04 that tension
    0:20:04 is mainly felt
    0:20:05 by philosophers
    0:20:06 for some reason
    0:20:08 and not really
    0:20:09 by me
    0:20:10 or, I don’t know,
    0:20:12 a lot of normies.
    0:20:14 It’s just,
    0:20:17 as I said,
    0:20:17 for me,
    0:20:18 it’s super obvious
    0:20:19 that life is about
    0:20:19 many things,
    0:20:20 including improving
    0:20:22 your backhand.
    0:20:23 I’m not saying
    0:20:24 that people aren’t
    0:20:25 allowed to play
    0:20:26 tennis anymore,
    0:20:27 but we spend,
    0:20:28 what is it,
    0:20:30 2,000 work weeks
    0:20:30 in our career,
    0:20:32 10,000 working days,
    0:20:33 80,000 hours.
    0:20:34 That’s a lot of time
    0:20:36 still left at the job.
    0:20:37 And as I said,
    0:20:38 25% of people
    0:20:39 currently consider
    0:20:40 their own job
    0:20:41 socially meaningless.
    0:20:42 And a lot of
    0:20:43 our so-called
    0:20:44 best and brightest
    0:20:45 are stuck in those jobs.
    0:20:46 So,
    0:20:47 I don’t know.
    0:20:49 We are living
    0:20:49 in a world
    0:20:50 where a huge amount
    0:20:50 of people
    0:20:51 have a career
    0:20:52 that they consider
    0:20:52 socially meaningless
    0:20:53 and then they spend
    0:20:54 the rest of their time
    0:20:56 swiping TikTok.
    0:20:58 That’s the reality,
    0:20:59 right?
    0:21:01 I really don’t think
    0:21:03 that there’s a big danger
    0:21:03 of, you know,
    0:21:05 people reading my book
    0:21:05 and, you know,
    0:21:07 moving all the way
    0:21:08 in the other direction.
    0:21:09 And that’s a problem
    0:21:09 I would honestly
    0:21:10 like to have.
    0:21:11 So,
    0:21:11 you’re saying,
    0:21:11 like,
    0:21:12 we’re currently
    0:21:14 very far away
    0:21:14 from this problem
    0:21:15 of, like,
    0:21:15 everyone going
    0:21:16 full tilt
    0:21:17 on moral ambition
    0:21:18 and ignoring
    0:21:19 everything else in life.
    0:21:20 There’s only one
    0:21:21 community I know of
    0:21:22 where this has
    0:21:23 become a problem
    0:21:24 and, as you know,
    0:21:25 it’s the effective
    0:21:26 altruism community.
    0:21:28 In a way,
    0:21:29 moral ambition
    0:21:30 could be seen
    0:21:31 as effective
    0:21:32 altruism for normies.
    0:21:34 Okay,
    0:21:34 I definitely,
    0:21:35 I definitely want
    0:21:36 to get to that,
    0:21:36 but I’m going
    0:21:36 to put a pin
    0:21:37 in that for a moment
    0:21:39 because I just want
    0:21:41 to take the flip side
    0:21:41 of what you were
    0:21:42 just saying.
    0:21:42 You’re saying,
    0:21:43 like,
    0:21:43 okay,
    0:21:45 I’m not really
    0:21:46 concerned,
    0:23:46 Sigal,
    0:21:47 that we’re,
    0:21:47 like,
    0:21:48 edging into this world
    0:21:48 where everyone
    0:21:49 is so focused
    0:21:50 on moral ambition.
    0:21:53 But how
    0:21:54 do you then
    0:21:55 actually know
    0:21:56 when it’s enough?
    0:21:57 I think you used
    0:21:57 the phrase earlier,
    0:21:58 like,
    0:21:58 at some point
    0:21:59 it’s enough,
    0:21:59 you know?
    0:22:01 And I think,
    0:22:01 you know,
    0:22:03 you write in the epilogue
    0:22:04 of the book,
    0:22:05 morality plays a big role
    0:22:06 in a rich and full life,
    0:22:07 but it’s not everything.
    0:22:08 And if your inner fire
    0:22:09 burns bright,
    0:22:10 no need to stoke it hotter.
    0:22:12 But to me,
    0:22:12 that is pretty,
    0:22:12 like,
    0:22:13 fuzzy sounding.
    0:22:14 How can I know
    0:22:15 what’s enough
    0:22:17 and avoid pushing
    0:22:17 so far
    0:22:18 that moral ambition
    0:22:20 does take over my life?
    0:22:21 That does happen
    0:22:21 to some people.
    0:22:24 So how can I concretely know,
    0:22:24 like,
    0:22:24 Sigal,
    0:22:25 you’ve done enough.
    0:22:26 Chill.
    0:22:27 Well,
    0:22:28 it depends
    0:22:30 on how far
    0:22:30 you want to
    0:22:31 push yourself.
    0:22:33 Look,
    0:22:34 there are no
    0:22:35 easy answers here.
    0:22:37 I think that at some point
    0:22:38 when you really start
    0:22:41 to suffer
    0:22:42 from your moral ambition,
    0:22:43 that’s not where
    0:22:45 I would want you
    0:22:46 to end up.
    0:22:48 I think you should be fueled
    0:22:49 for 80%
    0:22:50 by enthusiasm
    0:22:52 and for maybe 20%
    0:22:53 by feelings of guilt
    0:22:53 and shame.
    0:22:55 So a little bit
    0:22:56 of guilt and shame
    0:22:56 in the mix,
    0:22:57 that’s fine.
    0:22:59 It’s actually how,
    0:23:00 you know,
    0:23:01 this journey started
    0:23:01 for me.
    0:23:02 You know,
    0:23:03 I published
    0:23:04 this previous book,
    0:23:05 Humankind,
    0:23:06 made quite a lot
    0:23:07 of money on it,
    0:23:07 honestly,
    0:23:08 which I never
    0:23:09 would have expected.
    0:23:10 I always thought
    0:23:11 that it would be
    0:23:12 a broke history teacher
    0:23:13 or something like that.
    0:23:15 And yeah,
    0:23:16 that gave me
    0:23:17 a feeling of responsibility
    0:23:18 like,
    0:23:18 huh,
    0:23:19 what does this mean?
    0:23:20 I actually need
    0:23:21 to do something.
    0:23:23 And I also felt
    0:23:23 a little bit ashamed
    0:23:25 for spending a decade
    0:23:26 in what I like to describe
    0:23:28 as the awareness industry.
    0:23:28 You know,
    0:23:29 I’d been
    0:23:32 saying a lot
    0:23:32 about all the things
    0:23:33 that need to happen
    0:23:33 in the world.
    0:23:34 A lot of people
    0:23:34 would know me
    0:23:35 for shouting
    0:23:35 taxes,
    0:23:36 taxes,
    0:23:37 taxes at Davos,
    0:23:37 right?
    0:23:37 Yep.
    0:23:39 And I was a bit
    0:23:41 fed up with myself,
    0:23:41 honestly,
    0:23:43 for standing
    0:23:44 on the sidelines.
    0:23:45 To me,
    0:23:46 what this is indicating
    0:23:46 is like,
    0:23:47 there’s some element
    0:23:48 of subjectivity here,
    0:23:48 right?
    0:23:49 Like the question
    0:23:50 of what percentage
    0:23:51 of my life
    0:23:52 should be focused
    0:23:53 on moral ambition
    0:23:53 and what should be
    0:23:55 like playing the oboe
    0:23:56 or like whatever,
    0:23:57 making watercolor paintings.
    0:23:58 To some degree,
    0:23:59 you’re deciding
    0:23:59 how much
    0:24:00 you want to push yourself,
    0:24:01 how much
    0:24:02 you’re okay
    0:24:03 with having
    0:24:03 some suffering
    0:24:04 in your life
    0:24:04 to achieve
    0:24:05 a greater goal,
    0:24:06 how much you’re like…
    0:24:08 Can I push back
    0:24:08 a little bit?
    0:24:09 Yeah, please.
    0:24:10 I think the question
    0:24:12 itself sort of presumes
    0:24:13 that doing a lot
    0:24:13 of good
    0:24:14 or making a lot
    0:24:15 of impact
    0:24:16 is not going
    0:24:17 to be a nice
    0:24:18 experience or something
    0:24:18 like that,
    0:24:20 that pushing harder
    0:24:22 will always involve
    0:24:23 more sacrifices.
    0:24:24 But if you talk
    0:24:24 to a lot
    0:24:25 of entrepreneurs,
    0:24:26 they find a lot
    0:24:27 of joy
    0:24:28 in thinking big.
    0:24:29 They find a lot
    0:24:30 of joy
    0:24:31 in climbing the ladder.
    0:24:33 It’s what I always
    0:24:34 experienced in my career.
    0:24:35 I love becoming
    0:24:36 a member
    0:24:37 of a student society
    0:24:38 in Utrecht
    0:24:38 in the Netherlands
    0:24:39 where I grew up
    0:24:41 because I felt
    0:24:42 so dumb
    0:24:43 compared to all
    0:24:43 these older students.
    0:24:44 And I was like,
    0:24:45 this is awesome.
    0:24:45 I want to learn
    0:24:46 about philosophy
    0:24:47 and anthropology
    0:24:48 and history.
    0:24:49 And again,
    0:24:49 when I started
    0:24:49 my career
    0:24:50 as a journalist
    0:24:52 at de Volkskrant,
    0:24:52 which is sort of
    0:24:53 the Guardian
    0:24:55 or, well,
    0:24:55 I guess the New York Times
    0:24:56 of the Netherlands,
    0:24:57 I just love being
    0:24:59 the youngest journalist
    0:25:01 there and learning
    0:25:02 from my older colleagues.
    0:25:04 And when I started
    0:25:06 as a writer,
    0:25:07 I had these big dreams
    0:25:08 about, you know,
    0:25:09 I want to write a book
    0:25:10 that will speak
    0:25:11 to millions of people
    0:25:12 about the big questions
    0:25:12 of history,
    0:25:13 like why have we
    0:25:14 conquered the globe?
    0:25:16 What makes humans special?
    0:25:19 And then as I did that,
    0:25:19 you know,
    0:25:21 I was in my early 30s,
    0:25:22 I was, yeah,
    0:25:23 a bit bored
    0:25:24 and looking for the next
    0:25:24 ladder to climb.
    0:25:27 So for me,
    0:25:29 climbing a new ladder
    0:25:30 has mostly been
    0:25:31 about excitement
    0:25:33 and enthusiasm.
    0:25:49 Support for this show
    0:25:50 comes from Shopify.
    0:25:52 When you’re creating
    0:25:53 your own business,
    0:25:54 you have to wear
    0:25:54 too many hats.
    0:25:56 You have to be on top
    0:25:56 of marketing
    0:25:57 and sales
    0:25:58 and outreach
    0:25:59 and sales
    0:26:00 and designs
    0:26:01 and sales
    0:26:02 and finances
    0:26:03 and definitely
    0:26:04 sales.
    0:26:05 Finding the right tool
    0:26:07 that simplifies everything
    0:26:08 can be a game changer.
    0:26:09 For millions of businesses,
    0:26:10 that tool
    0:26:11 is Shopify.
    0:26:13 Shopify is a commerce
    0:26:15 platform behind millions
    0:26:15 of businesses
    0:26:16 around the world
    0:26:17 and,
    0:26:18 according to the company,
    0:26:20 10% of all e-commerce
    0:26:21 in the U.S.
    0:26:22 From household names
    0:26:23 like Mattel
    0:26:24 and Gemshark
    0:26:25 to brands
    0:26:26 just getting started,
    0:26:27 they say they have
    0:26:28 hundreds of ready-to-use
    0:26:29 templates to help
    0:26:30 design your brand style.
    0:26:32 If you’re ready
    0:26:32 to sell,
    0:26:33 you’re ready
    0:26:34 for Shopify.
    0:26:35 You can turn
    0:26:36 your big business
    0:26:37 idea into reality
    0:26:39 with Shopify
    0:26:40 on your side.
    0:26:41 You can sign up
    0:26:41 for your $1
    0:26:42 per month trial period
    0:26:43 and start selling
    0:26:44 today at
    0:26:45 shopify.com
    0:26:46 slash vox.
    0:26:47 You can go to
    0:26:48 shopify.com
    0:26:49 slash vox.
    0:26:50 That’s
    0:26:51 shopify.com
    0:26:52 slash vox.
    0:26:59 Support for the gray area
    0:27:00 comes from Bombas.
    0:27:02 It’s time for spring cleaning
    0:27:02 and you can start
    0:27:04 with your sock drawer.
    0:27:05 Bombas can help you
    0:27:06 replace all your old
    0:27:07 worn-down pairs.
    0:27:08 Say you’re thinking
    0:27:09 of getting into running
    0:27:09 this summer.
    0:27:11 Bombas engineers
    0:27:11 blister-fighting,
    0:27:13 sweat-wicking athletic socks
    0:27:13 that can help you
    0:27:14 go that extra mile.
    0:27:16 Or if you have a spring
    0:27:17 wedding coming up,
    0:27:17 they make comfortable
    0:27:18 dress socks too
    0:27:20 for loafers, heels,
    0:27:20 and all your other
    0:27:21 fancy shoes.
    0:27:23 I’m a big runner.
    0:27:24 I talk about it all the time.
    0:27:25 But the problem is that
    0:27:27 I live on the Gulf Coast
    0:27:28 and it’s basically
    0:27:29 a sauna outside
    0:27:30 for four months of the year,
    0:27:31 maybe five.
    0:27:32 I started wearing
    0:27:34 Bombas athletic socks
    0:27:34 for my runs
    0:27:36 and they’ve held up
    0:27:37 better than any other
    0:27:38 socks I’ve ever tried.
    0:27:39 They’re super durable,
    0:27:40 comfortable,
    0:27:42 and they really do
    0:27:43 a great job
    0:27:43 of absorbing
    0:27:44 all that sweat.
    0:27:45 And right now,
    0:27:46 Bombas is going
    0:27:46 international.
    0:27:48 You can get
    0:27:48 worldwide shipping
    0:27:50 to over 200 countries.
    0:27:51 You can go to
    0:27:52 bombas.com
    0:27:53 slash gray area
    0:27:54 and use code
    0:27:54 gray area
    0:27:55 for 20% off
    0:27:56 your first purchase.
    0:27:57 That’s
    0:27:59 B-O-M-B-A-S
    0:27:59 dot com
    0:28:00 slash gray area.
    0:28:02 Code gray area
    0:28:02 for 20% off
    0:28:03 your first purchase.
    0:28:05 Bombas dot com
    0:28:05 slash gray area.
    0:28:07 Code gray area.
    0:28:12 Harvey Weinstein
    0:28:13 is back in court
    0:28:14 this week.
    0:28:15 An appeals court
    0:28:16 overturned his
    0:28:16 2020 conviction
    0:28:17 in New York
    0:28:18 saying he hadn’t
    0:28:19 gotten a fair trial
    0:28:21 and so his accusers
    0:28:22 must now testify again.
    0:28:25 Weinstein has always
    0:28:26 had very good lawyers,
    0:28:27 but the court
    0:28:28 of public opinion
    0:28:29 was against him.
    0:28:30 Until now,
    0:28:31 it seems.
    0:28:32 After looking over
    0:28:32 this case,
    0:28:32 I’ve concluded
    0:28:33 that Harvey Weinstein
    0:28:34 was wrongfully convicted
    0:28:35 and was basically
    0:28:35 just hung on
    0:28:36 the Me Too thing.
    0:28:37 The commentator
    0:28:38 Candace Owens,
    0:28:38 who has previously
    0:28:39 defended Kanye
    0:28:40 and Andrew Tate.
    0:28:41 Andrew Tate
    0:28:42 and his brother
    0:28:43 were actually a response
    0:28:45 to a misandrist culture.
    0:28:46 Women that hated men.
    0:28:47 Before Andrew Tate,
    0:28:48 there was Lena Dunham.
    0:28:49 Has taken up
    0:28:50 Weinstein’s cause
    0:28:51 and it seems to be
    0:28:53 gaining her followers.
    0:28:54 Coming up on Today Explained,
    0:28:56 when Candace met Harvey.
    0:29:26 Let’s talk about
    0:29:27 the effective altruism
    0:29:28 piece of this.
    0:29:28 Some of our listeners
    0:29:29 may have heard of it,
    0:29:31 but for those who haven’t,
    0:29:31 it’s a movement
    0:29:32 that’s all about
    0:29:33 using reason
    0:29:34 and evidence
    0:29:34 and data
    0:29:35 to do as much
    0:29:36 good as possible.
    0:29:37 I will say
    0:29:39 I’m not an effective altruist,
    0:29:40 but I am a journalist
    0:29:41 who has reported
    0:29:42 a lot on EA
    0:29:43 because I work
    0:29:44 for Vox’s
    0:29:45 Future Perfect section,
    0:29:46 which was sort of
    0:29:47 loosely inspired
    0:29:48 by EA
    0:29:50 in its early days.
    0:29:52 So I am curious
    0:29:53 where you stand on this.
    0:29:54 You talk about
    0:29:55 effective altruism
    0:29:55 in the book
    0:29:56 and you do echo
    0:29:58 a lot of its core ideas,
    0:29:59 like this idea
    0:29:59 that you shouldn’t
    0:30:00 just be trying
    0:30:00 to do good,
    0:30:01 you should try to do
    0:30:03 the most good possible.
    0:30:05 So is being morally ambitious
    0:30:06 different from being
    0:30:07 an effective altruist?
    0:30:09 Yeah, so I wouldn’t say
    0:30:10 the most good.
    0:30:11 I was like,
    0:30:12 you should do
    0:30:12 a lot of good.
    0:30:14 Okay, okay.
    0:30:14 Which is different, right?
    0:30:15 That’s not about
    0:30:16 being perfect,
    0:30:17 but just about
    0:30:18 being ambitious.
    0:30:20 So in the book,
    0:30:21 I study a lot of movements
    0:30:22 that I admire.
    0:30:23 As you know,
    0:30:24 I write extensively
    0:30:25 about the abolitionists,
    0:30:26 about the suffragettes,
    0:30:28 about the civil rights
    0:30:28 campaigners,
    0:30:30 about extraordinary people
    0:30:31 like Rosa Parks,
    0:30:32 who was such a
    0:30:33 strategic visionary.
    0:30:33 A lot of people
    0:30:34 remember her
    0:30:35 as this,
    0:30:36 you know,
    0:30:37 quiet seamstress,
    0:30:38 but she was actually
    0:30:39 a highly experienced
    0:30:39 activist,
    0:30:43 and they really planned
    0:30:45 this whole Montgomery bus boycott.
    0:30:46 It didn’t just happen.
    0:30:47 I talk about
    0:30:48 the animal rights movement.
    0:30:49 I talk about
    0:30:50 Ralph Nader
    0:30:52 and the extraordinary
    0:30:53 Nader’s Raiders movement
    0:30:54 in the 60s and the 70s,
    0:30:55 when Ralph Nader
    0:30:56 was able to recruit
    0:30:58 a lot of really talented
    0:31:00 young Ivy League graduates
    0:31:00 and convince them
    0:31:01 to not work
    0:31:03 for boring law firms,
    0:31:03 but instead
    0:31:04 go to Washington
    0:31:06 and influence legislation.
    0:31:07 There’s one historian
    0:31:08 who estimates
    0:31:08 that they’ve influenced,
    0:31:09 what is it,
    0:31:11 25 pieces of federal legislation.
    0:31:12 So anyway,
    0:31:13 the book is a whole collection
    0:31:14 of studies of movements
    0:31:15 that I admire,
    0:31:16 and indeed,
    0:31:17 effective altruism
    0:31:18 is also one of those
    0:31:19 movements that I admire
    0:31:19 quite a bit.
    0:31:20 I think there’s a lot
    0:31:21 we can learn from them,
    0:31:22 and there are also
    0:31:23 quite a few things
    0:31:24 that I don’t really like
    0:31:25 about them.
    0:31:28 So the main thing
    0:31:29 I think indeed
    0:31:30 what I really like
    0:31:31 about them
    0:31:31 is their
    0:31:33 moral seriousness.
    0:31:35 As I said,
    0:31:36 I come from the political left,
    0:31:37 and if there’s one thing
    0:31:39 that’s often quite annoying
    0:31:40 about lefties
    0:31:40 is that they
    0:31:41 preach a lot,
    0:31:42 but they
    0:31:43 do little.
    0:31:44 For example,
    0:31:45 this simple thing
    0:31:46 about donating
    0:31:47 to charity,
    0:31:49 I think it’s
    0:31:49 pretty easy
    0:31:50 to make the case
    0:31:50 that
    0:31:52 that is one of the most
    0:31:53 effective things
    0:31:54 you can do,
    0:31:55 but then
    0:31:56 very few
    0:31:56 of my
    0:31:57 progressive
    0:31:58 leftist friends
    0:31:58 donate
    0:32:00 anything.
    0:32:01 So I really
    0:32:02 like that
    0:32:03 moral seriousness
    0:32:04 of EAs.
    0:32:05 You know,
    0:32:05 you go to conferences
    0:32:07 and you will meet
    0:32:07 quite a few people
    0:32:08 who have donated
    0:32:09 kidneys to
    0:32:11 random strangers,
    0:32:12 which is
    0:32:13 pretty impressive.
    0:32:14 I’m sorry to say
    0:32:15 that I still have
    0:32:16 both of my kidneys.
    0:32:18 My condolences.
    0:32:18 And I’m quite attached to them.
    0:32:21 But yeah,
    0:32:22 I admire the people
    0:32:24 who really
    0:32:24 practice what
    0:32:25 they preach.
    0:32:28 I guess the main
    0:32:30 thing I dislike
    0:32:31 is probably
    0:32:32 what we already
    0:32:33 talked about.
    0:32:33 Like,
    0:32:34 where does the
    0:32:35 motivation come from?
    0:32:38 One of the
    0:32:39 founding fathers
    0:32:40 of effective
    0:32:40 altruism was
    0:32:41 the philosopher
    0:32:42 Peter Singer,
    0:32:42 obviously,
    0:32:43 also one of the
    0:32:44 founding fathers
    0:32:44 of the modern
    0:32:45 animal rights
    0:32:45 movement.
    0:32:46 And everyone
    0:32:47 knows him for
    0:32:47 this,
    0:32:49 you know,
    0:32:50 that thought
    0:32:51 experiment of
    0:32:51 the child
    0:32:52 drowning in the
    0:32:53 shallow pond.
    0:32:55 I’m pretty sure
    0:32:55 that he must be
    0:32:56 really fed up
    0:32:58 with talking about
    0:32:59 that thought
    0:33:00 experiment because
    0:33:01 like,
    0:33:01 I am already
    0:33:02 fed up talking
    0:33:03 about it and
    0:33:03 it’s not even
    0:33:04 my thought
    0:33:04 experiment.
    0:33:05 Right.
    0:33:05 So that’s the
    0:33:06 thought experiment
    0:33:07 where Peter Singer
    0:33:08 says,
    0:33:09 look,
    0:33:09 if you are
    0:33:10 walking to work
    0:33:10 and you see
    0:33:11 a little kid
    0:33:12 drowning in a
    0:33:12 shallow pond,
    0:33:13 you know you
    0:33:14 could save this
    0:33:14 kid.
    0:33:15 Your life will
    0:33:15 be in no danger.
    0:33:16 It’s shallow,
    0:33:18 but you will
    0:33:19 ruin your expensive
    0:33:19 suit or you will
    0:33:20 muddy your shoes
    0:33:21 should you do it.
    0:33:21 And it’s
    0:33:22 supposed to
    0:33:22 be like,
    0:33:22 yes,
    0:33:23 obviously you
    0:33:24 should do it.
    0:33:25 And well,
    0:33:26 by comparison,
    0:33:26 you know,
    0:33:27 by analogy,
    0:33:28 we have money.
    0:33:29 It could easily
    0:33:30 save the lives
    0:33:30 of people in
    0:33:31 developing countries.
    0:33:33 So you should
    0:33:34 donate it.
    0:33:34 Yeah.
    0:33:35 Thank you so much
    0:33:36 for helping me
    0:33:36 out with that one.
    0:33:37 Anyway,
    0:33:39 I never really
    0:33:39 liked the thought
    0:33:41 experiment because
    0:33:41 it always felt
    0:33:43 like a form of
    0:33:44 moral blackmail to
    0:33:44 me.
    0:33:45 And now I’m
    0:33:46 suddenly supposed
    0:33:47 to see drowning
    0:33:47 children everywhere
    0:33:48 and like,
    0:33:48 oh,
    0:33:49 this microphone,
    0:33:50 it was too
    0:33:50 expensive.
    0:33:51 Could have
    0:33:51 donated that
    0:33:52 to, I don’t
    0:33:52 know,
    0:33:53 a charity in
    0:33:54 Malawi or,
    0:33:55 you know,
    0:33:55 I just had a
    0:33:56 sandwich and,
    0:33:57 you know,
    0:33:59 the peanut butter
    0:33:59 on it was also
    0:34:00 too expensive.
    0:34:01 It’s like a
    0:34:02 totally inhuman
    0:34:03 way of, I
    0:34:03 don’t know,
    0:34:04 looking at life.
    0:34:05 It just doesn’t
    0:34:05 resonate with me
    0:34:06 at all.
    0:34:07 But there are
    0:34:07 quite a few
    0:34:08 people who
    0:34:10 instantly thought,
    0:34:11 yes,
    0:34:11 that is true.
    0:34:12 They discovered,
    0:34:12 hey,
    0:34:13 wait a minute,
    0:34:13 I’m not
    0:34:13 alone.
    0:34:15 Let’s build a
    0:34:15 movement together.
    0:34:17 And I really
    0:34:17 like that.
    0:34:18 For me,
    0:34:19 the historical
    0:34:21 comparison is
    0:34:22 the Quakers,
    0:34:23 the early
    0:34:24 abolitionists,
    0:34:25 who were very
    0:34:26 weird as well.
    0:34:27 It was like
    0:34:28 this small
    0:34:29 Protestant sect
    0:34:30 of people who
    0:34:31 deeply believed
    0:34:31 in equality.
    0:34:32 They were some
    0:34:33 of the first
    0:34:34 who allowed
    0:34:35 women to
    0:34:36 also preach
    0:34:36 in their
    0:34:37 meeting houses.
    0:34:38 They would
    0:34:39 never take an
    0:34:40 oath because
    0:34:40 they were like,
    0:34:41 yeah,
    0:34:41 we always
    0:34:42 speak the
    0:34:42 truth,
    0:34:42 so why
    0:34:42 would we
    0:34:43 take an
    0:34:43 oath?
    0:34:44 Anyway,
    0:34:44 they were
    0:34:45 seen as
    0:34:45 very weird
    0:34:48 and quite
    0:34:48 amazing as
    0:34:49 well.
    0:34:49 The
    0:34:50 abolitionist movement
    0:34:50 sort of
    0:34:51 started as
    0:34:52 a Quaker
    0:34:52 startup.
    0:34:53 So that’s
    0:34:54 also how
    0:34:54 I see
    0:34:55 EA,
    0:34:56 as very
    0:34:56 weird,
    0:34:57 but pretty
    0:34:58 impressive.
    0:35:01 And I
    0:35:01 think a lot
    0:35:02 of people in
    0:35:02 there have
    0:35:02 done a lot
    0:35:03 of good
    0:35:03 work,
    0:35:04 even though
    0:35:05 I’d never
    0:35:06 join the
    0:35:06 church.
    0:35:08 It’s not
    0:35:08 for me.
    0:35:09 And there are
    0:35:10 some obvious
    0:35:12 downsides to
    0:35:13 the ideology
    0:35:14 as well.
    0:35:15 Let’s pick
    0:35:15 up on that
    0:35:16 weirdness bit,
    0:35:16 right?
    0:35:17 So in
    0:35:17 your book,
    0:35:18 you straight
    0:35:19 up tell
    0:35:20 readers,
    0:35:21 join a
    0:35:22 cult or
    0:35:22 start your
    0:35:22 own.
    0:35:23 Regardless,
    0:35:24 you can’t
    0:35:24 be afraid to
    0:35:25 come across
    0:35:26 as weird if
    0:35:26 you want to
    0:35:26 make a
    0:35:26 difference.
    0:35:27 Every milestone
    0:35:28 of civilization
    0:35:29 was first seen
    0:35:29 as the crazy
    0:35:30 idea of some
    0:35:31 subculture.
    0:35:33 I’m curious
    0:35:34 how you think
    0:35:35 about the
    0:35:36 downsides of
    0:35:37 being in a
    0:35:37 cult.
    0:35:38 Cults don’t
    0:35:39 have a
    0:35:39 great
    0:35:39 reputation,
    0:35:40 do they?
    0:35:42 So I
    0:35:42 got to give
    0:35:43 some credit
    0:35:43 to Peter
    0:35:44 Thiel here.
    0:35:46 Maybe not
    0:35:48 someone that
    0:35:49 people naturally
    0:35:50 associate with
    0:35:50 me.
    0:35:52 For those who
    0:35:52 don’t know
    0:35:53 him, he is
    0:35:54 a venture
    0:35:54 capitalist,
    0:35:55 very much on
    0:35:55 the right
    0:35:57 wing side of
    0:35:57 the political
    0:35:58 spectrum.
    0:35:59 He’s written
    0:35:59 this fantastic
    0:36:00 book called
    0:36:01 Zero to One
    0:36:02 about how to
    0:36:03 build a
    0:36:03 successful
    0:36:03 startup.
    0:36:05 And indeed,
    0:36:06 one of his
    0:36:07 pieces of advice is to
    0:36:07 start a cult.
    0:36:09 A cult is
    0:36:10 a small
    0:36:10 group of
    0:36:11 thoughtful,
    0:36:12 committed
    0:36:13 citizens who
    0:36:13 want to
    0:36:14 change the
    0:36:14 world.
    0:36:15 And they
    0:36:16 have some
    0:36:18 shared beliefs
    0:36:18 that make
    0:36:18 them very
    0:36:20 weird for
    0:36:21 the rest of
    0:36:21 society.
    0:36:23 Now, as I
    0:36:23 said, I
    0:36:24 spent the
    0:36:25 first decade
    0:36:25 of my
    0:36:25 career as
    0:36:26 a journalist
    0:36:28 and most
    0:36:29 journalists
    0:36:30 think that
    0:36:30 they should
    0:36:31 break out
    0:36:31 of their
    0:36:31 bubble,
    0:36:33 that they
    0:36:34 should meet
    0:36:34 people on
    0:36:34 the other
    0:36:35 side of the
    0:36:35 political
    0:36:35 spectrum.
    0:36:36 This is a
    0:36:37 debate that
    0:36:37 I used
    0:36:38 to have
    0:36:38 with my
    0:36:38 colleagues.
    0:36:39 They would
    0:36:39 say, yeah,
    0:36:40 we’ve got to
    0:36:40 make sure
    0:36:41 that the
    0:36:41 plumbers read
    0:36:42 our essays
    0:36:43 as well.
    0:36:44 And my
    0:36:44 response was
    0:36:45 always like,
    0:36:45 you know,
    0:36:46 I would love
    0:36:47 for plumbers
    0:36:47 to read my
    0:36:48 essays, but
    0:36:49 currently my
    0:36:50 friends aren’t
    0:36:50 reading them.
    0:36:52 So maybe we
    0:36:52 can start
    0:36:53 there.
    0:36:54 Right?
    0:36:56 And this is
    0:36:56 why I think
    0:36:57 it sometimes
    0:36:57 makes sense to
    0:36:58 actually double
    0:36:59 down on a
    0:37:00 cult, because
    0:37:01 in a cult,
    0:37:02 it can be
    0:37:02 radicalized,
    0:37:03 and sometimes
    0:37:04 that’s exactly
    0:37:05 what’s
    0:37:05 necessary.
    0:37:06 To give you
    0:37:06 one simple
    0:37:07 example, in a
    0:37:08 world that
    0:37:08 doesn’t really
    0:37:09 seem to care
    0:37:09 about animals
    0:37:10 all that much,
    0:37:11 it’s easy to
    0:37:12 become disillusioned.
    0:37:14 But then once you
    0:37:15 join a safe space
    0:37:16 of ambitious
    0:37:16 do-gooders, you
    0:37:18 can suddenly get
    0:37:18 this feeling like,
    0:37:19 hey, I’m not the
    0:37:20 only one, right?
    0:37:21 There are other
    0:37:21 people who deeply
    0:37:22 care about animals
    0:37:23 as well, and you
    0:37:23 know what?
    0:37:24 I can do much
    0:37:25 more than I’m
    0:37:26 currently doing.
    0:37:26 So it can have a
    0:37:27 radicalizing effect.
    0:37:29 Now, I totally
    0:37:29 acknowledge that
    0:37:30 there are all
    0:37:31 kinds of dangers
    0:37:32 here.
    0:37:34 Like, you can
    0:37:34 become too
    0:37:35 dogmatic, you
    0:37:36 can be, you
    0:37:37 know, quite
    0:37:38 hostile to people
    0:37:39 who don’t share
    0:37:39 all your beliefs.
    0:37:41 So I do see
    0:37:42 all of that.
    0:37:43 I just want to
    0:37:44 recognize that if
    0:37:45 you look at some
    0:37:45 of these great
    0:37:46 movements of
    0:37:46 history, the
    0:37:48 abolitionists, the
    0:37:49 suffragettes, yeah,
    0:37:50 they had cultish
    0:37:51 aspects.
    0:37:52 They were in a
    0:37:53 way, yeah, a
    0:37:54 little bit like a
    0:37:54 cult.
    0:37:56 I want to push
    0:37:58 a little bit on
    0:37:59 this question
    0:38:00 about, you
    0:38:00 know, cults and
    0:38:01 dogmatism.
    0:38:03 Obviously, a big
    0:38:04 downside, as you
    0:38:04 mentioned, is that
    0:38:05 you can become
    0:38:06 dogmatic, you can
    0:38:06 become kind of
    0:38:07 deaf to criticism
    0:38:07 from the outside.
    0:38:09 Do you have any
    0:38:10 advice for people
    0:38:11 on how to avoid
    0:38:12 the downside?
    0:38:14 Yeah, don’t let
    0:38:14 it suck up your
    0:38:15 whole life.
    0:38:16 There’s this quote
    0:38:18 from Flaubert, the
    0:38:19 novelist, who once
    0:38:20 said something like,
    0:38:20 if you want to be
    0:38:22 violent and original
    0:38:23 in your work, you
    0:38:24 need to be boring
    0:38:25 in your private
    0:38:25 life.
    0:38:26 I’m paraphrasing
    0:38:26 here.
    0:38:27 But I’ve always
    0:38:29 liked that quote.
    0:38:30 I don’t know, it
    0:38:31 gives you a certain
    0:38:32 groundedness and
    0:38:33 stability.
    0:38:35 So maybe surround
    0:38:36 yourself with
    0:38:38 other types of
    0:38:39 people and other
    0:38:40 types of pursuits,
    0:38:40 right?
    0:38:41 Basically be a
    0:38:42 pluralist.
    0:38:44 Look, I don’t
    0:38:45 know, honestly.
    0:38:46 I don’t have the
    0:38:48 perfect recipe here.
    0:38:53 In general, it’s
    0:38:54 super important to
    0:38:55 surround yourself with
    0:38:56 people who are
    0:38:56 critical of your
    0:38:57 work, who don’t
    0:38:58 take you too
    0:38:59 seriously, who
    0:38:59 can also laugh
    0:39:02 at you, who
    0:39:03 have a good
    0:39:03 sense of humor,
    0:39:06 or who can just
    0:39:06 see your
    0:39:07 foolishness and
    0:39:07 call it out and
    0:39:08 still be a good
    0:39:09 friend.
    0:39:10 But this is
    0:39:11 general life advice
    0:39:12 for everyone.
    0:39:13 Right, right.
    0:39:15 Having a strong
    0:39:16 dose of pluralism
    0:39:17 can help
    0:39:20 counteract a lot
    0:39:21 of the potential
    0:39:22 pitfalls with
    0:39:23 these sorts of
    0:39:24 ideological movements.
    0:39:25 Yeah, absolutely.
    0:39:25 At the same
    0:39:26 time, you know, I
    0:39:27 come from such a
    0:39:27 different place,
    0:39:28 you know.
    0:39:30 I was mainly
    0:39:31 frustrated with all
    0:39:33 these people on
    0:39:33 the left side of the
    0:39:34 political spectrum
    0:39:35 saying, oh, we
    0:39:36 need systemic
    0:39:37 change.
    0:39:38 We need to
    0:39:39 abolish capitalism,
    0:39:40 overthrow the
    0:39:42 patriarchy, and
    0:39:43 write, you know,
    0:39:44 a hundred more
    0:39:45 monographs about it
    0:39:46 in utterly
    0:39:47 inaccessible
    0:39:48 academic jargon.
    0:39:49 And I was like,
    0:39:50 come on, can we
    0:39:51 actually do
    0:39:51 something, right?
    0:39:53 Can we actually
    0:39:55 find some effective
    0:39:55 way of actually
    0:39:56 making a difference?
    0:40:24 I think one
    0:40:25 important question
    0:40:26 is the question
    0:40:27 of who
    0:40:27 we should be
    0:40:28 trying to
    0:40:28 make a difference
    0:40:29 for.
    0:40:31 There is a very
    0:40:31 interesting concept
    0:40:32 that you mention
    0:40:33 in the book,
    0:40:34 which is humanity’s
    0:40:35 expanding moral
    0:40:36 circle.
    0:40:37 What is that?
    0:40:38 It’s, again, a
    0:40:39 term from Peter
    0:40:41 Singer, the
    0:40:41 philosopher, who
    0:40:42 makes the simple
    0:40:43 case that throughout
    0:40:45 history, our
    0:40:46 moral circle has
    0:40:46 expanded.
    0:40:49 So, back in
    0:40:49 the old days, we
    0:40:50 mainly cared about
    0:40:52 our own tribe and
    0:40:53 members of our
    0:40:53 tribe.
    0:40:54 And then, you
    0:40:55 know, we got the
    0:40:56 big religions and
    0:40:57 we started caring
    0:40:58 about people who
    0:40:59 believe the same
    0:40:59 things.
    0:41:00 And then we got
    0:41:01 the nation states
    0:41:02 and so on and so
    0:41:02 on.
    0:41:03 And he basically
    0:41:04 says that moral
    0:41:04 progress is all
    0:41:05 about expanding the
    0:41:07 moral circle and
    0:41:08 to keep pushing
    0:41:09 that expansion.
    0:41:11 A couple of
    0:41:11 years ago, I was
    0:41:12 actually working on
    0:41:13 a different book.
    0:41:14 I wanted to write
    0:41:14 the history of
    0:41:15 moral circle
    0:41:15 expansion.
    0:41:17 Because it’s
    0:41:18 really interesting
    0:41:19 that a lot of
    0:41:19 the first
    0:41:21 abolitionists, they
    0:41:22 already cared
    0:41:23 deeply about animal
    0:41:24 rights, which makes
    0:41:24 a lot of sense
    0:41:25 because once you
    0:41:26 start expanding your
    0:41:27 moral circle, once
    0:41:27 you start opening
    0:41:29 your heart to
    0:41:30 people who first
    0:41:30 weren’t included in
    0:41:31 your moral circle,
    0:41:32 then the question
    0:41:32 is, like, why
    0:41:33 stop at some
    0:41:33 point?
    0:41:34 And I was writing
    0:41:35 about that, learning
    0:41:36 about that, and I
    0:41:37 was like, huh,
    0:41:39 maybe I should
    0:41:40 finish this book
    0:41:41 when I’m 60 or
    0:41:42 70 or something.
    0:41:44 Maybe I should be
    0:41:44 doing this stuff,
    0:41:45 you know, not
    0:41:46 just be writing
    0:41:46 about it.
    0:41:47 So for me, that
    0:41:48 was incredibly
    0:41:48 inspirational.
    0:41:50 That’s funny.
    0:41:50 Okay, so if the
    0:41:51 moral circle is
    0:41:52 like, okay, who’s
    0:41:53 worthy of our
    0:41:54 moral consideration,
    0:41:54 who’s not, who’s
    0:41:56 in, who’s out, you
    0:41:58 kind of acknowledge
    0:41:58 in the book, like,
    0:41:59 maybe it’s not
    0:42:00 obvious how to
    0:42:01 tell, are we
    0:42:02 including everyone
    0:42:03 in the moral circle
    0:42:03 that should be
    0:42:04 included?
    0:42:05 And you have a few
    0:42:06 pointers that you
    0:42:08 offer people on
    0:42:08 how to check that
    0:42:09 they’re including
    0:42:09 everyone that should
    0:42:10 be included.
    0:42:11 Do you want to
    0:42:12 give us a little
    0:42:13 summary, a few
    0:42:14 pointers?
    0:42:15 I think that
    0:42:16 there are some
    0:42:17 classic signs
    0:42:20 that can tell
    0:42:20 us whether we’re
    0:42:21 on the right
    0:42:22 side of history.
    0:42:23 This is one of
    0:42:24 those fascinating
    0:42:24 questions that we
    0:42:25 can ask, right?
    0:42:26 We can look back
    0:42:27 on, say, the
    0:42:29 Romans who threw
    0:42:30 naked women before
    0:42:31 the lions, but
    0:42:32 still thought they
    0:42:32 were super
    0:42:33 civilized because
    0:42:34 unlike the
    0:42:35 barbarians, they
    0:42:36 didn’t sacrifice
    0:42:38 kids to the
    0:42:39 gods, right?
    0:42:40 And every
    0:42:40 civilization
    0:42:41 throughout history
    0:42:41 has always
    0:42:43 thought we
    0:42:43 are the most
    0:42:44 civilized.
    0:42:45 And obviously
    0:42:46 we think that
    0:42:46 today as well,
    0:42:47 like any
    0:42:48 modern-day
    0:42:49 liberal in
    0:42:50 the US or
    0:42:52 the West in
    0:42:52 the 21st century
    0:42:53 will be like,
    0:42:53 yeah, there’s
    0:42:54 still bad stuff
    0:42:55 happening, but
    0:42:57 basically we’ve
    0:42:57 figured things
    0:42:58 out.
    0:43:00 And the
    0:43:01 uncomfortable
    0:43:01 truth is that
    0:43:02 probably we are
    0:43:04 still
    0:43:06 engaged in some
    0:43:07 really terrible
    0:43:08 moral atrocities.
    0:43:09 I mean, that’s
    0:43:10 highly likely if
    0:43:10 you just look at
    0:43:11 the historical
    0:43:11 track record.
    0:43:12 So the question
    0:43:13 is, what will
    0:43:14 the historians of
    0:43:15 the future say
    0:43:16 about us?
    0:43:16 And then I’m
    0:43:17 not just talking
    0:43:17 about, oh,
    0:43:18 yeah, the bad
    0:43:19 MAGA people or
    0:43:19 something like that.
    0:43:20 No, no, no,
    0:43:21 I’m talking to
    0:43:22 you directly who’s
    0:43:23 listening to this
    0:43:24 podcast right now
    0:43:24 and probably thinks
    0:43:25 of his or himself
    0:43:27 as a pretty
    0:43:27 decent person.
    0:43:29 Then the question
    0:43:29 is, okay, what
    0:43:30 is that?
    0:43:30 A couple of
    0:43:30 signs.
    0:43:31 Well, one is
    0:43:32 we’ve been
    0:43:33 talking about it
    0:43:34 for a long
    0:43:34 time.
    0:43:35 So the alarm
    0:43:35 bells have been
    0:43:36 ringing for a
    0:43:37 long time.
    0:43:37 That’s one
    0:43:38 clear sign.
    0:43:39 In the book,
    0:43:39 I give the
    0:43:40 example of the
    0:43:40 way we treat
    0:43:41 animals.
    0:43:41 And it’s not
    0:43:42 as if these
    0:43:43 arguments are
    0:43:44 new or anything.
    0:43:44 You know, a lot
    0:43:45 of smart people
    0:43:46 have said this
    0:43:46 for a long
    0:43:47 time.
    0:43:47 You know,
    0:43:48 Jeremy Bentham
    0:43:49 already in the
    0:43:50 late 18th
    0:43:51 century wrote
    0:43:51 that, you
    0:43:52 know, it’s not
    0:43:52 about whether
    0:43:53 these animals
    0:43:54 can speak or
    0:43:54 reason or do
    0:43:55 mathematics.
    0:43:56 No, it’s about
    0:43:56 the simple
    0:43:57 question, can
    0:43:58 they suffer?
    0:43:59 And we’ve got
    0:43:59 an enormous
    0:44:00 mountain of
    0:44:01 evidence that
    0:44:02 tells us, yeah,
    0:44:03 they can probably
    0:44:04 suffer really
    0:44:04 badly.
    0:44:06 So yeah, if
    0:44:07 you eat meat
    0:44:07 and dairy
    0:44:08 today, then
    0:44:10 yeah,
    0:44:10 it’s quite
    0:44:11 likely that you’re
    0:44:11 involved in one
    0:44:12 of those moral
    0:44:12 atrocities.
    0:44:14 I’ve got a few
    0:44:15 other signs that
    0:44:15 I talk about.
    0:44:17 For example, we
    0:44:18 rationalize these
    0:44:19 kind of things by
    0:44:20 saying that they’re
    0:44:22 natural or normal
    0:44:23 or necessary.
    0:44:24 This is what
    0:44:25 Melanie Joy, the
    0:44:26 psychologist, calls
    0:44:27 the three Ns.
    0:44:28 And you look at
    0:44:29 something like
    0:44:31 slavery, and that’s
    0:44:31 also what we did
    0:44:32 back then, right?
    0:44:32 We said it was
    0:44:33 natural.
    0:44:34 Like, throughout
    0:44:35 history, every
    0:44:35 civilization has
    0:44:36 always practiced the
    0:44:37 institution of
    0:44:37 slavery.
    0:44:39 Like, it’s just
    0:44:40 what people do,
    0:44:40 right?
    0:44:41 What are you going
    0:44:42 to do about it?
    0:44:43 Or necessary, people
    0:44:44 would say.
    0:44:45 Yeah, it was just
    0:44:47 essential for the
    0:44:47 economy.
    0:44:48 If we would
    0:44:49 abolish slavery
    0:44:49 today, you know,
    0:44:50 the economy will
    0:44:51 collapse and there
    0:44:51 will be all kinds
    0:44:52 of perverse
    0:44:53 consequences.
    0:44:54 So anyway, it’s
    0:44:55 interesting to look
    0:44:55 at those signs and
    0:44:56 then think, okay,
    0:44:57 what are some of the
    0:44:58 worst things that may
    0:44:58 be happening today?
    0:45:00 There’s sort of a
    0:45:01 pet peeve I have
    0:45:02 about the way people
    0:45:03 sometimes talk about
    0:45:03 the expanding
    0:45:04 moral circle.
    0:45:06 People, I find,
    0:45:07 typically talk about
    0:45:09 it as if moral
    0:45:09 progress or the
    0:45:10 expansion of the
    0:45:11 moral circle is
    0:45:11 some sort of
    0:45:12 linear process.
    0:45:16 But to me, that
    0:45:16 seems like a very
    0:45:17 Eurocentric reading
    0:45:19 of history because
    0:45:20 there are other
    0:45:21 cultures, right?
    0:45:21 I’m thinking of the
    0:45:23 Jains in India or
    0:45:24 the Quechua people
    0:45:25 in Latin America.
    0:45:27 For them, you know,
    0:45:28 the inclusion of all
    0:45:29 animals and all
    0:45:30 nature in the
    0:45:31 moral circle has
    0:45:32 been morally
    0:45:33 obvious for a
    0:45:34 long time and
    0:45:35 that’s still not
    0:45:36 obvious to
    0:45:36 Americans.
    0:45:38 I think that’s a
    0:45:38 really good point
    0:45:39 you’re making.
    0:45:40 So historians call
    0:45:41 this the Whig
    0:45:43 view of history,
    0:45:44 you know, named
    0:45:45 after the Whigs,
    0:45:47 the political
    0:45:49 party in the
    0:45:50 UK a few
    0:45:51 centuries ago,
    0:45:52 which indeed had
    0:45:53 this Western
    0:45:55 triumphalism baked
    0:45:55 into it.
    0:45:56 Like, we know
    0:45:57 what’s right for
    0:45:59 the world and we
    0:45:59 will show the rest
    0:46:00 of the world,
    0:46:00 you know, how
    0:46:01 to be good,
    0:46:02 how to be moral.
    0:46:04 And obviously,
    0:46:06 the fight against
    0:46:07 the slave trade and
    0:46:08 slavery was essential
    0:46:08 to that.
    0:46:13 So, I have
    0:46:13 complicated views on
    0:46:14 this.
    0:46:15 There are some
    0:46:15 people who are
    0:46:16 like, look, it’s
    0:46:18 just total BS that,
    0:46:19 you know, Britain was
    0:46:20 so important in
    0:46:21 abolishing the slave
    0:46:22 trade because, you
    0:46:23 know, it was mainly
    0:46:24 the revolutions in
    0:46:25 Haiti, you know, it
    0:46:26 was enslaved people
    0:46:27 themselves who did
    0:46:27 it.
    0:46:30 So, yeah, stop
    0:46:30 with the colonialist
    0:46:31 crap.
    0:46:33 And I think that’s
    0:46:34 just not true, to
    0:46:34 be honest.
    0:46:37 People who have
    0:46:38 been suffering from
    0:46:39 slavery and the
    0:46:40 slave trade, you
    0:46:40 know, they’ve always
    0:46:42 revolted, obviously,
    0:46:42 you know, from
    0:46:43 Spartacus onwards.
    0:46:46 One in ten slave
    0:46:47 voyages saw a
    0:46:48 revolt.
    0:46:49 But the reality is
    0:46:50 that this system was
    0:46:51 so horrible, and
    0:46:52 not just in the
    0:46:53 West, in the
    0:46:54 colonies in the
    0:46:55 Caribbean, but in
    0:46:55 many places around
    0:46:57 the globe, that
    0:46:58 yeah, abolitionism
    0:47:00 was for a long
    0:47:00 time unthinkable.
    0:47:02 And it was really
    0:47:03 a new idea that
    0:47:05 originated among
    0:47:07 Anglo-Saxon
    0:47:09 Protestants, first
    0:47:10 the Quakers, and
    0:47:10 then also the
    0:47:11 Evangelicals, this
    0:47:13 new idea that you
    0:47:13 could actually
    0:47:15 abolish slavery as
    0:47:16 an institution.
    0:47:17 It was really a
    0:47:17 small group of
    0:47:18 people who had
    0:47:19 this crazy idea.
    0:47:21 And then because
    0:47:21 they did it in
    0:47:22 Britain, and they
    0:47:23 were successful in
    0:47:24 Britain, then that
    0:47:25 country was able to
    0:47:27 use its power on
    0:47:28 the Seven Seas, the
    0:47:30 Royal Navy, to
    0:47:31 force a huge
    0:47:31 amount of other
    0:47:32 countries to also
    0:47:33 stop slavery, slave
    0:47:34 trading.
    0:47:34 So the Netherlands,
    0:47:36 where I’m from, we
    0:47:37 didn’t abolish the
    0:47:38 slave trade on our
    0:47:38 own.
    0:47:38 Like, we were
    0:47:39 making a lot of
    0:47:40 money and enjoying
    0:47:41 it quite immensely.
    0:47:43 But then, you know,
    0:47:44 these moralistic
    0:47:46 British people came
    0:47:46 along and, okay,
    0:47:47 okay, we will
    0:47:48 abolish it.
    0:47:49 And that happened
    0:47:50 again and again.
    0:47:51 The irony is,
    0:47:52 obviously, that this
    0:47:53 was, again, also an
    0:47:54 excuse for more
    0:47:55 colonialism, so
    0:47:57 that, you know, some
    0:47:58 new horrors grew out
    0:47:59 of that, that under
    0:47:59 the banner of
    0:48:01 anti-slavery, a new
    0:48:03 colonial era dawned
    0:48:04 and the whole
    0:48:05 scramble for Africa
    0:48:05 happened.
    0:48:07 So I really don’t
    0:48:09 want to, you know,
    0:48:09 suggest that there
    0:48:10 is some natural
    0:48:11 progress in history.
    0:48:13 If the arc of
    0:48:15 justice bends, or if
    0:48:16 the arc of history
    0:48:16 bends towards
    0:48:18 justice, then it’s
    0:48:20 because, like, people
    0:48:20 do that.
    0:48:21 And if we don’t
    0:48:22 keep bending it, it
    0:48:23 might easily snap
    0:48:24 back.
    0:48:25 And there’s really
    0:48:27 no natural order
    0:48:28 of things here.
    0:48:29 And indeed, in some
    0:48:31 ways, we’ve made,
    0:48:32 what’s the opposite
    0:48:32 of progress?
    0:48:33 What’s the English
    0:48:33 word?
    0:48:34 Backsliding.
    0:48:35 Yeah, we’ve been
    0:48:36 backsliding.
    0:48:37 And I think animals
    0:48:38 is a great example.
    0:48:40 Imagine a world where
    0:48:40 the Industrial
    0:48:41 Revolution would have
    0:48:42 happened in India.
    0:48:43 I mean, maybe we
    0:48:43 wouldn’t have
    0:48:45 ended up with these
    0:48:46 horrible systems of
    0:48:47 factory farming.
    0:48:49 It could have been
    0:48:51 so much better.
    0:48:53 Yeah, when I think
    0:48:55 about progress, I
    0:48:56 mean, I think of it
    0:48:58 as, first of all, like,
    0:48:58 who gets to define
    0:48:59 what’s progress?
    0:49:01 I think that depends a
    0:49:01 lot on who’s in power
    0:49:02 and who’s defining it.
    0:49:05 But I don’t see it as
    0:49:06 a sort of straight line
    0:49:07 linearly going up.
    0:49:08 I very much see it as
    0:49:09 a messy squiggle.
    0:49:11 And it’s entirely
    0:49:12 plausible to me that
    0:49:15 in 100 years, we will
    0:49:17 have expanded our
    0:49:18 moral circle in some
    0:49:19 ways and given more
    0:49:19 rights to certain
    0:49:20 non-human beings.
    0:49:22 You know, for example,
    0:49:24 that we’ve abolished
    0:49:25 factory farming and we
    0:49:26 are treating animals
    0:49:28 great, even as we’re
    0:49:30 now really repressing
    0:49:31 certain classes of
    0:49:31 human beings.
    0:49:33 Does that prediction
    0:49:35 sound plausible to you?
    0:49:35 Oh, no, no.
    0:49:36 I’m not making any
    0:49:37 predictions here.
    0:49:38 I think the future
    0:49:39 could be much worse
    0:49:39 than today.
    0:49:42 For me, that’s one of
    0:49:42 the main lessons of
    0:49:43 history.
    0:49:44 Things can change
    0:49:46 quite radically, for
    0:49:46 better or for worse.
    0:49:47 I’m pretty sure
    0:49:49 that if you had
    0:49:50 talked to, you
    0:49:51 know, most Germans in
    0:49:53 the 1920s, I mean,
    0:49:53 they couldn’t have
    0:49:54 imagined, like, the
    0:49:55 terrible abyss that
    0:49:56 was ahead of them.
    0:49:58 If I look at the U.S.
    0:50:00 today, I am really
    0:50:01 pessimistic, to be
    0:50:01 honest.
    0:50:03 I think there’s a real
    0:50:04 threat of democracy
    0:50:06 breaking down, and I
    0:50:07 think that things can
    0:50:08 get much, much worse
    0:50:10 quite soon, actually.
    0:50:11 Mm-hmm.
    0:50:13 Let’s talk about
    0:50:14 what’s ahead for you
    0:50:15 personally.
    0:50:18 Maybe you have a little
    0:50:19 more ability to predict
    0:50:20 that, potentially.
    0:50:21 It, you know, it
    0:50:22 strikes me with your
    0:50:24 book, like, you could
    0:50:25 have been like, look,
    0:50:26 I’m happy, I’m
    0:50:27 content to just write a
    0:50:27 book about moral
    0:50:28 ambition, leave it at
    0:50:29 that, you know.
    0:50:31 But you did not just
    0:50:31 leave it at that, you
    0:50:33 also decided to co-found
    0:50:34 something that you
    0:50:34 mentioned earlier.
    0:50:35 It’s called the School
    0:50:36 for Moral Ambition.
    0:50:38 What is that, and how
    0:50:39 did that get started?
    0:50:40 I was at a point in
    0:50:42 my career where I
    0:50:43 looked at what I
    0:50:44 had, you know, a bit
    0:50:44 of a platform.
    0:50:46 I think I have the
    0:50:48 ability to, you know,
    0:50:50 write things that
    0:50:51 perhaps some people
    0:50:51 want to read.
    0:50:54 But I also felt this
    0:50:57 itch, right, and felt
    0:50:58 a little bit fed up
    0:50:58 with myself.
    0:51:00 And I was hugely
    0:51:02 inspired by, for
    0:51:03 example, what Ralph
    0:51:03 Nader did in the
    0:51:05 60s and the 70s, that
    0:51:06 he was able to build
    0:51:08 this beacon, this
    0:51:08 magnet for very
    0:51:09 driven and talented
    0:51:10 people to work on
    0:51:11 some of the most
    0:51:12 pressing issues.
    0:51:15 Throughout history, I
    0:51:16 think we’ve seen
    0:51:17 movements that have
    0:51:18 been successful at
    0:51:19 redefining what it
    0:51:20 means to be
    0:51:20 successful.
    0:51:21 That was one of the
    0:51:23 epiphanies I had when
    0:51:24 I studied the British
    0:51:25 abolitionist movement,
    0:51:26 is they were actually
    0:51:27 part of a much bigger
    0:51:29 societal shift that
    0:51:30 was all about making
    0:51:30 doing good more
    0:51:31 fashionable.
    0:51:33 So I guess that’s
    0:51:34 what we are betting
    0:51:34 on.
    0:51:36 Again, we are trying
    0:51:37 to build that
    0:51:37 magnet.
    0:51:38 We are trying to
    0:51:39 redefine what it
    0:51:40 means to be
    0:51:41 successful.
    0:51:42 So we do a couple
    0:51:43 of things.
    0:51:45 One is we organize
    0:51:45 these so-called
    0:51:46 moral ambition
    0:51:46 circles.
    0:51:47 They’re groups of
    0:51:48 five to eight
    0:51:49 people who want to
    0:51:50 explore what a
    0:51:50 morally ambitious
    0:51:51 life could mean for
    0:51:51 them.
    0:51:54 This is all freely
    0:51:55 accessible on our
    0:51:56 website, moralambition.org.
    0:51:57 And at the same
    0:51:58 time, we organize
    0:51:59 so-called moral
    0:52:00 ambition fellowships.
    0:52:03 And you could see
    0:52:04 them as small SWAT
    0:52:06 teams of extremely
    0:52:08 talented, very driven
    0:52:09 people who have
    0:52:10 agreed to quit their
    0:52:13 job, follow Gandalf,
    0:52:15 and work on some of
    0:52:16 the most important
    0:52:17 global problems.
    0:52:18 We got started in
    0:52:18 Europe.
    0:52:20 No, no, no, no, no,
    0:52:20 no, no.
    0:52:21 I’m not coming up with
    0:52:22 the mission statements.
    0:52:24 It’s actually our
    0:52:25 researchers who are
    0:52:25 our Gandalfs.
    0:52:26 I’m more like the
    0:52:27 Muppet, you know?
    0:52:30 Like the mascot, you
    0:52:31 know, in the silly
    0:52:33 suit, right?
    0:52:35 That’s me who walks
    0:52:36 on the field before
    0:52:37 the match gets
    0:52:37 started.
    0:52:38 That’s my job.
    0:52:41 But, yeah, so we
    0:52:42 asked our researchers
    0:52:43 what are some of the
    0:52:44 most important things
    0:52:45 we can do in
    0:52:45 Brussels.
    0:52:46 And to my big
    0:52:47 surprise, actually,
    0:52:47 one of the things
    0:52:48 they advised us is to
    0:52:49 work on fighting big
    0:52:50 tobacco.
    0:52:51 It’s the single
    0:52:52 largest preventable
    0:52:53 cause of disease
    0:52:54 still today.
    0:52:55 Eight million
    0:52:55 deaths every year,
    0:52:56 and very
    0:52:57 few people are
    0:52:58 working on
    0:52:59 countering it.
    0:53:00 So we’ve been
    0:53:02 recruiting corporate
    0:53:02 lawyers,
    0:53:03 marketeers.
    0:53:04 Actually, we’ve got
    0:53:05 someone in our
    0:53:06 last cohort who
    0:53:07 used to work for
    0:53:09 Big Tobacco, and
    0:53:10 now they’re applying
    0:53:11 their skills and
    0:53:12 their talents to
    0:53:13 doing a lot of
    0:53:13 good.
    0:53:15 And, yeah, we
    0:53:16 want to scale up
    0:53:17 this machine.
    0:53:18 Obviously, the point
    0:53:19 is that it is very
    0:53:20 hard to get into
    0:53:21 one of our
    0:53:22 fellowships because
    0:53:22 we want to make
    0:53:23 it more prestigious.
    0:53:24 You went to
    0:53:24 Harvard.
    0:53:25 Okay, well,
    0:53:25 that’s not
    0:53:26 nearly enough.
    0:53:27 That’s nice, but
    0:53:30 yeah,
    0:53:32 it’s quite
    0:53:32 extraordinary, I
    0:53:33 think, the
    0:53:34 groups that we
    0:53:35 are now bringing
    0:53:36 together.
    0:53:38 I think because of
    0:53:39 two reasons.
    0:53:39 One, because we
    0:53:40 want to make doing
    0:53:41 good more
    0:53:41 prestigious and
    0:53:43 more fashionable.
    0:53:44 The other thing is
    0:53:45 that we genuinely
    0:53:46 believe that if you’re
    0:53:47 very selective,
    0:53:48 some very
    0:53:49 entrepreneurial people
    0:53:50 can just do so
    0:53:51 much.
    0:53:51 Where is the
    0:53:52 School for
    0:53:53 Moral Ambition
    0:53:54 getting all the
    0:53:55 funding, getting
    0:53:56 the money to be
    0:53:56 able to pay
    0:53:57 people to quit
    0:53:57 their jobs?
    0:53:59 Mostly from me
    0:53:59 now.
    0:54:02 Everything I earn
    0:54:02 with the book is
    0:54:03 going all into the
    0:54:04 movement.
    0:54:06 So that’s been
    0:54:06 helpful.
    0:54:07 And we’ve got a
    0:54:09 group of entrepreneurs
    0:54:09 supporting us as
    0:54:10 well.
    0:54:11 So these are
    0:54:11 people who have
    0:54:12 indeed built their
    0:54:13 own companies and
    0:54:14 who are looking
    0:54:16 to climb, as
    0:54:17 David Brooks would
    0:54:18 say, their second
    0:54:18 mountain.
    0:54:19 You know, you
    0:54:20 mentioned that
    0:54:21 the School for
    0:54:22 Moral Ambition is
    0:54:24 highly sort of
    0:54:25 competitive to get
    0:54:25 in.
    0:54:27 And most of the
    0:54:28 listeners won’t end
    0:54:29 up going to the
    0:54:31 school, but I am
    0:54:32 kind of interested to
    0:54:32 hear that you’re
    0:54:33 also promoting
    0:54:34 these moral ambition
    0:54:35 circles that people
    0:54:35 can start with
    0:54:36 their friends.
    0:54:38 I personally am not
    0:54:40 really sold on the
    0:54:41 idea of maximizing,
    0:54:42 like do the most
    0:54:44 good possible as my
    0:54:45 entire guiding
    0:54:46 philosophy for life,
    0:54:47 but I am attracted
    0:54:49 to the idea of
    0:54:50 trying to do more
    0:54:51 good.
    0:54:52 Exactly.
    0:54:53 Right?
    0:54:54 We’re totally on
    0:54:54 the same page.
    0:54:55 Yeah.
    0:54:58 And I very much
    0:54:59 think I could enjoy
    0:55:00 kind of just sitting
    0:55:01 with five or six
    0:55:02 friends on a regular
    0:55:03 basis and trying to
    0:55:04 challenge each other
    0:55:05 to be more
    0:55:06 intentional about
    0:55:07 whatever the values
    0:55:08 are that we do
    0:55:09 believe in, right?
    0:55:09 Yeah.
    0:55:11 So maybe one way
    0:55:12 to say this,
    0:55:12 Sigal, is that
    0:55:15 when I talk to
    0:55:15 some of my banker
    0:55:17 friends, I’m not
    0:55:18 inclined to talk
    0:55:18 about all these
    0:55:19 drowning children
    0:55:20 in shallow
    0:55:21 ponds, right?
    0:55:23 I’m also not
    0:55:24 inclined to talk
    0:55:25 in a more leftist
    0:55:25 way and say,
    0:55:25 oh, you’re so
    0:55:26 bad, you’re so
    0:55:27 greedy.
    0:55:30 What I’ve
    0:55:31 discovered is
    0:55:32 that it’s much
    0:55:32 more effective to
    0:55:33 say something
    0:55:34 like, oh,
    0:55:35 wow, you’re so
    0:55:36 talented, you’re so
    0:55:38 experienced, and
    0:55:39 this is what you’re
    0:55:39 doing?
    0:55:40 Boring.
    0:55:43 And that hurts
    0:55:44 them much more
    0:55:46 in my experience.
    0:55:47 And it’s also
    0:55:48 honestly what I
    0:55:48 believe.
    0:55:50 Yeah, people
    0:55:50 really don’t like
    0:55:51 to be boring.
    0:55:54 I will say this
    0:55:55 conversation has
    0:55:55 been far from
    0:55:56 boring.
    0:55:57 I really enjoyed
    0:55:58 chatting with you
    0:55:59 and reading your
    0:55:59 book.
    0:56:00 It’s called
    0:56:01 Moral Ambition.
    0:56:03 Rutger, just
    0:56:03 want to say thank
    0:56:04 you so much for
    0:56:05 being on our
    0:56:05 show.
    0:56:06 Thanks for
    0:56:06 having me.
    0:56:15 I hope you
    0:56:15 enjoyed this
    0:56:15 episode.
    0:56:16 I know I
    0:56:17 enjoyed wrestling
    0:56:17 with all these
    0:56:18 ideas.
    0:56:19 And while I
    0:56:20 don’t think I’ll
    0:56:20 be enrolling at
    0:56:21 the School for
    0:56:21 Moral Ambition,
    0:56:23 I will consider
    0:56:23 setting up a
    0:56:24 moral ambition
    0:56:25 circle with my
    0:56:25 friends.
    0:56:26 But as always,
    0:56:27 we want to know
    0:56:28 what you think,
    0:56:29 so drop us a
    0:56:29 line at
    0:56:31 thegrayarea@vox.com
    0:56:33 or leave us a
    0:56:33 message on our
    0:56:34 new voicemail
    0:56:35 line at
    0:56:38 1-800-214-5749.
    0:56:39 And once you’re
    0:56:40 finished with that,
    0:56:41 go ahead and rate
    0:56:42 and review and
    0:56:43 subscribe to the
    0:56:43 podcast.
    0:56:45 This episode was
    0:56:46 produced by Beth
    0:56:47 Morrissey, edited
    0:56:48 by Jorge Just,
    0:56:49 engineered by
    0:56:50 Christian Ayala,
    0:56:51 fact-checked by
    0:56:52 Melissa Hirsch,
    0:56:53 and Alex Overington
    0:56:54 wrote our theme
    0:56:54 music.
    0:56:56 The episode was
    0:56:56 hosted by me,
    0:56:57 Sigal Samuel.
    0:56:58 I’m a senior
    0:56:59 reporter at
    0:57:00 Vox’s Future
    0:57:01 Perfect, where I
    0:57:02 cover AI,
    0:57:03 neuroscience, and a
    0:57:03 whole lot more.
    0:57:05 You can read
    0:57:05 my writing at
    0:57:06 vox.com
    0:57:07 slash future
    0:57:07 perfect.
    0:57:09 Also, if you
    0:57:10 want to learn
    0:57:10 more about
    0:57:11 effective altruism
    0:57:11 and the
    0:57:12 drowning child
    0:57:13 thought experiment,
    0:57:14 check out
    0:57:15 Vox’s Good
    0:57:16 Robot podcast
    0:57:16 series.
    0:57:17 I highly
    0:57:17 recommend it.
    0:57:18 We’ll drop a
    0:57:19 link to that
    0:57:19 in the show
    0:57:19 notes.
    0:57:22 New episodes of
    0:57:22 The Gray Area
    0:57:23 drop on Mondays.
    0:57:24 Listen and
    0:57:25 subscribe.
    0:57:26 The show is
    0:57:27 part of Vox.
    0:57:28 Support Vox’s
    0:57:29 journalism by
    0:57:29 joining our
    0:57:30 membership program
    0:57:30 today.
    0:57:31 Go to
    0:57:32 vox.com
    0:57:33 slash members
    0:57:34 to sign up.
    0:57:35 And if you
    0:57:36 decide to sign
    0:57:36 up because of
    0:57:37 this show,
    0:57:38 let us know.

    We’re told from a young age to achieve. Get good grades. Get into a good school. Get a good job. Be ambitious about earning a high salary or a high-status position.

    Some of us love this endless climb. But lots of us, at least once in our lives, find ourselves asking, “What’s the point of all this ambition?”

    Historian and author Rutger Bregman doesn’t think there is a point to that kind of ambition. Instead, he wants us to be morally ambitious, to measure the value of our achievements based on how much good we do, by how much we improve the world.

    In this episode, Bregman speaks with guest host Sigal Samuel about how to know if you’re morally ambitious, the value of surrounding yourself with like-minded people, and how to make moral ambition fashionable.

    Host: Sigal Samuel, Vox senior reporter

    Guest: Rutger Bregman, historian, author of Moral Ambition, and co-founder of The School for Moral Ambition

    Listen to The Gray Area ad-free by becoming a Vox Member: vox.com/members

    Show Notes

    Vox’s Good Robot series can be found here:

    Episode 1

    Episode 2

    Episode 3 (discusses the “drowning child thought experiment” and effective altruism)

    Episode 4

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • The science of ideology

    AI transcript
    0:00:04 We all have bad days, and sometimes bad weeks, and maybe even bad years.
    0:00:08 But the good news is we don’t have to figure out life all alone.
    0:00:11 I’m comedian Chris Duffy, host of TED’s How to Be a Better Human podcast.
    0:00:15 And our show is about the little ways that you can improve your life,
    0:00:19 actual practical tips that you can put into place that will make your day-to-day better.
    0:00:23 Whether it is setting boundaries at work or rethinking how you clean your house,
    0:00:29 each episode has conversations with experts who share tips on how to navigate life’s ups and downs.
    0:00:32 Find How to Be a Better Human wherever you’re listening to this.
    0:01:05 A word you hear a lot these days is ideology.
    0:01:12 In fact, you could argue this is the political term of the moment.
    0:01:21 When Trump is denouncing the left, he’s talking about gender ideology or critical race theory or DEI.
    0:01:28 When the left is denouncing Trump, they’re talking about fascism or Project 2025.
    0:01:35 Wherever you look, ideology is being used to explain or justify policies.
    0:01:44 And buried in all that is an unstated assumption that the real ideologues are on the other side.
    0:01:51 Often, to call someone ideological is to imply that they’re fanatical or dogmatic.
    0:01:57 Most of us don’t think of ourselves as ideological for that reason.
    0:02:02 And if someone does call you an ideologue, you might recoil a little bit.
    0:02:10 I mean, sure, you have beliefs, you have a worldview, but you’re not an ideologue, right?
    0:02:15 Maybe this isn’t the best way to think about ideology.
    0:02:20 Maybe we don’t really know what we’re talking about when we talk about ideology.
    0:02:26 Is it possible that we’re all ideological in ways we don’t recognize?
    0:02:33 And if we could see ourselves a little more clearly, might that help us see others more clearly?
    0:02:39 I’m Sean Illing, and this is The Gray Area.
    0:02:46 Today’s guest is Leor Zmigrod.
    0:02:50 She’s a cognitive neuroscientist and the author of The Ideological Brain.
    0:02:56 The book makes the case that our political beliefs aren’t just beliefs.
    0:03:01 They’re neurological signatures written into our neurons and reflexes.
    0:03:08 That’s a fancy way of saying that how we think and what we believe is a product of the way our brains are wired.
    0:03:16 To be clear, she isn’t saying that our beliefs are entirely shaped by our biology.
    0:03:19 The point isn’t that brain is destiny.
    0:03:27 But she is saying that the way our brains handle change and uncertainty may shape not only the beliefs we adopt,
    0:03:30 but how fiercely we cling to those beliefs.
    0:03:37 A book like this feels especially relevant in such a polarized moment,
    0:03:46 because it’s hard to imagine bridging the divides in our society without understanding ourselves and each other much better.
    0:03:50 And part of that understanding is knowing what’s really motivating us.
    0:03:57 Leor Zmigrod, welcome to the show.
    0:04:00 Maybe I will ask you to say that again.
    0:04:01 Oh, did I mess it up?
    0:04:02 I knew I was going to do it.
    0:04:04 I knew I was going to do it.
    0:04:06 I was in my head.
    0:04:08 All right, let’s try to get home.
    0:04:11 Leor Zmigrod, welcome to the show.
    0:04:12 Great to be here.
    0:04:14 I totally got it right that time, right?
    0:04:15 Yeah, you did.
    0:04:15 You did.
    0:04:15 You did great.
    0:04:16 All right.
    0:04:23 This is a very interesting book, full of a lot of provocative, compelling claims.
    0:04:26 And we are going to get to all of that.
    0:04:34 Before we do, I am just curious, what drew you to this question?
    0:04:36 Why ideology?
    0:04:40 Well, in many ways, ideology is all around us.
    0:04:43 But often, we don’t really know what it is, right?
    0:06:49 We kind of say, well, an ideology is just a system of beliefs or just a kind of insult
    0:06:54 we use to kind of demean someone who believes something totally different to us, which we think
    0:04:55 is wrong.
    0:05:01 And I was really interested in delving into what it means to think ideologically and what
    0:05:07 it means for a brain to really be immersed in ideology and whether that’s a kind of experience
    0:05:13 that can change the brain, that certain brains might be more prone to taking an ideology and
    0:05:16 kind of embracing it in an extreme and intense way.
    0:05:23 And that’s why in the book, in The Ideological Brain, I really delve into this question of
    0:05:29 what makes people gravitate towards ideologies and what is it about some brains that makes them
    0:05:30 especially susceptible?
    0:05:36 And in doing that, I’m really interested in thinking about ideology in a more precise way
    0:05:41 than we typically think about it as, which is not just as a broad system of beliefs floating
    0:05:48 above our heads in an ambiguous way or something that’s purely historical or sociological, but
    0:05:53 it’s something that’s really deeply psychological and that we can see inside people’s brains.
    0:05:55 Well, let’s take it step by step.
    0:05:58 What does it mean to think ideologically?
    0:05:59 What is ideology?
    0:06:02 How are you defining it?
    0:06:05 And how is that different from how people typically define it?
    0:06:10 So the way I think about ideology is as really being comprised of two components.
    0:06:17 One is a very fixed doctrine, a kind of set of descriptions about the world that’s very
    0:06:22 absolutist, that’s very black and white, and that is very resistant to evidence.
    0:06:28 So an ideology will always have a certain kind of causal narrative about the world that describes
    0:06:31 what the world is like and also how we should act within that world.
    0:06:36 It gives prescriptions for how we should act, how we should think, how we should interact
    0:06:36 with other people.
    0:06:39 But that’s not the end of the story.
    0:06:44 To think ideologically is both to have this fixed doctrine and also to have a very fixed
    0:06:48 identity that you really kind of judge everyone with.
    0:06:54 And that fixed identity stems from the fact that every ideology, every doctrine will have
    0:06:55 believers and non-believers.
    0:07:04 And so when you think ideologically, you’re really embracing those rigid identity categories and
    0:07:09 deciding to exclusively affiliate with people who believe in your ideology and really reject
    0:07:10 anyone who doesn’t.
    0:07:17 The degree of ideological extremity can really be mapped onto how hostile you are to anyone with
    0:07:21 differing beliefs, whether you’re willing to potentially harm people in the name of your
    0:07:22 ideology.
    0:07:28 You write that, and now I’m quoting, not all stories are ideologies and not all forms
    0:07:32 of collective storytelling are rigid and oppressive, end quote.
    0:07:34 How do you tell the difference?
    0:07:39 How do you, for instance, distinguish an ideology from a religion?
    0:07:43 Is there even room for a distinction like that in your framework?
    0:07:50 What I think about often is the difference between ideology and culture, because culture can encompass
    0:07:56 eccentricities, it can encompass deviation, different kinds of traditions or patterns from the
    0:07:57 past.
    0:08:03 But it’s not about legislating what one can do or what one can’t do.
    0:08:08 The moment we detect an ideology is the moment when you have very rigid prescriptions about what
    0:08:10 is permissible and what is not permissible.
    0:08:17 And when you stop being able to tolerate any deviation, that’s when you’ve moved from culture,
    0:08:23 which can encompass a lot of deviation and kind of reinterpretations, where as an ideology,
    0:08:29 there is no room for those kinds of nonconformities or differences.
    0:08:34 What you’re doing here and what you do in the book that is interesting to me, and novel as
    0:08:42 far as I know, is this reframing of ideology more as a style of thinking rather than just
    0:08:44 a set of beliefs.
    0:08:49 I mean, as you know, like the conventional way to think about ideology has always been to focus
    0:08:53 on the content, on what people believe, not how they think.
    0:08:55 And you flipped us around.
    0:09:02 What does this understanding let us see that other definitions missed?
    0:09:10 What that inversion reveals is that embracing an ideology in an extreme way and thinking really
    0:09:13 about what are the mechanics of thinking ideologically?
    0:09:16 What are the ways in which reason gets shifted?
    0:09:18 How emotion gets distorted?
    0:09:25 How our biological and kind of even physiological responses to the world get distorted is that
    0:09:31 we stop thinking about ideologies as things that just envelop us from outside and that just
    0:09:35 kind of are almost tipped into us by external forces.
    0:09:42 And we start to see how it’s a much more dynamic process and that we can even see parallels between
    0:09:47 ideologues who believe in very different things and partisans to completely different parties to
    0:09:53 different missions, but that really it’s how they think that’s very similar, even if what
    0:09:54 they think is very different.
    0:09:59 I mean, some people might be more ideological than others, but does everyone more or less
    0:10:05 have an ideology, even if they don’t think of themselves as having an ideology?
    0:10:12 I kind of think about ideological thinking as something more specific, that it’s this antagonism
    0:10:19 to evidence, this very kind of tight embrace of a particular narrative about the world and
    0:10:23 rules about how the world works and how you should behave within that world.
    0:10:31 And so when we think about it as that kind of fixed, rigid set of behaviors, of compulsions,
    0:10:35 we see that not everyone is obviously equally ideological.
    0:10:41 And I don’t know whether there’s a perfect human being completely without any ideology,
    0:10:44 but in the book I do talk about, you don’t think so?
    0:10:45 I don’t think so.
    0:10:46 We’ll get there, but I don’t think so.
    0:10:54 I think that you can be a lot less ideological and that, that’s almost the challenge that I
    0:10:59 talk about in the book is what does it mean to think non-ideologically about the world,
    0:11:02 maybe anti-ideologically about the world?
    0:11:04 And what does that look like?
    0:11:09 Well, tell me how you test for cognitive flexibility versus rigidity.
    0:11:11 What kind of survey work did you do?
    0:11:12 What kind of lab work?
    0:11:18 So in order to test someone’s cognitive rigidity or their flexibility, one of the most important
    0:11:24 things is not just to ask them because people are terrible at knowing whether they’re rigid
    0:11:24 or flexible.
    0:11:29 The most rigid thinkers will tell you they’re fabulously flexible and the most flexible thinkers
    0:11:30 will not know it.
    0:11:34 And so that’s why we need to use these kind of unconscious assessments, these cognitive
    0:11:41 tests and games that tap into your natural kind of capacity to be adaptable or to resist
    0:11:42 change.
    0:11:49 And so one test to do this is called the Wisconsin Card Sorting Test, which is a card sorting game
    0:11:52 where people are presented with a deck of cards that they need to sort.
    0:11:57 And initially they don’t know what the rule that governs the game is, so they try and
    0:11:58 figure it out.
    0:12:02 And quickly they’ll realize that they should match the cards in their deck according to
    0:12:02 their color.
    0:12:07 So they’ll start putting a blue card with a blue card, a red card with a red card, and
    0:12:10 they’ll get affirmation, the kind of positive feedback that they’re doing it right.
    0:12:15 And so they start enacting this rule, adopting it, kind of applying it again and again and again.
    0:12:20 And after a while, unbeknownst to them, the rule of the game changes, and suddenly this
    0:12:22 color rule doesn’t work anymore.
    0:12:28 And so that’s the moment of change that I’m most interested in, because some people will
    0:12:30 notice that change and they will adapt.
    0:12:33 They will then go looking for a different rule and they’ll quickly figure out that they
    0:12:37 should actually sort now the cards according to the shape of the objects on the card.
    0:12:39 And fine, they’ll follow this new rule.
    0:12:42 Those are very cognitively flexible individuals.
    0:12:47 But there are other people who will notice that change and they will hate it.
    0:12:48 They will resist that change.
    0:12:54 They will try to say that it never happened and they’ll try to apply the old rule despite
    0:12:57 getting negative feedback, despite being told that they’re doing it wrong.
    0:13:03 And those people that really resist the change are the most cognitively rigid people, that they
    0:13:04 don’t like change.
    0:13:08 They don’t adapt their behavior when the evidence suggests that they do.
    0:13:13 And what’s interesting about this kind of task is that it’s not related to politics at
    0:13:14 all, right?
    0:13:19 It’s just a game that taps into how people are responding to information, responding to
    0:13:20 rules, responding to change.
    0:13:27 And we see how people’s behavior on this kind of game really predicts their ideological
    0:13:28 rigidities too.
    0:13:34 Can we say that the point here is that if someone really struggles to switch gears,
    0:13:40 in a card sorting game like that, that that says something about their comfort with change
    0:13:42 and ambiguity in general.
    0:13:48 And someone who struggles with change and ambiguity in a card game will probably also have an aversion
    0:13:54 to pluralism in politics because their brain processes that as chaotic.
    0:13:58 I mean, is that a fair summary of the argument or the logic?
    0:14:04 Yeah, broadly it is, because people who resist that change, who resist the uncertainty, who’d like
    0:14:08 things to stay the same, that when the rules change, they really don’t like it.
    0:14:15 Often that can be translated into, you know, the most cognitively rigid people don’t like
    0:14:17 plurality, don’t like debate.
    0:14:26 They like a kind of singular source of information, a singular argument about a single theory of
    0:14:26 everything.
    0:14:33 But that can also, that can really coexist on both sides of the political spectrum.
    0:14:40 So when we’re talking about diversity, like that can be a more politicized concept that
    0:14:48 you can still find very rigid thinkers being very militant about certain ideas that we might
    0:14:48 say are progressive.
    0:14:50 So it’s quite nuanced.
    0:14:58 Are there particular habits of mind or patterns of behavior that you’d consider warning signs
    0:15:02 of overly rigid thinking, things that people can notice in themselves?
    0:15:09 Well, it’s funny that you say habits of mind, because in many ways, I think that habits are
    0:15:11 the biggest culprits here.
    0:15:15 You know, we live in a society that constantly talks about how good it is to have habits and
    0:15:18 to have routines that you repeat over and over again.
    0:15:24 But actually, habits are the way in which we become more rigid because we become less
    0:15:25 sensitive to change.
    0:15:28 We want to repeat things exactly in the same way.
    0:15:35 And so probably the first step, if you’re wanting to be more flexible in the way you approach
    0:15:41 the world, is to take all your habits and routines and interrogate them and think about what it
    0:15:47 does to you to be repeating constantly rather than to be exploring and navigating change.
    0:15:54 I mean, I think it’s intuitively easy to understand why being extremely rigid would be a bad thing.
    0:15:59 Is it possible to be too flexible?
    0:16:02 Like, what does that look like at the extreme of flexibility?
    0:16:08 If you’re just totally unmoored and just like permanently wide open and like incapable of settling
    0:16:12 on anything, that seems bad in a different way.
    0:16:12 Yeah.
    0:16:13 Yeah.
    0:16:19 And what that is, is a kind of immense persuadability, but that’s not flexibility, right?
    0:16:25 So there is a distinction there because being flexible is about updating your beliefs in line
    0:16:30 of credible evidence, not necessarily adopting a belief just because some authority says so,
    0:16:34 but it’s about, you know, seeing the evidence and responding to it.
    0:16:40 You write that we possess beliefs, but we can also be possessed by them.
    0:16:46 And, you know, that reminds me of Carl Jung’s claim that, you know, we don’t have ideas, ideas
    0:16:46 have us.
    0:16:49 But what are you getting at here?
    0:16:53 Like, what does it mean to say that we’re possessed by beliefs?
    0:16:58 Does that mean that we are being animated and controlled by them unconsciously?
    0:17:00 Or is it something different?
    0:17:10 I think that it means that... I’ll pause here to think about the best way to put it, because it’s such
    0:17:11 a massive question.
    0:17:11 Yeah.
    0:17:18 What we see with this science, with the science I’ve been involved in called political neuroscience,
    0:17:24 where we use neuroscientific methods to study these questions about people’s political beliefs
    0:17:30 and identities, is that the degree to which you espouse really dogmatic ideological beliefs
    0:17:38 can get reflected in your body, in your neurobiology, in the way in which your brain responds to the
    0:17:40 world at very unconscious levels.
    0:17:43 And so it becomes a part of us.
    0:17:53 And so there’s a kind of, I’m losing the word, but there’s a kind of expansion or echoing of your
    0:18:00 thought patterns, not just in politics: they become part of how you think about anything in the
    0:18:04 world and how your body responds and reacts to anything in the world.
    0:18:07 And so our politics are not just things outside of us.
    0:18:11 They’re really part of how the human body starts to function.
    0:18:14 So you think ideologies can really change us physiologically?
    0:18:20 What we see in a lot of studies is that, and this is obviously a growing field and there are many more
    0:18:27 studies to conduct, but what we see across these experiments is that ideology really conditions
    0:18:29 your physiological responses to the world.
    0:18:36 So in one experiment, they looked at how much you justify existing systems and existing inequalities.
    0:18:44 So some people think that very stark inequalities are bad and unnatural and maybe things that should be
    0:18:48 corrected, whereas others think that inequalities are fine.
    0:18:52 They’re natural parts of human life and maybe that they’re even good, that they’re desirable things
    0:18:53 to have in society.
    0:19:00 And what we see is that people who believe that inequalities are bad, when they look
    0:19:06 at videos of injustice taking place, of someone, for instance, discussing
    0:19:11 their experience of homelessness and the adversity of that, their whole bodies react: their heart
    0:19:16 rates accelerate, their physiological markers of arousal really spike.
    0:19:22 Because they’re biologically disturbed by what they’re seeing, they’re disturbed physically
    0:19:24 by the injustice that they see.
    0:19:31 In contrast, people who believe that those inequalities are fine, that they’re justifiable, that they
    0:19:36 should not change at all, and that we should continue to have stark inequalities in society,
    0:19:41 those people, when they see that injustice, their bodies are numb.
    0:19:43 They’re physiologically unmoved.
    0:19:47 They will not biologically be disturbed by the injustice that they see in front of them.
    0:19:55 And so you really see how ideology conditions even our most unconscious, rapid physiological responses.
    0:20:16 Support for the gray area comes from Mint Mobile.
    0:20:19 There are a couple ways people say data.
    0:20:20 There’s data.
    0:20:21 Then there’s data.
    0:20:22 Me, personally?
    0:20:24 I say data.
    0:20:25 I think.
    0:20:26 Most of the time.
    0:20:31 But no matter how you pronounce it, it doesn’t change the fact that most data plans cost an
    0:20:32 arm and a leg.
    0:20:36 But with Mint Mobile, they offer plans starting at just $15 a month.
    0:20:38 And there’s only one way to say that.
    0:20:40 Unless you say $15, I guess.
    0:20:44 But no matter how you pronounce it, all Mint Mobile plans come with high-speed data and
    0:20:49 unlimited talk and text delivered on the nation’s largest 5G network.
    0:20:52 You can use your own phone with any Mint Mobile plan.
    0:20:55 And you can bring along your phone number with all your existing contacts.
    0:20:58 No matter how you say it, don’t overpay for it.
    0:21:02 You can shop data plans at mintmobile.com slash gray area.
    0:21:04 That’s mintmobile.com slash gray area.
    0:21:09 Upfront payment of $45 for a three-month, five-gigabyte plan required.
    0:21:12 Equivalent to $15 per month.
    0:21:14 New customer offer for first three months only.
    0:21:17 Then full price plan options available.
    0:21:18 Taxes and fees extra.
    0:21:20 See Mint Mobile for details.
    0:21:27 Support for the gray area comes from Greenlight.
    0:21:33 School can teach kids all kinds of useful things, from the wonders of the atom to the story of Marbury vs. Madison.
    0:21:38 One thing schools don’t typically teach, though, is how to manage your finances.
    0:21:42 So those skills fall primarily on you, the parent.
    0:21:43 But don’t worry.
    0:21:44 Greenlight can help.
    0:21:49 Greenlight says they offer a simple and convenient way for parents to teach kids smart money habits,
    0:21:53 while also allowing them to see what their kids are spending and saving.
    0:21:57 Plus, kids can play games on the app that teach money skills in a fun, accessible way.
    0:22:02 The Greenlight app even includes a chores feature, where you can set up one-time or recurring chores,
    0:22:07 customized to your family’s needs, and reward kids with allowance for a job well done.
    0:22:12 My kids are a bit too young to talk about spending and saving and all that.
    0:22:17 But one of our colleagues here at Vox uses Greenlight with his two boys, and he absolutely loves it.
    0:22:21 Start your risk-free Greenlight trial today at greenlight.com slash gray area.
    0:22:25 That’s greenlight.com slash gray area to get started.
    0:22:27 Greenlight.com slash gray area.
    0:22:36 Support for the show comes from the podcast Democracy Works.
    0:22:40 The world certainly seems a bit alarming at the moment.
    0:22:42 And that’s putting it lightly.
    0:22:46 And sometimes it can feel as if no one is really doing anything to fix it.
    0:22:51 Now, a lot of podcasts focus on that, the doom and gloom of it all,
    0:22:53 and how democracy can feel like it’s failing.
    0:22:57 But the people over at the Democracy Works podcast take a different approach.
    0:23:02 They’re turning their mics to those who are working to make democracy stronger.
    0:23:04 From scholars to journalists to activists.
    0:23:08 They examine a different aspect of democratic life each week.
    0:23:12 From elections to the rule of law to the free press and everything in between.
    0:23:17 They interview experts who study democracy as well as people who are out there on the ground
    0:23:22 doing the hard work to keep our democracy functioning day in and day out.
    0:23:25 Listen to Democracy Works wherever you listen to podcasts.
    0:23:29 And check out their website, democracyworkspodcast.com to learn more.
    0:23:35 The Democracy Works podcast is a production of the McCourtney Institute for Democracy at Penn State.
    0:23:56 Focusing on rigidity does make a lot of sense.
    0:24:07 But I can imagine one critique of this being that you risk pathologizing conviction, right?
    0:24:12 How do you draw the line between principled thinking and dogmatic thinking?
    0:24:17 Because as you know, one of those codes as good and the other codes as bad.
    0:24:31 In many ways, I think that it’s not about pathologizing any conviction, but it is about questioning what it means to believe in an idea without being willing to change your mind on it.
    0:24:39 And I think that there is, you know, there is a very fine line, right, between what we call principles and what we call dogmas.
    0:24:47 And that’s what, in many ways, I hope readers implicitly come to think about and interrogate:
    0:24:59 are they holding kind of broad moral values about the world that help them make ethical decisions, while also being sensitive to context and the specifics of each situation?
    0:25:13 Or are they adhering to certain rules without the capacity to take context into account, without being willing to see all the shades of gray that a situation might enable?
    0:25:22 And the idea that taking very strong, principled positions is a purely good thing is something I would like to challenge.
    0:25:27 I think it gets particularly thorny in the moral domain, right?
    0:25:36 Like, no one wants to be dogmatic, but it’s also hard to imagine any kind of moral clarity without something like a fixed commitment to certain principles or values.
    0:25:42 And what often happens is, if we don’t like someone’s values, we’ll call them extremist or dogmatic.
    0:25:46 But if we like their values, we call them principled.
    0:25:56 Yeah, and that’s why I think that a kind of psychological approach to what it means to think ideologically helps us escape from that kind of very slippery relativism.
    0:26:03 Because then it’s not just about, oh, where is someone relative to us on certain issues on the political spectrum?
    0:26:07 But it’s about thinking, well, what does it mean to resist evidence?
    0:26:16 So there is a delicate path there where you can find a way to have a moral compass.
    0:26:30 Maybe not the same absolutist moral clarity that ideologies try to convince you exists, but you can have a morality without having really dogmatic ideologies.
    0:26:33 We all want things to make sense.
    0:26:37 We want things to have a reason or a purpose.
    0:26:48 How much of our rigid thinking, how much of our ideological thinking is just about our fear of uncertainty?
    0:26:55 Ideologies are, in many ways, our brain’s way of solving the problem of uncertainty in the world.
    0:26:59 Because, you know, our brains are these incredible predictive organs.
    0:27:10 They’re trying to understand the world, but they’d also like shortcuts wherever possible, because it’s very complicated and very computationally expensive to figure out everything that’s happening in the world.
    0:27:13 And so ideologies kind of hand that to you on a silver plate.
    0:27:16 And they say, here are all the rules for life.
    0:27:17 Here are all the rules for social interaction.
    0:27:21 Here’s a description of all the causal mechanisms for how the world works.
    0:27:23 There you go.
    0:27:29 And you don’t need to do that hard labor of figuring it out all on your own.
    0:27:46 And so that’s why ideologies can be incredibly tempting and seductive for our predictive brains that are trying to resolve uncertainty, that are trying to resolve ambiguities, that are just trying to understand the world in a coherent way.
    0:27:49 And so it is a kind of coping mechanism.
    0:27:57 And what I hope to show in the book is that it’s a coping mechanism with very disastrous side effects for individual bodies.
    0:28:01 Well, yeah, I think the main problem is that the world isn’t coherent.
    0:28:06 And in order to make it coherent, you have to distort it often.
    0:28:12 And I think that’s where this can lead to bad outcomes.
    0:28:16 But look, so ideologies are certainly one way.
    0:28:22 I mean, maybe the main way we satisfy this longing we have for clarity and certainty.
    0:28:30 Do you think there are non-ideological ways to satisfy that longing?
    0:28:33 I think so.
    0:28:40 But I also think that it’s about recognizing that we have that longing and that ideologies are solutions to that longing.
    0:28:56 And maybe by realizing that there’s that constructive element to it, right, that we gravitate towards ideologies, not necessarily because they’re true, but just maybe because they seem at first glance useful or nice or comforting.
    0:29:12 And I think that already goes some way toward chipping away at the kind of illusion that ideologies try to claim and establish, which is that they are the only truth, the theories of everything, and that there is no other truth.
    0:29:23 And so I think that it’s already important to recognize that kind of magnetism that happens between our minds and these ideological myths.
    0:29:39 And I think that there are ways to live that don’t require you to espouse ideologies in a dogmatic way, in a way that inspires you to dehumanize other people for the sake of justifying your ideology.
    0:29:59 And I think that that lies with thinking about what it means to update your beliefs in response to credible evidence, living in a society that has information and evidence that is accessible to everyone, rather than what is going on now with digital environments,
    0:30:10 where the information that you receive is increasingly skewed, increasingly selective and designed to dysregulate you and to manipulate you rather than to offer you information.
    0:30:37 But once you start to battle some of those systemic kind of problems with our information systems, I think you can do a lot of work to learn how to process information, to respond to disagreements in a way that is flexible, in a way that is balanced, in a way that is really focused on evaluating evidence in a kind of balanced way.
    0:30:54 I think that in experiments, what we find is that people who are most cognitively rigid will kind of adhere to the most extreme ideologies.
    0:31:03 But that doesn’t have to be a purely kind of far-right authoritarianism that we most typically are familiar with.
    0:31:05 It can also exist on the left.
    0:31:07 There are also left-wing authoritarianisms.
    0:31:19 In the studies, we see that the people who are most rigid can exist both on the far left and on the far right, which is important because a lot of times there’s been this assumption that it’s only the political right that can be rigid.
    0:31:29 But we see that when we measure people’s unconscious traits, that you can also find that rigidity on the left.
    0:31:40 And I hope that that’s a kind of warning signal for a lot of people on the left who think that liberalism and the left are inherently about change and flexibility and progress.
    0:31:46 Well, it can also attract rigid minds.
    0:32:07 And so you need to think about, if you want to enact progress that maybe has a liberal flavor to it, you need to think about how to avoid those kind of rigid strains, the kind of dogmatic, conformity-minded, authority-minded way of thinking that exists on both sides of the spectrum.
    0:32:15 And to that very point, somewhere in the book, you write that every worldview can be practiced extremely and dogmatically.
    0:32:26 And I read that, and I just wondered if it leaves room for making normative judgments about different ideologies.
    0:32:28 But let me put that in the form of a question.
    0:32:37 Do you think every ideology is equally susceptible to extremist practices?
    0:32:42 I sometimes get strong opposition from people saying, well, my ideology is about love.
    0:32:59 It’s about generosity or about looking after others, kind of positive ideologies that we think surely should be immune from these kind of dogmatic and authoritarian ways of thinking.
    0:33:20 But in many ways, what I’m trying to do with this research and in the book, rather than comparing ideologies as these big entities represented by many people, is just to look at people and ask: are there people who are extremely rigid in different ideologies?
    0:33:42 And we do see it in every ideology that has a very strong utopian vision of what life and the world should be, or a very dystopian kind of fear of where the world is going.
    0:33:47 All of those have a capacity to become extreme.
    0:34:00 Support for this show comes from Shopify.
    0:34:04 When you’re creating your own business, you have to wear too many hats.
    0:34:11 You have to be on top of marketing and sales and outreach and sales and designs and sales and finances.
    0:34:17 And definitely sales. Finding the right tool that simplifies everything can be a game changer.
    0:34:21 For millions of businesses, that tool is Shopify.
    0:34:26 Shopify is the commerce platform behind millions of businesses around the world
    0:34:30 and, according to the company, 10% of all e-commerce in the U.S.,
    0:34:35 from household names like Mattel and Gymshark to brands just getting started.
    0:34:40 They say they have hundreds of ready-to-use templates to help design your brand style.
    0:34:44 If you’re ready to sell, you’re ready for Shopify.
    0:34:49 You can turn your big business idea into reality with Shopify on your side.
    0:34:55 You can sign up for your $1 per month trial period and start selling today at Shopify.com slash Vox.
    0:34:58 You can go to Shopify.com slash Vox.
    0:35:10 Have you ever gotten a medical bill and thought, how am I ever going to pay for this?
    0:35:16 This week on Net Worth and Chill, we’re tackling the financial emergency that is the American healthcare system.
    0:35:21 From navigating insurance nightmares to making sure your emergency fund actually covers those emergencies,
    0:35:25 we’re diving deep into the hidden healthcare costs that no one warns you about.
    0:35:35 Most hospitals in the U.S. are actually nonprofits, which means they have to have financial assistance or charity care policies.
    0:35:41 So essentially, if you make below a certain amount, the hospital legally has to waive your medical bill up to a certain percent.
    0:35:45 Listen wherever you get your podcasts or watch on YouTube.com slash YourRichBFF.
    0:35:51 The regular season is in the rear view, and now it’s time for the games that matter the most.
    0:35:54 This is Kenny Beecham, and playoff basketball is finally here.
    0:36:02 On Small Ball, we’re diving deep into every series, every crunch time finish, every coaching adjustment that can make or break a championship run.
    0:36:04 Who’s building for a 16-win marathon?
    0:36:06 Which superstar will cement their legacy?
    0:36:10 And which role player is about to become a household name?
    0:36:15 With so many fascinating first-round matchups, will the West be the bloodbath we anticipate?
    0:36:17 Will the East be as predictable as we think?
    0:36:19 Can the Celtics defend their title?
    0:36:23 Can Steph Curry, LeBron James, Kawhi Leonard push the young teams at the top?
    0:36:28 I’ll be bringing the expertise, the passion, and the genuine opinion you need for the most exciting time of the NBA calendar.
    0:36:32 Small Ball is your essential companion for the NBA postseason.
    0:36:36 Join me, Kenny Beecham, for new episodes of Small Ball throughout the playoffs.
    0:36:38 Don’t miss Small Ball with Kenny Beecham.
    0:36:40 New episodes dropping through the playoffs.
    0:36:43 Available on YouTube and wherever you get your podcasts.
    0:37:05 How do you think about causality here, right?
    0:37:13 I mean, are some people just constitutionally, biologically prone to dogmatic thinking?
    0:37:22 Or do they get possessed, to use your word, by ideologies that reshape their brain over time?
    0:37:24 Yeah, this is a fascinating question.
    0:37:28 And I think that causality goes both ways.
    0:37:35 I think there’s evidence that there are pre-existing predispositions that propel some people to join ideological groups.
    0:37:43 And that when there is a trigger, they will be the first to run to the front of the line, kind of in support of the ideological cause.
    0:37:49 But that at the same time, as you become more extreme, more dogmatic, you are changed.
    0:37:54 You are changed in the way in which you think about the world, the way in which you think about yourself.
    0:38:00 You become more ritualistic, more narrow, more rigid in every realm of life.
    0:38:01 So that can change you too.
    0:38:04 Just to be clear about what you mean by change, right?
    0:38:07 When you say it changes our brains, how do you know that?
    0:38:11 Are you looking at MRI scans and you can see these changes?
    0:38:12 What do those changes look like?
    0:38:20 So we don’t yet have, you know, the longitudinal studies required to see complete change.
    0:38:32 But we do have other kinds of studies that look at, for instance, what happens when either when a brain is in a condition which makes it more prone to becoming ideological.
    0:38:40 So, for example, what happens when we take people who already have quite radical beliefs, so radical religious fundamentalists.
    0:38:45 We put them in a brain scanner and we make them feel very socially excluded.
    0:38:52 We heighten that feeling that they’re socially excluded from others, that they’re alienated.
    0:38:59 So we take vulnerable minds and we also kind of put them in a more kind of psychologically vulnerable state.
    0:39:04 And what we see is that then they become a lot more ideological.
    0:39:13 Their brains start imbuing every value as sacred, as something that they would be willing to die for, as something they would be willing to hurt others for.
    0:39:26 And we see that these processes are so dynamic that they kick in even in conditions where people are stressed out, where they feel lonely, excluded, like there aren’t enough resources to go around.
    0:39:51 And what we see is that there are these experiments that show the arrows pointing one way, and also that people who have experienced traumatic brain injury to two specific parts of the brain are later on more radical.
    0:39:58 Their beliefs are a lot more extreme; they’ll see a radical idea and they will say that it’s fine.
    0:40:25 So through these kinds of natural studies, either of brain injury or of what happens to a brain that is already radical in those environments, we can get a sense that being in a rigid environment, in an environment that is stressful, that is authoritarian, that tries to put people into that mindset of thinking about every human being as an instrument to an end,
    0:40:30 can change how the brain responds to the world and maybe how it functions too.
    0:40:37 So when these circuits get activated, there are corresponding parts of the brain that light up.
    0:40:37 Exactly.
    0:40:39 And that’s how you can make the connections?
    0:40:40 Exactly.
    0:40:40 Yeah.
    0:40:41 So go ahead.
    0:40:42 That’s so fascinating.
    0:40:49 Now, look, I know you’re being careful about saying causality runs both ways and surely it does, but I want to push you a little bit.
    0:40:56 And how far would you go in saying that genes determine political beliefs?
    0:41:03 Would it be too neat to say that people are born with liberal brains or conservative brains?
    0:41:11 So what we do know is that there are genetic predispositions to thinking more rigidly about the world.
    0:41:17 These predispositions are related to dopamine and how dopamine is expressed throughout your brain.
    0:41:30 So that can be about how dopamine is expressed in the prefrontal cortex, the area behind your forehead responsible for high-level decision making, and how dopamine is expressed in your reward circuitry in the striatum.
    0:41:35 And what we see is that there are genetic traits that make some people more prone to rigid thinking.
    0:41:39 But there’s still so much scope for change.
    0:41:44 These genetic traits are kind of potentials.
    0:41:48 They can activate risk, but they can also really be subdued.
    0:41:59 And that’s where we can also look at what happens to minds with those genetic predispositions that grew up in environments and upbringings that were much more liberal or much more authoritarian.
    0:42:02 Yes, that is what I found myself thinking a lot about.
    0:42:05 This is not straight up determinism you’re doing here.
    0:42:08 In your words, we’re talking about probabilities, not fates.
    0:42:14 So our biology opens up certain possibilities, inclines us in one direction or another.
    0:42:23 But our environment, our stresses, our communities, our family life, all of that can push us different ways.
    0:42:29 Can you say a little bit more about this tension between biology and environment?
    0:42:38 I think there’s sometimes the sense that, oh, if you’re talking about the kind of biology of ideology, that you’re saying everything is fixed and predetermined.
    0:42:44 But actually, there’s huge scope for change and malleability and choice within that.
    0:42:56 And what we see and what in my experiments I’ve found is that the best reflection of a person’s cognitive style is not necessarily the ideologies that they grew up with, but the ideologies that they ended up choosing.
    0:43:12 So people who chose to enter a dogmatic ideology and kind of embrace it strongly, even though they grew up in a much more secular, presumably non-ideological upbringing, those people were the most cognitively rigid.
    0:43:23 Choosing an ideology is the best reflection of your rigidity, whereas people who maybe grew up really ideological but left that environment are the most cognitively flexible people.
    0:43:28 More flexible than people who grew up in non-ideological settings and stayed non-ideological.
    0:43:33 And so there’s huge range and capacity for choice.
    0:43:35 And so our biology doesn’t predetermine us.
    0:43:43 It puts us on certain paths for risk or resilience, but then it’s our choices that affect which of our traits get expressed or not.
    0:43:48 How much room is there for agency here, right?
    0:43:54 Like, if I want to change the way I think, cognitively and politically, can I really do that?
    0:43:56 How much freedom do I have?
    0:44:07 I think you have an immense amount of freedom, and I think we know that you can change because people do change, and people do change the rigidity with which they approach the world.
    0:44:09 They’ve changed their beliefs over time.
    0:44:19 And so if we are all lying on a kind of spectrum from flexible thinking to rigid thinking, and we’re all somewhere on that spectrum, we can also all shift our position.
    0:44:21 So how do people do that?
    0:44:23 How do they go about bringing that change about?
    0:44:30 Well, the first way in which we can understand this is by looking at what we might call the negative change.
    0:44:33 What happens, what prompts people to think more rigidly about the world?
    0:44:41 And the best way, and maybe the most sinister way, to make people think more rigidly is to stress them out.
    0:45:01 And we can do that even in the lab, we can stress a body out by either asking it to do something that would make any person socially nervous, like standing in front of a big group and speaking unexpectedly, or by asking them to do something that would physically stress their body out, like putting their hand in a bucket of ice water.
    0:45:09 And that just automatically for any person, you know, all of your body’s resources get channeled to dealing with that physical stressor.
    0:45:26 And what we see is that even in that immediate moment, like the kind of three minutes that pass when your body stresses out, you immediately become more rigid in how you solve problems, in how you solve all kinds of games and mental challenges.
    0:45:33 And so you can see that stress is a huge factor that pushes people towards more rigid thinking.
    0:45:40 Well, that’s a very profound finding and one I think that maps onto the historical record.
    0:45:56 What you find very often in history is that when material conditions in societies decline, when people get more impoverished and deprived and desperate, they become more vulnerable
    0:46:00 to authoritarian or extremist movements, right?
    0:46:14 And maybe part of what’s going on there is these circuits getting induced in people’s brains, that stress, those stressful conditions priming them to be more susceptible to these ways of thinking.
    0:46:42 Yeah, because understanding that a body that is stressed is a body that is more vulnerable to extreme authoritarian dogmatic thinking really helps us understand who is most susceptible and at what times we are all most susceptible, so that we understand why and how maybe malicious agents can take hold of those experiences of stress, of adversity, of precarity or lack of resources.
    0:46:58 Or actually create ideological rhetoric that stresses us out or makes us think that there aren’t enough resources to go around. That’s a profoundly powerful way to get people to think in a more authoritarian-minded way.
    0:47:12 Well, we’re in an era, especially in America, but I think this is also true in your neck of the woods, of highly polarized partisan politics.
    0:47:16 Do you feel like this research has some particular insight into that?
    0:47:22 Do you think absorbing this can actually make us more intelligible to each other?
    0:47:25 I think so.
    0:47:40 I think, first of all, recognizing that people at the very ends of, for instance, the political spectrum, or people of many different ideologies, when taken to the extreme, actually start to resemble each other, is probably a very humbling insight.
    0:47:49 Because you realize that although you might be feeling like you’re fighting for completely different missions, you’re psychologically engaged in a very similar process.
    0:47:57 And so, hopefully, hopefully that is one way to maybe help us understand each other in very polarized times.
    0:48:10 But I think that there’s also this really profoundly individual or personal problem here that you have to confront, which is how flexible or how dogmatic are you?
    0:48:22 And how would you like to live, you know, rather than just judging other people for their dogmatism, it’s about thinking, well, what are the rules that you impose on yourself or on those around you?
    0:48:27 And can those be actually damaging to your mental freedom?
    0:48:34 Because those rules we impose on ourselves, yeah, reduce our capacity to think authentically.
    0:48:42 The end of your book imagines a mind that’s ideology-free.
    0:48:45 Do you really think that’s possible?
    0:48:48 Do you think we can live without ideology?
    0:48:51 I think we can certainly try.
    0:48:53 And I think…
    0:48:56 Well played.
    0:49:06 I think we can, because I think, you know, starting to shed those really harsh ideological convictions with which we
    0:49:14 encounter ourselves and others is possible, is probably desirable from a psychological perspective, and quite empowering.
    0:49:24 It’s also a very difficult process, because to be flexible is not just an end state that you arrive at, and you made it, you’re flexible, that’s it, you’re good.
    0:49:35 It’s this continuous struggle, which I even talk about as a Sisyphean task, because there are so many pressures trying to rigidify you, to narrow your thoughts, that to stay flexible,
    0:49:43 to stay in that space of being willing to accept nuance and ambiguity, is a really, really hard thing.
    0:49:53 Flexibility is very fragile, but I think it’s also really fulfilling to be in pursuit of that more flexible, ideologically free way of being.
    0:50:01 The sort of flexibility you’re talking about is, to me, not just an intellectual virtue.
    0:50:13 I think it’s also a moral virtue in the sense that it enables us to be more open to ideas and people, and more humble about what we don’t or can’t know.
    0:50:28 Do you have any thoughts, after all this research, on how to educate children, how to parent, how to teach people to be anti-ideological in the way you’ve defined it?
    0:50:32 Yeah, I mean, in many ways…
    0:50:36 Sorry, there were too many different ways to answer that question.
    0:50:37 Yeah, no, please start over.
    0:51:01 I think one of the most profound insights from this research is that when you start to embody flexible thinking in your everyday life, in the way in which you psychologically approach the world, that will bleed into the way in which you evaluate moral space, the political space, the ideological space.
    0:51:15 And so if we wanted to cultivate that flexibility in children, in fellow adults, it would be about encouraging that kind of flexibility in all things.
    0:51:20 Flexibility, like we talked about, is not just this endless persuadability or a kind of wishy-washiness.
    0:51:31 It’s this very active stance where you’re trying to think about things in the most wide-ranging anti-essentialist way.
    0:51:48 And so teaching people to think really creatively, of course, we also need to teach them to be critical thinkers, but to be really creative in any domain, not just in art, which tries to demand creativity, but in every realm of life.
    0:51:55 Rather than repeating your day in the same way again and again, how do you incorporate change?
    0:52:01 How do you incorporate thinking outside the box, breaking down essences into kind of new ways of thinking?
    0:52:15 So teaching people to be more flexible, original, creative, imaginative in that way, I think is something that education systems and families can do, and hopefully they should.
    0:52:21 As you said earlier, we’re very much in the beginning of this research.
    0:52:25 Where does it go from here?
    0:52:30 What do you think is the next frontier of political neuroscience?
    0:52:43 Where we go from here is to continue to tackle those questions about causality, to really learn to see how ideologies can change the human body, the human brain, how it responds to the world.
    0:53:04 And also what we bring to the table. Continuing to understand that requires studies of people over a long time, over their whole lives, to see how changes in people’s psychological expressions map onto their ideological commitments.
    0:53:07 And I think now is probably a very good time to do it, because there is a lot of change.
    0:53:17 People are both becoming at times more dogmatic, more extreme, but also changing allegiances at paces that maybe we haven’t seen in a long time.
    0:53:33 And so this is a great moment to stop thinking about things purely as the political left versus the political right, but evaluate any ideological commitment, whether it’s nationalistic, social, religious, environmental, any kind of ideology, to start to see those parallels.
    0:53:36 And kind of like you’ve been hinting at, well, what are the differences?
    0:53:40 When does it matter what you think and not just how you think?
    0:53:45 So there’s a lot of exciting science to do there, and it’ll be interesting to see where it goes.
    0:53:47 I think that’s a good place to leave it.
    0:53:54 Once again, the book is called The Ideological Brain, The Radical Science of Flexible Thinking.
    0:53:55 This was a lot of fun.
    0:53:56 It’s a great book.
    0:53:58 Thank you for coming in.
    0:53:59 Thank you so much.
    0:54:08 All right.
    0:54:10 I hope you enjoyed this episode.
    0:54:21 One thing I really appreciate about what this book is doing is that it just speaks to how complicated we are and how complicated our beliefs are.
    0:54:25 And that should be humbling in a lot of ways.
    0:54:28 But as always, we want to know what you think.
    0:54:32 So drop us a line at thegrayarea@vox.com.
    0:54:40 Or you can leave us a message on our new voicemail line at 1-800-214-5749.
    0:54:46 And if you have time, please go ahead, rate, review, and subscribe to the pod.
    0:54:58 This episode was produced by Beth Morrissey, edited by Jorge Just, engineered by Christian Ayala, fact-checked by Melissa Hirsch, and Alex Overington wrote our theme music.
    0:55:02 New episodes of The Gray Area drop on Mondays.
    0:55:03 Listen and subscribe.
    0:55:06 This show is part of Vox.
    0:55:10 Support Vox’s journalism by joining our membership program today.
    0:55:13 Go to vox.com slash members to sign up.
    0:55:17 And if you decide to sign up because of this show, let us know.
    0:55:17 Thank you.

    What do you do when you’re faced with evidence that challenges your ideology? Do you engage with that new information? Are you willing to change your mind about your most deeply held beliefs? Are you predisposed to be more rigid or more flexible in your thinking?

    That’s what political psychologist and neuroscientist Leor Zmigrod wants to know. In her new book, The Ideological Brain, she examines the connection between our biology, our psychology, and our political beliefs.

    In today’s episode, Leor speaks with Sean about rigid vs. flexible thinking, how our biology and ideology influence each other, and the conditions under which our ideology is more likely to become extreme.

    Host: Sean Illing (@SeanIlling)
    Guest: Leor Zmigrod, political psychologist, neuroscientist, and author of The Ideological Brain

    Listen to The Gray Area ad-free by becoming a Vox Member: vox.com/members

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • Politics after Covid

    AI transcript
    0:00:04 I’ve got some news before we start today’s show.
    0:00:07 You can now listen to The Gray Area without any ads.
    0:00:10 That’s right. Just become a Vox member.
    0:00:15 You’ll directly support the work we do on this show and get all kinds of other cool perks,
    0:00:19 including unlimited access to all of Vox’s stories.
    0:00:23 Just go to vox.com slash members to sign up.
    0:00:32 Support for this show comes from ServiceNow, a company that helps people do more fulfilling work,
    0:00:34 the work they actually want to do.
    0:00:37 You know what people don’t want to do? Boring, busy work.
    0:00:42 But ServiceNow says that with their AI agents built into the ServiceNow platform,
    0:00:46 you can automate millions of repetitive tasks in every corner of a business.
    0:00:50 IT, HR, customer service, and more.
    0:00:55 And the company says that means your people can focus on the work that they want to do.
    0:00:58 That’s putting AI agents to work for people.
    0:00:59 It’s your turn.
    0:01:04 You can get started at ServiceNow.com slash AI dash agents.
    0:01:12 There are lots of stories to tell about the COVID pandemic.
    0:01:17 But almost all of them, if you drill down, are about politics.
    0:01:27 About who makes the decisions, who questions those decisions, who matters, who suffers, who survives, who doesn’t, and why.
    0:01:31 But what did we get right?
    0:01:34 What did we get wrong?
    0:01:39 And what do all those choices say about the health of our democracy?
    0:01:45 I think it’s safe to say we’ll be living in COVID’s shadow for a long time.
    0:01:53 But perhaps there’s enough distance now to have a serious conversation about all these questions.
    0:01:58 I’m Sean Illing, and this is The Gray Area.
    0:02:04 Today’s guest is Frances Lee.
    0:02:15 She’s a professor of political science and public affairs at Princeton University, and a co-author of a book called In COVID’s Wake, How Our Politics Failed Us.
    0:02:21 It treats our response to COVID as a kind of stress test of our political system.
    0:02:34 Lee and her co-author, Stephen Macedo, look at all the institutions responsible for truth-seeking, journalism, science, universities, and asks, how did they perform?
    0:02:37 Were they committed to truth?
    0:02:38 Open to criticism?
    0:02:43 Did they live up to the basic norms of liberalism and science?
    0:02:48 Were we able to have a reasonable conversation about what was happening?
    0:02:51 And if we weren’t, why not?
    0:02:52 And can we have it now?
    0:02:59 Frances Lee, welcome to the show.
    0:03:01 Thank you, Sean.
    0:03:13 I’m going to start this conversation where you start the book, which is with a, I think, a pretty revealing quote by Francis Collins.
    0:03:17 And if you don’t mind, I’m just going to read it very quickly.
    0:03:24 We failed to say every time there was a recommendation, guys, this is the best we can do right now.
    0:03:26 It’s a good chance this is wrong.
    0:03:27 We didn’t say that.
    0:03:36 We wanted to be sure people actually motivated themselves by what we said, because we wanted change to happen in case it was right.
    0:03:41 But we did not admit our ignorance, and that was a profound mistake.
    0:03:45 And we lost a lot of credibility along the way.
    0:03:48 So, who is Francis Collins?
    0:03:52 What does he represent in this story?
    0:03:55 And how does that quote really anchor this book?
    0:04:00 So, Francis Collins was the head of the National Institutes of Health.
    0:04:09 And in that passage, he is reflecting back on the way in which science agencies in the U.S. handled the pandemic.
    0:04:21 He’s on a panel with a trucker from Minnesota, and it’s at a Braver Angels event, which tries to bring together people from diverse perspectives for dialogues.
    0:04:29 And he is just being remarkably candid in reflecting back on what he saw as the failings of the pandemic response.
    0:04:45 You know, we saw what he had to say at that panel as sort of summing up the argument of the book, which is that experts were not frank with the public about the limits of their knowledge and their uncertainties.
    0:04:50 They were improvising through the pandemic to a very considerable degree.
    0:04:56 They got a lot of things wrong, and they lost a lot of credibility, just as Francis Collins said.
    0:05:03 And that they should have been more honest with people about what they knew and did not know.
    0:05:07 And they would have retained the public’s trust to a greater degree.
    0:05:16 How would you characterize the debate we had in this country about our response to COVID as we were responding?
    0:05:19 From your point of view, what went wrong?
    0:05:23 Well, it was a crisis, a fast-moving crisis.
    0:05:28 And so it’s not surprising in retrospect that the debate was truncated.
    0:05:43 But it is surprising, as we looked back and did the research for this book, the extent to which the decisions that were made in the early going of the pandemic departed from conventional wisdom about how to handle a pandemic.
    0:05:55 And violated recommendations that had been put on paper in calmer times about how a crisis like this should be handled.
    0:06:13 So countries around the world sort of scrapped pre-existing pandemic plans in order to follow the example set in Wuhan and then in Italy, with Italy having the first nationwide lockdown, and improvising along the way.
    0:06:24 There wasn’t a scientific basis for the actions that were taken in the sense that there was no accumulated body of evidence that these measures would be effective.
    0:06:30 That it was hoped that they would be, but there was the lack of evidence.
    0:06:41 And, you know, if you go back and take a look at a report that was prepared by the World Health Organization in 2019, so just months before the pandemic broke out,
    0:07:00 that document goes through each of the proposed non-pharmaceutical interventions, meaning the measures that are taken to keep people apart in the context of an infectious disease pandemic, like, you know, masking or social distancing, business closures, school closures.
    0:07:06 Takes a look at each of those measures in turn and discusses the evidence base around them.
    0:07:10 And across the board, the evidence base is rated as poor quality.
    0:07:19 And several such measures are recommended not to be used under any circumstances in the context of a respiratory pandemic.
    0:07:27 And among those were border closures, quarantine of exposed individuals, and testing and contact tracing.
    0:07:42 And then all those measures were, of course, employed here in the U.S. and around the world in the context of the COVID pandemic without any kind of reckoning with the reasons why those measures were recommended against in the pre-pandemic planning.
    0:07:47 Well, that seems like an important point. It wasn’t just here. This is pretty much what everybody was doing, right?
    0:07:48 Yes, that’s right.
    0:07:54 Okay. Why do you suppose that was? Why the departure from these pre-pandemic plans?
    0:08:06 Well, it was the example in Wuhan. So the first such measures, you know, lockdowns, were imposed in Hubei province in China.
    0:08:17 And the World Health Organization sent a delegation. There were a couple of Americans on that delegation to Wuhan in early February 2020.
    0:08:23 They spent a week there. They saw the scale of the society-wide response.
    0:08:31 And they admired it tremendously, the extent to which everyone seemed to be pulling together to try to suppress the spread of the disease.
    0:08:34 And then you see cases start to fall.
    0:08:40 And the temporary hospitals that had been put up were taken down.
    0:08:47 And the report declares the Chinese response a success and recommends that same approach to the whole world.
    0:08:53 So there wasn’t a—I mean, I think there should have been more skepticism at the time.
    0:08:56 We knew that pandemics come in waves.
    0:09:04 And so it was hard to know to what extent the fall in cases was just the natural patterns that we’d previously seen with pandemics,
    0:09:07 and to what extent it was a result of the actions taken.
    0:09:13 And the public around the world are clamoring for action to protect us from this crisis.
    0:09:19 And here in the U.S., when the closures were first announced in March 2020, they were enormously popular.
    0:09:29 A Pew Research Center study that we cite in the book shows that 87% of Americans approved of those closures, large majorities of both parties.
    0:09:33 So the initial response was, this makes sense to us.
    0:09:38 Let’s do this on the part of the public at large.
    0:09:48 There was a lot of consensus at that time, and then that consensus fades pretty quickly for reasons we’ll get into.
    0:09:56 But before we do that, I think it’s important for you to set up the story you tell in the book.
    0:10:05 And a big part of that story is about how certain groups of people were disproportionately harmed by our COVID policies.
    0:10:07 Can you say a bit about that?
    0:10:13 Well, the effects are wide-ranging, the effects of the pandemic response.
    0:10:17 And across the board, they tend to fall harder on the less well-off.
    0:10:19 So let’s start with the closures themselves.
    0:10:22 Not everyone can stay home.
    0:10:27 In order for society to continue to function, for us to stay alive, some people have to keep working.
    0:10:34 Well, disproportionately, that’s working-class people who had to keep working through the pandemic, the so-called essential workers.
    0:10:40 I mean, you think of medical personnel, and so those are essential workers, too.
    0:10:56 But the bulk of the people who had to collect the trash, keep the utilities working, deliver the food, drive the trucks, you know, all the things that needed to be done during the pandemic were largely being done by working-class people.
    0:10:59 So the closures are not protecting them.
    0:11:01 Meanwhile, their kids can’t go to school.
    0:11:07 It’s hard for me to even understand exactly how people got through this crisis under those circumstances.
    0:11:08 They certainly had to scramble.
    0:11:25 As you look at the effects of those school closures, the learning losses are greater in high-poverty areas, among disadvantaged students, among students who were lagging academically before the pandemic; they lost more, so the gaps became wider.
    0:11:38 The inflation that resulted from the pandemic response here and around the world, that also places a greater pinch on people who are not doing as well.
    0:11:43 The enormous rise in housing costs that followed the pandemic.
    0:11:45 You just go through the list.
    0:11:51 Every case, as you look at the response, it hits harder in some parts of society than others.
    0:11:58 I noticed in a lot of the descriptions of the pandemic that journalists would write that the pandemic exposed inequalities.
    0:11:59 Well, it did.
    0:12:02 It exposed them, but it also exacerbated them.
    0:12:04 It made them bigger.
    0:12:09 Say more about the class biases at work here.
    0:12:12 What were the blind spots on the part of the decision makers?
    0:12:14 What trade-offs did they miss?
    0:12:19 What potential harms did they discount or overlook?
    0:12:25 It was not a deliberative process, and there were biases in who was at the table making these decisions.
    0:12:36 Decisions around COVID policy tended to be made by small groups of people, and it’s basically some generalist government officials and specialists in infectious disease.
    0:12:41 So there just weren’t a diversity of voices being brought to bear.
    0:12:43 Is that avoidable?
    0:12:48 This seems to be—not to say it isn’t a problem, but it seems kind of unavoidable on some level.
    0:13:06 Remember, this is a long crisis, you know, so we can talk about March 2020 and what was done then, but then we have to ask, what was the capacity of governments to take on board new information, listen to more voices, and adjust course?
    0:13:28 When you look at our response compared to the rest of the world, does it seem to you from the perch of the present that we performed more or less on par with most other countries?
    0:13:35 Or was there something exceptional about our responses and these sorts of effects, either in a good way or a bad way?
    0:13:42 Well, our handling of the pandemic became more party polarized than is characteristic around the world.
    0:13:58 That 2020 was a presidential election year, and with Trump in the White House when the crisis began, you saw sort of an in-power, out-of-power dynamic where Democrats—I mean, Democrats didn’t like or trust President Trump before the crisis.
    0:14:08 And so then to have him at the helm, while there was so much fear, there was a tendency to reject anything he had to say, you know, to sort of assume that if he said it, it had to be wrong.
    0:14:27 And so you saw a sorting out process where Democrats reacting against President Trump and Trump’s inconsistent stances on, you know, what to do in the context of the crisis was certainly not confidence-inspiring even for independents during that time.
    0:14:36 So what we see is this enormous partisan structuring of the pandemic response, not just at the level of policy, but also at the level of individual behavior.
    0:14:48 It’s very remarkable the extent to which party is your key variable for predicting how any individual or how any jurisdiction would respond to the crisis.
    0:14:53 Yeah, this part of the story is pretty startling and pretty depressing.
    0:15:00 Why do you think our COVID strategies became so strongly associated with political partisanship?
    0:15:02 Trump is part of the story here, but not all of it.
    0:15:14 To be honest, I don’t think that we have an entirely satisfactory account because, I mean, certainly you can see the reaction around the president, the reaction against Trump.
    0:15:21 And Democrats had already had a higher opinion of science and science agencies.
    0:15:26 You know, you’d already had the March for Science under Trump before the pandemic, you know.
    0:15:40 And so the political dichotomy that Americans perceived during the crisis was there’s Trump versus the scientists, Trump versus Fauci, politicians versus the scientists.
    0:15:44 And presented with that choice, Democrats said, well, I trust the scientists.
    0:15:51 That began to be the dichotomy on which the attitudes towards the pandemic broke down.
    0:15:54 Trump is obviously mentioned in the book.
    0:15:57 There’s no way to tell this story without him.
    0:16:01 But he’s not a central focus.
    0:16:02 Why is that?
    0:16:12 We don’t focus on Trump because the U.S. response is not so different from other countries, at least, you know, in the early going.
    0:16:17 It evolves in different ways, but that’s not really a Trump story so much as a story of U.S. federalism.
    0:16:22 The governors are the primary decision makers over the course of the crisis.
    0:16:31 I mean, Trump was on television a lot, and the coronavirus task force made recommendations about what to do and, you know, offered guidance.
    0:16:39 And you might remember the gating processes and the color-coded schemes that they came up with about when states could reopen.
    0:16:40 But all of that was advisory.
    0:16:43 The key decisions were made by the governors.
    0:16:51 And so we see those as more central actors in a policy making sense in the U.S. response.
    0:17:00 You know, as far as the broader partisanship problem, I mean, I think it has become very clear that this intense polarization, especially in this information environment,
    0:17:05 it means a lot of people aren’t really committed to any stable set of ideas.
    0:17:08 Like, the only thing they’re committed to is disliking the people on the other side.
    0:17:13 And that kind of negative partisanship really does blinker our intuitions on almost every other front.
    0:17:18 I mean, the term we use in the book for that phenomenon is moralized antagonism,
    0:17:23 where you see people who have different views on a policy issue as bad people.
    0:17:28 And so you don’t look at whether there are any reasons why they hold those views.
    0:17:33 You don’t consider it as, you know, potentially worthwhile.
    0:17:35 You know, what is there to be learned from bad people?
    0:17:43 And I think that was to a great extent where we saw failures in the truth-seeking institutions of American democracy,
    0:17:51 in the academy, among journalists, some reporting, and also among scientists.
    0:17:55 Okay, so the partisan split is very apparent.
    0:17:57 And it’s very apparent very early.
    0:18:07 And you note in the book that there was no real gap in health outcomes in red and blue states until the vaccine was released.
    0:18:11 What starts to change post-vaccine and why?
    0:18:23 So it’s so fascinating, like when you track cumulative COVID mortality over time in the states grouped by partisanship.
    0:18:28 You can see that red and blue states track pretty close together in that first year.
    0:18:39 And in fact, in December 2020, when the vaccine was rolled out, there’s no difference at that point in per capita cumulative COVID mortality in red and blue states.
    0:18:42 But it starts to emerge right away.
    0:18:50 And, you know, from the work that, you know, was done at the level of public opinion and attitudes towards the vaccine,
    0:18:56 it was evident immediately that Democrats were just chomping at the bit to get the vaccine.
    0:18:58 They were so much more eager to get vaccinated.
    0:19:00 So you saw that in public opinion polling.
    0:19:05 You also saw it in the press to get appointments for vaccines:
    0:19:12 it was much more difficult to get an appointment in blue states and blue jurisdictions than in red states.
    0:19:13 Yeah, I was in Mississippi.
    0:19:14 I had no problem.
    0:19:15 No problem, that’s right.
    0:19:16 I was in and out.
    0:19:19 I took my mom down to get hers from the National Guard.
    0:19:22 And, yes, there was no problem for her to get the vaccine early.
    0:19:27 But it was more challenging, you know, in the strongly democratic parts of the country.
    0:19:38 And so you see just a quick divergence in vaccine uptake across Republican and Democratic leaning jurisdictions.
    0:19:43 And, again, you know, thinking of this from a social science point of view,
    0:19:51 it’s a nice linear relationship between the partisan lean of the state and the rate of vaccine uptake.
    0:19:57 And then that relationship also tracks COVID mortality over the coming year.
    0:20:02 So that states with higher vaccine uptake have lower COVID mortality starting in 2021.
    0:20:11 You know, one key point, you know, I want to emphasize that our book does find that Democratic states did better than Republican states over the course of the pandemic.
    0:20:12 They absolutely did.
    0:20:14 It’s a clear difference.
    0:20:18 But that difference emerges in year two of the pandemic, not in year one.
    0:20:20 So what did work?
    0:20:26 A lot more research needs to be done on what succeeded and what failed.
    0:20:35 What we have in the book is pretty highly aggregated analysis so that we can show that places that had kept their schools closed longer didn’t do better.
    0:20:40 That places with longer lockdowns didn’t do better than places with shorter lockdowns.
    0:20:46 Places that locked down more quickly don’t do better than places that were slower to announce stay-at-home orders.
    0:20:51 So we can show that, you know, that there’s a lack of correlation there.
    0:20:53 But why?
    0:20:54 What drives that lack of correlation?
    0:21:09 Is it because these measures are not sustainable for human beings over the long timeline necessary to get from the start of a crisis to a vaccine that had been tested and shown to be efficacious and safe?
    0:21:22 Is that because a large share of the workforce always had to keep on working regardless so that the virus just continued to spread?
    0:21:31 And if anything, maybe the lockdowns had the effect of just ensuring that that spread took place disproportionately among essential workers, but really didn’t reduce it that much.
    0:21:32 You know, again, we don’t know what the peaks are.
    0:21:39 Like, pandemics unfold in waves, which means that in that first wave, you don’t get full population exposure.
    0:21:46 What I would argue here is that there’s an awful lot we still don’t know.
    0:22:02 Support for this show comes from Shopify.
    0:22:06 When you’re creating your own business, you have to wear too many hats.
    0:22:15 You have to be on top of marketing and sales and outreach and sales and designs and sales and finances and definitely sales.
    0:22:19 Finding the right tool that simplifies everything can be a game changer.
    0:22:23 For millions of businesses, that tool is Shopify.
    0:22:28 Shopify is a commerce platform behind millions of businesses around the world.
    0:22:33 And according to the company, it’s behind 10% of all e-commerce in the U.S.
    0:22:37 From household names like Mattel and Gymshark to brands just getting started.
    0:22:42 They say they have hundreds of ready-to-use templates to help design your brand style.
    0:22:45 If you’re ready to sell, you’re ready for Shopify.
    0:22:51 You can turn your big business idea into reality with Shopify on your side.
    0:22:58 You can sign up for your $1 per month trial period and start selling today at Shopify.com slash Vox.
    0:23:01 You can go to Shopify.com slash Vox.
    0:23:03 That’s Shopify.com slash Vox.
    0:23:14 Support for the show comes from the podcast Democracy Works.
    0:23:18 The world certainly seems a bit alarming at the moment.
    0:23:19 And that’s putting it lightly.
    0:23:23 And sometimes it can feel as if no one is really doing anything to fix it.
    0:23:26 Now, a lot of podcasts focus on that.
    0:23:28 The doom and gloom of it all.
    0:23:31 And how democracy can feel like it’s failing.
    0:23:35 The people over at the Democracy Works podcast take a different approach.
    0:23:39 They’re turning their mics to those who are working to make democracy stronger.
    0:23:42 From scholars to journalists to activists.
    0:23:45 They examine a different aspect of democratic life each week.
    0:23:50 From elections to the rule of law to the free press and everything in between.
    0:23:59 They interview experts who study democracy as well as people who are out there on the ground doing the hard work to keep our democracy functioning day in and day out.
    0:24:07 Listen to Democracy Works wherever you listen to podcasts and check out their website, democracyworkspodcast.com to learn more.
    0:24:12 The Democracy Works podcast is a production of the McCourtney Institute for Democracy at Penn State.
    0:24:32 This week on Net Worth and Chill, I’m talking to Mike “The Situation” Sorrentino, who skyrocketed to fame on Jersey Shore earning millions before it all came crashing down.
    0:24:34 Tax evasion, prison time, addiction battles.
    0:24:41 Mike is rebuilding his wealth with purpose and helping the people and communities that lifted him up during his darkest days.
    0:24:46 I believe that you are the writer, director, and producer of your life.
    0:24:50 And if you want a better outcome, then you need to make it so.
    0:24:54 Listen wherever you get your podcasts or watch on youtube.com slash yourrichbff.
    0:25:18 Well, I think this is a good spot to talk a little more in detail about the decision makers and how they made those decisions.
    0:25:24 Now, I want to read to you a quote that gets at what I’m really asking here.
    0:25:27 It’s from one of the health officials in your book.
    0:25:52 What is wrong with saying and adopting, as a matter of policy, that the most important thing is saving lives, and that we should save lives at all costs?
    0:25:59 I believe that that’s a quote from Deborah Birx, and so she was the coordinator on the coronavirus task force.
    0:26:08 She was not able, she said, to do a kind of cost-benefit analysis where she could calculate how much a life was worth.
    0:26:13 I mean, that’s a very understandable response, an attitude.
    0:26:25 But you have to remember that as policymakers faced with the kinds of measures that were being employed to control the spread of a disease, lives are on both sides of the equation.
    0:26:33 Let’s begin with one of the first measures taken, which was the shutting down of so-called non-essential health care.
    0:26:37 And it was defined quite broadly.
    0:26:46 There were a lot of cancer treatments that were canceled and regarded as non-essential, depending on how advanced the cancer was.
    0:26:56 So you’re trading off future risks to life to preserve health care capacity now.
    0:27:06 When you are exacerbating inequalities, when you are depriving people of education, that has long-term health effects.
    0:27:10 I mean, education is one of the best predictors of people’s longevity.
    0:27:14 So you’re trading off present and future.
    0:27:17 It’s not so simple.
    0:27:18 These are very difficult choices.
    0:27:29 The reason why we do cost-benefit analysis is in order to be responsible as policymakers, that you can’t only focus on one threat to human beings, that we’re faced with many.
    0:27:34 And look, it’s even more excruciating in terms of the trade-offs, right?
    0:27:48 Because different population groups were not equally vulnerable here, even if you’re talking about saving lives at the highest priority, well, okay, old people were more vulnerable than young people, right?
    0:27:56 And so there may be policies that would save the lives of older population groups or more, you know, health-compromised people.
    0:28:03 And that might come at the expense of real harm to children in school, right?
    0:28:04 How do you weigh that?
    0:28:07 You know, it’s just, there’s no formula for that.
    0:28:14 Well, there was a refusal to weigh it during the crisis, that there was sort of a denial that that was what was happening.
    0:28:16 Do you think it was a denial, though?
    0:28:17 How do we know it was a denial?
    0:28:22 And how do we know they just did their best and made their choices?
    0:28:23 Some of them were good, some of them were bad.
    0:28:28 Well, they acknowledged that they didn’t discuss the costs or the trade-offs.
    0:28:37 So, you know, we have a lot of quotations to that effect from policymakers involved, saying that that was really somebody else’s job and not their job to consider the costs.
    0:28:42 So they’re pretty frank about that, that they simply weren’t doing it.
    0:28:44 And who’s they?
    0:28:46 You’re talking about the health officials mostly?
    0:28:47 Yeah, health officials.
    0:28:47 That’s right.
    0:28:58 So when you think about the policy process around COVID, you’ve got government officials, elected officials, and you have virologists and public health officials.
    0:29:01 And that’s basically who’s in the room.
    0:29:09 And so then it would be the elected officials’ job to consider everything else.
    0:29:22 But how would they do it if they’re not being advised, if they have no perspectives being brought to bear that would shed light on the trade-offs and on the costs present in the room?
    0:29:29 Now, you know, I think they certainly deserve blame here, too, that it doesn’t all rest on public health.
    0:29:38 You know, elected officials had a tendency to want to hide behind public health, to say we’re just following the science, as if that’s possible in the presence of all these value trade-offs.
    0:29:49 But it made their lives easier to pretend like they were not exercising any discretion, and that they were doing the only thing that could be done.
    0:30:03 Part of the critique here, at least of the public officials and the health experts, is that they were intolerant of criticism, or they were intolerant of skepticism.
    0:30:10 And, again, I’m trying to be fair in retrospect to the people that were in the fire here.
    0:30:21 And I can imagine that one reason for that intolerance of skepticism, whatever one thinks of it, is that they really were in a very tough position.
    0:30:27 Do you have sympathy for the predicament that these people were facing?
    0:30:30 I mean, how would you have weighed the trade-offs here?
    0:30:33 Well, I do have empathy.
    0:30:41 I also know, and experts should be cognizant of this as well, that they have their limitations.
    0:30:43 We have our limitations.
    0:30:46 And that there’s always a risk of hubris.
    0:30:52 And that they should have acknowledged the possibility of failure.
    0:30:57 That these measures wouldn’t work as well as they hoped that they would.
    0:31:01 And that should have been factored into their decision-making.
    0:31:04 It’s not just, you know, lives versus the economy.
    0:31:07 It’s also the question of, how many lives are you even saving?
    0:31:10 Like, is this really, are these policies workable for society?
    0:31:14 There was a lack of evidence based on that.
    0:31:21 And so you can’t just make policy affecting the whole of society on a wing and a prayer.
    0:31:24 And to a great extent, that is what they were doing.
    0:31:42 You implied earlier, and you certainly talk about this in the book, that some of these health officials, people like Fauci and Birx, who were the face of these measures, that there was a bit of a disjunction between what they were saying in private and what they were saying in public.
    0:31:50 That they were lying at worst, or being misleading at best, when facing the public and talking about this.
    0:32:01 I just don’t want that to hang out there without being, without examples being offered, because I know there is a lot of bad faith attacks out there, and I don’t want to do that.
    0:32:05 So, can you give me an example of what you mean by that?
    0:32:08 How do we know that there was this disjunction?
    0:32:14 Well, in her memoir, Deborah Birx is quite frank that two weeks to slow the spread was just a pretext.
    0:32:25 And it was just an effort to get Trump on board for initial closures, and that as soon as those closures were in place, she says, we immediately began to look for ways to extend them.
    0:32:50 I think one of the more devastating noble lies that was told during the pandemic was to go out there in spring and summer 2021, even into the fall of 2021, with the vaccine mandates and tell people that if you get vaccinated, you can protect your loved ones from catching the disease from you, that you will become a dead end to the virus.
    0:32:55 They did not have a scientific basis for making that claim.
    0:32:59 The vaccine trials had not tested for an outcome on transmission.
    0:33:10 We knew that, you know, based on the trials, that people who were vaccinated were less likely to report symptoms consistent with it and less likely to test positive for COVID.
    0:33:18 But there weren’t tests conducted on whether getting vaccinated would protect people in your household, for example.
    0:33:23 Like, they could have done tests for transmission, but that was not part of the endpoint of the trials.
    0:33:29 And so when they went out and made claims that it would affect or stop transmission, they were going beyond their data.
    0:33:45 We also knew that a systemically administered vaccine, meaning a shot, not a nasal vaccine, doesn’t prevent you from contracting the virus or from it proliferating in your nasal cavity so that you can transmit it.
    0:33:47 That was known.
    0:33:56 And so you shouldn’t have gone out there and just reassured people that this would work and you’d be able to protect your loved ones.
    0:34:05 Everybody found out in rather short order that getting vaccinated for COVID didn’t prevent you from getting COVID and also from transmitting it to others.
    0:34:34 If you were in one of those rooms making these decisions in that moment about what to tell the public, what would you do if you were faced with a choice where you could either mislead the public with a noble lie that you were absolutely convinced would save thousands of lives, but you also knew that if the public were to learn about the lie later, it would shatter trust in scientific institutions for maybe a generation?
    0:34:39 Honestly, Frances, I don’t know what I would actually do in that position.
    0:34:40 I know what I would tell you
    0:34:42 I would do if you asked me now.
    0:34:45 I’d say, well, I’d tell the truth and let the chips fall.
    0:34:52 But that’s very easy to say from a distance and probably a lot more difficult when you’re in the fire like that.
    0:34:54 But is this something you thought about?
    0:34:55 What would you have done?
    0:34:59 This is a very important question.
    0:35:11 I mean, what I would turn to, again, is what is the basis for believing that these measures would work, and that you have to be able to accept uncertainty.
    0:35:15 If you’re a scientist, you know, there’s a lot we just don’t know about the world.
    0:35:20 To a great extent, the more expertise you develop, the more you learn about what we don’t know.
    0:35:26 And so you have, you have to come to terms with your ignorance as a policymaker.
    0:35:31 And so you may be wrong about what you think is going to work.
    0:35:41 And so under those conditions, now you’re trading your future credibility for measures that will be suboptimal, that may not have nearly the effectiveness that you hope for.
    0:35:50 That, I think, is the greater failing, you know, to not confront the limits of our knowledge.
    0:36:03 It’s hubris because, you know, if you ask them, well, on what basis do you make this claim that if you get this vaccine, which is a shot, that will stop you from transmitting a respiratory virus?
    0:36:04 Like, well, on what basis?
    0:36:09 And so here’s where I think, you know, we see failures in other truth-seeking institutions.
    0:36:11 Where were the academics?
    0:36:16 Where were the journalists asking hard questions of policymakers during that time?
    0:36:20 Critical thinking, I think, got suspended during the pandemic.
    0:36:27 And so then government officials, including public health officials, are not being held accountable in the way they should be to justify themselves.
    0:36:38 We are talking about the lines between scientific judgments and political judgments or scientific judgments and value judgments.
    0:36:51 Do you think COVID shattered the delusion, if anyone still held on to it, that there’s a value-free science, that we can make policy choices like these based on science alone?
    0:37:06 One should not think that it is possible for science to settle political questions in the way that politicians talked about the COVID response, that they’re just following the science.
    0:37:09 That was never a responsible rhetoric.
    0:37:12 It was never a responsible way to make policy.
    0:37:24 That you have to come to terms with the reality of politics, you know, which is diverse values and diverse interests.
    0:37:29 And that when you make policy choices, there are always winners and losers.
    0:37:33 And you have to see that with clear eyes.
    0:37:35 And you try to make as many winners as possible.
    0:37:38 And you try not to harm people unnecessarily.
    0:37:48 But you can’t blind yourself to the effects of the choices that you make by sort of pretending like there was no choice at all.
    0:37:50 Which, you know, I think we saw a lot of that during the pandemic.
    0:37:58 There’s no version of a crisis like this that won’t involve mistakes, obviously, because of all the uncertainty.
    0:38:02 So how do we draw a line between mistakes and deceptions?
    0:38:06 I mean, I think mostly what we’re talking about are mistakes.
    0:38:12 But they were compounded by failures of accountability relationships.
    0:38:26 That, you know, had there been more tough questions being asked, I think it would have exposed some of the weaknesses of the assumptions that were being made or the claims that were being made.
    0:38:31 So at the start, amid tremendous uncertainty, choices are made.
    0:38:40 But under those conditions, recognizing how little you know, you should be on a quest, on a mission to try to learn as much as you can.
    0:38:58 There should have been enormous interest in the successes in school reopening in spring of 2020 in Europe, in the handful of schools that reopened in Montana and Wyoming here in the U.S. in the 2019-2020 school year.
    0:39:02 So there were some schools that did reopen even then, not very many, but some.
    0:39:07 Lots reopened in the fall across whole swaths of the U.S.
    0:39:17 And it seemed to make little impression on the outlets of elite opinion leadership, major newspapers and news magazines.
    0:39:28 There wasn’t a quest for information on the scale that I would have thought officials would want to launch if they had recognized their ignorance.
    0:39:33 But they made a set of policy decisions like they knew how to handle this crisis.
    0:39:35 And then they were not really open to learning.
    0:39:50 On March 12th, Quilmar Abrego Garcia was picked up by ICE in Prince George’s County, Maryland.
    0:40:00 In the days that followed, he was deported to the country where he was born, El Salvador, except this time he wound up in its infamous CECOT prison.
    0:40:05 At CECOT, they don’t let any of the prisoners have access to the outside world.
    0:40:12 On March 31st, the Trump administration said it had mistakenly deported Abrego Garcia, calling it an administrative error.
    0:40:14 On April 4th, a U.S.
    0:40:19 District Judge told the Trump administration to have Abrego Garcia back in the United States by April 7th.
    0:40:26 On April 10th, the Supreme Court entered the chat and more or less agreed, saying the Trump administration needed to get Abrego Garcia back.
    0:40:29 But it’s April 23rd, and he’s still not back.
    0:40:43 On Today Explained, we’re going to speak with the Maryland senator who sat down with Abrego Garcia in El Salvador last week and figure out how this legal standoff between the Trump administration and the courts might play out.
    0:40:48 The regular season is in the rearview, and now it’s time for the games that matter the most.
    0:40:51 This is Kenny Beecham, and playoff basketball is finally here.
    0:40:59 On Small Ball, we’re diving deep into every series, every crunch time finish, every coaching adjustment that can make or break a championship run.
    0:41:01 Who’s building for a 16-win marathon?
    0:41:04 Which superstar will cement their legacy?
    0:41:07 And which role player is about to become a household name?
    0:41:12 With so many fascinating first-round matchups, will the West be the bloodbath we anticipate?
    0:41:14 Will the East be as predictable as we think?
    0:41:16 Can the Celtics defend their title?
    0:41:20 Can Steph Curry, LeBron James, Kawhi Leonard push the young teams at the top?
    0:41:26 I’ll be bringing the expertise, the passion, the genuine opinion you need for the most exciting time of the NBA calendar.
    0:41:30 Small Ball is your essential companion for the NBA postseason.
    0:41:34 Join me, Kenny Beecham, for new episodes of Small Ball throughout the playoffs.
    0:41:40 Don’t miss Small Ball with Kenny Beecham, new episodes dropping through the playoffs, available on YouTube and wherever you get your podcasts.
    0:41:51 This week on Prof G Markets, we speak with Ryan Peterson, founder and CEO of Flexport, a leader in global supply chain management.
    0:41:59 We discuss how tariffs are actually impacting businesses, and we get Ryan’s take on the likely outcomes of this ongoing trade war.
    0:42:05 If they don’t change anything and this 145% duty sticks on China, it’ll be, like, mass bankruptcies.
    0:42:09 We’re talking, like, 80% of small businesses that buy from China will just die.
    0:42:12 And millions of employees will go, you know, we’ll be unemployed.
    0:42:17 I mean, it’s sort of why I’m like, they obviously have to back off the trade.
    0:42:19 Like, that can’t be that they just do that.
    0:42:21 I don’t believe that they’re that crazy.
    0:42:25 You can find that conversation exclusively on the Prof G Markets podcast.
    0:42:49 A book like this, a conversation like this, is ultimately only valuable if there are lessons to be drawn from the failures.
    0:42:52 Can you tell me about some of those lessons?
    0:43:06 Well, I think, you know, for me, the key lesson, you know, as I look back on it, is that policymakers have to be honest with themselves and with the public about what they know and they don’t know.
    0:43:21 You can’t just wing it and you can’t pretend. You know, there’s a tremendous loss of credibility in terms of your relationship with the public, and also bad judgments, when you don’t acknowledge what you don’t know and don’t seek to learn.
    0:43:24 So I see that as really at the core.
    0:43:27 The book Steve and I have written is not a muckraking book.
    0:43:38 You know, we’re not accusing officials of nefarious motives or corruption or, you know, it’s not the plandemic that sometimes—
    0:43:38 Yeah, it’s interesting.
    0:43:40 There are no real villains in this book.
    0:43:41 It’s not that kind of story.
    0:43:46 It’s more a story of folly than villainy.
    0:43:58 So that kind of honesty about what you know and don’t know, and this is tricky—policy making in our highly complex world is rife with uncertainty.
    0:44:05 And we have to confront that squarely in order to avoid making big mistakes.
    0:44:11 What do we know about the loss of public trust in our institutions and our government?
    0:44:13 Do we have good data on this yet?
    0:44:17 Was there a clear erosion of trust after COVID?
    0:44:18 Yes, there has been.
    0:44:21 I mean, it was already on a downward trend.
    0:44:28 I mean, trust in institutions had been on the decline for a long time, but you can see it really does markedly drop.
    0:44:35 And so it’s not just public health that is affected, also universities, and it’s also the media.
    0:44:41 They have all taken a hit, and they were all not in great shape before COVID, but they’re—well, I take that back.
    0:44:53 Public health was in pretty good shape before COVID, but I regret to say universities and the media were not in such great shape, and they’ve all suffered.
    0:44:55 I can see it in the people around me.
    0:44:59 Something was ruptured.
    0:45:00 For a lot of people.
    0:45:04 And I don’t think we quite understand it yet, but I think it was significant.
    0:45:12 And I think it has some really alarming downstream implications for our society.
    0:45:25 That is at the root of our motives in writing the book, that we want to confront this history and try to reach some kind of broader shared understanding about what happened and what it means.
    0:45:41 And so we’re trying to push that conversation so that we can, instead of just turning away from this episode, we can process it and come to at least the contours of a common understanding.
    0:45:50 Maybe that’s too much to hope for in our polarized context, but that is what we hope to be able to advance.
    0:45:52 I agree with you in principle.
    0:45:55 I also don’t know what that would actually look like.
    0:46:03 What would it mean for public officials like that to be held accountable in that way?
    0:46:10 Are we capable of doing that right now in this polarized climate in a civilized and productive way?
    0:46:22 I don’t know if, you know, it would work to sort of haul them in front of a congressional committee and watch them get roasted, if that’s what, you know, if that’s what we mean by accountability.
    0:46:33 I think accountability can happen more in society with conversations like the one we are having, with conferences and academia and the classrooms.
    0:46:45 There are some policymakers who had positions of great responsibility during the crisis who are able to have a conversation about what they got right and what they got wrong.
    0:46:47 And certainly that should be encouraged.
    0:46:50 Those can happen in societal settings.
    0:46:56 It doesn’t have to be a highly charged political setting where those conversations occur.
    0:47:01 But I think that’s the path forward towards healing these ruptures.
    0:47:08 And I do agree with you that there were profound ruptures during the pandemic in society.
    0:47:16 You know, divisions between families and friends over how they were interpreting the pandemic response.
    0:47:23 Presumably we can draw lessons from this that will help us navigate the next societal crisis.
    0:47:29 To that end, what do you think is the most important takeaway here?
    0:47:33 What lesson must we absolutely learn for the next storm?
    0:47:34 Whatever form that takes.
    0:47:42 Killer comet or climate catastrophe, fill in the blank, you know, with your favorite extinction threat.
    0:47:48 So the acknowledgement of uncertainty, the willingness to keep learning,
    0:47:54 and then resist that impulse towards moralized antagonism.
    0:48:01 You know, dismissing the perspectives of people you disagree with or who are on the other side politically.
    0:48:03 Resist that.
    0:48:06 Listen to them and try to evaluate what they say on the merits.
    0:48:12 And don’t assume that you have nothing to learn from people you think are bad people.
    0:48:16 What we saw in the pandemic was, you know, society sort of turning on itself.
    0:48:23 So Democrats blaming Republicans, Republicans blaming Democrats, you know, all these different divides,
    0:48:31 where the root problem was that this crisis was not within our control, that we did not have the technology to control or stop this crisis.
    0:48:40 All we could really do is mitigate it and sort of acknowledge our frailties as human beings.
    0:48:41 That’s difficult.
    0:48:48 It’s much easier and more comfortable just to blame the bad things that are happening on the people you don’t like anyway.
    0:48:50 And so we saw an awful lot of that.
    0:48:52 I’m going to leave it right there.
    0:48:58 Once again, the book is called In COVID’s Wake: How Our Politics Failed Us.
    0:49:01 Frances Lee, this was a pleasure.
    0:49:02 Thank you.
    0:49:02 Thank you, Sean.
    0:49:10 All right.
    0:49:12 I hope you enjoyed this episode.
    0:49:22 I certainly did in the sense that it was a reminder of how chaotic and complicated this time was
    0:49:28 and how agonizing the decisions that had to be made really were.
    0:49:33 But as always, we want to know what you think.
    0:49:37 So drop us a line at the gray area at vox.com.
    0:49:44 Or you can leave us a message on our new voicemail line at 1-800-214-5749.
    0:49:53 And if you have time after that, go ahead and rate and review and subscribe to the podcast that helps get the word to more people.
    0:50:04 This episode was produced by the gray area.
    0:50:12 And if you decide to sign up because of this show, let us know.

    There are lots of stories to tell about the Covid pandemic. Most of them, on some level, are about politics, about decisions that affected people’s lives in different — and very unequal — ways.

    Covid hasn’t disappeared, but the crisis has subsided. So do we have enough distance from it to reflect on what we got right, what we got wrong, and what we can do differently when the next crisis strikes?

    Professor Frances E. Lee — co-author of In Covid’s Wake: How Our Politics Failed Us — thinks we do. In this episode, she speaks with Sean about how our politics, our assumptions, and our biases affected decision-making and outcomes during the pandemic.

    Host: Sean Illing (@SeanIlling)

    Guest: Frances E. Lee, professor of politics and public affairs at Princeton and co-author of In Covid’s Wake: How Our Politics Failed Us

    Listen to The Gray Area ad-free by becoming a Vox Member: vox.com/members

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • Whatever this is, it isn’t liberalism

    AI transcript
    0:00:04 Thumbtack presents the ins and outs of caring for your home.
    0:00:10 Out. Indecision. Overthinking. Second-guessing every choice you make.
    0:00:16 In. Plans and guides that make it easy to get home projects done.
    0:00:21 Out. Beige. On beige. On beige.
    0:00:26 In. Knowing what to do, when to do it, and who to hire.
    0:00:29 Start caring for your home with confidence.
    0:00:31 Download Thumbtack today.
    0:00:39 Support for this show comes from ServiceNow, a company that helps people do more fulfilling work.
    0:00:41 The work they actually want to do.
    0:00:45 You know what people don’t want to do? Boring, busy work.
    0:00:49 But ServiceNow says that with their AI agents built into the ServiceNow platform,
    0:00:54 you can automate millions of repetitive tasks in every corner of a business.
    0:00:58 IT, HR, customer service, and more.
    0:01:03 And the company says that means your people can focus on the work that they want to do.
    0:01:06 That’s putting AI agents to work for people.
    0:01:07 It’s your turn.
    0:01:11 You can get started at ServiceNow.com slash AI dash agents.
    0:01:29 Those are the words of the great and now infamous Thomas Hobbes, the 17th century English philosopher.
    0:01:39 You can find them in his 1651 book, The Leviathan, which is often considered the founding text of modern political philosophy.
    0:01:47 Hobbes’ big contribution was to challenge the right of kings and religious authorities to rule.
    0:01:52 The foundation of political power for him was the consent of the governed.
    0:01:58 And the only reason to hand over authority to the state, or anyone else for that matter,
    0:02:00 was for the protection of the individual.
    0:02:05 If that sounds familiar, it’s because it is.
    0:02:12 That’s basically the political philosophy that came to dominate the Western world from the Enlightenment on.
    0:02:15 It’s what we now call liberalism.
    0:02:21 But we’re in an era where liberalism and democracy are being contested from within and without.
    0:02:28 And while I wouldn’t say that liberalism is dead, that doesn’t quite make sense.
    0:02:31 I would say that it’s wobbly.
    0:02:35 What should we make of that?
    0:02:38 Is the liberal experiment coming to an end?
    0:02:43 And if it is, what does that mean for our political future?
    0:02:50 I’m Sean Elling, and this is The Gray Area.
    0:03:03 Today’s guest is political philosopher John Gray.
    0:03:12 We spoke before last year’s elections, and lately, I have found myself returning to that conversation over and over again.
    0:03:18 In his book, The New Leviathans: Thoughts After Liberalism,
    0:03:27 Gray challenges the idea that the liberal dream of history, with a capital H, is over, and that liberal democracy has won.
    0:03:35 Hobbes is at the center of his book because he thinks Hobbes’ liberalism was more realistic in its ambitions,
    0:03:41 and that his most important lessons about the limits of politics have been forgotten.
    0:03:48 It is, as you might suspect, a challenging book, but it is an essential read.
    0:03:56 And I invited Gray onto the show to talk about what he thinks has gone wrong, and more importantly, where he thinks we’re headed.
    0:04:03 John Gray, welcome to The Gray Area.
    0:04:04 Thank you very much, Sean.
    0:04:12 What’s interesting about this new book is that you’re not even bothering to announce the death of liberalism.
    0:04:17 Like Nietzsche’s madman screaming about God in the town square.
    0:04:22 You’re saying liberalism has already passed, and most of us don’t quite know it yet.
    0:04:23 Is that right?
    0:04:33 Yes, I think there are many visible signs that anything like a liberal order or a liberal civilization has passed.
    0:04:43 In the last 30 years, shall we say, since 1990, 30-odd years, there’s been an enormous…
    0:04:52 After that moment in which it seemed that liberal democracy was going to become universal or nearly universal following the collapse of communism,
    0:04:58 What, in fact, happened was that the transition from communism to liberal democracy did not occur in Russia.
    0:05:00 It has not occurred in China.
    0:05:15 The wars that were fought, so-called wars of choice, by the United States and its followers, including Britain, in Afghanistan, Iraq, Syria, to some degree, and Libya, were all failures.
    0:05:28 None of those countries became democratic or anything near it, and, in fact, they only damaged those countries in profound ways and damaged the United States, and particularly the United States and Britain in various ways.
    0:05:39 So, I think if you just look at geopolitical trends, you can see that the so-called liberal West, if something like that ever fully existed, is in steep retreat.
    0:06:08 And in Western societies themselves, what were taken for granted, even within my lifetime, and perhaps yours, Sean, as fully accepted, liberal freedoms of speech and inquiry and expression and so forth, have been curtailed, not by a dictatorial state, interestingly, as in the former Soviet Union or today in Xi’s China, but actually by civil institutions themselves.
    0:06:27 It’s been universities and museums and publishers and media organizations, charities and cultural institutions and so on, that have imposed various kinds of limits on themselves, such that they police the expression of their members.
    0:06:36 And those who deviate from a prevailing progressive orthodoxy are in various ways canceled or excluded. That’s quite new.
    0:06:39 But it’s rather widespread now and pervasive.
    0:06:50 And although, of course, it’s true that there are enclaves of free expression, enclaves or niches like the one we’re enjoying now.
    0:07:00 Although we’re not in the position that people are in, in Xi’s China or Putin’s Russia, we can still communicate relatively freely.
    0:07:12 There are large areas of life, including the institutions I mentioned earlier, which used to be, let’s say, governed by liberal norms, and aren’t any longer.
    0:07:26 So I think it makes sense just as an empirical observation to say that liberal civilization that existed and could be described as a liberal civilization, with all its faults and flaws, doesn’t exist any longer.
    0:07:34 Of course, you might say liberalism as a theory continues to exist, but then so does medieval political theory or any modern political theory.
    0:07:36 It just doesn’t describe the world anymore.
    0:07:46 Well, let’s not get too far ahead of ourselves here, because the term liberalism is one of those big, unwieldy terms that means a million different things to a million different people.
    0:07:53 What do you mean by liberalism, just so it’s clear what we’re diagnosing the death of here?
    0:08:08 The core of liberalism as a philosophy is the idea that no one has a natural right to rule, and that all rulers, all regimes, all states serve those whom they govern.
    0:08:11 So that this is a view which differs from Plato.
    0:08:24 Plato thought that philosophers had the best authority to rule because they could better than other people perceive truths beyond the shadows of the improbable world.
    0:08:32 In Hobbes’ day, some people believed, many people believed that kings had divine right to rule.
    0:08:40 And later on, we’ve had beliefs, we have had philosophies which have developed according to which it’s the most virtuous people who should rule.
    0:08:52 And I think actually the hyper-liberal, or what is now sometimes called the woke movement, has something of that in it, which is that they imagine that they represent virtue better than, and progressiveness better than others.
    0:08:58 And therefore, they have a right at least to shape society according to their vision.
    0:09:11 But a liberal, and in this sense, Hobbes is a liberal, and I’m still a liberal in this sense, actually, is one who thinks that any sovereign, any ruler, depends for their authority on protecting the well-being of the ruled.
    0:09:15 And in liberal theory, it’s normally, liberal thoughts are normally individuals.
    0:09:19 And when it doesn’t do that, then any obligation to obey is dissolved.
    0:09:23 And Hobbes says explicitly, the book is partly about Thomas Hobbes, of course.
    0:09:30 As you know, the 17th century political philosopher that wrote the book, Leviathan, that’s why it’s called New Leviathan.
    0:09:53 Hobbes said that when the sovereign, which could be a king or a Republican assembly or a parliament or whatever, but when the sovereign fails to protect the individual from violence for other human beings, when the sovereign fails to provide security, all obligations are dissolved, and the individual can leave or kill the sovereign.
    0:09:54 Kill the sovereign.
    0:09:59 So there is a fundamental equality between the ruler and the ruled.
    0:10:00 I think that’s the core of liberalism.
    0:10:03 And in that sense, I say Hobbes is still a liberal, and so am I.
    0:10:12 But it had many, many different meanings later attached to it, about rights and progressiveness and so on, which I don’t subscribe to, and neither did Hobbes.
    0:10:22 You actually call Hobbes the first and last great liberal philosopher, which might surprise more than a few political philosopher types.
    0:10:23 Why is that?
    0:10:25 Why is he the first and the last great liberal philosopher for you?
    0:10:27 Well, it shouldn’t surprise them.
    0:10:36 If they knew a bit more than they normally do about the history of political ideas, they would know that the best 20th century scholars of Hobbes all regarded him as a liberal.
    0:10:44 So Michael Oakeshott, the British conservative philosopher, the Canadian Marxist philosopher, C.B.
    0:10:51 Macpherson, and Leo Strauss, the American conservative philosopher, they all regarded Hobbes as a liberal.
    0:10:57 And so it’s only philosophers who don’t read the history of ideas in their philosophy, which is the majority, I’m afraid.
    0:11:00 It’s only those who are surprised by it.
    0:11:01 So they shouldn’t be.
    0:11:07 But I think the sense in which he is is exactly the sense which I just mentioned earlier, which is that he doesn’t accept any natural right to rule.
    0:11:09 The most virtuous don’t have the right to rule.
    0:11:12 The cleverest or the most intelligent don’t have the right to rule.
    0:11:15 None are appointed by God to rule.
    0:11:24 States or sovereigns are human constructions or human creations, which exist only so long as they serve the purposes of those over whom they rule.
    0:11:30 And so that, I think, is still alive, that idea, not only in philosophy.
    0:11:31 I think it’s alive in the world.
    0:11:39 And there’s nowhere in the world now, there was in the past, even relatively recent past, where anyone rules by prescriptive right.
    0:11:51 If someone just says, I have the right to rule you, as our King Charles did in the Civil War in Britain in the 17th century, I have the divine right to rule.
    0:11:51 He was executed.
    0:11:53 He was executed by the parliament.
    0:11:57 So that liberal idea, I think, is still quite strong in the world.
    0:12:02 But it’s quite different from lots of other liberal ideas about progress and humanity and rights and so on.
    0:12:12 I used to teach Hobbes, and I always wondered what it was I liked so much about him, because he is so dark and gloomy.
    0:12:21 I mean, even if you’ve never read Hobbes, you probably know his famous description of human life as nasty, brutish, solitary, and short, that kind of thing.
    0:12:29 And I think what appeals to me in his thought is the tragic dimension.
    0:12:34 You know, anarchy, for him, was never something we transcend.
    0:12:36 It was something we stave off.
    0:12:38 But it remained a permanent possibility.
    0:12:45 That awful state of nature that he worried about was always lurking just beneath civilization.
    0:12:59 Do you think modern liberalism went awry when it lost sight of this and maybe drifted away from Hobbes’ very limited view of the purpose of the state, which is just to keep us from eating each other, basically?
    0:13:05 I think liberalism, over time, turned into something different.
    0:13:12 I mean, one has to say that, although historically, in terms of the history of ideas, Hobbes is definitely a liberal.
    0:13:30 Most people who’ve called themselves liberals subsequently in the 19th and 20th and 21st centuries wouldn’t regard Hobbes and don’t regard Hobbes as a liberal, because although he has this feature that sovereigns or states serve the individuals over whom they rule,
    0:13:41 He doesn’t think that what the state or the sovereign can do to provide security can be limited or should be limited by rights or some of the principles.
    0:13:42 He doesn’t think that.
    0:13:58 And that’s the sort of difficulty that many people find in thinking about Hobbes, which is that although he thinks the state has a very limited purpose, it can do anything that it judges, the sovereign judges, that will achieve that purpose.
    0:14:04 So, for example, the state in Hobbes has no obligation to respect freedom of speech.
    0:14:10 If freedom of speech harms social peace and political order, it can intervene.
    0:14:18 Hobbes even says that the sovereign can define the term, define the words used in the Bible to kind of define what those words mean.
    0:14:31 And probably when you taught him, you notice this, so that society can avoid the religious wars that were raging, had been raging in Europe in his time and around his time over what the Bible meant.
    0:14:33 Peace determines everything.
    0:14:35 So there’s no right to free speech.
    0:14:39 There’s no right to demonstrate that none of these rights can restrain the state.
    0:15:01 On the other hand, and here he’s different from modern liberals, the state can’t intervene in society, can’t curb human beings in order to achieve some idea of social justice or progress or a higher type of humanity, a more civilized or superior or ethically superior type.
    0:15:05 It can’t do that either, it shouldn’t promote virtue, it’s indifferent to those matters.
    0:15:08 So, it’s a very unfamiliar type of liberalism.
    0:15:11 But I share your view, I’m not sure it’s tragic.
    0:15:12 I would just say it’s a reality.
    0:15:20 Hobbes thought it was a reality that at any time, order in society can break down anywhere, under certain conditions, and it can happen quite quickly.
    0:15:23 In other words, order is fragile in human life.
    0:15:26 The default condition of human life is not harmony.
    0:15:29 I guess that’s where he differs from many liberals.
    0:15:34 They’ve assumed that basically human beings want to cooperate, that’s what they try and do.
    0:15:44 And if they’re thwarted, it’s by tyranny or reaction or evil demagogues or some sort of evil force which prevents them.
    0:15:45 Hobbes doesn’t assume that.
    0:15:57 Hobbes thinks the default condition of humanity is conflict and that, therefore, one can fall into brutal and terrible and uncivilized forms of that conflict at any time.
    0:16:03 And I would say that the history of the 20th century exhibited that in many ways.
    0:16:12 The main destroyers, I guess, of human life and peace and the main agencies that inflict violence then were states.
    0:16:16 But in the 21st century, they’re not necessarily states.
    0:16:19 They can be terrorist organizations or criminal gangs.
    0:16:31 And so anarchy has emerged now, I think, in the 21st century as at least as much of a threat to human security and human freedom as totalitarian and tyrannical states were in the 20th century.
    0:16:35 And that’s, I think, a relatively new development in recent times.
    0:16:39 And it’s one which, I think, makes Hobbes more topical, if you like.
    0:16:52 I mean, when it was states that were committing vast crimes, his argument that the state should be unfettered in its pursuit of peace kind of seemed weak because states weren’t pursuing peace.
    0:16:57 They were pursuing other gods and were killing countless or tens of millions of human beings.
    0:17:05 Now, it’s more often the case that states are collapsed or are destroyed.
    0:17:15 And sometimes they’re destroyed, as they were in Iraq and Afghanistan and in Libya, for example, by the attempt to bring in a better kind of state.
    0:17:25 And so I think one big error of contemporary liberalism, which has actually affected policies in America and elsewhere, has been the idea that nothing is worse than tyranny.
    0:17:34 Whereas Hobbes’ insight, his relatively simple insight, but his rather profound one, is that anarchy can be worse than tyranny.
    0:17:43 And what’s also true is that once you’re in an anarchical condition, once the state is broken down, once you’re in a failed state, it’s very difficult, actually, to reconstruct the state.
    0:17:47 Well, in what sense has liberalism, for you, passed into the dustbin of history?
    0:17:56 I mean, liberalism is still very much a thing, even if the shape of it has changed, and it is very much alive, if not terribly well.
    0:18:02 So what does it mean to say that liberalism has passed away or died or however you like to put it?
    0:18:04 Well, as I’ve said, there are still ideas.
    0:18:05 Yeah, yeah.
    0:18:12 I mean, you could go into a library and pull a book down, and it will describe medieval or ancient Greek and Roman political philosophy to you.
    0:18:14 In that sense, these ideas are alive.
    0:18:23 But in the actual world, the actual human world, liberal regimes or liberal societies or a liberal civilization, I think, is in the past.
    0:18:30 So, well, let me give you a kind of rather obvious example, since we’re talking partly in an American context.
    0:18:48 Thirty years ago, I wrote that I thought that what would happen, I quote myself, perhaps rather vainly, in this new book of mine, I wrote that what I expected to happen in the United States was that as more and more freedoms and activities became covered by rights, by legal rights,
    0:19:00 and when some of those rights did not reflect a moral consensus in society, but there were rights to do things that were morally conflicted in society, like abortion.
    0:19:03 Now, I’m pro-abortion, or rather pro-choice, but that’s irrelevant here.
    0:19:12 I thought that what would eventually happen would be that the judicial institutions, up to and including the Supreme Court, would be politicized.
    0:19:14 They’d become objects of political capture.
    0:19:33 Now, when I said that thirty-odd years ago, people like Dworkin, whom I knew in Oxford and others, were incredulous, because for them it was natural, it was some kind of settled fact of life that the majority of judges had become liberal and would stay liberal.
    0:19:34 I never thought that for a moment.
    0:19:47 I thought that a different dynamic would take place, that the more rights discourse and the practice of rights was extended to morally disputable and conflicted areas, the judicial institutions would be politicized and taken over.
    0:20:01 So that, I think, is a feature of a liberal regime or a liberal society, one in which there are judicial institutions that are not politically contested, that aren’t part of the political arena, and that’s passed away, that’s gone.
    0:20:13 And so, I think, also, has the area of private life, of life in which what you say to friends or work colleagues is not sort of justiciable, is not actionable.
    0:20:28 That’s much smaller than it used to be, certainly in Britain, which I know well, and I’m pretty sure it is in America, too, in that what used to be a private conversation could be cited against you because it deviates from some progressive norm.
    0:20:40 So, the defining features of liberalism, not as a philosophy that exists in libraries, but as a practiced set of institutions and norms, have at least become weaker.
    0:20:44 And I would say it’s pretty well gone now, and I don’t expect it to come back.
    0:20:57 We’ll be back with more of my conversation with John Gray after a quick break.
    0:21:14 Support for the Gray Area comes from Quince.
    0:21:18 Vacation season is nearly here, and when you go on vacation, you want to dress the part.
    0:21:25 Whether that’s breathable linen for summer nights, like I wear, or comfy leggings for long plane rides, like I also wear,
    0:21:39 You can treat yourself with Quince’s high-quality travel essentials at fair prices, like Quince’s lightweight shirts and shorts from $30, and comfortable lounge sets, like the ones I wear, with premium luggage options and durable duffel bags to carry it all.
    0:21:41 The best part?
    0:21:45 Quince says their items are priced 50% to 80% less than similar brands.
    0:21:48 Our colleague, Claire White, got to check out Quince.
    0:21:54 I received the leather pouch travel set from Quince, and I love them.
    0:21:56 They are so versatile.
    0:22:01 They fit a lot while still looking great and maintaining a really high quality of leather.
    0:22:06 For your next trip, treat yourself to the luxe upgrades you deserve from Quince.
    0:22:13 You can go to Quince.com slash grayarea for 365-day returns, plus free shipping on your order.
    0:22:20 That’s Q-U-I-N-C-E dot com slash grayarea to get free shipping and 365-day returns.
    0:22:22 Quince.com slash grayarea.
    0:22:29 Support for the gray area comes from Bombas.
    0:22:32 It’s time for spring cleaning, and you can start with your sock drawer.
    0:22:36 Bombas can help you replace all your old, worn-down pairs.
    0:22:38 Say you’re thinking of getting into running this summer.
    0:22:43 Bombas engineers blister-fighting, sweat-wicking athletic socks that can help you go that extra mile.
    0:22:50 Or if you have a spring wedding coming up, they make comfortable dress socks, too, for loafers, heels, and all your other fancy shoes.
    0:22:52 I’m a big runner.
    0:22:53 I talk about it all the time.
    0:23:00 But the problem is that I live on the Gulf Coast, and it’s basically a sauna outside for four months of the year, maybe five.
    0:23:07 I started wearing Bombas athletic socks for my runs, and they’ve held up better than any other socks I’ve ever tried.
    0:23:13 They’re super durable, comfortable, and they really do a great job of absorbing all that sweat.
    0:23:15 And right now, Bombas is going international.
    0:23:19 You can get worldwide shipping to over 200 countries.
    0:23:25 You can go to bombas.com slash gray area and use code gray area for 20% off your first purchase.
    0:23:29 That’s B-O-M-B-A-S dot com slash gray area.
    0:23:32 Code gray area for 20% off your first purchase.
    0:23:34 Bombas.com slash gray area.
    0:23:36 Code gray area.
    0:23:43 Support for the gray area comes from Shopify.
    0:23:47 Creating a successful business means you have to be on top of a lot of elements.
    0:23:54 You need a product with demand, a focus brand, a steady hand, and a gray area ad budget of at least $100,000.
    0:23:56 You also need savvy marketing.
    0:23:58 But that one didn’t rhyme.
    0:24:01 And of course, there’s the business behind the business.
    0:24:03 The one that makes selling things easy.
    0:24:06 For a lot of companies, that business is Shopify.
    0:24:12 According to their data, Shopify can help you boost conversions by up to 50% with their ShopPay feature.
    0:24:19 That basically means less people abandoning their online shopping carts and more people going through with the sale.
    0:24:24 If you want to grow your business, your commerce platform should be built to sell wherever your customers are.
    0:24:29 Online, in-store, in their feed, and everywhere in between.
    0:24:32 Businesses that sell, sell more with Shopify.
    0:24:36 You can upgrade your business and get the same checkout Mattel uses.
    0:24:40 You can sign up for your $1 per month trial period at Shopify.com slash Vox.
    0:24:42 All lowercase.
    0:24:46 Go to Shopify.com slash Vox to upgrade your selling today.
    0:24:48 Shopify.com slash Vox.
    0:25:11 As you know, Nietzsche thought that liberalism was rooted in these Christian ideas about human equality and the value of the human person.
    0:25:21 But modern liberals rejected the religious roots of these values while still attempting to preserve them on secular grounds.
    0:25:25 That was a move he thought was destined to fail.
    0:25:36 You seem to think that Hobbesian liberalism was intended to be a kind of political atheism, but it eventually shape-shifted into something like a political religion.
    0:25:39 Only it didn’t recognize itself as such.
    0:25:41 Is that sort of the core problem here?
    0:25:42 Or one of them?
    0:25:55 One of the core problems – I mean, I think I talk at some length in the book when I discuss the way in which John Stuart Mill, who I think for many liberals is still a canonical liberal, or even the canonical liberal.
    0:26:06 But he explicitly, undeniably, and overtly adopted a view from Auguste Comte, the French positivist thinker, who was an anti-liberal, actually.
    0:26:17 But anyway, he adopted from Comte the idea of a religion of humanity, which he said should replace all the existing religions and would be better than any of the existing religions.
    0:26:22 He explicitly took that from Comte and cited and said that and wrote that in several places.
    0:26:30 So, I think it was probably in Mill, at least in Britain, that liberalism became itself a kind of religion.
    0:26:39 But, of course, there are still many respects in which it secularized monotheistic assumptions or values or premises.
    0:26:59 So, I think it is undoubtedly the case, historically, that liberalism was a set of footnotes to, particularly the liberalism that later emerged as a kind of religion in its own right, to monotheism, to Christian and Jewish monotheism, and as a competitor to it.
    0:27:07 And, basically, liberals, conventional liberals, 90% of liberals, are adamantly resistant to this view.
    0:27:19 They adamantly insist that their views at no point depend on anything in theism, and they would say it's a kind of genetic fallacy to think that just because something may have come from theism,
    0:27:20 it still depends on it.
    0:27:23 But it’s actually, I think, quite difficult.
    0:27:33 You know, it has become more difficult for me to identify what I am, and it’s not just because the fault lines around me are so scrambled.
    0:27:41 I think on some level, it’s because, and maybe I’m projecting a little bit onto Hobbes, I have a pretty tragic view of political life.
    0:27:54 And because of that, I have a fairly modest understanding of the goal of politics, which is to navigate this tension between order and chaos with the understanding that nothing is permanent.
    0:27:59 Everything is contingent, and history has no ultimate direction.
    0:28:04 I mean, in so many ways, this was the political lesson of the 20th century.
    0:28:13 And after a handful of decades of liberal triumphalism, which is barely a blink in historical time, by the way, people seem to have forgotten this.
    0:28:16 And this is probably where you and I are maybe most aligned.
    0:28:20 But you don’t think the belief in progress is a complete delusion, right?
    0:28:22 I mean, the world has indeed gotten much, much better.
    0:28:26 It’s just that that progress isn’t fixed, and it’s dangerous to believe otherwise.
    0:28:27 Well, I don’t know.
    0:28:35 I mean, what I say in the book is about what progress meant to those who believed in it.
    0:28:40 It didn’t mean that things would get better for a while and then get worse.
    0:28:43 I guess it meant two things, both of which are false.
    0:28:53 One is that progress was cumulative in the sense that what was achieved in one generation could be carried on in the next generation.
    0:28:54 That’s what meliorism was.
    0:29:04 Meliorism as a philosophy isn't just the idea or the belief that some societies or some parts of history are better than others.
    0:29:08 I think everybody would accept that, whatever their values are, actually.
    0:29:13 But it was the belief that the human lot could be cumulatively improved.
    0:29:19 That’s to say that certain achievements could be embedded and they would remain fixed.
    0:29:21 You could have some retrogression.
    0:29:27 You could go from stair seven on the escalator of progress back to stair three.
    0:29:33 But then the stairs would start moving again and you would get back to seven.
    0:29:35 And then you could get to eight or nine.
    0:29:40 So you might make two steps back, but you would then make two or three steps forward.
    0:29:41 That was meliorism.
    0:29:43 And I think that’s clearly false.
    0:29:46 You might be tempted to think that it was true if you thought of only the last 300 years.
    0:29:51 But if you look at the larger sweep of history, there was no apocalyptic revelation 300 years ago,
    0:29:53 no apocalyptic change in human events.
    0:30:00 Human beings remained what they were before that in ancient Greece and ancient China and elsewhere.
    0:30:01 And in medieval times.
    0:30:07 They remained basically, I think, still what they were in their natures and appetites and so on.
    0:30:10 And so meliorism in that sense is false.
    0:30:21 Well, one thing that seems obvious enough at this moment is that liberal societies are experiencing a lot of internal disruption.
    0:30:29 I mean, maybe the only thing that really unites the far right and the far left is their contempt for the society that produced them.
    0:30:36 And you say something in the book that I think cuts right to the core of this.
    0:30:40 And I just want to read it to you and ask you what you mean by that.
    0:30:48 You say in its current and final phase, the liberal West is possessed by an idea of freedom.
    0:30:51 What does it mean to be possessed by an idea of freedom?
    0:31:07 Well, the sense in which I use it in the book is the sense in which late 19th century intellectuals in Tsarist Russia were possessed by an idea of freedom, which is that a particular idea of freedom comes to be prevalent.
    0:31:30 That means not the reduction of coercion by other human beings or by the state, not a set of procedures which enables people to live together, not a set of norms of tolerance or peaceful coexistence or even of mutual indifference, which enable people to live together in some rough and ready way.
    0:31:32 Freedom means self-creation.
    0:31:36 Freedom means creating yourself as the person you want to be.
    0:31:40 And that idea, I think, is definitely not in Hobbes.
    0:31:43 It’s not even in Locke or other liberals.
    0:31:44 But it is in Mill.
    0:32:00 It is in the chapter of Mill's essay On Liberty where he talks about individuality, where he says that anyone who inherits their way of living, or what we would now call their identity, from society, from conventions, from traditions, from history, lacks individuality.
    0:32:10 Individuality means being the author of your own life, changing it, fashioning it as if it was a work of art so that it fits something unique and authentic about yourself.
    0:32:14 And I think that is what the West is possessed by.
    0:32:30 Because the reason it’s an impossible ideal to realize is that if you want to author your life in a certain way and have a certain identity, it doesn’t mean much or anything unless that identity is somehow accepted by others as well.
    0:32:34 Otherwise, it’s just a fiction of yours or a dream.
    0:32:49 And that’s, I think, one of the things that’s provoked deep conflict in Western society, because the underlying idea, a strong version of autonomy as self-creation, has become part not of the far right or the far left.
    0:32:52 It’s not that which has produced the present conflict.
    0:32:54 It’s not the far right or the far left.
    0:32:57 It’s become part of liberal thinking and practice itself.
    0:33:04 And that, I guess, goes back to Mill and to romantic theorists and philosophers who Mill read.
    0:33:10 It’s an element in the liberal tradition that wasn’t very strong there, or perhaps wasn’t present at all, but it’s very, very strong now.
    0:33:22 So I guess that’s what I mean by being possessed by an idea of freedom: that unless you can be what you want to be, and unless you can actually somehow have that validated by others, you’re not free.
    0:33:24 Well, that’s not really possible.
    0:33:34 And I think the more traditional liberal idea of toleration is that you don’t have to be fully validated by other people and they don’t have to be fully validated by you.
    0:33:42 You can simply rub along as the different, miscellaneous personalities and contingent human beings that you are.
    0:33:48 That seems to me a more achievable ideal, but it’s not one that satisfies many people today.
    0:33:50 Not many liberals, anyway.
    0:33:56 Yeah, I mean, I think that the pursuit of individual freedom is good.
    0:34:03 The desire to free ourselves from our inherited identities is good and necessary.
    0:34:15 But we do seem to run into a ditch if we pursue it too far, because, as I think you’re saying, the pursuit of self-definition doesn’t end with the self, because no one can be wholly self-defined.
    0:34:18 So it becomes a political contest for recognition.
    0:34:22 And I don’t think liberal politics are equipped to handle that very well or for very long.
    0:34:32 Well, I agree with that, especially if it becomes a matter of rights, because then, of course, you have a perpetual conflict between the rights of rival groups, basically.
    0:34:41 Especially if these identities are framed in ways which are antagonistic or polarized, it’s a recipe for unending conflict.
    0:34:49 I’m not sure, you see; I wouldn’t even go as far as you do in saying that wanting to free oneself from traditional identities is necessarily good.
    0:34:56 I think some people want it so they can go ahead and live like that in what used to be called a liberal society if they want to.
    0:35:02 But others might be quite happy to just jog along with whatever they’ve inherited and be left alone.
    0:35:04 I think people should have the choice is what I was saying.
    0:35:05 I don’t mean imposing that.
    0:35:06 No, no, not imposing.
    0:35:07 But, you see, I don’t even think it’s better.
    0:35:09 I don’t think one is better than the other.
    0:35:11 I think they’re just preferences, actually.
    0:35:24 And so I would never say, as Mill constantly does, that people who accept their inherited identities are inferior; he doesn’t use the word, but he implies it all the way throughout.
    0:35:34 He suggests that they’re not themselves, that they obey convention by rote, that they’re puppet-like creatures, and I wouldn’t say any of that.
    0:35:39 There may be those, I mean, who want to construct themselves, turn themselves into works of art, if you like.
    0:35:41 They can go ahead and try.
    0:35:44 But quite a lot of people, at least in the past, didn’t want to do that.
    0:35:48 And I think there are still quite a lot of people who don’t want to do that now.
    0:35:55 And they should have as much freedom and as much respect, it’s an important point, I would say, as these others.
    0:36:15 I mean, the key point, I guess, of the book is that the problems of liberal society or the fact that it’s passed away, as I claim, isn’t something that’s happened, as many conservatives or leftists or others say, because liberalism has been sidelined by Marxism or post-modernism or some other philosophy.
    0:36:25 The problems of liberal societies come from within liberal societies themselves.
    0:36:36 And the problems it’s generated, the contradictions it’s generated, have proved to be ones that liberalism is not very good at resolving.
    0:36:58 This contemporary obsession with self-expression and self-creation and status and that sort of thing, do you see that as symptomatic of some deep failure of liberal politics, that this was bound to happen because liberal politics did not and cannot satisfy this kind of need?
    0:37:09 No, I mean, that’s a kind of Hegelian view or a Fukuyama-like view, which says that what people want is recognition and that liberal societies haven’t been able to, etc., etc.
    0:37:28 I think that the main challenges to liberal societies now are quite different, which is that the economic model of liberal society, which was adopted after the collapse of communism, after the Cold War, has left large parts of society behind, not just minorities.
    0:37:37 There have been working-class communities in Britain and America and parts of Europe which have just been more or less abandoned.
    0:37:54 But also, large parts of what used to be called the middle classes have not seen their incomes or their standards of living improve much or at all in the last 30 years, while the societies as a whole have gotten considerably better.
    0:38:16 So, consider the economic model of Western liberal societies, the dominant one after the Cold War. We tend to forget now, although it’s within my lifetime, that after the Second World War and during the Cold War there was a model of social democracy in which the state intervened in many different ways to smooth out the hard edges of market capitalism and constrain it.
    0:38:24 I think the abandonment of that model after the end of the Cold War has led to deep-seated contradictions, but maybe they’re not what you’re referring to.
    0:38:26 They are, certainly in part.
    0:38:43 I mean, I’m glad you said that, because one of the things that irks me about a lot of right-wing types who like to rail against identity politics or wokeism, a term I really hate to use because it has been stretched to the point of meaninglessness, in our discourse at least, is this:
    0:38:56 There is this whole materialist history to be told about the failures of liberal capitalism, and those failures have produced a lot of our political pathologies, and a lot of people on the right don’t want to hear about that, and I think that’s a huge mistake.
    0:39:05 I agree with you, and in fact, I say in the book, it’s a very simple point, but very hard for many liberals, right-wing liberals in particular, to understand.
    0:39:17 I say that what these people call populism is the political blowback against the social disruption produced by their own policies, which they don’t understand or deny.
    0:39:19 That’s what populism is.
    0:39:29 They talk about populism as if it was a sort of demonic thing that arose from nowhere, that it was a few demagogues that whipped it up out of practically nothing.
    0:39:48 I’m not saying there aren’t demagogues, but the reason the demagogues were successful in 2016 and later, and not in 1950 or 60 or 70s in Europe and America, is that there were periods, certainly in Europe, and to some extent even in America, of social democracy,
    0:40:01 in which there was a more extensive state, the Eisenhower state, the Rooseveltian state, even before that in America, which limited the impact of market capitalism on human well-being and provided some protection for its casualties.
    0:40:17 If you scrap that, which was done to a considerable extent after the end of the Cold War, then over time you create large sections of the population which are suffering and dislocated, or simply have no place in the productive process.
    0:40:20 And you’ve got to expect some sort of kickback.
    0:40:22 So that’s what liberals call populism.
    0:40:28 They call populism the political movements around them that they have caused, which they don’t understand.
    0:40:30 That’s what populism is, basically.
    0:40:33 But you could never get that across to them, actually.
    0:40:36 I’ve tried to do this, and they say, but it’s the demagogues.
    0:40:37 It’s Trump.
    0:40:38 It’s Boris Johnson.
    0:40:40 It’s Nigel Farage.
    0:40:41 It’s all these wicked people.
    0:40:44 If you could only shut these wicked people up, everything would be fine.
    0:40:46 Or some of them say it’s the Russians.
    0:40:59 So what they’re doing is they’re denying, or maybe just not understanding, maybe they’re just stupid, they’re just not understanding why these movements have arisen when they did.
    0:41:10 I guess the problem for me, and this is why I’m still basically a liberal, is that I don’t think any of the conservative alternatives are preferable for a thousand different reasons.
    0:41:14 And I’m not a fan of any imaginable version of authoritarianism.
    0:41:17 So I don’t really have anywhere else to go, ideologically.
    0:41:18 Liberalism, it is.
    0:41:20 It’s up to you.
    0:41:27 But it depends how far you think the degeneration of liberal society has gone and how far it can remain livable.
    0:41:42 I mean, one of the things in Europe now is that the far right in many European countries, not in Britain yet, but in France and Germany, is now a very substantial political bloc.
    0:41:51 In other words, there isn’t a flawed liberal society around us, uncontested, which can carry on pretty well whatever happens.
    0:42:01 There are powerful movements, not exactly like in the 30s, but there are powerful far right movements, and in some countries also far left movements, which are challenging it.
    0:42:07 So the liberal position might be a kind of luxury of history that is now passing away.
    0:42:19 We’ll be back with more of my conversation with John Gray after one more quick break.
    0:42:40 Support for the gray area comes from Mint Mobile.
    0:42:43 There are a couple ways people say data.
    0:42:44 There’s data.
    0:42:46 Then there’s data.
    0:42:47 Me, personally?
    0:42:48 I say data.
    0:42:49 I think.
    0:42:50 Most of the time.
    0:42:56 But no matter how you pronounce it, it doesn’t change the fact that most data plans cost an arm and a leg.
    0:43:00 But with Mint Mobile, they offer plans starting at just 15 bucks a month.
    0:43:02 And there’s only one way to say that.
    0:43:04 Unless you say $15, I guess.
    0:43:13 But no matter how you pronounce it, all Mint Mobile plans come with high-speed data and unlimited talk and text delivered on the nation’s largest 5G network.
    0:43:16 You can use your own phone with any Mint Mobile plan.
    0:43:20 And you can bring along your phone number with all your existing contacts.
    0:43:23 No matter how you say it, don’t overpay for it.
    0:43:26 You can shop data plans at mintmobile.com slash gray area.
    0:43:29 That’s mintmobile.com slash gray area.
    0:43:34 Upfront payment of $45 for a three-month, five-gigabyte plan required.
    0:43:36 Equivalent to $15 per month.
    0:43:39 New customer offer for first three months only.
    0:43:41 Then full price plan options available.
    0:43:43 Taxes and fees extra.
    0:43:45 See Mint Mobile for details.
    0:43:51 Support for the gray area comes from Found.
    0:43:57 When you’re a small business owner, making sure your bookkeeping and taxes stay in order comes at a cost.
    0:43:59 And not just a financial cost.
    0:44:01 It can also take a lot of time.
    0:44:03 Well, that can change with Found.
    0:44:08 Found is a banking platform that says it doesn’t just consolidate your financial ecosystem.
    0:44:14 It automates manual activities like expense tracking and finding tax write-offs.
    0:44:18 Found says they can make staying on top of invoices and payments easy.
    0:44:21 And they say small businesses are loving Found.
    0:44:24 According to the company, one Found user said this.
    0:44:27 Found is going to save me so much headache.
    0:44:29 It makes everything so much easier.
    0:44:34 Expenses, income, profits, taxes, invoices even.
    0:44:37 That’s just one of their 30,000 five-star reviews.
    0:44:42 You can open a Found account for free at found.com slash gray area.
    0:44:46 Spelled F-O-U-N-D dot com slash gray area.
    0:44:49 Found is a financial technology company, not a bank.
    0:44:54 Banking services are provided by Piermont Bank, member FDIC.
    0:44:56 You don’t need to put this one off.
    0:45:01 You can join thousands of small business owners who have streamlined their finances with Found.
    0:45:07 For as long as I can remember, bread has given me hiccups.
    0:45:11 I always get the hiccups when I eat baby carrots.
    0:45:16 Sometimes when I am washing my left ear, just my left ear, I hiccup.
    0:45:20 And my tried and true hiccup cure is…
    0:45:27 Pour a glass of water, light a match, put the match out in the water, drink the water, throw away the match.
    0:45:31 Put your elbows out, point two fingers together and sort of stare at the point between the fingers.
    0:45:35 It doesn’t work if you bring your elbows down, but it works.
    0:45:38 Just eat a spoonful of peanut butter.
    0:45:39 Think of a green rabbit.
    0:45:42 I taught myself to burp on commands like…
    0:45:46 Excuse me.
    0:45:51 And I discovered that when I make myself burp, it stops my hiccups.
    0:45:56 Unexplainable is taking on hiccups.
    0:45:57 What causes them?
    0:46:00 And is there any kind of scientific cure?
    0:46:03 Follow Unexplainable for new episodes every Wednesday.
    0:46:29 I sometimes wonder how long America can continue to exist with the level of fragmentation and internal confusion that we have.
    0:46:31 And the same is true of much of Europe.
    0:46:38 How easy is it for you to imagine a political future where America and Europe cease to exist in any recognizable form?
    0:46:41 Well, Europe doesn’t exist in any recognizable form.
    0:46:44 There isn’t a European super-state, and there isn’t going to be.
    0:46:50 What there are are a variety of nation-states with internal problems of various kinds.
    0:46:52 And so I think that will basically continue.
    0:46:54 They might shift into becoming a kind of…
    0:47:00 I mean, what’s been happening in the last few years is that they’re shifting into becoming almost a hard-right bloc.
    0:47:06 Not that the far-right has taken over, though some people might say it did in Hungary and did in Poland for a while.
    0:47:10 But it’s the far-right which is shaping policy on lots of issues.
    0:47:12 But it won’t become a super-state.
    0:47:18 As to America, I don’t expect the American state to fragment by way of secession…
    0:47:24 I mean, I know some Americans talk about that, and Texans and Californians and others.
    0:47:25 I don’t actually expect that.
    0:47:38 I would more expect a kind of semi-stable, semi-anarchy, in which there are lots of regions of American society and of cities and so on which are semi-anarchical.
    0:47:42 That’s also true in places like Mexico, is it not, and parts of Latin America.
    0:47:45 That could go on for quite a long time.
    0:47:50 The big change, I guess, if I’m right, will be in the capacity of America to project its power globally.
    0:47:52 I think that is steeply declining.
    0:47:59 And I think that, within your and my lifetime, will actually be seen to be greatly diminished.
    0:48:07 Because although America, the U.S., still has an enormous amount of hard firepower, more than anywhere else, actually, China’s catching up.
    0:48:13 But also, its capacity to use that hard firepower intelligently has not been very great.
    0:48:30 You actually say something pretty interesting, if that’s the right word, about America in the book, which is that it’s become Schmittian in the sense that we believed, rather foolishly, that the law could protect liberal values from political contestation.
    0:48:35 But the law has become indistinguishable from politics.
    0:48:39 And Trump just pushed us right past the threshold.
    0:48:48 And now we’re in, in my estimation, just a full-blown legitimacy crisis, where it doesn’t even matter who wins the next election.
    0:48:50 Something like 30% of the country will consider the result illegitimate.
    0:48:51 Do you agree with that, by the way?
    0:48:52 That is what I think.
    0:48:53 But do you agree with that?
    0:48:53 Do I agree with what?
    0:48:56 That America’s in a legitimation crisis.
    0:48:56 Oh, yes.
    0:48:58 I’ve written this many times.
    0:49:00 It doesn’t matter who wins the next election.
    0:49:03 Something like 30% of the country will consider it illegitimate.
    0:49:04 That’s not liberal politics, John.
    0:49:07 That’s something much closer to war, really.
    0:49:09 Well, it’s what Schmitt thought politics was.
    0:49:10 Friends and enemies.
    0:49:18 And I think the achievement of liberalism, in its various forms, was to replace that war with something else, or at least to attenuate it.
    0:49:21 I mean, this was true, by the way, even in my time.
    0:49:23 Let me give you an autobiographical example.
    0:49:34 During the Thatcherite period, when I was an active Thatcherite, I remained on terms of close friendship with leading members, both theoreticians and even politicians, in the Labour Party.
    0:49:49 So, we could meet, we could have dinner, we could talk with each other, we could share ideas; we didn’t agree, didn’t share goals; they thought this great Thatcherite experiment could come to grief in various different ways, as I then came to think myself, for slightly different reasons, and so on.
    0:49:54 But in America, I would say, that’s actually rare, I would think.
    0:49:58 Is it now uncommon for people to interact in that way?
    0:50:02 How many Trumpists have friendly relations with Washington Post liberals?
    0:50:03 Not many, I think.
    0:50:06 No, I’d say that’s right, and that’s becoming increasingly so.
    0:50:09 That’s unfortunate, because that’s the triumph of the Schmittian model.
    0:50:13 It’s the triumph of friend-enemy relations.
    0:50:19 And once you’ve gotten to friend-enemy relations, I think you’re in deep trouble, at least from a liberal standpoint.
    0:50:25 It’s very hard to get back from that situation, because both sides want to win.
    0:50:27 And that means it’s a sort of downward spiral.
    0:50:31 Very hard to, I don’t say impossible, you know, something could happen that we haven’t thought of.
    0:50:33 But it’s very difficult to get out.
    0:50:35 So I agree completely with you.
    0:50:41 And it’s one of the things I constantly say, which is that, in one sense, it’s very important who wins the American election next year.
    0:50:45 Because if it’s Trump, the changes will be huge and quick, I believe.
    0:50:53 But in another sense, it doesn’t matter at all, because whoever wins will not be accepted, as you say, by maybe a quarter or a third of American society, American voters.
    0:50:58 So the legitimization crisis will just get worse, whoever wins.
    0:51:06 That’s a very profound fact of the world, because the world still depends on a kind of shadow of Pax Americana.
    0:51:09 It still depends on that, or has depended on that.
    0:51:23 And as that is comprehensively removed, I mean, if Trump pulls American forces out of Europe, if he winds up NATO, if he pulls out of the Gulf, where there is now the new Middle Eastern war, that would be a very profound change.
    0:51:31 Yeah, I think the unfortunate truth is that liberalism doesn’t really have a solution to a legitimacy crisis.
    0:51:34 No, I agree with you entirely, which is why it’s so difficult to speculate.
    0:51:46 I mean, I don’t expect any new order to emerge from this, whether of the right or the left, but just continued disintegration, not a descent into civil war in America.
    0:51:47 I’m not an American.
    0:51:52 It’s been some time since I’ve been there. I spent a long time in America in the 70s and 80s and 90s.
    0:51:55 So I knew it better then than I do now.
    0:51:58 But I don’t expect a full-scale civil war.
    0:52:16 But I can imagine a fairly long period, decades, you know, maybe generations, of civil warfare, when different identity groups, different political ideologies, different parts of America, American states and municipalities, just go their own way, with lots of the conflicts that that involves.
    0:52:33 But with a kind of area of high technology, which I think will still exist, an oligarchy which preserves its own position one way or another, while the rest of society does as best it can.
    0:52:35 I mean, large parts of it abandoned.
    0:52:42 That’s what I sort of expect: a kind of hybrid like that could go on for an awfully long time.
    0:52:50 I don’t think America faces the internal pressures that, say, Russia does, because Russia has powerful ethnic divisions and minorities within it.
    0:52:58 And the state apparatus in Russia, although more ruthless and more violent domestically, is much more corroded and much more corrupt.
    0:53:07 So I think there is a real possibility that Russia could actually break up, whereas I don’t actually, you may be more optimistic or less hyperbolic, if you like, than I am.
    0:53:09 I don’t see that as likely in America.
    0:53:13 I think just continuing decay is a much more likely prospect.
    0:53:15 Yeah, I would agree with that.
    0:53:17 I have no idea what’s going to happen.
    0:53:23 I take some solace in the fact that, at least in America, we’ve survived much, much worse in our past.
    0:53:29 And, you know, we may just lumber along in this interregnum for a very, very long time.
    0:53:31 It may be a very long interregnum.
    0:53:32 It might be.
    0:53:35 And look, maybe we need a new order.
    0:53:48 My fear has always been that the road from the present order to the next one is historically a rather bumpy one, and one probably none of us want to take.
    0:53:53 And I’d prefer to fix the world we have before we tear it down.
    0:53:54 But I don’t know.
    0:53:58 Again, I’m not in the prophecy business, so I don’t know what’s going to happen.
    0:54:04 I mean, I’ve been talking about this idea of politics as tragedy, too, for the last few years.
    0:54:12 And what some liberals and others say is, they say, well, we want to get to a world where tragedy is diminished.
    0:54:23 Now, very few of them say a world where there is no tragedy, though some of them have said we want to get to a world in which the only tragedies are failed love affairs or familial disputes and so on.
    0:54:26 We’ll never get to a world like that, I’m sure.
    0:54:46 But the danger, I think, of trying to eliminate tragedy in politics is that in order to survive in any political system, and to gain, retain, and exercise the power you would need to get to a society in which tragedy is supposedly diminished or mitigated or abolished,
    0:54:54 you have to enter into tragic choices which replicate the tragedy you’re trying to get rid of, trying to transcend.
    0:55:09 So, for example, one of the things that happens in all revolutions, certainly in all the European, Russian, Chinese revolutions and so on, is that once the old regime fails, if it’s really knocked down and fails, then the revolutionary contestants fight among themselves.
    0:55:11 And the one that prevails is the one that’s the most ruthless.
    0:55:19 So that in the Soviet Union, in early Soviet Russia, which I know the best, the anarchists were the first to be suppressed.
    0:55:22 Then the Socialist Revolutionaries, because they were less well organized and less ruthless.
    0:55:29 So what actually produces the authoritarianism is the struggle by the revolutionary groups against each other.
    0:55:30 And that always happens.
    0:55:37 And that sort of illustrates my deeper point, which is that in order to get to a supposedly post-tragic world,
    0:55:49 all kinds of ruthless, tragic decisions have had to be made: shooting anarchists en masse, assassinating, murdering, and putting in camps various dissidents, and so on.
    0:55:55 And once you’ve done that, you’re back in the world of tragic choices, which, actually, you’ve never left.
    0:56:05 So I would much prefer a politics which accepted that tragedy was primordial and omnipresent and always would be, and worked from that.
    0:56:14 I mean, this is why I’ve had a kind of Occam’s razor approach to tragedy, which is that the aim should be not to multiply tragedies beyond what is strictly necessary.
    0:56:19 And don’t go around multiplying them by trying to create new regimes all over the place.
    0:56:21 Tragedy in politics isn’t imperfectibility.
    0:56:23 We have no idea of perfection.
    0:56:26 It isn’t that progress is always reversible and ephemeral.
    0:56:28 It’s something deeper than that.
    0:56:38 It’s that there are recurring situations in politics, and always will be, in which whatever we do has deep and enduring losses attached to it.
    0:56:40 And I think that will always be the case.
    0:56:42 So I think that’s what I prefer.
    0:56:59 But I think in order to get a view of the world like that, you do actually have to go back before Christianity, maybe to the Book of Job, but also to ancient Greek tragedy, where there’s no ultimate redemption at all, actually.
    0:57:16 It recurs a bit in Shakespeare later on in a Christian civilization, but you have to go all the way back to the Greek tragic dramas to get that sense that human beings are not autonomous in the sense of being ever able to shape the choices they have to make.
    0:57:23 Tragedies are unchosen choices, choices that human beings don’t want to make and would prefer not to make, but have to make.
    0:57:30 Once again, the book is called The New Leviathans, Thoughts After Liberalism.
    0:57:32 John Gray, always a pleasure.
    0:57:33 Thank you for coming in today.
    0:57:35 Great pleasure on my part as well.
    0:57:37 Let’s have another conversation in a couple of years, shall we?
    0:57:38 Let’s do it.
    0:57:59 Patrick Boyd engineered this episode.
    0:58:02 Alex Overington wrote our theme music.
    0:58:04 And A.M. Hall is the boss.
    0:58:09 As always, let us know what you think of the episode.
    0:58:17 Drop us a line at thegrayareaatvox.com and share the show with your friends, family, and anyone else who will listen.
    0:58:20 New episodes of The Gray Area drop on Mondays.
    0:58:21 Listen and subscribe.
    0:58:41 Support for The Gray Area comes from Greenlight.
    0:58:47 School can teach kids all kinds of useful things, from the wonders of the atom to the story of Marbury v. Madison.
    0:58:51 One thing schools don’t typically teach, though, is how to manage your finances.
    0:58:55 So those skills fall primarily on you, the parent.
    0:58:57 But don’t worry, Greenlight can help.
    0:59:02 Greenlight says they offer a simple and convenient way for parents to teach kids smart money habits,
    0:59:06 while also allowing them to see what their kids are spending and saving.
    0:59:11 Plus, kids can play games on the app that teach money skills in a fun, accessible way.
    0:59:16 The Greenlight app even includes a chores feature, where you can set up one-time or recurring chores,
    0:59:21 customized to your family’s needs, and reward kids with allowance for a job well done.
    0:59:25 My kids are a bit too young to talk about spending and saving and all that,
    0:59:30 but one of our colleagues here at Vox uses Greenlight with his two boys, and he absolutely loves it.
    0:59:35 Start your risk-free Greenlight trial today at Greenlight.com slash gray area.
    0:59:38 That’s Greenlight.com slash gray area to get started.
    0:59:41 Greenlight.com slash gray area.

    What exactly is the basis for democracy?

    Arguably liberalism, the belief that the government serves the people, is the stone on which modern democracy was founded. That notion is so ingrained in the US that we often forget that America could be governed any other way. But political philosopher John Gray believes that liberalism has been waning for a long, long time.

    He joins Sean to discuss the great liberal thinker Thomas Hobbes and America’s decades-long transition away from liberalism.

    Host: Sean Illing (@SeanIlling)

    Guest: John Gray, political philosopher and author of The New Leviathans: Thoughts After Liberalism

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • A new way to listen

    AI transcript
    0:00:05 Hey, it’s Sean Illing. I wanted to tell you some exciting news and ask for your help.
    0:00:10 Okay, the exciting part first. Vox members now get ad-free podcasts.
    0:00:13 That’s right. Think of all the time you can save.
    0:00:17 It’s just one of the great benefits you get for directly supporting our work.
    0:00:26 Vox members also get unlimited reading on our website, member-exclusive newsletters, and more special perks as a thank you.
    0:00:31 Now I want to ask for your help. Vox is an independent publication.
    0:00:38 That means we rely on support from listeners like you to produce journalism that the world really needs right now.
    0:00:43 At Vox, we strive to help you understand what really matters in our world.
    0:00:51 That’s why we report on the most important issues shaping our world and also on truly essential stories that others neglect.
    0:00:55 We can only do that because of support from people like you.
    0:01:03 So if you’d like to support our work and get ad-free listening on our podcast, go to vox.com slash members today.
    0:01:05 That’s vox.com slash members.

    We have an exciting announcement! Vox Members now get access to ad-free podcasts. If you sign up, you’ll get unlimited access to reporting on vox.com, exclusive newsletters, and all of our podcasts — including The Gray Area — ad-free. Plus, you’ll play a crucial role in helping our show get made.

    Check it out at vox.com/members.

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • The beliefs AI is built on

    AI transcript
    0:00:04 There are over 500,000 small businesses in B.C. and no two are alike.
    0:00:05 I’m a carpenter.
    0:00:06 I’m a graphic designer.
    0:00:08 I sell dog socks online.
    0:00:12 That’s why BCAA created One Size Doesn’t Fit All insurance.
    0:00:15 It’s customizable based on your unique needs.
    0:00:18 So whether you manage rental properties or paint pet portraits,
    0:00:23 you can protect your small business with B.C.’s most trusted insurance brand.
    0:00:28 Visit bcaa.com slash smallbusiness and use promo code radio to receive $50 off.
    0:00:29 Conditions apply.
    0:00:39 There’s a lot of uncertainty when it comes to artificial intelligence.
    0:00:44 Technologists love to talk about all the good these tools can do in the world.
    0:00:46 All the problems they might solve.
    0:00:56 And yet, many of those same technologists are also warning us about all the ways AI might upend society.
    0:01:04 It’s not really clear which, if either, of these narratives is true.
    0:01:08 But three things do seem to be true.
    0:01:11 One, change is coming.
    0:01:15 Two, it’s coming whether we like it or not.
    0:01:20 Hell, even as I write this document, Google Gemini is asking me how it can help me today.
    0:01:21 It can’t.
    0:01:24 Today’s intro is 100% human-made.
    0:01:31 And finally, it’s abundantly clear that AI will affect all of us.
    0:01:39 Yet, very few of us have any say in how this technology is being developed and used.
    0:01:43 So, who does have a say?
    0:01:47 And why are they so worried about an AI apocalypse?
    0:01:51 And how are their beliefs shaping our future?
    0:01:57 I’m Sean Illing, and this is The Gray Area.
    0:02:13 My guest today is Vox host and editorial director, Julia Longoria.
    0:02:27 She spent nearly a year digging into the AI industry, trying to understand some of the people who are shaping artificial intelligence, and why so many of them believe that AI is a threat to humanity.
    0:02:33 She turned that story into a four-part podcast series called Good Robot.
    0:02:39 Most stories about AI are focused on how the technology is built and what it can do.
    0:02:51 Good Robot, instead, focuses on the beliefs and values, and most importantly, fears, of the people funding, building, and advocating on issues related to AI.
    0:03:07 What she found is a set of ideologies, some of which critics and advocates of AI adhere to, with an almost religious fervor, that are influencing the conversation around AI, and even the way the technology is built.
    0:03:22 Whether you’re familiar with these ideologies or not, they’re impacting your life, or certainly they will impact your life, because they’re shaping the development of AI as well as the guardrails, or lack thereof, around it.
    0:03:29 So I invited Julia onto the show to help me understand these values, and the people who hold them.
    0:03:39 Julia Longoria, welcome to the show.
    0:03:41 Thank you for having me.
    0:03:46 So, it was quite the reporting journey we went on for this series.
    0:03:48 It’s really, really well done.
    0:03:51 So, first of all, congrats.
    0:03:51 Thank you.
    0:03:56 Thank you for having me on that, and we’re actually going to play some clips from it today.
    0:03:57 I’m glad you enjoyed it.
    0:04:03 It’s, it’s, you’re in, I’m in that, you know, nerve-wracking first few weeks when it comes out, so it makes me feel good to hear that.
    0:04:13 So, going into this thing, you wanted to understand why so many people are worried about an AI apocalypse.
    0:04:18 And whether you should be afraid, too. We will get to the answers, I promise.
    0:04:23 But why were these the motivating questions for you?
    0:04:29 You know, I come to artificial intelligence as a normie, as people in the know called me.
    0:04:32 I don’t know much about it.
    0:04:33 I didn’t know much about it.
    0:04:38 But I had the sense, as an outsider, that the stakes were really high.
    0:04:52 And it seemed like people talked about it in a language that I didn’t understand, and talking about these stakes that felt like really epic, but kind of like impenetrable to someone who didn’t speak their language.
    0:05:02 So, I guess I just wanted to start out with, like, the biggest, most epic, like, almost most ignorant question, you know, like, okay, people are afraid.
    0:05:06 There, some people are afraid that AI could just wipe us all out.
    0:05:07 Where does that fear come from?
    0:05:18 And just have that be a starting point to break the ice of this area that, like, honestly has felt kind of intangible and hard for me to even wrap my head around.
    0:05:27 Yeah, I mean, I appreciate your normie status, because that’s the position almost all of us are in.
    0:05:35 You know, we’re on the outside looking in, trying to understand what the hell is happening here.
    0:05:40 What did being a normie mean to you as you waded into this world?
    0:05:46 I mean, did you find that that outside perspective was actually useful in your reporting?
    0:05:48 Definitely, yeah.
    0:05:52 I think that’s kind of how I try to come to any topic.
    0:05:59 Like, I’ve also reported on the Supreme Court, and that’s, like, another world that speaks its own dense, impenetrable language.
    0:06:06 And, you know, like the Supreme Court, like, artificial intelligence affects all of our lives deeply.
    0:06:20 And I feel like because it is such a, you know, sophisticated technology, and the people who work in it are so deep in it, it’s hard for normies to ask the more ignorant questions.
    0:06:31 And so I feel like having the microphone and being armed with, you know, my Vox byline, I was able to ask the dumb question.
    0:06:36 And, you know, I think I always said, like, you know, I know the answer to some of these questions.
    0:06:41 But I’m asking on behalf of, like, the listener.
    0:06:42 And sometimes I knew the answer.
    0:06:43 Sometimes I didn’t.
    0:07:04 I don’t know about you, but for me, and I’m sure a lot of people listening, it is maddening to be continually told that, you know what, we might be on the wrong end of an extinction event here, caused by this tiny minority of non-normies building this stuff.
    0:07:13 And that it’s possible for so few to make decisions that might unravel life for the rest of us is just, well, maddening.
    0:07:14 It is maddening.
    0:07:15 It is maddening.
    0:07:19 And to even hear it be talked about, like, this affects all of us.
    0:07:22 So shouldn’t we, shouldn’t it be the thing that we’re all talking about?
    0:07:29 But it feels like it’s reserved for a certain group of people who get to make the decisions and get to set the terms of the conversation.
    0:07:39 Let’s talk about the ideologies and all the camps that make up this weird, insular world of AI.
    0:07:44 And I want to start with the, what you call the AI safety camp.
    0:07:46 What is their deal?
    0:07:48 What should we know about them?
    0:07:54 So AI safety is a term that’s evolved over the years.
    0:08:07 But it’s kind of like people who fear that AI could be an existential risk to humanity, whether that’s like AI going rogue and doing things we didn’t want it to do.
    0:08:13 It’s about the biggest worry, I guess, of all of us being wiped out.
    0:08:18 We never talked about a cell phone apocalypse or an internet apocalypse.
    0:08:22 I guess maybe if you count Y2K.
    0:08:25 But even that wasn’t going to wipe out humanity.
    0:08:30 But the threat of an AI apocalypse, it feels like it’s everywhere.
    0:08:34 Mark my words, AI is far more dangerous than nukes.
    0:08:39 From billionaire Elon Musk to the United Nations.
    0:08:46 Today, all 193 members of the United Nations General Assembly have spoken in one voice.
    0:08:48 AI is existential.
    0:08:55 But then it feels like scientists in the know can’t even agree on what exactly we should be worried about.
    0:08:59 And where does the term AI safety come from?
    0:09:12 We trace the origin to a man named Eliezer Yudkowsky, who, you know, I think not all AI safety people today agree with Eliezer Yudkowsky.
    0:09:16 But basically, you know, Eliezer Yudkowsky wrote about this fear.
    0:09:24 Actually, as a teenager, he became popular, sort of found his following when he wrote a Harry Potter fan fiction.
    0:09:26 As one does.
    0:09:27 As one does.
    0:09:31 It’s actually one of the most popular Harry Potter fan fictions out there.
    0:09:34 It’s called Harry Potter and the Methods of Rationality.
    0:09:36 And he wrote it almost as a way.
    0:09:38 Love it.
    0:09:44 He wrote it almost as a way to get people to think differently about AI.
    0:09:53 He had thought deeply about the possibility of building an artificial intelligence that was smarter than human beings.
    0:09:55 Like, he kind of imagined this idea.
    0:10:02 And at first, he imagined it as a good robot, which is the name of the series, that could save us.
    0:10:13 But, you know, eventually he realized, like, or came to fear that it could probably go very poorly if we built something smarter than us, that it would, it could result in it killing us.
    0:10:18 So, anyway, that’s the origin, but it’s sort of, his ideas have caught on.
    0:10:27 OpenAI, actually, the CEO, Sam Altman, talks about how Eliezer was like an early inspiration for him making the company.
    0:10:36 They do not agree on a lot because Eliezer thinks OpenAI, the ChatGPT company, is on track to cause an apocalypse.
    0:10:42 But, anyway, that’s, that’s the gist, is like, AI safety is like, AI could kill us all.
    0:10:43 How do we prevent that?
    0:10:51 So, it’s really, it’s about, it’s focused on the sort of long-range existential risks.
    0:10:51 Correct.
    0:10:53 And some people don’t think it’s long-range.
    0:10:57 Some of these people think that that could happen very soon.
    0:11:02 So, this Yudkowsky guy, right, he makes these two general claims, right?
    0:11:07 One is that we will build an AI that’s smarter than us, and it will change the world.
    0:11:14 And the second claim is that to get that right is extraordinarily difficult, if not impossible.
    0:11:19 Why does he think it’s so difficult to get this right?
    0:11:22 Why is he so convinced that we won’t?
    0:11:28 He thinks about this in terms of thought experiments.
    0:11:38 So, just kind of taking, taking this premise that we could build something that outpaces us at most tasks.
    0:11:47 He tries to explain the different ways this could happen with these, like, quirky parables.
    0:11:54 And we start with his most famous one, which is the paperclip maximizer thought experiment.
    0:12:00 Suppose, in the future, there is an artificial intelligence.
    0:12:13 We’ve created an AI so vastly powerful, so unfathomably intelligent, that we might call it superintelligent.
    0:12:18 Let’s give this superintelligent AI a simple goal.
    0:12:21 Produce…
    0:12:23 Paperclips
    0:12:33 Because the AI is superintelligent, it quickly learns how to make paperclips out of anything in the world.
    0:12:41 It can anticipate and foil any attempt to stop it, and will do so because its one directive is to make more paperclips.
    0:12:50 Should we attempt to turn the AI off, it will fight back because it can’t make more paperclips if it is turned off.
    0:12:55 And it will beat us because it is superintelligent and we are not.
    0:12:57 The final result?
    0:13:08 The entire galaxy, including you, me, and everyone we know, has either been destroyed or been transformed.
    0:13:15 Into paperclips.
    0:13:31 The gist is, we build something so smart we fail to understand it, how it works, and we could try to give it good goals to help improve our lives.
    0:13:39 But maybe that goal has an unintended consequence that could lead to something catastrophic that we couldn’t have even imagined.
    0:13:45 Right, and it’s such a good example because a paperclip is like the most innocuous, trivial thing ever, right?
    0:13:47 Like what could possibly go wrong?
    0:13:52 Is Yudkowsky, even within the safety camp, on the extremes?
    0:13:57 I mean, I went to his website, and I just want to read this quote.
    0:13:59 He writes,
    0:14:07 It’s obvious at this point that humanity isn’t going to solve the alignment problem, or even try very hard, or even go out with much of a fight.
    0:14:15 Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with slightly more dignity.
    0:14:17 I mean, come on, dude.
    0:14:19 It’s so dramatic.
    0:14:23 I mean, that, he seems convinced that the game is already up here.
    0:14:27 We’re just, we just don’t know how much sand is left in the hourglass.
    0:14:31 I mean, is he on the margins even within this camp, or is this a fairly representative view?
    0:14:32 Definitely, yeah.
    0:14:32 Okay.
    0:14:35 No, no, it’s, he’s on the margins, I would say.
    0:14:37 It’s, he’s like an extreme case.
    0:14:40 He had a big influence on the industry early on.
    0:14:47 So, in that sense, he was like an early influencer of all these people who ended up going into AI.
    0:14:50 A lot of people I talked to went into AI because of his writings.
    0:14:53 I can’t square that circle, right?
    0:14:54 If they were influenced by him.
    0:14:55 No.
    0:14:56 And this whole thing is, don’t do this, we’re going to die.
    0:14:58 Why are they doing it?
    0:15:03 To me, it felt like similar to the world of religion, almost like a schism.
    0:15:10 Believers in the superintelligence, and then people who thought we shouldn’t try and build it, and then the people who thought we should.
    0:15:21 Yeah, I mean, I, I guess with any kind of grand thinking about the fate of humanity, you end up with these, it starts to get very religious-y very quickly,
    0:15:26 even if it’s cloaked in the language of science and secularism, as this is.
    0:15:31 The religious part of it, I mean, did that, did the parallels there jump out to you pretty immediately?
    0:15:43 That, that the people at the level of ideology are treating this, thinking about this, as though it is a religious problem or a religious worldview?
    0:15:44 It really did.
    0:16:01 It did jump out at me really early, because I think, like, going into reporting on a technology, you expect to be kind of bogged down by technological language and terminology that’s, like, in the weeds of whatever, computer science or whatever it is.
    0:16:14 But, but the words that were hard to understand were, like, superintelligence and AGI, and then hearing about, you know, the CEO of OpenAI, Sam Altman, talking about a magic intelligence in the sky.
    0:16:18 And the question I had was, like, what are these guys talking about?
    0:16:21 But it was almost like they were talking about a god, is what it felt like to me.
    0:16:23 Yeah.
    0:16:24 All right.
    0:16:27 I have some thoughts on the religious thing, but let me table that for a second.
    0:16:30 I think we’ll, we’ll end up circling back to that.
    0:16:35 I want to finish our little survey of the, of the tribes, the gangs here.
    0:16:39 The other camp you talk about are the, the AI ethicists.
    0:16:40 What’s their deal?
    0:16:42 What are they concerned about?
    0:16:48 How are they different from the safetyists who are focused on these existential problems or risks?
    0:17:01 Yeah, the AI ethicists that I spoke to came to AI pretty early on, too, like, just a couple years, maybe after, a few years after Eliezer was writing about it.
    0:17:02 They were working on algorithms.
    0:17:06 They were working on AI as it existed in the world.
    0:17:08 So that, that was a key difference.
    0:17:11 They weren’t thinking about things in, like, these hypotheticals.
    0:17:24 But AI ethicists, where AI safety folks tend to worry about the ways in which AI could be an existential risk in the future, it could wipe us out.
    0:17:32 AI ethicists tended to worry about harms that AI was doing right now, in the present.
    0:17:56 Whether that was through, you know, governments using AI to surveil people, bias in AI data, the data that went into building AI systems, you know, racial bias, gender bias, and ways that algorithmic systems were making racist decisions, sexist decisions, decisions that were harmful to disabled people.
    0:17:57 They were worried about things now.
    0:17:59 Tell me about Margaret Mitchell.
    0:18:10 She’s a researcher and a colorful character in the series, and she’s an ethicist, and she coined the everything is awesome problem.
    0:18:12 Tell me about that.
    0:18:15 That’s an interesting example of the sorts of things they worry about.
    0:18:22 Yeah, so Margaret Mitchell was working on AI systems in the early days, like long before we had ChatGPT.
    0:18:28 She was working on a system at Microsoft that was vision to language.
    0:18:35 So it was taking a series of images of a scene and trying to describe it in words.
    0:18:43 And so she, you know, she was giving the system things like images of weddings or images of different events.
    0:18:50 And she gave the system a series of images of what’s called the Hempstead Blast.
    0:19:03 It was at a factory, and you could see from the sequence of images that the person taking the photo had like a third-story view sort of overlooking the explosion.
    0:19:11 So it was a series of pictures showing that there was this terrible explosion happening, and whoever was taking the photo was very close to the scene.
    0:19:20 So I put these images through my system, and the system says, wow, this is a great view.
    0:19:22 This is awesome!
    0:19:35 The system learned from the images that it had been trained on that if you were taking an image from, you know, from above, down below, like, that that’s a great view.
    0:19:42 And that if there were, like, all these, you know, different colors, like in a sunset, which the explosion had made all these colors, that that was beautiful.
    0:19:52 And so she saw really early on before, you know, this AI moment that we’re living, that the data that these systems are trained on is crucial.
    0:20:01 And so her worry with systems like ChatGPT are, they’re trained on, like, basically the entire internet.
    0:20:08 And so the technologists making the system lose track of, like, what kinds of biases could be in there.
    0:20:13 And, yeah, this is, like, sort of her origin story of worrying about these things.
    0:20:25 And she went and worked for Google’s AI ethics team and later was fired after trying to get a paper published there about these worries.
    0:20:31 So why is the everything is awesome problem a problem, right?
    0:20:39 I mean, I guess someone may hear that and go, well, okay, that’s kind of goofy and quirky that an AI would interpret a horrible image in that way.
    0:20:44 But what actual harm is that going to cause in the world?
    0:20:45 Right.
    0:21:06 I mean, the way she puts it is, you know, if you were training a system to, like, launch missiles and you gave it some of its own autonomy to make decisions, like, you know, she was like, you could have a system that’s, like, launching missiles in pursuit of the aesthetic of beauty.
    0:21:10 So, in a sense, it’s a bit of a thought experiment on its own, right?
    0:21:19 It’s like she’s not worried about this in particular, but worried about implications for biased data in future systems.
    0:21:21 Yeah, it’s the same thing with the paperclip example, right?
    0:21:27 It’s just, it’s unintended, the bizarre and unintended consequences of these things, right?
    0:21:34 What seems goofy and quirky at first may, a few steps down the road, be catastrophic, right?
    0:21:38 And if you’re not, if you can’t predict that, maybe you should be a little careful about building it.
    0:21:40 Right, right, exactly.
    0:21:52 So, do the AI ethics people in general, do they think the concerns about an extinction event or existential threats, do they think those concerns are valid?
    0:22:01 Or do they think they’re mostly just science fiction and a complete distraction from, you know, actual present-day harms?
    0:22:10 I should say at the outset that, you know, I found that the AI ethics and AI safety camps, they’re less camps and more of a spectrum.
    0:22:18 So, I don’t want to say that every single AI ethics person I spoke to was like, these existential risks are nonsense.
    0:22:28 But by and large, people I spoke to in the ethics camp said that these existential risks are a distraction.
    0:22:37 It’s like this epic fear that’s attention grabbing and, you know, goes viral and takes away from the harms that AI is doing right now.
    0:22:45 It takes away attention from those things and it, crucially, in their view, takes away resources from fighting those kinds of harms.
    0:22:46 In what way?
    0:23:04 You know, I think when it comes to funding, if you’re like a billionaire who wants to give money to companies or charities or, you know, causes and you want to leave a legacy in the world, I mean, do you want to make sure that data and AI systems are unbiased or do you want to make sure that you save humanity from apocalypse, you know?
    0:23:09 Yeah. I should ask about the effective altruists.
    0:23:15 They’re another camp, another school of thought, another tradition of thought, whatever you want to call it, that you talk about in the series.
    0:23:19 How do they fit in to the story? Or how are they situated?
    0:23:24 Yeah. So, effective altruism is a movement that’s had an effect on the AI industry.
    0:23:37 It’s also had an effect on Vox. Future Perfect is the Vox section that we collaborated with to make Good Robot and it was actually inspired by effective altruism.
    0:23:52 The whole point of the effective altruism movement is to try to do the most good in the world and EA, as it’s sometimes called, comes up with a sort of formula for how to choose which causes you should focus on and put your efforts toward.
    0:24:07 So, early rationalists like Eliezer Yudkowsky encountered early effective altruists and tried to convince them that the highest stakes issue of our time, the cause that they should focus on is AI.
    0:24:18 Effective altruism is traditionally known to give philanthropic dollars to things like malaria nets, but they also gave philanthropic dollars to saving us from an AI apocalypse.
    0:24:27 And so a big part of how the AI safety industry was financed is that effective altruism rallied around it as a cause.
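    To make the ‘formula’ mentioned above concrete: effective altruists often rank causes by a rough expected-value calculation, on the order of (probability the intervention works) times (scale of what is at stake) divided by (cost). The numbers below are invented purely for illustration, not real EA estimates, but they show how counting enormous numbers of hypothetical future people can let a highly speculative cause swamp a concrete one.

```python
# Invented illustration of expected-value cause ranking.
# None of these figures are real estimates; they only show the shape of the arithmetic.
causes = {
    # cause: (probability the money helps, lives at stake, cost in dollars)
    "malaria nets":                (0.9, 1_000_000, 500_000_000),
    "averting far-future AI doom": (1e-6, 1e16, 1_000_000_000),  # stake includes possible future people
}

for name, (p_helps, lives_at_stake, cost) in causes.items():
    expected_lives_per_dollar = p_helps * lives_at_stake / cost
    print(f"{name:30s} {expected_lives_per_dollar:10.4f} expected lives saved per dollar")

# malaria nets                       0.0018 expected lives saved per dollar
# averting far-future AI doom       10.0000 expected lives saved per dollar
```

    Because both the probability and the stake for the speculative cause are guesses, the ranking is only as meaningful as those guesses, which is exactly the ‘you could justify anything’ worry raised later in the conversation.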
    0:24:36 These are the people who think we really have an obligation to build a good robot in order to protect future humans.
    0:24:40 And again, I don’t know what they mean by good.
    0:24:44 I mean, good and bad, those are value judgments.
    0:24:45 This is morality, not science.
    0:24:49 There’s no utility function for humanity.
    0:24:58 It’s like, I don’t know who’s defining the goodness of the good robot, but I’ll just say that I don’t think it’s as simple as some of these technologists seem to think it is.
    0:25:03 And maybe I’m just being annoying philosophy guy here, but whatever, here I am.
    0:25:12 Yeah, no, I think everyone in the AI world that I talk to just like was really striving toward the good, like whatever that looked like.
    0:25:17 Like AI ethics saw like the good robot as a specific set of values.
    0:25:23 And folks in effective altruism were also like baffled by like, how do I do the most good?
    0:25:28 And trying to use math to, you know, put a utility function on it.
    0:25:35 And it’s like, the truth is a lot more messy than a math problem of how to do the most good.
    0:25:36 You can’t really know.
    0:25:41 And yeah, I think sitting in the messiness is hard for a lot of us.
    0:25:50 And I don’t know how you do that when you’re fully aware that you’re building or attempting to build something that you don’t fully understand.
    0:25:51 That’s exactly right.
    0:26:01 Like in the series, like we tell the story of effective altruism through the parable of the drowning child, of this child who’s drowning in a pond, a shallow pond.
    0:26:04 Okay.
    0:26:08 On your way to work, you pass a small pond.
    0:26:14 Children sometimes play in the pond, which is only about knee deep.
    0:26:17 The weather’s cool, though, and it’s early.
    0:26:21 So you’re surprised to see a child splashing about in the pond.
    0:26:31 As you get closer, you see that it is a very young child, just a toddler, who’s flailing about, unable to stay upright or walk out of the pond.
    0:26:35 You look for the parents or babysitter, but there’s no one else around.
    0:26:40 The child is unable to keep her head above the water for more than a few seconds at a time.
    0:26:43 If you don’t wade in and pull her out, she seems likely to drown.
    0:26:53 Wading in is easy and safe, but you will ruin the new shoes you bought only a few days ago and get your suit wet and muddy.
    0:27:01 By the time you hand the child over to someone responsible for her and change your clothes, you’ll be late for work.
    0:27:04 What should you do?
    0:27:12 Are you going to save it even though you ruin your suit?
    0:27:15 Everyone answers, yes.
    0:27:23 And this sort of utilitarian philosophy behind effective altruism asks, well, what if that child were far away from you?
    0:27:25 Would you still save it if it was oceans away from you?
    0:27:28 And that’s where you get to malaria nets.
    0:27:32 You’re going to donate money to save children across an ocean.
    0:27:38 But, yeah, this idea of, like, well, what if the child hasn’t been born yet?
    0:27:43 And that’s the future child that would die from an AI apocalypse.
    0:27:49 But, like, abstracting things so far in advance, you could really just justify anything.
    0:27:51 And that’s the problem, right?
    0:27:52 Yeah, right.
    0:28:09 Of focusing on the long term in that way, the willingness to maybe overlook or sacrifice present harms in service to some unknown future, that’s a dangerous thing.
    0:28:19 There are dangers in being willfully blind to present harms because you think there’s some more important or some more significant harm down the road.
    0:28:27 And you’re willing to sacrifice that harm now because you think it’s, in the end, justifiable.
    0:28:30 Yeah, at what point are you starting to play God, right?
    0:28:37 So I come from the world of political philosophy, and in that maybe equally weird world.
    0:28:48 Whenever you have competing ideologies, what you find at the root of those disagreements are very different views about human nature, really.
    0:28:53 And all the differences really spring from that divide.
    0:28:58 Is there something similar at work in these AI camps?
    0:29:11 Do you find that these people that you talk to have different beliefs about how good or bad people are, different beliefs about what motivates us, different beliefs about our ability to cooperate and solve problems?
    0:29:15 Is there a core dispute at that basic level?
    0:29:22 There’s a pretty striking demographic difference between AI safety folks and AI ethics folks.
    0:29:26 Like, I went to a conference, two conferences, one of each.
    0:29:38 And so immediately you could see, like, AI safety folks were skewed white and male, and AI ethics folks skewed, like, more people of color, more women.
    0:29:44 And so, like, people talked about blind spots that each camp had.
    0:30:01 And so if you’re, you know, a white male moving around the world, like, you’re not fearing the sort of, like, racist, sexist, ableist, like, consequences of AI systems today as much, because it’s just not in your view.
    0:30:30 It’s been a rough week for your retirement account, your friend who imports products from China for the TikTok shop, and also Hooters.
    0:30:35 Hooters has now filed for bankruptcy, but they say they are not going anywhere.
    0:30:39 Last year, Hooters closed dozens of restaurants because of rising food and labor costs.
    0:30:47 Hooters is shifting away from its iconic skimpy waitress outfits and bikini days, instead opting for a family-friendly vibe.
    0:30:54 They’re vowing to improve the food and ingredients, and staff is now being urged to greet women first when groups arrive.
    0:30:57 Maybe in April of 2025, you’re thinking, good riddance?
    0:31:01 Does the world still really need this chain of restaurants?
    0:31:09 But then we were surprised to learn of who exactly was mourning the potential loss of Hooters.
    0:31:11 Straight guys who like chicken, sure.
    0:31:14 But also a bunch of gay guys who like chicken?
    0:31:19 Check out Today Explained to find out why exactly that is, won’t ya?
    0:31:36 Did all the people you spoke to, regardless of the camps they were in, did they all more
    0:31:43 or less agree that what we’re doing here is attempting to build God, or something God-like?
    0:31:45 No, I think no.
    0:31:52 A lot of, I would say a lot of the AI safety people I spoke to like bought into this idea
    0:31:55 of a super intelligence and a God-like intelligence.
    0:31:59 I should say, I don’t think that’s every AI safety person by any means.
    0:32:06 But AI ethics people for the most part just didn’t buy, just completely, everyone I spoke
    0:32:14 to talked about it as being just AI hype as a way to like amp up the capability of this
    0:32:18 technology that’s really in its infancy and is not God-like at this point.
    0:32:27 I saw that when Sam Altman, the CEO of OpenAI, he was on Joe Rogan’s podcast and he was asked
    0:32:30 whether they’re attempting to build God and he said, I have the quote here, I guess it comes
    0:32:35 down to a definitional disagreement about what you mean by it becomes a God.
    0:32:39 I think whatever we create will be subject to the laws of physics in this universe.
    0:32:40 Okay.
    0:32:44 So, so God or no God.
    0:32:45 Right.
    0:32:45 Yeah.
    0:32:47 I mean, it’s, it’s, he’s called it though.
    0:32:49 I don’t know if it’s tongue in cheek.
    0:32:53 It’s all like very, you know, hard to read, but he’s called it like the magic intelligence
    0:32:54 in the sky.
    0:33:02 And Anthropic’s CEO has called AI systems machines of loving grace, which sounds like this is religious
    0:33:03 language, you know?
    0:33:04 Okay.
    0:33:05 Come on now.
    0:33:09 What in the world is that supposed to mean?
    0:33:12 What is a machine of loving grace?
    0:33:14 Does he know what that means?
    0:33:21 I think it’s like this, you know, it’s a very optimistic view of what machines can do for
    0:33:21 us.
    0:33:26 Like, you know, the idea that machines can help us cure cancer.
    0:33:27 And I don’t know.
    0:33:32 I think that’s ultimately probably what he means, but it does, there’s an element of
    0:33:36 it that I just completely, you know, roll my eyes, raise my eyebrows at where it’s like,
    0:33:43 I don’t think we should be so reverent of a technology that’s like flawed and needs to
    0:33:44 be regulated.
    0:33:47 And I think that reverence is dangerous.
    0:33:55 Why do you think it matters that people like Altman or the CEO of Anthropic have reverence
    0:33:57 or have reverence for machines, right?
    0:33:59 Who cares if they think they’re building God?
    0:34:03 Does it matter really in terms of what it will be and how it will be deployed?
    0:34:11 Well, I think that if you believe you’re, if you have these sorts of delusions of grandeur
    0:34:16 about what you’re making and if you talk about it as a machine of loving grace, like, I don’t
    0:34:23 know, it seems like you don’t have the level of skepticism that I want you to be having.
    0:34:27 And we’re not regulating these companies at this point.
    0:34:29 We’re relying on them to regulate themselves.
    0:34:34 So yeah, it’s a little worrying when you talk about building something so powerful.
    0:34:37 And so intelligent and you’re not being checked.
    0:34:38 Yeah.
    0:34:43 I don’t expect my toaster to tell me it loves me in the morning, right?
    0:34:45 I just want my bagels crispy.
    0:34:48 But I understand that my toaster is a technology.
    0:34:49 It’s a tool with a function.
    0:34:55 To talk about machines of loving grace suggests to me that these people do not think they’re
    0:34:56 just building tools.
    0:34:58 They think they’re building creatures.
    0:34:59 They think they’re building God.
    0:35:00 Yeah.
    0:35:05 And, you know, Margaret Mitchell, as you’ll hear in the series, she talks about how she
    0:35:07 thinks we shouldn’t be building a God.
    0:35:13 We should be building, you know, machines, AI systems that are going to fulfill specific
    0:35:14 purposes.
    0:35:18 Like specifically, she talks about a smart toaster that makes really good toast.
    0:35:26 And I don’t think she means a toaster in particular, but just building systems that are designed
    0:35:32 to help humans achieve a certain goal, like something specific out in the world.
    0:35:40 Whether that’s, you know, like helping us figure out how proteins fold or helping us figure out
    0:35:45 how animals communicate, which are some of the things that we’re using AI to do in a narrow way.
    0:35:52 She talks about this as an artificial narrow intelligence, as distinct from artificial general
    0:35:58 intelligence, which is sort of the super intelligent God AI that’s, you know, quote unquote, smarter
    0:36:00 than us at most tasks.
    0:36:07 I mean, this is an old idea in the history of philosophy that God is like fundamentally
    0:36:09 just a projection of human aspirations, right?
    0:36:15 That our image of God is really a mirror that we’ve created, a mirror that reflects our idea
    0:36:17 of a perfect being, a being in our image.
    0:36:23 And this is something you talk about in the series, and that this is what we’re doing with AI.
    0:36:31 We’re building robots in our image, which, you know, raises the question, well, in whose image exactly, right?
    0:36:35 If AI is a mirror, it’s not a mirror of all of us, is it, right?
    0:36:37 It’s a mirror of the people building it.
    0:36:44 And the people building it are, I would say, not representative of the entire human race.
    0:36:53 Yeah, you’ll hear in the series, like, I latched on to this idea of, like, AI is a mirror of us.
    0:36:58 And that’s so interesting that, like, yeah, God, the concept of God is also like a mirror.
    0:37:04 But if you think about it, I mean, large language models are made from basically the Internet,
    0:37:09 which is, like, all of our thoughts and our musings as humans on the Internet.
    0:37:14 It’s a certain lens on human behavior and speech.
    0:37:22 But it’s also, yeah, like, AI is, like, the decisions that its creators make of what data to use,
    0:37:25 of how to train the system, how to fine-tune it.
    0:37:30 And when I used ChatGPT, it was very complimentary of me.
    0:37:33 And I found it to be this almost, like, smooth, smooth…
    0:37:35 It charmed you. You got charmed.
    0:37:39 Yeah, I got charmed. It was, like, so, it gave me the compliments I wanted to hear.
    0:37:48 And I think it’s, like, this smooth, frictionless version of humanity where it compliments us and makes us feel good.
    0:37:53 And it also, like, you know, you don’t have to write that letter of recommendation for your person.
    0:37:55 You don’t have to write that email.
    0:37:57 You could just… It’s just smooth and frictionless.
    0:38:10 And I worry that, you know, in making this, like, smooth mirror of humanity, like, where do we lose our humanity if we keep relying, like, keep seeding more and more to AI systems?
    0:38:17 I want it to be a tool to help us, like, achieve our goals rather than, like, this thing that replaces us.
    0:38:24 Yeah, I won’t lie. I mean, I did. I just recently got my ChatGPT account.
    0:38:29 And I did ask it what it thought of Sean Illing, host of the Gray Area podcast.
    0:38:30 What did it say?
    0:38:31 And it was very complimentary.
    0:38:35 It’s extremely, extremely generous.
    0:38:38 And I was like, oh, shit, yeah, this thing gets it.
    0:38:41 Oh, this is okay. All right.
    0:38:41 Maybe it is a god.
    0:38:42 Now I trust it.
    0:38:45 Clearly it’s an all-knowing, omnipotent one.
    0:38:53 That’s what I came away with, like, you know, from the series and the reporting is, like, I think before I used to be very afraid of AI and using it and not knowing.
    0:38:59 And now I feel, like, armed to be skeptical in the right ways and to try to use it for good.
    0:39:03 Yeah. So that’s what I hope people get out of the series anyway.
    0:39:15 Are you worried about us losing our humanity or just becoming so different that we don’t recognize ourselves anymore?
    0:39:20 I am worried that it’ll just make us more isolated.
    0:39:30 And it’s so good at giving us what we want to hear that we won’t, like, you know, find the friction, search for the friction in life that makes life worth living.
    0:39:42 Yeah, yeah. So, look, I mean, the different camps may disagree about a lot, but they seem to converge on the basic notion that this technology is transformative.
    0:39:45 It’s going to transform our lives.
    0:39:55 It’s probably going to transform the economy and the way this stuff gets developed and deployed and the incentives driving it are really going to matter.
    0:40:08 Is it your sense that checks and balances are being put in place to guide this transformation so that it does benefit more people than it hurts, or at least as much as possible?
    0:40:11 I mean, was this something you explored in your reporting?
    0:40:17 Yeah, I mean, you know, I think a lot of the people I spoke to really wanted regulation.
    0:40:25 But I think ultimately, like, there isn’t really regulation in the U.S. on the AI safety front or the AI ethics front.
    0:40:32 The technology is dramatically outpacing regulators’ ability to regulate it.
    0:40:35 So, that’s troubling. Like, it’s not great.
    0:40:42 I would imagine the ethicists would be a little more focused on imposing regulations now.
    0:40:45 But it doesn’t seem like they’re making a lot of headway on that front.
    0:40:48 I’m not sure how regulatable it is.
    0:41:07 Yeah, I think that was one of my frustrations just listening to all this infighting was, like, I felt like these two groups that, like, they have a lot in common and they should be pursuing, like, a common goal of getting some good regulation, of, you know, having some strong safeguards in place for both AI safety and AI ethics concerns.
    0:41:16 And ultimately, you know, we tell the story of how some of them did come together to write an open letter calling for both kinds of regulations.
    0:41:21 But they’ve not, you know, and that’s encouraging to see people working together.
    0:45:29 But ultimately, I don’t think they’ve made, at this point, strides in getting anything significant passed.
    0:41:30 You know, it’s interesting.
    0:41:33 You’re reporting on this in the series.
    0:41:38 And our employer, Vox, has a deal with OpenAI.
    0:41:45 And in the course of your reporting, you were trying to find out what you could about that deal.
    0:41:49 How did that go, if you’re comfortable talking about it?
    0:41:50 Yeah, yeah.
    0:41:55 Yeah, so our, the parent, we should say, the parent company of our Vox, Vox Media.
    0:41:58 I know the language I need to use.
    0:42:00 I have it down pat, as you can tell.
    0:42:14 But, you know, kind of shortly after we decided to tackle AI in this series, we learned that Vox Media was entering a partnership with OpenAI, the ChatGPT company.
    0:42:20 We learned it meant that OpenAI could train its models on our journalism.
    0:42:29 And I guess for me personally, it just felt like I wanted to know if they were training on my voice, you know?
    0:42:30 Yeah, me too.
    0:42:33 That, to me, feels really, yeah, really personal.
    0:42:35 Like, there’s so much emotional information in a voice.
    0:42:42 Like, I feel very naked going out on air and having people listen to my voice.
    0:42:46 And I spend so much time carefully crafting what I say.
    0:42:52 And so the idea that they would train on my voice and do, I don’t know what, with it.
    0:42:52 I don’t know.
    0:42:56 One of our editors pointed out, like, that’s part of the story.
    0:42:59 You know, like, AI is, like, entering our lives.
    0:43:03 More and more AI systems and robots are entering our lives and having this.
    0:43:10 And for me personally, it’s like, yeah, like, literally, my work, our work is being used to train these systems.
    0:43:13 Like, what does that mean for us, for our work?
    0:43:20 It felt, and, you know, I reached out to Vox Media and to OpenAI for an interview.
    0:43:27 And they both declined, which made it feel even, you know, just, you feel really helpless.
    0:43:35 And, I mean, there aren’t many more answers that I have than that.
    0:43:39 Yeah, well, I mean, you even interview a guy on the show.
    0:43:41 You know, he’s a former OpenAI employee.
    0:43:46 You know, and you’re raising these concerns and he’s sort of dismissive of it, right?
    0:43:49 Like, you know, whatever data they’re getting.
    0:43:49 He just laughed at us.
    0:44:00 I would be quite surprised if the data provided by Vox is itself very valuable to OpenAI.
    0:44:03 I would imagine it’s a tiny, tiny drop in that bucket.
    0:44:10 If all of ChatGPT’s training data were to fit inside the entire Atlantic Ocean,
    0:44:17 then all of Vox’s journalism would be like a few hundred drops in that ocean.
    0:44:22 Rightly, you’re like, well, fuck, it matters to me.
    0:44:27 It’s my work, it’s my voice, and it may eventually be my job, right?
    0:44:33 And the point here is, like, that this is a thing now that our job,
    0:44:39 the fact that our job and many other jobs are already tangled up with AI in this way,
    0:44:42 it’s just a reminder that this isn’t the future, right?
    0:44:49 It’s here now, and it’s only going to get more strange and complicated.
    0:44:50 Totally, yeah.
    0:44:56 And I don’t know, I guess I understand, like, the impulse from, like, from Vox Media to be like,
    0:45:01 okay, we want to have, we want to be compensated for, you know,
    0:45:05 licensing our journalists’ work who work so hard and we pay them.
    0:45:15 But it feels, yeah, it just feels like, it feels weird to not have a say when it’s the work you’re doing.
    0:45:50 So, have your views on AI in general changed all that much after doing this series?
    0:45:57 I mean, you say at the end that when you look at AI, just what you see is a funhouse mirror.
    0:45:59 What does that mean?
    0:46:07 AI, like a lot of our technologies and I guess like our visions of God, as you talk about, are a reflection of ourselves.
    0:46:17 And so, I think it was a comforting realization to me to realize that, like, the story of AI is not some, like, technological story I can’t understand.
    0:46:26 Like, the story of AI is a story about humans who are trying really hard to make a technology good and failing to varying degrees.
    0:46:44 But, yeah, I think fundamentally the course of, like, reporting it for me just brought the technology down to earth and made me a little more empowered to ask questions, to be skeptical, and to use it in my life with the right amount of skepticism.
    0:46:48 So, what do you hope people get out of this series?
    0:46:54 Normies who enter into it, you know, without a sort of solidified position on it.
    0:46:56 What do you hope they take away from it?
    0:47:14 I hope that people who didn’t feel like they had any place in the conversation around AI will feel, like, invited to the table and will be more informed and skeptical and curious and excited about the technology.
    0:47:18 And I hope that it brings it down to earth a little bit.
    0:47:21 Julia Longoria, this has been a lot of fun.
    0:47:23 Thank you so much for coming on the show.
    0:47:27 And the series, once again, is called Good Robot.
    0:47:28 It is fantastic.
    0:47:30 You should go listen to it immediately.
    0:47:31 Thank you.
    0:47:32 Thank you.
    0:47:41 All right.
    0:47:43 I hope you enjoyed this episode.
    0:47:53 If you want to listen to Julia’s Good Robot series, and of course you do, you can find all four episodes in the Vox Unexplainable podcast feed.
    0:47:57 We’ll drop a link to the first episode in the show notes.
    0:48:00 And as always, we want to know what you think.
    0:48:04 So drop us a line at the gray area at vox.com.
    0:48:12 Or you can leave us a message on our new voicemail line at 1-800-214-5749.
    0:48:17 And once you’re done with that, please go ahead, rate, review, subscribe to the pod.
    0:48:19 That stuff really helps.
    0:48:32 This episode was produced by Beth Morrissey, edited by Jorge Just, engineered by Erica Wong, fact-checked by Melissa Hirsch, and Alex Overington wrote our theme music.
    0:48:35 New episodes of the gray area drop on Mondays.
    0:48:37 Listen and subscribe.
    0:48:39 The show is part of Vox.
    0:48:43 Support Vox’s journalism by joining our membership program today.
    0:48:47 Members get access to this show without any ads.
    0:48:49 Go to vox.com/members to sign up.
    0:48:53 And if you decide to sign up because of this show, let us know.

    There’s a lot of uncertainty when it comes to artificial intelligence. Technologists love to talk about all the good these tools can do in the world, all the problems they might solve. Yet, many of those same technologists are also warning us about all the ways AI might upend society, how it might even destroy humanity.

    Julia Longoria, Vox host and editorial director, spent a year trying to understand that dichotomy. The result is a four-part podcast series — called Good Robot — that explores the ideologies of the people funding, building, and driving the conversation about AI.

    Today Julia speaks with Sean about how the hopes and fears of these individuals are influencing the technology that will change all of our lives.

    Host: Sean Illing (@SeanIlling)

    Guest: Vox Host and Editorial Director Julia Longoria

    Good Robot is available in the Vox Unexplainable feed.

    Episode 1

    Episode 2

    Episode 3

    Episode 4

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • Stop comparing yourself to AI

    AI transcript
    0:00:01 Are you forgetting about that chip in your windshield?
    0:00:03 It’s time to fix it.
    0:00:05 Come to Speedy Glass before it turns into a crack.
    0:00:08 Our experts will repair your windshield in less than an hour,
    0:00:09 and it’s free if you’re insured.
    0:00:12 Book your appointment today at speedyglass.ca.
    0:00:14 Details and conditions at speedyglass.ca.
    0:00:20 What do you think about when you think about AI?
    0:00:25 Maybe chatbots giving you new lasagna recipes,
    0:00:29 research assistants helping you finish that paper.
    0:00:33 Do you think about machines taking your job?
    0:00:37 Maybe you think of something even more ominous,
    0:00:41 like Skynet robots wiping out humanity.
    0:00:46 If you’re like me, you probably think of all those things,
    0:00:47 depending on the day.
    0:00:50 And that’s sort of the point.
    0:00:55 AI is not well understood, even by the people creating it.
    0:00:58 And even though we all know it’s a technology
    0:01:00 that’s going to change our lives,
    0:01:03 that’s really all we know at this point.
    0:01:10 So how do we confront this uncertainty?
    0:01:13 How do we navigate the current moment?
    0:01:17 And how do we, the people who have been told
    0:01:19 that we will be impacted by AI,
    0:01:21 but don’t seem to have much of a say
    0:01:23 in how the AI is being built,
    0:01:26 engage in the conversation?
    0:01:31 I’m Sean Illing, and this is The Gray Area.
    0:01:45 Today’s guest is Jaron Lanier.
    0:01:48 He’s a virtual reality pioneer,
    0:01:50 a digital philosopher,
    0:01:54 and the author of several best-selling books on technology.
    0:01:57 He’s also one of the most profound critics
    0:02:01 of Silicon Valley and the business model driving it.
    0:02:04 I wanted to bring Jaron on the show
    0:02:07 for the first episode of this special series on AI
    0:02:10 because I think he’s uniquely positioned
    0:02:14 to speak both to the technological side of AI,
    0:02:16 what’s happening, where it’s going,
    0:02:20 and also to the human side.
    0:02:24 Jaron’s a computer scientist who loves technology.
    0:02:29 But at his core, he’s a humanist
    0:02:32 who’s always thinking about what technologies are doing to us
    0:02:36 and how our understanding of these tools
    0:02:39 will inevitably determine how they’re used.
    0:02:43 Maybe what Jaron does the best, though,
    0:02:45 is offer a different lens
    0:02:47 through which to view these technologies.
    0:02:51 We’re encouraged to treat these machines
    0:02:54 as though they’re godlike,
    0:02:56 as though they’re thinking for themselves.
    0:03:01 Indeed, they’re designed to make you feel that way
    0:03:04 because it adds to the mystique around them
    0:03:07 and obscures the truth about how they really work.
    0:03:12 But Jaron’s plea is to be careful
    0:03:15 about thoughtlessly adopting the language
    0:03:17 that the AI creators give us
    0:03:18 to describe their creation
    0:03:21 because that language structures
    0:03:25 not only how we think about these technologies,
    0:03:27 but what we do with them.
    0:03:35 Jaron Lanier, welcome to the show.
    0:03:36 That’s me. Hey.
    0:03:39 So look, I have heard
    0:03:43 so many of these big picture conversations about AI
    0:03:48 and they often begin with a question
    0:03:52 about how or whether AI is going to take over the world.
    0:03:55 But I discovered very quickly
    0:03:57 that you don’t accept the terms of that question,
    0:03:59 which is why I’m not going to ask it.
    0:04:01 but I thought it would be useful
    0:04:03 as a beginning to ask you
    0:04:05 why you find questions like that
    0:04:07 or claims like that ridiculous.
    0:04:10 Oh, well, you know,
    0:04:12 when it comes to AI,
    0:04:15 the whole technical field
    0:04:16 is kind of defined
    0:04:19 by an almost metaphysical assertion,
    0:04:22 which is we are creating intelligence.
    0:04:23 Well, what is intelligence?
    0:04:26 Something human.
    0:04:28 The whole field was founded
    0:04:31 by Alan Turing’s thought experiment
    0:04:32 called the Turing test,
    0:04:37 where if you can fool a human
    0:04:38 into thinking you’ve made a human,
    0:04:40 then you might as well have made a human
    0:04:42 because what other tests could there be?
    0:04:45 Which in a way is fair enough.
    0:04:45 On the other hand,
    0:04:47 what other scientific field
    0:04:50 other than maybe supporting stage magicians
    0:04:53 is entirely based on being able to fool people?
    0:04:53 I mean, it’s stupid.
    0:04:56 Fooling people in itself accomplishes nothing.
    0:04:58 There’s no productivity.
    0:04:59 There’s no insight
    0:05:01 unless you’re studying
    0:05:03 the cognition of being fooled, of course.
    0:05:06 So there’s an alternative way
    0:05:07 to think about what we do
    0:05:09 with what we call AI,
    0:05:12 which is that there’s no new entity.
    0:05:14 There’s nothing intelligent there.
    0:05:16 What there is is a new
    0:05:17 and in my opinion,
    0:05:18 sometimes quite useful
    0:05:21 form of collaboration between people.
    0:05:23 If you look at something like the Wikipedia,
    0:05:25 where people mash up
    0:05:27 a lot of their communications into one thing,
    0:05:30 you can think of that as a step on the way
    0:05:32 to what we call large model AI,
    0:05:34 where we take all the data that we have
    0:05:35 and we put it together
    0:05:39 in a way that allows more interpolation
    0:05:43 and more commingling than previous methods.
    0:05:47 And I think that can be of great use,
    0:05:49 but I don’t think there’s any requirement
    0:05:52 that we perceive that as a new entity.
    0:05:53 Now, you might say,
    0:05:54 well, what’s the harm if we do?
    0:05:56 That’s a fair question.
    0:05:57 Like, who cares?
    0:05:58 If somebody wants to think of it
    0:06:00 as a new type of person
    0:06:02 or even a new type of God or whatever,
    0:06:03 what’s wrong with that?
    0:06:06 Potentially nothing.
    0:06:08 People believe all kinds of things all the time.
    0:06:12 But, in the case of our technology,
    0:06:15 let me put it this way.
    0:06:19 If you’re a mathematician or a scientist,
    0:06:25 you can do what you do
    0:06:27 in a kind of an abstract way.
    0:06:28 Like, you can say,
    0:06:30 I’m furthering math.
    0:06:33 And, in a way, that’ll be true
    0:06:35 even if nobody else ever even perceives
    0:06:36 that I’ve done it.
    0:06:37 I’ve written down this proof.
    0:06:40 But that’s not true for technologists.
    0:06:43 Technologists only make sense
    0:06:46 if there’s a designated beneficiary.
    0:06:49 Like, you have to make technology for someone.
    0:06:52 And, as soon as you say
    0:06:56 the technology itself is a new someone,
    0:07:00 you stop making sense as a technologist.
    0:07:01 Right?
    0:07:03 Let me actually take up that question
    0:07:04 that you just posed a second ago
    0:07:05 with a thought,
    0:07:07 I’ve heard from you,
    0:07:09 which is something to the effect of,
    0:07:11 I think the way you put it is
    0:07:13 the easiest way to mismanage a technology
    0:07:15 is to misunderstand it.
    0:07:17 So, to answer your question…
    0:07:18 Sounds like me, I guess.
    0:07:19 Yeah. Okay.
    0:07:22 If we make the mistake,
    0:07:23 which is now common,
    0:07:26 to insist that AI is, in fact,
    0:07:28 some kind of god or creature
    0:07:30 or entity or oracle,
    0:07:31 whatever term you prefer,
    0:07:33 instead of a tool as you define it,
    0:07:34 the implication is that
    0:07:37 that would be a consequential mistake, right?
    0:07:39 That we will mismanage the technology
    0:07:40 by misunderstanding it.
    0:07:41 So, is that not quite right?
    0:07:42 Am I not quite understanding?
    0:07:43 No, I think that’s right.
    0:07:46 I think when you treat the technology
    0:07:47 as its own beneficiary,
    0:07:49 you miss a lot of opportunities
    0:07:50 to make it better.
    0:07:52 Like, I see this in AI all the time.
    0:07:53 I see people saying,
    0:07:55 well, if we did this,
    0:07:56 it would pass the Turing test better,
    0:07:57 and if we did that,
    0:07:58 it would seem more like
    0:07:59 it was an independent mind.
    0:08:01 But those are all goals
    0:08:01 that are different
    0:08:04 from it being economically useful.
    0:08:05 They’re different from it
    0:08:08 being useful to any particular user.
    0:08:09 They’re just these weird,
    0:08:12 to me, almost religious ritual goals
    0:08:13 or something.
    0:08:15 like they, and so every time
    0:08:16 you’re devoting yourself to that,
    0:08:18 it means you’re not devoting yourself
    0:08:20 to making it better.
    0:08:22 Like, an example is,
    0:08:25 we have, in my view,
    0:08:28 deliberately designed large model AI
    0:08:32 to obscure the original human sources
    0:08:34 of the data that the AI is trained on
    0:08:36 to help create this illusion
    0:08:37 of the new entity.
    0:08:38 But when we do that,
    0:08:41 we make it harder to do quality control.
    0:08:43 We make it harder to do authentication
    0:08:48 and to detect malicious uses of the model
    0:08:52 because we can’t tell what the intent is,
    0:08:54 what data it’s drawing upon.
    0:08:56 We’re sort of willfully making ourselves
    0:08:58 kind of blind in a way
    0:09:00 that we probably don’t really need to.
    0:09:01 And I really want to emphasize
    0:09:03 from a metaphysical point of view,
    0:09:05 I can’t prove,
    0:09:06 and neither can anyone else,
    0:09:08 that a computer is alive or not
    0:09:09 or conscious or not or whatever.
    0:09:11 I mean, all that stuff
    0:09:13 is always going to be a matter of faith.
    0:09:15 That’s just the way it is.
    0:09:17 That’s what we got around here.
    0:09:19 But what I can say
    0:09:21 is that this emphasis
    0:09:22 on trying to make the models
    0:09:25 seem like they’re freestanding new entities
    0:09:27 does blind us
    0:09:29 to some ways we could make them better.
    0:09:30 And so I think, like, why bother?
    0:09:32 What do we get out of that?
    0:09:32 Not a lot.
    0:09:34 So do you think maybe
    0:09:35 the cardinal mistake
    0:09:37 with a lot of this kind of thinking
    0:09:38 is to assume
    0:09:42 that artificial intelligence
    0:09:43 is something that’s in competition
    0:09:45 with human intelligence
    0:09:46 and human abilities,
    0:09:47 that that kind of misunderstanding
    0:09:48 sets us off on a course
    0:09:50 for a lot of other kinds
    0:09:51 of misunderstandings?
    0:09:53 I wouldn’t choose that language
    0:09:54 because then the natural thing
    0:09:55 somebody’s going to say
    0:09:56 who’s a true believer
    0:09:57 that the AI is coming alive,
    0:09:58 they’re going to say,
    0:09:59 yeah, you’re right.
    0:10:00 It’s not competition.
    0:10:01 We’re going to align them
    0:10:02 and they’re going to be
    0:10:03 our collaborators
    0:10:05 or whatever.
    0:10:06 So that, to me,
    0:10:07 doesn’t go far enough.
    0:10:09 My own way of thinking
    0:10:11 is that I’m able
    0:10:12 to improve the models
    0:10:13 when I say
    0:10:14 there’s no new entity there.
    0:10:15 I just say they don’t,
    0:10:15 they’re not there.
    0:10:16 They don’t exist
    0:10:17 as separate entities.
    0:10:18 They’re just collaborations
    0:10:19 of people.
    0:10:20 I have to go that far
    0:10:22 to get the clarity
    0:10:23 to improve them.
    0:10:26 It might be a little late
    0:10:27 in the language game
    0:10:29 to replace a term
    0:10:30 like artificial intelligence,
    0:10:30 but if you could,
    0:10:31 do you have a better one?
    0:10:34 I have had the experience
    0:10:35 of coming up with terms
    0:10:37 that were widely adopted
    0:10:37 in society.
    0:10:38 I came up with
    0:10:39 virtual reality
    0:10:40 and some other things
    0:10:41 when I was young
    0:10:44 and I have seen that
    0:10:45 even when you get
    0:10:46 to coin the term,
    0:10:47 you don’t get to define it
    0:10:50 and I don’t love
    0:10:51 the way people think
    0:10:52 of virtual reality
    0:10:53 typically today.
    0:10:54 It’s lost a little bit
    0:10:55 of its old humanism,
    0:10:56 I would say.
    0:10:59 So that experience
    0:11:00 has led me to feel
    0:11:01 that it’s really
    0:11:02 younger generations
    0:11:03 who should come up
    0:11:03 with their own terms.
    0:11:04 So what I would prefer
    0:11:06 to see is younger people
    0:11:07 reject our terms
    0:11:09 and come up
    0:11:09 with their own.
    0:11:11 Fair enough.
    0:11:14 I’ve read a lot
    0:11:14 of your work
    0:11:15 on AI
    0:11:17 and I’ve listened
    0:11:19 to a lot of your interviews
    0:11:21 and I take your point
    0:11:22 that AI
    0:11:25 is a distillation
    0:11:26 of all these human inputs
    0:11:27 fundamentally.
    0:11:30 but for you at what point
    0:11:32 does or can complexity
    0:11:35 start looking like autonomy
    0:11:37 and what would autonomy
    0:11:38 even mean
    0:11:39 that the thing starts
    0:11:40 making its own decisions
    0:11:41 and is that the simple
    0:11:42 definition of that?
    0:11:43 This is an obsession
    0:11:44 that people have
    0:11:45 but you have to understand
    0:11:46 it’s a religious
    0:11:48 and entirely subjective
    0:11:50 or sort of cultural obsession
    0:11:51 not a scientific one.
    0:11:52 It’s your judgment
    0:11:54 of how you want to see
    0:11:55 the start of autonomy.
    0:11:58 So I love complex systems
    0:11:59 and I love different levels
    0:12:00 of description
    0:12:01 and I love the independence
    0:12:03 of different levels
    0:12:03 of granularity
    0:12:04 in physics
    0:12:06 so I’m utterly
    0:12:07 as obsessed
    0:12:07 as anyone
    0:12:08 with that
    0:12:10 but it’s important
    0:12:10 to distinguish
    0:12:12 that fascination
    0:12:12 which is a scientific
    0:12:13 fascination
    0:12:14 with the question
    0:12:16 of does crossing
    0:12:17 some threshold
    0:12:18 make something
    0:12:19 human or not?
    0:12:21 because the question
    0:12:22 of humanness
    0:12:24 or of becoming
    0:12:24 an entity
    0:12:26 that we care about
    0:12:27 in our planning
    0:12:27 becoming
    0:12:28 creating something
    0:12:29 that itself
    0:12:30 is a beneficiary
    0:12:31 of our technology
    0:12:32 that question
    0:12:33 has to be
    0:12:34 a matter of faith
    0:12:36 we just have
    0:12:36 to accept
    0:12:38 that our culture
    0:12:39 our law
    0:12:40 our ability
    0:12:41 to be technologists
    0:12:42 ultimately rests
    0:12:43 on values
    0:12:45 that in a sense
    0:12:45 we pull out
    0:12:46 of our asses
    0:12:47 or if you like
    0:12:48 we have to be
    0:12:49 a little bit mystical
    0:12:50 in order to create
    0:12:51 the ground layer
    0:12:52 in order to be
    0:12:52 then rational
    0:12:53 as technologists
    0:12:54 in a way
    0:12:55 I wish it wasn’t so
    0:12:56 it sort of sucks
    0:12:57 but it’s just the truth
    0:12:57 and the sooner
    0:12:58 we accept that
    0:12:59 the better off
    0:13:00 we’ll be
    0:13:00 and the more honest
    0:13:01 we’ll be
    0:13:02 and I’m okay with it
    0:13:03 why?
    0:13:05 because
    0:13:06 if I’m designing
    0:13:07 AI for AI’s sake
    0:13:08 I’m talking nonsense
    0:13:09 you know
    0:13:10 like
    0:13:11 right now
    0:13:13 it’s very expensive
    0:13:13 to compute AI
    0:13:14 so what percentage
    0:13:16 of that expense
    0:13:17 it goes into
    0:13:18 creating the illusion
    0:13:19 so that you can believe
    0:13:20 it’s sort of
    0:13:21 another person
    0:13:22 when you use chat
    0:13:23 how much electricity
    0:13:24 is being spent
    0:13:25 so that the way
    0:13:26 it talks to you
    0:13:27 feels like it’s a person
    0:13:28 a lot
    0:13:28 you know
    0:13:29 and it’s a waste
    0:13:30 like why are we doing that
    0:13:31 why are we doing
    0:13:32 why are we creating
    0:13:34 a carbon footprint
    0:13:36 for the benefit
    0:13:38 of some non-entity
    0:13:39 in order to fool humans
    0:13:40 like it’s
    0:13:40 it’s ridiculous
    0:13:42 but we don’t see that
    0:13:43 because we have this
    0:13:45 religious imperative
    0:13:46 in the tech
    0:13:48 cultural world
    0:13:49 to create
    0:13:50 this new life
    0:13:52 but it’s entirely
    0:13:53 a matter of
    0:13:54 our own perception
    0:13:55 there’s no test
    0:13:55 for it
    0:13:56 other than the
    0:13:56 Turing test
    0:13:57 which is no test
    0:13:57 at all
    0:13:58 I mean
    0:13:59 we still don’t even
    0:14:01 have a real
    0:14:01 definition
    0:14:03 of consciousness
    0:14:05 and I hear all
    0:14:05 these discussions
    0:14:07 about machine learning
    0:14:09 and human intelligence
    0:14:09 and the differences
    0:14:11 and I continue
    0:14:12 to have no idea
    0:14:13 when something
    0:14:14 stops being a
    0:14:15 simulacrum of intelligence
    0:14:16 and becomes the real thing
    0:14:17 I still don’t quite know
    0:14:18 when something can
    0:14:19 reasonably be called
    0:14:20 sentient
    0:14:21 or intelligent
    0:14:22 but maybe the question
    0:14:22 doesn’t even matter
    0:14:24 maybe it’s enough
    0:14:25 for us to think it does
    0:14:26 right
    0:14:27 so the problem
    0:14:28 in what you just
    0:14:29 said is the word
    0:14:29 still
    0:14:32 like it’s a
    0:14:33 this
    0:14:35 lack of knowledge
    0:14:36 is structural
    0:14:37 you’re not going
    0:14:38 to overcome it
    0:14:39 you can pretend
    0:14:40 you have
    0:14:40 but you’re not going
    0:14:41 to
    0:14:42 this is genuinely
    0:14:43 a matter of faith
    0:14:43 you know
    0:14:44 and
    0:14:46 it’s a very
    0:14:46 old discussion
    0:14:47 when it comes
    0:14:48 to God
    0:14:49 but
    0:14:50 it’s a new
    0:14:50 discussion
    0:14:51 when it comes
    0:14:52 to each other
    0:14:53 or to AIs
    0:14:54 and
    0:14:54 you know
    0:14:55 like
    0:14:56 faith is okay
    0:14:56 we can live
    0:14:57 with faith
    0:14:57 we just have
    0:14:58 to be honest
    0:14:59 about it
    0:14:59 and I think
    0:15:01 being dishonest
    0:15:01 and saying
    0:15:02 oh
    0:15:03 it’s not faith
    0:15:04 I have this
    0:15:04 rational proof
    0:15:05 of something
    0:15:07 it’s not
    0:15:08 dishonesty
    0:15:08 is probably
    0:15:09 not good
    0:15:10 especially
    0:15:10 if you’re
    0:15:10 trying to do
    0:15:11 science or technology
    0:15:15 maybe we just
    0:15:17 maybe we just
    0:15:18 hold on
    0:15:19 maybe
    0:15:20 I’m going to
    0:15:21 say this
    0:15:23 we probably
    0:15:23 just have to
    0:15:24 hold on to
    0:15:24 some notion
    0:15:25 that there’s
    0:15:26 something
    0:15:26 fundamentally
    0:15:27 special
    0:15:28 about human
    0:15:29 consciousness
    0:15:30 and that even
    0:15:30 if on some
    0:15:31 purely empirical
    0:15:31 level
    0:15:32 that’s not
    0:15:32 even true
    0:15:33 maybe believing
    0:15:34 that it is
    0:15:34 is essential
    0:15:36 to our
    0:15:36 survival
    0:15:37 I don’t
    0:15:37 think you
    0:15:38 can rationally
    0:15:40 proceed
    0:15:41 as an
    0:15:41 as an
    0:15:42 acting
    0:15:42 technologist
    0:15:44 without
    0:15:45 an
    0:15:46 irrational
    0:15:47 belief
    0:15:48 that people
    0:15:49 are special
    0:15:50 because once again
    0:15:50 then you have
    0:15:51 no recipient
    0:15:52 and if you
    0:15:53 say well
    0:15:53 there’s going
    0:15:54 to be
    0:15:54 no belief
    0:15:55 all the way
    0:15:55 to the bottom
    0:15:56 it’s just
    0:15:56 going to be
    0:15:57 rationality
    0:15:57 forever
    0:15:58 I mean
    0:15:59 it doesn’t
    0:15:59 work
    0:16:00 rationality
    0:16:01 never creates
    0:16:01 a total
    0:16:02 enclosed
    0:16:02 system
    0:16:04 we kind
    0:16:05 of float
    0:16:05 in a sea
    0:16:05 of mystery
    0:16:06 and we
    0:16:06 have like
    0:16:07 this belief
    0:16:07 that lets
    0:16:08 us have
    0:16:08 a footing
    0:16:09 and it’s
    0:16:10 our job
    0:16:11 to acknowledge
    0:16:11 that even
    0:16:12 if we’re
    0:16:12 uncomfortable
    0:16:13 with it
    0:16:15 can I try
    0:16:15 another angle
    0:16:16 on you
    0:16:16 yeah
    0:16:17 do you know
    0:16:17 my
    0:16:17 okay
    0:16:18 so there’s
    0:16:18 another
    0:16:19 argument
    0:16:19 about the
    0:16:20 turing test
    0:16:20 right
    0:16:21 turing test
    0:16:22 you have a
    0:16:23 person and a
    0:16:23 computer
    0:16:23 they’re each
    0:16:24 trying to fool
    0:16:24 a judge
    0:16:25 and at the
    0:16:26 moment the
    0:16:26 judge can’t
    0:16:26 tell them
    0:16:27 apart
    0:16:27 you say
    0:16:28 well we
    0:16:28 might as
    0:16:29 well call
    0:16:30 the computer
    0:16:31 human because
    0:16:31 what other
    0:16:31 tests can
    0:16:32 there be
    0:16:32 that’s the
    0:16:32 best we’ll
    0:16:33 get
    0:16:33 okay
    0:16:35 so the
    0:16:36 problem with
    0:16:36 the test
    0:16:37 is that it
    0:16:38 measures whether
    0:16:38 there’s a
    0:16:38 differential
    0:16:39 but it
    0:16:40 doesn’t tell
    0:16:40 you whether
    0:16:41 the computer
    0:16:42 got smarter
    0:16:42 or the
    0:16:42 human got
    0:16:43 stupider
    0:16:44 it doesn’t
    0:16:45 tell you if
    0:16:45 the computer
    0:16:46 became more
    0:16:47 human or if
    0:16:47 the human
    0:16:48 became less
    0:16:48 human in
    0:16:49 any sense
    0:16:49 whatever that
    0:16:50 might be
    0:16:51 so there’s
    0:16:52 two humans
    0:16:52 the contestant
    0:16:53 and the judge
    0:16:53 and one
    0:16:54 computer
    0:16:54 therefore
    0:16:56 and this is
    0:16:56 meant to be
    0:16:57 funny but it’s
    0:16:57 also kind of
    0:16:57 real
    0:16:58 there’s a
    0:16:58 two-thirds
    0:16:59 chance that
    0:16:59 it was a
    0:17:00 human that
    0:17:00 got stupider
    0:17:01 rather than
    0:17:01 a computer
    0:17:01 that got
    0:17:02 smarter
    0:17:04 and I
    0:17:04 see that
    0:17:05 borne out
    0:17:05 like when I
    0:17:06 look at
    0:17:06 social media
    0:17:07 and I see
    0:17:08 people interacting
    0:17:08 with the AI
    0:17:09 algorithms that
    0:17:10 are supposed to
    0:17:10 guide their
    0:17:11 attention
    0:17:12 I see them
    0:17:13 getting stupider
    0:17:13 two-thirds
    0:17:14 of the time
    0:17:14 but then you
    0:17:15 know sometimes
    0:17:16 really good
    0:17:16 stuff happens
    0:17:17 so I think
    0:17:18 this general
    0:17:19 spread of most
    0:17:20 of the time
    0:17:20 things get
    0:17:21 worse but then
    0:17:21 there’s some
    0:17:22 stuff that’s
    0:17:22 really cool
    0:17:24 tends to be
    0:17:24 true when you
    0:17:25 believe in AI
    0:17:26 and so
    0:17:27 I would
    0:17:28 say don’t
    0:17:28 believe in
    0:17:28 it and
    0:17:30 some people
    0:17:30 are still
    0:17:31 getting
    0:17:31 stupider
    0:17:31 because that’s
    0:17:32 how we are
    0:17:33 but I think
    0:17:33 we can get to
    0:17:34 the point where
    0:17:34 the majority
    0:17:35 gets better
    0:17:36 instead of
    0:17:37 stupider but
    0:17:37 right now I
    0:17:37 think we’re
    0:17:38 at two-thirds
    0:17:39 get stupider
    0:17:40 yeah that
    0:17:41 math checks out
    0:17:41 to me
    0:17:42 great I
    0:17:43 think that’s
    0:17:43 a rigorous
    0:17:44 argument that’s
    0:17:44 what you call
    0:17:45 a rigorous
    0:17:46 quantitative
    0:17:47 theoretically and
    0:17:48 empirically supported
    0:17:49 argument right
    0:17:49 there
    0:17:50 so do you
    0:17:51 think all
    0:17:53 the anxieties
    0:17:54 including from
    0:17:55 serious people
    0:17:56 in in the
    0:17:57 world of AI
    0:17:58 all the worries
    0:18:00 about human
    0:18:01 extinction and
    0:18:01 mitigating the
    0:18:02 risks thereof
    0:18:04 does that is
    0:18:04 that religious
    0:18:06 hysteria to
    0:18:06 you or does
    0:18:07 that feel
    0:18:09 what drives me
    0:18:09 crazy about
    0:18:10 this I this
    0:18:11 is my world
    0:18:11 you know so I
    0:18:12 talk to the
    0:18:12 people who
    0:18:13 believe that
    0:18:14 stuff all the
    0:18:15 time and
    0:18:16 increasingly a
    0:18:16 lot of them
    0:18:17 believe that it
    0:18:17 would be good to
    0:18:18 wipe out people
    0:18:19 and that the AI
    0:18:19 future would be a
    0:18:20 better one and
    0:18:21 that we should
    0:18:22 wear a disposable
    0:18:24 temporary container
    0:18:25 for the birth of
    0:18:26 AI I hear that
    0:18:27 opinion quite a lot
    0:18:27 that’s a real
    0:18:28 opinion held by
    0:18:29 real people
    0:18:32 many many I
    0:18:33 mean like the
    0:18:34 other day I was
    0:18:35 at a lunch in
    0:18:36 Palo Alto and
    0:18:36 there were some
    0:18:37 young AI
    0:18:38 scientists there
    0:18:39 who were saying
    0:18:41 that they would
    0:18:42 never have a
    0:18:43 bio baby because
    0:18:43 as soon as you
    0:18:44 have a bio baby
    0:18:44 you get the
    0:18:46 mind virus of
    0:18:48 the bio world
    0:18:48 and that when
    0:18:49 you have the
    0:18:50 bio mind virus
    0:18:50 you become
    0:18:51 committed to
    0:18:52 your human baby
    0:18:52 but it’s much
    0:18:53 more important to
    0:18:54 be committed to
    0:18:54 the AI of the
    0:18:56 future and so
    0:18:57 to have human
    0:18:58 babies is
    0:18:58 fundamentally
    0:18:59 unethical
    0:19:01 now okay in
    0:19:01 this particular
    0:19:03 case this was
    0:19:03 a young man
    0:19:04 with a female
    0:19:05 partner who
    0:19:06 wanted a kid
    0:19:06 and what I’m
    0:19:07 thinking is this
    0:19:07 is just another
    0:19:08 variation of the
    0:19:09 very very old
    0:19:10 story of young
    0:19:11 men attempting to
    0:19:12 put off the baby
    0:19:13 thing with their
    0:19:14 sexual partner as
    0:19:15 long as possible
    0:19:16 because I’ve been
    0:19:16 there and many of
    0:19:16 us have been
    0:19:17 there so in a
    0:19:18 way I think it’s
    0:19:19 not anything new
    0:19:19 and it’s just the
    0:19:20 old thing but
    0:19:21 it’s a very
    0:19:23 common attitude
    0:19:25 not the dominant
    0:19:25 one I would say
    0:19:26 the dominant one
    0:19:27 is that the
    0:19:28 super AI will
    0:19:29 turn into this
    0:19:30 god thing that’ll
    0:19:31 save us and
    0:19:32 will either upload
    0:19:33 us to be immortal
    0:19:34 or solve all our
    0:19:34 problems at the
    0:19:35 very least or
    0:19:36 something create
    0:19:37 super abundance at
    0:19:38 the very very very
    0:19:41 least and I
    0:19:45 I have to say
    0:19:45 there’s a bit of
    0:19:46 an inverse
    0:19:47 proportion here
    0:19:48 between the people
    0:19:49 who directly work
    0:19:50 in making AI
    0:19:51 systems and then
    0:19:51 the people who
    0:19:52 are adjacent to
    0:19:54 them who have
    0:19:54 these various
    0:19:57 beliefs my own
    0:19:58 opinion is that
    0:19:59 the people
    0:20:00 how can I put
    0:20:02 this the people
    0:20:03 who are able to
    0:20:04 be skeptical and
    0:20:05 a little bored and
    0:20:06 dismissive of the
    0:20:07 technology they’re
    0:20:08 working on tend to
    0:20:09 improve it more than
    0:20:09 the people kind of
    0:20:10 worship it too much
    0:20:13 like I’ve seen that
    0:20:14 a lot in a lot of
    0:20:15 different things not
    0:20:16 not just computer
    0:20:17 science and I think
    0:20:18 I think you have to
    0:20:19 have a kind of
    0:20:20 like you can’t drink
    0:20:21 your own whiskey too
    0:20:22 much when you’re a
    0:20:24 technologist you have
    0:20:25 to kind of be ready
    0:20:26 to say oh maybe
    0:20:27 this thing’s a bit
    0:20:28 overhyped I’m not
    0:20:29 going to tell that
    0:20:30 to the people buying
    0:20:31 shares in my company
    0:20:31 but you know what
    0:20:32 like just between us
    0:20:35 you know and but
    0:20:35 that attitude is
    0:20:37 exactly the one that
    0:20:38 puts you over the
    0:20:38 threshold to then
    0:20:39 start improving it
    0:20:40 more and that’s one
    0:20:41 of the dangers of
    0:20:42 this kind of
    0:20:43 mythologizing of it
    0:20:44 oh it’s about to
    0:20:45 become this god
    0:20:45 that’ll take over
    0:20:46 everything but
    0:20:48 that what follows
    0:20:49 from that is this
    0:20:50 very curious thing
    0:20:51 which is that the
    0:20:52 way of thinking
    0:20:53 about it where it’s
    0:20:54 about to turn into
    0:20:55 this god that’ll
    0:20:56 run everything and
    0:20:57 either kill us all
    0:20:57 or fix all our
    0:20:58 problems that
    0:21:00 attitude in itself
    0:21:02 makes you not
    0:21:04 only a little bit
    0:21:05 of a lesser
    0:21:06 improver of the
    0:21:07 technology by any
    0:21:08 like real measurable
    0:21:10 metric but it
    0:21:11 also makes you a
    0:21:12 bad steward of it
    0:21:15 part of part of
    0:21:15 what makes this
    0:21:16 very confusing
    0:21:17 especially to you
    0:21:19 know non-technical
    0:21:20 normie outsiders
    0:21:21 like me and like
    0:21:22 most people frankly
    0:21:24 is that it is it’s
    0:21:25 just moving and
    0:21:26 changing and evolving
    0:21:27 really quickly and
    0:21:28 the terms and
    0:21:29 concepts are very
    0:21:30 slippery if you’re
    0:21:32 not deep in it and
    0:21:32 you know you’re
    0:21:33 talking about super
    0:21:34 super AI and godlike
    0:21:36 powers one example
    0:21:37 is and you’ll bear
    0:21:38 with me for a second
    0:21:39 so I can bring people
    0:21:41 along we have this
    0:21:42 dichotomy between
    0:21:44 AI versus AGI
    0:21:45 artificial intelligence
    0:21:46 versus artificial
    0:21:47 general intelligence and
    0:21:48 my understanding is
    0:21:50 that AI is a term for
    0:21:51 the general set of
    0:21:52 tools that people
    0:21:53 are building chat
    0:21:54 bots and that sort
    0:21:54 of thing and that
    0:21:56 AGI is still sort of
    0:21:57 a theoretical thing
    0:21:58 where this tech is
    0:22:00 basically as good at
    0:22:01 everything as a
    0:22:03 normal regular person
    0:22:03 is and it can also
    0:22:04 learn and grow and
    0:22:05 apply that knowledge
    0:22:07 just like we can and
    0:22:08 we’ve got AI now
    0:22:09 clearly but we don’t
    0:22:11 have AGI yet and if
    0:22:13 we get it and there
    0:22:13 are people who think
    0:22:14 we’re maybe closer
    0:22:15 than we thought
    0:22:16 recently that it’ll be
    0:22:18 a real Rubicon
    0:22:20 crossing moment for
    0:22:21 us what’s your
    0:22:22 feeling on that do
    0:22:23 you think AGI is
    0:22:24 even possible in the
    0:22:25 way most people
    0:22:26 have you not
    0:22:26 listened to a word
    0:22:28 I said that’s a
    0:22:28 religious question
    0:22:30 that’s like asking
    0:22:30 if I think the
    0:22:31 rapture is coming
    0:22:33 soon I mean it’s
    0:22:33 yeah but you can
    0:22:34 have an opinion
    0:22:34 about religious
    0:22:35 questions I guess
    0:22:38 that’s true I mean
    0:22:40 there are those who
    0:22:41 say we have AGI
    0:22:42 already and their
    0:22:43 opinion is as
    0:22:44 legitimate as
    0:22:45 anybody else’s I
    0:22:46 mean I just think
    0:22:47 the moment you’ve
    0:22:48 put the question
    0:22:48 that way you’ve
    0:22:49 already confused
    0:22:50 yourself and made
    0:22:50 yourself kind of
    0:22:51 useless in talking
    0:22:52 about what to do
    0:22:53 with the technology
    0:22:54 so I have to reject
    0:22:55 your question as
    0:22:56 being like poorly
    0:22:56 framed and
    0:22:57 ill-informed I’m
    0:22:59 sorry I was hoping
    0:22:59 to get through this
    0:23:00 fucking conversation
    0:23:01 without you having
    0:23:02 to beat back at
    0:23:03 one of my ill-informed
    0:23:04 questions and I
    0:23:05 did make it I made
    0:23:06 it almost 20 minutes
    0:23:07 in yeah good luck
    0:23:08 with that my friend
    0:23:12 all right sir
    0:23:13 it was a valiant
    0:23:14 effort you win that
    0:23:17 you really I mean
    0:23:19 look I mean this
    0:23:20 is silly this is
    0:23:21 like I’m also
    0:23:21 trying to speak for
    0:23:22 concerns that I
    0:23:23 know a lot of
    0:23:24 people I know
    0:23:25 because we broadcast
    0:23:26 that way of thinking
    0:23:27 about it so yeah
    0:23:31 look there’s a
    0:23:31 thing all right
    0:23:33 look I’m I
    0:23:35 benefit from people
    0:23:36 believing in AI
    0:23:37 professionally and
    0:23:39 there’s a way that
    0:23:39 the whole economy
    0:23:40 runs on attention
    0:23:42 getting and in a
    0:23:44 funny way the way
    0:23:45 digital attention
    0:23:46 economy works
    0:23:51 is it rewards
    0:23:52 anxieties and
    0:23:54 terror as much
    0:23:54 or maybe a
    0:23:56 little more than
    0:23:59 optimism or you
    0:24:01 know goodwill and
    0:24:02 so you have this
    0:24:03 weird situation where
    0:24:05 somebody can play
    0:24:06 the villain on
    0:24:06 social media and
    0:24:08 do very well and
    0:24:09 similar things
    0:24:10 happening in the
    0:24:11 rhetoric of computer
    0:24:12 science so when we
    0:24:13 say oh our stuff
    0:24:14 might be about to
    0:24:15 come alive and
    0:24:16 it’s about to get
    0:24:17 smarter than you
    0:24:18 it generates this
    0:24:19 little anxiety in
    0:24:20 people and then that
    0:24:21 actually benefits us
    0:24:22 because it keeps it
    0:24:24 keeps the attention
    0:24:27 on us and so
    0:24:28 there’s a funny way
    0:24:29 that we’re
    0:24:30 incentivized to put
    0:24:31 things in the most
    0:24:33 alarming way what I
    0:24:34 what I will say is
    0:24:36 that I like the
    0:24:37 idea of models being
    0:24:38 useful so I think
    0:24:40 of the models that
    0:24:41 we’re building as
    0:24:42 being wonderful
    0:24:43 mashup models so
    0:24:44 like for instance
    0:24:46 I love being able
    0:24:47 to use large models
    0:24:48 to go through the
    0:24:48 scientific literature
    0:24:51 and find correlations
    0:24:51 between different
    0:24:52 papers that might not
    0:24:53 use the same
    0:24:54 terminology that would
    0:24:54 have been a pain in
    0:24:55 the butt to detect
    0:24:57 before that’s great
    0:24:58 if you present that
    0:24:59 with a chat
    0:25:00 interface it seems
    0:25:01 like a smart
    0:25:02 scientist if people
    0:25:03 like that I mean I
    0:25:04 guess whatever it’s
    0:25:05 not my job to judge
    0:25:06 everybody but the
    0:25:08 thing is you don’t
    0:25:09 need to present it
    0:25:09 that way you’d
    0:25:10 still get the
    0:25:11 same value but
    0:25:11 that’s the way we
    0:25:13 do it we we add
    0:25:14 in personhood
    0:25:16 fooling to what
    0:25:17 would otherwise be
    0:25:19 really in a way
    0:25:20 more clear
    0:25:21 freestanding value I
    0:25:23 think but we like
    0:25:24 to present the
    0:25:24 fantasy
    0:25:37 there’s over 500
    0:25:38 thousand small
    0:25:40 businesses in bc and
    0:25:40 no two are alike
    0:25:42 i’m a carpenter i’m a
    0:25:43 graphic designer i sell
    0:25:45 dog socks online that’s
    0:25:47 why bcaa created one
    0:25:48 size doesn’t fit all
    0:25:49 insurance it’s
    0:25:51 customizable based on
    0:25:52 your unique needs so
    0:25:53 whether you manage
    0:25:54 rental properties or
    0:25:55 paint pet portraits you
    0:25:56 can protect your small
    0:25:58 business with bc’s most
    0:25:59 trusted insurance brand
    0:26:01 visit bcaa.com slash
    0:26:03 small business and use
    0:26:04 promo code radio to
    0:26:05 receive fifty dollars
    0:26:06 off conditions apply
    0:26:16 all right let me try to
    0:26:17 pull away a little bit
    0:26:18 from religious questions
    0:26:22 okay so look i i’m i’m
    0:26:23 not worried about the
    0:26:23 matrix and the
    0:26:25 terminator um i am
    0:26:27 worried about a much
    0:26:28 more boring and
    0:26:30 unsexy scenario but i
    0:26:31 think equally bad
    0:26:34 possibility is that these
    0:26:37 emergent technologies will
    0:26:39 accelerate a trend that
    0:26:41 i think digital tech in
    0:26:42 general and social media
    0:26:43 in particular has already
    0:26:47 started which is to pull
    0:26:49 us away more and more
    0:26:50 from the physical world
    0:26:52 and encourage us to
    0:26:54 perform versions of
    0:26:55 ourselves in the virtual
    0:26:56 world and because of how
    0:26:58 it’s designed it has this
    0:27:00 habit of reducing other
    0:27:02 people to crude avatars
    0:27:03 which is why it’s so easy
    0:27:05 to be cruel and vicious
    0:27:07 online and why people who
    0:27:08 are on social media too
    0:27:10 much start to become
    0:27:12 mutually unintelligible
    0:27:13 to each other and i
    0:27:16 worry about ai super
    0:27:17 charging some of this
    0:27:18 stuff i mean do you even
    0:27:19 accept that framing am i
    0:27:20 right to be thinking of ai
    0:27:23 as a potential accelerant of
    0:27:26 these trends yeah i mean i
    0:27:29 i think you are correct
    0:27:36 so it’s arguable and
    0:27:37 actually consistent with the
    0:27:38 way the community speaks
    0:27:41 internally to say that the
    0:27:43 algorithms that have been
    0:27:44 driving social media up to
    0:27:49 now are a form of ai if you
    0:27:52 if you unlike me wish to use
    0:27:55 the term ai and what the
    0:27:59 algorithms do is they
    0:28:01 attempt to predict human
    0:28:03 behavior based on the
    0:28:05 stimulus given to the
    0:28:07 human and by putting that
    0:28:08 in an adaptive loop they
    0:28:11 hope to drive attention and
    0:28:13 sort of an obsessive
    0:28:15 attachment to a platform
    0:28:18 because these algorithms
    0:28:21 can’t tell whether
    0:28:23 something’s being driven
    0:28:25 because of things that we
    0:28:25 might think are positive
    0:28:26 or things that we might
    0:28:28 think are negative so i
    0:28:29 call this the life of the
    0:28:30 parity this notion
    0:28:32 that you can’t tell like
    0:28:33 if a bit is one or zero
    0:28:34 doesn’t matter because it’s
    0:28:36 an arbitrary designation in
    0:28:38 a digital system so if
    0:28:39 somebody’s getting
    0:28:40 attention by being a dick
    0:28:42 that works just as well as
    0:28:43 if they’re offering
    0:28:44 life-saving information or
    0:28:45 helping people improve
    0:28:46 themselves but then the
    0:28:47 peaks that are good are
    0:28:48 really good and i don’t
    0:28:49 want to deny that i love
    0:28:50 dance culture on tiktok
    0:28:53 science bloggers on on
    0:28:54 youtube have achieved a
    0:28:55 level that’s like
    0:28:57 astonishingly good and so
    0:28:58 on like there’s all these
    0:29:00 really really positive good
    0:29:01 spots but then overall
    0:29:03 there’s this loss of truth
    0:29:06 and political paranoia and
    0:29:09 unnecessary confrontation
    0:29:11 between arbitrarily created
    0:29:13 cultural groups and so on
    0:29:15 that’s really doing damage
    0:29:18 um and as is often pointed
    0:29:20 out especially to young
    0:29:21 girls and so on and so
    0:29:22 forth uh not not great
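    The adaptive loop described above can be sketched in a few lines. What follows is a hypothetical illustration in Python, not any platform’s actual ranking code: the only signal the ranker sees is whether an item held attention, so a post that engages by provoking outrage and a post that engages by being genuinely useful are indistinguishable to it.

```python
# Hypothetical sketch of the adaptive engagement loop described above: the
# ranker only optimizes predicted engagement, so a "1" earned by outrage
# counts exactly the same as a "1" earned by something genuinely useful.
import random
from collections import defaultdict

class EngagementRanker:
    """Tiny greedy recommender. All names here are illustrative."""

    def __init__(self, items):
        self.items = items
        self.clicks = defaultdict(int)   # observed engagements per item
        self.shows = defaultdict(int)    # times each item was shown

    def score(self, item):
        # Predicted engagement rate, with light optimism for unexplored items.
        return (self.clicks[item] + 1) / (self.shows[item] + 2)

    def pick(self):
        # Show whatever the model expects will hold attention best.
        return max(self.items, key=self.score)

    def observe(self, item, engaged):
        # The feedback is a bare bit: engaged or not. Nothing records *why*.
        self.shows[item] += 1
        self.clicks[item] += int(engaged)

def simulate(rounds=5000, seed=0):
    random.seed(seed)
    # True engagement probabilities, invisible to the ranker. Note the outrage
    # item engages best even though it is the least healthy.
    true_rates = {"outrage_bait": 0.30, "science_explainer": 0.22, "dance_video": 0.25}
    ranker = EngagementRanker(list(true_rates))
    for _ in range(rounds):
        item = ranker.pick()
        ranker.observe(item, random.random() < true_rates[item])
    return {item: ranker.shows[item] for item in true_rates}

if __name__ == "__main__":
    # The loop tends to settle on whatever engages most, good or bad.
    print(simulate())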
    0:29:25 and so uh yeah could
    0:29:27 better ai algorithms make
    0:29:27 that worse
    0:29:31 plausibly i mean it’s
    0:29:32 possible that it’s already
    0:29:34 bottomed out that it’s kind
    0:29:37 of the the badness just
    0:29:37 comes from the overall
    0:29:38 structure and if the
    0:29:39 algorithms themselves get
    0:29:41 more sophisticated it won’t
    0:29:42 really push it that much
    0:29:43 further but i think
    0:29:45 actually kind of can i’m
    0:29:46 i’m worried about it i
    0:29:48 because we so much want to
    0:29:49 pass the turing test and
    0:29:50 make people think our
    0:29:51 programs are people
    0:29:55 we’re moving to this um
    0:29:56 so-called agentic era where
    0:29:59 it’s not just that you have a
    0:30:00 chat interface with with the
    0:30:01 thing but the chat interface
    0:30:04 gets to know you for years at
    0:30:06 a time and gets a so-called
    0:30:08 personality and but and all
    0:30:09 this and then the idea is that
    0:30:10 people then fall in love with
    0:30:11 these and we’re already
    0:30:13 seeing examples of this
    0:30:15 here and there um and this
    0:30:16 notion of a whole generation
    0:30:17 of young people falling in
    0:30:20 love with fake avatars i mean
    0:30:24 people people talk about ai as
    0:30:25 if it’s just like this yeast in
    0:30:26 the air it’s like oh ai will
    0:30:27 appear and people will fall in
    0:30:29 love with ai avatars but it’s
    0:30:30 not ai is always run by
    0:30:32 companies so like they’re going
    0:30:33 to be falling in love with
    0:30:35 something from google or meta or
    0:30:39 whatever and like that notion
    0:30:41 that your love life becomes
    0:30:44 owned by some company or even
    0:30:45 worse tiktok or a chinese thing
    0:30:49 eek eek eek eek i think that’ll
    0:30:51 create a a a new centralization
    0:30:56 or or or xai eek eek eek eek i’ll
    0:30:57 add some more eeks to that and so
    0:30:59 this centralization of power and
    0:31:02 influence could be even worse and
    0:31:04 that might be a breaking point
    0:31:06 event and so that kind of thing
    0:31:07 ending civilization or ending up
    0:31:09 killing all the people does seem
    0:31:11 plausible to me and some of my
    0:31:12 colleagues would interpret that as
    0:31:15 ai coming alive and killing
    0:31:16 everybody but i would just
    0:31:17 interpret it as people
    0:31:20 making terrible choices it all
    0:31:21 amounts to the same thing in the
    0:31:23 end anyway it does at the end of
    0:31:25 the day in terms of actual events
    0:31:27 the same so jaron from your point
    0:31:29 of view is it even possible to have
    0:31:33 good algorithms nudging us around
    0:31:35 online or are all algorithms bad yes
    0:31:37 of course it is okay what does that
    0:31:39 look like course it is of course it
    0:31:41 is yes yes yes yes give me the good
    0:31:42 stuff here give me the good
    0:31:44 algorithms well i mean look in the
    0:31:49 scientific community we do it like i
    0:31:51 mean like okay here’s an example um
    0:31:55 deep research from open ai is a great
    0:31:57 tool it does a literature search on some
    0:31:59 topic and assembles a little report
    0:32:03 it has unnecessary chatbot elements
    0:32:05 to try to make it seem like there’s
    0:32:07 somebody there i view that as a waste
    0:32:10 of time and a waste of energy and i i
    0:32:11 would be happy without it but but
    0:32:13 whatever okay it’s it’s not terrible
    0:32:16 though what it does is it saves
    0:32:18 scientists a ton of time it makes a lot
    0:32:20 of sense i get a lot out of it it’s
    0:32:21 great and now there’s some new
    0:32:25 competitors to it great that stuff’s
    0:32:27 fabulous i really really really it’s
    0:32:28 good because the scientific literature
    0:32:30 has become impossible to use without
    0:32:33 it i do a lot of work that’s pretty
    0:32:35 mathematical and the problem is that
    0:32:37 every time somebody comes across
    0:32:38 similar math they don’t realize
    0:32:39 somebody else has done it so they come
    0:32:41 up with their own terms for things and
    0:32:43 then you have the same ideas or
    0:32:45 similar ones with different terms and
    0:32:46 all these scattered papers in totally
    0:32:47 different communities at different
    0:32:48 conferences and different journals
    0:32:53 yeah but with a tool like this you
    0:32:55 can capture all that and get it into
    0:32:59 place it’s like what what what ai is is
    0:33:01 it’s a way of improving collaboration
    0:33:03 between people it’s a way of gathering
    0:33:06 what people have done in a more unified
    0:33:09 way that can notice multiple hops of
    0:33:12 different terms and similar structures it’s
    0:33:15 it’s a better way of using statistics to
    0:33:17 connect what we’ve all done together to
    0:33:21 get more use out of it it’s great i love
    0:33:25 it and the amount of avatar illusion
    0:33:27 nonsense is kept to a minimum because
    0:33:29 our job is not to fall in love with our
    0:33:31 research our fake research assistant our
    0:33:35 job is to make progress efficiently on
    0:33:37 whatever we’re doing right and so that
    0:33:39 that’s great what is wrong with that
    0:33:41 nothing it’s fabulous so yeah there’s
    0:33:43 wonderful uses if i didn’t think those
    0:33:46 things existed i’d quit what i do
    0:33:49 professionally in the industry of course
    0:33:51 there’s wonderful uses and i think we
    0:33:52 need those things i think they really
    0:33:53 matter
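    The literature-search use case Lanier describes — connecting papers that never share vocabulary, sometimes only through multiple hops — can be pictured with a toy sketch. Everything below is made up for illustration: the term-to-concept table stands in for what a real system such as OpenAI’s Deep Research would get from learned embeddings, and the paper titles and terms are invented.

```python
# Toy illustration of cross-paper linking: papers that discuss the same idea
# under different names get connected, including through multi-hop chains.
from collections import defaultdict, deque

# Hypothetical mapping from surface terms to shared underlying concepts.
CONCEPTS = {
    "message passing": "local_propagation",
    "belief propagation": "local_propagation",
    "loopy inference": "local_propagation",
    "attention weights": "pairwise_weighting",
    "kernel similarity": "pairwise_weighting",
}

PAPERS = {
    "Paper A (graphics)": ["message passing", "mesh smoothing"],
    "Paper B (statistics)": ["belief propagation", "kernel similarity"],
    "Paper C (ML)": ["attention weights", "sequence modeling"],
    "Paper D (physics)": ["loopy inference", "spin systems"],
}

def build_graph(papers):
    """Link papers that share an underlying concept, whatever term they use."""
    by_concept = defaultdict(set)
    for paper, terms in papers.items():
        for term in terms:
            concept = CONCEPTS.get(term)
            if concept:
                by_concept[concept].add(paper)
    graph = defaultdict(set)
    for group in by_concept.values():
        for p in group:
            graph[p] |= group - {p}
    return graph

def connection_path(graph, start, goal):
    """Breadth-first search: the 'multiple hops' between differently worded papers."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

if __name__ == "__main__":
    g = build_graph(PAPERS)
    # Paper A and Paper C never use the same words, but a chain exists via Paper B.
    print(connection_path(g, "Paper A (graphics)", "Paper C (ML)"))
```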
    0:33:56 i guess what i’m hovering around is the
    0:33:58 business model right i mean uh the
    0:34:00 advertising model was sort of the
    0:34:02 original sin of the internet yeah yeah i
    0:34:02 think it is
    0:34:06 um how do we not fuck this up how do we
    0:34:07 not repeat those mistakes what’s a better
    0:34:09 model i mean you talk a lot about data
    0:34:11 dignity so you’re saying we can say fuck
    0:34:13 on this podcast oh you can say whatever
    0:34:15 you want if i had known that there would
    0:34:17 be a lot of fuckery up to now in my in my
    0:34:18 speech it’s not too late anyway it’s not
    0:34:23 too late we got plenty of time okay but no
    0:34:25 but seriously what how do we get it right
    0:34:26 this time how do we not make the same
    0:34:29 mistakes what is a better model yeah well
    0:34:32 um this is actually more important this
    0:34:34 question is the central question of our
    0:34:36 time in my view like the central
    0:34:39 question of our time isn’t um being able
    0:34:42 to scale ai more is is an important
    0:34:45 question and i get that and most people
    0:34:47 are focused on that and dealing with the
    0:34:49 climate is an important question but in
    0:34:51 terms of our own survival coming up with
    0:34:53 a business model for civilization that
    0:34:56 isn’t self-destructive is in a way our
    0:34:59 most primary problem and challenge right
    0:35:01 now because the way we’re doing it what
    0:35:04 we kind of we went through this thing in
    0:35:06 the earlier phase of the internet like
    0:35:08 information should be free and then the
    0:35:09 only business model that’s left is paying
    0:35:12 for influence uh and so then all the
    0:35:16 platforms look free or very cheap to the
    0:35:17 user but then actually the real customer
    0:35:19 trying to influence the user and you end
    0:35:23 up with what’s essentially a stealthy form
    0:35:26 of um manipulation being the central
    0:35:30 project of civilization and we can only
    0:35:31 get away with that for so long at some
    0:35:33 point that bites us and we become too
    0:35:36 crazy to survive so we must change the
    0:35:38 business model of civilization and so
    0:35:41 exactly how to get from here to there is
    0:35:44 a bit of a mystery but i continue to work
    0:35:46 on it like i think we should incentivize
    0:35:48 people to put great data into the ai
    0:35:51 programs of the future uh and i’d like
    0:35:53 people to be paid for data used
    0:35:55 in ai models and also to be celebrated and
    0:35:56 made visible and known because i think
    0:35:58 it’s just a big collaboration and our
    0:36:01 collaborators should be valued how easy
    0:36:02 would it be to do that do you think we
    0:36:05 can or will there’s still some unsolved
    0:36:07 technical questions about how to do it
    0:36:09 i’m very very actively working on those
    0:36:10 and i believe it’s doable and there’s a
    0:36:12 whole you know research community devoted
    0:36:14 to exactly that distributed around the
    0:36:16 world and i think it’ll make better
    0:36:18 models i mean better data makes better
    0:36:20 models and there’s a lot of people who
    0:36:21 dispute that and they say no it’s just
    0:36:22 better algorithms and we already have
    0:36:25 enough data for the rest of all time but
    0:36:28 i disagree with that i think i don’t
    0:36:29 think we’re the smartest people who will
    0:36:31 ever live and there might be new creative
    0:36:33 things that happen in the future that we
    0:36:35 don’t foresee and the models we’ve
    0:36:37 currently built might not extend into
    0:36:39 those things and having some open system
    0:36:41 where people can contribute to new models
    0:36:44 in new ways is a more expansive and
    0:36:47 creative and you know open-minded and
    0:36:51 and just you know kind of spiritually
    0:36:53 optimistic way of thinking about the deep
    0:36:53 future
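    One minimal way to picture the data-dignity bookkeeping Lanier is proposing: record who contributed what, and when a model output draws on those contributions, route credit and payment back to them. How to attribute a large model’s output to specific contributions is still an open research problem, so this sketch simply takes the influence weights as given; all names, functions, and numbers are hypothetical.

```python
# Deliberately simplified sketch of data-dignity bookkeeping: every
# contribution is recorded with its contributor, and when a model output
# draws on those contributions, credit and payment flow back proportionally.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Contribution:
    contributor: str
    description: str

@dataclass
class ProvenanceLedger:
    contributions: dict = field(default_factory=dict)            # id -> Contribution
    earnings: dict = field(default_factory=lambda: defaultdict(float))

    def register(self, cid, contributor, description):
        self.contributions[cid] = Contribution(contributor, description)

    def credit_output(self, influences, payment):
        """Split a payment across contributors in proportion to influence weights."""
        total = sum(influences.values())
        credited = []
        for cid, weight in influences.items():
            person = self.contributions[cid].contributor
            self.earnings[person] += payment * weight / total
            credited.append(person)
        return credited  # the "celebrated and made visible" part

if __name__ == "__main__":
    ledger = ProvenanceLedger()
    ledger.register("c1", "Ana", "annotated folk-music recordings")
    ledger.register("c2", "Ben", "wrote repair manuals")
    # Hypothetical attribution weights for one model output priced at $0.10.
    print(ledger.credit_output({"c1": 0.7, "c2": 0.3}, payment=0.10))
    print(dict(ledger.earnings))
```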
    0:37:15 today explained here with eric levitt senior
    0:37:17 correspondent at vox.com to talk about the
    0:37:21 2024 election that can’t be right eric i thought
    0:37:22 we were done with that i feel like i’m pacino
    0:37:24 in three just when i thought i was out
    0:37:28 they pull me back in why are we talking about
    0:37:30 the 2024 election again the reason why we’re
    0:37:33 still looking back is that it takes a while
    0:37:36 after an election to get all of the most high
    0:37:40 quality data on what exactly happened so the
    0:37:42 full picture is starting to just come into view
    0:37:45 now and you wrote a piece about the full
    0:37:49 picture for vox recently and it did bonkers business
    0:37:53 on the internet what did it say what struck a
    0:37:56 chord yeah so this was my interview with
    0:38:00 david shore of blue rose research he’s one of
    0:38:04 the biggest sort of democratic data gurus in
    0:38:08 the party and basically the big picture headline
    0:38:12 takeaways are on today explained you’ll have to go listen
    0:38:15 to them there find the show wherever you listen to shows bro
    0:38:35 i think i’m a humanist like you in the end and what i want fundamentally is just the
    0:38:39 elevation of human agency not the diminishment of it and part of what that means to borrow your
    0:38:45 language is creating more creative classes and less dependent classes yep uh you’ve convinced me
    0:38:50 that that’s at least possible i don’t know if it’s likely but i hope it is and and maybe some
    0:38:55 some kind of data dignity type model is the most promising thing i’ve heard
    0:39:06 no i sort of feel like the human project our our survival is simultaneously both certain and
    0:39:11 unlikely if you know what i mean like i i feel like if we just follow the immediate trend lines
    0:39:13 and what we see we’re probably gonna
    0:39:16 fuck ourselves up to use the word i’m
    0:39:22 encouraged to say here there you go but i also just have this feeling we’ve made it through a lot of
    0:39:26 stuff in the past and i just have this feeling we’re gonna rise to the occasion and figure this
    0:39:32 one out really i don’t know exactly how we will but i think we will i don’t know what the
    0:39:40 alternative is the alternative is in 200 million years there’ll be smart cephalopods to take over
    0:39:45 the planet uh and maybe they’ll do that i mean that’s the alternative but i think we can do it i
    0:39:56 really do i really i we just we just have to be a little less full of ourselves and not believe we’re
    0:40:02 making a new god no more golden calves that’s really our problem still yeah good luck with
    0:40:09 that i mean i i i like i’m constantly thinking more about the the social and political and cultural
    0:40:14 dynamics because that’s just my background um and you know i mean i i guess speaking of dependent
    0:40:23 classes i a very common concern is is this fear that ai is going to create a lot of social instability by
    0:40:29 taking all of our jobs it’s a widespread fear it’s scary as hell and it feels like
    0:40:35 the latest iteration of a very old story about new technologies like automation displacing workers
    0:40:39 i mean how do you speak to these sorts of fears when you hear them because surely you hear them a lot
    0:40:41 yeah and they concern me i mean
    0:40:54 look um there’s not a perfect solution to that problem uh there i’ll give you an example of one
    0:41:00 that i find tricky to think about uh my mom died in a car accident and i’ve always believed from when
    0:41:05 i was very young that cars should drive themselves that it was manifestly obvious that we could create
    0:41:13 a digital system that would save many many lives so we have tens of thousands of people killed by cars
    0:41:16 every year still in the us and i think it’s over a million worldwide or something like that i mean
    0:41:24 it’s like crazy it’s like and so um there are a lot of reasons for it and a self-driving car is never
    0:41:28 going to be perfect because it’s not a task that can be done perfectly there’ll be circumstances where
    0:41:34 there’s no optimal solution you know in the instant but overall we ought to be able to save a lot of
    0:41:42 life so i’m really supportive of that project at the same time an incredibly large number of blue collar
    0:41:49 people around the world get by behind a wheel whether it’s truck drivers or rideshare drivers these days
    0:42:00 or etc you know and so like how do you reconcile those two things uh and i i don’t think there’s any way to do it perfectly i think there’s
    0:42:10 there’s two things that should be true one is that we need to find an intermediate way to have
    0:42:15 a social safety net that isn’t all the way to universal basic income because the universal basic
    0:42:21 income idea gives people this idea that they’re not worth anything and they’re just being supported by
    0:42:27 the tech titans as a hobby and it doesn’t feel very secure or very dignified and or stable there’s like
    0:42:33 just a lot of reasons why i’m skeptical of that uh in the long term and i don’t think people like it
    0:42:40 or want it but on the other hand um just telling people well you’re thrown out into the mix and in
    0:42:45 the u.s you have no health insurance and just figure something out that’s also just too cruel and not viable
    0:42:51 if it’s a lot of people at once so we have to find our way to a very unfashionable intermediate
    0:43:00 sense of social safety network or uh to help people through transitions and right now the accounting for
    0:43:04 that is very very difficult to sort out and especially in the united states there’s a deep
    0:43:13 hostility to it and i just don’t see logically any other way but then beyond that um i do think new roles
    0:43:18 will appear like the the story that well new things will happen and new new things will be possible
    0:43:25 i do believe that like there’s a kind of a vague and uncomfortable sense that surely new things will
    0:43:31 come along and i i actually think that’s true i don’t feel comfortable making that claim for all
    0:43:35 those drivers like we’re not going to retrain them to be programmers because low-level programming
    0:43:43 itself is also getting automated right i don’t know exactly how that’ll work um i have thought a great
    0:43:49 great deal about it but that’s who i am for the moment i believe that there could be all kinds of
    0:43:57 things we don’t foresee and that within that explosion of new sectors of creativity there will be enough new
    0:44:04 needs for people to do things if only to train ai’s that it’ll keep up with human needs and support
    0:44:08 some kind of a world of economics that’s more distributed than just a central authority
    0:44:13 distributing income to everybody which i think would be corrupted yeah yeah i agree with that
    0:44:20 do you think we’re being sufficiently intentional about the development of this technology do you
    0:44:27 think we’re asking the right questions as a society now well i mean the questions are dominated by
    0:44:33 a certain internal technical culture which is and the mainstream of technical culture is very
    0:44:39 obsessed with ai as a new god or some kind of new entity and so i think that that does make the
    0:44:47 whole conversation go askew and that said if you go to ai conferences
    0:44:54 there’s usually more talk where somebody is saying we’re going to talk about how to talk the
    0:45:01 ai into not killing us you know and that kind of conversation which to me is not well grounded and
    0:45:09 i think it kind of loses itself in loops but that kind of conversation can take up as much time and space
    0:45:14 as like a serious conversation of like how can we optimize this algorithm or how can we you know like the
    0:45:20 the actual work that we should be doing as technologists um i was at one conference i was
    0:45:25 kind of funny where i forget what there were these different factions there’s the artificial general
    0:45:30 intelligence and there’s the super intelligence and there’s all these different people who have
    0:45:34 slightly different ideas about how awesome ai will be and how it might kill us all in different ways
    0:45:42 and they were so conflicted that they got into a fist fight um a not very competent fist fight it must be
    0:45:48 said but i’m shocked it’s kind of funny anyway i sort of wish i had a film of that that was really funny but
    0:45:54 i don’t know i mean i love my world i love the people i do kind of make fun of us a little bit
    0:46:00 sometimes because i just think it’s important too you know okay so if we just let’s just set aside for
    0:46:06 the moment that the more common fears about ai the alignment problem and taking our jobs and
    0:46:12 flattening human creativity all that stuff all that is there all of that um is there is there a fear of
    0:46:18 yours something you think we could get terribly wrong that’s not currently something we hear much about
    0:46:26 uh god i don’t even know where to start yeah there’s like a lot lot lot lot lot lot lot
    0:46:38 lot i’m i mean one of the things i worry about is we’re gradually moving education into an ai model
    0:46:46 and the motivations for that are often very good because in a lot of places on earth it’s just been
    0:46:50 impossible to come up with an economics of supporting and training enough human teachers
    0:46:58 and a lot of cultural issues in changing societies make it very very hard to make schools that work
    0:47:06 and so on like there’s a lot of issues and in theory a sort of uh client self-adapting ai tutor
    0:47:13 could solve a lot of problems at a low cost in a lot of situations but then the issue with that is
    0:47:19 once again creativity how do you keep people who learn in a system like that
    0:47:24 how do you train them so that they’re able to step outside of what the system was trained on
    0:47:29 you know like there’s this funny way that you’re always retreading and recombining the training data
    0:47:35 in any ai system and you can address that to a degree with constant fresh input and this and that but
    0:47:40 i am a little worried about people being trained in a closed system that makes them a little less than
    0:47:46 they might otherwise have been and have a little less faith in themselves i’m a little concerned about
    0:47:53 sort of defining the nature of life and education downward you know and the thing is the history
    0:47:59 of education is filled with doing exactly that thing like education has been filled with overly
    0:48:08 reductive ideas or overly idealistic and and um biased ideas of different kinds i mean so it’s not like
    0:48:14 we’re entering this perfect system messing it up we’re entering a messed up system and trying to figure out
    0:48:22 how to not perpetuate its messed-up-ness i think in the case of education um challenging really
    0:48:28 challenging i think i just ask just because i’m just curious what you would say i i have a five-year-old
    0:48:37 son and he’s already started asking questions about you know like what kind of skills should he learn what
    0:48:42 what should he what should he aspire to do in the world oh man that’s a hard one right and i don’t know
    0:48:49 what to tell him because i have no idea what the world is going to look like by the time he’s 18 or 20 or 15
    0:48:54 hell you know i what would you what would you tell him if uncle jaron came over oh yeah and he asked
    0:48:59 you that what would you say well i have a teen daughter now and when she was younger uh she went
    0:49:07 to coding camp you know and loved it and then when uh copilot for github came out and now some of the
    0:49:12 other ones that are out she was like well you know the kinds of programs i’d write i can just ask for now
    0:49:17 so why did you send me to all this thing why did i waste all my time at these things and i said uh remember
    0:49:24 you loved coding camp remember you liked it you liked it it’s like well yeah but i would have
    0:49:30 i could have liked spelunking camp or something too like why coding camp and um i i mean
    0:49:36 i don’t have a perfect answer for all that right now i really don’t i do
    0:49:44 i do think there are new things that will emerge i have a feeling there’ll be a lot of new professions
    0:49:51 related to adaptive biology and modifications and helping people deal with weird changes to their
    0:49:56 bodies that will become possible i think that’ll become a big thing i don’t know exactly how it’s
    0:50:04 too early to say like i there’s a subtle point here i want to make which is um i am very far from being
    0:50:12 anti-futuristic or disliking extreme change in the future but what i what i have to insist upon
    0:50:18 is continuity so in this idea there’s a term called the singularity uh applied to ai sometimes that
    0:50:23 there’ll be this rush of change so fast that nobody can learn anything nobody can know anything and it
    0:50:29 just is beyond us beyond us beyond us the problem with the singularity whether it’s in a black hole or in
    0:50:35 the big bang or in technology is that it’s very hard to have you know like by definition even if
    0:50:41 you don’t technically lose information you lose the ability to access the information in the in the
    0:50:46 original context or with any kind of structure so it’s essentially a form of massive forgetting and
    0:50:54 massive loss of context and massive loss of meaning therefore and so however radical we get if in the future
    0:51:00 we’re all going to evolve into massive distributed colonies of space bacteria flying around
    0:51:07 and intergalactically or something whatever we turn into i’m all for it i’m in i’m in i’m in but
    0:51:12 the line from here to there has to have memory it has to be continuous enough that we’re learning
    0:51:19 lessons and we we remember if we break that because we want the thrill of polpot’s year zero where from
    0:51:24 now on we’re the smartest people and everybody else was wrong and we start over if we want that break
    0:51:29 we must resist it we must oppose people who want that break year zero never works out well it’s a
    0:51:38 really really bad idea and so that to me i’m like pro extreme futures but anti discontinuity into the
    0:51:43 future and and so that’s a an in-between place to be that’s a little subtle and hard to get across but
    0:51:48 i think that that’s the right place to be well i always try to end these conversations with as much
    0:51:55 optimism as possible so do you have any other good news or uh rosy scenarios you can you can paint for
    0:52:01 us uh before we get out of here about how things are going to be awesome in the future right now we’re
    0:52:09 in a very hard to parse moment things are strange things are scary and what i keep on telling myself
    0:52:16 there’s always hope in chaos as much as someone might someone driving chaos might be certain that
    0:52:25 it’s under their command but it never is and those of us who watch unfolding chaos looking for signs of
    0:52:33 hope looking for optimism looking for little openings in which to do something good we will find them if we
    0:52:40 stay alert and so i’d urge everybody to do that during this period jaron lanier i’m a fan of your
    0:52:45 work i’m a fan of you as a human being as well i appreciate you coming in oh well that’s very kind
    0:52:52 of you thank you so much and i really appreciate all the effort and also just the goodwill and warmth
    0:53:03 you put into this interview i really do appreciate it so much
    0:53:12 all right i hope you enjoyed this episode there was a lot going on in this one jaron is a unique mind
    0:53:22 and i appreciate the way he thinks about all of this this conversation did force me to reflect on the
    0:53:31 language i use to make sense of ai and all the assumptions buried in that language so i hope you
    0:53:39 found his insights useful but either way as always we want to know what you think so drop us a line
    0:53:52 at the gray area at vox.com or leave us a message on our new voicemail line at 1-800-214-5749
    0:53:58 and once you’re finished with that if you have a second please go ahead and rate and review and
    0:54:09 subscribe to the podcast this episode was produced by beth morrissey edited by jorge just engineered by erica
    0:54:17 wong fact check by melissa hirsch and alex overington wrote our theme music new episodes of the gray area
    0:54:25 drop on mondays listen and subscribe the show is part of vox support vox’s journalism by joining our
    0:54:33 membership program today go to vox.com slash members to sign up and if you decide to sign up because of this show
    0:54:51 let us know you

    Why do we keep comparing AI to humans?

    Jaron Lanier — virtual reality pioneer, digital philosopher, and the author of several best-selling books on technology — thinks that we should stop. In his view, technology is only valuable if it has beneficiaries. So instead of asking “What can AI do?,” we should be asking, “What can AI do for us?”

    In today’s episode, Jaron and Sean discuss a humanist approach to AI and how changing our understanding of AI tools could change how we use, develop, and improve them.

    Host: Sean Illing (@SeanIlling)

    Guest: Jaron Lanier, computer scientist, artist, and writer.

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • Democrats need to do something

    AI transcript
    0:00:04 Thumbtack presents the ins and outs of caring for your home.
    0:00:10 Out. Indecision. Overthinking. Second-guessing every choice you make.
    0:00:16 In. Plans and guides that make it easy to get home projects done.
    0:00:21 Out. Beige. On beige. On beige.
    0:00:26 In. Knowing what to do, when to do it, and who to hire.
    0:00:29 Start caring for your home with confidence.
    0:00:31 Download Thumbtack today.
    0:00:37 Let’s drive good together with Bonterra and Volkswagen.
    0:00:43 Buy any sustainably focused Bonterra bathroom tissue, paper towel, or facial tissue,
    0:00:47 and you could win a 2025 Volkswagen All-Electric ID Buzz.
    0:00:51 See in-store for details. Bonterra for a better planet.
    0:00:55 No purchase necessary. Terms and conditions apply. See online for details.
    0:01:03 If I had to pick one word to really capture American politics, for most of my adult life at least,
    0:01:07 it wouldn’t be hope or change or forward or future.
    0:01:11 The word I’d choose is inertia.
    0:01:15 It doesn’t matter what the slogans are or what the speeches say.
    0:01:21 In terms of getting things done, or fundamentally changing how we do things,
    0:01:24 both parties seem slow to solve problems.
    0:01:27 Slow to build new things.
    0:01:29 Slow to change anything, really.
    0:01:33 Until now.
    0:01:39 As you know, the Trump administration has been passing executive orders
    0:01:43 and implementing new policies at a breakneck pace.
    0:01:50 Attempting to remake entire swaths of the federal government.
    0:01:53 And you might not like what they’re doing.
    0:01:54 I don’t.
    0:01:56 But they are doing something.
    0:01:59 And the Democratic opposition?
    0:02:02 Well, they don’t seem to have the answers.
    0:02:09 At the very least, they cannot articulate a different vision for America’s future that the country wants.
    0:02:11 Why is that?
    0:02:17 Why couldn’t Democrats craft a message that resonated with voters in 2024’s election?
    0:02:22 And why, in the face of Trump and Musk and Doge,
    0:02:25 in a relentless attack on American institutions,
    0:02:30 are Democrats unable to convince America that their way of governing is better?
    0:02:36 I’m Sean Illing, and this is The Gray Area.
    0:02:43 Today’s guest is Ezra Klein,
    0:02:45 the former host of this podcast,
    0:02:49 the current host of The Ezra Klein Show at The New York Times,
    0:02:52 and the co-author of a new book called Abundance,
    0:02:55 which he wrote with journalist Derek Thompson.
    0:03:04 Ezra argues that in states run by Democrats,
    0:03:07 policy failures have contributed to the rising cost of living.
    0:03:09 To address this crisis,
    0:03:12 and really any crisis America is facing,
    0:03:17 it needs to be easier to build and invent the things that America needs.
    0:03:21 And in our current system, that’s almost impossible to do.
    0:03:26 Not because we don’t have the means, the technology, or the know-how.
    0:03:28 We have all of that in spades.
    0:03:31 What we don’t have is a political economy that makes sense.
    0:03:36 Ezra believes that this idea should be the major,
    0:03:39 maybe the only focus of liberal politics in America.
    0:03:42 So I invited him onto the show, his old show,
    0:03:43 to tell me more.
    0:03:49 Ezra Klein, welcome to the show.
    0:03:53 Ah, it’s like stepping back into an old couch
    0:03:56 that you’ve sat in so much that it slightly has an imprint of your body.
    0:03:57 I finally feel like I’m back home.
    0:03:59 We’re glad to have you.
    0:04:00 I’m glad to be here.
    0:04:03 All right, let’s get to the book, Abundance.
    0:04:07 You want this book to reorient liberal thinking in America.
    0:04:10 Tell me, what are you looking to change?
    0:04:15 I think it’s important for liberals, for progressives,
    0:04:22 to recenter technology as an engine of social progress.
    0:04:25 Most liberals can tell you which five social insurance programs
    0:04:27 they’d like to create or substantially expand,
    0:04:29 but they can’t tell you which five technologies.
    0:04:32 They want the government to really organize resources and intention
    0:04:34 towards pulling in from the future into the present.
    0:04:38 So the idea of Abundance is that to have the future we want,
    0:04:41 we need to build and invent the things we need.
    0:04:44 Some of the things we need to build are things we know how to build,
    0:04:47 like housing, like clean energy, like high-speed rail.
    0:04:50 Some of the things are things we need to invent.
    0:04:52 We are not going to hit our climate targets.
    0:04:54 I mean, we’re not currently on pace to hit them at all,
    0:04:56 but we’re definitely not going to hit them
    0:04:57 if we cannot figure out things like green cement
    0:05:00 and low-carbon or low-emissions jet fuel,
    0:05:02 things we literally do not have,
    0:05:05 certainly not at an affordability point we can scale.
    0:05:09 There are problems you cannot solve without innovation.
    0:05:13 So this is really an effort to put building and innovation,
    0:05:17 the expansion of supply, at the center of liberalism.
    0:05:20 Well, one thing I do appreciate about the book
    0:05:24 is that you’re not trying to offer a suite of policy solutions.
    0:05:26 It’s more about articulating the questions
    0:05:29 you think our politics should revolve around.
    0:05:33 Why do you think it’s important to begin with the right questions?
    0:05:35 You see what you’re looking for.
    0:05:40 And I think that American liberalism has learned to look
    0:05:42 for opportunities to subsidize.
    0:05:44 Health insurance is too expensive.
    0:05:45 Can we make it subsidized?
    0:05:49 If people need housing, we give them a rental voucher, sometimes.
    0:05:53 If they need to go to college, we give them a Pell Grant.
    0:05:55 If they need food, we give them SNAP.
    0:06:00 If they need income as a retiree or as an elderly person,
    0:06:01 we give them Social Security, right?
    0:06:06 We know how to look for opportunities to do money or voucher-like things.
    0:06:07 That’s really important.
    0:06:12 But we do not look for opportunities to expand supply.
    0:06:13 And that creates two problems.
    0:06:18 One is that if you subsidize something and you don’t have enough supply of it, you will just
    0:06:20 have price increases or rationing.
    0:06:25 The other is that they’re just things you need that if you don’t increase the supply of them,
    0:06:26 you’re just not going to have.
    0:06:31 And look, I’m a Californian, and when I look around my home state where I lived for much
    0:06:35 of the writing of the book, and I think, what has deranged Californian politics?
    0:06:40 Why can Gavin Newsom not run for president in 2028 as he wants to do and say, elect me,
    0:06:42 and you can all have the California dream?
    0:06:44 Because nobody thinks it’s a dream.
    0:06:45 It’s losing people.
    0:06:46 And why is it losing people?
    0:06:47 Because the cost of living is too high.
    0:06:49 And why is the cost of living too high?
    0:06:50 We don’t have enough of the things we need.
    0:06:53 We don’t have enough supply of housing, child care, et cetera.
    0:06:59 And so you will get different answers to the question of how to expand different things.
    0:07:03 If you ask me why it’s hard to lay down transmission lines, that is a different answer
    0:07:06 than why is it hard to build affordable housing in San Francisco.
    0:07:10 But just simply asking the question of how do we get more of the thing we want,
    0:07:15 that I think is a more productive place to start and one that just honestly a lot of
    0:07:18 liberal governance is going to ride by not centering.
    0:07:24 Well, you know, people will hear these kinds of complaints and they will immediately think
    0:07:26 of all the ways the other side is to blame.
    0:07:34 But you do say pretty early on that some of these outcomes reflect an ideological conspiracy
    0:07:35 at the heart of our politics.
    0:07:37 What’s the argument here?
    0:07:43 So I think that liberals, frankly, conservatives too, are comfortable with the narrative that
    0:07:49 we had a conservative movement that arose in the latter half of the 20th century, has attained
    0:07:54 yet more power in the 21st, that is anti-government, that wants to, as Grover Norquist famously
    0:07:56 put it, strangle government in a bathtub.
    0:08:03 That doesn’t really explain, though, why governance in places where conservative Republicans have
    0:08:08 functionally no power, California, Illinois, New York, is pretty bad.
    0:08:14 And to understand that, you have to start looking at something else that does not get as much
    0:08:19 narrative weight in our politics, which is starting in, again, the back half of the 20th
    0:08:26 century, there was a liberalism, the new left, that arose in response to the New Deal left.
    0:08:30 And what New Deal liberalism put at its center was growing to build things.
    0:08:32 We had a rapidly expanding population.
    0:08:34 We were this, you know, new superpower.
    0:08:36 And we went on this orgy of building.
    0:08:37 And we often built recklessly.
    0:08:39 We built in ways that damaged the environment.
    0:08:44 We, you know, I grew up outside of Los Angeles at a time when you would have that curtain of
    0:08:47 smog descend and your eyes would water and people would cough.
    0:08:49 And it was really bad for kids and, frankly, adults.
    0:08:55 And so this sort of liberalism emerged that was about making it harder to build, that was
    0:09:00 about making sure government couldn’t do what, say, Robert Moses did in New York and cut
    0:09:02 a freeway right through, you know, a marginalized community.
    0:09:07 And frankly, more than that, it ended up being a liberalism that really made it impossible to
    0:09:09 cut a freeway through an affluent community.
    0:09:13 And a lot of this was not just well-intentioned.
    0:09:14 It worked.
    0:09:15 We cleaned up the environment.
    0:09:17 We cleaned up the air.
    0:09:18 We cleaned up water.
    0:09:22 We did make it harder for government to do stupid things or act without thinking about
    0:09:22 its actions.
    0:09:26 Over time, those things grew and grew and grew.
    0:09:30 Those statutes, those processes, those movements, liberals became more affluent.
    0:09:31 They had more to defend.
    0:09:36 And so in places even where you didn’t really have a strong conservative movement, what you
    0:09:44 did develop was a way of doing government that was so coalitional, that had so many opportunities
    0:09:50 for veto, had so many opportunities for individuals or nonprofits to sue the government, that you
    0:09:51 just couldn’t get shit done.
    0:09:57 And so construction productivity has been functionally falling in America for a very long time or stagnating
    0:09:58 in some areas.
    0:10:03 And so as the years have gone by, we’ve gotten really good at building in the digital world.
    0:10:07 We can make cryptocurrencies and AI and this whole expansive internet and really quite
    0:10:09 shitty at building in the real world.
    0:10:17 Look, I think rattling off a bunch of numbers isn’t awesome, but I have to just at least mention
    0:10:21 a couple here because it just illustrates the problem, right?
    0:10:22 So this is from your book.
    0:10:29 It cost about $609 million to build a kilometer of high-speed rail in the U.S.
    0:10:31 $609.
    0:10:32 Just rail, not high-speed.
    0:10:34 Oh, even better.
    0:10:37 In Germany, it’s $384.
    0:10:39 In Canada, $295.
    0:10:41 Japan, $267.
    0:10:44 And in Portugal, fuck, they’re really doing something right.
    0:10:46 It’s only $96 million.
    0:10:48 How is that even real?
    0:10:53 So one thing to note about that is that conservatives will say, yeah, the government sucks.
    0:10:54 Don’t use it.
    0:10:56 But those countries have governments.
    0:11:00 Those countries actually have higher union density than the U.S. does.
    0:11:04 So there is something about the way we do government here, the way we do building here.
    0:11:06 And there’s a bunch of different answers to that.
    0:11:10 One of the big ones is we are very focused on adversarial legalism, as it’s called.
    0:11:17 We make it the primary way we let people constrain the government is by suing it.
    0:11:19 Suing it takes a long time.
    0:11:24 I mean, and, you know, at this moment, people are glad we have a way to sue the government under Donald Trump.
    0:11:27 So the point is not that it is always and everywhere bad.
    0:11:36 But nevertheless, there is a dimension where we have made it so hard for the government to act, so slow for it to act, that it just functionally can’t act.
    0:11:43 And one thing about those numbers that you then see is that we just don’t do as many big infrastructure projects anymore for all kinds of reasons.
    0:11:47 We’re very afraid of doing anything that requires tunneling in a way they’re not in other countries.
    0:11:50 The Second Avenue subway in New York City is like a total nightmare.
    0:11:55 And we have just created ways of building that don’t work.
    0:11:57 I wish they did.
    0:11:58 What’s the Second Avenue subway?
    0:12:07 Oh, it’s a subway extension in New York that has been planned for a very, very long time that was supposed to be much more ambitious than it will now be.
    0:12:15 Look, when they began building the New York subways, they opened the first 28 stations, I think it was, in four years, if I’m not wrong.
    0:12:20 It takes decades now to do anything, to do like one station.
    0:12:44 You would think, with the advances in machinery we have, with the advances in imaging we have, with the advances in 3D computerized drafting that we have, I mean, you would think, with everything we have built, advanced machinery-wise, since 1908, we would make things bigger, better, faster, right?
    0:12:46 We would be just way better at building things than we were then.
    0:12:48 But we’re just not.
    0:12:50 I mean, we are safer at building them.
    0:12:53 There are things we were better at planning for when we build them.
    0:12:55 I don’t want to suggest that no advancement has happened.
    0:12:58 But they built the Empire State Building in a year.
    0:12:59 A year.
    0:13:01 We just can’t do that anymore.
    0:13:04 And the reason isn’t that we have forgotten technique.
    0:13:11 And the reason isn’t that we haven’t had things advance in terms of machinery and building.
    0:13:13 The reason is we’ve made the politics of building very, very difficult.
    0:13:16 And we’ve made the process of building very, very cumbersome.
    0:13:21 I talk about the example of California high-speed rail at some length, but I think it’s a good one.
    0:13:24 And I could say a million things about it, but I’ll say this.
    0:13:27 High-speed rail replaces cars.
    0:13:28 It’s pretty clean, right?
    0:13:34 It’s a good—the reason to do it, in part, is it is an environmentally friendly form of transportation.
    0:13:41 The effort to environmentally clear the high-speed rail line that California intended to use began in 2012.
    0:13:48 By the time I wrapped the book, at the end of 2024, it was almost, but not quite done.
    0:13:51 12 years, and it was not finished.
    0:13:59 And the question that that environmental review was asking was not, was having high-speed rail better than not having it?
    0:14:10 It’s in every individual parcel of track, had they considered all the possible consequences of having it?
    0:14:14 Mitigated all the possible downsides, which, of course, the status quo does not have to do.
    0:14:21 And, you know, most importantly, bulletproofed themselves as much as they can against lawsuits, which can take years to play out.
    0:14:26 This replicates across clean energy efforts.
    0:14:32 Congestion pricing in New York City was held up for years in environmental assessment.
    0:14:35 And these are for things that are good for the environment.
    0:14:43 So this is—it’s one example, but these are liberal policies that liberals defend that make it very hard for liberals to deliver on the things liberals say they are going to give people.
    0:14:44 That’s a problem.
    0:14:54 I just want to stress that part of what makes this so maddening is that it’s an outcome basically no one really wants, right?
    0:15:04 It’s the system, it’s the incentive structure, it’s individuals making narrowly rational decisions, which in the end produce incredibly stupid, unhelpful results.
    0:15:07 That is definitely a big part of it.
    0:15:11 Some things are drift, some things are accidental, some things are unseen, and some things are intended.
    0:15:19 When we talk about housing, which is different than something like rail, you’re dealing with a problem that housing has become a core financial asset.
    0:15:26 And that asset is often made more valuable, or at least people believe it will be made more valuable, by scarcity.
    0:15:39 And the idea that, you know, you’ve got this house on a block of San Francisco or Brooklyn or whatever, and you don’t want a large affordable housing complex going up down the street, it’s not crazy.
    0:15:41 I mean, that might actually hurt your parking.
    0:15:44 That might actually hurt your home values, depending on how it plays out.
    0:15:54 But now you’ve got a real problem, because you’ve made the engine of wealth something that people only feel comfortable will keep going up if we make sure we don’t build enough housing around it.
    0:15:56 But we need to build enough housing around it.
    0:15:57 And so who’s winning?
    0:15:59 You know, the people already who have the assets.
    0:16:05 And liberalism has to ask, like, does it hold the values it puts on lawn signs?
    0:16:07 You know, no human being is illegal and kindness is everything.
    0:16:12 Or is it, you know, more of an “and I got mine”?
    0:16:15 You know, a “sorry you didn’t get yours” ethos?
    0:16:32 Support for the gray area comes from Shopify.
    0:16:34 Running a business can be a grind.
    0:16:39 In fact, it’s kind of a miracle that anyone decides to start their own company.
    0:16:46 It takes thousands of hours of grueling, often thankless work to build infrastructure, develop products, and attract customers.
    0:16:50 And keeping things running smoothly requires a supportive, consistent team.
    0:16:57 If you want to add another member to that team, a platform you and your customers can rely on, you might want to check out Shopify.
    0:17:03 Shopify is an all-in-one digital commerce platform that wants to help your business sell better than ever before.
    0:17:10 It doesn’t matter if your customers spend their time scrolling through your feed or strolling past your physical storefront.
    0:17:15 There’s a reason companies like Mattel and Heinz turn to Shopify to sell more products to more customers.
    0:17:18 Businesses that sell more sell with Shopify.
    0:17:22 Want to upgrade your business and get the same checkout Mattel uses?
    0:17:28 You can sign up for your $1 per month trial period at Shopify.com slash Vox, all lowercase.
    0:17:32 That’s Shopify.com slash Vox to upgrade your selling today.
    0:17:34 Shopify.com slash Vox.
    0:17:41 Support for the gray area comes from Upway.
    0:17:47 If you’re tired of feeling stuck in traffic every day, there might be a better way to adventure on an e-bike.
    0:17:55 Imagine cruising past traffic, tackling hills with ease, and exploring new trails, all without breaking a sweat or your wallet.
    0:18:02 At Upway.co, you can find e-bikes from top-tier brands like Specialized, Cannondale, and Aventon.
    0:18:06 At up to 60% off retail, perfect for your next weekend adventure.
    0:18:11 Whether you’re looking for a rugged mountain bike or a sleek city cruiser, there’s a ride for everyone.
    0:18:20 And right now, you can use code GRAYAREA150 to get $150 off your first e-bike purchase of $1,000 or more.
    0:18:29 There’s over 500,000 small businesses in B.C. and no two are alike.
    0:18:30 I’m a carpenter.
    0:18:31 I’m a graphic designer.
    0:18:33 I sell dog socks online.
    0:18:37 That’s why BCAA created One Size Doesn’t Fit All Insurance.
    0:18:40 It’s customizable, based on your unique needs.
    0:18:47 So whether you manage rental properties or paint pet portraits, you can protect your small business with B.C.’s most trusted insurance brand.
    0:18:53 Visit bcaa.com slash smallbusiness and use promo code RADIO to receive $50 off.
    0:18:54 Conditions apply.
    0:19:14 Look, I guess we’re a couple of months into this new administration.
    0:19:19 People feel as though the government is acting very rapidly.
    0:19:21 How do you make sense of that?
    0:19:27 I mean, is that just because it’s basically breaking shit and breaking shit is significantly easier than building shit?
    0:19:39 Elon Musk and Donald Trump decided, certainly Musk decided, that he just wasn’t going to treat a lot of things that have constrained past administrations as real and binding.
    0:19:45 And it turns out from watching them, there’s a lot more you could do than people thought you could do.
    0:19:48 The civil service protections were not nearly as binding as people made them seem to be.
    0:19:55 I do not like what Elon Musk is doing in terms of indiscriminately and ideologically firing huge swaths of the federal workforce.
    0:20:01 But I believe four months ago, and I believe today, that it was way too hard to hire and fire in the civil service.
    0:20:11 And because liberalism never fixed that in a way that was conceptually and morally appropriate, now we’re getting this burn-it-down approach.
    0:20:13 And I think that’s true in a lot of things.
    0:20:20 If you do not make government work, someone else will eventually weaponize the dissatisfaction with it and burn it to the ground.
    0:20:30 And liberals had no really good things to say about cost of living and affordability in the 2024 election, in part because they themselves have been bad on cost of living and affordability.
    0:20:32 The places they govern have become unaffordable.
    0:20:37 And that was part of why they lost to Donald Trump in an election that was about cost of living and affordability.
    0:20:42 I don’t want to put everything on liberals or liberalism.
    0:20:48 The right deserves – the right has to take responsibility for its own actions, its own failures.
    0:20:51 The things they want are very different than the things I want.
    0:20:53 But yes, Musk has come in.
    0:20:54 Trump has come in.
    0:21:01 And they have not treated process as binding or even something worth respecting in the way liberalism has.
    0:21:09 And I think the two coalitions have developed mirror-image pathologies, which is that liberals are much too respectful of and obsessed with process.
    0:21:15 And the right now has functionally no process and no respect for it and no respect for the legality of things.
    0:21:26 And, you know, I would like to see something that is more thoughtfully integrating of these perspectives.
    0:21:29 I mean, look, can I just vent for a second?
    0:21:30 You know what, man?
    0:21:31 It’s a podcast.
    0:21:32 It’s your podcast.
    0:21:36 I mean – okay.
    0:21:39 So, Democrats believe in government, right?
    0:21:41 Have used government, as you were saying, to do great things.
    0:21:51 And I agree that they have created or helped create a wildly sclerotic system that makes it very difficult, if not impossible, to build stuff and do stuff.
    0:21:58 But meanwhile, Republicans don’t really believe in government except for defense and national security.
    0:22:00 They want to dismantle it.
    0:22:01 They want to privatize everything.
    0:22:07 And this dynamic is eternally to their advantage because, again, breaking shit is easier than building shit.
    0:22:13 And in their efforts to break government, they’ve increased the public’s disgust with it because it keeps not working.
    0:22:15 And this is the doom loop.
    0:22:20 And I definitely take your point about absurd liberal proceduralism.
    0:22:28 But I do think having one of our two government parties enter into politics with the explicit aim of making government not work is a problem.
    0:22:34 And I don’t know how liberals and Democrats can solve that because this is a 51-49 country.
    0:22:36 Well, maybe it wouldn’t be if we were better at governing.
    0:22:37 You think so?
    0:22:38 I hope so.
    0:22:44 This thing liberals do, where it’s like, oh, man, they’re so bad.
    0:22:45 And they are.
    0:22:47 Like, I am fucking furious.
    0:22:57 And you know what I also am is I’m fucking furious that liberals gave up the, like, mantle of people who would fix your problems to this band of idiots.
    0:22:59 It makes me angry.
    0:23:01 Like, it should make other people angry.
    0:23:05 And just telling yourself endlessly that they are so bad, what are we going to do?
    0:23:07 Well, you know what would be good if we did?
    0:23:11 Created a situation where people said, California, that’s a well-run state.
    0:23:16 Maybe any one of the 18 national-level figures it has recently produced should be president.
    0:23:19 New York, there’s a big economically important state.
    0:23:23 Maybe somebody from it should be a plausible national figure.
    0:23:33 You can’t, like, it is easier to run for president as the governor of Texas or Florida than as the governor of California or New York.
    0:23:35 Now, that’s not true for everywhere.
    0:23:36 Jared Polis has done a good job in Colorado.
    0:23:38 And you know what happened in Colorado in 2024?
    0:23:46 They didn’t suffer a complete collapse of the Democratic vote share in the way that happened in California and New York.
    0:23:48 Because on some level, governing will does matter.
    0:23:55 And, like, I don’t think being a nihilistic party is highly popular, but being an ineffectual party is also not highly popular.
    0:24:05 So what you’re preaching, right, doing big things, building big things, actually leading, governing, investing in the future, people will say, well, you know, Joe Biden kind of did this, right?
    0:24:06 Or he appeared to do this.
    0:24:09 He passed the bipartisan infrastructure bill.
    0:24:10 He did the Chips and Science Act.
    0:24:12 He did the Inflation Reduction Act.
    0:24:13 And it’s like it never happened.
    0:24:14 He got no credit.
    0:24:16 It passed away like a fart in the wind.
    0:24:19 So, like, what is the lesson of that for you?
    0:24:25 Did all of that just fail politically because maybe the money was allocated, but for all the reasons you’ve outlined, nothing actually got done?
    0:24:27 Or is it something else?
    0:24:28 There are a couple things here.
    0:24:34 So, one is that there was a huge problem with running Joe Biden for president a second time.
    0:24:35 There just was.
    0:24:38 I mean, obviously, I was somebody who was not in favor of that.
    0:24:49 But I think Joe Biden, if you change nothing about the election except that Joe Biden is 65 and can effectively tell a story about his own administration, I think he would have won re-election.
    0:24:49 I really do.
    0:24:53 Now, his record alone was not that strong.
    0:24:58 Part of that was inflation, which they bear a modest amount of the blame for.
    0:25:05 It is the case that they put too much demand into the economy at a time when supply chains were choking, and that ended up being a bad idea.
    0:25:15 But, yeah, on the other side, when you make it slow to get these things built, it really does harm you.
    0:25:20 So, they got $42 billion for broadband access in poor communities.
    0:25:22 How many people got broadband?
    0:25:23 Approximately zero.
    0:25:28 So, yeah, Biden’s a complicated case because I do think there are movements towards abundance.
    0:25:34 I don’t think there was a serious effort to show that they’re making government spend in a way that was, you know, meant to benefit people.
    0:25:47 I think it’s a fucking problem that Doge is a dark Republican con as opposed to something that was a bright Democratic idea.
    0:26:05 I would have loved an actual Department of Government efficiency that was acting with real aggression, not the illegal and almost nihilistic levels of aggression under Elon Musk, but was really, really, really upset about places where government was failing and was making a big show of that.
    0:26:10 And you saw something like this under Bill Clinton with the Reinventing Government Initiative, which Al Gore led.
    0:26:17 But under Biden, they had this hyper-coalitional approach to politics and hyper-bureaucratic approach to the federal government.
    0:26:22 And, you know, they didn’t have any real—they never bought themselves credibility on that.
    0:26:25 They never seemed upset about the things they were doing wrong, right?
    0:26:27 Everything just kind of got explained away.
    0:26:36 And so, yeah, then at the end of the day, people felt prices had increased a bunch, and they were going around saying, no, no, no, we’ve spent all this money to spark a manufacturing boom.
    0:26:39 And, you know, the two things didn’t connect.
    0:26:42 Okay.
    0:26:48 So, let’s talk about how to fix shit, okay?
    0:27:04 If the big problem—and this is a book written broadly from the left, by the left, for the left, and the big problem on the left is this soul-crushing proceduralism, what is the solution to that?
    0:27:06 There is no one solution.
    0:27:11 I’m not, you know, it’s not one weird trick to get rid of your belly fat here.
    0:27:22 What I see us as trying to do is build—in certain ways, rebuild—an ideological tendency in politics, but on the left.
    0:27:26 And it will take time for that to take root and do big things.
    0:27:30 It will take time for it to change processes, if it ever does.
    0:27:33 It will take time for it to do new legislation.
    0:27:43 You know, one of the most inspiring of the movements here that I think are part of, like, this broader sense of refocusing on supply is the YIMBY movement.
    0:27:45 Can you just say what the YIMBYs are just for people to know?
    0:27:56 The Yes in My Backyard movement, which is basically a sort of tendency—they want to be bipartisan, but at least began, like, as an intra-liberal fight over saying, no, no, to be a liberal, you can’t be fighting this development.
    0:27:58 You can’t be fighting new homes.
    0:27:59 You can’t be fighting affordable housing.
    0:28:02 You can’t say we can build nothing and then say you’re a liberal.
    0:28:07 Liberalism requires building enough that living in this city is affordable for the working class.
    0:28:10 And they’ve had incredible intellectual victories.
    0:28:13 Kamala Harris is running on building three million new homes.
    0:28:18 Barack Obama, you know, functionally brought up YIMBYism during his DNC speech.
    0:28:23 But again, in the place where it is most powerful, it has not moved the needle in a significant way.
    0:28:27 And it’s because it’s still been bogged down in these coalitional fights.
    0:28:31 And, you know, I was talking to somebody who is a developer down there, and I was saying, look,
    0:28:36 they’ve passed all these bills in California to give you a fast track to build housing.
    0:28:38 Why aren’t you building more housing?
    0:28:39 He said, oh, I don’t use any of those bills.
    0:28:41 I said, well, why?
    0:28:46 He’s like, well, in order to use those bills, I have to agree to a whole new set of standards.
    0:28:49 I have to pay higher wages, prevailing wages.
    0:28:50 I have to do all these different things.
    0:28:56 So in the end, the fast track of that would end up costing me more than just not doing it at all.
    0:28:58 And he’s like, that’s how I am.
    0:28:59 That’s how all my developer friends are.
    0:29:01 Like, the budgeting of it just doesn’t work out.
    0:29:05 And, you know, all these things are good on some level.
    0:29:06 Like, I want people to pay high wages.
    0:29:12 But when you have a housing crisis, right, California in 2022, it had 12% of the country’s population.
    0:29:14 It had 30% of its homeless population.
    0:29:18 It had 50% of its unsheltered homeless population.
    0:29:21 California has an astonishing homelessness crisis.
    0:29:23 And that is driven by a housing crisis.
    0:29:33 When you have a housing crisis and you’re passing a bunch of bills to build more housing and your bills aren’t working, well, then you have to ask, like, are the coalitional decisions you’re making good ones?
    0:29:42 Or do you have to deal with the housing crisis in your housing crisis bills and try to think about wages and do an income tax credit or whatever you want to do in other bills?
    0:29:46 But if your bills to solve your crisis are not solving your crisis, you’ve got to do something different.
    0:29:48 It’s not going to be easy.
    0:29:57 It’s going to take a political movement that, you know, over time begins to just see things differently at a lot of different levels and chip away at things in a lot of different ways.
    0:29:59 And it will take aggressive leadership.
    0:30:03 Again, you know, I don’t want to see what Elon Musk and Doge are doing become the norm.
    0:30:13 But I would like to see much more aggressive leadership from liberal politicians who are furious at government not working and insistent that it has to work and has to deliver the outcomes they actually promise.
    0:30:31 This week on Unexplainable, the final installment of Good Robot, our four-part series on the stories we tell about AI.
    0:30:36 So what I want you to do first is I want you to open up ChatGPT.
    0:30:38 This time, the robots.
    0:30:46 And I want you to say, I’m going to give you three episodes of a series in order.
    0:30:47 Come for our jobs.
    0:30:49 Why are you laughing?
    0:30:50 I don’t know.
    0:30:51 It’s like a little creepy.
    0:31:01 Good Robot, a four-part series about AI from Julia Longoria and Unexplainable, wherever you listen.
    0:31:27 Okay, so let’s just assume that we are able to clear the way for big innovations and invention.
    0:31:32 What do you think we most need and how quickly do we need it?
    0:31:35 So we’ve had abundance of some things for quite some time, right?
    0:31:39 We’ve really built the global economy to give us an abundance of consumer goods.
    0:31:46 Forty years ago, you could go to public college debt-free, but you couldn’t have a flat-screen television.
    0:31:48 And now it’s basically the reverse, right?
    0:31:50 You can have a flat-screen television, but you can’t go to college debt-free.
    0:32:05 So we’re sort of more interested in abundance in the things that are the building blocks of what we think of as not just a good life, but the building blocks of a kind of creative and generative, productive life.
    0:32:17 So the things people really need that allow them to do other things, education, health care, and inside of health care, it doesn’t mean just everybody having insurance, but it means having cures to as much as we can, right?
    0:32:28 The value of health insurance, you know, my partner, she’s written a lot about this, so this is not me speaking out of turn, but, you know, she has a bunch of very complex and strange autoimmune diseases.
    0:32:40 Our health insurance would be a hell of a lot more valuable to me if it had cures for all of them, you know, and this is true for anybody who, you know, who knows people or loves people or they themselves suffer from difficult diseases.
    0:32:43 So, like, the pace of medical innovation really matters.
    0:32:55 Housing, like, you just need to be able to build homes, and I want to see working-class families be able to live in the big, economically productive cities.
    0:33:11 And that matters not just because, like, it’s fun to live in New York City or San Francisco, but because it is a fundamental path to productivity and to social mobility and to opportunity to have all classes living in the places that are the biggest economic engines.
    0:33:21 And one thing we’ve seen that’s a very, very worrying trend is it used to be that poor people migrated to rich places, and then they got richer, and now they migrate away from them because they can’t afford to live in them.
    0:33:27 And that takes away all the opportunity those rich places used to offer to people who weren’t already rich.
    0:33:30 Michael Bloomberg used to talk about New York City as a luxury good.
    0:33:33 Cities are not supposed to be luxury goods.
    0:33:39 They are engines of opportunity, and when we gate them, we have turned off something very fundamental in the economy.
    0:33:50 So I love the section at the end of the book about these periods of political order where there’s a broad alignment of values, right?
    0:34:02 So after the wreckage of the Great Depression and World War II, we have this spirit of solidarity and collective action, and the power of the state expands enormously, and this is the New Deal era.
    0:34:15 And then this consensus collapses in the 70s, and the pendulum swings back in the opposite direction, and we get the neoliberal era, which is defined much more by individualism and consumerism.
    0:34:26 And I guess my question to you is, to undertake the sort of project you’re talking about here, this era of abundance, that will require a shift in priorities and outlook.
    0:34:33 And do you think that’s possible in this environment, in the absence of some kind of truly epic calamity?
    0:34:37 Like, do we have the attentional resources to course-correct as a country anymore?
    0:34:43 I never think things happen all the way or none of the way.
    0:34:46 Like, there was no pure neoliberal era.
    0:34:48 Nothing in this period was pure neoliberalism.
    0:34:53 Now, there are ideological tendencies that win out during periods.
    0:34:56 But, you know, the neoliberal era is full of contradictions.
    0:35:02 What is opening possibilities right now are very real problems that people have to figure out how to solve.
    0:35:05 Now, history is not, to me, teleological.
    0:35:07 I don’t believe the arc of history bends towards abundance.
    0:35:10 I think that it could go very badly.
    0:35:17 One of the things that we see with Trump is, look, that guy could have run as a sort of conservative abundance.
    0:35:19 I mean, he would want different things than I do.
    0:35:21 The values would be different.
    0:35:23 But he’s not.
    0:35:26 He does not want to bring the Texas housing policies to the nation.
    0:35:34 He and J.D. Vance have repeatedly used the housing crisis as a cudgel against immigrants and an argument for why we need to close the border, right?
    0:35:35 That’s a scarcity approach.
    0:35:40 He doesn’t want to increase the flows of international trade by making us build more stuff.
    0:35:43 He’s using tariffs to cut them down.
    0:35:52 Like, Elon Musk is not expanding what the government can do, given that the government is what allowed him to build Tesla, SpaceX, and SolarCity.
    0:35:55 He is trying to slash and destroy what the government can do.
    0:36:01 Right-wing populism loves scarcity because at the core of its politics is a suspicion of the other.
    0:36:09 If there is the feeling or the reality of there not being enough, then we look with a lot of suspicion on those who might take what we have or what we want.
    0:36:12 So I do think it’s going to be up to the left to try to embrace abundance.
    0:36:19 But if we don’t or if we fail, yeah, scarcity could just be the politics that wins out in the day.
    0:36:21 It has in many eras of human history before.
    0:36:33 I wonder if you think we’ll need a fundamentally different kind of communication environment shaped by different tools in order to have something like a constructive form of politics that makes these sorts of changes possible.
    0:36:34 I don’t think that.
    0:36:36 Why not?
    0:36:37 I hope you’re right.
    0:36:44 Because I think that the current information ecosystem is bad.
    0:36:47 I think it has been often bad in human history.
    0:36:52 I don’t think the specific way it’s bad is really at the root of many of the things that I’m worried about.
    0:36:59 And I don’t think the information ecosystem cares one way or the other about local permitting.
    0:37:06 I don’t think the information ecosystem, like, frankly, I think it’s actually quite friendly to all sorts of different forms of futurism.
    0:37:14 I think that it’s not standing in the way of all progress or all change.
    0:37:25 And like one just good example of that is that, you know, it in some ways created Trump, Musk, Vance.
    0:37:28 But it’s not stopping them from doing things.
    0:37:34 And Trump won the popular vote by 1.5 points.
    0:37:40 So, you could very much imagine a Democrat, you know, like, imagine a different world.
    0:37:42 Joe Biden does not run for re-election.
    0:37:43 We have a Democratic primary.
    0:37:48 Maybe Kamala Harris wins it and has more time to put together a campaign that has more to say about the issues of the moment.
    0:37:49 And she’s better at talking about them.
    0:38:00 Or maybe Josh Shapiro or Gretchen Whitmer or, you know, someone else, Pete Buttigieg, Wes Moore, you know, wins the primary and they run.
    0:38:01 Like, you just can’t tell.
    0:38:06 Like, the thing, this did not all just turn on the information ecosystem.
    0:38:10 Or to the extent it did, it could have, you know, turned in many different ways.
    0:38:18 And we see different things happening in different states, even though all the states are exposed to the same information ecosystem.
    0:38:21 I think you’ve got to get a little less monocausal, my friend.
    0:38:28 I’ve never been a determinist, but I think I’ve just increasingly become one.
    0:38:31 And look, you can talk me off the ledge here.
    0:38:45 I mean, I think part of, or one of my hang-ups is that I think we’ve lost the capacity as a society to tell ourselves a coherent story about who we are, what we are, where we’re going, what we want.
    0:38:54 And I guess maybe the question is, do we need, do we need, do we actually need to tell ourselves a coherent story in order to move a political project like this forward?
    0:38:57 Did we ever really need a coherent story?
    0:38:59 Or did we ever really have a coherent story?
    0:39:11 I think if your view of politics is that it needs some extremely high level of informational and narrative cohesion to function, then your politics has a real problem.
    0:39:14 Because that’s very, very, very rarely on offer.
    0:39:24 I think one criticism you’ll get from the left is about, you know, what you attribute to liberal ideology.
    0:39:29 Because part of the problem here is the rules written by liberals decades ago being used to prevent building stuff today.
    0:39:44 Well, that’s really about wealthy, powerful people using their wealth and power to block progress, which is more about class politics than liberal ideology, that these people aren’t really liberals in any meaningful sense, just rich people protecting their turf.
    0:39:46 I don’t know.
    0:39:47 How do you tease that out?
    0:39:49 Does that distinction even make sense to you?
    0:39:54 I don’t have a class politics where I’m like, rich people are always bad and anybody else is always good.
    0:39:56 But there are places where rich people are a huge problem.
    0:39:58 And you get a lot of it in nimbyism.
    0:40:06 You get a lot of it in, you know, Ted Kennedy, the late Ted Kennedy, helping to organize against an offshore wind project near Cape Cod.
    0:40:15 I just think you’ve got to be specific about what you’re talking about and then work through what you think the political opposition is and what the problems are and what the process is.
    0:40:19 I don’t take that as a particularly useful blanket claim.
    0:40:25 Even in the place where you’d expect rich people to speak the most with one voice, should we raise taxes on rich people?
    0:40:27 They actually don’t anymore.
    0:40:38 The way polarization has structured itself, the way income polarization has structured itself, Democrats are doing better and better with rich people at a time when they’ve become more and more insistent on taxing the rich.
    0:40:41 And so, like, that’s a kind of interesting fact of our politics.
    0:40:43 It has scrambled a bunch of things.
    0:40:48 Democrats sort of think they will get the working class voters they want by saying we’re going to tax rich people.
    0:40:52 They’re weirdly winning more rich voters and fewer working class voters.
    0:40:55 And instead, you have more working class voters for the first time voting for Donald Trump.
    0:40:59 It’s easier if your only problem is rich people.
    0:41:04 It’s hard in the sense that they control a bunch of resources, but it’s easier in that that narrative is super clean.
    0:41:07 What happens when it’s not, though?
    0:41:15 What happens when some of your problems are just, like, upper-middle-class people who are the core of your constituency and who don’t want building happening around them?
    0:41:26 What happens when a bunch of your problems are actually other parts of the government you yourself run that over time have developed turf and funding and kind of stakeholder dynamics?
    0:41:29 And now all of your processes are incredibly difficult.
    0:41:32 So, yeah, rich people are sometimes a problem.
    0:41:34 They’re not the only problem.
    0:41:37 I just, I don’t have a lot of patience for monocausal politics.
    0:41:41 Oh, that feels like a low-key shot there.
    0:41:43 I feel attacked.
    0:41:46 Oh, well, you know, I have more patience for monocausal media politics, maybe.
    0:41:52 I just think everybody, we all have, like, look, abundance is also not a full politics.
    0:41:58 Like, asking the question of how we solve problems of supply does not tell you how to solve every problem.
    0:42:03 It’s not going to tell you how to solve or even what position to take on a bunch of very difficult cultural and social issues.
    0:42:07 It is one set of problems that we could do a better job on.
    0:42:08 And better would be better.
    0:42:17 Yeah, I mean, one of the things I like about it is that it doesn’t necessarily map neatly and predictably onto partisan cleavages in that way.
    0:42:29 But look, you know, there’s also a movement of people, as you know, who say the only sensible response, actually, at this point in history, is to do the opposite of what you suggest.
    0:42:30 Which is degrowth, right?
    0:42:36 That this whole model of late-stage consumer capitalism has been a moral and ecological catastrophe.
    0:42:40 And we have to scale it back in order to save ourselves.
    0:42:42 To that, you say, what?
    0:42:43 No.
    0:42:45 Say more.
    0:42:49 So I have a long, we have a long discussion of degrowth in the book.
    0:42:49 Yeah.
    0:42:56 And I have a lot I could say and want to say about degrowth, but I’ll say a couple things that are, I guess, maybe narrow.
    0:43:04 One, I do not agree that this era has been, like, a moral catastrophe; it’s been a bit of an ecological catastrophe, but not a moral catastrophe.
    0:43:11 It certainly hasn’t been one for human beings, and I do think degrowth has too little preference for human beings in it.
    0:43:16 And the amount of people we’ve pulled out of poverty, the rise in living standards, those are not things to take lightly.
    0:43:20 Then I think, again, you’ve got to, like, look at what your problems are.
    0:43:27 Degrowth has this kind of interesting dynamic to me of being both too much and not enough of a solution to something like climate change.
    0:43:43 If we had not invented our way towards genuinely cheap and plentiful solar energy, wind energy, the possibility of advanced geothermal, new generation battery storage, the only answer we would have to climate change would be sacrifice.
    0:43:46 And sacrifice is just a terrible politics.
    0:43:47 It doesn’t really work.
    0:43:51 I would love to see some people run on it and make it work, but I just haven’t seen it.
    0:43:53 It doesn’t seem to me to happen.
    0:43:54 Definitely not at this speed.
    0:43:59 And so, our only real shot, in my view, on climate change is technological.
    0:44:05 We have to deploy unfathomable quantities of clean energy as fast as we can.
    0:44:12 And as we do that, because of the way these sorts of innovation curves work, we will get better and better at using the energy.
    0:44:13 It will become more energy dense.
    0:44:20 What has happened to solar and wind and battery storage is genuinely miraculous, has outpaced all expectations.
    0:44:27 And that is at least a viable politics, promising people that they can actually have, like, great technologically advanced lives.
    0:44:36 And it can be built on, you know, abundant clean energy, which is completely conceptually and physically and technologically possible.
    0:44:38 Like, that’s a viable politics.
    0:44:45 Well, the politics of degrowth, degrowth as a political proposition, is like the platonic ideal of a dead fucking end.
    0:44:52 Well, what’s worse is that the risk isn’t that you miss your climate targets by three-tenths of a percent or something.
    0:44:59 It’s that you empower a populist right that promises to burn their way back to prosperity, which is what they are doing right now, right?
    0:45:00 And I think it’s really important.
    0:45:04 Like, when your politics doesn’t work, it’s not like you get half of what you wanted.
    0:45:07 You get, like, the opposite of what you wanted.
    0:45:15 Like, you really have to be, if you care about these problems and you think these problems are near term, hard-nosed about the political consequences of what you’re about to do.
    0:45:18 Well, to that point, I know we’ve got to go soon.
    0:45:31 A lot of what’s happening right now is you have an administration in power that is doing their very best to render government inoperable.
    0:45:43 Does it concern you that damage might be done that will make it more difficult, if not impossible, to do any of these things after they’re gone?
    0:45:47 The damage that will be done concerns me hugely.
    0:45:54 The idea that it would then be impossible to do any of these things, I think if decent people win back power, that’s not accurate.
    0:45:59 I think the damage that will be done is going to be less than the damage of the Civil War, right?
    0:46:08 I mean, less than the—I mean, we have seen countries destroyed by all kinds of natural disasters and wars that were then able to build strong states fairly rapidly afterward.
    0:46:14 I am not one of the people who has a view that what they’re going to do is permanently wreck state capacity.
    0:46:21 But they could create authoritarianism, right, which would weaponize state capacity in a different way.
    0:46:34 My concerns have more to do with democratic backsliding than they do with the idea that we would never be able to rebuild a capable Department of Energy after they shut it down or otherwise corrode it.
    0:46:36 Yeah, and just so you know, I’m not even thinking in terms of permanence.
    0:46:38 I’m thinking just in terms of that 10-year window.
    0:46:40 Oh, you mean on climate change specifically?
    0:46:41 Yeah, specifically.
    0:46:42 Yeah, I’m very fucking concerned.
    0:46:43 I don’t know what to tell you.
    0:46:58 Like, I’m less worried, again, that we won’t be able to do good policies in the next administration, if you imagine a better administration following them in 2029, than I am that they will do everything they can to retard our progress in the next four years.
    0:47:03 And they are trying to, as we speak, destroy the solar and wind industries.
    0:47:06 And this is a really, really, really crucial period.
    0:47:09 I am hair on fire about that.
    0:47:16 But I don’t have a lever to stop it, you know, like, we’re in the timeline we’re in.
    0:47:23 I mean, you also say that you think this era features too little utopian thinking.
    0:47:25 I think you’re right about that.
    0:47:30 But I also know that utopian thinking gets a bad rap.
    0:47:36 But what do you really think of as the practical value of a little utopian thinking?
    0:47:37 What do you mean by that?
    0:47:39 I think you should think about what future you’re trying to create.
    0:47:41 And that helps you work backwards.
    0:47:45 I think that too often we settle for parceling out the present.
    0:47:50 We think about the present and we think about making it a little gentler, a little kinder, a little fairer.
    0:47:53 I think we can think about futures that are quite different.
    0:47:57 And we don’t do that enough for a lot of different reasons.
    0:47:59 The right tends to be relentlessly nostalgic.
    0:48:04 And the left tends to be very just focused on the injustices of the past.
    0:48:07 And in that way, I tend to be more on the left with that.
    0:48:10 And I think there has been a lot of injustice and we should try to do a lot about it.
    0:48:13 But thinking about ways the future could be different I think is important.
    0:48:19 I think for a long time for American liberals, the sort of hoped-for future is Denmark or France.
    0:48:23 It’s a future with a European-level welfare state.
    0:48:25 That has been the grail of where they’re trying to get to.
    0:48:27 And that’s fine.
    0:48:30 That would be better in a bunch of different ways from my perspective.
    0:48:32 But Europe is a basket case.
    0:48:33 Productivity is really low.
    0:48:35 It’s poor compared to us at this point.
    0:48:40 Canada, which a lot of us think of as a much more humane place.
    0:48:45 If Canada were a state, it would be like Alabama level in terms of income per capita.
    0:48:52 You really do create wealth and dynamism differently in America.
    0:48:55 And I think we need a vision of the future that, yes, is kinder.
    0:48:56 Yes, is fairer.
    0:48:57 Yes, is more humane.
    0:48:58 Yes, is more compassionate.
    0:49:02 But also imagines, like, amazing things happening.
    0:49:06 I don’t think that you have to give up on good ideas from Europe or Canada.
    0:49:09 But that shouldn’t be all of it, right?
    0:49:11 We can do better than Denmark.
    0:49:13 We can do better than France.
    0:49:14 We can do better than the UK.
    0:49:15 I’m going to leave it right there.
    0:49:19 Once again, the book is called Abundance.
    0:49:23 Ezra Klein, my friend and former employer, thanks for coming in.
    0:49:25 It was great to be back with you here, Sean.
    0:49:26 Really, really enjoyed it.
    0:49:35 All right, I hope you enjoyed this episode.
    0:49:42 You know, whatever comes of this call for a politics of abundance, I do think there is
    0:49:49 enormous value in trying to articulate a new vision forward or a new framework for liberals
    0:49:56 in particular, because we are stuck right now, stuck in our old categories, stuck in our
    0:49:56 old models.
    0:50:03 And even though there’s a lot of angst and uncertainty right now, there’s also, for
    0:50:08 the same reasons, a lot of potential for something fresh and maybe even hopeful.
    0:50:12 And I got a lot of that in this conversation.
    0:50:16 But as always, we want to know what you think.
    0:50:24 So drop us a line at thegrayareaatvox.com or leave us a message on our new voicemail line
    0:50:28 at 1-800-214-5749.
    0:50:34 And once you’re finished with that, please go ahead and rate and review and subscribe to
    0:50:35 the pod.
    0:50:41 This episode was produced by Beth Morrissey, edited by Jorge Just, engineered by Christian
    0:50:46 Ayala, fact check by Melissa Hirsch, and Alex Overington wrote our theme music.
    0:50:50 New episodes of The Gray Area drop on Mondays.
    0:50:51 Listen and subscribe.
    0:50:54 This show is part of Vox.
    0:50:57 Support Vox’s journalism by joining our membership program today.
    0:51:00 Go to vox.com slash members to sign up.
    0:51:03 And if you decide to sign up because of this show, let us know.
    0:51:04 Thank you.

    American government has a speed issue. Both parties are slow to solve problems. Slow to build new things. Slow to make any change at all.

    Until now. The Trump administration is pushing through sweeping changes as fast as possible, completely changing the dynamic. And the Democrats? They’ve been slow to respond. Slow to mount a defense. Slow to change tactics. Still.

    Ezra Klein — writer, co-founder of Vox, and host of The Ezra Klein Show for the New York Times — would like to offer a course correction.

    In a new book, Abundance, Klein and co-author Derek Thompson argue that the way to make a better, brighter future is to build and invent the things we need. To do that, liberals need to push past hyper-coalitional and bureaucratic ways of getting things done.

    In this episode, Ezra speaks with Sean about the policy decisions that have rendered government inert and how we can make it easier to build the things we want and need.

    Host: Sean Illing (@SeanIlling)

    Guest: Ezra Klein, co-author of Abundance and host of The Ezra Klein Show

    Learn more about your ad choices. Visit podcastchoices.com/adchoices