Author: The Gray Area with Sean Illing

  • Halfway there: a philosopher’s guide to midlife crises

    AI transcript
    0:00:03 Avoiding your unfinished home projects because you’re not sure where to start?
    0:00:07 Thumbtack knows homes, so you don’t have to.
    0:00:10 Don’t know the difference between matte paint finish and satin?
    0:00:13 Or what that clunking sound from your dryer is?
    0:00:16 With Thumbtack, you don’t have to be a home pro.
    0:00:17 You just have to hire one.
    0:00:23 You can hire top-rated pros, see price estimates, and read reviews all on the app.
    0:00:24 Download today.
    0:00:29 We all have bad days, and sometimes bad weeks, and maybe even bad years.
    0:00:33 But the good news is we don’t have to figure out life all alone.
    0:00:37 I’m comedian Chris Duffy, host of TED’s How to Be a Better Human podcast.
    0:00:44 And our show is about the little ways that you can improve your life, actual practical tips that you can put into place that will make your day-to-day better.
    0:00:48 Whether it is setting boundaries at work or rethinking how you clean your house,
    0:00:54 each episode has conversations with experts who share tips on how to navigate life’s ups and downs.
    0:00:57 Find How to Be a Better Human wherever you’re listening to this.
    0:01:00 Why do we do philosophy?
    0:01:02 What is it even for?
    0:01:09 I can almost hear the water bong bubbling as I ask that question.
    0:01:12 But seriously, what is it for?
    0:01:16 It’s an old question.
    0:01:20 One of the oldest in philosophy.
    0:01:23 And the answer is not obvious.
    0:01:28 Some people think the point of philosophy is to make the world make sense.
    0:01:31 To explain how everything hangs together.
    0:01:38 For others, philosophy is useless if it doesn’t tell us how to live.
    0:01:43 If you’re in the latter camp, and I basically am,
    0:01:49 then it’s fair to say that you think of philosophy as a form of self-help.
    0:01:56 Philosophy should have a lot to offer us when we’re anxious or depressed,
    0:02:02 or in one of those uneasy periods of life where you start wondering who you are and what you’re doing and where you’re going.
    0:02:05 What’s otherwise known as a midlife crisis.
    0:02:12 I’m Sean Illing, and this is The Gray Area.
    0:02:24 Today’s guest is Kieran Setiya.
    0:02:29 He’s a philosopher at MIT and the author of several books.
    0:02:33 Most recently, Life Is Hard: How Philosophy Can Help Us Find Our Way,
    0:02:36 and Midlife: A Philosophical Guide.
    0:02:43 Setiya is that rare academic philosopher who cares about writing for a general audience.
    0:02:46 And more importantly, knows how to.
    0:02:50 His book about midlife crises is a great example of this.
    0:02:55 It’s intimate, accessible, and full of genuine insight.
    0:02:59 I knew pretty quickly that I wanted to get him on the show.
    0:03:02 And now, he’s here to talk about it.
    0:03:10 Kieran Setiya, welcome to The Gray Area.
    0:03:12 Thanks for having me. It’s good to be here.
    0:03:17 Could you just maybe tell the audience a little bit about the kind of work you do as a philosopher?
    0:03:18 What do you study?
    0:03:22 What are the big questions driving your work?
    0:03:25 So I work on ethics.
    0:03:28 So really, the big question is, how should we live our lives?
    0:03:32 And that question leads me into lots of related areas.
    0:03:35 So I’m very interested in the nature of human action, the nature of agency,
    0:03:39 in the nature and possibility of knowledge in general,
    0:03:45 and ethical knowledge in particular, and questions about human nature and what kinds of beings we are.
    0:03:48 Yeah, tiny questions like, how should we live our lives?
    0:03:49 Exactly, exactly.
    0:03:58 Well, you are a professional philosopher, so let me just kick this off with a layup question.
    0:04:00 Okay.
    0:04:05 What do you think philosophy is really for? What’s the point of philosophizing?
    0:04:09 Oh, good, yeah. No, it’s good to start with the easy ones. Thank you.
    0:04:16 I mean, I think the best answer to the question of what philosophy is for is related to the question of what philosophy is.
    0:04:19 And I think the best way to approach that is historical.
    0:04:24 So when I try to explain what philosophy is, I usually start with ancient Greek philosophy
    0:04:29 and the idea that philosophy encompasses all systematic inquiry into the world,
    0:04:34 our relationship to the world, and how to orient ourselves to and conduct ourselves in the world.
    0:04:40 And then what happens through the course of history is that particular disciplines sort of peel off from philosophy.
    0:04:47 So you get psychology and economics in the 19th century, you get linguistics, computer science in the 20th century.
    0:04:59 And what philosophy is left with are the big, demanding, difficult questions for which we don’t have any accepted results other than think really hard about it.
    0:05:02 The negative way of putting it is we’re left with the detritus of inquiry.
    0:05:12 The positive way to put it is we ask the kinds of inevitable, essential, important questions that the other disciplines can’t answer and don’t really even know how to ask.
    0:05:17 Yeah, I sometimes get asked why I ended up choosing philosophy as a major.
    0:05:22 And I don’t really have a great answer other than I always just kind of love the questions more than the answers.
    0:05:25 And so that’s just where I naturally landed.
    0:05:36 I mean, I think you have to have a certain kind of patience with questions to really sustain work in philosophy because the progress is fitful and uncertain.
    0:05:41 And a lot of what you’re doing is trying to figure out what the questions are and how to make them tractable.
    0:06:00 And I think some of the questions philosophy asks are ones you could probably go your whole life without really spending a lot of time confronting.
    0:06:19 Others, like how should I live my life, are ones that people are implicitly and often explicitly asking every day. They’re really unavoidable, but they still have this philosophical character: we don’t have an off-the-shelf method for answering them other than to try to think it through using every tool you can lay your hands on.
    0:06:24 Yeah, I’ve always found this debate in the history of philosophy fascinating.
    0:06:41 You know, does philosophy exist to explain the world or is it supposed to tell us what to do, how to live, or is it just another human activity like painting that really only justifies itself by how much beauty and meaning it adds to life?
    0:06:46 It sounds like you kind of land on, no, it should tell us how to live.
    0:06:47 Otherwise, what are we doing here?
    0:06:50 I definitely think that’s one of the goals of philosophy.
    0:06:58 I think, just going back to the history again, one thing that happens is you have these pre-Socratic philosophers before Socrates in the 5th century BCE.
    0:07:01 They’re mostly metaphysicians.
    0:07:03 They’re interested in the nature of reality and how the world works.
    0:07:12 Socrates then comes along and says, guys, the urgent question is how should we live our lives, and he starts interrogating the ethos and the ethics of the Athens of his time.
    0:07:18 And then Plato and Aristotle and some other philosophers who follow him say, actually, we’ve got to do both.
    0:07:24 Like the only way to really answer the question how to live is through more abstruse metaphysical reflections.
    0:07:31 So the idea that philosophy should answer the question how to live and try and guide us in our lives is there from pretty early on and is one that we shouldn’t give up.
    0:07:36 If I had to place you in some philosophical camp, I’m not sure which one it would be.
    0:07:44 I mean, is there a school or a tradition or a particular philosopher that you identify with?
    0:07:51 Is there a label like Stoic or Existentialist or Aristotelian that you’re comfortable with?
    0:07:55 I do have a philosophical hero and it’s the novelist philosopher Iris Murdoch.
    0:08:09 She’s my hero in part because she is the person who really developed the idea that accurate description of reality is a demanding, complicated, arduous moral task.
    0:08:15 And that it’s a much bigger part of what ethical reflection looks like than typical philosophers appreciate.
    0:08:23 And she has a book called The Sovereignty of Good, which is one of the few philosophy books that I go back to for solace.
    0:08:26 Like when I’m feeling down, there’s not a lot of philosophy I think, this will cheer me up.
    0:08:33 But with Murdoch, I find it so inspiring, the ambition of what she’s doing in this very short, dense book.
    0:08:38 If I was going to declare allegiance to a philosopher, it would be kind of continuing the spirit of her work.
    0:08:45 Well, the essay you wrote that caught my attention posed this question, is philosophy self-help?
    0:08:52 Now, self-help is a strange term we can get into a little bit, but is that how you conceive of philosophy: as essentially a tool of self-help?
    0:08:54 A lot of it depends on the connotations of self-help.
    0:09:08 So I think self-help as a distinctive genre now is often associated with a kind of narrow concern for one’s own happiness as opposed to how to live a good life in general, which involves how you relate to other people.
    0:09:14 And it’s a kind of particular literary genre that is a little bit at odds with how philosophers tend to operate.
    0:09:24 There’s a sense in which asking the question how to live in a way that’s practical surely ought to count as a project of guiding our lives and helping us to live better.
    0:09:32 But it doesn’t fit neatly with how self-help is understood in this sort of contemporary, narrower, generic sense.
    0:09:47 So I think the thing that’s easier to define pithily is the literary genre, which has a kind of definite origin.
    0:09:59 So historians who write about the literary and cultural genre of self-help often trace it back to the amazingly named Samuel Smiles, the first self-help guru in 1859.
    0:10:15 And he wrote this book, Self-Help, and he was a kind of social activist, but he had the idea that a book aimed at a general audience telling people how to change their own lives to flourish in the world was a good thing to do.
    0:10:28 And that’s the sort of self-help idea: a genre of writing that tells you how you can change yourself, and that it’s on you to flourish in the world.
    0:10:34 That descends from Smiles and then really kind of explodes through the 19th, 20th, 21st centuries.
    0:10:40 So that’s one way to define it, is it’s a kind of literary genre that has some pitfalls to it.
    0:10:43 I mean, one of the pitfalls is it’s very individualistic.
    0:10:52 It’s very much focused on the idea that you can change yourself, and less focused on the idea of changing society or structures in which it’s difficult to flourish.
    0:10:58 And also, it tends to be very focused on individual happiness rather than treating other people well.
    0:11:07 On another understanding of self-help, it’s just reflective thinking about how to live a better life, and then it looks like really it is just philosophical ethics.
    0:11:17 But if you go back to the beginning, at least in the Western tradition, philosophy was essentially understood as a form of self-help, right?
    0:11:19 I mean, Plato definitely thought of it that way.
    0:11:28 Yeah, I mean, certainly in the sense of there’s the project of making our lives better that drives Socrates.
    0:11:36 And then when Plato writes the Republic, there’s a lot of metaphysics, there’s the nature of the forms, there’s the parable of the cave and the idea that we don’t really know true reality.
    0:11:41 There’s all this metaphysical and epistemological stuff to do with knowledge.
    0:11:49 But the guiding question is, how should I as an individual live, what would flourishing be, and how should the state be organized?
    0:11:53 So in that sense, it’s self-help from the beginning.
    0:12:00 The ways in which I think it differs from contemporary self-help have partly to do with the idea that the goal there is not just that you feel happy.
    0:12:06 It is about living a flourishing life, and that’s understood in terms that, a little anachronistically,
    0:12:08 we’d think of as partly moral.
    0:12:11 It’s about justice and the treatment of other people.
    0:12:14 You don’t get a lot of self-help books today.
    0:12:21 Things that are about how to be a more just person or what your moral obligations are typically don’t get classified as self-help.
    0:12:24 It’s much more about your own happiness and feeling good.
    0:12:34 And the other thing that’s a difference, and a challenge, is that Plato is writing in an often esoteric way. The work is difficult, it’s demanding.
    0:12:39 So genre-wise, it’s not quite as outward-facing as contemporary self-help.
    0:12:50 And there’s a question, should we expect that philosophical reflection of this dense, theoretical, ambitious kind into the nature of reality is going to make us feel happy?
    0:12:54 Yeah, Socrates may have been wrong that the unexamined life isn’t worth living.
    0:13:05 No, and there’s a sense in which certain kinds of philosophical reflection, this sort of more theoretical reflection, it doesn’t seem essential to me to living a good life.
    0:13:08 There’s lots of people who are friends of mine who are living very good lives.
    0:13:18 They do engage in ethical reflection, but it doesn’t look a lot like the abstract theory construction that one might associate with academic philosophy.
    0:13:22 Well, you wrote a book called Life is Hard.
    0:13:29 Not that your philosophy of life can be summed up in three words, but if you had to sum it up in three words, is that it?
    0:13:44 I think it’s partly about the way in which philosophers like Plato in the Republic and Aristotle in his Ethics, these ancient Greek philosophers, tend to think about the question how to live in terms of the ideal life.
    0:14:01 And that can be both unrealistic and in a certain way self-punitive. I mean, often the right way to approach the ideal life is to think that’s not available. I shouldn’t beat myself up about the fact that that’s not available.
    0:14:19 Really, what living well is about, or living as well as I can is about, is dealing with the ways in which life is difficult. And I think when you think about the conversations you have with friends in which you start to worry about how to live, often they’re, “Should I quit my job which I don’t like?” or “I’m having trouble with my parents or my kids.”
    0:14:26 It’s problems that generate the urgency of the question how to live, or the sense that life is absurd, or the world is going to hell.
    0:14:38 And so I do think the right method for doing moral philosophy or ethics is to start with the ways in which life is hard, and think about how philosophy can tackle them.
    0:14:49 When we get back from the break, why do we have midlife crises?
    0:14:52 And can philosophy help guide us through them?
    0:14:53 Stay with us.
    0:15:03 How do you navigate an entire career change after losing everything?
    0:15:11 This week on Net Worth and Chill, I’m chatting with Lewis Howes, the host of the School of Greatness podcast, with over 500 million downloads.
    0:15:15 Lewis went from rising professional athlete to broke after a career-ending injury.
    0:15:18 I believe self-doubt is the killer of dreams.
    0:15:25 When we doubt ourselves, it doesn’t matter how talented or smart you are, you’re going to limit yourself on what you’re able to do.
    0:15:26 But that was just the beginning of his story.
    0:15:35 It’s an episode packed with raw honesty and failure, practical advice for career pivots, and the financial wisdom that comes from losing it all and rebuilding it.
    0:15:40 Listen wherever you get your podcasts or watch on youtube.com/yourrichbff.
    0:15:42 Hi folks, this is Kara Swisher.
    0:15:52 This week on my podcast, on with Kara Swisher, I’m speaking with philanthropist, businesswoman, and women’s rights advocate, Melinda French Gates, on how she’s refocused after her divorce from tech mogul Bill Gates.
    0:16:00 We talk about why investing in women in politics and business is playing the long and smart game, and we discuss her new memoir, The Next Day.
    0:16:05 My mom used to say to me as I was growing up, “Set your own agenda or someone else will.”
    0:16:10 I know society is better off when women are in positions of power.
    0:16:21 I really enjoy this conversation because it’s an interesting moment where women in technology are having much more of an important impact than men who are still moving fast and breaking things.
    0:16:24 Have a listen to “On with Kara Swisher” wherever you get your podcasts.
    0:16:33 We borrow money from Chinese peasants to buy the things those Chinese peasants manufacture.
    0:16:36 That is not a recipe for economic prosperity.
    0:16:43 Vice President J.D. Vance, defending the Trump administration’s tariffs on China, hit China squarely below the belt.
    0:16:46 And China hit back with memes. Cue music.
    0:16:58 Americans on assembly lines, at sewing machines, in fields, eating chips, drinking coke, looking ill-prepared for factory work, to put it politely, which the memes are not.
    0:17:04 China’s argument since this trade war began is that America cannot win it.
    0:17:07 China is tougher, more resilient, and better prepared.
    0:17:15 On Today Explained, as this trade war escalates, we ask, “What if that’s true?”
    0:17:34 Today Explained, every weekday.
    0:17:43 One of the things about life that appears to be hard is middle age.
    0:17:47 And you wrote a book about midlife crises.
    0:17:50 How do you define a midlife crisis?
    0:17:55 Actually, kind of like the self-help movement, midlife crisis is one of those funny cultural
    0:17:58 phenomena that has a particular date of origin.
    0:18:04 So, in 1965, this Canadian psychoanalyst, Elliott Jaques, writes a paper, “Death and the Midlife Crisis.”
    0:18:05 And that’s the origin of the phrase.
    0:18:10 And he is looking at patients, and also, in fact, the lives of creative artists, who experience a kind of
    0:18:15 midlife creative crisis. So, it’s people in their late 30s.
    0:18:21 I think the stereotype of the midlife crisis is that it’s a sort of paralyzing sense of uncertainty
    0:18:26 and being unmoored. Nowadays, I think there’s been a kind of shift in the way people think
    0:18:32 about the midlife crisis, that people’s life satisfaction takes the form of a kind of gentle
    0:18:39 U-shape. That basically, even if it’s not a crisis, people tend to be at their lowest ebb in their 40s.
    0:18:44 And this is men and women. It’s true around the world to differing degrees, but it’s pretty pervasive.
    0:18:49 So, I think nowadays, often when people like me talk about the midlife crisis, what they really
    0:18:55 have in mind is more like a midlife malaise. It may not reach the crisis level, but there seems to be
    0:19:02 something distinctively challenging about finding meaning and orientation in this midlife period in
    0:19:03 your 40s.
    0:19:09 Well, I’m 42. I just turned 42. Sounds like I’m right in the middle of my midlife crisis.
    0:19:14 I think you’re, you know, not everyone has it, but you’re predicted to hit it, yes.
    0:19:22 Yikes. Well, what is it about midlife that generates all this anxiety and disturbing reflection?
    0:19:26 I think, really, there are many midlife crises. It’s not just one thing. I think some of them
    0:19:31 are looking to the past. So, there’s regret. There’s the sense that your options have narrowed. So,
    0:19:37 whatever space of possibilities might have seemed open to you earlier, whatever choices you’ve made,
    0:19:42 you’re at a point where there are many kinds of lives that might have been really attractive to you,
    0:19:48 that it’s now clear to you and in a vivid sort of material way that you can’t live. So, there’s missing
    0:19:52 out. There’s also regret in the sense of things have gone wrong in your life, you’ve made mistakes,
    0:19:58 bad things have happened. And now the project is, how do I live the rest of my life in this imperfect
    0:20:02 circumstance? The dream life is off the table for most of us. And then I think there’s also things
    0:20:09 that are more present-focused. So, often people have a sense of the daily grind being empty. And that’s
    0:20:15 partly to do with so much of it being occupied by things that need to be done, rather than things that
    0:20:22 make life seem positively valuable. It’s just one thing after another. And then death starts to look like
    0:20:29 it’s at a distance that you can measure in terms you kind of really palpably understand. Like, you have a
    0:20:33 sense of what a decade is like, and there’s only three or four left at best.
    0:20:41 The thing about being young is the future is pure potential. Ahead of you is nothing but freedom
    0:20:51 and choices. But as you get older, life has a way of shrinking. Responsibilities pile up. You get trapped in
    0:21:02 the consequences of the decisions you’ve made. And the feeling of freedom dwindles. That’s a very difficult thing to wrestle with.
    0:21:07 I think that’s exactly right. I mean, part of what’s philosophically puzzling about it is
    0:21:13 that it’s not news. That in a way, whatever your sense of the space of options was when you were, say,
    0:21:17 20, you knew you weren’t going to get to do all of the things.
    0:21:18 Yeah.
    0:21:22 So there’s a sense in which it’s kind of puzzling that when, at 40, even if things go well,
    0:21:26 you didn’t get to do all of the things. That’s not news. You knew that wasn’t going to happen.
    0:21:33 What it suggests, and I think this is a kind of philosophical insight, is that there is a profound
    0:21:41 difference between knowing that things might go a certain way, well or badly, and knowing in concrete detail
    0:21:47 how they went well or badly. And that’s something that I think we learn from this transition that we
    0:21:52 make in midlife. The kind of pain of just discovering the particular ways in which life isn’t everything
    0:21:56 you thought it might be, even though you knew all along that it couldn’t be everything you hoped it
    0:22:02 might be. That suggests that there’s a certain aspect of our emotional relationship to life that
    0:22:07 is missed out if you just ask in abstract terms what will be better or worse, what would make a good
    0:22:12 life? And so I think philosophy needs to kind of incorporate that kind of particularity, that kind
    0:22:18 of engagement with the texture of life in a way that philosophers don’t always do. I mean, I think
    0:22:22 there’s another thing philosophy can say here that’s more constructive, which is part of the sense of
    0:22:28 missing out has to do with what philosophers call incommensurable values. The idea that, you know,
    0:22:34 if you’re choosing between $50 and $100, you take the $100 and you don’t have a moment’s regret. But if you’re
    0:22:40 choosing between going to a concert or staying home and spending time with your kid, either way,
    0:22:45 you’re going to miss out on something that is sort of irreplaceable. And that’s pretty low stakes.
    0:22:50 But one of the things we experience in midlife is all the kinds of lives we don’t get to live
    0:22:56 that are different from our life, and there’s no real compensation for that. And that can be very painful.
    0:23:01 On the other hand, I think it’s useful to see the flip side of that, which is the only way you
    0:23:06 could avoid that kind of missing out, that sense that there’s all kinds of things in life that you’ll
    0:23:12 never get to have. The only way you could avoid that is if the world was suddenly totally impoverished
    0:23:17 of variety, or you were so monomaniacal, you just didn’t care about anything but money, for instance.
    0:23:21 And you don’t really want that. So there’s a way in which this sense of missing out,
    0:23:25 the sense that there’s so much in the world we’ll never be able to experience,
    0:23:30 is a manifestation of something we really shouldn’t regret and in fact should cherish. Namely,
    0:23:33 the evaluative richness of the world, the kind of diversity of good things.
    0:23:36 And there’s a kind of consolation in that, I think.
    0:23:44 So is that to say that FOMO is always and everywhere a philosophical error? Or is it actually valid in
    0:23:44 some ways?
    0:23:49 I think it’s a philosophical insight in a way. I think this kind of existential FOMO is part of
    0:23:53 what we have in midlife or sometimes earlier, sometimes later. But I think that sense that
    0:24:00 it really is true that we’re missing out on things and that there’s no substitute for them. That’s really
    0:24:05 true. The kind of rejoinder to FOMO is, well, imagine there weren’t any parties you didn’t get to go
    0:24:11 to. That wouldn’t be good either, right? You want there to be a variety of things that are actually
    0:24:16 worth doing and attractive. We want that kind of richness in the world, even though one of the
    0:24:19 inevitable consequences of it is that we don’t get to have all of the things.
    0:24:30 One of the arguments you make is how easily we can delude ourselves when we start pining for the roads
    0:24:37 not traveled in our lives. And, you know, you think, what if I really went for it? What if I
    0:24:43 tried to become a novelist or a musician or join that commune or, I don’t know, pursued whatever
    0:24:51 life fantasy you had when you were younger? But if you take that seriously and consider what it
    0:24:59 really means, you might not like it. Because the things you value the most in your life,
    0:25:07 like, say, your children, well, they wouldn’t exist if you had zigged instead of zagged 15 or 20 years
    0:25:12 ago. And that’s what it means to have lived that alternative life. And I guess it’s helpful to remember
    0:25:16 that sometimes, but it’s easy to forget it because you just, you’re imagining what you don’t have.
    0:25:21 This is, again, about the kind of danger of abstraction that, in a way, philosophy
    0:25:25 can lead us towards this kind of abstraction, but it can also tell us what’s going wrong with it. So,
    0:25:30 the thought, I could have had a better life, things could have gone better for me, it’s almost always
    0:25:35 tempting and true. But when you think through in concrete particularity what would have happened if
    0:25:40 your failed marriage had not happened, often the answer is, well, I would never have had my kid,
    0:25:45 or I would never have met these people. And while you might think, yeah, but I would have had some
    0:25:51 other unspecifiable friends who would have been great, and some other unspecifiable kid who would
    0:25:57 have been great. I think we rightly don’t evaluate our lives just in terms of those kinds of abstract
    0:26:02 possibilities, but in terms of attachments to particulars. And so, if you just ask yourself,
    0:26:09 could my life have been better? You’re kind of throwing away one of the basic sources of consolation,
    0:26:15 rational consolation, I think, which is attachment to the particularity of the good things, the good
    0:26:20 enough things in your own life, even if you acknowledge that they’re not perfect and that
    0:26:24 there are other things that could have been, in a certain way, better.
    0:26:31 This is why I always loved Nietzsche’s idea of amor fati, this notion that you have to say yes to
    0:26:37 everything you’ve done and experienced. Because all the good and bad in your life is part of this chain of
    0:26:43 events. And if you alter any of those events at any point in the chain, you also alter everything
    0:26:47 else that followed in unimaginable ways.
    0:26:53 I mean, I do think there’s a profound source of affirmation there. I think my hesitation is just
    0:26:58 that it’s not that all the mistakes that we make or the terrible things that happen to us are redeemed
    0:27:04 by attachment to the particulars of our lives. It’s that there’s always this counterweight. At the very
    0:27:11 worst, we’re going to end up with some kind of ambivalence, and that’s better than the situation
    0:27:17 of mere unmitigated regret. But it’s not quite the full embrace of life that a certain kind of
    0:27:27 philosophical consolation might have given us. What precipitated your midlife crisis and what role did
    0:27:31 your philosophical education play in helping you through it?
    0:27:36 My philosophical education probably created it, and then it did help me to work through it. What
    0:27:42 happened was that I loved philosophy. I still love philosophy, but I loved it as a teenager and a
    0:27:47 college student in a way that was not professionalized. I mean, I just loved these
    0:27:51 questions. I love thinking about them. I love talking about them. And then I wanted to keep doing it. And the
    0:27:58 way academia works, there’s a tendency for love of a certain kind of intellectual engagement to get
    0:28:06 channeled into getting into a grad program, finishing your PhD, getting a job, getting tenure, getting
    0:28:12 promoted, publishing a book, getting this article into this fancy journal. And what happens, and this is
    0:28:19 sort of the diagnosis I came to, is that something that is not really directed at particular achievements
    0:28:25 becomes transformed into something that is. And then what you find yourself valuing is these achievements
    0:28:29 one after another. And you finish them, and then you’re like, “Well, what next?” And what you lose is
    0:28:34 the sense of the love of just doing philosophy and thinking about it and engaging with it.
    0:28:38 At the time, what happened was I thought, “I’ve got everything I want, everything I’ve worked
    0:28:43 for for 20 years. But there’s something deeply hollow about the idea that I’m just going to write
    0:28:46 another article, and then another article, teach another class, and then another class.”
    0:28:52 There’s an emptiness to this. And then I thought, “Well, that is very philosophically puzzling. Like,
    0:28:58 how can it be that I’m doing things that I think are worth doing, and I’m incredibly fortunate to be
    0:29:03 able to do them, and yet I still think there’s something deeply wrong with my life?” And I thought,
    0:29:08 “Well, maybe I can work on that philosophical problem, since that seems to be my problem.”
    0:29:14 And then, you know, the judo move of using philosophy to solve my problem with being a philosopher.
    0:29:19 And I think when you put it that way, I think what I’m describing was in some ways idiosyncratic to me,
    0:29:24 but I think it is one of the canonical forms of midlife crisis. It’s the type-A project-driven person
    0:29:28 who is achieving quite a lot of the things they set out to achieve, and then has this sense of
    0:29:34 hollowness, and what next? Just more of that forever, until I die? And that was the shape my
    0:29:36 particular midlife crisis took.
    0:29:38 And how has philosophy helped?
    0:29:44 In terms of the midlife crisis, I think the biggest thing was the shift from valuing what I
    0:29:49 call telic activities, from the Greek telos or end, where you’re aiming at an endpoint, a project or
    0:29:54 achievement, which has this problem that the thing you want is always in the future, and then the moment
    0:29:59 you’ve got it, it’s over, and it’s in the past. And what you’re doing is basically pursuing something
    0:30:05 with a view to getting it out of your life, like you’re checking off the box. And I think not all
    0:30:11 activities are like that. So having a kid, that’s a thing you can finish doing. But then parenting is
    0:30:17 just this ongoing process. Or writing a book, you get it done. But thinking about a topic, that just goes
    0:30:22 on in this what I call an atelic way. And with atelic activities, where they’re not directed at a
    0:30:27 particular accomplishment or outcome, you don’t have the sense that you’re deferring the thing you really
    0:30:32 value to the future. If you want to be thinking about a topic or talking about philosophy as we are right
    0:30:37 now, it’s happening right now. It’s not like we’re putting it off to the future or trying to, you know,
    0:30:43 finish something. And so insofar as you value the atelic process of what you’re doing, that can mitigate
    0:30:50 this sense of emptiness. I think the way in which philosophy can help us grapple with difficulties
    0:30:56 like failure or loss is often not by saying, I’ve got this grand theory, I’ll apply it to whatever’s
    0:31:03 happening in your life. It’s by saying, let’s try to describe what the problem of loss or grief is
    0:31:06 in a way that helps us to understand and come to terms with it.
    0:31:16 Yeah, you know, I have a decent philosophical education and I have all these ideas I’ve encountered over the
    0:31:27 years in my head. But, you know, often when real pain strikes, it is not always easy to find relief in ideas.
    0:31:37 Two of the hardest moments of my adult life were the sudden loss of my mother a few years ago and
    0:31:47 the unexpected loss of a baby last year. And I think like a lot of people, I did that thing where I felt
    0:31:53 victimized, you know? Like, the world is conspiring against me and, you know, you go through the anger of
    0:31:59 all that. But then you remind yourself that you’re not, in fact, uniquely unlucky that this happens to
    0:32:06 people every day and no one’s immune. Pain and loss are part of life as central to life as anything else.
    0:32:16 And philosophy can help with that awareness. And in that way, it can bring real peace. But it’s hard.
    0:32:17 It is very hard.
    0:32:23 Yeah, I’m so sorry to hear about both of those losses. That sounds incredibly hard. And I think
    0:32:28 what philosophy has to do is what human beings have to do faced with those kinds of difficulties,
    0:32:35 which is not switch too rapidly into what I call assurance advice mode, which is saying,
    0:32:41 it’s all going to be fine. Or, hey, here’s what you do. And those are things we do in personal
    0:32:46 interaction. But there are also philosophical versions of these approaches to the difficulties of life.
    0:32:51 There’s the kind of theodicy where philosophers argue that all is for the best. They’ve got some
    0:32:56 proof that although this seems bad, it’s going to work out well. Or they have some kind of theory
    0:33:01 where they say, my philosophical principle is this. I’ll just apply it to your situation. And those are
    0:33:06 rarely good philosophical tactics for dealing with the kind of difficulties you’re describing
    0:33:12 for reasons that are not unrelated to the fact that they’re rarely good interpersonal ways of approaching
    0:43:19 difficulty. So the fact that, person to person, the starting point is sitting with difficulty,
    0:43:25 acknowledging it, trying to take in what’s really happening, really describing the particularity of it,
    0:43:32 is connected with a kind of philosophical methodology that I have come to embrace. And this,
    0:33:38 it’s sort of a shift from thinking, well, philosophy is going to be about coming up with really cool arguments
    0:33:45 that prove you should think this or that to thinking there’s a real continuity between the literary
    0:33:52 and human description of phenomena like grief and philosophical reflection. Because often what
    0:33:59 philosophical reflection provides is less a proof that you should live this way and more concepts with
    0:34:05 which to articulate your experience and then structure and guide how you relate to reality. And seen that way,
    0:34:10 we can sort of understand how philosophy can operate as self-help when just saying, hey,
    0:34:16 here’s some philosopher’s argument, feels like it’s not making contact with the texture of the difficulties
    0:34:23 we’re dealing with.
    0:34:42 After one more short break, we finally get to the easy questions. Like, what’s the point of life? Stay with us.
    0:34:54 25 years ago, McDonald’s restaurants across the country were being robbed by a masked man who always
    0:34:57 entered through the roof and was always polite.
    0:35:02 He was a gentleman going so far as to use ma’am, sir.
    0:35:12 And I didn’t know whether to laugh or to be scared because, you know, you see in the movies, robberies are not like that.
    0:35:22 I’m Phoebe Judge. Listen to part one of The Roof Man right now on Criminal and listen to part two early by becoming a member of Criminal Plus.
    0:35:32 The regular season is in the rear view, and now it’s time for the games that matter the most.
    0:35:35 This is Kenny Beecham, and playoff basketball is finally here.
    0:35:39 On Small Ball, we’re diving deep into every series, every crunch-time finish,
    0:35:43 every coaching adjustment that can make or break a championship run.
    0:35:45 Who’s building for a 16-win marathon?
    0:35:48 Which superstar will cement their legacy?
    0:35:51 And which role player is about to become a household name?
    0:35:56 With so many fascinating first-round matchups, will the West be the bloodbath we anticipate?
    0:35:58 Will the East be as predictable as we think?
    0:36:00 Can the Celtics defend their title?
    0:36:04 Can Steph Curry, LeBron James, Kawhi Leonard push the young teams at the top?
    0:36:10 I’ll be bringing the expertise, the passion, and the genuine opinion you need for the most exciting time of the NBA calendar.
    0:36:14 Small Ball is your essential companion for the NBA postseason.
    0:36:18 Join me, Kenny Beecham, for new episodes of Small Ball throughout the playoffs.
    0:36:20 Don’t miss Small Ball with Kenny Beecham.
    0:36:22 New episodes dropping through the playoffs.
    0:36:24 Available on YouTube and wherever you get your podcasts.
    0:36:30 Looking for a political show that doesn’t scream from the extremes?
    0:36:34 Raging Moderates is now twice a week.
    0:36:35 What a thrill!
    0:36:37 Oh my God!
    0:36:39 Alert the media!
    0:36:45 Hosted by political strategist Jess Tarlov and myself, Scott Galloway.
    0:36:51 This is the show for those who are living somewhere between the center-left and the center-right.
    0:36:55 You can now find Raging Moderates on its own feed every Tuesday and Friday.
    0:36:55 That’s right.
    0:36:56 Twice a week.
    0:36:59 Exclusive interviews with sharp political minds
    0:37:00 you won’t hear anywhere else.
    0:37:02 Also, everyone that’s running for president.
    0:37:05 All of a sudden, everybody wants to know our viewpoint on things.
    0:37:07 In other words, put me on your pod so I can run for president.
    0:37:08 Anyways, twice a week.
    0:37:11 Please sign up on our distinct feed.
    0:37:13 Follow Raging Moderates wherever you get your podcasts.
    0:37:17 And on YouTube so you don’t miss an episode.
    0:37:19 So tune in!
    0:37:22 We’re not always right, but our hearts are in the right place.
    0:37:23 We’re more raging than moderate.
    0:37:40 There is an ethos in our culture that says happiness is the goal of life.
    0:37:44 So if you’re not happy, you must be failing in some sense.
    0:37:48 Is this something that you really do want to challenge head on?
    0:37:50 Yes, I do.
    0:37:54 I mean, I think philosophers often make this point in a way that may be a little bit unrelatable,
    0:37:57 which is with kooky thought experiments where they’ll say,
    0:37:59 you know, you think happiness is the goal of life?
    0:38:06 Well, let’s imagine someone who is suddenly deceived and plugged into a kind of matrix scenario
    0:38:10 where they’re fed a stream of fake experiences and they feel great and they’re super happy.
    0:38:14 But suppose they don’t interact with anyone ever again.
    0:38:18 They’re just plugged into this machine, and nothing they think they’re doing, or hardly any of it, are
    0:38:18 they really doing.
    0:38:20 And it’s all an illusion.
    0:38:23 Is that what you want for your loved ones?
    0:38:26 And the answer is for most of us, rightly, I think, no.
    0:38:31 But that person could be experiencing a state of mind of great happiness.
    0:38:36 I think what we should be aiming for is to live the way we should.
    0:38:38 And all of the things that matter in life come into that.
    0:38:42 Living the way you should is being responsive to all the kinds of reasons there are.
    0:38:46 Not just to worry about your own feelings, but about other people, the world around you,
    0:38:50 injustice in the world around you, the needs of other people.
    0:38:56 And so when we re-conceptualize the goal of self-help from just feeling happy to
    0:39:03 living a good enough life, living as well as we can, it starts to look much less narcissistic.
    0:39:08 And also, it looks like sometimes the answer to the question of what’s the best way I can
    0:39:12 live in this circumstance is, well, it’s going to involve a fair amount of unhappiness.
    0:39:19 And I think grief is one case that many of us have experienced in which it seems clear that a
    0:39:26 certain amount of sadness is not in opposition to living the way we should. The pain of
    0:39:30 grief is not, as it were, something that would be better if we just didn’t have it. It’s not like the
    0:39:36 ideal scenario would be one in which when people we love die, we just feel nothing. It’s part of
    0:39:40 something we deeply value, namely loving attachment to others, that we have to go through this.
    0:39:46 Again, that’s just a more concrete illustration of the contrast between happiness as a kind of
    0:39:53 positive state of mind and living well as responding to the real world we’re confronting
    0:39:55 in the kind of ways that it calls for.
    0:40:03 I think this is the main reason why I recoil at a lot of the new-agey self-actualization
    0:40:09 nonsense, because in the end, it is a lot of sublimated narcissism.
    0:40:15 And the message is that you help yourself. And of course, conveniently, you help the world
    0:40:22 by loving yourself and taking care of you. That’s the road to happiness and fulfillment.
    0:40:28 I’m not against loving yourself, to be clear. But I don’t think that’s the way to a meaningful,
    0:40:37 good life. The greatest gift of parenting and marriage, for me at least, has been caring about
    0:40:45 other people more than I care about myself. Spending less time in my own head, which I have always done,
    0:40:52 quite naturally. And just spending more time being present for other people, directing my attention
    0:40:58 towards other people. And too much of the self-help stuff seems to lead people in the opposite direction.
    0:41:02 It seems to lead them inward. And that’s a moral dead end.
    0:41:08 I think we’re exactly on the same page about that. I think there are dangers of narcissism in a certain
    0:41:13 kind of self-help tradition. And it’s not that happiness isn’t a good thing and doesn’t matter,
    0:41:18 but sometimes unhappiness is not a sign that there’s anything going wrong with you.
    0:41:24 It’s that your unhappiness registers something wrong in the world, and you’re right to be unhappy.
    0:41:29 So if you look at injustice in the world around you and you’re angry, and in that sense, really
    0:41:35 unhappy about it, the fix to that is not going to be, I think, changing you. It’s going to be trying
    0:41:40 to change the world. And if you can’t change the world, you’re going to find yourself in a situation
    0:41:46 where part of living well and responsively might well involve a certain feeling of unhappiness. And I
    0:41:51 think the consolation has to be not there’s some secret ticket to happiness. It’s my unhappiness about
    0:41:58 this is not a sign that I’m going wrong. It’s a sign of me facing reality in the way that I should.
    0:42:05 And there’s a recent fashion for a certain kind of Stoicism, a kind of neo-Stoic movement in
    0:42:10 contemporary self-help that has a philosophical pedigree. But one of the ways in which it risks
    0:42:15 going wrong is that there’s this sort of Stoic idea that you should let go of what you can’t control.
    0:42:21 And for the Stoics, this is backed by a picture of the cosmos in which the divine mind,
    0:42:25 Zeus, ensures that everything works out for the best. There’s this theodicy. And if you think
    0:42:30 everything works out for the best, I can see why you’d say, well, if you’re raging against something
    0:42:34 that seems bad, you’re just kind of missing the divine plan. But if you take away that backing,
    0:42:42 I think this Stoic advice to just accept what’s out of your control is often advice to not pay
    0:42:46 attention to reality and not respond to reality in the way it calls for. I think sometimes we have to and
    0:42:53 should feel grief or anger about things we can’t control. And that’s part of living well,
    0:42:55 even if it involves unhappiness.
    0:43:04 I’d say maybe the most common platitude you find in the self-help, self-improvement space is some
    0:43:11 version of what people call the law of attraction. You know, this idea that positive things happen to
    0:43:15 positive people. You manifest the things you want. So if you just assume the right attitude,
    0:43:21 the right posture, you will be happy. Do you think this sort of thing is wise or helpful?
    0:43:26 I mean, there’s a kind of empirical question of which I don’t quite know how to judge about what
    0:43:32 the likely effects are of being more or less upbeat. It’s not that I could say, hey, my more downbeat
    0:43:36 approach has paid off in spades. I mean, I find the world difficult to cope with. And I think this idea
    0:43:41 of positive thinking, like, look at the positive, don’t dwell on the negative, the risk is partly that
    0:43:47 we won’t have the acknowledgement of difficulty that connects us to others. And in that way, it is a kind of
    0:43:54 isolation and drawing away from other people. I think being willing to dwell on the negative
    0:44:00 is a condition of certain kinds of supportive intimacy. And, you know, I think we lose that
    0:44:02 if we just say, hey, be positive, get over it.
    0:44:09 It’s not that I think it’s good to be negative or that it’s not healthy to maintain a positive attitude.
    0:44:16 I mean, sure. Yeah. Hell yeah, actually. Why not? The issue is that taken too far,
    0:44:24 it can make you become a little blind to the tragic dimension of life. You start seeing people as
    0:44:31 responsible for their suffering or their happiness in a way that isn’t really true. And more importantly,
    0:44:38 isn’t compassionate. Both parts of that seem right to me. I think there’s a kind of honesty and realism
    0:44:43 that you risk losing if you take that kind of positive thinking approach. I think one of the kinds of
    0:44:49 moments of bonding we have with other people is sharing difficulties. That’s a kind of intimacy that
    0:44:55 is unavailable if you refuse to take in the negative. And it can be punitive as well. It can be a way of
    0:44:59 saying to people who are dealing with difficult things that the problem is with them and their
    0:45:04 attitude. Often the compassionate and just and honest response is that the problem is with the
    0:45:07 world in which they’re dealing and that we should change the world around them.
    0:45:14 That is well said. And again, I don’t think anyone should wake up every day and spend the first eight
    0:45:21 hours ruminating on all the terrible shit in the world. I just mean living purely or mostly for
    0:45:29 individual happiness can blind us to other moral pursuits that require more sensitivity to the
    0:45:30 suffering of other people.
    0:45:36 Yeah. No, I totally agree with that. In Life is Hard, there’s a chapter on infirmity where I talk
    0:45:42 about my own experience with chronic pain. Having written about it, people will contact me and say,
    0:45:48 “Oh, have you tried this?” or “I’m experiencing something similar.” And there is this solidarity and community
    0:45:53 that comes out of sharing difficulty. And I think that is a really profound and important thing.
    0:45:57 I don’t think it’s inevitable that confronting suffering in your own life will be a source of
    0:46:04 compassion, but I think it can be. I mean, I remember when I was first facing this diagnosis that was
    0:46:10 chronic, it’s not going away. I remember sitting outside the clinic, looking at people walk by with
    0:46:15 this sense of incredible bitterness, thinking, “You people, you don’t know how good you have it. You’re
    0:46:20 walking past pain-free.” And then there was a beat and I thought, “I have absolutely no idea what’s
    0:46:24 happening with any of these people. They could be going through much worse things, worse pain,
    0:46:30 grief, loss. What am I thinking here? What am I talking about?” So that moment where you flip from
    0:46:36 the kind of self-pity of thinking, “Other people don’t realize how hard I have it,” to saying,
    0:46:42 “Yes, and by the same token, I am often unaware of and insensitive to the difficulties other people
    0:46:46 have.” I don’t think it’s inevitable that thinking about your own suffering will lead you in that
    0:46:53 direction, but I think there are moments where it can pivot from recognizing one’s own invisible
    0:47:00 difficulties to acknowledgement of and reaching out towards the invisible difficulties of other people.
    0:47:07 And so, yeah, I think we would lose something if we didn’t dwell in difficulty to a certain extent.
    0:47:12 Not for its own sake, not because we want to dwell in hardships, but precisely because
    0:47:19 it’s by dwelling in it that we can find solidarity and find ways of coming to terms with or changing
    0:47:23 the world to cope with it. So if individual happiness
    0:47:28 shouldn’t be the goal of life, what should the goal of life be? And I apologize for the
    0:47:33 ridiculously hard question, but if anyone can answer it, it’s you, so I’m going to ask.
    0:47:37 Well, I feel like we’re going back to the, yeah, we started with what is philosophy and now we’re
    0:47:42 coming back to the other easy question, what should we aim for in life? I mean, I think it’s very hard
    0:47:49 to say in general terms what that should be. I do think that what we can say is that it’s about responding
    0:47:55 to reality as we should. So, taking in reality, that’s an essential part of living well. And then
    0:48:00 responding to it as we should, I think there are limits to how much philosophical argument
    0:48:07 can show us about how we should respond. I think the way we actually are guided is by trying to describe
    0:48:13 reality and let those descriptions guide us. So, when I think of how philosophy contributes to this,
    0:48:20 I think of concepts philosophy provides. So, allowing us to think about the distinction between, say,
    0:48:25 telic and atelic activities and say, “Hey, am I putting too much value on projects when I should be
    0:48:31 thinking about the ongoing process of what I’m doing?” Or when philosophers coin concepts like structural
    0:48:37 injustice or alienation and we think, “Hold on, something feels wrong with the world. Is this a
    0:48:42 way I can conceptualize it?” And I think the way in which we find an answer to this question, how should
    0:48:49 we live, is by finding descriptions that capture the problems we’re facing and then orient us towards
    0:48:56 them. So, I think it’s not going to come in the form of a kind of pithy slogan that goes beyond,
    0:49:02 find the right kinds of descriptions of reality and let them guide you in your responses to it.
    0:49:07 And then the philosophical project is not necessarily that they’re going to prove to you
    0:49:11 this is how you have to live. It’s that they’re coming up with a kind of conceptual structure that
    0:49:17 allows you to understand a situation and that understanding then guides and orients you towards
    0:49:24 it. So, I think attention to reality plays this absolutely central role in philosophical reflection and in ethics.
    0:49:31 And I love what you say, that this is really the great achievement of moral and political philosophy,
    0:49:40 that it gives us a language to think about and frame and conceptualize our own lives and its relation to
    0:49:44 other people. I mean, this is always why existentialism was very important to me when I was younger, because
    0:49:49 it gave me this language, freedom, engagement, responsibility. Those terms have a concrete meaning
    0:49:53 for me. And that’s what philosophy at its best can do.
    0:49:59 I totally agree. And I think philosophers should be attentive to the fact that, as it were,
    0:50:06 what people remember and take away from philosophy are those concepts. So, probably what even philosophers
    0:50:13 remember about a text that they studied intricately is not every intricate detail of the argument. It’s a kind
    0:50:19 of concept that orients them towards problems. You know, you could say, well, you know, this is a terrible
    0:50:24 failing. But really, ideally, we’d remember all the details of every little intricate argument in a
    0:50:29 philosopher’s work. And there’s value to engaging with those intricate details. But the fact that what
    0:50:36 people take away into their lives are the concepts that articulate their experience and that then they can
    0:50:42 lean on to kind of guide themselves in situations should inform how philosophers think about what
    0:50:48 they’re doing. And I think that helps us understand how philosophy can operate as a form of self-help,
    0:50:55 even if some of the esoteric, intricate arguments philosophers write about are important but not widely
    0:51:03 accessible. Often the concepts and understandings and orientations that arise from them are shareable and really can help
    0:51:06 people grapple with the kind of problems they’re facing.
    0:51:15 I do think modern philosophy has lost its way a little bit and become too academic, too specialized,
    0:51:24 too removed from everyday life, which is why I’m very happy to see someone like you doing more
    0:51:29 public philosophy in this way, because I think that’s what we need. And I think there’s a real
    0:51:34 hunger for that. It’s one of the reasons I do the show, and it’s one of the reasons why it’s
    0:51:38 a privilege to have people like you on it. So, yeah, thanks.
    0:51:44 Thank you very much. And I think shows like this are important for that reason. I think there’s a
    0:51:49 real movement in philosophy to do more public-facing work. It’s a time where there’s a lot of excitement
    0:51:52 and potential. I’m really optimistic about it.
    0:51:56 Is there anything else you’d like to say before we get out of here?
    0:52:04 I think we’ve covered a huge amount from the birth of philosophy to the ills and prospects of
    0:52:08 contemporary philosophy. So, we pretty much covered it.
    0:52:12 Kieran Setiya, this was a genuine pleasure. Thank you.
    0:52:34 This episode was produced by John Ahrens, edited by Jorge Just, engineered by Patrick Boyd,
    0:52:41 and Alex Overington wrote our theme music.
    0:52:46 Alright, I want to hear from you. Hit me up. You can drop us a line at thegrayarea@vox.com.
    0:52:52 This was a personal one. I guess I say that a lot because, I don’t know, I get personal on some of
    0:52:57 these. But this one was especially so. And I think it had to be because of, well, the topic.
    0:53:03 I’m curious if any of it resonated with you or if you had any thoughts at all, good or bad.
    0:53:08 Let me know. The email address is thegrayarea@vox.com.
    0:53:23 New episodes of The Gray Area drop on Mondays. Listen and subscribe.

    Philosophy often feels like a disconnected discipline, obsessed with tedious and abstract problems. But MIT professor Kieran Setiya believes philosophical inquiry has a practical purpose outside the classroom — to help guide us through life’s most challenging circumstances. He joins Sean to talk about self-help, FOMO, and midlife crises.

    This episode originally aired in April 2024.

    Host: Sean Illing (@SeanIlling)

    Guest: Kieran Setiya, author of Life Is Hard: How Philosophy Can Help Us Find Our Way and Midlife: A Philosophical Guide.

    Help us plan for the future of The Gray Area by filling out a brief survey: voxmedia.com/survey. Thank you!

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • Whatever this is, it isn’t liberalism

    AI transcript
    0:00:04 Thumbtack presents the ins and outs of caring for your home.
    0:00:10 Out. Indecision. Overthinking. Second-guessing every choice you make.
    0:00:16 In. Plans and guides that make it easy to get home projects done.
    0:00:21 Out. Beige. On beige. On beige.
    0:00:26 In. Knowing what to do, when to do it, and who to hire.
    0:00:29 Start caring for your home with confidence.
    0:00:31 Download Thumbtack today.
    0:00:39 Support for this show comes from ServiceNow, a company that helps people do more fulfilling work.
    0:00:41 The work they actually want to do.
    0:00:45 You know what people don’t want to do? Boring, busy work.
    0:00:49 But ServiceNow says that with their AI agents built into the ServiceNow platform,
    0:00:54 you can automate millions of repetitive tasks in every corner of a business.
    0:00:58 IT, HR, customer service, and more.
    0:01:03 And the company says that means your people can focus on the work that they want to do.
    0:01:06 That’s putting AI agents to work for people.
    0:01:07 It’s your turn.
    0:01:11 You can get started at ServiceNow.com slash AI dash agents.
    0:01:29 Those are the words of the great and now infamous Thomas Hobbes, the 17th century English philosopher.
    0:01:39 You can find them in his 1651 book, The Leviathan, which is often considered the founding text of modern political philosophy.
    0:01:47 Hobbes’ big contribution was to challenge the right of kings and religious authorities to rule.
    0:01:52 The foundation of political power for him was the consent of the governed.
    0:01:58 And the only reason to hand over authority to the state, or anyone else for that matter,
    0:02:00 was for the protection of the individual.
    0:02:05 If that sounds familiar, it’s because it is.
    0:02:12 That’s basically the political philosophy that came to dominate the Western world from the Enlightenment on.
    0:02:15 It’s what we now call liberalism.
    0:02:21 But we’re in an era where liberalism and democracy are being contested from within and without.
    0:02:28 And while I wouldn’t say that liberalism is dead, because that doesn’t quite make sense,
    0:02:31 I would say that it’s wobbly.
    0:02:35 What should we make of that?
    0:02:38 Is the liberal experiment coming to an end?
    0:02:43 And if it is, what does that mean for our political future?
    0:02:50 I’m Sean Illing, and this is The Gray Area.
    0:03:03 Today’s guest is political philosopher John Gray.
    0:03:12 We spoke before last year’s elections, and lately, I have found myself returning to that conversation over and over again.
    0:03:18 In his book, The New Leviathans, Thoughts After Liberalism,
    0:03:27 Gray challenges the liberal dream of History, with a capital H, the idea that history is over and that liberal democracy has won.
    0:03:35 Hobbes is at the center of his book because he thinks Hobbes’ liberalism was more realistic in its ambitions,
    0:03:41 and that his most important lessons about the limits of politics have been forgotten.
    0:03:48 It is, as you might suspect, a challenging book, but it is an essential read.
    0:03:56 And I invited Gray onto the show to talk about what he thinks has gone wrong, and more importantly, where he thinks we’re headed.
    0:04:03 John Gray, welcome to The Gray Area.
    0:04:04 Thank you very much, Sean.
    0:04:12 What’s interesting about this new book is that you’re not even bothering to announce the death of liberalism.
    0:04:17 Like Nietzsche’s madman screaming about God in the town square.
    0:04:22 You’re saying liberalism has already passed, and most of us don’t quite know it yet.
    0:04:23 Is that right?
    0:04:33 Yes, I think there are many visible signs that anything like a liberal order or a liberal civilization has passed.
    0:04:43 In the last 30 years, shall we say, since 1990, 30-odd years, there’s been an enormous…
    0:04:52 After that moment in which it seemed that liberal democracy was going to become universal or nearly universal following the collapse of communism,
    0:04:58 what, in fact, happened was that the transition from communism to liberal democracy did not occur in Russia.
    0:05:00 It has not occurred in China.
    0:05:15 The wars that were fought, so-called wars of choice, by the United States and its followers, including Britain, in Afghanistan, Iraq, Syria, to some degree, and Libya, were all failures.
    0:05:28 None of those countries became democratic or anything near it, and, in fact, the wars only damaged those countries in profound ways and damaged the United States and Britain in various ways.
    0:05:39 So, I think if you just look at geopolitical trends, you can see that the so-called liberal West, if something like that ever fully existed, is in steep retreat.
    0:06:08 And in Western societies themselves, what were taken for granted, even within my lifetime, and perhaps yours, Sean, as fully accepted, liberal freedoms of speech and inquiry and expression and so forth, have been curtailed, not by a dictatorial state, interestingly, as in the former Soviet Union or today in Xi’s China, but actually by civil institutions themselves.
    0:06:27 It’s been universities and museums and publishers and media organizations, charities and cultural institutions and so on, that have imposed various kinds of limits on themselves, such that they police the expression of their members.
    0:06:36 And those who deviate from a prevailing progressive orthodoxy are in various ways canceled or excluded. That’s quite new.
    0:06:39 But it’s rather widespread now and pervasive.
    0:06:50 And although, of course, it’s true that there are enclaves of free expression, enclaves or niches like the one we’re enjoying now.
    0:07:00 Although we’re not in the position that people are in, in Xi’s China or Putin’s Russia, we can still communicate relatively freely.
    0:07:12 There are large areas of life, including the institutions I mentioned earlier, which used to be, let’s say, governed by liberal norms, and aren’t any longer.
    0:07:26 So I think it makes sense just as an empirical observation to say that liberal civilization that existed and could be described as a liberal civilization, with all its faults and flaws, doesn’t exist any longer.
    0:07:34 Of course, you might say liberalism as a theory continues to exist, but then so does medieval political theory or any modern political theory.
    0:07:36 It just doesn’t describe the world anymore.
    0:07:46 Well, let’s not get too far ahead of ourselves here, because the term liberalism is one of those big, unwieldy terms that means a million different things to a million different people.
    0:07:53 What do you mean by liberalism, just so it’s clear what we’re diagnosing the death of here?
    0:08:08 The core of liberalism as a philosophy is the idea that no one has a natural right to rule, and that all rulers, all regimes, all states serve those whom they govern.
    0:08:11 So that this is a view which differs from Plato.
    0:08:24 Plato thought that philosophers had the best authority to rule because they could better than other people perceive truths beyond the shadows of the empirical world.
    0:08:32 In Hobbes’ day, some people believed, many people believed that kings had divine right to rule.
    0:08:40 And later on, we’ve had beliefs, we have had philosophies which have developed according to which it’s the most virtuous people who should rule.
    0:08:52 And I think actually the hyper-liberal, or what is now sometimes called the woke movement, has something of that in it: they imagine that they represent virtue and progress better than others.
    0:08:58 And therefore, they have a right at least to shape society according to their vision.
    0:09:11 But a liberal, and in this sense, Hobbes is a liberal, and I’m still a liberal in this sense, actually, is one who thinks that any sovereign, any ruler, depends for their authority on protecting the well-being of the ruled.
    0:09:15 And in liberal theory, in liberal thought, the ruled are normally individuals.
    0:09:19 And when it doesn’t do that, then any obligation to obey is dissolved.
    0:09:23 And Hobbes says explicitly, the book is partly about Thomas Hobbes, of course.
    0:09:30 As you know, the 17th century political philosopher who wrote the book Leviathan, which is why mine is called The New Leviathans.
    0:09:53 Hobbes said that when the sovereign, which could be a king or a Republican assembly or a parliament or whatever, but when the sovereign fails to protect the individual from violence for other human beings, when the sovereign fails to provide security, all obligations are dissolved, and the individual can leave or kill the sovereign.
    0:09:54 Kill the sovereign.
    0:09:59 So there is a fundamental equality between the ruler and the ruled.
    0:10:00 I think that’s the core of liberalism.
    0:10:03 And in that sense, I say Hobbes is still a liberal, and so am I.
    0:10:12 But it had many, many different meanings later or attached to it about rights and progressiveness and so on, which I don’t subscribe to, and neither did Hobbes.
    0:10:22 You actually call Hobbes the first and last great liberal philosopher, which might surprise more than a few political philosopher types.
    0:10:23 Why is that?
    0:10:25 Why is he the first and the last great liberal philosopher for you?
    0:10:27 Well, it shouldn’t surprise them.
    0:10:36 If they knew a bit more than they normally do about the history of political ideas, they would know that the best 20th century scholars of Hobbes all regarded him as a liberal.
    0:10:44 So Michael Oakeshott, the British conservative philosopher, the Canadian Marxist philosopher, C.B.
    0:10:51 Macpherson, and Leo Strauss, the American conservative philosopher, they all regarded Hobbes as a liberal.
    0:10:57 And so it’s only philosophers who don’t read the history of ideas, which is the majority, I’m afraid.
    0:11:00 It’s only those who are surprised by it.
    0:11:01 So they shouldn’t be.
    0:11:07 But I think the sense in which he is a liberal is exactly the sense I mentioned earlier, which is that he doesn’t accept any natural right to rule.
    0:11:09 The most virtuous don’t have the right to rule.
    0:11:12 The cleverest or the most intelligent don’t have the right to rule.
    0:11:15 None are appointed by God to rule.
    0:11:24 States or sovereigns are human constructions, human creations, which exist only so long as they serve the purposes of those over whom they rule.
    0:11:30 And so that, I think, is still alive, that idea, not only in philosophy.
    0:11:31 I think it’s alive in the world.
    0:11:39 And there’s nowhere in the world now, though there was in the past, even the relatively recent past, where anyone rules by prescriptive right.
    0:11:51 If someone just says, I have the right to rule you, as our King Charles did in the Civil War in Britain in the 17th century, I have the divine right to rule.
    0:11:51 He was executed.
    0:11:53 He was executed by the parliament.
    0:11:57 So that liberal idea, I think, is still quite strong in the world.
    0:12:02 But it’s quite different from lots of other liberal ideas about progress and humanity and rights and so on.
    0:12:12 I used to teach Hobbes, and I always wondered what it was I liked so much about him, because he is so dark and gloomy.
    0:12:21 I mean, even if you’ve never read Hobbes, you probably know his famous description of human life as solitary, poor, nasty, brutish, and short, that kind of thing.
    0:12:29 And I think what appeals to me in his thought is the tragic dimension.
    0:12:34 You know, anarchy, for him, was never something we transcend.
    0:12:36 It was something we stave off.
    0:12:38 But it remained a permanent possibility.
    0:12:45 That awful state of nature that he worried about was always lurking just beneath civilization.
    0:12:59 Do you think modern liberalism went awry when it lost sight of this and maybe drifted away from Hobbes’ very limited view of the purpose of the state, which is just to keep us from eating each other, basically?
    0:13:05 I think liberalism, over time, turned into something different.
    0:13:12 I mean, one has to say that, although historically, in terms of the history of ideas, Hobbes is definitely a liberal.
    0:13:30 Most people who’ve called themselves liberals subsequently in the 19th and 20th and 21st centuries wouldn’t regard Hobbes and don’t regard Hobbes as a liberal, because although he has this feature that sovereigns or states serve the individuals over whom they rule,
    0:13:41 He doesn’t think that what the state or the sovereign can do to provide security can be limited or should be limited by rights or other such principles.
    0:13:42 He doesn’t think that.
    0:13:58 And that’s the sort of difficulty that many people find in thinking about Hobbes, which is that although he thinks the state has a very limited purpose, it can do anything that it judges, the sovereign judges, that will achieve that purpose.
    0:14:04 So, for example, the state in Hobbes has no obligation to respect freedom of speech.
    0:14:10 If freedom of speech harms social peace and political order, it can intervene.
    0:14:18 Hobbes even says that the sovereign can define the words used in the Bible, can define what those words mean.
    0:14:31 And probably when you taught him, you notice this, so that society can avoid the religious wars that were raging, had been raging in Europe in his time and around his time over what the Bible meant.
    0:14:33 Peace determines everything.
    0:14:35 So there’s no right to free speech.
    0:14:39 There’s no right to demonstrate. None of these rights can restrain the state.
    0:15:01 On the other hand, and here he’s different from modern liberals, the state can’t intervene in society, can’t curb human beings in order to achieve some idea of social justice or progress or a higher type of humanity, a more civilized or superior or ethically superior type.
    0:15:05 It can’t do that either, it shouldn’t promote virtue, it’s indifferent to those matters.
    0:15:08 So, it’s a very unfamiliar type of liberalism.
    0:15:11 But I share your view, I’m not sure it’s tragic.
    0:15:12 I would just say it’s a reality.
    0:15:20 Hobbes thought it was a reality that at any time, order in society can break down anywhere, under certain conditions, and it can happen quite quickly.
    0:15:23 In other words, order is fragile in human life.
    0:15:26 The default condition of human life is not harmony.
    0:15:29 I guess that’s where he differs from many liberals.
    0:15:34 They’ve assumed that basically human beings want to cooperate, that’s what they try and do.
    0:15:44 And if they’re thwarted, it’s by tyranny or reaction or evil demagogues or some sort of evil force which prevents them.
    0:15:45 Hobbes doesn’t assume that.
    0:15:57 Hobbes thinks the default condition of humanity is conflict and that, therefore, one can fall into brutal and terrible and uncivilized forms of that conflict at any time.
    0:16:03 And I would say that the history of the 20th century exhibited that in many ways.
    0:16:12 The main destroyers, I guess, of human life and peace, and the main agencies that inflicted violence then, were states.
    0:16:16 But in the 21st century, they’re not necessarily states.
    0:16:19 They can be terrorist organizations or criminal gangs.
    0:16:31 And so anarchy has emerged now, I think, in the 21st century as at least as much of a threat to human security and human freedom as totalitarian and tyrannical states were in the 20th century.
    0:16:35 And that’s, I think, a relatively new development in recent times.
    0:16:39 And it’s one which, I think, makes Hobbes more topical, if you like.
    0:16:52 I mean, when it was states that were committing vast crimes, his argument that the state should be unfettered in its pursuit of peace kind of seemed weak because states weren’t pursuing peace.
    0:16:57 They were pursuing other gods and were killing tens of millions of human beings.
    0:17:05 Now, it’s more often the case that states collapse or are destroyed.
    0:17:15 And sometimes they’re destroyed, as they were in Iraq and Afghanistan and in Libya, for example, by the attempt to bring in a better kind of state.
    0:17:25 And so I think one big error of contemporary liberalism, which has actually affected policies in America and elsewhere, has been the idea that nothing is worse than tyranny.
    0:17:34 Whereas Hobbes’ insight, his relatively simple insight, but his rather profound one, is that anarchy can be worse than tyranny.
    0:17:43 And what’s also true is that once you’re in an anarchical condition, once the state is broken down, once you’re in a failed state, it’s very difficult, actually, to reconstruct the state.
    0:17:47 Well, in what sense has liberalism, for you, passed into the dustbin of history?
    0:17:56 I mean, liberalism is still very much a thing, even if the shape of it has changed, and it is very much alive, if not terribly well.
    0:18:02 So what does it mean to say that liberalism has passed away or died or however you like to put it?
    0:18:04 Well, as I’ve said, there are still ideas.
    0:18:05 Yeah, yeah.
    0:18:12 I mean, you could go into a library and pull a book down, and it will describe medieval or ancient Greek and Roman political philosophy to you.
    0:18:14 In that sense, these ideas are alive.
    0:18:23 But in the actual world, the actual human world, liberal regimes or liberal societies or a liberal civilization, I think, is in the past.
    0:18:30 So, well, let me give you a kind of rather obvious example, since we’re talking partly in an American context.
    0:18:48 Thirty years ago, I wrote that I thought that what would happen, I quote myself, perhaps rather vainly, in this new book of mine, I wrote that what I expected to happen in the United States was that as more and more freedoms and activities became covered by rights, by legal rights,
    0:19:00 and when some of those rights did not reflect a moral consensus in society, but there were rights to do things that were morally conflicted in society, like abortion.
    0:19:03 Now, I’m pro-abortion, or rather pro-choice, but that’s irrelevant here.
    0:19:12 I thought that what would eventually happen would be that the judicial institutions, up to and including the Supreme Court, would be politicized.
    0:19:14 They’d become objects of political capture.
    0:19:33 Now, when I said that thirty-odd years ago, people like Dworkin, whom I knew in Oxford, and others were incredulous, because for them it was natural, it was some kind of settled fact of life that the majority of judges had become liberal and would stay liberal.
    0:19:34 I never thought that for a moment.
    0:19:47 I thought that a different dynamic would take place, that the more rights discourse and the practice of rights was extended to morally disputable and conflicted areas, the judicial institutions would be politicized and taken over.
    0:20:01 So that, I think, is a feature of, if you think of a liberal regime or a liberal society as one in which there are judicial institutions that are not politically contested, that aren’t part of the political arena, then that’s passed away, that’s gone.
    0:20:13 And so, I think, also, has the area of private life, of life in which what you say to friends or work colleagues is not sort of justiciable, is not actionable.
    0:20:28 That’s much smaller than it used to be, certainly in Britain, which I know well, and I’m pretty sure it is in America, too, in that what used to be a private conversation could be cited against you because it deviates from some progressive norm.
    0:20:40 So, the defining features of liberalism, not as a philosophy that exists in libraries, but as a practicing set of institutions and norms, have at least become weaker.
    0:20:44 And I would say it’s pretty well gone now, and I don’t expect it to come back.
    0:20:57 We’ll be back with more of my conversation with John Gray after a quick break.
    0:21:14 Support for the Gray Area comes from Quince.
    0:21:18 Vacation season is nearly here, and when you go on vacation, you want to dress the part.
    0:21:25 Whether that’s breathable linen for summer nights, like I wear, or comfy leggings for long plane rides, like I also wear,
    0:21:39 You can treat yourself with Quince’s high-quality travel essentials at fair prices, like Quince’s lightweight shirts and shorts from $30, and comfortable lounge sets, like the ones I wear, with premium luggage options and durable duffel bags to carry it all.
    0:21:41 The best part?
    0:21:45 Quince says their items are priced 50% to 80% less than similar brands.
    0:21:48 Our colleague, Claire White, got to check out Quince.
    0:21:54 I received the leather pouch travel set from Quince, and I love them.
    0:21:56 They are so versatile.
    0:22:01 They fit a lot while still looking great and maintaining a really high quality of leather.
    0:22:06 For your next trip, treat yourself to the luxe upgrades you deserve from Quince.
    0:22:13 You can go to Quince.com slash grayarea for 365-day returns, plus free shipping on your order.
    0:22:20 That’s Q-U-I-N-C-E dot com slash grayarea to get free shipping and 365-day returns.
    0:22:22 Quince.com slash grayarea.
    0:22:29 Support for the gray area comes from Bombas.
    0:22:32 It’s time for spring cleaning, and you can start with your sock drawer.
    0:22:36 Bombas can help you replace all your old, worn-down pairs.
    0:22:38 Say you’re thinking of getting into running this summer.
    0:22:43 Bombas engineers blister-fighting, sweat-wicking athletic socks that can help you go that extra mile.
    0:22:50 Or if you have a spring wedding coming up, they make comfortable dress socks, too, for loafers, heels, and all your other fancy shoes.
    0:22:52 I’m a big runner.
    0:22:53 I talk about it all the time.
    0:23:00 But the problem is that I live on the Gulf Coast, and it’s basically a sauna outside for four months of the year, maybe five.
    0:23:07 I started wearing Bombas athletic socks for my runs, and they’ve held up better than any other socks I’ve ever tried.
    0:23:13 They’re super durable, comfortable, and they really do a great job of absorbing all that sweat.
    0:23:15 And right now, Bombas is going international.
    0:23:19 You can get worldwide shipping to over 200 countries.
    0:23:25 You can go to bombas.com slash gray area and use code gray area for 20% off your first purchase.
    0:23:29 That’s B-O-M-B-A-S dot com slash gray area.
    0:23:32 Code gray area for 20% off your first purchase.
    0:23:34 Bombas.com slash gray area.
    0:23:36 Code gray area.
    0:23:43 Support for the gray area comes from Shopify.
    0:23:47 Creating a successful business means you have to be on top of a lot of elements.
    0:23:54 You need a product with demand, a focused brand, a steady hand, and a gray area ad budget of at least $100,000.
    0:23:56 You also need savvy marketing.
    0:23:58 But that one didn’t rhyme.
    0:24:01 And of course, there’s the business behind the business.
    0:24:03 The one that makes selling things easy.
    0:24:06 For a lot of companies, that business is Shopify.
    0:24:12 According to their data, Shopify can help you boost conversions by up to 50% with their ShopPay feature.
    0:24:19 That basically means less people abandoning their online shopping carts and more people going through with the sale.
    0:24:24 If you want to grow your business, your commerce platform should be built to sell wherever your customers are.
    0:24:29 Online, in-store, in their feed, and everywhere in between.
    0:24:32 Businesses that sell, sell more with Shopify.
    0:24:36 You can upgrade your business and get the same checkout Mattel uses.
    0:24:40 You can sign up for your $1 per month trial period at Shopify.com slash Vox.
    0:24:42 All lowercase.
    0:24:46 Go to Shopify.com slash Vox to upgrade your selling today.
    0:24:48 Shopify.com slash Vox.
    0:25:11 As you know, Nietzsche thought that liberalism was rooted in these Christian ideas about human equality and the value of the human person.
    0:25:21 But modern liberals rejected the religious roots of these values while still attempting to preserve them on secular grounds.
    0:25:25 That was a move he thought was destined to fail.
    0:25:36 You seem to think that Hobbesian liberalism was intended to be a kind of political atheism, but it eventually shape-shifted into something like a political religion.
    0:25:39 Only it didn’t recognize itself as such.
    0:25:41 Is that sort of the core problem here?
    0:25:42 Or one of them?
    0:25:55 One of the core problems – I mean, I think I talk at some length in the book when I discuss the way in which John Stuart Mill, who I think for many liberals is still a canonical liberal, or even the canonical liberal.
    0:26:06 But he explicitly, undeniably, quite overtly adopted a view from Auguste Comte, the French positivist thinker, who was an anti-liberal, actually.
    0:26:17 But anyway, he adopted from Comte the idea of a religion of humanity, which he said should replace all the existing religions and would be better than any of the existing religions.
    0:26:22 He explicitly took that from Comte and cited and said that and wrote that in several places.
    0:26:30 So, I think it was probably in Mill, at least in Britain, that liberalism became itself a kind of religion.
    0:26:39 But, of course, there are still many respects in which it secularized monotheistic assumptions or values or premises.
    0:26:59 So, I think it is undoubtedly the case, historically, that liberalism, particularly the liberalism that later emerged as a kind of religion in its own right, was a set of footnotes to monotheism, to Christian and Jewish monotheism, and a competitor to it.
    0:27:07 And, basically, liberals, conventional liberals, 90% of liberals, are adamantly resistant to this view.
    0:27:19 They adamantly insist that their views at no point depend on anything in theism; they would say it’s a kind of genetic fallacy to think that just because something may have come from theism,
    0:27:20 it still depends on it.
    0:27:23 But it’s actually, I think, quite difficult.
    0:27:33 You know, it has become more difficult for me to identify what I am, and it’s not just because the fault lines around me are so scrambled.
    0:27:41 I think on some level, it’s because, and maybe I’m projecting a little bit onto Hobbes, I have a pretty tragic view of political life.
    0:27:54 And because of that, I have a fairly modest understanding of the goal of politics, which is to navigate this tension between order and chaos with the understanding that nothing is permanent.
    0:27:59 Everything is contingent, and history has no ultimate direction.
    0:28:04 I mean, in so many ways, this was the political lesson of the 20th century.
    0:28:13 And after a handful of decades of liberal triumphalism, which is barely a blink in historical time, by the way, people seem to have forgotten this.
    0:28:16 And this is probably where you and I are maybe most aligned.
    0:28:20 But you don’t think the belief in progress is a complete delusion, right?
    0:28:22 I mean, the world has indeed gotten much, much better.
    0:28:26 It’s just that that progress isn’t fixed, and it’s dangerous to believe otherwise.
    0:28:27 Well, I don’t know.
    0:28:35 I mean, what I say in the book is that progress meant in those who believed in it.
    0:28:40 It didn’t mean that things would get better for a while and then get worse.
    0:28:43 I guess it meant two things, both of which are false.
    0:28:53 One is that progress was cumulative in the sense that what was achieved in one generation could be carried on in the next generation.
    0:28:54 That’s what meliorism was.
    0:29:04 Meliorism as a philosophy isn’t just the idea or the belief that some societies or some parts of history are better than others.
    0:29:08 I think everybody would accept that, whatever their values are, actually.
    0:29:13 But it was the belief that the human lot could be cumulatively improved.
    0:29:19 That’s to say that certain achievements could be embedded and they would remain fixed.
    0:29:21 You could have some retrogression.
    0:29:27 You could go from stair seven on the escalator of progress back to stair three.
    0:29:33 But then the stairs would start moving again and you would get back to seven.
    0:29:35 And then you could get to eight or nine.
    0:29:40 So you might make two steps back, but you would then make two or three steps forward.
    0:29:41 That was meliorism.
    0:29:43 And I think that’s clearly false.
    0:29:46 You might be tempted to think that it was true if you thought of only the last 300 years.
    0:29:51 But if you look at the larger picture, there was no apocalyptic revelation 300 years ago,
    0:29:53 no apocalyptic change in human events.
    0:30:00 Human beings remained what they were before that in ancient Greece and ancient China and elsewhere.
    0:30:01 And then medieval times.
    0:30:07 They remained basically, I think, still what they were in their natures and appetites and so on.
    0:30:10 And so meliorism in that sense is false.
    0:30:21 Well, one thing that seems obvious enough at this moment is that liberal societies are experiencing a lot of internal disruption.
    0:30:29 I mean, maybe the only thing that really unites the far right and the far left is their contempt for the society that produced them.
    0:30:36 And you say something in the book that I think cuts right to the core of this.
    0:30:40 And I just want to read it to you and ask you what you mean by that.
    0:30:48 You say in its current and final phase, the liberal West is possessed by an idea of freedom.
    0:30:51 What does it mean to be possessed by an idea of freedom?
    0:31:07 Well, the sense in which I use it in the book is the sense in which it was said that late 19th century intellectuals in Tsarist Russia were possessed by an idea of freedom, which is that a particular idea of freedom comes to be prevalent.
    0:31:30 That means not the reduction of coercion by other human beings or by the state, not a set of procedures which enables people to live together, not a set of norms of tolerance or peaceful coexistence or even of mutual indifference, which enable people to live together in some rough and ready way.
    0:31:32 Freedom means self-creation.
    0:31:36 Freedom means creating yourself as the person you want to be.
    0:31:40 And that idea, I think, is definitely not in Hobbes.
    0:31:43 It’s not even in Locke or other liberals.
    0:31:44 But it is in Mill.
    0:32:00 It is in the chapter of Mill’s essay On Liberty where he talks about individuality, where he says that anyone who inherits their way of living or what we would now call their identity from the society, from conventions, from traditions, from history, lacks individuality.
    0:32:10 Individuality means being the author of your own life, changing it, fashioning it as if it was a work of art so that it fits something unique and authentic about yourself.
    0:32:14 And I think that is what the West is possessed by.
    0:32:30 Because the reason it’s an impossible ideal to realize is that if you want to author your life in a certain way and have a certain identity, it doesn’t mean much or anything unless that identity is somehow accepted by others as well.
    0:32:34 Otherwise, it’s just a fiction of yours or a dream.
    0:32:49 And that’s, I think, one of the things that’s provoked deep conflict in Western society, because the underlying idea, a strong version of autonomy as self-creation, has become part not of the far right or the far left.
    0:32:52 It’s not that which has produced the present conflict.
    0:32:54 It’s not the far right or the far left.
    0:32:57 It’s become part of liberal thinking and practice itself.
    0:33:04 And that, I guess, goes back to Mill and to romantic theorists and philosophers who Mill read.
    0:33:10 It’s an element in the liberal tradition that wasn’t very strong or perhaps present at all there, but it’s very, very strong now.
    0:33:22 So I guess that’s what it means by being possessed by an idea of freedom, that unless you can be what you want to be and unless you can actually somehow have that validated by others, you’re not free.
    0:33:24 Well, that’s not really possible.
    0:33:34 And I think the more traditional liberal idea of toleration, which is that you don’t have to be fully validated by other people and they don’t have to be fully validated by you.
    0:33:42 They can simply, you can rub along as the different miscellaneous personalities and contingent human beings that you are.
    0:33:48 That seems to me a more achievable ideal, but it’s not one that satisfies many people today.
    0:33:50 Not many liberals, anyway.
    0:33:56 Yeah, I mean, I think that the pursuit of individual freedom is good.
    0:34:03 The desire to free ourselves from our inherited identities is good and necessary.
    0:34:15 But we do seem to run into a ditch if we pursue it too far because the pursuit, as I think you’re saying, the pursuit of self-definition doesn’t end with the self because no one can be wholly self-defined.
    0:34:18 So it becomes a political contest for recognition.
    0:34:22 And I don’t think liberal politics are equipped to handle that very well or for very long.
    0:34:32 Well, I agree with that, especially if it becomes a matter of rights, because then, of course, you have a perpetual conflict between the rights of rival groups, basically.
    0:34:41 If these identities, especially if they’re framed in ways which are antagonistic or polarized, it’s a recipe for unending conflict.
    0:34:49 I’m not sure, you see, I wouldn’t even go as far as you do in saying that wanting to free oneself from tradition is necessarily good.
    0:34:56 I think some people want it so they can go ahead and live like that in what used to be called a liberal society if they want to.
    0:35:02 But others might be quite happy to just jog along with whatever they’ve inherited and be left alone.
    0:35:04 I think people should have the choice is what I was saying.
    0:35:05 I don’t mean imposing that.
    0:35:06 No, no, not imposing.
    0:35:07 But you think it’s, I don’t think it’s even better.
    0:35:09 I don’t think one is better than the other.
    0:35:11 I think they’re just preferences, actually.
    0:35:24 And so I would never say, as Mill does, Mill constantly says, people who accept the definition of their inherited identities are, he doesn’t use the word inferior, but he says, he implies all the way throughout that they’re inferior.
    0:35:34 He suggests that they’re not themselves, that they obey convention by rote, that they’re puppet-like creatures, and so I wouldn’t say any of that.
    0:35:39 There may be those, I mean, who want to construct themselves, turn themselves into works of art, if you like.
    0:35:41 They can go ahead and try.
    0:35:44 But quite a lot of people, at least in the past, didn’t want to do that.
    0:35:48 And I think there are still quite a lot of people who don’t want to do that now.
    0:35:55 And they should have as much freedom and as much respect, it’s an important point, I would say, as these others.
    0:36:15 I mean, the key point, I guess, of the book is that the problems of liberal society or the fact that it’s passed away, as I claim, isn’t something that’s happened, as many conservatives or leftists or others say, because liberalism has been sidelined by Marxism or post-modernism or some other philosophy.
    0:36:25 The problems of liberal societies come from within liberal societies themselves.
    0:36:36 And the problems liberalism has generated, the contradictions it has generated, have proved to be ones that it’s not very good at resolving.
    0:36:58 This contemporary obsession with self-expression and self-creation and status and that sort of thing, do you see that as symptomatic of some deep failure of liberal politics, that this was bound to happen because liberal politics did not and cannot satisfy this kind of need?
    0:37:09 No, I mean, that’s a kind of Hegelian view or a Fukuyama-like view, which says that what people want is recognition and that liberal societies haven’t been able to, etc., etc.
    0:37:28 I think that the main challenges to liberal societies now are quite different, which is that the economic model of liberal society, which was adopted after the collapse of communism, after the Cold War, has left large parts of society behind, not just minorities.
    0:37:37 There have been working-class communities in Britain and America and parts of Europe which have just been more or less abandoned.
    0:37:54 But also, large parts of what used to be called the middle classes have not seen their incomes or their standards of living improve much or at all in the last 30 years, while the societies as a whole have gotten considerably better.
    0:38:16 So, I think the economic model, actually, of Western liberal societies, the dominant one after the Cold War, during the Cold War, we tend to forget now, although it’s within my lifetime, we tend to forget that after the Second World War, there was a model of social democracy in which the state intervened in many different ways to smooth out the hard edges of market capitalism and constrain it.
    0:38:24 I think the abandonment of that model after the end of the Cold War has led to deep-seated contradictions, but maybe they’re not what you’re referring to.
    0:38:26 They are, certainly in part.
    0:38:43 I mean, I’m glad you said that, because one of the things that irks me about a lot of right-wing types who like to rail against identity politics or wokeism, a term I really hate to use because it has been stretched to the point of meaninglessness, in our discourse at least.
    0:38:56 There is this whole materialist history to be told about the failures of liberal capitalism, and those failures have produced a lot of our political pathologies, and a lot of people on the right don’t want to hear about that, and I think that’s a huge mistake.
    0:39:05 I agree with you, and in fact, I say in the book, it’s a very simple point, but very hard for many liberals, right-wing liberals in particular, to understand.
    0:39:17 I say that what these people call populism is the political blowback against the social disruption produced by their own policies, which they don’t understand or deny.
    0:39:19 That’s what populism is.
    0:39:29 They talk about populism as if it was a sort of demonic thing that arose from nowhere, that it was a few demagogues that whipped it up out of practically nothing.
    0:39:48 I’m not saying there aren’t demagogues, but the reason the demagogues were successful in 2016 and later, and not in 1950 or 60 or 70s in Europe and America, is that there were periods, certainly in Europe, and to some extent even in America, of social democracy,
    0:40:01 in which there was a more extensive state, the Eisenhower state, the Rooseveltian state, even before that in America, which limited the impact of market capitalism on human well-being and provided some protection for its casualties.
    0:40:17 If you scrap that, which was done to a considerable extent after the end of the Cold War, then over time you create large sections of the population which are suffering and dislocated, or simply have no place in the productive process.
    0:40:20 And you’ve got to expect some sort of kickback.
    0:40:22 So that’s what liberals call populism.
    0:40:28 They call populism the political movements around them that they have caused, which they don’t understand.
    0:40:30 That’s what populism is, basically.
    0:40:33 But you could never get that across to them, actually.
    0:40:36 I’ve tried to do this, and they say, but it’s the demagogues.
    0:40:37 It’s Trump.
    0:40:38 It’s Boris Johnson.
    0:40:40 It’s Nigel Farage.
    0:40:41 It’s all these wicked people.
    0:40:44 If you could only shut these wicked people up, everything would be fine.
    0:40:46 Or some of them say it’s the Russians.
    0:40:59 So what they’re doing is they’re denying, or maybe just not understanding, maybe they’re just stupid, they’re just not understanding why these movements have arisen when they did.
    0:41:10 I guess the problem for me, and this is why I’m still basically a liberal, is that I don’t think any of the conservative alternatives are preferable for a thousand different reasons.
    0:41:14 And I’m not a fan of any imaginable version of authoritarianism.
    0:41:17 So I don’t really have anywhere else to go, ideologically.
    0:41:18 Liberalism, it is.
    0:41:20 It’s up to you.
    0:41:27 But it depends how far you think the degeneration of liberal society has gone and how far it can remain livable.
    0:41:42 I mean, one of the things in Europe now is that the far right in many European countries, not in Britain yet, but in France and Germany, is now a very substantial political bloc.
    0:41:51 In other words, there isn’t a flawed liberal society around us, uncontested, which can carry on pretty well whatever happens.
    0:42:01 There are powerful movements, not exactly like in the 30s, but there are powerful far right movements, and in some countries also far left movements, which are challenging it.
    0:42:07 So the liberal position might be a kind of luxury of history that is now passing away.
    0:42:19 We’ll be back with more of my conversation with John Gray after one more quick break.
    0:42:40 Support for the gray area comes from Mint Mobile.
    0:42:43 There are a couple ways people say data.
    0:42:44 There’s data.
    0:42:46 Then there’s data.
    0:42:47 Me, personally?
    0:42:48 I say data.
    0:42:49 I think.
    0:42:50 Most of the time.
    0:42:56 But no matter how you pronounce it, it doesn’t change the fact that most data plans cost an arm and a leg.
    0:43:00 But with Mint Mobile, they offer plans starting at just 15 bucks a month.
    0:43:02 And there’s only one way to say that.
    0:43:04 Unless you say $15, I guess.
    0:43:13 But no matter how you pronounce it, all Mint Mobile plans come with high-speed data and unlimited talk and text delivered on the nation’s largest 5G network.
    0:43:16 You can use your own phone with any Mint Mobile plan.
    0:43:20 And you can bring along your phone number with all your existing contacts.
    0:43:23 No matter how you say it, don’t overpay for it.
    0:43:26 You can shop data plans at mintmobile.com slash gray area.
    0:43:29 That’s mintmobile.com slash gray area.
    0:43:34 Upfront payment of $45 for a three-month, five-gigabyte plan required.
    0:43:36 Equivalent to $15 per month.
    0:43:39 New customer offer for first three months only.
    0:43:41 Then full price plan options available.
    0:43:43 Taxes and fees extra.
    0:43:45 See Mint Mobile for details.
    0:43:51 Support for the gray area comes from Found.
    0:43:57 When you’re a small business owner, making sure your bookkeeping and taxes stay in order comes at a cost.
    0:43:59 And not just a financial cost.
    0:44:01 It can also take a lot of time.
    0:44:03 Well, that can change with Found.
    0:44:08 Found is a banking platform that says it doesn’t just consolidate your financial ecosystem.
    0:44:14 It automates manual activities like expense tracking and finding tax write-offs.
    0:44:18 Found says they can make staying on top of invoices and payments easy.
    0:44:21 And they say small businesses are loving Found.
    0:44:24 According to the company, one Found user said this.
    0:44:27 Found is going to save me so much headache.
    0:44:29 It makes everything so much easier.
    0:44:34 Expenses, income, profits, taxes, invoices even.
    0:44:37 That’s just one of their 30,000 five-star reviews.
    0:44:42 You can open a Found account for free at found.com slash gray area.
    0:44:46 Spelled F-O-U-N-D dot com slash gray area.
    0:44:49 Found is a financial technology company, not a bank.
    0:44:54 Banking services are provided by Piermont Bank, member FDIC.
    0:44:56 You don’t need to put this one off.
    0:45:01 You can join thousands of small business owners who have streamlined their finances with Found.
    0:45:07 For as long as I can remember, bread has given me hiccups.
    0:45:11 I always get the hiccups when I eat baby carrots.
    0:45:16 Sometimes when I am washing my left ear, just my left ear, I hiccup.
    0:45:20 And my tried and true hiccup cure is…
    0:45:27 Pour a glass of water, light a match, put the match out in the water, drink the water, throw away the match.
    0:45:31 Put your elbows out, point two fingers together and sort of stare at the point between the fingers.
    0:45:35 It doesn’t work if you bring your elbows down, but it works.
    0:45:38 Just eat a spoonful of peanut butter.
    0:45:39 Think of a green rabbit.
    0:45:42 I taught myself to burp on commands like…
    0:45:46 Excuse me.
    0:45:51 And I discovered that when I make myself burp, it stops my hiccups.
    0:45:56 Unexplainable is taking on hiccups.
    0:45:57 What causes them?
    0:46:00 And is there any kind of scientific cure?
    0:46:03 Follow Unexplainable for new episodes every Wednesday.
    0:46:29 I sometimes wonder how long America can continue to exist with the level of fragmentation and internal confusion that we have.
    0:46:31 And the same is true of much of Europe.
    0:46:38 How easy is it for you to imagine a political future where America and Europe cease to exist in any recognizable form?
    0:46:41 Well, Europe doesn’t exist in any recognizable form.
    0:46:44 There isn’t a European super-state, and there isn’t going to be.
    0:46:50 What there are are a variety of nation-states with internal problems of various kinds.
    0:46:52 And so I think that will basically continue.
    0:46:54 They might shift into becoming a kind of…
    0:47:00 I mean, what’s been happening in the last few years is that they’re shifting into becoming almost a hard-right bloc.
    0:47:06 Not that the far-right has taken over, though some people might say it did in Hungary and did in Poland for a while.
    0:47:10 But it’s the far-right which is shaping policy on lots of issues.
    0:47:12 But it won’t become a super-state.
    0:47:18 As to America, I don’t expect the American state to fragment by way of secession…
    0:47:24 I mean, I know some Americans talk about that, and Texans and Californians and others.
    0:47:25 I don’t actually expect that.
    0:47:38 I would more expect a kind of semi-stable, semi-anarchy, in which there are lots of regions of American society and of cities and so on which are semi-anarchical.
    0:47:42 That’s also true in places like Mexico, is it not, and parts of Latin America.
    0:47:45 That could go on for quite a long time.
    0:47:50 The big change, I guess, if I’m right, will be in the capacity of America to project its power globally.
    0:47:52 I think that is steeply declining.
    0:47:59 And I think that will, within your and my lifetime, will be actually seen to be greatly diminished.
    0:48:07 Because although America still has an enormous amount, the U.S., an enormous amount of hard firepower, more than anywhere else, actually, China’s catching up.
    0:48:13 But also, its capacity to use that hard firepower intelligently has not been very great.
    0:48:30 You actually say something pretty interesting, if that’s the right word, about America in the book, which is that it’s become Schmittian in the sense that we believed, rather foolishly, that the law could protect liberal values from political contestation.
    0:48:35 But the law has become indistinguishable from politics.
    0:48:39 And Trump just pushed us right past the threshold.
    0:48:48 And now we’re in, in my estimation, just a full-blown legitimacy crisis, where it doesn’t even matter who wins the next election.
    0:48:50 Just something like 30% of the country.
    0:48:51 Do you agree with that, by the way?
    0:48:52 That is what I think.
    0:48:53 But do you agree with that?
    0:48:53 Do I agree with what?
    0:48:56 That America’s in a legitimation crisis.
    0:48:56 Oh, yes.
    0:48:58 I’ve written this many times.
    0:49:00 It doesn’t matter who wins the next election.
    0:49:03 Something like 30% of the country will consider it illegitimate.
    0:49:04 That’s not liberal politics, John.
    0:49:07 That’s something much closer to war, really.
    0:49:09 Well, it’s what Schmitt thought politics was.
    0:49:10 Friends and enemies.
    0:49:18 And I think the achievement of liberalism, in its various forms, was to replace the war with something else, or at least attenuate the war.
    0:49:21 I mean, this was true, by the way, even in my time.
    0:49:23 Let me give you an autobiographical example.
    0:49:34 During the Thatcherite period, when I was an active Thatcherite, I remained in close friendship with leading members, both theorists and even politicians, in the Labour Party.
    0:49:49 So, we could meet, we could have dinner, we could talk with each other, we could share ideas, didn’t agree, didn’t share goals, thought that this great Thatcherite experiment could come to grief in various different ways, as I then came to think, and so on, for slightly different reasons.
    0:49:54 But that’s actually, in America, I would say, it’s rare, I would think.
    0:49:58 Is it not rare, Sean, for people to interact in that way?
    0:50:02 How many Trumpists have friendly relations with Washington Post liberals?
    0:50:03 Not many, I think.
    0:50:06 No, I’d say that’s, and that’s becoming increasingly so.
    0:50:09 That’s unfortunate, because that’s the triumph of the Schmittian model.
    0:50:13 It’s the triumph of friend-enemy relations.
    0:50:19 And once you’ve gotten to friend-enemy relations, I think you’re in deep trouble, at least from a liberal standpoint.
    0:50:25 It’s very hard to get back from that situation, because both sides want to win.
    0:50:27 And that means it’s a sort of downward spiral.
    0:50:31 Very hard to, I don’t say impossible, you know, something could happen that we haven’t thought of.
    0:50:33 But it’s very difficult to get out.
    0:50:35 So I agree completely with you.
    0:50:41 And it’s one of the things I constantly say, which is that, in one sense, it’s very important who wins the American election next year.
    0:50:45 Because if it’s Trump, the changes will be huge and quick, I believe.
    0:50:53 But in another sense, it doesn’t matter at all, because whoever wins will not be accepted, as you say, by maybe a quarter or a third of American society, American voters.
    0:50:58 So the legitimization crisis will just get worse, whoever wins.
    0:51:06 That’s a very profound fact of the world, because the world still depends on a kind of shadow of Pax Americana.
    0:51:09 It still depends on that, or has depended on that.
    0:51:23 And as that is comprehensively removed, I mean, if Trump pulls American forces out of Europe, which, if he winds up NATO, if he pulls out of the Gulf, where there is now the new Middle Eastern war, that would be a very profound change.
    0:51:31 Yeah, I think the unfortunate truth is that liberalism doesn’t really have a solution to a legitimacy crisis.
    0:51:34 No, I agree with you entirely, which is why it’s so difficult to speculate.
    0:51:46 I mean, what I don’t expect is any new order emerging from this, whether of the right or the left, but just of continued disintegration, not into civil war in America.
    0:51:47 I’m not an American.
    0:51:52 Sometimes since I’ve been there, I spent a long time in America in the 70s and 80s and 90s.
    0:51:55 So I knew it better then than I do now.
    0:51:58 But I don’t expect a full-scale civil war.
    0:52:16 But I can imagine a fairly long period, decades, you know, maybe generations of civil warfare, when different identity groups, different political ideologies, different parts of America, states of America, American states and municipalities, just go their own way with lots of the conflicts that that involves.
    0:52:33 But with a kind of area, which I think will still exist, of high technology, an oligarchy which preserves its own position one way or another, while the rest of society does as best it can.
    0:52:35 I mean, large parts of it abandoned.
    0:52:42 That’s what I sort of expect: a kind of hybrid like that could go on for an awfully long time.
    0:52:50 I don’t think America faces the internal pressures that, say, Russia does, because Russia has powerful ethnic divisions and minorities within it.
    0:52:58 And the state apparatus in Russia, although more ruthless and more violent domestically, is much more corroded and much more corrupt.
    0:53:07 So I think there is a real possibility that Russia could actually break up, whereas I don’t actually, you may be more optimistic or less hyperbolic, if you like, than I am.
    0:53:09 I don’t see that as likely in America.
    0:53:13 I think just continuing decay is a much more likely prospect.
    0:53:15 Yeah, I would agree with that.
    0:53:17 I have no idea what’s going to happen.
    0:53:23 I take some solace in the fact that, at least in America, we’ve survived much, much worse in our past.
    0:53:29 And, you know, we may just lumber along in this interregnum for a very, very long time.
    0:53:31 It may be a very long interregnum.
    0:53:32 It might be.
    0:53:35 And look, maybe we need a new order.
    0:53:48 My fear has always been the road from the present order to the next one is historically a rather bumpy one, and one probably none of us want to take.
    0:53:53 And I’d prefer to fix the world we have before we tear it down.
    0:53:54 But I don’t know.
    0:53:58 Again, I’m not in the prophecy business, so I don’t know what’s going to happen.
    0:54:04 I mean, I’ve been talking about this idea of politics as tragedy, too, for the last few years.
    0:54:12 And what some liberals and others say is, they say, well, we want to get to a world where tragedy is diminished.
    0:54:23 Now, very few of them say a world where there is no tragedy, though some of them have said we want to get to a world in which the only tragedies are failed love affairs or familial disputes and so on.
    0:54:26 We’ll never get to a world like that, I’m sure.
    0:54:46 But I think the danger of trying to eliminate tragedy in politics is that, in order to survive in any political system, and to gain and retain and exercise the power you would need to get to a society in which tragedy is supposedly diminished or mitigated or abolished,
    0:54:54 you have to enter into tragic choices which replicate the tragedy you’re trying to get rid of, trying to transcend.
    0:55:09 So, for example, one of the things that happens in all revolutions, certainly in all the European, Russian, Chinese revolutions and so on, is that once the old regime fails, if it’s really knocked down and fails, then the revolutionary contestants fight among themselves.
    0:55:11 And the one that prevails is the one that’s the most ruthless.
    0:55:19 So in the Soviet Union, in early Soviet Russia, which I know the best, the anarchists were the first to be suppressed.
    0:55:22 Then the Social Revolutionaries, because they were less well organized, less ruthless.
    0:55:29 So what actually produces the authoritarianism is the struggle by the revolutionary groups against each other.
    0:55:30 And that always happens.
    0:55:37 And that sort of illustrates my deeper point, which is that in order to get to a supposedly post-tragic world,
    0:55:49 all kinds of ruthless, tragic decisions have had to be made: shooting anarchists en masse, assassinating, murdering, and putting in camps various dissidents, and so on.
    0:55:55 And once you’ve done that, you’re back in the world, which actually you’ve never left, of tragic choices.
    0:56:05 So I would much prefer a politics which accepted that tragedy was primordial and omnipresent and always would be, and worked with that.
    0:56:14 I mean, this is why I’ve had a kind of Occam’s razor approach to tragedy, which is that the aim should be not to multiply tragedies beyond what is strictly necessary.
    0:56:19 And don’t go around multiplying them by trying to create new regimes all over the place.
    0:56:21 Tragedy in politics isn’t imperfectibility.
    0:56:23 We have no idea of perfection.
    0:56:26 It isn’t that progress is always reversible and ephemeral.
    0:56:28 It’s something deeper than that.
    0:56:38 It’s that there are recurring situations in politics, and always will be, in which whatever we do has deep and enduring losses attached to it.
    0:56:40 And I think that will always be the case.
    0:56:42 So I think that’s what I prefer.
    0:56:59 But I think in order to get a view of the world like that, you do actually have to go back before Christianity, maybe to the Book of Job, but also to ancient Greek tragedy, where there’s no ultimate redemption at all, actually.
    0:57:16 It recurs a bit in Shakespeare later on in a Christian civilization, but you have to go all the way back to the Greek tragic dramas to get that sense that human beings are not autonomous in the sense of being ever able to shape the choices they have to make.
    0:57:23 Tragedies are unchosen choices, choices that human beings don’t want to make and would prefer not to make, but have to make.
    0:57:30 Once again, the book is called The New Leviathans, Thoughts After Liberalism.
    0:57:32 John Gray, always a pleasure.
    0:57:33 Thank you for coming in today.
    0:57:35 Great pleasure on my part as well.
    0:57:37 Let’s have another conversation in a couple of years, shall we?
    0:57:38 Let’s do it.
    0:57:59 Patrick Boyd engineered this episode.
    0:58:02 Alex Overington wrote our theme music.
    0:58:04 And A.M. Hall is the boss.
    0:58:09 As always, let us know what you think of the episode.
    0:58:17 Drop us a line at thegrayarea@vox.com and share the show with your friends, family, and anyone else who will listen.
    0:58:20 New episodes of The Gray Area drop on Mondays.
    0:58:21 Listen and subscribe.
    0:58:41 Support for The Gray Area comes from Greenlight.
    0:58:47 School can teach kids all kinds of useful things, from the wonders of the atom to the story of Marbury vs. Madison.
    0:58:51 One thing schools don’t typically teach, though, is how to manage your finances.
    0:58:55 So those skills fall primarily on you, the parent.
    0:58:57 But don’t worry, Greenlight can help.
    0:59:02 Greenlight says they offer a simple and convenient way for parents to teach kids smart money habits,
    0:59:06 while also allowing them to see what their kids are spending and saving.
    0:59:11 Plus, kids can play games on the app that teach money skills in a fun, accessible way.
    0:59:16 The Greenlight app even includes a chores feature, where you can set up one-time or recurring chores,
    0:59:21 customized to your family’s needs, and reward kids with allowance for a job well done.
    0:59:25 My kids are a bit too young to talk about spending and saving and all that,
    0:59:30 but one of our colleagues here at Vox uses Greenlight with his two boys, and he absolutely loves it.
    0:59:35 Start your risk-free Greenlight trial today at Greenlight.com slash gray area.
    0:59:38 That’s Greenlight.com slash gray area to get started.
    0:59:41 Greenlight.com slash gray area.

    What exactly is the basis for democracy?

    Arguably liberalism, the belief that the government serves the people, is the stone on which modern democracy was founded. That notion is so ingrained in the US that we often forget that America could be governed any other way. But political philosopher John Gray believes that liberalism has been waning for a long, long time.

    He joins Sean to discuss the great liberal thinker Thomas Hobbes and America’s decades-long transition away from liberalism.

    Host: Sean Illing (@SeanIlling)

    Guest: John Gray, political philosopher and author of The New Leviathans: Thoughts After Liberalism

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • A new way to listen

    AI transcript
    0:00:05 Hey, it’s Sean Illing. I wanted to tell you some exciting news and ask for your help.
    0:00:10 Okay, the exciting part first. Vox members now get ad-free podcasts.
    0:00:13 That’s right. Think of all the time you can save.
    0:00:17 It’s just one of the great benefits you get for directly supporting our work.
    0:00:26 Vox members also get unlimited reading on our website, member-exclusive newsletters, and more special perks as a thank you.
    0:00:31 Now I want to ask for your help. Vox is an independent publication.
    0:00:38 That means we rely on support from listeners like you to produce journalism that the world really needs right now.
    0:00:43 At Vox, we strive to help you understand what really matters in our world.
    0:00:51 That’s why we report on the most important issues shaping our world and also on truly essential stories that others neglect.
    0:00:55 We can only do that because of support from people like you.
    0:01:03 So if you’d like to support our work and get ad-free listening on our podcast, go to vox.com slash members today.
    0:01:05 That’s vox.com slash members.

    We have an exciting announcement! Vox Members now get access to ad-free podcasts. If you sign up, you’ll get unlimited access to reporting on vox.com, exclusive newsletters, and all of our podcasts — including The Gray Area — ad-free. Plus, you’ll play a crucial role in helping our show get made.

    Check it out at vox.com/members.

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • The beliefs AI is built on

    AI transcript
    0:00:04 There’s over 500,000 small businesses in B.C. and no two are alike.
    0:00:05 I’m a carpenter.
    0:00:06 I’m a graphic designer.
    0:00:08 I sell dog socks online.
    0:00:12 That’s why BCAA created One Size Doesn’t Fit All insurance.
    0:00:15 It’s customizable based on your unique needs.
    0:00:18 So whether you manage rental properties or paint pet portraits,
    0:00:23 you can protect your small business with B.C.’s most trusted insurance brand.
    0:00:28 Visit bcaa.com slash smallbusiness and use promo code radio to receive $50 off.
    0:00:29 Conditions apply.
    0:00:39 There’s a lot of uncertainty when it comes to artificial intelligence.
    0:00:44 Technologists love to talk about all the good these tools can do in the world.
    0:00:46 All the problems they might solve.
    0:00:56 And yet, many of those same technologists are also warning us about all the ways AI might upend society.
    0:01:04 It’s not really clear which, if either, of these narratives is true.
    0:01:08 But three things do seem to be true.
    0:01:11 One, change is coming.
    0:01:15 Two, it’s coming whether we like it or not.
    0:01:20 Hell, even as I write this document, Google Gemini is asking me how it can help me today.
    0:01:21 It can’t.
    0:01:24 Today’s intro is 100% human-made.
    0:01:31 And finally, it’s abundantly clear that AI will affect all of us.
    0:01:39 Yet, very few of us have any say in how this technology is being developed and used.
    0:01:43 So, who does have a say?
    0:01:47 And why are they so worried about an AI apocalypse?
    0:01:51 And how are their beliefs shaping our future?
    0:01:57 I’m Sean Illing, and this is The Gray Area.
    0:02:13 My guest today is Vox host and editorial director, Julia Longoria.
    0:02:27 She spent nearly a year digging into the AI industry, trying to understand some of the people who are shaping artificial intelligence, and why so many of them believe that AI is a threat to humanity.
    0:02:33 She turned that story into a four-part podcast series called Good Robot.
    0:02:39 Most stories about AI are focused on how the technology is built and what it can do.
    0:02:51 Good Robot, instead, focuses on the beliefs and values, and most importantly, fears, of the people funding, building, and advocating on issues related to AI.
    0:03:07 What she found is a set of ideologies, some of which critics and advocates of AI adhere to, with an almost religious fervor, that are influencing the conversation around AI, and even the way the technology is built.
    0:03:22 Whether you’re familiar with these ideologies or not, they’re impacting your life, or certainly they will impact your life, because they’re shaping the development of AI as well as the guardrails, or lack thereof, around it.
    0:03:29 So I invited Julia onto the show to help me understand these values, and the people who hold them.
    0:03:39 Julia Longoria, welcome to the show.
    0:03:41 Thank you for having me.
    0:03:46 So, it was quite the reporting journey you went on for this series.
    0:03:48 It’s really, really well done.
    0:03:51 So, first of all, congrats.
    0:03:51 Thank you.
    0:03:56 Thank you for having me on that, and we’re actually going to play some clips from it today.
    0:03:57 I’m glad you enjoyed it.
    0:04:03 It’s, you know, I’m in that nerve-wracking first few weeks when it comes out, so it makes me feel good to hear that.
    0:04:13 So, going into this thing, you wanted to understand why so many people are worried about an AI apocalypse.
    0:04:18 And whether you should be afraid, too. We will get to the answers, I promise.
    0:04:23 But why were these the motivating questions for you?
    0:04:29 You know, I come to artificial intelligence as a normie, as people in the know called me.
    0:04:32 I don’t know much about it.
    0:04:33 I didn’t know much about it.
    0:04:38 But I had the sense, as an outsider, that the stakes were really high.
    0:04:52 And it seemed like people talked about it in a language that I didn’t understand, and talking about these stakes that felt like really epic, but kind of like impenetrable to someone who didn’t speak their language.
    0:05:02 So, I guess I just wanted to start out with, like, the biggest, most epic, like, almost most ignorant question, you know, like, okay, people are afraid.
    0:05:06 There, some people are afraid that AI could just wipe us all out.
    0:05:07 Where does that fear come from?
    0:05:18 And just have that be a starting point to break the ice of this area that, like, honestly has felt kind of intangible and hard for me to even wrap my head around.
    0:05:27 Yeah, I mean, I appreciate your normie status, because that’s the position almost all of us are in.
    0:05:35 You know, we’re on the outside looking in, trying to understand what the hell is happening here.
    0:05:40 What did being a normie mean to you as you waded into this world?
    0:05:46 I mean, did you find that that outside perspective was actually useful in your reporting?
    0:05:48 Definitely, yeah.
    0:05:52 I think that’s kind of how I try to come to any topic.
    0:05:59 Like, I’ve also reported on the Supreme Court, and that’s, like, another world that speaks its own dense, impenetrable language.
    0:06:06 And, you know, like the Supreme Court, like, artificial intelligence affects all of our lives deeply.
    0:06:20 And I feel like because it is such a, you know, sophisticated technology, and the people who work in it are so deep in it, it’s hard for normies to ask the more ignorant questions.
    0:06:31 And so I feel like having the microphone and being armed with, you know, my Vox byline, I was able to ask the dumb question.
    0:06:36 And, you know, I think I always said, like, you know, I know the answer to some of these questions.
    0:06:41 But I’m asking on behalf of, like, the listener.
    0:06:42 And sometimes I knew the answer.
    0:06:43 Sometimes I didn’t.
    0:07:04 I don’t know about you, but for me, and I’m sure a lot of people listening, it is maddening to be continually told that, you know what, we might be on the wrong end of an extinction event here, caused by this tiny minority of non-normies building this stuff.
    0:07:13 And that it’s possible for so few to make decisions that might unravel life for the rest of us is just, well, maddening.
    0:07:14 It is maddening.
    0:07:15 It is maddening.
    0:07:19 And to even hear it be talked about, like, this affects all of us.
    0:07:22 So shouldn’t we, shouldn’t it be the thing that we’re all talking about?
    0:07:29 But it feels like it’s reserved for a certain group of people who get to make the decisions and get to set the terms of the conversation.
    0:07:39 Let’s talk about the ideologies and all the camps that make up this weird, insular world of AI.
    0:07:44 And I want to start with the, what you call the AI safety camp.
    0:07:46 What is their deal?
    0:07:48 What should we know about them?
    0:07:54 So AI safety is a term that’s evolved over the years.
    0:08:07 But it’s kind of like people who fear that AI could be an existential risk to humanity, whether that’s like AI going rogue and doing things we didn’t want it to do.
    0:08:13 It’s about the biggest worry, I guess, of all of us being wiped out.
    0:08:18 We never talked about a cell phone apocalypse or an internet apocalypse.
    0:08:22 I guess maybe if you count Y2K.
    0:08:25 But even that wasn’t going to wipe out humanity.
    0:08:30 But the threat of an AI apocalypse, it feels like it’s everywhere.
    0:08:34 Mark my words, AI is far more dangerous than nukes.
    0:08:39 From billionaire Elon Musk to the United Nations.
    0:08:46 Today, all 193 members of the United Nations General Assembly have spoken in one voice.
    0:08:48 AI is existential.
    0:08:55 But then it feels like scientists in the know can’t even agree on what exactly we should be worried about.
    0:08:59 And where does the term AI safety come from?
    0:09:12 We trace the origin to a man named Eliezer Yudkowsky, who, you know, I think not all AI safety people today agree with Eliezer Yudkowsky.
    0:09:16 But basically, you know, Eliezer Yudkowsky wrote about this fear, actually, as a teenager.
    0:09:24 He became popular, sort of found his following, when he wrote a Harry Potter fan fiction.
    0:09:26 As one does.
    0:09:27 As one does.
    0:09:31 It’s actually one of the most popular Harry Potter fan fictions out there.
    0:09:34 It’s called Harry Potter and the Methods of Rationality.
    0:09:36 And he wrote it almost as a way.
    0:09:38 Love it.
    0:09:44 He wrote it almost as a way to get people to think differently about AI.
    0:09:53 He had thought deeply about the possibility of building an artificial intelligence that was smarter than human beings.
    0:09:55 Like, he kind of imagined this idea.
    0:10:02 And at first, he imagined it as a good robot, which is the name of the series, that could save us.
    0:10:13 But, you know, eventually he realized, like, or came to fear that it could probably go very poorly if we built something smarter than us, that it would, it could result in it killing us.
    0:10:18 So, anyway, that’s the origin, but it’s sort of, his ideas have caught on.
    0:10:27 OpenAI, actually, the CEO, Sam Altman, talks about how Eliezer was like an early inspiration for him making the company.
    0:10:36 They do not agree on a lot, because Eliezer thinks OpenAI, the ChatGPT company, is on track to cause an apocalypse.
    0:10:42 But, anyway, that’s, that’s the gist, is like, AI safety is like, AI could kill us all.
    0:10:43 How do we prevent that?
    0:10:51 So, it’s really, it’s about, it’s focused on the sort of long-range existential risks.
    0:10:51 Correct.
    0:10:53 And some people don’t think it’s long-range.
    0:10:57 Some of these people think that that could happen very soon.
    0:11:02 So, this Yudkowsky guy, right, he makes these two general claims, right?
    0:11:07 One is that we will build an AI that’s smarter than us, and it will change the world.
    0:11:14 And the second claim is that to get that right is extraordinarily difficult, if not impossible.
    0:11:19 Why does he think it’s so difficult to get this right?
    0:11:22 Why is he so convinced that we won’t?
    0:11:28 He thinks about this in terms of thought experiments.
    0:11:38 So, just kind of taking, taking this premise that we could build something that outpaces us at most tasks.
    0:11:47 He tries to explain the different ways this could happen with these, like, quirky parables.
    0:11:54 And we start with his most famous one, which is the paperclip maximizer thought experiment.
    0:12:00 Suppose, in the future, there is an artificial intelligence.
    0:12:13 We’ve created an AI so vastly powerful, so unfathomably intelligent, that we might call it superintelligent.
    0:12:18 Let’s give this superintelligent AI a simple goal.
    0:12:21 Produce…
    0:12:23 Paperclips
    0:12:33 Because the AI is superintelligent, it quickly learns how to make paperclips out of anything in the world.
    0:12:41 It can anticipate and foil any attempt to stop it, and will do so because its one directive is to make more paperclips.
    0:12:50 Should we attempt to turn the AI off, it will fight back because it can’t make more paperclips if it is turned off.
    0:12:55 And it will beat us because it is superintelligent and we are not.
    0:12:57 The final result?
    0:13:08 The entire galaxy, including you, me, and everyone we know, has either been destroyed or been transformed.
    0:13:15 Into paperclips.
    0:13:31 The gist is, we build something so smart we fail to understand it, how it works, and we could try to give it good goals to help improve our lives.
    0:13:39 But maybe that goal has an unintended consequence that could lead to something catastrophic that we couldn’t have even imagined.
    0:13:45 Right, and it’s such a good example because a paperclip is like the most innocuous, trivial thing ever, right?
    0:13:47 Like what could possibly go wrong?
    0:13:52 Is Yudkowsky, even within the safety camp, on the extremes?
    0:13:57 I mean, I went to his website, and I just want to read this quote.
    0:13:59 He writes,
    0:14:07 It’s obvious at this point that humanity isn’t going to solve the alignment problem, or even try very hard, or even go out with much of a fight.
    0:14:15 Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with slightly more dignity.
    0:14:17 I mean, come on, dude.
    0:14:19 It’s so dramatic.
    0:14:23 I mean, that, he seems convinced that the game is already up here.
    0:14:27 We’re just, we just don’t know how much sand is left in the hourglass.
    0:14:31 I mean, is he on the margins even within this camp, or is this a fairly representative view?
    0:14:32 Definitely, yeah.
    0:14:32 Okay.
    0:14:35 No, no, it’s, he’s on the margins, I would say.
    0:14:37 It’s, he’s like an extreme case.
    0:14:40 He had a big influence on the industry early on.
    0:14:47 So, in that sense, he, he was like an early influencer of all these people who ended up going into AI.
    0:14:50 A lot of people I talked to went into AI because of his writings.
    0:14:53 I can’t square that circle, right?
    0:14:54 If they were influenced by him.
    0:14:55 No.
    0:14:56 And this whole thing is, don’t do this, we’re going to die.
    0:14:58 Why are they doing it?
    0:15:03 To me, it felt like similar to the world of religion, almost like a schism.
    0:15:10 Believers in the superintelligence, and then people who thought we shouldn’t try and build it, and then the people who thought we should.
    0:15:21 Yeah, I mean, I, I guess with any kind of grand thinking about the fate of humanity, you end up with these, it starts to get very religious-y very quickly,
    0:15:26 even if it’s cloaked in the language of science and secularism, as this is.
    0:15:31 The religious part of it, I mean, did that, did the parallels there jump out to you pretty immediately?
    0:15:43 That, that the people at the level of ideology are treating this, thinking about this, as though it is a religious problem or a religious worldview?
    0:15:44 It really did.
    0:16:01 It did jump out at me really early, because I think, like, going into reporting on a technology, you expect to be kind of bogged down by technological language and terminology that’s, like, in the weeds of whatever, computer science or whatever it is.
    0:16:14 But, but the words that were hard to understand were, like, superintelligence and AGI, and then hearing about, you know, the CEO of OpenAI, Sam Altman, talking about a magic intelligence in the sky.
    0:16:18 And the question I had was, like, what are these guys talking about?
    0:16:21 But it was almost like they were talking about a god, is what it felt like to me.
    0:16:23 Yeah.
    0:16:24 All right.
    0:16:27 I have some thoughts on the religious thing, but let me table that for a second.
    0:16:30 I think we’ll, we’ll end up circling back to that.
    0:16:35 I want to finish our little survey of the, of the tribes, the gangs here.
    0:16:39 The other camp you talk about are the, the AI ethicists.
    0:16:40 What’s their deal?
    0:16:42 What are they concerned about?
    0:16:48 How are they different from the safetyists who are focused on these existential problems or risks?
    0:17:01 Yeah, the AI ethicists that I spoke to came to AI pretty early on, too, just a couple of years, maybe a few years, after Eliezer was writing about it.
    0:17:02 They were working on algorithms.
    0:17:06 They were working on AI as it existed in the world.
    0:17:08 So that, that was a key difference.
    0:17:11 They weren’t thinking about things in, like, these hypotheticals.
    0:17:24 But AI ethicists, where AI safety folks tend to worry about the ways in which AI could be an existential risk in the future, it could wipe us out.
    0:17:32 AI ethicists tended to worry about harms that AI was doing right now, in the present.
    0:17:56 Whether that was through, you know, governments using AI to surveil people, bias in AI data, the data that went into building AI systems, you know, racial bias, gender bias, and ways that algorithmic systems were making racist decisions, sexist decisions, decisions that were harmful to disabled people.
    0:17:57 They were worried about things now.
    0:17:59 Tell me about Margaret Mitchell.
    0:18:10 She’s a researcher and a colorful character in the series, and she’s an ethicist, and she coined the “everything is awesome” problem.
    0:18:12 Tell me about that.
    0:18:15 That’s an interesting example of the sorts of things they worry about.
    0:18:22 Yeah, so Margaret Mitchell was working on AI systems in the early days, like long before we had ChatGPT.
    0:18:28 She was working on a system at Microsoft that was vision to language.
    0:18:35 So it was taking a series of images of a scene and trying to describe it in words.
    0:18:43 And so she, you know, she was giving the system things like images of weddings or images of different events.
    0:18:50 And she gave the system a series of images of what’s called the Hempstead Blast.
    0:19:03 It was at a factory, and you could see from the sequence of images that the person taking the photo had like a third-story view sort of overlooking the explosion.
    0:19:11 So it was a series of pictures showing that there was this terrible explosion happening, and whoever was taking the photo was very close to the scene.
    0:19:20 So I put these images through my system, and the system says, wow, this is a great view.
    0:19:22 This is awesome!
    0:19:35 The system had learned from the images it was trained on that if you were taking an image from above, you know, looking down below, that that’s a great view.
    0:19:42 And that if there were, like, all these, you know, different colors, like in a sunset, which the explosion had made all these colors, that that was beautiful.
    0:19:52 And so she saw really early on before, you know, this AI moment that we’re living, that the data that these systems are trained on is crucial.
    0:20:01 And so her worry with systems like ChatGPT is that they’re trained on, like, basically the entire internet.
    0:20:08 And so the technologists making the system lose track of, like, what kinds of biases could be in there.
    0:20:13 And, yeah, this is, like, sort of her origin story of worrying about these things.
    0:20:25 And she went and worked for Google’s AI ethics team and later was fired after trying to get a paper published there about these worries.
    0:20:31 So why is the “everything is awesome” problem a problem, right?
    0:20:39 I mean, I guess someone may hear that and go, well, okay, that’s kind of goofy and quirky that an AI would interpret a horrible image in that way.
    0:20:44 But what actual harm is that going to cause in the world?
    0:20:45 Right.
    0:21:06 I mean, the way she puts it is, you know, if you were training a system to, like, launch missiles and you gave it some of its own autonomy to make decisions, like, you know, she was like, you could have a system that’s, like, launching missiles in pursuit of the aesthetic of beauty.
    0:21:10 So, in a sense, it’s a bit of a thought experiment on its own, right?
    0:21:19 It’s like she’s not worried about this in particular, but worried about implications for biased data in future systems.
    0:21:21 Yeah, it’s the same thing with the paperclip example, right?
    0:21:27 It’s just, it’s unintended, the bizarre and unintended consequences of these things, right?
    0:21:34 What seems goofy and quirky at first may, a few steps down the road, be catastrophic, right?
    0:21:38 And if you’re not, if you can’t predict that, maybe you should be a little careful about building it.
    0:21:40 Right, right, exactly.
    0:21:52 So, do the AI ethics people in general, do they think the concerns about an extinction event or existential threats, do they think those concerns are valid?
    0:22:01 Or do they think they’re mostly just science fiction and a complete distraction from, you know, actual present-day harms?
    0:22:10 I should say at the outset that, you know, I found that the AI ethics and AI safety camps, they’re less camps and more of a spectrum.
    0:22:18 So, I don’t want to say that every single AI ethics person I spoke to was like, these existential risks are nonsense.
    0:22:28 But by and large, people I spoke to in the ethics camp said that these existential risks are a distraction.
    0:22:37 It’s like this epic fear that’s attention grabbing and, you know, goes viral and takes away from the harms that AI is doing right now.
    0:22:45 It takes away attention from those things and it, crucially, in their view, takes away resources from fighting those kinds of harms.
    0:22:46 In what way?
    0:23:04 You know, I think when it comes to funding, if you’re like a billionaire who wants to give money to companies or charities or, you know, causes and you want to leave a legacy in the world, I mean, do you want to make sure that data and AI systems is unbiased or do you want to make sure that you save humanity from apocalypse, you know?
    0:22:04 Yeah. I should ask about the effective altruists.
    0:23:15 They’re another camp, another school of thought, another tradition of thought, whatever you want to call it, that you talk about in the series.
    0:23:19 How do they fit in to the story? Or how are they situated?
    0:23:24 Yeah. So, effective altruism is a movement that’s had an effect on the AI industry.
    0:23:37 It’s also had an effect on Vox. Future Perfect is the Vox section that we collaborated with to make Good Robot and it was actually inspired by effective altruism.
    0:23:52 The whole point of the effective altruism movement is to try to do the most good in the world and EA, as it’s sometimes called, comes up with a sort of formula for how to choose which causes you should focus on and put your efforts toward.
    0:24:07 So, early rationalists like Eliezer Yudkowsky encountered early effective altruists and tried to convince them that the highest stakes issue of our time, the cause that they should focus on is AI.
    0:24:18 Effective altruism is traditionally known to give philanthropic dollars to things like malaria nets, but they also gave philanthropic dollars to saving us from an AI apocalypse.
    0:24:27 And so a big part of how the AI safety industry was financed is that effective altruism rallied around it as a cause.
    0:24:36 These are the people who think we really have an obligation to build a good robot in order to protect future humans.
    0:24:40 And again, I don’t know what they mean by good.
    0:24:44 I mean, good and bad, those are value judgments.
    0:24:45 This is morality, not science.
    0:24:49 There’s no utility function for humanity.
    0:24:58 It’s like, I don’t know who’s defining the goodness of the good robot, but I’ll just say that I don’t think it’s as simple as some of these technologists seem to think it is.
    0:25:03 And maybe I’m just being annoying philosophy guy here, but whatever, here I am.
    0:25:12 Yeah, no, I think everyone in the AI world that I talk to just like was really striving toward the good, like whatever that looked like.
    0:25:17 Like AI ethics saw like the good robot as a specific set of values.
    0:25:23 And folks in effective altruism were also like baffled by like, how do I do the most good?
    0:25:28 And trying to use math to, you know, put a utility function on it.
    0:25:35 And it’s like, the truth is a lot more messy than a math problem of how to do the most good.
    0:25:36 You can’t really know.
    0:25:41 And yeah, I think sitting in the messiness is hard for a lot of us.
    0:25:50 And I don’t know how you do that when you’re fully aware that you’re building or attempting to build something that you don’t fully understand.
    0:25:51 That’s exactly right.
    0:26:01 Like in the series, like we tell the story of effective altruism through the parable of the drowning child, of this child who’s drowning in a pond, a shallow pond.
    0:26:04 Okay.
    0:26:08 On your way to work, you pass a small pond.
    0:26:14 Children sometimes play in the pond, which is only about knee deep.
    0:26:17 The weather’s cool, though, and it’s early.
    0:26:21 So you’re surprised to see a child splashing about in the pond.
    0:26:31 As you get closer, you see that it is a very young child, just a toddler, who’s flailing about, unable to stay upright or walk out of the pond.
    0:26:35 You look for the parents or babysitter, but there’s no one else around.
    0:26:40 The child is unable to keep her head above the water for more than a few seconds at a time.
    0:26:43 If you don’t wade in and pull her out, she seems likely to drown.
    0:26:53 Wading in is easy and safe, but you will ruin the new shoes you bought only a few days ago and get your suit wet and muddy.
    0:27:01 By the time you hand the child over to someone responsible for her and change your clothes, you’ll be late for work.
    0:27:04 What should you do?
    0:27:12 Are you going to save it even though you ruin your suit?
    0:27:15 Everyone answers, yes.
    0:27:23 And this sort of utilitarian philosophy behind effective altruism asks, well, what if that child were far away from you?
    0:27:25 Would you still save it if it was oceans away from you?
    0:27:28 And that’s where you get to malaria nets.
    0:27:32 You’re going to donate money to save children across an ocean.
    0:27:38 But, yeah, this idea of, like, well, what if the child hasn’t been born yet?
    0:27:43 And that’s the future child that would die from an AI apocalypse.
    0:27:49 But, like, abstracting things so far in advance, you could really just justify anything.
    0:27:51 And that’s the problem, right?
    0:27:52 Yeah, right.
    0:28:09 Of focusing on the long term in that way, the willingness to maybe overlook or sacrifice present harms in service to some unknown future, that’s a dangerous thing.
    0:28:19 There are dangers in being willfully blind to present harms because you think there’s some more important or some more significant harm down the road.
    0:28:27 And you’re willing to sacrifice that harm now because you think it’s, in the end, justifiable.
    0:28:30 Yeah, at what point are you starting to play God, right?
    0:28:37 So I come from the world of political philosophy, and in that maybe equally weird world.
    0:28:48 Whenever you have competing ideologies, what you find at the root of those disagreements are very different views about human nature, really.
    0:28:53 And all the differences really spring from that divide.
    0:28:58 Is there something similar at work in these AI camps?
    0:29:11 Do you find that these people that you talk to have different beliefs about how good or bad people are, different beliefs about what motivates us, different beliefs about our ability to cooperate and solve problems?
    0:29:15 Is there a core dispute at that basic level?
    0:29:22 There’s a pretty striking demographic difference between AI safety folks and AI ethics folks.
    0:29:26 Like, I went to a conference, two conferences, one of each.
    0:29:38 And so immediately you could see, like, AI safety folks were skewed white and male, and AI ethics folks skewed, like, more people of color, more women.
    0:29:44 And so, like, people talked about blind spots that each camp had.
    0:30:01 And so if you’re, you know, a white male moving around the world, like, you’re not fearing the sort of, like, racist, sexist, ableist, like, consequences of AI systems today as much, because it’s just not in your view.
    0:30:30 It’s been a rough week for your retirement account, your friend who imports products from China for the TikTok shop, and also Hooters.
    0:30:35 Hooters has now filed for bankruptcy, but they say they are not going anywhere.
    0:30:39 Last year, Hooters closed dozens of restaurants because of rising food and labor costs.
    0:30:47 Hooters is shifting away from its iconic skimpy waitress outfits and bikini days, instead opting for a family-friendly vibe.
    0:30:54 They’re vowing to improve the food and ingredients, and staff is now being urged to greet women first when groups arrive.
    0:30:57 Maybe in April of 2025, you’re thinking, good riddance?
    0:31:01 Does the world still really need this chain of restaurants?
    0:31:09 But then we were surprised to learn of who exactly was mourning the potential loss of Hooters.
    0:31:11 Straight guys who like chicken, sure.
    0:31:14 But also a bunch of gay guys who like chicken?
    0:31:19 Check out Today Explained to find out why exactly that is, won’t ya?
    0:31:36 Did all the people you spoke to, regardless of the camps they were in, did they all more
    0:31:43 or less agree that what we’re doing here is attempting to build God, or something God-like?
    0:31:45 No, I think no.
    0:31:52 A lot of, I would say a lot of the AI safety people I spoke to like bought into this idea
    0:31:55 of a super intelligence and a God-like intelligence.
    0:31:59 I should say, I don’t think that’s every AI safety person by any means.
    0:32:06 But AI ethics people, for the most part, just didn’t buy it.
    0:32:14 Everyone I spoke to talked about it as being just AI hype, a way to, like, amp up the capability of this
    0:32:18 technology that’s really in its infancy and is not God-like at this point.
    0:32:27 I saw that when Sam Altman, the CEO of OpenAI, was on Joe Rogan’s podcast, he was asked
    0:32:30 whether they’re attempting to build God and he said, I have the quote here, I guess it comes
    0:32:35 down to a definitional disagreement about what you mean by it becomes a God.
    0:32:39 I think whatever we create will be subject to the laws of physics in this universe.
    0:32:40 Okay.
    0:32:44 So, so God or no God.
    0:32:45 Right.
    0:32:45 Yeah.
    0:32:47 I mean, he’s called it that, though.
    0:32:49 I don’t know if it’s tongue in cheek.
    0:32:53 It’s all like very, you know, hard to read, but he’s called it like the magic intelligence
    0:32:54 in the sky.
    0:33:02 And Anthropic’s CEO has called AI systems machines of loving grace, which sounds like this is religious
    0:33:03 language, you know?
    0:33:04 Okay.
    0:33:05 Come on now.
    0:33:09 What in the world is that supposed to mean?
    0:33:12 What is a machine of loving grace?
    0:33:14 Does he know what that means?
    0:33:21 I think it’s like this, you know, it’s a very optimistic view of what machines can do for
    0:33:21 us.
    0:33:26 Like, you know, the idea that machines can help us cure cancer.
    0:33:27 And I don’t know.
    0:33:32 I think that’s ultimately probably what he means, but it does, there’s an element of
    0:33:36 it that I just completely, you know, roll my eyes, raise my eyebrows at where it’s like,
    0:33:43 I don’t think we should be so reverent of a technology that’s like flawed and needs to
    0:33:44 be regulated.
    0:33:47 And I think that reverence is dangerous.
    0:33:55 Why do you think it matters that people like Altman or the CEO of Anthropic have reverence
    0:33:57 or have reverence for machines, right?
    0:33:59 Who cares if they think they’re building God?
    0:34:03 Does it matter really in terms of what it will be and how it will be deployed?
    0:34:11 Well, I think that if you believe you’re, if you have these sorts of delusions of grandeur
    0:34:16 about what you’re making and if you talk about it as a machine of loving grace, like, I don’t
    0:34:23 know, it seems like you don’t have the level of skepticism that I want you to be having.
    0:34:27 And we’re not regulating these companies at this point.
    0:34:29 We’re relying on them to regulate themselves.
    0:34:34 So yeah, it’s a little worrying when you talk about building something so powerful.
    0:34:37 And so intelligent and you’re not being checked.
    0:34:38 Yeah.
    0:34:43 I don’t expect my toaster to tell me it loves me in the morning, right?
    0:34:45 I just want my bagels crispy.
    0:34:48 But I understand that my toaster is a technology.
    0:34:49 It’s a tool with a function.
    0:34:55 To talk about machines of loving grace suggests to me that these people do not think they’re
    0:34:56 just building tools.
    0:34:58 They think they’re building creatures.
    0:34:59 They think they’re building God.
    0:35:00 Yeah.
    0:35:05 And, you know, Margaret Mitchell, as you’ll hear in the series, she talks about how she
    0:35:07 thinks we shouldn’t be building a God.
    0:35:13 We should be building, you know, machines, AI systems that are going to fulfill specific
    0:35:14 purposes.
    0:35:18 Like specifically, she talks about a smart toaster that makes really good toast.
    0:35:26 And I don’t think she means a toaster in particular, but just building systems that are designed
    0:35:32 to help humans achieve a certain goal, like something specific out in the world.
    0:35:40 Whether that’s, you know, like helping us figure out how proteins fold or helping us figure out
    0:35:45 how animals communicate, which are some of the things that we’re using AI to do in a narrow way.
    0:35:52 She talks about this as an artificial narrow intelligence, as distinct from artificial general
    0:35:58 intelligence, which is sort of the super intelligent God AI that’s, you know, quote unquote, smarter
    0:36:00 than us at most tasks.
    0:36:07 I mean, this is an old idea in the history of philosophy that God is like fundamentally
    0:36:09 just a projection of human aspirations, right?
    0:36:15 That our image of God is really a mirror that we’ve created, a mirror that reflects our idea
    0:36:17 of a perfect being, a being in our image.
    0:36:23 And this is something you talk about in the series, and that this is what we’re doing with AI.
    0:36:31 We’re building robots in our image, which, you know, raises the question, well, in whose image exactly, right?
    0:36:35 If AI is a mirror, it’s not a mirror of all of us, is it, right?
    0:36:37 It’s a mirror of the people building it.
    0:36:44 And the people building it are, I would say, not representative of the entire human race.
    0:36:53 Yeah, you’ll hear in the series, like, I latched on to this idea of, like, AI is a mirror of us.
    0:36:58 And that’s so interesting that, like, yeah, God, the concept of God is also like a mirror.
    0:37:04 But if you think about it, I mean, large language models are made from basically the Internet,
    0:37:09 which is, like, all of our thoughts and our musings as humans on the Internet.
    0:37:14 It’s a certain lens on human behavior and speech.
    0:37:22 But it’s also, yeah, like, AI is, like, the decisions that its creators make of what data to use,
    0:37:25 of how to train the system, how to fine-tune it.
    0:37:30 And when I used ChatGPT, it was very complimentary of me.
    0:37:33 And I found it to be this almost, like, smooth, smooth…
    0:37:35 It charmed you. You got charmed.
    0:37:39 Yeah, I got charmed. It was, like, so, it gave me the compliments I wanted to hear.
    0:37:48 And I think it’s, like, this smooth, frictionless version of humanity where it compliments us and makes us feel good.
    0:37:53 And it also, like, you know, you don’t have to write that letter of recommendation for your person.
    0:37:55 You don’t have to write that email.
    0:37:57 You could just… It’s just smooth and frictionless.
    0:38:10 And I worry that, you know, in making this, like, smooth mirror of humanity, like, where do we lose our humanity if we keep relying, like, keep ceding more and more to AI systems?
    0:38:17 I want it to be a tool to help us, like, achieve our goals rather than, like, this thing that replaces us.
    0:38:24 Yeah, I won’t lie. I mean, I did. I just recently got my chat GPT account.
    0:38:29 And I did ask it what it thought of Sean Illing, host of the Gray Area podcast.
    0:38:30 What did it say?
    0:38:31 And it was very complimentary.
    0:38:35 It’s extremely, extremely generous.
    0:38:38 And I was like, oh, shit, yeah, this thing gets it.
    0:38:41 Oh, this is okay. All right.
    0:38:41 Maybe it is a god.
    0:38:42 Now I trust it.
    0:38:45 Clearly it’s an all-knowing, omnipotent one.
    0:38:53 That’s what I came away with, like, you know, from the series and the reporting is, like, I think before I used to be very afraid of AI and using it and not knowing.
    0:38:59 And now I feel, like, armed to be skeptical in the right ways and to try to use it for good.
    0:39:03 Yeah. So that’s what I hope people get out of the series anyway.
    0:39:15 Are you worried about us losing our humanity or just becoming so different that we don’t recognize ourselves anymore?
    0:39:20 I am worried that it’ll just make us more isolated.
    0:39:30 And it’s so good at giving us what we want to hear that we won’t, like, you know, find the friction, search for the friction in life that makes life worth living.
    0:39:42 Yeah, yeah. So, look, I mean, the different camps may disagree about a lot, but they seem to converge on the basic notion that this technology is transformative.
    0:39:45 It’s going to transform our lives.
    0:39:55 It’s probably going to transform the economy and the way this stuff gets developed and deployed and the incentives driving it are really going to matter.
    0:40:08 Is it your sense that checks and balances are being put in place to guide this transformation so that it does benefit more people than it hurts, or at least as much as possible?
    0:40:11 I mean, was this something you explored in your reporting?
    0:40:17 Yeah, I mean, you know, I think a lot of the people I spoke to really wanted regulation.
    0:40:25 But I think ultimately, like, there isn’t really regulation in the U.S. on the AI safety front or the AI ethics front.
    0:40:32 The technology is dramatically outpacing regulators’ ability to regulate it.
    0:40:35 So, that’s troubling. Like, it’s not great.
    0:40:42 I would imagine the ethicists would be a little more focused on imposing regulations now.
    0:40:45 But it doesn’t seem like they’re making a lot of headway on that front.
    0:40:48 I’m not sure how regulatable it is.
    0:41:07 Yeah, I think that was one of my frustrations just listening to all this infighting was, like, I felt like these two groups that, like, they have a lot in common and they should be pursuing, like, a common goal of getting some good regulation, of, you know, having some strong safeguards in place for both AI safety and AI ethics concerns.
    0:41:16 And ultimately, you know, we tell the story of how some of them did come together to write an open letter calling for both kinds of regulations.
    0:45:21 And, you know, that’s encouraging to see, people working together.
    0:45:29 But ultimately, I don’t think they’ve made, at this point, strides in getting anything significant passed.
    0:41:30 You know, it’s interesting.
    0:41:33 You’re reporting on this in the series.
    0:41:38 And our employer, Vox, has a deal with OpenAI.
    0:41:45 And in the course of your reporting, you were trying to find out what you could about that deal.
    0:41:49 How did that go, if you’re comfortable talking about it?
    0:41:50 Yeah, yeah.
    0:41:55 Yeah, so, we should say, the parent company of Vox, Vox Media.
    0:41:58 I know the language I need to use.
    0:42:00 I have it down pat, as you can’t tell.
    0:42:14 But, you know, kind of shortly after we decided to tackle AI in this series, we learned that Vox Media was entering a partnership with OpenAI, the ChatGPT company.
    0:42:20 We learned it meant that OpenAI could train its models on our journalism.
    0:42:29 And I guess for personally, it just felt like I wanted to know if they were training on my voice, you know?
    0:42:30 Yeah, me too.
    0:42:33 That, to me, feels really, yeah, really personal.
    0:42:35 Like, there’s so much emotional information in a voice.
    0:42:42 Like, I feel very naked going out on air and having people listen to my voice.
    0:42:46 And I spend so much time carefully crafting what I say.
    0:42:52 And so the idea that they would train on my voice, and do what with it?
    0:42:52 I don’t know.
    0:42:56 One of our editors pointed out, like, that’s part of the story.
    0:42:59 You know, like, AI is, like, entering our lives.
    0:43:03 More and more AI systems and robots are entering our lives and having this.
    0:43:10 And for me personally, it’s like, yeah, like, literally, my work, our work is being used to train these systems.
    0:43:13 Like, what does that mean for us, for our work?
    0:43:20 It felt, and, you know, I reached out to Vox Media and to OpenAI for an interview.
    0:43:27 And they both declined, which made it feel even, you know, just, you feel really helpless.
    0:43:35 And, I mean, there’s not much more answers that I have than that.
    0:43:39 Yeah, well, I mean, you even interview a guy on the show.
    0:43:41 You know, he’s a former OpenAI employee.
    0:43:46 You know, and you’re raising these concerns and he’s sort of dismissive of it, right?
    0:43:49 Like, you know, whatever data they’re getting.
    0:43:49 He just laughed at us.
    0:44:00 I would be quite surprised if the data provided by Vox is itself very valuable to OpenAI.
    0:44:03 I would imagine it’s a tiny, tiny drop in that bucket.
    0:44:10 If all of ChatGPT’s training data were to fit inside the entire Atlantic Ocean,
    0:44:17 then all of Vox’s journalism would be like a few hundred drops in that ocean.
    0:44:22 Rightly, you’re like, well, fuck, it matters to me.
    0:44:27 It’s my work, it’s my voice, and it may eventually be my job, right?
    0:44:33 And the point here is, like, that this is a thing now that our job,
    0:44:39 the fact that our job and many other jobs are already tangled up with AI in this way,
    0:44:42 it’s just a reminder that this isn’t the future, right?
    0:44:49 It’s here now, and it’s only going to get more strange and complicated.
    0:44:50 Totally, yeah.
    0:44:56 And I don’t know, I guess I understand, like, the impulse from, like, from Vox Media to be like,
    0:45:01 okay, we want to have, we want to be compensated for, you know,
    0:45:05 licensing our journalists’ work who work so hard and we pay them.
    0:45:15 But it feels, yeah, it just feels like, it feels weird to not have a say when it’s the work you’re doing.
    0:45:50 So, have your views on AI in general changed all that much after doing this series?
    0:45:57 I mean, you say at the end that when you look at AI, just what you see is a funhouse mirror.
    0:45:59 What does that mean?
    0:46:07 AI, like a lot of our technologies, and I guess like our visions of God, as you talk about, is a reflection of ourselves.
    0:46:17 And so, I think it was a comforting realization to me to realize that, like, the story of AI is not some, like, technological story I can’t understand.
    0:46:26 Like, the story of AI is a story about humans who are trying really hard to make a technology good and failing to varying degrees.
    0:46:44 But, yeah, I think fundamentally the course of, like, reporting it for me just brought the technology down to earth and made me a little more empowered to ask questions, to be skeptical, and to use it in my life with the right amount of skepticism.
    0:46:48 So, what do you hope people get out of this series?
    0:46:54 Normies who enter into it, you know, without a sort of solidified position on it.
    0:46:56 What do you hope they take away from it?
    0:47:14 I hope that people who didn’t feel like they had any place in the conversation around AI will feel, like, invited to the table and will be more informed and skeptical and curious and excited about the technology.
    0:47:18 And I hope that it brings it down to earth a little bit.
    0:47:21 Julia Longoria, this has been a lot of fun.
    0:47:23 Thank you so much for coming on the show.
    0:47:27 And the series, once again, is called Good Robot.
    0:47:28 It is fantastic.
    0:47:30 You should go listen to it immediately.
    0:47:31 Thank you.
    0:47:32 Thank you.
    0:47:41 All right.
    0:47:43 I hope you enjoyed this episode.
    0:47:53 If you want to listen to Julia’s Good Robot series, and of course you do, you can find all four episodes in the Vox Unexplainable podcast feed.
    0:47:57 We’ll drop a link to the first episode in the show notes.
    0:48:00 And as always, we want to know what you think.
    0:48:04 So drop us a line at the gray area at vox.com.
    0:48:12 Or you can leave us a message on our new voicemail line at 1-800-214-5749.
    0:48:17 And once you’re done with that, please go ahead, rate, review, subscribe to the pod.
    0:48:19 That stuff really helps.
    0:48:32 This episode was produced by Beth Morrissey, edited by Jorge Just, engineered by Erica Wong, fact-checked by Melissa Hirsch, and Alex Overington wrote our theme music.
    0:48:35 New episodes of the gray area drop on Mondays.
    0:48:37 Listen and subscribe.
    0:48:39 The show is part of Vox.
    0:48:43 Support Vox’s journalism by joining our membership program today.
    0:48:47 Members get access to this show without any ads.
    0:48:49 Go to vox.com/members to sign up.
    0:48:53 And if you decide to sign up because of this show, let us know.

    There’s a lot of uncertainty when it comes to artificial intelligence. Technologists love to talk about all the good these tools can do in the world, all the problems they might solve. Yet, many of those same technologists are also warning us about all the ways AI might upend society, how it might even destroy humanity.

    Julia Longoria, Vox host and editorial director, spent a year trying to understand that dichotomy. The result is a four-part podcast series — called Good Robot — that explores the ideologies of the people funding, building, and driving the conversation about AI.

    Today Julia speaks with Sean about how the hopes and fears of these individuals are influencing the technology that will change all of our lives.

    Host: Sean Illing (@SeanIlling)

    Guest: Vox Host and Editorial Director Julia Longoria

    Good Robot is available in the Vox Unexplainable feed.

    Episode 1

    Episode 2

    Episode 3

    Episode 4

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • Stop comparing yourself to AI

    AI transcript
    0:00:01 Are you forgetting about that chip in your windshield?
    0:00:03 It’s time to fix it.
    0:00:05 Come to Speedy Glass before it turns into a crack.
    0:00:08 Our experts will repair your windshield in less than an hour,
    0:00:09 and it’s free if you’re insured.
    0:00:12 Book your appointment today at speedyglass.ca.
    0:00:14 Details and conditions at speedyglass.ca.
    0:00:20 What do you think about when you think about AI?
    0:00:25 Maybe chatbots giving you new lasagna recipes,
    0:00:29 research assistants helping you finish that paper.
    0:00:33 Do you think about machines taking your job?
    0:00:37 Maybe you think of something even more ominous,
    0:00:41 like Skynet robots wiping out humanity.
    0:00:46 If you’re like me, you probably think of all those things,
    0:00:47 depending on the day.
    0:00:50 And that’s sort of the point.
    0:00:55 AI is not well understood, even by the people creating it.
    0:00:58 And even though we all know it’s a technology
    0:01:00 that’s going to change our lives,
    0:01:03 that’s really all we know at this point.
    0:01:10 So how do we confront this uncertainty?
    0:01:13 How do we navigate the current moment?
    0:01:17 And how do we, the people who have been told
    0:01:19 that we will be impacted by AI,
    0:01:21 but don’t seem to have much of a say
    0:01:23 in how the AI is being built,
    0:01:26 engage in the conversation?
    0:01:31 I’m Sean Illing, and this is The Gray Area.
    0:01:45 Today’s guest is Jaron Lanier.
    0:01:48 He’s a virtual reality pioneer,
    0:01:50 a digital philosopher,
    0:01:54 and the author of several best-selling books on technology.
    0:01:57 He’s also one of the most profound critics
    0:02:01 of Silicon Valley and the business model driving it.
    0:02:04 I wanted to bring Jaron on the show
    0:02:07 for the first episode of this special series on AI
    0:02:10 because I think he’s uniquely positioned
    0:02:14 to speak both to the technological side of AI,
    0:02:16 what’s happening, where it’s going,
    0:02:20 and also to the human side.
    0:02:24 Jaron’s a computer scientist who loves technology.
    0:02:29 But at his core, he’s a humanist
    0:02:32 who’s always thinking about what technologies are doing to us
    0:02:36 and how our understanding of these tools
    0:02:39 will inevitably determine how they’re used.
    0:02:43 Maybe what Jaron does the best, though,
    0:02:45 is offer a different lens
    0:02:47 through which to view these technologies.
    0:02:51 We’re encouraged to treat these machines
    0:02:54 as though they’re godlike,
    0:02:56 as though they’re thinking for themselves.
    0:03:01 Indeed, they’re designed to make you feel that way
    0:03:04 because it adds to the mystique around them
    0:03:07 and obscures the truth about how they really work.
    0:03:12 But Jaron’s plea is to be careful
    0:03:15 about thoughtlessly adopting the language
    0:03:17 that the AI creators give us
    0:03:18 to describe their creation
    0:03:21 because that language structures
    0:03:25 not only how we think about these technologies,
    0:03:27 but what we do with them.
    0:03:35 Jaron Lanier, welcome to the show.
    0:03:36 That’s me. Hey.
    0:03:39 So look, I have heard
    0:03:43 so many of these big picture conversations about AI
    0:03:48 and they often begin with a question
    0:03:52 about how or whether AI is going to take over the world.
    0:03:55 But I discovered very quickly
    0:03:57 that you don’t accept the terms of that question,
    0:03:59 which is why I’m not going to ask it.
    0:04:01 But I thought it would be useful
    0:04:03 as a beginning to ask you
    0:04:05 why you find questions like that
    0:04:07 or claims like that ridiculous.
    0:04:10 Oh, well, you know,
    0:04:12 when it comes to AI,
    0:04:15 the whole technical field
    0:04:16 is kind of defined
    0:04:19 by an almost metaphysical assertion,
    0:04:22 which is we are creating intelligence.
    0:04:23 Well, what is intelligence?
    0:04:26 Something human.
    0:04:28 The whole field was founded
    0:04:31 by Alan Turing’s thought experiment
    0:04:32 called the Turing test,
    0:04:37 where if you can fool a human
    0:04:38 into thinking you’ve made a human,
    0:04:40 then you might as well have made a human
    0:04:42 because what other tests could there be?
    0:04:45 Which in a way is fair enough.
    0:04:45 On the other hand,
    0:04:47 what other scientific field
    0:04:50 other than maybe supporting stage magicians
    0:04:53 is entirely based on being able to fool people?
    0:04:53 I mean, it’s stupid.
    0:04:56 Fooling people in itself accomplishes nothing.
    0:04:58 There’s no productivity.
    0:04:59 There’s no insight
    0:05:01 unless you’re studying
    0:05:03 the cognition of being fooled, of course.
    0:05:06 So there’s an alternative way
    0:05:07 to think about what we do
    0:05:09 with what we call AI,
    0:05:12 which is that there’s no new entity.
    0:05:14 There’s nothing intelligent there.
    0:05:16 What there is is a new
    0:05:17 and in my opinion,
    0:05:18 sometimes quite useful
    0:05:21 form of collaboration between people.
    0:05:23 If you look at something like the Wikipedia,
    0:05:25 where people mash up
    0:05:27 a lot of their communications into one thing,
    0:05:30 you can think of that as a step on the way
    0:05:32 to what we call large model AI,
    0:05:34 where we take all the data that we have
    0:05:35 and we put it together
    0:05:39 in a way that allows more interpolation
    0:05:43 and more commingling than previous methods.
    0:05:47 And I think that can be of great use,
    0:05:49 but I don’t think there’s any requirement
    0:05:52 that we perceive that as a new entity.
    0:05:53 Now, you might say,
    0:05:54 well, what’s the harm if we do?
    0:05:56 That’s a fair question.
    0:05:57 Like, who cares?
    0:05:58 If somebody wants to think of it
    0:06:00 as a new type of person
    0:06:02 or even a new type of God or whatever,
    0:06:03 what’s wrong with that?
    0:06:06 Potentially nothing.
    0:06:08 People believe all kinds of things all the time.
    0:06:12 But, in the case of our technology,
    0:06:15 let me put it this way.
    0:06:19 If you’re a mathematician or a scientist,
    0:06:25 you can do what you do
    0:06:27 in a kind of an abstract way.
    0:06:28 Like, you can say,
    0:06:30 I’m furthering math.
    0:06:33 And, in a way, that’ll be true
    0:06:35 even if nobody else ever even perceives
    0:06:36 that I’ve done it.
    0:06:37 I’ve written down this proof.
    0:06:40 But that’s not true for technologists.
    0:06:43 Technologists only make sense
    0:06:46 if there’s a designated beneficiary.
    0:06:49 Like, you have to make technology for someone.
    0:06:52 And, as soon as you say
    0:06:56 the technology itself is a new someone,
    0:07:00 you stop making sense as a technologist.
    0:07:01 Right?
    0:07:03 Let me actually take up that question
    0:07:04 that you just posed a second ago
    0:07:05 with a thought,
    0:07:07 I’ve heard from you,
    0:07:09 which is something to the effect of,
    0:07:11 I think the way you put it is
    0:07:13 the easiest way to mismanage a technology
    0:07:15 is to misunderstand it.
    0:07:17 So, to answer your question…
    0:07:18 Sounds like me, I guess.
    0:07:19 Yeah. Okay.
    0:07:22 If we make the mistake,
    0:07:23 which is now common,
    0:07:26 to insist that AI is, in fact,
    0:07:28 some kind of god or creature
    0:07:30 or entity or oracle,
    0:07:31 whatever term you prefer,
    0:07:33 instead of a tool as you define it,
    0:07:34 the implication is that
    0:07:37 that would be a consequential mistake, right?
    0:07:39 That we will mismanage the technology
    0:07:40 by misunderstanding it.
    0:07:41 So, is that not quite right?
    0:07:42 Am I not quite understanding?
    0:07:43 No, I think that’s right.
    0:07:46 I think when you treat the technology
    0:07:47 as its own beneficiary,
    0:07:49 you miss a lot of opportunities
    0:07:50 to make it better.
    0:07:52 Like, I see this in AI all the time.
    0:07:53 I see people saying,
    0:07:55 well, if we did this,
    0:07:56 it would pass the Turing test better,
    0:07:57 and if we did that,
    0:07:58 it would seem more like
    0:07:59 it was an independent mind.
    0:08:01 But those are all goals
    0:08:01 that are different
    0:08:04 from it being economically useful.
    0:08:05 They’re different from it
    0:08:08 being useful to any particular user.
    0:08:09 They’re just these weird,
    0:08:12 to me, almost religious ritual goals
    0:08:13 or something.
    0:08:15 And so every time
    0:08:16 you’re devoting yourself to that,
    0:08:18 it means you’re not devoting yourself
    0:08:20 to making it better.
    0:08:22 Like, an example is,
    0:08:25 we have, in my view,
    0:08:28 deliberately designed large model AI
    0:08:32 to obscure the original human sources
    0:08:34 of the data that the AI is trained on
    0:08:36 to help create this illusion
    0:08:37 of the new entity.
    0:08:38 But when we do that,
    0:08:41 we make it harder to do quality control.
    0:08:43 We make it harder to do authentication
    0:08:48 and to detect malicious uses of the model
    0:08:52 because we can’t tell what the intent is,
    0:08:54 what data it’s drawing upon.
    0:08:56 We’re sort of willfully making ourselves
    0:08:58 kind of blind in a way
    0:09:00 that we probably don’t really need to.
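    To make the provenance point concrete, here is a minimal sketch in Python of the alternative Lanier is gesturing at: training records that keep their human sources attached, so quality control, authentication, and credit remain possible. The record fields and names are invented for illustration; this is not how any production model actually stores its data.

        # Hypothetical sketch: training data that keeps source attribution,
        # the opposite of the obscuring Lanier describes. Fields are invented.
        from dataclasses import dataclass

        @dataclass
        class Record:
            text: str      # the training text itself
            author: str    # the original human source
            license: str   # the terms it was shared under

        corpus = [
            Record("How to fix a bike chain ...", "alice", "CC-BY"),
            Record("Bike chain repair guide ...", "bob", "CC-BY-SA"),
        ]

        def provenance(records):
            # With sources preserved, you can audit outputs and credit people.
            return sorted({r.author for r in records})

        print(provenance(corpus))  # ['alice', 'bob']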
    0:09:01 And I really want to emphasize
    0:09:03 from a metaphysical point of view,
    0:09:05 I can’t prove,
    0:09:06 and neither can anyone else,
    0:09:08 that a computer is alive or not
    0:09:09 or conscious or not or whatever.
    0:09:11 I mean, all that stuff
    0:09:13 is always going to be a matter of faith.
    0:09:15 That’s just the way it is.
    0:09:17 That’s what we got around here.
    0:09:19 But what I can say
    0:09:21 is that this emphasis
    0:09:22 on trying to make the models
    0:09:25 seem like they’re freestanding new entities
    0:09:27 does blind us
    0:09:29 to some ways we could make them better.
    0:09:30 And so I think, like, why bother?
    0:09:32 What do we get out of that?
    0:09:32 Not a lot.
    0:09:34 So do you think maybe
    0:09:35 the cardinal mistake
    0:09:37 with a lot of this kind of thinking
    0:09:38 is to assume
    0:09:42 that artificial intelligence
    0:09:43 is something that’s in competition
    0:09:45 with human intelligence
    0:09:46 and human abilities,
    0:09:47 that that kind of misunderstanding
    0:09:48 sets us off on a course
    0:09:50 for a lot of other kinds
    0:09:51 of misunderstandings?
    0:09:53 I wouldn’t choose that language
    0:09:54 because then the natural thing
    0:09:55 somebody’s going to say
    0:09:56 who’s a true believer
    0:09:57 that the AI is coming alive,
    0:09:58 they’re going to say,
    0:09:59 yeah, you’re right.
    0:10:00 It’s not competition.
    0:10:01 We’re going to align them
    0:10:02 and they’re going to be
    0:10:03 our collaborators
    0:10:05 or whatever.
    0:10:06 So that, to me,
    0:10:07 doesn’t go far enough.
    0:10:09 My own way of thinking
    0:10:11 is that I’m able
    0:10:12 to improve the models
    0:10:13 when I say
    0:10:14 there’s no new entity there.
    0:10:15 I just say they don’t,
    0:10:15 they’re not there.
    0:10:16 They don’t exist
    0:10:17 as separate entities.
    0:10:18 They’re just collaborations
    0:10:19 of people.
    0:10:20 I have to go that far
    0:10:22 to get the clarity
    0:10:23 to improve them.
    0:10:26 It might be a little late
    0:10:27 in the language game
    0:10:29 to replace a term
    0:10:30 like artificial intelligence,
    0:10:30 but if you could,
    0:10:31 do you have a better one?
    0:10:34 I have had the experience
    0:10:35 of coming up with terms
    0:10:37 that were widely adopted
    0:10:37 in society.
    0:10:38 I came up with
    0:10:39 virtual reality
    0:10:40 and some other things
    0:10:41 when I was young
    0:10:44 and I have seen that
    0:10:45 even when you get
    0:10:46 to coin the term,
    0:10:47 you don’t get to define it
    0:10:50 and I don’t love
    0:10:51 the way people think
    0:10:52 of virtual reality
    0:10:53 typically today.
    0:10:54 It’s lost a little bit
    0:10:55 of its old humanism,
    0:10:56 I would say.
    0:10:59 So that experience
    0:11:00 has led me to feel
    0:11:01 that it’s really
    0:11:02 younger generations
    0:11:03 who should come up
    0:11:03 with their own terms.
    0:11:04 So what I would prefer
    0:11:06 to see is younger people
    0:11:07 reject our terms
    0:11:09 and come up
    0:11:09 with their own.
    0:11:11 Fair enough.
    0:11:14 I’ve read a lot
    0:11:14 of your work
    0:11:15 on AI
    0:11:17 and I’ve listened
    0:11:19 to a lot of your interviews
    0:11:21 and I take your point
    0:11:22 that AI
    0:11:25 is a distillation
    0:11:26 of all these human inputs
    0:11:27 fundamentally.
    0:09:30 But for you, at what point
    0:11:32 does or can complexity
    0:11:35 start looking like autonomy
    0:11:37 and what would autonomy
    0:11:38 even mean
    0:11:39 that the thing starts
    0:11:40 making its own decisions
    0:11:41 and is that the simple
    0:11:42 definition of that?
    0:11:43 This is an obsession
    0:11:44 that people have
    0:11:45 but you have to understand
    0:11:46 it’s a religious
    0:11:48 and entirely subjective
    0:11:50 or sort of cultural obsession
    0:11:51 not a scientific one.
    0:11:52 It’s your judgment
    0:11:54 of how you want to see
    0:11:55 the start of autonomy.
    0:11:58 So I love complex systems
    0:11:59 and I love different levels
    0:12:00 of description
    0:12:01 and I love the independence
    0:12:03 of different levels
    0:12:03 of granularity
    0:12:04 in physics
    0:12:06 so I’m utterly
    0:12:07 as obsessed
    0:12:07 as anyone
    0:12:08 with that
    0:12:10 but it’s important
    0:12:10 to distinguish
    0:12:12 that fascination
    0:12:12 which is a scientific
    0:12:13 fascination
    0:12:14 with the question
    0:12:16 of does crossing
    0:12:17 some threshold
    0:12:18 make something
    0:12:19 human or not?
    0:12:21 because the question
    0:12:22 of humanness
    0:12:24 or of becoming
    0:12:24 an entity
    0:12:26 that we care about
    0:12:27 in our planning
    0:12:27 of
    0:12:28 creating something
    0:12:29 that itself
    0:12:30 is a beneficiary
    0:12:31 of our technology
    0:12:32 that question
    0:12:33 has to be
    0:12:34 a matter of faith
    0:12:36 we just have
    0:12:36 to accept
    0:12:38 that our culture
    0:12:39 our law
    0:12:40 our ability
    0:12:41 to be technologists
    0:12:42 ultimately rests
    0:12:43 on values
    0:12:45 that in a sense
    0:12:45 we pull out
    0:12:46 of our asses
    0:12:47 or if you like
    0:12:48 we have to be
    0:12:49 a little bit mystical
    0:12:50 in order to create
    0:12:51 the ground layer
    0:12:52 in order to be
    0:12:52 then rational
    0:12:53 as technologists
    0:12:54 in a way
    0:12:55 I wish it wasn’t so
    0:12:56 it sort of sucks
    0:12:57 but it’s just the truth
    0:12:57 and the sooner
    0:12:58 we accept that
    0:12:59 the better off
    0:13:00 we’ll be
    0:13:00 and the more honest
    0:13:01 we’ll be
    0:13:02 and I’m okay with it
    0:13:03 why?
    0:13:05 because
    0:13:06 if I’m designing
    0:13:07 AI for AI’s sake
    0:13:08 I’m talking nonsense
    0:13:09 you know
    0:13:10 like
    0:13:11 right now
    0:13:13 it’s very expensive
    0:13:13 to compute AI
    0:13:14 so what percentage
    0:13:16 of that expense
    0:13:17 it goes into
    0:13:18 creating the illusion
    0:13:19 so that you can believe
    0:13:20 it’s sort of
    0:13:21 another person
    0:13:22 when you use chat
    0:13:23 how much electricity
    0:13:24 is being spent
    0:13:25 so that the way
    0:13:26 it talks to you
    0:13:27 feels like it’s a person
    0:13:28 a lot
    0:13:28 you know
    0:13:29 and it’s a waste
    0:13:30 like why are we doing that
    0:13:31 why are we doing
    0:13:32 why are we creating
    0:13:34 a carbon footprint
    0:13:36 for the benefit
    0:13:38 of some non-entity
    0:13:39 in order to fool humans
    0:13:40 like it’s
    0:13:40 it’s ridiculous
    0:13:42 but we don’t see that
    0:13:43 because we have this
    0:13:45 religious imperative
    0:13:46 in the tech
    0:13:48 cultural world
    0:13:49 to create
    0:13:50 this new life
    0:13:52 but it’s entirely
    0:13:53 a matter of
    0:13:54 our own perception
    0:13:55 there’s no test
    0:13:55 for it
    0:13:56 other than the
    0:13:56 Turing test
    0:13:57 which is no test
    0:13:57 at all
    0:13:58 I mean
    0:13:59 we still don’t even
    0:14:01 have a real
    0:14:01 definition
    0:14:03 of consciousness
    0:14:05 and I hear all
    0:14:05 these discussions
    0:14:07 about machine learning
    0:14:09 and human intelligence
    0:14:09 and the differences
    0:14:11 and I continue
    0:14:12 to have no idea
    0:14:13 when something
    0:14:14 stops being a
    0:14:15 simulacrum of intelligence
    0:14:16 and becomes the real thing
    0:14:17 I still don’t quite know
    0:14:18 when something can
    0:14:19 reasonably be called
    0:14:20 sentient
    0:14:21 or intelligent
    0:14:22 but maybe the question
    0:14:22 doesn’t even matter
    0:14:24 maybe it’s enough
    0:14:25 for us to think it does
    0:14:26 right
    0:14:27 so the problem
    0:14:28 in what you just
    0:14:29 said is the word
    0:14:29 still
    0:14:32 like it’s a
    0:14:33 this
    0:14:35 lack of knowledge
    0:14:36 is structural
    0:14:37 you’re not going
    0:14:38 to overcome it
    0:14:39 you can pretend
    0:14:40 you have
    0:14:40 but you’re not going
    0:14:41 to
    0:14:42 this is genuinely
    0:14:43 a matter of faith
    0:14:43 you know
    0:14:44 and
    0:14:46 it’s a very
    0:14:46 old discussion
    0:14:47 when it comes
    0:14:48 to God
    0:14:49 but
    0:14:50 it’s a new
    0:14:50 discussion
    0:14:51 when it comes
    0:14:52 to each other
    0:14:53 or to AIs
    0:14:54 and
    0:14:54 you know
    0:14:55 like
    0:14:56 faith is okay
    0:14:56 we can live
    0:14:57 with faith
    0:14:57 we just have
    0:14:58 to be honest
    0:14:59 about it
    0:14:59 and I think
    0:15:01 being dishonest
    0:15:01 and saying
    0:15:02 oh
    0:15:03 it’s not faith
    0:15:04 I have this
    0:15:04 rational proof
    0:15:05 of something
    0:15:07 that kind of
    0:15:08 dishonesty
    0:15:08 is probably
    0:15:09 not good
    0:15:10 especially
    0:15:10 if you’re
    0:15:10 trying to do
    0:15:11 science or technology
    0:15:15 maybe we just
    0:15:17 maybe we just
    0:15:18 hold on
    0:15:19 maybe
    0:15:20 I’m going to
    0:15:21 say this
    0:15:23 we probably
    0:15:23 just have to
    0:15:24 hold on to
    0:15:24 some notion
    0:15:25 that there’s
    0:15:26 something
    0:15:26 fundamentally
    0:15:27 special
    0:15:28 about human
    0:15:29 consciousness
    0:15:30 and that even
    0:15:30 if on some
    0:15:31 purely empirical
    0:15:31 level
    0:15:32 that’s not
    0:15:32 even true
    0:15:33 maybe believing
    0:15:34 that it is
    0:15:34 is essential
    0:15:36 to our
    0:15:36 survival
    0:15:37 I don’t
    0:15:37 think you
    0:15:38 can rationally
    0:15:40 proceed
    0:15:41 as an
    0:15:42 acting
    0:15:42 technologist
    0:15:44 without
    0:15:45 an
    0:15:46 irrational
    0:15:47 belief
    0:15:48 that people
    0:15:49 are special
    0:15:50 because once again
    0:15:50 then you have
    0:15:51 no recipient
    0:15:52 and if you
    0:15:53 say well
    0:15:53 there’s going
    0:15:54 to be
    0:15:54 no belief
    0:15:55 all the way
    0:15:55 to the bottom
    0:15:56 it’s just
    0:15:56 going to be
    0:15:57 rationality
    0:15:57 forever
    0:15:58 I mean
    0:15:59 it doesn’t
    0:15:59 work
    0:16:00 rationality
    0:16:01 never creates
    0:16:01 a total
    0:16:02 enclosed
    0:16:02 system
    0:16:04 we kind
    0:16:05 of float
    0:16:05 in a sea
    0:16:05 of mystery
    0:16:06 and we
    0:16:06 have like
    0:16:07 this belief
    0:16:07 that lets
    0:16:08 us have
    0:16:08 a footing
    0:16:09 and it’s
    0:16:10 our job
    0:16:11 to acknowledge
    0:16:11 that even
    0:16:12 if we’re
    0:16:12 uncomfortable
    0:16:13 with it
    0:16:15 can I try
    0:16:15 another angle
    0:16:16 on you
    0:16:16 yeah
    0:16:17 do you know
    0:16:17 my
    0:16:17 okay
    0:16:18 so there’s
    0:16:18 another
    0:16:19 argument
    0:16:19 about the
    0:16:20 turing test
    0:16:20 right
    0:16:21 turing test
    0:16:22 you have a
    0:16:23 person and a
    0:16:23 computer
    0:16:23 they’re each
    0:16:24 trying to fool
    0:16:24 a judge
    0:16:25 and at the
    0:16:26 moment the
    0:16:26 judge can’t
    0:16:26 tell them
    0:16:27 apart
    0:16:27 you say
    0:16:28 well we
    0:16:28 might as
    0:16:29 well call
    0:16:30 the computer
    0:16:31 human because
    0:16:31 what other
    0:16:31 tests can
    0:16:32 there be
    0:16:32 that’s the
    0:16:32 best we’ll
    0:16:33 get
    0:16:33 okay
    0:16:35 so the
    0:16:36 problem with
    0:16:36 the test
    0:16:37 is that it
    0:16:38 measures whether
    0:16:38 there’s a
    0:16:38 differential
    0:16:39 but it
    0:16:40 doesn’t tell
    0:16:40 you whether
    0:16:41 the computer
    0:16:42 got smarter
    0:16:42 or the
    0:16:42 human got
    0:16:43 stupider
    0:16:44 it doesn’t
    0:16:45 tell you if
    0:16:45 the computer
    0:16:46 became more
    0:16:47 human or if
    0:16:47 the human
    0:16:48 became less
    0:16:48 human in
    0:16:49 any sense
    0:16:49 whatever that
    0:16:50 might be
    0:16:51 so there’s
    0:16:52 two humans
    0:16:52 the contestant
    0:16:53 and the judge
    0:16:53 and one
    0:16:54 computer
    0:16:54 therefore
    0:16:56 and this is
    0:16:56 meant to be
    0:16:57 funny but it’s
    0:16:57 also kind of
    0:16:57 real
    0:16:58 there’s a
    0:16:58 two-thirds
    0:16:59 chance that
    0:16:59 it was a
    0:17:00 human that
    0:17:00 got stupider
    0:17:01 rather than
    0:17:01 a computer
    0:17:01 that got
    0:17:02 smarter
    0:17:04 and I
    0:17:04 see that
    0:17:05 borne out
    0:17:05 like when I
    0:17:06 look at
    0:17:06 social media
    0:17:07 and I see
    0:17:08 people interacting
    0:17:08 with the AI
    0:17:09 algorithms that
    0:17:10 are supposed to
    0:17:10 guide their
    0:17:11 attention
    0:17:12 I see them
    0:17:13 getting stupider
    0:17:13 two-thirds
    0:17:14 of the time
    0:17:14 but then you
    0:17:15 know sometimes
    0:17:16 really good
    0:17:16 stuff happens
    0:17:17 so I think
    0:17:18 this general
    0:17:19 spread of most
    0:17:20 of the time
    0:17:20 things get
    0:17:21 worse but then
    0:17:21 there’s some
    0:17:22 stuff that’s
    0:17:22 really cool
    0:17:24 tends to be
    0:17:24 true when you
    0:17:25 believe in AI
    0:17:26 and so
    0:17:27 I would
    0:17:28 say don’t
    0:17:28 believe in
    0:17:28 it and
    0:17:30 some people
    0:17:30 are still
    0:17:31 getting
    0:17:31 stupider
    0:17:31 because that’s
    0:17:32 how we are
    0:17:33 but I think
    0:17:33 we can get to
    0:17:34 the point where
    0:17:34 the majority
    0:17:35 gets better
    0:17:36 instead of
    0:17:37 stupider but
    0:17:37 right now I
    0:17:37 think we’re
    0:17:38 at two-thirds
    0:17:39 get stupider
    0:17:40 yeah that
    0:17:41 math checks out
    0:17:41 to me
    0:17:42 great I
    0:17:43 think that’s
    0:17:43 a rigorous
    0:17:44 argument that’s
    0:17:44 what you call
    0:17:45 a rigorous
    0:17:46 quantitative
    0:17:47 theoretically and
    0:17:48 empirically supported
    0:17:49 argument right
    0:17:49 there
    0:17:50 so do you
    0:17:51 think all
    0:17:53 the anxieties
    0:17:54 including from
    0:17:55 serious people
    0:17:56 in in the
    0:17:57 world of AI
    0:17:58 all the worries
    0:18:00 about human
    0:18:01 extinction and
    0:18:01 mitigating the
    0:18:02 risks thereof
    0:18:04 does that is
    0:18:04 that religious
    0:18:06 hysteria to
    0:18:06 you or does
    0:18:07 that feel
    0:18:09 what drives me
    0:18:09 crazy about
    0:18:10 this I this
    0:18:11 is my world
    0:18:11 you know so I
    0:18:12 talk to the
    0:18:12 people who
    0:18:13 believe that
    0:18:14 stuff all the
    0:18:15 time and
    0:18:16 increasingly a
    0:18:16 lot of them
    0:18:17 believe that it
    0:18:17 would be good to
    0:18:18 wipe out people
    0:18:19 and that the AI
    0:18:19 future would be a
    0:18:20 better one and
    0:18:21 that we’re just
    0:18:22 a disposable
    0:18:24 temporary container
    0:18:25 for the birth of
    0:18:26 AI I hear that
    0:18:27 opinion quite a lot
    0:18:27 that’s a real
    0:18:28 opinion held by
    0:18:29 real people
    0:18:32 many many I
    0:18:33 mean like the
    0:18:34 other day I was
    0:18:35 at a lunch in
    0:18:36 Palo Alto and
    0:18:36 there were some
    0:18:37 young AI
    0:18:38 scientists there
    0:18:39 who were saying
    0:18:41 that they would
    0:18:42 never have a
    0:18:43 bio baby because
    0:18:43 as soon as you
    0:18:44 have a bio baby
    0:18:44 you get the
    0:18:46 mind virus of
    0:18:48 the bio world
    0:18:48 and that when
    0:18:49 you have the
    0:18:50 bio mind virus
    0:18:50 you become
    0:18:51 committed to
    0:18:52 your human baby
    0:18:52 but it’s much
    0:18:53 more important to
    0:18:54 be committed to
    0:18:54 the AI of the
    0:18:56 future and so
    0:18:57 to have human
    0:18:58 babies is
    0:18:58 fundamentally
    0:18:59 unethical
    0:19:01 now okay in
    0:19:01 this particular
    0:19:03 case this was
    0:19:03 a young man
    0:19:04 with a female
    0:19:05 partner who
    0:19:06 wanted a kid
    0:19:06 and what I’m
    0:19:07 thinking is this
    0:19:07 is just another
    0:19:08 variation of the
    0:19:09 very very old
    0:19:10 story of young
    0:19:11 men attempting to
    0:19:12 put off the baby
    0:19:13 thing with their
    0:19:14 sexual partner as
    0:19:15 long as possible
    0:19:16 because I’ve been
    0:19:16 there and many of
    0:19:16 us have been
    0:19:17 there so in a
    0:19:18 way I think it’s
    0:19:19 not anything new
    0:19:19 and it’s just the
    0:19:20 old thing but
    0:19:21 it’s a very
    0:19:23 common attitude
    0:19:25 not the dominant
    0:19:25 one I would say
    0:19:26 the dominant one
    0:19:27 is that the
    0:19:28 super AI will
    0:19:29 turn into this
    0:19:30 god thing that’ll
    0:19:31 save us and
    0:19:32 will either upload
    0:19:33 us to be immortal
    0:19:34 or solve all our
    0:19:34 problems at the
    0:19:35 very least or
    0:19:36 something create
    0:19:37 super abundance at
    0:19:38 the very very very
    0:19:41 least and I
    0:19:45 I have to say
    0:19:45 there’s a bit of
    0:19:46 an inverse
    0:19:47 proportion here
    0:19:48 between the people
    0:19:49 who directly work
    0:19:50 in making AI
    0:19:51 systems and then
    0:19:51 the people who
    0:19:52 are adjacent to
    0:19:54 them who have
    0:19:54 these various
    0:19:57 beliefs my own
    0:19:58 opinion is that
    0:19:59 the people
    0:20:00 how can I put
    0:20:02 this the people
    0:20:03 who are able to
    0:20:04 be skeptical and
    0:20:05 a little bored and
    0:20:06 dismissive of the
    0:20:07 technology they’re
    0:20:08 working on tend to
    0:20:09 improve it more than
    0:20:09 the people kind of
    0:20:10 worship it too much
    0:20:13 like I’ve seen that
    0:20:14 a lot in a lot of
    0:20:15 different things,
    0:20:16 not just computer
    0:20:17 science and I think
    0:20:18 I think you have to
    0:20:19 have a kind of
    0:20:20 like you can’t drink
    0:20:21 your own whiskey too
    0:20:22 much when you’re a
    0:20:24 technologist you have
    0:20:25 to kind of be ready
    0:20:26 to say oh maybe
    0:20:27 this thing’s a bit
    0:20:28 overhyped I’m not
    0:20:29 going to tell that
    0:20:30 to the people buying
    0:20:31 shares in my company
    0:20:31 but you know what
    0:20:32 like just between us
    0:20:35 you know and but
    0:20:35 that attitude is
    0:20:37 exactly the one that
    0:20:38 puts you over the
    0:20:38 threshold to then
    0:20:39 start improving it
    0:20:40 more and that’s one
    0:20:41 of the dangers of
    0:20:42 this kind of
    0:20:43 mythologizing of it
    0:20:44 oh it’s about to
    0:20:45 become this god
    0:20:45 that’ll take over
    0:20:46 everything but
    0:20:48 that what follows
    0:20:49 from that is this
    0:20:50 very curious thing
    0:20:51 which is that the
    0:20:52 way of thinking
    0:20:53 about it where it’s
    0:20:54 about to turn into
    0:20:55 this god that’ll
    0:20:56 run everything and
    0:20:57 either kill us all
    0:20:57 or fix all our
    0:20:58 problems that
    0:21:00 attitude in itself
    0:21:02 makes you not
    0:21:04 only a little bit
    0:21:05 of a lesser
    0:21:06 improver of the
    0:21:07 technology by any
    0:21:08 like real measurable
    0:21:10 metric but it
    0:21:11 also makes you a
    0:21:12 bad steward of it
    0:21:15 part of
    0:21:15 what makes this
    0:21:16 very confusing
    0:21:17 especially to you
    0:21:19 know non-technical
    0:21:20 normie outsiders
    0:21:21 like me and like
    0:21:22 most people frankly
    0:21:24 is that it is it’s
    0:21:25 just moving and
    0:21:26 changing and evolving
    0:21:27 really quickly and
    0:21:28 the terms and
    0:21:29 concepts are very
    0:21:30 slippery if you’re
    0:21:32 not deep in it and
    0:21:32 you know you’re
    0:21:33 talking about super
    0:21:34 super AI and godlike
    0:21:36 powers one example
    0:21:37 is and you’ll bear
    0:21:38 with me for a second
    0:21:39 so I can bring people
    0:21:41 along we have this
    0:21:42 dichotomy between
    0:21:44 AI versus AGI
    0:21:45 artificial intelligence
    0:21:46 versus artificial
    0:21:47 general intelligence and
    0:21:48 my understanding is
    0:21:50 that AI is a term for
    0:21:51 the general set of
    0:21:52 tools that people
    0:21:53 are building chat
    0:21:54 bots and that sort
    0:21:54 of thing and that
    0:21:56 AGI is still sort of
    0:21:57 a theoretical thing
    0:21:58 where this tech is
    0:22:00 basically as good at
    0:22:01 everything as a
    0:22:03 normal regular person
    0:22:03 is and it can also
    0:22:04 learn and grow and
    0:22:05 apply that knowledge
    0:22:07 just like we can and
    0:22:08 we’ve got AI now
    0:22:09 clearly but we don’t
    0:22:11 have AGI yet and if
    0:22:13 we get it and there
    0:22:13 are people who think
    0:22:14 we’re maybe closer
    0:22:15 than we thought
    0:22:16 recently that it’ll be
    0:22:18 a real Rubicon
    0:22:20 crossing moment for
    0:22:21 us what’s your
    0:22:22 feeling on that do
    0:22:23 you think AGI is
    0:22:24 even possible in the
    0:22:25 way most people
    0:22:26 have you not
    0:22:26 listened to a word
    0:22:28 I said that’s a
    0:22:28 religious question
    0:22:30 that’s like asking
    0:22:30 if I think the
    0:22:31 rapture is coming
    0:22:33 soon I mean it’s
    0:22:33 yeah but you can
    0:22:34 have an opinion
    0:22:34 about religious
    0:22:35 questions I guess
    0:22:38 that’s true I mean
    0:22:40 there are those who
    0:22:41 say we have AGI
    0:22:42 already and their
    0:22:43 opinion is as
    0:22:44 legitimate as
    0:22:45 anybody else’s I
    0:22:46 mean I just think
    0:22:47 the moment you’ve
    0:22:48 put the question
    0:22:48 that way you’ve
    0:22:49 already confused
    0:22:50 yourself and made
    0:22:50 yourself kind of
    0:22:51 useless in talking
    0:22:52 about what to do
    0:22:53 with the technology
    0:22:54 so I have to reject
    0:22:55 your question as
    0:22:56 being like poorly
    0:22:56 framed and
    0:22:57 ill-informed I’m
    0:22:59 sorry I was hoping
    0:22:59 to get through this
    0:23:00 fucking conversation
    0:23:01 without you having
    0:23:02 to beat back at
    0:23:03 one of my ill-informed
    0:23:04 questions and I
    0:23:05 did make it I made
    0:23:06 it almost 20 minutes
    0:23:07 in yeah good luck
    0:23:08 with that my friend
    0:23:12 all right sir
    0:23:13 it was a valiant
    0:23:14 effort you win that
    0:23:17 you really I mean
    0:23:19 look I mean this
    0:23:20 is silly this is
    0:23:21 like I’m also
    0:23:21 trying to speak for
    0:23:22 concerns that I
    0:23:23 know a lot of
    0:23:24 people I know
    0:23:25 because we broadcast
    0:23:26 that way of thinking
    0:23:27 about it so yeah
    0:23:31 look there’s a
    0:23:31 thing all right
    0:23:33 look I’m I
    0:23:35 benefit from people
    0:23:36 believing in AI
    0:23:37 professionally and
    0:23:39 there’s a way that
    0:23:39 the whole economy
    0:23:40 runs on attention
    0:23:42 getting and in a
    0:23:44 funny way the way
    0:23:45 digital attention
    0:23:46 economy works
    0:23:51 is it rewards
    0:23:52 anxieties and
    0:23:54 terror as much
    0:23:54 or maybe a
    0:23:56 little more than
    0:23:59 optimism or you
    0:24:01 know goodwill and
    0:24:02 so you have this
    0:24:03 weird situation where
    0:24:05 somebody can play
    0:24:06 the villain on
    0:24:06 social media and
    0:24:08 do very well and
    0:24:09 similar things
    0:24:10 happening in the
    0:24:11 rhetoric of computer
    0:24:12 science so when we
    0:24:13 say oh our stuff
    0:24:14 might be about to
    0:24:15 come alive and
    0:24:16 it’s about to get
    0:24:17 smarter than you
    0:24:18 it generates this
    0:24:19 little anxiety in
    0:24:20 people and then that
    0:24:21 actually benefits us
    0:24:22 because it keeps it
    0:24:24 keeps the attention
    0:24:27 on us and so
    0:24:28 there’s a funny way
    0:24:29 that we’re
    0:24:30 incentivized to put
    0:24:31 things in the most
    0:24:33 alarming way what I
    0:24:34 what I will say is
    0:24:36 that I like the
    0:24:37 idea of models being
    0:24:38 useful so I think
    0:24:40 of the models that
    0:24:41 we’re building as
    0:24:42 being wonderful
    0:24:43 mashup models so
    0:24:44 like for instance
    0:24:46 I love being able
    0:24:47 to use large models
    0:24:48 to go through the
    0:24:48 scientific literature
    0:24:51 and find correlations
    0:24:51 between different
    0:24:52 papers that might not
    0:24:53 use the same
    0:24:54 terminology that would
    0:24:54 have been a pain in
    0:24:55 the butt to detect
    0:24:57 before that’s great
    0:24:58 if you present that
    0:24:59 with a chat
    0:25:00 interface it seems
    0:25:01 like a smart
    0:25:02 scientist if people
    0:25:03 like that I mean I
    0:25:04 guess whatever it’s
    0:25:05 not my job to judge
    0:25:06 everybody but the
    0:25:08 thing is you don’t
    0:25:09 need to present it
    0:25:09 that way you’d
    0:25:10 still get the
    0:25:11 same value but
    0:25:11 that’s the way we
    0:25:13 do it we add
    0:25:14 in personhood
    0:25:16 fooling to what
    0:25:17 would otherwise be
    0:25:19 really in a way
    0:25:20 more clear
    0:25:21 freestanding value I
    0:25:23 think but we like
    0:25:24 to present the
    0:25:24 fantasy
    0:25:37 there’s over 500
    0:25:38 thousand small
    0:25:40 businesses in bc and
    0:25:40 no two are alike
    0:25:42 i’m a carpenter i’m a
    0:25:43 graphic designer i sell
    0:25:45 dog socks online that’s
    0:25:47 why bcaa created one
    0:25:48 size doesn’t fit all
    0:25:49 insurance it’s
    0:25:51 customizable based on
    0:25:52 your unique needs so
    0:25:53 whether you manage
    0:25:54 rental properties or
    0:25:55 paint pet portraits you
    0:25:56 can protect your small
    0:25:58 business with bc’s most
    0:25:59 trusted insurance brand
    0:26:01 visit bcaa.com slash
    0:26:03 small business and use
    0:26:04 promo code radio to
    0:26:05 receive fifty dollars
    0:26:06 off conditions apply
    0:26:16 all right let me try to
    0:26:17 pull away a little bit
    0:26:18 from religious questions
    0:26:22 okay so look i’m
    0:26:23 not worried about the
    0:26:23 matrix and the
    0:26:25 terminator um i am
    0:26:27 worried about a much
    0:26:28 more boring and
    0:26:30 unsexy scenario but i
    0:26:31 think equally bad
    0:26:34 possibility is that these
    0:26:37 emergent technologies will
    0:26:39 accelerate a trend that
    0:26:41 i think digital tech in
    0:26:42 general and social media
    0:26:43 in particular has already
    0:26:47 started which is to pull
    0:26:49 us away more and more
    0:26:50 from the physical world
    0:26:52 and encourage us to
    0:26:54 perform versions of
    0:26:55 ourselves in the virtual
    0:26:56 world and because of how
    0:26:58 it’s designed it has this
    0:27:00 habit of reducing other
    0:27:02 people to crude avatars
    0:27:03 which is why it’s so easy
    0:27:05 to be cruel and vicious
    0:27:07 online and why people who
    0:27:08 are on social media too
    0:27:10 much start to become
    0:27:12 mutually unintelligible
    0:27:13 to each other and i
    0:27:16 worry about ai super
    0:27:17 charging some of this
    0:27:18 stuff i mean do you even
    0:27:19 accept that framing am i
    0:27:20 right to be thinking of ai
    0:27:23 as a potential accelerant of
    0:27:26 these trends yeah i mean i
    0:27:29 i think you are correct
    0:27:36 so it’s arguable and
    0:27:37 actually consistent with the
    0:27:38 way the community speaks
    0:27:41 internally to say that the
    0:27:43 algorithms that have been
    0:27:44 driving social media up to
    0:27:49 now are a form of ai
    0:27:52 if you unlike me wish to use
    0:27:55 the term ai and what the
    0:27:59 algorithms do is they
    0:28:01 attempt to predict human
    0:28:03 behavior based on the
    0:28:05 stimulus given to the
    0:28:07 human and by putting that
    0:28:08 in an adaptive loop they
    0:28:11 hope to drive attention and
    0:28:13 sort of an obsessive
    0:28:15 attachment to a platform
    0:28:18 because these algorithms
    0:28:21 can’t tell whether
    0:28:23 something’s being driven
    0:28:25 because of things that we
    0:28:25 might think are positive
    0:28:26 or things that we might
    0:28:28 think are negative so i
    0:28:29 call this the life of the
    0:28:30 parity, this notion
    0:28:32 that you can’t tell like
    0:28:33 if a bit is one or zero
    0:28:34 doesn’t matter because it’s
    0:28:36 an arbitrary designation in
    0:28:38 a digital system so if
    0:28:39 somebody’s getting
    0:28:40 attention by being a dick
    0:28:42 that works just as well as
    0:28:43 if they’re offering
    0:28:44 life-saving information or
    0:28:45 helping people improve
    0:28:46 themselves but then the
    0:28:47 peaks that are good are
    0:28:48 really good and i don’t
    0:28:49 want to deny that i love
    0:28:50 dance culture on tiktok
    0:28:53 science bloggers on
    0:28:54 youtube have achieved a
    0:28:55 level that’s like
    0:28:57 astonishingly good and so
    0:28:58 on like there’s all these
    0:29:00 really really positive good
    0:29:01 spots but then overall
    0:29:03 there’s this loss of truth
    0:29:06 and political paranoia and
    0:29:09 unnecessary confrontation
    0:29:11 between arbitrarily created
    0:29:13 cultural groups and so on
    0:29:15 that’s really doing damage
    0:29:18 um and as is often pointed
    0:29:20 out especially to young
    0:29:21 girls and so on and so
    0:29:22 forth uh not not great
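    A toy sketch of the valence-blind adaptive loop described above (hypothetical Python, not any platform’s actual code): the optimizer reinforces whatever holds attention and never observes whether the content helped or harmed anyone.

        import random

        posts = [
            {"content": "life-saving information", "engagement": 0.0},
            {"content": "outrage bait", "engagement": 0.0},
        ]

        def attention(post):
            # The system records only how strongly a post held attention,
            # never whether the reaction was gratitude or fury.
            return random.uniform(0.0, 1.0)

        for _ in range(1000):
            # Mostly exploit the current leader, occasionally explore.
            if random.random() < 0.1:
                post = random.choice(posts)
            else:
                post = max(posts, key=lambda p: p["engagement"])
            post["engagement"] += attention(post)  # adaptive update

        # Whichever post wins, the loop is indifferent to why it won.
        print(max(posts, key=lambda p: p["engagement"])["content"])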
    0:29:25 and so uh yeah could
    0:29:27 better ai algorithms make
    0:29:27 that worse
    0:29:31 plausibly i mean it’s
    0:29:32 possible that it’s already
    0:29:34 bottomed out that it’s kind
    0:29:37 of the the badness just
    0:29:37 comes from the overall
    0:29:38 structure and if the
    0:29:39 algorithms themselves get
    0:29:41 more sophisticated it won’t
    0:29:42 really push it that much
    0:29:43 further but i think
    0:29:45 actually kind of can i’m
    0:29:46 i’m worried about it i
    0:29:48 because we so much want to
    0:29:49 pass the turing test and
    0:29:50 make people think our
    0:29:51 programs are people
    0:29:55 we’re moving to this um
    0:29:56 so-called agentic era where
    0:29:59 it’s not just that you have a
    0:30:00 chat interface with the
    0:30:01 thing but the chat interface
    0:30:04 gets to know you for years at
    0:30:06 a time and gets a so-called
    0:30:08 personality and all
    0:30:09 this and then the idea is that
    0:30:10 people then fall in love with
    0:30:11 these and we’re already
    0:30:13 seeing examples of this
    0:30:15 here and there um and this
    0:30:16 notion of a whole generation
    0:30:17 of young people falling in
    0:30:20 love with fake avatars i mean
    0:30:24 people talk about ai as
    0:30:25 if it’s just like this yeast in
    0:30:26 the air it’s like oh ai will
    0:30:27 appear and people will fall in
    0:30:29 love with ai avatars but it’s
    0:30:30 not ai is always run by
    0:30:32 companies so like they’re going
    0:30:33 to be falling in love with
    0:30:35 something from google or meta or
    0:30:39 whatever and like that notion
    0:30:41 that your love life becomes
    0:30:44 owned by some company or even
    0:30:45 worse tiktok or a chinese thing
    0:30:49 eek eek eek eek i think that’ll
    0:30:51 create a new centralization
    0:30:56 or xai, eek eek eek eek, i’ll
    0:30:57 add some more eeks to that and so
    0:30:59 this centralization of power and
    0:31:02 influence could be even worse and
    0:31:04 that might be a breaking point
    0:31:06 event and so that kind of thing
    0:31:07 ending civilization or ending up
    0:31:09 killing all the people does seem
    0:31:11 plausible to me and some of my
    0:31:12 colleagues would interpret that as
    0:31:15 ai becoming alive and killing
    0:31:16 everybody but i would just
    0:31:17 interpret it as people
    0:31:20 making terrible choices it all
    0:31:21 amounts to the same thing in the
    0:31:23 end anyway it does at the end of
    0:31:25 the day in terms of actual events
    0:31:27 the same so jaron from your point
    0:31:29 of view is it even possible to have
    0:31:33 good algorithms nudging us around
    0:31:35 online or are all algorithms bad yes
    0:31:37 of course it is okay what does that
    0:31:39 look like course it is of course it
    0:31:41 is yes yes yes yes give me the good
    0:31:42 stuff here give me the good
    0:31:44 algorithms well i mean look in the
    0:31:49 scientific community we do it like i
    0:31:51 mean like okay here’s an example um
    0:31:55 deep research from open ai is a great
    0:31:57 tool it does a literature search on some
    0:31:59 topic and assembles a little report
    0:32:03 it has unnecessary chatbot elements
    0:32:05 to try to make it seem like there’s
    0:32:07 somebody there i view that as a waste
    0:32:10 of time and a waste of energy and i i
    0:32:11 would be happy without it but but
    0:32:13 whatever okay it’s it’s not terrible
    0:32:16 though what it does is it saves
    0:32:18 scientists a ton of time it makes a lot
    0:32:20 of sense i get a lot out of it it’s
    0:32:21 great and now there’s some new
    0:32:25 competitors to it great that stuff’s
    0:32:27 fabulous i really really really it’s
    0:32:28 good because the scientific literature
    0:32:30 has become impossible to use without
    0:32:33 it i do a lot of work that’s pretty
    0:32:35 mathematical and the problem is that
    0:32:37 every time somebody comes across
    0:32:38 similar math they don’t realize
    0:32:39 somebody else has done it so they come
    0:32:41 up with their own terms for things and
    0:32:43 then you have the same ideas or
    0:32:45 similar ones with different terms and
    0:32:46 all these scattered papers in totally
    0:32:47 different communities at different
    0:32:48 conferences and different journals
    0:32:53 yeah but with a tool like this you
    0:32:55 can capture all that and get it into
    0:32:59 place it’s like what what what ai is is
    0:33:01 it’s a way of improving collaboration
    0:33:03 between people it’s a way of gathering
    0:33:06 what people have done in a more unified
    0:33:09 way that can notice multiple hops of
    0:33:12 different terms and similar structures it’s
    0:33:15 it’s a better way of using statistics to
    0:33:17 connect what we’ve all done together to
    0:33:21 get more use out of it it’s great i love
    0:33:25 it and the amount of avatar illusion
    0:33:27 nonsense is kept to a minimum because
    0:33:29 our job is not to fall in love with our
    0:33:31 research our fake research assistant our
    0:33:35 job is to make progress efficiently on
    0:33:37 whatever we’re doing right and so that
    0:33:39 that’s great what is wrong with that
    0:33:41 nothing it’s fabulous so yeah there’s
    0:33:43 wonderful uses if i didn’t think those
    0:33:46 things existed i’d quit what i do
    0:33:49 professionally in the industry of course
    0:33:51 there’s wonderful uses and i think we
    0:33:52 need those things i think they really
    0:33:53 matter
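    The literature-connecting use he describes can be pictured with a toy sketch (hypothetical Python, not OpenAI’s actual tooling). Real systems use learned embeddings; here a hand-made synonym table stands in for what a large model learns from data, connecting papers that discuss the same structure under different names.

        # Invented example: papers using different terminology for the
        # same ideas still get connected through shared concepts.
        synonyms = {
            "attention": {"attention", "soft lookup", "kernel smoothing"},
            "embedding": {"embedding", "latent code", "feature vector"},
        }

        papers = {
            "paper_a": {"soft lookup", "feature vector"},
            "paper_b": {"kernel smoothing", "latent code"},
            "paper_c": {"fluid dynamics"},
        }

        def canonical(term):
            # Map each term to its shared concept, if one is known.
            for concept, names in synonyms.items():
                if term in names:
                    return concept
            return term

        def related(p, q):
            # Papers are related if any terms map to a common concept.
            return {canonical(t) for t in papers[p]} & {canonical(t) for t in papers[q]}

        print(related("paper_a", "paper_b"))  # both shared concepts found
        print(related("paper_a", "paper_c"))  # empty set: no overlap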
    0:33:56 i guess what i’m hovering around is the
    0:33:58 business model right i mean uh the
    0:34:00 advertising model was sort of the
    0:34:02 original sin of the internet yeah yeah i
    0:34:02 think it is
    0:34:06 um how do we not fuck this up how do we
    0:34:07 not repeat those mistakes what’s a better
    0:34:09 model i mean you talk a lot about data
    0:34:11 dignity so you’re saying we can say fuck
    0:34:13 on this podcast oh you can say whatever
    0:34:15 you want if i had known that there would
    0:34:17 be a lot of fuckery up to now in my
    0:34:18 speech it’s not too late anyway it’s not
    0:34:23 too late we got plenty of time okay but no
    0:34:25 but seriously what how do we get it right
    0:34:26 this time how do we not make the same
    0:34:29 mistakes what is a better model yeah well
    0:34:32 um this is actually more important this
    0:34:34 question is the central question of our
    0:34:36 time in my view like the central
    0:34:39 question of our time isn’t um being able
    0:34:42 to scale ai more is is an important
    0:34:45 question and i get that and most people
    0:34:47 are focused on that and dealing with the
    0:34:49 climate is an important question but in
    0:34:51 terms of our own survival coming up with
    0:34:53 a business model for civilization that
    0:34:56 isn’t self-destructive is in a way our
    0:34:59 most primary problem and challenge right
    0:35:01 now because the way we’re doing it what
    0:35:04 we kind of we went through this thing in
    0:35:06 the earlier phase of the internet like
    0:35:08 information should be free and then the
    0:35:09 only business model that’s left is paying
    0:35:12 for influence uh and so then all the
    0:35:16 platforms look free or very cheap to the
    0:35:17 user but then actually the real customer
    0:35:19 trying to influence the user and you end
    0:35:23 up with what’s essentially a stealthy form
    0:35:26 of um manipulation being the central
    0:35:30 project of civilization and we can only
    0:35:31 get away with that for so long at some
    0:35:33 point that bites us and we become too
    0:35:36 crazy to survive so we must change the
    0:35:38 business model of civilization and so
    0:35:41 exactly how to get from here to there is
    0:35:44 a bit of a mystery but i continue to work
    0:35:46 on it like i think we should incentivize
    0:35:48 people to put great data into the ai
    0:35:51 programs of the future uh and i’d like
    0:35:53 people to be paid for data used
    0:35:55 ai models and also to be celebrated and
    0:35:56 made visible and known because i think
    0:35:58 it’s just a big collaboration and our
    0:36:01 collaborators should be valued how easy
    0:36:02 would it be to do that do you think we
    0:36:05 can or will there’s still some unsolved
    0:36:07 technical questions about how to do it
    0:36:09 i’m very very actively working on those
    0:36:10 and i believe it’s doable and there’s a
    0:36:12 whole you know research community devoted
    0:36:14 to exactly that distributed around the
    0:36:16 world and i think it’ll make better
    0:36:18 models i mean better data makes better
    0:36:20 models and there’s a lot of people who
    0:36:21 dispute that and they say no it’s just
    0:36:22 better algorithms and we already have
    0:36:25 enough data for the rest of all time but
    0:36:28 i disagree with that i think i don’t
    0:36:29 think we’re the smartest people who will
    0:36:31 ever live and there might be new creative
    0:36:33 things that happen in the future that we
    0:36:35 don’t foresee and the models we’ve
    0:36:37 currently built might not extend into
    0:36:39 those things and having some open system
    0:36:41 where people can contribute to new models
    0:36:44 in new ways is a more expansive and
    0:36:47 creative and you know open-minded and
    0:36:51 and just you know kind of spiritually
    0:36:53 optimistic way of thinking about the deep
    0:36:53 future
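    One mechanic behind the data-dignity idea can be sketched in a few hypothetical lines of Python: split revenue among the people whose data contributed to an output, in proportion to some attribution score. Computing those scores well is exactly the unsolved research question he mentions; the numbers below are made up.

        # Made-up attribution weights; deriving them is the hard, open problem.
        attribution = {"alice": 0.5, "bob": 0.3, "carol": 0.2}  # sums to 1.0
        revenue = 100.00  # dollars earned by outputs drawing on their data

        payouts = {person: round(share * revenue, 2)
                   for person, share in attribution.items()}
        print(payouts)  # {'alice': 50.0, 'bob': 30.0, 'carol': 20.0}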
    today explained here with eric levitz senior
    0:37:17 correspondent at vox.com to talk about the
    0:37:21 2024 election that can’t be right eric i thought
    0:37:22 we were done with that i feel like i’m pacino
    0:37:24 in godfather three just when i thought i was out
    0:37:28 they pull me back in why are we talking about
    0:37:30 the 2024 election again the reason why we’re
    0:37:33 still looking back is that it takes a while
    0:37:36 after an election to get all of the most high
    0:37:40 quality data on what exactly happened so the
    0:37:42 full picture is starting to just come into view
    0:37:45 now and you wrote a piece about the full
    0:37:49 picture for vox recently and it did bonkers business
    0:37:53 on the internet what did it say what struck a
    0:37:56 chord yeah so this was my interview with
    0:38:00 david shor of blue rose research he’s one of
    0:38:04 the biggest sort of democratic data gurus in
    0:38:08 the party and basically the big picture headline
    0:38:12 takeaways are on today explained you’ll have to go listen
    0:38:15 to them there find the show wherever you listen to shows bro
    0:38:35 i think i’m a humanist like you in the end and what i want fundamentally is just the
    0:38:39 elevation of human agency not the diminishment of it and part of what that means to borrow your
    0:38:45 language is creating more creative classes and less dependent classes yep uh you’ve convinced me
    0:38:50 that that’s at least possible i don’t know if it’s likely but i hope it is and and maybe some
    0:38:55 some kind of data dignity type model is the most promising thing i’ve heard
    0:39:06 no i sort of feel like the human project, our survival, is simultaneously both certain and
    0:39:11 unlikely if you know what i mean like i i feel like if we just follow the immediate trend lines
    0:39:13 and what we see we’re probably gonna
    0:39:16 buck ourselves up to use the word i’m
    0:39:22 encouraged to say here there you go but i also just have this feeling we’ve made it through a lot of
    0:39:26 stuff in the past and i just have this feeling we’re gonna rise to the occasion and figure this
    0:39:32 one out really i don’t know exactly how we will but i think we will i don’t know what the
    0:39:40 alternative is the alternative is in 200 million years there’ll be smart cephalopods to take over
    0:39:45 the planet uh and maybe they’ll do that i mean that’s the alternative but i think we can do it i
    0:39:56 really do we just have to be a little less full of ourselves and not believe we’re
    0:40:02 making a new god no more golden calves that’s really our problem still yeah good luck with
    0:40:09 that i mean i i i like i’m constantly thinking more about the the social and political and cultural
    0:40:14 dynamics because that’s just my background um and you know i mean i i guess speaking of dependent
    0:40:23 classes i a very common concern is is this fear that ai is going to create a lot of social instability by
    0:40:29 taking all of our jobs it’s a widespread fear it’s scary as hell and it feels like
    0:40:35 the latest iteration of a very old story about new technologies like automation displacing workers
    0:40:39 i mean how do you speak to these sorts of fears when you hear them because surely you hear them a lot
    0:40:41 yeah and they concern me i mean
    0:40:54 look um there’s not a perfect solution to that problem uh there i’ll give you an example of one
    0:41:00 that i find tricky to think about uh my mom died in a car accident and i’ve always believed from when
    0:41:05 i was very young that cars should drive themselves that it was manifestly obvious that we could create
    0:41:13 a digital system that would save many many lives so we have tens of thousands of people killed by cars
    0:41:16 every year still in the us and i think it’s over a million worldwide or something like that i mean
    0:41:24 it’s like crazy it’s like and so um there are a lot of reasons for it and a self-driving car is never
    0:41:28 going to be perfect because it’s not a task that can be done perfectly there’ll be circumstances where
    0:41:34 there’s no optimal solution you know in the instant but overall we ought to be able to save a lot of
    0:41:42 life so i’m really supportive of that project at the same time an incredibly large number of blue collar
    0:41:49 people around the world get by behind a wheel whether it’s truck drivers or rideshare drivers these days
    0:42:00 or etc you know and so like how do you reconcile those two things uh and i i don’t think there’s any way to do it perfectly i think there’s
    0:42:10 there’s two things that should be true one is that we need to find an intermediate way to have
    0:42:15 a social safety net that isn’t all the way to universal basic income because the universal basic
    0:42:21 income idea gives people this idea that they’re not worth anything and they’re just being supported by
    0:42:27 the tech titans as a hobby and it doesn’t feel very secure or very dignified and or stable there’s like
    0:42:33 just a lot of reasons why i’m skeptical of that uh in the long term and i don’t think people like it
    0:42:40 or want it but on the other hand um just telling people well you’re thrown out into the mix and in
    0:42:45 the u.s you have no health insurance and just figure something out that’s also just too cruel and not viable
    0:42:51 if it’s a lot of people at once so we have to find our way to a very unfashionable intermediate
    0:43:00 sense of social safety network or uh to help people through transitions and right now the accounting for
    0:43:04 that is very very difficult to sort out and especially in the united states there’s a deep
    0:43:13 hostility to it and i just don’t see logically any other way but then beyond that um i do think new roles
    0:43:18 will appear like the story that well new things will happen and new things will be possible
    0:43:25 i do believe that like there’s a kind of a vague and uncomfortable sense that surely new things will
    0:43:31 come along and i i actually think that’s true i don’t feel comfortable making that claim for all
    0:43:35 those drivers like we’re not going to retrain them to be programmers because low-level programming
    0:43:43 itself is also getting automated right i don’t know exactly how that’ll work um i have thought a great
    0:43:49 deal about it but that’s who i am for the moment i believe that there could be all kinds of
    0:43:57 things we don’t foresee and that within that explosion of new sectors of creativity there will be enough new
    0:44:04 needs for people to do things if only to train ai’s that it’ll keep up with human needs and support
    0:44:08 some kind of a world of economics that’s more distributed than just a central authority
    0:44:13 distributing income to everybody which i think would be corrupted yeah yeah i agree with that
0:44:20 Do you think we're being sufficiently intentional about the development of this technology? Do you
0:44:27 think we're asking the right questions as a society now? Well, I mean, the questions are dominated by
0:44:33 a certain internal technical culture, and the mainstream of technical culture is very
0:44:39 obsessed with AI as a new god or some kind of new entity. And so I think that does make the
0:44:47 whole conversation go askew. If you go to AI conferences,
0:44:54 there's usually more talk where somebody is saying, we're going to talk about how to talk the
0:45:01 AI into not killing us, you know. And that kind of conversation, which to me is not well grounded, and
0:45:09 which I think kind of loses itself in loops, can take up as much time and space
0:45:14 as a serious conversation about how we can optimize this algorithm, or, you know,
0:45:20 the actual work that we should be doing as technologists. I was at one conference, it was
0:45:25 kind of funny, where there were these different factions. There's the artificial general
0:45:30 intelligence people, and the superintelligence people, and all these different people who have
0:45:34 slightly different ideas about how awesome AI will be and how it might kill us all in different ways.
0:45:42 And they were so conflicted that they got into a fistfight. A not very competent fistfight, it must be
0:45:48 said. It's kind of funny. Anyway, I sort of wish I had a film of that. That was really funny. But
0:45:54 I don't know. I mean, I love my world, I love the people. I do kind of make fun of us a little bit
0:46:00 sometimes, because I just think it's important to, you know. Okay, so let's just set aside for
0:46:06 the moment the more common fears about AI, the alignment problem, and taking our jobs, and
0:46:12 flattening human creativity, all that stuff. All of that is there. But is there a fear of
0:46:18 yours, something you think we could get terribly wrong, that's not currently something we hear much about?
0:46:26 God, I don't even know where to start. Yeah, there's a lot.
    0:46:38 lot i’m i mean one of the things i worry about is we’re gradually moving education into an ai model
    0:46:46 and the motivations for that are often very good because in a lot of places on earth it’s just been
    0:46:50 impossible to come up with an economics of supporting and training enough human teachers
    0:46:58 and a lot of cultural issues in changing societies make it very very hard to make schools that work
    0:47:06 and so on like there’s a lot of issues and in theory a sort of uh client self-adapting ai tutor
    0:47:13 could solve a lot of problems at a low cost in a lot of situations but then the issue with that is
    0:47:19 once again creativity how do you keep people who learn in a system like that
    0:47:24 how do you train them so that they’re able to step outside of what the system was trained on
0:47:29 You know, there's this funny way that you're always retreading and recombining the training data
0:47:35 in any AI system. You can address that to a degree with constant fresh input and this and that, but
0:47:40 I am a little worried about people being trained in a closed system that makes them a little less than
0:47:46 they might otherwise have been, and have a little less faith in themselves. I'm a little concerned about
0:47:53 sort of defining the nature of life and education downward, you know. And the thing is, the history
0:47:59 of education is filled with doing exactly that thing. Education has been filled with overly
0:48:08 reductive ideas, or overly idealistic and biased ideas of different kinds. So it's not like
0:48:14 we're entering this perfect system and messing it up. We're entering a messed-up system and trying to figure out
0:48:22 how to not perpetuate its messed-up-ness. I think in the case of education, that's challenging. Really
0:48:28 challenging. I just want to ask, because I'm curious what you would say: I have a five-year-old
0:48:37 son, and he's already started asking questions about, you know, what kind of skills should he learn?
0:48:42 What should he aspire to do in the world? Oh man, that's a hard one. Right, and I don't know
0:48:49 what to tell him, because I have no idea what the world is going to look like by the time he's 18 or 20 or 15,
0:48:54 hell, you know. What would you tell him if Uncle Jaron came over and he asked
0:48:59 you that? What would you say? Well, I have a teen daughter now, and when she was younger she went
0:49:07 to coding camp, you know, and loved it. And then when Copilot for GitHub came out, and now some of the
0:49:12 other ones that are out, she was like, well, you know, the kinds of programs I'd write I can just ask for now.
0:49:17 So why did you send me to all this? Why did I waste all my time at these things? And I said, remember,
0:49:24 you loved coding camp. Remember, you liked it. And it's like, well, yeah, but
0:49:30 I could have liked spelunking camp or something too. Why coding camp? And, I mean,
0:49:36 I don't have a perfect answer for all that right now. I really don't.
0:49:44 I do think there are new things that will emerge. I have a feeling there'll be a lot of new professions
0:49:51 related to adaptive biology and modifications, and helping people deal with weird changes to their
0:49:56 bodies that will become possible. I think that'll become a big thing. I don't know exactly how; it's
0:50:04 too early to say. There's a subtle point here I want to make, which is that I am very far from being
0:50:12 anti-futuristic, or from disliking extreme change in the future. But what I have to insist upon
0:50:18 is continuity. There's a term, the singularity, applied to AI sometimes: the idea that
0:50:23 there'll be this rush of change so fast that nobody can learn anything, nobody can know anything, and it
0:50:29 just is beyond us. The problem with the singularity, whether it's in a black hole or in
0:50:35 the Big Bang or in technology, is that, by definition, even if
0:50:41 you don't technically lose information, you lose the ability to access the information in the
0:50:46 original context or with any kind of structure. So it's essentially a form of massive forgetting, and
0:50:54 massive loss of context, and therefore massive loss of meaning. So however radical we get, if in the future
    0:51:00 we’re all going to evolve into massive distributed colonies of space bacteria flying around
    0:51:07 and intergalactically or something whatever we turn into i’m all for it i’m in i’m in i’m in but
    0:51:12 the line from here to there has to have memory it has to be continuous enough that we’re learning
    0:51:19 lessons and we we remember if we break that because we want the thrill of polpot’s year zero where from
    0:51:24 now on we’re the smartest people and everybody else was wrong and we start over if we want that break
    0:51:29 we must resist it we must oppose people who want that break year zero never works out well it’s a
    0:51:38 really really bad idea and so that to me i’m like pro extreme futures but anti discontinuity into the
    0:51:43 future and and so that’s a an in-between place to be that’s a little subtle and hard to get across but
0:51:48 I think that's the right place to be. Well, I always try to end these conversations with as much
0:51:55 optimism as possible. So do you have any other good news or rosy scenarios you can paint for
0:52:01 us before we get out of here, about how things are going to be awesome in the future? Right now, we're
0:52:09 in a very hard-to-parse moment. Things are strange, things are scary. And what I keep on telling myself is,
0:52:16 there's always hope in chaos. As much as someone driving chaos might be certain that
0:52:25 it's under their command, it never is. And those of us who watch unfolding chaos looking for signs of
0:52:33 hope, looking for optimism, looking for little openings in which to do something good, we will find them if we
0:52:40 stay alert. And so I'd urge everybody to do that during this period. Jaron Lanier, I'm a fan of your
0:52:45 work. I'm a fan of you as a human being as well. I appreciate you coming in. Oh, well, that's very kind
0:52:52 of you. Thank you so much. And I really appreciate all the effort, and also just the goodwill and warmth,
0:53:03 you put into this interview. I really do appreciate it so much.
0:53:12 All right, I hope you enjoyed this episode. There was a lot going on in this one. Jaron is a unique mind,
0:53:22 and I appreciate the way he thinks about all of this. This conversation did force me to reflect on the
0:53:31 language I use to make sense of AI, and all the assumptions buried in that language. So I hope you
0:53:39 found his insights useful. But either way, as always, we want to know what you think, so drop us a line
0:53:52 at thegrayarea@vox.com, or leave us a message on our new voicemail line at 1-800-214-5749.
0:53:58 And once you're finished with that, if you have a second, please go ahead and rate, review, and
0:54:09 subscribe to the podcast. This episode was produced by Beth Morrissey, edited by Jorge Just, engineered by Erica
0:54:17 Wong, fact-checked by Melissa Hirsch, and Alex Overington wrote our theme music. New episodes of The Gray Area
0:54:25 drop on Mondays. Listen and subscribe. The show is part of Vox. Support Vox's journalism by joining our
0:54:33 membership program today. Go to vox.com slash members to sign up. And if you decide to sign up because of this show,
0:54:51 let us know.

    Why do we keep comparing AI to humans?

    Jaron Lanier — virtual reality pioneer, digital philosopher, and the author of several best-selling books on technology — thinks that we should stop. In his view, technology is only valuable if it has beneficiaries. So instead of asking “What can AI do?,” we should be asking, “What can AI do for us?”

    In today’s episode, Jaron and Sean discuss a humanist approach to AI and how changing our understanding of AI tools could change how we use, develop, and improve them.

    Host: Sean Illing (@SeanIlling)

    Guest: Jaron Lanier, computer scientist, artist, and writer.

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • Democrats need to do something

    AI transcript
    0:00:04 Thumbtack presents the ins and outs of caring for your home.
    0:00:10 Out. Indecision. Overthinking. Second-guessing every choice you make.
    0:00:16 In. Plans and guides that make it easy to get home projects done.
    0:00:21 Out. Beige. On beige. On beige.
    0:00:26 In. Knowing what to do, when to do it, and who to hire.
    0:00:29 Start caring for your home with confidence.
    0:00:31 Download Thumbtack today.
    0:00:37 Let’s drive good together with Bonterra and Volkswagen.
    0:00:43 Buy any sustainably focused Bonterra bathroom tissue, paper towel, or facial tissue,
    0:00:47 and you could win a 2025 Volkswagen All-Electric ID Buzz.
    0:00:51 See in-store for details. Bonterra for a better planet.
    0:00:55 No purchase necessary. Terms and conditions apply. See online for details.
    0:01:03 If I had to pick one word to really capture American politics, for most of my adult life at least,
    0:01:07 it wouldn’t be hope or change or forward or future.
    0:01:11 The word I’d choose is inertia.
    0:01:15 It doesn’t matter what the slogans are or what the speeches say.
    0:01:21 In terms of getting things done, or fundamentally changing how we do things,
    0:01:24 both parties seem slow to solve problems.
    0:01:27 Slow to build new things.
    0:01:29 Slow to change anything, really.
    0:01:33 Until now.
    0:01:39 As you know, the Trump administration has been passing executive orders
    0:01:43 and implementing new policies at a breakneck pace.
    0:01:50 Attempting to remake entire swaths of the federal government.
    0:01:53 And you might not like what they’re doing.
    0:01:54 I don’t.
    0:01:56 But they are doing something.
    0:01:59 And the Democratic opposition?
    0:02:02 Well, they don’t seem to have the answers.
    0:02:09 At the very least, they cannot articulate a different vision for America’s future that the country wants.
    0:02:11 Why is that?
    0:02:17 Why couldn’t Democrats craft a message that resonated with voters in 2024’s election?
    0:02:22 And why, in the face of Trump and Musk and Doge,
    0:02:25 in a relentless attack on American institutions,
    0:02:30 are Democrats unable to convince America that their way of governing is better?
    0:02:36 I’m Sean Elling, and this is The Gray Area.
    0:02:43 Today’s guest is Ezra Klein,
    0:02:45 the former host of this podcast,
    0:02:49 the current host of The Ezra Klein Show at The New York Times,
    0:02:52 and the co-author of a new book called Abundance,
    0:02:55 which he wrote with journalist Derek Thompson.
    0:03:04 Ezra argues that in states run by Democrats,
    0:03:07 policy failures have contributed to the rising cost of living.
0:03:09 To address this crisis,
    0:03:12 and really any crisis America is facing,
    0:03:17 it needs to be easier to build and invent the things that America needs.
    0:03:21 And in our current system, that’s almost impossible to do.
    0:03:26 Not because we don’t have the means, the technology, or the know-how.
    0:03:28 We have all of that in spades.
    0:03:31 What we don’t have is a political economy that makes sense.
    0:03:36 Ezra believes that this idea should be the major,
    0:03:39 maybe the only focus of liberal politics in America.
    0:03:42 So I invited him onto the show, his old show,
    0:03:43 to tell me more.
    0:03:49 Ezra Klein, welcome to the show.
    0:03:53 Ah, it’s like stepping back into an old couch
    0:03:56 that you’ve sat in so much that it slightly has an imprint of your body.
    0:03:57 I finally feel like I’m back home.
    0:03:59 We’re glad to have you.
    0:04:00 I’m glad to be here.
    0:04:03 All right, let’s get to the book, Abundance.
    0:04:07 You want this book to reorient liberal thinking in America.
    0:04:10 Tell me, what are you looking to change?
    0:04:15 I think it’s important for liberals, for progressives,
    0:04:22 to recenter technology as an engine of social progress.
    0:04:25 Most liberals can tell you which five social insurance programs
    0:04:27 they’d like to create or substantially expand,
0:04:29 but they can't tell you which five technologies
0:04:32 they want the government to really organize resources and intention
0:04:34 towards, pulling them from the future into the present.
    0:04:38 So the idea of Abundance is that to have the future we want,
    0:04:41 we need to build and invent the things we need.
    0:04:44 Some of the things we need to build are things we know how to build,
    0:04:47 like housing, like clean energy, like high-speed rail.
    0:04:50 Some of the things are things we need to invent.
    0:04:52 We are not going to hit our climate targets.
    0:04:54 I mean, we’re not currently on pace to hit them at all,
    0:04:56 but we’re definitely not going to hit them
    0:04:57 if we cannot figure out things like green cement
    0:05:00 and low-carbon or low-emissions jet fuel,
    0:05:02 things we literally do not have,
    0:05:05 certainly not at an affordability point we can scale.
    0:05:09 There are problems you cannot solve without innovation.
    0:05:13 So this is really an effort to put building and innovation,
    0:05:17 the expansion of supply, at the center of liberalism.
    0:05:20 Well, one thing I do appreciate about the book
    0:05:24 is that you’re not trying to offer a suite of policy solutions.
0:05:26 It’s more about articulating the questions
0:05:29 you think our politics should revolve around.
    0:05:33 Why do you think it’s important to begin with the right questions?
    0:05:35 You see what you’re looking for.
    0:05:40 And I think that American liberalism has learned to look
    0:05:42 for opportunities to subsidize.
    0:05:44 Health insurance is too expensive.
    0:05:45 Can we make it subsidized?
    0:05:49 If people need housing, we give them a rental voucher, sometimes.
    0:05:53 If they need to go to college, we give them a Pell Grant.
    0:05:55 If they need food, we give them SNAP.
    0:06:00 If they need income as a retiree or as an elderly person,
    0:06:01 we give them Social Security, right?
    0:06:06 We know how to look for opportunities to do money or voucher-like things.
    0:06:07 That’s really important.
    0:06:12 But we do not look for opportunities to expand supply.
    0:06:13 And that creates two problems.
    0:06:18 One is that if you subsidize something and you don’t have enough supply of it, you will just
    0:06:20 have price increases or rationing.
    0:06:25 The other is that they’re just things you need that if you don’t increase the supply of them,
    0:06:26 you’re just not going to have.
    0:06:31 And look, I’m a Californian, and when I look around my home state where I lived for much
    0:06:35 of the writing of the book, and I think, what has deranged Californian politics?
    0:06:40 Why can Gavin Newsom not run for president in 2028 as he wants to do and say, elect me,
    0:06:42 and you can all have the California dream?
    0:06:44 Because nobody thinks it’s a dream.
    0:06:45 It’s losing people.
    0:06:46 And why is it losing people?
    0:06:47 Because the cost of living is too high.
    0:06:49 And why is the cost of living too high?
    0:06:50 We don’t have enough of the things we need.
    0:06:53 We don’t have enough supply of housing, child care, et cetera.
    0:06:59 And so you will get different answers to the question of how to expand different things.
    0:07:03 If you ask me why it’s hard to lay down transmission lines, that is a different answer
    0:07:06 than why is it hard to build affordable housing in San Francisco.
    0:07:10 But just simply asking the question of how do we get more of the thing we want,
    0:07:15 that I think is a more productive place to start and one that just honestly a lot of
    0:07:18 liberal governance is going to ride by not centering.
    0:07:24 Well, you know, people will hear these kinds of complaints and they will immediately think
    0:07:26 of all the ways the other side is to blame.
    0:07:34 But you do say pretty early on that some of these outcomes reflect an ideological conspiracy
    0:07:35 at the heart of our politics.
    0:07:37 What’s the argument here?
    0:07:43 So I think that liberals, frankly, conservatives too, are comfortable with the narrative that
    0:07:49 we had a conservative movement that arose in the latter half of the 20th century, has attained
0:07:54 yet more power in the 21st, that is anti-government, that wants to, as Grover Norquist famously
    0:07:56 put it, strangle government in a bathtub.
    0:08:03 That doesn’t really explain, though, why governance in places where conservative Republicans have
    0:08:08 functionally no power, California, Illinois, New York, is pretty bad.
    0:08:14 And to understand that, you have to start looking at something else that does not get as much
    0:08:19 narrative weight in our politics, which is starting in, again, the back half of the 20th
    0:08:26 century, there was a liberalism, the new left, that arose in response to the New Deal left.
    0:08:30 And what New Deal liberalism put at its center was growing to build things.
    0:08:32 We had a rapidly expanding population.
    0:08:34 We were this, you know, new superpower.
    0:08:36 And we went on this orgy of building.
    0:08:37 And we often built recklessly.
    0:08:39 We built in ways that damaged the environment.
    0:08:44 We, you know, I grew up outside of Los Angeles at a time when you would have that curtain of
    0:08:47 smog descend and your eyes would water and people would cough.
    0:08:49 And it was really bad for kids and, frankly, adults.
    0:08:55 And so this sort of liberalism emerged that was about making it harder to build, that was
    0:09:00 about making sure government couldn’t do what, say, Robert Moses did in New York and cut
    0:09:02 a freeway right through, you know, a marginalized community.
    0:09:07 And frankly, more than that, it ended up being a liberalism that really made it impossible to
    0:09:09 cut a freeway through an affluent community.
    0:09:13 And a lot of this was not just well-intentioned.
    0:09:14 It worked.
    0:09:15 We cleaned up the environment.
    0:09:17 We cleaned up the air.
    0:09:18 We cleaned up water.
    0:09:22 We did make it harder for government to do stupid things or act without thinking about
    0:09:22 its actions.
    0:09:26 Over time, those things grew and grew and grew.
0:09:30 Those statutes, those processes, those movements. Liberals became more affluent.
    0:09:31 They had more to defend.
    0:09:36 And so in places even where you didn’t really have a strong conservative movement, what you
    0:09:44 did develop was a way of doing government that was so coalitional, that had so many opportunities
    0:09:50 for veto, had so many opportunities for individuals or nonprofits to sue the government, that you
    0:09:51 just couldn’t get shit done.
    0:09:57 And so construction productivity has been functionally falling in America for a very long time or stagnating
    0:09:58 in some areas.
    0:10:03 And so as the years have gone by, we’ve gotten really good at building in the digital world.
    0:10:07 We can make cryptocurrencies and AI and this whole expansive internet and really quite
    0:10:09 shitty at building in the real world.
    0:10:17 Look, I think rattling off a bunch of numbers isn’t awesome, but I have to just at least mention
    0:10:21 a couple here because it just illustrates the problem, right?
    0:10:22 So this is from your book.
    0:10:29 It cost about $609 million to build a kilometer of high-speed rail in the U.S.
0:10:31 $609 million.
    0:10:32 Just rail, not high-speed.
    0:10:34 Oh, even better.
0:10:37 In Germany, it’s $384 million.
0:10:39 In Canada, $295 million.
0:10:41 Japan, $267 million.
    0:10:44 And in Portugal, fuck, they’re really doing something right.
    0:10:46 It’s only $96 million.
    0:10:48 How is that even real?
    0:10:53 So one thing to note about that is that conservatives will say, yeah, the government sucks.
    0:10:54 Don’t use it.
    0:10:56 But those countries have governments.
    0:11:00 Those countries actually have higher union density than the U.S. does.
    0:11:04 So there is something about the way we do government here, the way we do building here.
    0:11:06 And there’s a bunch of different answers to that.
    0:11:10 One of the big ones is we are very focused on adversarial legalism, as it’s called.
0:11:17 We’ve made it so that the primary way people constrain the government is by suing it.
    0:11:19 Suing it takes a long time.
    0:11:24 I mean, and, you know, at this moment, people are glad we have a way to sue the government under Donald Trump.
    0:11:27 So the point is not that it is always and everywhere bad.
    0:11:36 But nevertheless, there is a dimension where we have made it so hard for the government to act, so slow for it to act, that it just functionally can’t act.
    0:11:43 And one thing about those numbers that you then see is that we just don’t do as many big infrastructure projects anymore for all kinds of reasons.
    0:11:47 We’re very afraid of doing anything that requires tunneling in a way they’re not in other countries.
    0:11:50 The Second Avenue subway in New York City is like a total nightmare.
    0:11:55 And we have just created ways of building that don’t work.
    0:11:57 I wish they did.
    0:11:58 What’s the Second Avenue subway?
    0:12:07 Oh, it’s a subway extension in New York that has been planned for a very, very long time that was supposed to be much more ambitious than it will now be.
    0:12:15 Look, when they began building the New York subways, they opened the first 28 stations, I think it was, in four years, if I’m not wrong.
    0:12:20 It takes decades now to do anything, to do like one station.
    0:12:44 You would think, with the advances in machinery we have, with the advances in imaging we have, with the advances in 3D computerized drafting that we have, I mean, you would think, with everything we have built, advanced machinery-wise, since 1908, we would make things bigger, better, faster, right?
    0:12:46 We would be just way better at building things than we were then.
    0:12:48 But we’re just not.
    0:12:50 I mean, we are safer at building them.
    0:12:53 There are things we were better at planning for when we build them.
    0:12:55 I don’t want to suggest that no advancement has happened.
    0:12:58 But they built the Empire State Building in a year.
    0:12:59 A year.
    0:13:01 We just can’t do that anymore.
    0:13:04 And the reason isn’t that we have forgotten technique.
    0:13:11 And the reason isn’t that we haven’t had things advance in terms of machinery and building.
    0:13:13 The reason is we’ve made the politics of building very, very difficult.
    0:13:16 And we’ve made the process of building very, very cumbersome.
    0:13:21 I talk about the example of California high-speed rail at some length, but I think it’s a good one.
    0:13:24 And I could say a million things about it, but I’ll say this.
    0:13:27 High-speed rail replaces cars.
    0:13:28 It’s pretty clean, right?
    0:13:34 It’s a good—the reason to do it, in part, is it is an environmentally friendly form of transportation.
    0:13:41 The effort to environmentally clear the high-speed rail line that California intended to use began in 2012.
    0:13:48 By the time I wrapped the book, at the end of 2024, it was almost, but not quite done.
    0:13:51 12 years, and it was not finished.
0:13:59 And the question that that environmental review was asking was not, is having high-speed rail better than not having it?
0:14:10 It was, in every individual parcel of track, had they considered all the possible consequences of having it,
0:14:14 mitigated all the possible downsides, which, of course, the status quo does not have to do.
    0:14:21 And, you know, most importantly, bulletproofed themselves as much as they can against lawsuits, which can take years to play out.
    0:14:26 This replicates across clean energy efforts.
    0:14:32 Congestion pricing in New York City was held up for years in environmental assessment.
    0:14:35 And these are for things that are good for the environment.
    0:14:43 So this is—it’s one example, but these are liberal policies that liberals defend that make it very hard for liberals to deliver on the things liberals say they are going to give people.
    0:14:44 That’s a problem.
    0:14:54 I just want to stress that part of what makes this so maddening is that it’s an outcome basically no one really wants, right?
    0:15:04 It’s the system, it’s the incentive structure, it’s individuals making narrowly rational decisions, which in the end produce incredibly stupid, unhelpful results.
    0:15:07 That is definitely a big part of it.
    0:15:11 Some things are drift, some things are accidental, some things are unseen, and some things are intended.
    0:15:19 When we talk about housing, which is different than something like rail, you’re dealing with a problem that housing has become a core financial asset.
    0:15:26 And that asset is often made more valuable, or at least people believe it will be made more valuable, by scarcity.
    0:15:39 And the idea that, you know, you’ve got this house on a block of San Francisco or Brooklyn or whatever, and you don’t want a large affordable housing complex going up down the street, it’s not crazy.
    0:15:41 I mean, that might actually hurt your parking.
    0:15:44 That might actually hurt your home values, depending on how it plays out.
0:15:54 But now you’ve got a real problem, because you’ve made the engine of wealth something people can only feel confident will keep going up if we make sure we don’t build enough housing around it.
    0:15:56 But we need to build enough housing around it.
    0:15:57 And so who’s winning?
    0:15:59 You know, the people already who have the assets.
    0:16:05 And liberalism has to ask, like, does it hold the values it puts on lawn signs?
0:16:07 You know, no human being is illegal, and kindness is everything.
0:16:12 Or is it, you know, an I-got-mine,
0:16:15 sorry-you-didn’t-get-yours ethos?
    0:16:32 Support for the gray area comes from Shopify.
    0:16:34 Running a business can be a grind.
    0:16:39 In fact, it’s kind of a miracle that anyone decides to start their own company.
    0:16:46 It takes thousands of hours of grueling, often thankless work to build infrastructure, develop products, and attract customers.
    0:16:50 And keeping things running smoothly requires a supportive, consistent team.
    0:16:57 If you want to add another member to that team, a platform you and your customers can rely on, you might want to check out Shopify.
    0:17:03 Shopify is an all-in-one digital commerce platform that wants to help your business sell better than ever before.
    0:17:10 It doesn’t matter if your customers spend their time scrolling through your feed or strolling past your physical storefront.
    0:17:15 There’s a reason companies like Mattel and Heinz turn to Shopify to sell more products to more customers.
    0:17:18 Businesses that sell more sell with Shopify.
    0:17:22 Want to upgrade your business and get the same checkout Mattel uses?
    0:17:28 You can sign up for your $1 per month trial period at Shopify.com slash Vox, all lowercase.
    0:17:32 That’s Shopify.com slash Vox to upgrade your selling today.
    0:17:34 Shopify.com slash Vox.
    0:17:41 Support for the gray area comes from Upway.
    0:17:47 If you’re tired of feeling stuck in traffic every day, there might be a better way to adventure on an e-bike.
    0:17:55 Imagine cruising past traffic, tackling hills with ease, and exploring new trails, all without breaking a sweat or your wallet.
    0:18:02 At Upway.co, you can find e-bikes from top-tier brands like Specialized, Cannondale, and Aventon.
    0:18:06 At up to 60% off retail, perfect for your next weekend adventure.
    0:18:11 Whether you’re looking for a rugged mountain bike or a sleek city cruiser, there’s a ride for everyone.
0:18:20 And right now, you can use code GRAYAREA150 to get $150 off your first e-bike purchase of $1,000 or more.
0:18:29 There are over 500,000 small businesses in B.C., and no two are alike.
    0:18:30 I’m a carpenter.
    0:18:31 I’m a graphic designer.
    0:18:33 I sell dog socks online.
    0:18:37 That’s why BCAA created One Size Doesn’t Fit All Insurance.
    0:18:40 It’s customizable, based on your unique needs.
    0:18:47 So whether you manage rental properties or paint pet portraits, you can protect your small business with B.C.’s most trusted insurance brand.
    0:18:53 Visit bcaa.com slash smallbusiness and use promo code RADIO to receive $50 off.
    0:18:54 Conditions apply.
    0:19:14 Look, I guess we’re a couple of months into this new administration.
    0:19:19 People feel as though the government is acting very rapidly.
    0:19:21 How do you make sense of that?
    0:19:27 I mean, is that just because it’s basically breaking shit and breaking shit is significantly easier than building shit?
    0:19:39 Elon Musk and Donald Trump decided, certainly Musk decided, that he just wasn’t going to treat a lot of things that have constrained past administrations as real and binding.
    0:19:45 And it turns out from watching them, there’s a lot more you could do than people thought you could do.
    0:19:48 The civil service protections were not nearly as binding as people made them seem to be.
    0:19:55 I do not like what Elon Musk is doing in terms of indiscriminately and ideologically firing huge swaths of the federal workforce.
    0:20:01 But I believe four months ago, and I believe today, that it was way too hard to hire and fire in the civil service.
    0:20:11 And because liberalism never fixed that in a way that was conceptually and morally appropriate, now we’re getting this burn-it-down approach.
    0:20:13 And I think that’s true in a lot of things.
    0:20:20 If you do not make government work, someone else will eventually weaponize the dissatisfaction with it and burn it to the ground.
    0:20:30 And liberals had no really good things to say about cost of living and affordability in the 2024 election, in part because they themselves have been bad on cost of living and affordability.
    0:20:32 The places they govern have become unaffordable.
    0:20:37 And that was part of why they lost to Donald Trump in an election that was about cost of living and affordability.
    0:20:42 I don’t want to put everything on liberals or liberalism.
    0:20:48 The right deserves – the right has to take responsibility for its own actions, its own failures.
    0:20:51 The things they want are very different than the things I want.
    0:20:53 But yes, Musk has come in.
    0:20:54 Trump has come in.
    0:21:01 And they have not treated process as binding or even something worth respecting in the way liberalism has.
    0:21:09 And I think the two coalitions have developed mirror image pathologies, which is liberals are much too respectful and obsessed by process.
    0:21:15 And the right now has functionally no process and no respect for it and no respect for the legality of things.
    0:21:26 And, you know, I would like to see something that is more thoughtfully integrating of these perspectives.
    0:21:29 I mean, look, can I just vent for a second?
    0:21:30 You know what, man?
    0:21:31 It’s a podcast.
    0:21:32 It’s your podcast.
    0:21:36 I mean – okay.
    0:21:39 So, Democrats believe in government, right?
    0:21:41 Have used government, as you were saying, to do great things.
    0:21:51 And I agree that they have created or helped create a wildly sclerotic system that makes it very difficult, if not impossible, to build stuff and do stuff.
    0:21:58 But meanwhile, Republicans don’t really believe in government except for defense and national security.
    0:22:00 They want to dismantle it.
    0:22:01 They want to privatize everything.
    0:22:07 And this dynamic is eternally to their advantage because, again, breaking shit is easier than building shit.
    0:22:13 And in their efforts to break government, they’ve increased the public’s disgust with it because it keeps not working.
    0:22:15 And this is the doom loop.
    0:22:20 And I definitely take your point about absurd liberal proceduralism.
    0:22:28 But I do think having one of our two government parties enter into politics with the explicit aim of making government not work is a problem.
    0:22:34 And I don’t know how liberals and Democrats can solve that because this is a 51-49 country.
    0:22:36 Well, maybe it wouldn’t be if we were better at governing.
    0:22:37 You think so?
    0:22:38 I hope so.
    0:22:44 This thing liberals do, where it’s like, oh, man, they’re so bad.
    0:22:45 And they are.
    0:22:47 Like, I am fucking furious.
    0:22:57 And you know what I also am is I’m fucking furious that liberals gave up the, like, mantle of people who would fix your problems to this band of idiots.
    0:22:59 It makes me angry.
    0:23:01 Like, it should make other people angry.
    0:23:05 And just telling yourself endlessly that they are so bad, what are we going to do?
    0:23:07 Well, you know what would be good if we did?
    0:23:11 Created a situation where people said, California, that’s a well-run state.
0:23:16 Maybe any one of the 18 national-level figures it has recently produced should be president.
    0:23:19 New York, there’s a big economically important state.
    0:23:23 Maybe somebody from it should be a plausible national figure.
0:23:33 Like, it is easier to run for president as the governor of Texas or Florida than as the governor of California or New York.
    0:23:35 Now, that’s not true for everywhere.
    0:23:36 Jared Polis has done a good job in Colorado.
    0:23:38 And you know what happened in Colorado in 2024?
    0:23:46 They didn’t suffer a complete collapse of the Democratic vote share in the way that happened in California and New York.
    0:23:48 Because on some level, governing will does matter.
    0:23:55 And, like, I don’t think being a nihilistic party is highly popular, but being an ineffectual party is also not highly popular.
    0:24:05 So what you’re preaching, right, doing big things, building big things, actually leading, governing, investing in the future, people will say, well, you know, Joe Biden kind of did this, right?
    0:24:06 Or he appeared to do this.
    0:24:09 He passed the bipartisan infrastructure bill.
    0:24:10 He did the Chips and Science Act.
    0:24:12 He did the Inflation Reduction Act.
    0:24:13 And it’s like it never happened.
    0:24:14 He got no credit.
    0:24:16 It passed away like a fart in the wind.
    0:24:19 So, like, what is the lesson of that for you?
    0:24:25 Did all of that just fail politically because maybe the money was allocated, but for all the reasons you’ve outlined, nothing actually got done?
    0:24:27 Or is it something else?
    0:24:28 There are a couple things here.
    0:24:34 So, one is that there was a huge problem with running Joe Biden for president a second time.
    0:24:35 There just was.
0:24:38 I mean, obviously, I was somebody who was not in favor of that.
0:24:49 But I think, if you change nothing about the election except that Joe Biden is 65 and can effectively tell a story about his own administration, he would have won re-election.
    0:24:49 I really do.
    0:24:53 Now, his record alone was not that strong.
    0:24:58 Part of that was inflation, which they bear a modest amount of the blame for.
    0:25:05 It is the case that they put too much demand into the economy at a time when supply chains were choking, and that ended up being a bad idea.
    0:25:15 But, yeah, on the other side, when you make it slow to get these things built, it really does harm you.
    0:25:20 So, they got $42 billion for broadband access in poor communities.
    0:25:22 How many people got broadband?
    0:25:23 Approximately zero.
    0:25:28 So, yeah, Biden’s a complicated case because I do think there are movements towards abundance.
    0:25:34 I don’t think there was a serious effort to show that they’re making government spend in a way that was, you know, meant to benefit people.
0:25:47 I think it’s a fucking problem that Doge is a dark Republican con as opposed to something that was a bright Democratic idea.
    0:26:05 I would have loved an actual Department of Government efficiency that was acting with real aggression, not the illegal and almost nihilistic levels of aggression under Elon Musk, but was really, really, really upset about places where government was failing and was making a big show of that.
0:26:10 And you saw something like this under Bill Clinton with the Reinventing Government Initiative, which Al Gore led.
    0:26:17 But under Biden, they had this hyper-coalitional approach to politics and hyper-bureaucratic approach to the federal government.
    0:26:22 And, you know, they didn’t have any real—they never bought themselves credibility on that.
    0:26:25 They never seemed upset about things like they were doing wrong, right?
    0:26:27 Everything just kind of got explained away.
    0:26:36 And so, yeah, then at the end of the day, people felt prices had increased a bunch, and they were going around saying, no, no, no, we’ve spent all this money to spark a manufacturing boom.
    0:26:39 And, you know, the two things didn’t connect.
    0:26:42 Okay.
    0:26:48 So, let’s talk about how to fix shit, okay?
0:27:04 If the big problem on the left (and this is a book written broadly from the left, by the left, for the left) is this soul-crushing proceduralism, what is the solution to that?
    0:27:06 There is no one solution.
    0:27:11 I’m not, you know, it’s not one weird trick to get rid of your belly fat here.
    0:27:22 What I see us as trying to do is build—in certain ways, rebuild—an ideological tendency in politics, but on the left.
    0:27:26 And it will take time for that to take root and do big things.
    0:27:30 It will take time for it to change processes, if it ever does.
    0:27:33 It will take time for it to do new legislation.
    0:27:43 You know, one of the most inspiring of the movements here that I think are part of, like, this broader sense of refocusing on supply is the YIMBY movement.
    0:27:45 Can you just say what the YIMBYs are just for people to know?
    0:27:56 The Yes in My Backyard movement, which is basically a sort of tendency—they want to be bipartisan, but at least began, like, as an intra-liberal fight over saying, no, no, to be a liberal, you can’t be fighting this development.
    0:27:58 You can’t be fighting new homes.
    0:27:59 You can’t be fighting affordable housing.
    0:28:02 You can’t say we can build nothing and then say you’re a liberal.
    0:28:07 Liberalism requires building enough that living in this city is affordable for the working class.
    0:28:10 And they’ve had incredible intellectual victories.
    0:28:13 Kamala Harris is running on building three million new homes.
0:28:18 Barack Obama, you know, functionally brought up YIMBYism during his DNC speech.
    0:28:23 But again, in the place where it is most powerful, it has not moved the needle in a significant way.
    0:28:27 And it’s because it’s still been bogged down in these coalitional fights.
    0:28:31 And, you know, I was talking to somebody who is a developer down there, and I was saying, look,
    0:28:36 they’ve passed all these bills in California to give you a fast track to build housing.
    0:28:38 Why aren’t you building more housing?
    0:28:39 He said, oh, I don’t use any of those bills.
    0:28:41 I said, well, why?
    0:28:46 He’s like, well, in order to use those bills, I have to agree to a whole new set of standards.
    0:28:49 I have to pay higher wages, prevailing wages.
    0:28:50 I have to do all these different things.
0:28:56 So in the end, the fast track of that would end up costing me more than just not using it at all.
    0:28:58 And he’s like, that’s how I am.
    0:28:59 That’s how all my developer friends are.
    0:29:01 Like, the budgeting of it just doesn’t work out.
    0:29:05 And, you know, all these things are good on some level.
    0:29:06 Like, I want people to pay high wages.
    0:29:12 But when you have a housing crisis, right, California in 2022, it had 12% of the country’s population.
    0:29:14 It had 30% of its homeless population.
    0:29:18 It had 50% of its unsheltered homeless population.
0:29:21 California has an astonishing homelessness crisis.
    0:29:23 And that is driven by a housing crisis.
    0:29:33 When you have a housing crisis and you’re passing a bunch of bills to build more housing and your bills aren’t working, well, then you have to ask, like, are the coalitional decisions you’re making good ones?
    0:29:42 Or do you have to deal with the housing crisis in your housing crisis bills and try to think about wages and do an income tax credit or whatever you want to do in other bills?
    0:29:46 But if your bills to solve your crisis are not solving your crisis, you’ve got to do something different.
    0:29:48 It’s not going to be easy.
    0:29:57 It’s going to take a political movement that, you know, over time begins to just see things differently at a lot of different levels and chip away at things in a lot of different ways.
    0:29:59 And it will take aggressive leadership.
    0:30:03 Again, you know, I don’t want to see what Elon Musk and Doge are doing to become the norm.
    0:30:13 But I would like to see much more aggressive leadership from liberal politicians who are furious at government not working and insistent that it has to work and has to deliver the outcomes they actually promise.
    0:30:31 This week on Unexplainable, the final installment of Good Robot, our four-part series on the stories we tell about AI.
    0:30:36 So what I want you to do first is I want you to open up ChatGPT.
    0:30:38 This time, the robots.
    0:30:46 And I want you to say, I’m going to give you three episodes of a series in order.
    0:30:47 Come for our jobs.
    0:30:49 Why are you laughing?
    0:30:50 I don’t know.
    0:30:51 It’s like a little creepy.
    0:31:01 Good Robot, a four-part series about AI from Julia Longoria and Unexplainable, wherever you listen.
    0:31:27 Okay, so let’s just assume that we are able to clear the way for big innovations and invention.
    0:31:32 What do you think we most need and how quickly do we need it?
    0:31:35 So we’ve had abundance of some things for quite some time, right?
    0:31:39 We’ve really built the global economy to give us an abundance of consumer goods.
    0:31:46 Forty years ago, you could go to public college debt-free, but you couldn’t have a flat-screen television.
    0:31:48 And now it’s basically the reverse, right?
    0:31:50 You can have a flat-screen television, but you can’t go to college debt-free.
    0:32:05 So we’re sort of more interested in abundance in the things that are the building blocks of what we think of as not just a good life, but the building blocks of a kind of creative and generative, productive life.
    0:32:17 So the things people really need that allow them to do other things, education, health care, and inside of health care, it doesn’t mean just everybody having insurance, but it means having cures to as much as we can, right?
    0:32:28 The value of health insurance, you know, my partner, she’s written a lot about this, so this is not me speaking out of turn, but, you know, she has a bunch of very complex and strange autoimmune diseases.
    0:32:40 Our health insurance would be a hell of a lot more valuable to me if it had cures for all of them, you know, and this is true for anybody who, you know, who knows people or loves people or they themselves suffer from difficult diseases.
    0:32:43 So what, like, the pace of medical innovation really matters.
    0:32:55 Housing, like, you just need to be able to build homes, and I want to see working-class families be able to live in the big, economically productive cities.
    0:33:11 And that matters not just because, like, it’s fun to live in New York City or San Francisco, but because it is a fundamental path to productivity and to social mobility and to opportunity to have all classes living in the places that are the biggest economic engines.
    0:33:21 And one thing we’ve seen that’s a very, very worrying trend is it used to be that poor people migrated to rich places, and then they got richer, and now they migrate away from them because they can’t afford to live in them.
    0:33:27 And that takes away all the opportunity those rich places used to offer to people who weren’t already rich.
    0:33:30 Michael Bloomberg used to talk about New York City as a luxury good.
    0:33:33 Cities are not supposed to be luxury goods.
    0:33:39 They are engines of opportunity, and when we gate them, we have turned off something very fundamental in the economy.
    0:33:50 So I love the section at the end of the book about these periods of political order where there’s a broad alignment of values, right?
    0:34:02 So after the wreckage of the Great Depression and World War II, we have this spirit of solidarity and collective action, and the power of the state expands enormously, and this is the New Deal era.
    0:34:15 And then this consensus collapses in the 70s, and the pendulum swings back in the opposite direction, and we get the neoliberal era, which is defined much more by individualism and consumerism.
    0:34:26 And I guess my question to you is, to undertake the sort of project you’re talking about here, this era of abundance, that will require a shift in priorities and outlook.
    0:34:33 And do you think that’s possible in this environment, in the absence of some kind of truly epic calamity?
    0:34:37 Like, do we have the attentional resources to course-correct as a country anymore?
    0:34:43 I never think things happen all the way or none of the way.
    0:34:46 Like, there was no pure neoliberal era.
    0:34:48 Nothing in this period was pure neoliberalism.
    0:34:53 Now, there are ideological tendencies that win out during periods.
    0:34:56 But, you know, the neoliberal era is full of contradictions.
    0:35:02 What is opening possibilities right now are very real problems that people have to figure out how to solve.
    0:35:05 Now, history is not, to me, teleological.
    0:35:07 I don’t believe the arc of history bends towards abundance.
    0:35:10 I think that it could go very badly.
0:35:17 One of the things that we see with Trump is, look, that guy could have run as a sort of conservative abundance candidate.
    0:35:19 I mean, he would want different things than I do.
    0:35:21 The values would be different.
    0:35:23 But he’s not.
    0:35:26 He does not want to bring the Texas housing policies to the nation.
    0:35:34 He and J.D. Vance have repeatedly used the housing crisis as a cudgel against immigrants and an argument for why we need to close the border, right?
    0:35:35 That’s a scarcity approach.
    0:35:40 He doesn’t want to increase the flows of international trade by making us build more stuff.
    0:35:43 He’s using tariffs to cut them down.
    0:35:52 Like, Elon Musk is not expanding what the government can do, given that the government is what allowed him to build Tesla, SpaceX, and SolarCity.
    0:35:55 He is trying to slash and destroy what the government can do.
    0:36:01 Right-wing populism loves scarcity because at the core of its politics is a suspicion of the other.
    0:36:09 If there is the feeling or the reality of there not being enough, then we look with a lot of suspicion on those who might take what we have or what we want.
    0:36:12 So I do think it’s going to be up to the left to try to embrace abundance.
    0:36:19 But if we don’t or if we fail, yeah, scarcity could just be the politics that wins out in the day.
    0:36:21 It has in many eras of human history before.
    0:36:33 I wonder if you think we’ll need a fundamentally different kind of communication environment shaped by different tools in order to have something like a constructive form of politics that makes these sorts of changes possible.
    0:36:34 I don’t think that.
    0:36:36 Why not?
    0:36:37 I hope you’re right.
    0:36:44 Because I think that the current information ecosystem is bad.
    0:36:47 I think it has been often bad in human history.
    0:36:52 I don’t think the specific way it’s bad is really at the root of many of the things that I’m worried about.
    0:36:59 And I don’t think the information ecosystem cares one way or the other about local permitting.
    0:37:06 I don’t think the information ecosystem, like, frankly, I think it’s actually quite friendly to all sorts of different forms of futurism.
    0:37:14 I think that it’s not standing in the way of all progress or all change.
    0:37:25 And like one just good example of that is that, you know, it in some ways created Trump, Musk, Vance.
    0:37:28 But it’s not stopping them from doing things.
    0:37:34 And Trump won by the popular vote by 1.5 points.
    0:37:40 So, you could very much imagine a Democrat, you know, like, imagine a different world.
    0:37:42 Joe Biden does not run for re-election.
    0:37:43 We have a Democratic primary.
    0:37:48 Maybe Kamala Harris wins it and has more time to put together a campaign that has more to say about the issues of the moment.
    0:37:49 And she’s better at talking about them.
    0:38:00 Or maybe Josh Shapiro or Gretchen Whitmer or, you know, someone else, Pete Buttigieg, Wes Moore, you know, wins the primary and they run.
    0:38:01 Like, you just can’t tell.
    0:38:06 Like, the thing, this did not all just turn on the information ecosystem.
    0:38:10 Or to the extent it did, it could have, you know, turned in many different ways.
    0:38:18 And we see different things happening in different states, even though all the states are exposed to the same information ecosystem.
0:38:21 I think you’ve got to get a little less monocausal, my friend.
0:38:28 I’ve never been a determinist, but I think I’ve just increasingly become one.
    0:38:31 And look, you can talk me off the ledge here.
    0:38:45 I mean, I think part of, or one of my hang-ups is that I think we’ve lost the capacity as a society to tell ourselves a coherent story about who we are, what we are, where we’re going, what we want.
    0:38:54 And I guess maybe the question is, do we need, do we need, do we actually need to tell ourselves a coherent story in order to move a political project like this forward?
    0:38:57 Did we ever really need a coherent story?
    0:38:59 Or did we ever really have a coherent story?
    0:39:11 I think if your view of politics is that it needs some extremely high level of informational and narrative cohesion to function, then your politics has a real problem.
    0:39:14 Because that’s very, very, very rarely on offer.
    0:39:24 I think one criticism you’ll get from the left is that, you know, what do you attribute to liberal ideology?
    0:39:29 Because part of the problem here is the rules written by liberals decades ago being used to prevent building stuff today.
    0:39:44 Well, that’s really about wealthy, powerful people using their wealth and power to block progress, which is more about class politics than liberal ideology, that these people aren’t really liberals in any meaningful sense, just rich people protecting their turf.
    0:39:46 I don’t know.
    0:39:47 How do you tease that out?
    0:39:49 Does that distinction even make sense to you?
    0:39:54 I don’t have a class politics where I’m like, rich people are always bad and anybody else is always good.
    0:39:56 But there are places where rich people are a huge problem.
    0:39:58 And you get a lot of it in nimbyism.
    0:40:06 You get a lot of it in, you know, Ted Kennedy, the late Ted Kennedy, helping to organize against an offshore wind project near Cape Cod.
    0:40:15 I just think you’ve got to be specific about what you’re talking about and then work through what you think the political opposition is and what the problems are and what the process is.
    0:40:19 I don’t take that as a particularly useful blanket claim.
    0:40:25 Even in the place where you’d expect rich people to speak the most with one voice, should we raise taxes on rich people?
    0:40:27 They actually don’t anymore.
0:40:38 The way polarization has restructured itself, the way income polarization has restructured itself, Democrats are doing better and better with rich people at a time when they’ve become more and more insistent on taxing the rich.
    0:40:41 And so, like, that’s a kind of interesting fact of our politics.
    0:40:43 It has scrambled a bunch of things.
    0:40:48 Democrats sort of think they can, they will get the working class voters they want by saying we’re going to tax rich people.
    0:40:52 They’re weirdly winning more rich voters and fewer working class voters.
    0:40:55 And instead, you have more working class voters for the first time voting for Donald Trump.
    0:40:59 It’s easier if your only problem is rich people.
    0:41:04 It’s hard in the sense that they control a bunch of resources, but it’s easier in that that narrative is super clean.
    0:41:07 What happens when it’s not, though?
    0:41:15 What happens when some of your problems are just, like, upper-middle-class people who are the core of your constituency and you don’t want building happening around them?
    0:41:26 What happens when a bunch of your problems are actually other parts of the government you yourself run that over time have developed turf and funding and kind of stakeholder dynamics?
    0:41:29 And now all of your processes are incredibly difficult.
    0:41:32 So, yeah, rich people are sometimes a problem.
    0:41:34 They’re not the only problem.
    0:41:37 I just, I don’t have a lot of patience for monocasal politics.
    0:41:41 Oh, that feels like a low-key shot there.
    0:41:43 I feel attacked.
    0:41:46 Oh, well, you know, I have more patience for monocasal media politics, maybe.
    0:41:52 I just think everybody, we all have, like, look, abundance is also not a full politics.
    0:41:58 Like, asking the question of how do we solve problems you supply does not tell you every problem.
    0:42:03 It’s not going to tell you how to solve or even what position to take on a bunch of very difficult cultural and social issues.
    0:42:07 It is one set of problems that we could do a better job on.
    0:42:08 And better would be better.
    0:42:17 Yeah, I mean, one of the things I like about it is that it doesn’t necessarily mat neatly and predictably onto partisan cleavages in that way.
    0:42:29 But look, you know, there’s also a movement of people, as you know, who say the only sensible response, actually, at this point in history, is to do the opposite of what you suggest.
    0:42:30 Which is degrowth, right?
    0:42:36 That this whole model of late-stage consumer capitalism has been a moral and ecological catastrophe.
    0:42:40 And we have to scale it back in order to save ourselves.
    0:42:42 To that, you say, what?
    0:42:43 No.
    0:42:45 Say more.
    0:42:49 So I have a long, we have a long discussion of degrowth in the book.
    0:42:49 Yeah.
    0:42:56 And I have a lot I could say and want to say about degrowth, but I’ll say a couple things that are, I guess, maybe narrow.
    0:43:04 One, I do not agree that this era has been, like, a moral, it’s been a bit of an ecological catastrophe, but not a moral catastrophe.
    0:43:11 It’s still not for human beings who, as part of, I do think degrowth has too little preference for human beings in it.
    0:43:16 And the amount of people we’ve pulled out of poverty, the rise in living standards, those are not things to take lightly.
    0:43:20 Then I think, again, you’ve got to, like, look at what your problems are.
    0:43:27 Degrowth has this kind of interesting dynamic to me of being both too much and not enough of a solution to something like climate change.
    0:43:43 If we had not invented our way towards genuinely cheap and plentiful solar energy, wind energy, the possibility of advanced geothermal, new generation battery storage, the only answer we would have to climate change would be sacrifice.
    0:43:46 And sacrifice is just a terrible politics.
    0:43:47 It doesn’t really work.
    0:43:51 If you’re, I would love to see some people run on it and make it work, but I just, I’ve not seen it.
    0:43:53 It doesn’t seem to me to happen.
    0:43:54 Definitely not at this speed.
    0:43:59 And so, our only real shot, in my view, on climate change is technological.
    0:44:05 We have to deploy unfathomable quantities of clean energy as fast as we can.
    0:44:12 And that will also, as we do that, because of the way these sort of innovation curves work, we will get better and better at using the energy.
    0:44:13 It will become more energy dense.
    0:44:20 What has happened to solar and wind and battery storage is genuinely miraculous, has outpaced all expectations.
    0:44:27 And that is at least a viable politics, promising people that they can actually have, like, great technologically advanced lives.
    0:44:36 And it can be built on, you know, abundant clean energy, which is completely conceptually and physically and technologically possible.
    0:44:38 Like, that’s a viable politics.
    0:44:45 Well, the politics of degrowth, degrowth as a political proposition, is like the platonic ideal of a dead fucking end.
    0:44:52 Well, what’s worse is that it doesn’t hold out the possibility that you miss your climate targets by three-tenths of a percent or something.
    0:44:59 It’s that you empower a populist right that promises to burn their way back to prosperity, which is what they are doing right now, right?
    0:45:00 And I think it’s really important.
    0:45:04 Like, when your politics doesn’t work, it’s not like you get half of what you wanted.
    0:45:07 You get, like, the opposite of what you wanted.
    0:45:15 Like, you really have to be, if you care about these problems and you think these problems are near term, hard-nosed about the political consequences of what you’re about to do.
    0:45:18 Well, to that point, I know we’ve got to go soon.
    0:45:31 A lot of what’s happening right now is you have an administration in power that is doing their very best to render government inoperable.
    0:45:43 Does it concern you that damage might be done that will make it more difficult, if not impossible, to do any of these things after they’re gone?
    0:45:47 The damage that will be done concerns me hugely.
    0:45:54 The idea that it would then be impossible to do any of these things, I think if decent people win back power, that’s not accurate.
    0:45:59 I think the damage that will be done is going to be less than the damage of the Civil War, right?
    0:46:08 I mean, less than the—I mean, we have seen countries destroyed by all kinds of natural disasters and wars that were then able to build strong states fairly rapidly afterward.
    0:46:14 I am not one of the people who has a view that what they’re going to do is permanently wreck state capacity.
    0:46:21 But they could create authoritarianism, right, which would weaponize state capacity in a different way.
    0:46:34 My concerns have more to do with democratic backsliding than they do with the idea that we would never be able to rebuild a capable Department of Energy after they shut it down or otherwise corrode it.
    0:46:36 Yeah, and just so you know, I’m not even thinking in terms of permanence.
    0:46:38 I’m thinking just in terms of that 10-year window.
    0:46:40 Oh, you mean on climate change specifically?
    0:46:41 Yeah, specifically.
    0:46:42 Yeah, I’m very fucking concerned.
    0:46:43 I don’t know what to tell.
    0:46:58 Like, I’m more worried, again, than that we won’t be able to do good policies in the next administration, if you imagine a better administration following them in 2029, than I am that they will do everything they can to retard our progress in the next four years.
    0:47:03 And they are trying to, as we speak, destroy the solar and wind industries.
    0:47:06 And this is a really, really, really crucial period.
    0:47:09 I am hair on fire about that.
    0:47:16 But I don’t have a lever to stop it, you know, like, we’re in the timeline we’re in.
    0:47:23 I mean, you also say, too, that you think this era features too little utopian thinking.
    0:47:25 I think you’re right about that.
    0:47:30 But I also know that utopian thinking gets a bad rap.
    0:47:36 But what do you really think of as the practical value of a little utopian thinking?
    0:47:37 What do you mean by that?
    0:47:39 I think you should think about what future you’re trying to create.
    0:47:41 And that helps you work backwards.
    0:47:45 I think that too often we settle for parceling out the present.
    0:47:50 We think about the present and we think about making it a little gentler, a little kinder, a little fairer.
    0:47:53 I think we can think about futures that are quite different.
    0:47:57 And we don’t do that enough for a lot of different reasons.
    0:47:59 The right tends to be relentlessly nostalgic.
    0:48:04 And the left tends to be very just focused on the injustices of the past.
    0:48:07 And in that way, I tend to be more on the left with that.
    0:48:10 And I think there has been a lot of injustice and we should try to do a lot about it.
    0:48:13 But thinking about ways the future could be different I think is important.
    0:48:19 I think for a long time for American liberals, the sort of hoped-for future is Denmark or France.
    0:48:23 It’s a future with a European-level welfare state.
    0:48:25 That has been the grail of where they’re trying to get to.
    0:48:27 And that’s fine.
    0:48:30 That would be better in a bunch of different ways from my perspective.
    0:48:32 But Europe is a basket case.
    0:48:33 Productivity is really low.
    0:48:35 It’s poor compared to us at this point.
    0:48:40 Canada, which a lot of us think of as a much more humane place.
    0:48:45 If Canada were a state, it would be like Alabama level in terms of income per capita.
    0:48:52 You really do create wealth and dynamism differently in America.
    0:48:55 And I think we need a vision of the future that, yes, is kinder.
    0:48:56 Yes, is fairer.
    0:48:57 Yes, is more humane.
    0:48:58 Yes, is more compassionate.
    0:49:02 But also imagines, like, amazing things happening.
    0:49:06 I don’t think that you have to give up on good ideas from Europe or Canada.
    0:49:09 But that shouldn’t be all of it, right?
    0:49:11 We can do better than Denmark.
    0:49:13 We can do better than France.
    0:49:14 We can do better than the UK.
    0:49:15 I’m going to leave it right there.
    0:49:19 Once again, the book is called Abundance.
    0:49:23 Ezra Klein, my friend and former employer, thanks for coming in.
    0:49:25 It was great to be back with you here, Sean.
    0:49:26 Really, really enjoyed it.
    0:49:35 All right, I hope you enjoyed this episode.
    0:49:42 You know, whatever comes of this call for a politics of abundance, I do think there is
    0:49:49 enormous value in trying to articulate a new vision forward or a new framework for liberals
    0:49:56 in particular, because we are stuck right now, stuck in our old categories, stuck in our
    0:49:56 old models.
    0:50:03 And even though there’s a lot of angst and uncertainty right now, there’s also, for
    0:50:08 the same reasons, a lot of potential for something fresh and maybe even hopeful.
    0:50:12 And I got a lot of that in this conversation.
    0:50:16 But as always, we want to know what you think.
    0:50:24 So drop us a line at thegrayareaatvox.com or leave us a message on our new voicemail line
    0:50:28 at 1-800-214-5749.
    0:50:34 And once you’re finished with that, please go ahead and rate and review and subscribe to
    0:50:35 the pod.
    0:50:41 This episode was produced by Beth Morrissey, edited by Jorge Just, engineered by Christian
    0:50:46 Ayala, fact check by Melissa Hirsch, and Alex Overington wrote our theme music.
    0:50:50 New episodes of The Gray Area drop on Mondays.
    0:50:51 Listen and subscribe.
    0:50:54 This show is part of Vox.
    0:50:57 Support Vox’s journalism by joining our membership program today.
    0:51:00 Go to vox.com slash members to sign up.
    0:51:03 And if you decide to sign up because of this show, let us know.
    0:51:04 Thank you.

    American government has a speed issue. Both parties are slow to solve problems. Slow to build new things. Slow to make any change at all.

    Until now. The Trump administration is pushing through sweeping changes as fast as possible, completely changing the dynamic. And the Democrats? They’ve been slow to respond. Slow to mount a defense. Slow to change tactics. Still.

    Ezra Klein — writer, co-founder of Vox, and host of The Ezra Klein Show for the New York Times — would like to offer a course correction.

    In a new book, Abundance, Klein and co-author Derek Thompson, argue that the way to make a better, brighter future, is to build and invent the things we need. To do that, liberals need to push past hyper-coalitional and bureaucratic ways of getting things done.

    In this episode, Ezra speaks with Sean about the policy decisions that have rendered government inert and how we can make it easier to build the things we want and need.

    Host: Sean Illing (@SeanIlling)

    Guest: Ezra Klein, co-author of Abundance and host of The Ezra Klein Show

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • How to live in uncertain times

    AI transcript
    0:00:02 Support for this show comes from Indeed.
    0:00:05 Indeed-sponsored jobs can help you stand out and hire fast.
    0:00:10 Your post even jumps to the top of the page for relevant candidates to make sure you’re getting seen.
    0:00:12 There’s no need to wait any longer.
    0:00:14 Speed up your hiring right now with Indeed.
    0:00:18 And listeners of this show will get a $100 sponsored job credit.
    0:00:22 To get your jobs more visibility at Indeed.com slash VoxCA.
    0:00:30 Just go to Indeed.com slash VoxCA right now and support this show by saying you heard about Indeed on this podcast.
    0:00:33 Indeed.com slash VoxCA.
    0:00:34 Terms and conditions apply.
    0:00:37 Hiring, Indeed, is all you need.
    0:00:40 The world’s hardest problems can’t wait.
    0:00:44 Food security, energy resilience, the digital divide.
    0:00:46 I’m Astro Teller.
    0:00:51 For 15 years, I’ve worked with inventors and engineers to tackle the seemingly impossible.
    0:01:03 In the Moonshot podcast, we take you inside X, Google’s moonshot factory, giving you unprecedented access to the messy, exhilarating journey of turning science fiction into reality.
    0:01:05 This is the Moonshot podcast.
    0:01:07 Out now, wherever you listen.
    0:01:12 It shouldn’t surprise anyone to hear me say that I’m a fan of uncertainty.
    0:01:17 After all, this podcast is called The Gray Area for a reason.
    0:01:22 We try to live in the nuance as much as possible, and we try to explore questions with an open mind.
    0:01:25 And it’s not just a shtick for me.
    0:01:38 I really believe that openness is a necessity, especially today, when the loudest, most obnoxious voices take up so much of the oxygen.
    0:01:44 But I wouldn’t say that tolerance of uncertainty comes naturally to me.
    0:01:48 Like most people, I like to be right, and I fear the unknown.
    0:01:52 The temptation to retreat into certainty is always there.
    0:01:55 I think most of us are like that.
    0:01:57 So why is this the case?
    0:02:01 What is it about uncertainty that’s so scary?
    0:02:05 And what could be gained by letting go of that fear?
    0:02:10 I’m Sean Elling, and this is The Gray Area.
    0:02:26 Today’s guest is Maggie Jackson.
    0:02:34 She’s a writer and a journalist, and the author of a delightful new book called Uncertain, The Wisdom and Wonder of Being Unsure.
    0:02:39 Maggie makes a great case for uncertainty as a philosophical virtue.
    0:02:47 But she also dives into the best research we have and explains why embracing ambiguity not only primes us for learning,
    0:02:53 it’s also good for our mental health, which intuitively makes sense to me, but it’s not something I’ve really thought about.
    0:02:57 So I invited her onto the show to talk about it.
    0:03:04 Maggie Jackson, welcome to The Gray Area.
    0:03:06 Thank you very much for having me.
    0:03:18 So, I don’t always read the epigraph quotes in books, but you have one from the Austrian philosopher Ludwig Wittgenstein, and it caught my eye.
    0:03:21 So I just want to read it to you and hear your thoughts on it after.
    0:03:21 Sure.
    0:03:22 He wrote,
    0:03:30 I know seems to describe a state of affairs which guarantees what is known, guarantees it as a fact.
    0:03:35 One always forgets the expression, I thought I knew.
    0:03:36 Tell me about that.
    0:03:44 Well, I think that’s a great encapsulation or illustration of our individual and general attitudes toward knowledge,
    0:03:50 because we’re so proud of knowing, and I kind of picture knowledge as an island.
    0:03:55 And after that, the implication is that there’s the abyss.
    0:04:02 And his quote hints at what’s really, really important about uncertainty, and that is its mutability.
    0:04:08 Knowledge has this dynamism that we are very loathe to admit to.
    0:04:14 I like to think of uncertainty as wisdom in motion and not the paralysis that we think it is.
    0:04:17 How did you come to this topic?
    0:04:21 Why write a book on the virtues of uncertainty?
    0:04:24 Well, it was almost reluctantly, honestly.
    0:04:26 This is my third book.
    0:04:35 I’ve been writing about topics that are right under our noses, that are woven into our lives, and that we don’t understand or that we deeply misunderstand.
    0:04:38 The first book was about home, the nature of home in the digital age.
    0:04:45 The second book was about distraction, but particularly attention, which very few people could define, and its workings are newly being discovered.
    0:04:54 And then finally, I started writing a book about thinking in the digital age and what kinds of thinking that we need and what’s besieged and what are we gaining, et cetera.
    0:05:08 And the first chapter was about uncertainty, and not only did I discover uncertainty hadn’t really been studied or acknowledged or prized in so many different domains,
    0:05:14 but there was now this new attention to it, lots and lots of new research findings, even in psychology.
    0:05:18 And at the very same time, I was really fascinated by this idea.
    0:05:31 And yet, I was also reluctant because, like many people, I had this idea that it was just something to eradicate, that uncertainty is just something to get beyond and shut it down as fast as possible.
    0:05:37 The book wanted to go there, and it was a hard sell for the author, so to speak.
    0:05:39 But it was great once I got going.
    0:05:44 Why do we fear not knowing what’s going to happen?
    0:05:46 What’s beneath that fear?
    0:05:47 It’s really simple.
    0:05:58 We fear and dislike uncertainty because, as creatures, for survival’s sakes, we need and want answers.
    0:06:01 We have to solve what to eat, what to do, et cetera.
    0:06:15 And so, we evolved to have a stress response when you meet something new or unexpected or murky or ambiguous, and your body and brain kind of spring into action.
    0:06:28 So that when it’s your first day on the new job, or you’re meeting your in-laws for the first time, or all those sorts of lovely life situations, you know, your heart might beat or your palms might sweat.
    0:06:36 But at the same time, and this is newly discovered, neuroscientists are beginning to unpack what happens in the brain.
    0:06:45 The uncertainty of the moment, the realization that you don’t know, that you’ve reached the limits of your knowledge, instigate a number of neural changes.
    0:06:53 So, it’s like your focus broadens, and your brain becomes more receptive to new data, and your working memory is bolstered.
    0:06:55 So, this kind of rings a bell.
    0:06:56 You’re on your toes.
    0:07:01 And that’s why uncertainty at that moment is a kind of wakefulness.
    0:07:12 In fact, Joseph Cable of the University of Pennsylvania said to me, that’s the moment when your brain is telling itself there’s something to be learned here.
    0:07:26 So, by squandering that opportunity or retreating from that discomfort, we’re actually losing an opportunity to learn because your old knowledge is no longer sufficient.
    0:07:32 You need to be wakeful, but you’re also able to take up that invitation.
    0:07:33 So, what happens there?
    0:07:39 Stress hormones flood our brains, and it’s exciting and nerve-wracking at the same time.
    0:07:45 But the consequence of that is that you’re super plugged in, super attuned to what’s happening.
    0:07:50 And is that what makes it fertile for learning and growth and that kind of thing?
    0:07:50 Right.
    0:07:51 No, exactly.
    0:07:58 Because sometimes we think of learning as being associational, the kind of Pavlovian conditioning, etc.
    0:08:06 That’s true, but really the more updated idea of what learning is that it entails surprise.
    0:08:12 I mean, no surprise, no learning is what the neuroscientist Stanislas Dehane told me and writes about.
    0:08:31 So, when we are jolted from our daily routine, i.e., when we are recognizing the limits of our knowledge when we’re uncertain, that’s the time when your body begins to, and your brain begin to turn on, so to speak.
    0:08:43 What’s really also important to note about this is that what I’m describing seems like a kind of unconscious response and what can you do about it, etc.
    0:08:49 But actually, there is a conscious approach to uncertainty that we have.
    0:08:53 It’s really important not to retreat from the unsettling nature of uncertainty.
    0:08:55 We can also lean into uncertainty.
    0:09:05 And I’ve tried to find the right word for this, and I can’t think of any other than leaning in, because it’s sort of a deliberate embrace of that stress and that wakefulness.
    0:09:09 But it’s really important to, again, recognize that this is good stress.
    0:09:14 And another illustration of how this operates is in the realm of curiosity.
    0:09:28 Many different studies related to the curious disposition show that one of the most important facets of our curiosity is the ability to tolerate the stress of the unknown.
    0:09:43 So, in other words, to lean in to that uncomfortable feeling and to know that that’s your brain’s way of signaling to you that this is a great chance to think and to learn and to create.
    0:09:59 And people who are curious, who have that capacity to push through or to embrace the awkwardness of uncertainty, also are more likely to express dissent and they’re more engaged workers, etc., etc.
    0:10:07 I guess you can think of uncertainty as a precursor to good thinking, and I suppose it is.
    0:10:17 But to me, that makes it sound a little too much like a passive state as opposed to an active orientation to the world.
    0:10:23 Maybe what I’m really asking here is whether you think of uncertainty as a verb or a disposition.
    0:10:26 Yes, I would say both.
    0:10:31 Uncertainty is definitively a disposition.
    0:10:35 We each have our personal comfort zone in relation to uncertainty.
    0:10:41 Our impression is that uncertainty is static and that it’s synonymous with paralysis, etc.
    0:10:46 To be uncertain also has that ring of passivity.
    0:11:02 But when you are taking up that invitation to learn that the good stress of uncertainty offers you, there is a bit of a slowing down that occurs to action and to snap judgment and to racing to an answer.
    0:11:11 So, in contrast to what we expect so often, uncertainty involves process, and that’s really, really important.
    0:11:15 And so, you know, we can take one example of experts.
    0:11:27 You know, today we really venerate the swaggering kind of expert who knows what to do and whose know-how is developed over, quote-unquote, so-called 10,000 hours.
    0:11:31 That type of expertise needs updating.
    0:11:43 That type of expert’s knowledge basically tends to fall short in new, unpredictable, ambiguous problems, the kind that involve or demand uncertainty.
    0:11:51 So, years of experience are actually only weakly correlated with skill and accuracy in medicine and finance, etc.
    0:12:07 People who are typical so-called routine experts fall into something called carryover mode where they’re constantly applying their old knowledge, the old heuristic shortcut solutions, into new situations, and that’s when they begin to fail.
    0:12:10 Adaptive experts actually explore a problem.
    0:12:13 They spend more time on a problem than a novice even.
    0:12:17 And so, there’s this motion here, this forwardness, I think.
    0:12:26 I think of uncertainty as honesty because that’s involved with wakefulness, but I also think of it as dynamic, very, very dynamic.
    0:12:39 The idea that not knowing can be a strain does intuitively seem like a contradiction, in part because we’ve all been taught that knowledge is power, right?
    0:12:43 Do you think that cliche is wrong or just a tad misleading?
    0:12:50 Well, I think that knowledge certainly is power, and knowledge is incredibly important.
    0:12:52 Knowledge is the foundation and the groundwork.
    0:13:08 But at the same time, I think that what we need to do to update our understanding of knowledge and to look into the frontiers of not knowing is basically to see that knowledge is mutable and dynamic, ever-shifting.
    0:13:32 I mean, that metaphor of a rock is my own, and that’s people who are intolerant of uncertainty think of knowledge as something that’s like a rock that we are there to hold and defend, whereas people who are more tolerant of uncertainty, who are likely to be curious, flexible thinkers, I like to say they treat knowledge as a tapestry whose mutability is its very strength.
    0:13:45 That’s an important part, because I don’t think anyone, certainly you, would argue that ignorance is a virtue, but openness to revising our beliefs is, and that’s the distinction here.
    0:13:53 Right, and that centers right in what you’re driving about, where does uncertainty lie?
    0:13:58 And it’s really important to note that uncertainty is not ignorance.
    0:14:00 Ignorance is blank slate, the blank slate.
    0:14:03 I might not know anything about particle physics.
    0:14:04 I am ignorant.
    0:14:09 But when I’m uncertain, it could be this way, it could be that way.
    0:14:10 I’m not sure.
    0:14:17 I’m, again, reaching the limits of my knowledge, and that’s the chance where we can push beyond those boundaries.
    0:14:25 And in child development, there’s an expression called the zone of proximal development, which is usually used as a shorthand for scaffolding.
    0:14:30 That’s the place where a child is pushing beyond their usual knowledge.
    0:14:40 They’re trying something complex and new, and the parent might scaffold a little bit and help only where necessary, but letting them do the work of expanding their limits.
    0:14:45 But that’s actually very much something that is human throughout our whole lives.
    0:14:52 Because it’s really, zone of proximal development is, as one scientist told me, the green bud on the tree.
    0:14:54 That’s where we want to be.
    0:14:57 That’s where we thrive as thinkers and as people.
    0:15:07 When we get back from break, avoiding the pitfalls that come with uncertainty.
    0:15:08 Stay with us.
    0:15:25 Support for the gray area comes from Shopify.
    0:15:27 Running a business can be a grind.
    0:15:32 In fact, it’s kind of a miracle that anyone decides to start their own company.
    0:15:39 It takes thousands of hours of grueling, often thankless work to build infrastructure, develop products, and attract customers.
    0:15:43 And keeping things running smoothly requires a supportive, consistent team.
    0:15:50 If you want to add another member to that team, a platform you and your customers can rely on, you might want to check out Shopify.
    0:15:56 Shopify is an all-in-one digital commerce platform that wants to help your business sell better than ever before.
    0:16:02 It doesn’t matter if your customers spend their time scrolling through your feed or strolling past your physical storefront.
    0:16:08 There’s a reason companies like Mattel and Heinz turn to Shopify to sell more products to more customers.
    0:16:11 Businesses that sell more sell with Shopify.
    0:16:15 Want to upgrade your business and get the same checkout Mattel uses?
    0:16:21 You can sign up for your $1 per month trial period at shopify.com slash fox, all lowercase.
    0:16:25 That’s shopify.com slash fox to upgrade your selling today.
    0:16:27 shopify.com slash fox.
    0:16:37 Support for the gray area is brought to you by Wondery and their new show, Scam Factory.
    0:16:44 You’ve probably received some suspicious email or text that was quite obviously a scam, deleted it, and moved on with your day.
    0:16:49 But have you ever stopped to think about who was on the other end of that scam?
    0:16:54 Occasionally, the stranger on the other side is being forced to try and scam you against their will.
    0:17:01 And on Wondery’s new true crime podcast, they’re telling the story of those trapped inside scam factories,
    0:17:08 which they report has heavily guarded compounds on the other side of the world where people are coerced into becoming scammers.
    0:17:15 Told through the eyes of one family’s harrowing account of the sleepless nights and dangerous rescue attempts trying to escape one of these compounds,
    0:17:24 Scam Factory is an explosive new podcast that exposes what they say is a multi-billion dollar criminal empire operating in plain sight.
    0:17:28 You can follow Scam Factory on the Wondery app or wherever you get your podcasts.
    0:17:34 You can listen to all episodes of Scam Factory early and ad-free right now by joining Wondery Plus.
    0:17:42 Support for the gray area comes from Atio.
    0:17:48 Atio is an AI-native CRM built specifically for the next era of companies.
    0:17:54 They say it’s extremely powerful, can adapt to your unique data structures, and scales with any business model.
    0:18:00 Setting up Atio takes less than a minute, and in seconds of syncing your emails and calendar,
    0:18:06 you’ll see all your relationships in a fully-fledged platform, all enriched with actionable data.
    0:18:10 With Atio, you can also create email sequences and customizable reports.
    0:18:18 And on top of it all, you can build AI-powered automations and use its research agent to tackle some of your most complex processes,
    0:18:22 so you can focus on what matters, building your company.
    0:18:27 You can join industry leaders like Flatfile, Replicate, Modal, and more.
    0:18:34 You can go to atio.com slash gray area, and you’ll get 15% off your first year.
    0:18:38 That’s A-T-T-I-O dot com slash gray area.
    0:18:42 A-T-T-I-O dot com slash gray area.
    0:19:03 When does uncertainty become paralyzing?
    0:19:06 I mean, at some point, you have to decide and act, right?
    0:19:11 But maybe the mistake here is assuming that one needs to be certain in order to act.
    0:19:19 Well, you have to be relatively sure, or the road does fork, you know, metaphorically and literally.
    0:19:22 And you, you know, usually have to take one.
    0:19:23 And you’re right.
    0:19:27 Forward motion involves choices, involves decisions, and involves solutions.
    0:19:30 And uncertainty is never the end goal.
    0:19:35 Uncertainty is a vehicle and an approach to life.
    0:19:47 But I also think another really important point is that most of the time, and again, I just kept coming up with this reiteration in my research again and again in different forms.
    0:19:51 But most of the time, it’s our fear of uncertainty that leads to paralysis.
    0:19:53 It’s not the uncertainty itself.
    0:20:11 If we approach uncertainty knowing it’s a space of possibilities, or as another psychologist told me, an opportunity for movement, then we, you know, roll up our sleeves and be present in the moment and start investigating and exploring.
    0:20:21 But if we are afraid of uncertainty, and if you are intolerant of uncertainty, you are more likely to treat uncertainty as a threat.
    0:20:32 The very, very simplest definition of intolerance versus tolerance of uncertainty is treating being unsure or something surprising as a threat versus a challenge.
    0:20:42 And one of the classic signs that you fall on the extreme of the spectrum is that you think surprises, et cetera, are unfair.
    0:20:44 And, of course, we all do at certain points.
    0:20:48 I mean, we all do think that the traffic jam is unfair.
    0:21:04 But maybe if you can think of it as a challenge or reframe it, which actually there were studies during the pandemic, people who were intolerant of uncertainty were more likely to use coping strategies based on denial, avoidance, and substance abuse.
    0:21:06 And, of course, hey, we all did some of that, too.
    0:21:17 But people who are tolerant of uncertainty were more likely to use problem-solving-focused strategies such as reframing the situation.
    0:21:27 You cite some research about fear of the unknown as a root cause of things like anxiety and depression.
    0:21:32 It certainly makes intuitive sense, but what do we know about that relationship?
    0:21:42 Well, this is a very new but rising theoretical understanding of mental challenges in the psychology world.
    0:22:03 That basically, more and more psychologists and clinicians are beginning to see fear of the unknown as the transdiagnostic root or at least vulnerability factor to many, many mental challenges, conditions such as, you know, everything from PTSD to anxiety.
    0:22:24 But by narrowing down treatments to just trying to help people bolster practicing not knowing, basically, boast their practice with tolerance of uncertainty, they’re actually beginning to find that that might be a really important way to shift even intractable anxiety.
    0:22:34 So there’s been one gold standard peer-reviewed study by probably one of the world’s greatest experts on anxiety, Michel Dugas.
    0:22:44 And he found that people who were taught simple strategies to basically try on uncertainty, their intractable anxiety went down.
    0:22:50 They also worried just about as much as most people, which probably is still a lot these days.
    0:22:51 It also helped their depression.
    0:23:09 And then other studies with multiple different kind of populations so that these kind of very laser-sharp focused strategies about uncertainty actually boost at least self-reported resilience in patients with multiple sclerosis who are dealing with a lot of medical uncertainty.
    0:23:12 So that’s, it’s really, really exciting.
    0:23:15 And may I tell one little story about this work?
    0:23:16 Absolutely.
    0:23:23 Michel Dugas told me a wonderful story where he said there was a teacher, one of his early patients, who was afraid of birds.
    0:23:29 And she was so afraid she’d run to her car even from the classroom when she saw one run, you know, fly by her window.
    0:23:32 And he did one thing, one thing only.
    0:23:34 He gave her a guidebook to birds.
    0:23:37 And then she actually ended up adopting a pet bird.
    0:23:40 Well, he said to me, that’s what I’m trying to do with uncertainty.
    0:23:53 So if we can take a closer look and understand this state of mind as something that’s not something to fear, but as something that’s a source of wonder and delight, then we can move forward.
    0:24:01 So much of this is about that need to control things and all the anxiety that comes when you realize you can’t do that.
    0:24:02 And it really matters, doesn’t it?
    0:24:10 Because so much of life is about our attitude, the way we choose to interpret what’s happening to us, the way we choose to respond to it.
    0:24:12 You know, is it a problem or an opportunity?
    0:24:16 Is it pointless suffering or a chance for growth?
    0:24:19 And this not knowing we’re talking about, it’s the same thing.
    0:24:23 It can be a source of wonder or it can be a source of fear.
    0:24:28 And choosing is really our only superpower here, if we have one.
    0:24:29 Right, exactly.
    0:24:31 Choosing and practice, I would say.
    0:24:37 Because there are opportunities to be uncertain that are threaded throughout the day.
    0:24:48 They’re almost invisible because it’s so easy and innate in the human condition to stick with what’s predictable, to stick with the familiar.
    0:25:04 In fact, one of the exercises that psychologists are going to be giving Columbus, Ohio, high schoolers in order to boost their resilience, to help them bolster their tolerance of uncertainty, is to just answer their cell phones without caller ID.
    0:25:11 And I told a young relative of mine about that, and she said, oh, that would be terrifying.
    0:25:14 And this is a very, very simple practice.
    0:25:17 But another is to try a new dish in a restaurant.
    0:25:18 And I’m pretty adventuresome.
    0:25:20 I’ve lived all over the world.
    0:25:22 I, you know, jump in the cold ocean.
    0:25:26 You know, I really am fairly tolerant of uncertainty.
    0:25:32 And yet, when I think about it, you know, what do we like better than that same old clam spaghetti on a Friday night?
    0:25:35 And so that doesn’t mean you always have to be uncomfortable.
    0:25:47 But I think that at this point in time, in this era, when the uncertainty, that is what humans cannot know, seems to be rising.
    0:26:09 At this moment, the worst possible response we can have is to retreat into certainty and familiarity and obviousness that curtails our creativity and our ability to solve the precisely complex problems that are at our doorstep, the lethal problems at our doorstep.
    0:26:30 And so by doing precisely the wrong thing, the thing that we don’t want to do, the uncomfortable thing, by flipping our worldview to make uncertainty at least something to admire, to explore, to embrace, that’s the way we can move forward at this time in our world history.
    0:26:35 I think I think I have a more complicated relationship with uncertainty.
    0:26:39 Philosophically, I’m a big believer in the virtue of uncertainty.
    0:26:42 I mean, it’s built into the name of the show, the gray area.
    0:26:53 But if I’m being honest, in my life, in my actual life, I think the fear of the unknown has kept me from living the life I truly want to live.
    0:27:12 And the way it often manifests is in this instinct to stick to the refuge of routine or this impulse to constantly imagine all the ways something might go badly, which really, in the end, just becomes a justification for not trying anything new.
    0:27:21 And it’s strange that intellectually, I’m very comfortable with ambiguity, but in my actual life, I often behave as though I’m terrified of it.
    0:27:25 And this makes me feel a little schizophrenic, but maybe it shouldn’t.
    0:27:26 Maybe it’s common.
    0:27:29 Tell me this is common, Maggie.
    0:27:29 Help me out here.
    0:27:36 Well, I would say as a human, again, we dislike uncertainty for a real reason.
    0:27:47 We need and want answers, and this unsettling feeling you have is your innate way of signaling that you’re not in the routine anymore.
    0:27:54 And so it’s really important to understand, in some ways, how rare and wonderful uncertainty is.
    0:27:58 At the same time, we also need routine and familiarity.
    0:28:03 Most of life is what scientists call predictive processing.
    0:28:12 That is, we’re constantly making up assumptions and predicting, you just don’t think that your driveway is going to be in a different place when you get home tonight.
    0:28:17 You can expect that you know how to tie your shoelaces when you get up in the morning.
    0:28:28 And so, therefore, we are enmeshed in this incredible world of our assumptions to the extent to which, you know, scientists say we live in a consensual hallucination.
    0:28:33 It’s so human and so natural to stick to routine and to have that comfort.
    0:28:42 If we were just a living mess of openness to newness and having to learn everything again, we really would be in trouble.
    0:28:45 And I don’t want to say it’s a middle ground at all.
    0:28:49 I actually think we should live more on the edge, far more than our culture permits us to do now.
    0:28:56 And so, I don’t think you should feel, I don’t know, maybe you should change out of that purple sweatshirt tomorrow, Sean.
    0:29:03 It’s more things like, you know, like my wife is a camper, you know, and she always wants to go camping.
    0:29:05 And I didn’t grow up camping.
    0:29:08 And when she brings it up, I’m like, yeah, well, but, you know, what if it rains?
    0:29:12 What if my air mattress runs out of air and I can’t sleep?
    0:29:12 You know, whatever.
    0:29:13 It’s just all this shit.
    0:29:18 You can constantly conjure up all the way something can go sideways, no matter what it is.
    0:29:24 You can always imagine the million and one things around the bend that might, you know, derail whatever the plans are.
    0:29:33 And I guess ultimately it comes down to whether or not you’re comfortable just adapting to that and kind of rolling with it or whether you perceive all these things that might go wrong as catastrophic.
    0:29:53 Right. Well, there is work actually to help people deal with stress in a way and this, you know, the good stress of uncertainty by teaching them that when people are able to understand that their body and brain are revving up for this new occasion, they’re actually more present in the moment.
    0:29:59 And so, isn’t that really what seeking routine, you know, isn’t?
    0:30:06 I mean, you know, to be anxious about the unknown is to inhibit or close down your present orientation.
    0:30:24 You know, when you are able to be in that moment and see the nuance beyond the campfire smoke or the bear who really was sighted two miles away and now is a mile away or all those sort of little factors, you can begin to see the more complex.
    0:30:34 And one thing that’s really interesting about interventions to help people bolster their tolerance of uncertainty is that it harnesses uncertainty’s power and strength.
    0:30:38 It’s not that it’s a jolly good thing to be uncertain all the time.
    0:30:46 It’s just that it lends itself to peeking into the complexity of the world, the complexity that’s already there.
    0:30:52 I’m glad you went there because there’s also a social and political dimension to all of this.
    0:31:01 You know, history is littered with examples of otherwise sane people doing terrible things in defense of absolute truths.
    0:31:05 And there’s actually an experiment you mentioned in the book.
    0:31:08 It’s the Berkeley cat-dog experiment.
    0:31:12 And it speaks to the political hazards of a closed mind.
    0:31:20 And for people who aren’t familiar, the basic gist is people were initially shown a picture that very clearly resembled a cat.
    0:31:28 But then they were gradually shown more drawings that bit by bit started to look more dog-like until finally it was just a picture of a dog.
    0:31:34 But interestingly, a huge number of people refused to let go of their initial answer almost all the way to the end.
    0:31:42 And it was a study in authoritarianism and the nature of the closed mind and how that manifests in a political context.
    0:31:47 And somehow I wasn’t aware of this study, but it is pretty instructive, isn’t it?
    0:31:48 Right, exactly.
    0:32:02 And as the psychologist who created the study said, the people who just wouldn’t admit that it was becoming a dog refused to leave the safe harbor of their definite ideas or something like that, which is, you know, exactly.
    0:32:04 Like, again, it underscores change.
    0:32:14 And, you know, when we look around and we see 80% of Republicans and 80% of Democrats say the other side has few or no good ideas.
    0:32:22 And you see the U.S. rankings among other countries, you know, we rank the highest on polarization rates by degree and et cetera.
    0:32:33 You know, when you see 50% of people say they rarely, if ever, change their minds, I mean, you’re seeing this play out in life today, very much so.
    0:32:40 It’s just a fact of life that things will change, you know, the world won’t conform to your wishes.
    0:32:43 And so you end up going one of two ways.
    0:32:51 You either embrace the limits of your own knowledge or you distort the world in order to make it align with your story of it.
    0:32:53 And I think bad things happen when you do.
    0:32:57 That’s why I think this is politically very important.
    0:32:57 Yes.
    0:33:08 And I think that it’s also backbreaking work, so to speak, to continually retreat into our certainties and close our eyes to the mutability of the world.
    0:33:24 I mean, I had a real epiphany when I was doing some writing about a Head Start program that teaches people from very challenged backgrounds, both parent and preschooler, to pause and reflect throughout the very chaotic days.
    0:33:35 And it seems like something that doesn’t have much to do with uncertainty, but they were basically inhabiting the question, even though it was a very difficult thing to snatch these moments of reflection within their lives.
    0:33:46 And in parallel to that, there’s also a lot of new movement to understand the strengths of people who live in lower economic situations that are often chaotic.
    0:33:53 Unpredictability is now seen as a real core issue in challenged situations.
    0:34:16 But what was amazing to me is I realized how much I grew up expecting that stability and predictability was just an entitlement, that this is the way we should live, that this is the skill set you need to adapt in order to thrive, et cetera, et cetera.
    0:34:26 So we basically have sort of airbrushed out of our psyches in many, many ways, the ability to live in precarious situations.
    0:34:36 Yeah, I’m glad you said that because if you come from a place of precarity or if you exist in that space, comfort with uncertainty may not be a luxury you can afford.
    0:34:41 If you don’t feel safe for good reasons, uncertainty takes on a different hue.
    0:34:43 And that’s something that’s definitely worth acknowledging.
    0:34:51 Yes, and there are tremendous costs living in situations where you’re experiencing higher degrees of precarity.
    0:35:04 But at the same time, I think it’s really important for many, many people today to understand, again, that adaptability is a skill that maybe we all have to cultivate.
    0:35:12 We don’t want anyone to live in poverty or to be abandoned in an international institution, an orphanage.
    0:35:21 But at the same time, we do all of ourselves an injustice by not understanding the full spectrum of human capability to adapt.
    0:35:30 After one last short break, Maggie tells us how embracing uncertainty has made her hopeful for the future.
    0:35:32 We’ll be right back.
    0:35:51 It’s been reported that one in four people experience sensory sensitivities, making everyday experiences like a trip to the dentist especially difficult.
    0:35:57 In fact, 26% of sensory-sensitive individuals avoid dental visits entirely.
    0:36:03 In Sensory Overload, a new documentary produced as part of Sensodyne’s Sensory Inclusion Initiative,
    0:36:15 We follow individuals navigating a world not built for them, where bright lights, loud sounds, and unexpected touches can turn routine moments into overwhelming challenges.
    0:36:23 Burnett Grant, for example, has spent their life masking discomfort in workplaces that don’t accommodate neurodivergence.
    0:36:26 I’ve only had two full-time jobs where I felt safe, they share.
    0:36:29 This is why they’re advocating for change.
    0:36:39 Through deeply personal stories like Burnett’s, Sensory Overload highlights the urgent need for spaces, dental offices, and beyond that embrace sensory inclusion.
    0:36:44 Because true inclusion requires action with environments where everyone feels safe.
    0:36:48 Watch Sensory Overload now, streaming on Hulu.
    0:36:52 Support for this show comes from Indeed.
    0:36:55 You just realized your business needed to hire somebody yesterday.
    0:36:58 How can you find amazing candidates fast?
    0:36:59 Easy.
    0:37:00 Just use Indeed.
    0:37:07 With Indeed Sponsored Jobs, your post jumps to the top of the page for relevant candidates, and you’re able to reach the people you want faster.
    0:37:08 And it makes a huge difference.
    0:37:17 According to Indeed Data Worldwide, sponsored jobs posted directly on Indeed have 45% more applications than non-sponsored jobs.
    0:37:24 Plus, with Indeed Sponsored Jobs, there are no monthly subscriptions, no long-term contracts, and you only pay for results.
    0:37:26 There’s no need to wait any longer.
    0:37:28 Speed up your hiring right now with Indeed.
    0:37:36 And listeners to this show will get a $100 sponsored job credit to get your jobs more visibility at Indeed.com slash VoxCA.
    0:37:43 Just go to Indeed.com slash VoxCA right now and support this show by saying you heard about Indeed on this podcast.
    0:37:45 Indeed.com slash VoxCA.
    0:37:46 Terms and conditions apply.
    0:37:47 Terms and conditions apply.
    0:37:47 Hiring, Indeed.
    0:37:49 Hiring, Indeed, is all you need.
    0:37:50 Hiring, Indeed.com slash VoxCA.
    0:37:50 Hiring, Indeed.com slash VoxCA.
    0:37:52 Okay, business leaders.
    0:37:54 Are you playing defense or are you on the offense?
    0:37:56 Are you here just, excuse me.
    0:37:57 Excuse me.
    0:37:58 Hey, I’m trying to talk business here.
    0:38:03 As I was saying, are you here just to play or are you playing to win?
    0:38:05 If you’re in it to win, meet your next MVP.
    0:38:07 NetSuite by Oracle.
    0:38:11 NetSuite is your full business management system in one suite.
    0:38:17 With NetSuite, you’re running your accounting, your financials, HR, e-commerce, and more, all from your online dashboard.
    0:38:22 One source of truth means every department’s working from the same numbers with no data delays.
    0:38:28 And with AI embedded throughout, you’re automating manual tasks, plus getting fast insights for your next move.
    0:38:34 Whether you’re competing on your home turf or looking to conquer international markets, NetSuite helps you get the W.
    0:38:40 Over 40,000 businesses have already made the move to NetSuite, the number one cloud ERP.
    0:38:45 Right now, get the CFO’s guide to AI and machine learning at NetSuite.com slash Vox.
    0:38:48 Get this free guide at NetSuite.com slash Vox.
    0:38:49 Okay, guys.
    0:39:03 Is there something in particular about this moment that makes this all seem all the more urgent to you?
    0:39:04 I mean, things are always in flux.
    0:39:06 Things are always changing.
    0:39:13 Does now seem like an especially dynamic moment that really summons us to lean into the ambiguity?
    0:39:14 I think it is.
    0:39:22 I mean, you know, for hundreds of years, particularly in the West, we’ve been pursuing what Dewey called the quest for certainty.
    0:39:34 I see this massive, long crumbling in humanity’s or at least many societies’ ability to assume certainty where there was none.
    0:39:42 And so you can also see many, you know, different studies showing rise in precarity of work hours or rise in, you know, weather patterns, etc.
    0:39:46 It does seem as though things might be in more flux.
    0:40:00 And so that is the time when I think we need a sea change in our attitudes toward not knowing in order to face this moment and not hide in our devices or hide in our certainties.
    0:40:10 Most people would say that confidence is a good thing and confidence seems inextricably bound up with knowing that you know.
    0:40:12 What are we missing there?
    0:40:14 Is that just the wrong way to think about confidence?
    0:40:17 Well, there are different types of confidence.
    0:40:19 There are different degrees of confidence.
    0:40:24 So you can be confident while being open-minded to other suggestions.
    0:40:35 Linus Pauling in the great race to discover the structure of DNA came up with a solution but didn’t listen to his colleagues, hardly did any homework, etc., etc.
    0:40:36 Now, that’s hubris.
    0:40:38 That’s not confidence.
    0:40:45 Confidence is, I think, being open-eyed and open-minded and flexible.
    0:40:55 And people have come up to me when I’ve been doing talks about this book saying, for instance, one woman who headed state budget for Rhode Island came up to me afterwards and said,
    0:41:01 Ah, I always used to end meetings saying, Is there anything more we should know?
    0:41:06 And she felt somehow sheepish about that, as if it was a weak thing.
    0:41:09 And now she really feels as though that was actually wise.
    0:41:12 So you can be confident and be open.
    0:41:15 I think I’m going to jump off that cliff.
    0:41:20 But maybe you might want to be stopped by somebody.
    0:41:20 Don’t do it, Maggie.
    0:41:27 It’s also, uncertainty very, very much is about knowing the limits of your knowledge.
    0:41:37 So even, for instance, something as simple as a Google search is associated with people thinking they know more than they do, even if they actually didn’t find what they were looking for.
    0:41:44 And that’s really important, really important, because if you don’t know the limits of your knowledge, you can’t push beyond it.
    0:41:49 You can’t know what you don’t know, which, of course, is the starting point of all learning.
    0:41:53 Can we learn to be more tolerant of uncertainty?
    0:42:00 Because if we can’t teach this, if we can’t absorb it, then what good is all this knowledge?
    0:42:01 Exactly.
    0:42:03 No, of course we can.
    0:42:15 And all great understandings of wisdom and knowing and learning throughout the ages have been infused with this respect for uncertainty.
    0:42:25 And it’s really important also to mention that this spectrum, this disposition of tolerance or intolerance toward uncertainty is situational as well.
    0:42:36 We might live on the spectrum, we might be able to change or bolster our tolerance of uncertainty, but also on any given day, you know, you will maybe lean one way or the other.
    0:42:57 So when you’re tired or you have information overload or studies show when you feel compelled to give an answer, which basically describes daily living today, you’re more likely to tend to be seeking an answer and also seeking what’s called need for closure, which is really important.
    0:43:05 You need and want to close down on an answer when you feel stressed and beseeched.
    0:43:23 And I think we certainly can in little possible ways, you know, just adopting some of those daily practices like trying something new or perhaps one strategy that’s gaining attention when it comes to understanding the other, people who oppose your different views, is perspective taking.
    0:43:39 And taking the perspective of another is just jolting yourself from your assumptions about the fact that you do know someone’s perspective, you’re reminding yourself, you’re jolting yourself into what Socrates called perplexity, productive perplexity.
    0:43:46 It definitely seems like certain people are wired in such a way that uncertainty is just untenable.
    0:43:51 You know, we just did an episode with Robert Sapolsky about the illusion of free will.
    0:43:56 And so that’s rattling around in the back of my head as we’re talking about this.
    0:43:58 Yes, I think so.
    0:44:09 And yet I also think on the hopeful note, we maybe need to make more visible the way in which our entire cultures are predicated on certain types of language.
    0:44:17 You know, for instance, hedge words, words like maybe and sometimes they’re seen as weak, linguistically give people two different signals.
    0:44:23 One, that you’re receptive to another’s point of view, and two, that there’s more to know out there.
    0:44:29 So just by throwing in the word maybe, studies show that you don’t look weak as you might assume.
    0:44:34 What you’re just saying about the power of words like maybe, I’m really skeptical of that.
    0:44:37 And it goes back to the political problems here.
    0:44:38 You actually talk about this in the book.
    0:44:41 Do you know what people don’t like in leaders?
    0:44:46 Leaders who are intellectually honest and say things like, I don’t know, or I’m not sure.
    0:44:52 If you want to not get elected, just be intellectually honest in that way and humble in that way and see what happens.
    0:44:53 We don’t like that.
    0:44:54 We don’t like it in ourselves.
    0:44:57 We certainly don’t like it in leaders.
    0:45:02 And I don’t know what to do about that, but it would be better if it were otherwise.
    0:45:09 But, you know, on the other hand, medicine is sort of, you would think, the final frontier for unsureness.
    0:45:17 And yet there are programs beginning, led by cutting-edge leaders, to teach doctors to say, I don’t know.
    0:45:22 It’s actually seen as more positive among patients than expected.
    0:45:33 In Maine, there was a program to teach young residents to say, gee, I need to look that up or, wow, I don’t know, which is, you know, of course, really almost impossible to utter.
    0:45:37 And yet the word that kept coming up was that it gave them courage.
    0:45:44 The courage to think and rethink, to consider based on what was actually happening rather than their assumptions.
    0:45:52 You would never think of courage and uncertainty being, you know, associated or closely related, but they are.
    0:45:55 In fact, William James talked about the courage of a maybe.
    0:46:00 So I think, yes, all right, maybe politics is in dire need of help.
    0:46:14 But in medicine, in business, and in AI, there’s a new movement to create robots and models that are unsure of their aims, which is a sea change, a complete reimagining of the field, led by Stuart Russell.
    0:46:18 They’re creating robots that are more teachable, honest, transparent.
    0:46:29 And here we are, again, looking at a sort of element in our culture, you know, technology or politics or language that influences us.
    0:46:38 But if you can create a technology that holds up a mirror to our better selves, you might have a good influence on us from our technologies.
    0:46:41 And so I see the seeds of change.
    0:46:44 I actually came away from writing this book hopeful.
    0:46:48 You’ve now referenced Dewey and William James.
    0:46:50 Are you a fellow pragmatist?
    0:46:52 Are you on team pragmatism?
    0:46:53 Because we are on this show.
    0:46:55 I am totally.
    0:46:59 And I just never studied philosophy, which I dearly regret.
    0:47:00 I was too daunted.
    0:47:04 And so I’ve been an amateur reader.
    0:47:17 And I just, I try to read all sorts of types of philosophy that helped me understand what I might be studying at the time or researching.
    0:47:21 But I went back and back and back and back to Dewey and I dearly love him.
    0:47:25 And I feel like going to Vermont and visiting his grave or whatever there is.
    0:47:40 And I would feel honored to consider myself a pragmatist because on one hand, I spend so much of my life out on journeys of the mind, you know, asking what if questions and walking around and around to try to get a 360 degree look at some of the things I’m looking at.
    0:47:45 And to be able to say something pragmatic about that, woo-hoo.
    0:47:49 I read something about your morning swims.
    0:47:50 What’s the story there?
    0:47:54 Well, during the pandemic, I had been going New York, Rhode Island, New York, Rhode Island.
    0:47:55 And then we switched.
    0:47:58 And so the grand experiment was to live in the country.
    0:47:59 The pools were closed.
    0:48:00 I’m a swimmer.
    0:48:02 It’s really important for my writing and all that sort of thing.
    0:48:09 And then I got hooked, like many people, it’s kind of a global phenomenon, on ocean swimming.
    0:48:14 So four seasons, rain or shine, snow, I do it with a wetsuit.
    0:48:20 But I actually began to realize I’m really fascinated by why is this so joyful?
    0:48:21 Oh, there’s the exercise.
    0:48:22 There’s a social camaraderie.
    0:48:26 You’re kind of swimming with your subway car, I call it, because a bunch of strangers get together.
    0:48:45 And then I began to feel or understand that really it was a daily dose of uncertainty, that you’re living at the edge, because you might check the app or you might know that particular beach, but you really don’t know what’s going to happen even in the 30 minutes you’re out there.
    0:48:55 And so I began to realize that maybe the joy in it and the edginess and the discomfort there was really just what I was writing about.
    0:49:05 So when someone is confronted with that feeling of fear that comes with not knowing or that anxiety that comes with not knowing, how should they sit with that?
    0:49:07 What is your practical advice?
    0:49:32 Well, I think first telling oneself that this is, you know, your body and brain’s way of signaling that there’s a moment when the status quo won’t do. That this might be uncomfortable, but that it is not, you know, a situation or a state of mind working against moving forward; it’s actually propelling you forward.
    0:49:40 Because it is uncomfortable to admit to or to see complexity, nuance, other people’s perspectives.
    0:49:46 I mean, you know, I don’t like it when an editor says this needs to be improved, et cetera, et cetera.
    0:49:59 So I think we have to truly understand that. It’s just changed my life to write this book, and to at least loosen a little bit of the fear that I might carry into really new situations.
    0:50:07 From giving a speech to being in the presence of someone who’s very upset, a friend or a daughter who’s really upset.
    0:50:17 And I used to want to just offer a solution and give that silver lining and, you know, get that moment over with and, you know, get them on the road to happiness.
    0:50:28 And now I feel much more patient, and with that comes the ability to follow an unexpected path or even take a detour.
    0:50:40 At one point I said to a friend, I’m writing this book in a spiraling fashion, going around and around like those labyrinthine walking gardens or, you know, Zen Buddhist gardens.
    0:50:45 And, of course, she looked at me with absolute horror, but I think she actually understood what I meant, too.
    0:50:46 Yeah.
    0:50:54 You write in the book that embracing uncertainty is really how we become alive to the possibilities of life.
    0:50:57 And that really is the bottom line here for me.
    0:51:07 Clinging to our preconceptions and our fears of the unknown is probably the surest way to miss out on a well-lived life.
    0:51:09 And so I guess that’s the note I want to end on.
    0:51:12 Is there anything else you’d like to add, Maggie?
    0:51:15 No, I think you said it perfectly.
    0:51:25 I think that this is all about being fully alive, both to the disquieting and the beautifully joyous, positive elements of life.
    0:51:37 Because if we can’t contend with uncertainty, then we can’t contend with life because life will always be contradictory, paradoxical, mutable, dynamic, everything we’ve been talking about.
    0:51:43 Once again, the book is called Uncertain, The Wisdom and Wonder of Being Unsure.
    0:51:46 Maggie Jackson, this was a pleasure.
    0:51:47 Thanks for the chat.
    0:51:48 Thank you.
    0:51:49 It’s an honor talking with you.
    0:52:06 Our producer is Jon Ehrens.
    0:52:08 Jorge Just is our editor.
    0:52:11 Patrick Boyd engineered this episode.
    0:52:14 Alex Overington wrote our theme music.
    0:52:19 If you dug the show, please rate and review.
    0:52:21 And also, I want to hear from you.
    0:52:23 Tell me what you think of the episode.
    0:52:26 Drop us a line at thegrayarea@vox.com.
    0:52:31 New episodes of The Gray Area drop on Mondays.
    0:52:32 Listen and subscribe.
    0:52:48 We’ll see you next time.

    Humans hate uncertainty. It makes us feel unsafe and uneasy. We often organize our lives to avoid it. When it’s foisted upon us, we don’t always know how to act.

    But writer and journalist Maggie Jackson argues that uncertainty can actually be good for us, and that we’re doing ourselves a disservice by avoiding it.

    She tells Sean that embracing uncertainty can spark creativity, improve problem-solving skills, and help us lead better, more hopeful lives.

    This episode originally aired in January 2024.

    Host: Sean Illing (@SeanIlling)

    Guest: Maggie Jackson, author of Uncertain: The Wisdom and Wonder of Being Unsure

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • A moment for silence

    AI transcript
    0:00:30 Thumbtack presents the ins and outs of caring for your home. Out. Indecision. Overthinking. Second-guessing every choice you make. In. Plans and guides that make it easy to get home projects done. Out. Beige. On beige. On beige. In. Knowing what to do, when to do it, and who to hire. Start caring for your home with confidence.
    0:00:31 Download Thumbtack today.
    0:00:49 Wanna own part of the company that makes your favourite burger? Now you can. With partial shares from TD Direct Investing, you can own less than one full share, so expensive stocks are within reach. Learn more at TD dot com slash partial shares. TD. Ready for you.
    0:01:19 How often do you find silence? I mean real silence. It’s always been hard, but in today’s world the clamour of technology and distractions is unrelenting. And there’s a part of us that likes this, that wants to be distracted, wants to be diverted, wants to be occupied by someone or something. Which means that when we do actually find a bit of silence, we don’t always know what to do with it. Here, I’ll show you what I mean. Let’s have a moment of silence
    0:01:20 together.
    0:01:52 So what happened? Did you think about your to-do list? Did you worry? Did you panic? Did you almost switch to another podcast? Or did you enjoy it? Regardless, that moment in which seemingly nothing happened was an experience. One that could, if you let it, affect you as profoundly as any other experience.
    0:02:15 So what would happen if we allowed ourselves to sit in silence more often? Could moments of silence be restorative or exploratory? Could sitting in silence prepare us for the times when we can’t block out the noise? Let’s try it again. This time I’m not springing it on you, so maybe it will feel different. I’ll try to do it for the same amount of time.
    0:02:28 I’m Sean Illing and this is The Grey Area.
    0:02:38 Today’s guest is Pico Iyer. He’s the author of fifteen books and a long-time columnist for many publications around the world.
    0:02:48 He’s also spent decades travelling with the Dalai Lama as a friend and travel writer. His latest book is called Aflame: Learning from Silence.
    0:02:53 The book is what it sounds like, a meditation on silence.
    0:03:22 And it’s based on Pico’s experiences over thirty years at a Catholic monastery on the coast of northern California. Pico isn’t a Catholic or even a Christian. In fact, he’s not a religious person at all. But he is a curious open-minded writer with a deep interest in spiritual and religious experiences. And his work, this book in particular, reflects that. So I wanted to speak with him about how we can all find silence and hopefully benefit from it.
    0:03:30 Pico Iyer, welcome to the show. [SPEAKER_TURN]
    0:03:32 I am so happy to be here. Thank you, Sean. [SPEAKER_TURN]
    0:04:00 I’m happy to have you here. It’s such a curious thing about you that you’re not religious, you’re not a meditator or anything like that, and yet you’ve spent so much of your life inhabiting these spiritual spaces, living among monks, uh travelling around the world with the Dalai Lama. I mean how does all that happen? How do you make sense of that tension? Or do you even see any tension in that? [SPEAKER_TURN]
    0:04:30 Yes, I don’t think I see a tension. I think I see a complementarity. It feels to me like breathing in and breathing out. And I don’t think my travels or experience would begin to make sense unless I had enough stillness and peace to try to put them in perspective. And I’ve always, I think, or not always, but since a fairly young age, I’ve thought I don’t want to neglect the inner life. And the external is becoming so deafening and overwhelming, as you know, that I feel
    0:04:45 I almost have to take conscious measures these days to ensure that I’m putting things in perspective and that my inner life is not neglected, because if the inner life is gone, then it’s like a car without an engine. In other words, all the travel in the world makes no sense at all. [SPEAKER_TURN]
    0:04:49 Did you grow up in a very religious home? What did your parents do? [SPEAKER_TURN]
    0:05:19 Probably did, actually, which sent me in the opposite direction. You’re running away from religion. But I know you’re a political theorist and philosopher, and that’s exactly what my parents were. They were both philosophers, um, both teachers of comparative religions, which means that they were interested in all religions personally as well as professionally. Uh, and I suppose beyond that I maybe had the advantage of being born to two Hindu parents who’d grown up in British India
    0:05:47 and therefore knew the Bible back to front. And I was born and grew up in England and so I went through very classical Anglican schooling, so by the time I was in college um I probably had some kind of grounding in the Christian world, and I had, you know, my Hindu genes and DNA too, and my first name is actually Siddharth, named after the Buddha. So I think my parents were equipping me for uh being conversant with lots of religious traditions. [SPEAKER_TURN]
    0:06:14 I wanna get into the book a little bit. Um, people may not know this about you, but you lost everything you owned, including your home, in a fire, something like thirty years ago, give or take. Um, you say that wasn’t exactly what brought you to the monastery in the first place, but you write that it did clear the way for many things. What did it clear the way for? [SPEAKER_TURN]
    0:06:45 Well, it left me in something akin to a desert, a vast open space from which I had to begin crafting my life anew. So I was caught in the middle of that fire, which was the worst in Californian history at the time, for three hours. And when finally a fire truck could get to me and say it was safe to drive downtown, I went to an all-night supermarket, I bought a toothbrush, and the toothbrush was the only thing I had in the world, and then I was sleeping on a friend’s floor. And had I not been
    0:07:15 in that somewhat diminished state, I don’t think I’d have been so responsive when another friend came and saw me on the floor and said, oh, why don’t you try going to stay at this Benedictine hermitage um up the road. Uh, I don’t think I ever would have thought of going to stay in a Catholic monastery otherwise, as somebody who’s not a Christian and who also uh spent fifteen years going through Christian schools and thought, oh, I’ve had enough of that tradition; I was more interested in the traditions I didn’t know about. But uh my friend told
    0:07:39 me that this hermitage, if nothing else, would give me a bed to sleep in and a wide desk and a private walled garden above the Pacific Ocean, all the food I could eat, for just thirty dollars a night at the time. So I thought, well, if nothing else, that’s going to be preferable to sleeping on the floor. Um, so I tried it, and as you know, that was thirty-four years ago, and I’ve been more than a hundred times since, and it’s really become my secret home.
    0:07:49 Well tell me about that monastery up in the Big Sur area in northern California. What was it like, how did you spend your days there? [SPEAKER_TURN]
    0:08:19 I suppose the first thing to stress is that although it’s a Catholic monastery, and there are fifteen monks there and ten workers who live with them sustaining the community, they’re open to everyone. And really um the monks are responding to St. Benedict’s call to hospitality, and so there are no rules. Everybody is welcome, you don’t have to do anything, uh they provide you with all that you want, and I think they have the confidence to know that whoever
    0:08:49 you are and whatever your longing and whatever your background, just three days in silence, without distraction, free from cell phones, uh, in this radiant stretch of coastline above the ocean will help you find what is most sustaining. And some people would call it God and others would give different names to it, but I think it comes to the same thing. So just to give um our audience uh a sort of visual: you drive along California Highway One, which grows emptier and narrower, and you’re
    0:09:19 just in this vast elemental landscape with golden meadows running down from the hills to the one side and the flat blue plate of the Pacific Ocean on the other. And then you come to this even narrower road that snakes and twists for two miles around turns to the top of the hill uh where the retreat house um stands. And so everybody who goes to stay has either a trailer on the hill or a very simple room, but with your own garden, twelve hundred and fifty feet up, with an
    0:09:42 unbroken view over the ocean, on nine hundred acres of uh glorious natural landscape. So between the nature, and between the silence that’s been constructed by years of prayer and meditation, um, and between the freedom from all that usually cuts us up into many pieces, um, it’s hard not to be transported there. [SPEAKER_TURN]
    0:09:50 How different is the Christianity practiced by those monks from the Christianity practiced by most other Christians? [SPEAKER_TURN]
    0:10:21 So these monks belong to the Camaldolese congregation within the Benedictine order, and that is the most contemplative uh congregation within the Catholic church. So it’s closest to Zen meditators and people who meditate in every tradition. And so just as you say, I think one of the first surprises for me is that the monks I meet there are much more open-minded than I am, much less dogmatic. I went to have uh dinner one evening with the prior who runs the place.
    0:10:51 On his wall there’s a picture of Jesus in the lotus position, meditating. Uh, they actually maintain a Hindu ashram in southern India where the Catholic priest wears a dhoti, sleeps on the floor, eats with his hands, and the motto for that Catholic Hindu ashram is, we are here to awaken from the illusion of separateness. So I love that. If you were to ask me why I go there, I would say it’s to wake up first and to cut
    0:11:21 through the illusion of separateness and to feel closer to everyone and everything around me. And of course they are trying to cut through the illusion that they are separate from the other traditions of the world. So it seemed to me the perfect motto. And then I found out that we are here to awaken from the illusion of separateness actually comes from Thich Nhat Hanh, the Vietnamese Buddhist teacher, and I thought how wonderful that these Catholic monks are open and wise enough to take as their motto the saying of a contemporary Vietnamese Buddhist teacher.
    0:11:51 That openness is what is so interesting to me. I mean, I’ve never liked conventional religion because I’ve never liked dogma. I think religions often lose their connection to the experiential roots of faith and instead become these dogma-enforcing institutions. But the fact that these monks seem to have no need for dogma at all is so surprising. Why do you think they have no need for that? [SPEAKER_TURN] I think it’s
    0:12:21 because they’re so deeply rooted in their own tradition and commitment that they’re open to learning from everyone. Because they know where they stand, they’re not defensive, they’re not protective, and they’re the last ones ever to say that their religion is the best or the only one. So I think it’s people who are uncertain of themselves or their faith who are likely to cling to it uh tenaciously or belligerently, and it’s those who know who they are who are the least likely
    0:12:51 to do that. But I love what you say, and I think that’s the reason, after thirty-four years, that I decided to publish a book about these guys and their silence, because I’ve never seen the world as divided as it is right now. And as you say, I think it’s divided because of our words, our beliefs, and our ideologies. Uh, and the more fiercely we hold to them, the more we’re cutting the world up into us and them. So I was keen to shine a spotlight both on these monks, who are so open to everyone
    0:13:21 and not making distinctions, and are cutting through the illusion of separateness. And I was so eager also to shine a spotlight on silence, because it’s the place that doesn’t ask us to prove or disprove a thing. I think silence lies on the far side of our beliefs and ideologies. And so I find if I start to talk with anybody, however sympathetic that person is, maybe after forty-five minutes we’ll find that we’re on different sides of some important issue. But when we’re joined in a moment of
    0:13:36 silence, I think we’re united in that part of us that lies much deeper than our assumptions and our ideologies, and that silence actually is a bit of a corrective to the divisions that are cutting us up so violently across the nation and across the world right now.
    0:14:16 Support for the grey area comes from Quince. There’s a pretty understandable reason to put off buying new clothes. It can be really expensive. Getting serious about having the essential items that you’re going to have for a long time usually requires a pretty hefty investment. But with Quince, they say you don’t have to break the bank to get high-quality, long-lasting items. Quince offers premium items at an affordable price. They say they have cold-weather must-haves like Mongolian cashmere crew-neck sweaters, iconic one hundred percent leather jackets, and versatile
    0:14:25 flow-knit active-wear, all priced fifty to eighty percent less than similar brands. Claire White is our colleague here at Vox and she’s tried Quince herself.
    0:14:43 I would absolutely recommend Quince to a friend from their clothing to the home-wear to the different accessories that they offer, it seems like there’s something there for everyone. The pieces will last for quite a long time and the quality is exceptional.
    0:15:00 You can indulge in affordable luxury by going to quince dot com slash grey area for free shipping on your order and three hundred and sixty five day returns. That’s Q U I N C E dot com slash grey area to get free shipping and three hundred and sixty five day returns. Quince dot com slash grey area.
    0:15:29 Support for the grey area comes from Bombas. You can start your spring cleaning routine right now by refreshing your sock drawer and Bombas has lots to choose from to replace those old mismatched pairs hanging out in your dresser drawers. From athletic socks to dress socks to your everyday pair, Bombas are designed to be comfortable all day. They offer more than socks too, comfy T-shirts, waterproof slides and soft underwear.
    0:15:41 I’ve rocked Bombas socks for, I don’t know, at least a year now. I’ve tried the athletic socks and the winter wool socks. They’re both wildly comfortable, super durable and manage to keep my feet cool or warm depending on the season.
    0:16:10 Bombas also wants you to know about their mission, which is for every comfy pair you purchase, they donate another comfy pair to someone facing homelessness. On top of that, Bombas is going international with worldwide shipping to over two hundred countries. You can go to bombas dot com slash grey area and use code grey area for twenty percent off your first purchase. That’s B-O-M-B-A-S dot com slash grey area code, grey area for twenty percent off your first purchase. Bombas dot com slash grey area and use code grey area.
    0:16:28 If managing your business finances feels like a full-time job on top of your actual full-time job, it’s fully time you got some help, double time.
    0:16:38 Found is a business banking platform that automates time-consuming tasks like tracking expenses and looking for tax write-offs, and makes it easier to stay on top of invoices and payments.
    0:17:08 You can even set aside money for different business goals and control spending with different virtual cards. Found says they’ll save you more than money, they’ll save you time. Time that you can devote to doing the things you do best, like managing your business finances. And Found says other small businesses love them too. They say one user said, Found is going to save me so much headache; it makes everything so much easier: expenses, income, profits, taxes, invoices even. And Found says they have thirty thousand five-star reviews just like this.
    0:17:25 You can open a Found account for free at F-O-U-N-D dot com slash grey area. Found is a financial technology company, not a bank. Banking services are provided by Piermont Bank, member FDIC. You can join thousands of small business owners who have streamlined their finances with Found.
    0:18:08 You refer to the monastery in the book as a place beyond divisions. Um, and you know, something I think about a lot as a political theorist is how these deep psychological needs play out in our social world. You know, as powerful as the ego is, as much as we all want to be recognised as individuals,
    0:18:37 we also have this longing to lose ourselves in the whole, and this is part of the appeal of tribalism in some of our darker political movements. But the kind of self-emptying you describe in the book, the kind of self-dissolution that these monks practice, is very different from that, and it feels much more like a kind of love and attentiveness. And it’s honestly awe-inspiring. [SPEAKER_TURN]
    0:19:08 Oh, so beautifully said. Um, and I loved it when you were talking about losing the self in the whole, because I think that’s pretty much my definition of happiness. My sense is that we’re happiest of all when we’re deeply absorbed in something, and we lose ourselves, we forget the time, in an intimate moment with a lover, in a conversation, in a concert; suddenly we’re gone. And we’re filled up with something much richer than we could ever be. And we’re happy
    0:19:38 without even knowing to use the word or to think in those terms. And that is actually what I experience every time I go on retreat. And all the agitation and all the thoughts of um my fears, my deadlines, my resume, it’s all left down on the highway, and I’m just open for once to everything that’s around me. So I’m in a state of wonder both, as you say, at the monks, seeing their life of devotion, and at the fact that I’ve found something
    0:19:50 that was lost, that is there in me and in everybody at the core of our lives, but that, as we’re racing from the bank to the supermarket, we misplace; and then we feel this emptiness, but we don’t know how to address it. [SPEAKER_TURN]
    0:19:53 We’re so conditioned to think of freedom as
    0:20:22 a freedom to. Free to do this, free to do that, free to pursue whatever I want. But this kind of spiritual freedom you’re talking about is a freedom from, right? Freedom from constant striving, freedom from the never-ending push and pull of distractions, freedom from ourselves, really. I mean, maybe that’s the kind of freedom T.S. Eliot had in mind when he talked about the life we have lost in living, which you quote in the book. [SPEAKER_TURN]
    0:20:54 Exactly. I was just going to cite that very line. And yes, freedom from the need to make anything of yourself in the world, freedom from the need to decorate your CV, freedom from the need to make an impression on anyone. And the interesting thing is that deep radiant freedom that they’re experiencing comes through a vow of obedience. Because when you look at them, they vow to obey their God, obey their prior, and obey everybody else in their community. So
    0:21:24 initially it looks like the opposite of freedom, because they’re living within circumscribed limits in very simple rooms with a strict regimen. But that very strict regimen is precisely, I think, what gives them a certain freedom, because they know where they’re going to be and what they’re going to be doing every day of their lives. And they’re freed again from a lot of the clutter that confuses us. And I think the biggest freedom maybe, which I find in my life, is that in the age of acceleration and information,
    0:21:43 there’s so much coming in on me every minute. I can’t distinguish the trivial from the essential. I can’t put my hands on what really matters and what I care about. They have consecrated their lives only to what they care about. And so I don’t think they have any of the confusions um or doubts that the rest of us have. [SPEAKER_TURN]
    0:22:02 You know, this era is defined in so many ways by our technology and the attention economy that drives it. And, you know, a tension I struggle with in my life, and I know I’m not alone, is this dual impulse:
    0:22:23 on the one hand, you wanna pay attention to all the news happening, all the things happening in the world. You wanna care about the problems and the existential threats and all of that, because on some level being a responsible citizen means caring about the world. But on the other hand, there’s so much noise, there’s so much nonsense, there are so many
    0:22:53 problems that I as an individual cannot fix, and paying attention to all of this makes my life less satisfying and less silent. Um, so how do you decide when to retreat into silence and when to open yourself up to the noise and the clamour of the world? Because you are of the world, and you’re responsible for it in your own little way, and you care. So how do you walk that line?
    0:23:24 Yeah, such a good question. I would say I’m responsible for those things that I can affect. And so to speak to the example you just gave, I remember during the pandemic, I thought every day when I wake up, I can either attend to what’s going to cut me up or I can attend to what’s going to open me up. And I felt that if I were to go online or take in the news, I would hear about morgues being over-full in Bolivia and a thousand people
    0:23:54 just dying in Iran. And it’s really tragic, and of course one has to care about it, but I really thought there was nothing I could do about that. And conversely, I would look out of the window this radiant spring afternoon, and I would think about the friends and families and neighbours nearby, and I thought, that’s what I can really affect positively. Sadly, there’s very little I can do about most of the external world, but my immediate world is really what I have to attend to, and I don’t want the news to take me away
    0:24:24 from the parts of the world I can positively affect. So I think being a responsible citizen really means thinking about the people whose lives you can positively affect and how you can gather the strength and resources to be a help to them. And I find I don’t gather those resources by uh reading The New York Times, driving the freeway or watching CNN, and I do gather them by going for a walk or sitting quietly or, most of all, by going
    0:24:25 on retreat. [SPEAKER_TURN]
    0:24:32 I found myself wondering uh when I was reading the book and just now listening to you, what the monks
    0:24:54 would say to someone who accused them of defeatism or quietism, who said, you know, you’ve abandoned the world and given up on it. I don’t think that’s quite fair, but I’m sure it’s a common critique, and I just wonder how they would respond to that. [SPEAKER_TURN]
    0:25:25 I think they would respond much as the Dalai Lama does. You can only change the world constructively by having the discernment to see what would be good for the world. If you just blindly race in um and try to tend to the world, you’re often going to make it worse than it was before. It’s like, if suddenly I see somebody fall down on the sidewalk, I will race to help her, but it’s much better if I’m a trained physician who races to help her and can exactly assess the situation and know what is the
    0:25:55 best response to it. You know, I mention in the book I see the Dalai Lama as a physician in the emergency room, and I think that’s what my monk friends are too. Um, they’re not stepping away from the world, they’re stepping into a deeper reality, the better to understand the world. The monks um that I spend time with live for only one thing, and that’s to help others. Um, and I think therefore they’re more engaged with the world than many a CEO,
    0:26:03 even many a politician. But certainly, um, they’re not abandoning the world. I think they’re trying to tend to the world. [SPEAKER_TURN]
    0:26:08 And have you come to think that kind of attentiveness is only possible after
    0:26:12 spending a lot of time in silence?
    0:26:15 In solitude perhaps? [SPEAKER_TURN]
    0:26:46 Yes, I think meditation is famously um the means for gathering the resources, for replenishing the inner savings account. And it takes many different forms. It can be silent. Um, but I do find that it’s those people who’ve taken the time to develop themselves inwardly who have the most to give to the rest of the world. You know, the great German philosopher and mystic Meister Eckhart said, as long as the inner work is strong, the outer work will
    0:27:08 never be puny. In other words, as long as you take care of what’s inside you, then your career, your relationships, your life as a responsible citizen will take care of itself. But if you don’t do that, it’s questionable how much you really have to offer to the world. Um, good intentions perhaps, but not um the discernment to turn those good intentions into fruitful results. [SPEAKER_TURN]
    0:27:38 We imagine silence and solitude as kind of inseparable, but it is fascinating how much actual deep connection is possible in sharing silence with other people. You put it, I think, quite beautifully in the book. Um, you say, I’m reminded that the best in us lies deeper than our words. And so, again, we think of these monks as, like, hermits and recluses, but it’s such a wonderful little community, and it doesn’t
    0:27:43 require a lot of chatter, and yet the connections are as deep as any. [SPEAKER_TURN]
    0:28:13 Yeah, I think the connections are maybe deeper because they’re not simplified or reduced by words. So in answer to your observation, I would say two things. The first is, another beautiful surprise uh for me was that, as a bit of a loner who loves being by myself, one surprise of going to this place was that every time I walk along the monastery road, I’ll meet a fellow traveller, another retreatant. And we’ll stop and we’ll talk for maybe two minutes, three minutes. And I’ll quickly feel this is one of my
    0:28:43 closest friends. And what we’re engaging in exchanging is really rich, because we’re not joined by the fact we work in the same business or we come from the same town or we went to the same college. We’re joined by the fact we’ve responded to the same longing. We’ve both come in search of the silence. So we’re both in search of our deepest lost selves. And what we say to one another arises from silence. And so even the briefest interaction there is very rich. And, strange as it is, I trust those people that
    0:29:12 I meet along the road in a way that, sadly, I wouldn’t trust the stranger I just bumped into on Fifth Avenue in New York or if I’m walking down the street in Santa Barbara. And secondly, as you said so beautifully, the monks are essentially living to look after one another. And as you know, in the book, one of the things that moves me more and more is how, because they’re in this remote location, um, they’re often cut off from the world entirely by winter storms. And since many of the monks are quite elderly, they sometimes have to be hospitalized.
    0:29:43 And the prior, who became a very good friend of mine, would tell me that there was one secret back road only open between eight in the evening and five in the morning. And he would drive five hours through the dark, through the night, night after night after night, just to be with one of his brothers in the hospital. The hospital’s two and a half hours away um by road. Uh so a monk would get helicoptered out and then the prior every night would make the long drive through the dark just to sit by the side
    0:29:58 of his fellow monk. And he said, I am their father, I am the only family they have, I am their brother, literally as a monastic brother, uh I’m their mother in a sense. Uh and to see that degree of service and compassion is really humbling. [SPEAKER_TURN]
    0:30:14 All these trips to the monastery over all these years, have you ever thought about just not coming back into the world, just staying there, to stay in that silence and in that space, cut off from all the craziness? [SPEAKER_TURN]
    0:30:44 Uh, much too often and much too powerfully. And one good corrective um was when I started staying with the monks in their enclosure: I found how busy their lives were, that there wasn’t as much silence as a visitor has, that they were leading round-the-clock busy lives, in the office twenty-four hours a day with their colleagues, every hour for the rest of their lives, and that in many ways it’s more all-consuming than a
    0:30:55 job would be. So my temptation to become a monk has always been too strong, but again and again I’ve seen it’s my romantic illusion of what being a monk is rather than the reality. [SPEAKER_TURN]
    0:31:12 Well look, perhaps becoming a monk is a bit extreme, but why remain secular after all these years of religious exploration? I mean, have you ever felt tempted to make that leap? Does it seem almost irrelevant to you at this point? [SPEAKER_TURN]
    0:31:42 I’m so happy you said it might seem irrelevant, ’cause I think that’s the best answer. In other words, I think what I believe is much less important than how I act. And the beliefs in some ways are immaterial, or they’re a luxury, because we all know people who strongly will assert their belief and act in ways that horrify us. And we all know people who claim to have no beliefs and act with a selflessness and compassion that could put a cardinal to shame. So I’m happy not to get into the realm of belief, which can seem an
    0:32:12 indulgence and certainly can divide the world. And I’m much more concerned with how I can be a better friend, a better husband, uh, a better father. So I’ve never felt a need to join a group or to subscribe to a theory or a system of belief or a uh particular understanding of the world. But I have wanted to try um to lead a kinder and more wide-awake life. Um, and I think, again, I began by saying maybe an inner life is um a way of putting
    0:32:32 it that I respond to more happily than talking about spirituality or religion or any of those. I think if you have a rich inner life, you’ll be able to give more to other people. And if you neglect your inner life, there’s going to be a certain emptiness that you share with other people. And I prefer, I think, um, to put it in those terms. [SPEAKER_TURN]
    0:32:37 What does a word like God mean to you at this point? [SPEAKER_TURN]
    0:33:08 And it’s a beautiful way of describing a truth that all of us um know and intuit but lose sight of, and I go to the Hermitage to be reminded of that, and I don’t always happen to use that word for it, but it’s like many, many languages. I can’t speak Aramaic, but it doesn’t mean that words in Aramaic are false; it just means that I don’t happen to understand them, because I can only function in English. Um, as you know, there was a moment in this book when actually I was staying
    0:33:38 there. And I went to have um an interview, a television interview. And I was told uh before the interview by the producers, you know, at the end there were going to be some rapid-fire questions. So we’ll tell you what they are so you can be prepared. So they told me the questions to anticipate. And I went and I had the hour-long interview. And at the end there were rapid-fire questions, but they were totally different from the ones the producers had prepared me for. I think they’d got it mixed up and given me the questions for somebody else. So out of nowhere the interviewer said, uh,
    0:34:08 what’s your definition of God? And because I was completely unprepared, I said, reality. And I realised if I’d been prepared for that question or thought about it for a hundred days, I couldn’t have come up with a better answer for how I see things. Um, but because it came out of me unthinkingly, it was exactly the right answer, the answer I could trust. Uh, and so what does that mean? Does it mean that God is real? It could mean that. Does it mean, as the Buddhists will say, that really the divinity we have to bow before is
    0:34:38 reality? It could mean that. But um, you know, I think the notion of God is a really helpful one if it um helps people navigate the complications of the world. But if people choose to use other words, those may be more helpful to them. You know, the Dalai Lama wonderfully says that there’s a reason that there are many religious traditions in the world, and it’s the same reason that there are many um medical traditions: because some people find their system responds best to Chinese medicine,
    0:34:56 others respond well to ayurveda, others respond best to western medicine. Um, all of us have the same problems, but each of us perhaps is most helped by one medical system rather than another. Um, and I think that’s how I feel about um religious traditions.
    0:35:26 A word you use a lot is mystery, and I quite like that. Um, I think, like you, I’ve always enjoyed the questions more than the answers, and to the extent that spiritual and religious traditions are just trying to keep us in contact with the mysteries of existence, I find them very valuable. Um, but I still think I’ll always believe that the dogmas and the institutions,
    0:35:38 which are all too human, do more harm than good. But maybe I’m being too harsh in that judgement, I don’t know. [SPEAKER_TURN]
    0:36:08 No. No. I mean, I happen to agree with you one hundred percent. And the sorrow is that the church is so imperfect, members of every church are so human and flawed, um, dogma is so pernicious, that many of us are tempted to throw the baby out with the bathwater. And we see so many terrible things done in the name of religion that uh we assume that religion itself is corrupt, which I think is unfair. What humans
    0:36:38 do with the heavens is always going to be human and extremely fallible and often destructive. What the heavens do with humans is much more inarguable. Right at the centre of my previous book uh was a chapter on Jerusalem, which to me speaks for exactly what we’re describing. Because I’m not Christian or Muslim or Jewish, and yet I’m moved to tears when I go to Jerusalem. And sometimes I’ll be walking down the street in Japan and I’ll be magnetically pulled toward Jerusalem. So
    0:37:08 powerful and charismatic is that place. And yet, of course, the city of faith is the city of division, and for as long as we can remember, Jerusalem has been a centre of bloody and violent conflicts, precisely because, my sense is, one person’s sense of heaven is very different from his neighbour’s. There’s something real and inarguable about our longing for the divine and for the beyond, and yet what we do with it, and the ways in which we try to cut it up into names and ideologies, exemplifies the worst
    0:37:38 of our humanity and makes a mockery of it. So I agree with you. I mean, I think mystery is wonderful if it’s a way of speaking of the ineffable. And I think most wise souls have said, if we were to try to understand God, it wouldn’t be God. But I mean, the nature of divinity is that it’s beyond words and expressions. And I think that’s another reason why I stress silence, ’cause I think silence touches me as no scripture ever could. The Bible, the teachings of the Buddha, the other scriptures,
    0:38:07 they all have great wisdom in them, but I can’t one hundred percent subscribe to them. I can’t trust them in the way that I trust silence, which I think again lies beyond all of them, in some part of me that I couldn’t begin or try to name or express. And for all the horrors that are perpetrated in the name of religion, I don’t want to assume therefore that religion is a fraud. I think it’s just that humans are not always worthy of the possibilities that are given to us.
    0:38:17 Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. [SPEAKER_TURN]
    0:38:52 Support for The Grey Area is brought to you by Wondery and their new show Scam Factory. You’ve probably received some suspicious e-mail or text that was quite obviously a scam, deleted it, and moved on with your day. But have you ever stopped to think about who was on the other end of that scam? Occasionally, the stranger on the other side is being forced to try and scam you against their will. And on Wondery’s new true crime podcast, they’re telling the story of those trapped inside scam factories, which they say
    0:38:58 are heavily guarded compounds on the other side of the world, where people are coerced into becoming scammers.
    0:39:14 Told through the eyes of one family’s harrowing account of the sleepless nights and dangerous rescue attempts trying to escape one of these compounds, Scam Factory is an explosive new podcast that exposes what they say is a multi-billion dollar criminal empire operating in plain sight.
    0:39:24 You can follow Scam Factory on the Wondery app or wherever you get your podcasts. You can listen to all episodes of Scam Factory early and ad-free right now by joining Wondery+.
    0:39:56 Support for the grey area comes from Attio. Attio is an AI-native CRM built specifically for the next era of companies. They say it’s extremely powerful, can adapt to your unique data structures, and scales with any business model. Setting up Attio takes less than a minute, and in seconds of syncing your emails and calendar you’ll see all your relationships in a fully fledged platform, all enriched with actionable data.
    0:40:17 With Attio you can also create email sequences and customizable reports. And on top of it all, you can build AI-powered automations and use its research agent to tackle some of your most complex processes. So you can focus on what matters, building your company. You can join industry leaders like Flatfile, Replicate, Modal, and more.
    0:40:32 You can go to attio dot com slash grey area and you’ll get fifteen percent off your first year. That’s A-T-T-I-O dot com slash grey area. Attio dot com slash grey area.
    0:41:06 Support for the grey area comes from Upway. If you’re tired of feeling stuck in traffic every day, there might be a better way to adventure, on an e-bike. Imagine cruising past traffic, tackling hills with ease, and exploring new trails, all without breaking a sweat or your wallet. At upway.co you can find e-bikes from top-tier brands like Specialized, Cannondale, and Aventon, at up to sixty percent off retail. Perfect for your next weekend adventure. Whether you’re looking for a rugged mountain bike or a
    0:41:18 sleek city cruiser, there’s a ride for everyone. And right now, you can use code grey_area_150 to get a hundred and fifty dollars off your first e-bike purchase of a thousand dollars or more.
    0:41:57 Well, as you know, um, when you’re not at the monastery, leaning into that silence, being still, meditating, reading intensively, these things are hard to do, and when we try to do them we often get carried away by distracting thoughts or events.
    0:42:12 So I’m curious what your advice is to people for how to practice silence in the day-to-day Monday, Tuesday, Wednesday, rinse, wash, repeat world that most of us live in most of the time. [SPEAKER_TURN]
    0:42:42 Yeah. I think the harder it is, the more urgent and necessary it is. And when I have friends who say I don’t have time to be silent or I don’t have time to go on retreat, I think they’re the ones who are really in need of it, because somehow they’ve lost control of their lives. And all of us know that if you’re very very busy, you’re unlikely to be wise, and that those people who are really wise are never too busy. So to a typical person um who shares the concerns you just voiced
    0:43:12 I would say, go for a walk. Uh, go and meet a friend without your cell phone. Try, instead of killing time, to restore time. I’ll give an example. Uh, sitting in this apartment, every evening I used to wait for my wife to come back from work, and I never knew if it would be twenty minutes or seventy minutes. So I was just waiting. And I would kill the time. I would scroll through the internet or I’d turn on the TV, and there’s never anything to watch on Japanese TV. And then one day I thought, um, why don’t I just
    0:43:42 turn off the lights and listen to some music. And I did. And very quiet music at first, but not so quiet music later. And I was amazed at how much fresher I felt when I heard her key in the door, how much more I had to give to her, how much better I slept, how much less jangled I was when I woke up. And it’s a tiny example of how I made a little space in my day for doing nothing when the alternative was doing useless stuff. And doing nothing was
    0:44:12 really the best response to that, and the kindest thing I could share with my wife when she did come home. And I think all of us have those um spaces in our days, and it’s up to us how we choose to use them. Our aim in this world of distraction is to put ourselves in the space beyond distraction, because so long as we’re cut up and living in little fragments, we’re no use to anyone at all. And as I said, my prejudice is to think the more deeply absorbed we are in
    0:44:42 something, the happier and the fuller and the richer we are. I mean, Simone Weil, years ago, said attention is a form of prayer, and I loved it when you were talking about the attention economy, and I don’t want to give my attention uh to Google and Facebook if there’s a chance of giving my attention to Dostoevsky or Emily Dickinson or the beauty of the Deer Park down the street from me, um, or wherever you happen to be. I think there’s natural beauty around you. Um, so uh I think the beauty
    0:45:00 of silence and the qualities I associate with silence is that they’re non-denominational and they’re available to everybody in her life. And if her life feels too full and too stressed, that’s a sign that she has to do something akin to taking medicine or going to the doctor. [SPEAKER_TURN]
    0:45:15 In this world of increasingly stunted attention spans, do you worry that the monastic life is disappearing, or will disappear, and if it does, what do you think we’ll lose? [SPEAKER_TURN]
    0:45:45 I worry a huge amount. And of course, because of uh diminished attention spans, there are more and more spas and yoga centres and new age places, but unfortunately many of them are based around a single human who’s mortal, or around a certain exclusive philosophy, and not responsive to people who don’t subscribe to that. And so I do think if monasteries and convents die away, and with them the example of people who have given their lives up twenty-four hours a day for the rest of their lives
    0:46:15 to a certain commitment, we’ll lose something very, very significant. And all the retreat centres in the world are never going to compensate for that loss. It’s a severe concern, and the place that I go to, uh, New Camaldoli in Big Sur, they’re having great trouble, as all monastic institutions are, getting new people to make a commitment for life. Um, and so wherever you are, in every order, we’re losing um those places, and I think that’s a
    0:46:33 loss, and I don’t know how it could be repaired, because I’m a perfect bad example. In other words, I go there on retreat and I enjoy all the benefits of it, but I haven’t made the commitment to join them and to support them in that way. Um, so I hope there are lots of people who are wiser and more committed than I am who can keep these places going. [SPEAKER_TURN]
    0:46:42 I’m never gonna become a monk, but I’ve always wanted to visit a monastery and write about the experience. Uh, maybe I’ll go now. [SPEAKER_TURN]
    0:47:12 You’ll never regret it, Sean. And the wonderful thing is, there’s a monastery near you, wherever you happen to be. I mean, there are plenty of them, and many of them open their doors to visitors, and all of them, I think, offer a version of the same silence. My suspicion is, if you go once, uh, well, you may well be induced to start going more than once. But even if it doesn’t, just knowing that medicine is nearby, just the memory and just the prospect of a place that brings you closer to what is essential in
    0:47:42 life, is going to transform your days. And the more confusing and painful those days are, the more useful it is to recall, well, there’s a response to them, and there is um medicine at hand if I really need it. Um, so, yeah, I seem to have come back a lot in this conversation to the medical analogy, but I think it’s because many of us are uh sick or lost and confused and uh looking for anything that can address that, and in my experience,
    0:47:48 going on retreats and into silence has been one of the best and most irreplaceable medicines I’ve found. [SPEAKER_TURN]
    0:48:15 I think we’re all probably a little sick, lost, and confused, and only aware of it to varying degrees. [SPEAKER_TURN]
    0:48:20 Questions. [SPEAKER_TURN]
    0:48:33 Well, it’s a beautiful book, um, and it was a joy to read. Um, and once again, the book is called Aflame: Learning from Silence. This was wonderful. Thank you so much, Pico. [SPEAKER_TURN]
    0:48:39 Thank you so much, Sean. Um, this is a kind of medicine you’re sharing with your listeners, and I’m so grateful for it. [SPEAKER_TURN]
    0:49:01 Alright, I hope you enjoyed this episode. I know I did. This conversation genuinely changed how I think about the value of silence and how much I need to balance out all the noise and chaos in my own life.
    0:49:23 But will it change how I actually use moments of silence? I don’t know. But I guess the only way to find out is to keep trying. Keep looking for those moments where we can find them. And if we can’t find them, I guess we’ll have to make them. Let’s do that now. Just sit and enjoy a few more seconds of silence together.
    0:50:02 I would love to know what happened during your moment of silence. Did it feel the same as the moment of silence we took before the show? I would also love to know what you thought of the episode, or any episode. So drop us a line at thegrayarea@vox.com, or you can leave us a message on our new voicemail line at one eight hundred two one four five seven four nine. And once you’re finished with that, please go ahead and rate and review and subscribe to the
    0:50:03 podcast.
    0:50:32 This episode was produced by Beth Morrissey, edited by Jorge Just, engineered by Christian Ayala, fact-checked by Melissa Hirsch, and Alex Overington wrote our theme music. New episodes of The Gray Area drop on Mondays; listen and subscribe. The show is part of Vox. Support Vox’s journalism by joining our membership program today. Go to vox.com/members to sign up.
    0:50:35 And if you decide to sign up because of this show, let us know.

    How often do you find silence? And do you know what to do with it when you do?

    Today’s guest is essayist and travel writer Pico Iyer. His latest book is Aflame: Learning From Silence, which recounts his experiences living at a Catholic monastery in California after losing his home in a fire.

    He speaks with Sean about the restorative power of silence, and how being quiet can prepare us for a busy and overstimulated world.

    Host: Sean Illing (@SeanIlling)

    Guest: Pico Iyer, writer and author of Aflame: Learning From Silence

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • How to change your personality

    AI transcript
    0:00:01 [MUSIC PLAYING]
    0:00:05 Support for the gray area comes from Attio.
    0:00:10 Attio is an AI native CRM built for the next era of companies.
    0:00:11 They say its powerful data structure
    0:00:15 adapts to your business model, syncs in all your contacts
    0:00:19 in minutes, and enriches everything with actionable data.
    0:00:22 Attio says its AI research agents tackle complex work
    0:00:26 like finding key decision makers and triaging incoming leads.
    0:00:29 You can go to attio.com/grayarea, and you’ll
    0:00:31 get 15% off your first year.
    0:00:35 That’s attio.com/grayarea.
    0:00:44 Thumbtack presents the ins and outs of caring for your home.
    0:00:48 Out, indecision, overthinking, second guessing,
    0:00:50 every choice you make.
    0:00:56 In, plans and guides that make it easy to get home projects done.
    0:01:02 Out, beige on beige on beige.
    0:01:06 In, knowing what to do, when to do it, and who to hire.
    0:01:09 Start caring for your home with confidence.
    0:01:11 Download Thumbtack today.
    0:01:18 I think all of us, at some point,
    0:01:21 have wondered why we are the way we are.
    0:01:24 Maybe you’re a little neurotic, a worrier,
    0:01:28 or maybe you’re a tad abrasive, confrontational,
    0:01:30 or a bit evasive.
    0:01:32 Maybe you don’t think enough about others,
    0:01:35 or maybe you do, but just a little too much.
    0:01:38 We see our faults as faults.
    0:01:41 But aren’t they really just our personalities?
    0:01:43 And what is that exactly?
    0:01:45 A personality.
    0:01:46 Is it something we’re born with?
    0:01:49 Does it shift over time?
    0:01:51 Can we think and act our way into being a different and,
    0:01:53 hopefully, better person?
    0:01:59 I’m Sean Illing, and this is “The Gray Area.”
    0:02:07 Today’s guest is Olga Khazan.
    0:02:08 She’s a staff writer at The Atlantic
    0:02:11 and the author of the book “Me, But Better:
    0:02:14 The Science and Promise of Personality Change.”
    0:02:17 The book is a joy to read, full of ideas,
    0:02:19 but also personal in the sense that Olga documents
    0:02:22 her year-long effort to change things
    0:02:24 she doesn’t like about her own personality.
    0:02:28 Along the way, she does a nice job of weaving in the science
    0:02:31 and marking the limits of what we know and don’t know.
    0:02:35 It’s honest, curious, and reflective.
    0:02:37 And so, it turns out, is Olga.
    0:02:39 So I invited her on the show.
    0:02:47 Olga Khazan, welcome to the show.
    0:02:48 Thanks so much for having me.
    0:02:51 So let’s start with the basics here,
    0:02:55 because personality is one of those concepts
    0:03:00 that we all intuitively understand what it signifies,
    0:03:01 at least loosely.
    0:03:04 But it is pretty tricky to define.
    0:03:06 You’ve now written a book on it, so give me
    0:03:10 your neatest, clearest definition.
    0:03:16 Yeah, personality is the consistent thoughts and behaviors
    0:03:18 that you have every day.
    0:03:21 And some researchers think that, in addition
    0:03:24 to just having those thoughts and feelings and behaviors,
    0:03:26 they also help you achieve your goals.
    0:03:29 So depending on what your goals are,
    0:03:32 your personality kind of helps you get there.
    0:03:34 An example of this would be the personality
    0:03:38 trait of agreeableness, which helps you make friends
    0:03:39 and social connections.
    0:03:42 So people who tend to be more agreeable also
    0:03:45 tend to value friendships and connections
    0:03:47 and achieve more of those.
    0:03:50 You use the word consistent there.
    0:03:54 To what extent is personality just a performance?
    0:03:57 And to what extent is it something much more concrete?
    0:04:02 So personality is, in some ways, a performance.
    0:04:05 Let’s say you describe yourself as an introvert,
    0:04:07 but you have to give a big talk.
    0:04:09 And it’s very important to your career
    0:04:12 that this talk go well, right?
    0:04:16 You are probably going to perform, to a certain extent,
    0:04:17 extraversion.
    0:04:20 Or let’s say you’re going into a room full of investors,
    0:04:22 and you have to raise money for your startup,
    0:04:25 but you’re just a very introverted coder guy who just
    0:04:28 wants to code all day and not talk to anyone.
    0:04:31 You’re going to perform extraversion in that situation,
    0:04:33 too, because it’s very important for you
    0:04:37 to get whatever is at the end of that performance.
    0:04:39 The money or the professional accolades
    0:04:42 or whatever comes with it, it doesn’t have to be financial.
    0:04:45 It could be going on a first date is kind of a performance
    0:04:46 as well.
    0:04:50 So we do all perform elements of these traits
    0:04:52 every single day.
    0:04:54 But what most researchers think is
    0:04:57 that there is sort of a tendency that we all
    0:05:03 have toward a certain pattern of behaviors and thoughts that
    0:05:06 are more or less consistent, especially if we don’t try
    0:05:08 to change them in any meaningful way,
    0:05:11 that you kind of get up and you have these little patterns
    0:05:12 that you fall into.
    0:05:14 And that’s sort of like your, quote, unquote,
    0:05:16 “natural” personality.
    0:05:22 So we have what people call the big five personality traits–
    0:05:25 neuroticism, extraversion, agreeableness,
    0:05:28 which you just mentioned, openness to experience,
    0:05:31 and conscientiousness.
    0:05:33 Are these categories generally accepted
    0:05:38 in the field of psychology, and how useful do you find them?
    0:05:40 They are generally accepted.
    0:05:43 That is, now, if you read a personality study,
    0:05:46 it will most likely be based on the big five.
    0:05:48 So things like the Enneagram and Myers-Briggs
    0:05:51 are not generally accepted.
    0:05:54 That said, they are imperfect.
    0:05:57 There are some cultures that have traits
    0:05:59 that are very important in those cultures
    0:06:02 that the big five doesn’t really pick up as much.
    0:06:06 Meanwhile, things like openness, it’s sort of a catch-all.
    0:06:12 It doesn’t really map very cleanly onto someone’s personality
    0:06:14 as other people would observe it.
    0:06:15 So yeah, it is valid.
    0:06:16 It has weaknesses.
    0:06:22 But personality is so hard to measure and kind of scientifically
    0:06:24 get your head around that it’s sort of the best
    0:06:25 that we have right now.
    0:06:29 Well, part of the inspiration for this project
    0:06:36 is that you wanted to change some things about yourself.
    0:06:37 So what did you want to change?
    0:06:40 And why don’t you love yourself, Olga?
    0:06:42 Don’t you know you’re good enough and smart enough
    0:06:43 and people like you?
    0:06:45 Why do you want to change?
    0:06:47 Yeah, so I did want to change.
    0:06:50 And I also love myself.
    0:06:53 The two are not mutually exclusive.
    0:06:55 Though I know that it can feel that way,
    0:06:58 that if you admit that you want to change,
    0:07:01 that it can feel like you’re saying that you don’t love
    0:07:02 yourself.
    0:07:05 But I think that’s where the idea of personality traits
    0:07:07 as tools to help you achieve your goals
    0:07:09 can be really helpful.
    0:07:11 Because we all have goals we want to achieve,
    0:07:14 even if we all like our lives and ourselves.
    0:07:17 And for me, what I realized is that things were going well.
    0:07:20 I had a pretty nice life.
    0:07:21 Nothing was seriously wrong.
    0:07:26 But my reactions to situations were not benefiting me.
    0:07:31 They were kind of undermining me and making me not
    0:07:32 able to enjoy my life.
    0:07:37 So I start the book out with this day in Miami
    0:07:41 that actually sounds great now, as a new parent,
    0:07:43 where honestly, all that happened
    0:07:46 is that I got a bad haircut, then immediately
    0:07:48 had to get professional photos taken,
    0:07:51 then got stuck in traffic, and then
    0:07:57 had this weird debacle with a grocery store shopping cart.
    0:08:02 And honestly, just because of my high neuroticism at the time,
    0:08:05 the accumulation of all of those small things
    0:08:09 made me have this epic meltdown when I got back to my hotel.
    0:08:13 And I realized that that happened a lot in various ways.
    0:08:17 Small things would happen that would make me not
    0:08:19 able to appreciate the big picture or not
    0:08:23 able to just be happy with what I have or be grateful.
    0:08:25 And so that’s really what I wanted to work on
    0:08:27 is appreciating my life for what it was.
    0:08:31 And also just outside of neuroticism,
    0:08:34 I was feeling the COVID social isolation.
    0:08:38 And I wanted to deepen my social connections as well.
    0:08:40 So that’s why I wanted to change.
    0:08:45 Would you say that you had or have a tendency to catastrophize?
    0:08:46 Because I do.
    0:08:48 And I don’t know if that’s a function of neuroticism
    0:08:51 or something else, but I would say that is the one thing
    0:08:55 that I’m trying most aggressively to stop.
    0:08:58 Is that tendency part of neuroticism?
    0:09:00 Or is it a little more complicated than that?
    0:09:04 Yeah, it’s definitely part of neuroticism.
    0:09:06 Neuroticism is sort of the trait that’s–
    0:09:09 so to kind of simplify it, it’s associated
    0:09:10 with depression and anxiety.
    0:09:15 And basically, all those are just like a feeling of threat.
    0:09:19 Like you just constantly see threats everywhere.
    0:09:21 The reason you’re catastrophizing
    0:09:24 is not because you’re silly or because you’re not realistic,
    0:09:27 but because you kind of can see the threats coming
    0:09:29 from every direction.
    0:09:32 And you’re like, how do I prevent those from happening?
    0:09:35 And that’s what makes people who are high in neuroticism
    0:09:36 so miserable.
    0:09:38 Well, I like what I think
    0:09:41 was Jud Brewer’s argument
    0:09:45 that you talk about in the book, that anxiety is a habit loop
    0:09:51 where anxiety triggers the behavior of worry, which
    0:09:53 feels like temporary relief,
    0:09:56 but really it just makes us more anxious in the long run.
    0:09:57 And this is something–
    0:10:01 this is something neurotic people do by default, right?
    0:10:04 I mean, it’s just the first instinct.
    0:10:05 Oh, yeah.
    0:10:08 I always thought anxiety and worry were the same thing.
    0:10:09 But worry is actually–
    0:10:10 it’s a behavior.
    0:10:13 It’s almost like a self-soothing behavior.
    0:10:15 And people who are very anxious think
    0:10:19 that if you just worry enough, you won’t be anxious anymore.
    0:10:21 But instead, worry kind of sometimes
    0:10:23 can make you more anxious.
    0:10:25 Like you’re never going to get to the end of the worrying.
    0:10:28 Well, it’s also about the discomfort with uncertainty,
    0:10:29 right?
    0:10:30 You talk about how the neurotic person
    0:10:34 is the one who gets the
    0:10:37 “do you have a second?” Slack from your boss and freaks out.
    0:10:38 I’m the type.
    0:10:40 If I get that “do you have a second?” out of nowhere
    0:10:41 from the boss,
    0:10:46 I’m filing for food stamps before lunch.
    0:10:49 It’s just my mind just goes there.
    0:10:51 OK, this is becoming too much about me already.
    0:10:52 No, it’s OK.
    0:10:52 Yeah, I know.
    0:10:53 I’m right there with you.
    0:10:55 But uncertainty is wrapped up with this, right?
    0:11:01 It’s just– it’s an uneasiness about what the future might
    0:11:04 hold and our ability to control or not control.
    0:11:08 And so you’re just anxious about the world.
    0:11:13 And I mean, I’m sure there’s evolutionary utility in that.
    0:11:15 But boy, past a certain point, it just
    0:11:18 becomes pathological, really.
    0:11:21 Yeah, I mean, that’s a huge part of it.
    0:11:23 Neuroticism is all intertwined with a feeling
    0:11:27 of wanting control, of really fearing uncertainty.
    0:11:29 In the modern world, it’s all about learning
    0:11:33 how to live with uncertainty and accept
    0:11:37 that there is uncertainty in the world without letting
    0:11:40 it rule you, basically, this fear of uncertainty.
    0:11:41 What about agreeableness?
    0:11:43 Agreeableness sounds pretty agreeable.
    0:11:48 I mean, nobody wants to be called disagreeable, I don’t think.
    0:11:51 But is agreeableness more complicated than that?
    0:11:53 I mean, how much agreeableness is too much?
    0:11:57 When do we need to be a little disagreeable?
    0:12:02 Yeah, agreeableness was one of the ones that I was working on.
    0:12:07 And it’s basically like warmth and empathy toward others
    0:12:09 and also trust.
    0:12:13 And that element of agreeableness can be really good.
    0:12:15 And it can deepen your relationships
    0:12:18 and give you more fulfilling friendships.
    0:12:22 Where some people say that they’re actually too agreeable
    0:12:25 and they want to pare back is when they feel like they’re
    0:12:27 being people-pleasers.
    0:12:30 And they feel like people walk all over them
    0:12:32 or they don’t know how to say no.
    0:12:34 So part of agreeableness is learning
    0:12:38 how to communicate boundaries, how to make friends,
    0:12:40 but also not just by saying yes to everything
    0:12:42 that your friends ask of you.
    0:12:46 And to still have your own boundaries and your own things
    0:12:48 that you’re willing and not willing to do.
    0:12:52 So for your year-long personality transformation
    0:12:55 project, you did focus on all five of these traits
    0:12:57 to varying degrees.
    0:13:01 Which did you find was the hardest to tweak in any direction?
    0:13:05 So neuroticism was the hardest by far for me.
    0:13:11 It is because the way to improve on neuroticism
    0:13:15 is meditation or any kind of mindfulness practice.
    0:13:23 It can be yoga, not CorePower, but slow, contemplative yoga.
    0:13:26 It can be different forms of mindfulness,
    0:13:28 but it’s basically mindfulness.
    0:13:30 And I found that really challenging.
    0:13:33 I am not a natural meditator.
    0:13:37 I kind of have a loop of ongoing concerns and worries
    0:13:42 and to-do list when I’m not thinking about anything.
    0:13:45 I don’t like it when people are too relaxed.
    0:13:48 I find that irritating.
    0:13:48 Really?
    0:13:50 Why?
    0:13:52 I just– I think it was a little bit hard for me
    0:13:54 to let go of my anxiety.
    0:13:58 Because on some level, I think–
    0:13:59 and I still sometimes kind of think this–
    0:14:04 I think that anxiety is protective, at least for me.
    0:14:07 It forces me to do things.
    0:14:07 And it helps me.
    0:14:10 It is like the fire under me.
    0:14:14 And I think at times, I was a little bit like, oh, sure.
    0:14:17 This is fine for people who don’t have a lot going on,
    0:14:19 but I need my anxiety.
    0:14:21 How long did you try meditating?
    0:14:24 I mean, did you ultimately find it to be helpful?
    0:14:30 Did you score less neurotic at the end of that practice?
    0:14:32 I think I did meditation really, really seriously
    0:14:34 for about six months of this.
    0:14:39 And it did work in the sense that my neuroticism went down.
    0:14:44 But when I said that neuroticism is depression and anxiety,
    0:14:47 it was actually mostly my depression score that went down.
    0:14:49 So I became less depressed.
    0:14:54 And my anxiety also went down, but it was still quite high.
    0:14:56 It was not as high as it had been,
    0:14:59 but it didn’t go away completely.
    0:15:02 But I think one reason why I became less depressed
    0:15:06 is that the class that I took, which was called MBSR,
    0:15:08 mindfulness-based stress reduction,
    0:15:12 had a lot of Buddhist teachings that were part of it.
    0:15:14 So one of the things that my meditation teacher said
    0:15:18 was things happen that we don’t like.
    0:15:21 And for me, even though obviously things
    0:15:23 happen that we don’t like, I realized
    0:15:26 that I was someone who, when things would go wrong,
    0:15:29 I would start to blame myself very intensely.
    0:15:32 And I would have this very intense self-blame that would
    0:15:36 be very hard to break out of, even if it was something that
    0:15:37 was clearly not my fault.
    0:15:42 It was like an act of God or really awful traffic
    0:15:46 or just something that had nothing to do with me.
    0:15:48 I would start to be like, well, I should have left earlier.
    0:15:50 I should have, blah, blah, blah, I should have predicted this.
    0:15:54 And I think just this reminder that things happen that we don’t
    0:15:56 like and that everyone has things that happen in their life
    0:15:57 that they would rather not happen.
    0:16:00 And we all have to deal with that.
    0:16:03 I don’t know, that was weirdly very helpful to me.
    0:16:06 What is the scientifically best personality?
    0:16:09 And look, there is a part of my philosophical soul
    0:16:12 that shudders, even at asking this question,
    0:16:14 because I don’t think science can make these kind of value
    0:16:15 judgments.
    0:16:19 But what I’m getting at is, what does the research
    0:16:21 on happiness and personality tell us
    0:16:25 about what kinds of traits tend to be most correlated
    0:16:29 with happiness and well-being and a flourishing life?
    0:16:32 If your goal is happiness, which I am not saying that it has
    0:16:34 to be, there’s more life than happiness.
    0:16:38 But as far as happiness, well-being, longevity,
    0:16:42 all those goodies, it’s basically being high but not
    0:16:45 too high on all of the five traits.
    0:16:48 So being pretty extroverted, pretty agreeable,
    0:16:53 pretty open to experiences, very conscientious,
    0:16:57 and then very emotionally stable.
    0:17:00 You say in the book that extroverts are happier, in part,
    0:17:05 because they interpret ambiguous stimuli more positively.
    0:17:06 How true is this?
    0:17:08 I mean, I’m sure there are some people out there
    0:17:11 who might find this kind of claim a little crude.
    0:17:15 So how clear is the evidence on this?
    0:17:19 How confident are we that extroverts in general are happier?
    0:17:22 I mean, they certainly look like they’re having more fun,
    0:17:24 but that’s anecdotal.
    0:17:26 So the evidence that extroverts are happier
    0:17:28 is pretty consistent.
    0:17:31 It’s been replicated quite a few times,
    0:17:33 including by researchers who weren’t connected
    0:17:36 to the original studies and were dubious,
    0:17:39 and they replicated it.
    0:17:41 And the one researcher who did that, who I talked to,
    0:17:43 is himself an introvert.
    0:17:44 So it is pretty clear.
    0:17:47 The reasons why are less clear.
    0:17:49 So as you mentioned, one interpretation
    0:17:52 is that they walk into a room full of people,
    0:17:56 and they’re all strangers, and they don’t immediately
    0:17:57 get a smile out of anyone.
    0:18:01 It’s just kind of a straight-faced kind of people
    0:18:03 are like, what are you doing here?
    0:18:05 I, an introvert, would be like, oh, my god.
    0:18:07 I’m not supposed to be here.
    0:18:09 Nobody likes me.
    0:18:12 I need to leave kind of just like flee, flee, flee.
    0:18:13 It’s that self-talk, right?
    0:18:15 All that self-chatter.
    0:18:16 Right, right, right.
    0:18:19 An extrovert would be like, oh, awesome.
    0:18:21 I just need to introduce myself around.
    0:18:24 And pretty soon, people will warm up to me.
    0:18:27 They just have a different interpretation of events
    0:18:30 that helps them be happier.
    0:18:32 They are more active.
    0:18:34 They’re just always out and doing things,
    0:18:36 like the people who are signed up for a million clubs
    0:18:39 and things are extroverts.
    0:18:43 And they have more social connections, not just friends.
    0:18:46 They also have more weak ties, more acquaintances,
    0:18:49 just people they talk to throughout the day.
    0:18:52 And that helps them feel happier.
    0:18:55 (gentle music)
    0:19:09 Support for the gray area comes from Blue Nile.
    0:19:13 Okay, so you’ve decided to pop the question
    0:19:17 and you’re 99.9% sure that your partner will say yes.
    0:19:21 Now here comes the hard part, picking out the ring.
    0:19:22 And you’ve got some decisions to make.
    0:19:26 What shape, size, style, color, clarity.
    0:19:28 It can all seem so overwhelming.
    0:19:30 You’re thinking about putting it off
    0:19:31 for another few months.
    0:19:34 But you don’t have to, because Blue Nile can help you
    0:19:37 get the perfect ring to go alongside that big question.
    0:19:38 At Blue Nile, they say you can create
    0:19:40 a bigger, more brilliant engagement ring
    0:19:41 than you can imagine.
    0:19:45 At a price you’ll rarely find at a traditional jeweler.
    0:19:49 Since 1999, Blue Nile has been the original online jeweler.
    0:19:50 They say they’ve always been committed
    0:19:52 to ensuring that the highest ethical standards
    0:19:55 are observed when sourcing diamonds and jewelry.
    0:19:57 And as a bonus, your surprise will stay safe
    0:20:00 because every Blue Nile order arrives in packaging
    0:20:02 that won’t give away what’s inside.
    0:20:06 Right now, you can get $50 off your purchase of $500 or more
    0:20:09 with code grayarea at bluenile.com.
    0:20:12 That’s $50 off with code grayarea at bluenile.com.
    0:20:13 BlueNile.com.
    0:20:21 Support for the gray area comes from Greenlight.
    0:20:23 Was there ever a time you were old enough
    0:20:26 to start handling your own finances and thought,
    0:20:28 how come no one ever taught me this stuff?
    0:20:29 You’re not alone.
    0:20:33 Money management isn’t exactly taught in history class.
    0:20:36 That’s why Greenlight has created a debit card and money
    0:20:39 app made for families that lets kids learn how to save,
    0:20:41 invest, and spend wisely.
    0:20:44 So your kids don’t have to get caught off guard one day
    0:20:46 when it comes to managing their own money.
    0:20:48 With Greenlight, parents can send money to their kids
    0:20:51 while also keeping an eye on their spending and saving.
    0:20:53 Plus, kids can play games on an app
    0:20:57 that teaches money skills in a fun, accessible way.
    0:20:59 The Greenlight app even includes a chores feature
    0:21:02 where you can set up one time or recurring chores,
    0:21:04 customized to your family’s needs,
    0:21:08 and reward kids with allowance for a job well done.
    0:21:09 My kid is too young for a finance talk,
    0:21:11 but one of our colleagues here at Vox
    0:21:13 uses Greenlight with his two boys
    0:21:15 and he absolutely loves it.
    0:21:18 Start your risk-free Greenlight trial today
    0:21:20 at greenlight.com/grayarea.
    0:21:24 That’s greenlight.com/grayarea to get started.
    0:21:26 Greenlight.com/grayarea.
    0:21:33 Support for the gray area comes from Shopify.
    0:21:35 Running a business can be a grind.
    0:21:37 In fact, it’s kind of a miracle
    0:21:39 that anyone decides to start their own company.
    0:21:42 It takes thousands of hours of grueling,
    0:21:44 often thankless work to build infrastructure,
    0:21:47 develop products, and attract customers.
    0:21:49 And keeping things running smoothly requires
    0:21:51 a supportive, consistent team.
    0:21:53 If you want to add another member to that team,
    0:21:56 a platform you and your customers can rely on,
    0:21:58 you might want to check out Shopify.
    0:22:01 Shopify is an all-in-one digital commerce platform
    0:22:02 that wants to help your business
    0:22:04 sell better than ever before.
    0:22:06 It doesn’t matter if your customers spend their time
    0:22:08 scrolling through your feed
    0:22:10 or strolling past your physical storefront.
    0:22:13 There’s a reason companies like Mattel and Heinz
    0:22:16 turn to Shopify to sell more products to more customers.
    0:22:19 Businesses that sell more sell with Shopify.
    0:22:21 Want to upgrade your business
    0:22:23 and get the same checkout Mattel uses?
    0:22:25 You can sign up for your $1 per month trial period
    0:22:29 at shopify.com/vox, all lowercase.
    0:22:33 That’s shopify.com/vox to upgrade your selling today.
    0:22:35 Shopify.com/vox.
    0:22:38 (upbeat music)
    0:22:55 – Well, let’s talk about change,
    0:22:58 the science of personality change.
    0:23:02 As you say in the book, there is this idea
    0:23:07 that at around 30, our personalities are set like plaster.
    0:23:11 How true is that?
    0:23:15 I mean, how fixed is our personality?
    0:23:20 – So that idea is sort of not considered
    0:23:23 totally true anymore.
    0:23:25 There’s been quite a bit of research that shows
    0:23:29 that even when people don’t try to change,
    0:23:31 they actually end up changing
    0:23:33 over the course of their lives.
    0:23:37 So one example is that people get less neurotic
    0:23:38 as they get older.
    0:23:40 They also tend to get less open to experiences.
    0:23:44 So if you ever notice that people get more conservative
    0:23:45 as they get older, that could be
    0:23:48 because openness to experiences goes down.
    0:23:49 In studies where they follow people
    0:23:52 over decades and decades, most of those people
    0:23:55 in those studies change on at least one personality trait
    0:24:00 from young adulthood to late adulthood, their 60s.
    0:24:04 So it’s true, you’re not gonna be like unrecognizable
    0:24:08 probably, but people do change over time
    0:24:09 just kind of naturally.
    0:24:12 But what kind of the heart of my book is about
    0:24:15 is about changing your personality intentionally,
    0:24:18 which is sort of an even newer branch of research
    0:24:20 where they actually ask people
    0:24:22 if they would like to change their personalities,
    0:24:24 give them activities that are meant
    0:24:26 to help change their personalities
    0:24:29 and then kind of measure their personalities after the fact.
    0:24:33 And so then your personality would change even more.
    0:24:34 – This part of it is so interesting to me.
    0:24:37 I mean, I’ve had psychologists on the show
    0:24:40 before people like Paul Bloom who I love.
    0:24:42 I think he’s just fantastic.
    0:24:47 And I may be bastardizing his argument here.
    0:24:49 So if you’re listening, Paul, you can write in and tell me.
    0:24:52 But he always says something to the effect,
    0:24:55 not necessarily that we look, you are your brain
    0:24:56 and that’s it.
    0:25:01 But he does suggest that by the time you’re pretty young,
    0:25:04 five, six, seven, eight, whatever,
    0:25:08 somewhere around there, your personality is kind of clear
    0:25:09 and it’s kind of constant.
    0:25:11 You kind of are what you are.
    0:25:12 You can tinker a little bit at the margins
    0:25:14 and the environment matters.
    0:25:15 Of course, it always matters,
    0:25:19 but you really are sort of, you kind of are what you are,
    0:25:22 which isn’t to say that you can’t change anything,
    0:25:24 but you kind of are what you are.
    0:25:27 I mean, do you think that is a little overstated?
    0:25:29 – Yeah, I mean, I think,
    0:25:30 so there is a little bit of truth to that.
    0:25:34 So part of personality is inherited, right?
    0:25:36 It is genetic.
    0:25:39 So like, in some ways you start to see
    0:25:41 someone’s personality emerge in childhood
    0:25:45 and like they’re gonna be kind of like that.
    0:25:47 You know, probably for the rest of their lives,
    0:25:51 like, you know, barring anything major.
    0:25:53 But when you talk about tinkering at the margins,
    0:25:56 like that is actually like quite important.
    0:26:00 Like a lot of therapy is basically
    0:26:02 just tinkering at the margins.
    0:26:05 Like one of the books that I read kind of
    0:26:08 in reporting out my book is 10% happier.
    0:26:11 And that was Dan Harris meditating every day
    0:26:16 for like an hour a day, just to become 10% happier.
    0:26:18 – That’s a lot though.
    0:26:18 10% is a lot.
    0:26:19 – Yeah, yeah.
    0:26:21 I mean, but that’s, so like it kind of is,
    0:26:22 it depends on how you look at it.
    0:26:26 Like, I was a really anxious kid and I’m an anxious adult.
    0:26:30 You know, does that mean that I am exactly the same
    0:26:31 as I was when I was seven?
    0:26:34 I mean, you know, I’m recognizable,
    0:26:39 but I also think that I have knowledge and tools now
    0:26:44 to like control my anxiety a lot better obviously
    0:26:47 than I did when I was a kid or a teen, even a young adult.
    0:26:48 So I don’t know.
    0:26:51 I think that’s true, but also the margins
    0:26:53 are really important.
    0:26:56 – Yeah, no, there’s a lot of difference in that.
    0:26:58 Little tweaks here and there do matter.
    0:27:01 So, you know, thoughts and behaviors
    0:27:02 are these two elements of personality.
    0:27:05 I mean, how much power do we really have
    0:27:08 to alter our behavior by consciously,
    0:27:11 deliberately altering our thoughts?
    0:27:13 I mean, how clear is that relationship?
    0:27:16 Because if it is fairly clear that that is,
    0:27:19 seems like one of the more reliable ways to go about,
    0:27:23 you know, making some of these tweaks.
    0:27:25 – The traits where it’s all behavioral
    0:27:28 are definitely the easiest to change.
    0:27:31 So conscientiousness is a good example of this.
    0:27:34 It’s the one that’s all about being organized
    0:27:37 and on time, eating healthy, you know, exercising.
    0:27:40 What they’ve found is basically that
    0:27:43 you don’t have to like really want it
    0:27:45 in order to become more conscientious.
    0:27:47 You just kind of have to do the stuff
    0:27:49 associated with conscientiousness.
    0:27:51 So like making the to-do list,
    0:27:53 making the calendar reminders,
    0:27:57 leaving, you know, whatever, 10 minutes earlier,
    0:27:58 you know, decluttering your closets.
    0:28:01 Like if you do enough of that stuff,
    0:28:03 kind of regularly and consistently,
    0:28:06 that is conscientiousness.
    0:28:07 Like you will become more conscientious.
    0:28:10 You will get stuff done and like achieve your goals
    0:28:14 and have a higher level of conscientiousness.
    0:28:16 With some of the other ones like neuroticism
    0:28:18 or even agreeableness,
    0:28:21 like the reason why they’re harder to change
    0:28:23 is that you have to really want it.
    0:28:26 And it is kind of more about your thought processes
    0:28:29 and like challenging your thoughts
    0:28:33 and, you know, thinking about situations differently.
    0:28:37 Like if I was to revisit that day in Florida,
    0:28:39 now or in Miami,
    0:28:42 I wouldn’t necessarily like do anything differently.
    0:28:44 I would just think about it differently.
    0:28:47 And I would be less anxious
    0:28:49 as a result of how I was thinking about it.
    0:28:52 But like that is obviously harder
    0:28:53 than like making a to-do list.
    0:28:58 – Well, I did, I like that quote from Jerome Brunner
    0:28:59 in the book.
    0:29:01 You more likely act yourself into feeling
    0:29:05 than feel yourself into action,
    0:29:07 which kind of just feels like, you know, fake it
    0:29:08 till you make it.
    0:29:10 – So Nate Hudson, who’s like the main researcher
    0:29:12 that does the personality change research,
    0:29:15 I think my quote from him was that
    0:29:18 fake it till you make it is a reasonable way
    0:29:21 to do personality change.
    0:29:24 And that’s because a lot of this is sort of like
    0:29:26 the actions kind of make you think
    0:29:27 about things differently.
    0:29:31 So one example for me was with Extraversion,
    0:29:34 where I really did not want to go to all the stuff
    0:29:35 that I signed up for.
    0:29:38 So I signed up for like improv class
    0:29:41 and I just really dreaded it every single time.
    0:29:43 I did not really want to go,
    0:29:46 but I kind of found that if I like made myself go,
    0:29:47 it would make me happier.
    0:29:50 And I did have a good time and I enjoyed it,
    0:29:54 but it just like my thought process around improv
    0:29:55 was I’m not good at it.
    0:29:56 I’m not going to have fun.
    0:29:57 I don’t like this.
    0:29:58 I’m an introvert.
    0:30:01 So that was like sort of the clearest example of how
    0:30:04 sometimes you just kind of have to do something
    0:30:07 and the thoughts will follow from there.
    0:30:09 – I want to talk more about improv.
    0:30:11 I’ve always wanted to do it.
    0:30:12 But again, I’m an introvert.
    0:30:15 And I feel like I would just be paralyzed up there.
    0:30:18 But tell me about how long you did that
    0:30:21 and how transformative it was.
    0:30:24 – Improv was probably one of the best things I did.
    0:30:26 And also the scariest.
    0:30:30 I did that for about a year,
    0:30:32 or it was like several sessions of improv
    0:30:35 that I guess altogether added up to about a year’s worth.
    0:30:40 And I was at times so afraid that I froze up
    0:30:44 and like didn’t know what to say next.
    0:30:47 But something that’s really cool about improv is that like,
    0:30:50 it’s all about learning that other people
    0:30:54 can supply part of the interaction, right?
    0:30:57 Like you’re not responsible for everything
    0:30:59 going right in improv.
    0:31:04 It’s okay if things are just kind of chaotic and strange
    0:31:06 and not going perfectly.
    0:31:08 And I don’t know, it’s like a good lesson
    0:31:09 to have for social interaction.
    0:31:12 ‘Cause a lot of times when you’re just out there
    0:31:15 dealing with people, it’s going to be kind of crazy
    0:31:17 and you just kind of have to roll with it.
    0:31:21 And I don’t know, to me that was like a good thing to see.
    0:31:36 Support for the gray area comes from Mint Mobile.
    0:31:37 Maybe you’re someone who likes to save up
    0:31:39 for a tropical vacation.
    0:31:41 Or maybe you’re finally ready to buy
    0:31:43 the complete works of Friedrich Nietzsche
    0:31:46 at the local university bookstore.
    0:31:48 However you spend your money, I’m willing to bet
    0:31:50 you’d rather use your hard earned cash
    0:31:52 on stuff you actually want.
    0:31:56 And not say on ridiculously inflated cell phone bills.
    0:31:58 Mint Mobile can help with that.
    0:31:59 Mint Mobile says they offer phone plans
    0:32:01 for less than their major competitors,
    0:32:05 offering any three month plan for just $15 a month.
    0:32:07 Their plans don’t include the fine print,
    0:32:09 hidden charges or large monthly bills.
    0:32:12 Instead, customers get unlimited talk and text,
    0:32:14 high speed data and more.
    0:32:17 Delivered on the nation’s largest 5G network.
    0:32:20 If you like your money, Mint Mobile may be for you.
    0:32:24 You can shop plans at mintmobile.com/grayarea.
    0:32:27 That’s mintmobile.com/grayarea.
    0:32:29 Up front payment of $45 for three month
    0:32:32 five gigabyte plan required, equivalent to $15 a month.
    0:32:35 New customer offer for first three months only.
    0:32:37 Then full price plan options available.
    0:32:41 Taxes and fees extra, see Mint Mobile for details.
    0:32:48 Support for the gray area is brought to you by Wondery
    0:32:51 and their new show, Scam Factory.
    0:32:54 You’ve probably received some suspicious email or text
    0:32:56 that was quite obviously a scam,
    0:32:59 deleted it and moved on with your day.
    0:33:00 But have you ever stopped to think
    0:33:03 about who was on the other end of that scam?
    0:33:05 Occasionally the stranger on the other side
    0:33:09 is being forced to try and scam you against their will.
    0:33:11 And on Wondery’s new true crime podcast,
    0:33:12 they’re telling the story of those
    0:33:15 trapped inside scam factories,
    0:33:17 which they report are heavily guarded compounds
    0:33:19 on the other side of the world
    0:33:22 where people are coerced into becoming scammers.
    0:33:24 Told through the eyes of one family’s harrowing account
    0:33:27 of the sleepless nights and dangerous rescue attempts
    0:33:29 trying to escape one of these compounds,
    0:33:32 Scam Factory is an explosive new podcast
    0:33:34 that exposes what they say is a multi-billion dollar
    0:33:38 criminal empire operating in plain sight.
    0:33:40 You can follow Scam Factory on the Wondery app
    0:33:42 or wherever you get your podcasts.
    0:33:45 You can listen to all episodes of Scam Factory early
    0:33:48 and ad-free right now by joining Wondery Plus.
    0:33:55 Support for the gray area comes from Upway.
    0:33:58 If you’re tired of feeling stuck in traffic every day,
    0:34:01 there might be a better way to adventure on an e-bike.
    0:34:05 Imagine cruising past traffic, tackling hills with ease
    0:34:07 and exploring new trails,
    0:34:09 all without breaking a sweat or your wallet.
    0:34:13 At upway.co, you can find e-bikes from top tier brands
    0:34:16 like Specialized, Cannondale, and Aventon.
    0:34:18 At up to 60% off retail.
    0:34:20 Perfect for your next weekend adventure.
    0:34:22 Whether you’re looking for a rugged mountain bike
    0:34:25 or a sleek city cruiser, there’s a ride for everyone.
    0:34:29 And right now, you can use code grayarea150
    0:34:34 to get $150 off your first e-bike purchase of $1,000 or more.
    0:34:45 (upbeat music)
    0:34:48 (upbeat music)
    0:34:56 – Well, look, there’s a,
    0:34:59 I think a very important question you posed near the end.
    0:35:01 And I wanna ask it here.
    0:35:07 How do you know when to keep trying to change?
    0:35:09 I mean, how do you know when you’ve tried enough?
    0:35:12 I mean, isn’t there some point at which
    0:35:15 you do more harm by resisting who you are?
    0:35:19 And would be better off just making peace with that.
    0:35:19 – Yeah.
    0:35:21 I mean, this is like, you know,
    0:35:25 it’s not gonna be a hard and fast rule for everyone.
    0:35:28 But what I found is that when I was doing things
    0:35:31 that were like no longer enjoyable on any level
    0:35:36 and were not getting me any closer to like what I valued
    0:35:37 or like what I actually wanted
    0:35:40 is sort of when I would give up on them.
    0:35:44 So the big example of this is that I led a meetup group
    0:35:48 for a while based around foreign films, which is my hobby.
    0:35:53 And I just like didn’t really enjoy it.
    0:35:54 I just don’t like running meetings.
    0:35:58 I do moderate professionally for work,
    0:36:01 but like I just don’t like to do it in my free time.
    0:36:05 I guess I just, you know,
    0:36:07 didn’t have that high afterward,
    0:36:09 like I did after improv where I was like,
    0:36:10 yes, that was so fun.
    0:36:13 I kind of felt just like, oh, thank God that’s over.
    0:36:14 And to me, that was like a sign
    0:36:16 that it was maybe just time to wrap up
    0:36:19 and like hand it over to someone else.
    0:36:20 And I think that’s okay.
    0:36:22 Like you don’t, you know,
    0:36:23 trying something doesn’t mean you’re like stuck with it
    0:36:25 for life.
    0:36:27 – Yeah, and look, I ask this in part
    0:36:31 because I am sympathetic to the idea that,
    0:36:36 you know, being a little maladapted to a world
    0:36:39 that’s actually pretty shitty in lots of ways
    0:36:40 isn’t the worst thing.
    0:36:43 And our society has a way of conspiring
    0:36:47 to make good and honest people feel weird and unlikeable.
    0:36:50 And that’s a society problem, not a you problem,
    0:36:53 but also it is generally healthy to be well adjusted.
    0:36:56 So I don’t want to gloss over that either.
    0:37:00 – Yeah, and I mean, even things like neuroticism,
    0:37:03 you know, in small amounts or in certain situations
    0:37:05 can have some benefits.
    0:37:08 Like, I mean, I never did away with my anxiety completely.
    0:37:10 It’s now like at more manageable levels,
    0:37:12 but it’s not like gone.
    0:37:17 And, you know, in the last chapter,
    0:37:19 I interviewed Tracy Dennis-Tiwary,
    0:37:22 who is a psychologist.
    0:37:25 And she talks about how anxiety can have
    0:37:27 some positive elements.
    0:37:32 And when her son was born, he had like a heart condition.
    0:37:35 And she talks about how anxiety really helped her
    0:37:38 prioritize like finding the right specialists,
    0:37:40 you know, getting him the right treatment,
    0:37:42 coming up with a good treatment plan,
    0:37:44 you know, all of the things that are involved
    0:37:46 in caring for a sick child.
    0:37:49 It would be hard to do that stuff
    0:37:51 if you were completely not anxious,
    0:37:53 like if you just didn’t care about anything.
    0:37:56 Like anxiety is in some ways a way of caring.
    0:38:00 So, you know, I think it’s fine to like find ways
    0:38:03 of living with your anxiety,
    0:38:06 but to not like do away with it entirely.
    0:38:10 – Well, what are the most concrete,
    0:38:14 practical interventions you discovered along the way
    0:38:17 that people might find useful in their own efforts
    0:38:22 to improve or align their values and actions?
    0:38:23 – Sure, I will just toss some out
    0:38:25 that I found worked really well for me.
    0:38:27 I would sign up for something.
    0:38:30 Don’t just tell yourself you’re gonna go out
    0:38:33 to drinks with your friends more.
    0:38:37 Like sign up for a thing that like requires you to be there.
    0:38:40 With improv, you couldn’t miss more than two classes.
    0:38:43 So you had to go, even if you didn’t feel like going.
    0:38:44 – Accountability, right?
    0:38:45 There’s some accountability.
    0:38:46 – Yeah, like that’s what I would do for extroversion
    0:38:48 is I would sign up for a thing.
    0:38:51 For conscientiousness,
    0:38:54 I would actually start by decluttering.
    0:38:57 Like if you feel like you’re really disorganized
    0:39:01 before trying to like quote unquote get organized,
    0:39:04 I would just throw away as much stuff as possible.
    0:39:07 That was like the loud and clear thing
    0:39:09 that all the professional organizers told me
    0:39:13 is that like it’s all about just like having less stuff
    0:39:14 in your life.
    0:39:16 And that can be like, you know, commitments too
    0:39:19 and like sort of extraneous stuff that you’re doing.
    0:39:25 And I honestly would take a meditation class
    0:39:27 for anyone who’s interested in, you know,
    0:39:30 reducing their neuroticism to whatever degree.
    0:39:32 Even if not, like it’s just like an interesting
    0:39:38 intellectual exercise and, you know,
    0:39:39 possibly an emotional exercise.
    0:39:44 – Yeah, I found the ACT acronym
    0:39:48 pretty handy actually.
    0:39:51 It’s, you know, accept your negative feelings,
    0:39:55 commit to your values and take action.
    0:39:57 And you can say anything you like about that,
    0:40:02 but certainly the acceptance part seems really fundamental.
    0:40:04 I mean, one thing that comes across
    0:40:07 in a lot of the stories you tell in the book
    0:40:10 is that it doesn’t matter who you are,
    0:40:12 what you do, where you are,
    0:40:15 you’re going to have negative feelings all the damn time.
    0:40:19 And we add so much unnecessary suffering to our lives
    0:40:22 when we resist those feelings.
    0:40:25 Anyway, I’ll let you say anything you want about that.
    0:40:26 – Yeah, absolutely.
    0:40:28 Yeah, I thought that’s so helpful.
    0:40:31 And that was really how a lot of the people who
    0:40:34 I talked to who did change their personalities
    0:40:36 kind of muddled through
    0:40:39 because those first few attempts at, you know,
    0:40:41 being extroverted or, you know,
    0:40:43 even being conscientious can feel really uncomfortable.
    0:40:45 Like getting up at, you know,
    0:40:47 5 a.m. to go for a run is uncomfortable.
    0:40:49 And so they really were just like,
    0:40:51 I’m going to feel uncomfortable.
    0:40:53 Like I’m not going to like this at first,
    0:40:54 but it’s important to me that I keep doing this.
    0:40:57 And so I’m going to take action and actually do it.
    0:40:58 And I don’t know.
    0:41:01 I think that’s like a good little rule to live by
    0:41:03 for things that matter to you.
    0:41:08 – How important is it to really believe in your own agency?
    0:41:12 Is that a fundamental precondition of any kind of change?
    0:41:14 To believe that it’s possible
    0:41:18 that you have the freedom and the will to do that?
    0:41:21 – The argument that I always get into with people is like,
    0:41:24 some people think that like people never change, right?
    0:41:26 And kind of the extension of that is like,
    0:41:29 I will never change because people never change.
    0:41:31 And if that’s truly what you think,
    0:41:33 you probably aren’t going to try to change
    0:41:35 and you probably won’t change.
    0:41:38 There does have to be like some fundamental openness
    0:41:42 to change in order to even like embark on something like this,
    0:41:45 because it takes a lot of like energy and courage
    0:41:46 to do some of this stuff.
    0:41:50 And you can’t follow through on it
    0:41:53 if you think that like it’s not going to work.
    0:41:54 – All right.
    0:41:57 Once again, the book is called “Me, But Better:
    0:42:00 The Science and Promise of Personality Change.”
    0:42:03 Olga Khazan, this was fun.
    0:42:03 Thank you.
    0:42:05 – Yeah, thanks so much for having me.
    0:42:06 This was great.
    0:42:09 (upbeat music)
    0:42:14 – All right, I hope you enjoyed this episode.
    0:42:16 I know I did.
    0:42:18 Personality change is something I thought about a lot
    0:42:21 over the years in part because I’m constantly trying
    0:42:24 to fix things about myself.
    0:42:27 This book and this conversation gave me
    0:42:29 some useful perspective on that,
    0:42:31 both that it’s completely cool
    0:42:33 to want to improve things about yourself,
    0:42:36 but also it’s important to make peace with who you are
    0:42:39 and not make yourself miserable fighting that.
    0:42:43 But as always, I want to know what you think.
    0:42:46 So drop us a line at thegrayarea@vox.com
    0:42:49 or leave us a message on our new voicemail line
    0:42:53 at 1-800-214-5749.
    0:42:54 And once you’re finished with that,
    0:42:56 please go ahead, rate and review
    0:42:58 and subscribe to the podcast.
    0:43:01 This episode was produced by Beth Morrissey,
    0:43:05 edited by Jorge Just, engineered by Christian Ayala,
    0:43:08 fact checked by Melissa Hirsch,
    0:43:10 and Alex Overington wrote our theme music.
    0:43:12 New episodes of The Gray Area drop on Mondays,
    0:43:14 listen and subscribe.
    0:43:16 The show is part of Vox,
    0:43:18 support Vox’s journalism by joining
    0:43:19 our membership program today.
    0:43:23 Go to vox.com/members to sign up.
    0:43:25 And if you decide to sign up because of this show,
    0:43:26 let us know.
    0:43:38 – All right, Sean, you can do this promo
    0:43:41 talking about all the great Vox media podcasts
    0:43:43 that are gonna be on stage live
    0:43:45 at South by Southwest this March.
    0:43:48 You just need a big idea to get people’s attention,
    0:43:53 to help them keep them from hitting the skip button.
    0:43:53 I don’t know.
    0:43:56 I’m gonna throw it out to the group chat, Kara.
    0:43:57 Do you have any ideas?
    0:44:00 – In these challenging times, we’re a group of mighty hosts
    0:44:02 who have banded together to fight disinformation
    0:44:04 by speaking truth to power,
    0:44:06 like the Avengers, but with more spandex.
    0:44:07 What do you think, Scott?
    0:44:10 – I’m more of an X-Men fan myself.
    0:44:12 Call me professor.
    0:44:13 Can I read minds?
    0:44:14 I can’t really read minds,
    0:44:17 but I can empathize with anyone having a mid-life crisis,
    0:44:20 which is essentially any tech leader, so.
    0:44:24 – Minds are important, Scott, but we’re more than that.
    0:44:29 I think that you can’t really separate minds from feelings.
    0:44:31 And we need to talk about our emotions
    0:44:33 and explore the layers of our relationships
    0:44:37 with our partners, coworkers, our families, neighbors,
    0:44:39 and our adjacent communities.
    0:44:41 I just wanna add a touch more.
    0:44:43 From sports and culture to tech and politics,
    0:44:46 Vox Media has an All-Star lineup of podcasts
    0:44:49 that’s great in your feeds, but even better live.
    0:44:51 – That’s it, All-Stars.
    0:44:55 Get your game on, go play, come see a bunch of Vox Media
    0:44:59 All-Stars, and also me at South by Southwest
    0:45:01 on the Vox Media podcast stage,
    0:45:04 presented by Smartsheet and Intuit.
    0:45:06 March 8th through 10th in Austin, Texas.
    0:45:11 Go to voxmedia.com/sxsw.
    0:45:13 You’ll never know if you don’t go.
    0:45:15 You’ll never shine if you don’t glow.

    If you could change anything about your personality, anything at all, what would it be?

    And why would you want to change it? Writer Olga Khazan spent a year trying to answer those questions, and documented the experience in her new book Me, But Better: The Science and Promise of Personality Change.

    In this episode Sean speaks with Olga about the science of personality change, the work it takes to change yourself, and what makes up a personality, anyway.

    Host: Sean Illing (@SeanIlling)

    Guest: Olga Khazan, author of Me, But Better: The Science and Promise of Personality Change.

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • Is ignorance truly bliss?

    AI transcript
    0:00:01 [MUSIC PLAYING]
    0:00:05 Support for the gray area comes from Attio.
    0:00:10 Attio is an AI native CRM built for the next era of companies.
    0:00:11 They say its powerful data structure
    0:00:15 adapts to your business model, syncs in all your contacts
    0:00:19 in minutes, and enriches everything with actionable data.
    0:00:22 Attio says its AI research agents tackle complex work
    0:00:26 like finding key decision makers and triaging incoming leads.
    0:00:29 You can go to attio.com/grayarea, and you’ll
    0:00:31 get 15% off your first year.
    0:00:35 That’s attio.com/grayarea.
    0:00:39 [MUSIC PLAYING]
    0:00:41 OK, business leaders.
    0:00:44 Are you here to play, or are you playing to win?
    0:00:46 If you’re in it to win, meet your next MVP.
    0:00:48 NetSuite by Oracle.
    0:00:50 NetSuite is your full business management system
    0:00:51 in one convenient suite.
    0:00:53 With NetSuite, you’re running your accounting, your finance,
    0:00:55 your HR, your e-commerce, and more,
    0:00:57 all from your online dashboard.
    0:01:00 Upgrade your playbook, and make the switch to NetSuite,
    0:01:02 the number one cloud ERP.
    0:01:05 Get the CFO’s guide to AI and machine learning
    0:01:07 at netsuite.com/vox.
    0:01:09 NetSuite.com/vox.
    0:01:19 Who hasn’t heard the phrase “ignorance is bliss” 1,000 times?
    0:01:23 Like all cliches, it sticks because it’s rooted in truth.
    0:01:28 But it’s worth asking why ignorance can be so satisfying.
    0:01:30 If you read the history of philosophy,
    0:01:32 you don’t find all that much interest
    0:01:34 in the delights of ignorance.
    0:01:38 Instead, you hear a lot about the pursuit of truth,
    0:01:42 which is assumed to be a universal human impulse.
    0:01:47 As Aristotle famously claimed, all human beings want to know.
    0:01:49 And that’s not entirely wrong, of course.
    0:01:51 Most of us do want to know.
    0:01:54 But denial and avoidance are also human impulses.
    0:01:58 Sometimes, they’re even more powerful than our need to know.
    0:02:02 These drives, a need to know, and a strong desire never
    0:02:05 to find out, are often warring within us,
    0:02:07 shaping our worldview, our relationships,
    0:02:11 and our self-image, which raises the question,
    0:02:14 when is ignorance really bliss?
    0:02:15 When isn’t it?
    0:02:16 And how can we tell the difference?
    0:02:22 I’m Sean Illing, and this is The Gray Area.
    0:02:27 Today’s guest is Mark Lilla.
    0:02:30 Mark is a professor of the humanities
    0:02:33 at Columbia University and the author of a new book called
    0:02:36 Ignorance and Bliss on Wanting Not to Know.
    0:02:39 The questions I just asked are the questions Mark
    0:02:41 grapples with in this book.
    0:02:44 It’s short, elegantly written, and maybe the highest
    0:02:47 compliment I can give is that it reads like a book that
    0:02:51 could have been written at almost any point in modern history.
    0:02:54 What I mean by that is that it’s not reactive to the moment.
    0:02:57 It engages one of the oldest questions in philosophy,
    0:03:00 to know or not to know, and manages
    0:03:04 to offer fresh insights that feel relevant and timeless
    0:03:06 at the same time.
    0:03:08 So I invited Mark on the show to explore why we accept
    0:03:13 and resist the truth and what it means to live in that tension.
    0:03:16 [MUSIC PLAYING]
    0:03:29 Mark Lilla, welcome to the show.
    0:03:32 Good to see you again, Sean.
    0:03:33 Likewise.
    0:03:37 And it’s great to talk about this lovely book of yours.
    0:03:40 Obviously, the book is about our will to know
    0:03:42 and our will to not know.
    0:03:45 And of course, the book opens with a kind of parody
    0:03:49 of Plato’s famous allegory of the cave.
    0:03:53 And I think people know the basic story of Plato’s cave.
    0:03:58 You have these prisoners who spend their whole life bound
    0:04:03 by chains in a cave, looking at shadows being cast on a wall.
    0:04:06 And they mistake those shadows for reality,
    0:04:11 because it’s the only reality they’ve ever known.
    0:04:14 Why don’t you take it from there and just say a bit
    0:04:17 about how you play a little bit with that story?
    0:04:20 Yeah, well, in Plato’s copyrighted edition
    0:04:25 of the story, a stranger comes in and turns
    0:04:28 one of the prisoners around so that he realizes
    0:04:32 that he’s been living in a world of shadows
    0:04:36 and is invited to climb up to the sun,
    0:04:39 and then lives up there until he’s
    0:04:41 told to come back down and get other people.
    0:04:45 In my version of the story, he’s got a little friend
    0:04:49 with him, a young boy, who also goes up.
    0:04:53 And when it comes time to go back down,
    0:04:59 the man tells him he can stay up in staring at the forms
    0:05:02 and being in the pure sunlight and seeing what is.
    0:05:05 And it turns out he’s desperate to return.
    0:05:07 It’s a cold life.
    0:05:10 All of his fantasy and imagination have dried up.
    0:05:13 He misses his virtual friends.
    0:05:16 And eventually he’s taken back down.
    0:05:20 And so I start the book saying, it’s an open question
    0:05:25 whether coming out into sunlight is a good thing.
    0:05:29 So when Plato’s most famous student, Aristotle,
    0:05:34 writes that all human beings want to know,
    0:05:37 do you think that statement is just very importantly
    0:05:41 incomplete, that the impulse to not want to know
    0:05:44 is just as strong and maybe just as important?
    0:05:46 Yeah, I think it’s incomplete.
    0:05:50 And it’s not as if there’s a certain class of people
    0:05:54 who are resisting knowledge and we the enlightened do not,
    0:05:57 but rather that the struggle to know and not
    0:06:00 know is going on in all of us all the time.
    0:06:02 And that we ought to be aware of that
    0:06:08 and be able to try to sort out when that’s a healthy instinct
    0:06:09 and when it’s not.
    0:06:12 When is it rational to not know?
    0:06:14 Oh, there are all sorts of cases.
    0:06:17 A lot of them are trivial.
    0:06:18 We wrap presents.
    0:06:19 Why do we do that?
    0:06:23 Because we want to build the suspense.
    0:06:29 Some people don’t want to know the sex of their unborn child
    0:06:33 because they want to have a surprise.
    0:06:39 We don’t want people to recount the entire plot of a movie
    0:06:40 we want to see.
    0:06:43 We tell them spoiler alert.
    0:06:45 And then there are more serious situations
    0:06:49 where we have to think about how to raise children
    0:06:56 and when they are prepared to absorb new information
    0:07:00 and knowledge and to have certain experiences.
    0:07:02 We also have to think about whether there
    0:07:09 are healthy taboos in society, places where
    0:07:12 we try not to have people look, in order
    0:07:18 that we can somehow keep our society together.
    0:07:20 One of the problems you play with a little bit
    0:07:25 is that we need to be ignorant.
    0:07:28 We want to be ignorant of certain things.
    0:07:33 But we also really, really hate to admit our own ignorance.
    0:07:37 So we’re constantly playing this game of hide and seek
    0:07:38 with ourselves.
    0:07:43 That’s a bit of a strange, untenable dance, don’t you think?
    0:07:44 It is.
    0:07:45 It is.
    0:07:48 People don’t want to feel that they’re
    0:07:53 incurious in holding things at arm’s distance
    0:07:56 and not thinking about them.
    0:07:58 And I’m not sure where that comes from.
    0:08:00 But certainly it’s the case.
    0:08:01 It certainly is the case.
    0:08:06 And part of it, I think, is, to use a metaphor,
    0:08:09 that our opinions are not things that we just
    0:08:14 have in a bag that we pull out when they need expression.
    0:08:18 But rather, they feel like prostheses, like an extra limb.
    0:08:24 And if someone refutes our argument or mocks it,
    0:08:29 it feels like something quite intimate has been touched.
    0:08:35 And so that is an incentive to not admit your ignorance
    0:08:38 and to build up all sorts of defenses
    0:08:43 and appeal to bogus authorities in order
    0:08:48 to remain convinced of your own rational capacities
    0:08:51 and your independence.
    0:08:54 And so it becomes a kind of perverse thing
    0:08:59 where you’re constantly trying to patch things together
    0:09:01 to show to yourself and others you understand.
    0:09:04 And in the meantime, you can start
    0:09:06 pulling in some preposterous things that
    0:09:10 become part of your worldview.
    0:09:16 Is there a good model of a wisely ignorant person,
    0:09:18 a sort of counter-Socrates, someone
    0:09:21 who climbs the mountain of knowledge
    0:09:25 and says once they reach the peak, you know what?
    0:09:28 I like it better down there in the cave or in the matrix
    0:09:31 or whatever metaphor one prefers.
    0:09:34 Is that ever a justifiable position?
    0:09:36 I think you’re leaving out an option.
    0:09:40 And that option is something that Socrates explores
    0:09:43 in the other platonic dialogues, which
    0:09:47 is learning from your own ignorance.
    0:09:54 That is to recognize that you’re genuinely and generally
    0:09:55 ignorant about things.
    0:09:59 And to continue inquiring with the understanding
    0:10:01 that what you come up with is tentative.
    0:10:04 Especially right now, we live in a world where we’re more
    0:10:07 and more aware of the uncertainty of our knowledge
    0:10:10 because things change so quickly.
    0:10:15 It was very striking to me during COVID just how frustrated
    0:10:19 people seemed to be by the fact that the public health
    0:10:22 authorities kept changing their advice.
    0:10:25 First, they said it was all about washing your hands.
    0:10:29 And then they said it was all about masks and so on.
    0:10:30 They got angry about that.
    0:10:33 But that’s the way science works.
    0:10:34 But people don’t like to live that way.
    0:10:37 They like to hear from an authority
    0:10:39 that this is what you do.
    0:10:42 They want a doctor who doesn’t hem and haw
    0:10:45 and doesn’t constantly change the meds and say,
    0:10:47 let’s try this, let’s try that.
    0:10:49 It’s very destabilizing.
    0:10:52 And so I think we have a yearning
    0:10:55 to live standing on solid ground.
    0:10:58 But we don’t stand on solid ground.
    0:11:00 Part of what made Socrates so annoying
    0:11:02 is that he went around pretending not to know anything,
    0:11:06 yet undercutting everyone else’s claims to knowledge.
    0:11:08 So there is that.
    0:11:12 But he also says that the unexamined life isn’t worth living.
    0:11:15 I think if anyone knows a line from Socrates, it’s that one.
    0:11:17 And that’s fine up to a point.
    0:11:19 But I would also say, and I don’t know
    0:11:24 if you would say this as well, that a life that’s nothing
    0:11:26 but examined is equally unworthy,
    0:11:31 that there’s more to life than knowing and understanding.
    0:11:32 Do you agree with that?
    0:11:34 Oh, I do.
    0:11:37 Yeah, if by that you mean that certain things in our lives
    0:11:42 we need to take for granted, that’s for sure.
    0:11:44 I mean, when you think about parental love,
    0:11:46 and not just with young children,
    0:11:49 I mean, if you were really convinced
    0:11:51 that every morning your parents wake up
    0:11:53 and have a working hypothesis
    0:11:56 of whether they love you or not,
    0:11:58 whether you’re lovable or not,
    0:12:02 that would be very destabilizing to feel.
    0:12:05 And it would keep us from establishing bonds
    0:12:09 that presume that the bonds will continue.
    0:12:11 You know, Socrates said the unexamined life
    0:12:12 is not worth living.
    0:12:15 He did not say that the thoroughly examined life
    0:12:17 is worth living or that it was livable.
    0:12:31 Support for the gray area comes from Blue Nile.
    0:12:34 So you’re about to pop the big question, huge moment.
    0:12:36 Hopefully you’ve thought it through
    0:12:39 and put the gorilla costume back in the drawer.
    0:12:40 Save that for the wedding.
    0:12:43 Besides, you’ve got enough to worry about right now,
    0:12:45 just picking the right engagement ring.
    0:12:50 Shape, size, style, setting, cut, color, clarity, character.
    0:12:52 It’s a lot to figure out.
    0:12:56 Maybe a stop at bluenile.com might help.
    0:12:57 At Blue Nile, they say you can create
    0:13:00 a bigger, more brilliant engagement ring
    0:13:01 than you can even imagine.
    0:13:04 At a price you won’t find at a traditional jeweler.
    0:13:06 And according to the company,
    0:13:07 they’re committed to ensuring
    0:13:08 that the highest ethical standards
    0:13:11 are observed when sourcing diamonds and jewelry.
    0:13:13 Plus, your surprise will stay safe
    0:13:15 because every Blue Nile order is insured
    0:13:18 and arrives in packaging that won’t give away what’s inside.
    0:13:21 In most cases, they’re even delivered overnight.
    0:13:25 Right now, you can get $50 off your purchase of $500 or more
    0:13:28 with code GRAYAREA at bluenile.com.
    0:13:32 That’s $50 off with code GRAYAREA at bluenile.com.
    0:13:33 bluenile.com.
    0:13:41 Support for the gray area comes from Shopify.
    0:13:43 If you think about the most successful businesses
    0:13:45 in your neighborhood,
    0:13:48 they probably all have a few things in common,
    0:13:52 like great products, clever marketing, a good brand.
    0:13:54 But one thing you might not consider
    0:13:57 is the business behind those businesses.
    0:14:00 Because to find customers, grow, and sell more,
    0:14:01 you need a partner you can rely on,
    0:14:04 a partner like Shopify.
    0:14:06 Shopify is an all-in-one digital commerce platform
    0:14:08 that wants to help your business
    0:14:10 sell better than ever before.
    0:14:12 It doesn’t matter if your customers spend their time
    0:14:14 scrolling through your feed
    0:14:17 or strolling past your actual physical storefront.
    0:14:19 Shopify says they can help you convert browsers
    0:14:22 into buyers and sell more over time.
    0:14:23 And they say their Shop Pay feature
    0:14:26 can boost conversion by 50%.
    0:14:27 There’s a reason companies like Allbirds
    0:14:30 turn to Shopify to sell more products to more customers.
    0:14:33 Businesses that sell, sell more with Shopify.
    0:14:35 Want to upgrade your business
    0:14:37 and get the same checkout Allbirds uses?
    0:14:40 You can sign up for your $1 per month trial period
    0:14:43 at Shopify.com/Vox.
    0:14:47 That’s Shopify.com/Vox to upgrade your selling today.
    0:14:50 Shopify.com/Vox.
    0:14:57 Support for the gray area comes from Upway.
    0:14:59 If you’re tired of feeling stuck in traffic every day,
    0:15:03 there might be a better way to adventure on an e-bike.
    0:15:05 Imagine cruising past traffic,
    0:15:08 tackling hills with ease and exploring new trails,
    0:15:11 all without breaking a sweat or your wallet.
    0:15:14 At upway.co, you can find e-bikes from top-tier brands
    0:15:17 like Specialized, Cannondale, and Aventon.
    0:15:19 At up to 60% off retail.
    0:15:22 Perfect for your next weekend adventure.
    0:15:24 Whether you’re looking for a rugged mountain bike
    0:15:27 or a sleek city cruiser, there’s a ride for everyone.
    0:15:31 And right now, you can use code GRAYAREA150
    0:15:36 to get $150 off your first e-bike purchase of $1,000 or more.
    0:15:46 (gentle music)
    0:15:58 – I think everyone knows the dictum “knowledge is power.”
    0:16:00 And I think that’s sensible enough, right?
    0:16:01 You can do a lot of things in the world
    0:16:03 if you know and understand.
    0:16:11 Do you think that ignorance also has a kind of power?
    0:16:13 That maybe we overlook?
    0:16:17 – Yeah, I began the book with a quotation
    0:16:21 from George Eliot’s novel, Daniel Deronda,
    0:16:25 saying that we thought a lot about the power of knowledge,
    0:16:28 but we haven’t thought about the power of ignorance.
    0:16:31 And what she means there in the novel
    0:16:35 is the power of people who are ignorant
    0:16:39 to mess things up in life.
    0:16:43 That it’s a kind of social force out there.
    0:16:46 Which I think is certainly the case.
    0:16:49 But ignorance is also power
    0:16:56 if not knowing certain things
    0:16:58 or leaving certain things unexamined,
    0:17:04 permits you to, in fact, continue in your life
    0:17:09 and not be paralyzed.
    0:17:13 I use an example at the beginning of the book:
    0:17:16 what would happen if we each had an LED screen
    0:17:19 embedded in our foreheads
    0:17:22 and we could read the thoughts of everyone around us.
    0:17:25 I mean, social life would grind to a halt
    0:17:30 because you can’t control your thoughts, right?
    0:17:33 You control what you say.
    0:17:36 And we would constantly be looking
    0:17:38 to see how people are thinking about us
    0:17:42 and we could never develop a stable sense of ourselves.
    0:17:45 And so we need not to know what other people think about us
    0:17:49 even if we’re going to live a philosophical life.
    0:17:53 – There are lots of people who are willfully ignorant
    0:17:56 and there are lots of people who are ignorant
    0:17:58 of their ignorance.
    0:18:03 But then there’s this other species of cynicism
    0:18:06 you talk about in the book
    0:18:10 that knowingly exploits ignorance.
    0:18:15 And historically that has been a potent source
    0:18:17 of political power.
    0:18:22 I mean, is this just an eternal challenge for society?
    0:18:27 – Yes, and one reason
    0:18:32 is that people need certainty
    0:18:35 and they will demand it.
    0:18:40 And so political leaders, demagogues in particular
    0:18:43 can provide simple answers to things
    0:18:48 that seem very complicated
    0:18:53 and that stir people in a way that can be directed.
    0:18:58 That’s classically how a demagogue works
    0:19:02 and how a demagogue becomes a tyrant.
    0:19:06 And so especially now,
    0:19:10 I’m not surprised that we’re facing
    0:19:15 the kind of aggressive ignorance
    0:19:20 among populists and those who are moved by populists
    0:19:26 because making sense of things right now
    0:19:28 is just very difficult
    0:19:32 because we just don’t know various things
    0:19:34 because our experience is so new.
    0:19:36 For example, what do you do about the fact
    0:19:41 that the state of any nation’s economy
    0:19:46 depends on an international economy
    0:19:51 and that no country has a full say
    0:19:54 in how that international economy operates
    0:19:58 and it will continue to affect everyone in every country.
    0:20:00 So it’s hard to accept the fact
    0:20:04 that our political leaders do not control the economy.
    0:20:08 And so you go to whoever says he’s the answer man
    0:20:10 or she says she’s the answer woman.
    0:20:16 And so it’s very hard to confront the present
    0:20:21 with an open mind and a sense of the tentativeness
    0:20:22 of how you understand it.
    0:20:25 – You know, there’s a deep philosophical question
    0:20:26 lurking in all this.
    0:20:28 And the question is,
    0:20:32 what is the actual point of knowledge?
    0:20:35 Do we want knowledge for the sake of knowledge
    0:20:39 because it’s inherently good
    0:20:43 or is knowledge only valuable if it’s useful?
    0:20:45 And if knowing something isn’t useful
    0:20:46 or if it’s even worse than that,
    0:20:48 if knowing something is actually painful,
    0:20:51 why would we want to know it?
    0:20:56 – The question that you’re asking for me,
    0:20:59 at least in the book,
    0:21:02 is really a question of different kinds of human characters.
    0:21:07 There are some people in whom something simply quickens
    0:21:12 when the opportunity of new knowledge presents itself.
    0:21:18 And so why the soul responds like that is a mystery.
    0:21:25 And Socrates tells various myths about why that might be,
    0:21:27 but it just seems to be a fact.
    0:21:28 And not everyone has it.
    0:21:31 – Do you think there’s anything worth knowing
    0:21:32 regardless of the cost?
    0:21:39 – Self-knowledge can be harmful if it’s partial
    0:21:46 or if just the way you are is such that
    0:21:50 one of your failings or limitations
    0:21:55 is that you’re paralyzed if something unpleasant in you
    0:21:58 is revealed.
    0:22:01 That’s the story of Augustine in the Confessions
    0:22:02 at the moment where he says,
    0:22:05 “God ripped off the back of me,”
    0:22:08 which was this other face, everything
    0:22:09 that everyone else could see and I couldn’t,
    0:22:12 and holds it in front of me, and I see myself.
    0:22:14 And in that moment, I’m so horrified
    0:22:18 that something clicks and I give myself over, right?
    0:22:20 And so there could be limits to that.
    0:22:25 But Socrates assumes that all self-knowledge
    0:22:27 is in the end going to be helpful
    0:22:32 because you are now clear to yourself
    0:22:37 and that knowing itself makes people good.
    0:22:42 That once you know, the power of your ignorance
    0:22:46 is no longer holding you; it just goes poof.
    0:22:50 And now you are in the driver’s seat.
    0:22:51 And-
    0:22:52 Do you think that’s true?
    0:22:53 I’m not sure.
    0:22:54 I don’t think that’s true.
    0:22:55 I don’t think that’s true.
    0:23:00 And so it’s hard to believe that Socrates
    0:23:01 himself thought that.
    0:23:06 And the reason is that in the way he deals with other people
    0:23:08 in the Platonic Dialogues,
    0:23:11 you see that he has a lot of knowledge
    0:23:13 about how people fall short of that.
    0:23:15 Yeah, I could definitely make a case
    0:23:16 or I could see a case being made
    0:23:18 for always wanting to know.
    0:23:22 You know, abstract truths
    0:23:25 and truths about the external world.
    0:23:27 But when it comes to self-knowledge,
    0:23:28 sometimes when you peer inward,
    0:23:31 what you find is that you’re just a bundle
    0:23:32 of contradictions that can’t be squared.
    0:23:35 And I’m not sure it’s necessarily good
    0:23:38 to be intimately acquainted with that
    0:23:40 and to get hung up on that.
    0:23:42 There is one way in which it is,
    0:23:44 and that’s the Montaigne option.
    0:23:46 You know, the picture Montaigne gives of us
    0:23:49 in the essays is that we’re exactly what you just said.
    0:23:53 And his advice is live with it.
    0:23:54 Just go with it.
    0:23:56 You’re a contradiction.
    0:23:58 I think that’s easier said than done.
    0:24:01 But perhaps still wise, but easier said than done.
    0:24:06 But do you think there is a link,
    0:24:09 maybe even a necessary link
    0:24:11 between self-knowledge and knowledge
    0:24:13 of the external world?
    0:24:15 In other words, on some level,
    0:24:17 do we have to know ourselves
    0:24:21 in order to know the truth about the world outside ourselves?
    0:24:24 – I can think of a couple of answers to that.
    0:24:26 I’m not sure which one would be mine.
    0:24:30 One is that these things are detachable.
    0:24:33 You know, if you meet scientists or,
    0:24:37 you know, I remember spending a year
    0:24:39 at the Institute for Advanced Study,
    0:24:42 and I would sometimes go and sit in this place
    0:24:46 where the scientists and mathematicians were.
    0:24:50 And you could tell these people just had no self-awareness
    0:24:53 in terms of how people reacted to them.
    0:24:55 Perhaps they were just wrapped up in their problems
    0:24:58 and they were discovering things.
    0:25:03 On the other hand, one barrier to us
    0:25:08 in knowing things about the world
    0:25:12 is to know what constitutes knowing.
    0:25:14 And that requires an analysis of ourselves.
    0:25:17 So self-knowledge in the sense of knowledge
    0:25:19 of the human animal.
    0:25:21 And then the third sense,
    0:25:24 while not strictly necessary,
    0:25:27 the exercise of trying to know oneself
    0:25:30 is a kind of training exercise
    0:25:34 for inquiring about the world outside.
    0:25:37 (gentle music)
    0:25:48 Support for the gray area comes from Mint Mobile.
    0:25:51 Maybe you’re someone who likes to save up
    0:25:53 for a tropical vacation.
    0:25:55 Or maybe you’re finally ready to buy
    0:25:57 the complete works of Friedrich Nietzsche
    0:25:59 at the local university bookstore.
    0:26:01 However you spend your money,
    0:26:03 I’m willing to bet you’d rather use your hard-earned cash
    0:26:06 on stuff you actually want.
    0:26:10 And not, say, on ridiculously inflated cell phone bills.
    0:26:11 Mint Mobile can help with that.
    0:26:13 Mint Mobile says they offer phone plans
    0:26:15 for less than their major competitors,
    0:26:18 offering any three-month plan for just $15 a month.
    0:26:21 Their plans don’t include the fine print,
    0:26:23 hidden charges, or large monthly bills.
    0:26:26 Instead, customers get unlimited talk and text,
    0:26:28 high-speed data, and more,
    0:26:31 delivered on the nation’s largest 5G network.
    0:26:34 If you like your money, Mint Mobile may be for you.
    0:26:37 You can shop plans at mintmobile.com/grayarea.
    0:26:40 That’s mintmobile.com/grayarea.
    0:26:43 Upfront payment of $45 for a three-month,
    0:26:46 five-gigabyte plan required, equivalent to $15 a month.
    0:26:48 New customer offer for first three months only.
    0:26:51 Then full-price plan options available.
    0:26:54 Taxes and fees extra, see Mint Mobile for details.
    0:27:02 Thumbtack presents the ins and outs of caring for your home.
    0:27:06 Out, uncertainty, self-doubt,
    0:27:09 stressing about not knowing where to start.
    0:27:13 In, plans and guides that make it easy
    0:27:14 to get home projects done.
    0:27:19 Out, word art, sorry, live laugh lovers.
    0:27:24 In, knowing what to do, when to do it, and who to hire.
    0:27:28 Start caring for your home with confidence.
    0:27:29 Download thumbtack today.
    0:27:34 – Attention, Save On Foods shoppers.
    0:27:35 Kids are the best, aren’t they?
    0:27:37 – That’s why at Save On Foods,
    0:27:39 you’ll find so many things that make kids smile,
    0:27:42 like ice cream, cereal with funny mascots,
    0:27:44 toothbrushes in the shape of dinosaurs,
    0:27:46 and did you know we’ve helped raise
    0:27:48 almost $50 million for children’s hospitals?
    0:27:52 Yeah, for new equipment, infrastructure and programs.
    0:27:54 – We’re extra passionate about helping our communities
    0:27:56 because a little extra can mean a lot.
    0:27:59 Learn more at saveonfoods.com.
    0:28:01 – Save On Foods, giving you extra.
    0:28:08 (gentle music)
    0:28:21 – I do want to talk a bit about nostalgia
    0:28:23 before we get out of here,
    0:28:27 which you’ve written about and we’ve spoken about before.
    0:28:31 And I think a conversation about knowledge and ignorance
    0:28:34 is also a conversation about nostalgia.
    0:28:37 The truth is we can’t unsee what we’ve seen,
    0:28:40 though we can, I guess, repress and delude ourselves.
    0:28:41 I think my question to you is,
    0:28:45 at what point in our journey of knowledge,
    0:28:48 as individuals and societies,
    0:28:50 are we overtaken by nostalgia?
    0:28:53 At what point are we just longing to go back
    0:28:57 to a previous time when we didn’t know what we now know?
    0:29:01 – When it comes to whole societies being nostalgic,
    0:29:07 I think that it has to do with two things.
    0:29:12 One is illegibility, if I can put it that way.
    0:29:15 And that is when the world becomes illegible,
    0:29:17 the present becomes illegible.
    0:29:20 That means you don’t know how to act.
    0:29:24 And if you don’t know how to act,
    0:29:28 it’s deeply disturbing because you want to be able,
    0:29:31 that’s the second point, to control your environment
    0:29:35 and control things so you can reach your own ends.
    0:29:39 And so a dissatisfaction with the present
    0:29:43 and an absence of knowledge about how to improve things
    0:29:48 are spurs to imagine that just as
    0:29:52 being eight years old seemed less complicated
    0:29:56 and easier than being 68 years old,
    0:30:01 there was a time when life was ordered in a better way
    0:30:07 in which we knew less about various things
    0:30:10 or certain changes hadn’t happened.
    0:30:15 And maybe we can reverse the machine
    0:30:17 or reverse the train.
    0:30:23 That desire to go back exists even on the more individual,
    0:30:26 psychological level; there’s always a connection
    0:30:30 between the individual and the social manifestations here.
    0:30:34 But it’s part of the reason why we romanticize childhood
    0:30:38 so much: it’s that innocence, it’s the simplicity,
    0:30:41 it’s the freedom from anxieties,
    0:30:44 it’s the freedom to be ignorant and happy
    0:30:46 without judgment or guilt.
    0:30:49 I mean, I think maybe the most beautiful thing
    0:30:54 about my five-year-old son is precisely this kind of freedom.
    0:31:01 He’s not a self competing for status among other selves.
    0:31:05 He is ignorant of all the posturing and insecurities
    0:31:09 that come with being a fully self-conscious person
    0:31:11 in a social world.
    0:31:17 I realize we cannot remain in that state of ignorance forever.
    0:31:21 But surely there’s a lot to learn from it.
    0:31:23 – What do you think is to be learned from it?
    0:31:27 I’d be interested to hear you elaborate on that.
    0:31:31 – I think there’s something to be learned about happiness
    0:31:36 that there are real things in the world
    0:31:38 about which to be anxious and insecure.
    0:31:40 And there are many, many more things
    0:31:42 that we conjure up in our minds
    0:31:47 because of our own neuroses and pathologies and anxieties.
    0:31:51 And to the extent we can be free of that,
    0:31:54 and to the extent we can be like children,
    0:31:58 which is to say, just be present in the world
    0:32:02 moment to moment without any real concerns
    0:32:05 about the past or the future.
    0:32:08 I think we’re better for that.
    0:32:12 And of course, you have to be responsible
    0:32:14 and you have to take accountability, right?
    0:32:16 You can’t be the child forever.
    0:32:20 But surely there’s some tension between those poles
    0:32:22 that we can live in.
    0:32:23 – See, I don’t think you can.
    0:32:25 And the reason you can’t–
    0:32:26 – Not even a little bit, do you know what I’m thinking?
    0:32:29 – No, but what you can preserve is something else.
    0:32:32 But the very fact that you were able to describe it
    0:32:36 means that you’re past it, right?
    0:32:37 That you’re a–
    0:32:38 – Well, shit.
    0:32:41 – Yeah, I mean, you’re aware of being naive.
    0:32:45 The child doesn’t know that he’s naive, right?
    0:32:47 But what I do take from what you say
    0:32:52 is that it can get you refocused on,
    0:32:59 yeah, on being more in the moment,
    0:33:05 perhaps not trying to control your life so much
    0:33:08 and to let things happen and to take opportunities
    0:33:13 for play, just play and how healthy it is.
    0:33:16 – Play in a sense of awe and discovery,
    0:33:19 which we tend to lose as we move through the world,
    0:33:22 you know, year after year after year,
    0:33:24 that sense of freshness and awe dissipates.
    0:33:30 And to the extent we can still grab ahold of that instinct,
    0:33:31 I think it’s useful.
    0:33:34 – Yeah, well, the first thing with play,
    0:33:36 certainly it’s something that’s totally absent
    0:33:40 from Plato and Aristotle and the philosophers
    0:33:43 until you come to Montaigne,
    0:33:46 where he reflects at various points about play
    0:33:48 and he famously says, so what is it?
    0:33:51 Am I playing with my cat or is my cat playing with me?
    0:33:55 But for me, there’s a difference between adult wonder
    0:33:59 and a childlike wonder.
    0:34:05 When I try to think myself back
    0:34:07 into being a child, or just observe children,
    0:34:12 when novelty affects them, it’s: wow, here’s something new.
    0:34:22 And so a lot of it has to do with that,
    0:34:24 but they don’t walk away with warm feelings
    0:34:27 about our wonderful world.
    0:34:30 They just, new things happen and they get excited about them
    0:34:32 and you see their smiles
    0:34:34 and they’re running around the room.
    0:34:39 But with us, the wonder, at least for me,
    0:34:42 is tinged with knowledge
    0:34:46 that everything in the world is not wonderful.
    0:34:51 And so when you have these epiphanic moments of wonder,
    0:34:57 it’s the contrast between that and our daily lives
    0:35:01 that seems like manna from heaven when it happens.
    0:35:06 – I do wonder as we kind of careen towards the end here,
    0:35:09 what the upshot of all this thinking and writing
    0:35:11 was for you personally?
    0:35:15 We’ve already sort of gone in a personal direction.
    0:35:17 But I mean, have you changed your relationship
    0:35:22 to your own ignorance as a result of this project?
    0:35:25 – I would hope so, I would hope so.
    0:35:30 I think I have a better understanding of what philosophy is
    0:35:32 and what philosophy can do.
    0:35:33 – What is the answer to that?
    0:35:36 What is it that philosophy can and can’t do?
    0:35:41 – That philosophy that is aware of our ignorance
    0:35:43 is a step forward.
    0:35:48 The greatest cognitive achievement of human beings
    0:35:50 is getting to maybe.
    0:35:53 – I like that.
    0:35:54 I’m gonna leave it right there.
    0:35:58 Once again, the book is called “Ignorance and Bliss.”
    0:36:00 Mark Lilla, always a pleasure, my friend.
    0:36:01 Thanks for coming in.
    0:36:02 – Thanks so much, Sean.
    0:36:10 – All right, friends, I hope you enjoyed this episode.
    0:36:12 You already know I did.
    0:36:16 But as always, you know we wanna know what you think.
    0:36:20 So drop us a line at thegrayarea@vox.com.
    0:36:22 And if you have just a little bit of extra time,
    0:36:26 please rate and review and subscribe to the podcast.
    0:36:35 This episode was produced by Beth Morrissey,
    0:36:38 edited by Jorge Just,
    0:36:40 engineered by Christian Ayala,
    0:36:42 fact-checked by Melissa Hirsch,
    0:36:44 and Alex Overington wrote our theme music.
    0:36:48 New episodes of the gray area drop on Mondays,
    0:36:51 listen and subscribe.
    0:36:52 This show is part of Vox.
    0:36:54 Support Vox’s journalism
    0:36:56 by joining our membership program today.
    0:36:59 Go to Vox.com/members to sign up.
    0:37:02 And if you decide to sign up because of this show,
    0:37:03 let us know.
    0:37:05 (upbeat music)

    Are you ever happier not knowing something?

    As Aristotle famously claimed, “All human beings want to know.” But denial and avoidance are also human impulses. Sometimes they’re even more powerful than our curiosity.

    In this episode Sean speaks with professor Mark Lilla about when we’re better off searching for knowledge and when we’re better off living in the dark. Lilla’s new book is called Ignorance and Bliss: On Wanting Not to Know.

    Host: Sean Illing (@SeanIlling)

    Guest: Mark Lilla, professor of humanities at Columbia University and author of Ignorance and Bliss: On Wanting Not to Know.

    Learn more about your ad choices. Visit podcastchoices.com/adchoices