AI transcript
0:00:04 Support for this show is brought to you by Nissan Kicks.
0:00:06 It’s never too late to try new things
0:00:09 and it’s never too late to reinvent yourself.
0:00:12 The all-new re-imagined Nissan Kicks
0:00:14 is the city-sized crossover vehicle
0:00:17 that’s been completely revamped for urban adventure.
0:00:20 From the design and styling to the performance,
0:00:21 all the way to features
0:00:23 like the Bose Personal Plus sound system,
0:00:26 you can get closer to everything you love about city life
0:00:29 in the all-new re-imagined Nissan Kicks.
0:00:34 Learn more at www.nissanusa.com/2025-Kicks.
0:00:36 Available feature.
0:00:39 Bose is a registered trademark of the Bose Corporation.
0:00:46 – Your own weight loss journey is personal.
0:00:47 Everyone’s diet is different,
0:00:49 everyone’s body is different,
0:00:50 and according to Noom,
0:00:53 there is no one-size-fits-all approach.
0:00:55 Noom wants to help you stay focused
0:00:57 on what’s important to you,
0:01:00 with their psychology and biology-based approach.
0:01:02 This program helps you understand the science
0:01:04 behind your eating choices
0:01:07 and helps you build new habits for a healthier lifestyle.
0:01:09 Stay focused on what’s important to you
0:01:13 with Noom’s psychology and biology-based approach.
0:01:16 Sign up for your free trial today at Noom.com.
0:01:24 – Can you ever really know what’s going on
0:01:26 inside the mind of another creature?
0:01:31 – In some cases, like other humans or dogs and cats,
0:01:35 we might be able to guess with a bit of confidence,
0:01:39 but what about octopuses or insects?
0:01:41 What about AI systems?
0:01:44 Will they ever be able to feel anything?
0:01:48 Despite all of our progress in science and technology,
0:01:50 we still have basically no idea
0:01:52 how to look inside the private experiences
0:01:54 of other creatures.
0:01:57 The question of what kinds of beings can feel things
0:02:00 and what those feelings are really like
0:02:01 remains one of the biggest mysteries
0:02:04 in both philosophy and science.
0:02:07 And maybe, at some point,
0:02:10 we’ll develop a big new theory of consciousness
0:02:14 that helps us really understand the inside of other minds.
0:02:18 But until then, we’re stuck making guesses
0:02:22 and judgment calls about what other creatures can feel
0:02:25 and about whether certain things can feel at all.
0:02:30 So, where do we draw the line
0:02:34 of what kinds of creatures might be sentient?
0:02:37 And how do we figure out our ethical obligations
0:02:39 to creatures that remain a mystery to us?
0:02:43 I’m Oshan Jarow, sitting in for Sean Illing,
0:02:45 and this is the Gray Area.
0:02:54 My guest today is philosopher of science, Jonathan Birch.
0:02:56 He’s the principal investigator
0:02:58 on the Foundations of Animal Sentience Project
0:03:00 at the London School of Economics,
0:03:03 and author of the recently released book,
0:03:07 The Edge of Sentience: Risk and Precaution in Humans,
0:03:08 Other Animals and AI.
0:03:13 He also successfully convinced the UK government
0:03:17 to consider lobsters, octopuses, and crabs sentient
0:03:19 and therefore, deserving of legal protections,
0:03:22 which is a story that we’ll get into.
0:03:24 And it’s that work that earned him a place
0:03:26 on Vox’s Future Perfect 50 list,
0:03:30 a roundup of 50 of the most influential people
0:03:33 working to make the future a better place for everyone.
0:03:37 And in Birch’s case, for every sentient creature.
0:03:42 In this conversation, we explore everything that we do
0:03:45 and don’t know about sentience
0:03:47 and how to make decisions around it,
0:03:50 given all the uncertainty that we can’t yet escape.
0:03:54 Jonathan Birch, welcome to the Gray Area.
0:03:56 Thanks so much for coming on.
0:03:57 – Thanks for inviting me.
0:04:00 – So, one of the central ideas of your work
0:04:04 is this fuzzy idea of sentience.
0:04:06 And you focus on sentience across creatures,
0:04:08 from insects to animals,
0:04:11 to even potentially artificial intelligence.
0:04:14 And one of the challenges in that work
0:04:17 is defining sentience in the first place.
0:04:19 So, can you talk a little bit about how you’ve come
0:04:21 to define the term sentience?
0:04:25 – For me, it starts with thinking about pain
0:04:27 and thinking about questions like,
0:04:29 can an octopus feel pain?
0:04:31 Can a crab, can a shrimp?
0:04:35 And then realizing that actually pain is too narrow
0:04:40 for what really matters to us and that matters ethically.
0:04:43 Because other negative experiences matter as well,
0:04:47 like anxiety and boredom and frustration
0:04:49 that are not really forms of pain.
0:04:53 And then the positive side of mental life also matters.
0:04:57 Pleasure matters, joy, excitement.
0:04:59 And the advantage of the term sentience for me
0:05:02 is that it captures all of that.
0:05:04 It’s about the capacity to have
0:05:07 positive or negative feelings.
0:05:11 – The way that you define sentience
0:05:13 struck me as kind of basically the way
0:05:15 that I’ve thought about consciousness.
0:05:17 But in your book, you have this handy diagram
0:05:20 that shows how you see sentience and consciousness
0:05:22 as to some degree different.
0:05:24 So how do you understand the difference
0:05:27 between sentience and consciousness?
0:05:29 – The problem with the term consciousness, as I see it,
0:05:32 is that it can point to any number of other things.
0:05:34 Sometimes we are definitely using it
0:05:37 to refer to our immediate raw experience
0:05:39 of the present moment.
0:05:41 But sometimes when we’re talking about consciousness,
0:05:45 we’re thinking of things that are overlaid on top of that.
0:05:47 Herbert Feigl in the 1950s
0:05:49 talked about there being these three layers,
0:05:53 sentience, sapience and selfhood.
0:05:55 Where sapience is about the ability
0:05:59 to not just have those immediate raw experiences,
0:06:01 but to reflect on them.
0:06:03 And selfhood is something different again,
0:06:06 ’cause it’s about awareness of yourself
0:06:09 as this persistent subject of the experiences
0:06:13 that has a past and has a future.
0:06:15 And when we use the term consciousness,
0:06:18 we might be pointing to any of these three things
0:06:22 or maybe the package of those three things altogether.
0:06:25 – So sentience is maybe a bit of a simpler,
0:06:27 more primitive capacity for feeling
0:06:30 where consciousness may include these more complex layers?
0:06:31 – I think of it as the base layer.
0:06:34 Yeah, I think of it as the most elemental,
0:06:37 most basic, most evolutionarily ancient
0:06:40 part of human consciousness
0:06:41 that is very likely to be shared
0:06:43 with a wide range of other animals.
0:06:45 – I do a fair bit of reporting
0:06:48 on these kinds of questions of consciousness and sentience.
0:06:51 And everyone tends to agree that it’s a mystery, right?
0:06:53 And so a lot of emphasis goes on
0:06:56 trying to dispel the mystery.
0:06:58 And what I found really interesting about your approach
0:07:00 is that you seem to take the uncertainty
0:07:02 in the mystery as your starting point.
0:07:04 And rather than focusing on how do we solve this?
0:07:06 How do we dispel it?
0:07:07 You’re trying to help us think through
0:07:11 how to make practical decisions given that uncertainty.
0:07:13 I’m curious how you came to that approach.
0:07:14 – Yeah, the question for me
0:07:16 is how do we live with this uncertainty?
0:07:20 How do we manage risk better than we’re doing at present?
0:07:25 How can we use ideas from across science and philosophy
0:07:28 to help us make better decisions
0:07:29 when faced with those problems?
0:07:32 And in particular to help us err on the side of caution.
0:07:34 – Just to maybe make it explicit,
0:07:37 you mentioned the risk of uncertainty.
0:07:39 What is the risk here?
0:07:41 – Well, it depends on the particular case
0:07:42 we’re thinking about.
0:07:44 One of the cases that brought me to this topic
0:07:47 was the practice of dropping crabs and lobsters
0:07:49 into pans of boiling water.
0:07:52 And it seems like a clear case to me
0:07:55 where you don’t need certainty actually.
0:07:56 You don’t even need knowledge.
0:08:00 You don’t need high probability to see the risk.
0:08:04 And in fact, to do sensible common sense things
0:08:05 to reduce that risk.
0:08:07 – So the risk is the suffering we’re imposing
0:08:10 on these potentially other sentient creatures.
0:08:13 – That’s usually what looms largest for me, yeah.
0:08:15 The risk of doing things
0:08:17 that mean we end up living very badly
0:08:20 because we cause enormous amounts of suffering
0:08:22 to the creatures around us.
0:08:26 And you can think of that as a risk to the creatures
0:08:28 that end up suffering, but it’s also a risk to us.
0:08:31 A risk that our lives will be horrible
0:08:32 and destructive and absurd.
0:08:35 – I worry about my life being horrible
0:08:37 and destructive and absurd all the time.
0:08:39 So this is a handy way to think about it.
0:08:40 – We all should.
0:08:43 – I’d like to turn to your very practical work,
0:08:45 advising the UK government
0:08:49 on the Animal Welfare and Sentience Act of 2022.
0:08:50 The question was put to you
0:08:53 of whether they should consider certain invertebrates
0:08:55 like octopus and crabs and lobsters,
0:08:58 whether they should be included and protected in the bill.
0:09:01 Could you just give a little context on that story
0:09:02 and what led the government to come
0:09:05 and ask you to lead a research team on that question?
0:09:08 – Yeah, it was indirectly a result of Brexit,
0:09:11 the UK leaving the European Union,
0:09:15 because in doing that, we left the EU’s Lisbon Treaty
0:09:18 that has a line in it about respecting animals
0:09:20 as sentient beings.
0:09:22 And so animal welfare organizations said to the government,
0:09:25 are you going to import that into UK law?
0:09:27 And they said, no.
0:09:29 And they got a lot of bad press along the lines of,
0:09:32 well, don’t you think animals feel pain?
0:09:35 And so they promised new legislation
0:09:38 that would restore respect for sentient beings
0:09:40 back to UK law.
0:09:43 And they produced a draft of the bill
0:09:46 that included vertebrate animals.
0:09:48 You could say that’s progressive in a way
0:09:50 because fishes are in there, which is great,
0:09:52 but it generated a lot of criticism
0:09:55 because of the omission of invertebrates.
0:09:58 And so in that context, they commissioned a team led by me
0:10:01 to produce a review of the evidence of sentience
0:10:03 in two groups of invertebrates,
0:10:06 the cephalopods like octopuses
0:10:09 and the decapod crustaceans like crabs and lobsters.
0:10:11 I’d already been calling for applications
0:10:14 of the precautionary principle to questions of sentience
0:10:16 and had written about that.
0:10:19 And I’d already established at the LSE a project
0:10:22 called the Foundations of Animal Sentience Project
0:10:25 that aims to try to place the emerging science
0:10:29 of animal sentience on more secure foundations,
0:10:31 advance it, develop better methods,
0:10:33 and find new ways of putting the science to work
0:10:35 to design better policies,
0:10:37 laws and ways of caring for animals.
0:10:39 So in a way, I was in the right place at the right time.
0:10:43 I was pretty ideally situated to be leading a review like this.
0:10:46 – How do folks actually go about trying to answer
0:10:51 the question of whether a given animal is or is not sentient?
0:10:53 – Well, in lots of different ways.
0:10:55 And I think when we’re looking at animals
0:10:58 that are relatively close to us in evolutionary terms,
0:11:00 like other mammals,
0:11:02 neuroscience is a huge part of it
0:11:05 because we can look for similarities of brain mechanism.
0:11:08 But when thinking about crabs and lobsters,
0:11:09 what we’re not going to find
0:11:11 is exactly the same brain mechanisms
0:11:13 because we’re separated from them
0:11:16 by over 500 million years of evolution.
0:11:17 – That’s quite a bit.
0:11:19 – And so I think in that context,
0:11:23 you can ask big picture neurological questions.
0:11:27 Are there integrative brain regions, for example?
0:11:29 But the evidence is quite limited,
0:11:33 and so behavior ends up carrying a huge amount of weight.
0:11:36 Some of the strongest evidence comes from behaviors
0:11:41 that show the animal valuing pain relief when injured.
0:11:45 So for example, there was a study by Robin Crook
0:11:46 on octobuses, which is where you give the animal
0:11:49 a choice of two different chambers,
0:11:52 and you see which one it initially prefers.
0:11:56 And then you allow it to experience the effects
0:12:00 of a noxious stimulus, a nasty event.
0:12:03 And then in the other chamber that it initially dispreferred,
0:12:07 you allow it to experience the effects of an anesthetic
0:12:10 or a pain relieving drug.
0:12:12 And then you see whether its preferences reverse.
0:12:14 So now going forward,
0:12:17 it goes to that chamber where it had a good experience
0:12:20 rather than the one where it had a terrible experience.
0:12:22 So it’s a pattern of behavior.
0:12:26 In ourselves, this would be explained by feeling pain
0:12:28 and then getting relief from the pain.
0:12:30 And when we see it in other mammals,
0:12:32 we make that same inference.
0:12:34 – Are there any other categories?
0:12:36 ‘Cause we mentioned pain is one bucket of sentience,
0:12:38 but there’s much more to it.
0:12:39 Is there anything else that tends to play
0:12:42 a big role in the research?
0:12:42 – There’s much more to it.
0:12:44 And what I would like to see in the future
0:12:47 is animal sentience research moving beyond pain
0:12:50 and looking for other states that matter,
0:12:53 like joy for instance.
0:12:58 In practice though, by far the largest body of literature
0:13:01 exists for looking at markers of pain.
0:13:05 – I would love to read a paper that tries to assess
0:13:07 to what degree rats are experiencing joy
0:13:09 rather than pain, that would be lovely.
0:13:12 – I mean, studies of play behavior are very relevant here.
0:13:16 The studies of rats playing hide and seek for example,
0:13:18 where there must be something motivating
0:13:20 these play behaviors.
0:13:23 In the human case, we would call it joy, delight,
0:13:26 excitement, something like that.
0:13:29 And so it gets you taking seriously the possibility
0:13:32 there might be something like that in other animals too.
0:13:34 – I think the thing I’m actually left wondering is
0:13:39 what animals don’t show signs of sentience in these cases?
0:13:42 – Right, I mean, there’s many invertebrates
0:13:45 where you have an absence of evidence
0:13:48 ’cause no one has really looked.
0:13:53 So snails for example, there’s frustratingly little evidence.
0:13:58 Also bivalve mollusks, which people talk about a lot
0:14:00 ’cause they eat so many of them.
0:14:03 Very, very little evidence to base our judgments on.
0:14:05 And it’s hard to know what to infer from this.
0:14:07 There’s this slogan that absence of evidence
0:14:09 is not evidence of absence.
0:14:11 And it’s a little bit oversimplifying
0:14:13 ’cause you sort of think, well, you know,
0:14:18 when researchers find some indicators of pain,
0:14:20 they’ve got strong motivations to press on
0:14:22 because it could be a useful pain model
0:14:24 for biomedical research.
0:14:27 And this is exactly what we’ve seen in insects,
0:14:29 particularly Drosophila fruit flies,
0:14:31 that seeing some of those initial markers
0:14:33 has led scientists to think, well, let’s go for this.
0:14:38 And it turns out they’re surprisingly useful pain models.
0:14:39 – A pain model for humans?
0:14:40 – Right, exactly.
0:14:44 Yeah, that traditionally biomedical researchers have used rats
0:14:48 and there’s pressure to replace.
0:14:50 I don’t personally think that replacement here
0:14:53 should mean replacing mammals with invertebrates.
0:14:56 It’s not really the kind of replacement that I support,
0:14:59 but that is how a lot of scientists understand it.
0:15:03 And so they’re looking for ways to replace rats with flies.
0:15:04 – How do they decide
0:15:06 that the fly is a good pain model for humans?
0:15:08 – I mean, researchers have the ability
0:15:11 to manipulate the genetics of flies
0:15:16 at very, very fine grains using astonishing technologies.
0:15:22 So there was a recent paper that basically installed
0:15:26 in some flies sensitivity to chili heat.
0:15:30 Which of course in us, over a certain threshold,
0:15:32 this becomes painful.
0:15:34 So if you have one of the hottest chilies in the world,
0:15:37 you’re not gonna just carry on as normal.
0:15:38 – Certainly not.
0:15:40 – And they showed that the same behavior
0:15:41 can be produced in flies.
0:15:45 You can engineer them to be responsive to chili
0:15:48 and then you can dial up the amount of capsaicin
0:15:49 in the food they’re eating.
0:15:52 And there’ll come a point where they just stop eating
0:15:57 and withdraw from food, even though it leads them to starve.
0:16:00 And it’s things like this that are leading researchers
0:16:03 to say, wow, the mechanisms here are mechanisms
0:16:07 we can use for testing out potential pain relieving drugs.
0:16:11 And the fruit flies are a standard model organism,
0:16:13 as they say in science.
0:16:16 So there’s countless numbers of them,
0:16:18 but traditionally they’ve been studied
0:16:20 for genetics primarily.
0:16:21 People haven’t been thinking of them
0:16:24 as model systems of cognitive functions
0:16:28 or of sentience or of pain or of sociality.
0:16:31 And they’re realizing to their surprise
0:16:33 that they’re very good models of all of these things.
0:16:35 And then your question is, well,
0:16:38 why is it such a good model of these things?
0:16:42 Could it be in fact that it possesses sentience of some kind?
0:16:46 – I don’t wanna go too far down this rabbit hole
0:16:48 ’cause I could spend hours asking you about this.
0:16:52 Let’s swing back to your research on the UK’s Act for a second.
0:16:54 You wound up recommending that the invertebrates
0:16:56 you looked at should be included.
0:16:59 And you mentioned this included, you know, octopuses,
0:17:01 which to me seems straightforward.
0:17:04 These seem very intelligent and playful.
0:17:06 I don’t need a lot of research to convince me of that.
0:17:09 But you recommended things like, you know, crabs and lobsters
0:17:11 and things where maybe people’s intuitions differ
0:17:14 a little bit in practical terms.
0:17:17 What changed for the life of a crab
0:17:20 after the UK did formally include them in the bill?
0:17:23 How does that wind up benefiting crabs?
0:17:26 – It’s a topic of ongoing discussion, basically,
0:17:27 ’cause what this new act does
0:17:30 is it creates a duty on policymakers
0:17:33 to consider the animal welfare consequences
0:17:36 of their decisions, including to crabs.
0:17:39 Now, we recommended, don’t just put crabs
0:17:41 in this particular act.
0:17:45 Also, amend the UK’s other animal welfare laws
0:17:48 to be consistent with the new act.
0:17:49 And this we’ve not yet seen.
0:17:52 So we’re really hoping that this will happen
0:17:53 and will happen in the near future.
0:17:56 And it’s something that definitely should happen.
0:17:58 ‘Cause in the meantime, we’ve got a rather confusing picture
0:18:01 where you have these other laws that say
0:18:04 animals should not be caused unnecessary suffering
0:18:07 when they’re killed and people should require training
0:18:09 if they’re going to slaughter animals.
0:18:12 And then you have this new law that says
0:18:14 for legal purposes, decapod crustaceans
0:18:16 are to be considered animals.
0:18:19 And as a philosopher, I’m always thinking,
0:18:20 well, read these two things together
0:18:22 and think about what they logically imply
0:18:23 when written together.
0:18:26 And lawyers don’t like that kind of argument.
0:18:28 Lawyers want a clear precedent
0:18:31 where there’s been some kind of test case
0:18:35 that has convicted someone for boiling a lobster alive
0:18:36 or something like that.
0:18:38 And that’s what we’ve not yet had.
0:18:41 So I’m hoping that lawmakers will act
0:18:44 to clarify that situation.
0:18:46 To me, it’s kind of clear.
0:18:47 How much clearer could it be?
0:18:50 This method causes unnecessary suffering,
0:18:51 quite obviously.
0:18:56 And it’s illegal to do that to any animal,
0:18:58 including crabs.
0:19:02 But in practice, because it’s not explicitly ruled out,
0:19:07 it’s not quite good enough at the moment.
0:19:09 We wanna see this explicitly ruled out.
0:19:13 – So we’ll take incremental steps to get there.
0:19:15 – Yeah, in a way, I’m glad people take this issue
0:19:16 seriously at all.
0:19:19 I didn’t really expect that when I started working on it.
0:19:23 And so to have achieved any policy change that benefits
0:19:25 crabs and lobsters in any way,
0:19:27 I’ve gotta count that as a win.
0:19:40 – Support for the gray area comes from Mint Mobile.
0:19:42 There’s nothing like the satisfaction
0:19:44 of realizing you just got an incredible deal.
0:19:47 But those little victories have gotten harder
0:19:48 and harder to find.
0:19:50 Here’s the good news though.
0:19:52 Mint Mobile is resurrecting that incredible
0:19:54 “I got a deal” feeling.
0:19:56 Right now, when you make the switch to a Mint Mobile plan,
0:19:59 you’ll pay just $15 a month when you purchase
0:20:01 a new three month phone plan.
0:20:03 All Mint Mobile plans come with high speed data
0:20:05 and unlimited talk and text delivered
0:20:08 on the nation’s largest 5G network.
0:20:10 You can even keep your phone, your contacts,
0:20:11 and your number.
0:20:13 It doesn’t get much easier than that.
0:20:15 To get this new customer offer
0:20:17 and your new three month premium wireless plan
0:20:18 for just 15 bucks a month,
0:20:21 you can go to mintmobile.com/grayarea.
0:20:24 That’s mintmobile.com/grayarea.
0:20:27 You can cut your wireless bill to 15 bucks a month
0:20:29 at mintmobile.com/grayarea.
0:20:33 $45 upfront payment required equivalent to $15 a month.
0:20:36 New customers on first three month plan only.
0:20:39 Speeds slower above 40 gigabytes on unlimited plan,
0:20:41 additional taxes, fees, and restrictions apply.
0:20:43 See Mint Mobile for details.
0:20:50 Support for the gray area comes from Cook Unity.
0:20:52 You know one way to eat chef prepared meals
0:20:53 in the comfort of your home?
0:20:56 You can spend years at culinary school,
0:20:58 work your way up the restaurant industry,
0:21:00 become a renowned chef on your own,
0:21:02 and then cook something for yourself.
0:21:04 Cook Unity delivers meals to your door
0:21:06 that are crafted by award winning chefs
0:21:10 and made with local farm fresh ingredients.
0:21:13 Cook Unity’s selection of over 350 meals
0:21:15 offers a variety of cuisines
0:21:17 and their menus are updated weekly.
0:21:18 So you’re sure to find something
0:21:21 to fit your taste and dietary needs.
0:21:22 One of our colleagues, Nisha,
0:21:24 tried Cook Unity for herself.
0:21:26 – Sometimes you’re just too tired to cook.
0:21:28 I’m a, I have a two and a half year old.
0:21:31 Sometimes you’re just exhausted at the end of the day.
0:21:33 And it’s very easy to default to take out.
0:21:35 So it was really nice to not have the mental load
0:21:37 of having it cook every day,
0:21:40 but having healthy home cooked meals
0:21:42 already prepared for you
0:21:45 and not having to go the takeout route.
0:21:48 – You can get the gift of delivering mouthwatering meals
0:21:49 crafted from local ingredients
0:21:52 and award winning chefs with Cook Unity.
0:21:55 You can go to cookunity.com/grayarea
0:21:58 or enter code grayarea before checkout
0:22:00 for 50% off your first week.
0:22:02 That’s 50% off your first week
0:22:04 by using code grayarea
0:22:07 or going to cookunity.com/grayarea.
0:22:14 – Support for the gray area comes from Shopify.
0:22:18 Viral marketing campaigns have gotten pretty wild lately.
0:22:19 Like in Russia,
0:22:22 one pizza chain offered 100 free pizzas a year
0:22:24 for 100 years to anyone
0:22:26 who got the company logo tattooed on their body.
0:22:29 Apparently 400 misguided souls did it,
0:22:33 which is a story that deserves its own podcast.
0:22:34 But if you want to grow your company
0:22:38 without resorting to a morally dubious viral scheme,
0:22:40 you might want to check out Shopify.
0:22:43 Shopify is an all-in-one digital commerce platform
0:22:46 that wants to help your business sell better than ever before.
0:22:49 Shopify says they can help you convert browsers
0:22:52 into buyers and sell more over time.
0:22:55 And their Shop Pay feature can boost conversions by 50%.
0:22:58 There’s a reason companies like Allbirds turn to Shopify
0:23:01 to sell more products to more customers.
0:23:03 Businesses that sell more sell with Shopify.
0:23:05 Want to upgrade your business
0:23:07 and get the same checkout Allbirds uses?
0:23:10 You can sign up for your $1 per month trial period
0:23:14 at shopify.com/vox, all lowercase.
0:23:18 That’s shopify.com/vox to upgrade your selling today.
0:23:20 shopify.com/vox.
0:23:28 (gentle music)
0:23:38 – Let’s move to another set of potential beings.
0:23:41 Your work on sentience covers artificial intelligence.
0:23:44 And one of the things that I’ve been most interested
0:23:46 in watching as the past few years
0:23:48 have really thrust a lot of questions around AI
0:23:52 into the mainstream has been this unbundling
0:23:54 of consciousness and intelligence
0:23:56 or sentience and intelligence.
0:23:59 We’re clearly getting better at creating
0:24:01 more intelligent systems that can achieve
0:24:05 and with competency perform certain tasks.
0:24:07 But it remains very unclear
0:24:09 if we’re getting any closer to sentient ones.
0:24:12 So how do you understand the relationship
0:24:15 between sentience and intelligence?
0:24:17 – I think it’s entirely possible
0:24:21 that we will get AI systems with very high levels
0:24:26 of intelligence and absolutely no sentience at all.
0:24:27 That’s entirely possible.
0:24:31 And when you think about shrimps or snails, for example,
0:24:34 we can also conceive of how there can be sentience
0:24:37 with perhaps not all that much intelligence.
0:24:40 – On another podcast, you had mentioned that
0:24:43 it might actually be easier to create AI systems
0:24:45 that are sentient by modeling them
0:24:47 off of less intelligent systems
0:24:50 rather than just cranking up the intelligence dial
0:24:52 until it bursts through into sentience.
0:24:54 Why is that?
0:24:55 – That could absolutely be the case.
0:24:59 I see many possible pathways to sentient AI.
0:25:01 One of which is through the emulation
0:25:03 of animal nervous systems.
0:25:06 There’s a long running project called Open Worm
0:25:09 that tries to recreate the nervous system
0:25:14 of a tiny worm called C. elegans in computer software.
0:25:16 There’s not a huge amount of funding going into this
0:25:19 because it’s not seen as very lucrative,
0:25:20 just very interesting.
0:25:23 And so even with those very simple nervous systems,
0:25:25 we’re not really at the stage where we can say
0:25:27 they’ve been emulated.
0:25:28 But you can see the pathway here.
0:25:31 You know, suppose we did get an emulation
0:25:33 of a worm’s nervous system.
0:25:35 I’m sure we would then move on to fruit flies.
0:25:39 If that worked, researchers would be going on to open mouse,
0:25:43 open fish and emulating animal brains
0:25:45 at ever greater levels of detail.
0:25:49 And then in relation to questions of sentience,
0:25:51 we’ve got to take seriously the possibility
0:25:56 that sentience does not require a biological substrate,
0:25:59 that the stuff you’re made of might not matter.
0:26:01 It might matter, but it might not.
0:26:03 And so it might be that if you recreate
0:26:08 the same functional organization in a different substrate,
0:26:11 so no neurons of a biological kind anymore,
0:26:13 just computer software,
0:26:15 maybe you would create sentience as well.
0:26:18 – You’ve talked about this idea that you’ve called
0:26:20 the N equals one problem.
0:26:22 Can you explain what that is?
0:26:28 – Well, this is a term that began in origins of life studies,
0:26:31 where it’s people searching for extraterrestrial life
0:26:34 or studying life’s origin and asking,
0:26:37 well, we only have one case to draw on.
0:26:39 And if we only have one case,
0:26:43 how are we supposed to know what was essential to life
0:26:46 from what was a contingent feature
0:26:48 of how life was achieved on Earth?
0:26:53 And one might think we have an N equals one problem
0:26:55 with consciousness as well.
0:26:59 If you think it’s something that has only evolved once,
0:27:01 seems like you’re always gonna have problems
0:27:02 disentangling what’s essential to it
0:27:06 from what is contingent.
0:27:06 Luckily though,
0:27:09 I think we might be in an N greater than one situation
0:27:12 when it comes to sentience and consciousness
0:27:14 because of the arthropods like flies and bees
0:27:17 and crabs. And because of the cephalopods
0:27:21 like octopuses, squid, cuttlefish,
0:27:24 we might even be in an N equals three situation,
0:27:28 in which case, studying those other cases,
0:27:32 octopuses, crabs, insects has tremendous value
0:27:35 for understanding the nature of sentience
0:27:37 ’cause it can tell us,
0:27:39 it can start to give us some insight
0:27:43 into what might be essential to having it at all
0:27:45 versus what might be a quirk
0:27:48 of how it is achieved in humans.
0:27:50 – Just to make sure I have this right,
0:27:54 if we are in an N equals one scenario with sentience,
0:27:56 that means that every sentient creature evolved
0:27:59 from the same sentient ancestor.
0:28:01 It’s one evolutionary lineage.
0:28:01 – That’s right.
0:28:05 – And so sentience has only evolved once in Earth’s history
0:28:07 so it gives us one example to look at.
0:28:08 – Exactly.
0:28:10 – But if we’re not in an N equals one situation,
0:28:12 you mentioned N equals three
0:28:13 and there’s a fair bit of research
0:28:16 suggesting this could be the case or something like it,
0:28:19 then sentience has evolved three separate times
0:28:22 in three separate kind of cases of form
0:28:24 and the architecture of a being.
0:28:26 – That’s fascinating to me,
0:28:28 the idea that sentience could have evolved
0:28:31 independently multiple times in different ways.
0:28:34 – Yeah, we know it’s true of eyes, for example,
0:28:37 when you look at the eyes of cephalopods,
0:28:40 you see a wonderful mixture of similarities and differences.
0:28:44 So we see convergent evolution, similar thing,
0:28:49 evolving independently to solve a similar problem
0:28:51 and sentience could be just like that.
0:28:55 – The greater the number of N’s we have here,
0:28:59 the number of separate instances of sentience evolving,
0:29:03 it strikes me as that lends more credence to the idea
0:29:06 that AI could develop its own independent route
0:29:09 to sentience as well that might not look exactly
0:29:11 like what we’ve seen in the past.
0:29:14 – It’s also the way towards really knowing
0:29:16 whether it has or not as well
0:29:19 because at present, we’re just not in that situation.
0:29:22 We’re not in a good enough position
0:29:25 to be able to really know that we’ve created sentient AI
0:29:28 even when we do, we’ll be faced
0:29:31 with horrible disorienting uncertainty.
0:29:33 But to me, the pathway towards better evidence
0:29:36 and maybe one day knowledge lies through
0:29:38 studying other animals.
0:29:41 And it lies through trying to get other N’s,
0:29:45 other independently evolved cases
0:29:48 so that we can develop theories
0:29:51 that genuinely disentangle the quirks
0:29:54 of human consciousness from what is needed
0:29:56 to be conscious at all.
0:30:01 – What kind of evidence would you find compelling
0:30:05 that tests for sentience in AI systems?
0:30:08 – It’s something I’ve been thinking about a great deal
0:30:12 because when we’re looking at the surface linguistic behavior
0:30:14 of an AI system that has been trained
0:30:18 on over a trillion words of human training data,
0:30:22 we’re clearly gonna see very fluent talking
0:30:24 about feelings and emotions.
0:30:29 And we’re already seeing that.
0:30:33 And it’s really, I would say not evidence at all
0:30:36 that the system actually has those feelings
0:30:40 because it can be explained as a kind of skillful mimicry.
0:30:43 And if that mimicry serves the system’s objectives,
0:30:45 we should expect to see it.
0:30:48 We should expect our criteria to be gamed
0:30:50 if the objectives are served by persuading
0:30:54 the human user of sentience.
0:30:56 And so this is a huge problem and it points
0:31:01 to the need to look deeper in some way.
0:31:03 These systems are very substantially opaque.
0:31:07 It is really, really hard to infer anything
0:31:10 about what the processes are inside them.
0:31:12 And so I have a second line of research as well
0:31:15 that I’ve been developing with collaborators at Google
0:31:19 that is about trying to adapt some of these animal experiments.
0:31:22 Let’s see if we can translate them over to the AI case.
0:31:24 – These are looking for behavior changes?
0:31:27 – Yeah, looking for subtle behavior changes
0:31:32 that we hope could not be gamed
0:31:35 because they’re not part of the normal repertoire
0:31:37 in which humans express their feelings,
0:31:39 but are rather these very subtle things
0:31:41 that we’ve looked for in other animals
0:31:43 because they can’t talk about their feelings
0:31:45 in the first place.
0:31:47 – So it’s funny, we’re hitting the same problem in AI
0:31:49 that we are in animals and humans,
0:31:53 which is that in both cases, there’s a black box problem
0:31:54 where we don’t actually understand
0:31:55 the inner workings to some degree.
0:31:58 – The problems are so much worse in the AI case though
0:32:03 because when you’re faced with a pattern of behavior
0:32:07 in another animal like an octopus
0:32:10 that is well explained by there being a state
0:32:14 like pain there, that is the best explanation
0:32:15 for your data.
0:32:18 And it doesn’t have to compete with this other explanation
0:32:21 that maybe the octopus read a trillion words
0:32:23 about how humans express their feelings
0:32:27 and stands to benefit from gaming our criteria
0:32:29 and skillfully mimicking us.
0:32:33 We know the octopus is not doing that, that never arises.
0:32:36 In the AI case, those two explanations always compete
0:32:39 and the second one with current systems
0:32:41 seems to be rather more plausible.
0:32:43 And in addition to that,
0:32:46 the substrate is completely different as well.
0:32:48 So we face huge challenges
0:32:49 and I suppose what I’m trying to do
0:32:51 is maintain an attitude of humility
0:32:53 in the face of those challenges.
0:32:57 Now, let’s not be credulous about this,
0:32:59 but also let’s not give up the search
0:33:02 for developing higher quality kinds of test.
0:33:14 Support for The Gray Area comes from Greenlight.
0:33:28 Anyway, it applies to more than fish.
0:33:30 It’s also a great lesson for parents
0:33:32 who want their kids to learn important skills
0:33:35 that will set them up for success later in life.
0:33:37 As we enter the gifting season,
0:33:39 now might be the perfect time
0:33:41 to give your kids money skills that will last
0:33:43 well beyond the holidays.
0:33:45 That’s where Greenlight comes in.
0:33:46 Greenlight is a debit card
0:33:49 and money app made specifically with families in mind.
0:33:50 Send money to your kids,
0:33:52 track their spending and saving
0:33:54 and help them develop their financial skills
0:33:57 with games aimed at building the confidence they need
0:34:00 to make wiser decisions with their money.
0:34:02 My kid is a little too young for this.
0:34:04 We’re still rocking piggy banks,
0:34:06 but I’ve got a colleague here at Vox
0:34:08 who uses it with his two boys and he loves it.
0:34:10 You can sign up for Greenlight today
0:34:12 at greenlight.com/grayarea.
0:34:14 That’s greenlight.com/grayarea
0:34:16 to try Greenlight today.
0:34:18 Greenlight.com/grayarea.
0:34:24 Support for the show comes from GiveWell.
0:34:27 When you make a charitable donation,
0:34:30 you want to know your money is being well spent.
0:34:32 For that, you might want to try GiveWell.
0:34:34 GiveWell is an independent nonprofit
0:34:36 that’s spent the last 17 years
0:34:39 researching charitable organizations.
0:34:40 And they only give recommendations
0:34:44 to the highest impact causes that they’ve vetted thoroughly.
0:34:47 According to GiveWell, over 125,000 donors
0:34:50 have used it to donate more than $2 billion.
0:34:52 Rigorous evidence suggests that these donations
0:34:55 could save over 200,000 lives.
0:34:58 And GiveWell wants to help you make informed decisions
0:34:59 about high impact giving.
0:35:02 So all of their research and recommendations
0:35:04 are available on their site for free.
0:35:06 You can make tax deductible donations
0:35:08 and GiveWell doesn’t take a cut.
0:35:09 If you’ve never used GiveWell to donate,
0:35:12 you can have your donation matched up to $100
0:35:14 before the end of the year
0:35:16 or as long as matching funds last.
0:35:18 To claim your match, you can go to GiveWell.org,
0:35:22 pick “podcast” and enter “The Gray Area” at checkout.
0:35:24 Make sure they know that you heard about GiveWell
0:35:26 from The Gray Area to get your donation matched.
0:35:30 Again, that’s GiveWell.org to donate or find out more.
0:35:37 Support for The Gray Area comes from DeleteMe.
0:35:39 DeleteMe allows you to discover and control
0:35:40 your digital footprint,
0:35:43 letting you see where things like home addresses,
0:35:44 phone numbers, and even email addresses
0:35:47 are floating around on data broker sites.
0:35:50 And that means DeleteMe could be the perfect holiday gift
0:35:52 for a loved one looking to help safeguard
0:35:54 their own life online.
0:35:55 DeleteMe can help anyone monitor
0:35:59 and remove the personal info they don’t want on the internet.
0:36:01 Claire White, our colleague here at Vox,
0:36:04 tried DeleteMe for herself and even gifted it to a friend.
0:36:08 This year, I gave two of my friends a DeleteMe subscription
0:36:09 and it’s been the perfect gift
0:36:12 ’cause it’s something that will last beyond the season.
0:36:14 DeleteMe will continue to remove their information
0:36:16 from online and it’s something I’ve been raving about,
0:36:19 so I know that they’re gonna love it as well.
0:36:21 – This holiday season, you can give your loved ones
0:36:25 the gift of privacy and peace of mind with DeleteMe,
0:36:27 now at a special discount for our listeners.
0:36:30 Today, you can get 20% off your DeleteMe plan
0:36:33 when you go to joindeleteme.com/vox
0:36:35 and use promo code Vox at checkout.
0:36:39 The only way to get 20% off is to go to joindeleteme.com/vox
0:36:42 and enter code Vox at checkout.
0:36:46 That’s joindeleteme.com/vox, code Vox.
0:36:52 (gentle music)
0:37:00 – One of the major aims of your recent book
0:37:02 is to propose a framework
0:37:04 for making these kinds of practical decisions
0:37:06 about potentially sentient creatures,
0:37:09 whether it’s an animal, whether it’s AI,
0:37:11 given this uncertainty.
0:37:12 Tell me about that framework.
0:37:15 – Well, it’s a precautionary framework.
0:37:18 One of the things I urge is a pragmatic shift
0:37:20 in how we think about the question.
0:37:23 From asking, is the system sentient
0:37:26 where uncertainty will always be with us?
0:37:30 To asking instead, is the system a sentience candidate?
0:37:32 Where the concept of a sentience candidate
0:37:36 is a concept that we’ve pragmatically engineered.
0:37:39 And what it says is that a system is a sentience candidate
0:37:41 when there’s a realistic possibility of sentience
0:37:44 that it would be irresponsible to ignore.
0:37:45 And when there’s an evidence base
0:37:49 that can inform the design and assessment of precautions.
0:37:52 And because we’ve constructed the concept like that,
0:37:56 we can use current evidence to make judgments.
0:37:59 The cost of doing that is that those judgments
0:38:02 are not purely scientific judgments anymore.
0:38:05 There’s an ethical element to the judgment as well,
0:38:07 because it’s about when a realistic possibility
0:38:10 becomes irresponsible to ignore.
0:38:13 And that’s implicitly a value judgment.
0:38:15 But by reconstructing the question in that way,
0:38:16 we make it answerable.
0:38:19 – So, presumably then,
0:38:21 given your recommendation to the UK government,
0:38:25 you would say that those invertebrates you looked at
0:38:27 are sentience candidates.
0:38:29 That there’s enough evidence to at least consider
0:38:31 the possibility of sentience.
0:38:35 Where would you stop with the current category
0:38:36 of sentience candidate?
0:38:39 What is not a sentience candidate in your current view?
0:38:43 – I’ve come to the view that insects really are,
0:38:44 which surprises me.
0:38:46 You know, it would have surprised past me
0:38:49 who hadn’t read so much of the literature about insects.
0:38:53 The evidence just clearly shows a realistic possibility
0:38:55 of sentience that it would be irresponsible to ignore.
0:38:59 But really, that’s currently where I stop.
0:39:01 So the cephalopod mollusks, the decapod crustaceans,
0:39:05 the insects, a lot of evidence in those cases.
0:39:08 And I think in other invertebrates,
0:39:11 what we should say instead is that we lack the kind of
0:39:14 evidence that would be needed to effectively design
0:39:17 precautions to manage welfare risks.
0:39:21 And so the imperative there is to be getting more evidence.
0:39:24 And so in my book, I call these investigation priorities.
0:39:27 – So insects are sentience candidates.
0:39:30 Where does today’s generation of AI,
0:39:33 let’s say LLMs in particular, so OpenAI’s ChatGPT,
0:39:36 Anthropic’s Claude, are these sentience candidates
0:39:37 in your view yet?
0:39:40 – I suggest that they’re investigation priorities,
0:39:43 which is already controversial because I’m saying that,
0:39:47 well, just as with snails, we need more evidence.
0:39:49 Equally in AI, we need more evidence.
0:39:51 So I’m not being one of those people who just dismisses
0:39:56 the possibility of sentient AI as being a ridiculous one.
0:39:58 But I don’t think they’re sentience candidates
0:40:00 because we don’t have enough evidence.
0:40:02 – When you say that something is a sentience candidate,
0:40:05 it’s implying that we need to consider their welfare
0:40:08 in our behaviors and the decisions that we make.
0:40:09 – In public policy.
0:40:10 Yeah, I mean, in our personal lives,
0:40:13 we might want to be even more precautionary,
0:40:17 but I’m designing here a framework for setting policy.
0:40:19 – Right, ’cause I can imagine,
0:40:21 I think that the standard kind of line
0:40:23 that you get at this point is,
0:40:24 if you’re telling me I need to consider
0:40:26 the welfare of insects,
0:40:28 how can I take a step on the sidewalk?
0:40:30 And one of the ideas that’s central to your framework
0:40:33 is this idea of proportionality, which I really liked.
0:40:36 You talk about how the precautions that we take
0:40:39 should match the scale of the risk of suffering
0:40:41 that our actions kind of carry.
0:40:43 So how do you think about quantifying the risk
0:40:45 of suffering an action carries, right?
0:40:49 Does harming simpler creatures or insects
0:40:52 carry less risk than harming larger, more complex ones
0:40:54 like pigs or octopuses?
0:40:57 – Well, I’m opposed to trying to reduce it all
0:41:00 to a calculation and perhaps disagree
0:41:02 with some utilitarians on that point.
0:41:04 When you’re setting public policy,
0:41:08 cost-benefit analysis has its place,
0:41:10 but we’re not in that kind of situation here.
0:41:13 We’re weighing up very incommensurable things,
0:41:16 things that it’s very, very hard to compare.
0:41:18 And I think in that kind of situation,
0:41:21 you don’t want to be just making a calculation.
0:41:24 What you need to have is a democratic, inclusive process
0:41:27 through which different positions can be represented
0:41:32 and we can try to resolve our value conflicts democratically.
0:41:35 And so in the book, I advocate for citizens assemblies
0:41:39 as being the most promising way of doing this,
0:41:41 where you bring a random sample of the public
0:41:45 into an environment where they’re informed about the risks,
0:41:47 they’re informed about possible precautions,
0:41:50 and they’re given a series of tests to go through
0:41:53 to debate what they think would be proportionate
0:41:54 to those risks.
0:41:57 And things like, we’re all banned from walking now
0:41:59 because it might hurt insects.
0:42:02 I don’t see those as very likely to be judged proportionate
0:42:04 by such an exercise.
0:42:06 But other things we might do to help insects,
0:42:09 like banning certain kinds of pesticides,
0:42:12 I think might well be judged proportionate.
0:42:15 – Is this, this sounds to me almost like a form of jury duty.
0:42:17 You have a random selection of citizens brought together.
0:42:18 – Yeah.
0:42:20 – How do you, when I think about this on one hand,
0:42:21 I think it sounds lovely.
0:42:23 I like the idea of us all coming together
0:42:26 to debate the welfare of our fellow creatures.
0:42:28 It also strikes me as kind of optimistic,
0:42:32 to imagine us not only doing this, but doing it well.
0:42:35 And I’m curious how you think about balancing
0:42:38 the value of expertise in making these decisions
0:42:40 with democratic input.
0:42:44 – Yeah, I’m implicitly proposing a division of labor
0:42:47 where experts are supposed to make this judgment
0:42:51 of sentience candidature or candidacy.
0:42:53 Is the octopus a sentience candidate?
0:42:56 But then they’re not adjudicating
0:42:58 the questions of proportionality.
0:43:00 – So what to do about it?
0:43:02 – Yeah, then it would be a tyranny of expert values.
0:43:05 You’d have this question that calls for value judgments
0:43:06 about what to do.
0:43:08 And you’d be handing that over to the experts
0:43:12 and letting the experts dictate changes to our way of life.
0:43:15 That question of proportionality,
0:43:18 that should be handed over to the citizens assembly.
0:43:21 And I think it doesn’t require ordinary citizens
0:43:24 to adjudicate the scientific disagreement.
0:43:27 And that’s really crucial because if you’re asking
0:43:28 random members of the public to adjudicate
0:43:32 which brain regions they think are more important to sentience,
0:43:34 that’s gonna be a total disaster.
0:43:37 But the point is you give them questions
0:43:40 about what sorts of changes to our way of life
0:43:44 would be proportionate, would be permissible,
0:43:47 adequate, reasonably necessary and consistent
0:43:50 in relation to this risk that’s been identified.
0:43:52 And you ask them to debate those questions.
0:43:55 And I think that’s entirely feasible.
0:43:57 I’m very optimistic about citizens assemblies
0:44:00 as a mechanism for addressing that kind of question,
0:44:02 a question about our shared values.
0:44:05 – Do you see these as legally binding
0:44:07 or kind of making recommendations?
0:44:11 – I think they can only be making recommendations.
0:44:14 What I’m proposing is that on certain specific issues
0:44:17 where we think we need public input,
0:44:19 but we don’t wanna put them to a referendum
0:44:22 because we might need to revisit the issues
0:44:24 when new evidence comes to light
0:44:26 and you need a certain level of information
0:44:28 to understand what the issue is.
0:44:31 Citizens assemblies are great for those kinds of issues.
0:44:34 And because they’re very effective,
0:44:36 the recommendations they deliver
0:44:38 should be given weight by policymakers
0:44:40 and should be implemented.
0:44:43 They’re not substituting for parliamentary democracy,
0:44:46 but they’re feeding into it in a really valuable way.
0:44:51 – One thing that I can’t help but wonder about all of this,
0:44:54 humans are already incredibly cruel to animals
0:44:56 that most of us agree are very sentient,
0:44:58 I’m thinking of pigs or cows.
0:45:01 I think we’ve largely moved away from
0:45:03 Descartes in the 1600s,
0:45:06 when all animals were considered unfeeling machines.
0:45:09 Today we might disagree about how small
0:45:11 and simple down the chain we go
0:45:14 before we lose consensus on sentience,
0:45:17 but agreeing that they’re sentient
0:45:18 doesn’t seem to have prevented us
0:45:21 from doing atrocious things to many animals.
0:45:24 So I’m curious if the goal is to help guide us
0:45:27 in making more ethical decisions,
0:45:29 how do you think that determining sentience
0:45:31 in other creatures will help?
0:45:37 – You’re totally right that recognizing animals as sentient
0:45:40 does not immediately lead to behavioral change
0:45:42 to treat them better.
0:45:45 And this is the tragedy of how we treat
0:45:48 lots of mammals like pigs and birds like chickens,
0:45:51 that we recognize them as sentient beings,
0:45:54 and yet we fail them very, very seriously.
0:45:57 I think there’s a lot of research to be done
0:46:01 about what kinds of information about sentience
0:46:03 might genuinely change people’s behavior.
0:46:07 And I’m very interested in doing that kind of research
0:46:12 going forward, but with cases like octopuses,
0:46:15 at least there’s quite an opportunity
0:46:16 in this particular case, I think,
0:46:20 because you don’t have really entrenched industries
0:46:22 already farming them.
0:46:24 Part of the problem we face with the pigs and chickens
0:46:28 and so on is that in opposing these practices,
0:46:31 the enemy is very, very powerful.
0:46:33 The arguments are really easy to state
0:46:38 and people do get them and they do see why this is wrong,
0:46:41 but then the enemy is so powerful
0:46:44 that actually changing this juggernaut,
0:46:47 this leviathan is a huge challenge.
0:46:51 By contrast with invertebrate farming,
0:46:54 we’re talking about practices sometimes
0:46:57 that could become entrenched like that in the future,
0:46:59 but are not yet entrenched.
0:47:04 Octopus farming is currently on quite small scales,
0:47:06 shrimp farming is much larger,
0:47:09 insect farming is much larger,
0:47:11 but they’re not as entrenched and powerful
0:47:14 as pig farming, poultry farming.
0:47:16 And so there seem to be real opportunities here
0:47:19 to effect positive change, or at least I hope so.
0:47:21 In the octopus farming case, for example,
0:46:24 we’ve actually seen bans implemented
0:47:27 in Washington State and in California.
0:47:31 And that’s a sign that progress is really possible
0:47:32 in these cases.
0:47:35 – There are talks of banning AI development.
0:47:37 The philosopher Thomas Metzinger has famously called
0:47:41 for a ban until 2050. That might be difficult operationally,
0:47:45 but I’m curious how you think about actions we can take today
0:47:47 at the early stages of these institutions
0:47:49 that might help in the long run.
0:47:51 – Yeah, huge problems.
0:47:55 I do think Metzinger’s proposal deserves to be taken seriously,
0:48:00 but also we need to be thinking about what can we do
0:48:05 that is more easily achieved than banning this stuff,
0:48:08 but then nonetheless makes a positive difference.
0:48:11 And in the book, I suggest there might be some lessons here
0:48:14 from the regulation of animal research
0:48:17 that you can’t just do what you like,
0:48:19 experimenting on animals.
0:48:22 In the UK, at least, there’s quite a strict framework
0:48:25 requiring you to get a license.
0:48:27 And it’s not a perfect framework by any means.
0:48:29 It has a lot of problems,
0:48:33 but it does show a possible compromise
0:48:35 between simply banning something altogether
0:48:39 and allowing it to happen in a completely unregulated way.
0:48:41 And the nature of that compromise
0:48:44 is that you expect the people doing this research
0:48:47 to be transparent about their plans,
0:48:49 to reveal their plans to a regulator,
0:48:53 who is able to see them and assess the harms and benefits
0:48:54 and only give a license
0:48:57 if they think the benefits outweigh the harms.
0:49:01 And I’d like to see something like that in AI research
0:49:03 as well as in animal research.
0:49:04 – Well, it’s interesting
0:49:05 ’cause it brings us right back
0:49:06 to what you were talking about a little while ago,
0:49:10 which is, if we can’t trust the linguistic output,
0:49:12 we need the research on understanding,
0:49:14 well, how do we even assess harm and risk
0:49:16 in AI systems in the first place?
0:49:18 – As I say, it’s a huge problem coming down the road
0:49:20 for the whole of society.
0:49:24 I think there’ll be significant social divisions opening up
0:49:28 in the near future between people who are quite convinced
0:49:31 that their AI companions are sentient
0:49:33 and want rights for them
0:49:38 and others who simply find that ridiculous and absurd.
0:49:40 And I think that there’ll be a lot of tensions
0:49:42 between these two groups.
0:49:45 And in a way, the only way to really move forward
0:49:49 is to have better evidence than we do now.
0:49:52 And so there needs to be more research.
0:49:55 I’m always in this difficult position of,
0:49:57 I want more research, the tech companies might fund it,
0:49:59 I hope they will, I want them to fund it.
0:50:02 At the same time, it could be very problematic
0:50:04 for them as well.
0:50:06 And so I can’t make any promises in advance
0:50:08 that the outcomes of that research
0:50:11 will be advantageous to the tech companies.
0:50:14 So, but even though I’m in a difficult position there,
0:50:18 I feel like I still have to try and do something.
0:50:21 – Maybe by way of trying to wrap this all up,
0:50:24 you have been involved in these kinds of questions
0:50:25 for a number of years.
0:50:27 And you’ve mentioned a few times throughout the conversation
0:50:29 that you have seen a pace of change
0:50:31 that’s been kind of inspiring.
0:50:32 You’ve seen questions that previously
0:50:35 were not a part of the conversation now,
0:50:37 becoming part of the mainstream conversation.
0:50:41 So what have you seen in the last decade or two
0:50:43 in terms of the degree to which we are really beginning
0:50:44 to embrace these questions?
0:50:46 – I’ve seen some positive steps.
0:50:49 I think issues around crabs and lobsters and octopuses
0:50:53 are taken far more seriously than they were 10 years ago.
0:50:56 For example, I really did not expect that California
0:51:00 would bring in an octopus farming ban
0:51:03 and in the legislation cite our work
0:51:07 as being a key factor driving it.
0:51:08 I mean, that was extraordinary.
0:51:11 So it just goes to show that it really pays off sometimes
0:51:14 to do impact driven work.
0:51:16 I think we’ve seen over the last couple of years
0:51:19 some changes in the conversations around AI as well.
0:51:23 The book is written in a very optimistic tone, I think,
0:51:26 because well, you’ve got to hope to make it a reality.
0:51:30 You’ve got to believe in the possibility of us
0:51:34 taking steps to manage risk better than we do.
0:51:36 And the book is full of proposals
0:51:38 about how we might do that.
0:51:42 And I think at least some of these will be adopted in the future.
0:51:49 – I would love to see it, I’m optimistic as well.
0:51:52 Jonathan Birch, thank you so much for coming on the show.
0:51:53 This was a pleasure.
0:51:54 – Thanks, O’Shan.
0:52:07 – Once again, the book is The Edge of Sentience,
0:52:10 which is free to read on the Oxford Academic Platform.
0:52:13 We’ll include a link to that in the show notes.
0:52:14 And that’s it.
0:52:17 I hope you enjoyed the episode as much as I did.
0:52:20 I am still thinking about whether we’re in an N equals one
0:52:23 or an N equals three world,
0:52:25 and how the future of how we look for sentience
0:52:29 in AI systems could come down to animal research
0:52:30 that helps us figure out
0:52:35 whether all animals share the same sentient ancestor,
0:52:37 or whether sentience is something
0:52:40 that’s evolved a few separate times.
0:52:43 This episode was produced by Beth Morrissey
0:52:46 and hosted by me, O’Shan Jarrow.
0:52:50 My day job is as a staff writer with Future Perfect at Vox,
0:52:53 where I cover the latest ideas in the science
0:52:55 and philosophy of consciousness,
0:52:57 as well as political economy.
0:53:01 You can read my stuff at vox.com/futureperfect.
0:53:04 Today’s episode was engineered by Patrick Boyd,
0:53:08 fact-checked by Anouk Dussot, edited by Jorge Just,
0:53:10 and Alex Overington wrote our theme music.
0:53:14 New episodes of The Gray Area drop on Mondays.
0:53:16 Listen and subscribe.
0:53:17 The show is part of Vox.
0:53:19 Support Vox’s journalism
0:53:22 by joining our membership program today.
0:53:25 Go to vox.com/members to sign up.
0:53:27 And if you decide to sign up because of the show,
0:53:28 let us know.
0:07:42 we’re thinking about.
0:07:44 One of the cases that brought me to this topic
0:07:47 was the practice of dropping crabs and lobsters
0:07:49 into pans of boiling water.
0:07:52 And it seems like a clear case to me
0:07:55 where you don’t need certainty actually.
0:07:56 You don’t even need knowledge.
0:08:00 You don’t need high probability to see the risk.
0:08:04 And in fact, to do sensible common sense things
0:08:05 to reduce that risk.
0:08:07 – So the risk is the suffering we’re imposing
0:08:10 on these potentially other sentient creatures.
0:08:13 – That’s usually what looms largest for me, yeah.
0:08:15 The risk of doing things
0:08:17 that mean we end up living very badly
0:08:20 because we cause enormous amounts of suffering
0:08:22 to the creatures around us.
0:08:26 And you can think of that as a risk to the creatures
0:08:28 that end up suffering, but it’s also a risk to us.
0:08:31 A risk that our lives will be horrible
0:08:32 and destructive and absurd.
0:08:35 – I worry about my life being horrible
0:08:37 and destructive and absurd all the time.
0:08:39 So this is a handy way to think about it.
0:08:40 – We all should.
0:08:43 – I’d like to turn to your very practical work,
0:08:45 advising the UK government
0:08:49 on the Animal Welfare and Sentience Act of 2022.
0:08:50 The question was put to you
0:08:53 of whether they should consider certain invertebrates
0:08:55 like octopus and crabs and lobsters,
0:08:58 whether they should be included and protected in the bill.
0:09:01 Could you just give a little context on that story
0:09:02 and what led the government to come
0:09:05 and ask you to lead a research team on that question?
0:09:08 – Yeah, it was indirectly a result of Brexit,
0:09:11 the UK leaving the European Union,
0:09:15 because in doing that, we left the EU’s Lisbon Treaty
0:09:18 that has a line in it about respecting animals
0:09:20 as sentient beings.
0:09:22 And so animal welfare organizations said to the government,
0:09:25 are you going to import that into UK law?
0:09:27 And they said, no.
0:09:29 And they got a lot of bad press along the lines of,
0:09:32 well, don’t you think animals feel pain?
0:09:35 And so they promised new legislation
0:09:38 that would restore respect for sentient beings
0:09:40 back to UK law.
0:09:43 And they produced a draft of the bill
0:09:46 that included vertebrate animals.
0:09:48 You could say that’s progressive in a way
0:09:50 because fishes are in there, which is great,
0:09:52 but it generated a lot of criticism
0:09:55 because of the omission of invertebrates.
0:09:58 And so in that context, they commissioned a team led by me
0:10:01 to produce a review of the evidence of sentience
0:10:03 in two groups of invertebrates,
0:10:06 the cephalopods like octopuses
0:10:09 and the decapod crustaceans like crabs and lobsters.
0:10:11 I’d already been calling for applications
0:10:14 of the precautionary principle to questions of sentience
0:10:16 and had written about that.
0:10:19 And I’d already established at the LSE a project
0:10:22 called the Foundations of Animal Sentience Project
0:10:25 that aims to try to place the emerging science
0:10:29 of animal sentience on more secure foundations,
0:10:31 advance it, develop better methods,
0:10:33 and find new ways of putting the science to work
0:10:35 to design better policies,
0:10:37 laws and ways of caring for animals.
0:10:39 So in a way, I was in the right place at the right time.
0:10:43 I was pretty ideally situated to be leading a review like this.
0:10:46 – How do folks actually go about trying to answer
0:10:51 the question of whether a given animal is or is not sentient?
0:10:53 – Well, in lots of different ways.
0:10:55 And I think when we’re looking at animals
0:10:58 that are relatively close to us in evolutionary terms,
0:11:00 like other mammals,
0:11:02 neuroscience is a huge part of it
0:11:05 because we can look for similarities of brain mechanism.
0:11:08 But when thinking about crabs and lobsters,
0:11:09 what we’re not going to find
0:11:11 is exactly the same brain mechanisms
0:11:13 because we’re separated from them
0:11:16 by over 500 million years of evolution.
0:11:17 – That’s quite a bit.
0:11:19 – And so I think in that context,
0:11:23 you can ask big picture neurological questions.
0:11:27 Are there integrative brain regions, for example?
0:11:29 But the evidence is quite limited,
0:11:33 and so behavior ends up carrying a huge amount of weight.
0:11:36 Some of the strongest evidence comes from behaviors
0:11:41 that show the animal valuing pain relief when injured.
0:11:45 So for example, there was a study by Robin Crook
0:11:46 on octobuses, which is where you give the animal
0:11:49 a choice of two different chambers,
0:11:52 and you see which one it initially prefers.
0:11:56 And then you allow it to experience the effects
0:12:00 of a noxious stimulus, a nasty event.
0:12:03 And then in the other chamber that it initially dispreferred,
0:12:07 you allow it to experience the effects of an anesthetic
0:12:10 or a pain relieving drug.
0:12:12 And then you see whether its preferences reverse.
0:12:14 So now going forward,
0:12:17 it goes to that chamber where it had a good experience
0:12:20 rather than the one where it had a terrible experience.
0:12:22 So it’s a pattern of behavior.
0:12:26 In ourselves, this would be explained by feeling pain
0:12:28 and then getting relief from the pain.
0:12:30 And when we see it in other mammals,
0:12:32 we make that same inference.
0:12:34 – Are there any other categories?
0:12:36 ‘Cause we mentioned pain is one bucket of sentience,
0:12:38 but there’s much more to it.
0:12:39 Is there anything else that tends to play
0:12:42 a big role in the research?
0:12:42 – There’s much more to it.
0:12:44 And what I would like to see in the future
0:12:47 is animal sentience research moving beyond pain
0:12:50 and looking for other states that matter,
0:12:53 like joy for instance.
0:12:58 In practice though, by far the largest body of literature
0:13:01 exists for looking at markers of pain.
0:13:05 – I would love to read a paper that tries to assess
0:13:07 to what degree rats are experiencing joy
0:13:09 rather than pain, that would be lovely.
0:13:12 – I mean, studies of play behavior are very relevant here.
0:13:16 The studies of rats playing hide and seek for example,
0:13:18 where there must be something motivating
0:13:20 these play behaviors.
0:13:23 In the human case, we would call it joy, delight,
0:13:26 excitement, something like that.
0:13:29 And so it gets you taking seriously the possibility
0:13:32 there might be something like that in other animals too.
0:13:34 – I think the thing I’m actually left wondering is
0:13:39 what animals don’t show signs of sentience in these cases?
0:13:42 – Right, I mean, there’s many invertebrates
0:13:45 where you have an absence of evidence
0:13:48 ’cause no one has really looked.
0:13:53 So snails for example, there’s frustratingly little evidence.
0:13:58 Also bivalve mollusks, which people talk about a lot
0:14:00 ’cause they eat so many of them.
0:14:03 Very, very little evidence to base our judgments on.
0:14:05 And it’s hard to know what to infer from this.
0:14:07 There’s this slogan that absence of evidence
0:14:09 is not evidence of absence.
0:14:11 And it’s a little bit oversimplifying
0:14:13 ’cause you sort of think, well, you know,
0:14:18 when researchers find some indicators of pain,
0:14:20 they’ve got strong motivations to press on
0:14:22 because it could be a useful pain model
0:14:24 for biomedical research.
0:14:27 And this is exactly what we’ve seen in insects,
0:14:29 particularly Drosophila fruit flies,
0:14:31 that seeing some of those initial markers
0:14:33 has led scientists to think, well, let’s go for this.
0:14:38 And it turns out they’re surprisingly useful pain models.
0:14:39 – A pain model for humans?
0:14:40 – Right, exactly.
0:14:44 Yeah, that traditionally biomedical researchers have used rats
0:14:48 and there’s pressure to replace.
0:14:50 I don’t personally think that replacement here
0:14:53 should mean replacing mammals with invertebrates.
0:14:56 It’s not really the kind of replacement that I support,
0:14:59 but that is how a lot of scientists understand it.
0:15:03 And so they’re looking for ways to replace rats with flies.
0:15:04 – How do they decide
0:15:06 that the fly is a good pain model for humans?
0:15:08 – I mean, researchers have the ability
0:15:11 to manipulate the genetics of flies
0:15:16 at very, very fine grains using astonishing technologies.
0:15:22 So there was a recent paper that basically installed
0:15:26 in some flies sensitivity to chili heat.
0:15:30 Which of course in us, over a certain threshold,
0:15:32 this becomes painful.
0:15:34 So if you have one of the hottest chilies in the world,
0:15:37 you’re not gonna just carry on as normal.
0:15:38 – Certainly not.
0:15:40 – And they showed that the same behavior
0:15:41 can be produced in flies.
0:15:45 You can engineer them to be responsive to chili
0:15:48 and then you can dial up the amount of capsaicin
0:15:49 in the food they’re eating.
0:15:52 And there’ll come a point where they just stop eating
0:15:57 and withdraw from food, even though it leads them to starve.
0:16:00 And things like this are leading researchers
0:16:03 to say, wow, the mechanisms here are mechanisms
0:16:07 we can use for testing out potential pain relieving drugs.
0:16:11 And the fruit flies are a standard model organism,
0:16:13 as they say in science.
0:16:16 So there’s countless numbers of them,
0:16:18 but traditionally they’ve been studied
0:16:20 for genetics primarily.
0:16:21 People haven’t been thinking of them
0:16:24 as model systems of cognitive functions
0:16:28 or of sentience or of pain or of sociality.
0:16:31 And they’re realizing to their surprise
0:16:33 that they’re very good models of all of these things.
0:16:35 And then your question is, well,
0:16:38 why is it such a good model of these things?
0:16:42 Could it be in fact that it possesses sentience of some kind?
0:16:46 – I don’t wanna go too far down this rabbit hole
0:16:48 ’cause I could spend hours asking you about this.
0:16:52 Let’s swing back to your research on the UK’s Act for a second.
0:16:54 You wound up recommending that the invertebrates
0:16:56 you looked at should be included.
0:16:59 And you mentioned this included, you know, octopuses,
0:17:01 which to me seems straightforward.
0:17:04 These seem very intelligent and playful.
0:17:06 I don’t need a lot of research to convince me of that.
0:17:09 But you recommended things like, you know, crabs and lobsters
0:17:11 and things where maybe people’s intuitions differ
0:17:14 a little bit in practical terms.
0:17:17 What changed for the life of a crab
0:17:20 after the UK did formally include them in the bill?
0:17:23 How does that wind up benefiting crabs?
0:17:26 – It’s a topic of ongoing discussion, basically,
0:17:27 ’cause what this new act does
0:17:30 is it creates a duty on policymakers
0:17:33 to consider the animal welfare consequences
0:17:36 of their decisions, including to crabs.
0:17:39 Now, we recommended, don’t just put crabs
0:17:41 in this particular act.
0:17:45 Also, amend the UK’s other animal welfare laws
0:17:48 to be consistent with the new act.
0:17:49 And this we’ve not yet seen.
0:17:52 So we’re really hoping that this will happen
0:17:53 and will happen in the near future.
0:17:56 And it’s something that definitely should happen.
0:17:58 ‘Cause in the meantime, we’ve got a rather confusing picture
0:18:01 where you have these other laws that say
0:18:04 animals should not be caused unnecessary suffering
0:18:07 when they’re killed and people should require training
0:18:09 if they’re going to slaughter animals.
0:18:12 And then you have this new law that says
0:18:14 for legal purposes, decapod crustaceans
0:18:16 are to be considered animals.
0:18:19 And as a philosopher, I’m always thinking,
0:18:20 well, read these two things together
0:18:22 and think about what they logically imply
0:18:23 when written together.
0:18:26 And lawyers don’t like that kind of argument.
0:18:28 Lawyers want a clear precedent
0:18:31 where there’s been some kind of test case
0:18:35 that has convicted someone for boiling a lobster alive
0:18:36 or something like that.
0:18:38 And that’s what we’ve not yet had.
0:18:41 So I’m hoping that lawmakers will act
0:18:44 to clarify that situation.
0:18:46 To me, it’s kind of clear.
0:18:47 How much clearer could it be
0:18:50 that this method causes unnecessary suffering
0:18:51 quite obviously.
0:18:56 And it’s illegal to do that to any animal,
0:18:58 including crabs.
0:19:02 But in practice, because it’s not explicitly ruled out,
0:19:07 it’s not quite good enough at the moment.
0:19:09 We wanna see this explicitly ruled out.
0:19:13 – So we’ll take incremental steps to get there.
0:19:15 – Yeah, in a way, I’m glad people take this issue
0:19:16 seriously at all.
0:19:19 I didn’t really expect that when I started working on it.
0:19:23 And so to have achieved any policy change that benefits
0:19:25 crabs and lobsters in any way,
0:19:27 I’ve gotta count that as a win.
0:19:40 – Support for the gray area comes from Mint Mobile.
0:19:42 There’s nothing like the satisfaction
0:19:44 of realizing you just got an incredible deal.
0:19:47 But those little victories have gotten harder
0:19:48 and harder to find.
0:19:50 Here’s the good news though.
0:19:52 Mint Mobile is resurrecting that incredible
0:19:54 “I got a deal” feeling.
0:19:56 Right now, when you make the switch to a Mint Mobile plan,
0:19:59 you’ll pay just $15 a month when you purchase
0:20:01 a new three month phone plan.
0:20:03 All Mint Mobile plans come with high speed data
0:20:05 and unlimited talk and text delivered
0:20:08 on the nation’s largest 5G network.
0:20:10 You can even keep your phone, your contacts,
0:20:11 and your number.
0:20:13 It doesn’t get much easier than that.
0:20:15 To get this new customer offer
0:20:17 and your new three month premium wireless plan
0:20:18 for just 15 bucks a month,
0:20:21 you can go to mintmobile.com/grayarea.
0:20:24 That’s mintmobile.com/grayarea.
0:20:27 You can cut your wireless bill to 15 bucks a month
0:20:29 at mintmobile.com/grayarea.
0:20:33 $45 upfront payment required equivalent to $15 a month.
0:20:36 New customers on first three month plan only.
0:20:39 Speeds slower above 40 gigabytes on unlimited plan,
0:20:41 additional taxes, fees, and restrictions apply.
0:20:43 See Mint Mobile for details.
0:20:50 Support for the gray area comes from Cook Unity.
0:20:52 You know one way to eat chef prepared meals
0:20:53 in the comfort of your home?
0:20:56 You can spend years at culinary school,
0:20:58 work your way up the restaurant industry,
0:21:00 become a renowned chef on your own,
0:21:02 and then cook something for yourself.
0:21:04 Cook Unity delivers meals to your door
0:21:06 that are crafted by award winning chefs
0:21:10 and made with local farm fresh ingredients.
0:21:13 Cook Unity’s selection of over 350 meals
0:21:15 offers a variety of cuisines
0:21:17 and their menus are updated weekly.
0:21:18 So you’re sure to find something
0:21:21 to fit your taste and dietary needs.
0:21:22 One of our colleagues, Nisha,
0:21:24 tried Cook Unity for herself.
0:21:26 – Sometimes you’re just too tired to cook.
0:21:28 I’m a, I have a two and a half year old.
0:21:31 Sometimes you’re just exhausted at the end of the day.
0:21:33 And it’s very easy to default to take out.
0:21:35 So it was really nice to not have the mental load
0:21:37 of having it cook every day,
0:21:40 but having healthy home cooked meals
0:21:42 already prepared for you
0:21:45 and not having to go the takeout route.
0:21:48 – You can get the gift of delivering mouthwatering meals
0:21:49 crafted with local ingredients
0:21:52 and award winning chefs with Cook Unity.
0:21:55 You can go to cookunity.com/grayarea
0:21:58 or enter code grayarea before checkout
0:22:00 for 50% off your first week.
0:22:02 That’s 50% off your first week
0:22:04 by using code grayarea
0:22:07 or going to cookunity.com/grayarea.
0:22:14 – Support for the gray area comes from Shopify.
0:22:18 Viral marketing campaigns have gotten pretty wild lately.
0:22:19 Like in Russia,
0:22:22 one pizza chain offered 100 free pizzas a year
0:22:24 for 100 years to anyone
0:22:26 who got the company logo tattooed on their body.
0:22:29 Apparently 400 misguided souls did it,
0:22:33 which is a story that deserves its own podcast.
0:22:34 But if you want to grow your company
0:22:38 without resorting to a morally dubious viral scheme,
0:22:40 you might want to check out Shopify.
0:22:43 Shopify is an all-in-one digital commerce platform
0:22:46 that wants to help your business sell better than ever before.
0:22:49 Shopify says they can help you convert browsers
0:22:52 into buyers and sell more over time.
0:22:55 And their Shop Pay feature can boost conversions by 50%.
0:22:58 There’s a reason companies like Allbirds turn to Shopify
0:23:01 to sell more products to more customers.
0:23:03 Businesses that sell more sell with Shopify.
0:23:05 Want to upgrade your business
0:23:07 and get the same checkout Allbirds uses?
0:23:10 You can sign up for your $1 per month trial period
0:23:14 at Shopify.com/Vox, all lowercase.
0:23:18 That’s Shopify.com/Vox to upgrade your selling today.
0:23:20 Shopify.com/Vox.
0:23:28 (gentle music)
0:23:38 – Let’s move to another set of potential beings.
0:23:41 Your work on sentience covers artificial intelligence.
0:23:44 And one of the things that I’ve been most interested
0:23:46 in watching as the past few years
0:23:48 have really thrust a lot of questions around AI
0:23:52 into the mainstream has been this unbundling
0:23:54 of consciousness and intelligence
0:23:56 or sentience and intelligence.
0:23:59 We’re clearly getting better at creating
0:24:01 more intelligent systems that can achieve
0:24:05 and with competency perform certain tasks.
0:24:07 But it remains very unclear
0:24:09 if we’re getting any closer to sentient ones.
0:24:12 So how do you understand the relationship
0:24:15 between sentience and intelligence?
0:24:17 – I think it’s entirely possible
0:24:21 that we will get AI systems with very high levels
0:24:26 of intelligence and absolutely no sentience at all.
0:24:27 That’s entirely possible.
0:24:31 And when you think about shrimps or snails, for example,
0:24:34 we can also conceive of how there can be sentience
0:24:37 with perhaps not all that much intelligence.
0:24:40 – On another podcast, you had mentioned that
0:24:43 it might actually be easier to create AI systems
0:24:45 that are Sentient by modeling them
0:24:47 off of less intelligent systems
0:24:50 rather than just cranking up the intelligence dial
0:24:52 until it bursts through into sentience.
0:24:54 Why is that?
0:24:55 – That could absolutely be the case.
0:24:59 I see many possible pathways to sentient AI.
0:25:01 One of which is through the emulation
0:25:03 of animal nervous systems.
0:25:06 There’s a long running project called Open Worm
0:25:09 that tries to recreate the nervous system
0:25:14 of a tiny worm called C. elegans in computer software.
0:25:16 There’s not a huge amount of funding going into this
0:25:19 because it’s not seen as very lucrative,
0:25:20 just very interesting.
0:25:23 And so even with those very simple nervous systems,
0:25:25 we’re not really at the stage where we can say
0:25:27 they’ve been emulated.
0:25:28 But you can see the pathway here.
0:25:31 You know, suppose we did get an emulation
0:25:33 of a worms nervous system.
0:25:35 I’m sure we would then move on to fruit flies.
0:25:39 If that worked, researchers would be going on to open mouse,
0:25:43 open fish and emulating animal brains
0:25:45 at ever greater levels of detail.
0:25:49 And then in relation to questions of sentience,
0:25:51 we’ve got to take seriously the possibility
0:25:56 that sentience does not require a biological substrate,
0:25:59 that the stuff you’re made of might not matter.
0:26:01 It might matter, but it might not.
0:26:03 And so it might be that if you recreate
0:26:08 the same functional organization in a different substrate,
0:26:11 so no neurons of a biological kind anymore,
0:26:13 just computer software,
0:26:15 maybe you would create sentience as well.
0:26:18 – You’ve talked about this idea that you’ve called
0:26:20 the N equals one problem.
0:26:22 Can you explain what that is?
0:26:28 – Well, this is a term that began in origins of life studies,
0:26:31 where it’s people searching for extraterrestrial life
0:26:34 or studying life’s origin and asking,
0:26:37 well, we only have one case to draw on.
0:26:39 And if we only have one case,
0:26:43 how are we supposed to know what was essential to life
0:26:46 from what was a contingent feature
0:26:48 of how life was achieved on Earth?
0:26:53 And one might think we have an N equals one problem
0:26:55 with consciousness as well.
0:26:59 If you think it’s something that has only evolved once,
0:27:01 seems like you’re always gonna have problems
0:27:02 disentangling what’s essential to it
0:27:06 from what is contingent.
0:27:06 Luckily though,
0:27:09 I think we might be in an N greater than one situation
0:27:12 when it comes to sentience and consciousness
0:27:14 because of the arthropods like flies and bees
0:27:17 and because of the cephalopods and crabs.
0:27:18 And because of the cephalopods
0:27:21 like octopuses, squid, cuttlefish,
0:27:24 we might even be in an N equals three situation,
0:27:28 in which case, studying those other cases,
0:27:32 octopuses, crabs, insects has tremendous value
0:28:35 for understanding the nature of sentience
0:27:37 ’cause it can tell us,
0:27:39 it can start to give us some insight
0:27:43 into what might be essential to having it at all
0:27:45 versus what might be a quirk
0:27:48 of how it is achieved in humans.
0:27:50 – Just to make sure I have this right,
0:27:54 if we are in an N equals one scenario with sentience,
0:27:56 that means that every sentient creature evolved
0:27:59 from the same sentient ancestor.
0:28:01 It’s one evolutionary lineage.
0:28:01 – That’s right.
0:28:05 – And so sentience has only evolved once in Earth’s history
0:28:07 so it gives us one example to look at.
0:28:08 – Exactly.
0:28:10 – But if we’re not in an N equals one situation,
0:28:12 you mentioned N equals three
0:28:13 and there’s a fair bit of research
0:28:16 suggesting this could be the case or something like it,
0:28:19 then sentience has evolved three separate times
0:28:22 in three separate kinds of forms
0:28:24 and architectures of beings.
0:28:26 – That’s fascinating to me,
0:28:28 the idea that sentience could have evolved
0:28:31 independently multiple times in different ways.
0:28:34 – Yeah, we know it’s true of eyes, for example,
0:28:37 when you look at the eyes of cephalopods,
0:28:40 you see a wonderful mixture of similarities and differences.
0:28:44 So we see convergent evolution, similar thing,
0:28:49 evolving independently to solve a similar problem
0:28:51 and sentience could be just like that.
0:28:55 – The greater the number of N’s we have here,
0:28:59 the number of separate instances of sentience evolving,
0:29:03 it strikes me as that lends more credence to the idea
0:29:06 that AI could develop its own independent route
0:29:09 to Sentience as well that might not look exactly
0:29:11 like what we’ve seen in the past.
0:29:14 – It’s also the way towards really knowing
0:29:16 whether it has or not as well
0:29:19 because at present, we’re just not in that situation.
0:29:22 We’re not in a good enough position
0:29:25 to be able to really know that we’ve created sentient AI
0:29:28 even when we do, we’ll be faced
0:29:31 with horrible disorienting uncertainty.
0:29:33 But to me, the pathway towards better evidence
0:29:36 and maybe one day knowledge lies through
0:29:38 studying other animals.
0:29:41 And it lies through trying to get other N’s,
0:29:45 other independently evolved cases
0:29:48 so that we can develop theories
0:29:51 that genuinely disentangle the quirks
0:29:54 of human consciousness from what is needed
0:29:56 to be conscious at all.
0:30:01 – What kind of evidence would you find compelling
0:30:05 that tests for sentience in AI systems?
0:30:08 – It’s something I’ve been thinking about a great deal
0:30:12 because when we’re looking at the surface linguistic behavior
0:30:14 of an AI system that has been trained
0:30:18 on over a trillion words of human training data,
0:30:22 we’re clearly gonna see very fluent talking
0:30:24 about feelings and emotions.
0:30:29 And we’re already seeing that.
0:30:33 And it’s really, I would say not evidence at all
0:30:36 that the system actually has those feelings
0:30:40 because it can be explained as a kind of skillful mimicry.
0:30:43 And if that mimicry serves the system’s objectives,
0:30:45 we should expect to see it.
0:30:48 We should expect our criteria to be gamed
0:30:50 if the objectives are served by persuading
0:30:54 the human user of sentience.
0:30:56 And so this is a huge problem and it points
0:31:01 to the need to look deeper in some way.
0:31:03 These systems are very substantially opaque.
0:31:07 It is really, really hard to infer anything
0:31:10 about what the processes are inside them.
0:31:12 And so I have a second line of research as well
0:31:15 that I’ve been developing with collaborators at Google
0:31:19 that is about trying to adapt some of these animal experiments.
0:31:22 Let’s see if we can translate them over to the AI case.
0:31:24 – These are looking for behavior changes?
0:31:27 – Yeah, looking for subtle behavior changes
0:31:32 that we hope would not be gamed
0:31:35 because they’re not part of the normal repertoire
0:31:37 in which humans express their feelings,
0:31:39 but are rather these very subtle things
0:31:41 that we’ve looked for in other animals
0:31:43 because they can’t talk about their feelings
0:31:45 in the first place.
0:31:47 – So it’s funny, we’re hitting the same problem in AI
0:31:49 that we are in animals and humans,
0:31:53 which is that in both cases, there’s a black box problem
0:31:54 where we don’t actually understand
0:31:55 the inner workings to some degree.
0:31:58 – The problems are so much worse in the AI case though
0:32:03 because when you’re faced with a pattern of behavior
0:32:07 in another animal like an octopus
0:32:10 that is well explained by there being a state
0:32:14 like pain there, that is the best explanation
0:32:15 for your data.
0:32:18 And it doesn’t have to compete with this other explanation
0:32:21 that maybe the octopus read a trillion words
0:32:23 about how humans express their feelings
0:32:27 and stands to benefit from gaming our criteria
0:32:29 and skillfully mimicking us.
0:32:33 We know the octopus is not doing that, that never arises.
0:32:36 In the AI case, those two explanations always compete
0:32:39 and the second one with current systems
0:32:41 seems to be rather more plausible.
0:32:43 And in addition to that,
0:32:46 the substrate is completely different as well.
0:32:48 So we face huge challenges
0:32:49 and I suppose what I’m trying to do
0:32:51 is maintain an attitude of humility
0:32:53 in the face of those challenges.
0:32:57 Now, let’s not be credulous about this,
0:32:59 but also let’s not give up the search
0:33:02 for developing higher quality kinds of test.
0:33:14 Support for the gray area comes from green light.
0:33:28 Anyway, it applies to more than fish.
0:33:30 It’s also a great lesson for parents
0:33:32 who want their kids to learn important skills
0:33:35 that will set them up for success later in life.
0:33:37 As we enter the gifting season,
0:33:39 now might be the perfect time
0:33:41 to give your kids money skills that will last
0:33:43 well beyond the holidays.
0:33:45 That’s where green light comes in.
0:33:46 Green light is a debit card
0:33:49 and money app made specifically with families in mind.
0:33:50 Send money to your kids,
0:33:52 track their spending and saving
0:33:54 and help them develop their financial skills
0:33:57 with games aimed at building the confidence they need
0:34:00 to make wiser decisions with their money.
0:34:02 My kid is a little too young for this.
0:34:04 We’re still rocking piggy banks,
0:34:06 but I’ve got a colleague here at Vox
0:34:08 who uses it with his two boys and he loves it.
0:34:10 You can sign up for green light today
0:34:12 at greenlight.com/grayarea.
0:34:14 That’s greenlight.com/grayarea
0:34:16 to try green light today.
0:34:18 Greenlight.com/grayarea.
0:34:24 Support for the show comes from Give Well.
0:34:27 When you make a charitable donation,
0:34:30 you want to know your money is being well spent.
0:34:32 For that, you might want to try Give Well.
0:34:34 Give Well is an independent nonprofit
0:34:36 that’s spent the last 17 years
0:34:39 researching charitable organizations.
0:34:40 And they only give recommendations
0:34:44 to the highest impact causes that they vetted thoroughly.
0:34:47 According to Give Well, over 125,000 donors
0:34:50 have used it to donate more than $2 billion.
0:34:52 Rigorous evidence suggests that these donations
0:34:55 could save over 200,000 lives.
0:34:58 And Give Well wants to help you make informed decisions
0:34:59 about high impact giving.
0:35:02 So all of their research and recommendations
0:35:04 are available on their site for free.
0:35:06 You can make tax deductible donations
0:35:08 and Give Well doesn’t take a cut.
0:35:09 If you’ve never used Give Well to donate,
0:35:12 you can have your donation matched up to $100
0:35:14 before the end of the year
0:35:16 or as long as matching funds last.
0:35:18 To claim your match, you can go to GiveWell.org
0:35:22 and pick podcast and enter the gray area at checkout.
0:35:24 Make sure they know that you heard about Give Well
0:35:26 from the gray area to get your donation matched.
0:35:30 Again, that’s GiveWell.org to donate or find out more.
0:35:37 Support for the gray area comes from Delete Me.
0:35:39 Delete Me allows you to discover and control
0:35:40 your digital footprint,
0:35:43 letting you see where things like home addresses,
0:35:44 phone numbers, and even email addresses
0:35:47 are floating around on data broker sites.
0:35:50 And that means Delete Me could be the perfect holiday gift
0:35:52 for a loved one looking to help safeguard
0:35:54 their own life online.
0:35:55 Delete Me can help anyone monitor
0:35:59 and remove the personal info they don’t want on the internet.
0:36:01 Claire White, our colleague here at Vox,
0:36:04 tried Delete Me for herself and even gifted it to a friend.
0:36:08 This year, I gave two of my friends a Delete Me subscription
0:36:09 and it’s been the perfect gift
0:36:12 ’cause it’s something that will last beyond the season.
0:36:14 Delete Me will continue to remove their information
0:36:16 from online and it’s something I’ve been raving about
0:36:19 so I know that they’re gonna love it as well.
0:36:21 – This holiday season, you can give your loved ones
0:36:25 the gift of privacy and peace of mind with Delete Me.
0:36:27 Now at a special discount for our listeners.
0:36:30 Today, you can get 20% off your Delete Me plan
0:36:33 when you go to joindeleteme.com/vox
0:36:35 and use promo code Vox at checkout.
0:36:39 The only way to get 20% off is to go to joindeleteme.com/vox
0:36:42 and enter code Vox at checkout.
0:36:46 That’s joindeleteme.com/vox code Vox.
0:36:52 (gentle music)
0:37:00 – One of the major aims of your recent book
0:37:02 is to propose a framework
0:37:04 for making these kinds of practical decisions
0:37:06 about potentially sentient creatures,
0:37:09 whether it’s an animal, whether it’s AI,
0:37:11 given this uncertainty.
0:37:12 Tell me about that framework.
0:37:15 – Well, it’s a precautionary framework.
0:37:18 One of the things I urge is a pragmatic shift
0:37:20 in how we think about the question.
0:37:23 From asking, is the system sentient
0:37:26 where uncertainty will always be with us?
0:37:30 To asking instead, is the system a sentience candidate?
0:37:32 Where the concept of a sentience candidate
0:37:36 is a concept that we’ve pragmatically engineered.
0:37:39 And what it says is that a system is a sentience candidate
0:37:41 when there’s a realistic possibility of sentience
0:37:44 that it would be irresponsible to ignore.
0:37:45 And when there’s an evidence base
0:37:49 that can inform the design and assessment of precautions.
0:37:52 And because we’ve constructed the concept like that,
0:37:56 we can use current evidence to make judgments.
0:37:59 The cost of doing that is that those judgments
0:38:02 are not purely scientific judgments anymore.
0:38:05 There’s an ethical element to the judgment as well,
0:38:07 because it’s about when a realistic possibility
0:38:10 becomes irresponsible to ignore.
0:38:13 And that’s implicitly a value judgment.
0:38:15 But by reconstructing the question in that way,
0:38:16 we make it answerable.
0:38:19 – So, presumably then,
0:38:21 given your recommendation to the UK government,
0:38:25 you would say that those invertebrates you looked at
0:38:27 are sentience candidates.
0:38:29 That there’s enough evidence to at least consider
0:38:31 the possibility of sentience.
0:38:35 Where would you stop with the current category
0:38:36 of sentience candidate?
0:38:39 What is not a sentience candidate in your current view?
0:38:43 – I’ve come to the view that insects really are,
0:38:44 which surprises me.
0:38:46 You know, it would have surprised past me
0:38:49 who hadn’t read so much of the literature about insects.
0:38:53 The evidence just clearly shows a realistic possibility
0:38:55 of sentience that it would be irresponsible to ignore.
0:38:59 But really, that’s currently where I stop.
0:39:01 So the cephalopod mollusks, the decapod crustaceans,
0:39:05 the insects, a lot of evidence in those cases.
0:39:08 And I think in other invertebrates,
0:39:11 what we should say instead is that we lack the kind of
0:39:14 evidence that would be needed to effectively design
0:39:17 precautions to manage welfare risks.
0:39:21 And so the imperative there is to be getting more evidence.
0:39:24 And so in my book, I call these investigation priorities.
0:39:27 – So insects are sentience candidates.
0:39:30 Where does today’s generation of AI,
0:39:33 let’s say LLMs in particular, so OpenAI’s ChatGPT,
0:39:36 Anthropic’s Claude, are these sentience candidates
0:39:37 in your view yet?
0:39:40 – I suggest that they’re investigation priorities,
0:39:43 which is already controversial because I’m saying that,
0:39:47 well, just as with snails, we need more evidence.
0:39:49 Equally in AI, we need more evidence.
0:39:51 So I’m not being one of those people who just dismisses
0:39:56 the possibility of sentient AI as being a ridiculous one.
0:39:58 But I don’t think they’re sentience candidates
0:40:00 because we don’t have enough evidence.
0:40:02 – When you say that something is a sentience candidate,
0:40:05 it’s implying that we need to consider their welfare
0:40:08 and our behaviors and the decisions that we make.
0:40:09 – In public policy.
0:40:10 Yeah, I mean, in our personal lives,
0:40:13 we might want to be even more precautionary,
0:40:17 but I’m designing here a framework for setting policy.
0:40:19 – Right, ’cause I can imagine,
0:40:21 I think that the standard kind of line
0:40:23 that you get at this point is,
0:40:24 if you’re telling me I need to consider
0:40:26 the welfare of insects,
0:40:28 how can I take a step on the sidewalk?
0:40:30 And one of the ideas that’s central to your framework
0:40:33 is this idea of proportionality, which I really liked.
0:40:36 You talk about how the precautions that we take
0:40:39 should match the scale of the risk of suffering
0:40:41 that our actions kind of carry.
0:40:43 So how do you think about quantifying the risk
0:40:45 of suffering an action carries, right?
0:40:49 Does harming simpler creatures or insects
0:40:52 carry less risk than harming larger, more complex ones
0:40:54 like pigs or octopuses?
0:40:57 – Well, I’m opposed to trying to reduce it
0:41:00 all to a calculation and perhaps disagree
0:41:02 with some utilitarians on that point.
0:41:04 When you’re setting public policy,
0:41:08 cost-benefit analysis has its place,
0:41:10 but we’re not in that kind of situation here.
0:41:13 We’re weighing up very incommensurable things,
0:41:16 things that it’s very, very hard to compare.
0:41:18 And I think in that kind of situation,
0:41:21 you don’t want to be just making a calculation.
0:41:24 What you need to have is a democratic, inclusive process
0:41:27 through which different positions can be represented
0:41:32 and we can try to resolve our value conflicts democratically.
0:41:35 And so in the book, I advocate for citizens assemblies
0:41:39 as being the most promising way of doing this,
0:41:41 where you bring a random sample of the public
0:41:45 into an environment where they’re informed about the risks,
0:41:47 they’re informed about possible precautions,
0:41:50 and they’re given a series of tests to go through
0:41:53 to debate what they think would be proportionate
0:41:54 to those risks.
0:41:57 And things like, we’re all banned from walking now
0:41:59 because it might hurt insects.
0:42:02 I don’t see those as very likely to be judged proportionate
0:42:04 by such an exercise.
0:42:06 But other things we might do to help insects,
0:42:09 like banning certain kinds of pesticides,
0:42:12 I think might well be judged proportionate.
0:42:15 – Is this, this sounds to me almost like a form of jury duty.
0:42:17 You have a random selection of citizens brought together.
0:42:18 – Yeah.
0:42:20 – How do you, when I think about this on one hand,
0:42:21 I think it sounds lovely.
0:42:23 I like the idea of us all coming together
0:42:26 to debate the welfare of our fellow creatures.
0:42:28 It also strikes me as kind of optimistic,
0:42:32 to imagine us not only doing this, but doing it well.
0:42:35 And I’m curious how you think about balancing
0:42:38 the value of expertise in making these decisions
0:42:40 with democratic input.
0:42:44 – Yeah, I’m implicitly proposing a division of labor
0:42:47 where experts are supposed to make this judgment
0:42:51 of sentience candidature or candidacy.
0:42:53 Is the octopus a sentience candidate?
0:42:56 But then they’re not adjudicating
0:42:58 the questions of proportionality.
0:43:00 – So what to do about it?
0:43:02 – Yeah, then it would be a tyranny of expert values.
0:43:05 You’d have this question that calls for value judgments
0:43:06 about what to do.
0:43:08 And you’d be handing that over to the experts
0:43:12 and letting the experts dictate changes to our way of life.
0:43:15 That question of proportionality,
0:43:18 that should be handed over to the citizens assembly.
0:43:21 And I think it doesn’t require ordinary citizens
0:43:24 to adjudicate the scientific disagreement.
0:43:27 And that’s really crucial because if you’re asking
0:43:28 random members of the public to adjudicate
0:43:32 which brain regions they think are more important to sentience,
0:43:34 that’s gonna be a total disaster.
0:43:37 But the point is you give them questions
0:43:40 about what sorts of changes to our way of life
0:43:44 would be proportionate, would be permissible,
0:43:47 adequate, reasonably necessary and consistent
0:43:50 in relation to this risk that’s been identified.
0:43:52 And you ask them to debate those questions.
0:43:55 And I think that’s entirely feasible.
0:43:57 I’m very optimistic about citizens assemblies
0:44:00 as a mechanism for addressing that kind of question,
0:44:02 a question about our shared values.
0:44:05 – Do you see these as legally binding
0:44:07 or kind of making recommendations?
0:44:11 – I think they can only be making recommendations.
0:44:14 What I’m proposing is that on certain specific issues
0:44:17 where we think we need public input,
0:44:19 but we don’t wanna put them to a referendum
0:44:22 because we might need to revisit the issues
0:44:24 when new evidence comes to light
0:44:26 and you need a certain level of information
0:44:28 to understand what the issue is.
0:44:31 Citizens assemblies are great for those kinds of issues.
0:44:34 And because they’re very effective,
0:44:36 the recommendations they deliver
0:44:38 should be given weight by policymakers
0:44:40 and should be implemented.
0:44:43 They’re not substituting for parliamentary democracy,
0:44:46 but they’re feeding into it in a really valuable way.
0:44:51 – One thing that I can’t help but wonder about all of this,
0:44:54 humans are already incredibly cruel to animals
0:44:56 that most of us agree are very sentient,
0:44:58 I’m thinking of pigs or cows.
0:45:01 I think we’ve largely moved away
0:45:03 from Descartes in the 1600s,
0:45:06 when all animals were considered unfeeling machines.
0:45:09 Today we might disagree about how small
0:45:11 and simple down the chain we go
0:45:14 before we lose consensus on sentience,
0:45:17 but agreeing that they’re sentient
0:45:18 doesn’t seem to have prevented us
0:45:21 from doing atrocious things to many animals.
0:45:24 So I’m curious if the goal is to help guide us
0:45:27 in making more ethical decisions,
0:45:29 how do you think that determining sentience
0:45:31 in other creatures will help?
0:45:37 – You’re totally right that recognizing animals as sentient
0:45:40 does not immediately lead to behavioral change
0:45:42 to treat them better.
0:45:45 And this is the tragedy of how we treat
0:45:48 lots of mammals like pigs and birds like chickens,
0:45:51 that we recognize them as sentient beings,
0:45:54 and yet we fail them very, very seriously.
0:45:57 I think there’s a lot of research to be done
0:46:01 about what kinds of information about sentience
0:46:03 might genuinely change people’s behavior.
0:46:07 And I’m very interested in doing that kind of research
0:46:12 going forward, but with cases like octopuses,
0:46:15 at least there’s quite an opportunity
0:46:16 in this particular case, I think,
0:46:20 because you don’t have really entrenched industries
0:46:22 already farming them.
0:46:24 Part of the problem we face with the pigs and chickens
0:46:28 and so on is that in opposing these practices,
0:46:31 the enemy is very, very powerful.
0:46:33 The arguments are really easy to state
0:46:38 and people do get them and they do see why this is wrong,
0:46:41 but then the enemy is so powerful
0:46:44 that actually changing this juggernaut,
0:46:47 this leviathan is a huge challenge.
0:46:51 By contrast with invertebrate farming,
0:46:54 we’re talking about practices sometimes
0:46:57 that could become entrenched like that in the future,
0:46:59 but are not yet entrenched.
0:47:04 Octopus farming is currently on quite small scales,
0:47:06 shrimp farming is much larger,
0:47:09 insect farming is much larger,
0:47:11 but they’re not as entrenched and powerful
0:47:14 as pig farming, poultry farming.
0:47:16 And so there seem to be real opportunities here
0:47:19 to effect positive change, or at least I hope so.
0:47:21 In the octopus farming case, for example,
0:47:24 we’ve actually seen bans implemented
0:47:27 in Washington State and in California.
0:47:31 And that’s a sign that progress is really possible
0:47:32 in these cases.
0:47:35 – There are talks of banning AI development.
0:47:37 The philosopher Thomas Metzinger has famously called
0:47:41 for a ban until 2050, that might be difficult operationally,
0:47:45 but I’m curious how you think about actions we can take today
0:47:47 at the early stages of these institutions
0:47:49 that might help in the long run.
0:47:51 – Yeah, huge problems.
0:47:55 I do think Metzinger’s proposal deserves to be taken seriously,
0:48:00 but also we need to be thinking about what can we do
0:48:05 that is more easily achieved than banning this stuff,
0:48:08 but then nonetheless makes a positive difference.
0:48:11 And in the book, I suggest there might be some lessons here
0:48:14 from the regulation of animal research
0:48:17 that you can’t just do what you like,
0:48:19 experimenting on animals.
0:48:22 In the UK, at least, there’s quite a strict framework
0:48:25 requiring you to get a license.
0:48:27 And it’s not a perfect framework by any means.
0:48:29 It has a lot of problems,
0:48:33 but it does show a possible compromise
0:48:35 between simply banning something altogether
0:48:39 and allowing it to happen in a completely unregulated way.
0:48:41 And the nature of that compromise
0:48:44 is that you expect the people doing this research
0:48:47 to be transparent about their plans,
0:48:49 to reveal their plans to a regulator,
0:48:53 who is able to see them and assess the harms and benefits
0:48:54 and only give a license
0:48:57 if they think the benefits outweigh the harms.
0:49:01 And I’d like to see something like that in AI research
0:49:03 as well as in animal research.
0:49:04 – Well, it’s interesting
0:49:05 ’cause it brings us right back
0:49:06 to what you were talking about a little while ago,
0:49:10 which is, if we can’t trust the linguistic output,
0:49:12 we need the research on understanding,
0:49:14 well, how do we even assess harm and risk
0:49:16 in AI systems in the first place?
0:49:18 – As I say, it’s a huge problem coming down the road
0:49:20 for the whole of society.
0:49:24 I think there’ll be significant social divisions opening up
0:49:28 in the near future between people who are quite convinced
0:49:31 that their AI companions are sentient
0:49:33 and want rights for them
0:49:38 and others who simply find that ridiculous and absurd.
0:49:40 And I think that there’ll be a lot of tensions
0:49:42 between these two groups.
0:49:45 And in a way, the only way to really move forward
0:49:49 is to have better evidence than we do now.
0:49:52 And so there needs to be more research.
0:49:55 I’m always in this difficult position of,
0:49:57 I want more research, the tech companies might fund it,
0:49:59 I hope they will, I want them to fund it.
0:50:02 At the same time, it could be very problematic
0:50:04 for them as well.
0:50:06 And so I can’t make any promises in advance
0:50:08 that the outcomes of that research
0:50:11 will be advantageous to the tech companies.
0:50:14 So, but even though I’m in a difficult position there,
0:50:18 I feel like I still have to try and do something.
0:50:21 – Maybe by way of trying to wrap this all up,
0:50:24 you have been involved in these kinds of questions
0:50:25 for a number of years.
0:50:27 And you’ve mentioned a few times throughout the conversation
0:50:29 that you have seen a pace of change
0:50:31 that’s been kind of inspiring.
0:50:32 You’ve seen questions that previously
0:50:35 were not a part of the conversation now,
0:50:37 becoming part of the mainstream conversation.
0:50:41 So what have you seen in the last decade or two
0:50:43 in terms of the degree to which we are really beginning
0:50:44 to embrace these questions?
0:50:46 – I’ve seen some positive steps.
0:49:46 I think issues around crabs and lobsters and octopuses
0:49:53 are taken far more seriously than they were 10 years ago.
0:50:56 For example, I really did not expect that California
0:51:00 would bring in an octopus farming ban
0:51:03 and in the legislation cite our work
0:51:07 as being a key factor driving it.
0:51:08 I mean, that was extraordinary.
0:51:11 So it just goes to show that it really pays off sometimes
0:51:14 to do impact driven work.
0:51:16 I think we’ve seen over the last couple of years
0:51:19 some changes in the conversations around AI as well.
0:51:23 The book is written in a very optimistic tone, I think,
0:51:26 because well, you’ve got to hope to make it a reality.
0:51:30 You’ve got to believe in the possibility of us
0:51:34 taking steps to manage risk better than we do.
0:51:36 And the book is full of proposals
0:51:38 about how we might do that.
0:51:42 And I think at least some of these will be adopted in the future.
0:51:49 – I would love to see it, I’m optimistic as well.
0:51:52 Jonathan Birch, thank you so much for coming on the show.
0:51:53 This was a pleasure.
0:51:54 – Thanks, Hachan.
0:52:07 – Once again, the book is The Edge of Sentience,
0:52:10 which is free to read on the Oxford Academic Platform.
0:52:13 We’ll include a link to that in the show notes.
0:52:14 And that’s it.
0:52:17 I hope you enjoyed the episode as much as I did.
0:52:20 I am still thinking about whether we’re in an N equals one
0:52:23 or an N equals three world,
0:52:25 and how the future of how we look for sentience
0:52:29 in AI systems could come down to animal research
0:52:30 that helps us figure out
0:52:35 whether all animals share the same sentient ancestor,
0:52:37 or whether sentience is something
0:52:40 that’s evolved a few separate times.
0:52:43 This episode was produced by Beth Morrissey
0:52:46 and hosted by me, Oshan Jarow.
0:52:50 My day job is as a staff writer with Future Perfect at Vox,
0:52:53 where I cover the latest ideas in the science
0:52:55 and philosophy of consciousness,
0:52:57 as well as political economy.
0:53:01 You can read my stuff at vox.com/futureperfect.
0:53:04 Today’s episode was engineered by Patrick Boyd,
0:53:08 fact-checked by Anouk Dussot, edited by Jorge Just,
0:53:10 and Alex Overington wrote our theme music.
0:53:14 New episodes of The Gray Area drop on Mondays.
0:53:16 Listen and subscribe.
0:53:17 The show is part of Vox.
0:53:19 Support Vox’s journalism
0:53:22 by joining our membership program today.
0:53:25 Go to vox.com/members to sign up.
0:53:27 And if you decide to sign up because of the show,
0:53:28 let us know.
Can you ever really know what’s going on inside the mind of another creature?
In some cases, like other humans, or dogs and cats, we might be able to guess with a bit of confidence. But what about octopuses? Or insects? What about AI systems — will they ever be able to feel anything? And if they do feel anything, what are our ethical obligations toward them?
In today’s episode, Vox staff writer Oshan Jarow brings those questions to philosopher of science Jonathan Birch.
Birch is the principal investigator on the Foundations of Animal Sentience Project at the London School of Economics, and author of the recently released book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. Birch also convinced the UK government to consider lobsters, octopuses, and crabs sentient and therefore deserving of legal protection.
This unique perspective earned Jonathan a place on Vox’s Future Perfect 50 list, an annual celebration of the people working to make the future a better place. The list — published last month — includes writers, scientists, thinkers, and activists who are reshaping our world for the better.
In this conversation, Oshan and Jonathan explore everything we know — and don’t know — about sentience, and how to make ethical decisions about creatures who may possess it.
Guest host: Oshan Jarow
Guest: Jonathan Birch, Author of The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. Available for free on the Oxford Academic platform.
Learn more about your ad choices. Visit podcastchoices.com/adchoices