AI transcript
Support for this show is brought to you by Nissan Kicks.
It’s never too late to try new things
and it’s never too late to reinvent yourself.
The all-new re-imagined Nissan Kicks
is the city-sized crossover vehicle
that’s been completely revamped for urban adventure.
From the design and styling to the performance,
all the way to features
like the Bose Personal Plus sound system,
you can get closer to everything you love about city life
in the all-new re-imagined Nissan Kicks.
Learn more at www.nissanusa.com/2025-Kicks.
Available feature.
Bose is a registered trademark of the Bose Corporation.
– Your own weight loss journey is personal.
Everyone’s diet is different,
everyone’s body is different,
and according to Noom,
there is no one-size-fits-all approach.
Noom wants to help you stay focused
on what’s important to you,
with their psychology and biology-based approach.
This program helps you understand the science
behind your eating choices
and helps you build new habits for a healthier lifestyle.
Stay focused on what’s important to you
with Noom’s psychology and biology-based approach.
Sign up for your free trial today at Noom.com.
– Can you ever really know what’s going on
inside the mind of another creature?
– In some cases, like other humans or dogs and cats,
we might be able to guess with a bit of confidence,
but what about octopuses or insects?
What about AI systems?
Will they ever be able to feel anything?
Despite all of our progress in science and technology,
we still have basically no idea
how to look inside the private experiences
of other creatures.
The question of what kinds of beings can feel things
and what those feelings are really like
remains one of the biggest mysteries
in both philosophy and science.
And maybe, at some point,
we’ll develop a big new theory of consciousness
that helps us really understand the inside of other minds.
But until then, we’re stuck making guesses
and judgment calls about what other creatures can feel
and about whether certain things can feel at all.
So, where do we draw the line
of what kinds of creatures might be sentient?
And how do we figure out our ethical obligations
to creatures that remain a mystery to us?
I’m Oshan Jarrow, sitting in for Sean Illing,
and this is the Gray Area.
My guest today is philosopher of science, Jonathan Birch.
He’s the principal investigator
on the Foundations of Animal Sentience Project
at the London School of Economics,
and author of the recently released book,
The Edge of Sentience: Risk and Precaution in Humans,
Other Animals, and AI.
He also successfully convinced the UK government
to consider lobsters, octopuses, and crabs sentient
and therefore, deserving of legal protections,
which is a story that we’ll get into.
And it’s that work that earned him a place
on Vox’s Future Perfect 50 list,
a roundup of 50 of the most influential people
working to make the future a better place for everyone.
And in Birch’s case, for every sentient creature.
In this conversation, we explore everything that we do
and don’t know about sentience
and how to make decisions around it,
given all the uncertainty that we can’t yet escape.
Jonathan Birch, welcome to the Gray Area.
Thanks so much for coming on.
– Thanks for inviting me.
– So, one of the central ideas of your work
is this fuzzy idea of sentience.
And you focus on sentience across creatures,
from insects to animals,
to even potentially artificial intelligence.
And one of the challenges in that work
is defining sentience in the first place.
So, can you talk a little bit about how you’ve come
to define the term sentience?
– For me, it starts with thinking about pain
and thinking about questions like,
can an octopus feel pain?
Can a crab, can a shrimp?
And then realizing that actually pain is too narrow
for what really matters to us and that matters ethically.
Because other negative experiences matter as well,
like anxiety and boredom and frustration
that are not really forms of pain.
And then the positive side of mental life also matters.
Pleasure matters, joy, excitement.
And the advantage of the term sentience for me
is that it captures all of that.
It’s about the capacity to have
positive or negative feelings.
– The way that you define sentience
struck me as kind of basically the way
that I’ve thought about consciousness.
But in your book, you have this handy diagram
that shows how you see sentience and consciousness
as to some degree different.
So how do you understand the difference
between sentience and consciousness?
– The problem with the term consciousness, as I see it,
is that it can point to any number of other things.
Sometimes we are definitely using it
to refer to our immediate raw experience
of the present moment.
But sometimes when we’re talking about consciousness,
we’re thinking of things that are overlaid on top of that.
Herbert Feigl in the 1950s
talked about there being these three layers,
sentience, sapience and selfhood.
Where sapience is about the ability
to not just have those immediate raw experiences,
but to reflect on them.
And selfhood is something different again,
’cause it’s about awareness of yourself
as this persistent subject of the experiences
that has a past and has a future.
And when we use the term consciousness,
we might be pointing to any of these three things
or maybe the package of those three things altogether.
– So sentience is maybe a bit of a simpler,
more primitive capacity for feeling
where consciousness may include these more complex layers?
– I think of it as the base layer.
Yeah, I think of it as the most elemental,
most basic, most evolutionarily ancient
part of human consciousness
that is very likely to be shared
with a wide range of other animals.
– I do a fair bit of reporting
on these kinds of questions of consciousness and sentience.
And everyone tends to agree that it’s a mystery, right?
And so a lot of emphasis goes on
trying to dispel the mystery.
And what I found really interesting about your approach
is that you seem to take the uncertainty
in the mystery as your starting point.
And rather than focusing on how do we solve this?
How do we dispel it?
You’re trying to help us think through
how to make practical decisions given that uncertainty.
I’m curious how you came to that approach.
– Yeah, the question for me
is how do we live with this uncertainty?
How do we manage risk better than we’re doing at present?
How can we use ideas from across science and philosophy
to help us make better decisions
when faced with those problems?
And in particular to help us err on the side of caution.
– Just to maybe make it explicit,
you mentioned the risk of uncertainty.
What is the risk here?
– Well, it depends on the particular case
we’re thinking about.
One of the cases that brought me to this topic
was the practice of dropping crabs and lobsters
into pans of boiling water.
And it seems like a clear case to me
where you don’t need certainty actually.
You don’t even need knowledge.
You don’t need high probability to see the risk.
And in fact, to do sensible common sense things
to reduce that risk.
– So the risk is the suffering we’re imposing
on these other, potentially sentient creatures.
– That’s usually what looms largest for me, yeah.
The risk of doing things
that mean we end up living very badly
because we cause enormous amounts of suffering
to the creatures around us.
And you can think of that as a risk to the creatures
that end up suffering, but it’s also a risk to us.
A risk that our lives will be horrible
and destructive and absurd.
– I worry about my life being horrible
and destructive and absurd all the time.
So this is a handy way to think about it.
– We all should.
– I’d like to turn to your very practical work,
advising the UK government
on the Animal Welfare (Sentience) Act 2022.
The question was put to you
of whether certain invertebrates,
like octopuses and crabs and lobsters,
should be included and protected in the bill.
Could you just give a little context on that story
and what led the government to come
and ask you to lead a research team on that question?
– Yeah, it was indirectly a result of Brexit,
the UK leaving the European Union,
because in doing that, we left the EU’s Lisbon Treaty
that has a line in it about respecting animals
as sentient beings.
And so animal welfare organizations said to the government,
are you going to import that into UK law?
And they said, no.
And they got a lot of bad press along the lines of,
well, don’t you think animals feel pain?
And so they promised new legislation
that would restore respect for sentient beings
back to UK law.
And they produced a draft of the bill
that included vertebrate animals.
You could say that’s progressive in a way
because fishes are in there, which is great,
but it generated a lot of criticism
because of the omission of invertebrates.
And so in that context, they commissioned a team led by me
to produce a review of the evidence of sentience
in two groups of invertebrates,
the cephalopods like octopuses
and the decopod crustaceans like crabs and lobsters.
I’d already been calling for applications
of the precautionary principle to questions of sentience
and had written about that.
And I’d already established at the LSE a project
called the Foundations of Animal Sentience Project
that aims to try to place the emerging science
of animal sentience on more secure foundations,
advance it, develop better methods,
and find new ways of putting the science to work
to design better policies,
laws and ways of caring for animals.
So in a way, I was in the right place at the right time.
I was pretty ideally situated to be leading a review like this.
– How do folks actually go about trying to answer
the question of whether a given animal is or is not sentient?
– Well, in lots of different ways.
And I think when we’re looking at animals
that are relatively close to us in evolutionary terms,
like other mammals,
neuroscience is a huge part of it
because we can look for similarities of brain mechanism.
But when thinking about crabs and lobsters,
what we’re not going to find
is exactly the same brain mechanisms
because we’re separated from them
by over 500 million years of evolution.
– That’s quite a bit.
– And so I think in that context,
you can ask big picture neurological questions.
Are there integrative brain regions, for example?
But the evidence is quite limited,
and so behavior ends up carrying a huge amount of weight.
Some of the strongest evidence comes from behaviors
that show the animal valuing pain relief when injured.
So for example, there was a study by Robyn Crook
on octopuses, which is where you give the animal
a choice of two different chambers,
and you see which one it initially prefers.
And then you allow it to experience the effects
of a noxious stimulus, a nasty event.
And then in the other chamber that it initially dispreferred,
you allow it to experience the effects of an anesthetic
or a pain-relieving drug.
And then you see whether its preferences reverse.
So now going forward,
it goes to that chamber where it had a good experience
rather than the one where it had a terrible experience.
So it’s a pattern of behavior.
In ourselves, this would be explained by feeling pain
and then getting relief from the pain.
And when we see it in other mammals,
we make that same inference.
– Are there any other categories?
‘Cause we mentioned pain is one bucket of sentience,
but there’s much more to it.
Is there anything else that tends to play
a big role in the research?
– There’s much more to it.
And what I would like to see in the future
is animal sentience research moving beyond pain
and looking for other states that matter,
like joy for instance.
In practice though, by far the largest body of literature
exists for looking at markers of pain.
– I would love to read a paper that tries to assess
to what degree rats are experiencing joy
rather than pain, that would be lovely.
– I mean, studies of play behavior are very relevant here.
The studies of rats playing hide and seek for example,
where there must be something motivating
these play behaviors.
In the human case, we would call it joy, delight,
excitement, something like that.
And so it gets you taking seriously the possibility
there might be something like that in other animals too.
– I think the thing I’m actually left wondering is
what animals don’t show signs of sentience in these cases?
– Right, I mean, there’s many invertebrates
where you have an absence of evidence
’cause no one has really looked.
So snails for example, there’s frustratingly little evidence.
Also bivalve mollusks, which people talk about a lot
’cause they eat so many of them.
Very, very little evidence to base our judgments on.
And it’s hard to know what to infer from this.
There’s this slogan that absence of evidence
is not evidence of absence.
And it’s a little bit oversimplifying
’cause you sort of think, well, you know,
when researchers find some indicators of pain,
they’ve got strong motivations to press on
because it could be a useful pain model
for biomedical research.
And this is exactly what we’ve seen in insects,
particularly Drosophila fruit flies,
that seeing some of those initial markers
has led scientists to think, well, let’s go for this.
And it turns out they’re surprisingly useful pain models.
– A pain model for humans?
– Right, exactly.
Yeah, traditionally biomedical researchers have used rats
and there’s pressure to replace them.
I don’t personally think that replacement here
should mean replacing mammals with invertebrates.
It’s not really the kind of replacement that I support,
but that is how a lot of scientists understand it.
And so they’re looking for ways to replace rats with flies.
– How do they decide
that the fly is a good pain model for humans?
– I mean, researchers have the ability
to manipulate the genetics of flies
at very, very fine grains using astonishing technologies.
So there was a recent paper that basically installed
in some flies a sensitivity to chili heat.
Which of course in us, over a certain threshold,
this becomes painful.
So if you have one of the hottest chilies in the world,
you’re not gonna just carry on as normal.
– Certainly not.
– And they showed that the same behavior
can be produced in flies.
You can engineer them to be responsive to chili
and then you can dial up the amount of capsaicin
in the food they’re eating.
And there’ll come a point where they just stop eating
and withdraw from food, even though it leads them to starve.
And it’s things like this that are leading researchers
to say, wow, the mechanisms here are mechanisms
we can use for testing out potential pain-relieving drugs.
And the fruit flies are a standard model organism,
as they say in science.
So there’s countless numbers of them,
but traditionally they’ve been studied
for genetics primarily.
People haven’t been thinking of them
as model systems of cognitive functions
or of sentience or of pain or of sociality.
And they’re realizing to their surprise
that they’re very good models of all of these things.
And then your question is, well,
why is it such a good model of these things?
Could it be in fact that it possesses sentience of some kind?
– I don’t wanna go too far down this rabbit hole
’cause I could spend hours asking you about this.
Let’s swing back to your research on the UK’s Act for a second.
You wound up recommending that the invertebrates
you looked at should be included.
And you mentioned this included, you know, octopuses,
which to me seems straightforward.
These seem very intelligent and playful.
I don’t need a lot of research to convince me of that.
But you recommended things like, you know, crabs and lobsters
and things where maybe people’s intuitions differ
a little bit in practical terms.
What changed for the life of a crab
after the UK did formally include them in the bill?
How does that wind up benefiting crabs?
– It’s a topic of ongoing discussion, basically,
’cause what this new act does
is it creates a duty on policymakers
to consider the animal welfare consequences
of their decisions, including to crabs.
Now, we recommended, don’t just put crabs
in this particular act.
Also, amend the UK’s other animal welfare laws
to be consistent with the new act.
And this we’ve not yet seen.
So we’re really hoping that this will happen
and will happen in the near future.
And it’s something that definitely should happen.
‘Cause in the meantime, we’ve got a rather confusing picture
where you have these other laws that say
animals should not be caused unnecessary suffering
when they’re killed and people should require training
if they’re going to slaughter animals.
And then you have this new law that says
for legal purposes, decapod crustaceans
are to be considered animals.
And as a philosopher, I’m always thinking,
well, read these two things together
and think about what they logically imply
when written together.
And lawyers don’t like that kind of argument.
Lawyers want a clear precedent
where there’s been some kind of test case
that has convicted someone for boiling a lobster alive
or something like that.
And that’s what we’ve not yet had.
So I’m hoping that lawmakers will act
to clarify that situation.
To me, it’s kind of clear.
How much clearer could it be?
This method quite obviously causes unnecessary suffering.
And it’s illegal to do that to any animal,
including crabs.
But in practice, because it’s not explicitly ruled out,
it’s not quite good enough at the moment.
We wanna see this explicitly ruled out.
– So we’ll take incremental steps to get there.
– Yeah, in a way, I’m glad people take this issue
seriously at all.
I didn’t really expect that when I started working on it.
And so to have achieved any policy change that benefits
crabs and lobsters in any way,
I’ve gotta count that as a win.
– Support for the Gray Area comes from Mint Mobile.
There’s nothing like the satisfaction
of realizing you just got an incredible deal.
But those little victories have gotten harder
and harder to find.
Here’s the good news though.
Mint Mobile is resurrecting that incredible
“I got a deal” feeling.
Right now, when you make the switch to a Mint Mobile plan,
you’ll pay just $15 a month when you purchase
a new three month phone plan.
All Mint Mobile plans come with high speed data
and unlimited talk and text delivered
on the nation’s largest 5G network.
You can even keep your phone, your contacts,
and your number.
It doesn’t get much easier than that.
To get this new customer offer
and your new three month premium wireless plan
for just 15 bucks a month,
you can go to mintmobile.com/grayarea.
That’s mintmobile.com/grayarea.
You can cut your wireless bill to 15 bucks a month
at mintmobile.com/grayarea.
$45 upfront payment required equivalent to $15 a month.
New customers on first three month plan only.
Speed slower above 40 gigabytes on unlimited plan,
additional taxes, fees, and restrictions apply.
See Mint Mobile for details.
Support for the Gray Area comes from CookUnity.
You know one way to eat chef prepared meals
in the comfort of your home?
You can spend years at culinary school,
work your way up the restaurant industry,
become a renowned chef on your own,
and then cook something for yourself.
CookUnity delivers meals to your door
that are crafted by award winning chefs
and made with local farm fresh ingredients.
CookUnity’s selection of over 350 meals
offers a variety of cuisines
and their menus are updated weekly.
So you’re sure to find something
to fit your taste and dietary needs.
One of our colleagues, Nisha,
tried CookUnity for herself.
– Sometimes you’re just too tired to cook.
I have a two and a half year old.
Sometimes you’re just exhausted at the end of the day.
And it’s very easy to default to take out.
So it was really nice to not have the mental load
of having to cook every day,
but having healthy home cooked meals
already prepared for you
and not having to go the takeout route.
– You can give the gift of mouthwatering meals
crafted with local ingredients
by award-winning chefs with CookUnity.
You can go to cookunity.com/grayarea
or enter code grayarea before checkout
for 50% off your first week.
That’s 50% off your first week
by using code grayarea
or going to cookunity.com/grayarea.
– Support for the Gray Area comes from Shopify.
Viral marketing campaigns have gotten pretty wild lately.
Like in Russia,
one pizza chain offered 100 free pizzas a year
for 100 years to anyone
who got the company logo tattooed on their body.
Apparently 400 misguided souls did it,
which is a story that deserves its own podcast.
But if you want to grow your company
without resorting to a morally dubious viral scheme,
you might want to check out Shopify.
Shopify is an all-in-one digital commerce platform
that wants to help your business sell better than ever before.
Shopify says they can help you convert browsers
into buyers and sell more over time.
And their Shop Pay feature can boost conversions by 50%.
There’s a reason companies like Allbirds turn to Shopify
to sell more products to more customers.
Businesses that sell more sell with Shopify.
Want to upgrade your business
and get the same checkout Allbirds uses?
You can sign up for your $1 per month trial period
at Shopify.com/Vox, all lowercase.
That’s Shopify.com/Vox to upgrade your selling today.
Shopify.com/Vox.
(gentle music)
– Let’s move to another set of potential beings.
Your work on sentience covers artificial intelligence.
And one of the things that I’ve been most interested
in watching as the past few years
have really thrust a lot of questions around AI
into the mainstream has been this unbundling
of consciousness and intelligence
or sentience and intelligence.
We’re clearly getting better at creating
more intelligent systems that can achieve
and with competency perform certain tasks.
But it remains very unclear
if we’re getting any closer to sentient ones.
So how do you understand the relationship
between sentience and intelligence?
– I think it’s entirely possible
that we will get AI systems with very high levels
of intelligence and absolutely no sentience at all.
That’s entirely possible.
And when you think about shrimps or snails, for example,
we can also conceive of how there can be sentience
with perhaps not all that much intelligence.
– On another podcast, you had mentioned that
it might actually be easier to create AI systems
that are sentient by modeling them
off of less intelligent systems
rather than just cranking up the intelligence dial
until it bursts through into sentience.
Why is that?
– That could absolutely be the case.
I see many possible pathways to sentient AI.
One of which is through the emulation
of animal nervous systems.
There’s a long-running project called OpenWorm
that tries to recreate the nervous system
of a tiny worm called C. elegans in computer software.
There’s not a huge amount of funding going into this
because it’s not seen as very lucrative,
just very interesting.
And so even with those very simple nervous systems,
we’re not really at the stage where we can say
they’ve been emulated.
But you can see the pathway here.
You know, suppose we did get an emulation
of a worm’s nervous system.
I’m sure we would then move on to fruit flies.
If that worked, researchers would be going on to open mouse,
open fish and emulating animal brains
at ever greater levels of detail.
And then in relation to questions of sentience,
we’ve got to take seriously the possibility
that sentience does not require a biological substrate,
that the stuff you’re made of might not matter.
It might matter, but it might not.
And so it might be that if you recreate
the same functional organization in a different substrate,
so no neurons of a biological kind anymore,
just computer software,
maybe you would create sentience as well.
– You’ve talked about this idea that you’ve called
the N equals one problem.
Can you explain what that is?
– Well, this is a term that began in origins of life studies,
where it’s people searching for extraterrestrial life
or studying life’s origin and asking,
well, we only have one case to draw on.
And if we only have one case,
how are we supposed to know what was essential to life
from what was a contingent feature
of how life was achieved on Earth?
And one might think we have an N equals one problem
with consciousness as well.
If you think it’s something that has only evolved once,
seems like you’re always gonna have problems
disentangling what’s essential to it
from what is contingent.
Luckily though,
I think we might be in an N greater than one situation
when it comes to sentience and consciousness,
because of the arthropods like flies and bees and crabs,
and because of the cephalopods
like octopuses, squid, and cuttlefish,
we might even be in an N equals three situation,
in which case, studying those other cases,
octopuses, crabs, insects, has tremendous value
for understanding the nature of sentience
’cause it can tell us,
it can start to give us some insight
into what might be essential to having it at all
versus what might be a quirk
of how it is achieved in humans.
– Just to make sure I have this right,
if we are in an N equals one scenario with Sentience,
that means that every sentient creature evolved
from the same sentient ancestor.
It’s one evolutionary lineage.
– That’s right.
– And so sentience has only evolved once in Earth’s history,
so it gives us one example to look at.
– Exactly.
– But if we’re not in an N equals one situation,
you mentioned N equals three
and there’s a fair bit of research
suggesting this could be the case or something like it,
then sentience has evolved three separate times
in three separate kinds of bodily forms
and architectures.
– That’s fascinating to me,
the idea that sentience could have evolved
independently multiple times in different ways.
– Yeah, we know it’s true of eyes, for example,
when you look at the eyes of cephalopods,
you see a wonderful mixture of similarities and differences.
So we see convergent evolution, similar thing,
evolving independently to solve a similar problem
and sentience could be just like that.
– The greater the number of N’s we have here,
the number of separate instances of sentience evolving,
it strikes me that that lends more credence to the idea
that AI could develop its own independent route
to sentience as well that might not look exactly
like what we’ve seen in the past.
– It’s also the way towards really knowing
whether it has or not as well
because at present, we’re just not in that situation.
We’re not in a good enough position
to be able to really know that we’ve created sentient AI
even when we do, we’ll be faced
with horrible disorienting uncertainty.
But to me, the pathway towards better evidence
and maybe one day knowledge lies through
studying other animals.
And it lies through trying to get other N’s,
other independently evolved cases
so that we can develop theories
that genuinely disentangle the quirks
of human consciousness from what is needed
to be conscious at all.
– What kind of evidence would you find compelling
that tests for sentience in AI systems?
– It’s something I’ve been thinking about a great deal
because when we’re looking at the surface linguistic behavior
of an AI system that has been trained
on over a trillion words of human training data,
we’re clearly gonna see very fluent talk
about feelings and emotions.
And we’re already seeing that.
And it’s really, I would say not evidence at all
that the system actually has those feelings
because it can be explained as a kind of skillful mimicry.
And if that mimicry serves the system’s objectives,
we should expect to see it.
We should expect our criteria to be gamed
if the objectives are served by persuading
the human user of sentience.
And so this is a huge problem and it points
to the need to look deeper in some way.
These systems are very substantially opaque.
It is really, really hard to infer anything
about what the processes are inside them.
And so I have a second line of research as well
that I’ve been developing with collaborators at Google
that is about trying to adapt some of these animal experiments.
Let’s see if we can translate them over to the AI case.
– These are looking for behavior changes?
– Yeah, looking for subtle behavior changes
that we hope would not be gamed
because they’re not part of the normal repertoire
in which humans express their feelings,
but are rather these very subtle things
that we’ve looked for in other animals
because they can’t talk about their feelings
in the first place.
– So it’s funny, we’re hitting the same problem in AI
that we are in animals and humans,
which is that in both cases, there’s a black box problem
where we don’t actually understand
the inner workings to some degree.
– The problems are so much worse in the AI case though
because when you’re faced with a pattern of behavior
in another animal like an octopus
that is well explained by there being a state
like pain there, that is the best explanation
for your data.
And it doesn’t have to compete with this other explanation
that maybe the octopus read a trillion words
about how humans express their feelings
and stands to benefit from gaming our criteria
and skillfully mimicking us.
We know the octopus is not doing that, that never arises.
In the AI case, those two explanations always compete
and the second one with current systems
seems to be rather more plausible.
And in addition to that,
the substrate is completely different as well.
So we face huge challenges
and I suppose what I’m trying to do
is maintain an attitude of humility
in the face of those challenges.
Now, let’s not be credulous about this,
but also let’s not give up the search
for developing higher quality kinds of test.
Support for the Gray Area comes from Greenlight.
Anyway, it applies to more than fish.
It’s also a great lesson for parents
who want their kids to learn important skills
that will set them up for success later in life.
As we enter the gifting season,
now might be the perfect time
to give your kids money skills that will last
well beyond the holidays.
That’s where Greenlight comes in.
Greenlight is a debit card
and money app made specifically with families in mind.
Send money to your kids,
track their spending and saving
and help them develop their financial skills
with games aimed at building the confidence they need
to make wiser decisions with their money.
My kid is a little too young for this.
We’re still rocking piggy banks,
but I’ve got a colleague here at Vox
who uses it with his two boys and he loves it.
You can sign up for Greenlight today
at greenlight.com/grayarea.
That’s greenlight.com/grayarea
to try Greenlight today.
Greenlight.com/grayarea.
Support for the show comes from GiveWell.
When you make a charitable donation,
you want to know your money is being well spent.
For that, you might want to try GiveWell.
GiveWell is an independent nonprofit
that’s spent the last 17 years
researching charitable organizations.
And they only give recommendations
to the highest-impact causes that they’ve vetted thoroughly.
According to GiveWell, over 125,000 donors
have used it to donate more than $2 billion.
Rigorous evidence suggests that these donations
could save over 200,000 lives.
And GiveWell wants to help you make informed decisions
about high impact giving.
So all of their research and recommendations
are available on their site for free.
You can make tax deductible donations
and GiveWell doesn’t take a cut.
If you’ve never used GiveWell to donate,
you can have your donation matched up to $100
before the end of the year
or as long as matching funds last.
To claim your match, you can go to GiveWell.org
and pick “podcast” and enter “The Gray Area” at checkout.
Make sure they know that you heard about GiveWell
from the Gray Area to get your donation matched.
Again, that’s GiveWell.org to donate or find out more.
Support for The Gray Area comes from DeleteMe.
DeleteMe allows you to discover and control
your digital footprint,
letting you see where things like home addresses,
phone numbers, and even email addresses
are floating around on data broker sites.
And that means DeleteMe could be the perfect holiday gift
for a loved one looking to help safeguard
their own life online.
DeleteMe can help anyone monitor
and remove the personal info they don’t want on the internet.
Claire White, our colleague here at Vox,
tried DeleteMe for herself and even gifted it to a friend.
– This year, I gave two of my friends a DeleteMe subscription,
and it’s been the perfect gift
’cause it’s something that will last beyond the season.
DeleteMe will continue to remove their information
from online, and it’s something I’ve been raving about,
so I know that they’re gonna love it as well.
– This holiday season, you can give your loved ones
the gift of privacy and peace of mind with DeleteMe.
Now at a special discount for our listeners.
Today, you can get 20% off your DeleteMe plan
when you go to joindeleteme.com/vox
and use promo code VOX at checkout.
The only way to get 20% off is to go to joindeleteme.com/vox
and enter code VOX at checkout.
That’s joindeleteme.com/vox, code VOX.
(gentle music)
– One of the major aims of your recent book
is to propose a framework
for making these kinds of practical decisions
about potentially sentient creatures,
whether it’s an animal, whether it’s AI,
given this uncertainty.
Tell me about that framework.
– Well, it’s a precautionary framework.
One of the things I urge is a pragmatic shift
in how we think about the question.
From asking, is the system sentient,
where uncertainty will always be with us,
to asking instead, is the system a sentience candidate?
Where the concept of a sentience candidate
is a concept that we’ve pragmatically engineered.
And what it says is that a system is a sentience candidate
when there’s a realistic possibility of sentience
that it would be irresponsible to ignore.
And when there’s an evidence base
that can inform the design and assessment of precautions.
And because we’ve constructed the concept like that,
we can use current evidence to make judgments.
The cost of doing that is that those judgments
are not purely scientific judgments anymore.
There’s an ethical element to the judgment as well,
because it’s about when a realistic possibility
becomes irresponsible to ignore.
And that’s implicitly a value judgment.
But by reconstructing the question in that way,
we make it answerable.
– So, presumably then,
given your recommendation to the UK government,
you would say that those invertebrates you looked at
are sentience candidates.
That there’s enough evidence to at least consider
the possibility of sentience.
Where would you stop with the current category
of sentience candidate?
What is not a sentience candidate in your current view?
– I’ve come to the view that insects really are,
which surprises me.
You know, it would have surprised past me
who hadn’t read so much of the literature about insects.
The evidence just clearly shows a realistic possibility
of sentience that it would be irresponsible to ignore.
But really, that’s currently where I stop.
So the cephalopod mollusks, the decapod crustaceans,
the insects, there’s a lot of evidence in those cases.
And I think in other invertebrates,
what we should say instead is that we lack the kind of
evidence that would be needed to effectively design
precautions to manage welfare risks.
And so the imperative there is to be getting more evidence.
And so in my book, I call these investigation priorities.
– So insects are sentience candidates.
Where does today’s generation of AI,
let’s say LLMs in particular, so OpenAI’s ChatGPT,
Anthropic’s Claude, are these sentience candidates
in your view yet?
– I suggest that they’re investigation priorities,
which is already controversial because I’m saying that,
well, just as with snails, we need more evidence.
Equally in AI, we need more evidence.
So I’m not being one of those people who just dismisses
the possibility of sentient AI as being a ridiculous one.
But I don’t think they’re sentience candidates,
because we don’t have enough evidence.
– When you say that something is a sentience candidate,
it’s implying that we need to consider their welfare
in our behaviors and the decisions that we make.
– In public policy.
Yeah, I mean, in our personal lives,
we might want to be even more precautionary,
but I’m designing here a framework for setting policy.
– Right, ’cause I can imagine,
I think that the standard kind of line
that you get at this point is,
if you’re telling me I need to consider
the welfare of insects,
how can I take a step on the sidewalk?
And one of the ideas that’s central to your framework
is this idea of proportionality, which I really liked.
You talk about how the precautions that we take
should match the scale of the risk of suffering
that our actions kind of carry.
So how do you think about quantifying the risk
of suffering an action carries, right?
Does harming simpler creatures or insects
carry less risk than harming larger, more complex ones
like pigs or octopuses?
– Well, I’m opposed to trying to reduce it
all to a calculation, and perhaps disagree
with some utilitarians on that point.
When you’re setting public policy,
cost-benefit analysis has its place,
but we’re not in that kind of situation here.
We’re weighing up very incommensurable things,
things that it’s very, very hard to compare.
And I think in that kind of situation,
you don’t want to be just making a calculation.
What you need to have is a democratic, inclusive process
through which different positions can be represented
and we can try to resolve our value conflicts democratically.
And so in the book, I advocate for citizens assemblies
as being the most promising way of doing this,
where you bring a random sample of the public
into an environment where they’re informed about the risks,
they’re informed about possible precautions,
and they’re given a series of tests to go through
to debate what they think would be proportionate
to those risks.
And things like, we’re all banned from walking now
because it might hurt insects.
I don’t see those as very likely to be judged proportionate
by such an exercise.
But other things we might do to help insects,
like banning certain kinds of pesticides,
I think might well be judged proportionate.
– Is this, this sounds to me almost like a form of jury duty.
You have a random selection of citizens brought together.
– Yeah.
– How do you, when I think about this on one hand,
I think it sounds lovely.
I like the idea of us all coming together
to debate the welfare of our fellow creatures.
It also strikes me as kind of optimistic,
to imagine us not only doing this, but doing it well.
And I’m curious how you think about balancing
the value of expertise in making these decisions
with democratic input.
– Yeah, I’m implicitly proposing a division of labor
where experts are supposed to make this judgment
of sentience candidature or candidacy.
Is the octopus a sentience candidate?
But then they’re not adjudicating
the questions of proportionality.
– So what to do about it?
– Yeah, then it would be a tyranny of expert values.
You’d have this question that calls for value judgments
about what to do.
And you’d be handing that over to the experts
and letting the experts dictate changes to our way of life.
That question of proportionality,
that should be handed over to the citizens assembly.
And I think it doesn’t require ordinary citizens
to adjudicate the scientific disagreement.
And that’s really crucial because if you’re asking
random members of the public to adjudicate
which brain regions they think are more important to sentience,
that’s gonna be a total disaster.
But the point is you give them questions
about what sorts of changes to our way of life
would be proportionate, would be permissible,
adequate, reasonably necessary and consistent
in relation to this risk that’s been identified.
And you ask them to debate those questions.
And I think that’s entirely feasible.
I’m very optimistic about citizens assemblies
as a mechanism for addressing that kind of question,
a question about our shared values.
– Do you see these as legally binding
or kind of making recommendations?
– I think they can only be making recommendations.
What I’m proposing is that on certain specific issues
where we think we need public input,
but we don’t wanna put them to a referendum
because we might need to revisit the issues
when new evidence comes to light
and you need a certain level of information
to understand what the issue is.
Citizens assemblies are great for those kinds of issues.
And because they’re very effective,
the recommendations they deliver
should be given weight by policymakers
and should be implemented.
They’re not substituting for parliamentary democracy,
but they’re feeding into it in a really valuable way.
– One thing that I can’t help but wonder about all of this,
humans are already incredibly cruel to animals
that most of us agree are very sentient,
I’m thinking of pigs or cows.
I think we’ve largely moved away from
Descartes in the 1600s,
when all animals were considered unfeeling machines.
Today we might disagree about how small
and simple down the chain we go
before we lose consensus on sentience,
but agreeing that they’re sentient
doesn’t seem to have prevented us
from doing atrocious things to many animals.
So I’m curious if the goal is to help guide us
in making more ethical decisions,
how do you think that determining sentience
in other creatures will help?
– You’re totally right that recognizing animals as sentient
does not immediately lead to behavioral change
to treat them better.
And this is the tragedy of how we treat
lots of mammals like pigs and birds like chickens,
that we recognize them as sentient beings,
and yet we fail them very, very seriously.
I think there’s a lot of research to be done
about what kinds of information about sentience
might genuinely change people’s behavior.
And I’m very interested in doing that kind of research
going forward, but with cases like octopuses,
at least there’s quite an opportunity
in this particular case, I think,
because you don’t have really entrenched industries
already farming them.
Part of the problem we face with the pigs and chickens
and so on is that in opposing these practices,
the enemy is very, very powerful.
The arguments are really easy to state
and people do get them and they do see why this is wrong,
but then the enemy is so powerful
that actually changing this juggernaut,
this leviathan is a huge challenge.
By contrast with invertebrate farming,
we’re talking about practices sometimes
that could become entrenched like that in the future,
but are not yet entrenched.
Octopus farming is currently on quite small scales,
shrimp farming is much larger,
insect farming is much larger,
but they’re not as entrenched and powerful
as pig farming, poultry farming.
And so there seem to be real opportunities here
to effect positive change, or at least I hope so.
In the octopus farming case, for example,
we’ve actually seen bans implemented
in Washington State and in California.
And that’s a sign that progress is really possible
in these cases.
– There are talks of banning AI development.
The philosopher Thomas Metzinger has famously called
for a ban until 2050. That might be difficult operationally,
but I’m curious how you think about actions we can take today
at the early stages of these institutions
that might help in the long run.
– Yeah, huge problems.
I do think Metzinger’s proposal deserves to be taken seriously,
but also we need to be thinking about what can we do
that is more easily achieved than banning this stuff,
but then nonetheless makes a positive difference.
And in the book, I suggest there might be some lessons here
from the regulation of animal research
in that you can’t just do what you like
when experimenting on animals.
In the UK, at least, there’s quite a strict framework
requiring you to get a license.
And it’s not a perfect framework by any means.
It has a lot of problems,
but it does show a possible compromise
between simply banning something altogether
and allowing it to happen in a completely unregulated way.
And the nature of that compromise
is that you expect the people doing this research
to be transparent about their plans,
to reveal their plans to a regulator,
who is able to see them and assess the harms and benefits,
and only give a license
if they think the benefits outweigh the harms.
And I’d like to see something like that in AI research
as well as in animal research.
– Well, it’s interesting
’cause it brings us right back
to what you were talking about a little while ago,
which is, if we can’t trust the linguistic output,
we need the research on understanding,
well, how do we even assess harm and risk
in AI systems in the first place?
– As I say, it’s a huge problem coming down the road
for the whole of society.
I think there’ll be significant social divisions opening up
in the near future between people who are quite convinced
that their AI companions are sentient
and want rights for them
and others who simply find that ridiculous and absurd.
And I think that there’ll be a lot of tensions
between these two groups.
And in a way, the only way to really move forward
is to have better evidence than we do now.
And so there needs to be more research.
I’m always in this difficult position of,
I want more research, the tech companies might fund it,
I hope they will, I want them to fund it.
At the same time, it could be very problematic
for them as well.
And so I can’t make any promises in advance
that the outcomes of that research
will be advantageous to the tech companies.
So, but even though I’m in a difficult position there,
I feel like I still have to try and do something.
– Maybe by way of trying to wrap this all up,
you have been involved in these kinds of questions
for a number of years.
And you’ve mentioned a few times throughout the conversation
that you have seen a pace of change
that’s been kind of inspiring.
You’ve seen questions that previously
were not a part of the conversation now,
becoming part of the mainstream conversation.
So what have you seen in the last decade or two
in terms of the degree to which we are really beginning
to embrace these questions?
– I’ve seen some positive steps.
I think issues around crabs and lobsters and octopuses
are taken far more seriously than they were 10 years ago.
For example, I really did not expect that California
would bring in an octopus farming ban
and in the legislation cite our work
as being a key factor driving it.
I mean, that was extraordinary.
So it just goes to show that it really pays off sometimes
to do impact driven work.
I think we’ve seen over the last couple of years
some changes in the conversations around AI as well.
The book is written in a very optimistic tone, I think,
because well, you’ve got to hope to make it a reality.
You’ve got to believe in the possibility of us
taking steps to manage risk better than we do.
And the book is full of proposals
about how we might do that.
And I think at least some of these will be adopted in the future.
– I would love to see it, I’m optimistic as well.
Jonathan Birch, thank you so much for coming on the show.
This was a pleasure.
– Thanks, Oshan.
– Once again, the book is The Edge of Sentience,
which is free to read on the Oxford Academic platform.
We’ll include a link to that in the show notes.
And that’s it.
I hope you enjoyed the episode as much as I did.
I am still thinking about whether we’re in an N equals one
or an N equals three world,
and how the future of how we look for sentience
in AI systems could come down to animal research
that helps us figure out
whether all animals share the same sentient ancestor,
or whether sentience is something
that’s evolved a few separate times.
This episode was produced by Beth Morrissey
and hosted by me, Oshan Jarow.
My day job is as a staff writer with Future Perfect at Vox,
where I cover the latest ideas in the science
and philosophy of consciousness,
as well as political economy.
You can read my stuff at vox.com/future-perfect.
Today’s episode was engineered by Patrick Boyd,
fact-checked by Anouk Dussot, edited by Jorge Just,
and Alex Overington wrote our theme music.
New episodes of The Gray Area drop on Mondays.
Listen and subscribe.
The show is part of Vox.
Support Vox’s journalism
by joining our membership program today.
Go to vox.com/members to sign up.
And if you decide to sign up because of the show,
let us know.
So, where do we draw the line
of what kinds of creatures might be sentient?
And how do we figure out our ethical obligations
to creatures that remain a mystery to us?
I’m Oshan Jarow, sitting in for Sean Illing,
and this is The Gray Area.
My guest today is philosopher of science, Jonathan Birch.
He’s the principal investigator
on the Foundations of Animal Sentience Project
at the London School of Economics,
and author of the recently released book,
The Edge of Sentience: Risk and Precaution in Humans,
Other Animals, and AI.
He also successfully convinced the UK government
to consider lobsters, octopuses, and crabs sentient
and therefore, deserving of legal protections,
which is a story that we’ll get into.
And it’s that work that earned him a place
on Vox’s Future Perfect 50 list,
a roundup of 50 of the most influential people
working to make the future a better place for everyone.
And in Birch’s case, for every sentient creature.
In this conversation, we explore everything that we do
and don’t know about sentience
and how to make decisions around it,
given all the uncertainty that we can’t yet escape.
Jonathan Birch, welcome to the Gray Area.
Thanks so much for coming on.
– Thanks for inviting me.
– So, one of the central ideas of your work
is this fuzzy idea of sentience.
And you focus on sentience across creatures,
from insects to animals,
to even potentially artificial intelligence.
And one of the challenges in that work
is defining sentience in the first place.
So, can you talk a little bit about how you’ve come
to define the term sentience?
– For me, it starts with thinking about pain
and thinking about questions like,
can an octopus feel pain?
Can a crab, can a shrimp?
And then realizing that actually pain is too narrow
for what really matters to us and that matters ethically.
Because other negative experiences matter as well,
like anxiety and boredom and frustration
that are not really forms of pain.
And then the positive side of mental life also matters.
Pleasure matters, joy, excitement.
And the advantage of the term sentience for me
is that it captures all of that.
It’s about the capacity to have
positive or negative feelings.
– The way that you define sentience
struck me as kind of basically the way
that I’ve thought about consciousness.
But in your book, you have this handy diagram
that shows how you see sentience and consciousness
as to some degree different.
So how do you understand the difference
between sentience and consciousness?
– The problem with the term consciousness, as I see it,
is that it can point to any other number of things.
Sometimes we are definitely using it
to refer to our immediate raw experience
of the present moment.
But sometimes when we’re talking about consciousness,
we’re thinking of things that are overlaid on top of that.
Herbert Feigl in the 1950s
talked about there being these three layers,
sentience, sapience and selfhood.
Where sapience is about the ability
to not just have those immediate raw experiences,
but to reflect on them.
And selfhood is something different again,
’cause it’s about awareness of yourself
as this persistent subject of the experiences
that has a past and has a future.
And when we use the term consciousness,
we might be pointing to any of these three things
or maybe the package of those three things altogether.
– So sentience is maybe a bit of a simpler,
more primitive capacity for feeling
where consciousness may include these more complex layers?
– I think of it as the base layer.
Yeah, I think of it as the most elemental,
most basic, most evolutionarily ancient
part of human consciousness
that is very likely to be shared
with a wide range of other animals.
– I do a fair bit of reporting
on these kinds of questions of consciousness and sentience.
And everyone tends to agree that it’s a mystery, right?
And so a lot of emphasis goes on
trying to dispel the mystery.
And what I found really interesting about your approach
is that you seem to take the uncertainty
in the mystery as your starting point.
And rather than focusing on how do we solve this?
How do we dispel it?
You’re trying to help us think through
how to make practical decisions given that uncertainty.
I’m curious how you came to that approach.
– Yeah, the question for me
is how do we live with this uncertainty?
How do we manage risk better than we’re doing at present?
How can we use ideas from across science and philosophy
to help us make better decisions
when faced with those problems?
And in particular to help us err on the side of caution.
– Just to maybe make it explicit,
you mentioned the risk of uncertainty.
What is the risk here?
– Well, it depends on the particular case
we’re thinking about.
One of the cases that brought me to this topic
was the practice of dropping crabs and lobsters
into pans of boiling water.
And it seems like a clear case to me
where you don’t need certainty actually.
You don’t even need knowledge.
You don’t need high probability to see the risk.
And in fact, to do sensible common sense things
to reduce that risk.
– So the risk is the suffering we’re imposing
on these potentially other sentient creatures.
– That’s usually what looms largest for me, yeah.
The risk of doing things
that mean we end up living very badly
because we cause enormous amounts of suffering
to the creatures around us.
And you can think of that as a risk to the creatures
that end up suffering, but it’s also a risk to us.
A risk that our lives will be horrible
and destructive and absurd.
– I worry about my life being horrible
and destructive and absurd all the time.
So this is a handy way to think about it.
– We all should.
– I’d like to turn to your very practical work,
advising the UK government
on the Animal Welfare (Sentience) Act 2022.
The question was put to you
of whether they should consider certain invertebrates
like octopus and crabs and lobsters,
whether they should be included and protected in the bill.
Could you just give a little context on that story
and what led the government to come
and ask you to lead a research team on that question?
– Yeah, it was indirectly a result of Brexit,
the UK leaving the European Union,
because in doing that, we left the EU’s Lisbon Treaty
that has a line in it about respecting animals
as sentient beings.
And so animal welfare organizations said to the government,
are you going to import that into UK law?
And they said, no.
And they got a lot of bad press along the lines of,
well, don’t you think animals feel pain?
And so they promised new legislation
that would restore respect for sentient beings
back to UK law.
And they produced a draft of the bill
that included vertebrate animals.
You could say that’s progressive in a way
because fishes are in there, which is great,
but it generated a lot of criticism
because of the omission of invertebrates.
And so in that context, they commissioned a team led by me
to produce a review of the evidence of sentience
in two groups of invertebrates,
the cephalopods like octopuses
and the decapod crustaceans like crabs and lobsters.
I’d already been calling for applications
of the precautionary principle to questions of sentience
and had written about that.
And I’d already established at the LSE a project
called the Foundations of Animal Sentience Project
that aims to try to place the emerging science
of animal sentience on more secure foundations,
advance it, develop better methods,
and find new ways of putting the science to work
to design better policies,
laws and ways of caring for animals.
So in a way, I was in the right place at the right time.
I was pretty ideally situated to be leading a review like this.
– How do folks actually go about trying to answer
the question of whether a given animal is or is not sentient?
– Well, in lots of different ways.
And I think when we’re looking at animals
that are relatively close to us in evolutionary terms,
like other mammals,
neuroscience is a huge part of it
because we can look for similarities of brain mechanism.
But when thinking about crabs and lobsters,
what we’re not going to find
is exactly the same brain mechanisms
because we’re separated from them
by over 500 million years of evolution.
– That’s quite a bit.
– And so I think in that context,
you can ask big picture neurological questions.
Are there integrative brain regions, for example?
But the evidence is quite limited,
and so behavior ends up carrying a huge amount of weight.
Some of the strongest evidence comes from behaviors
that show the animal valuing pain relief when injured.
So for example, there was a study by Robin Crook
on octopuses, which is where you give the animal
a choice of two different chambers,
and you see which one it initially prefers.
And then you allow it to experience the effects
of a noxious stimulus, a nasty event.
And then in the other chamber, the one it initially dispreferred,
you allow it to experience the effects of an anesthetic
or a pain-relieving drug.
And then you see whether its preferences reverse.
So now going forward,
it goes to that chamber where it had a good experience
rather than the one where it had a terrible experience.
So it’s a pattern of behavior.
In ourselves, this would be explained by feeling pain
and then getting relief from the pain.
And when we see it in other mammals,
we make that same inference.
– Are there any other categories?
‘Cause we mentioned pain is one bucket of sentience,
but there’s much more to it.
Is there anything else that tends to play
a big role in the research?
– There’s much more to it.
And what I would like to see in the future
is animal sentience research moving beyond pain
and looking for other states that matter,
like joy for instance.
In practice though, by far the largest body of literature
exists for looking at markers of pain.
– I would love to read a paper that tries to assess
to what degree rats are experiencing joy
rather than pain, that would be lovely.
– I mean, studies of play behavior are very relevant here.
The studies of rats playing hide and seek for example,
where there must be something motivating
these play behaviors.
In the human case, we would call it joy, delight,
excitement, something like that.
And so it gets you taking seriously the possibility
there might be something like that in other animals too.
– I think the thing I’m actually left wondering is
what animals don’t show signs of sentience in these cases?
– Right, I mean, there’s many invertebrates
where you have an absence of evidence
’cause no one has really looked.
So snails for example, there’s frustratingly little evidence.
Also bivalve mollusks, which people talk about a lot
’cause they eat so many of them.
Very, very little evidence to base our judgments on.
And it’s hard to know what to infer from this.
There’s this slogan that absence of evidence
is not evidence of absence.
And it’s a little bit oversimplifying
’cause you sort of think, well, you know,
when researchers find some indicators of pain,
they’ve got strong motivations to press on
because it could be a useful pain model
for biomedical research.
And this is exactly what we’ve seen in insects,
particularly Drosophila fruit flies,
that seeing some of those initial markers
has led scientists to think, well, let’s go for this.
And it turns out they’re surprisingly useful pain models.
– A pain model for humans?
– Right, exactly.
Yeah, traditionally biomedical researchers have used rats,
and there’s pressure to replace them.
I don’t personally think that replacement here
should mean replacing mammals with invertebrates.
It’s not really the kind of replacement that I support,
but that is how a lot of scientists understand it.
And so they’re looking for ways to replace rats with flies.
– How do they decide
that the fly is a good pain model for humans?
– I mean, researchers have the ability
to manipulate the genetics of flies
at very, very fine grains using astonishing technologies.
So there was a recent paper that basically installed
in some flies sensitivity to chili heat.
Which of course in us, over a certain threshold,
this becomes painful.
So if you have one of the hottest chilies in the world,
you’re not gonna just carry on as normal.
– Certainly not.
– And they showed that the same behavior
can be produced in flies.
You can engineer them to be responsive to chili
and then you can dial up the amount of capsaicin
in the food they’re eating.
And there’ll come a point where they just stop eating
and withdraw from food, even though it leads them to starve.
And it’s things like this that are leading researchers
to say, wow, the mechanisms here are mechanisms
we can use for testing out potential pain relieving drugs.
And the fruit flies are a standard model organism,
as they say in science.
So there’s countless numbers of them,
but traditionally they’ve been studied
for genetics primarily.
People haven’t been thinking of them
as model systems of cognitive functions
or of sentience or of pain or of sociality.
And they’re realizing to their surprise
that they’re very good models of all of these things.
And then your question is, well,
why is it such a good model of these things?
Could it be in fact that it possesses sentience of some kind?
– I don’t wanna go too far down this rabbit hole
’cause I could spend hours asking you about this.
Let’s swing back to your research on the UK’s Act for a second.
You wound up recommending that the invertebrates
you looked at should be included.
And you mentioned this included, you know, octopuses,
which to me seems straightforward.
These seem very intelligent and playful.
I don’t need a lot of research to convince me of that.
But you recommended things like, you know, crabs and lobsters
and things where maybe people’s intuitions differ
a little bit in practical terms.
What changed for the life of a crab
after the UK did formally include them in the bill?
How does that wind up benefiting crabs?
– It’s a topic of ongoing discussion, basically,
’cause what this new act does
is it creates a duty on policymakers
to consider the animal welfare consequences
of their decisions, including for crabs.
Now, we recommended, don’t just put crabs
in this particular act.
Also, amend the UK’s other animal welfare laws
to be consistent with the new act.
And this we’ve not yet seen.
So we’re really hoping that this will happen
and will happen in the near future.
And it’s something that definitely should happen.
‘Cause in the meantime, we’ve got a rather confusing picture
where you have these other laws that say
animals should not be caused unnecessary suffering
when they're killed, and people should be required
to have training if they're going to slaughter animals.
And then you have this new law that says
for legal purposes, decapod crustaceans
are to be considered animals.
And as a philosopher, I’m always thinking,
well, read these two things together
and think about what they logically imply
when written together.
And lawyers don’t like that kind of argument.
Lawyers want a clear precedent
where there’s been some kind of test case
that has convicted someone for boiling a lobster alive
or something like that.
And that’s what we’ve not yet had.
So I’m hoping that lawmakers will act
to clarify that situation.
To me, it's kind of clear.
How much clearer could it be?
This method quite obviously
causes unnecessary suffering,
and it's illegal to do that to any animal,
including crabs.
But in practice, because it’s not explicitly ruled out,
it’s not quite good enough at the moment.
We wanna see this explicitly ruled out.
– So we’ll take incremental steps to get there.
– Yeah, in a way, I’m glad people take this issue
seriously at all.
I didn’t really expect that when I started working on it.
And so to have achieved any policy change that benefits
crabs and lobsters in any way,
I’ve gotta count that as a win.
– Support for the gray area comes from Mint Mobile.
There’s nothing like the satisfaction
of realizing you just got an incredible deal.
But those little victories have gotten harder
and harder to find.
Here’s the good news though.
Mint Mobile is resurrecting that incredible
“I got a deal” feeling.
Right now, when you make the switch to a Mint Mobile plan,
you’ll pay just $15 a month when you purchase
a new three month phone plan.
All Mint Mobile plans come with high speed data
and unlimited talk and text delivered
on the nation’s largest 5G network.
You can even keep your phone, your contacts,
and your number.
It doesn’t get much easier than that.
To get this new customer offer
and your new three month premium wireless plan
for just 15 bucks a month,
you can go to mintmobile.com/grayarea.
That’s mintmobile.com/grayarea.
You can cut your wireless bill to 15 bucks a month
at mintmobile.com/grayarea.
$45 upfront payment required equivalent to $15 a month.
New customers on first three month plan only.
Speed slower above 40 gigabytes on unlimited plan,
additional taxes, fees, and restrictions apply.
See Mint Mobile for details.
Support for the gray area comes from Cook Unity.
You know one way to eat chef prepared meals
in the comfort of your home?
You can spend years at culinary school,
work your way up the restaurant industry,
become a renowned chef on your own,
and then cook something for yourself.
Cook Unity delivers meals to your door
that are crafted by award winning chefs
and made with local farm fresh ingredients.
Cook Unity’s selection of over 350 meals
offers a variety of cuisines
and their menus are updated weekly.
So you’re sure to find something
to fit your taste and dietary needs.
One of our colleagues, Nisha,
tried Cook Unity for herself.
– Sometimes you’re just too tired to cook.
I have a two-and-a-half-year-old.
Sometimes you're just exhausted at the end of the day.
And it's very easy to default to takeout.
So it was really nice to not have the mental load
of having to cook every day,
but having healthy home cooked meals
already prepared for you
and not having to go the takeout route.
– You can give the gift of mouthwatering meals
crafted with local ingredients
by award-winning chefs with Cook Unity.
You can go to cookunity.com/grayarea
or enter code grayarea before checkout
for 50% off your first week.
That’s 50% off your first week
by using code grayarea
or going to cookunity.com/grayarea.
– Support for the gray area comes from Shopify.
Viral marketing campaigns have gotten pretty wild lately.
Like in Russia,
one pizza chain offered 100 free pizzas a year
for 100 years to anyone
who got the company logo tattooed on their body.
Apparently 400 misguided souls did it,
which is a story that deserves its own podcast.
But if you want to grow your company
without resorting to a morally dubious viral scheme,
you might want to check out Shopify.
Shopify is an all-in-one digital commerce platform
that wants to help your business sell better than ever before.
Shopify says they can help you convert browsers
into buyers and sell more over time.
And their shop pay feature can boost conversions by 50%.
There’s a reason companies like Allbirds turn to Shopify
to sell more products to more customers.
Businesses that sell more sell with Shopify.
Want to upgrade your business
and get the same checkout Allbirds uses?
You can sign up for your $1 per month trial period
at Shopify.com/Vox, all lowercase.
That’s Shopify.com/Vox to upgrade your selling today.
Shopify.com/Vox.
(gentle music)
– Let's move to another set of potential beings.
Your work on sentience covers artificial intelligence.
And one of the things that I've been most interested
in watching, as the past few years
have really thrust a lot of questions around AI
into the mainstream, has been this unbundling
of consciousness and intelligence,
or sentience and intelligence.
We're clearly getting better at creating
more intelligent systems that can competently
perform certain tasks.
But it remains very unclear
if we're getting any closer to sentient ones.
So how do you understand the relationship
between sentience and intelligence?
– I think it’s entirely possible
that we will get AI systems with very high levels
of intelligence and absolutely no sentience at all.
That’s entirely possible.
And when you think about shrimps or snails, for example,
we can also conceive of how there can be sentience
with perhaps not all that much intelligence.
– On another podcast, you mentioned that
it might actually be easier to create AI systems
that are sentient by modeling them
off of less intelligent systems,
rather than just cranking up the intelligence dial
until it bursts through into sentience.
Why is that?
– That could absolutely be the case.
I see many possible pathways to sentient AI.
One of which is through the emulation
of animal nervous systems.
There’s a long running project called Open Worm
that tries to recreate the nervous system
of a tiny worm called C. elegans in computer software.
There’s not a huge amount of funding going into this
because it’s not seen as very lucrative,
just very interesting.
And so even with those very simple nervous systems,
we’re not really at the stage where we can say
they’ve been emulated.
But you can see the pathway here.
You know, suppose we did get an emulation
of a worm's nervous system.
I’m sure we would then move on to fruit flies.
If that worked, researchers would be going on to open mouse,
open fish and emulating animal brains
at ever greater levels of detail.
And then in relation to questions of sentience,
we've got to take seriously the possibility
that sentience does not require a biological substrate,
that the stuff you’re made of might not matter.
It might matter, but it might not.
And so it might be that if you recreate
the same functional organization in a different substrate,
so no neurons of a biological kind anymore,
just computer software,
maybe you would create sentience as well.
– You’ve talked about this idea that you’ve called
the N equals one problem.
Can you explain what that is?
– Well, this is a term that began in origins-of-life studies,
where people searching for extraterrestrial life
or studying life's origin ask,
well, we only have one case to draw on.
And if we only have one case,
how are we supposed to know what was essential to life
from what was a contingent feature
of how life was achieved on Earth?
And one might think we have an N equals one problem
with consciousness as well.
If you think it’s something that has only evolved once,
seems like you’re always gonna have problems
disentangling what’s essential to it
from what is contingent.
Luckily though,
I think we might be in an N greater than one situation
when it comes to sentience and consciousness
because of the arthropods, like flies and bees and crabs,
and because of the cephalopods,
like octopuses, squid, and cuttlefish.
We might even be in an N equals three situation,
in which case, studying those other cases,
octopuses, crabs, insects has tremendous value
for understanding the nature of sentience
’cause it can tell us,
it can start to give us some insight
into what might be essential to having it at all
versus what might be a quirk
of how it is achieved in humans.
– Just to make sure I have this right,
if we are in an N equals one scenario with sentience,
that means that every sentient creature evolved
from the same sentient ancestor.
It's one evolutionary lineage.
– That's right.
– And so sentience has only evolved once in Earth's history,
so it gives us one example to look at.
– Exactly.
– But if we're not in an N equals one situation,
you mentioned N equals three,
and there's a fair bit of research
suggesting this could be the case or something like it,
then sentience has evolved three separate times,
in three separate lineages with different forms
and architectures.
– That's fascinating to me,
the idea that sentience could have evolved
independently multiple times in different ways.
– Yeah, we know it's true of eyes, for example.
When you look at the eyes of cephalopods,
you see a wonderful mixture of similarities and differences.
So we see convergent evolution, a similar thing
evolving independently to solve a similar problem,
and sentience could be just like that.
– The greater the number of N's we have here,
the number of separate instances of sentience evolving,
it strikes me that that lends more credence to the idea
that AI could develop its own independent route
to sentience as well, one that might not look exactly
like what we've seen in the past.
– It's also the way towards really knowing
whether it has or not,
because at present, we're just not in that situation.
We're not in a good enough position
to really know that we've created sentient AI.
Even when we do, we'll be faced
with horrible, disorienting uncertainty.
But to me, the pathway towards better evidence
and maybe one day knowledge lies through
studying other animals.
And it lies through trying to get other N’s,
other independently evolved cases
so that we can develop theories
that genuinely disentangle the quirks
of human consciousness from what is needed
to be conscious at all.
– What kind of evidence would you find compelling
as a test for sentience in AI systems?
– It’s something I’ve been thinking about a great deal
because when we’re looking at the surface linguistic behavior
of an AI system that has been trained
on over a trillion words of human training data,
we're clearly gonna see very fluent talking
about feelings and emotions.
And we’re already seeing that.
And it’s really, I would say not evidence at all
that the system actually has those feelings
because it can be explained as a kind of skillful mimicry.
And if that mimicry serves the system’s objectives,
we should expect to see it.
We should expect our criteria to be gamed
if the objectives are served by persuading
the human user of sentience.
And so this is a huge problem and it points
to the need to look deeper in some way.
These systems are very substantially opaque.
It is really, really hard to infer anything
about what the processes are inside them.
And so I have a second line of research as well
that I’ve been developing with collaborators at Google
that is about trying to adapt some of these animal experiments.
Let’s see if we can translate them over to the AI case.
– These are looking for behavior changes?
– Yeah, looking for subtle behavior changes
that we hope would not be gamed,
because they’re not part of the normal repertoire
in which humans express their feelings,
but are rather these very subtle things
that we’ve looked for in other animals
because they can’t talk about their feelings
in the first place.
– So it’s funny, we’re hitting the same problem in AI
that we are in animals and humans,
which is that in both cases, there’s a black box problem
where we don’t actually understand
the inner workings to some degree.
– The problems are so much worse in the AI case though
because when you’re faced with a pattern of behavior
in another animal like an octopus
that is well explained by there being a state
like pain there, that is the best explanation
for your data.
And it doesn’t have to compete with this other explanation
that maybe the octopus read a trillion words
about how humans express their feelings
and stands to benefit from gaming our criteria
and skillfully mimicking us.
We know the octopus is not doing that, that never arises.
In the AI case, those two explanations always compete
and the second one with current systems
seems to be rather more plausible.
And in addition to that,
the substrate is completely different as well.
So we face huge challenges
and I suppose what I’m trying to do
is maintain an attitude of humility
in the face of those challenges.
Now, let’s not be credulous about this,
but also let’s not give up the search
for developing higher-quality kinds of tests.
Support for the gray area comes from Greenlight.
Anyway, it applies to more than fish.
It’s also a great lesson for parents
who want their kids to learn important skills
that will set them up for success later in life.
As we enter the gifting season,
now might be the perfect time
to give your kids money skills that will last
well beyond the holidays.
That's where Greenlight comes in.
Greenlight is a debit card
and money app made specifically with families in mind.
Send money to your kids,
track their spending and saving
and help them develop their financial skills
with games aimed at building the confidence they need
to make wiser decisions with their money.
My kid is a little too young for this.
We’re still rocking piggy banks,
but I’ve got a colleague here at Vox
who uses it with his two boys and he loves it.
You can sign up for Greenlight today
at greenlight.com/grayarea.
That's greenlight.com/grayarea
to try Greenlight today.
Greenlight.com/grayarea.
Support for the show comes from Give Well.
When you make a charitable donation,
you want to know your money is being well spent.
For that, you might want to try Give Well.
Give Well is an independent nonprofit
that’s spent the last 17 years
researching charitable organizations.
And they only give recommendations
to the highest-impact causes that they've vetted thoroughly.
According to Give Well, over 125,000 donors
have used it to donate more than $2 billion.
Rigorous evidence suggests that these donations
could save over 200,000 lives.
And Give Well wants to help you make informed decisions
about high impact giving.
So all of their research and recommendations
are available on their site for free.
You can make tax deductible donations
and Give Well doesn’t take a cut.
If you’ve never used Give Well to donate,
you can have your donation matched up to $100
before the end of the year
or as long as matching funds last.
To claim your match, you can go to GiveWell.org
and pick "podcast" and enter "The Gray Area" at checkout.
Make sure they know that you heard about Give Well
from the gray area to get your donation matched.
Again, that’s GiveWell.org to donate or find out more.
Support for the gray area comes from Delete Me.
Delete Me allows you to discover and control
your digital footprint,
letting you see where things like home addresses,
phone numbers, and even email addresses
are floating around on data broker sites.
And that means Delete Me could be the perfect holiday gift
for a loved one looking to help safeguard
their own life online.
Delete Me can help anyone monitor
and remove the personal info they don’t want on the internet.
Claire White, our colleague here at Vox,
tried Delete Me for herself and even gifted it to a friend.
– This year, I gave two of my friends a Delete Me subscription
and it’s been the perfect gift
’cause it’s something that will last beyond the season.
Delete Me will continue to remove their information
from online and it’s something I’ve been raving about
so I know that they’re gonna love it as well.
– This holiday season, you can give your loved ones
the gift of privacy and peace of mind with Delete Me.
Now at a special discount for our listeners.
Today, you can get 20% off your Delete Me plan
when you go to joindeleteme.com/vox
and use promo code Vox at checkout.
The only way to get 20% off is to go to joindeleteme.com/vox
and enter code Vox at checkout.
That’s joindeleteme.com/vox code Vox.
(gentle music)
– One of the major aims of your recent book
is to propose a framework
for making these kinds of practical decisions
about potentially sentient creatures,
whether it’s an animal, whether it’s AI,
given this uncertainty.
Tell me about that framework.
– Well, it’s a precautionary framework.
One of the things I urge is a pragmatic shift
in how we think about the question.
From asking, is the system sentient,
where uncertainty will always be with us,
to asking instead: is the system a sentience candidate?
The concept of a sentience candidate
is a concept that we've pragmatically engineered.
And what it says is that a system is a sentience candidate
when there's a realistic possibility of sentience
that it would be irresponsible to ignore,
and when there's an evidence base
that can inform the design and assessment of precautions.
And because we’ve constructed the concept like that,
we can use current evidence to make judgments.
The cost of doing that is that those judgments
are not purely scientific judgments anymore.
There’s an ethical element to the judgment as well,
because it’s about when a realistic possibility
becomes irresponsible to ignore.
And that’s implicitly a value judgment.
But by reconstructing the question in that way,
we make it answerable.
– So, presumably then,
given your recommendation to the UK government,
you would say that those invertebrates you looked at
are sentience candidates.
That there's enough evidence to at least consider
the possibility of sentience.
Where would you stop with the current category
of sentience candidate?
What is not a sentience candidate in your current view?
– I’ve come to the view that insects really are,
which surprises me.
You know, it would have surprised past me
who hadn’t read so much of the literature about insects.
The evidence just clearly shows a realistic possibility
of sentience that it would be irresponsible to ignore.
But really, that’s currently where I stop.
So the cephalopod mollusks, the decapod crustaceans,
the insects: a lot of evidence in those cases.
And I think in other invertebrates,
what we should say instead is that we lack the kind of
evidence that would be needed to effectively design
precautions to manage welfare risks.
And so the imperative there is to be getting more evidence.
And so in my book, I call these investigation priorities.
– So insects are sentience candidates.
Where does today's generation of AI fall,
LLMs in particular, so OpenAI's ChatGPT,
Anthropic's Claude: are these sentience candidates
in your view yet?
– I suggest that they're investigation priorities,
which is already controversial, because I'm saying,
well, just as with snails, we need more evidence.
Equally with AI, we need more evidence.
So I’m not being one of those people who just dismisses
the possibility of sentient AI as being a ridiculous one.
But I don’t think they’re sentience candidates
because we don’t have enough evidence.
– When you say that something is a sentience candidate,
it's implying that we need to consider their welfare
in our behaviors and the decisions that we make.
– In public policy.
Yeah, I mean, in our personal lives,
we might want to be even more precautionary,
but I’m designing here a framework for setting policy.
– Right, ’cause I can imagine,
I think that the standard kind of line
that you get at this point is,
if you’re telling me I need to consider
the welfare of insects,
how can I take a step on the sidewalk?
And one of the ideas that’s central to your framework
is this idea of proportionality, which I really liked.
You talk about how the precautions that we take
should match the scale of the risk of suffering
that our actions kind of carry.
So how do you think about quantifying the risk
of suffering an action carries, right?
Does harming simpler creatures or insects
carry less risk than harming larger, more complex ones
like pigs or octopuses?
– Well, I'm opposed to trying to reduce it all
to a calculation, and perhaps disagree
with some utilitarians on that point.
When you’re setting public policy,
cost-benefit analysis has its place,
but we’re not in that kind of situation here.
We’re weighing up very incommensurable things,
things that it’s very, very hard to compare.
And I think in that kind of situation,
you don’t want to be just making a calculation.
What you need to have is a democratic, inclusive process
through which different positions can be represented
and we can try to resolve our value conflicts democratically.
And so in the book, I advocate for citizens assemblies
as being the most promising way of doing this,
where you bring a random sample of the public
into an environment where they’re informed about the risks,
they’re informed about possible precautions,
and they’re given a series of tests to go through
to debate what they think would be proportionate
to those risks.
And things like, we’re all banned from walking now
because it might hurt insects.
I don’t see those as very likely to be judged proportionate
by such an exercise.
But other things we might do to help insects,
like banning certain kinds of pesticides,
I think might well be judged proportionate.
– This sounds to me almost like a form of jury duty.
You have a random selection of citizens brought together.
– Yeah.
– When I think about this, on one hand,
I think it sounds lovely.
I like the idea of us all coming together
to debate the welfare of our fellow creatures.
It also strikes me as kind of optimistic
to imagine us not only doing this, but doing it well.
And I’m curious how you think about balancing
the value of expertise in making these decisions
with democratic input.
– Yeah, I’m implicitly proposing a division of labor
where experts are supposed to make this judgment
of sentience candidature or candidacy.
Is the octopus a sentience candidate?
But then they’re not adjudicating
the questions of proportionality.
– So what to do about it?
– Yeah, then it would be a tyranny of expert values.
You’d have this question that calls for value judgments
about what to do.
And you’d be handing that over to the experts
and letting the experts dictate changes to our way of life.
That question of proportionality,
that should be handed over to the citizens assembly.
And I think it doesn’t require ordinary citizens
to adjudicate the scientific disagreement.
And that’s really crucial because if you’re asking
random members of the public to adjudicate
which brain regions they think are more important to sentience,
that’s gonna be a total disaster.
But the point is you give them questions
about what sorts of changes to our way of life
would be proportionate, would be permissible,
adequate, reasonably necessary and consistent
in relation to this risk that’s been identified.
And you ask them to debate those questions.
And I think that’s entirely feasible.
I’m very optimistic about citizens assemblies
as a mechanism for addressing that kind of question,
a question about our shared values.
– Do you see these as legally binding
or kind of making recommendations?
– I think they can only be making recommendations.
What I’m proposing is that on certain specific issues
where we think we need public input,
but we don’t wanna put them to a referendum
because we might need to revisit the issues
when new evidence comes to light
and you need a certain level of information
to understand what the issue is.
Citizens assemblies are great for those kinds of issues.
And because they’re very effective,
the recommendations they deliver
should be given weight by policymakers
and should be implemented.
They’re not substituting for parliamentary democracy,
but they’re feeding into it in a really valuable way.
– One thing that I can’t help but wonder about all of this,
humans are already incredibly cruel to animals
that most of us agree are very sentient,
I’m thinking of pigs or cows.
I think we've largely moved away from
Descartes in the 1600s,
when all animals were considered unfeeling machines.
Today we might disagree about how small
and simple down the chain we go
before we lose consensus on sentience,
but agreeing that they’re sentient
doesn’t seem to have prevented us
from doing atrocious things to many animals.
So I’m curious if the goal is to help guide us
in making more ethical decisions,
how do you think that determining sentience
in other creatures will help?
– You’re totally right that recognizing animals as sentient
does not immediately lead to behavioral change
to treat them better.
And this is the tragedy of how we treat
lots of mammals like pigs and birds like chickens,
that we recognize them as sentient beings,
and yet we fail them very, very seriously.
I think there’s a lot of research to be done
about what kinds of information about sentience
might genuinely change people’s behavior.
And I’m very interested in doing that kind of research
going forward, but with cases like octopuses,
at least there’s quite an opportunity
in this particular case, I think,
because you don’t have really entrenched industries
already farming them.
Part of the problem we face with the pigs and chickens
and so on is that in opposing these practices,
the enemy is very, very powerful.
The arguments are really easy to state
and people do get them and they do see why this is wrong,
but then the enemy is so powerful
that actually changing this juggernaut,
this leviathan is a huge challenge.
By contrast with invertebrate farming,
we’re talking about practices sometimes
that could become entrenched like that in the future,
but are not yet entrenched.
Octopus farming is currently on quite small scales,
shrimp farming is much larger,
insect farming is much larger,
but they’re not as entrenched and powerful
as pig farming, poultry farming.
And so there seem to be real opportunities here
to effect positive change, or at least I hope so.
In the octopus farming case, for example,
we've actually seen bans implemented
in Washington State and in California.
And that’s a sign that progress is really possible
in these cases.
– There are talks of banning AI development.
The philosopher Thomas Metzinger has famously called
for a ban until 2050. That might be difficult operationally,
but I’m curious how you think about actions we can take today
at the early stages of these institutions
that might help in the long run.
– Yeah, huge problems.
I do think Metzinger’s proposal deserves to be taken seriously,
but also we need to be thinking about what can we do
that is more easily achieved than banning this stuff,
but then nonetheless makes a positive difference.
And in the book, I suggest there might be some lessons here
from the regulation of animal research:
you can't just do what you like
when experimenting on animals.
In the UK, at least, there’s quite a strict framework
requiring you to get a license.
And it’s not a perfect framework by any means.
It has a lot of problems,
but it does show a possible compromise
between simply banning something altogether
and allowing it to happen in a completely unregulated way.
And the nature of that compromise
is that you expect the people doing this research
to be transparent about their plans,
to reveal their plans to a regulator,
who is able to see them and assess the harms and benefits
and only give a license
if they think the benefits outweigh the harms.
And I’d like to see something like that in AI research
as well as in animal research.
– Well, it’s interesting
’cause it brings us right back
to what you were talking about a little while ago,
which is, if we can’t trust the linguistic output,
we need the research on understanding,
well, how do we even assess harm and risk
in AI systems in the first place?
– As I say, it’s a huge problem coming down the road
for the whole of society.
I think there’ll be significant social divisions opening up
in the near future between people who are quite convinced
that their AI companions are sentient
and want rights for them
and others who simply find that ridiculous and absurd.
And I think that there’ll be a lot of tensions
between these two groups.
And in a way, the only way to really move forward
is to have better evidence than we do now.
And so there needs to be more research.
I’m always in this difficult position of,
I want more research, the tech companies might fund it,
I hope they will, I want them to fund it.
At the same time, it could be very problematic
for them as well.
And so I can’t make any promises in advance
that the outcomes of that research
will be advantageous to the tech companies.
So, but even though I’m in a difficult position there,
I feel like I still have to try and do something.
– Maybe by way of trying to wrap this all up,
you have been involved in these kinds of questions
for a number of years.
And you’ve mentioned a few times throughout the conversation
that you have seen a pace of change
that’s been kind of inspiring.
You’ve seen questions that previously
were not a part of the conversation now,
becoming part of the mainstream conversation.
So what have you seen in the last decade or two
in terms of the degree to which we are really beginning
to embrace these questions?
– I’ve seen some positive steps.
I think issues around crabs and lobsters and octopuses
are taken far more seriously than they were 10 years ago.
For example, I really did not expect that California
would bring in an octopus farming ban
and in the legislation cite our work
as being a key factor driving it.
I mean, that was extraordinary.
So it just goes to show that it really pays off sometimes
to do impact driven work.
I think we’ve seen over the last couple of years
some changes in the conversations around AI as well.
The book is written in a very optimistic tone, I think,
because well, you’ve got to hope to make it a reality.
You’ve got to believe in the possibility of us
taking steps to manage risk better than we do.
And the book is full of proposals
about how we might do that.
And I think at least some of these will be adopted in the future.
– I would love to see it, I’m optimistic as well.
Jonathan Birch, thank you so much for coming on the show.
This was a pleasure.
– Thanks, Oshan.
– Once again, the book is The Edge of Sentience,
which is free to read on the Oxford Academic Platform.
We’ll include a link to that in the show notes.
And that’s it.
I hope you enjoyed the episode as much as I did.
I am still thinking about whether we’re in an N equals one
or an N equals three world,
and how our search for sentience
in AI systems could come down to animal research
that helps us figure out
whether all animals share the same sentient ancestor,
or whether sentience is something
that’s evolved a few separate times.
This episode was produced by Beth Morrissey
and hosted by me, Oshan Jarow.
My day job is as a staff writer with Future Perfect at Vox,
where I cover the latest ideas in the science
and philosophy of consciousness,
as well as political economy.
You can read my stuff at vox.com/futureperfect.
Today’s episode was engineered by Patrick Boyd,
fact-checked by Anouk Dussot, edited by Jorge Just,
and Alex Overington wrote our theme music.
New episodes of The Gray Area drop on Mondays.
Listen and subscribe.
The show is part of Vox.
Support Vox’s journalism
by joining our membership program today.
Go to vox.com/members to sign up.
And if you decide to sign up because of the show,
let us know.
Can you ever really know what’s going on inside the mind of another creature?
In some cases, like other humans, or dogs and cats, we might be able to guess with a bit of confidence. But what about octopuses? Or insects? What about AI systems — will they ever be able to feel anything? And if they do feel anything, what are our ethical obligations toward them?
In today’s episode, Vox staff writer Oshan Jarow brings those questions to philosopher of science Jonathan Birch.
Birch is the principal investigator on the Foundations of Animal Sentience Project at the London School of Economics, and author of the recently released book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. Birch also convinced the UK government to consider lobsters, octopuses, and crabs sentient and therefore deserving of legal protection.
This unique perspective earned Jonathan a place on Vox’s Future Perfect 50 list, an annual celebration of the people working to make the future a better place. The list — published last month — includes writers, scientists, thinkers, and activists who are reshaping our world for the better.
In this conversation, Oshan and Jonathan explore everything we know — and don’t know — about sentience, and how to make ethical decisions about creatures who may possess it.
Guest host: Oshan Jarow
Guest: Jonathan Birch, Author of The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. Available for free on the Oxford Academic platform.
Learn more about your ad choices. Visit podcastchoices.com/adchoices