AI transcript
0:00:07 I’m Hannah, and this episode is a Valentine special where I talk with Kate Darling, researcher
0:00:12 at the MIT Media Lab, all about our emotional relationships with robots.
0:00:16 We already know that we have an innate tendency to anthropomorphize robots, but as we begin
0:00:20 to share more and more spaces, both social and private, with these machines, what does
0:00:23 that actually mean for how we’ll interact with them?
0:00:28 From our lighter sides, from affection and love and emotional support to our darker sides,
0:00:33 what do these relationships teach us about ourselves, our tendencies, and our behaviors?
0:00:36 How will these relationships in turn change us?
0:00:40 And what models should we be thinking about as we develop these increasingly sophisticated
0:00:43 relationships with our robots?
0:00:47 Besides just that natural instinct that we have to anthropomorphize all sorts of things,
0:00:49 how is it different with robots?
0:00:54 Robots are just so fascinating because we know rationally that they’re machines, that
0:00:57 they’re not alive, but we treat them like living things.
0:01:04 With robots, I think they speak to our primal brain even more than a stuffed animal or some
0:01:10 other object that we might anthropomorphize because they combine movement and physicality
0:01:14 in this way that makes us automatically project intent onto them.
0:01:19 So that’s why we project more onto them than the Samsung monitor that I’m looking at back
0:01:23 here because it looks like it has agency in the world?
0:01:24 Yeah.
0:01:25 I think that tricks our brains.
0:01:28 There’s a lot of studies that show that we respond differently to something in our physical
0:01:31 space than just something on a screen.
0:01:35 So even though people will imbue everything with human-like qualities, robots take it
0:01:39 to a new level because of this physical movement.
0:01:44 People will do it even with very, very simple robots, just like the Roomba vacuum cleaner.
0:01:49 It’s not a very compelling anthropomorphic robot, and yet people will name them and feel
0:01:54 bad for them when they get stuck, and insist on getting the same one back if it gets broken.
0:02:01 And so if you take that and then you create something that is specifically
0:02:05 designed to push those buttons in us, then it gets really interesting.
0:02:10 So why should we be aware of this tendency, besides this cute attachment
0:02:15 to our Roombas or our Vectors or whatever our pet robots are?
0:02:18 Why does it matter that we develop these relationships with them?
0:02:23 Well, I think it matters because right now, robots are really moving into these new shared
0:02:24 spaces.
0:02:30 I mean, we’ve had robots for decades, but they’ve been increasing efficiency in manufacturing
0:02:31 contexts.
0:02:34 They haven’t been sharing spaces with people.
0:02:39 And as we integrate these robots into these shared spaces, it’s really important to understand
0:02:43 that people treat them differently than other devices, and they don’t treat them like toasters.
0:02:49 They treat them subconsciously like living things, and that can lead to some almost comical
0:02:58 challenges as we try and figure out how to treat these things as tools in contexts where
0:03:03 they’re meant to be tools, but then at the same time kind of want to treat them differently.
0:03:08 When you talk about these giant manufacturing robots that do exist in plants and factories
0:03:11 on the floor, do we see that there?
0:03:17 So there’s this company in Japan that has this standard assembly line for manufacturing,
0:03:23 and a lot of companies, they have people working alongside the robots on the assembly line,
0:03:27 and so the people will come in and they do these aerobics in the morning to warm up
0:05:32 their bodies for the day, and they have the robots do the aerobics, moving their arms along with
0:05:37 the people, so that they’ll be perceived more like colleagues and less like machines.
0:03:40 People are more accepting of the technology.
0:03:45 People enjoy working with it more, and I think it’s really important to acknowledge this
0:03:48 emotional connection because you can harness it too.
0:03:50 So we know we have this tendency.
0:03:57 If we think about being aware of it and being able to foster it or diminish it, what are
0:04:00 some of the ways in which we negotiate those relationships?
0:04:05 My focus is trying to think about what that means ethically, what that means in terms
0:04:12 of maybe changing human behavior, what challenges we might want to anticipate.
0:04:18 We’re seeing a lot of interesting use cases for social robots that are specifically designed
0:04:22 to get people to treat them like living things, to develop an emotional connection.
0:04:29 One of my favorite current use cases for this technology is a replacement for animal therapy.
0:04:30 We have therapeutic robots.
0:04:35 They’re used as medical devices with dementia patients or in nursing homes.
0:04:39 I saw an article about that recently, specifically with people with dementia.
0:04:41 Yeah, there’s a baby seal that’s very popular.
0:04:42 That’s right.
0:04:43 That’s the one I read about.
0:04:45 A lot of people think it’s creepy.
0:04:50 We’re giving old people robots and having them nurture something that’s not alive.
0:04:57 But then you look at some of the advantages of having them have this experience when their
0:05:00 lives have been reduced to being taken care of by other people.
0:05:04 It’s actually an important psychological experience for them to have.
0:05:10 They’ve been able to use these robots as alternatives to medication for calming distressed patients.
0:05:15 This isn’t a replacement for human care and that’s also not how it’s being used.
0:05:20 It’s really being used as a replacement for animal therapy where we can’t use real animals
0:05:25 because people will consistently treat them more like a living thing than a device.
0:05:29 What is the initial interaction like when you hold something like that?
0:05:31 Is there a prelude that’s necessary?
0:05:36 Do you have to educate a little bit those patients or do they just put the robot seal
0:05:38 in their arms?
0:05:43 The most clever robotic design doesn’t require any prelude or anything because you will
0:05:46 automatically respond to the cues.
0:05:48 The baby seal is very simple.
0:05:53 It just makes these little cute sounds and movements and responds to your touch and will
0:05:56 purr a little bit.
0:06:02 It’s very intuitive and it’s also not trying to be a cat or anything that you would be
0:06:07 more intimately familiar with because no one has actually held a baby seal before.
0:06:11 It’s much easier to suspend your disbelief and just go with it.
0:06:18 What are some of the very broad umbrella concerns that we want to be thinking about as we’re
0:06:20 watching these interactions develop?
0:06:24 A lot of my work has been around empathy and violence towards robotic objects.
0:06:28 Are we already being violent towards them?
0:06:29 Sometimes.
0:06:35 There was this robot called hitchBOT that hitchhiked all the way across the entire country of Canada,
0:06:38 just relying on the kindness of strangers.
0:06:42 It was trying to do a road trip through the US and it made it to Philadelphia and then
0:06:44 it got vandalized beyond repair.
0:06:49 Of course, Philadelphia, by the way, because I’m from New Jersey.
0:06:55 As you’re telling me this story, I’m already imagining this alien life doing a little journey
0:06:56 through the world.
0:07:01 I’m completely projecting this narrative onto it and that was the interesting thing about
0:07:02 the story.
0:07:06 It wasn’t that the robot got beat up, but it was people’s response to that, that they
0:07:12 were empathizing with this robot that was just trying to hitchhike around and that it
0:07:15 got … People were so sad when this robot got …
0:07:17 It’s a poor little stranger in a strange land.
0:07:18 Yeah.
0:07:22 There was news about this all over the world; it hit international news.
0:07:23 What did we learn from that?
0:07:26 Why is it interesting that we empathize with them?
0:07:31 Even more interesting to me is the question, how does interacting with these very life-like
0:07:35 machines influence our behavior?
0:07:42 Could you use them therapeutically to help children or prisoners or help improve people’s
0:07:43 behavior?
0:07:49 Then the flip side of that question is, could it be desensitizing to people to be violent
0:07:53 towards robotic objects that behave in a really life-like way?
0:07:59 Is that a healthy outlet for people’s violent behavior to go and beat up robots that respond
0:08:04 in a really life-like way, or is that training our cruelty muscles?
0:08:09 Isn’t that a new version of almost the old video game argument?
0:08:11 How is it shifting?
0:08:17 It’s the exact same question, which by the way, I don’t think we’ve ever really resolved.
0:08:24 We mostly decided that people can probably compartmentalize, but children we’re not
0:08:29 sure about, and so we restrict very violent games to adults.
0:08:36 So we’ve decided that we might want to worry about the kids, but adults can probably handle
0:08:37 it.
0:08:44 Now, robots, I think, make us need to re-ask the question because they have this visceral
0:08:52 physicality that we know from research people respond to differently than things on a screen.
0:08:57 There’s a question of whether we can compartmentalize as well with robots, specifically because
0:09:01 they are so present in the world with us.
0:09:06 So do you think that’s because it’s almost a somatic relationship to them?
0:09:10 Will it matter the same way when we are immersed in, say, virtual reality?
0:09:15 I mean, as virtual reality gets more physical, I think that the two worlds merge.
0:09:23 And so the answer could very well be that people can still distinguish between what’s
0:09:29 fake and what’s real, and that just because they beat up their robot doesn’t mean that they’re
0:09:33 going to go and beat up a person or that their barrier to doing that is lower. But we don’t
0:09:34 know.
0:09:35 How do you start looking at that?
0:09:39 What are the details that start giving you an inkling one way or the other?
0:09:45 The way that I think we’re starting to get at the question is just trying to figure
0:09:48 out what those relationships look like at first.
0:09:54 So I’ve done some work on how people’s tendencies for empathy relate to their hesitation
0:09:57 to hit a robot.
0:10:01 Just to try and establish that people do empathize with the robots because we don’t…
0:10:02 We have to show that first.
0:10:04 Yeah, we have to show that first.
0:10:05 It’s so interesting.
0:10:10 We all know what that feeling is, but to show, to demonstrate, to model it, and then see
0:10:17 it and recognize it in our kind of research experimentation, how do you actually categorize
0:10:19 the response of empathy?
0:10:24 One of the things we did was have people come into the lab and smash robots with hammers
0:10:31 and time how long they hesitated to smash the robot when we told them to smash it.
0:10:36 Did you give them a framework around this experiment or just have them walk in and just
0:10:37 start?
0:10:38 Definitely.
0:10:42 They did not know that they were going to be asked to hit the robot and we did psychological
0:10:48 empathy testing with them to try and establish a baseline for how they scored on empathic
0:10:53 concern generally, but also we had a variety of conditions.
0:10:59 So what we were trying to look at was a difference, for example, in would people hesitate more
0:11:05 if the robot had a name and a backstory versus if it was introduced to them as an object?
0:11:08 Oh, well, presumably the name and the backstory, right?
0:11:09 Yeah.
0:11:15 Yeah, not a huge surprise that when the robot’s name is Frank, people hesitate more.
0:11:17 So sorry, Frank.
0:11:25 We actually tried measuring slight changes in the sweat on their skin to see if they
0:11:29 were more physically aroused.
0:11:33 Unfortunately, those sensors were really unreliable, so we couldn’t get reliable data from that.
0:11:39 We tried coding the facial expressions, which was also difficult.
0:11:43 That’s what I was wondering about, because as one human, reading another human, you do
0:11:46 have some sense, right?
0:11:52 And I have to say the videos of this experiment are much more compelling than just the hesitation
0:11:59 data, because people really did; one woman, for example, was looking at this robot, which was very
0:12:05 simple and looked kind of like a cockroach, just a thing that moved around like an insect.
0:12:10 And so this one woman is holding the mallet and steeling herself, and she’s muttering
0:12:11 to herself.
0:12:12 It’s just a bug.
0:12:13 It’s just a bug.
0:12:21 So the videos were compelling, but we just didn’t find it easy enough to code them in
0:12:25 a way that would be scientifically sound or reliable.
0:12:27 So we relied just on the timing of the hesitation.
0:12:33 Other studies have measured people’s brainwaves while they watch videos of robots being tortured.
0:12:36 So there are a bunch of different ways that people have tried to get at this.
0:12:42 So when we start learning about our capacity for violence towards robots, are you thinking
0:12:48 about that in terms of what it teaches us back about humans, or in terms of why, going forward,
0:12:50 we need to know this?
0:12:56 We are learning actually more about human psychology as we watch people interact with
0:13:02 these machines that don’t communicate back to us in an authentic way.
0:13:08 So that’s interesting, but I think that it’s mainly important because we’re already facing
0:13:12 some questions of regulating robots.
0:13:16 For example, there’s been a lot of moral panic around sex robots.
0:13:21 We already need to be answering the question, do we want to allow this type of technology
0:13:23 to exist and be used and be sold?
0:13:27 Do we want to only allow for it in therapeutic contexts?
0:13:29 Do we want to ban it completely?
0:13:34 And the fact is we have no evidence to guide us in what we should be doing.
0:13:39 So it’s all coming down to the same question of like, is this desensitizing or is this
0:13:40 enhancing basically?
0:13:41 Yeah.
0:13:48 Unfortunately, a lot of the discussions are just fueled by superstition or moral panic
0:13:53 or in this context, a lot of it is science fiction and pop culture.
0:14:00 And our constant tendency to compare robots to humans and look at them as human replacements
0:14:04 versus thinking a little bit more outside of the box and viewing them as something that’s
0:14:07 more supplemental to humans.
0:14:10 Do we have a model for what that even might be?
0:14:18 I’ve been trying to argue that animals might be the better analogy to these machines that
0:14:24 can sense and think and make autonomous decisions and learn and that we treat like they’re alive,
0:14:31 but we know that they’re not actually alive, don’t feel anything or have emotions, and can’t make
0:14:33 moral decisions.
0:14:36 They are still controlled by humans.
0:14:37 Property.
0:14:38 Property.
0:14:39 They’re property.
0:14:44 And throughout history, we’ve treated some animals as property, as tools.
0:14:50 Some animals we’ve turned into our companions and I think that that is how we’re going
0:14:52 to start integrating robotic technology as well.
0:14:56 We’re going to be treating a lot of it like products and tools and property.
0:15:01 And some of it we’re going to become emotionally attached to and we might integrate in different
0:15:02 ways.
0:15:08 But we definitely should stop thinking about robots as human replacements and start thinking
0:15:14 about how to harness them as a partner that has a different skill set.
0:15:18 So while you’re talking, I’m thinking about the incredibly fraught space of how we relate
0:15:20 to animals.
0:15:25 Some people might argue that since that’s such a gray area as it is and we’re always
0:15:30 feeling our way, and that model is always changing, it almost sounds like it just makes
0:15:33 it messier in a way.
0:15:38 And I also think there’s a way in which we have this primal instinct of how to relate
0:15:39 to animals.
0:15:44 Do you think we have the same kind of seed for a primal relationship with robots there?
0:15:45 I think we do.
0:15:51 I think that ironically, we’re learning more about our relationship to animals through
0:15:56 interacting with robots because we’re realizing that we’re complete hypocrites.
0:15:57 Oh.
0:15:58 Well, yeah.
0:16:07 I think we fancy ourselves as caring about the inner biological workings of the animals
0:16:09 and whether animals can suffer.
0:16:12 And we actually don’t care about any of that.
0:16:15 We care about what animals we relate to.
0:16:19 And a lot of that is cultural and emotional, and a lot of that is based on which animals
0:16:21 are cute.
0:16:26 For example, in the United States, we don’t eat horses.
0:16:31 It’s considered taboo, whereas in a lot of parts of Europe, people are like, well, horses
0:16:33 and cows are both delicious.
0:16:35 Why would you distinguish between the two?
0:16:38 There’s no inherent biological reason to distinguish.
0:16:39 Right.
0:16:40 And by the way, we boil them into glue.
0:16:46 And yet culturally, we feel this like bond with horses in the U.S. as this majestic beast,
0:16:50 and it seems so wrong to us to eat them.
0:16:53 The history of animal rights is full of stories like this.
0:17:01 Like, the Save the Whales campaign didn’t start until people recorded whales singing.
0:17:05 Before that, people did not care about whales, but then once we heard that they can sing
0:17:09 and make this beautiful music, we were like, oh, we must save these beautiful creatures
0:17:12 that we can now suddenly relate to.
0:17:15 Because it needs to be about us kind of on some deep level.
0:17:21 The sad but important realization is that we relate to things that are like us and we
0:17:26 can build robots that are like that, and we are going to relate to those robots more than
0:17:27 to other robots.
0:17:33 So it’s a principle almost of design thinking, then, when you think about, well, I want this
0:17:40 robot to have a relationship to humans like cattle pulling a plow.
0:17:44 It gives you a sort of vision of a different spectrum of relationships for starters.
0:17:47 I mean, we’ve even tried to design animals accordingly.
0:17:53 We’ve bred dogs to look specific ways, so that we relate more to them.
0:17:57 And the interesting thing about robots is that we have even more freedom to design them
0:18:00 in compelling ways than we do with animals.
0:18:02 It takes a while to breed animals.
0:18:03 Yeah, generations.
0:18:11 Yeah, so I think we’re going to see the same types of manipulations of the robot breeds.
0:18:16 Why would you go down that spectrum to the lesser relationships when it’s something
0:18:19 that is performing a service to humans?
0:18:25 If it’s not directly harmful to have people develop an emotional attachment, it’s probably
0:18:27 not a bad idea to do.
0:18:34 But a lot of the potential for robots right now is in taking over tasks that are dirty,
0:18:36 dull, and dangerous.
0:18:42 And so if we’re using robots as tools to go do the thing, it might make sense to design
0:18:45 them in a way that’s less compelling to people so that we don’t feel bad for them when they’re
0:18:48 doing the dirty, dull, dangerous work.
0:18:50 There are contexts where it can be harmful.
0:18:55 So for example, you have in the military, you have soldiers who become emotionally attached
0:18:57 to the robots that they work with.
0:19:01 And that can be anything from inefficient to dangerous because you don’t want them hesitating
0:19:06 for even a second to use these machines the way that they’re intended to be used.
0:19:07 Like police dogs.
0:19:09 That’s a great analogy.
0:19:16 If you become too attached to the thing that you’re working with, if it’s intended to
0:19:21 go into harm’s way in your place, for example, which is a lot of how we’re using robots
0:19:26 these days, bomb disposal units, stuff like that, you don’t want soldiers becoming emotionally
0:19:33 affected by sending the robot into harm’s way because they could risk their lives.
0:19:37 So it’s really important to understand that these emotional connections we form with these
0:19:40 machines can have real world consequences.
0:19:48 Another interesting area is responsibility for harm because it does get a lot of attention
0:19:51 from policymakers and from the general public.
0:19:56 With robots generally, there’s a lot of throwing up our hands, like how can we possibly hold
0:20:00 someone accountable for this harm if the robot did something no one could anticipate?
0:20:09 I think we’re forgetting that we have a ton of history with animals where we have things
0:20:14 that we’ve treated as property that can make autonomous unpredictable decisions that can
0:20:15 cause harm.
0:20:21 So there’s this whole body of legislation that we can look to, basically.
0:20:22 Yes.
0:20:26 The smorgasbord of different solutions we’ve had is really compelling.
0:20:34 The Romans even had rules around, if your ox tramples the neighbor’s field, the neighbor
0:20:40 might actually be able to appropriate your ox or even kill it.
0:20:47 We’ve had animal trials, I talked about that in a podcast with Peter Leeson about the trials
0:20:49 of the rats for decimating crops.
0:20:56 There’s different ways even today that we like to assign responsibility for harm.
0:21:01 There’s the very pragmatic, okay, how do we compensate the victim of harm?
0:21:05 How do we hold the person who caused the harm accountable so that there’s an incentive to
0:21:07 not do it again?
0:21:11 And a lot of that is done through civil liability.
0:21:19 There’s also, however, criminal law that is kind of a primitive concept when you think
0:21:20 about it.
0:21:26 There was just this case in India where an old man was stoned to death with bricks by
0:21:29 monkeys who were intentionally flinging bricks.
0:21:35 And the family tried to get the police to do something about the monkeys and hold the
0:21:39 monkeys criminally accountable for what happened.
0:21:42 Just because of that human assigning of blame?
0:21:50 Yes, because it wasn’t enough to just have some sort of monetary compensation.
0:21:56 They really wanted these monkeys to suffer a punishment for what they had done.
0:21:59 And I know it seems silly, but we do sometimes have that tendency.
0:22:05 So it’s interesting to think about ways that we might actually want to hold machines themselves
0:22:08 accountable and ways that that’s problematic as well.
0:22:13 So can you illustrate what that would look like with robots when we think about those
0:22:15 different ways of assigning responsibility?
0:22:16 Yeah.
0:22:22 So for example, the way that we regulate pit bulls currently in some countries is really
0:22:23 interesting.
0:22:29 Austria has decided there are some breeds of dogs that we are going to place much stricter
0:22:35 requirements on than on other dog breeds.
0:22:41 So you need to get what’s basically the equivalent of a driver’s license to walk these dogs.
0:22:46 They have to have special collars and they have to be registered.
0:22:50 And you could imagine for certain types of robots having a registry, having requirements,
0:22:57 having a different legal accountability, like strict liability versus, “Oh, did I intend
0:23:00 to cause this harm or did I cause it through negligence?”
0:23:05 The way that we distinguish, for example, between wild animals and pets.
0:23:10 If you have a tiger and the tiger kills the postal service worker, that’s going to be
0:23:14 your fault regardless of how careful you were with the tiger because we say having a tiger
0:23:16 is just inherently dangerous.
0:23:21 It’s almost as if the model is developing different ideas around certain categories
0:23:28 and groups based on the way we relate to them, and whether those relationships are grounded
0:23:35 in our emotional narratives around them or in evidence becomes really important.
0:23:41 The heart of it is that we need to recognize that social robots could have an impact on
0:23:46 people’s behavior and that it’s something that we might actually need to regulate.
0:23:51 One of the interesting conversations that’s happening right now is around autonomous weapon
0:23:57 systems and accountability for harm in settings of war, where we have war crimes, but they
0:24:00 require intentionality.
0:24:06 And if a robot is committing a war crime, then there’s maybe not this moral accountability.
0:24:10 But wouldn’t it obviously be whoever programmed and owns the robot?
0:24:16 Because you need someone to have intentionally caused this rather than accidentally.
0:24:22 The thing about robots is that they can actually now make decisions based on the data that
0:24:30 they gather that isn’t a glitch in the code, but is something that we didn’t foresee happening.
0:24:36 We’ve used autonomous unpredictable agents as weapons in war previously.
0:24:45 For example, the Soviets, they trained dogs to run under tanks, enemy tanks, and they
0:24:51 had explosives attached to them and they were meant to blow up the tanks.
0:24:53 And a bunch of things went wrong.
0:25:00 So first of all, they had trained the dogs on their own tanks, which means that the
0:25:09 dogs would sometimes blow up their own tanks instead of the enemy tanks.
0:25:15 They didn’t train the dogs to be able to deal with some of the noise on the battlefield
0:25:21 and the shooting, so the dogs got scared and would run back to their handlers with these
0:25:27 explosives attached to them, and the handlers had to end up shooting the dogs.
0:25:31 And we’re not perfect at programming robots either, so there are a lot of things that can
0:25:37 go wrong that aren’t necessarily glitches in the code.
0:25:40 They’re unanticipated consequences.
0:25:44 So when we’re thinking about regulating things, I think that’s a pretty good analogy to look
0:25:50 at the history of how we’ve handled these things in the past and who we’ve held accountable.
0:25:55 The interesting thing that occurs to me is how do we both acknowledge our human emotional
0:25:59 attachment and yet not let it direct us too much?
0:26:00 What’s that balance like?
0:26:02 Step one is probably awareness, right?
0:26:07 But is it something we can manage and navigate or is it kind of beyond our control?
0:26:14 I think we struggle with that culturally as well, because we have this Judeo-Christian
0:26:19 distinction, like we have this clear line between things that are alive and things that are
0:26:24 not alive, whereas in some other countries, they don’t necessarily make that distinction.
0:26:29 Like in Japan, they have this whole history of Shintoism and treating objects as things
0:26:30 with souls.
0:26:39 And so it’s, I think, easier for them to view robots as just another thing with a soul and
0:26:44 they don’t have this contradiction inside themselves of, “Oh, I’m treating this thing
0:26:46 like a living thing, but it’s just a machine.”
0:26:49 Oh, that’s so fascinating because I would have thought it would be the other way.
0:26:53 If you think everything has a soul, it’s sort of harder to disentangle, but you’re saying
0:26:57 you sort of are desensitized to it in a way.
0:27:02 Or you’re more used to viewing everything as connected but different.
0:27:07 And so, you know, you still face the same design challenges of how do you get people
0:27:12 to treat robots like tools in settings where you don’t want them to get emotionally attached
0:27:13 to them.
0:27:17 So those design challenges still exist, but I think as a society, you’re not also dealing
0:27:21 with this contradiction of, “I want to treat this thing like a machine, but I’m treating
0:27:22 it differently.”
0:27:27 Right, the sort of ethical wrappers around this that we need to be aware of when we’re
0:27:32 starting to introduce these different types of interactions as these relationships become
0:27:33 more sophisticated.
0:27:36 Thank you so much for joining us on the a16z Podcast.
0:27:36 Thanks for having me.
with Kate Darling (@grok_) and Hanne Tidnam (@omnivorousread)
We already know that we have an innate tendency to anthropomorphize robots. But beyond just projecting human qualities onto them, as we begin to share more and more spaces, social and private, what kind of relationships will we develop with them? And how will those relationships in turn change us?
In this Valentine’s Day special, Kate Darling, Researcher at the MIT Media Lab, talks with a16z’s Hanne Tidnam all about our emotional relationships with robots. From our lighter sides — affection, love, empathy, and support — to our darker sides, what will these new kinds of relationships enhance or de-sensitize in us? Why does it matter that we develop these often intense attachments to these machines that range from tool to companion — and what do these relationships teach us about ourselves, our tendencies and our behaviors? What kinds of models from the past can we look towards to help us navigate the ethics and accountability that come along with these increasingly sophisticated relationships with robots?