AI transcript
0:00:05 I’m a carpenter.
0:00:06 I’m a graphic designer.
0:00:08 I sell dog socks online.
0:00:12 That’s why BCAA created One Size Doesn’t Fit All insurance.
0:00:15 It’s customizable based on your unique needs.
0:00:18 So whether you manage rental properties or paint pet portraits,
0:00:23 you can protect your small business with B.C.’s most trusted insurance brand.
0:00:28 Visit bcaa.com slash smallbusiness and use promo code radio to receive $50 off.
0:00:29 Conditions apply.
0:00:39 There’s a lot of uncertainty when it comes to artificial intelligence.
0:00:44 Technologists love to talk about all the good these tools can do in the world.
0:00:46 All the problems they might solve.
0:00:56 And yet, many of those same technologists are also warning us about all the ways AI might upend society.
0:01:04 It’s not really clear which, if either, of these narratives are true.
0:01:08 But three things do seem to be true.
0:01:11 One, change is coming.
0:01:15 Two, it’s coming whether we like it or not.
0:01:20 Hell, even as I write this document, Google Gemini is asking me how it can help me today.
0:01:21 It can’t.
0:01:24 Today’s intro is 100% human-made.
0:01:31 And finally, it’s abundantly clear that AI will affect all of us.
0:01:39 Yet, very few of us have any say in how this technology is being developed and used.
0:01:43 So, who does have a say?
0:01:47 And why are they so worried about an AI apocalypse?
0:01:51 And how are their beliefs shaping our future?
0:01:57 I’m Sean Illing, and this is The Gray Area.
0:02:13 My guest today is Vox host and editorial director, Julia Longoria.
0:02:27 She spent nearly a year digging into the AI industry, trying to understand some of the people who are shaping artificial intelligence, and why so many of them believe that AI is a threat to humanity.
0:02:33 She turned that story into a four-part podcast series called Good Robot.
0:02:39 Most stories about AI are focused on how the technology is built and what it can do.
0:02:51 Good Robot, instead, focuses on the beliefs and values, and most importantly, fears, of the people funding, building, and advocating on issues related to AI.
0:03:07 What she found is a set of ideologies, some of which critics and advocates of AI adhere to, with an almost religious fervor, that are influencing the conversation around AI, and even the way the technology is built.
0:03:22 Whether you’re familiar with these ideologies or not, they’re impacting your life, or certainly they will impact your life, because they’re shaping the development of AI as well as the guardrails, or lack thereof, around it.
0:03:29 So I invited Julia onto the show to help me understand these values, and the people who hold them.
0:03:39 Julia Longoria, welcome to the show.
0:03:41 Thank you for having me.
0:03:46 So, it was quite the reporting journey we went on for this series.
0:03:48 It’s really, really well done.
0:03:51 So, first of all, congrats.
0:03:51 Thank you.
0:03:56 Thank you for having me on that, and we’re actually going to play some clips from it today.
0:03:57 I’m glad you enjoyed it.
0:04:03 It’s, you know, I’m in that nerve-wracking first few weeks when it comes out, so it makes me feel good to hear that.
0:04:13 So, going into this thing, you wanted to understand why so many people are worried about an AI apocalypse.
0:04:18 And whether you should be afraid too. We will get to the answers, I promise.
0:04:23 But why were these the motivating questions for you?
0:04:29 You know, I come to artificial intelligence as a normie, as people in the know called me.
0:04:32 I don’t know much about it.
0:04:33 I didn’t know much about it.
0:04:38 But I had the sense, as an outsider, that the stakes were really high.
0:04:52 And it seemed like people talked about it in a language that I didn’t understand, and talking about these stakes that felt like really epic, but kind of like impenetrable to someone who didn’t speak their language.
0:05:02 So, I guess I just wanted to start out with, like, the biggest, most epic, like, almost most ignorant question, you know, like, okay, people are afraid.
0:05:06 There, some people are afraid that AI could just wipe us all out.
0:05:07 Where does that fear come from?
0:05:18 And just have that be a starting point to break the ice of this area that, like, honestly has felt kind of intangible and hard for me to even wrap my head around.
0:05:27 Yeah, I mean, I appreciate your normie status, because that’s the position almost all of us are in.
0:05:35 You know, we’re on the outside looking in, trying to understand what the hell is happening here.
0:05:40 What did being a normie mean to you as you waded into this world?
0:05:46 I mean, did you find that that outside perspective was actually useful in your reporting?
0:05:48 Definitely, yeah.
0:05:52 I think that’s kind of how I try to come to any topic.
0:05:59 Like, I’ve also reported on the Supreme Court, and that’s, like, another world that speaks its own dense, impenetrable language.
0:06:06 And, you know, like the Supreme Court, like, artificial intelligence affects all of our lives deeply.
0:06:20 And I feel like because it is such a, you know, sophisticated technology, and the people who work in it are so deep in it, it’s hard for normies to ask the more ignorant questions.
0:06:31 And so I feel like having the microphone and being armed with, you know, my Vox byline, I was able to ask the dumb question.
0:06:36 And, you know, I think I always said, like, you know, I know the answer to some of these questions.
0:06:41 But I’m asking on behalf of, like, the listener.
0:06:42 And sometimes I knew the answer.
0:06:43 Sometimes I didn’t.
0:07:04 I don’t know about you, but for me, and I’m sure a lot of people listening, it is maddening to be continually told that, you know what, we might be on the wrong end of an extinction event here, caused by this tiny minority of non-normies building this stuff.
0:07:13 And that it’s possible for so few to make decisions that might unravel life for the rest of us is just, well, maddening.
0:07:14 It is maddening.
0:07:15 It is maddening.
0:07:19 And to even hear it be talked about, like, this affects all of us.
0:07:22 So shouldn’t we, shouldn’t it be the thing that we’re all talking about?
0:07:29 But it feels like it’s reserved for a certain group of people who get to make the decisions and get to set the terms of the conversation.
0:07:39 Let’s talk about the ideologies and all the camps that make up this weird, insular world of AI.
0:07:44 And I want to start with the, what you call the AI safety camp.
0:07:46 What is their deal?
0:07:48 What should we know about them?
0:07:54 So AI safety is a term that’s evolved over the years.
0:08:07 But it’s kind of like people who fear that AI could be an existential risk to humanity, whether that’s like AI going rogue and doing things we didn’t want it to do.
0:08:13 It’s about the biggest worry, I guess, of all of us being wiped out.
0:08:18 We never talked about a cell phone apocalypse or an internet apocalypse.
0:08:22 I guess maybe if you count Y2K.
0:08:25 But even that wasn’t going to wipe out humanity.
0:08:30 But the threat of an AI apocalypse, it feels like it’s everywhere.
0:08:34 Mark my words, AI is far more dangerous than nukes.
0:08:39 From billionaire Elon Musk to the United Nations.
0:08:46 Today, all 193 members of the United Nations General Assembly have spoken in one voice.
0:08:48 AI is existential.
0:08:55 But then it feels like scientists in the know can’t even agree on what exactly we should be worried about.
0:08:59 And where does the term AI safety come from?
0:09:12 We trace the origin to a man named Eliezer Yudkowsky, who, you know, I think not all AI safety people today agree with Eliezer Yudkowsky.
0:09:16 But basically, you know, Eliezer Yudkowsky wrote about this fear.
0:09:24 Actually, as a teenager, he became popular, sort of found his following when he wrote a Harry Potter fan fiction.
0:09:26 As one does.
0:09:27 As one does.
0:09:31 It’s actually one of the most popular Harry Potter fan fictions out there.
0:09:34 It’s called Harry Potter and the Methods of Rationality.
0:09:36 And he wrote it almost as a way.
0:09:38 Love it.
0:09:44 He wrote it almost as a way to get people to think differently about AI.
0:09:53 He had thought deeply about the possibility of building an artificial intelligence that was smarter than human beings.
0:09:55 Like, he kind of imagined this idea.
0:10:02 And at first, he imagined it as a good robot, which is the name of the series, that could save us.
0:10:13 But, you know, eventually he realized, like, or came to fear that it could probably go very poorly if we built something smarter than us, that it would, it could result in it killing us.
0:10:18 So, anyway, that’s the origin, but it’s sort of, his ideas have caught on.
0:10:27 OpenAI, actually, the CEO, Sam Altman, talks about how Eliezer was like an early inspiration for him making the company.
0:10:36 They do not agree on a lot because Eliezer thinks OpenAI, the ChatGPT company, is on track to cause an apocalypse.
0:10:42 But, anyway, that’s, that’s the gist, is like, AI safety is like, AI could kill us all.
0:10:43 How do we prevent that?
0:10:51 So, it’s really, it’s about, it’s focused on the sort of long-range existential risks.
0:10:51 Correct.
0:10:53 And some people don’t think it’s long-range.
0:10:57 Some of these people think that that could happen very soon.
0:11:02 So, this Yudkowsky guy, right, he makes these two general claims, right?
0:11:07 One is that we will build an AI that’s smarter than us, and it will change the world.
0:11:14 And the second claim is that to get that right is extraordinarily difficult, if not impossible.
0:11:19 Why does he think it’s so difficult to get this right?
0:11:22 Why is he so convinced that we won’t?
0:11:28 He thinks about this in terms of thought experiments.
0:11:38 So, just kind of taking, taking this premise that we could build something that outpaces us at most tasks.
0:11:47 He tries to explain the different ways this could happen with these, like, quirky parables.
0:11:54 And we start with his most famous one, which is the paperclip maximizer thought experiment.
0:12:00 Suppose, in the future, there is an artificial intelligence.
0:12:13 We’ve created an AI so vastly powerful, so unfathomably intelligent, that we might call it superintelligent.
0:12:18 Let’s give this superintelligent AI a simple goal.
0:12:21 Produce…
0:12:23 Paperclips.
0:12:33 Because the AI is superintelligent, it quickly learns how to make paperclips out of anything in the world.
0:12:41 It can anticipate and foil any attempt to stop it, and will do so because its one directive is to make more paperclips.
0:12:50 Should we attempt to turn the AI off, it will fight back because it can’t make more paperclips if it is turned off.
0:12:55 And it will beat us because it is superintelligent and we are not.
0:12:57 The final result?
0:13:08 The entire galaxy, including you, me, and everyone we know, has either been destroyed or been transformed.
0:13:15 Into paperclips.
0:13:31 The gist is, we build something so smart we fail to understand it, how it works, and we could try to give it good goals to help improve our lives.
0:13:39 But maybe that goal has an unintended consequence that could lead to something catastrophic that we couldn’t have even imagined.
0:13:45 Right, and it’s such a good example because a paperclip is like the most innocuous, trivial thing ever, right?
0:13:47 Like what could possibly go wrong?
0:13:52 Is Yudkowsky, even within the safety camp, on the extremes?
0:13:57 I mean, I went to his website, and I just want to read this quote.
0:13:59 He writes,
0:14:07 It’s obvious at this point that humanity isn’t going to solve the alignment problem, or even try very hard, or even go out with much of a fight.
0:14:15 Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with slightly more dignity.
0:14:17 I mean, come on, dude.
0:14:19 It’s so dramatic.
0:14:23 I mean, that, he seems convinced that the game is already up here.
0:14:27 We’re just, we just don’t know how much sand is left in the hourglass.
0:14:31 I mean, is he on the margins even within this camp, or is this a fairly representative view?
0:14:32 Definitely, yeah.
0:14:32 Okay.
0:14:35 No, no, it’s, he’s on the margins, I would say.
0:14:37 It’s, he’s like an extreme case.
0:14:40 He had a big influence on the industry early on.
0:14:47 So, in that sense, he, he was like an early influencer of all these people who ended up going into AI.
0:14:50 A lot of people I talked to went into AI because of his writings.
0:14:53 I can’t square that circle, right?
0:14:54 If they were influenced by him.
0:14:55 No.
0:14:56 And this whole thing is, don’t do this, we’re going to die.
0:14:58 Why are they doing it?
0:15:03 To me, it felt like similar to the world of religion, almost like a schism.
0:15:10 Believers in the superintelligence, and then people who thought we shouldn’t try and build it, and then the people who thought we should.
0:15:21 Yeah, I mean, I, I guess with any kind of grand thinking about the fate of humanity, you end up with these, it starts to get very religious-y very quickly,
0:15:26 even if it’s cloaked in the language of science and secularism, as this is.
0:15:31 The religious part of it, I mean, did that, did the parallels there jump out to you pretty immediately?
0:15:43 That, that the people at the level of ideology are treating this, thinking about this, as though it is a religious problem or a religious worldview?
0:15:44 It really did.
0:16:01 It did jump out at me really early, because I think, like, going into reporting on a technology, you expect to be kind of bogged down by technological language and terminology that’s, like, in the weeds of whatever, computer science or whatever it is.
0:16:14 But, but the words that were hard to understand were, like, superintelligence and AGI, and then hearing about, you know, the CEO of OpenAI, Sam Altman, talking about a magic intelligence in the sky.
0:16:18 And the question I had was, like, what are these guys talking about?
0:16:21 But it was almost like they were talking about a god, is what it felt like to me.
0:16:23 Yeah.
0:16:24 All right.
0:16:27 I have some thoughts on the religious thing, but let me table that for a second.
0:16:30 I think we’ll, we’ll end up circling back to that.
0:16:35 I want to finish our little survey of the, of the tribes, the gangs here.
0:16:39 The other camp you talk about are the, the AI ethicists.
0:16:40 What’s their deal?
0:16:42 What are they concerned about?
0:16:48 How are they different from the safetyists who are focused on these existential problems or risks?
0:17:01 Yeah, the AI ethicists that I spoke to came to AI pretty early on, too, like, just a couple of years, maybe a few years, after Eliezer was writing about it.
0:17:02 They were working on algorithms.
0:17:06 They were working on AI as it existed in the world.
0:17:08 So that, that was a key difference.
0:17:11 They weren’t thinking about things in, like, these hypotheticals.
0:17:24 But AI ethicists, where AI safety folks tend to worry about the ways in which AI could be an existential risk in the future, it could wipe us out.
0:17:32 AI ethicists tended to worry about harms that AI was doing right now, in the present.
0:17:56 Whether that was through, you know, governments using AI to surveil people, bias in AI data, the data that went into building AI systems, you know, racial bias, gender bias, and ways that algorithmic systems were making racist decisions, sexist decisions, decisions that were harmful to disabled people.
0:17:57 They were worried about things now.
0:17:59 Tell me about Margaret Mitchell.
0:18:10 She’s a researcher and a colorful character in the series, and she’s an ethicist, and she coined the everything is awesome problem.
0:18:12 Tell me about that.
0:18:15 That’s an interesting example of the sorts of things they worry about.
0:18:22 Yeah, so Margaret Mitchell was working on AI systems in the early days, like long before we had ChatGPT.
0:18:28 She was working on a system at Microsoft that was vision to language.
0:18:35 So it was taking a series of images of a scene and trying to describe it in words.
0:18:43 And so she, you know, she was giving the system things like images of weddings or images of different events.
0:18:50 And she gave the system a series of images of what’s called the Hempstead Blast.
0:19:03 It was at a factory, and you could see from the sequence of images that the person taking the photo had like a third-story view sort of overlooking the explosion.
0:19:11 So it was a series of pictures showing that there was this terrible explosion happening, and whoever was taking the photo was very close to the scene.
0:19:20 So I put these images through my system, and the system says, wow, this is a great view.
0:19:22 This is awesome!
0:19:35 The system learned from the images that it had been trained on that if you were taking an image from, you know, from above, down below, like, that that’s a great view.
0:19:42 And that if there were, like, all these, you know, different colors, like in a sunset, which the explosion had made all these colors, that that was beautiful.
0:19:52 And so she saw really early on before, you know, this AI moment that we’re living, that the data that these systems are trained on is crucial.
0:20:01 And so her worry with systems like ChatGPT are, they’re trained on, like, basically the entire internet.
0:20:08 And so the technologists making the system lose track of, like, what kinds of biases could be in there.
0:20:13 And, yeah, this is, like, sort of her origin story of worrying about these things.
0:20:25 And she went and worked for Google’s AI ethics team and later was fired after trying to get a paper published there about these worries.
0:20:31 So why is the everything is awesome problem a problem, right?
0:20:39 I mean, I guess someone may hear that and go, well, okay, that’s kind of goofy and quirky that an AI would interpret a horrible image in that way.
0:20:44 But what actual harm is that going to cause in the world?
0:20:45 Right.
0:21:06 I mean, the way she puts it is, you know, if you were training a system to, like, launch missiles and you gave it some of its own autonomy to make decisions, like, you know, she was like, you could have a system that’s, like, launching missiles in pursuit of the aesthetic of beauty.
0:21:10 So, in a sense, it’s a bit of a thought experiment on its own, right?
0:21:19 It’s like she’s not worried about this in particular, but worried about implications for biased data in future systems.
0:21:21 Yeah, it’s the same thing with the paperclip example, right?
0:21:27 It’s just, it’s unintended, the bizarre and unintended consequences of these things, right?
0:21:34 What seems goofy and quirky at first may, a few steps down the road, be catastrophic, right?
0:21:38 And if you’re not, if you can’t predict that, maybe you should be a little careful about building it.
0:21:40 Right, right, exactly.
0:21:52 So, do the AI ethics people in general, do they think the concerns about an extinction event or existential threats, do they think those concerns are valid?
0:22:01 Or do they think they’re mostly just science fiction and a complete distraction from, you know, actual present-day harms?
0:22:10 I should say at the outset that, you know, I found that the AI ethics and AI safety camps, they’re less camps and more of a spectrum.
0:22:18 So, I don’t want to say that every single AI ethics person I spoke to was like, these existential risks are nonsense.
0:22:28 But by and large, people I spoke to in the ethics camp said that these existential risks are a distraction.
0:22:37 It’s like this epic fear that’s attention grabbing and, you know, goes viral and takes away from the harms that AI is doing right now.
0:22:45 It takes away attention from those things and it, crucially, in their view, takes away resources from fighting those kinds of harms.
0:22:46 In what way?
0:23:04 You know, I think when it comes to funding, if you’re like a billionaire who wants to give money to companies or charities or, you know, causes and you want to leave a legacy in the world, I mean, do you want to make sure that data and AI systems is unbiased or do you want to make sure that you save humanity from apocalypse, you know?
0:23:09 Yeah. I should ask about the effective altruists.
0:23:15 They’re another camp, another school of thought, another tradition of thought, whatever you want to call it, that you talk about in the series.
0:23:19 How do they fit in to the story? Or how are they situated?
0:23:24 Yeah. So, effective altruism is a movement that’s had an effect on the AI industry.
0:23:37 It’s also had an effect on Vox. Future Perfect is the Vox section that we collaborated with to make Good Robot and it was actually inspired by effective altruism.
0:23:52 The whole point of the effective altruism movement is to try to do the most good in the world and EA, as it’s sometimes called, comes up with a sort of formula for how to choose which causes you should focus on and put your efforts toward.
0:24:07 So, early rationalists like Eliezer Yudkowsky encountered early effective altruists and tried to convince them that the highest stakes issue of our time, the cause that they should focus on is AI.
0:24:18 Effective altruism is traditionally known to give philanthropic dollars to things like malaria nets, but they also gave philanthropic dollars to saving us from an AI apocalypse.
0:24:27 And so a big part of how the AI safety industry was financed is that effective altruism rallied around it as a cause.
0:24:36 These are the people who think we really have an obligation to build a good robot in order to protect future humans.
0:24:40 And again, I don’t know what they mean by good.
0:24:44 I mean, good and bad, those are value judgments.
0:24:45 This is morality, not science.
0:24:49 There’s no utility function for humanity.
0:24:58 It’s like, I don’t know who’s defining the goodness of the good robot, but I’ll just say that I don’t think it’s as simple as some of these technologists seem to think it is.
0:25:03 And maybe I’m just being annoying philosophy guy here, but whatever, here I am.
0:25:12 Yeah, no, I think everyone in the AI world that I talk to just like was really striving toward the good, like whatever that looked like.
0:25:17 Like AI ethics saw like the good robot as a specific set of values.
0:25:23 And folks in effective altruism were also like baffled by like, how do I do the most good?
0:25:28 And trying to use math to, you know, put a utility function on it.
0:25:35 And it’s like, the truth is a lot more messy than a math problem of how to do the most good.
0:25:36 You can’t really know.
0:25:41 And yeah, I think sitting in the messiness is hard for a lot of us.
0:25:50 And I don’t know how you do that when you’re fully aware that you’re building or attempting to build something that you don’t fully understand.
0:25:51 That’s exactly right.
0:26:01 Like in the series, like we tell the story of effective altruism through the parable of the drowning child, of this child who’s drowning in a pond, a shallow pond.
0:26:04 Okay.
0:26:08 On your way to work, you pass a small pond.
0:26:14 Children sometimes play in the pond, which is only about knee deep.
0:26:17 The weather’s cool, though, and it’s early.
0:26:21 So you’re surprised to see a child splashing about in the pond.
0:26:31 As you get closer, you see that it is a very young child, just a toddler, who’s flailing about, unable to stay upright or walk out of the pond.
0:26:35 You look for the parents or babysitter, but there’s no one else around.
0:26:40 The child is unable to keep her head above the water for more than a few seconds at a time.
0:26:43 If you don’t wade in and pull her out, she seems likely to drown.
0:26:53 Wading in is easy and safe, but you will ruin the new shoes you bought only a few days ago and get your suit wet and muddy.
0:27:01 By the time you hand the child over to someone responsible for her and change your clothes, you’ll be late for work.
0:27:04 What should you do?
0:27:12 Are you going to save it even though you ruin your suit?
0:27:15 Everyone answers, yes.
0:27:23 And this sort of utilitarian philosophy behind effective altruism asks, well, what if that child were far away from you?
0:27:25 Would you still save it if it was oceans away from you?
0:27:28 And that’s where you get to malaria nets.
0:27:32 You’re going to donate money to save children across an ocean.
0:27:38 But, yeah, this idea of, like, well, what if the child hasn’t been born yet?
0:27:43 And that’s the future child that would die from an AI apocalypse.
0:27:49 But, like, abstracting things so far in advance, you could really just justify anything.
0:27:51 And that’s the problem, right?
0:27:52 Yeah, right.
0:28:09 Of focusing on the long term in that way, the willingness to maybe overlook or sacrifice present harms in service to some unknown future, that’s a dangerous thing.
0:28:19 There are dangers in being willfully blind to present harms because you think there’s some more important or some more significant harm down the road.
0:28:27 And you’re willing to sacrifice that harm now because you think it’s, in the end, justifiable.
0:28:30 Yeah, at what point are you starting to play God, right?
0:28:37 So I come from the world of political philosophy, and in that maybe equally weird world.
0:28:48 Whenever you have competing ideologies, what you find at the root of those disagreements are very different views about human nature, really.
0:28:53 And all the differences really spring from that divide.
0:28:58 Is there something similar at work in these AI camps?
0:29:11 Do you find that these people that you talk to have different beliefs about how good or bad people are, different beliefs about what motivates us, different beliefs about our ability to cooperate and solve problems?
0:29:15 Is there a core dispute at that basic level?
0:29:22 There’s a pretty striking demographic difference between AI safety folks and AI ethics folks.
0:29:26 Like, I went to a conference, two conferences, one of each.
0:29:38 And so immediately you could see, like, AI safety folks were skewed white and male, and AI ethics folks skewed, like, more people of color, more women.
0:29:44 And so, like, people talked about blind spots that each camp had.
0:30:01 And so if you’re, you know, a white male moving around the world, like, you’re not fearing the sort of, like, racist, sexist, ableist, like, consequences of AI systems today as much, because it’s just not in your view.
0:30:30 It’s been a rough week for your retirement account, your friend who imports products from China for the TikTok shop, and also Hooters.
0:30:35 Hooters has now filed for bankruptcy, but they say they are not going anywhere.
0:30:39 Last year, Hooters closed dozens of restaurants because of rising food and labor costs.
0:30:47 Hooters is shifting away from its iconic skimpy waitress outfits and bikini days, instead opting for a family-friendly vibe.
0:30:54 They’re vowing to improve the food and ingredients, and staff is now being urged to greet women first when groups arrive.
0:30:57 Maybe in April of 2025, you’re thinking, good riddance?
0:31:01 Does the world still really need this chain of restaurants?
0:31:09 But then we were surprised to learn of who exactly was mourning the potential loss of Hooters.
0:31:11 Straight guys who like chicken, sure.
0:31:14 But also a bunch of gay guys who like chicken?
0:31:19 Check out Today Explained to find out why exactly that is, won’t ya?
0:31:36 Did all the people you spoke to, regardless of the camps they were in, did they all more
0:31:43 or less agree that what we’re doing here is attempting to build God, or something God-like?
0:31:45 No, I think no.
0:31:52 A lot of, I would say a lot of the AI safety people I spoke to like bought into this idea
0:31:55 of a super intelligence and a God-like intelligence.
0:31:59 I should say, I don’t think that’s every AI safety person by any means.
0:32:06 But AI ethics people for the most part just didn’t buy, just completely, everyone I spoke
0:32:14 to talked about it as being just AI hype as a way to like amp up the capability of this
0:32:18 technology that’s really in its infancy and is not God-like at this point.
0:32:27 I saw that Sam Altman, the CEO of OpenAI, was on Joe Rogan’s podcast, and he was asked
0:32:30 whether they’re attempting to build God, and he said, I have the quote here: I guess it comes
0:32:35 down to a definitional disagreement about what you mean by it becomes a God.
0:32:39 I think whatever we create will be subject to the laws of physics in this universe.
0:32:40 Okay.
0:32:44 So, so God or no God.
0:32:45 Right.
0:32:45 Yeah.
0:32:47 I mean, it’s, it’s, he’s called it though.
0:32:49 I don’t know if it’s tongue in cheek.
0:32:53 It’s all like very, you know, hard to read, but he’s called it like the magic intelligence
0:32:54 in the sky.
0:33:02 And Anthropic’s CEO has called AI systems machines of loving grace, which sounds like this is religious
0:33:03 language, you know?
0:33:04 Okay.
0:33:05 Come on now.
0:33:09 What in the world is that supposed to mean?
0:33:12 What is a machine of loving grace?
0:33:14 Does he know what that means?
0:33:21 I think it’s like this, you know, it’s a very optimistic view of what machines can do for
0:33:21 us.
0:33:26 Like, you know, the idea that machines can help us cure cancer.
0:33:27 And I don’t know.
0:33:32 I think that’s ultimately probably what he means, but it does, there’s an element of
0:33:36 it that I just completely, you know, roll my eyes, raise my eyebrows at where it’s like,
0:33:43 I don’t think we should be so reverent of a technology that’s like flawed and needs to
0:33:44 be regulated.
0:33:47 And I think that reverence is dangerous.
0:33:55 Why do you think it matters that people like Altman or the CEO of Anthropic have reverence
0:33:57 or have reverence for machines, right?
0:33:59 Who cares if they think they’re building God?
0:34:03 Does it matter really in terms of what it will be and how it will be deployed?
0:34:11 Well, I think that if you believe you’re, if you have these sorts of delusions of grandeur
0:34:16 about what you’re making and if you talk about it as a machine of loving grace, like, I don’t
0:34:23 know, it seems like you don’t have the level of skepticism that I want you to be having.
0:34:27 And we’re not regulating these companies at this point.
0:34:29 We’re relying on them to regulate themselves.
0:34:34 So yeah, it’s a little worrying when you talk about building something so powerful.
0:34:37 And so intelligent and you’re not being checked.
0:34:38 Yeah.
0:34:43 I don’t expect my toaster to tell me it loves me in the morning, right?
0:34:45 I just want my bagels crispy.
0:34:48 But I understand that my toaster is a technology.
0:34:49 It’s a tool with a function.
0:34:55 To talk about machines of loving grace suggests to me that these people do not think they’re
0:34:56 just building tools.
0:34:58 They think they’re building creatures.
0:34:59 They think they’re building God.
0:35:00 Yeah.
0:35:05 And, you know, Margaret Mitchell, as you’ll hear in the series, she talks about how she
0:35:07 thinks we shouldn’t be building a God.
0:35:13 We should be building, you know, machines, AI systems that are going to fulfill specific
0:35:14 purposes.
0:35:18 Like specifically, she talks about a smart toaster that makes really good toast.
0:35:26 And I don’t think she means a toaster in particular, but just building systems that are designed
0:35:32 to help humans achieve a certain goal, like something specific out in the world.
0:35:40 Whether that’s, you know, like helping us figure out how proteins fold or helping us figure out
0:35:45 how animals communicate, which are some of the things that we’re using AI to do in a narrow way.
0:35:52 She talks about this as an artificial narrow intelligence, as distinct from artificial general
0:35:58 intelligence, which is sort of the super intelligent God AI that’s, you know, quote unquote, smarter
0:36:00 than us at most tasks.
0:36:07 I mean, this is an old idea in the history of philosophy that God is like fundamentally
0:36:09 just a projection of human aspirations, right?
0:36:15 That our image of God is really a mirror that we’ve created, a mirror that reflects our idea
0:36:17 of a perfect being, a being in our image.
0:36:23 And this is something you talk about in the series, and that this is what we’re doing with AI.
0:36:31 We’re building robots in our image, which, you know, raises the question, well, in whose image exactly, right?
0:36:35 If AI is a mirror, it’s not a mirror of all of us, is it, right?
0:36:37 It’s a mirror of the people building it.
0:36:44 And the people building it are, I would say, not representative of the entire human race.
0:36:53 Yeah, you’ll hear in the series, like, I latched on to this idea of, like, AI is a mirror of us.
0:36:58 And that’s so interesting that, like, yeah, God, the concept of God is also like a mirror.
0:37:04 But if you think about it, I mean, large language models are made from basically the Internet,
0:37:09 which is, like, all of our thoughts and our musings as humans on the Internet.
0:37:14 It’s a certain lens on human behavior and speech.
0:37:22 But it’s also, yeah, like, AI is, like, the decisions that its creators make of what data to use,
0:37:25 of how to train the system, how to fine-tune it.
0:37:30 And when I used ChatGPT, it was very complimentary of me.
0:37:33 And I found it to be this almost, like, smooth, smooth…
0:37:35 It charmed you. You got charmed.
0:37:39 Yeah, I got charmed. It was, like, so, it gave me the compliments I wanted to hear.
0:37:48 And I think it’s, like, this smooth, frictionless version of humanity where it compliments us and makes us feel good.
0:37:53 And it also, like, you know, you don’t have to write that letter of recommendation for your person.
0:37:55 You don’t have to write that email.
0:37:57 You could just… It’s just smooth and frictionless.
0:38:10 And I worry that, you know, in making this, like, smooth mirror of humanity, like, where do we lose our humanity if we keep relying, like, keep seeding more and more to AI systems?
0:38:17 I want it to be a tool to help us, like, achieve our goals rather than, like, this thing that replaces us.
0:38:24 Yeah, I won’t lie. I mean, I did. I just recently got my ChatGPT account.
0:38:29 And I did ask it what it thought of Sean Illing, host of The Gray Area podcast.
0:38:30 What did it say?
0:38:31 And it was very complimentary.
0:38:35 It’s extremely, extremely generous.
0:38:38 And I was like, oh, shit, yeah, this thing gets it.
0:38:41 Oh, this is okay. All right.
0:38:41 Maybe it is a god.
0:38:42 Now I trust it.
0:38:45 Clearly it’s an all-knowing, omnipotent one.
0:38:53 That’s what I came away with, like, you know, from the series and the reporting is, like, I think before I used to be very afraid of AI and using it and not knowing.
0:38:59 And now I feel, like, armed to be skeptical in the right ways and to try to use it for good.
0:39:03 Yeah. So that’s what I hope people get out of the series anyway.
0:39:15 Are you worried about us losing our humanity or just becoming so different that we don’t recognize ourselves anymore?
0:39:20 I am worried that it’ll just make us more isolated.
0:39:30 And it’s so good at giving us what we want to hear that we won’t, like, you know, find the friction, search for the friction in life that makes life worth living.
0:39:42 Yeah, yeah. So, look, I mean, the different camps may disagree about a lot, but they seem to converge on the basic notion that this technology is transformative.
0:39:45 It’s going to transform our lives.
0:39:55 It’s probably going to transform the economy and the way this stuff gets developed and deployed and the incentives driving it are really going to matter.
0:40:08 Is it your sense that checks and balances are being put in place to guide this transformation so that it does benefit more people than it hurts, or at least as much as possible?
0:40:11 I mean, was this something you explored in your reporting?
0:40:17 Yeah, I mean, you know, I think a lot of the people I spoke to really wanted regulation.
0:40:25 But I think ultimately, like, there isn’t really regulation in the U.S. on the AI safety front or the AI ethics front.
0:40:32 The technology is dramatically outpacing regulators’ ability to regulate it.
0:40:35 So, that’s troubling. Like, it’s not great.
0:40:42 I would imagine the ethicists would be a little more focused on imposing regulations now.
0:40:45 But it doesn’t seem like they’re making a lot of headway on that front.
0:40:48 I’m not sure how regulatable it is.
0:41:07 Yeah, I think that was one of my frustrations just listening to all this infighting was, like, I felt like these two groups that, like, they have a lot in common and they should be pursuing, like, a common goal of getting some good regulation, of, you know, having some strong safeguards in place for both AI safety and AI ethics concerns.
0:41:16 And ultimately, you know, we tell the story of how some of them did come together to write an open letter calling for both kinds of regulations.
0:41:21 But they’ve not, you know, and that’s encouraging to see people working together.
0:41:29 But ultimately, I don’t think they’ve made, at this point, strides in getting anything significant passed.
0:41:30 You know, it’s interesting.
0:41:33 You’re reporting on this in the series.
0:41:38 And our employer, Vox, has a deal with OpenAI.
0:41:45 And in the course of your reporting, you were trying to find out what you could about that deal.
0:41:49 How did that go, if you’re comfortable talking about it?
0:41:50 Yeah, yeah.
0:41:55 Yeah, so the parent company of Vox, we should say, is Vox Media.
0:41:58 I know the language I need to use.
0:42:00 I have it down pat, as you can’t tell.
0:42:14 But, you know, kind of shortly after we decided to tackle AI in this series, we learned that Vox Media was entering a partnership with OpenAI, the ChatGPT company.
0:42:20 We learned it meant that OpenAI could train its models on our journalism.
0:42:29 And I guess for personally, it just felt like I wanted to know if they were training on my voice, you know?
0:42:30 Yeah, me too.
0:42:33 That, to me, feels really, yeah, really personal.
0:42:35 Like, there’s so much emotional information in a voice.
0:42:42 Like, I feel very naked going out on air and having people listen to my voice.
0:42:46 And I spend so much time carefully crafting what I say.
0:42:52 And so the idea that they would train on my voice and do, I don’t know what, with it.
0:42:52 I don’t know.
0:42:56 One of our editors pointed out, like, that’s part of the story.
0:42:59 You know, like, AI is, like, entering our lives.
0:43:03 More and more AI systems and robots are entering our lives.
0:43:10 And for me personally, it’s like, yeah, like, literally, my work, our work is being used to train these systems.
0:43:13 Like, what does that mean for us, for our work?
0:43:20 It felt, and, you know, I reached out to Vox Media and to OpenAI for an interview.
0:43:27 And they both declined, which made it feel even, you know, just, you feel really helpless.
0:43:35 And, I mean, there’s not much more answers that I have than that.
0:43:39 Yeah, well, I mean, you even interview a guy on the show.
0:43:41 You know, he’s a former OpenAI employee.
0:43:46 You know, and you’re raising these concerns and he’s sort of dismissive of it, right?
0:43:49 Like, you know, whatever data they’re getting.
0:43:49 He just laughed at us.
0:44:00 I would be quite surprised if the data provided by Vox is itself very valuable to OpenAI.
0:44:03 I would imagine it’s a tiny, tiny drop in that bucket.
0:44:10 If all of ChatGPT’s training data were to fit inside the entire Atlantic Ocean,
0:44:17 then all of Vox’s journalism would be like a few hundred drops in that ocean.
0:44:22 Rightly, you’re like, well, fuck, it matters to me.
0:44:27 It’s my work, it’s my voice, and it may eventually be my job, right?
0:44:33 And the point here is, like, that this is a thing now that our job,
0:44:39 the fact that our job and many other jobs are already tangled up with AI in this way,
0:44:42 it’s just a reminder that this isn’t the future, right?
0:44:49 It’s here now, and it’s only going to get more strange and complicated.
0:44:50 Totally, yeah.
0:44:56 And I don’t know, I guess I understand, like, the impulse from, like, from Vox Media to be like,
0:45:01 okay, we want to have, we want to be compensated for, you know,
0:45:05 licensing our journalists’ work who work so hard and we pay them.
0:45:15 But it feels, yeah, it just feels like, it feels weird to not have a say when it’s the work you’re doing.
0:45:50 So, have your views on AI in general changed all that much after doing this series?
0:45:57 I mean, you say at the end that when you look at AI, just what you see is a funhouse mirror.
0:45:59 What does that mean?
0:46:07 AI, like a lot of our technologies and I guess like our visions of God, as you talk about, are a reflection of ourselves.
0:46:17 And so, I think it was a comforting realization to me to realize that, like, the story of AI is not some, like, technological story I can’t understand.
0:46:26 Like, the story of AI is a story about humans who are trying really hard to make a technology good and failing to varying degrees.
0:46:44 But, yeah, I think fundamentally the course of, like, reporting it for me just brought the technology down to earth and made me a little more empowered to ask questions, to be skeptical, and to use it in my life with the right amount of skepticism.
0:46:48 So, what do you hope people get out of this series?
0:46:54 Normies who enter into it, you know, without a sort of solidified position on it.
0:46:56 What do you hope they take away from it?
0:47:14 I hope that people who didn’t feel like they had any place in the conversation around AI will feel, like, invited to the table and will be more informed and skeptical and curious and excited about the technology.
0:47:18 And I hope that it brings it down to earth a little bit.
0:47:21 Julia Longoria, this has been a lot of fun.
0:47:23 Thank you so much for coming on the show.
0:47:27 And the series, once again, is called Good Robot.
0:47:28 It is fantastic.
0:47:30 You should go listen to it immediately.
0:47:31 Thank you.
0:47:32 Thank you.
0:47:41 All right.
0:47:43 I hope you enjoyed this episode.
0:47:53 If you want to listen to Julia’s Good Robot series, and of course you do, you can find all four episodes in the Vox Unexplainable podcast feed.
0:47:57 We’ll drop a link to the first episode in the show notes.
0:48:00 And as always, we want to know what you think.
0:48:04 So drop us a line at the gray area at vox.com.
0:48:12 Or you can leave us a message on our new voicemail line at 1-800-214-5749.
0:48:17 And once you’re done with that, please go ahead, rate, review, subscribe to the pod.
0:48:19 That stuff really helps.
0:48:32 This episode was produced by Beth Morrissey, edited by Jorge Just, engineered by Erica Wong, fact-checked by Melissa Hirsch, and Alex Overington wrote our theme music.
0:48:35 New episodes of The Gray Area drop on Mondays.
0:48:37 Listen and subscribe.
0:48:39 The show is part of Vox.
0:48:43 Support Vox’s journalism by joining our membership program today.
0:48:47 Members get access to this show without any ads.
0:48:49 Go to vox.com/members to sign up.
0:48:53 And if you decide to sign up because of this show, let us know.
There’s a lot of uncertainty when it comes to artificial intelligence. Technologists love to talk about all the good these tools can do in the world, all the problems they might solve. Yet, many of those same technologists are also warning us about all the ways AI might upend society, how it might even destroy humanity.
Julia Longoria, Vox host and editorial director, spent a year trying to understand that dichotomy. The result is a four-part podcast series — called Good Robot — that explores the ideologies of the people funding, building, and driving the conversation about AI.
Today Julia speaks with Sean about how the hopes and fears of these individuals are influencing the technology that will change all of our lives.
Host: Sean Illing (@SeanIlling)
Guest: Vox Host and Editorial Director Julia Longoria
Good Robot is available in the Vox Unexplainable feed.
Learn more about your ad choices. Visit podcastchoices.com/adchoices