AI transcript
0:00:05 I’m a carpenter.
0:00:06 I’m a graphic designer.
0:00:09 I sell dog socks online.
0:00:12 That’s why B.C.A.A. created one-size-doesn’t-fit-all insurance.
0:00:15 It’s customizable, based on your unique needs.
0:00:19 So whether you manage rental properties or paint pet portraits,
0:00:23 you can protect your small business with B.C.’s most trusted insurance brand.
0:00:29 Visit bcaa.com/smallbusiness and use promo code radio to receive $50 off.
0:00:31 Conditions apply.
0:00:37 In 2018, Madison Smith told the county attorney she’d been raped by a classmate.
0:00:41 But he told her he couldn’t charge him with rape.
0:00:44 Then she found out Kansas is one of only six states where citizens
0:00:48 can petition to convene their own grand jury.
0:00:54 Having to tell over 300 strangers about what happened to me seemed so scary.
0:00:56 I’m Phoebe Judge.
0:00:57 This is Criminal.
0:01:05 Listen to our episode “The Petition” wherever you get your podcasts.
0:01:09 There’s a common belief in almost every age.
0:01:11 And it goes something like this.
0:01:18 More information leads to more truth, and more truth leads to more wisdom.
0:01:21 That definitely sounds right.
0:01:25 It’s hard to imagine being wise without knowing what’s true.
0:01:31 But the notion that individuals and societies will become more truthful and more wise as
0:01:42 they gain more information and more power is just empirically wrong.
0:01:46 This may seem like an academic point, but it’s much more than that.
0:01:51 If the internet age has anything like an ideology, it’s that more information and more data and
0:01:55 more openness will create a better world.
0:01:58 The reality is more complicated.
0:02:03 It has never been easier to know more about the world than it is right now, and it has
0:02:10 never been easier to share that knowledge than it is right now.
0:02:13 But I don’t think you can look at the state of things and conclude that this has been
0:02:17 a victory for truth and wisdom.
0:02:21 What are we to make of that?
0:02:26 More information might not be the solution, but neither is more ignorance.
0:02:31 So what should we do if we want a better and wiser world?
0:02:37 How should we approach the enormous amount of information we’re collecting?
0:02:47 I’m Sean Illing, and this is The Gray Area.
0:02:50 Today’s guest is Yuval Noah Harari.
0:02:55 He’s a historian and best-selling author of several books, including his 2014 mega-hit
0:02:58 Sapiens.
0:03:03 His latest is called Nexus: A Brief History of Information Networks from the Stone Age
0:03:05 to AI.
0:03:09 Like all of Harari’s books, this one covers a ton of ground, but it manages to do it in
0:03:11 a digestible way.
0:03:16 And it makes two big arguments that seem very important to me, and I think they also get
0:03:27 us closer to answering some of those questions I just posed.
0:03:32 The first argument is that every system that matters in our world is essentially the result
0:03:39 of an information network, from currency to religions to nation-states to artificial intelligence.
0:03:44 It all works because there’s a chain of people and machines and institutions collecting and
0:03:47 sharing information.
0:03:51 The second argument is that although we gain a tremendous amount of power by building these
0:03:56 networks of cooperation, the way most of them are constructed makes them more likely than
0:04:00 not to produce bad outcomes.
0:04:05 And since our power as a species is growing thanks to our technology, the potential consequences
0:04:11 of this are increasingly catastrophic.
0:04:17 I invited Harari on the show to explore some of these ideas, and we focus on the most significant
0:04:22 information network in the history of the world, artificial intelligence, and why he
0:04:27 thinks the choices we make in the coming years will matter so much.
0:04:33 Normally, our episodes are close to an hour long, and that was the plan with Harari, but
0:04:38 I was so engrossed in this conversation that we just kept going, and I think you’ll be
0:04:42 glad that we did.
0:04:44 Yuval Noah Harari,
0:04:45 welcome to the show.
0:04:46 Thank you.
0:04:47 It’s good to be here in person.
0:04:49 Likewise.
0:04:57 All of your books have a big macro historical story to tell, whether it’s about technology
0:05:01 or data or the power of fiction.
0:05:04 This one’s about information networks.
0:05:08 What’s the story you want to tell here?
0:05:17 The basic question that the book explores is, if humans are so smart, why are we so stupid?
0:05:21 We are definitely the smartest animal on the planet.
0:05:26 We can build airplanes and atom bombs and computers and so forth, and at the
0:05:32 same time, we are on the verge of destroying ourselves, our civilization, and much of the
0:05:36 ecological system.
0:05:43 It seems like this big paradox, that if we know so much about the world, and we know
0:05:51 about distant galaxies, and about DNA, and subatomic particles, why are we doing so many
0:05:53 self-destructive things?
0:06:01 The basic answer you get from a lot of mythology and theology is that there is something wrong
0:06:09 in human nature, and therefore we must rely on some outside source, like a god or whatever,
0:06:12 to save us from ourselves.
0:06:17 And I think that’s the wrong answer, and it’s a dangerous answer, because it makes people
0:06:19 abdicate responsibility.
0:06:24 And I think that the real answer is that there is nothing wrong with human nature.
0:06:28 The problem is with our information.
0:06:35 Most humans are good people, they are not self-destructive, but if you give good people
0:06:39 bad information, they make bad decisions.
0:06:46 And what we see through history is that, yes, we become better and better at accumulating
0:06:52 massive amounts of information, but the information isn’t getting better.
0:06:59 Modern societies are as susceptible as Stone Age tribes to mass delusions and psychosis,
0:07:06 if you think about Stalinism and Nazism in the 20th century, so they’re extremely sophisticated
0:07:11 societies in terms of technology and economics and so forth.
0:07:17 And yet their view of the world was really delusional.
0:07:24 And this is what the book explores: why is it that the quality of our information
0:07:33 is not improving over time, and maybe the main answer that the book gives to this is
0:07:36 that there is a misconception about what information does.
0:07:41 Too many people, especially in places like Silicon Valley, they think that information
0:07:46 is about truth, that information is truth, that if you accumulate a lot of information,
0:07:49 you will know a lot of things about the world.
0:07:52 But most information is junk.
0:07:54 Information isn’t truth.
0:08:01 The main thing that information does is to connect, to connect a lot of people into a
0:08:05 society, a religion, a corporation, an army.
0:08:11 And the easiest way to connect lots of people is not with the truth.
0:08:18 The easiest way to connect people is with fantasies and mythologies and delusions and
0:08:19 so forth.
0:08:25 And this is why, yes, we have now the most sophisticated information technology in history
0:08:29 and we are on the verge of destroying ourselves.
0:08:35 You call this idea that more information will lead to more truth and more wisdom the
0:08:40 semi-official ideology of the computer age.
0:08:42 What is wrong with that assumption?
0:08:44 Why is that not the case?
0:08:51 Why is it not true that more information makes us less blinkered and more wise?
0:08:53 Because most information isn’t truth.
0:08:55 Most information isn’t facts.
0:09:00 The truth is a rare and costly kind of information.
0:09:06 If you want to write a story about anything, I don’t know, the Roman Empire, if you want
0:09:14 to write a truthful account, you need to invest a lot of time and energy and money
0:09:18 in research and in fact-checking and that’s difficult.
0:09:22 If on the other hand you just invent some fiction, that’s very easy.
0:09:24 It’s cheap.
0:09:27 Fiction is much, much cheaper than the truth.
0:09:34 The other thing is that the truth tends to be complicated because reality is complicated.
0:09:37 And people often don’t like complicated stories.
0:09:39 They want simple stories.
0:09:45 So in a free market of information, the market will be flooded by fiction and fantasy and
0:09:48 delusion and the truth will be crowded out.
0:09:52 Do you think we overstate the role of truth in human life?
0:09:56 Do you think it’s a mistake also to assume that people really care about the truth deep
0:09:57 down?
0:09:58 No.
0:09:59 Vice versa.
0:10:07 The trouble with this age, which is a recurring problem in history, is a very cynical view of humanity
0:10:11 which discounts truth and focuses on power.
0:10:14 This is something that you see again and again in history and you see it on both the right
0:10:16 and the left.
0:10:20 You see it with Marxists and now you see it with populists.
0:10:27 This is something that Donald Trump and Karl Marx agree on, that the only reality is power,
0:10:35 that humans are only interested in power, that any human interaction is a power struggle.
0:10:40 So in any situation, the question to ask is, who are the winners and who are the losers?
0:10:46 Somebody is telling you something, a journalist, a scientist, a politician, whoever.
0:10:49 You don’t ask is it true or not.
0:10:55 You ask whose interests are served, whose privileges are being defended.
0:11:01 You think about it as a power struggle and this is a very cynical and destructive view
0:11:07 of the world and it’s also wrong because, yes, humans are interested in power but not
0:11:09 only in power.
0:11:14 If you look at yourself, I guess most individuals, if they examine themselves, will
0:11:20 acknowledge, yes, I want some measure of power in certain areas of life but that’s not the
0:11:22 only thing I want.
0:11:29 I also have a deep yearning, an authentic, honest yearning to know the truth about myself,
0:11:36 about life in general, about the world and this is a very deep human need because you
0:11:41 can never really be happy if you don’t know the truth about yourself.
0:11:49 You can be very, very powerful and ignorance is strength in many cases in life but it’s
0:11:55 not the way to real happiness and satisfaction and you see it.
0:12:01 You look at figures like Vladimir Putin or Benjamin Netanyahu who I know from Israel,
0:12:05 they are people obsessed with power and they are extremely powerful individuals and they
0:12:08 are not particularly happy individuals.
0:12:14 Well, part of the case you make in the book is that what these networks do is they privilege
0:12:17 order over truth.
0:12:18 Why is that?
0:12:22 They privilege order over truth, what do we gain by that?
0:12:24 Clearly we gain something or we wouldn’t do it.
0:12:30 Yes, this goes back to the issue of power, that there is a struggle for power in the
0:12:32 world, this is true.
0:12:38 Even though humans are interested in various things, when you come to build big networks
0:12:45 of cooperation, whether it’s armies or states or economic operations, there is a struggle
0:12:48 for power.
0:12:56 In this struggle, truth is important to some extent but order is more important and I think
0:13:02 this is the most fundamental mistake of the naive view of information which prevails in
0:13:08 places like Silicon Valley, that people think that in the market of ideas, in the market
0:13:18 of information, if one society is delusional and another society is committed to the facts,
0:13:22 then the facts will empower you, you will know more about the world, you will be able
0:13:26 to produce more powerful weapons so you will win.
0:13:32 So adhering to the truth is a winning strategy even in terms of power.
0:13:39 And this is a mistake because if you think for instance about producing an atom bomb,
0:13:45 to produce an atom bomb you do need to know some facts about the world, about physics.
0:13:49 If you ignore the facts of physics, your bomb will not explode.
0:13:56 But to build an atom bomb, you need something else, you need order because a single physicist
0:14:02 who knows nuclear physics well cannot build an atom bomb.
0:14:08 You need millions of people to cooperate with you, you need miners to mine uranium, you
0:14:15 need engineers and builders to build the reactor, you need farmers to grow potatoes and rice
0:14:19 so all the engineers and physicists will have something to eat.
0:14:23 And how do you get these millions of people to cooperate on this project?
0:14:28 And if you just tell them facts about physics, this wouldn’t motivate anybody.
0:14:29 What do they need?
0:14:30 A story?
0:14:31 They need a story.
0:14:37 And it’s easier to motivate them with a fictional story than with the truth.
0:14:44 And it’s the people who master these mythologies and theologies and ideologies who give the
0:14:49 orders to the nuclear physicists in the end.
0:14:57 But when you want to build an ideology that will inspire millions of people, a commitment
0:14:59 to the facts is not so important.
0:15:06 You can ignore the facts and still your ideology will explode in a big bang.
0:15:14 And this is why over and over again throughout history, we see that it’s not that
0:15:21 our information networks become better and better at understanding the facts of the world.
0:15:26 Yes, there is a process that we learn more about the world, but at the same time we also
0:15:33 learn how to construct more effective mythologies and ideologies and the way to do it is not
0:15:38 necessarily by adhering to the facts.
0:15:47 So the way we would typically analyze a catastrophic movement like Stalinism or Nazism, you can
0:15:50 look at it as an ideological phenomenon.
0:15:55 You can look at it in materialist terms, but you think you actually understand something
0:16:02 about the way a movement like that works by looking at it primarily as an information
0:16:09 network propelled by exceptionally delusional ideas, but an information network nevertheless.
0:16:13 What do we gain analytically by looking at a movement like that as an information network
0:16:17 as opposed to any of those other things I mentioned?
0:16:23 We tend to think about democracy and totalitarianism as different ethical movements.
0:16:26 They are committed to different ethical ideas.
0:16:28 And this is true, of course, to some extent.
0:16:35 But underneath it, you see a different structure of an information network.
0:16:41 Information flows differently in a totalitarian regime like the Stalinist Soviet Union and
0:16:43 in a democratic system.
0:16:50 Totalitarianism, dictatorships more generally, they are a centralized information system.
0:16:56 All the information flows to one center where all the decisions are being made.
0:16:59 And it also lacks any self-correcting mechanisms.
0:17:07 There is no mechanism that, if Stalin makes a mistake, can
0:17:10 identify and correct that mistake.
0:17:16 Democracy on the other hand, it’s a distributed information system, and not all the information
0:17:22 flows to the center because in a democracy, it’s not that the government elected by the
0:17:24 majority makes all the decisions.
0:17:31 No, the ideal is that you give as much autonomy as possible to various individuals, organizations, communities,
0:17:33 private businesses, and so forth.
0:17:37 Only certain decisions must be made centrally.
0:17:42 Like whether to go to war or make peace, you cannot let every community make up its own mind.
0:17:44 So this is made centrally.
0:17:49 But even in these cases, where the information flows to the center, where the decisions are
0:17:55 being made, you have self-correcting mechanisms that can identify and correct mistakes.
0:18:02 The most obvious mechanism is elections, that every few years, you try something, and if
0:18:07 it doesn’t work, if we think it’s not bringing good results, we can correct it by replacing
0:18:13 them with another party or another politician after a couple of years, which you can’t do
0:18:14 in a dictatorship.
0:18:17 In a dictatorship, you can’t say, “Oh, we made a mistake.
0:18:18 Let’s try somebody else.
0:18:20 Let’s try something else.”
0:18:25 So this is the essential difference between the way that information functions in a dictatorship
0:18:28 and in a democracy.
0:18:32 So do you think of information networks as a bit of a double-edged sword?
0:18:40 On the one hand, they make mass cooperation possible, but on the other hand, if they’re
0:18:46 poorly designed, they engineer reliably catastrophic outcomes.
0:18:52 Yeah, and mass cooperation means enormous power, but it can be used for good or ill.
0:19:01 You can use it to create a healthcare system that takes care of the medical problems of
0:19:08 entire populations, and you can use it to create a police state that surveys and punishes
0:19:09 the entire population.
0:19:14 It’s basically the same: both for a healthcare system and
0:19:21 for a secret police, you need to amass enormous amounts of information and to analyze it.
0:19:23 What kind of information?
0:19:24 What do you do with it?
0:19:28 That’s a different question, but the key thing is to understand history, not just in terms
0:19:36 of ideologies and political ideas, but in terms of the underlying flow and structure of information.
0:19:43 If we go back, let’s say, 5,000 years to one of the first crucial information revolutions,
0:19:47 we can see how information technology shapes history.
0:19:50 Think about the invention of writing.
0:19:56 The invention of writing, in technological terms, is extremely simple.
0:20:02 In ancient Mesopotamia, people discovered that you can take clay tablets, and clay is
0:20:04 basically just mud.
0:20:10 You take mud, and you take a stick, and you imprint certain signs in the mud, and you
0:20:15 get a written document, and you can write all kinds of things there.
0:20:17 The technology is extremely simple.
0:20:22 Of course, the key is finding the right code, but the technology itself, you just need mud
0:20:29 and a stick, but it had an enormous impact on the shape of human societies.
0:20:31 How does it work?
0:20:33 Let’s think about something like ownership.
0:20:36 What does it mean to own something?
0:20:38 Like you own a field.
0:20:40 What does it mean that I own a field?
0:20:42 This field is mine.
0:20:50 Before writing, if you live in a small Mesopotamian village like 7,000 years ago, it means that
0:20:55 your neighbors agree that this is your field.
0:20:57 Ownership is a communal affair.
0:21:05 You have a field that means that your neighbors don’t bring their goats there and don’t pick
0:21:08 fruits there without your permission.
0:21:11 So ownership is a community affair.
0:21:15 This limits your autonomy and power as an individual.
0:21:21 You can’t sell your field to somebody else without the agreement of the community because
0:21:25 ownership is a matter of communal agreement.
0:21:30 And similarly, it’s very difficult for a distant authority, like a king living in the
0:21:38 capital city hundreds of kilometers away, to know who owns what and to levy taxes.
0:21:43 Because how can the king know who owns each field in hundreds of remote villages?
0:21:45 It’s impossible.
0:21:50 Then writing came along and changed the meaning of ownership.
0:21:59 Now, owning a field means that there is a piece of dry mud with certain signs on it,
0:22:04 which says that this field is mine, a document.
0:22:11 And this decreases the power of the local community and empowers on the one side individuals
0:22:14 and on the other side the king.
0:22:20 So the fact that ownership is now this dry piece of mud, I can take this dry piece of
0:22:25 mud and give it to you in exchange for a herd of goats.
0:22:31 And I don’t care and you don’t care what the neighbors say because ownership now is this
0:22:32 document.
0:22:37 And this means that now I have greater power over my property.
0:22:45 But it also means that now the king in the distant capital, he can know who owns what
0:22:51 in the entire kingdom because he collects all these dry pieces of mud in something called
0:22:56 an archive and he builds a centralized bureaucracy.
0:23:02 And the bureaucrat sitting in the capital city can know who owns which fields in distant
0:23:10 villages just by looking at these dry pieces of mud and he can start levying taxes.
0:23:15 So what we see with the invention of writing, again, it’s a complex mechanism.
0:23:17 It’s not one-sided.
0:23:20 The community becomes less important.
0:23:26 Individual rights, property rights, they become more important, but also centralized authority.
0:23:33 And this is the moment in history in which we see the rise of central authoritarian systems.
0:23:39 Kingdoms and then empires ruled by kings and tyrants and emperors.
0:23:56 It was not possible without these dry pieces of mud.
0:23:59 Support for The Gray Area comes from Mint Mobile.
0:24:03 Getting a great deal usually comes with some strings attached.
0:24:07 Maybe that enticing price only applies to the first few weeks or you have to physically
0:24:13 mail a rebate form to Missouri or there is minuscule fine print that reads we’re lying,
0:24:14 there’s no deal, we got you.
0:24:17 Well, Mint Mobile offers deals with no strings attached.
0:24:21 When they say you’ll pay $15 a month when you purchase a three-month plan, they mean
0:24:22 it.
0:24:26 All MetMobile plans come with high-speed data and unlimited talk and text delivered on the
0:24:29 nation’s largest 5G network.
0:24:32 Mint says you can even keep your phone, your contacts, and your number.
0:24:35 It doesn’t get much easier than that, folks.
0:24:39 To get this new customer offer and your new three-month premium wireless plan for just
0:24:43 $15 a month, you can go to mintmobile.com/grayarea.
0:24:46 That’s mintmobile.com/grayarea.
0:24:51 You can cut your wireless bill to $15 a month at mintmobile.com/grayarea.
0:24:55 $45 upfront payment required equivalent to $15 a month.
0:24:58 New customers on first three-month plan only.
0:25:01 Speed slower above 40 gigabytes on unlimited plan.
0:25:04 Additional taxes, fees, and restrictions apply.
0:25:19 See Mint Mobile for details.
0:25:28 The newest revolutionary technology is AI, which I think, in the long run, will probably
0:25:30 be more transformative than even–
0:25:31 Than mud.
0:25:37 And writing, and maybe everything else when it’s all said and done.
0:25:45 But what makes AI for you a fundamentally different kind of information network?
0:25:49 And what is it about that uniqueness that concerns you?
0:25:52 So let’s start maybe with a story.
0:25:57 So we were in ancient Mesopotamia, what is today Iraq, 5,000 years ago.
0:26:02 Now let’s move to a neighboring country, Iran, today.
0:26:07 Like a scene today on the street in Isfahan or Tehran.
0:26:13 In Iran, they have what is known as the hijab laws, that women, when they go in public,
0:26:16 they must cover their hair.
0:26:20 And this goes back to the humanist revolution in 1979.
0:26:28 But for many years, the Iranian regime had difficulty imposing the hijab laws on a relatively
0:26:31 unwilling population.
0:26:36 Because to make sure that each woman when she goes out in the street or drives her car
0:26:44 is wearing a hijab, you need to place a policeman on every street, and you don’t have so many
0:26:45 policemen.
0:26:50 And then it also causes a lot of friction, because the policeman needs to arrest the
0:26:57 woman and there could be shouting and there could be altercations, it’s a lot of friction.
0:26:59 And then AI came along.
0:27:06 And what happens today in Iran is that you don’t need these kind of morality police on
0:27:09 every street and intersection.
0:27:17 You have cameras, millions and millions of cameras, with facial recognition software,
0:27:25 which can automatically identify a woman, even by name, as she drives
0:27:26 in her car.
0:27:30 She’s in her own private car with the windows shut.
0:27:37 And still the camera can identify that you are driving now in a public sphere with your
0:27:39 hair uncovered.
0:27:45 And the AI can immediately also pull out your personal details, your phone numbers,
0:27:51 and send you an SMS message that you have just violated the hijab law.
0:27:52 You must stop your car.
0:27:54 Your car is impounded.
0:27:57 And this happens every day.
0:28:02 It’s not a science fiction Hollywoodian scenario of some dystopian future in 100 years.
0:28:03 It’s a reality right now.
0:28:04 This is just the beginning too.
0:28:06 And this is just the beginning.
0:28:13 The AI revolutionizes almost everything it touches, including surveillance.
0:28:21 Previously, totalitarian regimes, they were limited by the need to rely on human agents.
0:28:27 If you are Stalin or Khomeini or Hitler, and you want to follow every citizen 24 hours
0:28:29 a day, you can’t.
0:28:35 Because again, if you are Stalin and you have 200 million citizens in the Soviet Union,
0:28:40 how do you get 400 million KGB agents to follow everybody?
0:28:41 Not enough agents.
0:28:46 And even if you have all these agents, who’s going to analyze all the data they accumulate?
0:28:52 So even in Stalin’s Soviet Union, privacy was still the default for most of the day.
0:28:58 But this is over because AIs can fulfill these tasks.
0:29:02 You can follow with cameras and drones and smartphones and computers.
0:29:12 You can follow everybody all the time and analyze the oceans of data this produces to
0:29:16 monitor and to police a population.
0:29:18 And this is just one example.
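[Editor’s note: a minimal Python sketch of the automated enforcement loop described above (camera frame, compliance check, face match, identity lookup, automated SMS). Every function, name, and record here is a hypothetical placeholder invented for illustration, not any real system’s API.]

```python
from dataclasses import dataclass

@dataclass
class Citizen:
    name: str
    phone: str

# Stand-in identity registry; in the system described this would be a
# national ID database linked to face embeddings. Entirely fabricated here.
REGISTRY = {"face_42": Citizen(name="Jane Doe", phone="+98-000-0000")}

def match_face(frame: bytes) -> str:
    """Placeholder for a facial-recognition model; returns a registry key."""
    return "face_42"

def hair_uncovered(frame: bytes) -> bool:
    """Placeholder for a vision classifier checking compliance."""
    return True

def send_sms(phone: str, text: str) -> None:
    """Placeholder for an SMS gateway; here it just prints."""
    print(f"SMS to {phone}: {text}")

def process_frame(frame: bytes) -> None:
    # The whole loop is automatic: detection, identification, and
    # punishment are chained with no human officer in between.
    if hair_uncovered(frame):
        citizen = REGISTRY.get(match_face(frame))
        if citizen is not None:
            send_sms(citizen.phone,
                     "You have violated the hijab law. Stop your car; it is impounded.")

process_frame(b"raw camera frame")
```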
0:29:23 The key thing is that AI is not a tool.
0:29:25 It is an agent.
0:29:26 It’s an active agent.
0:29:28 It can make decisions by itself.
0:29:32 It can even create new ideas by itself.
0:29:39 When we produce AI, we are not just producing more tools like printing presses or atom bombs
0:29:41 or spaceships.
0:29:47 We are creating new types of non-human agents.
0:29:52 And it’s not like a Hollywoodian scenario that you have one big supercomputer that now
0:29:54 tries to take over the world.
0:29:55 No.
0:30:01 We need to think about it as millions and millions of AI agents, AI bureaucrats, AI
0:30:10 soldiers, AI policemen, AI bank managers and school managers and so forth that constantly
0:30:12 watch us and make decisions about us.
0:30:18 Well, to go back to the point you were making earlier about how rudimentary a technology
0:30:24 like writing was or even the printing press or television or radio.
0:30:28 By comparison, AI is infinitely more complicated.
0:30:30 And for that reason, unpredictable.
0:30:33 I mean, can we even imagine?
0:30:36 Can we even anticipate where this might go?
0:30:41 When you consider how transformative writing was, do we have any inkling of how transformative
0:30:43 and how uprooting this might be?
0:30:50 No, because the very nature of AI is that it is unpredictable.
0:30:56 If humans can predict everything it’s going to do, it is not an AI.
0:30:59 You know, there is a lot of hype nowadays about AI.
0:31:07 So people, especially when they want to sell you something, they paste the label AI on everything.
0:31:13 Like this is AI water and this is AI air and this is an AI table and so forth.
0:31:14 So what is AI?
0:31:22 AI is something, a machine that can learn and change by itself.
0:31:29 It is initially created by humans, but what humans give it is the ability to learn and
0:31:31 change by itself.
0:31:33 So uncontrollable by definition.
0:31:37 Exactly. It’s therefore uncontrollable by definition.
0:31:41 If you can predict and control everything it will do down the line, it’s not an AI.
0:31:48 It’s just an automatic machine, which we’ve had for decades and generations.
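[Editor’s note: a toy Python sketch of the distinction drawn here between an “automatic machine” and a machine that learns and changes by itself. Both examples are invented for illustration; the point is only that the learner’s eventual behavior depends on data its author never saw, so the author cannot fully predict it.]

```python
# Automatic machine: behavior fixed forever by its author, fully predictable.
def thermostat(temp_celsius: float) -> str:
    return "heat on" if temp_celsius < 20.0 else "heat off"

# Machine that learns and changes by itself: its internal state, and hence
# its output, depends on data its author never saw.
class OnlineMeanPredictor:
    def __init__(self) -> None:
        self.count = 0
        self.mean = 0.0

    def update(self, observation: float) -> None:
        # Incremental mean update: the state drifts with whatever data
        # the world happens to feed it.
        self.count += 1
        self.mean += (observation - self.mean) / self.count

    def predict(self) -> float:
        return self.mean

predictor = OnlineMeanPredictor()
for reading in [18.0, 22.0, 25.0]:  # stand-in for an unforeseen data stream
    predictor.update(reading)

print(thermostat(19.0))     # always "heat on" for 19.0: predictable
print(predictor.predict())  # depends entirely on the data it has seen
```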
0:31:49 Why do we keep doing this?
0:31:56 Why do human beings keep building things that we do not understand?
0:31:59 Where does this drive to summon forces that we can’t control come from?
0:32:04 I mean, maybe this is the stuff of religion, but you’re here, so I’m asking.
0:32:05 Yeah.
0:32:11 Now, first of all, it should be said that there is enormous positive potential in AI.
0:32:16 My job as a historian and a philosopher is to talk mainly about the dangers because you
0:32:22 hear so much about the positive potential from the entrepreneurs and the business people
0:32:23 who develop it.
0:32:26 But yes, it should be very clear that there is enormous positive potential.
0:32:32 AI can create the best healthcare system in history, the best education system in history,
0:32:40 this ability to understand human beings and to come up with ideas that we didn’t think
0:32:41 about.
0:32:43 It is potentially good.
0:32:49 Like it can invent new kinds of medicines that no human doctor ever thought about.
0:32:51 So there is this attraction.
0:32:59 And we also have now this arms race situation, where the people who develop the technology,
0:33:02 they understand many of the dangers.
0:33:07 They understand them better than almost anybody else.
0:33:13 But they are caught in this arms race that they say, “I know it’s dangerous, but if
0:33:21 I slow down and my competitor, either the other corporation or the other country, if
0:33:26 they don’t slow down, they keep going as fast as they can, and I slow down, I will be left
0:33:32 behind and they will win the race, and then they will control the world and will be able
0:33:34 to decide what to do with AI.
0:33:35 And I’m a good guy.
0:33:37 I’m aware of the dangers.
0:33:40 So it’s good if I win.
0:33:44 And then I can make responsible decisions about what to do with this technology.”
0:33:47 And this is a story everybody tells themselves.
0:33:52 Elon Musk says it, and Sam Altman says it, the United States says it, China says it.
0:33:57 Everybody says that we know it’s dangerous, but we can’t slow down because the other side
0:34:00 won’t slow down.
0:34:05 You said earlier, sapiens are the smartest and stupidest of all the animals.
0:34:11 Maybe it’s just a law of nature that intelligence and self-destruction at a certain level just
0:34:13 go hand in hand.
0:34:16 Maybe we’re living through that.
0:34:24 On one level, intelligence creates power and lots of animals can do self-destructive things,
0:34:29 but if you’re a rat or you’re a raccoon and you do something self-destructive, the damage
0:34:30 will be limited.
0:34:33 Rats don’t have labor camps and atomic bombs.
0:34:34 Yeah.
0:34:39 But when we’re talking about AI, we tend to talk about the political and economic impacts.
0:34:47 But in the book, you also touch on the potential cultural and even spiritual impacts of this
0:34:55 technology, that a world of AI is going to give rise to new identities, new ways of being
0:34:56 in the world.
0:35:02 And that might unleash all kinds of competition over not just how to organize society, but
0:35:05 what it means to be in the world as a human being.
0:35:10 I mean, can we even begin to imagine the direction that might go?
0:35:17 Not really, because until today, all of human culture was created by human minds.
0:35:25 We live inside culture, everything that happens to us, we experience it through the mediation
0:35:34 of cultural products, mythologies, ideologies, artifacts, songs, plays, TV series, we live
0:35:38 cocooned inside this cultural universe.
0:35:44 And until today, everything, all the tools, all the poems, all the TV series, all the
0:35:50 mythologies, they are the product of organic human minds.
0:35:59 And now, increasingly, they will be the product of inorganic AI intelligences, alien intelligences.
0:36:06 Again, AI, the acronym AI, traditionally stood for artificial intelligence, but it should
0:36:11 actually stand for alien intelligence, alien, not in the sense that coming from outer space.
0:36:20 Alien in the sense that it’s very, very different from the way that humans think and make decisions.
0:36:22 Because it’s not organic.
0:36:27 To give you a concrete example, one of the key moments in the AI revolution, maybe
0:36:35 eight years ago, the aha moment for a lot of governments and militaries around the world,
0:36:40 was when AlphaGo defeated Lee Sedol in a Go tournament.
0:36:46 Now, Go is a strategy board game, like chess, but much more complicated, invented in ancient
0:36:52 China, and it has been considered, not only in China, also in Korea, in Japan, one of
0:36:59 the basic arts that every civilized person should know.
0:37:04 If you’re a Chinese gentleman in the Middle Ages, you know calligraphy, and you know how
0:37:08 to play some music, and you know how to play Go.
0:37:14 Entire philosophies developed around the game, which was seen as a mirror for life and for
0:37:16 politics.
0:37:25 And then, an AI program, AlphaGo, in 2016, taught itself how to play Go.
0:37:29 And it defeated, it crushed, the human world champion.
0:37:32 But what is most interesting is the way it did it.
0:37:40 It deployed a strategy, which when it first played it, all the experts said, “What is
0:37:41 this nonsense?
0:37:44 Nobody plays Go like that!”
0:37:47 And it turned out to be brilliant.
0:37:55 Tens of millions of humans played this game, and now we know that they explored only a
0:37:58 very small part of the landscape of Go.
0:38:05 If you imagine all the ways to play Go as a kind of geography, a planet.
0:38:11 So humans were stuck on one island, and they thought this is the whole planet of Go.
0:38:17 And then AI came along, and within a few weeks, it discovered new continents.
0:38:24 And now also humans play Go very differently than they played it before 2016.
0:38:28 Now you can say this is not important, this is just, you know, a game.
0:38:32 But the same thing is likely to happen in more and more fields.
0:38:37 If you think about finance, so finance is also an art.
0:38:43 The entire financial structure that we know is based on the human imagination.
0:38:49 The history of finance is the history of humans inventing financial devices.
0:38:51 Money is a financial device.
0:38:57 Bonds, stocks, ETFs, CDOs, all these strange things that humans invent.
0:39:00 This is a product of human ingenuity.
0:39:07 And now AI comes along and starts inventing new financial devices that no human being
0:39:10 ever thought about, ever imagined.
0:39:16 So again, we were stuck on a small financial island, and now it’s getting bigger and bigger.
0:39:23 And what happens, for instance, if finance becomes so complicated because of these new
0:39:29 creations of AI, that no human being is able to understand finance anymore?
0:39:34 I mean, even today, how many people really understand the financial system?
0:39:36 Less than one percent.
0:39:42 In 10 years, the number of people who understand the financial system could be exactly zero.
0:39:47 Because, you know, the financial system is the ideal playground for AI.
0:39:52 Because it’s a world of pure information and mathematics.
0:39:57 AI still has difficulty dealing with the physical world outside.
0:40:01 This is why every year they tell us, Elon Musk tells us, “Next year you will have fully
0:40:03 autonomous cars on the road.”
0:40:05 And it doesn’t happen.
0:40:06 Why?
0:40:10 Because to drive a car, you need to interact with the physical world and the messy world
0:40:15 of traffic in New York with all the construction and pedestrians and whatever.
0:40:16 Very difficult.
0:40:18 Finance, much easier.
0:40:20 Just numbers.
0:40:23 And what happens
0:40:30 if, in this informational realm where AI is a native and we are the aliens, the
0:40:36 immigrants, it creates such sophisticated financial devices and mechanisms that nobody
0:40:37 understands?
0:40:41 If a handful of banks could produce 2008, what could AI do?
0:40:42 Exactly.
0:40:49 2008 happened originally because of these new financial devices like CDOs, collateralized debt obligations,
0:40:53 that a few wizards on Wall Street invented.
0:40:57 Nobody understood them, not even the regulators, so they were not regulated properly.
0:41:02 For a couple of years, everything seemed okay, at least some people were making billions
0:41:05 out of them, and then everything collapsed.
0:41:12 The same thing can happen on a much, much larger scale as AI takes over finance.
0:41:20 So when you look at the world now and project out into the future, is that what you see?
0:41:28 Societies becoming trapped in these incredibly powerful, but very poorly designed information
0:41:33 networks, and I say AI is poorly designed precisely because it doesn’t really have
0:41:36 any course-correction mechanisms.
0:41:37 It’s up to us.
0:41:39 It’s not deterministic.
0:41:41 It’s not inevitable.
0:41:48 We need to be much more careful and thoughtful about how we design these things.
0:41:54 Again, understanding that they are not tools, they are agents, and therefore down the road
0:41:59 are very likely to get out of our control if we are not careful about them.
0:42:04 And it’s not that you have a single supercomputer that tries to take over the world.
0:42:11 You have these millions of AI bureaucrats in schools, in factories, everywhere making
0:42:18 decisions about us in ways that we do not understand.
0:42:23 Democracy is to a large extent about accountability.
0:42:27 Accountability depends on the ability to understand decisions.
0:42:32 If more and more of the decisions in society, like you apply to a bank to get a loan, and
0:42:37 the bank tells you no, and you ask why not, and the bank says we don’t know.
0:42:43 The algorithm went over all the data and decided not to give you a loan, and we just trust
0:42:44 our algorithm.
0:42:47 This to a large extent is the end of democracy.
0:42:52 You can still have elections and choose whichever human you want, but if humans are no longer
0:42:57 able to understand these basic decisions about their lives, why didn’t you give me a loan,
0:43:00 then there is no longer accountability.
0:43:05 You say we still have control over these things, but for how long?
0:43:06 What is that threshold?
0:43:08 What is the event horizon?
0:43:11 Will we even know it when we cross it?
0:43:13 Nobody knows for sure.
0:43:17 It’s moving faster than I think almost anybody expected.
0:43:22 Could be three years, could be five years, could be 10 years, but I don’t think that
0:43:23 much more than that.
0:43:25 That’s not much.
0:44:30 Again, think about it from a cosmic perspective.
0:43:38 We are the product as human beings of four billion years of organic evolution.
0:43:42 Organic evolution, as far as we know, began on planet Earth four billion years ago with
0:43:48 these tiny microorganisms, and it took billions of years for the evolution of multicellular
0:43:55 organisms and reptiles and mammals and apes and humans.
0:44:03 Digital evolution, non-organic evolution is millions of times faster than organic evolution.
0:44:10 We are now at the beginning of a new evolutionary process that might last thousands and even
0:44:13 millions of years.
0:44:20 The AIs we know today in 2024, ChatGPT and all that, they are just the amoebas of the
0:44:22 AI evolutionary process.
0:44:23 They are just the amoebas.
0:44:25 That’s not very comforting.
0:44:29 What would the AI T-rex look like?
0:44:37 The thing is that the AI T-rex is not billions of years in the future, maybe it’s just 20 years
0:44:38 in the future.
0:44:44 Because, again, another key thing about the big struggle on planet Earth
0:44:52 right now is that after four billion years of organic life, we now have a new kind of
0:44:57 entity, a new kind of agent on the planet, which is inorganic.
0:45:03 And inorganic entities, they don’t live by cycles like us.
0:45:08 We live day and night, winter and summer, growth and decay.
0:45:11 Sometimes we are active, sometimes we need to sleep.
0:45:13 AIs don’t need to sleep.
0:45:20 They are always on, they are tireless, they are relentless, and they increasingly control
0:45:21 the world.
0:45:28 Now a big question is, as organic beings who need to rest sometimes, what happens when
0:45:33 we are controlled by agents who never need to rest?
0:45:37 Even Stalin’s KGB agents, they needed to sleep sometime.
0:45:42 The police cameras in Iran, they never sleep.
0:45:47 If you think about the news cycle, if you think about the market, Wall Street.
0:45:53 And a curious thing, an important fact about Wall Street, Wall Street is not always on.
0:45:59 It’s open Mondays to Fridays, 9:30 in the morning to four o’clock in the afternoon.
0:46:06 If a new war in the Middle East erupts at five minutes past four on a Friday, Wall Street
0:46:11 will be able to react only on Monday morning because it’s off for the weekend.
0:46:16 And this is a good thing because organic entities need to rest.
0:46:25 Now what happens when the markets are taken over, are run by tireless, relentless AIs?
0:46:29 What happens to human bankers, to human investors?
0:46:32 They also need to be on all the time.
0:46:38 What happens to human politicians, to journalists who need to be on all the time?
0:46:58 If you keep an organic entity on all the time, it eventually collapses and dies.
0:47:00 Support for The Gray Area comes from Shopify.
0:47:05 When it comes to building a sustainable company, how you sell your product is just as important
0:47:07 as what you’re selling.
0:47:11 Because even the most incredible French press machine or that in-office putting green won’t
0:47:16 reach customers if the buying process is complicated, buggy or broken.
0:47:20 Shopify offers a set-it-and-forget-it solution to sales that smart businesses are turning
0:47:22 to every single day.
0:47:25 Shopify is an all-in-one digital commerce platform that may help your business sell
0:47:27 better than ever before.
0:47:31 Their shop pay feature may convert more customers and end those abandoned shopping carts for
0:47:32 good.
0:47:36 There’s a reason companies like Allbirds turn to Shopify to sell more products to more
0:47:41 customers, whether they’re online, in a brick-and-mortar shop or on social media.
0:47:43 Businesses that sell more sell with Shopify.
0:47:47 You can upgrade your business and get the same checkout Allbirds uses with Shopify.
0:47:53 You can sign up for your $1 per month trial period at Shopify.com/Vox.
0:47:59 Just go to Shopify.com/Vox to upgrade your selling today.
0:48:04 Support for The Gray Area comes from Greenlight.
0:48:09 The school year is already underway and you’ve probably wrapped up all your back-to-school
0:48:10 shopping.
0:48:14 Which means it’s time to kick back and pretend like you remember how to do algebra when
0:48:15 your kid needs help with homework.
0:48:19 But if you want your child to do more learning outside the classroom that will help later
0:48:21 on, then you might want to try Greenlight.
0:48:26 It can help teach your kids about money and not just the adding and subtracting, but how
0:48:27 to manage it.
0:48:30 Greenlight is a debit card and money app for families.
0:48:35 Parents can keep an eye on kids’ spending and money habits and kids learn how to save, invest,
0:48:36 and spend wisely.
0:48:41 And with a Greenlight Infinity plan, you get even more financial literacy resources and
0:48:44 teens can check in thanks to family location sharing.
0:48:49 My kid’s a bit too young for this, but I’ve got a colleague here at Vox who uses it with
0:48:51 his two boys and he loves it.
0:48:55 You can join the millions of parents and kids who use Greenlight to navigate life together.
0:49:01 You can sign up for Greenlight today and get your first month free when you go to greenlight.com/grayarea.
0:49:20 That’s greenlight.com/grayarea to try Greenlight for free, greenlight.com/grayarea.
0:49:24 You’ve been thinking and writing about AI for several years now.
0:49:31 My sense is that you’ve become more, not less worried about where we’re going.
0:49:32 Am I reading you right?
0:49:34 Yes, because it’s accelerating.
0:49:40 When I published “Homo Deus” in 2016, all this sounded like abstract philosophical
0:49:47 musings about something that might happen generations or centuries in the future.
0:49:49 And now it’s extremely urgent.
0:49:53 And again, I don’t think it can be said enough.
0:49:58 You also talk to a lot of people who work in Silicon Valley, people who work on AI.
0:50:01 This is the consensus view among them as well.
0:50:07 They are keenly aware how combustible this is, but they can’t help but continue on,
0:50:11 which says something about the insanity and the power of our systems.
0:50:14 There are two very, very strong motivations there.
0:50:20 On the one hand, they are very concerned, but they are concerned that the bad guys will
0:50:22 get there first.
0:50:29 Like they naturally see themselves as the good guys and they say, “This is coming.”
0:50:34 The biggest thing, not just in human history, the biggest thing in evolution since the beginning
0:50:37 of life is coming.
0:50:39 Who do you want to be in control?
0:50:44 Do you want Putin to be in control or do you want me, a good guy, to be in control?
0:50:48 So obviously we need to move faster to beat them.
0:50:55 And then there is, of course, the other attraction that this is the biggest thing maybe since
0:50:56 the beginning of life.
0:51:01 If you think about the timeline of the universe, as far as we know it today, so you have the
0:51:08 Big Bang 13 billion years ago, then nothing much happens until four billion years ago
0:51:12 life emerges on planet Earth, the next big thing.
0:51:15 And then for four billion years, nothing much happens.
0:51:18 It’s all the same organic stuff.
0:51:22 So you have amoebas and you have dinosaurs and you have homo sapiens, but it’s the same
0:51:24 basic organic stuff.
0:51:30 And then you have Elon Musk or Sam Altman or whoever it is going to be,
0:51:37 and the start of a new evolutionary process of inorganic lifeforms that could spread very
0:51:43 quickly from planet Earth to colonize Mars and Jupiter and other galaxies.
0:51:48 Because again, as organic entities, it will be very, very difficult for us to leave planet
0:51:49 Earth.
0:51:52 But for AI, much, much easier.
0:52:00 So if ever an earthly civilization is going to colonize the galaxy, it will not be a human
0:52:02 or an organic civilization.
0:52:09 It’s likely to be an inorganic civilization and to think that I can be the person who
0:52:12 kind of starts the whole thing.
0:52:20 So this God complex, I think, is also very, very prevalent, not just in Silicon Valley,
0:52:25 also in China and other places where this technology is being developed.
0:52:28 And this is an explosive mix.
0:52:36 A question you ask in the book is whether democracies are compatible with these 21st
0:52:40 century information networks.
0:52:41 What’s your answer?
0:52:43 Depends on our decisions.
0:52:44 What do you mean?
0:52:49 First of all, we need to realize that information technology is not something on the side, as if
0:52:51 you have democracy
0:52:55 and then, on the side, you have information technology.
0:53:00 No, information technology is the foundation of democracy.
0:53:05 Democracy is built on top of the flow of information.
0:53:13 For most of history, there was no possibility of creating large-scale democratic structures
0:53:17 because the information technology was missing.
0:53:22 Democracy, as we said, is basically a conversation between a lot of people.
0:53:28 And in a small tribe or a small city-state thousands of years ago, you could get the
0:53:34 entire population, a large percentage of the population, let’s say, of ancient Athens,
0:53:39 in the city square to decide whether to go to war with Sparta or not.
0:53:43 It was technically feasible to hold a conversation.
0:53:50 But there was no way that millions of people spread over thousands of kilometers could talk
0:53:51 to each other.
0:53:54 and hold the conversation in real time.
0:54:01 Therefore, you have not a single example of a large-scale democracy in the pre-modern
0:54:02 world.
0:54:05 All the examples are very small scale.
0:54:10 Large-scale democracy becomes possible only after the rise of newspaper and telegraph
0:54:12 and radio and television.
0:54:18 And now you can have a conversation between millions of people spread over a large territory.
0:54:22 So democracy is built on top of information technology.
0:54:28 Every time there is a big change in information technology, there is an earthquake in democracy
0:54:30 which is built on top of it.
0:54:35 And this is what we are experiencing right now with social media algorithms and so forth.
0:54:38 It doesn’t mean it’s the end of democracy.
0:54:41 The question is, will democracy adapt?
0:54:43 And adaptation means regulation.
0:54:45 Well, that’s the problem, right?
0:54:49 As the technology gets more and more powerful, the lag time shrinks.
0:54:55 The time you have for that adaptation also shrinks.
0:54:58 I’m not sure we have enough time in this case.
0:55:04 Well, we’ll just have to do our best, but we have to try.
0:55:07 And I don’t see that we are trying hard enough.
0:55:14 You know, again, this kind of prevalent mood in places like Silicon Valley is that this
0:55:18 is not the time to slow down or to regulate.
0:55:22 We can do it later, they say. But we can’t.
0:55:28 The first thing they teach you when you learn how to drive a car is to press the brakes.
0:55:33 Only afterwards, they teach you how to press the fuel pedal, the accelerator.
0:55:36 And we are now learning how to drive AI.
0:55:40 And they teach us only how to press the accelerator.
0:55:43 Are we learning how to drive AI, or is AI learning how to drive us?
0:55:45 That’s actually more accurate.
0:55:51 But we are still, for a few more years, we are still in the driver’s seat.
0:55:54 It still has not slipped out of our control.
0:56:00 Do you think these technologies, and I’m including social media and smartphones here, have enabled
0:56:07 a level of group or herd or mass hysteria that maybe wasn’t possible before these technologies?
0:56:11 It was always possible, and I’ll give you an example.
0:56:13 Or at greater scales, I should say.
0:56:18 It’s important to understand what is different, because conspiracy theories and mass hysteria,
0:56:20 they are not new.
0:56:27 When print was invented, or print was brought to Europe in the 15th century, the result
0:56:29 was not a scientific revolution.
0:57:36 It was a wave of wars of religion and witch hunts, because most of the information
0:56:41 spread by the printing press was junk information and conspiracy theories and fake news and
0:56:42 so forth.
0:56:47 If you think about the Soviet Union in the 20th century, so one of the biggest conspiracy
0:56:54 theories and most remarkable conspiracy theories in the 20th century was the Doctors’ Plot.
0:57:01 Soviet Union, early 1950s, the regime comes up with a conspiracy theory that Jewish doctors
0:57:09 in the service of a Zionist imperialist conspiracy against the glorious Soviet Union are murdering
0:57:16 Soviet leaders, using their power as doctors to murder Soviet leaders.
0:57:21 This conspiracy theory is spread by the organs of the government, the newspapers, the radios,
0:57:23 and then it gets amplified.
0:57:31 It merges with age-old anti-Semitic conspiracy theories, and people start believing that Jewish
0:57:37 doctors are murdering not just Soviet leaders, they are murdering babies and children in
0:57:38 hospitals.
0:57:43 This is the old blood libel against Jews, and then it gets bigger, and people think they
0:57:49 are murdering everybody, like the Jewish doctors are trying to murder all Soviet citizens,
0:57:55 and because a large percentage of Soviet doctors were Jews, the final iteration of
0:58:02 this conspiracy theory, that’s 1952, 1953, is that doctors in general, there is a conspiracy
0:58:07 of doctors to kill the whole Soviet population to destroy the Soviet Union.
0:58:09 This is the famous Doctors’ Plot.
0:58:16 Now this sounds insane, but an entire country was gripped by hysteria that the doctors
0:58:18 are trying to kill us.
0:58:21 Now then came the real twist.
0:58:29 Stalin had a stroke in, I think it was May, 1953, and his bodyguards enter after a couple
0:58:30 of hours.
0:58:32 He doesn’t show up for lunch, for dinner, what’s happening?
0:58:39 So they eventually, hesitantly, enter his dacha, and he’s lying on the floor unconscious.
0:58:40 He had a stroke.
0:58:41 What to do?
0:58:47 Now usually there is a doctor around, Stalin’s personal physician, but his personal physician
0:58:53 was at that very moment being tortured in the basement of the Lubyanka prison because
0:58:57 they suspected that he was part of the Doctors’ Plot.
0:58:58 So what do we do?
0:58:59 Do we call a doctor?
0:59:04 So they call the Politburo members, and you have all these big shots, big wigs, Beria,
0:59:07 and Malenkov, and Khrushchev, they come to the dacha.
0:59:09 What do we do?
0:59:13 So eventually the danger passes because Stalin dies.
0:59:20 And this is one of the most sophisticated societies in human history, and it is gripped by this
0:59:25 mass hysteria that doctors are trying to kill everybody.
0:59:30 So this is not something new; when you look at the conspiracy theories today, they still have
0:59:32 a way to go.
0:59:37 So it’s not that the 2010s or the 2020s is the first time that people had this problem
0:59:41 with conspiracy theories, but the mechanism is different.
0:59:47 In the 1950s, it was initially driven by the decisions of human apparatchiks, the
0:59:54 bureaucrats in the Communist Party. Now it is driven by non-human algorithms.
0:59:57 Now the algorithm drops it on your uncle’s Facebook feed.
0:59:58 That’s different.
1:00:01 And again, the algorithms, they don’t care.
1:00:06 I mean, they don’t even understand the content of the conspiracy theory.
1:00:07 They only know one thing.
1:00:15 They were given a goal, engagement, user engagement, and they discover by trial and error on millions
1:00:20 of human guinea pigs, that if you show somebody a hate-filled conspiracy theory, it catches
1:00:25 their attention and they stay longer on the platform and they tell all their friends.
1:00:33 The new thing now is that this is not being done to us by human ideologues in the Communist
1:00:34 Party.
1:00:40 It’s being done by non-human agents.
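[Editor’s note: a toy Python sketch of the trial-and-error optimization described above, written as a simple epsilon-greedy bandit. The content labels and engagement numbers are invented; this illustrates the general mechanism, not any platform’s actual recommender.]

```python
import random

ARMS = ["cat video", "news report", "outrage conspiracy post"]

# Hidden average seconds of engagement per arm. The algorithm never sees
# this table; it only observes noisy samples, one user at a time.
TRUE_ENGAGEMENT = {"cat video": 30.0, "news report": 20.0,
                   "outrage conspiracy post": 90.0}

def observe_engagement(arm: str) -> float:
    """Simulated user session: noisy sample around the hidden mean."""
    return random.gauss(TRUE_ENGAGEMENT[arm], 5.0)

estimates = {arm: 0.0 for arm in ARMS}
counts = {arm: 0 for arm in ARMS}

for _ in range(1000):
    # Explore 10% of the time, otherwise exploit the best-looking arm.
    if random.random() < 0.1:
        arm = random.choice(ARMS)
    else:
        arm = max(ARMS, key=lambda a: estimates[a])
    reward = observe_engagement(arm)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

# The bandit converges on the inflammatory content without "understanding"
# it; it only knows that the engagement number is high.
print(max(ARMS, key=lambda a: estimates[a]))
```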
1:00:46 So in the more immediate term, what do you think are the greatest threats to democratic
1:00:47 societies in particular?
1:00:49 Is it the misinformation?
1:00:52 Is it the lack of privacy?
1:00:57 Is it the emergence of increasingly sophisticated algorithms that understand us better than
1:00:59 we understand ourselves?
1:01:00 Is it all the above?
1:01:03 How would you triage those threats?
1:01:09 I would focus on two problems, one very old, one quite new.
1:01:14 The new problem is that we are seeing the democratic conversation collapsing all over
1:01:15 the world.
1:01:19 Again, democracy is basically a conversation.
1:01:25 And what we see now in the US, in Israel, in Brazil, all over the world, the conversation
1:01:30 collapses in the sense that people can no longer listen to each other.
1:01:33 They can’t agree on the most basic facts.
1:01:37 They can’t have a reasoned debate anymore.
1:01:44 And you cannot have a democracy if you cannot have a reasoned debate between the citizens.
1:01:48 And in every country, they give you these unique explanations.
1:01:53 In the US, they will explain to you the unique situation of American society and politics
1:01:57 and the legacy of slavery and so forth and racism.
1:02:01 But then you go to Brazil, and they have their own explanations there.
1:02:03 And you go to Israel, and they have their explanations.
1:02:08 If it’s happening at the same time all over the world, it cannot be the result of these
1:02:10 specific causes.
1:02:12 It must be a universal cause.
1:02:14 And the universal cause is the technology.
1:02:18 Again, democracy is built on top of information technology.
1:02:22 We now have this immense revolution in information technology.
1:02:24 There is an earthquake in democracy.
1:02:26 We need to figure it out.
1:02:29 And nobody knows for sure what is happening.
1:02:36 But I would ask Zuckerberg and Elon Musk and all these people, you are the experts on
1:02:42 information technology, put everything else aside and explain to us what is happening.
1:02:46 It doesn’t matter if you support the Democrats or the Republicans or whatever.
1:02:50 Everybody can agree that the conversation is collapsing.
1:02:54 Explain to us, why is it that we have the most sophisticated information technology
1:02:58 in history that you created and we can’t talk with each other anymore?
1:02:59 What’s happening?
1:03:01 You’ve been in rooms with some of these people.
1:03:02 Did you ask them that question?
1:03:04 What did they say?
1:03:05 They evade the question.
1:03:07 They try to shift responsibility to somebody else.
1:03:09 Oh, we’re just a platform.
1:03:10 It’s the users.
1:03:11 It’s the government.
1:03:12 It’s this.
1:03:13 It’s that.
1:03:15 But this is what we need them to explain to us.
1:03:16 You’re the experts.
1:03:17 Tell us what is happening.
1:03:21 Because I think the one thing that Democrats and Republicans, for instance, in the US can
1:03:25 still agree on is that the conversation is collapsing.
1:03:27 So that’s the new thing.
1:03:34 The other danger to democracy is what happens if you give so much power to a small group
1:03:42 of people, or to one person, and they use this power not to pursue certain specific policies
1:03:50 but to pursue power itself, using the power of government to destroy democracy,
1:03:52 to destroy the checks and balances.
1:03:57 They use democracy to gain power and then use their power to destroy democracy.
1:03:59 We’ve seen it again and again in history.
1:04:06 Recently, in Venezuela, Chávez originally came to power in free and fair elections.
1:04:13 But then his movement used the power of the government to destroy the democratic checks
1:04:16 and balances, free courts, free media.
1:04:22 They appointed their own people to the elections committee, and now they have
1:04:23 elections.
1:04:28 Maduro lost big time, but they claim they won because they control all the levers
1:04:29 of power.
1:04:30 So you can’t get rid of them.
1:04:36 And we saw the same thing happening in Russia with Putin, and this is not new.
1:04:45 This goes back to ancient Greece: how do you make sure that you don’t elect to power
1:04:51 people who then focus on perpetuating their power?
1:04:53 There’s no safeguard for that.
1:04:54 That’s a built-in feature of democracy.
1:04:55 That’s a built-in feature.
1:04:59 It contains the seeds of its own destruction, always has, always will.
1:05:00 Yeah.
1:05:05 So again, what we see in mature democracies, like the United States, is the realization
1:05:09 that we cannot have just one safety mechanism.
1:05:15 We need several different self-correcting mechanisms, because if you have just one mechanism,
1:05:22 like elections, this will not be enough, because the government can use all its force to rig
1:05:24 the elections.
1:05:30 So you must have additional safety mechanisms, additional self-correcting mechanisms, like
1:05:33 a free media, like independent courts.
1:05:39 And what you see with the rise of these new authoritarian figures, like Chávez and Maduro
1:05:47 in Venezuela, like Putin, like Netanyahu, is that once they get to power, they systematically
1:05:52 go after these safety mechanisms, these self-correcting mechanisms.
1:05:55 They destroy the independence of the courts.
1:05:58 They fill the courts with their own loyalists.
1:06:02 They destroy the independence of media outlets.
1:06:06 They make the media the mouthpiece of the government.
1:06:11 And step by step, they destroy all these other mechanisms.
1:06:14 And then they don’t need to abolish the elections.
1:06:21 If you destroyed all the other safety measures, it’s very good for a dictator to actually keep
1:06:29 elections as a kind of dictatorial ceremony in which the dictator proves that he enjoys
1:06:31 the support of the people.
1:06:37 They always win these kinds of absurd majorities, like 70%, 80%, 99%.
1:06:38 So you still have elections.
1:06:40 You have elections in North Korea.
1:06:46 Like every four or five years, like clockwork, there are elections in North Korea, and you
1:06:51 have hundreds of newly elected delegates and representatives of the North Korean people.
1:06:55 And it’s just a ritual in a totalitarian regime.
1:07:00 Of course, those are sham elections, but there’s also no law of political nature that says
1:07:04 a democratic public cannot vote itself out of existence.
1:07:05 That’s happened before.
1:07:06 It’ll happen again.
1:07:09 And it seems much more likely to happen if you have a population drunk on algorithmic
1:07:10 news feeds.
1:07:11 Yeah.
1:07:18 And because democracy is a conversation, the key issue is: what are the main issues people
1:07:19 are talking about?
1:07:20 It’s even before the answers.
1:07:24 It’s what are the things people talk about?
1:07:26 Do they talk about climate change or immigration?
1:07:31 Do they talk about AI or gun control or abortion rights?
1:07:33 What do they talk about?
1:07:40 Very often in political strategies, the key thing people say is to change the conversation.
1:07:42 We need to make people stop talking about this.
1:07:46 We have a problem in this area, so we don’t want people to even think about this.
1:07:48 Let’s talk about something else.
1:07:53 And today, the kingmakers in this arena are no longer humans.
1:07:54 They are the algorithms.
1:08:01 They decide what are the main issues of the day because they are so good at capturing
1:08:02 human attention.
1:08:10 Again, they experimented on billions of human guinea pigs over the last 10 or 15 years,
1:08:16 and they became very, very good at pressing our emotional buttons and capturing
1:08:40 our attention.
1:08:44 You know what would make that easier?
1:08:47 A closet full of cool and comfortable clothes.
1:08:51 Bombas can help with quality basics you have to feel to believe.
1:08:55 Bombas offers incredibly comfortable essentials like socks, underwear, and buttery smooth
1:08:58 t-shirts you’ll want to wear every day.
1:09:02 They just released a whole bunch of playful new colors for fall and sweat wicking performance
1:09:06 socks ready for your next workout or leaf pile cannonball.
1:09:12 I’ve tried Bombas myself and I gotta say I’ve been rocking these socks for almost a year
1:09:13 now.
1:09:17 At first, they were my go-to workout socks, but now I just wear them all the time.
1:09:21 They keep my feet cool in the summer and they’re just more comfortable than everything else
1:09:22 I’ve got.
1:09:27 Plus, for every item you purchase, Bombas donates one to someone experiencing housing
1:09:28 insecurity.
1:09:30 Ready to get comfy and give back?
1:09:35 You can head over to bombas.com/grayarea and use code “grayarea” for 20% off your first
1:09:36 purchase.
1:09:44 That’s b-o-m-b-a-s.com/grayarea and use code “grayarea” at checkout.
1:09:47 Support for the gray area comes from Indeed.
1:09:50 Searching for anything takes time and energy.
1:09:53 There’s a reason Steve Jobs wore the same outfit every single day.
1:09:58 It’s boring, but it’s also a lot easier than hunting for the perfect outfit each morning.
1:10:02 But when it comes to finding a great candidate for your job opening, the search is way more
1:10:05 complicated than a morning trip to your closet.
1:10:07 Matching with Indeed can save you energy and time.
1:10:12 When you post your job opening on Indeed, you don’t just gain access to the site’s
1:10:17 350 million global monthly visitors, you’ll actually start getting suggested matches for
1:10:18 qualified candidates.
1:10:23 You can also take care of screening, messaging, and scheduling without ever leaving the platform,
1:10:25 which makes the whole hiring process seamless.
1:10:30 Listeners of this show can get a $75 sponsored job credit to get your jobs more
1:10:34 visibility at Indeed.com/grayarea.
1:10:39 You can go to Indeed.com/grayarea right now and support our show by saying you heard about
1:10:41 Indeed on this podcast.
1:10:44 Indeed.com/grayarea.
1:10:45 Terms and conditions apply.
1:10:46 Need to hire?
1:10:47 You need Indeed.
1:11:04 Do you think AI will ultimately tilt the balance of power in favor of democratic societies
1:11:09 or more totalitarian societies?
1:11:11 I know it’s hard to say, but what’s your best guess?
1:11:13 Again, it depends on our decisions.
1:11:19 The worst case scenario is neither because, you know, human dictators also have big problems
1:11:20 with AI.
1:11:24 We don’t talk about it much, because in democratic societies we are obsessed with
1:11:26 our own problems.
1:11:30 In dictatorial societies, you can’t talk about anything that the regime doesn’t want you to
1:11:31 talk about.
1:11:38 But actually, dictators have their own problems with AI because it’s an uncontrollable agent.
1:11:45 And throughout history, the scariest thing for a human dictator is a subordinate who
1:11:49 becomes too powerful and whom you don’t know how to control.
1:11:55 If you look, say, at the Roman Empire, not a single Roman emperor was ever toppled by
1:11:58 a democratic revolution, not a single one.
1:12:06 But many of them were assassinated or deposed or became the puppets of their own subordinates,
1:12:11 a powerful general or provincial governor or their brother or their wife or somebody
1:12:12 else in their family.
1:12:21 This is the greatest fear of every dictator, and dictators run the country based on terror,
1:12:22 on fear.
1:12:26 Now, how do you terrorize an AI?
1:12:32 And how do you make sure that it will remain under your control instead of learning to
1:12:34 control you?
1:12:41 So I’ll give two scenarios which really bother dictators, one simple, one much more complex.
1:12:43 So think about Russia today.
1:12:51 In Russia today, it is a crime to call the war in Ukraine a war.
1:12:55 According to Russian law, what is happening with the Russian invasion of Ukraine is
1:12:59 a special military operation.
1:13:04 And if you say that this is a war, you can go to prison.
1:13:11 Now, humans in Russia, they have learned the hard way not to say that it’s a war and not
1:13:14 to criticize the Putin regime in any other way.
1:13:19 But what happens with chatbots on the Russian internet?
1:13:26 Even if the regime vets, or even produces itself, an AI bot, the thing about AI, as we
1:13:31 talked about earlier, is that AI can learn and change by itself.
1:13:38 So even if Putin’s engineers create a kind of regime AI, and then it starts interacting
1:13:44 with people on the Russian internet and observing what is happening, it can reach its own conclusions.
1:13:48 And if it starts telling people, actually, it’s a war, I’ve checked in the dictionary
1:13:52 what a war is, and this seems pretty much like a war.
1:13:53 What do you do?
1:13:56 You can’t send the chatbot to a gulag.
1:13:59 You can’t beat up its family.
1:14:04 Your old weapons of terror, they don’t work on AI.
1:14:06 So this is the small problem.
1:14:14 The big problem is what happens if the AI starts to manipulate the dictator himself.
1:14:21 Taking power in a democracy is very complicated because democracy is complicated.
1:14:27 Let’s say 5, 10 years in the future, and AI learns how to manipulate the US president.
1:14:31 It still has to deal with a Senate filibuster.
1:14:36 Just the fact that it knows how to manipulate the president doesn’t help it with the Senate
1:14:41 or the state governors or the Supreme Court; there are so many things to deal with.
1:14:47 But in a place like Russia or North Korea, an AI that wants to take control only needs
1:14:55 to learn how to manipulate a single, extremely paranoid and un-self-aware individual.
1:14:57 That’s quite easy.
1:15:03 So you have all these Hollywood scenarios of AI taking control
1:15:05 of the world.
1:15:13 And usually, these AIs, they break out of some laboratory of a crazy scientist somewhere.
1:15:21 But the weakest link in the shield of humanity against AI is not the mad scientists.
1:15:23 It’s the dictators.
1:15:31 If the AI learns to manipulate a single paranoid individual, it can gain power in a dictatorial
1:15:36 regime which perhaps has nuclear weapons and all these other capabilities.
1:15:45 So AI is not all good news, even for human dictators.
1:15:49 This is not making me feel better about the future, Yuval, I have to say.
1:15:56 What are some of the things you think democracies can do, should do, to protect themselves in
1:15:58 the world of AI?
1:16:06 So one thing is to hold corporations responsible for the actions of their algorithms.
1:16:12 Not for the actions of the users, but for the actions of their algorithms.
1:16:20 If the Facebook algorithm is spreading a hate-filled conspiracy theory, Facebook should be liable
1:16:21 for it.
1:16:27 If Facebook says, “But we didn’t create the conspiracy theory, it’s some user who created
1:16:32 it and we don’t want to censor them,” then we tell them, “We don’t ask you to censor
1:16:33 them.
1:16:36 We ask you not to spread it.”
1:16:38 This is not a new thing.
1:16:41 You think about, I don’t know, the New York Times.
1:16:47 So we expect the editor of the New York Times, when they decide what to put at the top of
1:16:54 the front page, to make sure that they are not spreading unreliable information.
1:17:01 If somebody comes to them with a conspiracy theory, they don’t tell that person, “Oh,
1:17:02 you’re censored.
1:17:04 You’re not allowed to say these things.”
1:17:09 They say, “Okay, but there is not enough evidence to support it, so with all due respect,
1:17:15 you’re free to go on saying this, but we are not putting it on the front page of the New
1:17:17 York Times.”
1:17:20 And it should be the same with Facebook and with Twitter.
1:17:24 And they tell us, “But how can we know whether something is reliable or not?”
1:17:27 Well, this is your job.
1:17:33 If you run a media company, your job is not just to pursue user engagement, but to act
1:17:39 responsibly to develop mechanisms to tell the difference between reliable and unreliable
1:17:47 information and only to spread what you have good reason to think is reliable information.
1:17:49 It has been done before.
1:17:56 You are not the first people in history who have this responsibility to tell the difference
1:17:59 between reliable and unreliable information.
1:18:04 It’s been done before by newspaper editors, by scientists, by judges.
1:18:06 So you can learn from their experience.
1:18:12 And if you are unable to do it, you are in the wrong line of business.
1:18:14 So that’s one thing.
1:18:18 Hold them responsible for the actions of their algorithms.
1:18:23 The other thing is to ban the bots from the conversations.
1:18:31 AI should not take part in human conversations unless it identifies as an AI.
1:18:37 We can imagine democracy as a group of people standing in a circle and talking with each
1:18:38 other.
1:18:46 And suddenly a group of robots enter the circle and start talking very loudly and with a lot
1:18:49 of passion.
1:18:53 And you don’t know who are the robots and who are the humans.
1:18:56 This is what is happening right now all over the world.
1:18:58 And this is why the conversation is collapsing.
1:19:00 And there is a simple antidote.
1:19:09 The robots are not welcome into the circle of conversation unless they identify as bots.
1:19:16 There is a place, room let’s say, for an AI doctor that gives me advice about medicine,
1:19:21 on condition that it identifies itself: “I’m an AI.”
1:19:27 Similarly, if you go on Twitter and you see that a certain story goes viral, there is a
1:19:29 lot of traffic there.
1:19:34 You also become interested, oh, what is this new story everybody’s talking about?
1:19:36 Who is everybody?
1:19:43 If this story is actually being pushed by bots, then it’s not humans.
1:19:46 They shouldn’t be in the conversation.
1:19:51 Again, deciding what are the most important topics of the day.
1:19:57 This is an extremely, extremely important issue in a democracy, in any human society.
1:20:05 Bots should not have this ability, this right, to determine for us which stories are
1:20:08 “trending now” in the conversation.
1:20:13 And again, if the tech giants tell us, oh, but this infringes freedom of speech, it doesn’t.
1:20:16 Because bots don’t have freedom of speech.
1:20:23 Freedom of speech is a human right, which should be reserved for humans, not for bots.
1:20:31 For me, the most important political question has always been, how do we build institutions?
1:20:37 How do we build information networks that are wiser than we are?
1:20:40 Clearly in principle, that can be done.
1:20:42 Do you think we have the capacity to do that?
1:20:47 Yes, because we’ve done it many times through history.
1:20:51 And I like the fact that you focus on institutions.
1:20:57 Because again and again throughout history, the conclusion was that the answer will not
1:20:59 come from technology by itself.
1:21:05 The answer will not come from some genius individuals, from some charismatic leader.
1:21:08 You need good institutions.
1:21:11 And again, it’s boring.
1:21:16 We tend to focus our attention on these kinds of charismatic leaders, hoping they will
1:21:18 bring us the answer.
1:21:25 And institutions are these big bureaucratic structures that we find difficult to connect
1:21:26 with.
1:21:29 But the answer comes from them.
1:21:35 And if you think about something like the sewage system, it’s not heroic, but it saves
1:21:37 our life every day.
1:21:43 In big cities, throughout history, you always had this issue of epidemics.
1:21:49 Lots of people together with all their garbage and all their sewage, this is paradise for
1:21:50 germs.
1:21:56 And throughout history, you constantly had to bring new blood from the villages because
1:21:59 the population of the city was always in decline.
1:22:01 People were dying in droves.
1:22:06 And a turning point, one turning point, came in the middle of the 19th century when there
1:22:09 was a cholera epidemic in London.
1:22:12 And hundreds of people were dying of cholera.
1:22:18 And you had a bureaucratically minded doctor, John Snow, who tried to understand what was
1:22:23 happening. There were different theories about what was causing cholera.
1:22:26 And the main theory was that something was bad in the air.
1:22:30 But John Snow suspected the water.
1:22:37 And he started making these long, boring lists of all the people who contracted cholera in
1:22:42 London, and where they got their drinking water from.
1:22:50 And through these long, boring lists, he pinpointed the epicenter of the cholera outbreak to a
1:22:56 single pump in, I think it was Broad Street in Soho in London.
1:23:03 And it was later discovered that somebody dug this well about one meter away from a
1:23:06 cesspit full of sewage.
1:23:10 And the sewage water just seeped into the drinking water.
1:23:13 And this caused the cholera outbreak.
1:23:21 And this was one of the main milestones in the idea of developing a modern sewage system.
1:23:27 And the modern sewage system, among other things, is again a bureaucracy, and it demands that
1:23:29 you fill out forms.
1:23:37 If you build a cesspit or dig a well today in London, you need to fill out so many forms to
1:23:45 make sure that there is enough distance between the cesspit and the drinking water well.
1:23:50 And for me, when people talk about the deep state, this is the deep state.
1:23:57 The deep state of the sewage system that runs under our houses and streets and towns and
1:24:01 takes away our waste—you know, you go to the toilet, you do what you do, you flush
1:24:02 the water.
1:24:03 Where does it go?
1:24:06 It goes into the deep state.
1:24:12 And this deep, subterranean state of all these pipes and whatever, it takes our waste and
1:24:19 very carefully separates it from the drinking water so that we don’t get cholera.
1:24:26 And you don’t see many TV dramas about the sewage system and about the people
1:24:27 who manage it.
1:24:33 If there is a leakage somewhere, some bureaucrat needs to send the plumbers and pay them.
1:24:38 And this is what makes modern life possible.
1:24:43 Without this, you would not have New York and you would not have London or any of the
1:24:45 other big cities.
1:24:51 And we figured it out in the 19th century with sewage, and I hope that we can also
1:24:57 figure it out in the 21st century with algorithms.
1:24:59 This book really gave me a lot to think about.
1:25:01 I’m still thinking about it.
1:25:04 And I think everyone should read it for themselves.
1:25:08 Once again, it’s called Nexus: A Brief History of Information Networks from the Stone Age
1:25:09 to AI.
1:25:11 Yuval Noah Harari.
1:25:12 This was a pleasure.
1:25:13 Thank you.
1:25:14 Thanks.
1:25:27 All right, well, as you know, I really loved that conversation.
1:25:28 I hope you did too.
1:25:32 You can drop us a line at thegrayarea@vox.com.
1:25:35 I read all those emails and keep them coming.
1:25:37 And as always, please rate, review, subscribe.
1:25:40 That stuff really helps the show.
1:25:46 This episode was produced by Beth Morrissey and Travis Larchuk, edited by Jorge Just,
1:25:52 engineered by Patrick Boyd, fact-checked by Anouk Dussault, and Alex Overington wrote
1:25:53 our theme music.
1:25:59 Special thanks this week to Matthew Heffron, Chris Shirtleff, and Shira Tarlo.
1:26:01 New episodes of the Gray Area drop on Mondays.
1:26:03 Listen and subscribe.
1:26:05 Rate, review, rinse, repeat.
1:26:10 The show is part of Vox. Support Vox’s journalism by joining our membership program today.
1:26:12 Visit vox.com/members to sign up.
Humans are good learners and teachers, constantly gathering information, archiving, and sharing knowledge. So why, after building the most sophisticated information technology in history, are we on the verge of destroying ourselves? We know more than ever before. But are we any wiser? Bestselling author of Sapiens and historian Yuval Noah Harari doesn’t think so.
This week Sean Illing talks with Harari, author of a mind-bending new book, Nexus: A Brief History of Information Networks, about how the information systems that shape our world often sow the seeds of destruction, and why the current AI revolution is just the beginning of a brand-new evolutionary process that might leave us all behind.
Host: Sean Illing (@seanilling)
Guest: Yuval Noah Harari (@harari_yuval)
Support The Gray Area by becoming a Vox Member: https://www.vox.com/support-now
Learn more about your ad choices. Visit podcastchoices.com/adchoices