Controlling AI

AI transcript
0:00:02 Hi, and welcome to the A16Z podcast.
0:00:07 I’m Doss, and in this episode, Frank Chen interviews UC Berkeley Professor of Computer
0:00:09 Science, Stuart Russell.
0:00:13 Russell literally wrote the textbook for artificial intelligence that has been used to educate
0:00:16 an entire generation of AI researchers.
0:00:20 More recently, he’s written a follow-up, Human Compatible: Artificial Intelligence and
0:00:22 the Problem of Control.
0:00:27 Their conversation covers everything from AI misclassification and bias problems, to
0:00:31 the questions of control and competence in these systems, to a potentially new and better
0:00:34 way to design AI.
0:00:38 But first, Russell begins by answering, “Where are we really when it comes to artificial
0:00:42 general intelligence, or AGI, beyond the scary picture of Skynet?”
0:00:48 Well, the Skynet metaphor is one that people often bring up, and I think, generally speaking,
0:00:49 Hollywood has got it wrong.
0:00:56 They always portray the risk as an intelligent machine that somehow becomes conscious, and
0:01:01 it’s the consciousness that causes the machine to hate people and want to kill us all.
0:01:04 And this is just a mistake.
0:01:07 The problem is not consciousness, it’s really competence.
0:01:13 And if you said, “Oh, by the way, your laptop’s now conscious,” it doesn’t change the rules
0:01:14 of C++.
0:01:15 Right?
0:01:18 The software still runs exactly the way it was always going to run when you didn’t think
0:01:19 it was conscious.
0:01:23 So, on the one hand, we have people like Elon Musk saying artificial general intelligence
0:01:28 is a real possibility, and it may be sooner than a lot of people think.
0:01:32 And on the other, you’ve got people like Andrew Ng who are saying, “Look, we’re so far away
0:01:33 from AGI.
0:01:37 All of these questions seem premature, and I’m not going to worry about the downstream
0:01:43 effects of super intelligent systems until I worry about overpopulation on Mars.”
0:01:45 So, what’s your take on the debate?
0:01:51 Yeah, so he, in fact, upgraded that to overpopulation on Alpha Centauri.
0:01:56 So let’s first of all talk about timelines and predictions for achieving human level
0:01:59 or superhuman AI.
0:02:05 So Elon actually is reflecting advice that he’s received from AI experts.
0:02:12 So some of the people, for example, at OpenAI, think that five years is a reasonable timeline.
0:02:18 And that the necessary steps mainly involve much bigger machines and much more data.
0:02:23 So no conceptual or computer science breakthroughs, just more compute, more storage, and we’re
0:02:24 there.
0:02:25 Yeah.
0:02:30 So I really don’t believe that. Crudely speaking, the bigger and faster the computer, the
0:02:32 faster you get the wrong answer.
0:02:38 But I believe that we have several major conceptual breakthroughs that still have to happen.
0:02:42 We don’t have anything resembling real understanding of natural language, which would be essential
0:02:48 for systems to then acquire the whole of human knowledge.
0:02:54 We don’t have the capability to flexibly plan and make decisions over long time scales.
0:03:02 So we’re very impressed by AlphaGo or AlphaZero’s ability to think 60 or 100 moves ahead.
0:03:04 That’s superhuman.
0:03:10 But if you apply that to a physical robot whose decision cycle is a millisecond, that
0:03:16 gets you a tenth of a second into the future, which is not very useful if what you’re trying
0:03:21 to do is not just lay the table for dinner, but do it anywhere, in any house in any country
0:03:23 in the world, figure it out.
0:03:28 Laying the table for dinner is several million or tens of millions of motor control decisions.
0:03:33 And at the moment, the only way you can generate behavior on those timescales is actually to
0:03:39 have canned subroutines that humans have defined: “pick up a fork.”
0:03:43 Okay, I can train picking up a fork, but I’ve defined picking up a fork as a thing.
0:03:49 So machines right now are reliant on us to supply that hierarchical structure of behavior.
0:03:54 When we figure out how they can invent that for themselves as they go along and invent
0:04:00 new kinds of things to do that we’ve never thought of, that will be a huge step towards
0:04:01 real AI.
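
To make the point about hand-supplied hierarchy concrete, here is a minimal, purely illustrative sketch (the Robot class and its methods are invented stand-ins, not any real robotics API): the high-level plan only exists because a human has already decided what the subroutines are.

```python
# Minimal, illustrative sketch: the hierarchy of behavior is supplied by a human.
class Robot:
    """Stand-in for a robot controller; on real hardware each method would expand
    into thousands of millisecond-level motor commands."""
    def move_gripper_to(self, target): print(f"moving gripper to {target}")
    def close_gripper(self): print("closing gripper")
    def open_gripper(self): print("opening gripper")
    def lift(self, height_cm): print(f"lifting {height_cm} cm")

def pick_up_fork(robot):
    """A human-defined 'canned subroutine'."""
    robot.move_gripper_to("fork")
    robot.close_gripper()
    robot.lift(height_cm=10)

def place_object(robot, location):
    robot.move_gripper_to(location)
    robot.open_gripper()

def lay_table(robot):
    """The plan is a human-authored composition of human-authored pieces.
    The open problem described above is having the machine invent this
    hierarchy, and new subroutines, for itself."""
    pick_up_fork(robot)
    place_object(robot, "left of plate")

lay_table(Robot())
```
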
0:04:05 As we march towards general intelligence, this literal ability to think outside the
0:04:09 box will be one of the hallmarks, I think, that we look for.
0:04:14 If you think about what we’re doing now, we’re trying to write down human objectives.
0:04:19 It’s just that, because we have very stupid systems, they only operate in these
0:04:21 very limited contexts, like a go board.
0:04:26 And on the go board, a natural objective is to win the game.
0:04:31 If AlphaGo was really smart, even if you said win the game, well, I can tell you, here’s
0:04:35 what chess players do when they’re trying to win the game.
0:04:41 They go outside the game and a more intelligent AlphaGo would realize, okay, well, I’m playing
0:04:43 against some other entity.
0:04:44 What is it?
0:04:45 Where is it?
0:04:51 There must be some other part of the universe besides my own processor and this go board.
0:04:56 And then it figures out how to break out of its little world and start communicating.
0:05:01 Maybe it starts drawing patterns on the go board with go pieces to try and figure out
0:05:05 visual language it can use to communicate with these other entities.
0:05:10 Now, how long do these kinds of breakthroughs take?
0:05:16 Well, if you look back at nuclear energy, for the early part of the 20th century, when
0:05:23 we knew that nuclear energy existed, so from E equals MC squared in 1905, we could measure
0:05:26 the mass differences between different atoms.
0:05:31 We knew what their components were, and we also knew that radium could emit vast quantities
0:05:32 of energy over a very long period.
0:05:36 So they knew that there was this massive store of energy.
0:05:42 But mainstream physicists were adamant that it was impossible to ever release it.
0:05:43 To harness it in some way.
0:05:48 So there was a famous speech that Lord Rutherford gave, and he was the man who split the atom,
0:05:51 so he was the leading nuclear physicist of his time.
0:05:54 And that was September 11th, 1933.
0:06:01 And he said that the possibility of extracting energy by the transmutation of atoms is moonshine.
0:06:04 But the question was, is there any prospect in the next 25 or 30 years?
0:06:07 So he said, no, it’s impossible.
0:06:11 And then the next morning, Leo Szilard actually read a report of that in The Times and went
0:06:16 for a walk and invented the nuclear chain reaction based on neutrons, which people hadn’t
0:06:17 thought of before.
0:06:18 And that was a conceptual breakthrough.
0:06:22 You went from impossible to now it’s just an engineering challenge.
0:06:24 So we need more than one breakthrough, right?
0:06:30 It takes time to sort of ingest each new breakthrough and then build on that to get to the next
0:06:31 one.
0:06:36 So the average AI researcher thinks that we will achieve superhuman AI sometime around
0:06:38 the middle of this century.
0:06:41 So my personal belief is actually more conservative.
0:06:47 One point is, we don’t know how long it’s going to take to solve the problem of control.
0:06:52 If you ask the typical AI researcher, “Okay, and how are we going to control machines that
0:06:56 are more intelligent than us?”, the answer is, “Beats me.”
0:07:02 So you’ve got this multi-hundred billion-dollar research enterprise with tens of thousands
0:07:08 of brilliant scientists all pushing towards a long-term goal where they have absolutely
0:07:11 no idea what to do if they get there.
0:07:15 So coming back to Andrew Ng’s prediction, the analogy just doesn’t work.
0:07:22 If you said, okay, the entire scientific establishment on Earth is pushing towards a migration of
0:07:26 the human race to Mars, and they haven’t thought about what we’re going to breathe when we
0:07:30 can get there, you’d say, well, that’s clearly insane.
0:07:31 Yeah.
0:07:34 And that’s why you’re arguing we need to solve the control problem now, or at least have
0:07:37 the right design approaches to solving this control problem.
0:07:43 It’s clear that the current formulation, the standard model of AI, in which we build machines
0:07:45 that optimize fixed objectives, is wrong.
0:07:53 We’ve known this principle for thousands of years: be careful what you wish for.
0:07:57 King Midas wished for everything he touched to turn to gold.
0:08:02 That was the objective he gave to the machine, which happened to be the gods, and the gods
0:08:06 granted him exactly that objective, and then his food, and his drink, and his family all
0:08:09 turned to gold, and he died in misery.
0:08:16 And so we’ve known this for thousands of years, and yet we built the field of AI around
0:08:22 this definition of machines that carry out plans to achieve objectives that we put into
0:08:23 them.
0:08:32 It works if and only if we are able to completely and perfectly specify the objective.
0:08:38 So the guidance is don’t put fixed objectives into machines, but build machines in a way
0:08:44 that acknowledges the uncertainty about what the true objective is.
0:08:51 For example, take a very simple machine learning task, learning to label objects and images.
0:08:52 So what should the objective be?
0:09:00 Well, if you go and talk to a room full of computer vision people, they will say labeling accuracy,
0:09:04 and that’s actually the metric used for all these competitions.
0:09:09 In fact, this is the wrong metric, because different kinds of misclassifications have
0:09:13 different costs in the real world.
0:09:17 Misclassifying one type of Yorkshire Terrier as a different type of Yorkshire Terrier is
0:09:19 not that serious.
0:09:23 Classifying a person as a gorilla is really serious.
0:09:28 And Google found that out when the computer vision system did exactly that, and it probably
0:09:32 cost them billions in goodwill and public relations.
0:09:40 And that opened up actually a whole series of people observing the ways that these online
0:09:47 systems were basically misbehaving in the way they classified people.
0:09:53 If you do a search on Google Images for CEO, I think it was one of the women’s magazines that
0:10:00 pointed out that the first female CEO appears on the 12th row of photographs and turns out
0:10:02 to be Barbie.
0:10:06 So if accuracy isn’t the right metric, what are the design paths that you’re suggesting
0:10:08 we optimize for?
0:10:13 If you’re going to have that image labeling system take action in the real world, and posting
0:10:16 a label on the web is an action in the real world,
0:10:20 then you have to ask, “What’s the cost of misclassification?”
0:10:27 And when you think, “Okay, so ImageNet has 20,000 categories, and so there are 400 million,
0:10:32 or 20,000 squared, different ways of misclassifying one object as another.”
0:10:38 So now you’ve got 400 million unknown costs, and obviously you can’t specify a joint distribution
0:10:41 over 400 million numbers one by one.
0:10:42 It’s far too big.
0:10:47 So you might have some general guidelines that misclassifying one type of flower as
0:10:53 another is not very expensive, while misclassifying a person as an inanimate object is going
0:10:54 to be more expensive.
0:10:59 But generally speaking, you have to operate under uncertainty about what the costs are.
0:11:01 And then how does the algorithm work?
0:11:07 One of the things it should do, actually, is refuse to classify certain photographs,
0:11:13 saying, “I’m not sure enough about what the cost of misclassification might be, so I’m
0:11:15 not going to classify it.”
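
To make that concrete, here is a minimal sketch of a cost-sensitive classifier with a reject option. The labels, cost numbers, and reject threshold are all invented for illustration; they stand in for the enormous set of unknown, uncertain costs Russell describes.

```python
import numpy as np

# Illustrative sketch of cost-sensitive labeling with a reject option. All numbers
# are invented; they stand in for the huge matrix of unknown real-world costs.

labels = ["terrier_type_a", "terrier_type_b", "person", "gorilla"]

# cost[i][j] = assumed cost of outputting label j when the true label is i.
# Confusing one terrier for another is cheap; mislabeling a person is catastrophic.
cost = np.array([
    [0.0,  1.0,  50.0, 50.0],
    [1.0,  0.0,  50.0, 50.0],
    [50.0, 50.0, 0.0,  1e6],    # person labeled as gorilla: enormous cost
    [50.0, 50.0, 1e4,  0.0],
])

REJECT_COST = 25.0  # cost of saying "I'm not sure what I'm seeing here"

def decide(posterior):
    """posterior[i] = model's probability that the true label is labels[i]."""
    expected = posterior @ cost            # expected cost of outputting each label
    best = int(np.argmin(expected))
    if expected[best] > REJECT_COST:       # refusing is cheaper than risking any label
        return "refuse to classify"
    return labels[best]

# Probably a gorilla, but every label the system could output still carries a large
# expected cost (the possible person/gorilla confusions are too expensive), so it declines.
print(decide(np.array([0.01, 0.01, 0.03, 0.95])))   # refuse to classify
# Clearly some kind of terrier: the worst plausible mistake is cheap, so it answers.
print(decide(np.array([0.60, 0.38, 0.01, 0.01])))   # terrier_type_a
```

The point is not these particular numbers, but that the decision rule ranges over costs and uncertainty rather than raw labeling accuracy.
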
0:11:18 So that’s definitely a divergence from state-of-the-art today, right?
0:11:23 State-of-the-art today is you’re going to assign some class to it, right?
0:11:26 That’s a dog, or a Yorkshire terrier, or a pedestrian, or a tree.
0:11:30 And then the algorithms can say, “I’m really sure,” or, “I’m not really sure,” and then
0:11:31 a human decides.
0:11:36 You’re saying something different, which is, “I don’t understand the cost of uncertainty,
0:11:40 so therefore, I’m not even going to give you a classification or a confidence interval
0:11:41 on the classification.”
0:11:42 Like, I shouldn’t.
0:11:43 It’s irresponsible for me.
0:11:48 So I could give confidence intervals and probabilities, but that isn’t what image-labeling
0:11:50 systems typically do.
0:11:53 They’re expected to plump for one label.
0:11:58 And the argument would be, if you don’t know the costs of plumping for one label or another,
0:12:01 then you probably shouldn’t be plumping, right?
0:12:08 And I read that Google Photos won’t label gorillas anymore.
0:12:11 So you can give it a picture that’s perfectly, obviously, a gorilla, and it’ll say, “I’m
0:12:13 not sure what I’m seeing here.”
0:12:19 And so how do we make progress on designing systems that can factor in this context, sort
0:12:22 of understanding the uncertainty, characterizing the uncertainty?
0:12:24 So there’s sort of two parts to it.
0:12:30 One is, how does the machine behave, given that it’s going to have radical levels of
0:12:34 uncertainty about many aspects of our preference structure?
0:12:38 And then the second question is, how does it learn more about our preference structure?
0:12:44 As soon as the robot believes that it has absolute certainty about the objective, it
0:12:47 no longer has a reason to ask permission.
0:12:52 And in fact, if it believes that the human is even slightly irrational, which of course
0:12:59 we are, then it would resist any attempt by the human to interfere or to switch it off,
0:13:06 because the only consequence of human interference in that case would be a lower degree of achievement
0:13:08 of the objective.
0:13:14 So you get this behavior where a machine with a fixed objective will disable its own off-switch
0:13:21 to prevent interference with what it is sure is the correct way to go forward.
0:13:28 And so we want a very high threshold on confidence that it’s understood what my real preference
0:13:29 or desire is.
0:13:34 Well, I actually think it’s in general not going to be possible for the machine to have
0:13:38 high confidence that it’s understood your entire preference structure.
0:13:43 It may understand aspects of it, and if it can satisfy those aspects without messing
0:13:49 with the other parts of the world where it doesn’t know what you want, then that’s good.
0:13:55 But there are always going to be things that it never occurs to you to write down.
0:14:00 So I can see how this design approach would lead to much safer systems because you have
0:14:02 to factor in the uncertainty.
0:14:07 I can also imagine sort of a practitioner today sitting in their seat going, “Wow, that
0:14:08 is so complex.
0:14:10 I don’t know how to make progress.”
0:14:14 So what do you say to somebody who’s now thinking, “Wow, I thought my problem was X-hard, but
0:14:17 it’s really 10X or 100X or 1000X-hard?”
0:14:26 So interestingly, the safe behaviors fall out as solutions of a mathematical game between
0:14:27 a robot and a human.
0:14:31 In some sense, they’re cooperating because they both have the same objective, which is
0:14:36 whatever it is the human wants, just that the robot doesn’t know what that is.
0:14:43 So if you formulate that as a mathematical game and you solve it, then the solution exhibits
0:14:49 these desirable characteristics that you want, namely deferring to the human, allowing yourself
0:14:55 to be switched off, asking permission, only doing minimally invasive things.
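
Here is a toy numerical version of that argument, with invented payoffs rather than the published formulation: when the robot is certain about the objective and thinks the human might err, acting (and resisting the off-switch) looks best; once the robot is uncertain, deferring to the human wins.

```python
# Toy off-switch game with invented payoffs; a qualitative illustration of the
# argument, not a faithful reproduction of any published model.
#
# The robot can either act now, or defer: propose the action and let the human
# decide whether to allow it or switch the robot off (payoff 0 if switched off).

def robot_choice(utility_samples, human_error_prob=0.0):
    """utility_samples: the robot's beliefs about the human's payoff U for the action.
    The human allows the action when U > 0, but with probability human_error_prob
    makes the wrong call (the 'slightly irrational' human in the conversation)."""
    n = len(utility_samples)
    act_value = sum(utility_samples) / n                      # expected U if it just acts
    defer_value = sum(
        (1 - human_error_prob) * max(u, 0.0) + human_error_prob * min(u, 0.0)
        for u in utility_samples
    ) / n                                                     # expected U if the human can veto
    return "defer (keep the off-switch)" if defer_value >= act_value else "act / resist interference"

# Certain objective (+1) and a slightly fallible human: interference can only hurt.
print(robot_choice([1.0], human_error_prob=0.1))              # act / resist interference

# Uncertain objective (might be +1, might be -10): letting the human veto the bad case wins.
print(robot_choice([1.0, -10.0], human_error_prob=0.1))       # defer (keep the off-switch)
```

The qualitative result tracks the argument above: uncertainty about the objective is what gives the machine a positive reason to keep the off-switch and defer.
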
0:15:00 We’ve seen, for example, in the context of self-driving cars, that when you formulate
0:15:07 things this way, the car actually invents for itself protocols for behaving in traffic
0:15:09 that are quite helpful.
0:15:14 For example, one of the constant problems with self-driving cars is how they behave
0:15:19 at four-way stop signs, because they’re never quite sure who’s going to go first and they
0:15:21 don’t want to cause an accident.
0:15:25 They’re optimized for safety, so they’ll end up stuck at that four-way intersection.
0:15:30 So they’re stuck and everyone ends up pulling around them, and it will probably cause accidents
0:15:32 rather than reducing accidents.
0:15:37 So what the algorithm figured out was that if it got to the stop sign and it was unclear
0:15:43 who should go first, it would back up a little bit, and that’s a way of signaling to the
0:15:48 other driver that it has no intention of going first and therefore they should go.
0:15:54 That falls out as a solution of this game theoretic design for the problem.
0:15:57 Let’s go to another area where machine learning is often being used.
0:16:03 I’m about to make a loan to an individual, and so they’ve taken all this data, they figure
0:16:06 out your credit worthiness, and they say loan or not.
0:16:12 How would game theory inside loan decision-making be different from traditional methods?
0:16:20 So what happens with traditional methods is that they make decisions based on past data,
0:16:27 and a lot of that past data reflects biases that are inherent in the way society works.
0:16:32 So if you just look at historical data, you might end up making decisions that discriminate
0:16:37 in effect against groups that have previously been discriminated against, because that prior
0:16:44 discrimination resulted in lower loan performance, and so you end up actually just perpetuating
0:16:48 the negative consequences of social biases.
0:16:53 So loan underwriting in particular has to be inspectable, and the regulators have to
0:16:59 be able to verify that you’re making decisions on criteria that neither mention race
0:17:00 nor are proxies for race.
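
As one hedged illustration of what screening for “proxies” might look like (a deliberately simplistic sketch, not how regulators actually audit lenders): test how well each candidate feature, on its own, predicts the protected attribute.

```python
# Deliberately simplistic proxy check, for illustration only; real fairness auditing
# and regulatory review are far more involved than a single-feature screen.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_score(feature_column, protected_attribute):
    """Cross-validated AUC for predicting the protected attribute from one feature.
    Around 0.5 means the feature is uninformative about the attribute; values well
    above 0.5 flag the feature as a potential proxy worth human review."""
    X = np.asarray(feature_column, dtype=float).reshape(-1, 1)
    y = np.asarray(protected_attribute)
    return cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc").mean()

# Hypothetical usage, assuming a DataFrame `df` with candidate loan features and a
# binary protected attribute column (both invented for this sketch):
# flags = {name: proxy_score(df[name], df["protected_attribute"])
#          for name in ["zip_code_income", "shoe_size"]}
```
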
0:17:06 So the principles of those regulations need to be expanded to a lot of other areas.
0:17:14 For example, data seems to be suggesting that the job ads that people see online are extremely
0:17:16 biased by race.
0:17:22 If you’re just trying to fit historical data and maximize predictive accuracy, you’re missing
0:17:27 out these other objectives about fairness at the individual level and the social level.
0:17:31 So economists call this the problem of externality.
0:17:37 And so pollution is the classic example, where a company can make more money by just dumping
0:17:43 pollution into rivers and oceans and the atmosphere rather than treating it or changing its processes
0:17:45 to generate less pollution.
0:17:47 So it’s imposing costs on everybody else.
0:17:51 The way you fix that is by fines or tax penalties.
0:17:55 You create a price for something that doesn’t have a price.
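
A toy numerical version of that idea, with arbitrary made-up numbers: once the externality carries a price, the optimizer’s answer changes.

```python
# Toy illustration of "creating a price for something that doesn't have a price".
# All numbers are invented: the firm's private profit rises with pollution, society
# bears a cost of roughly 4 per unit that the firm never sees, and a tax of 4 per
# unit makes the firm see it.

def private_profit(pollution):
    return 10 * pollution - pollution ** 2      # cheaper production, up to a point

def taxed_profit(pollution, tax_per_unit=4.0):
    return private_profit(pollution) - tax_per_unit * pollution

levels = [p / 10 for p in range(0, 101)]        # candidate pollution levels 0.0 .. 10.0
print(max(levels, key=private_profit))          # 5.0: the externality is ignored
print(max(levels, key=taxed_profit))            # 3.0: the fine changes the optimum
```
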
0:18:01 Now the difficulty, and this is also true of the way social media content selection
0:18:06 algorithms have worked, is that it’s, I think, very hard to put a price on this.
0:18:12 And so the regulators dealing with loan underwriting have not put a price on it.
0:18:16 They put a rule on it saying you cannot do things that way.
0:18:21 So let’s take making a recommendation at an e-commerce site, a “here’s a product that
0:18:26 you might like.” How would we do that differently by baking in game theory?
0:18:32 So the primary issue with recommendations is understanding user preferences.
0:18:37 One of the problems I remember was with a company that sends you a coupon to buy a vacuum cleaner
0:18:40 and you buy a vacuum cleaner, great.
0:18:44 So now it knows you really like vacuum cleaners, it keeps sending you coupons for vacuum cleaners.
0:18:48 But of course you just bought a vacuum cleaner, so you’ve no interest in getting another vacuum
0:18:49 cleaner.
0:18:57 So just this distinction between consumable things and non-consumable things is really
0:19:00 important when you want to make recommendations.
0:19:08 And I think you need to come to the problem for an individual user with a reasonably rich
0:19:14 prior set of beliefs about what that user might like based on demographic characteristics.
0:19:19 How do you then adapt that and update it with respect to the decisions that the user makes
0:19:25 about what products to look at, which coupons they cash in, which ones they don’t, and so
0:19:26 on?
0:19:32 And one of the things that you might see falling out would be that the recommendation system
0:19:35 might actually ask you a question.
0:19:39 I’ve noticed that you’ve shown no interest in all these kinds of products.
0:19:41 Are you in fact a vegetarian?
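
A minimal sketch of what that could look like, with invented product categories and pseudo-counts: start from a demographic prior, update on observed behavior, treat durables differently from consumables, and fall back to asking a question when the posterior stays ambiguous.

```python
# Illustrative sketch only: Beta-Bernoulli preference tracking for a recommender.
# Start from a demographic prior, update on coupon redemptions, and treat
# durables differently from consumables (buying one vacuum cleaner should
# suppress, not boost, further vacuum-cleaner offers for a while).

from dataclasses import dataclass

@dataclass
class Preference:
    alpha: float      # prior "likes" pseudo-counts (from demographics)
    beta: float       # prior "ignores" pseudo-counts
    consumable: bool  # coffee: yes; vacuum cleaner: no

    def update(self, redeemed_coupon: bool):
        if redeemed_coupon:
            self.alpha += 1
        else:
            self.beta += 1

    def interest(self, just_bought: bool = False) -> float:
        p = self.alpha / (self.alpha + self.beta)
        if just_bought and not self.consumable:
            return 0.05 * p          # satiated: a durable was just purchased
        return p

prefs = {
    "coffee": Preference(alpha=3, beta=2, consumable=True),
    "vacuum_cleaner": Preference(alpha=1, beta=4, consumable=False),
    "vegetarian_food": Preference(alpha=2, beta=2, consumable=True),
}

prefs["vacuum_cleaner"].update(redeemed_coupon=True)       # they bought one
prefs["vegetarian_food"].update(redeemed_coupon=False)     # ignored a meat-free offer

print(prefs["vacuum_cleaner"].interest(just_bought=True))  # low: stop the vacuum coupons
# When the posterior on a whole cluster of products stays ambiguous, the cheapest
# information-gathering action may be to just ask: "Are you in fact a vegetarian?"
```
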
0:19:45 As you look back at your own career in this space, are you surprised that the field is
0:19:47 where it is?
0:19:54 Ten years ago, I would have been surprised to see that speech recognition is now just
0:20:00 a commodity that everyone is using on their cell phones across the entire world.
0:20:03 When I was an undergrad, they said, “We definitely have to solve the Turing test before we’re
0:20:06 going to get speaker-independent natural language.”
0:20:13 And I worked on self-driving cars in the early 90s, and it was pretty clear that the perception
0:20:15 capabilities were the real bottleneck.
0:20:21 The system would detect about 99% of the other cars, so every 100th car, you just wouldn’t
0:20:22 see it.
0:20:23 So these are things that are coming true.
0:20:26 They were sort of holy grails.
0:20:32 It’s interesting that even though they achieve superhuman performance on these testbed datasets,
0:20:38 there are still these adversarial examples that show that actually it’s not seeing things
0:20:40 the same way that humans are seeing things.
0:20:42 Definitely making different mistakes than we make.
0:20:45 And so it’s fragile in ways that we don’t understand.
0:20:53 For example, OpenAI has a system with simulated humanoid robots that learn to play soccer.
0:20:54 One learns to be the goalkeeper.
0:20:58 The other one learns to take penalties, and it looks great.
0:21:00 This was a big success.
0:21:04 He basically said, “Okay, can we get adversarial behavior from the goalkeeper?”
0:21:10 So the goalkeeper basically falls down on the ground immediately, waggles its leg in
0:21:17 the air, and the penalty taker, when it’s about to kick the ball, just completely falls apart, like,
0:21:18 “I don’t know how to respond to that,”
0:21:21 and never actually gets around to kicking the ball at all.
0:21:24 I don’t know whether it’s laughing, saying, “Oh, you can’t kick the ball now?”
0:21:31 So it’s not the case that just because we have superhuman performance on some nicely curated
0:21:35 data set, we actually have superhuman vision or superhuman motor control learning.
0:21:38 Are you optimistic about the direction for the field?
0:21:43 So one reason I’m optimistic is that as we see more and more of these failures of the
0:21:47 standard model, people will say, “Oh, well, clearly we need to build these systems this
0:21:54 other way because that sort of gives us guarantees that it won’t do anything rash without permission,
0:22:00 it will adapt to the user gradually, and it’ll only start taking bigger steps when it’s reasonably
0:22:03 sure that that’s what the user wants.”
0:22:08 I think there are reasons for pessimism as well: misuse for surveillance, misinformation.
0:22:15 I mean, there’s more awareness of it, but there’s nothing concrete being done about that
0:22:22 with a few honorable exceptions like San Francisco’s ban on face recognition in public spaces,
0:22:28 and California’s ban, actually, on impersonation of humans by AI systems.
0:22:29 The deepfakes?
0:22:34 Not just deepfakes, but, for example, robo-calls where a machine calls to schedule a haircut
0:22:38 appointment and doesn’t self-identify as an AI.
0:22:44 So Google has now said they’re going to have their machine self-identify as an AI.
0:22:46 It’s a relatively simple thing to comply with.
0:22:50 It doesn’t have any great economic cost, but I think it’s a really important step that
0:22:54 should be rolled out globally.
0:22:59 That principle of not impersonating human beings is a fundamentally important principle.
0:23:04 Another really important principle is don’t make machines that decide to kill people.
0:23:06 Ban on offensive weapons.
0:23:12 Sounds pretty straightforward, but again, although there’s much greater awareness of
0:23:20 this, there are no concrete steps being taken, and countries are now moving ahead with this
0:23:21 technology.
0:23:30 So I just last week found out that a Turkish defense company is selling an autonomous quadcopter
0:23:37 with a kilogram of explosive that uses face recognition and tracking of humans and is
0:23:40 sold as an anti-personnel weapon.
0:23:45 So we made a movie called Slaughterbots to illustrate this concept.
0:23:48 We’ve had more than 75 million views.
0:23:52 So to bring this home to people who are sitting at their desks working on machine learning
0:23:57 systems, if you could give them a piece of advice on what they should be doing, what
0:24:00 should they be doing differently, having heard this podcast, that they might not have been
0:24:01 thinking about?
0:24:05 So for some applications, it probably isn’t going to change very much.
0:24:10 One of my favorite applications of machine learning and computer vision is the Japanese
0:24:18 cucumber farmer who downloaded some software and trained a system to pick out bad cucumbers
0:24:19 from his…
0:24:23 And it sorts them into grades; the Japanese are very fastidious about the grades of produce,
0:24:26 and he did it so inexpensively.
0:24:32 So that’s a nice example and it’s not clear to me that there’s any particular way you
0:24:33 might change that because it’s…
0:24:35 No game theory really needed for it.
0:24:36 It’s a very…
0:24:39 I mean, in some sense, it’s a system that has a very, very limited scope of action, which
0:24:43 is just to sort cucumbers.
0:24:47 The sorting is not public and there’s no danger that it’s going to label a cucumber as a person
0:24:48 or anything like that.
0:24:54 But in general, you want to think about, first of all, what is the effect of the system that
0:24:56 I’m building on the world, right?
0:25:03 And it’s not just that it accurately classifies cucumbers or photographs.
0:25:08 It’s that, of course, people will buy the cucumbers or people will see the photographs, and
0:25:11 what effect does that have?
0:25:17 And so often when you’re defining these objectives for a machine learning algorithm, they’re
0:25:25 going to leave out effects that the resulting algorithm is going to have on the real world.
0:25:31 And so can you fold those other effects back into the objective rather than just optimizing
0:25:38 some narrow subset like click-through, for example, which could have extremely bad external
0:25:39 effects.
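
One hedged sketch of “folding those effects back in,” with invented weights and harm estimates: score recommendations with a composite objective instead of raw click-through.

```python
# Illustrative only: a composite ranking objective instead of raw click-through.
# The weights and the harm estimates are invented; the hard part in practice is
# measuring these external effects at all, not writing this function.

def score(item, w_ctr=1.0, w_polarization=0.8, w_misinfo=2.0):
    return (
        w_ctr * item["predicted_ctr"]
        - w_polarization * item["estimated_polarization_effect"]
        - w_misinfo * item["estimated_misinfo_risk"]
    )

candidates = [
    {"id": "outrage_clip", "predicted_ctr": 0.30,
     "estimated_polarization_effect": 0.25, "estimated_misinfo_risk": 0.10},
    {"id": "howto_video", "predicted_ctr": 0.22,
     "estimated_polarization_effect": 0.01, "estimated_misinfo_risk": 0.00},
]

# Pure click-through would pick the outrage clip; the composite objective doesn't.
print(max(candidates, key=lambda item: item["predicted_ctr"])["id"])   # outrage_clip
print(max(candidates, key=score)["id"])                                # howto_video
```
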
0:25:45 So the model, if you sort of want to anthropomorphize the model, is that you would rather
0:25:50 have the perfect butler than the genie in the lamp.
0:25:51 Right.
0:25:53 An all-powerful, kind of unpredictable genie.
0:25:54 Right.
0:25:57 And very literal-minded about this is the objective.
0:25:58 Right.
0:25:59 Awesome.
0:26:01 Well, Stuart, thanks so much for joining us on the a16z podcast.
0:26:02 Okay.
0:26:03 Thank you, Frank.

AI can do a lot of specific tasks as well as, or even better than, humans can — for example, it can more accurately classify images, more efficiently process mail, and more logically manipulate a Go board. While we have made a lot of advances in task-specific AI, how far are we from artificial general intelligence (AGI), that is, AI that matches general human intelligence and capabilities?

In this podcast, a16z operating partner Frank Chen interviews Stuart Russell, the founder of the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley. They outline the conceptual breakthroughs, like natural language understanding, still required for AGI. But more importantly, they explain how and why we should design AI systems to ensure that we can control AI, and eventually AGI, when it’s smarter than we are. The conversation starts by explaining what Hollywood’s Skynet gets wrong and ends with why AI is better as “the perfect butler than the genie in the lamp.”
