AI transcript
0:00:16 We are on a mission to make you remarkable.
0:00:20 Helping me in this episode is Leslie Valiant.
0:00:25 He is a professor of computer science and applied mathematics at Harvard.
0:00:31 Valiant received a BA in mathematics from King’s College, a Diploma of Imperial College
0:00:37 from Imperial College, that makes sense, and a PhD in computer science from the University
0:00:38 of Warwick.
0:00:45 Prior to joining Harvard in 1982, he held faculty positions at Carnegie Mellon University,
0:00:49 the University of Leeds, and the University of Edinburgh.
0:00:55 Valiant has made foundational, remarkable contributions to computer science, including
0:01:01 the theory of probably approximately correct learning, the concept of #P-completeness in
0:01:06 complexity theory, and the bulk synchronous parallel processing model.
0:01:13 And if you believe I understand all of what I just said, you are in for a big surprise.
0:01:19 Valiant is the recipient of the Turing Award, the highest honor in computer science.
0:01:25 The award committee said this about him, “Rarely does one see such a striking combination
0:01:29 of depth and breadth as in Valiant’s work.
0:01:34 He is truly a heroic figure in theoretical computer science and a role model for his
0:01:42 courage and creativity in addressing some of the deepest unsolved problems in science.”
0:01:48 Valiant’s book, The Importance of Being Educable, explores the unique ability of humans to absorb
0:01:50 and apply knowledge.
0:01:56 He argues that understanding our educability is crucial for safeguarding our future, especially
0:02:00 with the rise of artificial intelligence systems.
0:02:09 I’m Guy Kawasaki, this is Remarkable People, and now here is the remarkable Leslie Valiant.
0:02:12 What exactly is wrong with the term “intelligence”?
0:02:18 It is that we don’t know what it means, it’s got no definition.
0:02:24 No one can tell you exactly how to recognize an intelligent person, what’s the behavior
0:02:26 you’re supposed to look for.
0:02:31 One maybe trivial symptom of this, which may not be that important, but just as a symptom.
0:02:37 So there’s much discussion about the SAT test, which many American students do.
0:02:41 And I was surprised to find out recently what the letter A stands for.
0:02:46 So certainly, S stands for scholastic and T for test, but what does the A stand for?
0:02:52 So apparently, the A used to stand for aptitude: Scholastic Aptitude Test.
0:02:56 They changed that; then it was Scholastic Assessment Test.
0:03:01 They changed that; then it was SAT colon Reasoning. They changed that too.
0:03:05 So now officially, the A stands for nothing, it’s just a brand.
0:03:10 So whatever it tests for, society is buying without any label on what it’s meant to do.
0:03:17 If you buy a food in a store, the label tells you more than just that there’s a correlation
0:03:22 between eating it and being less hungry afterwards. But with these SAT
0:03:25 tests, all that’s promised is a correlation with something.
0:03:30 So the problem with intelligence is that it’s widely used, but it’s got no definition.
0:03:31 Isn’t that fixable?
0:03:36 Can’t we get MacArthur Fellows and Turing Award winners and all you guys together to say,
0:03:38 “Can you just please define intelligence?
0:03:40 How hard could that be?”
0:03:46 So in the book, I quote a report of the American Psychological Association from some time ago, where
0:03:51 they describe asking 24 experts in intelligence, and
0:03:54 they all gave somewhat different answers.
0:04:00 So there’s some concepts for which we have a word, but we don’t really know what it
0:04:01 means.
0:04:06 Seeing as how you’re saying that there’s no sort of agreed upon consistent definition
0:04:13 of the word, how are we seemingly freely talking about AI so much right now?
0:04:18 How can we talk about artificial intelligence if we don’t even know what intelligence is?
0:04:19 Exactly.
0:04:25 So in fact, I think using the word intelligence didn’t help the AI field to progress, because
0:04:26 there was no goal there.
0:04:32 In my view, it progressed because AI adopted a more specialized definition or rather used
0:04:33 one.
0:04:34 And this is learning.
0:04:38 So learning is something which you can define much better.
0:04:41 So learning from examples, after you’ve seen some examples, you can classify future examples
0:04:43 better than you could before.
0:04:47 So learning from examples is something that’s a very specific ability.
0:04:49 It can be defined.
0:04:53 Computer programs are doing this quite well, and in practice these large language models
0:04:54 do exactly that.
0:04:58 Okay, so there’s some evidence that they do something interesting.
0:05:04 So AI, in my view, has progressed by trying to realize by computer those tasks for which there
0:05:05 is a definition.
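The definition of learning from examples that Valiant gives here, after seeing some examples you can classify future examples better than you could before, can be made concrete with a toy sketch. This example is ours, not Valiant's; the threshold learner and all names in it are invented for illustration, and it assumes two cleanly separable classes on a number line.

```python
# Toy sketch of "learning from examples" (illustrative only):
# after seeing labeled examples, the learner classifies new
# examples better than it could before seeing them.

def learn_threshold(examples):
    """examples: list of (x, label) pairs with label in {0, 1}.
    Returns a threshold t; we predict 1 when x >= t.
    Picks the midpoint between the largest 0-labeled x and the
    smallest 1-labeled x (assumes the classes are separable)."""
    zeros = [x for x, y in examples if y == 0]
    ones = [x for x, y in examples if y == 1]
    return (max(zeros) + min(ones)) / 2

def predict(t, x):
    """Classify a new, previously unseen point using the learned threshold."""
    return 1 if x >= t else 0

# After four examples, future points are classified correctly:
train = [(0.1, 0), (0.3, 0), (0.7, 1), (0.9, 1)]
t = learn_threshold(train)
print(predict(t, 0.2), predict(t, 0.8))  # prints: 0 1
```

The point of the sketch is only that, unlike "intelligence", this ability has a crisp definition: performance on unseen examples after training, which is exactly what can be measured.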
0:05:10 In a sense, what you’re saying is that intelligence is such a quagmire that you’re just going
0:05:13 to go outside the box and you’re going to create this new standard.
0:05:18 I’m not saying I want to create a new standard or that’s not how I start.
0:05:23 I’m saying how I phrase it in the book is that in some sense, it’s a very old question,
0:05:29 of course, is what’s the intellectual capability which humans have, which other species on
0:05:34 Earth do not, and the capability which enabled us to create our complex civilization, science
0:05:36 and everything else.
0:05:41 And although this may seem an overly ambitious question, in some sense it’s not, because
0:05:44 in evolutionary terms, we evolved pretty fast.
0:05:47 In a short time, this must have evolved.
0:05:52 So I’m trying to write down this capability.
0:05:58 And the first part of this is exactly what I was describing with learning from examples.
0:06:03 So it’s something which we understand well in computers.
0:06:07 Everyone sees how it works with these large language models in the last year or so.
0:06:09 And then I add a few things on.
0:06:12 So I think I’m doing a natural science.
0:06:15 I’m trying to explain human capabilities.
0:06:20 And I think something real, which has a definition, whether it turns out to be useful, that remains
0:06:21 to be seen.
0:06:25 In some sense, it is a replacement for intelligence, but in a special way.
0:06:32 So I’m curious that in the creation of your book and the title and really the writing
0:06:39 of the entire thing, did you ever think about the difference between “educatable” versus
0:06:40 “educable”?
0:06:44 Did you ever think of the nuances between those two words or now in modern use, their
0:06:46 equivalence, or it doesn’t matter?
0:06:51 I’m not quite sure how you mean any of those.
0:06:56 So at one point, these terms were used in the sense of yes/no.
0:07:01 So with children, whether they could be educated in a regular school as opposed to having to be sent out.
0:07:04 So I think that’s the use these terms had maybe 50 years ago.
0:07:07 And maybe that’s the way in which you mean “educatable”, right?
0:07:12 But what I’m trying to capture, to define more from the ground up, is the capability which I think
0:07:17 all of us have to some extent. And I’m not trying to distinguish among us, whether some of us have
0:07:18 more of it than others.
0:07:22 Just generally, we’re trying to distinguish us from other species.
0:07:26 So it’s not to be confused with the other senses in which this word may have been used
0:07:27 in the past.
0:07:29 It’s just like a new technical term, if you like.
0:07:30 Okay.
0:07:37 Now, from your point of view, is this level of “educableness” if that’s the word?
0:07:45 Is it a person’s or a machine’s innate ability or is it also subject to the methods and resources
0:07:49 and environments around that person or is it just the person?
0:07:54 So the capability itself, some of it is there at the beginning; the question is whether
0:07:58 it can be enhanced by some sort of life experience.
0:07:59 I don’t know.
0:08:00 Okay.
0:08:02 So I think that’s a good subject for investigation.
0:08:05 Certainly if you can enhance the rate at which you can learn, for example, that would be
0:08:07 fantastic for society.
0:08:10 But I think that’s a subject for research.
0:08:15 And I should add that most capabilities can be enhanced by some sort of training and the
0:08:16 right environment.
0:08:22 So the default assumption has to be that this also can benefit from suitable interventions.
0:08:23 But I don’t know.
0:08:28 You’re saying theoretically, if we could do this controlled experiment and we took you
0:08:33 or we took Stephen Wolfram or we took Neil deGrasse Tyson or, I don’t know, Freeman Dyson
0:08:38 or something and we stuck you in the middle of the Amazon jungle with no internet, no
0:08:40 books, no nothing.
0:08:42 Clearly you’re still the same person.
0:08:44 If we go back 20 years later, are you running the jungle?
0:08:46 Are you the leader there?
0:08:52 How much did where you are in modern society play into the fact that you’re such overachievers?
0:08:55 Well, first, I’m not equating the two.
0:08:59 So let’s suppose that you can measure educability, and that
0:09:01 different people have different levels of educability.
0:09:05 Let’s assume that. I’m not even certain of that, but let’s assume it. What it correlates
0:09:08 with in society, again, I don’t know.
0:09:10 It’s not obvious.
0:09:16 The most educable people may not be the ones who are most successful in politics or academia
0:09:17 or whatever.
0:09:18 I’m not sure.
0:09:24 I don’t want to speculate, but if you put someone in the jungle, then educability
0:09:30 would govern how fast they can learn what’s going on there relative to what they
0:09:31 know.
0:09:34 Obviously, it may be that they know so little that’s relevant to how to survive there that
0:09:37 their educability isn’t enough to survive.
0:09:44 But it’s a measure of how fast you can learn and understand what’s going on.
0:09:50 If I had to put my money on you or Stephen Wolfram to figure out curare before the random
0:09:53 person in the jungle, I think that’s a pretty good bet.
0:09:57 So I don’t want to speculate, but certainly there are some people, maybe the entrepreneurs,
0:10:02 who are very good at picking up lots of information, sifting through lots of information, understanding
0:10:03 what’s going on fast.
0:10:08 So I’m not quite sure which professions exhibit educability the most.
0:10:13 For example, scientists obviously need it, but they may not be the most extreme because
0:10:17 they’re also good at concentrating for a long time to pursue a direction.
0:10:23 Educability may be more where you keep finding out new things all the time and relating everything
0:10:24 to each other and running with it.
0:10:46 Are you familiar with the work of Carol Dweck and the growth mindset?
0:10:52 She’s a professor at Stanford and she in the early 2000s pioneered this dichotomy between
0:10:55 the growth mindset and the fixed mindset.
0:10:59 The fixed mindset basically says that you think you are what you are.
0:11:00 You cannot be any more than that.
0:11:03 You cannot learn new skills.
0:11:06 It also means you think you cannot deteriorate.
0:11:10 So if you’re a genius, you believe you’ll always be a genius and Carol Dweck’s theory
0:11:15 is that the growth mindset means you can learn things, you can do things and it seems to
0:11:23 me there’s a great deal of parallel between your theory of educable and the growth mindset.
0:11:27 So I thought that maybe great minds thought alike and you knew each other.
0:11:33 No, but I mean that theory may be more about one’s attitude, but I think this educability
0:11:35 is something which kind of everyone has.
0:11:40 So the fact that most people are gathering information all the time, they’re watching
0:11:44 movies for example, they’re reading novels, they’re looking at their cell phones, they’re
0:11:46 all soaking in information.
0:11:50 Much of the information is of no relevance to them, it’s got no benefit to them, but
0:11:51 that’s what they do.
0:11:52 They soak in information.
0:11:53 Okay.
0:11:57 So I think it’s this distinction: whether you just soak in information,
0:12:00 or whether you’re going to use it usefully, to benefit yourself, to find
0:12:01 a new profession.
0:12:03 I think that’s not exactly the same thing.
0:12:04 Okay.
0:12:09 I think what I’m describing is something which we all freely have, and I define it quite generously,
0:12:10 I think.
0:12:18 In your book you mentioned this example of chimps escaping from their cage in a zoo using
0:12:24 a fallen tree to jump over the wall and then the other chimps watched that and learned
0:12:27 and also jumped over the wall.
0:12:33 And so are you saying then that in this case these animals are educable?
0:12:36 Is that the conclusion to draw that they pass the test for educable?
0:12:43 No, I’m saying that they look pretty smart and they can solve problems and they can obviously
0:12:48 the first one to jump over the wall figured out what to do assuming that hadn’t seen something
0:12:50 like that being done and the rest could copy.
0:12:55 But educability, as described at great length in the book, is a combination of learning
0:13:01 from experience, being able to chain together what you’ve learned with some kind of reasoning,
0:13:06 and the third part is being able to learn from someone else describing to you explicitly
0:13:08 what should be done.
0:13:12 So in some sense, copying behavior is like that, but in humans it’s much more general
0:13:13 than that.
0:13:18 If you go to college, you sit in a lecture room and someone tells you the laws of quantum
0:13:23 mechanics, and you sit there for 30 lectures, and at the end you can do a lot of stuff.
0:13:28 So I think it’s only humans who have this ability to be able to soak in a lot of information
0:13:29 given explicitly.
0:13:34 So it’s like other people having had the experience, other people having done the experiments,
0:13:38 but they can tell you the conclusions and you can internalize it yourself and use it
0:13:41 as if you had the experience yourself.
0:13:48 Have you made observations yet about the presence of educableness in life?
0:13:50 Is it normally distributed?
0:13:53 Is it distributed differently to genders?
0:13:59 Does your educableness, does it grow or decay chronologically through life?
0:14:04 Have we had some longitudinal studies and cross-cultural studies, cross-gender studies
0:14:06 to understand more about this concept?
0:14:09 No, we haven’t had any studies.
0:14:12 It’s a new concept, we haven’t had any studies.
0:14:16 So to answer any of the questions, you’d have to develop some test which you possibly think
0:14:22 does measure educability, and these tests would have to be of a particular nature.
0:14:27 In principle you could have a test which is like a one-semester
0:14:31 course, or you could have a one-hour test, but what you’d be measuring is how much you’ve
0:14:34 learned during that one hour.
0:14:39 So you’re given some questions where it wouldn’t help you to have previous knowledge to answer
0:14:40 the questions.
0:14:46 So it’s related to current IQ tests, but it would have a different emphasis.
0:14:52 It’s a new kind of question, but the only related point I’d make is that if one accepts
0:14:57 this notion that what characterizes us humans is this educability, which means an extreme ability
0:15:03 to learn and soak up information, then this also illustrates why it’s very difficult to
0:15:10 answer the questions you ask, I mean to try to prove differences between groups in abilities
0:15:15 and other abilities is difficult just because if you make some measurement of the performance
0:15:19 of a group today, somewhere else at a different time, the outcome may be different because
0:15:23 the group may have different experiences.
0:15:29 So we are being so subject, so extremely subject to outside influences throughout educability
0:15:35 that we’re the least promising objects of scientific study, if you like.
0:15:39 In doing surveys on humans in different places, you shouldn’t be surprised if the results
0:15:45 are not transferable or generalizable across different places and different times, just because,
0:15:49 and this is the point of educability, we’re so prone to change, so prone to
0:15:55 be influenced by our environment, that it’s very hard to answer any of the questions you ask.
0:16:01 If you think about how would you possibly control all the variables in an experiment
0:16:02 like that?
0:16:03 Right?
0:16:07 Although, I gotta tell you, if I had to guess, I would tell you that women are more educable
0:16:08 than men.
0:16:12 There’s no doubt in my mind, but I don’t have any objective proof for that.
0:16:14 Just my life experiences.
0:16:20 So if we’re at such like the starting point of all of this, let’s say somebody’s listening
0:16:24 to this and saying, “Yeah, you know what, really, I want myself, I want my kids, I want
0:16:26 my company to be more educable.
0:16:28 What can I do now?
0:16:33 Give me some tactical and practical stuff, Leslie, help me out here.”
0:16:36 I don’t think I can, I’m sorry.
0:16:44 I think the only rational thing is to do research and develop some tests which possibly
0:16:46 measure this kind of thing.
0:16:49 If we can’t measure it, then there’s not much we can say about it.
0:16:54 So I think it’s a long-term project, the question of whether we can improve our educability.
0:16:55 I think it’s a long-term project.
0:17:01 Of course, many people have ideas about learning to learn and teaching to learn, but to validate
0:17:07 any of these things is difficult without a measure of when you can declare yourself
0:17:10 successful in having enhanced learning.
0:17:17 If you were to just take a social welfare perspective and step back, it’s hard to imagine
0:17:22 many things that could be more important than this.
0:17:28 This could be the key to the survival of mankind to figure out how to make people more educable.
0:17:31 Personally, I agree.
0:17:32 I do.
0:17:37 Yeah, exactly. I think it’s a public issue for discussion.
0:17:40 Listen, let me give you some shallow thinking.
0:17:44 Let me see if this metaphor will work for you.
0:17:52 Is it fair to say that intelligence, even though we can’t really define it, intelligence
0:17:59 is like the chip speed and educableness is programmability.
0:18:00 Is that a fair statement?
0:18:01 No.
0:18:02 I don’t think so.
0:18:10 I think, supposing we talk about a computer rather than a human,
0:18:15 educability is like a description of what the capabilities are of this thing.
0:18:17 And intelligence, I don’t know what that is.
0:18:19 I really don’t know what intelligence is.
0:18:25 Some people say intelligence is what a standard intelligence test measures, which is a bit
0:18:26 circular.
0:18:29 But again, I don’t quite know what that is.
0:18:31 Any cognitive capabilities are correlated.
0:18:35 If you’re good at one thing, often you’re good at something else, often a weak correlation,
0:18:37 strong correlation.
0:18:41 So these intelligence tests are just something, maybe arbitrary, which is correlated with
0:18:45 many things, which colleges may want to find this out because it may be correlated with
0:18:47 how our students do.
0:18:49 But many other tests would also.
0:18:51 So I really don’t know what intelligence is.
0:18:59 Personally, I don’t think I go around saying, oh, this person, my neighbor, is
0:19:00 intelligent.
0:19:06 I don’t think I would necessarily say that. Orthogonal to this: is there no concept
0:19:11 that we need to take into account, something like morality, right?
0:19:16 So that morality, you could be the most educable person in the world.
0:19:20 Not that I believe this, but let’s just suppose for a second that we think Donald Trump
0:19:22 is educable.
0:19:24 But I would say he has zero morality, right?
0:19:29 You could take an educable person who could learn from others and extrapolate
0:19:31 and do all these great things.
0:19:33 But what if that person is fundamentally evil?
0:19:34 Then what?
0:19:36 Then it’s unfortunate.
0:19:44 But certainly, educability only defines, exactly as I say, someone’s basic cognitive capabilities.
0:19:49 And as I say in the book, I talk about beliefs: you learn beliefs,
0:19:53 other people’s beliefs; people tell you their beliefs.
0:19:57 And so in that sense, the theory is about capabilities.
0:19:59 So in one sense, it’s morally totally neutral.
0:20:03 It doesn’t discuss morality because, as you say, the beliefs could be good or could be
0:20:04 bad.
0:20:10 But what one’s reaction to this should be, it shouldn’t be moral neutrality.
0:20:15 As you say, some people have different beliefs and some we think are evil and some we think
0:20:16 are good.
0:20:21 And just because our capabilities are neutral to morality, it doesn’t mean that we should
0:20:22 be.
0:20:25 We should still certainly fight for what we believe is right and against things which
0:20:28 we believe are evil.
0:20:34 But the fact is that, because our basic capability is kind of neutral on morals, totally bad
0:20:36 beliefs can spread through society.
0:20:39 We don’t seem to have any good defense against that.
0:20:45 If people are listening to this podcast and they’re struggling with this educable concept,
0:20:52 do you have any people that might be well known in the Valiant hall of fame of
0:20:58 educableness, that you can say, oh, I understand that now, he cites this person as educable?
0:21:02 Is there anybody like that you can say, think of him or think of her when you think about
0:21:03 educability.
0:21:07 You mean someone who shows educability in the extreme?
0:21:08 Yeah.
0:21:09 Who’s your hero?
0:21:11 Who’s in your hall of fame of educableness?
0:21:17 I don’t know, but I sincerely believe that this is something which we all have.
0:21:20 I’m not trying to define something which will separate us.
0:21:24 I’m really trying to define something which we all have and which unifies us.
0:21:27 So I think the educability is very important, otherwise I wouldn’t have spent so much time
0:21:28 on it.
0:21:33 But I don’t know, as we said before, what this correlates with exactly.
0:21:39 What human performance it correlates best with, what kind of people exhibit it the most.
0:21:40 Yeah.
0:21:43 So at this point, I just don’t have a good answer to your question.
0:21:47 It’s a natural question, but it’s not the direction in which I’m looking.
0:21:48 Okay.
0:21:53 If we can shift gears slightly towards artificial intelligence, I know I just used the I word,
0:21:55 but that’s what everybody uses, right?
0:22:02 At this point, when people say artificial intelligence, do you think it means that it
0:22:09 refers to what a machine can do like a human or what a machine cannot do like a human?
0:22:11 Like what is artificial anymore?
0:22:16 I think the meaning of the term has changed in history, but at the moment in the media,
0:22:21 it clearly means the kind of things which current machine systems can do.
0:22:22 Okay.
0:22:27 So by AI people seem to mean large language models and things for which you can download
0:22:30 some software that can do them.
0:22:35 So it’s in the area of machine capabilities in manipulating language and pictures.
0:22:37 So I think that’s what it means now.
0:22:43 It’s artificial in the sense that it’s a machine doing what humans can do and that’s what makes
0:22:45 it artificial.
0:22:46 Yeah.
0:22:49 The artificial part just means that it’s a machine doing it.
0:22:53 Now, in your opinion, is ChatGPT educable?
0:22:59 No, no. Basically, ChatGPT has just one of the three requirements of educability, which
0:23:01 is learning from examples.
0:23:04 So it’s trained to predict the next syllable of text.
0:23:06 That’s what it’s trained to do, basically.
0:23:12 It predicts text syllable by syllable, having been given billions of examples.
0:23:19 Now, I understand that from a technical standpoint, but from the outside looking in, if you just
0:23:23 give it a series of prompts, I don’t think most people would look at it like, oh, here
0:23:26 comes syllable after syllable very rapidly.
0:23:29 It looks very cogent and salient to me.
0:23:31 What’s going on there?
0:23:32 How does that thing work then?
0:23:39 It’s trained on a very large number of sentences, so the next syllable will be very likely to
0:23:43 come from some sentence or some phrase which has been uttered before many times.
0:23:49 And also, it works on very large windows of text, so it predicts the next thing based on many
0:23:51 hundreds of characters.
0:23:56 So it does give the impression of some sort of stream of consciousness, as if it can remember
0:24:01 what it’s talking about for a while, because it’s got this very large bit of text from
0:24:03 which it’s predicting.
0:24:10 But it uses the fact that it’s got vast numbers of sentences stored, which it can use.
0:24:17 And so some of the mystery is that with machine learning in general, what happens
0:24:21 when you’ve got billions of examples is something really counterintuitive.
0:24:26 So if you train on numbers of examples people have never looked at before, the phenomena are almost
0:24:28 different in kind.
0:24:34 It’s okay to be impressed, and I’m impressed too; the smoothness of the sentences is amazing.
0:24:39 But I don’t think one should assume that these things are providing you more than what they’re
0:24:43 doing, predicting the next syllable. So certainly you shouldn’t take their advice, for example,
0:24:46 on some important decision you have to make.
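The "predict the next syllable from billions of examples" description can be sketched, in a deliberately toy form, as a character-level n-gram counter. This is our assumed illustration, nothing like the transformer architecture real large language models use; it only shows the shape of the task Valiant is describing: count which continuations follow each context in the training text, then predict the most frequent one.

```python
# Toy sketch (illustrative only) of next-piece-of-text prediction:
# count the character that follows each k-character context, then
# predict the most frequent continuation of a given context.
from collections import Counter, defaultdict

def train_ngram(text, k=3):
    """Map each k-character context to counts of the character that follows it."""
    model = defaultdict(Counter)
    for i in range(len(text) - k):
        model[text[i:i + k]][text[i + k]] += 1
    return model

def predict_next(model, context, k=3):
    """Return the most frequent continuation of the last k characters, or None."""
    counts = model.get(context[-k:])
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat. the cat ran. " * 3
model = train_ngram(corpus)
print(predict_next(model, "the c"))  # prints: a
```

A real model differs in scale and mechanism, but the contrast Valiant draws survives the simplification: the program only continues text; nothing in it chains knowledge, reasons, or takes explicit instruction, which is why prediction alone satisfies just one of his three requirements.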
0:24:53 If you were to take a Turing test orientation, and now, okay, you’re going to have to correct
0:24:57 me if I get this wrong, but the Turing test is if you’re interacting with this thing and
0:25:00 you don’t know if it’s a human or a machine, and you can’t tell the difference between
0:25:03 a human and a machine.
0:25:04 Isn’t it a human?
0:25:05 Yeah.
0:25:06 Okay.
0:25:11 So Turing wrote this paper in 1950, where he discussed the word thinking and also intelligence, and
0:25:16 basically said that, yes, if you can’t tell the difference between a human and a machine,
0:25:19 then you can’t say that the human is thinking and the machine isn’t thinking.
0:25:22 So this is like almost a definition of thinking.
0:25:26 So in some sense it’s the opposite of what I’m trying to do.
0:25:30 I’m saying that you should define what thinking is and what intelligence is.
0:25:32 So Turing didn’t do this, he said, it is what it is.
0:25:36 If it looks like thinking, it’s thinking.
0:25:42 So the Turing test is where the machine can effectively impersonate a person, but again, I don’t know
0:25:44 how far that takes us.
0:25:48 So for example, if you look at these large language models, in some sense they look like
0:25:55 a human. But the Turing thing is that you do some 20 questions with it: you ask questions
0:26:01 and see what the answers are. And for example, you’ll find that large language models don’t
0:26:04 know about yesterday’s news, because they were trained a long time ago.
0:26:05 Okay.
0:26:08 So clearly it doesn’t pass the Turing test technically.
0:26:09 Okay.
0:26:15 But the point is that the Turing test philosophically was of course very important; it’s intrigued
0:26:17 people for decades.
0:26:21 But by itself it doesn’t tell you what to do to make an intelligent machine.
0:26:25 It just says that you shouldn’t quibble philosophically.
0:26:35 So the fact that ChatGPT is not educable, does that make you more comfortable with everybody
0:26:38 jumping on large language models or does it scare you more?
0:26:43 Is it a good thing or a bad thing that it isn’t educable at this point?
0:26:49 The only way of saying it is that at all levels there are some choices to be
0:26:54 made, and it’s not clear what the best choice is. But with large language models, with just
0:27:00 learning from examples, as we all know now, and always did, the training set is all-important.
0:27:05 For example, these large language models can have a political orientation, depending on
0:27:09 what text is being trained on, they could have a bias just depends what is trained on.
0:27:13 Even at this level, it’s not clear what you should be doing.
0:27:17 I mean, depending on your political orientation, you may want to train in a different way.
0:27:21 Already we’re disagreeing on what the ideal large language model is.
0:27:27 If you make things educable, then the problems pile up because if you just tell your system
0:27:31 beliefs, then it’s an important question of what beliefs you give it.
0:27:36 So maybe telling it evil beliefs is just not such a good idea.
0:27:40 So although we’re wonderful, we think and are educable, there are all kinds of problems
0:27:43 with reproducing our educability.
0:27:46 No one knows what the best method of education is.
0:27:49 No one knows what the best beliefs are to instill in people.
0:27:52 No one knows what right beliefs are and wrong beliefs are.
0:27:57 So if we made machines with the same capabilities as humans, it wouldn’t solve anything, because
0:28:01 we don’t quite know what the best way is to educate humans.
0:28:02 Okay.
0:28:06 So if you have an educable machine, you have to educate it, and there are choices to be
0:28:07 made in education.
0:28:35 What do you think poses a greater existential threat to the survival of mankind, mankind
0:28:38 or artificial intelligence?
0:28:39 Mankind.
0:28:47 So the dangers of misusing artificial intelligence or having accidents or things like that are
0:28:52 obviously present as there are with any other dangerous technology, chemistry, nuclear weapons,
0:28:53 whatever.
0:28:58 But I think the extreme viewpoint that somehow machines will take over because they become
0:29:04 so intelligent that they’ll control the world, I think that’s a misplaced fear.
0:29:08 So certainly we have to distinguish control from other capabilities.
0:29:12 So we certainly shouldn’t give control to a machine if you’re not sure what the machine
0:29:13 is going to do to us.
0:29:14 That’s absurd.
0:29:15 Okay.
0:29:17 So we try to not give it control.
0:29:21 But if we don’t give it control, then what it does will be things which I think we kind
0:29:24 of understand, a bit like with chemistry.
0:29:27 We understand some chemistry, not all of chemistry.
0:29:33 So we have to treat it as any other powerful technology, but I don’t think by its nature
0:29:35 it’s different from other technologies.
0:29:39 So I don’t think some monster is going to emerge from some machine somewhere, which
0:29:40 we don’t understand.
0:29:42 I’m not worried about that.
0:29:43 Okay.
0:29:49 So is it a fair summary to say that we’re at the starting line of educableness and we
0:29:53 are not sure how to foster it, how to grow it and all this.
0:29:58 And so there’s so much research to be done, but it offers great potential to make the
0:30:02 world a better place if we can just figure out how we can get people to learn more from
0:30:05 each other, learn more from what happened before.
0:30:08 And is that kind of the picture you’re trying to paint?
0:30:09 Yes.
0:30:11 That’s a very nice, generous description.
0:30:12 Yes.
0:30:14 I think that’s a very nice way of putting it.
0:30:15 Yes.
0:30:19 And in a sense, I’m asking the same question twice, but if I’m listening to this and I’m
0:30:25 buying into this, just like people listen to Carol Dweck and buy into the growth mindset.
0:30:27 So now what can I do?
0:30:32 I’m looking for some tips that I can use for my teenage son to make him more educable.
0:30:35 I’m not sure whether I can give a tip.
0:30:39 I think possibly being aware of this dimension.
0:30:44 So being aware of the extent to which we’re soaking up information, how easily we absorb ideas
0:30:46 which we hear.
0:30:50 So one of the points I’m making in the book is that our educability is very strong.
0:30:55 We soak up information, we can relate information, use information, but evaluating information
0:30:56 is much more difficult.
0:31:03 So if we hear a theory, evaluating whether it’s true or not isn’t part of our basic
0:31:08 cognitive capabilities, probably because it’s impossible: if you hear some report of
0:31:12 something happening on the other side of the world, you can’t go and look at it.
0:31:14 You have to decide whom you believe.
0:31:20 So I think appreciating that we are so much subject to what we hear and see, and we
0:31:22 soak it up and maybe agree with it.
0:31:27 So one parameter which psychologists look at, as we said
0:31:31 before, is that we hear all these theories, but which ones do we believe?
0:31:33 So what are our criteria for whom to believe?
0:31:38 Among all these theories, we have to choose which ones to accept, because they
0:31:39 may be contradictory.
0:31:42 So what are our criteria?
0:31:47 And so psychologists, of course, look at this, that we prefer theories which agree with what
0:31:51 we already believe, theories which agree with our friends, et cetera.
0:31:56 So this educability, I think, is also a weakness; it means all the time we’re prone to some
0:32:01 new idea which we believe but shouldn’t really, and we may be tricked into it. It’s our
0:32:08 weakness that we have some policy for agreeing with things for which we already have a sympathy,
0:32:09 but that may be wrong.
0:32:14 So I think it’s a viewpoint which is new to me, just showing how vulnerable we are to
0:32:17 the ideas which are swirling around us.
0:32:23 So I think being aware of this educability notion may help people in understanding what’s
0:32:24 going on.
0:32:28 Listen, fundamentally, I am a marketing person.
0:30:33 So I was chief evangelist of Apple, chief evangelist of an online graphic design service
0:30:34 called Canva.
0:32:38 So I’m all about sales and marketing and evangelism.
0:32:45 And I’m telling you from the outside looking in, you are holding in your hand a golden
0:32:47 opportunity to change the world.
0:32:54 If you could just get people off this kind of SAT, IQ test, Mensa membership thing and all
0:32:59 that and just point out to them that it’s not about this score, it’s about how you can
0:33:02 adapt and learn and coexist with other people.
0:33:09 That’s much more important than your GPA and your SAT. And I’m serious, I think you may
0:33:14 be holding the future of humanity in your hand here, Leslie.
0:33:16 You got a big responsibility here.
0:33:17 Thank you for the comments.
0:33:22 I think there are many ways this can go and I certainly need the help of lots of people
0:33:25 to actually go in any of those ways.
0:29:33 But yeah, I think there’s a new idea here which is of very general relevance to us.
0:33:38 There’s no doubt in my mind that Leslie is onto something with this concept of “educable”.
0:33:43 I think it is just the flip side of Carol Dweck’s growth mindset.
0:33:47 And you know how much I love Carol Dweck’s growth mindset.
0:33:53 Carol Dweck and Leslie Valiant, that would be a remarkable combination.
0:33:58 Maybe I’ll send this episode to Carol and see if she’s interested in meeting Leslie.
0:34:00 The world would shake if this happened.
0:34:05 Anyway, I’m Guy Kawasaki, this is Remarkable People.
0:34:10 Once again, I’m going to remind you, please read our new book, Think Remarkable.
0:34:16 It will help you make a difference and change the world and be remarkable.
0:34:22 Now speaking of “educable”, we have a particularly educable group of people on the Think Remarkable
0:34:23 team.
0:34:30 They are, of course, Madisun Nuismer, producer, Tessa Nuismer, researcher, Jeff Sieh and Shannon
0:34:39 Hernandez on sound design, and our best buddies, Alexis Nishimura, now at Santa Clara University,
0:34:44 Luis “Shortboard” Magaña, and finally, Fallon Yates.
0:34:46 This is the Remarkable People team.
0:34:49 We’re on a mission to make you remarkable.
0:35:00 Until next time, mahalo and aloha.
In this episode of Remarkable People, Guy Kawasaki engages in a captivating discussion with Leslie Valiant, a distinguished professor of computer science and applied mathematics at Harvard University. Valiant introduces the groundbreaking concept of “educability” from his new book “The Importance of Being Educable” – the unique human ability to absorb and apply knowledge effectively. He argues that understanding and harnessing our educability is crucial for navigating the challenges posed by the rise of artificial intelligence. Join Guy and Leslie as they explore the insights from the book, the nature of intelligence, the potential of machine learning, and the importance of fostering our innate capacity to learn and adapt in an ever-changing world.
—
Guy Kawasaki is on a mission to make you remarkable. His Remarkable People podcast features interviews with remarkable people such as Jane Goodall, Marc Benioff, Woz, Kristi Yamaguchi, and Bob Cialdini. Every episode will make you more remarkable.
With his decades of experience in Silicon Valley as a Venture Capitalist and advisor to the top entrepreneurs in the world, Guy’s questions come from a place of curiosity and passion for technology, start-ups, entrepreneurship, and marketing. If you love society and culture, documentaries, and business podcasts, take a second to follow Remarkable People.
Listeners of the Remarkable People podcast will learn from some of the most successful people in the world with practical tips and inspiring stories that will help you be more remarkable.
Episodes of Remarkable People organized by topic: https://bit.ly/rptopology
Listen to Remarkable People here: https://podcasts.apple.com/us/podcast/guy-kawasakis-remarkable-people/id1483081827
Like this show? Please leave us a review — even one sentence helps! Consider including your Twitter handle so we can thank you personally!
Thank you for your support; it helps the show!
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.