AI transcript
0:00:10 – Hi, everybody, it’s Guy Kawasaki.
0:00:13 This is the Remarkable People podcast.
0:00:15 And as you well know,
0:00:17 I’m on a mission with the rest of the team
0:00:19 to make you remarkable.
0:00:22 And today we have a really special guest.
0:00:23 His name is Terry Sejnowski.
0:00:26 And I gotta tell you, our topic is AI
0:00:29 and nobody likes AI more than I do.
0:00:31 And he has just written a book,
0:00:34 which I found very useful.
0:00:38 It’s called “ChatGPT and the Future of AI.”
0:00:42 Now, Terry has one of the longest titles
0:00:45 I have ever encountered in this podcast,
0:00:47 so I gotta read it here.
0:00:51 Terry is the Francis Crick Chair
0:00:55 at the Salk Institute for Biological Studies
0:00:57 and Distinguished Professor
0:01:00 at the University of California at San Diego.
0:01:03 Your LinkedIn page must be really something.
0:01:05 So thank you very much, Terry.
0:01:07 Welcome to the show.
0:01:08 – Oh, great to be here.
0:01:10 Thanks for inviting me.
0:01:12 – There’s nothing we like more
0:01:14 than to help authors with their new books.
0:01:18 So we’re gonna be making this mostly about your book.
0:01:21 And I think that the purpose is
0:01:23 people listen to this episode
0:01:25 and at the end they feel compelled to buy your book.
0:01:28 And if you just stop listening right now,
0:01:32 you should just trust me and buy this book, okay?
0:01:34 I have a question from left field.
0:01:36 So I noticed something.
0:01:38 At the end of chapter one,
0:01:41 you ask a series of questions
0:01:44 to help you understand chapter one.
0:01:46 And the 10th question is, let me read.
0:01:49 Who is Alex the African Gray Parrot?
0:01:53 And how does he relate to the discussion of LLMs?
0:01:55 And I read that, Terry.
0:01:58 And I said, where did he ever mention Alex,
0:02:00 the African Gray Parrot?
0:02:02 So I went back and I searched and searched
0:02:05 and I could not find the name Alex anywhere.
0:02:08 And then so I bought the Kindle version
0:02:11 so I could search digitally and I searched for parrot.
0:02:14 And there’s like one sentence that says,
0:02:17 critics often dismiss LLMs by saying
0:02:21 they are parroting excerpts from the vast database
0:02:23 used to train them.
0:02:26 So that’s the only reference
0:02:28 that people were supposed to get.
0:02:30 Alex, the parrot, was that a test
0:02:32 to see how careful people read?
0:02:36 – Well, first of all, it’s in the footnotes,
0:02:38 the endnotes at the end of the book.
0:02:41 So it’s in that chapter if you look at it.
0:02:46 And Alex the Gray Parrot was a really quite remarkable parrot
0:02:49 that was taught to speak English.
0:02:51 Irene Pepperberg, I don’t know if you know her,
0:02:53 but she taught it not just to speak English,
0:02:57 but to tell you the color of, say, a block of wood
0:02:59 and how many blocks are there
0:03:00 and what’s the shape of the block?
0:03:03 Is it square or round? It’s sort of unbelievable.
0:03:08 And it shows how remarkable some animals are.
0:03:09 We can’t speak parrot,
0:03:11 but some of them can speak English, right?
0:03:15 – So in a sense, it’s like when Jane Goodall
0:03:17 discovered that chimpanzees had social life
0:03:19 and could use tools, right?
0:03:21 – Well, it’s exactly the same.
0:03:23 I think humans are very biased
0:03:26 against the intelligence of other animals
0:03:28 because they can’t talk to us.
0:03:32 Now, the irony is that here comes ChatGPT
0:03:34 and all the large language models.
0:03:37 It’s as if an alien suddenly arrived here
0:03:40 and could speak to us in English.
0:03:42 And the only thing we can be sure of is it’s not human.
0:03:45 And so if it’s not human, what is it?
0:03:48 And now that we have this huge argument going on
0:03:52 between the people who say they’re stochastic parrots,
0:03:56 they’re just parroting back all the data they were trained on.
0:04:00 Without understanding, they say. But you can ask questions
0:04:03 that were never asked and aren’t in any database in the world,
0:04:06 and the only way that it can answer
0:04:08 is if it generalizes from what’s out there,
0:04:10 rather than repeating exactly what’s out there.
0:04:10 So that’s one thing.
0:04:13 But the other thing is that they say that,
0:04:15 okay, it seems to be responding,
0:04:17 but it doesn’t really understand what it’s saying.
0:04:22 And what it has shown us is we don’t understand
0:04:23 what understanding is.
0:04:25 We don’t understand how humans understand.
0:04:27 So how are we going to say it doesn’t?
0:04:29 – So in other words, people should give parrots
0:04:31 more credit than they might.
0:04:34 – That’s for sure.
0:04:36 I’m convinced of that.
0:04:38 And I think it’s not just that.
0:04:39 I think it’s a lot of animals out there.
0:04:44 The orcas and chimps and a lot of species
0:04:46 really are very sophisticated.
0:04:48 Look, they all had to have survived in their niche, right?
0:04:50 And that takes intelligence.
0:04:52 – All right, you will be able to see this more and more
0:04:55 as we progress, but I really enjoyed your book.
0:04:57 And I did a lot of things that you said to try.
0:05:00 So I’m gonna give you an example.
0:05:05 So I asked ChatGPT, should the Bible be used as a text
0:05:09 in public elementary schools in the United States?
0:05:12 And ChatGPT says, using the Bible as a text
0:05:14 in public elementary schools in the US
0:05:18 is a contentious issue due to the following considerations.
0:05:21 And I won’t read every word, but constitutional concerns,
0:05:26 educational relevance, community values, legal precedent.
0:05:29 So my question from all of this is like,
0:05:34 how can an LLM have such a cogent answer
0:05:37 when people are telling me,
0:05:41 oh, all an LLM is doing is statistics and math
0:05:43 and it’s predicting what’s the next syllable
0:05:45 after the previous syllable.
0:05:47 It looks like magic to me.
0:05:51 So can you just explain how an LLM could come up
0:05:53 with something that cogent?
0:05:56 – So you’re right, that it was trained
0:05:57 on an enormous amount of data,
0:06:01 trillions of words from the internet, books,
0:06:05 newspaper articles, computer programs.
0:06:08 It’s able to absorb a huge amount of data.
0:06:13 And it was trained simply to predict the next word
0:06:14 in the sentence.
0:06:18 And it got better and better and better and better.
0:06:21 And here’s, I think what’s going on.
0:06:23 We really don’t know for sure what’s going on
0:06:26 inside this network, but we’re making some progress.
0:06:30 So words are ambiguous, they often have multiple meanings.
0:06:32 And the only way you’re gonna figure that out
0:06:34 is the context of the word.
0:06:36 That means previous words,
0:06:38 what’s the meaning of the sentence.
0:06:39 And so in order to get better,
0:06:43 it’s going to have to develop internal representations.
0:06:47 By representation, I just mean a kind of a model
0:06:49 of what’s happening in the sentence.
0:06:53 But it’s gotta have semantic information, meaning.
0:06:54 It has to be based on meaning.
0:06:56 It also has to understand syntax, right?
0:06:58 It has to understand the word order.
0:07:01 And that’s very important in linguistics.
0:07:05 And so all of that has to be used as hints, as clues,
0:07:08 as to how to predict the next word.
0:07:11 But now that you have it trained up
0:07:14 and you give it a question,
0:07:17 now it’s gotta complete the next word,
0:07:19 which is gonna be the answer to the question.
0:07:21 And it gets the next word.
0:07:23 It’s a feed forward network, by the way.
0:07:25 But then it loops back to the input.
0:07:29 So it now knows what its last word was.
0:07:30 And then it produces the second word
0:07:33 and it goes over, over again and again,
0:07:36 until it reaches some stop.
0:07:38 That is, I don’t know how they program that.
0:07:40 ‘Cause sometimes it goes on for pages,
0:07:42 depending on what you ask it to do.
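What Terry is describing is the standard autoregressive loop: a forward pass predicts one token, that token is appended to the input, and the pass repeats until a stop token or a length limit is reached. Here is a minimal sketch in Python; next_token is a toy stand-in, not a real trained network.

```python
# Minimal sketch of the generate-one-token-and-feed-it-back loop.
# next_token() is a hard-coded stand-in for a trained network.

def next_token(context: list[str]) -> str:
    # In a real LLM this would be a forward pass that scores every token
    # in the vocabulary and samples or picks the most likely continuation.
    canned = {"What": "is", "is": "a", "a": "parrot", "parrot": "<stop>"}
    return canned.get(context[-1], "<stop>")

def generate(prompt: list[str], max_tokens: int = 20) -> list[str]:
    context = list(prompt)
    for _ in range(max_tokens):          # hard limit so it cannot run forever
        token = next_token(context)      # one pass over the whole context
        if token == "<stop>":            # models emit a special end-of-sequence token
            break
        context.append(token)            # loop the output back into the input
    return context

print(generate(["What"]))  # ['What', 'is', 'a', 'parrot']
```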
0:07:44 – I understand what you just said.
0:07:48 Every day, it’s still just magic to me.
0:07:51 In a sense, like a lot of people, I’m concerned
0:07:56 that we don’t know exactly how an LLM did that.
0:07:58 But then my counter argument to them would be,
0:08:01 how well do we understand the human brain?
0:08:02 That doesn’t upset you so much.
0:08:04 Why is it so upsetting
0:08:07 that you don’t know how an LLM thinks?
0:08:11 – And also the complaint is that ChatGPT is biased.
0:08:13 And the same argument that you just gave
0:08:15 is that humans are biased too.
0:08:19 And then I ask, okay,
0:08:22 do you think it’s gonna be easier to fix the LLM
0:08:23 or the human?
0:08:26 (laughing)
0:08:29 – I think we know the answer to that question.
0:08:35 I often talk in front of large tech audiences
0:08:37 and AI is often the topic.
0:08:40 And when these skeptics come up
0:08:44 and they say that LLMs are gonna cause nuclear wars
0:08:46 and all that, I ask them this question.
0:08:50 I say to them, let’s suppose that you have to have
0:08:52 something control nuclear weapons.
0:08:55 Let’s take it as a given we have nuclear weapons.
0:09:00 So who would you rather have control nuclear weapons?
0:09:05 Putin, Kim Jong-un, Netanyahu, or ChatGPT.
0:09:08 And nobody ever says, oh yeah,
0:09:09 I think Putin should do it.
0:09:12 So last night I asked ChatGPT this question
0:09:16 and it says, I wouldn’t choose to launch a nuclear weapon.
0:09:20 The use of nuclear weapons carries severe humanitarian
0:09:22 and environmental consequences.
0:09:25 And the global consensus is to work towards disarmament
0:09:29 and ensure such weapons are never used.
0:09:30 That is a more intelligent answer
0:09:32 than any of those people I listed would give.
0:09:35 – It is remarkable at the range.
0:09:38 It’s not just giving sensible answers.
0:09:41 It often says things that make me think twice.
0:09:44 And also, I don’t know if you’ve tried this,
0:09:47 but it turns out that they also are very good at empathy,
0:09:48 human empathy.
0:09:50 In the book I have this little excerpt
0:09:54 from a doctor whose friend had cancer
0:09:55 and he didn’t know quite what to say.
0:09:59 So he got some advice from ChatGPT
0:10:02 and it was so much better than what he was going to say.
0:10:05 And then at the end, he went back to ChatGPT
0:10:07 and said, oh, thank you so much for that advice.
0:10:09 It really helped me.
0:10:11 And it said, you are a very good friend.
0:10:15 You really helped her. It was starting to console him.
0:10:18 And where does that come from?
0:10:22 It turns out that human empathy seems like magic,
0:10:27 but it is embedded indirectly in lots of places
0:10:31 where humans are writing about their experiences,
0:10:34 or in biographies or novels
0:10:37 where doctors are empathizing.
0:10:38 I don’t know, no one really knows exactly,
0:10:40 but they must be there somewhere.
0:10:43 – It’s kind of blown away the Turing test, right?
0:10:46 You mentioned in your book,
0:10:49 this concept of the reverse Turing test
0:10:52 where instead of a human testing a computer,
0:10:54 a computer is testing a human.
0:10:58 And Terry, I think that is a brilliant idea.
0:11:03 Couldn’t you have a chat bot interview a job applicant
0:11:07 and decide if that job applicant is right for the job
0:11:10 better than a human could?
0:11:13 – I think it would need a little bit of fine tuning,
0:11:16 but I’m sure it could do a good job.
0:11:18 And a lot of companies actually are using it,
0:11:19 but here’s the problem.
0:11:24 The problem is that a company wants the best employee
0:11:27 based on all the data the company has
0:11:30 about people who have done well and people who haven’t.
0:11:32 But what if there are some minorities
0:11:35 that haven’t done very well for various reasons?
0:11:38 There’s gonna be a bias against those minorities.
0:11:40 Well, you can in fact put in guardrails
0:11:43 and prevent that from happening.
0:11:46 In fact, if diversity is a goal that you have,
0:11:48 you should put that into the cost function.
0:11:51 Or actually they call it a loss function,
0:11:55 but it’s really weighting the value of what it is
0:11:57 that you’re trying to accomplish.
0:12:00 It has to be told explicitly; you just can’t assume.
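A rough sketch of what putting a goal like diversity into the loss function means: the total loss is the ordinary prediction error plus a weighted penalty for the extra objective, and the weight states explicitly how much that objective matters. The numbers and the fairness term below are invented for illustration only.

```python
# Illustrative only: combine prediction error with a weighted fairness penalty.

def hiring_loss(prediction_error: float,
                fairness_gap: float,
                fairness_weight: float = 1.0) -> float:
    # fairness_gap could be, say, the difference in positive-outcome rates
    # between groups; how it is measured is itself a design decision.
    return prediction_error + fairness_weight * fairness_gap

# Weight 0 optimizes accuracy alone; a larger weight tells the model
# explicitly that closing the gap matters too.
print(hiring_loss(0.25, 0.25, fairness_weight=0.0))  # 0.25
print(hiring_loss(0.25, 0.25, fairness_weight=2.0))  # 0.75
```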
0:12:04 – But if a chat bot was interviewing a job prospect,
0:12:07 I would think that the chat bot doesn’t care
0:12:09 about the gender of the person,
0:12:11 doesn’t care about the skin color,
0:12:14 the person may have an accent or not.
0:12:17 There’s a lot of things that humans react to
0:12:20 that would not affect the chat bot, right?
0:12:24 – Okay, okay, so actually what I described was slightly different.
0:12:26 I was giving you the scenario
0:12:28 where a company is trying to hire somebody
0:12:30 and they have specific questions they ask.
0:12:33 But if you just have an informal chat,
0:12:36 you’re absolutely right that the large language model
0:12:40 doesn’t know ahead of time who it’s talking to.
0:12:43 And it doesn’t even know what persona to take
0:12:44 ’cause it can adopt any persona.
0:12:47 But with time, with answering questions,
0:12:50 it will get a sense for what level of answer is expected
0:12:54 and what the intelligence of the interviewer is.
0:12:57 But, and there are many examples of this in my book,
0:13:00 you could use that, somebody could take that.
0:13:02 And then, in fact, I even tell people,
0:13:06 I said, look, here are four people who have had interviews
0:13:11 and I want you to rate the intelligence of the interview
0:13:13 and how well it went.
0:13:15 And it was really quite striking
0:13:18 how the more intelligent the questions,
0:13:20 the more intelligent the answers.
0:13:24 – So in a sense, what you’re saying is that
0:13:27 if an LLM has hallucinations,
0:13:29 it might not be the LLM’s fault
0:13:33 as much as the person who created the prompts.
0:13:33 – I would say not.
0:13:35 I think hallucinations are a little bit different
0:13:38 in the sense that it will hallucinate
0:13:40 when there’s no clear answer.
0:13:43 It feels compelled to give you an answer.
0:13:44 I don’t know why.
0:13:46 And it will make one up.
0:13:48 But it doesn’t make up a trivial answer.
0:13:51 It’s very detailed, it’s very plausible.
0:13:53 Like it’ll give a reference to a paper
0:13:54 that doesn’t exist, right?
0:13:58 It’s really making a large effort
0:14:02 to try to convince you that it’s got the right answer.
0:14:05 So hallucinations are really, again,
0:14:08 something that humans do; people hallucinate too.
0:14:13 And it’s not just because they’re trying to lie.
0:14:16 Our memory is reconstructing the past.
0:14:18 It doesn’t memorize things.
0:14:20 And it will fill in a lot of blanks
0:14:23 with things that are plausible.
0:14:25 And I think that’s exactly what’s happening here.
0:14:29 I think that when it arrives at something
0:14:31 where it doesn’t know the answer,
0:14:33 it hasn’t been trained to tell you that, right?
0:14:37 It hasn’t been trained to, though it could be.
0:14:39 But in the absence of that, it does the best it can.
0:14:42 (upbeat music)
0:14:55 – You had a section in your book
0:14:58 where you asked via these very simple prompts,
0:15:01 like who holds the record for walking from,
0:15:02 I don’t know, whatever you said,
0:15:04 England to Australia or something.
0:15:06 And the first answer was, yeah,
0:15:09 it gave a name and which Olympics and all that.
0:15:13 So I went back last night and I asked a similar question,
0:15:17 like who first walked from San Francisco to Hawaii?
0:15:21 And the answer was no one has walked from San Francisco
0:15:23 to Hawaii as it is not possible
0:15:26 to walk across the Pacific Ocean.
0:15:28 The distance between San Francisco and Hawaii
0:15:31 is over 2,000 miles, primarily over open water.
0:15:33 However, many people have traveled this route
0:15:35 by airplane or boat.
0:15:38 So are you saying that between the time you wrote your book
0:15:41 and the time I did the tests
0:15:44 that LLMs have gotten that much better?
0:15:46 – First of all, that first question was asked
0:15:48 by Doug Hofstadter,
0:15:52 who’s a very clever cognitive scientist, computer scientist.
0:15:55 But he was trying to trip it up, clearly.
0:15:58 And I think that it probably decided
0:15:59 it would play along with him
0:16:01 and just give a silly answer, right?
0:16:03 Silly question gets a silly answer.
0:16:06 And I think that with you,
0:16:08 it probably sized you up and said,
0:16:10 wow, this guy’s a lot smarter than I thought,
0:16:12 so here’s a smart answer.
0:16:15 – You’re saying I’m smarter than Doug Hofstadter.
0:16:17 – Your prompts were smarter.
0:16:22 – I can stop the interview right there.
0:16:29 So, I mean, there are just these jewels in your book
0:16:31 and you drop this jewel
0:16:34 that it takes 10 hours to become good at prompting.
0:16:36 I call it baller.
0:16:38 So you can be a baller with prompts in 10 hours.
0:16:41 And that’s a thousand times faster
0:16:44 than Malcolm Gladwell’s 10,000 hours.
0:16:46 So can you just give us the gist?
0:16:48 Like people are listening, say, okay,
0:16:52 so how do I become a great prompt writer?
0:16:54 – It does take practice.
0:16:57 And the practice is you learn the ropes,
0:17:00 just the way you learn to drive a car
0:17:02 or play tennis or anything
0:17:03 for which you need skills, right?
0:17:08 You have to know how to adjust your responses
0:17:10 and to get what you want.
0:17:12 But there are some good rules of thumb
0:17:15 and in the book, I actually have a bunch
0:17:18 that I was able to get from people
0:17:20 who have had a lot of experience.
0:17:23 And here’s one, this is from a tech writer
0:17:26 who decided that she would use it for a whole month
0:17:29 to write her papers or tech reports.
0:17:33 And she said that instead of just having one prompt
0:17:36 that asks for one example,
0:17:38 you give it a question,
0:17:41 but you should ask for 10 different answers.
0:17:43 And now what you can do,
0:17:45 because otherwise you’re gonna have to iterate
0:17:47 to get to the direction you wanna take it.
0:17:49 But if you now have 10, you can say,
0:17:52 ah, the third one is much better than all the others,
0:17:54 but I want you to do the following with it.
0:17:57 And then that will help it learn
0:18:00 to understand what you’re looking for.
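The tip maps directly onto any chat API that accepts a number-of-candidates parameter. Below is a sketch using the OpenAI Python client; the model name and prompts are placeholders, and the third candidate is picked arbitrarily for the follow-up.

```python
# Sketch of the "ask for ten answers, then steer the best one" workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Draft an intro paragraph for a report on battery recycling."

first = client.chat.completions.create(
    model="gpt-4o-mini",                       # example model name
    messages=[{"role": "user", "content": question}],
    n=10,                                      # ten independent candidates in one call
)
for i, choice in enumerate(first.choices, start=1):
    print(f"--- candidate {i} ---\n{choice.message.content}\n")

# Pick the candidate you like and steer from there instead of iterating blindly.
followup = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": first.choices[2].message.content},
        {"role": "user", "content": "I like this one best. Make it shorter and more technical."},
    ],
)
print(followup.choices[0].message.content)
```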
0:18:02 But a bunch of other things that came out of it
0:18:06 were quite remarkable. First, she said that
0:18:09 at the end of the day she was really exhausted.
0:18:11 It was just exhausting
0:18:12 ’cause you’re always interacting with a machine
0:18:14 and it’s not always giving you what you want.
0:18:19 And so at the end of the day, it was a chore for her.
0:18:20 But she said she’s gonna go on and do it.
0:18:23 But then at one point,
0:18:25 she realized that I don’t have this problem
0:18:27 when I’m talking to people.
0:18:28 (laughing)
0:18:30 So she started being more polite.
0:18:32 She said, oh, please give me this.
0:18:34 Oh, that’s such a wonderful answer.
0:18:37 I really thought that was great.
0:18:39 And it perked up.
0:18:40 And it actually, she said,
0:18:42 it was just like talking to somebody.
0:18:44 And if you’re polite, you get better answers.
0:18:46 And at the end of the day, I wasn’t exhausted.
0:18:49 I just felt like I just had this long discussion
0:18:50 with my friend.
0:18:52 (laughing)
0:18:53 Who would have guessed that?
0:18:53 That’s amazing.
0:18:56 – Wait, I just wanna make this perfectly clear.
0:19:00 You’re saying if you have those kind of human interactions,
0:19:04 human nuances, you get better answers from a machine.
0:19:05 – Yes.
0:19:07 Yes, that’s her discovery.
0:19:09 And that’s my experience too.
0:19:14 Look, it learned from the whole range of human experience
0:19:17 and how humans interact with each other, dialogues and so forth.
0:19:20 So it understands a lot about that.
0:19:22 And it will adapt.
0:19:24 If you put it into that frame of mind,
0:19:25 if I could use that term,
0:19:29 it will continue to interact with you that way.
0:19:31 And I think that that’s really quite remarkable.
0:19:33 – I have often wondered,
0:19:36 because I write a lot of prompts every day,
0:19:40 wouldn’t it be better for the LLM if it recognized,
0:19:44 you know, things like capitalization of proper nouns
0:19:49 or air quotes or question marks or exclamation marks
0:19:51 that have these really basic functions
0:19:53 in written communication.
0:19:55 But it seems like whether you’re asking a question
0:19:58 or making a statement, the LLM doesn’t care.
0:20:02 Wouldn’t it help the LLM to know if I’m asking a question
0:20:04 as opposed to making a statement,
0:20:08 and that Apple is the company, not apple the fruit?
0:20:10 – Oh, no, no, it knows.
0:20:12 If you put a question mark there, it knows it’s a question.
0:20:12 – It does?
0:20:15 – I can assure you, yes, absolutely.
0:20:17 So what happens is that all of the words
0:20:19 and punctuation marks are given tokens.
0:20:21 In fact, some words have more than one token,
0:20:23 like if it’s a Pertmanto word.
0:20:26 And it treats all of those as being hints
0:20:29 or giving you some information about the meaning
0:20:30 of the sentence.
0:20:32 And if it’s a question, it’s a very different meaning.
0:20:34 So yeah, it will definitely take that into account.
0:20:36 At one point, actually, not for me,
0:20:40 but for someone else, it started giving emojis as output.
0:20:45 So it must know what an emoji is.
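Terry’s point is easy to verify with a tokenizer library such as tiktoken. The encoding name below is one example and exact splits vary between models, but punctuation and capitalization visibly change the tokens the model receives.

```python
# Show that punctuation and capitalization become (or change) tokens.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # example encoding
for text in ["Is Apple a company?", "Apple is a company.", "an apple"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(text, "->", pieces)
# The question mark, the period, and the capital A each show up as their own
# token or change the surrounding token, so the model sees them as distinct hints.
```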
0:20:48 – I learn something every day.
0:20:50 Thank you for clearing that up for me.
0:20:55 With all this beauty and magic of LLMs,
0:21:00 how would you define learning going forward?
0:21:06 Because is it scoring high on an SAT?
0:21:08 Is it memorization of mathematical formulas?
0:21:12 Or is it the ability to get a good answer via a prompt?
0:21:14 What is learning anymore?
0:21:17 – So it was taught, it was pre-trained.
0:21:21 That’s the P in GPT.
0:21:26 And it was trained on an enormous amount of facts
0:21:29 and tests of various sort.
0:21:32 And so it internalized a lot of that.
0:21:36 It knows what kind of a question that you’re asking,
0:21:39 ’cause it’s seen millions of questions.
0:21:42 This is something still very mysterious.
0:21:46 It turns out there’s something called learning in context.
0:21:48 That is to say, if you have a long enough interview,
0:21:51 ’cause it keeps adding word after word,
0:21:54 it will go off in a certain direction
0:21:56 as if it has learned from what you’ve just told it,
0:21:59 as if that’s building on what you just told it.
0:22:00 And that’s of course what happens with humans.
0:22:03 With humans, you have a long conversation
0:22:07 and you will take into account your previous discussion
0:22:09 and where that went, and it can do that too.
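The mechanism behind learning in context is simply that the whole conversation so far is fed back in on every turn; the model itself keeps no state between calls. A toy sketch, with model() standing in for a real forward pass:

```python
# The only "memory" is the transcript that grows and is re-read every turn.

def model(transcript: str) -> str:
    # Stand-in for one forward pass over everything in the context window.
    return "...reply conditioned on everything above..."

transcript = ""
for user_turn in ["My dog is named Ada.", "What is my dog's name?"]:
    transcript += f"User: {user_turn}\nAssistant: "
    reply = model(transcript)   # sees the full history, including earlier answers
    transcript += reply + "\n"
print(transcript)
```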
0:22:11 And that’s another thing that is very strange,
0:22:13 is that no one expected that.
0:22:16 The thing is that when they train these networks,
0:22:18 they have no idea what they’re capable of.
0:22:22 Step back a few years before ChatGPT.
0:22:24 These deep learning models,
0:22:27 the learning took place typically in
0:22:31 a feed-forward network, and it had a data set
0:22:33 and it was given an input
0:22:34 and it was trained to give an output, right?
0:22:37 And so that is supervised learning.
0:22:39 And you can do speech recognition that way,
0:22:41 object recognition, language translation,
0:22:45 a lot of things, but each network is dedicated to one task.
0:22:48 What is amazing here is you train it up self-supervised,
0:22:49 just to predict the next word
0:22:52 and it can do hundreds and thousands of different tasks.
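The contrast Terry draws can be shown in a few lines: self-supervised next-word prediction manufactures its own training pairs from raw text, while supervised learning needs hand-labeled pairs and yields a single-task model. A small illustration:

```python
# Self-supervised: every position's target is simply the next word,
# so the raw text labels itself.
words = "the parrot spoke english to the scientist".split()
pairs = [(words[:i], words[i]) for i in range(1, len(words))]
for context, target in pairs:
    print(" ".join(context), "->", target)

# Supervised, for contrast: someone had to label each example by hand,
# and the resulting model only does this one task (here, sentiment).
labeled = [("the parrot spoke english", "POSITIVE"),
           ("the experiment failed badly", "NEGATIVE")]
```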
0:22:55 But you can ask it to write a poem.
0:22:58 By the way, that’s where hallucination is very useful.
0:23:00 (laughing)
0:23:03 Or a haiku. It’s not a brilliant poet,
0:23:05 but it does a pretty good job.
0:23:06 And I have a couple of examples in my book,
0:23:09 but it has a wide range of talents,
0:23:12 language capabilities
0:23:14 that, again, no one programmed, no one told it about,
0:23:19 like summarizing a long document in a paragraph.
0:23:24 It does a really good job of that; it’s not just for show.
0:23:26 – But I mean, if you think about it, do you have children?
0:23:27 – I don’t.
0:23:29 – Okay, well, I have four children.
0:23:32 And many times they come up with stuff
0:23:36 that I have no idea how they came up with that.
0:23:40 So in a sense, you think you know exactly what your child is learning
0:23:43 and you think you’re controlling all the data
0:23:44 going into your child,
0:23:47 so you can predict what they’re gonna come up with.
0:23:51 And they absolutely like just knock you off your feet
0:23:51 with something where you go,
0:23:54 how the hell did you come up with that idea?
0:23:58 What’s the difference between not knowing
0:24:00 how your child works
0:24:03 with not knowing how an LLM works, same thing, right?
0:24:07 – That’s actually a very deep insight,
0:24:10 because human beings are picking up things
0:24:13 in a very similar way in terms of the way
0:24:15 that we take experience in
0:24:18 and then we codify it somehow in our cortex
0:24:20 in such a way that we can use it
0:24:23 in a variety of other ways later on.
0:24:26 And they could have picked up things they heard,
0:24:28 for example, from you and your wife talking,
0:24:32 or they could have been playing with kids outside.
0:24:34 I mean, it’s the same thing with ChatGPT,
0:24:37 who knows where it’s getting all of that ability.
0:24:39 – Okay.
0:24:42 I’m gonna read you, and this isn’t a question.
0:24:44 This is just a statement here.
0:24:47 I have two absolute favorite quotes from your book.
0:24:50 This is one of them, quote,
0:24:54 “Usefulness does not depend on academic discussions
0:24:55 of intelligence.”
0:25:00 Oh my God, that’s like, I use LLMs every day.
0:25:02 And they’re so useful for me.
0:25:03 I don’t give a shit.
0:25:06 Whatever the academic discussions say about intelligence,
0:25:09 What do I care? It’s helping me, right?
0:25:12 – It’s a tool, and it’s a very valuable tool.
0:25:16 And all of these academic discussions are really beside the point.
0:25:20 It is really a reflection of the fact
0:25:23 that we don’t really understand. If experts argue
0:25:26 about whether they are intelligent or whether they understand,
0:25:28 it means that we really don’t know the meaning
0:25:29 of those words.
0:25:33 We just don’t understand them at any real scientific level.
0:25:37 – Okay, let me ask you something, writer to writer.
0:25:42 So if you provided the PDF of your book
0:25:47 and you gave it to OpenAI and they put it into ChatGPT,
0:25:52 would you consider that ripping off your IP
0:25:56 or would you want it inside ChatGPT?
0:25:57 – I would be honored.
0:25:59 – Me too.
0:26:01 – I would brag about it.
0:26:02 – Me too.
0:26:04 (laughs)
0:26:09 – No, I think that there is some concern about the data.
0:26:13 Where are these companies getting the data from?
0:26:15 Is there proprietary information that they used?
0:26:16 And so forth.
0:26:18 That’s all gonna get sorted out.
0:26:23 But my favorite example is artists.
0:26:26 They say, oh, you’ve used my paintings to train up
0:26:30 the Dolly or your diffusion model.
0:26:32 And I deserve something for that.
0:26:33 Then my question is,
0:26:37 when you were learning to be an artist, what did you do?
0:26:38 – You copied other artists.
0:26:40 – You looked at a lot of other artists
0:26:45 and your brain took that in and it didn’t memorize it,
0:26:49 but it formed features that then later,
0:26:51 you’re depending on all that experience you’ve had
0:26:53 to create something new.
0:26:55 But this is the same thing.
0:26:57 It’s creating something new from everything that it’s seen.
0:27:01 So it’s gonna have to be settled in court.
0:27:03 I don’t know what the right answer is.
0:27:05 There’s something interesting that’s happened recently.
0:27:07 And by the way, I have a Substack
0:27:09 because the book went to the printer in the summer.
0:27:11 So there’s all kinds of new things that are happening.
0:27:14 So in the Substack, what I do is I fill in the new stuff
0:27:18 that’s happened and put it in the context of the book.
0:27:20 It’s brains and AI.
0:27:22 What’s happened is that,
0:27:25 Mistral and several other companies have discovered
0:27:28 that if you use quality data, in other words,
0:27:32 that’s been curated or comes from a very good source,
0:27:34 and you may have to pay for it.
0:27:37 And math data, for example, Wolfram Research,
0:27:40 Steve Wolfram, who founded Mathematica,
0:27:44 has actually sold a lot of the math that they have.
0:27:46 But with the quality data, it turns out
0:27:48 that you get a much better language model,
0:27:52 much better in terms of being able to train
0:27:55 with fewer words and a smaller network
0:27:58 having performance that’s equal or better.
0:28:00 So the same thing is true of humans, right?
0:28:02 I think what’s going to happen is that the models
0:28:04 will get smaller and they’ll get better.
0:28:06 – Another author-to-author question.
0:28:08 I’ll give you a negative example.
0:28:10 So I believe back in the ’70s,
0:28:12 Kodak defined themselves as a chemical company
0:28:16 and we put chemicals on paper, chemicals on film.
0:28:19 The irony is that engineer inside Kodak
0:28:21 invented digital photography.
0:28:24 But Kodak kept thinking we’re a chemical company,
0:28:26 we’re not a preservation of memories company.
0:28:29 If they had repositioned their brains,
0:28:31 they would have figured out we preserve memories,
0:28:35 it’s better to do it digitally than chemically.
0:28:38 So now, as an author, and you’re also an author,
0:28:40 I think what is my business?
0:28:41 Is it chemicals?
0:28:43 Is it writing books?
0:28:46 Or is it the dissemination of information?
0:28:49 And if I zoom out, then I say it’s dissemination
0:28:51 of information.
0:28:52 Why am I writing books?
0:28:57 Why don’t I train an LLM to distribute my knowledge
0:29:01 instead of forcing people to read a book?
0:29:04 So do you think there are gonna be authors in the long run,
0:29:07 because a book is not that efficient
0:29:09 a way to pass information?
0:29:13 – Interesting, and this is already beginning to happen.
0:29:16 So you know that you could train up an LLM
0:29:19 to mimic the speech of people
0:29:23 if you have enough data from them, movie stars.
0:29:28 And also it turns out that you can not only mimic the voice,
0:29:32 but someone fed in a lot of Jane Austen novels.
0:29:36 I gave a little excerpt in the book.
0:29:40 You can ask it for advice and it will start talking
0:29:43 as if you’re talking to Jane Austen from that era.
0:29:46 And there’s actually an interesting,
0:29:51 potentially important way: if you have enough data
0:29:56 about an individual, videos, writing and so forth,
0:29:58 and that could all be downloaded
0:30:00 right into a large language model,
0:30:04 in some ways it would be you, right?
0:30:08 If it has captured all of the external things
0:30:10 that you’ve said and done.
0:30:12 So it might, who knows.
0:30:14 – Terry, I have a company for you.
0:30:17 There’s a company called Delphi.ai.
0:30:22 And Delphi.ai, you can control what goes into the LLM.
0:30:27 So KawasakiGPT.com is Delphi.ai.
0:30:31 And I put in all my books, all my blog posts,
0:30:34 all my substacks, all my interviews,
0:30:38 including this interview will go in shortly, right?
0:30:42 So you can go to KawasakiGPT and you can ask me
0:30:46 and 250 guests a question.
0:30:51 And I promise you that my LLM answers better than I do.
0:30:55 And in fact, since you talked about substack,
0:30:59 every week Madison and I put out a substack newsletter.
0:31:02 And the procedure is we go to KawasakiGPT
0:31:04 and we ask it a question like,
0:31:07 what are the key elements of a great pitch
0:31:09 for venture capital?
0:31:12 And five seconds later, we have a draft.
0:31:13 And we start with that draft.
0:31:15 And I don’t know how we would do that
0:31:17 without KawasakiGPT.
0:31:22 So that ability to create an LLM for Terry is already here.
0:31:25 And Delphi.ai has this great feature
0:31:28 that you can set the parameter.
0:31:31 So you can say very strict.
0:31:35 And very strict means it only uses the data you put in
0:31:37 or can be creative and it can go out
0:31:39 and get any kind of information.
0:31:44 So if somebody came to TerryGPT and asked,
0:31:46 how do I do wing suiting?
0:31:47 If you had it set to strict,
0:31:49 assuming you don’t know anything about wing suiting,
0:31:52 it would say, this is not an area
0:31:55 of my expertise, you’re gonna have to look someplace else.
0:31:58 Which is, that’s like better than hallucination, right?
0:31:59 You gotta try that.
0:32:02 – I will, I will, I had no idea.
0:32:04 And is this open to the public?
0:32:08 – Yes, and I pay $99 a month for this.
0:32:12 And you can set it so it subscribes to your Substacks,
0:32:15 subscribes to your podcasts,
0:32:17 you can make a Google Drive folder
0:32:20 and whenever you write something, you drop it in the drive
0:32:22 and then it keeps checking the drive every week
0:32:24 and just keeps inputting.
0:32:26 And I feel like I’m immortal, Terry.
0:32:27 What can I say?
0:32:32 – Yes, it really has transformative potential.
0:32:34 Who would have guessed
0:32:36 that this could even be possible a couple of years ago?
0:32:39 – No one, I think it’s really a transition.
0:32:43 – But you know, just before you get too excited by this,
0:32:48 I don’t think that there is a market for people’s clones
0:32:50 because I’m pretty visible
0:32:54 and I only get five to 10 questions a day.
0:32:55 It’s a nice parlor trick.
0:32:59 Oh, we can ask Guy what he thinks about everything.
0:33:02 But after the first hour, six months later,
0:33:05 are you gonna remember there’s Kawasaki GPT?
0:33:06 – I doubt it.
0:33:11 So what you would probably go to is chat GPT and say,
0:33:13 what would Guy Kawasaki say
0:33:16 are the key elements of a pitch for venture capital?
0:33:18 And chat GPT will give you an answer
0:33:21 almost as good as Kawasaki GPT
0:33:25 and you’ll never go back to my personal clone again.
0:33:27 – Yeah, I think your children might, somebody.
0:33:31 – Well, why would that be true?
0:33:33 They don’t ask me anything now.
0:33:37 – Oh, that’s interesting.
0:33:39 A lot of times when someone dies,
0:33:42 their offspring and close friends say,
0:33:45 oh, I wish I had asked them that question.
0:33:48 I really wish, it’s too late, it’s too late.
0:33:52 Now, if they have something like this, you’re there to ask the question.
0:33:55 – Okay, I did it for my kids then.
0:33:56 – Yeah.
0:33:59 – Up next on Remarkable People.
0:34:00 – You can push it around.
0:34:00 It can do either.
0:34:03 It has the world’s literature there on both sides
0:34:05 and this is exactly the problem,
0:34:07 is that it is reflecting you.
0:34:08 It’s a mirror hypothesis,
0:34:13 reflecting your kind of stance that you’re taking.
0:34:16 And in some ways,
0:34:19 it has the ability like a chameleon, right?
0:34:21 It will change its color depending on
0:34:22 how you’re pushing it.
0:34:32 – Thank you to all our regular podcast listeners.
0:34:35 It’s our pleasure and honor to make the show for you.
0:34:37 If you find our show valuable,
0:34:39 please do us a favor and subscribe,
0:34:41 rate and review it.
0:34:44 Even better, forward it to a friend,
0:34:46 a big mahalo to you for doing this.
0:34:51 – Welcome back to Remarkable People with Guy Kawasaki.
0:34:55 – I’m gonna get a little bit political on you right now.
0:34:58 It seems to me that people listening can try this:
0:35:01 go to ChatGPT and ask,
0:35:03 should we teach the history of slavery?
0:35:04 Ask questions about,
0:35:07 should we have a biblically based curriculum
0:35:08 in public schools?
0:35:09 Go ask all those kind of questions.
0:35:11 You’re gonna be amazed at the answer.
0:35:14 So my question for you is,
0:35:17 don’t you think that in the very near future,
0:35:22 red states or let’s say a certain political party,
0:35:25 they’re gonna block access to LLMs?
0:35:26 Because if LLMs are telling you,
0:35:29 “Yes, we should teach the history of slavery,”
0:35:32 I can’t imagine Ron DeSantis wanting people
0:35:35 to ask ChatGPT that question.
0:35:38 – So now we’re getting into hot water here.
0:35:39 (laughing)
0:35:40 – You’re tenured, right?
0:35:42 – And it’s not just ChatGPT,
0:35:45 we’re talking about all of these high-tech websites
0:35:48 that are repositories of knowledge and information
0:35:50 that you can search.
0:35:52 They have a devil of a time trying to figure it out.
0:35:55 They have thousands of people actually doing this.
0:35:58 They’re constantly looking at the hateful things
0:36:01 that are said on Twitter or whatever.
0:36:04 That has to be scrubbed.
0:36:06 Now, the problem is who’s scrubbing it?
0:36:09 And what do they consider bad?
0:36:13 And if humans can’t agree,
0:36:17 how can you possibly have a rule
0:36:20 that is gonna be good for everybody if there isn’t any agreement?
0:36:21 I think it’s an unsolved problem
0:36:24 and I think it’s reflecting more the disagreements
0:36:26 that humans have than the fact that
0:36:29 ChatGPT can’t decide what to say.
0:36:30 – But it’s interesting,
0:36:32 you can probably push it in certain directions, right?
0:36:34 I think that people have tried that.
0:36:38 They’ve tried to break it one way or another.
0:36:43 – It may be that many Republicans have never tried LLM,
0:36:45 but I’m telling you, if they tried it,
0:36:47 they would say LLMs are woke
0:36:50 and we gotta get all this woke stuff out of the system.
0:36:52 I can’t imagine.
0:36:56 – Okay, my guess is that you could also get a woke person
0:36:58 talking to it and coming to the conclusion
0:37:02 that it’s a flaming conservative.
0:37:04 In other words, you can push it around.
0:37:05 It can do either.
0:37:08 It has the world’s literature there on both sides
0:37:09 and this is exactly the problem,
0:37:11 is that it is reflecting you.
0:37:12 It’s a mirror hypothesis,
0:37:17 reflecting your kind of stance that you’re taking.
0:37:20 And in some ways,
0:37:23 it has the ability, like a chameleon, right?
0:37:24 It’ll change its color,
0:37:27 depending on how you’re pushing it.
0:37:28 – That’s no different
0:37:31 than what people do in a conversation.
0:37:32 – That’s right.
0:37:34 And also people are polite.
0:37:36 They generally stay away from things
0:37:38 that are controversial and yeah,
0:37:40 we need that in order to be able to get along
0:37:41 with each other, right?
0:37:42 It would be terrible if all we did
0:37:44 was argue with each other.
0:37:46 – About a year or two ago,
0:37:50 there was this, I won’t prejudice your answer.
0:37:54 There was this idea that we would have a six-month
0:37:59 kind of timeout while we figure out the implications of AI.
0:38:01 Is that the stupidest thing you ever heard?
0:38:03 How do you take a timeout from AI?
0:38:06 Let’s just like timeout and figure out what we’re gonna do.
0:38:11 – That was done by, I think, 500 machine learning
0:38:15 and AI people who decided in their wisdom
0:38:18 that we have to stop. You’re right, it was a moratorium.
0:38:23 And I think it was specifically on these very large GPT models
0:38:26 that we shouldn’t try to train them beyond where they are
0:38:29 because there might be super intelligent
0:38:31 and they may have to actually take over the world
0:38:32 and wipe out humans.
0:38:34 This is all science fiction, right?
0:38:36 That we’re talking about.
0:38:38 And in the book, I came across an article
0:38:41 in The Economist where they had superforecasters
0:38:44 who had a track record of being able to make predictions
0:38:49 about catastrophic events, wars, and technologies,
0:38:55 nuclear technology, better than the average person.
0:38:58 And then they also compared the predictions with experts.
0:39:02 And it turns out that experts are a factor
0:39:04 of 10 times more pessimistic in terms of
0:39:05 whether something’s going to happen
0:39:08 or when it’s going to happen than the super forecasters.
0:39:09 And I think that’s what’s happening,
0:39:13 is that they think that their technology is so dangerous
0:39:16 that it needs to be stopped.
0:39:18 – When I read that section of your book,
0:39:20 I had to read it about two or three times
0:39:24 because it’s exactly opposite of what I thought
0:39:28 it would be: that superforecasters would predict Armageddon
0:39:31 and the technical people would say, no, it’s okay.
0:39:33 Like how do you explain that?
0:39:34 – There’s a simple explanation.
0:39:37 I think though that everybody thinks
0:39:42 that what they are doing is more important than it might be.
0:39:44 – In terms of its impact.
0:39:46 (laughing)
0:39:49 Actually, this is funny.
0:39:51 When Obama was elected president,
0:39:55 the local newspaper interviewed a lot of academics
0:39:57 about, you know, he said that he was going to support science
0:39:58 and that was wonderful.
0:40:01 And so the newspaper asked, what areas of science
0:40:03 do you think the government should support?
0:40:07 And almost every person said, what I’m doing.
0:40:07 (laughing)
0:40:12 – It’s the most important area to fund, you know?
0:40:14 Because they’re the closest to it.
0:40:16 And of course, they’ve committed their life to it.
0:40:18 So it must be the most important.
0:40:22 – I mentioned that I had two absolute gems
0:40:25 that I love those quotes in your book.
0:40:27 And I’m coming to the second one.
0:40:29 And the second one is not necessarily a quote,
0:40:34 but I want you to explain the situation when you say
0:40:39 that Sam Altman had, shall I say symptoms of toxoplasma
0:40:46 gondi, that the brain parasite that makes rodents
0:40:49 unafraid of cats and more likely to be eaten.
0:40:53 So why did you say that about Sam Altman?
0:40:56 – Okay, so first of all, this is a biological thing
0:41:01 that happens in the brain of the poor mouse or rat.
0:41:04 So there was a time when he would go to Washington
0:41:06 and not just testify before Congress,
0:41:09 but he would actually go and have dinners
0:41:12 with Congress people and talk to them.
0:41:15 And the history is that Bill Gates gets pulled in
0:41:19 and he gets grilled in a congressional testimony
0:41:20 and they gave a diversion.
0:41:24 So here’s this guy going in and not just going for testimony
0:41:28 but actually going and trying to be a part
0:41:30 of their social life.
0:41:35 So it just seemed that he was being contrary
0:41:39 to the traditional way that most humans would deal
0:41:42 with people who are out to regulate you.
0:41:45 But actually somewhere later in the book,
0:41:48 I think I identified another explanation,
0:41:52 which is that the regulation is an interesting thing
0:41:55 because it basically puts up barriers, right?
0:41:58 It turns out if you have lots of lawyers,
0:42:02 you can find loopholes, it’s always a loophole, right?
0:42:04 And if you’re rich, you can afford lawyers
0:42:05 to find the loopholes for you.
0:42:08 And of course, the big corporations,
0:42:11 high tech, Google and OpenAI, they have the best lawyers.
0:42:15 They can hire the best lawyers to get around any regulation,
0:42:18 whereas some poor startup, they can’t do that.
0:42:21 So it’ll give the big companies an advantage
0:42:23 to have regulations out there.
0:42:28 – Couldn’t a scrappy, small, undercapitalized startup
0:42:34 ask an LLM what are the loopholes in this regulation?
0:42:35 It would find them.
0:42:36 – Ah, okay.
0:42:40 Well, so now you’re saying that in fact,
0:42:41 they could use their own,
0:42:44 because they’re not gonna be able to make their own LLM,
0:42:45 they’re gonna have to use the other,
0:42:48 the big ones that are already out there.
0:42:50 And it could be that these companies
0:42:54 are actually democratizing lawyers.
0:42:56 (laughing)
0:42:59 By the way, it’s not just lawyers and laws,
0:43:01 it’s also reporting.
0:43:03 In other words, there’s a tremendous amount
0:43:05 of that in what they wanna do:
0:43:10 the companies have to have tests and lots of examples,
0:43:14 they’re gonna require a lot, like the FAA.
0:43:18 Before an airplane is allowed to carry passengers,
0:43:21 it’s gotta go through a whole series of tests,
0:43:25 and very stringent ones;
0:43:28 it has to be put into the worst weather conditions
0:43:31 to make sure it’s stressed, a stress test.
0:43:34 And again, all of that testing is fine
0:43:37 for a large company; they have lots of resources to do that.
0:43:40 But it may not be easy for a small company,
0:43:42 so it’s complicated.
0:43:46 But in any case, I think that what’s happening right now
0:43:49 is that the Europeans have this AI law
0:43:53 that is 100 pages with very strict rules
0:43:55 about what you can and cannot do,
0:44:00 like you can’t use it for interviewing future employees
0:44:01 for companies.
0:44:03 – We just advocated for that.
0:44:07 – Yeah, we’ll see what happens in the US,
0:44:08 ’cause right now it’s not prescriptive,
0:44:12 it’s suggestive that we follow these rules.
0:44:15 – And what would be the thinking that you can’t use it
0:44:17 to interview employees in Europe?
0:44:19 What are they worried about?
0:44:21 – Oh, bias, bias.
0:44:23 – Bias, as opposed to human bias,
0:44:27 like a male recruiter falls for an attractive female candidate.
0:44:30 – Okay, that’s also a bias, I guess.
0:44:32 (laughing)
0:44:36 There probably is some law there, I don’t know.
0:44:41 Not only are we biased, but we’re biased in our biases.
0:44:43 (laughing)
0:44:46 Who we talk to, things like that.
0:44:48 – All right, I gotta tell you one more part
0:44:51 I really loved about your book is when you had
0:44:54 the long description of legalese,
0:44:58 and then you had the LLM Simplify a contract,
0:45:00 and that was just beautiful.
0:45:03 Like why do terms of service have to be so
0:45:05 absolutely impenetrable?
0:45:08 And you showed an example of how it could be
0:45:10 done so much better.
0:45:12 – That is happening right now,
0:45:15 I think in a lot of places that is,
0:45:18 and this is a big transformation that’s occurring
0:45:20 within companies now.
0:45:22 They are, the employees are using these tools
0:45:25 in order to be able to help.
0:45:27 First of all, keep track of meetings.
0:45:28 You don’t have to have someone there taking notes
0:45:30 because the whole thing gets summarized
0:45:32 at the end of the meeting.
0:45:34 It’s really good at that, and speech recognition.
0:45:36 – Well, you also mentioned that when doctors
0:45:38 are interviewing patients that instead of looking
0:45:40 at the keyboard and the monitor,
0:45:43 they should be just listening and let the recording
0:45:44 take care of all that, right?
0:45:48 – Yes, that’s a huge benefit because looking
0:45:51 at the patient carries a lot of information.
0:45:54 There are expressions, the color of their skin,
0:45:57 all of that is part of being a doctor,
0:45:58 and if you’re not looking at them,
0:46:02 you’re not really being a good doctor.
0:46:06 – Okay, this is seriously my last question.
0:46:09 I love the fact that the first few chapters at the end,
0:46:13 they had these questions that probably ChatGPT generated.
0:46:16 Why didn’t you continue that through the whole book
0:46:19 so every chapter ends with questions?
0:46:22 – I don’t know, I hadn’t thought about it.
0:46:25 I’ll tell you, I wrote the book over the course of a year,
0:46:27 and I think that it must have been the case
0:46:30 that by the later chapters it fell away. But I do use it throughout the book.
0:46:32 I have sections, and I actually set them apart
0:46:36 and say this is ChatGPT; at the end there’s this little sign,
0:46:40 the OpenAI sign, and I ask it to summarize parts.
0:46:42 And at the beginning, I actually ask it to,
0:46:47 sometimes I ask it to come up with say five questions
0:46:49 from this chapter, and that’s where Alex the parrot
0:46:50 popped out.
0:46:53 (laughing)
0:46:56 – Am I the first person to catch the fact
0:46:59 that Alex the parrot was not mentioned in the text
0:47:00 except for the footnote?
0:47:03 – You are the first person, and I suspect there are others
0:47:05 that notice that.
0:47:08 But actually it’s good to have a few little puzzles
0:47:12 in there, so that you have a little detective story:
0:47:13 who is Alex the parrot?
0:47:17 – All right, how about I give you like,
0:47:19 I really want you to sell a lot of copies of this book.
0:47:22 So how about I give you like, just unfettered,
0:47:26 give us your best shot promo for your book.
0:47:29 – Everything you’ve always wanted to know
0:47:32 about large language models and ChatGPT,
0:47:34 and we’re not afraid to ask.
0:47:36 – That’s a good positioning.
0:47:38 I like that.
0:47:43 It’s like that book way in my past.
0:47:45 It was a book called “Everything You Wanted to Know
0:47:48 About Sex But Were Afraid to Ask,” right?
0:47:52 – Yeah, it was a takeoff, a bald rip-off.
0:47:54 – As I learned from Steve Jobs,
0:47:55 you gotta learn what to steal.
0:47:58 That’s a talent in and of itself.
0:48:00 – You’re paying homage to the past,
0:48:02 but I wrote this for the public.
0:48:05 I thought that the news articles were misleading
0:48:08 and all this talk about super intelligence was,
0:48:11 although it’s a concern, it’s not an immediate concern,
0:48:14 but we have to be careful, that’s for sure.
0:48:17 And it helps, I’m trying to help people.
0:48:19 When I give talks, they ask, well, I lose my job.
0:48:23 And I say, you may not lose your job, but it’s gonna change.
0:48:24 And you have to have new skills
0:48:27 and maybe that’s gonna be part of your new job
0:48:29 is to use these AI tools.
0:48:31 – Well, as you mentioned in your book,
0:48:34 when we started getting farm equipment,
0:48:36 there are a lot less farmers.
0:48:41 You could manage thousands of acres with one person, right?
0:48:42 – Yes, that’s true.
0:48:45 That’s true, but the children went to the cities
0:48:46 and they worked in factories.
0:48:47 And so they had a different job,
0:48:50 but it wasn’t working to get food.
0:48:54 It’s working to make cloth and automobiles and things.
0:48:56 – And LLMs eventually.
0:48:58 (laughs)
0:49:00 Yes, eventually, for some of us.
0:49:02 – I just wanna thank you, Terry, very much.
0:49:04 I found your book not only
0:49:08 very, very interesting and informative.
0:49:13 There were places where I was just busting out laughing
0:49:15 and I’m not sure that was your intention,
0:49:18 but when I read that thing about Sam Altman’s brain,
0:49:21 having that thing that makes rodents less afraid of cats,
0:49:24 I’m like, oh my God, this guy is a funny guy.
0:49:27 – So I thought I’d make it entertaining
0:49:30 so that people can appreciate it.
0:49:33 In some ways, we’re too serious people,
0:49:34 I’m serious about that.
0:49:35 Let’s have some fun.
0:49:37 – One of my theories in life is that
0:49:40 a sense of humor is a sign of intelligence.
0:49:41 (laughs)
0:49:41 – Oh, good.
0:49:44 Actually, I’ll tell you, this is interesting:
0:49:46 who gets the Academy Awards?
0:49:49 It’s the actor who’s in some terrible drama
0:49:51 where something bad happens and so forth.
0:49:55 And then they overlook all the fantastic comedians.
0:49:57 It turns out it’s much more difficult
0:50:00 to be a comedian than to be somebody who has angst.
0:50:04 And they’re not given the same respect.
0:50:07 I had no idea that you’ve read the whole book
0:50:09 ’cause most of the people who interviewed me,
0:50:11 they’ve read some parts,
0:50:13 but it sounds like you know the whole book.
0:50:17 – I do. Do you know the story
0:50:20 of the chauffeur and the physicist?
0:50:22 Okay, this is along the lines
0:50:24 of what you just said that I read the whole book.
0:50:27 So this physicist is on a book tour.
0:50:29 Let’s say it’s Stephen Wolfram or Neil deGrasse Tyson.
0:50:32 So anyway, they’re on this book tour
0:50:34 and they’re gonna make four stops in the cities
0:50:38 and the chauffeur takes them from stop to stop.
0:50:40 So the chauffeur sits in the back
0:50:42 and listens to the first three times
0:50:44 the physicist gives the talk.
0:50:46 At the fourth time, the physicist says,
0:50:48 “I am exhausted.
0:50:50 “You heard me give this talk three times.
0:50:52 “You go give the talk.”
0:50:54 And the chauffeur says, “Yeah, I can do it.
0:50:56 “I heard you three times.”
0:50:58 The chauffeur goes up, gives the talk,
0:51:00 but he ends early.
0:51:04 And so the MC, the host of the event, says to the chauffeur,
0:51:06 “Oh, we’re lucky we ended early.
0:51:10 “We’re gonna take some Q&A from the audience.”
0:51:12 So the first question comes up and it’s about physics
0:51:14 and the chauffeur has no idea.
0:51:17 And he says, “This question is so simplistic.
0:51:20 “I’m gonna let my chauffeur sitting in the back answer.
0:51:21 (laughing)
0:51:22 “So I’m your chauffeur.”
0:51:26 (laughing)
0:51:27 Oh, that’s wonderful.
0:51:28 All right, Terry, thank you.
0:51:29 Well, thank you.
0:51:31 I truly enjoy this.
0:51:32 I did too.
0:51:33 All right, all the best to you.
0:51:36 (jazz music)
0:51:38 This is Remarkable People.
In this episode of Remarkable People, Guy Kawasaki engages in a fascinating dialogue with Terry Sejnowski, the Francis Crick Chair at the Salk Institute and Distinguished Professor at UC San Diego. Together, they unpack the mysteries of artificial intelligence, exploring how AI mirrors human learning in unexpected ways. Sejnowski shatters common misconceptions about large language models while sharing compelling insights about their potential to augment human capabilities. Discover why being polite to AI might yield better results and why the future of AI is less about academic debates and more about practical applications that can transform our world.
—
Guy Kawasaki is on a mission to make you remarkable. His Remarkable People podcast features interviews with remarkable people such as Jane Goodall, Marc Benioff, Woz, Kristi Yamaguchi, and Bob Cialdini. Every episode will make you more remarkable.
With his decades of experience in Silicon Valley as a Venture Capitalist and advisor to the top entrepreneurs in the world, Guy’s questions come from a place of curiosity and passion for technology, start-ups, entrepreneurship, and marketing. If you love society and culture, documentaries, and business podcasts, take a second to follow Remarkable People.
Listeners of the Remarkable People podcast will learn from some of the most successful people in the world with practical tips and inspiring stories that will help you be more remarkable.
Episodes of Remarkable People organized by topic: https://bit.ly/rptopology
Listen to Remarkable People here: **https://podcasts.apple.com/us/podcast/guy-kawasakis-remarkable-people/id1483081827**
Like this show? Please leave us a review — even one sentence helps! Consider including your Twitter handle so we can thank you personally!
Thank you for your support; it helps the show!
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.