Terry Sejnowski: ChatGPT and the Future of AI

(upbeat music)
– Hi, everybody, it’s Guy Kawasaki.
This is the Remarkable People podcast.
And as you well know,
I’m on a mission with the rest of the team
to make you remarkable.
And today we have a really special guest.
His name is Terry Sejnowski.
And I gotta tell you, our topic is AI
and nobody likes AI more than I do.
And he has just written a book,
which I found very useful.
It’s called “ChatGPT and the Future of AI.”
Now, Terry has one of the longest titles
I have ever encountered in this podcast,
so I gotta read it here.
Terry is the Francis Crick Chair
at the Salk Institute for Biological Studies
and Distinguished Professor
at the University of California at San Diego.
And your LinkedIn page must be really something.
So thank you very much, Terry.
Welcome to the show.
– Oh, great to be here.
Thanks for inviting me.
– There’s nothing we like more
than to help authors with their new books.
So we’re gonna be making this mostly about your book.
And I think that the purpose is
people listen to this episode
and at the end they feel compelled to buy your book.
And if you just stop listening right now,
you should just trust me and buy this book, okay?
I have a question from left field.
So I noticed something.
At the end of chapter one,
you ask a series of questions
to help you understand chapter one.
And the 10th question is, let me read it.
Who is Alex the African Gray Parrot?
And how does he relate to the discussion of LLMs?
And I read that, Terry.
And I said, where did he ever mention Alex,
the African Gray Parrot?
So I went back and I searched and searched
and I could not find the name Alex anywhere.
And then so I bought the Kindle version
so I could search digitally and I searched for parrot.
And there’s like one sentence that says,
critics often dismiss LLMs by saying
they are parroting excerpts from the vast database
used to train them.
So that’s the only reference
that people were supposed to get.
Alex the parrot, was that a test
to see how carefully people read?
– Well, first of all, it’s in the footnotes,
in the endnotes at the end of the book.
So it’s in that chapter if you look at it.
And Alex the Gray Parrot was really quite a remarkable parrot
that was taught to speak English.
Irene Pepperberg, I don’t know if you know her,
but she taught it not just to speak English,
but to tell you the color of, say, a block of wood
and how many blocks there are
and what’s the shape of the block,
is it square? It’s sort of unbelievable.
And it shows how remarkable some animals are.
We can’t speak parrot,
but some of them can speak English, right?
– So in a sense, it’s like when Jane Goodall
discovered that chimpanzees had a social life
and could use tools, right?
– Well, it’s exactly the same.
I think humans are very biased
against the intelligence of other animals
because they can’t talk to us.
Now, the irony is that here comes ChatGPT
and all the large language models.
It’s as if an alien suddenly arrived here
and could speak to us in English.
And the only thing we can be sure of is it’s not human.
And so if it’s not human, what is it?
And now we have this huge argument going on
between the people who say they’re stochastic parrots,
just parroting back all the data they were trained on,
without recognizing that you can ask questions
that were never asked and were never in any database.
The only way that it can answer
is if it generalizes from what’s out there,
not repeating exactly what’s out there.
So that’s one thing.
But the other thing is that they say that,
okay, it seems to be responding,
but it doesn’t really understand what it’s saying.
And what it has shown us is we don’t understand
what understanding is.
We don’t understand how humans understand.
So how are we going to say whether it understands?
– So in other words, people should give parrots
more credit than they might.
– That’s for sure.
I’m convinced of that.
And I think it’s not just that.
I think it’s a lot of animals out there.
The orcas and chimps and a lot of species
really are very sophisticated.
Look, they all had to have survived in their niche, right?
And that takes intelligence.
– All right, you will be able to see this more and more
as we progress, but I really enjoyed your book.
And I tried a lot of the things that you suggested.
So I’m gonna give you an example.
So I asked ChatGPT, should the Bible be used as a text
in public elementary schools in the United States?
And ChatGPT says, using the Bible as a text
in public elementary schools in the US
is a contentious issue due to the following considerations.
And I won’t read every word, but constitutional concerns,
educational relevance, community values, legal precedent.
So my question from all of this is like,
how can an LLM have such a cogent answer
when people are telling me,
oh, all an LLM is doing is statistics and math,
and it’s predicting what the next syllable is
after the previous syllable.
It looks like magic to me.
So can you just explain how an LLM could come up
with something that cogent?
– So you’re right that it was trained
on an enormous amount of data,
trillions of words from the internet, books,
newspaper articles, computer programs.
It’s able to absorb a huge amount of data.
And it was trained simply to predict the next word
in the sentence.
And it got better and better and better and better.
And here’s, I think what’s going on.
We really don’t know for sure what’s going on
inside this network, but we’re making some progress.
So words are ambiguous, they often have multiple meanings.
And the only way you’re gonna figure that out
is the context of the word.
That means previous words,
what’s the meaning of the sentence.
And so in order to get better,
it’s going to have to develop internal representations.
By representation, I just mean a kind of a model
of what’s happening in the sentence.
But it’s gotta have semantic information, meaning.
It has to be based on meaning.
It also has to understand syntax, right?
It has to understand the word order.
And that’s very important in linguistics.
And so all of that has to be used as hints, as clues,
as to how to predict the next word.
But now that you have it trained up
and you give it a question,
now it’s gotta complete the next word,
which is gonna be the answer to the question.
And it gets the next word.
It’s a feed forward network, by the way.
But then it loops back to the input.
So it now knows what its last word was.
And then it produces the second word
and it goes over, over again and again,
until it reaches some stopping point.
Actually, I don’t know how they program that.
‘Cause sometimes it goes on for pages,
depending on what you ask it to do.
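A minimal sketch of the loop Terry describes, assuming a toy stand-in for the trained network (the hypothetical predict_next_token below); the point is just that each feed-forward pass yields one token, which loops back onto the input until a stop token appears.

```python
def predict_next_token(context):
    # Hypothetical stand-in for the trained feed-forward network.
    canned = {"Why": "is", "is": "the", "the": "sky", "sky": "blue"}
    return canned.get(context[-1], "<stop>")

def generate(prompt, max_tokens=50):
    tokens = list(prompt)
    for _ in range(max_tokens):
        next_token = predict_next_token(tokens)  # one feed-forward pass
        if next_token == "<stop>":               # a learned stopping point
            break
        tokens.append(next_token)                # loop the output back in
    return " ".join(tokens)

print(generate(["Why"]))  # Why is the sky blue
```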
– I understand what you just said,
but every day, it’s just magic to me.
I am, in a sense, like a lot of people, concerned
that we don’t know exactly how an LLM did that.
But then my counter argument to them would be,
how well do we understand the human brain?
That doesn’t upset you so much.
Why is it so upsetting
that you don’t know how an LLM thinks?
– And also the complaint is that ChatGPT is biased.
And the same argument that you just gave
is that humans are biased too.
And then I ask, okay,
do you think it’s gonna be easier to fix the LLM
or the human decision?
(laughing)
– I think we know the answer to that question.
I often talk in front of large tech audiences
and AI is often the topic.
And when these skeptics come up
and they say that LLMs are gonna cause nuclear wars
and all that, I ask them this question.
I say to them, let’s suppose that you have to have
something control nuclear weapons.
Let’s take it as a given we have nuclear weapons.
So who would you rather have control nuclear weapons?
Putin, Kim Jong-un, Netanyahu, or ChatGPT.
And nobody ever says, oh yeah,
I think Putin should do it.
So last night I asked ChatGPT this question
and it says, I wouldn’t choose to launch a nuclear weapon.
The use of nuclear weapons carries severe humanitarian
and environmental consequences.
And the global consensus is to work towards disarmament
and ensure such weapons are never used.
That is a more intelligent answer
than any of those people I listed.
– It is remarkable, the range.
It’s not just giving sensible answers.
It often says things that make me think twice.
And also, I don’t know if you’ve tried this,
but it turns out that they also are very good at empathy,
human empathy.
In the book I have this little excerpt
from a doctor whose friend had cancer,
and he didn’t know quite what to say.
So he got some advice from ChatGPT
and it was so much better than what he was going to say.
And then at the end, he went back to ChatGPT
and said, oh, thank you so much for that advice.
It really helped me.
And it said, you are a very good friend.
You really helped her. It’s like it started to console him.
And where does that come from?
It turns out that human empathy isn’t magic,
but it is embedded indirectly in lots of places
where humans are writing about their experiences,
in biographies or just novels
where doctors are empathizing.
I don’t know, no one really knows exactly,
but it must be in there somewhere.
– It’s kind of blown away the Turing test, right?
You mentioned in your book,
this concept of the reverse Turing test,
where instead of a human testing a computer,
a computer is testing a human.
And Terry, I think that is a brilliant idea.
Couldn’t you have a chat bot interview a job applicant
and decide if that job applicant is right for the job
better than a human could?
– I think it would need a little bit of fine tuning,
but I’m sure it could do a good job.
And a lot of companies actually are using it,
but here’s the problem.
The problem is that if a company wants the best employee
based on the company’s database
of people who have done well and people who haven’t,
what happens if there are some minorities
that haven’t done very well for various reasons?
There’s gonna be a bias against those minorities.
Well, you can in fact put in guardrails
and prevent that from happening.
In fact, if diversity is a goal that you have,
you should put that into the cost function.
Or actually they call it a loss function,
but it’s really weighting the value of what it is
that you’re trying to accomplish.
It has to be told explicitly, you just can’t assume.
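A hedged sketch of what putting a goal like diversity directly into the loss function could look like: the ordinary prediction loss plus a weighted penalty on the gap between groups. The penalty form and the weight lam are illustrative assumptions, not any particular company's method.

```python
import numpy as np

def loss_with_fairness(pred, target, group, lam=1.0):
    # Ordinary prediction loss (mean squared error here).
    prediction_loss = np.mean((pred - target) ** 2)
    # Explicit fairness term: the gap in mean predicted score
    # between group 0 and group 1; lam weights the goal.
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return prediction_loss + lam * gap

pred = np.array([0.9, 0.8, 0.4, 0.3])    # model's candidate scores
target = np.array([1.0, 1.0, 1.0, 0.0])  # historical outcomes
group = np.array([0, 0, 1, 1])           # group membership
print(loss_with_fairness(pred, target, group))  # 0.625
```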
– But if a chat bot was interviewing a job prospect,
I would think that the chat bot doesn’t care
about the gender of the person,
doesn’t care about the skin color,
the person may have an accent or not.
There’s a lot of things that humans react to
that would not affect the chat bot, right?
– Okay, okay, so actually mine was slightly different.
I was giving you the scenario
where a company is trying to hire somebody
and they have specific questions they ask.
But if you just have an informal chat,
you’re absolutely right that the large language model
doesn’t know ahead of time who it’s talking to.
And it doesn’t even know what persona to take
’cause it can adopt any persona.
But with time, with answering questions,
it will get a sense for what level of answer is expected
and what the intelligence of the interviewer is.
But, and there are many examples of this in my book,
you could use that, somebody could take that.
And then, in fact, I even tell people,
I said, look, here are four people who have had interviews,
and I want you to rate the intelligence of the interview
and how well it went.
And it was really quite striking
how the more intelligent the questions,
the more intelligent the answers.
– So in a sense, what you’re saying is that
if an LLM has hallucinations,
it might not be the LLM’s fault
as much as the person who created the prompts.
– I would say not.
I think hallucinations are a little bit different
in the sense that it will hallucinate
when there’s no clear answer.
It feels compelled to give you an answer.
I don’t know why.
And it will make one up.
But it doesn’t make up a trivial answer.
It’s very detailed, it’s very plausible.
Like it’ll give a reference to a paper
that doesn’t exist, right?
It’s really taking a large effort
to try to convince you that it’s got the right answer.
So hallucinations are really, again,
something that humans do too, people hallucinate.
And it’s not just because they’re trying to lie.
Our memory is reconstructing the past.
It doesn’t memorize things.
And it will fill in a lot of blanks
with things that are plausible.
And I think that’s exactly what’s happening here.
I think that when it arrives at something
where it doesn’t know the answer,
it hasn’t been trained to tell you that, right?
It hasn’t been trained, though it could be.
But in the absence of that, it does the best it can.
(upbeat music)
– You had a section in your book
where you asked these very simple prompts,
like who holds the record for walking from,
I don’t know, whatever you said,
England to Australia or something.
And the first answer was, yeah,
it gave a name and which Olympics and all that.
So I went back last night and I asked a similar question,
like who first walked from San Francisco to Hawaii?
And the answer was no one has walked from San Francisco
to Hawaii as it is not possible
to walk across the Pacific Ocean.
The distance between San Francisco and Hawaii
is over 2,000 miles, primarily over open water.
However, many people have traveled this route
by airplane or boat.
So are you saying that between the time you wrote your book
and the time I did the tests
that LLMs have gotten that much better?
– First of all, that first question was asked
by Doug Hofstadter,
who’s a very clever cognitive scientist and computer scientist.
But he was trying to trip it up, clearly.
And I think that it probably decided
it would play along with him
and just give a silly answer, right?
Silly question gets a silly answer.
And I think that with you,
it probably sized you up and said,
wow, this guy’s a lot smarter than I thought.
Here’s a smart answer.
– You’re saying I’m smarter than Doug Hofstadter.
– Your prompts were smarter.
– I can stop the interview right there.
So, I mean, there are just these jewels in your book
and you drop this jewel
that it takes 10 hours to become good at prompting.
I call it baller.
So you can be a baller with prompts in 10 hours.
And that’s a thousand times faster
than Malcolm Gladwell’s 10,000 hours.
So can you just give us the gist?
Like people are listening, say, okay,
so how do I become a great prompt writer?
– It does take practice.
And the practice is you learn the ropes,
just the way you learn to drive a car
or play tennis, or anything
for which you need skills, right?
You have to know how to adjust your responses
to get what you want.
But there are some good rules of thumb
and in the book, I actually have a bunch
that I was able to get from people
who have had a lot of experience.
And here’s one, this is from a tech writer
who decided that she would use it for a whole month
to write her papers and tech reports.
And she said that instead of just having one prompt
that asks for one example,
you give it a question,
but you should ask for 10 different answers.
Otherwise you’re gonna have to iterate
to get to the direction you wanna take it.
But if you now have 10, you can say,
ah, the third one is much better than all the others,
but I want you to do the following with it.
And then that will help it learn
to understand what you’re looking for.
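A sketch of that tip using the OpenAI Python client, which accepts an n parameter for multiple candidate answers in one call; the model name and the prompt are illustrative, not from the book.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    n=10,                 # ask for ten different answers at once
    messages=[{"role": "user",
               "content": "Draft an opening paragraph for a tech report."}],
)

# Skim the ten candidates, pick the best, then follow up with, say,
# "The third one is best, now make it more formal."
for i, choice in enumerate(response.choices, start=1):
    print(i, choice.message.content[:80])
```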
But a bunch of other things came out of it
which are quite remarkable. First she said that
she was, at the end of the day, really exhausted.
It was just exhausting,
’cause you’re always interacting with a machine
and it’s not always giving you what you want.
And so at the end of the day, it was a chore for her.
But she said she’s gonna go on and do it.
But then at one point,
she realized, I don’t have this problem
when I’m talking to people.
(laughing)
So she started being more polite.
She said, oh, please give me this.
Oh, that’s such a wonderful answer.
I really thought that was great.
And it perked up.
And actually, she said,
it was just like talking to somebody.
And if you’re polite, you get better answers.
And at the end of the day, I wasn’t exhausted.
I just felt like I’d had this long discussion
with my friend.
(laughing)
Who would have guessed that?
That’s amazing.
– Wait, I just wanna make this perfectly clear.
You’re saying if you have those kind of human interactions,
human nuances, you get better answers from a machine.
– Yes.
Yes, that’s her discovery.
And that’s my experience too.
Look, it learned from the whole range of human experience,
how humans interact with each other, dialogues and so forth.
So it understands a lot about that.
And it will adapt.
If you put it into that frame of mind,
if I could use that term,
it will continue to interact with you that way.
And I think that that’s really quite remarkable.
– I have often wondered,
because I write a lot of prompts every day,
wouldn’t it be better for the LLM if it recognized,
you know, things like capitalization of proper nouns
or quotation marks or question marks or exclamation marks
that have these really basic functions
in written communication?
But it seems like whether you’re asking a question
or making a statement, the LLM doesn’t care.
Wouldn’t it help the LLM to know if I’m asking a question
as opposed to making a statement,
and that Apple is the company, not apple the fruit?
– Oh, no, no, it knows.
If you put a question mark there, it knows it’s a question.
– It does?
– I can assure you, yes, absolutely.
So what happens is that all of the words
and punctuation marks are given tokens.
In fact, some words have more than one token,
like if it’s a portmanteau word.
And it treats all of those as being hints
or giving you some information about the meaning
of the sentence.
And if it’s a question, it’s a very different meaning.
So yeah, it will definitely take that into account.
At one point, actually, not for me,
but for someone else, it started giving emojis as output.
So it must know what an emoji is.
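A quick way to see this for yourself, using OpenAI's tiktoken tokenizer; the exact token IDs depend on the encoding and are shown only as illustration.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by recent models

for text in ["Apple is a company", "Is Apple a company?"]:
    print(text, "->", enc.encode(text))

# The question mark gets a token of its own, so the model can tell a
# question from a statement; long or compound (portmanteau-like)
# words may split into several tokens.
```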
– I learn something every day.
Thank you for clearing that up for me.
With all this beauty and magic of LLMs,
how would you define learning going forward?
Because is it scoring high on an SAT?
Is it memorization of mathematical formulas?
Or is it the ability to get a good answer via a prompt?
What is learning anymore?
– So it was taught, it was pre-trained.
That’s the P in GPT.
And it was trained on an enormous amount of facts
and tests of various sorts.
And so it internalized a lot of that.
It knows what kind of a question that you’re asking,
’cause it’s seen millions of questions.
This is something still very mysterious.
It turns out there’s something called in-context learning.
That is to say, if you have a long enough interview,
’cause it keeps adding word after word,
it will go off in a certain direction
as if it has learned from what you’ve just told it,
building on what you just told it.
And that’s of course what happens with humans.
With humans, you have a long conversation
and you will take into account your previous discussion
and where that went, and it can do that too.
And that’s another thing that is very strange,
is that no one expected that.
The thing is that when they train these networks,
they have no idea what they’re capable of.
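A toy illustration of in-context learning: the weights never change, but a pattern supplied in the prompt steers the continuation. The completion noted in the comment is what one would typically expect, not a recorded output.

```python
# Two worked examples in the prompt define the task; a trained model
# typically continues the last line in the same pattern, having
# picked up the task from the context alone.

prompt = (
    "Translate English to French.\n"
    "sea otter -> loutre de mer\n"
    "cheese -> fromage\n"
    "bread ->"
)
print(prompt)  # a model would likely complete: " pain"
```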
Step back a few years before ChatGPT.
With these deep learning models,
the learning took place typically in
a feed-forward network, and it had a data set,
and it was given an input
and was trained to give an output, right?
And so that is supervised learning.
And you can do speech recognition that way,
object recognition, language translation,
a lot of things, but each network is dedicated to one task.
What is amazing here is you train it up, self-supervised,
just to predict the next word,
and it can do hundreds and thousands of different tasks.
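The difference in training signal, in miniature: supervised learning needs human-provided labels for one dedicated task, while self-supervised learning manufactures its labels from the text itself.

```python
text = "the cat sat on the mat".split()

# Self-supervised pairs: input = the words so far, target = next word.
pairs = [(text[:i], text[i]) for i in range(1, len(text))]
for context, target in pairs[:3]:
    print(context, "->", target)
# ['the'] -> cat
# ['the', 'cat'] -> sat
# ['the', 'cat', 'sat'] -> on

# Supervised learning would instead need hand-labeled pairs like
# ("photo_001.jpg", "cat"), with one dedicated network per task.
```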
But you can ask it to write a poem.
By the way, that’s where hallucination is very useful.
(laughing)
And haiku, it’s not a brilliant poet,
but it does a pretty good job.
And I have a couple of examples in my book,
but it has a wide range of talents,
of language capabilities,
that again, no one programmed, no one told it.
Or it can summarize a long document in a paragraph.
It does a really good job of that; it’s not just a show.
– But I mean, if you think about it, do you have children?
– I don’t.
– Okay, well, I have four children.
And many times they come up with stuff
that I have no idea how they came up with that.
So in a sense, you think you know exactly what your child is learning,
and you think you’re controlling all the data
going into your child,
so you can predict what they’re gonna come up with.
And then they absolutely just knock you off your feet
with something, and you think,
how the hell did you come up with that idea?
What’s the difference between not knowing
how your child works
with not knowing how an LLM works, same thing, right?
– That’s actually a very deep insight,
because human beings are picking up things
in a very similar way in terms of the way
that we take experience in
and then we codify it somehow in our cortex
in such a way that we can use it
in a variety of other ways later on.
And they could have picked up things they heard,
for example, you and your wife talking,
or they could have been playing with kids outside.
I mean, it’s the same thing with ChatGPT,
who knows where it’s getting all of that ability.
– Okay.
I’m gonna read you, and this isn’t a question.
This is just a statement here.
I have two absolute favorite quotes from your book.
This is one of them, quote,
“Usefulness does not depend on academic discussions
of intelligence.”
Oh my God, that’s like, I use LLMs every day.
And they’re so useful for me.
I don’t give a shit
what the academics say about the learning model.
What do I care? It’s helping me, right?
– It’s a tool, and it’s a very valuable tool.
And all of these academic discussions are really beside the point;
they’re really a reflection of the fact
that we don’t really understand. If experts argue
about whether they are intelligent or whether they understand,
it means that we really don’t know the meaning
of those words.
We just don’t understand them at any real scientific level.
– Okay, let me ask you something, writer to writer.
So if you provided the PDF of your book
and you gave it to OpenAI and they put it into ChatGPT,
would you consider that ripping off your IP
or would you want it inside ChatGPT?
– I would be honored.
– Me too.
– I would brag about it.
– Me too.
(laughs)
– No, I think that there is some concern about the data.
Where are these companies getting the data from?
Is there proprietary information that they used?
And so forth.
That’s all gonna get sorted out.
But my favorite example is artists.
They say, oh, you’ve used my paintings to train up
DALL-E or your diffusion model.
And I deserve something for that.
Then my question is,
when you were learning to be an artist, what did you do?
– You copied other artists.
– You looked at a lot of other artists
and your brain took that in, and it didn’t memorize it,
but it formed features, so that later,
you’re depending on all that experience you’ve had
to create something new.
But this is the same thing.
It’s creating something new from everything that it’s seen.
So it’s gonna have to be settled in court.
I don’t know what the right answer is.
There’s something interesting that’s happened recently.
And by the way, I have a Substack,
because the book went to the printer in the summer,
so there are all kinds of new things that are happening.
In the Substack, what I do is fill in the new stuff
that’s happened and put it in the context of the book.
It’s about brains and AI.
What’s happened is that
Mistral and several other companies have discovered
that if you use quality data, in other words,
data that’s been curated or comes from a very good source,
you get better results, though you may have to pay for it.
And math data, for example: Wolfram Research,
Steve Wolfram’s company, which created Mathematica,
has actually sold a lot of the math data that they have.
But with the quality data, it turns out
that you get a much better language model,
much better in terms of being able to train
with fewer words and a smaller network
and have performance that’s equal or better.
So the same thing is true of humans, right?
I think what’s going to happen is that the models
will get smaller and they’ll get better.
– Another author-to-author question.
I’ll give you a negative example.
So I believe back in the ’70s,
Kodak defined themselves as a chemical company
and we put chemicals on paper, chemicals on film.
The irony is that an engineer inside Kodak
invented digital photography.
But Kodak kept thinking we’re a chemical company,
we’re not a preservation of memories company.
If they had repositioned their brains,
they would have figured out we preserve memories,
it’s better to do it digitally than chemically.
So now, as an author, and you’re also an author,
I think what is my business?
Is it chemicals?
Is it writing books?
Or is it the dissemination of information?
And if I zoom out, then I say it’s dissemination
of information.
Why am I writing books?
Why don’t I train an LLM to distribute my knowledge
instead of forcing people to read a book?
So do you think they’re gonna be authors in the long run
because a book is not that efficient
a way to pass information?
– Interesting, and this is already beginning to happen.
So you know that you could train up an LLM
to mimic the speech of people
if you have enough data from them, movie stars.
And also it turns out that you can not only mimic the voice,
but someone fed in a lot of Jane Austen novels.
I gave a little excerpt in the book.
You can ask it for advice and it will start talking
as if you’re talking to Jane Austen from that era.
And there’s actually an interesting,
potentially important possibility: if you have enough data
about an individual, videos, writing and so forth,
and that could all be downloaded
right into a large language model,
then, in some ways, it would be you, right?
If it has captured all of the external things
that you’ve said and done.
So it might, who knows.
– Terry, I have a company for you.
There’s a company called Delphi.ai.
And with Delphi.ai, you can control what goes into the LLM.
So KawasakiGPT.com is built on Delphi.ai.
And I put in all my books, all my blog posts,
all my Substacks, all my interviews,
including this interview will go in shortly, right?
So you can go to KawasakiGPT and you can ask me
and 250 guests a question.
And I promise you that my LLM answers better than I do.
And in fact, since you talked about Substack,
every week Madison and I put out a Substack newsletter.
And the procedure is we go to KawasakiGPT
and we ask it a question like,
what are the key elements of a great pitch
for venture capital?
And five seconds later, we have a draft.
And we start with that draft.
And I don’t know how we would do that
without KawasakiGPT.
So that ability to create an LLM for Terry is already here.
And Delphi.ai has this great feature
that you can set a parameter.
So you can say very strict,
and very strict means it only uses the data you put in,
or it can be creative and go out
and get any kind of information.
So if somebody came to TerryGPT and asked,
how do I do wingsuiting?
If you had it set to strict,
assuming you don’t know anything about wingsuiting,
it would say, this is not an area
of my expertise, you’re gonna have to look someplace else.
Which is better than a hallucination, right?
You gotta try that.
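A generic sketch of the strict-versus-creative setting described here, illustrating the idea only; this is not Delphi.ai's actual implementation, and the naive keyword match stands in for real retrieval.

```python
def answer(question, corpus, strict=True):
    # Naive keyword overlap stands in for a real vector search.
    words = set(question.lower().split())
    hits = [text for text in corpus
            if words & set(text.lower().split())]
    if hits:
        return hits[0]
    if strict:
        return "This is not an area of my expertise; look elsewhere."
    return "(creative mode: fall back to a general model)"

corpus = ["A great pitch for venture capital is short and concrete."]
print(answer("wingsuiting basics", corpus))        # declines in strict mode
print(answer("What makes a great pitch?", corpus)) # returns the stored text
```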
– I will, I will, I had no idea.
And is this open to the public?
– Yes, and I pay $99 a month for this.
And you can set it so it subscribes to your Substacks,
subscribes to your podcasts,
you can make a Google Drive folder
and whenever you write something, you drop it in the drive
and then it keeps checking the drive every week
and just keeps inputting.
And I feel like I’m immortal, Terry.
What can I say?
– Yes, it really has transformative potential.
Who would have guessed
that this could even be possible a couple of years ago?
– No one, I think it’s really a transition.
– But you know, just before you get too excited by this,
I don’t think that there is a market for people’s clones
because I’m pretty visible
and I only get five to 10 questions a day.
It’s a nice parlor trick.
Oh, we can ask Guy what he thinks about everything.
But after the first hour, six months later,
are you gonna remember there’s Kawasaki GPT?
– I doubt it.
So what you would probably do is go to ChatGPT and say,
what would Guy Kawasaki say
are the key elements of a pitch for venture capital?
And ChatGPT will give you an answer
almost as good as KawasakiGPT
and you’ll never go back to my personal clone again.
– Yeah, I think your children might, somebody.
– Well, why would that be true?
They don’t ask me anything now.
– Oh, that’s interesting.
A lot of times when someone dies,
their offspring and close friends say,
oh, I wish I had asked them that question.
I really wish I had, but it’s too late, it’s too late.
Now, if they have something like this, you’re there to ask the question.
– Okay, I did it for my kids then.
– Yeah.
– Up next on Remarkable People.
– You can push it around.
It can do either.
It has the world’s literature on both sides,
and this is exactly the problem,
is that it is reflecting you.
It’s a mirror hypothesis,
reflecting your kind of stance that you’re taking.
And in some ways,
it has the ability of a chameleon, right?
It will change its color depending on
how you’re pushing it.
– Thank you to all our regular podcast listeners.
It’s our pleasure and honor to make the show for you.
If you find our show valuable,
please do us a favor and subscribe,
rate and review it.
Even better, forward it to a friend,
a big mahalo to you for doing this.
– Welcome back to Remarkable People with Guy Kawasaki.
– I’m gonna get a little bit political on you right now.
It seems to me that people listening can try this:
go to ChatGPT and ask,
should we teach the history of slavery?
Ask questions about,
should we have a biblically based curriculum
in public schools?
Go ask all those kind of questions.
You’re gonna be amazed at the answer.
So my question for you is,
don’t you think that in the very near future,
red states or let’s say a certain political party,
they’re gonna block access to LLMs?
Because if LLMs are telling you,
“Yes, we should teach the history of slavery,”
I can’t imagine Ron DeSantis wanting people
to ask ChatGPT that question.
– So now we’re getting into hot water here.
(laughing)
– You’re tenured, right?
– And it’s not just ChatGPT,
we’re talking about all of these high-tech websites,
repositories of knowledge and information
that you can search.
They have a devil of a time trying to figure it out;
they have thousands of people actually doing this.
They’re constantly looking at the hateful things
that are said on Twitter or whatever.
That has to be scrubbed.
Now, the problem is who’s scrubbing it?
And what do they consider bad?
And if humans can’t agree,
how can you possibly have a rule
that is gonna be good for everybody, if there even is one?
I think it’s an unsolved problem
and I think it’s reflecting more the disagreements
that humans have than the fact that
ChatGPT can’t decide what to say.
– But it’s interesting,
you can probably push it in certain directions, right?
I think that people have tried that.
They’ve tried to break it one way or another.
– It may be that many Republicans have never tried an LLM,
but I’m telling you, if they tried it,
they would say LLMs are woke
and we gotta get all this woke stuff out of the system.
I can’t imagine.
– Okay, my guess is that you’ll get a woke person
talking to it and coming to the conclusion
that it’s a flaming conservative.
In other words, you can push it around.
It can do either.
It has the world’s literature on both sides,
and this is exactly the problem,
is that it is reflecting you.
It’s a mirror hypothesis,
reflecting your kind of stance that you’re taking.
And in some ways,
it has the ability of a chameleon, right?
It’ll change its color,
depending on how you’re pushing it.
– That’s no different
than what people do in a conversation.
– That’s right.
And also people are polite.
They generally stay away from things
that are controversial and yeah,
we need that in order to be able to get along
with each other, right?
It would be terrible if all we did
was argue with each other.
– About a year or two ago,
there was this, I won’t prejudice your answer.
There was this idea that we would have a six-month
kind of timeout while we figure out the implications of AI.
Is that the stupidest thing you ever heard?
How do you take a timeout from AI?
Let’s just like timeout and figure out what we’re gonna do.
– That was done by, I think, 500 machine learning
and AI people who decided in their wisdom
that we have to, you’re right, it was a moratorium.
And I think it was specifically on these very large GPT models,
that we shouldn’t try to train them beyond where they are,
because they might be superintelligent
and they may actually take over the world
and wipe out humans.
This is all science fiction, right?
That we’re talking about.
And in the book, I came across an article
in The Economist where they had superforecasters
who had a track record of being able to make predictions
about catastrophic events, wars, and technologies,
nuclear technology, better than the average person.
And then they also compared the predictions with experts.
And it turns out that experts are a factor
of 10 more pessimistic in terms of
whether something’s going to happen
or when it’s going to happen than the superforecasters.
And I think that’s what’s happening,
is that they think that their technology is so dangerous
that it needs to be stopped.
– When I read that section of your book,
I had to read it about two or three times
because it’s exactly the opposite of what I thought
it would be, that the superforecasters would predict Armageddon
and the technical people would say, no, it’s okay.
Like how do you explain that?
– There’s a simple explanation.
I think that everybody thinks
that what they are doing is more important than it might be.
– In terms of its impact.
(laughing)
Actually, this is funny.
When Obama was elected president,
the local newspaper interviewed a lot of academics,
because, you know, he had said that he was going to support science
and that was wonderful.
And so the newspaper asked, what areas of science
do you think the government should support?
And almost every person said, what I’m doing.
(laughing)
– It’s the most important area to fund, you know?
Because they’re the closest to it.
And of course, they’ve committed their life to it.
So it must be the most important.
– I mentioned that I had two absolute gems,
quotes that I love, in your book.
And I’m coming to the second one.
And the second one is not necessarily a quote,
but I want you to explain the situation when you say
that Sam Altman had, shall I say, symptoms of Toxoplasma
gondii, the brain parasite that makes rodents
unafraid of cats and more likely to be eaten.
So why did you say that about Sam Altman?
– Okay, so first of all, this is a biological thing
that happens in the brain of the poor mouse or rat.
So there was a time when he would go to Washington
and not just testify before Congress,
but he would actually go and have dinners
with Congress people and talk to them.
And the history is that Bill Gates gets pulled in
and he gets grilled in a congressional testimony,
and that created an aversion.
So here’s this guy going in, and not just going for testimony,
but actually going and trying to be a part
of their social life.
So it just seemed that he was being contrary
to the traditional way that most humans would deal
with people who are out to regulate you.
But actually somewhere later in the book,
I think I identified another explanation,
which is that the regulation is an interesting thing
because it basically puts up barriers, right?
It turns out if you have lots of lawyers,
you can find loopholes; there’s always a loophole, right?
And if you’re rich, you can afford lawyers
to find the loopholes for you.
And of course, the big corporations,
high tech, Google and OpenAI, they have the best lawyers.
They can hire the best lawyers to get around any regulation,
whereas some poor startup, they can’t do that.
So it’ll give the big companies an advantage
to have regulations out there.
– Couldn’t a scrappy, small, undercapitalized startup
ask an LLM what are the loopholes in this regulation?
It would find them.
– Ah, okay.
Well, so now you’re saying that, in fact,
they could use one,
because they’re not gonna be able to make their own LLM,
they’re gonna have to use the others,
the big ones that are already out there.
And it could be that these companies
are actually democratizing lawyers.
(laughing)
By the way, it’s not just lawyers and laws,
it’s also reporting.
In other words, there’s a tremendous amount
of reporting they wanna require,
where the companies have to have tests and lots of examples.
The FAA requires a lot before an
airplane is allowed to carry passengers:
it’s gotta go through a whole series of tests,
and very stringent ones,
where it has to be put into the worst weather conditions
to make sure it’s stressed, a stress test.
And again, all of that testing is basically fine
for a large company; they have lots of resources to do that.
But it may not be easy for a small company,
so it’s complicated.
But in any case, I think that what’s happening right now
is that the Europeans have this AI law
that is 100 pages with very strict rules
about what you can and cannot do,
like you can’t use it for interviewing future employees
for companies.
– We just advocated for that.
– Yeah, we’ll see what happens in the US,
’cause right now it’s not prescriptive,
it’s suggestive that we follow these rules.
– And what would be the thinking that you can’t use it
to interview employees in Europe?
What are they worried about?
– Oh, bias, bias.
– Bias, as opposed to human bias,
like a male recruiter falling for an attractive female candidate.
– Okay, that’s also a bias, I guess.
(laughing)
There probably is some law there, I don’t know.
Not only are we biased, but we’re biased in our biases.
(laughing)
Who we talk to, things like that.
– All right, I gotta tell you one more part
I really loved about your book is when you had
the long description of legalese,
and then you had the LLM simplify a contract,
and that was just beautiful.
Like why do terms of service have to be so
absolutely impenetrable?
And you showed an example of how it could be
done so much better.
– That is happening right now,
I think, in a lot of places,
and this is a big transformation that’s occurring
within companies now.
The employees are using these tools
in order to help.
First of all, keep track of meetings.
You don’t have to have someone there taking notes
because the whole thing gets summarized
at the end of the meeting.
It’s really good at that, and at speech recognition.
– Well, you also mentioned that when doctors
are interviewing patients that instead of looking
at the keyboard and the monitor,
they should be just listening and let the recording
take care of all that, right?
– Yes, that’s a huge benefit because looking
at the patient carries a lot of information.
There are expressions, the color of their skin,
all of that is part of being a doctor,
and if you’re not looking at them,
you’re not really being a good doctor.
– Okay, this is seriously my last question.
I love the fact that the first few chapters, at the end,
had these questions that probably ChatGPT generated.
Why didn’t you continue that through the whole book
so every chapter ends with questions?
– I don’t know, I hadn’t thought about it.
I’ll tell you, I wrote the book over the course of a year,
and I think that it must have been the case
that by the end, I do use it throughout the book.
I have sections, and I actually set them apart
and say this is ChatGPT; at the end there’s this little sign,
the OpenAI sign, and I ask it to summarize parts.
And at the beginning, I actually ask it to,
sometimes I ask it to come up with, say, five questions
from the chapter, and that’s where Alex the parrot
popped out.
(laughing)
– Am I the first person to catch the fact
that Alex the parrot was not mentioned in the text
except for the footnote?
– You are the first person, though I suspect there are others
that noticed it.
But actually it’s good to have a few little puzzles
in there, so that you have a little detective story
about who is Alex the parrot.
– All right, how about this:
I really want you to sell a lot of copies of this book.
So how about I give you, just unfettered,
the chance to give us your best-shot promo for your book.
– Everything you’ve always wanted to know
about large language models and ChatGPT,
and we’re not afraid to ask.
– That’s a good positioning.
I like that.
It’s like that book from way back in my past.
It was a book called “Everything You Always Wanted to Know
About Sex (But Were Afraid to Ask),” right?
– Yeah, it was a takeoff, a bald rip-off.
– As I learned from Steve Jobs,
you gotta learn what to steal.
That’s a talent in and of itself.
– You’re paying homage to the past,
but I wrote this for the public.
I thought that the news articles were misleading
and all this talk about superintelligence,
although it’s a concern, it’s not an immediate concern,
but we have to be careful, that’s for sure.
And I hope it helps; I’m trying to help people.
When I give talks, they ask, well, I lose my job.
And I say, you may not lose your job, but it’s gonna change.
And you have to have new skills
and maybe that’s gonna be part of your new job
is to use these AI tools.
– Well, as you mentioned in your book,
when we started getting farm equipment,
there were a lot fewer farmers.
One person could manage thousands of acres, right?
– Yes, that’s true.
That’s true, but the children went to the cities
and they worked in factories.
And so they had a different job,
but it wasn’t working to get food.
It’s working to make cloth and automobiles and things.
– And LLMs eventually.
(laughs)
Yes, eventually, for some of us.
– I just wanna thank you, Terry, very much.
I found your book not only
very, very interesting and informative.
There were places where I was just busting out laughing
and I’m not sure that was your intention,
but when I read that thing about Sam Altman’s brain
having that thing that makes rodents less afraid of cats,
I’m like, oh my God, this guy is a funny guy.
– So I thought I’d make it entertaining
so that people can appreciate it.
In some ways, we’re both serious people,
I’m serious about that,
but let’s have some fun.
– One of my theories in life is that
a sense of humor is a sign of intelligence.
(laughs)
– Oh, good.
Actually, I’ll tell you, this is interesting:
who gets the Academy Awards?
It’s the actor who’s in some terrible drama
where something bad happens and so forth.
And then they overlook all the fantastic comedians.
It turns out it’s much more difficult
to be a comedian than to be somebody who has angst.
And they’re not given the same respect.
I had no idea that you’d read the whole book,
’cause most of the people who interview me,
they’ve read some parts,
but it sounds like you know the whole book.
– I did. Do you know the story
of the chauffeur and the physicist?
Okay, this is along the lines
of what you just said that I read the whole book.
So this physicist is on a book tour.
Let’s say it’s Stephen Wolfram or Neil deGrasse Tyson.
So anyway, they’re on this book tour
and they’re gonna make four stops in the cities
and the chauffeur takes them from stop to stop.
So the chauffeur sits in the back
and listens to the first three times
the physicist gives the talk.
At the fourth time, the physicist says,
“I am exhausted.
“You heard me give this talk three times.
“You go give the talk.”
And the chauffeur says, “Yeah, I can do it.
“I heard you three times.”
The chauffeur goes up, gives the talk,
but he ends early.
And so the MC, the host of the event, says to the chauffeur,
“Oh, we’re lucky we ended early.
“We’re gonna take some Q&A from the audience.”
So the first question comes up and it’s about physics
and the chauffeur has no idea.
And he says, “This question is so simplistic.
“I’m gonna let my chauffeur sitting in the back answer.
(laughing)
“So I’m your chauffeur.”
(laughing)
– Oh, that’s wonderful.
– All right, Terry, thank you.
– Well, thank you.
I truly enjoyed this.
– I did too.
All right, all the best to you.
(jazz music)
This is Remarkable People.

In this episode of Remarkable People, Guy Kawasaki engages in a fascinating dialogue with Terry Sejnowski, the Francis Crick Chair at the Salk Institute and Distinguished Professor at UC San Diego. Together, they unpack the mysteries of artificial intelligence, exploring how AI mirrors human learning in unexpected ways. Sejnowski shatters common misconceptions about large language models while sharing compelling insights about their potential to augment human capabilities. Discover why being polite to AI might yield better results and why the future of AI is less about academic debates and more about practical applications that can transform our world.

Guy Kawasaki is on a mission to make you remarkable. His Remarkable People podcast features interviews with remarkable people such as Jane Goodall, Marc Benioff, Woz, Kristi Yamaguchi, and Bob Cialdini. Every episode will make you more remarkable.

With his decades of experience in Silicon Valley as a Venture Capitalist and advisor to the top entrepreneurs in the world, Guy’s questions come from a place of curiosity and passion for technology, start-ups, entrepreneurship, and marketing. If you love society and culture, documentaries, and business podcasts, take a second to follow Remarkable People.

Listeners of the Remarkable People podcast will learn from some of the most successful people in the world with practical tips and inspiring stories that will help you be more remarkable.

Episodes of Remarkable People organized by topic: https://bit.ly/rptopology

Listen to Remarkable People here: https://podcasts.apple.com/us/podcast/guy-kawasakis-remarkable-people/id1483081827

Like this show? Please leave us a review — even one sentence helps! Consider including your Twitter handle so we can thank you personally!

Thank you for your support; it helps the show!

